AdaNSP: Uncertainty-driven Adaptive Decoding in Neural Semantic Parsing

Neural semantic parsers utilize the encoder-decoder framework to learn an end-to-end model for semantic parsing that transduces a natural language sentence into a formal semantic representation. To keep the model aware of the underlying grammar of target sequences, many constrained decoders have been devised in a multi-stage paradigm: they first decode to sketches or abstract syntax trees, and then decode to the target semantic tokens. We instead propose an adaptive decoding method that avoids such intermediate representations. The decoder is guided by model uncertainty and automatically uses deeper computation when necessary, so it can predict tokens adaptively. Our model outperforms state-of-the-art neural models while requiring no expertise such as predefined grammars or sketches.

Introduction

Semantic Parsing (SP) maps a natural language utterance into a formal language, which is crucial in abundant tasks such as question answering (Zettlemoyer and Collins, 2005, 2007) and code generation (Yin and Neubig, 2017). The prevailing neural semantic parsers view semantic parsing as a sequence transduction task and adopt an encoder-decoder framework similar to machine translation. The distinguishing difference of semantic parsing, however, lies in its target sequences, which are token sequences of well-formed semantic representations. SQL and lambda expressions are typical examples of SP targets. The "SELECT..FROM..WHERE" pattern in SQL and the paired parentheses in lambda expressions are consequences of the underlying grammars. Standard Seq2Seq models, however, ignore these patterns and may give ill-formed results. To better model the grammatical and semantic constraints, many decoding methods have been devised. One line of work proposed to generate tokens of an intermediate sketch first, followed by decoding into the final formal targets.
Others chose to gradually build abstract syntax trees using a transition-based paradigm, with tokens generated at the tree leaves or in the middle of the transitions (Krishnamurthy et al., 2017; Chen et al., 2018; Yin and Neubig, 2018). Some decoders instead comprise several submodules, each intended to generate a different part of the semantic output (Yu et al., 2018a,b). However, the aforementioned methods share a key issue: they explicitly require expertise to design intermediate representations or model structures, which is not ideal or even acceptable for scenarios with Domain Specific Languages (DSLs) or new representations, because of domain alterations and the incompleteness of expert knowledge. To follow this successful idea while overcoming the above issue, we introduce a novel adaptive decoding mechanism. Inspired by adaptive computing (Graves, 2016), tokens that are pervasive in the training data are generated immediately without hesitation; but for tokens seen less often, the model may be less confident, and it is better to carry out more computation. In this way, it is unnecessary to pre-build any intermediate supervision for training, such as preprocessed sketches or predesigned grammars (Yin and Neubig, 2018), which must be manually redesigned for each unseen kind of target language. Furthermore, we use model uncertainty estimates to reflect the model's prediction confidence. Although different uncertainty estimates have been explored in semantic parsing, we use Dropout (Srivastava et al., 2014) as the uncertainty signal (Gal and Ghahramani, 2016) due to its simplicity, and use a policy gradient algorithm to guide the model's search. Our contributions are thus three-fold. • We introduce the adaptive decoding mechanism into semantic parsing, which dispenses with intermediate representations and is easily adaptable to new target languages.
• We adopt uncertainty estimates to bias the decoder's search, which, to our best knowledge, is not covered in the architecture search literature. • Our model outperforms the state-of-the-art neural models without any other intermediate supervision.

Uncertainty-driven Adaptive Decoding Model

Our semantic parser is learned from pairs of natural language sentences and formal semantic representations. Let x = {x_1, x_2, ..., x_m} denote the words in an input sentence, and y = {y_1, y_2, ..., y_n} the tokens of the corresponding target lambda expression.

Adaptive Decoding Model

We first introduce the general model for adaptive decoding. The model consists of an encoder, a decoder, a halting module, and an attention mechanism.

Encoder. Input words x are first embedded using an embedding matrix W_x ∈ R^{d×|V_x|}, where d is the dimension of the embedded vectors and V_x is the set of all input words. We use a stacked two-layer BiLSTM to encode the input embeddings. The hidden states from both directions at the same position of the second layer are concatenated as the final encoder outputs {h_1, ..., h_m}.

Decoder. We stack two LSTM cells as one basic decoding unit. Similarly, we use a matrix to embed the target tokens y, y_i = W_y o(y_i). The token embedding serves as the input of the decoding cell, where [·; ·] denotes vector concatenation, c^e_t and c^d_t are two attention context vectors described later, and flag is additionally concatenated to the input embedding, being either 1 or 0 depending on whether the model is acting in pondering mode (introduced later) or not.

Figure 1: An illustration of our adaptive decoding. Attention and pondering mode are only shown at time t for brevity. Every decoder step may enter pondering mode before the next timestep. The decoder cell is a stacked two-layer LSTM, initialized by the last forward states of the corresponding encoder layer.
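As a rough illustration (our own sketch, not the authors' code; the function and variable names are placeholders), the decoder cell input at each step is the concatenation of the token embedding, the two attention context vectors, and the pondering flag:

```python
import numpy as np

def decoder_input(y_emb, c_enc, c_dec, pondering):
    """Build the decoder cell input [y_t; c^e_t; c^d_t; flag], where
    flag is 1 in pondering mode and 0 otherwise, so the cell can tell
    an extra refinement step from a normal timestep."""
    flag = np.array([1.0 if pondering else 0.0])
    return np.concatenate([y_emb, c_enc, c_dec, flag])

# Toy dimensions: embedding dim 4, two context vectors of dim 3 each,
# so the cell input has dimension 4 + 3 + 3 + 1 = 11.
inp = decoder_input(np.zeros(4), np.ones(3), np.ones(3), pondering=True)
```

The extra flag dimension is what lets a single set of LSTM parameters serve both the normal decoding step and the pondering refinements.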
We further apply a linear mapping and a softmax function to the concatenation of s_t and the attention vectors to obtain the token prediction probabilities. At test time we decode tokens greedily.

Attention. We adopt two types of attention when decoding. One attends from the decoder state over the encoder outputs and yields the input context vector c^e_t, where [·, ·] denotes vector stacking. The other similarly attends from the hidden state over the previous decoder outputs, yielding the context vector c^d_t over the decoding history. We use a bilinear function for the encoder attention, Attn(x, y) = x^T W y + b, with trained parameters W and b, and the dot-product function for the decoding-history attention, Attn(x, y) = x^T y.

Halting and Pondering. The key feature of our model is to adaptively choose the decoder depth before predicting tokens. Given the output state s_t from (1), the model goes into pondering mode. The output state s_t is sent to a halting module, which generates a probability p_t positively correlated with the model uncertainty. We use an MLP with ReLU and sigmoid activations for the halting module. A choice is then sampled from the Bernoulli distribution determined by p_t. If the model chooses to continue, we again use (1) to update the state, reusing the same embedding y_t for the input, where s^0_t = s_t, flag = 1, and c^e_k, c^d_k are attention vectors recomputed with s^{(k−1)}_t using (2). The model keeps pondering until it chooses to stop or reaches our limit of k = 3. The final state s^{(k)}_t acts as the original s_t in (1) for the other modules.

Uncertainty Estimates

Since the halting module outputs a Bernoulli distribution to guide the decoder, we have to provide some uncertainty quantification for training. Fortunately, Dropout (Srivastava et al., 2014) has been shown to be a good uncertainty estimate (Gal and Ghahramani, 2016). It is simple and effective: neither the model nor the optimization algorithm needs to be changed.
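A minimal sketch of the halting-and-pondering loop and the dropout-based uncertainty signal, under our own simplifications (the MLP weights, the state-update function, and the per-pass probability function are placeholders, and the real model recomputes attention at every ponder step):

```python
import numpy as np

rng = np.random.default_rng(0)

def halting_prob(s, W1, b1, w2, b2):
    """Halting module: an MLP with a ReLU hidden layer and a sigmoid
    output, mapping the decoder state s_t to a ponder probability p_t."""
    h = np.maximum(0.0, W1 @ s + b1)             # ReLU
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))  # sigmoid

def ponder(s, update_state, W1, b1, w2, b2, max_steps=3):
    """Sample from Bernoulli(p_t) at each step: keep refining the state
    while the sampled action is Ponder, up to the limit of k = 3."""
    steps = 0
    while steps < max_steps:
        p = halting_prob(s, W1, b1, w2, b2)
        if rng.random() >= p:        # sampled action: Stop
            break
        s = update_state(s)          # stands in for re-running eq. (1) with flag = 1
        steps += 1
    return s, steps

def dropout_uncertainty(prob_fn, s, F=5, gamma=0.15):
    """Run F stochastic forward passes with dropout left on, then take
    the rescaled variance U_n = min(gamma, Var(q)) / gamma of the
    predicted probabilities q as the uncertainty estimate."""
    q = np.array([prob_fn(s) for _ in range(F)])
    return min(gamma, float(np.var(q))) / gamma
```

With an all-zero MLP the ponder probability reduces to sigmoid(b2), so a very negative bias makes the decoder always stop immediately and a very positive one makes it ponder up to the cap, which is a convenient sanity check of the loop.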
We leave other estimates, like those proposed in prior work, to future work. To estimate uncertainty with Dropout, we leave the model in training mode, thus keeping Dropout enabled. We run the forward pass of equation (3) F times with the same inputs. The output states are further fed forward to obtain token probabilities, where Θ_i is the set of all perturbed parameters affected by Dropout in the i-th forward pass. We take the variance of q to reflect model uncertainty, U(s_t) = Var(q), as suggested by Gal and Ghahramani (2016). We disable gradient propagation when computing the variance so that the gradient-based optimization is not influenced. Since the variance of a set of probabilities may not be very large in practice, we rescale it to make it more numerically robust: U_n(s_t) = min(γ, Var(q))/γ, where γ = 0.15 in our case.

Learning

Our model consists of the Seq2Seq part (encoder, decoder, and attention) and the halting module. For the former, we minimize the traditional cross-entropy loss with gradient descent, J_ent = −E_{(x,y)} log p(y | x). We use the REINFORCE algorithm to optimize the halting module. The module acts as our policy network, by which the model consecutively makes decisions from the action space A = {Ponder, Stop}. Each time the model makes a choice a ∈ A, the uncertainty of the Seq2Seq part is involved in the reward, where a = 1 means a Ponder choice and a = 0 a Stop. We measure correctness by examining whether the greedily decoded token satisfies arg max_y p(y | s^k_t) = y_{t+1}. The model is rewarded for a Stop action if the prediction is correct, and for a Ponder action if the prediction is incorrect. This is similar to the ponder cost of ACT, which discourages excessive pondering steps.

Experiments

We compare our method with other models on two datasets. Our code is available at https://github.com/zxteloiv/AdaNSP.

Experimental Setup

Datasets.
We use the preprocessed ATIS and GeoQuery datasets kindly provided by Dong and Lapata (2018). All natural language sentences are lowercased and stemmed with NLTK (Bird et al., 2009). Entity mentions like city codes and flight numbers are anonymized using numbered placeholders.

Setups. We choose hyperparameters on the ATIS dataset using its validation set. For the GeoQuery dataset, which does not come with a validation set, we randomly shuffle the training set, select the top 100 records as the validation set, and use the remainder as the new training data. After choosing the best hyperparameters, we revert to training on the original set. The Dropout ratio is selected from {0.5, 0.6, 0.7, 0.8}, and the embedding dimension d is chosen from {64, 128, 256, 512}. We fix the batch size to 20, and both the encoder and decoder cells are two stacked LSTM layers. We apply scheduled sampling (Bengio et al., 2015) with a ratio of 0.2 during training. We run F = 5 forward passes before computing the variance. We use Adam (Kingma and Ba, 2015) as the optimizer, with the default parameters from the paper.

Evaluation. We use logical form accuracy as the evaluation metric, computed over the parsed trees of the predictions and the true labels. Two trees are considered identical as long as their structures are the same, i.e., the order of sibling predicates does not matter. We reuse the STree parser code.

Results and Analysis

Our model outperforms the other comparable neural semantic parsers on these two datasets. We reuse previously reported numbers since the datasets are identical. Results are listed in Table 1. Our results are better than the state-of-the-art models (Yin and Neubig, 2018) even without any intermediate representations, whereas Coarse2fine defines a sketch and TranX uses an ASDL for every type of target semantic sequence. We outperform Coarse2fine by 0.7% and 0.9% on the GeoQuery and ATIS datasets, respectively.
Although Jia and Liang (2016) report a slightly better result on GeoQuery, they introduce a synchronous CFG to learn new, recombined examples from the training data, a novel data augmentation method that requires much human effort in preprocessing. In an ablation test, our degenerate model without the pondering part suffers considerable performance decreases of 2.8% and 2.9% on the GeoQuery and ATIS datasets, respectively.

Table 1: Logical form accuracy (%) on GeoQuery and ATIS.

Model                                   Geo    ATIS
ZC07 (Zettlemoyer and Collins, 2007)    86.1   84.6
λ-WASP (Wong and Mooney, 2007)          86.6   -
FUBL (Kwiatkowski et al., 2011)         88.6   82.8
TISP (Zhao and Huang, 2015)             88.9   84.2
Neural network models
Seq2Seq (Dong and Lapata, 2016)         84.6   84.2
Seq2Tree (Dong and Lapata, 2016)        87.1   84.6
JL16 (Jia and Liang, 2016)              89.3   83.3
TranX (Yin and Neubig, 2018)            88.2   86.2
Coarse2fine                             88.2   87.7
AdaNSP (ours)                           88.9   88.6
  - halting module                      86.1   85.7

Related Work

Semantic Parsing. Traditional statistical methods (Zettlemoyer and Collins, 2005, 2007; Kwiatkowski et al., 2010; Wong and Mooney, 2006, 2007) try to model the correlation between semantic tokens and the lexical meaning of natural language sentences. Methods based on dependency trees (Ge and Mooney, 2009; Liang et al., 2011; Reddy et al., 2016) instead convert the outputs of an existing syntactic parser into semantic representations, which can be easily adopted in languages with far fewer resources than English. Recently, neural semantic parsers, especially under the encoder-decoder framework, have sprung up (Dong and Lapata, 2016, 2018; Jia and Liang, 2016; Xiao et al., 2016). To make the model aware of the underlying grammar of targets, constraints are exerted on the decoder side through sketches, typing, grammars, and runtime execution guides (Krishnamurthy et al., 2017; Groschwitz et al., 2018; Wang et al., 2018). Moreover, learning algorithms in SP, such as structural learning and maximum marginal likelihood, have been combined with reinforcement learning (Guu et al., 2017; Iyyer et al., 2017; Misra et al., 2018).

Adaptive Computing.
Adaptive Computation Time (ACT) was first proposed to adaptively learn the depth of RNN models from data (Graves, 2016). Skip-RNN (Campos et al., 2018) used a similar idea to equip existing RNN cells with a skipping mechanism that adaptively skips some recurrent blocks along the computational graph, thereby saving much computation. BlockDrop (Wu et al., 2018) also introduced the REINFORCE algorithm to jointly learn a dropping policy and discard some blocks of a ResNet according to the policy network. Recently, Dehghani et al. (2019) proposed the Universal Transformer (UT) as an alternative to the vanilla Transformer (Vaswani et al., 2017). It utilizes ACT to control the number of recurrences of the basic layer block (with shared parameters), instead of stacking different block layers as in the vanilla Transformer. This helps UT mimic the inductive bias of RNNs, was shown to make it Turing-complete, and lets it outperform the vanilla Transformer in many tasks.

Conclusion

We present AdaNSP, which adaptively searches the corresponding computation structure of RNNs for semantic parsing. Our method does not need any intermediate supervision such as predefined grammars or sketches.
Discursive Strategies and the Maintenance of Legitimacy

Any organization, to fulfill its mandate from society, needs legitimacy to use collective resources. Conferred almost automatically at the birth of the organization, legitimacy has to be maintained and even repaired when necessary. Legitimacy thus appears as a conversation between the organization and the general public. Notably, this continuous conversation is sustained through the media and also through documents issued by the firm, particularly the annual report. Firms use discursive strategies to sustain their legitimacy. Using semiotic analysis within a multiple-case study (6 firms over 5 years), this paper isolates the different stories in the annual reports, including the images that are integral parts of these narrations. We apply the semiotic instruments to these stories to deconstruct their content and expose the actors filling actantial roles. We found a substantial number of stories (187 in 30 reports) containing the categories developed by Greimas and Bremond from the work of Propp, implying an intensive use of the report in the conversation maintaining legitimacy.

Introduction

Legitimacy is an unavoidable asset (Pfeffer & Salancik, 2003) for an industry (Hasbani & Breton, 2013). It is originally granted through a series of papers officializing the existence of the organization and its belonging to a particular sector, one exerting an activity recognized as being of public interest (Suchman, 1995) and having ways of doing it that are generally accepted (Hasbani & Breton, 2013).
The maintenance of legitimacy is not always problematic. In the appliance sector, for instance, the activity is recognized as useful and there has been no controversy around the way it is conducted. But other sectors are more visible, not only because of the size of their firms (Watts & Zimmerman, 1986) but also because what they do is considered more crucial, or how they do it barely acceptable. Nike being accused of making young children work in third-world countries is an example of legitimacy challenged because of the way activities are conducted. The pharmaceutical sector is among those whose activity is impossible to question even though its practices remain open to criticism. Casual observation allows us to name some of these sectors: pharmaceuticals, medicine, oil and gas, etc. Other sectors gain visibility because of their potential negative effects on the environment and public health: chemicals, pulp and paper, tobacco, etc. Legitimacy concerns of the public are not a matter to be settled in a week or a month. If we take the case of the tobacco industry, we see that this activity, which faced no real opposition fifty years ago, is now widely attacked from all sides. However, it still continues to exist, for a period difficult to estimate. The movement to ban tobacco, if it wins some day, will therefore have taken nearly a century to concretize its victory. From these examples, we may propose that legitimacy is like a conversation between an industrial sector and the general public (Hasbani & Breton, 2013).
Legitimacy also has an accounting side. Sectors with a high sensitivity to legitimacy fluctuations conduct activities that are conferred a high degree of importance. This importance allows them to exhibit greater flexibility in the processes they use. These sectors may be highly profitable because of this flexibility; however, at a certain level, they will be considered able to do anything for profit. This is the accounting component of legitimacy. It also connects with the argument of Watts & Zimmerman (1986), because these firms may then try to avoid criticism by diminishing their disclosed profits.

The firm is a discursive reality (Boje, 2001). It may or may not have tangible assets, but contrary to a "physical person", the existence of this "moral person" is ultimately the effect of a consensus. Therefore, its very existence and its potential possession of intangible assets, like legitimacy, are all effects of some discursive process. We thus argue that the maintenance of legitimacy is a discursive activity and must be studied through discursive approaches. Our study then follows research like that of Davison (2008) or Froud et al. (2006), aiming at understanding the use of organizational discourse intended to influence the perception of users, and the research following them.

Our study is structured firstly around the notion of legitimacy. Then we present the pharmaceutical sector, followed by the method and the results. Finally, we draw some conclusions.

Legitimacy

Although legitimacy has been widely discussed through the years, the concept, when applied to organizations, has not been much developed. There are many forms of legitimacy. Figure 1 shows a characterization of legitimacy, applied to firms, differentiating the different periods, objects, and sources (Hasbani & Breton, 2013).
Figure 1. Types of legitimacy related to the sector and to the organization (e.g., legitimacy from the citizens: the fundamental recognition that a firm enters a socially important sector).

Legitimacy is given first at the birth of the firm, through legal documents. It is given by the authorities, based on the firm's declaration that it will work in a sector recognized as socially acceptable. Suchman (1995) defines legitimacy as a generalized perception or assumption that the actions of an entity are desirable, proper, or appropriate in a given society. That is the beginning of the conversation about legitimacy, which is continued in the public sphere. Not every sector has the same importance in this conversation: some are more visible, others less. This visibility is related to the size of the firms (Watts & Zimmerman, 1986), to their appearance of being "naturally" there (Weber, 1971), like a restaurant for instance, or to their perceived importance for the life of the society. In this latter category we may place the pharmaceutical industry and all the sectors related to health, energy, banking, etc. In these sectors, the activity itself is never discussed, but the processes may be put into question. These sectors will often allow themselves some liberties with the processes because their activities are considered crucial, not just acceptable. It would probably be possible to build a kind of index of the activities considered crucial, acceptable, or barely tolerable by society. Such an index would most probably show that the oil and gas sector is allowed to make huge profits and is not really disturbed by huge environmental catastrophes (see, for instance, the profit figures of British Petroleum during and after the Gulf of Mexico oil spill).
Legitimacy must be maintained at all times at a certain level. There are many aspects of this maintenance, many fronts on which to conduct the battle. One front takes the form of a conversation between the sector and the citizens. Such a conversation lasts forever, even if it intensifies in periods of crisis. Even during a crisis, the conversation continues until the sector is closed, and it can continue even after the activity has been banned, as in the case of Prohibition in the US. This conversational battle takes place in the media, where both sides' arguments are reflected, in other publications of the firm, and, lately, on the Internet. This conversation implies the use of discursive strategies.

Discursive Strategies

Legitimacy is mainly a matter of discourse. It is a kind of endless conversation between an industry and the general public. The public expresses, through the media, its perception of the social importance of an industry's activity and of how this activity is conducted (Hasbani & Breton, 2013). The sector (firms and lobbies) tries to influence this perception.

The organisation can adapt its output, goals, and methods of operation to conform to prevailing definitions of legitimacy; the organisation can attempt, through communication, to alter the definition of social legitimacy so that it conforms to the organisation's present practices, output and values; and/or the organisation can attempt through communication to become identified with symbols, values or institutions which have a strong base of legitimacy. (Dowling & Pfeffer, 1975, p. 127)

Discursive strategies, for Dowling & Pfeffer (1975), are more important than changing the actions. Lindbloom (1994) and Massey (2001) also propose discursive strategies to maintain and regain legitimacy. Guthrie & Parker (1989) add that the annual report is a designated place to do this work. The pharmaceutical sector has developed particularly efficient rhetorical strategies to perform this task.

The Pharmaceutical Sector

As we said before, the legitimacy of this sector's activity is very high. Therefore, the sector might be inclined to take liberties with the procedural or legal aspects of legitimacy (Figure 1). Firstly, we look at the position of the sector in the economy, and then we discuss the main elements around which its discourse evolves.

The Profitability of the Sector

The sector maintained an average net profit of 20% between 2006 and 2010. During those five years, sales increased by 40%. In terms of sales, these firms are quite well placed in the world's top five hundred. However, they do even better in terms of net profit. These companies seem to have weathered the crisis quite well, with 10 of them figuring among the 50 most profitable enterprises in the world, probably because in difficult times drugs are considered an essential expense and given priority. This points to a high level of legitimacy of the activity: this sector is one of the most profitable. Interestingly enough, the tobacco sector, for instance, despite its legitimacy crisis, reports large profit margins, possibly reflecting reports about the incorporation of more addictive substances into its products. Those who are still smoking are thus more strongly addicted than ever, and users consider this product a first necessity, even during an economic crisis.
The Discourse of the Pharmaceutical Industry

Historically, these industries have justified their high level of profit by the necessity of financing research and development (R&D). In accounting terms, this argument has no basis, as profit is calculated after deducting R&D expenses. However, it is not aimed at accountants but at the general public. In fact, the sector spends twice as much on marketing, sales, and administrative expenses as on R&D. Moreover, considering that doing R&D implies investing in fixed assets (laboratories and up-to-date equipment), the sector is disinvesting: its fixed assets decreased as a proportion of total assets by more than five percent between 2006 and 2010. Large pharmaceutical companies have merged with and bought other ones, consolidating the sector and decreasing competition. This lost competition (legal legitimacy in Figure 1) is compensated for by discourse:

Overall GSK's strategy is one keeping things going (by merger) for the stock market, in a world where multiple and contested narratives are as important as the financial numbers. (Froud et al., 2006, p. 151)

The discourse is thus essential to maintain the level of legitimacy despite practices (the level of R&D expenses, or the reduction of the market through mergers and acquisitions) opposed to what the general public finds legitimate. The discourse perverts the meaning of actions, isolating itself from any referential connection and evolving in a schizophrenic relation with the "reality" that Baudrillard (1981) named hyperreality. The importance given to narratives as a replacement for facts leads to an extensive use of storytelling to frame the activity of industrial sectors. Storytelling has invaded every part of our lives (Barthes, 1957), including the annual report. To substantiate this, consider the practices of TAXI, a public relations firm producing many annual reports.
Whatever the media used, TAXI tells captivating stories. From televised advertising to the internet site, via our famous publicity schemes, we are guided by uncommon perspectives inspiring us with striking ideas that have durable effects. (TAXI, 2010) (Our translation)

Consequently, we need a method to analyze the stories told in the annual report through the texts, obviously, but also through the pictures, which are reputed to be worth a thousand words.

Proposition 1: Firms use discursive strategies to maintain their legitimacy.
Proposition 2: These strategies are conducted through the media and the documents produced by the firms, notably the annual report.
Proposition 3: The use of storytelling helps format the information in a way that influences the perception of the public.

Methods

As we are exploring new areas, we opted for a multiple-case study approach. This section explains the way we selected our cases and how we analyzed the annual reports of these firms.

An Embedded Case Study

Following Yin (2009), we opted for a single case study with six embedded units of analysis: Johnson & Johnson, Pfizer, Merck, Bristol-Myers Squibb, Eli Lilly, and Abbott, restricting our group to US-based firms. Our choice of studying the six US leaders is based on the idea that smaller firms in a sector tend to align with the practices of the leaders (DiMaggio & Powell, 1983) and that these leaders control the lobbying activity in their sector, if only by financing it. As we look for the discourse of the sector, our source of data is mainly documentary. We focus on the annual reports of these firms to see how they maintain the legitimacy of the sector.
However, from a scientific perspective, documents are not immediately accessible in all their dimensions to the casual reader. Consequently, we need a method to extract the content of the texts and images. There are several on the market (Breton, 2009). To conduct our analysis, we decided to use some semiotic instruments (Hasbani & Breton, 2013). Firstly, our analysis rests on the use of storytelling in organizational discourse, verbal or written. This use has been discussed by many authors: Salmon (2007), Gendron & Breton (2013).

The Semiotic Tools

Courtès (2007) describes the situation as follows:

We will understand then that it is possible to oppose a "realistic" discourse (which, appealing to the traditional five senses, gives an impression of "truth") to a treatise of logic, or more widely of philosophy, from the conceptual universe. (…) At this level, we must recognize that figurative discourse is always more convincing than thematic discourse, and, if we want to inculcate in children one or another system of values, the best way is obviously to present it in a figurative manner, approaching the "realist" style. (Courtès, 2007, p. 107)

These discursive principles have been deeply understood by those in charge of communication in firms (Salmon, 2007). To substantiate that storytelling has invaded the business world, and to present it in a figurative manner, we reproduce a paragraph from the 2006 annual report of Johnson & Johnson as Figure 2. The annual report is thus presented as a collection of stories showing, more than telling, what the company and, mostly, the people in it are. For a more complete description of the use of semiotic analysis on texts, we refer the reader to Breton (2009) and Hasbani & Breton (2013). We use two analytical instruments from the semiotic toolbox: the first is the actantial structure (Greimas, 1976), and the second is the function (Propp, 1965; Bremond, 1964, 1966).
Propp studied folktales. He was interested in a level of autonomous signification, endowed with a structure that can be isolated from the content per se, which Bremond called the narration (le récit) (Bremond, 1964). This is not the form in classical linguistic terms; by analogy with Hjelmslev, this level can be termed the form of the content, while the events reported are the substance of the content. While the substance of the content changes with each story, its structure, the "form of the content", remains relatively stable. The actantial structure of Greimas is one of those persistent structures. This level of analysis is precisely where semiotics lies: recognizing the recurrent structures in narratives and taking their study as the basis for a linguistics of a higher order (Barthes, 1966).

The transformation (Everaert-Desmedt, 2000) leads to a state of happy stability from the situation of crisis that initiates the narration. This "happily ever after" situation is characteristic of a maturity consisting in wise self-government, leading to happiness (Bettelheim, 1976). For Courtés (1976), it is not the end; it is an empowerment necessary for realizing the desired social ascension. Whatever the proposed interpretation of the "happy" end, it occurs outside the narration. So, as in US TV series, for the story to restart the next week, this state has to be delayed indefinitely. This is also what firms do in the annual report (Hasbani & Breton, 2013).
Our Corpus

We use the documentary approach on a series of communications from a number of leaders of the pharmaceutical industry. The main document studied is the annual report. In Figure 2, we see that J&J speaks of stories, implying that its annual report contains more than one. We have 6 companies with 5 reports each, for a total of 30 reports presenting a total of 187 stories. For a part of the text to qualify as a "story," it must contain an actantial structure and a diegesis. Table 4 presents some statistics about our corpus. We identified 187 stories in our sample. However, they do not come in equal proportions from the firms. Table 4 shows that the narrative sections, averaged for each firm, have a large standard deviation: Eli Lilly has 2 pages while Abbott has 36. It must be noted that the 18.8 pages of J&J contain more stories (12.8) than the 36 pages of Abbott (9.6).

Results

This section is in two parts. First we provide examples of the analyses we have done. Second we present statistics on the presence of the targeted characteristics in the retained corpus.

Examples of Analyses

First, we want to identify the actants in the stories. Table 5 shows the category of actor filling the function of hero in our 187 narrations. The patient is, by far, the most popular hero. Interestingly enough, the management and marketing staff share second place ex aequo with the research staff. When this information is connected with the fact that pharmaceutical firms spend more money on marketing than on research, we understand that these people are quite important. Figure 3 shows the cover of the 2006 annual report of Merck.

Figure 3. Merck's cover of an annual report

Five pictures imply five stories. The fourth story is described this way: For much of her adult life, JAMILLA COLBERT suffered from the disfiguring effects of CUTANEOUS T-CELL LYMPHOMA...
Most people have never heard of cutaneous T-cell lymphoma (CTCL). But for those who have this form of cancer, which affects the skin, every day is a challenge: pain and discomfort, stares from unthinking strangers, frustration that nothing provides real relief. It's been 25 years since Jamilla Colbert noticed the first signs of CTCL: itchy skin, followed by growths on every part of her body that wouldn't go away. Over the years, Jamilla's search for relief led to one disappointment after another. From topical ointments and chemotherapy to full-body radiation and surgical removal of tumors, nothing proved completely satisfactory. Although she got relief from some of these treatments, over time she still experienced symptoms of her CTCL. "I felt so alone," Jamilla recalls. "The doctors had no idea what more they could do for me." Then, two years ago, her doctor learned about a Merck clinical trial for the treatment of CTCL, and immediately thought of Jamilla. She enrolled and had very positive results from treatment with the drug, called Zolinza. And while not all patients respond as favorably as Jamilla has, Zolinza has definitely improved her life. As Jamilla will tell you, "I have been blessed. There is hope out there." (Merck, 2006, p. 14)

Table 6 shows the results of the application of the semiotic tools to this story. We have the typical structure of a story: a negative situation at the beginning, then a transformation (melioration, Bremond, 1966), helped by the adjuvant, leading to a much more satisfactory situation at the end. This happy ending is also the defeat of the opponent: the sickness and the resulting miserable life.
Such a story legitimizes the pharmaceutical sector in many ways. First, by eliminating the sickness, the pharmaceutical industry appears to fulfill its social contract and to redeem its right to use public resources. Second, the industry is sending back to society important resources that were not performing because of the illness. The pharmaceutical companies are therefore giving back what they take. The sick person is the main agent of her recovery; Merck is just helping, bringing health to the people, as its mission states. Figure 4 presents some extracts of the mission statements of pharmaceutical firms as found on their Internet sites.

The analysis of pictures is not as well developed. Barthes (1964) produced some famous examples, although not totally convincing ones. Since then, we have learned a few things about what we really see and what our brain constructs in an image (Groupe µ, 1992). A publicity picture is constructed in such a way that the eye will focus on the surfaces where the key information is placed (Joly, 2009). For Péninou (1970), there are four principal configurations in the construction of publicity pictures. The focalized construction places the product where all the lines converge. The axial construction places the product at the center of the picture. The construction in depth integrates the product into a scene while placing it in the foreground. Finally, the sequential construction places the product at the end of the path the eye follows. Normally, in societies where we read from left to right, the gaze follows a kind of Z shape, starting at the top left to scan the top, then following a diagonal from the top right to the bottom left, and then going to the bottom right (Péninou, 1970). We must also consider the light, because it plays a role in directing attention. Whether we have a high-angle or a low-angle shot also influences our perception of the image. Marketing has developed some understanding of the use of pictures in
messages. Even in health communication, these findings have been applied (Stones, 2013). Following Levie & Lentz (1982), illustrations have four fundamental functions, presented in Table 7. In marketing, all these functions lead to the formation of a positive attitude toward what the firm is selling. In the images we present here, the affective function is quite prominent, provoking an emotional response.

Table 7. Functions of text illustrations

The term "affective" design refers to empathetic, meaningful design that intends to evoke affect. Carliner (2000) added the term "affective" design to a framework for information design, referring to it as "designing the communication product for its optimum emotional impact." (…) The terms then relate strongly to products that are either bought or selected/used over key competitors, with positive affection playing a role in that selection criteria or continued use. (Stones, 2013, p. 87)

These affects are felt through a certain way of "reading" the pictures. In our case, the image is clearly made to be read following the "Z" pattern. The first thing in the spot is the face of the researcher. We can describe his Gioconda-like smile as expressing his satisfaction. Then we go down and see, from the smock he is wearing, that he is a scientist. If we do not understand at first glance, it is written on it. And, finally, we move toward the computer, which we recognize mainly by the keyboard. The message is: "It's wonderful to make a difference in someone's life." The picture is also shot slightly from a low angle, giving the superior position to Doctor Hess, the great researcher. We are at the bottom of the picture and, at the top, the Doctor is looking up, inside himself in fact, for new great ideas that we are too low to conceive but that will have wonderful effects on our lives. Table 8 presents the elements of our analysis of this story.
Transformation: "The vision we have is to use Harmonic technology as the cornerstone of a growing energy franchise that will offer multiple benefits to surgeons and patients in any procedure."
Final situation: "It makes a difference in the care I can provide for my patients."

Here again, it is about having fulfilled the mandate given to the industry to help people recover their health, and spreading this goodwill as widely as possible.

But the profits of the pharmaceutical companies are huge. Therefore, first, to justify this situation, they claim to need such high profits to fund more R&D. But they also create foundations to produce an impression of redistribution. In this category of stories, the foundation is the hero. Figure 6 provides an example of a picture illustrating such a story. A rhetorical strategy consists in creating a reality effect (Barthes, 1968). Aristotle, in his Rhetoric (1991), said that the believable is preferable to the true, as the truth is not always believable (our translation). So, to create an effect of truth, it is important to show real people, to name them, and to provide details of their lives that give them a real existence.

Figure 6 shows more than a real child: it shows, through the other silhouettes in the background, that he is playing, and therefore happy and in reasonably good health given the terrible illness he carries.

The Baylor College of Medicine-Abbott Fund Children's Clinical Centre of Excellence-Malawi is the country's first outpatient clinic dedicated to serving children and families living with HIV. Brian is one of many children receiving medical care at the center through a comprehensive program. To date, Abbott and Abbott Fund's programs have assisted more than 600,000 children and families impacted by HIV/AIDS in the developing world. (Abbott, 2006, p.
40)

Table 9 shows the semiotic categories applied to this story. A semiotic analysis may focus on the use of children in such stories. Children are reputed to be innocent. Therefore, while a sickness might result from something adults did, this cannot be the case for children, making their sickness more revolting and the help someone can bring more admirable. The use of children to create a dramatic effect is widespread. Now that we have illustrated our method, let us turn to the results.

Systematic Analysis

After these examples, we may provide some results encompassing the 187 stories from the six pharmaceutical firms. This analysis starts with the pictures, as it is common knowledge that each one is worth a thousand words. We found 222 pictures illustrating the 187 stories. However, in terms of social roles (as opposed to actantial roles) we have more than 222, as photos can contain many people. Table 10 shows the frequencies of the social roles of the persons in the pictures. While the pictures go along with the stories, not every person appearing is necessarily referred to in the text. It is obvious that the patient (potential or actual) is the main target of the message. Therefore, the report principally shows patients' stories. When we find employees' stories, they are mostly about researchers (Table 12 below). Of the 90 persons internal to the organization, 87 are employees fulfilling different functions, as shown in Table 12. Even if, in budgetary terms, as we saw when presenting the industry, research is far from being the most important activity of the firm, it is by far the most widely represented in the stories and pictures of the annual report. Following Froud et al. (2006), only 14% of the employees in the pharmaceutical sector work in research departments. They are overrepresented in the report. Conversely, administration and marketing represent 53% of the employees and manufacturing 33%. These departments are clearly underrepresented in the report.
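The two numeric contrasts quoted above, the stories-per-page density from Table 4 and the workforce shares from Froud et al. (2006), can be made explicit with a small illustrative computation. The figures are those quoted in the text; the code itself is only a sketch.

```python
# Stories per narrative page, using the per-firm averages quoted from Table 4.
firms = {
    "Johnson & Johnson": {"pages": 18.8, "stories": 12.8},
    "Abbott": {"pages": 36.0, "stories": 9.6},
}

density = {name: d["stories"] / d["pages"] for name, d in firms.items()}
for name, dens in density.items():
    print(f"{name}: {dens:.2f} stories per narrative page")

# Workforce shares in the sector (Froud et al. 2006), as quoted in the text.
shares = {"research": 0.14, "administration and marketing": 0.53, "manufacturing": 0.33}
print(f"total accounted for: {sum(shares.values()):.0%}")
```

The density makes the J&J/Abbott contrast explicit: J&J packs roughly 0.68 stories into each narrative page against about 0.27 for Abbott, despite Abbott's much longer narrative section.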
Acknowledging that the annual report has no obligation to present the different categories of workers proportionally, the realized proportions show which part of the firm's activity management wants to place under the spotlight. These choices are made to create a particular image, one that does not encompass the whole situation. The lucrative aspect of the activity is mostly obliterated to create the image of a beneficial, almost charitable organization, enhancing public acceptance of the firm and therefore maintaining its legitimacy. Suchman (1995) proposes a series of strategies to maintain legitimacy, classified into two groups: perceiving future changes and protecting past accomplishments. Our findings show at least a huge effort to protect past accomplishments. The firms are saying that they have been helping people regain their health for decades. They say this by systematically telling stories about patients recovering from a severe illness and returning to normal life after taking treatments offered by the firms. By showing researchers at the top of their discipline, they also orient the perception of future changes. They demonstrate that they are well placed to remain in the leading position in the race for new drugs and new treatments. Our results also point in the direction of the propositions extracted from previous studies. The multiple stories found in the annual reports of the six firms studied over five years (30 firm-years) support the studies that found an increasing number of such stories in firms' written documentation and verbal discourse. We can conclude that storytelling is widely used in the pharmaceutical sector, opening the way to an extensive survey of this narrative form in annual reports and other corporate publications.
This narrative form is a privileged tool, shaping the message into a format that is easy for the reader to adhere to rather than to understand. As we have seen, the pictures and the short texts combine to produce an effect and to carry the message that the firm's first goal is to help people be as healthy as possible, while the numbers, to some extent, say the contrary. For instance, Africans are overrepresented in these pictures relative to the effort actually devoted to their specific illnesses (malaria, etc.). In a way, given the cost of medicine in the US, African-Americans are also overrepresented. Consequently, our three propositions stated previously are, at the least, not contradicted by our results.

For the theory of legitimacy, we propose to acknowledge that legitimacy is related to an industry rather than to a particular firm. It is an activity that loses legitimacy, not a firm, which may instead lose its reputation. When the process, rather than the activity, is challenged, it is also at the industry level. We also propose that legitimacy does not work the same way for exposed industries as for those that are not. As health can be considered very sensitive for the population, the pharmaceutical industry is squarely under the spotlight and therefore has to constantly maintain its legitimacy in an endless conversation with the citizens.

Further research may be interested in building an index ranking industries by their sensitivity to legitimacy fluctuations. Such research may also analyze pictures more extensively, a dimension largely ignored in studies of annual reports or even of Internet sites. It will also have to consider a wider spectrum of sources, perhaps placing the emphasis on more widely distributed sources than the annual report.

Doctors form a category of researchers, or some foggy category of scientists, like Doctor Hess. However, the fourteen doctors found in our table are not employees of the firm but outsiders who have benefited from the company's innovations. Consistent with this finding, three quarters of the persons in the pictures come from outside the firm, as shown in Table 11.

Figure 2. Presentation of the storytelling activity by a pharmaceutical firm
Figure 4. Extracts of companies' missions as expressed on their Internet sites
Figure 5. Image of a satisfied doctor after having changed the life of one patient
Figure 6. Image of a young African now playing with his friends, because of the action of the firm in African countries
Table 2. Profit/sales of the largest pharmaceutical companies for 2006 and 2009 and their rank in 2009. Source: Fortune Magazine, Global 500 annual ranking of the world's largest corporations, 20-07-2009 and 24-07-2006. Note: *Profit in proportion to sales. **Last year Fortune published this information (highest return on revenues) in its annual survey.
Table 3. List of the sectors in order of profitability in 2009. Source: Fortune Magazine, Global 500 annual ranking of the world's largest corporations, 20-07-2009.
Table 4. Characteristics of the annual reports in our corpus. Note: *The statistics are the means of the variables for the 5 years under study. **The number of pages of the report excluding the financial statements and the notes. ***The number of pages before the MD&A section, including the front cover and the page inside it.
Table 5. Frequency of the different types of actors having the actantial role of the hero
Table 6. Application of the semiotic tools to the story of Jamilla Colbert
Table 8. Application of the semiotic tools to the story of Doctor Hess
Table 9. Application of the semiotic tools to the story of Brian
Table 10. Social roles of the persons appearing in the pictures
Table 11. Origin of the characters presented in the pictures
Table 12. Functions of the employees presented in the pictures
Should length of stay in hospital be the endpoint in arthroplasty?

This is, interestingly, one of the 10 most cited papers in the history of Acta after the year 2000 (Husted et al. 2011). Interestingly, because length of stay (LOS) is not the most important parameter in arthroplasty: freedom from pain, normalized function, and longevity are the ultimate goals. Why then is LOS of such interest? Hospital beds are a limited resource in many parts of the world, irrespective of payer system. LOS has therefore come under surveillance, to the degree that day-care arthroplasty has become common in certain hospitals (Hartog et al. 2015). Remember that it is not more than 15 years ago that patients stayed in hospital for 1 to 2 weeks after total joint arthroplasty (TJA). The study, on 207 patients undergoing hip or knee arthroplasty, registered twice a day whether each discharge criterion had been fulfilled, and the detailed reason(s) for not being discharged. Husted et al. found that in a fast-track system, pain, dizziness, and general weakness were the main reasons for not being discharged at 24 and 48 hours in 80% of patients. Median LOS was 2 days, and 95% were discharged within 3 days. Waiting for blood transfusion, for the start of physiotherapy, and for postoperative radiographic examination delayed discharge in 20%. The first factors can be seen as patient-related, while the last are hospital factors. The hospital factors could be removed organizationally, while the patient factors probably could not be changed. The authors had previously shown that readmissions were not increased by the fast-track system. They concluded that the findings offered the possibility of safely reducing LOS after fast-track hip or knee arthroplasty. Now, nearly 10 years after its publication, it can be discussed whether being highly cited is equivalent to being an important scientific paper.
The study was non-selective in including all patients scheduled for TJA in a 6-month period, and was thereby valid for all patients treated at Hvidovre hospital, and maybe for all patients in Denmark and Scandinavia. It was published in a period when LOS was rapidly decreasing due to the implementation of fast-track surgery around the world. Husted et al. studied why some patients were still in hospital while others had returned home, a topic which interested all researchers in hospital logistics and postoperative analgesia. The 176 citing papers are mostly on rapid recovery and analgesia. The study reached a peak with 18 citations in 2018. The most surprising citation was in pediatric urology, but that study too was on enhanced recovery after surgery (Haid et al. 2020). Husted and Kehlet have been the pioneers of rapid recovery in Scandinavia, with numerous publications on analgesia (which is a prerequisite for rapid discharge) and, recently, outpatient total joint surgery (Gromov et al. 2019). The value of the 2011 paper has perhaps been mostly to pave the way for a possibility unthought of just 15 years ago: leaving hospital with a new hip or knee the same day as you went in through the hospital doors.

Acta Orthopaedica 2011; 82 (6): 679–684
Why still in hospital after fast-track hip and knee arthroplasty?

Henrik Husted, Troels H Lunn, Anders Troelsen, Lissi Gaarn-Larsen, Billy B Kristensen, and Henrik Kehlet
DOI 10.3109/17453674.2011.636682

Background and purpose
Length of stay (LOS) following total hip and knee arthroplasty (THA and TKA) has been reduced to about 3 days in fast-track setups with functional discharge criteria. Earlier studies have identified patient characteristics predicting LOS, but little is known about specific reasons for being hospitalized following fast-track THA and TKA.

Patients and methods
To determine clinical and logistical factors that keep patients in hospital for the first postoperative 24-72 hours, we performed a cohort study of consecutive, unselected patients undergoing unilateral primary THA (n = 98) or TKA (n = 109). Median length of stay was 2 days. Patients were operated under spinal anesthesia and received multimodal analgesia with paracetamol, a COX-2 inhibitor, and gabapentin, with opioid only on request. Fulfillment of functional discharge criteria was assessed twice daily and specific reasons for not allowing discharge were registered.

Results
Pain, dizziness, and general weakness were the main clinical reasons for being hospitalized at 24 and 48 hours postoperatively, while nausea, vomiting, confusion, and sedation delayed discharge to a minimal extent.
Waiting for blood transfusion (when needed), for the start of physiotherapy, and for postoperative radiographic examination delayed discharge in one fifth of the patients.

Interpretation
Future efforts to enhance recovery and reduce length of stay after THA and TKA should focus on analgesia, prevention of orthostatism, and rapid recovery of muscle function.
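The discharge statistics quoted above (median LOS of 2 days, 95% discharged within 3 days) can be reproduced on toy data. The per-patient LOS values below are invented for illustration; only the summary logic mirrors the editorial.

```python
# Hypothetical cohort of lengths of stay, in days; the real study had 207 patients.
los_days = [1, 2, 2, 2, 2, 2, 3, 3, 3, 4]

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

med = median(los_days)
share_by_day3 = sum(1 for d in los_days if d <= 3) / len(los_days)
print(f"median LOS: {med} days; discharged within 3 days: {share_by_day3:.0%}")
```

With this toy cohort the median is 2 days and 90% are discharged within 3 days; the reported figures would correspond to the same computation over the study's real discharge data.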
The p90rsk-mediated signaling of ethanol-induced cell proliferation in the HepG2 cell line

Ribosomal S6 kinase is a family of serine/threonine protein kinases involved in the regulation of cell viability. There are two subfamilies of ribosomal S6 kinase (p90rsk and p70rsk). In particular, p90rsk is known to be an important downstream kinase of p44/42 MAPK. We investigated the role of p90rsk in ethanol-induced cell proliferation of HepG2 cells. HepG2 cells were treated with 10~50 mM of ethanol with or without ERK and p90rsk inhibitors. Cell viability was measured by MTT assay. The expression of pERK1 and NHE1 was measured by Western blot. The phosphorylation of p90rsk was measured by ELISA kits. The expression of Bcl-2 was measured by qRT-PCR. When the cells were treated with 10~30 mM of ethanol for 24 hours, they showed a significant increase in cell viability versus the control group. In addition, 10~30 mM of ethanol induced increased expression of pERK1, p-p90rsk, NHE1, and Bcl-2. Moreover, treatment with a p90rsk inhibitor attenuated the ethanol-induced increases in cell viability and in NHE1 and Bcl-2 expression. In summary, these results suggest that p90rsk, a downstream kinase of ERK, plays a stimulatory role in ethanol-induced hepatocellular carcinoma progression by activating the anti-apoptotic factor Bcl-2 and NHE1, which is known to regulate cell survival.

INTRODUCTION

The p90 ribosomal S6 kinase (p90rsk) is a family of serine/threonine kinases located downstream of the MAPK cascade [1]. In humans, three isoforms of p90rsk have been identified, which show similar overall structures with two kinase domains, an amino-terminal and a carboxy-terminal domain. The amino-terminal domain is similar to that of p70 ribosomal S6 kinase, showing ~60% sequence identity, whereas the carboxy-terminal kinase domain is most closely related to the calcium/calmodulin-dependent kinase group (35% sequence identity) [2].
Activation of the amino-terminal domain leads to phosphorylation of all known targets of p90rsk, whereas activation of the carboxy-terminal domain is involved in autophosphorylation [3]. MAPK-catalyzed phosphorylation of Ser364 and Thr574 is essential for activation of the amino-terminal and carboxy-terminal domains, respectively [4]. p90rsk plays an essential role in cell survival and cell cycle regulation, with the ability to phosphorylate and regulate the activity of several substrates, including many transcription factors and kinases, the cyclin-dependent kinase inhibitor, the tumor suppressor, and several cell survival factors [5]. Activation of p90rsk accompanies oncogenic transformation, stimulation of the G0/G1 transition, and differentiation of many types of cells [6-8]. In addition, increased activation of p90rsk is reported during meiotic maturation [9]. During maturation of Xenopus oocytes, activation of p90rsk is needed at the start of meiosis to suppress entry into S phase and to facilitate cyclin accumulation [10]. p90rsk is also related to anti-apoptotic effects. In various cancer cells, p90rsk is generally overexpressed for anti-apoptotic regulation [1,5,11,12]. Moreover, a recent study reported that p90rsk directly promotes cancer cell survival by interacting with heat shock protein 27 [13]. However, the detailed mechanism related to p90rsk is still elusive.

Among the targets downstream of p90rsk is NHE1 (Na+/H+ exchanger isoform 1) [14]. NHE1 is expressed ubiquitously in both the plasma membrane and the mitochondrial inner membrane of mammalian cells and extrudes intracellular H+ in exchange for extracellular Na+ to regulate intracellular pH and the concentration of intracellular Na+ [15]. In normal cells, NHE1 activity remains low at resting pH, while in malignant cells it is usually hyperactive [16].
NHE1-dependent H+ efflux leads to intracellular alkalinization, which prevents apoptotic events [17,18] and promotes cell proliferation and mitogenic stimulation [19]. For instance, NHE1 defends against apoptotic stress by inhibiting caspase activity and is reported to be involved in the survival of several cell lines [20,21]. Furthermore, inhibition of NHE1 has been reported as an early signal transduction event that may participate in the regulation of the apoptotic response to many drugs [22-25]. Many data also suggest that the anti-apoptotic action of the Bcl-2 family depends on NHE1-associated cellular alkalinization [26-28].

A series of studies has demonstrated that Bcl-2 family members are associated with the ERK-p90rsk pathway [29,30]. Bcl-2 family members can induce or inhibit cell death, with the ratio of pro-apoptotic to anti-apoptotic family members representing a critical indicator of the sensitivity of mammalian cells to many kinds of apoptotic stress [31]. Several anti-apoptotic Bcl-2 family proteins can physically interact with each other, forming hetero- or homodimers [32]. Bcl-2 family members contain up to four Bcl-2 homology (BH) domains: BH1, BH2, BH3, and BH4. Some Bcl-2 family members contain only a BH3 domain. The BH3-only subfamily of Bcl-2 proteins heterodimerizes with and antagonizes the activity of pro-survival proteins (Bcl-2, Bcl-xL) and promotes apoptosis [33].

Many studies have established that increased ERK activity is critical to the progression of hepatocellular carcinoma [34]. Furthermore, recent studies have established that ethanol-induced growth of hepatocellular carcinoma involves increases in ERK-MAPK signaling [35]. Among the substrates downstream of ERK is p90rsk. The importance of p90rsk in many diseases, such as cancer, is increasingly appreciated. In prostate cancer, overexpression of p90rsk has been reported [5].
In HepG2 cells, among the p90rsk subtypes, RSK1 is known to be expressed [36]. Therefore, we asked whether p90rsk, especially RSK1, plays an important role in ethanol-induced growth of human hepatocellular carcinoma, in association with several substrates downstream of p90rsk that are known to regulate cell survival and apoptosis. In this study, we explored molecular changes within HepG2 cells treated with ethanol (10~50 mM) with or without various inhibitors of the ERK-p90rsk signaling pathway.

Cultures of HepG2 cells

Human hepatoma HepG2 cells were obtained from the Korean Cell Line Bank (KCLB, Seoul, Korea). Cells were cultured in DMEM supplemented with 10% FBS containing 100 U/mL penicillin, 0.1 mg/mL streptomycin, and 0.25 μg/mL amphotericin B, and incubated in a humidified atmosphere of 5% CO2 and 95% air at 37°C. After reaching confluence, the cells were detached using 1% trypsin-EDTA in HBSS with bicarbonate. The cells were then counted, seeded at 2×10⁵ cells/mL on 100 mm culture dishes, and maintained in DMEM containing 10% FBS. The medium was changed every 48 hours until the cells reached confluence.

Measurement of cell viability

Cell viability was measured using the conventional 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) reduction assay. In this assay, viable cells convert MTT to insoluble blue formazan crystals via the mitochondrial respiratory chain enzyme succinate dehydrogenase. The cells were plated at a density of 2.5×10⁵/well in 6-well plates and grown in DMEM with 10% FBS and Antibiotic-Antimycotic. When the cells had become quiescent at confluence, they were synchronized in serum-free medium for 24 hours, followed by treatment with each indicated agent for the indicated time periods. After incubation, the cells were rapidly washed twice with PBS and incubated with MTT solution (final concentration, 5 mg/mL) for 4 hours at 37°C.
Then, the supernatant was removed and the formazan crystals were dissolved in DMSO with gentle shaking for 15 min. Absorbance was monitored at 570 nm with a microplate reader (Molecular Devices, Sunnyvale, CA, USA).

Preparation of cell extracts

The cells were plated at a density of 2.5×10⁵ cells/well in 6-well plates. After 48 hours of incubation, they were serum starved in serum-free DMEM for 24 hours. The cells were then stimulated with each compound for the indicated time periods or at the indicated concentrations. After incubation, the cells were rapidly washed twice with PBS and lysed on ice for 5 min in 200 μL of lysis buffer (20 mM Tris-HCl (pH 7.4), 0.5 mM EDTA, 0.5 mM EGTA, 1% (w/v) Triton X-100, 0.01% (w/v) SDS, 10 μg/mL leupeptin, 10 μg/mL aprotinin, 1 mM PMSF, 0.7 μg/mL β-mercaptoethanol, and 10 μL/mL phosphatase inhibitor cocktail-3). The lysates were scraped with a cell scraper and collected in Eppendorf tubes. They were then sonicated (2 × 6 seconds) and centrifuged for 15 min at 13,000 rpm at 4°C to remove cellular debris. After denaturation, the supernatants were collected and stored at −80°C for protein assay and Western blot analysis.

Protein assays

The protein concentration of the supernatant in each lysate was determined spectrophotometrically using Bradford reagent according to the instructions of the manufacturer (Bio-Rad Chemical Division, Richmond, CA, USA). Absorbance was measured at a wavelength of 595 nm.

Western blot analysis

Equal amounts of protein from each sample were subjected to electrophoresis on a 7.5% SDS-polyacrylamide gel and transferred to a nitrocellulose (NC) membrane using a Power Pac 1000 power supply (Bio-Rad, Melville, NY, USA). After blocking the NC membrane with 5% nonfat dried milk powder/TBS containing 0.1% Tween 20 for 60 min, followed by three rinses in milk-free TBS, the membranes were probed overnight at 4°C with primary antibodies against p-ERK1, NHE1, and Actin.
Primary antibodies were then removed by washing the membranes 3 times in TBS containing 0.1% Tween 20, followed by a 70 min incubation with horseradish peroxidase-conjugated secondary antibody. Immunoreactive proteins were detected with ECL reagent. Molecular masses were estimated by comparison with a prestained molecular mass marker. To confirm the uniformity of protein loading, the same blots were subsequently stripped with Western blot stripping buffer and reprobed with Actin antibodies. The results were analyzed with Quantity One analysis software (Bio-Rad Chemical Division, Richmond, CA, USA). Percent activation of p-ERK1 or NHE1 was calculated as the ratio of p-ERK1 or NHE1 to Actin.

Quantitative real-time PCR

At the end of each treatment, total RNA was isolated from HepG2 cells using TRIzol reagent. The quality of the RNA preparation was verified with a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies Inc, Rockland, DE). First-strand cDNA was generated by reverse transcription of the isolated RNA using the TOPscript cDNA synthesis kit according to the manufacturer's instructions. Quantitative real-time PCR reactions were performed with TOPreal™ qPCR 2X PreMIX.

Measurements of phosphorylated p90rsk from HepG2 cells

Cells were cultured in 100-mm culture dishes. HepG2 cells were then stimulated with ethanol in the presence or absence of PD98059 at the indicated concentration. The cell lysates were obtained according to the manual and stored at −70°C until the assays. The levels of phosphorylated p90rsk were quantified using a p-p90rsk ELISA kit. Assays were performed according to the manufacturer's instructions.

Data analysis

Differences among the groups were analyzed using one-way ANOVA and Student's t-test. Data are expressed as the means ± S.E.M. of 3~6 experiments, and differences between groups were considered significant at p<0.05.
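The densitometric normalization described above (band intensity of p-ERK1 or NHE1 divided by the Actin loading control) reduces to a short calculation. The sketch below also expresses the Actin-normalized ratio as a percent of the untreated control, which is an assumption about how "percent activation" was reported; all intensity values are hypothetical, not densitometry data from this study.

```python
def percent_activation(target_intensity, actin_intensity,
                       control_target, control_actin):
    """Actin-normalized band intensity expressed as percent of control.

    All intensity values are hypothetical densitometry readings;
    normalizing to a percent of control is an assumed convention.
    """
    ratio = target_intensity / actin_intensity          # normalize to Actin
    control_ratio = control_target / control_actin
    return 100.0 * ratio / control_ratio

# Hypothetical case: treated p-ERK1 band twice as intense as control,
# with equal Actin loading in both lanes
print(percent_activation(2.0, 1.0, 1.0, 1.0))  # → 200.0
```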
Ethanol induces cell proliferation of HepG2 cells

To investigate the effect of ethanol on the proliferation of HepG2 cells, MTT assays were performed in the absence or presence of ethanol. The cells were incubated with ethanol at the indicated concentration for 24 hours, and then cell viability was measured using the MTT assay (Fig. 1). Addition of 10~30 mM ethanol led to significant increases in cell viability compared to control. The maximal viability was observed at 20 mM ethanol (150% versus control).

Activity of ERK is increased after ethanol treatment in HepG2 cells

To examine whether ethanol induces activation of ERK in HepG2 cells, serum-starved cells were exposed to 10~50 mM ethanol for 24 hr, and then phosphorylated-ERK1 expression was measured by Western blot (Fig. 2A). Densitometric analysis demonstrated that ethanol treatment stimulated ERK1 activation between 10~40 mM, with maximal stimulation at 20 mM (Fig. 2B), suggesting that ethanol is involved in activation of the ERK pathway.

Ethanol treatment activates the ERK-p90rsk pathway in HepG2 cells

p90rsk, a substrate downstream of ERK, regulates cell proliferation and survival [5]. Thus, to investigate whether ethanol induces activation of p90rsk in HepG2 cells, we measured the amount of phosphorylated p90rsk after treatment with ethanol (10~50 mM) for 24 hr. As shown in Fig. 3, HepG2 cells maintained high expression of phosphorylated p90rsk versus control after treatment with 20~30 mM ethanol. To analyze the effect of ethanol upstream of p90rsk, we tested the changes in phosphorylated-p90rsk expression after ethanol treatment in the presence or absence of PD98059 (ERK inhibitor). Fig. 4 shows that PD98059 suppresses the increased expression of phosphorylated p90rsk in ethanol-stimulated HepG2 cells. These results suggest that ethanol induces p90rsk activation in HepG2 cells via activation of the MAPK-p90rsk pathway.
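The MTT viability figures quoted above (e.g. 150% versus control at 20 mM ethanol) are percent-of-control calculations on the 570 nm absorbance readings. A minimal sketch, with hypothetical absorbance values rather than data from this study:

```python
def viability_percent(abs_treated, abs_control, abs_blank=0.0):
    """Cell viability as percent of untreated control from MTT
    absorbance at 570 nm. All absorbance values are hypothetical."""
    return 100.0 * (abs_treated - abs_blank) / (abs_control - abs_blank)

# Hypothetical readings: a treated well at 0.60 vs. a control well at 0.40
# corresponds to 150% of control
print(round(viability_percent(0.60, 0.40)))  # → 150
```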
Ethanol-induced cell proliferation of HepG2 cells is reduced by inhibition of p90rsk activity

To assess cell viability after inhibition of p90rsk, MTT assays were performed with ethanol treatment in the presence or absence of SL0101 (a selective inhibitor of p90rsk). Serum-starved HepG2 cells were exposed to ethanol (20 mM) with or without SL0101 (10, 20, 30 μM) for 24 hours. As shown in Fig. 5, SL0101 reduced ethanol-induced cell proliferation of HepG2 cells in a dose-dependent manner, demonstrating that activation of p90rsk is associated with ethanol-induced cell proliferation of HepG2 cells.

Stimulation of NHE1 activity induced by ethanol treatment in HepG2 cells depends on activation of p90rsk

Next, we tested the effect of ethanol on the expression of NHE1, which is stimulated by the ERK-p90rsk signaling pathway. To determine the effect of ethanol on the expression of NHE1, the cells were treated with 10~50 mM ethanol. Western blot analysis of lysates from HepG2 cells treated with 10~30 mM ethanol showed upregulation of NHE1 expression compared with lysates from naïve cells (Fig. 6A, 6B). (Fig. 6 legend: A, identical amounts of lysate proteins were subjected to 7.5% SDS-PAGE and immunoblotted (IB) with anti-NHE1 antibody; β-actin content within lysates is shown as a loading control. B, the graph represents fold expression of NHE1 relative to β-actin averaged from four independent experiments. Data are expressed as means ± S.E. of four experiments; Student's t-test, *p<0.05 vs. control.) To further explore the mechanism by which ethanol influences NHE1 expression, HepG2 cells were exposed to ethanol (20 mM) with or without SL0101 (5, 10, 15 μM). As shown in Fig. 7, inhibition of p90rsk by SL0101 treatment diminished the ethanol-induced upregulation of NHE1 expression, suggesting that activation of NHE1 in ethanol-stimulated HepG2 cells is mediated by p90rsk activation.
Ethanol-induced increase in expression of Bcl-2 in HepG2 cells is mediated by the p90rsk-NHE1 signaling pathway

To examine whether inhibition of p90rsk or NHE1 downregulates Bcl-2 expression in HepG2 cells, cells were treated with Cariporide (10 μM) or SL0101 (15 μM) and exposed to 20 mM ethanol, and then Bcl-2 mRNA expression was measured by qRT-PCR (Fig. 8). The expression of Bcl-2 was upregulated by treatment with 20 mM ethanol, and this effect was diminished by treatment with SL0101 or Cariporide.

DISCUSSION

It has previously been shown that 10~40 mM ethanol treatment increases ERK activity, resulting in growth of a series of human hepatocellular carcinoma cells, while showing no effect in normal liver cells [35]. The detailed mechanism downstream of ERK is not well understood. p90rsk, a well-known downstream substrate of ERK and an important regulator of apoptosis, is reported to be associated with cancer progression in various types of cells [13,37]. In this study, we investigated the role of p90rsk and its downstream substrates in mediating ethanol-induced cell proliferation.

Ethanol induces cell proliferation of HepG2 cells through the ERK-p90rsk pathway

In the present study, ethanol, which is widely known to be hepatotoxic, exhibited a stimulatory effect on the growth of HepG2 cells. This stimulatory effect of ethanol on carcinoma cell proliferation is related to its ability to activate the ERK-p90rsk pathway, suggesting an increase in cell cycle progression and a decrease in apoptosis. The results showed that 24 hr of ethanol treatment (10~30 mM) increased cell viability, with the maximal increase occurring in the group treated with 20 mM ethanol, which is consistent with the results of ERK1 and p90rsk activation confirmed by Western blot analysis and ELISA, respectively. Ethanol treatment (10~30 mM) enhanced the activities of ERK1 and p90rsk, with maximal effects at 20 mM for both.
Moreover, the elevated level of phosphorylated p90rsk was diminished by treatment with the ERK inhibitor PD98059, demonstrating that ethanol increases p90rsk activity via the ERK pathway.

Ethanol activates NHE1 and anti-apoptotic Bcl-2 through the p90rsk pathway

Herein, SL0101, which inhibits p90rsk specifically, was used to further explore the role of p90rsk in ethanol-induced cell proliferation of HepG2 cells. Inhibition of p90rsk has been studied and has shown efficacy in many types of cells. For example, inhibition of p90rsk was effective against radiation-induced cell proliferation of human breast carcinoma cells [38]. In this study, inhibition of p90rsk activity diminished the ethanol-induced increase in cell viability dose-dependently, implicating p90rsk activation in ethanol-induced hepatocellular carcinoma cell proliferation. Among the downstream substrates of p90rsk is NHE1, which can increase cellular pH by extruding intracellular H⁺ [15]. Previous studies showed that activation of NHE1 leads to increases in intracellular pH and that the anti-apoptotic effect of the Bcl-2 family is dependent on its ability to alkalinize the cell [39]. Besides, NHE1-dependent intracellular alkalinization is reported to be critical in malignant transformation [40]. Because activation of p90rsk is also responsible for regulation of both the anti-apoptotic Bcl-2 family and NHE1 [29,30], we examined whether ethanol can induce increases in NHE1 and Bcl-2 expression, and whether this ethanol-induced upregulation of Bcl-2 is mediated by the ERK-p90rsk pathway and stimulation of NHE1. When treated with ethanol (10~30 mM), the expression of NHE1 was increased, with the maximal increase at 20 mM, consistent with the previous results for ERK1 and p90rsk activation. Furthermore, this ethanol-induced elevation in NHE1 expression was abrogated by treatment with SL0101 (5~15 μM), which selectively inhibits p90rsk activity. Bcl-2 is a well-known anti-apoptotic factor.
Upstream pathways that affect Bcl-2 expression include the ERK pathway as well as many other pathways [29,30]. Several recent studies have revealed a survival pathway leading to activation of anti-apoptotic Bcl-2 that involves the p90rsk pathway [41,42]. The studies presented here demonstrate that ethanol can induce increases in Bcl-2 expression and that these effects are mediated by p90rsk and NHE1 activation. Treatment with ethanol (20 mM) increased Bcl-2 expression up to 2-fold. Since Bcl-2 is a major anti-apoptotic factor, this suggests that 20 mM ethanol can inhibit apoptosis. Moreover, the ethanol-induced increase in Bcl-2 expression was abrogated by treatment with cariporide or SL0101. Inhibition of p90rsk with SL0101 diminished ethanol-induced Bcl-2 upregulation, demonstrating that ethanol induces anti-apoptotic Bcl-2 activation through the p90rsk pathway, while the results obtained with ethanol in the presence of cariporide implicate increased intracellular pH in the activation of anti-apoptotic Bcl-2, since NHE1 extrudes intracellular H⁺ and increases intracellular pH [39]. The present study suggests that the mechanism of alcohol-induced hepatocellular carcinoma progression may involve increases in ERK-p90rsk signaling and activation of NHE1, resulting in decreased apoptosis. There have been many therapeutic approaches targeting p90rsk, and many studies report elevated levels of p90rsk during stimulation of cell proliferation in many types of carcinoma cell lines [43,44]. Besides p90rsk, hyperactive NHE1 even at resting pH, and the resulting cellular alkalinization, have been reported to be directly related to uncontrolled proliferation in malignant cells [39]. Thus, p90rsk, which regulates cellular proliferation, as well as NHE1, may be important targets for therapy in ethanol-induced hepatocellular carcinoma progression.
5-Hydroxymethylcytosine and ten-eleven translocation dioxygenases in head and neck carcinoma

Ten-eleven translocation (TET) enzymes are implicated in DNA demethylation through dioxygenase activity, which converts 5-methylcytosine to 5-hydroxymethylcytosine (5-hmC). However, the specific roles of TET enzymes and 5-hmC levels in head and neck squamous cell carcinoma (HNSCC) have not yet been evaluated. In this study, we analyzed 5-hmC levels and TET mRNA expression in a well-characterized dataset of 117 matched pairs of HNSCC tissues and normal tissues. 5-hmC levels and TET mRNA expression were examined via enzyme-linked immunosorbent assay and quantitative real-time PCR, respectively. 5-hmC levels were evaluated according to various clinical characteristics and prognostic implications. Notably, we found that 5-hmC levels were significantly correlated with tumor stage (P = 0.032) and recurrence (P = 0.018). Univariate analysis revealed that low levels of 5-hmC were correlated with poor disease-free survival (DFS; log-rank test, P = 0.038). The expression of TET family genes was not associated with outcomes. In multivariate analysis, low levels of 5-hmC were evaluated as a significant independent prognostic factor of DFS (hazard ratio: 2.352, 95% confidence interval: 1.136-4.896; P = 0.021). Taken together, our findings showed that reduction of TET family gene expression and subsequent low levels of 5-hmC may affect the development of HNSCC.

Introduction

Head and neck squamous cell carcinomas (HNSCCs) are heterogeneous diseases that involve multiple sites and cellular origins within the upper aerodigestive tract [1]. Despite aggressive multimodal treatment, survival for patients with HNSCC remains poor. Nevertheless, some patients survive much longer than expected [2].
Therefore, identification of prognostic biomarkers as clinical or biological characteristics that provide information on the likely health outcomes of patients, irrespective of treatment, is essential [3,4]. In HNSCC, epigenetic inactivation of tumor-suppressor genes (TSGs) is more frequent than somatic mutation and may drive tumorigenic and progression potential [5][6][7]. Aberrant gene promoter methylation is a key event in cancer development and has attracted increasing interest in basic and translational oncology studies because of the induction of reversible chemical modifications [8,9]. Enzymes of the ten-eleven translocation (TET) family catalyze the stepwise oxidation of 5-methylcytosine (5-mC) in DNA to 5-hydroxymethylcytosine (5-hmC) and further oxidation products, not only generating new epigenetic marks but also initiating active or passive demethylation pathways [10]. Although tissue- and cell type-specific variations occur, it has been estimated that approximately 5% of all cytosines in the genome of mammalian cells are marked as 5-mC, and less than 1% are marked as 5-hmC. Moreover, 5-formylcytosine (5-fC) and 5-carboxylcytosine (5-caC) are 10-1000-fold less abundant than 5-hmC [11,12]. Accordingly, 5-fC and 5-caC may simply be short-lived intermediates in the active demethylation process, whereas 5-hmC may be an active epigenetic mark that is stably maintained [13]. TET family proteins can convert 5-mC to 5-hmC, which is widely accepted as the sixth base in the mammalian genome, following 5-mC, the fifth base [14,15]. The few clinical investigations that examined global DNA hydroxymethylation in relation to HNSCC have used genomic DNA from tumors. Missense and truncating mutations in TET genes are present in nearly all solid tumor types at a relatively low frequency [16].
In The Cancer Genome Atlas cohort of HNSCC, TET1, TET2, and TET3 mutations were identified in nine (1.8%), eight (1.6%), and eight (1.6%) of 510 patients, respectively [17]. Our report indicated that TET mRNA is downregulated in HNSCC owing to DNA methylation; this may be a critical event in HNSCC progression. In particular, TET3 methylation confers HNSCC with unique clinicopathological features [18]. Recent studies have shown that aberrant levels of TET genes and 5-hmC are associated with tumorigenesis in different types of cancers [19]. In a number of cancers, 5-hmC has been shown to be markedly decreased and associated with tumorigenesis, progression, and outcomes [20]. Simultaneous analyses of 5-hmC and TET genes are important for predicting tumorigenesis and biological behaviors and for the development of future targeted therapies for HNSCC. However, systematic studies of the epigenetic and transcriptional regulation of 5-hmC and TET genes in HNSCC are still needed. Accordingly, in this study, we compared 5-hmC profiles between normal mucosa and HNSCC tissues and characterized the associations between 5-hmC and HNSCC tumorigenesis, progression, and outcomes.

Tumor samples

In total, 117 primary HNSCC samples were obtained from patients during surgery at the Department of Otolaryngology, Hamamatsu University School of Medicine. All patients provided written informed consent, and the study protocol was approved by the Institutional Review Board of the Hamamatsu University School of Medicine (date of board approval: 2 October 2015, ethics code: 25-149). Clinical information, including age, sex, alcohol exposure, smoking status, tumor size, human papilloma virus (HPV) status, lymph node status, stage, and recurrence, was obtained from the patients' clinical records.
DNA extraction and ELISA for 5-hmC quantification

Genomic DNA from the 117 primary tumors and noncancerous mucosa was extracted using a QIAamp DNA Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. The 5-hmC content of genomic DNA was determined with a Quest 5-hmC DNA ELISA Kit (Zymo Research, Irvine, CA, USA), according to the manufacturer's instructions. Assays were performed using 4 μg/mL anti-5-hmC polyclonal antibodies, loading 200 ng of DNA per well. Absorbance at 430 nm was evaluated using a SynergyH1 microplate reader and Gen5 software (BioTek, Winooski, VT, USA). The amount of 5-hmC was calculated as a percentage based on a standard curve generated using kit controls.

RNA extraction and qRT-PCR

Total RNA was isolated using an RNeasy Plus Mini Kit (Qiagen), and cDNA was synthesized using a ReverTra Ace qPCR RT Kit (Toyobo, Tokyo, Japan). The mRNA levels of TET1, TET2, TET3, and glyceraldehyde 3-phosphate dehydrogenase (GAPDH) were measured via qRT-PCR using SYBR Premix Ex Taq (Takara, Tokyo, Japan) and a Takara Thermal Cycler Dice Real Time System TP8000 (Takara). The data were analyzed using the ΔΔCt method. Primer sequences were as follows: TET1 GAPDH R, TGGTGAAGACGCCAGTCTCTA.

Data analysis and statistics

The 5-hmC and TET mRNA levels in tumors and normal mucosa and patient characteristics were analyzed statistically. Receiver-operator characteristic (ROC) curve analyses were performed on 5-hmC and TET mRNA levels for all patients to compare tumor and normal tissues. DFS was measured from the date of the initial treatment to the date of diagnosis of recurrence. Kaplan-Meier tests were used to calculate survival probabilities, and log-rank tests were used to compare survival rates. The prognostic value was assessed by performing multivariate Cox proportional hazards analysis adjusting for age (≥ 65 versus < 65 years), sex, smoking status, alcohol intake, and tumor stage (I, II, and III versus IV).
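The ΔΔCt normalization named above (target gene normalized to GAPDH, then to the control sample, with fold change 2^(−ΔΔCt)) can be sketched in a few lines. All Ct values here are hypothetical, not measurements from this study:

```python
def delta_delta_ct(ct_target_treated, ct_ref_treated,
                   ct_target_control, ct_ref_control):
    """Relative expression (fold change) by the 2^(-ddCt) method.

    The reference gene corresponds to GAPDH in the protocol above;
    all Ct values passed in are illustrative.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # relative to control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target crosses threshold one cycle earlier in the
# treated sample, i.e. a 2-fold increase in expression
print(delta_delta_ct(24.0, 18.0, 25.0, 18.0))  # → 2.0
```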
A p-value less than 0.05 was considered statistically significant. Statistical analyses were performed using StatMate IV software (ATMS Co. Ltd., Tokyo, Japan) and the Stata/SE 13.0 system (Stata Corporation, TX, USA).

Determination of 5-hmC levels by ELISA in HNSCCs and matched normal mucosa

First, we examined the 5-hmC content of DNA in 117 matched pairs of HNSCC and matched normal mucosa using ELISA. Cancer tissues had significantly lower levels of 5-hmC (0.373% ± 0.087%) than matched normal mucosa (0.406% ± 0.090%; P < 0.001 by paired t-tests). Notably, the 5-hmC levels exhibited highly discriminative ROC curve profiles, which clearly distinguished HNSCC from normal mucosal tissues (area under the ROC [AUROC] = 0.612). At the cutoff value of 0.407, the sensitivity and specificity were 57.3% and 64.1%, respectively (Figure 1A, 1B). (Figure 1 legend: at the cutoff value of 1.866, the sensitivity was 50.4% and the specificity was 70.9%; the significance of differences between cancerous and normal mucosal tissues was determined by Student's t-tests, **P < 0.001.)

TET expression in HNSCCs and matched normal mucosa

Quantitative reverse transcription polymerase chain reaction (qRT-PCR) was performed to examine mRNA expression of TET1, TET2, and TET3 in 117 matched pairs of HNSCC and normal mucosa. There were no significant differences in TET1, TET2, and TET3 mRNA levels between cancerous and normal tissues (Figure 1C, 1E, 1G). TET1, TET2, and TET3 mRNA levels showed discriminative ROC curve profiles, which distinguished HNSCC from normal mucosal tissues (AUROC = 0.583, 0.536, and 0.598, respectively). Tumor samples were classified as positive when mRNA expression levels exceeded 0.580, 0.102, and 1.866 for TET1, TET2, and TET3, respectively. The cutoff mRNA expression level was chosen from the ROC curve to maximize sensitivity and specificity (Figure 1D, 1F, 1H).
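Choosing a cutoff from the ROC curve "to maximize sensitivity and specificity", as described above, is commonly done via Youden's J statistic (sensitivity + specificity − 1); the paper does not name its exact criterion, so that choice, like the marker values below, is an assumption for illustration:

```python
def best_cutoff(values, labels):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1.

    `values` are marker levels (e.g. 5-hmC %); `labels` use 1 = normal,
    0 = tumor, assuming higher values indicate normal tissue.
    All data are hypothetical.
    """
    best = (None, -1.0, 0.0, 0.0)
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cut)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cut)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cut)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cut)
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        j = sens + spec - 1
        if j > best[1]:
            best = (cut, j, sens, spec)
    return best

# Hypothetical, perfectly separable 5-hmC levels for 3 tumors and 3 normals
vals   = [0.30, 0.35, 0.38, 0.41, 0.42, 0.45]
labels = [0,    0,    0,    1,    1,    1]
cut, j, sens, spec = best_cutoff(vals, labels)
print(cut, sens, spec)  # → 0.41 1.0 1.0
```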
To identify factors affecting 5-hmC levels in HNSCC, we compared 5-hmC levels according to the number of highly expressed TET genes and the tumor sites of HNSCC. One or more TET high-expression events were associated with a significant increase in 5-hmC levels compared with no TET high-expression events (P < 0.05). The 5-hmC level showed the greatest increase when all three TET genes showed high expression (P < 0.001; Figure 3A). Moreover, in a comparison of 5-hmC levels across tumor sites, 5-hmC levels were significantly higher in patients with oropharyngeal cancer than in patients with larynx (P = 0.019), oral cavity (P = 0.029), and hypopharynx (P = 0.009) cancer (Figure 3B).

Association of 5-hmC levels and TET expression with clinicopathological assessment

Among the 117 patients, differences in 5-hmC levels and TET1, TET2, and TET3 mRNA expression statuses according to clinical information were examined using Chi-squared tests (Table 1). The characteristics of the patients with HNSCC are shown in Table S1. We found that 5-hmC levels were associated with clinical stage (P = 0.032) and recurrence (P = 0.018). Other clinical information, including age, sex, alcohol exposure, smoking status, tumor size, HPV status, and lymph node status, was not related to 5-hmC levels. Smoking habit was associated with mRNA expression of TET1 (P = 0.031) and TET2 (P = 0.040). Other clinical information was not related to TET1, TET2, or TET3 mRNA expression (Table 1). Comparisons of TET1, TET2, and TET3 mRNA expression in patients with laryngeal, oral, hypopharyngeal, and oropharyngeal cancer are shown in Figure S1.

5-hmC levels and TET expression in HNSCC and the relationship with patient survival

Next, we examined the relationship between DFS in patients with HNSCC and 5-hmC levels/TET expression using Kaplan-Meier plots (Figure 4).
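The Kaplan-Meier survival probabilities used for the DFS analysis above come from the product-limit estimator, which multiplies the conditional survival fractions at each event time. A minimal sketch on hypothetical follow-up data, not the study cohort:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimates.

    `events[i]` = 1 if the event (e.g. recurrence) occurred at times[i],
    0 if the subject was censored. All data here are hypothetical.
    """
    pairs = sorted(zip(times, events))
    s, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        at_risk = sum(1 for tt, _ in pairs if tt >= t)
        if deaths:
            s *= 1 - deaths / at_risk   # conditional survival past time t
            curve.append((t, s))
    return curve

# Hypothetical DFS times (months): events at 6 and 12, censoring at 10 and 15
print(kaplan_meier([6, 10, 12, 15], [1, 0, 1, 0]))
# → [(6, 0.75), (12, 0.375)]
```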
Shorter DFS times were observed in patients with low 5-hmC levels compared with those with high 5-hmC levels (log-rank test, P = 0.038; Figure 4A). There were no differences in DFS between the high and low expression groups for TET1 (78 versus 39, P = 0.955), TET2 (97 versus 20, P = 0.479), or TET3 (59 versus 58, P = 0.887) among the 117 patients enrolled in this study (Figure 4B-D). (Figure 3 legend: comparison of 5-hmC levels and the number of TET high-expression events or the anatomical location of 117 HNSCCs. (A) Relationship between the number of TET high-expression events and 5-hmC levels: 0, all TET genes low expression; 1, one TET gene high expression; 2, two TET genes high expression; 3, all TET genes high expression. (B) Relationship between the anatomical location of the tumor and 5-hmC levels. The significance of relationships between 5-hmC levels and other factors was compared using Student's t-tests. *P < 0.05; **P < 0.01; ***P < 0.001.)

Discussion

This is the first study examining 5-hmC and TET family gene levels in HNSCC. DNA methylation regulates epigenetic gene inactivation; however, the factors affecting DNA demethylation are still poorly understood in HNSCC. Recently, we showed that concurrent methylation analysis of TET genes was related to reduced DFS in unfavorable event groups [18]. Our current study found that aberrant expression of TET genes and altered levels of 5-hmC were associated with tumorigenesis and that lower 5-hmC levels were correlated with reduced survival. Loss of 5-hmC is associated with decreased expression of TET1 and TET2 in small intestinal neuroendocrine tumors [21]. Moreover, 5-hmC levels are significantly reduced in prostate cancer compared with normal prostate tissue samples [22]. In esophageal cancer tissues, 5-hmC expression is associated with shorter overall survival and TET2 expression levels [23].
TET proteins catalyze DNA CpG demethylation by converting 5-mC to 5-hmC, maintaining a delicate balance between CpG methylation and demethylation in normal cells [24]. Notably, promoter CpG methylation-mediated silencing of the TET1 gene further increases 5-mC levels in tumor cells, forming a DNA methylation feedback loop mediated by DNMT and TET1 [25]. 5-hmC is not simply an activating epigenetic mark but is considered an intermediate in the active demethylation pathway and appears to play complex roles in gene regulation [11,12]. 5-hmC levels of protein-coding genes are positively correlated with RNA expression intensity [26]. A recently suggested pathway for active DNA demethylation in the early mouse embryo involves the conversion of 5-mC to 5-hmC mediated by TET3, which is expressed at high levels in oocytes and zygotes [27,28]. Future studies are needed to confirm the associations between 5-hmC and carcinogenesis and to examine potential mechanisms through which 5-hmC loss affects tumor growth. Bisulfite treatment, the gold-standard technology for detection of DNA methylation, converts unmethylated cytosine into uracil, which is read as thymine after PCR amplification, whereas both 5-mC and 5-hmC are read as cytosine; it therefore cannot distinguish between 5-mC and 5-hmC [29]. Therefore, quantitative analysis of the genome-wide distribution of these epigenetic marks has been considered for clinical applications [30]. Immuno-based assays, including dot blots, immunohistochemical assays, and ELISA, have been widely used as quantitative methods for 5-hmC analysis because of their analytical merits [31]. Several approaches for 5-hmC mapping have been developed in recent years. Cell-free 5-hmC may represent a new approach for liquid biopsy-based diagnosis and prognosis [32,33].
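The bisulfite limitation described above (unmethylated C reads as T after PCR, while 5-mC and 5-hmC both read as C) can be illustrated with a toy simulation; the sequence and methylation marks are hypothetical, chosen only to show that the two modified marks are indistinguishable in the readout:

```python
def bisulfite_read(seq, marks):
    """Simulate base calls after bisulfite conversion and PCR.

    `marks[i]` is 'C' for an unmethylated cytosine, '5mC' or '5hmC' for a
    modified one, and '-' for non-cytosine positions. Unmethylated C is
    converted to U and read as T; 5-mC and 5-hmC are protected and read as C,
    which is why the assay cannot tell the two marks apart.
    """
    out = []
    for base, mark in zip(seq, marks):
        if base == "C" and mark == "C":
            out.append("T")   # converted: read as thymine
        else:
            out.append(base)  # protected cytosine or non-C base: unchanged
    return "".join(out)

# The 5-mC and 5-hmC versions of the same sequence yield identical reads:
print(bisulfite_read("ACGCG", ["-", "5mC", "-", "C", "-"]))   # → ACGTG
print(bisulfite_read("ACGCG", ["-", "5hmC", "-", "C", "-"]))  # → ACGTG
```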
The 5-hmC profiles of cell-free DNA have been detected in patients with cancer, and 5-hmC gains in both gene bodies and promoter regions have been evaluated in patients with cancer and healthy controls [34]. Further studies of the loss of 5-hmC upon transformation of tissues may offer useful tools for dissecting 5-hmC biology in cancers. In summary, we demonstrated for the first time that 5-hmC levels are abnormally reduced in patients with HNSCC; this may be a critical event in HNSCC progression. Interestingly, the 5-hmC profiles of primary tumors may be used to identify patients with positive lymph node metastasis and high tumor stage who are at a higher risk of recurrence. Further studies are needed to examine in more detail the differences in global demethylation patterns between 5-hmC-low and -high tumors and their effects on the onset and progression of HNSCC.

Misawa performed the data analysis and discussed the results. All authors read and approved the final manuscript.
Adulteration Detection of Pork in Mutton Using Smart Phone with the CBAM-Invert-ResNet and Multiple Parts Feature Fusion

To achieve accurate detection of the content of pork from multiple parts adulterated in mutton under the effect of mutton flavor essence and colorant using RGB images, an improved CBAM-Invert-ResNet50 network based on the attention mechanism and the inverted residual was used to detect the content of pork from the back, front leg, and hind leg in adulterated mutton. The deep features of different parts extracted by the CBAM-Invert-ResNet50 were fused by feature stitching and combined with transfer learning, and the content of pork from mixed parts in adulterated mutton was detected. The results showed that the R² values of the CBAM-Invert-ResNet50 for the back, front leg, and hind leg datasets were 0.9373, 0.8876, and 0.9055, respectively, and the RMSE values were 0.0268 g·g⁻¹, 0.0378 g·g⁻¹, and 0.0316 g·g⁻¹, respectively. The R² and RMSE for the mixed dataset were 0.9264 and 0.0290 g·g⁻¹, respectively. When the features of different parts were fused, the R² and RMSE of the CBAM-Invert-ResNet50 for the mixed dataset were 0.9589 and 0.0220 g·g⁻¹, respectively. Compared with the model built before feature fusion, the R² for the mixed dataset increased by 0.0325, and the RMSE decreased by 0.0070 g·g⁻¹. These results indicate that the CBAM-Invert-ResNet50 model can effectively detect the content of pork from different parts used as an adulterant in mutton. Feature fusion combined with transfer learning can effectively improve the detection accuracy for the content of mixed parts of pork in adulterated mutton. The results of this study can provide technical support and a basis for maintaining mutton market order and protecting mutton food safety supervision.
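The two evaluation metrics quoted throughout the abstract, R² and RMSE, have standard definitions that can be computed as below; the mass-fraction values in the example are hypothetical, not the study's data:

```python
def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R²) and root-mean-square error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5

# Hypothetical true vs. predicted pork mass fractions (g/g)
y_true = [0.0, 0.2, 0.4, 0.6, 0.8]
y_pred = [0.1, 0.2, 0.4, 0.6, 0.7]
r2, rmse = r2_rmse(y_true, y_pred)
print(round(r2, 3), round(rmse, 4))  # → 0.95 0.0632
```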
Introduction
Mutton is popular because of its rich protein content, low cholesterol and fat content, unique flavor, and delicate taste [1]. The price of mutton has been rising in recent years. Under the temptation of huge economic benefits, some illegal traders take the risk of mixing low-value meat, such as pork, into mutton for the sale of adulterated products [2]. At the same time, illegal traders add food additives, such as mutton flavor essence and colorant, to the adulterated mutton to further pass the fake product off as real. This not only seriously infringes on the economic interests of consumers and destroys the market order but also poses a threat to the health of consumers and causes food safety problems. Therefore, it is urgent to seek a rapid and accurate method for the detection of adulterated pork in mutton under the action of mutton flavor essence and colorant.

At present, the detection methods for meat or food adulteration mainly include sensory tests, chromatographic analysis [3], immunoassay [4], DNA analysis [5], intelligent sensing technology [6,7], optical-colorimetric methods [8], and modern optical rapid detection technology. With the increasing sophistication of adulteration, sensory analysis alone can no longer meet current detection demands. The methods of chromatography, immunoassay, and DNA analysis require expensive instruments and complex pretreatment, so it is increasingly difficult for them to meet the requirements of rapid and accurate detection. With the development of artificial intelligence, modern optical rapid detection methods have developed rapidly. Among them, with the popularity of smartphones and the great improvement in computing power, mobile phone camera technology has made rapid progress. Smartphones have the advantages of convenience, speed, and high calculation accuracy, and they have been widely used in the field of food detection [9,10]. In recent years, there have also been some
studies on the use of smartphone image technology to classify meat parts [11], detect adulteration [12,13], and perform other functions. Previous studies have shown that there are certain differences among different parts of meat. Images taken with a smartphone can be used to identify mutton parts and detect meat adulteration. However, there are few studies on detecting meat adulteration involving different and mixed parts with smartphone images. In addition, detecting the content of adulterated pork in mutton from smartphone RGB images under the effect of mutton flavor essence and colorant presents some challenges.

With the development of intelligence and information technology, deep learning has played an irreplaceable role in fields of artificial intelligence such as computer vision and natural language processing. As a typical representative of deep learning, a convolutional neural network (CNN) can effectively learn feature expressions from a large number of sample images and enhance the generalization ability of the model. It has the advantages of fast and accurate image processing and is currently widely used in the detection of agricultural products [14][15][16]. With the continuous expansion of computing requirements, the network layers of CNN models were continuously deepened to improve network performance. As a result, models began to suffer from problems such as gradient vanishing and network degradation. He et al.
[17] proposed the ResNet network, which uses a residual structure to effectively solve the above problems. Its superior performance has enabled it to achieve good results in many tasks, such as image classification [1,18,19], object detection [20,21], and so on. However, the ResNet network still has problems such as too many network parameters and a slow convergence speed. Studies have shown that the inverted residual structure in MobileNet can improve the convergence speed and reduce the model parameters by reducing the computational cost of high-dimensional spaces and the memory requirement, so as to realize a lightweight model structure [22,23]. Cui et al. [24] added the inverted residual structure of the MobileNet v2 network to the DenseNet network model and proposed an improved lightweight DenseNet network that effectively realizes surface defect detection for mobile phone screens. Xu et al. [25] introduced an inverted residual structure into YOLOv5 for gesture recognition, and the model size was reduced by 33 MB compared with that before the improvement. Although the inverted residual structure meets the requirements of a lightweight model, its ability to learn features with small differences is limited. There is little difference in the characteristics of mutton adulterated with different contents of pork under the influence of additives such as mutton flavor essence and colorant, and it is still difficult for existing models to accurately predict the content [26]. The convolutional block attention module (CBAM) [27] can effectively improve the accuracy of a model by using the spatial and channel features of images to redistribute the feature weights and strengthen the feature differences of the image. Du et al. [28] effectively classified the quality of plug seedlings using an improved CNN based on the attention mechanism. Zhang et al.
[29] improved the YOLOv4 model with the CBAM to realize sheep facial biometric recognition. The results were compared with other object detection models, and it was proved that the improved model had good recognition performance. Existing research has proved that adding the CBAM to deep learning models can effectively improve model performance. At present, there is no report on the use of the CBAM to improve the ResNet50 network for detecting the content of pork from different parts in adulterated mutton. Moreover, most of the adulterated mutton on the market is mixed with pork from multiple parts. Previous studies have shown that there are some differences among different parts of the meat [11,19]. A detection model established using a single part makes it difficult to detect the content of pork from mixed parts in adulterated mutton. Feature fusion can comprehensively utilize the image features of multiple parts and complement the advantages of multiple features [30]. It is helpful for establishing a more accurate adulteration detection model for mixed parts. Although models established using fused features realize the advantages of multiple features to a certain extent and meet the basic training needs when detecting the content of pork from mixed parts adulterated in mutton, the results of such models are often not accurate enough because of the difference between the fused features and the actual features. Transfer learning uses the "knowledge" learned from previous tasks, such as data characteristics and model parameters, to assist the learning process in the new domain and obtain its own model [31,32]. Therefore, when the mixed-parts model is established, the prior parameters of the models built on single parts are transferred by transfer learning, and the models are fine-tuned with the fused features [33]. On the basis of making full use of the fused features, the real features of the adulterated meat in each part can be further extracted. At present, there
are no reports on improving the ResNet50 network with the CBAM to detect the content of pork from different parts in adulterated mutton, nor on using feature fusion combined with transfer learning for the detection of mixed parts.

To sum up, to quickly and accurately detect the content of specific and mixed parts of pork in adulterated mutton under the effect of mutton flavor essence and colorant using smartphone RGB images, an improved CBAM-Invert-ResNet50 based on the attention mechanism and the inverted residual structure was used. The specific work of the current study is as follows: (1) Images of minced pork in different proportions (10, 20, 30, and 40%) from three parts (back, front leg, and hind leg) in adulterated mutton under mutton flavor essence and colorant were collected with a smartphone. (2) The effect of the improved network model on the feature extraction of different amounts of pork in adulterated mutton was analyzed by feature visualization. (3) Detection models for the content of pork from different parts adulterated in mutton were established using the improved network and compared with conventional network models. (4) The features of different parts were fused by feature stitching and combined with transfer learning to detect the content of pork from mixed parts in adulterated mutton. The results provide strong evidence for market regulators to crack down on the adulteration of mutton. At the same time, our study also provides a certain theoretical basis and technical support for the quantitative detection of ingredient content in agricultural and livestock products using images combined with deep learning.
Sample Preparation
Fresh mutton from the hind leg and fresh pork from different parts (back, front leg, and hind leg) were selected to make the adulterated mutton samples in this study. Mutton flavor essence and colorant were used to further disguise the adulterated mutton samples and bring them closer to reality. The mutton flavor essence was purchased from Qingdao Xianghaisheng Food Ingredients Co., Ltd. (Qingdao, China), and the Monascus red colorant was purchased from Guangdong Kelong Biotech Co., Ltd. (Jiangmen, China). The fresh hind leg mutton and the different parts of pork were purchased from the Youhao supermarket of Shihezi City in Xinjiang, and all of them met the quarantine standards. The meat was transported to the laboratory in an incubator. Adulterated mutton samples were prepared according to the following procedure. First, the obvious fascia and tissue on the surface of the meat were removed, and the meat was ground into minced meat particles of 3 to 5 mm. After being marked and sealed with plastic wrap, the meat was stored in a refrigerator at −5 °C for subsequent use. The solvents of mutton flavor essence and colorant were prepared according to the food safety code. The mutton flavor essence solvent with a mass concentration of 0.05 g/mL was obtained by dissolving mutton flavor essence in distilled water at a dosage of 3 g per kilogram of pork and stirring for 5 min. The 0.001 g/mL solvent of the Monascus red colorant was obtained by dissolving the Monascus red colorant in distilled water at a dosage of 0.5 g per kilogram of pork. Then, the two solvents were mixed at a ratio of 1:1 and stirred for 10 min. The minced pork from different parts was soaked in the mixed solvent for 20 min, and the residual liquid on the surface was removed after the solvent had fully soaked into the minced pork. Finally, the different parts of minced pork mixed with mutton flavor essence and colorant were mixed into minced mutton at different ratios (10, 20, 30, and 40%) to make
adulterated mutton samples. Each sample was obtained from about 30 g of fully mixed minced meat, which was placed in a petri dish with a diameter of 6 cm and compacted to obtain a smooth surface. Eight samples were prepared for each part and each ratio of pork-adulterated mutton. A total of 96 (8 × 4 × 3 = 96) samples were prepared from three parts with four ratios per part. The prepared samples were stored in a refrigerator at −5 °C for image data acquisition. The prepared samples are shown in Figure 1.

Sample Image Acquisition
The mobile phone used for sample image acquisition was a Huawei P40, and the camera model was ANA-AN00. Images were acquired with a camera sensitivity of 500, an aperture of f/1.9, an exposure time of 1/100 s, a focal length of 7 mm, a color temperature of 4500 K, an image resolution of 6144 × 8192 pixels, and an image acquisition height of 18 cm. The ambient temperature of the laboratory was 26 ± 1 °C, and the relative humidity was 30 ± 5%. A schematic diagram of the sample image acquisition device is shown in Figure 2. There was a constant light source at the top of the dark box, and the mobile phone was fixed with a tripod. After adjusting the acquisition height of the mobile phone and the camera parameters, the images were collected. One image was collected for each sample, for a total of 96 images.
Figure 2. Schematic diagram of the mobile phone image data acquisition system.

Image Preprocessing
In order to reduce the interference of the image background, the HoughCircles detection algorithm was used to extract the region of interest (ROI) of each sample. In the training of deep learning models, results are often not ideal when the amount of data is small; to learn enough features, deep learning models need a large amount of input data. The sample images were therefore expanded by randomly rotating and mirroring the original images in this study. The process of random rotation was as follows: the rotation threshold was set to 0.3, and two random seeds generated random numbers between 0 and 1. When the random number generated by the No. 1 random seed was greater than 0.3, the image was rotated about its center by 360° times the random number generated by the No. 2 random seed. In addition, the brightness of the image was randomly increased and decreased to exclude the influence of different illumination intensities on the image; the process was similar to the random rotation. The preprocessed images are shown in Figure 3.
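The augmentation procedure described above (threshold-gated random rotation, mirroring, and brightness jitter) can be sketched as follows. The nearest-neighbor rotation, the 0.5 mirroring probability, and the 0.8–1.2 brightness factor range are illustrative assumptions, since the paper does not specify these details:

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate an image about its center using nearest-neighbor sampling."""
    h, w = img.shape[:2]
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source pixel.
    xr = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    yr = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    xr = np.clip(np.round(xr).astype(int), 0, w - 1)
    yr = np.clip(np.round(yr).astype(int), 0, h - 1)
    return img[yr, xr]

def augment(img, rng, rot_threshold=0.3):
    """Threshold-gated rotation, random mirroring, and brightness jitter."""
    r1, r2 = rng.random(2)
    if r1 > rot_threshold:                # No. 1 random number gates the rotation
        img = rotate_nn(img, 360.0 * r2)  # No. 2 random number sets the angle
    if rng.random() > 0.5:                # random horizontal mirroring (assumed p=0.5)
        img = img[:, ::-1]
    # brightness jitter; the 0.8-1.2 factor range is an assumption
    img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)
    return img

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
out = augment(image, rng)
print(out.shape)  # (64, 64, 3)
```

Repeatedly applying `augment` to each original image yields the expanded dataset while keeping pixel values in a valid range.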
Production of Datasets
Datasets of Pork from Different Parts in Adulterated Mutton
The data were divided into three datasets according to the part of pork adulterated in mutton: back, front leg, and hind leg. Each dataset contained four adulteration ratios: 10%, 20%, 30%, and 40%. First, 1/3 of the images were taken from each dataset as an independent validation set; a total of 700 images for the independent validation set were obtained by data augmentation. Then, the remaining 2/3 of the data in each set were divided into a training set and a test set at a 3:1 ratio. All images were expanded according to the methods in Section 2.2.2. The images from each dataset were expanded to obtain 1575 images for the training set and 525 images for the test set.
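The split described above — one third of each part's images held out as an independent validation set and the remainder divided 3:1 into training and test sets — can be sketched as follows (the shuffling and the use of index arrays are implementation assumptions):

```python
import numpy as np

def split_dataset(n_items, rng, val_frac=1 / 3, train_to_test=3):
    """Return index arrays (train, test, val) for one part's dataset."""
    idx = rng.permutation(n_items)
    n_val = round(n_items * val_frac)          # 1/3 independent validation set
    val, rest = idx[:n_val], idx[n_val:]
    n_test = len(rest) // (train_to_test + 1)  # remaining 2/3 split 3:1
    test, train = rest[:n_test], rest[n_test:]
    return train, test, val

rng = np.random.default_rng(42)
train, test, val = split_dataset(96, rng)  # e.g. 96 original sample images
print(len(train), len(test), len(val))     # 48 16 32
```

Augmentation is then applied within each subset, so that no augmented copy of a validation image leaks into the training set.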
Datasets of Pork from Mixed Parts in Adulterated Mutton
The datasets of pork from mixed parts in adulterated mutton contain all the data from the three parts. First, 1/3 of the data were taken from each part dataset as an independent validation set; a total of 2100 images for the independent validation set were obtained by data expansion. Then, the remaining 2/3 of the data were divided into a training set and a test set at a 3:1 ratio. A total of 4725 images for the training set and 1575 images for the test set were obtained.

Because of the large size of the expanded images, training the model would take a long time. To reduce the computational load and running time of the CNN, the expanded images of all datasets were compressed to 224 × 224 pixels.

Construction of the CBAM-Invert-ResNet50 Model
The ResNet network effectively solves the problems of gradient vanishing and network degradation in deep CNN models by using a residual structure. However, the ResNet network still has problems such as too many network parameters and a slow convergence speed, which is not conducive to porting to mobile terminals. Following the lightweight idea of MobileNet, the inverted residual structure was used to replace the original residual structure in the ResNet50 network in this study, which can improve the convergence speed of the model and reduce the model parameters. Existing studies have shown that attention mechanisms can make full use of the spatial and channel features of images [27,34]. They strengthen the feature differences of the images and effectively improve the accuracy of the model through the adaptive allocation of feature weights. Under the effects of additives such as mutton flavor essence and colorant, the characteristics of adulterated mutton with different pork contents show little difference [26]. Therefore, the feature differences between adulterated mutton samples with different pork contents can be strengthened by adding a CBAM attention mechanism to the ResNet50 network. By
strengthening the weight allocation of important features, the detection efficiency of the model for adulteration content was improved. Based on the ResNet50 network combined with the CBAM attention mechanism, our research team previously proposed a lightweight inverted residual network, the CBAM-Invert-ResNet50 [35]. It was used to classify mutton, adulterated mutton, and pork. However, its feasibility for quantitative detection needs to be further verified. Therefore, this study aimed to explore the feasibility of using the CBAM-Invert-ResNet50 to detect the content of pork from different parts in adulterated mutton and to combine feature fusion and transfer learning to achieve an accurate prediction of the adulteration content for mixed parts.

The CBAM-Invert-ResNet network is mainly composed of the following parts: convolutional layers, pooling layers, normalization layers, inverted residual structures, CBAM modules, and fully connected layers. The structures of the CBAM-Invert-ResNet50 and the ResNet50 are shown in Figure 4. The CBAM-Invert-ResNet50 is obtained by replacing the residual structure in the ResNet50 network with the inverted residual structure and adding a CBAM module after each inverted residual structure.
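A minimal NumPy sketch of the building block described above: an inverted residual unit (1×1 expand, 3×3 depthwise, 1×1 project, identity skip) followed by a CBAM module (channel attention from pooled descriptors, then spatial attention from a 7×7 convolution over the channel-wise average and maximum maps). The random weights, single-image (H, W, C) layout, and plain ReLU in place of ReLU6 are simplifying assumptions; this illustrates the structure, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def inverted_residual(x, expand=4):
    """1x1 expand -> 3x3 depthwise -> 1x1 project, with an identity skip."""
    h, w, c = x.shape
    hid = c * expand
    w_expand = rng.normal(0, 0.1, (c, hid))
    w_depth = rng.normal(0, 0.1, (3, 3, hid))  # one 3x3 filter per channel
    w_project = rng.normal(0, 0.1, (hid, c))
    t = np.maximum(x @ w_expand, 0.0)          # ReLU stands in for ReLU6
    tp = np.pad(t, ((1, 1), (1, 1), (0, 0)))
    d = np.zeros_like(t)
    for i in range(3):
        for j in range(3):
            d += tp[i:i + h, j:j + w, :] * w_depth[i, j]
    return x + np.maximum(d, 0.0) @ w_project  # skip connection (stride 1)

def cbam(x, r=4):
    """Channel attention, then spatial attention (CBAM)."""
    h, w, c = x.shape
    w1 = rng.normal(0, 0.1, (c, max(c // r, 1)))
    w2 = rng.normal(0, 0.1, (max(c // r, 1), c))
    mlp = lambda v: np.maximum(v @ w1, 0.0) @ w2   # shared MLP
    ca = sigmoid(mlp(x.mean(axis=(0, 1))) + mlp(x.max(axis=(0, 1))))
    x = x * ca                                     # reweight channels
    f = np.concatenate([x.mean(-1, keepdims=True),
                        x.max(-1, keepdims=True)], axis=-1)  # (h, w, 2)
    wk = rng.normal(0, 0.1, (7, 7, 2))             # 7x7 conv kernel
    fp = np.pad(f, ((3, 3), (3, 3), (0, 0)))
    s = np.zeros((h, w))
    for i in range(7):
        for j in range(7):
            s += (fp[i:i + h, j:j + w, :] * wk[i, j]).sum(-1)
    return x * sigmoid(s)[..., None]               # reweight positions

x = rng.normal(size=(8, 8, 16))
y = cbam(inverted_residual(x))
print(y.shape)  # (8, 8, 16)
```

In the full network each residual stage of ResNet50 would be replaced by such an inverted residual unit with a CBAM module appended, as the text describes.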
Feature Fusion
In order to detect adulterated mutton containing multiple pork parts, the feature fusion method was used to stitch the features of different parts and construct the model for detecting the content of pork from mixed parts in adulterated mutton. The feature fusion method can comprehensively utilize a variety of image features and complement the advantages of multiple features to improve the accuracy and robustness of the model. According to the sequence of fusion and prediction, feature fusion is divided into early fusion and late fusion. Early fusion first fuses the features of multiple network layers and then uses the fused features for model training. Late fusion improves detection performance by combining the detection results of different layers: before the final fusion is completed, the model starts to perform detection on the partially fused layers, and the detection results of multiple layers are then fused. The mixed dataset contains the back, front leg, and hind leg datasets. Therefore, feature concatenation, an early fusion method, was selected to join the features of the three detection models of the back, front leg, and hind leg to improve the accuracy of the model. First, the back, front leg, and hind leg datasets were input into models 1, 2, and 3 for training, respectively. The features of the back, front leg, and hind leg datasets were then extracted using models 1, 2, and 3, respectively. Then the features extracted by the three models were stitched to obtain the fusion features. Finally, the fusion features were input into the feature fusion model for training. The feature fusion process is shown in Figure 5.

Transfer Learning
When detecting the content of pork from mixed parts in adulterated mutton, the results of the model are often not accurate because of the difference between the fused features and the actual features. Therefore, it is necessary to further extract the real features on the basis of making full use of the fused features. Transfer learning combined with fine-tuning was used to detect the adulteration content in the mixed parts in this study. Fine-tuning obtains data features or model parameters in both the original and new domains by freezing part of the convolutional layers of the pretrained model (usually the convolutional layers close to the input, because these layers retain a large amount of underlying information) and training the remaining convolutional layers and fully connected layers again. In this study, after the fusion features were fed into the pretrained model, the fusion features of the back, front leg, and hind leg datasets and the true features of the mixed dataset could be obtained by fine-tuning. The difference between the fused features and the actual features was eliminated by this method. Based on making
full use of the fused features, the model further extracted the true features of the mixed dataset to improve the accuracy and robustness of the model.

Performance Evaluation of the Model
The coefficient of determination (R²) and the root mean square error (RMSE) were used to evaluate the models. Their formulas are shown in (1) and (2):

R² = 1 − Σ(x_i − x_j)² / Σ(x_j − x̄)²  (1)

RMSE = √(Σ(x_i − x_j)² / n)  (2)

where x_i represents the predicted value, x_j represents the actual value, and x̄ represents the mean of the actual values. R² measures the correlation between the predicted value of the model and the actual value, and a larger R² indicates a stronger correlation between the two. The RMSE represents the deviation between the predicted value of the model and the actual value, and a smaller RMSE indicates a smaller prediction error of the model.

A boxplot is often used to reflect the distribution characteristics of the original data. It can also be used to compare the distribution characteristics of multiple groups of data. In this study, a boxplot was used to visually evaluate the stability of the model. In the boxplot, the data were divided into four equal parts after being arranged from large to small. The three quartiles were the first quartile (Q1), the second quartile (Q2), and the third quartile (Q3) in descending order. In the boxplot, the top and bottom edges of the box are the third quartile (Q3) and the first quartile (Q1) of the data, respectively. The entire box contains 50% of the data. The interquartile range (IQR) is given by (3):

IQR = Q3 − Q1  (3)

The upper and lower short horizontal lines represent the maximum and minimum data values excluding outliers, respectively, and are given by (4) and (5):

Upper limit = Q3 + 1.5 × IQR  (4)

Lower limit = Q1 − 1.5 × IQR  (5)

The variation range of the IQR in the boxplot represents the distribution of the predicted values of the model for the dataset. The smaller the value, the more concentrated the distribution of the predicted values, indicating better model stability.
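The evaluation metrics and boxplot quantities described above can be computed with small NumPy helpers; the linear-interpolation quantile method used by `np.percentile` is an assumption, since the paper does not state how the quartiles are calculated:

```python
import numpy as np

def r2_score(pred, actual):
    """Coefficient of determination between predictions and actual values."""
    pred = np.asarray(pred, float)
    actual = np.asarray(actual, float)
    return 1.0 - np.sum((pred - actual) ** 2) / np.sum((actual - actual.mean()) ** 2)

def rmse(pred, actual):
    """Root mean square error between predictions and actual values."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(actual)) ** 2)))

def box_stats(values):
    """Quartiles, IQR, and whisker limits, as in Eqs. (3)-(5)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1             # Eq. (3)
    upper = q3 + 1.5 * iqr    # Eq. (4)
    lower = q1 - 1.5 * iqr    # Eq. (5)
    return q1, q3, iqr, lower, upper

# Illustrative values only (four adulteration ratios), not the paper's results.
actual = np.array([0.10, 0.20, 0.30, 0.40])
pred = np.array([0.12, 0.19, 0.28, 0.41])
print(round(r2_score(pred, actual), 4), round(rmse(pred, actual), 4))  # 0.98 0.0158
```

Here the RMSE carries the same g·g−1 unit as the adulteration content itself.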
Model Test Environment
The hardware used in this study included an Intel® Core™ i7-10750H CPU @ 2.60 GHz processor, 16 GB of memory, and an NVIDIA GeForce RTX 2060 graphics card. The software included the Windows 10 (64-bit) operating system, the Python 3.8 programming language, the TensorFlow 2.3.0 deep learning framework, the CUDA 10.1.243 general-purpose computing architecture, and the cuDNN 7.4.1 GPU acceleration library.

Visualization and Comparison of Depth Features Extracted by Different Models
To explore the effect of the improved model based on the attention mechanism on the feature extraction of different pork contents from different parts in adulterated mutton, the ResNet50, Invert-ResNet50, and CBAM-Invert-ResNet50 models were used to extract features from the original sample images. The output features of the last layer of the three network models are visualized in Figure 6. In Figure 6, the columns represent the adulteration content; from left to right are images of adulterated mutton with 10%, 20%, 30%, and 40% pork. The original input image of the sample is presented in the first row. The second, third, and fourth rows are the output features extracted by the ResNet50, Invert-ResNet50, and CBAM-Invert-ResNet50 models, respectively. It can be concluded that, for the datasets of mutton adulterated with pork from the back, front leg, and hind leg, it is difficult to directly distinguish the differences in the raw images of mutton adulterated with different contents of pork. After processing with the ResNet50 and Invert-ResNet50 network models, the differences in the output features for the four proportions of adulterated mutton images are still small; their colors and shapes are visually similar. After processing with the CBAM-Invert-ResNet50 network model, the colors of the output features for the four proportions of adulterated mutton images in the visualization map show obvious differences. The main reason is that the CBAM attention mechanism can enlarge
the receptive field, create dependencies between different channels, and strengthen the weight allocation of more important features [27]. The above analysis shows that the addition of the CBAM to the model can strengthen the differences in the characteristics of mutton with different levels of pork adulteration, which is conducive to the rapid and accurate prediction of the content of pork from different parts in adulterated mutton under the effects of mutton flavor essence and colorant.

Lightweight Analysis of the Improved Model
In order to verify the effect of the inverted residual structure on the complexity of the adulteration detection model, the model size and the number of parameters were used to measure the lightweight degree of the model. The model sizes and numbers of parameters for the CBAM-Invert-ResNet50 model and the ResNet50, Invert-ResNet50, and CBAM-ResNet50 models are shown in Figure 7. To verify the feasibility of the CBAM-Invert-ResNet50 model for detecting the content of pork from different parts in adulterated mutton, models for different pork contents from the back, front leg, and hind leg mixed into mutton were established. The results are shown in Table 1. It can be seen from Table 1 that all three models using the CBAM-Invert-ResNet50 to predict the content of pork from the back, front leg, and hind leg in adulterated mutton performed well. The R² values were all greater than 0.88, and the RMSE values were all less than 0.038 g·g−1. Among them, the model for the back dataset performed best, followed by the hind leg dataset, and the prediction for the front leg dataset was the worst. The R² for the back dataset was 0.9373, and for the front leg dataset it was 0.8876, a difference of 0.0497. The results showed that RGB images combined with the CBAM-Invert-ResNet50 can detect the content of different parts of pork in adulterated mutton, but the pork part has a great influence on the adulteration detection model. This may be caused by differences in color, texture, and other aspects among the different
parts of the pork. Previous research results showed that different parts of mutton had certain differences in color, texture, and other aspects [11,19].

The Comparison of the Different Models

To verify the superiority of the improved model, the ResNet50, Invert-ResNet50, and CBAM-ResNet50 networks were used to establish prediction models for different pork contents from the back, front leg, and hind leg in adulterated mutton, and the model results were compared with those of the CBAM-Invert-ResNet50. In addition, the CBAM-Invert-ResNet50 model was compared with the most popular lightweight network, MobileNetV3, to verify its reliability. The validation set results of the five models for predicting the content of pork from the back, front leg, and hind leg in adulterated mutton are shown in Table 2.

Table 2. Comparisons of the different models with the three datasets (back, front leg, and hind leg) in the validation set.

As shown in Table 2, compared with the ResNet50 and Invert-ResNet50 network models, the CBAM-ResNet50 and CBAM-Invert-ResNet50 network models show large increases in R² and decreases in RMSE for the three datasets (back, front leg, and hind leg). Across the three datasets, the R² values of the CBAM-ResNet50 network model were 0.019, 0.1368, and 0.1125 higher than those of the ResNet50 network model, and the RMSE values were 0.0041 g·g⁻¹, 0.0155 g·g⁻¹, and 0.0147 g·g⁻¹ lower, respectively. The R² values of the CBAM-Invert-ResNet50 network model for the back, front leg, and hind leg datasets were 0.0378, 0.1247, and 0.0391 higher than those of the Invert-ResNet50 network model, respectively. The RMSE values of the CBAM-Invert-ResNet50 network model were 0.0065 g·g⁻¹, 0.0125 g·g⁻¹, and 0.0089 g·g⁻¹ lower than those of the Invert-ResNet50 network model, respectively. Compared with the CBAM-ResNet50, the R² values of the CBAM-Invert-ResNet50
network model for the back and front leg datasets increased by 0.0257 and 0.0102, respectively, and the RMSE values decreased by 0.0033 g·g⁻¹ and 0.0010 g·g⁻¹, respectively, but the results were slightly lower than those of the CBAM-ResNet50 for the hind leg dataset. The results showed that adding the CBAM attention mechanism to the ResNet50 and Invert-ResNet50 models could improve model performance. Our results were similar to those of Zhang et al. [29], who added the CBAM to the YOLOv4 model to enhance its feature extraction ability; when they identified sheep, the mAP@0.5 values of group1 and group2 were 91.58% and 90.61%, respectively. This was also proved by the study of Du et al. [28], who incorporated the CBAM into the EfficientNet-B7 model to classify plug seedling quality; the average accuracy achieved on the test set by the proposed model was 7.32% higher than before the improvement. Based on the results in Section 3.2, which showed that the inverted residual structure could make the model meet the lightweight requirements, the performance of the improved CBAM-Invert-ResNet50 model was ideal. In addition, compared with MobileNetV3, the R² values of the CBAM-Invert-ResNet50 network model for the back, front leg, and hind leg datasets increased by 0.0879, 0.0755, and 0.1657, respectively, and the RMSE values were reduced by 0.0132 g·g⁻¹, 0.0087 g·g⁻¹, and 0.0181 g·g⁻¹, respectively. The results indicated that the improved CBAM-Invert-ResNet50 model was reliable for predicting the content of pork from the back, front leg, and hind leg in adulterated mutton.
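All of these comparisons rest on the two evaluation metrics used throughout the paper, R² and RMSE. As a minimal sketch of how they are computed (pure Python; the adulteration contents and predictions below are hypothetical illustration values, not the study's data):

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error, in the same unit as the targets (g/g here)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical adulteration contents (g/g) and model predictions.
actual = [0.10, 0.20, 0.30, 0.40]
predicted = [0.12, 0.19, 0.33, 0.38]
print(round(r_squared(actual, predicted), 4))  # -> 0.964
print(round(rmse(actual, predicted), 4))       # -> 0.0212
```

A higher R² and a lower RMSE together indicate better agreement between predicted and true contents, which is exactly how the tables in this section rank the models.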
Stability Evaluation of the Models

Boxplots were used to visually evaluate the performance of each model in predicting the content of pork from different parts in adulterated mutton. Figure 8 shows the boxplots of the adulteration content predictions of three models (CBAM-Invert-ResNet50, ResNet50, and MobileNetV3) for the three datasets (back, front leg, and hind leg) of pork-adulterated mutton. Figure 8 shows that the predicted values of the CBAM-Invert-ResNet50, ResNet50, and MobileNetV3 are relatively concentrated on the back dataset, and the differences among the three boxplots are small. Among them, the CBAM-Invert-ResNet50 box is more concentrated than the other two. The boxplots of CBAM-Invert-ResNet50 and MobileNetV3 show little difference on the front leg dataset. In the boxplots of MobileNetV3, the IQR of the predicted values at an adulteration content of 40% is small and the data are relatively concentrated; however, at adulteration contents of 20% and 30%, the IQR of the predicted values is too large and the data are scattered. For the hind leg dataset, at an adulterant content of 10%, the IQR of the predicted values of all three models is small, which proves that the three models have a better prediction effect on the hind leg dataset. Among them, the IQR of the CBAM-Invert-ResNet50 is the smallest, which proves that the CBAM-Invert-ResNet50 has the best prediction effect. In addition, at adulteration contents of 20%, 30%, and 40%, the CBAM-Invert-ResNet50 obviously performed better than the MobileNetV3 and ResNet50 network models. The above results show that the CBAM-Invert-ResNet50 model had the best stability and significantly better prediction results than ResNet50 and MobileNetV3 on the back, front leg, and hind leg datasets.
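The boxplot analysis above turns on the interquartile range (IQR) of the predicted contents: the smaller the IQR, the more concentrated, and hence more stable, the predictions. A minimal sketch using the Python standard library (the prediction lists are hypothetical, and `statistics.quantiles` uses the exclusive quartile method by default, which may differ slightly from a given plotting library's boxplot convention):

```python
import statistics

def iqr(values):
    """Interquartile range: Q3 - Q1, the height of the box in a boxplot."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return q3 - q1

# Hypothetical predictions for samples with a true adulteration content of 0.20 g/g.
preds_stable = [0.18, 0.19, 0.20, 0.21, 0.22]     # concentrated -> small IQR
preds_scattered = [0.10, 0.15, 0.20, 0.26, 0.33]  # scattered -> large IQR
print(iqr(preds_stable) < iqr(preds_scattered))   # -> True: the stable model wins
```

Comparing IQRs per adulteration level, as done for Figure 8, is what justifies calling one model's predictions "more concentrated" than another's.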
Table 2 shows that the CBAM-Invert-ResNet50 model had obvious differences in performance when detecting the content of pork from different parts in adulterated mutton. In order to use the CBAM-Invert-ResNet50 model to accurately detect the content of pork in adulterated mutton in the mixed-part dataset, the features of the three models, for the back, front leg, and hind leg, were stitched together to eliminate the influence of different parts on the model. To further improve the prediction performance of the model on the mixed-part dataset, transfer learning was used to optimize the pretrained model. After the fusion features were input into the pretrained model, the differences between the fusion features and the actual features were eliminated by fine-tuning. This ensured that the real features of the mixed-part dataset were further extracted while making full use of the fused features, improving the accuracy and robustness of the model. At the same time, ResNet50 and MobileNetV3 models were used to establish feature fusion models to detect the adulteration content in the mixed-part dataset, and the results were compared with those of the CBAM-Invert-ResNet50. The R² and RMSE results of the validation set of the three models for the mixed-part dataset before and after feature fusion are shown in Figure 9.

According to Figure 9, before feature fusion, the R² values of the MobileNetV3, ResNet50, and CBAM-Invert-ResNet50 models for the mixed-part dataset were 0.7133, 0.8802, and 0.9264, respectively. Based on feature fusion, the R² values of the MobileNetV3, ResNet50, and CBAM-Invert-ResNet50 combined with transfer learning for the mixed-part dataset were 0.8728, 0.9200, and 0.9589, respectively, increases of 0.1595, 0.0398, and 0.0325 compared with those before feature fusion. The RMSE values of the MobileNetV3, ResNet50, and CBAM-Invert-ResNet50 combined with transfer learning for the mixed-part dataset were reduced by 0.0153 g·g⁻¹, 0.0059 g·g⁻¹, and 0.0070 g·g⁻¹, respectively, compared with those before feature fusion. The above results show that the prediction performance of the MobileNetV3, ResNet50, and CBAM-Invert-ResNet50 models based on feature fusion combined with transfer learning was improved on the mixed-part dataset. Among them, the CBAM-Invert-ResNet50 had the best prediction effect on the mixed-part dataset, with R² and RMSE of 0.9589 and 0.0220 g·g⁻¹, respectively.

Stability Evaluation of the Models

Figure 10 shows the boxplots of the adulteration content predicted by the CBAM-Invert-ResNet50, ResNet50, and MobileNetV3 models combined with transfer learning for the mixed-part dataset before and after feature fusion. It can be obtained from Figure 10 that the IQR ranges of the MobileNetV3 model were 0.1079-0.3299 and 0.1827-0.3996 at adulteration contents of 20% and 30%, respectively; these IQR ranges were too large and the data were scattered. The IQRs of the MobileNetV3 model based on feature fusion combined with transfer learning for the prediction of 20% and 30% adulterated mutton were significantly reduced, to 0.1296-0.2639 and 0.2450-0.3568, respectively. Similar results were obtained with the ResNet50 model: based on feature fusion combined with transfer learning, its IQR ranges for the prediction of 20% and 30% adulterated mutton were significantly reduced, to 0.1590-0.2457 and 0.2550-0.3705, respectively. Compared with the results before feature fusion, the IQR ranges of the CBAM-Invert-ResNet50 model based on feature fusion combined with transfer learning for 10%, 20%, 30%, and 40% adulterated mutton were significantly reduced, to 0.0940-0.1391, 0.1892-0.2390, 0.2903-0.3399, and 0.3774-0.4321, respectively. The above results show that the three models, MobileNetV3, ResNet50, and CBAM-Invert-ResNet50, based on feature fusion combined with transfer learning could improve the stability of the predicted values on the mixed-part dataset; the predicted values were all concentrated. Among them,
the CBAM-Invert-ResNet50 had the best prediction stability on the mixed-part dataset.

Conclusions

The improved CBAM-Invert-ResNet50 model based on the inverted residual structure and attention mechanism was used to detect the content of pork from the back, front leg, and hind leg in adulterated mutton under the effect of mutton flavor essence and colorant. Feature fusion and transfer learning were combined to accurately detect the content of pork from mixed parts in adulterated mutton. The results showed that the R² of the CBAM-Invert-ResNet50 model for predicting the contents of pork from the back, front leg, and hind leg in adulterated mutton was 0.9373, 0.8876, and 0.9055, respectively, and the RMSE was 0.0268 g·g⁻¹, 0.0357 g·g⁻¹, and 0.0316 g·g⁻¹, respectively. After obtaining the fusion features of different parts by feature stitching, the CBAM-Invert-ResNet50 combined with transfer learning was used to predict the content of pork from mixed parts in adulterated mutton; the R² and RMSE were 0.9589 and 0.0220 g·g⁻¹, respectively. Compared with those before feature fusion, the R² on the mixed-part dataset increased by 0.0325 and the RMSE decreased by 0.0070 g·g⁻¹. The results showed that the improved CBAM-Invert-ResNet50 model combined with RGB images from mobile phones can be used to quickly and accurately detect the content of pork from specific and mixed parts in adulterated mutton. The CBAM could effectively increase the feature differences between data of different contents and significantly improve the accuracy of the prediction model of mutton adulteration content under the effect of additives. Using an inverted residual structure to replace the original residuals in the ResNet50 network makes the model more lightweight. For the mixed-part dataset with more complex data features, the feature fusion method could comprehensively utilize multiple image features and complement the advantages of multiple features. Combined with
transfer learning, more robust and accurate results could be obtained to predict the content of pork from mixed parts in adulterated mutton. The results of this study can provide guidance for the safety of mutton and its products. At the same time, it promotes the development and application of deep learning combined with image data in the quantitative detection of components of agricultural and livestock products.

Figure 2. Schematic diagram of the mobile phone image data acquisition system.

Figure 5. The diagram of feature fusion.

2.4.3. Transfer Learning

When detecting the adulteration content of pork from mixed parts in adulterated mutton, the results of the model are often not accurate because of the difference between the fusion characteristics and the actual characteristics. Therefore, it is necessary to further extract the real features while making full use of the fused features. Transfer learning combined with fine-tuning was used to detect the adulteration content in the mixed parts in this study. Fine-tuning was used to obtain data features or model parameters in both the original and new domains by freezing part of the convolutional layers of the pretrained model (usually the convolutional layers close to the input, because these layers retain a large amount of underlying information) and training the remaining convolutional layers.

2.5. Test Environment and Model Evaluation

2.5.1. Evaluation Criteria of the Model

When establishing the adulteration content prediction model, the predictive effect of the model was evaluated by calculating the correlation coefficient R² and the root mean square error RMSE, whose calculation equations are shown in (1) and (2).
Figure 6. Visualization and comparison of depth features of pork from different parts (back, front leg, and hind leg) in adulterated mutton extracted by different models: (a) back; (b) front leg; (c) hind leg.

Figure 7 shows that the Invert-ResNet50 model was obtained by using the inverted residual structure to replace the original residual structure in the ResNet50. Compared with ResNet50, the total number of parameters of the Invert-ResNet50 was reduced by 58.25%, from 2.359 × 10⁷ to 9.85 × 10⁶, and the size of the model was reduced from 44.89 MB to 18.66 MB, a reduction of 58.43%. The CBAM-ResNet50 model was obtained by directly introducing the CBAM attention mechanism into the ResNet50 model. Compared with ResNet50, both the number of parameters and the model size increased, which did not meet the requirements of model lightweighting. Therefore, the CBAM-Invert-ResNet50 network was obtained by replacing the residual structure in the CBAM-ResNet50 with the inverted residual structure. The number of parameters was reduced from 2.612 × 10⁷ to 1.002 × 10⁷, a reduction of 61.64%, and the size of the model was reduced from 49.75 MB to 19.11 MB, a reduction of 61.59%. Compared with the ResNet50 and CBAM-ResNet50, the numbers of parameters of the Invert-ResNet50 and CBAM-Invert-ResNet50 networks were significantly reduced, indicating that the inverted residual structure could significantly reduce the number of network parameters of the model, thus reducing the volume of the model and realizing a lightweight model structure. The results were consistent with those reported in previous studies. Cui et al. added the residual structure to the DenseNet network, and the number of parameters in the model was reduced from 1.08 × 10⁷ to 0.89 × 10⁷ [24]. Xu et al.
added the inverted residual structure to YOLOv3 and combined it with depthwise separable convolution to recognize gestures, and the size of the model was only 0.89 M [25]. Compared with the Invert-ResNet50, the number of parameters of the CBAM-Invert-ResNet50 increased by only 1.73%, and the model size increased by 2.41%. However, the attention mechanism could strengthen the features of different pork contents in pork-adulterated mutton, making it easier for the model to realize the rapid and accurate prediction of the content of pork in adulterated mutton under the action of mutton flavor essence and colorant. Therefore, the CBAM-Invert-ResNet50 network could not only meet the lightweight requirements of the model but also ensure the precision of the model.

3.3. The Content Detection Model of Adulterated Mutton with Pork from Different Parts

3.3.1. Results of the CBAM-Invert-ResNet50 Model

Figure 8. Boxplots of the three network models, MobileNetV3, ResNet50, and CBAM-Invert-ResNet50, for the back, front leg, and hind leg datasets in the validation set: (a) ResNet50 for the back dataset; (b) MobileNetV3 for the back dataset; (c) CBAM-Invert-ResNet50 for the back dataset; (d) ResNet50 for the front leg dataset; (e) MobileNetV3 for the front leg dataset; (f) CBAM-Invert-ResNet50 for the front leg dataset; (g) ResNet50 for the hind leg dataset; (h) MobileNetV3 for the hind leg dataset; (i) CBAM-Invert-ResNet50 for the hind leg dataset.

3.4. The Content Detection Model of Mutton Adulterated with Pork from Mixed Parts

3.4.1. Results of the Different Models

Figure 9. The validation set results of the three models before and after feature fusion for the mixed-part dataset: (a) results of R²; (b) results of RMSE.

Figure 10. Boxplots of the adulteration content predicted by the CBAM-Invert-ResNet50, ResNet50, and MobileNetV3 models combined with transfer learning for the mixed-part dataset before and after feature fusion.

Author Contributions: Conceptualization, Z.B. and R.Z.; methodology, Z.B. and D.H.; software, D.H. and Z.B.; formal analysis, Z.B. and S.W.; investigation, Z.B. and D.H.; resources, R.Z.; data curation, Z.B. and D.H.; writing-original draft preparation, Z.B.; writing-review and editing, S.W., R.Z. and Z.H.; visualization, S.W. and Z.H.; supervision, Z.B. and D.H.; validation, S.W. and Z.H.; project administration, R.Z.; funding acquisition, R.Z. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by the National Natural Science Foundation of China [Grant No.
31860465], the Bingtuan Innovation Leadership Program in Sciences and Technologies for Young and Middle-Aged Scientists [Grant No. 2020CB016], and the 2023 Corps Graduate Student Innovation Program.

Table 1. The results of the model for the content of pork from the back, front leg, and hind leg adulterated in mutton.
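As a quick sanity check, the lightweighting percentages quoted for Figure 7 follow directly from the reported parameter counts:

```python
def reduction_pct(before, after):
    """Relative reduction in percent when going from `before` to `after`."""
    return (before - after) / before * 100

# ResNet50 -> Invert-ResNet50: 2.359e7 -> 9.85e6 parameters.
print(round(reduction_pct(2.359e7, 9.85e6), 2))   # -> 58.25, as reported
# CBAM-ResNet50 -> CBAM-Invert-ResNet50: 2.612e7 -> 1.002e7 parameters.
print(round(reduction_pct(2.612e7, 1.002e7), 2))  # -> 61.64, as reported
```

The model-size reductions (44.89 MB to 18.66 MB, and 49.75 MB to 19.11 MB) can be verified with the same formula.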
Insulin Secretion, Insulin Sensitivity, and Hepatic Insulin Extraction in First-degree Relatives of Type 2 Diabetic Patients

To identify early metabolic abnormalities in type 2 diabetes mellitus, we measured insulin secretion, sensitivity to insulin, and hepatic insulin extraction in 48 healthy normal glucose-tolerant Brazilians, first-degree relatives of type 2 diabetic patients (FH+). Each individual was matched for sex, age, weight, and body fat distribution with a person without a history of type 2 diabetes (FH-). Both groups were submitted to a hyperglycemic clamp procedure (180 mg/dl). Insulin release was evaluated in its two phases. The first was calculated as the sum of plasma insulin at 2.5, 5.0, 7.5, and 10.0 min after the beginning of glucose infusion, and the second as the mean plasma insulin level in the third hour of the clamp procedure. The insulin sensitivity index (ISI) was the mean glucose infusion rate in the third hour of the clamp experiment divided by the mean plasma insulin concentration during the same period. Hepatic insulin extraction was determined under fasting conditions and in the third hour of the clamp procedure as the ratio between C-peptide and plasma insulin levels. FH+ individuals did not differ from FH- individuals in terms of the following parameters [median (range)]: a) first-phase insulin secretion, 174 (116-221) vs 207 (108-277) µU/ml; b) second-phase insulin secretion, 64 (41-86) vs 53 (37-83) µU/ml; and c) ISI, 14.8 (9.0-20.8) vs 16.8 (9.0-27.0) mg kg⁻¹ min⁻¹/µU ml⁻¹. Hepatic insulin extraction in FH+ subjects was similar to that of FH- subjects under basal conditions (median, 0.27 vs 0.27 ng/µU) and during glucose infusion (0.15 vs 0.15 ng/µU). Normal glucose-tolerant Brazilian FH+ individuals, well-matched with FH- ones, did not show defects of insulin secretion, insulin sensitivity, or hepatic insulin extraction as tested by hyperglycemic clamp procedures.
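The derived quantities in the abstract are simple ratios. A minimal sketch (pure Python; the clamp values below are illustrative, chosen near the reported medians, and are not actual study data):

```python
def insulin_sensitivity_index(mean_gir, mean_insulin):
    """ISI: mean glucose infusion rate in the third hour of the clamp
    divided by the mean plasma insulin over the same period."""
    return mean_gir / mean_insulin

def hepatic_insulin_extraction(c_peptide, insulin):
    """Hepatic extraction estimate: C-peptide (ng/ml) over insulin (uU/ml)."""
    return c_peptide / insulin

# Illustrative third-hour values for one subject: insulin 64 uU/ml (near the
# reported second-phase median) and a hypothetical C-peptide of 9.6 ng/ml.
print(round(hepatic_insulin_extraction(c_peptide=9.6, insulin=64.0), 2))  # -> 0.15
print(round(insulin_sensitivity_index(mean_gir=9.5, mean_insulin=64.0), 3))
```

Note that the reported ISI medians (e.g., 14.8) imply a unit or scaling convention that the abstract does not spell out, so the raw ratio here is deliberately left unscaled.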
Introduction

Type 2 diabetes mellitus is a metabolic syndrome which is relatively common in most countries, including Brazil (1,2), and is often the cause of severe micro- and macrovascular complications (3). Despite decades of research, its pathogenesis is poorly understood. It is generally agreed that it is a polygenic disorder characterized by varying degrees of impaired insulin secretion and insulin resistance (4,5), both of which can be affected by environmental and genetic factors (6,7). What remains controversial is which of these abnormalities is the major genetic factor. One approach has been to determine which factor is first detectable in individuals genetically predisposed to develop type 2 diabetes. To do this it is necessary to study individuals with normal glucose tolerance to avoid secondary effects of glucotoxicity on insulin secretion and insulin sensitivity (8). However, at this stage, it is not possible to be sure who is a true prediabetic. Moreover, in these studies it is important to properly match experimental and control groups for acquired variables (e.g., obesity, age, physical fitness) which affect ß-cell function and insulin sensitivity (9). Finally, the findings for a specific ethnic group, such as the Pima Indians, the Nauruans, or the Mexican- or African-Americans, may not be valid for other type 2 diabetic patients.

In a previous study of European normal glucose-tolerant individuals who were first-degree relatives of type 2 diabetic patients and well-matched with a control group with no family history of diabetes, we used the hyper- and euglycemic clamp techniques to assess insulin secretion and insulin sensitivity (10). We found that individuals with a first-degree relative with type 2 diabetes had impaired insulin secretion and were not insulin resistant. These findings were later supported by van Haeften et al.
(11) in a similar study, while others observed decreased insulin sensitivity and apparently normal ß-cell function (12,13). However, in the study by Eriksson et al. (12), the subjects were not well-matched, and some of the participants in the study by Gulli et al. (13), who were Mexican-Americans, probably had impaired glucose tolerance. With a less sensitive technique, i.e., the acute glucose infusion test associated with mathematical models, the results have also been controversial; ß-cell dysfunction (14), decreased insulin sensitivity (15), or no defect (16) has been observed. It should be noted, however, that in the study by Warram et al. (15) the probands were markedly obese compared to the controls, and none of these studies evaluated the appropriateness of ß-cell function in relation to insulin sensitivity (17). Johnston et al. (18) only observed decreased first-phase insulin release in offspring of type 2 diabetic patients when this variable was adjusted for their degree of insulin sensitivity.

Our aim was to evaluate insulin secretion and insulin sensitivity in Brazilian glucose-tolerant first-degree relatives of type 2 diabetic patients. The Brazilian population is characterized by a long history of miscegenation, in variable proportions, of European, Black, and Indian ancestries (19), the latter two having an increased risk of developing type 2 diabetes (20). We used the hyperglycemic clamp technique. Each subject was carefully matched for age, sex, weight, body fat distribution, smoking history, and physical activity.
Subjects and Methods

Moderately active white Brazilians without a history of alcoholism, drug use or chronic diseases were admitted to the study. Each subject gave informed consent to participate in the study, which was approved by the Medical Ethics Committee of our Institution. A first visit was then scheduled, at which each subject underwent a general clinical and laboratory evaluation and an oral glucose tolerance test according to the National Diabetes Data Group criteria (21). We selected 56 subjects with (FH+) and 56 without (FH-) type 2 diabetic first-degree relatives. All were healthy and normal glucose-tolerant individuals at the time of evaluation. The two groups were individually matched for sex, age, body mass index, and waist-hip ratio.

Forty-eight pairs of individuals from the initial 56 were able to participate in the second evaluation after about 15 days. The eight pairs who were excluded because of personal or technical problems did not differ from the participants in relation to clinical and biochemical characteristics or glucose tolerance. At this second visit they underwent the hyperglycemic clamp study as described (10). Briefly, each volunteer came to the laboratory at 7:00 am after an overnight fast. A cannula was retrogradely inserted into a peripheral hand vein and kept patent by constant saline infusion. The hand was kept warm for blood arterialization. Blood samples were obtained from the hand vein every 15 min for half an hour under basal conditions and, during glucose infusion, every 2.5 min for the first 10 min, and then every 5.0 min up to 180 min. Another cannula was inserted into an antecubital vein of the opposite arm for glucose infusion. For this infusion, we used a pump (Harvard Apparatus Co., South Natick, MA, USA), beginning with the bolus dose (in ml = 2 x {[weight (kg) x 1.5 x (180 mg/dl - basal plasma glucose (mg/dl))]/10³}), which was followed by a variable rate of glucose infusion depending on the plasma glucose
level. This was done to obtain and maintain this level at 180 mg/dl. Glucose was measured in all blood samples; insulin and C-peptide were also measured at the same times as glucose under basal conditions and during the first 20 min, and then every 20 min.

Plasma glucose was determined by the glucose oxidase method (Beckman Instruments, Fullerton, CA, USA). Glycosylated hemoglobin (HbA1) was measured by affinity chromatography (Isolab, Akron, OH, USA). Plasma insulin and C-peptide were determined using the solid phase and the double antibody radioimmunoassay techniques, respectively (Diagnostic Products Co., Los Angeles, CA, USA). Serum cholesterol, its HDL fraction, and triglycerides were measured by standard automated enzymatic techniques (Technicon Instruments Co., Tarrytown, NY, USA).

The phases of insulin secretion were evaluated as follows: the first-phase insulin release was taken to be the sum of plasma insulin concentrations at 2.5, 5.0, 7.5, and 10.0 min of the hyperglycemic clamp experiment (10), and the second-phase insulin release was taken as the average plasma insulin concentration during the last hour of the hyperglycemic clamp, when plasma insulin concentrations were expected to plateau (10). Insulin sensitivity was assessed as the insulin sensitivity index (ISI), calculated by dividing the average glucose infusion rate (GIR) during the last hour of the clamp, minus the occasional urinary glucose excretion, by the average plasma insulin concentration during the same interval (10). Under stable conditions of constant hyperglycemia (third hour of the clamp), the amount of glucose infused (GIR) gives an estimate of the glucose metabolized by the tissues, since endogenous glucose production should be suppressed. This value divided by the plasma insulin response (second-phase insulin secretion) provides an estimate of tissue sensitivity (ISI) to endogenously secreted insulin (10) and has been shown to correlate with values for insulin sensitivity
obtained in euglycemic/hyperinsulinemic clamp experiments (10,22).

Hepatic insulin extraction (HIE) under basal conditions was calculated as the ratio between mean basal plasma C-peptide and insulin (three determinations), and during glucose infusion, as the ratio between mean plasma C-peptide and insulin during the third hour of the hyperglycemic clamp experiment (23).

Data are reported as either the mean ± SDM, the median and 1st and 3rd quartiles, or the percent frequency. The unpaired Student t-test was used to compare means, the Mann-Whitney test to compare medians, and the chi-square test to compare frequencies (24). Correlations were performed using linear regression (24). A P value equal to or less than 0.05 was considered statistically significant.

Results

The main clinical and biochemical characteristics of the two groups are shown in Table 1. Both groups were well-matched for sex, age, weight, and body fat distribution. They also did not differ in terms of ancestry, number of pregnancies, or smoking habit. In the FH+ group, the mother alone was diabetic or other first-degree relatives were also diabetic in 50 and 66% of the cases, respectively, the mother being the family member most frequently affected (P<0.01).

Prior to the clamp, plasma glucose, insulin and C-peptide levels, serum lipids, and HIE were comparable in both groups (Table 1). Moreover, both groups showed equally normal glucose tolerance as assessed by HbA1 level (Table 1) and by plasma glucose concentrations after 75 g of oral glucose, and presented similar plasma insulin and C-peptide responses during the oral glucose tolerance test (Figure 1). The linear regression coefficients of plasma insulin on plasma glucose with the oral glucose stimulus did not differ between FH+ and FH- individuals (P>0.05).
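The clamp-derived indices defined in the methods above (priming bolus dose, ISI, HIE) reduce to simple arithmetic. A minimal illustrative sketch, with hypothetical sample values that are not taken from the study:

```python
def bolus_dose_ml(weight_kg, basal_glucose_mg_dl):
    # Priming bolus (ml) = 2 x {[weight (kg) x 1.5 x (180 - basal glucose)] / 10^3}
    return 2 * (weight_kg * 1.5 * (180 - basal_glucose_mg_dl)) / 1e3

def insulin_sensitivity_index(mean_gir, urinary_glucose, mean_insulin):
    # ISI = (average GIR during the last clamp hour - urinary glucose loss)
    #       / average plasma insulin during the same interval
    return (mean_gir - urinary_glucose) / mean_insulin

def hepatic_insulin_extraction(mean_c_peptide, mean_insulin):
    # HIE = ratio of mean plasma C-peptide to mean plasma insulin
    return mean_c_peptide / mean_insulin

# Hypothetical subject: 70 kg, basal glucose 85 mg/dl
print(bolus_dose_ml(70, 85))                        # 19.95 ml
print(insulin_sensitivity_index(8.0, 0.2, 60.0))    # 0.13
print(hepatic_insulin_extraction(1.5, 60.0))        # 0.025
```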
During the hyperglycemic clamp, mean plasma glucose concentrations were 179 ± 2 mg/dl (CV: 2.8 ± 0.9%) and 179 ± 2 mg/dl (CV: 2.7 ± 1.0%) (P>0.05) in the FH+ and FH- groups, respectively (Figure 2). In both groups, a biphasic plasma insulin response was observed (Figure 2). As shown in Table 2, first- and second-phase insulin secretion were comparable in the two groups. The average GIR necessary to maintain plasma glucose levels at 180 mg/dl during the last hour of the hyperglycemic clamp experiment was also not different between the two groups (Table 2). Consequently, ISI values were similar in FH+ and FH- individuals (Table 2). During the third hour of glucose infusion, HIE showed a similar reduction from its basal value and reached a similar value in both groups (Table 2).

In the two study groups there was a similar and significant inverse relation between ISI and body mass index, first-, and second-phase insulin secretion (r = -0.41, -0.42, and -0.59 vs r = -0.47, -0.57, and -0.66 for FH+ and FH- individuals, respectively; P<0.01).

Discussion

A comparative evaluation of ß-cell function, ISI, and HIE was performed between two groups of similar individuals with (FH+) or without (FH-) first-degree relatives with type 2 diabetes. Both groups showed normal glucose tolerance and were well-matched for the main demographic characteristics. Under these conditions, a defect found in any of the three evaluated variables could be considered genetically determined.

The fact that the mother was the relative most frequently affected by type 2 diabetes in the FH+ group, although possibly influenced by confounding factors and deserving more investigation, agrees with other studies of type 2 diabetic patients (25).
The ß-cell response to the oral glucose challenge of the FH+ group was similar to that of the FH- group. This finding was previously observed by us (10) and by others (12,14,26,27), whereas a decreased (28-30) or increased (13) insulin response to oral glucose has been less frequently reported. These divergent results in relation to ours may be due to the different ethnic groups studied (13) or to a small and specific group of prediabetic individuals that may have included future type 1 diabetic patients (28-30).

Under similar conditions of ß-cell stimulation, the two groups were evaluated for insulin release and tissue sensitivity to insulin. The first and second phases of insulin secretion and the ISI did not differ between groups. Previous studies using hyperglycemic clamps in first-degree relatives of type 2 diabetic patients obtained the following results: unimpaired insulin secretion (12,18), decreased insulin secretion (10,11) or, in some cases, increased insulin secretion (13).

Many studies have evaluated insulin release in response to an intravenous glucose stimulus in offspring of type 2 diabetic patients who showed normal glucose tolerance. Although most of these studies found lower insulin secretion (14,31,32), many authors did not observe any difference in relation to control individuals (16,33,34), and in some cases increased insulin release was reported (15,35).

The reason the results of the present study differ from those of our previous one (10) may be the participation of different ethnic groups (36 vs 100% European ancestry, respectively), since type 2 diabetes inheritance is heterogeneous. On the other hand, increased insulin release as a response to insulin resistance was only observed when the matching for weight was not good (15,35), or when the individuals came from ethnic groups characterized by insulin resistance, such as Mexican-Americans (13) and African-Americans (35).
Our results may be due to the fact that most first-degree relatives of type 2 diabetics with normal glucose tolerance, even when both parents are diabetic, would not show an insulin release defect, or that this defect may not be detectable by the best techniques available, as proposed by Johnston et al. (18). Another reason may be that the distinction between the FH+ and FH- groups based on the presence or absence of type 2 diabetic first-degree relatives may not be sufficient to obtain two groups with a significantly different concentration of diabetes genes, even with a large number of individuals. Also, most of the volunteers in this study were 40 years old or younger and had a body mass index lower than 26.8 kg/m² (36), thus possibly showing fewer effects of the acquired factors that facilitate the expression of an insulin secretion defect (5).

We should point out that in this study and in previous ones, insulin was measured by radioimmunoassay using antibodies that significantly cross-reacted with proinsulin and its intermediates, causing an overestimation of true insulin release. For this reason, we measured plasma C-peptide during the hyperglycemic clamp, a procedure that permitted us to evaluate the real second-phase insulin release. Both groups showed similar secretion (data not shown). However, the FH+ group may really present decreased first-phase insulin release, since this is one of the initial defects of diabetes together with a disproportionate proinsulin release (37).

More definitive results about ß-cell function and insulin sensitivity before the development of type 2 diabetes were obtained from studies of discordant identical twins. Among these, Vaag et al.
(38) performed clamp experiments during the stage of normal glucose tolerance and observed decreased first-phase insulin secretion with no change in insulin sensitivity. These results partially agree with ours and suggest that we may not have observed a ß-cell secretion defect because we did not have two groups differing significantly in the number of diabetes genes.

The similar insulin release displayed by the FH+ and FH- groups was not due to HIE differences under basal conditions or during glucose infusion. First-degree relatives of type 2 diabetic patients from southern Italy (39) and northern Europe (40) presented hyperinsulinemia due to decreased insulin clearance, a result which is not in agreement with our findings.

As already established, we observed a similar inverse relationship between body weight and insulin sensitivity, and between insulin sensitivity and insulin secretion in both phases, in the two study groups. These findings may be explained by the fact that the subjects were well-matched, and/or by the fact that the associations between the variables are not determined by genetic diabetes factors. Similar results were observed by Byrne et al. (16) for ISI and body mass index, and by Vaag et al. (38) for ISI and first-phase insulin secretion.

In conclusion, normal glucose-tolerant, white Brazilian first-degree relatives of type 2 diabetic patients did not show defects of ß-cell secretory function, ISI, or HIE as tested by hyperglycemic clamp procedures.

Figure 1. Plasma C-peptide (A), plasma insulin (B), and plasma glucose (C) responses during oral glucose tolerance tests of individuals with (FH+) and without (FH-) type 2 diabetic first-degree relatives. Data are reported as means ± SEM for 56 individuals in each group.

Figure 2. Plasma glucose (A) and plasma insulin (B) concentrations during hyperglycemic clamp experiments. FH+ and FH- indicate individuals with and without first-degree relatives with type 2 diabetes, respectively. Data are reported as means ± SEM for 48 individuals in each group.

Table 1. Clinical characteristics of individuals with (FH+) and without (FH-) a first-degree relative with type 2 diabetes. Data are reported as medians (1st-3rd quartiles) or as means ± SDM. BMI: body mass index; HIE: hepatic insulin extraction. No significant differences were observed between the two groups (Student t-test, Mann-Whitney test or chi-square test).

Table 2. Hyperglycemic clamp measurements of insulin secretion, insulin sensitivity, and hepatic insulin extraction of individuals with (FH+) and without (FH-) a first-degree relative with type 2 diabetes.
The Optimally Designed Variational Autoencoder Networks for Clustering and Recovery of Incomplete Multimedia Data

Clustering analysis of massive data in wireless multimedia sensor networks (WMSN) has become a hot topic. However, most data clustering algorithms have difficulty obtaining the latent nonlinear correlations of data features, resulting in low clustering accuracy. In addition, it is difficult to extract features from missing or corrupted data, even though incomplete data are widespread in practical work. In this paper, an optimally designed variational autoencoder network is proposed for extracting the features of incomplete data, and the high-order fuzzy c-means (HOFCM) algorithm is used to improve the clustering performance on incomplete data. Specifically, the feature extraction model is improved by using a variational autoencoder to learn the features of incomplete data. To capture the nonlinear correlations among different heterogeneous data patterns, a tensor-based fuzzy c-means algorithm is used to cluster the low-dimensional features, with the tensor distance as the distance measure so as to capture the unknown correlations of the data as fully as possible. Finally, once the clustering results are obtained, the missing data can be restored using the low-dimensional features. Experiments on real datasets show that the proposed algorithm not only improves the clustering performance on incomplete data effectively, but also fills in missing features and obtains better data reconstruction results.

Introduction

The rapid development of communication technologies and sensor networks has led to an increase in heterogeneous data. The proliferation of these technologies in communication networks has also facilitated the development of the wireless multimedia sensor network (WMSN) [1]. Currently, multimedia data on WMSNs are successfully used in many applications, such as industrial control [2], target recognition [3] and intelligent traffic monitoring [4].
Nowadays, multimedia sensors produce a great deal of heterogeneous data, which require new models and technologies, particularly neural computing [5], to process, and which further promote the design and application of WMSNs [6,7]. However, heterogeneous networks and data are often very complex [8,9], consisting of structured data and unstructured data such as pictures, voice, text, and video. Because heterogeneous data come from many input channels in the real world, they are typically multimodal, and there are nonlinear relationships among them [10]. Different modes usually convey different information [11]. For example, images have many details, such as shadows, rich colors and complex scenes, while titles display things invisible in the image, such as the names of objects [12]. Moreover, the different modalities have complex relationships with each other. In the real world, most multimedia data suffer from many missing values due to sensor failures, measurement inaccuracy and network data transmission problems [13,14]. These characteristics, especially incompleteness, mean that incomplete data are widespread in practical applications [15,16]. Missing data values will affect the decision process of the application servers for specific tasks [17], and the resulting errors can be important for subsequent steps of data processing. Therefore, the recovery of missing data values is essential for processing big data in WMSNs.

As a fundamental technology of big data analysis, clustering divides objects into different clusters based on some similarity measure, making objects in the same cluster more similar to each other than to objects in different clusters [18,19]. Clustering is commonly used in organization, analysis, communication, and retrieval tasks [20]. Traditional data clustering algorithms focus on complete data, such as image clustering [21], audio clustering [22] and text clustering [23].
Recently, heterogeneous data clustering methods have attracted wide attention from researchers [24-26], and many algorithms have been proposed. For example, Meng et al. optimized a unified objective function by an iterative process and developed a spectral clustering algorithm for heterogeneous data based on graph theory [27]. Li et al. [28] proposed a high-order fuzzy c-means algorithm that extends the conventional fuzzy c-means algorithm from vector space to tensor space. A high-order possibilistic c-means algorithm based on tensor decompositions was proposed for data clustering in Internet of Things (IoT) systems [29]. These algorithms effectively improve clustering performance for heterogeneous data. However, they only obtain clustering results and lack further analysis of the low-dimensional features of incomplete data. Therefore, their performance is limited for heterogeneous data in the WMSN big data environment. More importantly, other existing feature clustering algorithms do not consider data reconstruction and missing data.

WMSN systems require modern data analysis methods, and deep learning (DL) has been actively applied in many applications due to its strong data feature extraction ability [30]. Deep embedded clustering (DEC) learns a mapping from the data space to a low-dimensional feature space, in which it optimizes a clustering objective [31]. Ref. [32] shows the feature representation ability of the variational autoencoder (VAE). VAE learns the multi-faceted structure of data and achieves high clustering performance [33]. In addition, VAE has a strong ability for feature extraction and reconstruction, and can thus be a good tool for handling incomplete data. Aiming at this research objective, the variational autoencoder based high-order fuzzy c-means (VAE-HOFCM) algorithm is presented in this paper to cluster and reconstruct incomplete data in WMSNs.
It can effectively cluster both complete and incomplete data and obtain better reconstruction results. VAE-HOFCM is mainly composed of three steps: feature learning and extraction, high-order clustering, and data reconstruction. First, the feature learning network is improved by using a variational autoencoder to learn the features of incomplete data. To capture the nonlinear correlations of different heterogeneous data, tensors are applied to form a feature representation of the heterogeneous data. Then, the tensor distance is used as the distance measure to capture the unknown distribution of the data as fully as possible in the clustering process. The results of feature clustering and the VAE output both affect the final clustering results. Finally, given the clustering results, the missing data can be restored from the low-dimensional features.

The rest of the paper is organized as follows: Section 2 presents work related to this paper. The proposed algorithm is illustrated in Section 3, and experimental results and analysis are described in Section 4. Finally, the whole paper is concluded in the last section.

Preliminaries

This section describes the variational autoencoder (VAE) and the fuzzy c-means (FCM) algorithm, which will be useful in the sequel.

Variational Autoencoder

The variational autoencoder, a new method for nonlinear dimensionality reduction, is a great case of combining probabilistic graphical models with deep learning [34,35]. Consider a dataset $X = \{x_1, x_2, \dots, x_N\}$ consisting of $N$ independent and identically distributed samples of continuous or discrete variables $x$. To generate the target data $x$ from a hidden variable $z$, two blocks are used: an encoder block and a decoder block. Suppose that $z$ is generated by some prior normal distribution $p_\theta(z) = \mathcal{N}(\mu, \sigma^2)$. The true posterior density $p_\theta(z|x)$ is intractable. An approximate recognition model $q_\phi(z|x)$ is therefore introduced as a probabilistic encoder.
Similarly, $p_\theta(x|z)$ is referred to as a probabilistic decoder because, given the code $z$, it produces a distribution over the possible corresponding values of $x$. The parameters $\theta$ and $\phi$ represent the structure and weights of the neural networks used; they are adjusted during VAE training and are considered constant afterwards. The KL divergence (Kullback-Leibler divergence) between the approximation and the true posterior is minimized; when it is zero, $q_\phi(z|x) = p_\theta(z|x)$ and the true posterior distribution is recovered. The KL divergence of the approximation from the true posterior can be formulated as:

$D_{KL}(q_\phi(z|x) \,\|\, p_\theta(z|x)) = \mathbb{E}_{q_\phi(z|x)}[\log q_\phi(z|x) - \log p_\theta(z|x)]$  (1)

which can also be written as:

$\log p_\theta(x) = D_{KL}(q_\phi(z|x) \,\|\, p_\theta(z|x)) + \mathcal{L}(\theta, \phi; x)$  (2)

The second term on the right-hand side is called the variational lower bound on the marginal likelihood of the data $x$, and can be written as:

$\mathcal{L}(\theta, \phi; x) = -D_{KL}(q_\phi(z|x) \,\|\, p_\theta(z)) + \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]$  (3)

The second term $\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]$ requires estimation by sampling. A differentiable transformation $g_\phi(x, \varepsilon)$ of an auxiliary noise variable $\varepsilon$ is used to reparameterize the approximation $q_\phi(z|x)$, which yields a Monte Carlo estimate of $\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]$:

$\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] \approx \frac{1}{M} \sum_{m=1}^{M} \log p_\theta(x|z_m)$  (4)

where $z_m = g_\phi(x, \varepsilon_m) = \mu + \varepsilon_m \odot \sigma$, $\varepsilon_m \sim \mathcal{N}(0, I)$, and $M$ denotes the number of samples.

Fuzzy C-Means Algorithm (FCM)

The fuzzy c-means algorithm (FCM) is a typical soft clustering technique [36,37]. Given a dataset $X = \{x_1, x_2, \dots, x_N\}$ with $N$ objects of $m$ observations each, FCM fuzzily partitions $X$ into a predefined number of clusters $c$, with clustering centers denoted by $V = \{v_1, v_2, \dots, v_c\}$. The membership functions are defined as $u_{ik} = u_{v_i}(x_k)$, in which $u_{ik}$ denotes the membership of $x_k$ towards the $i$-th clustering center. FCM is thus defined by a $c \times N$ membership matrix $U = [u_{ik}]$. FCM minimizes the following objective function [38,39] to calculate the membership matrix $U$ and the clustering centers $V$:

$J(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{\,p}\, d_{ik}^{2}$  (5)

where $p > 1$ is the fuzzifier, every $u_{ik}$ belongs to the interval (0,1), and the memberships of each point sum to one ($\sum_{i=1}^{c} u_{ik} = 1$).
In addition, none of the fuzzy clusters is empty, nor does any single cluster contain all the data ($0 < \sum_{k=1}^{N} u_{ik} < N$). The membership matrix and clustering centers are updated by minimizing Equation (5) via the Lagrange multiplier method:

$u_{ik} = \frac{1}{\sum_{j=1}^{c} \left( d_{ik} / d_{jk} \right)^{2/(p-1)}}$  (6)

$v_i = \frac{\sum_{k=1}^{N} u_{ik}^{\,p}\, x_k}{\sum_{k=1}^{N} u_{ik}^{\,p}}$  (7)

In the traditional FCM algorithm, $d_{ik}$ denotes the Euclidean distance between $x_k$ and $v_i$, and $d_{jk}$ denotes the Euclidean distance between $x_k$ and $v_j$.

Problem Formulation and Proposed Method

Consider a dataset $X = \{x_1, x_2, \dots, x_N\}$ with $N$ objects, each represented by $m$ observations in the form $Y = \{y_1, y_2, \dots, y_m\}$. The purpose of data clustering is to divide the dataset into several similar classes based on a similarity measure, so that objects in the same cluster have great similarity and are easy to analyze. Multimedia data clustering tasks bring many problems and challenges, especially for missing or damaged data. The key challenges lie in three areas:

1. Learning the features of incomplete data: feature extraction and analysis are the basic steps of clustering. In general, many feature extraction methods, based on machine learning and deep learning, have been successfully applied to image, text, and audio feature learning. However, current algorithms focus on feature learning and extraction from high-quality data; in other words, they cannot effectively extract the features of lossy data. Therefore, feature learning on incomplete data is the primary problem of heterogeneous data clustering.

2. Clustering in feature space: an important characteristic of large-scale multimedia data is its diversity, which means that large-scale data sources are varied, including structured, unstructured and semi-structured data from a large number of sources. In particular, a large number of objects in big datasets are multimodal. For example, web pages usually contain both images and text. Each mode of a multimodal object has its own characteristics, which leads to the complexity of the data.
Therefore, the feature representation of multimedia data is significant in clustering tasks.

3. Filling missing values to reconstruct data: in wireless multimedia sensor networks, reliable data transmission is critical to providing the desired quality of network-based services. However, multimedia data transmission may not succeed for different reasons, such as sensory errors, connection errors, or external attacks. These problems can result in incomplete data and degrade the performance of WMSN applications. After feature extraction and cluster analysis, it is therefore very important to recover the missing data of the sensor network.

Description of the Proposed Method

The variational autoencoder based high-order fuzzy c-means (VAE-HOFCM) algorithm is divided into three stages: unsupervised feature learning, high-order feature clustering, and data reconstruction. The architecture of the proposed method is shown in Figure 1. To learn the features of incomplete multimedia data, the original dataset is divided into two subsets $X_c$ and $X_{inc}$: samples in subset $X_c$ have no missing values, while each sample in subset $X_{inc}$ contains some missing values.

Feature Learning Network Architecture

For a trained variational autoencoder, $q_\phi(z|x)$ will be very close to $p_\theta(z|x)$, so the encoder network can reduce the dimensionality of the real dataset $X = \{x_1, x_2, \dots, x_N\}$ and obtain a low-dimensional distribution. In this case, the latent variables may give better results than traditional dimensionality reduction methods. Once the improved VAE model is obtained, the encoder network is used to learn the latent feature vector of a missing-value sample, $z = \mathrm{Encoder}(x) \sim q_\phi(z|x)$. The decoder network is then used to decode the vector $z$ and generate the original sample, $\hat{x} = \mathrm{Decoder}(z) \sim p_\theta(x|z)$. Following the original VAE, and to build a better generative model, convolution kernels are added to the encoder.
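The encode, reparameterize, and decode steps described above can be sketched numerically as follows. This is a minimal illustration with numpy stand-ins: the truncation-based "encoder", the latent dimension of 25, and the sample values are assumptions for demonstration, not the paper's trained convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, d_latent=25):
    # Stand-in for the trained encoder: maps x to the parameters (mu, log_var)
    # of q_phi(z|x). A real model would use learned convolutional layers.
    mu = x[:d_latent]
    log_var = np.zeros(d_latent)
    return mu, log_var

def reparameterize(mu, log_var):
    # z = mu + eps * sigma with eps ~ N(0, I): the differentiable sampling
    # step that makes Monte Carlo estimation of E[log p(x|z)] trainable.
    eps = rng.standard_normal(mu.shape)
    return mu + eps * np.exp(0.5 * log_var)

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(q_phi(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

x = rng.standard_normal(784)      # e.g. a flattened 28x28 MNIST sample
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
print(z.shape)                    # (25,)
print(kl_to_standard_normal(mu, log_var) >= 0.0)  # True: KL is nonnegative
```

In a real implementation the KL term and the reconstruction error are combined into the training loss, and gradients flow through the reparameterized sample.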
A variational constraint is placed on the latent variable $z$, namely that $z$ obeys a Gaussian distribution. Here, each $x_i$ ($1 \le i \le N$) is fitted with an exclusive normal distribution, and the sample $z_i$ is drawn from that exclusive distribution; since $z_i$ is sampled from the distribution exclusive to $x_i$, the original sample $x_i$ can be regenerated through the decoder network. The improved VAE model is shown in Figure 2.

In general, assume that $q_\phi(z)$ is the standard normal distribution and that $q_\phi(z|x)$ and $p_\theta(x|z)$ are conditional normal distributions; plugging these in yields the normal VAE loss, where $z$ is a continuous variable representing the coding vector and $y$ is a discrete variable representing a category. If $z$ in the formula is directly replaced with $(z, y)$, the loss of the clustering VAE is obtained:

$\mathcal{L} = \mathbb{E}_{q_\phi(z,y|x)}\left[ -\log p_\theta(x|z) \right] + D_{KL}\left( q_\phi(z, y|x) \,\|\, p_\theta(z, y) \right)$  (8)

Set the scheme as $q_\phi(z, y|x) = q_\phi(y|z)\, q_\phi(z|x)$ and $p_\theta(z, y) = p_\theta(z|y)\, p_\theta(y)$. Substituting these into Equation (8), it simplifies to:

$\mathcal{L} = \mathbb{E}_{q_\phi(z|x)}\left[ -\log p_\theta(x|z) \right] + \sum_{y} q_\phi(y|z)\, D_{KL}\left( q_\phi(z|x) \,\|\, p_\theta(z|y) \right) + D_{KL}\left( q_\phi(y|z) \,\|\, p_\theta(y) \right)$  (9)

where the first term $-\log p_\theta(x|z)$ drives the reconstruction error to be as small as possible, that is, $z$ keeps as much information as possible; the term $\sum_y q_\phi(y|z) D_{KL}(q_\phi(z|x) \| p_\theta(z|y))$ plays the role of clustering; and $D_{KL}(q_\phi(y|z) \| p_\theta(y))$ makes the distribution of the classes as balanced as possible, so that no two classes nearly overlap. The above equation describes the coding and generation processes:

• Coding: a sample $x$ is drawn from the original data, and the coding feature $z$ is obtained from $q_\phi(z|x)$; the coding feature is then classified by the classifier $q_\phi(y|z)$ to obtain its category.

• Generation: a category $y$ is selected from the distribution $p_\theta(y)$, a random hidden variable $z$ is drawn from $p_\theta(z|y)$, and the original sample is then decoded through the generator $p_\theta(x|z)$.

The VAE procedure is outlined in Algorithm 1.

Variational Autoencoder Based High-Order Fuzzy C-Means Algorithm

The variational autoencoder obtains the low-dimensional features and the initial clustering results of the data by feature learning.
Then, the final clustering results are optimized on top of the FCM algorithm's clustering results. Traditional FCM works in vector space; it is better to use a higher-order tensor to represent the data features, because the tensor distance can capture correlations in the high-order tensor space and measures the similarity between two higher-order complex data samples.

Given an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, let $x$ denote its vector-form representation; the element $\mathcal{X}_{i_1 i_2 \dots i_N}$ ($1 \le i_j \le I_j$, $1 \le j \le N$) of $\mathcal{X}$ corresponds to the element $x_l$ of $x$ with $l = i_1 + \sum_{j=2}^{N} (i_j - 1) \prod_{t=1}^{j-1} I_t$. The tensor distance between two $N$-order tensors is then defined as:

$d_{TD}(x, y) = \sqrt{\sum_{l,m} g_{lm}\, (x_l - y_l)(x_m - y_m)} = \sqrt{(x - y)^T G (x - y)}$  (10)

where $g_{lm}$ is the metric coefficient used to capture the correlations between different coordinates in the tensor space, which can be calculated by:

$g_{lm} = \frac{1}{2\pi\sigma^2} \exp\left\{ -\frac{\|p_l - p_m\|_2^2}{2\sigma^2} \right\}$  (11)

where $\|p_l - p_m\|_2$ is the distance between the tensor coordinate positions corresponding to $x_l$ and $x_m$:

$\|p_l - p_m\|_2 = \sqrt{(i_1 - i_1')^2 + (i_2 - i_2')^2 + \cdots + (i_N - i_N')^2}$  (12)

The high-order fuzzy c-means algorithm minimizes the objective function:

$J(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{\,p}\, d_{TD}^{2}(x_k, v_i)$  (13)

To update the membership value $u_{ik}$, the Lagrangian is differentiated with respect to $u_{ik}$:

$\frac{\partial L}{\partial u_{ik}} = p\, u_{ik}^{\,p-1}\, d_{TD}^{2}(x_k, v_i) - \lambda_k$  (14)

Setting Equation (14) to 0, $u_{ik}$ is calculated:

$u_{ik} = \frac{1}{\sum_{j=1}^{c} \left( d_{TD}(x_k, v_i) / d_{TD}(x_k, v_j) \right)^{2/(p-1)}}$  (15)

Then, the equation for updating $v_i$ is obtained:

$v_i = \frac{\sum_{k=1}^{N} u_{ik}^{\,p}\, x_k}{\sum_{k=1}^{N} u_{ik}^{\,p}}$  (16)

Each iteration of this operation requires $O(c \times n)$, so the total computational complexity of $k$ iterations is $O(kc \times n)$. From the above, the VAE-HOFCM algorithm can be described as Algorithm 2, whose final step obtains the modified clustering results using the membership values $u_{ij}$. Compared with the steps of the HOFCM algorithm, VAE-HOFCM can additionally restore incomplete data during the clustering process. Equally, the VAE-HOFCM algorithm has a total time complexity of $O(kc \times n)$; before that, however, it needs to train the variational autoencoder network.

Experiments

This section evaluates the performance of the proposed VAE-HOFCM algorithm on three representative datasets. To show the effectiveness of VAE-HOFCM, the unsupervised clustering accuracy (ACC) and the adjusted Rand index (ARI) are adopted for verification.
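The tensor distance and the FCM-style membership update described above can be sketched as follows. This is a minimal illustration: the 2x2 tensor shape, sigma = 1, the membership factor of 2.5, and the sample distances are assumptions for demonstration, not values from the paper.

```python
import numpy as np

def tensor_distance(x, y, shape, sigma=1.0):
    # d_TD(x, y) = sqrt((x - y)^T G (x - y)), where the metric matrix G
    # couples coordinates according to their positions in the tensor grid.
    coords = np.array(list(np.ndindex(*shape)), dtype=float)
    pos_diff = coords[:, None, :] - coords[None, :, :]
    g = np.exp(-np.sum(pos_diff**2, axis=-1) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ g @ d))

def memberships(dists_to_centers, p=2.5):
    # u_i = 1 / sum_j (d_i / d_j)^(2/(p-1)): fuzzy membership of one sample
    # with respect to each clustering center; closer centers get more weight.
    d = np.asarray(dists_to_centers, dtype=float)
    return 1.0 / ((d[:, None] / d[None, :]) ** (2 / (p - 1))).sum(axis=1)

x, y = np.zeros(4), np.ones(4)            # two flattened 2x2 "tensors"
print(tensor_distance(x, y, (2, 2)) > 0)  # True: distinct tensors are apart
u = memberships([1.0, 2.0, 4.0])          # hypothetical distances to 3 centers
print(round(u.sum(), 6))                  # 1.0: memberships sum to one
```

Note that the Gaussian-kernel metric matrix G is positive semidefinite, so the square root in the distance is well defined.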
ACC is calculated by:

$ACC = \max_{m} \frac{\sum_{i=1}^{N} \mathbf{1}\{ l_i = m(c_i) \}}{N}$  (17)

where $l_i$ and $c_i$ indicate the ground-truth label and the cluster assignment produced by the algorithm, respectively, and $m$ ranges over all possible one-to-one mappings between clusters and labels. ARI is used to measure the agreement between two partitions of a set of objects, where $U$ denotes the true labels of the objects in the dataset and $U'$ denotes the clustering generated by a specific algorithm; a higher value of $ARI(U, U')$ indicates that the algorithm produces more accurate clustering results. To study the performance and generality of the different algorithms, experiments are performed on three datasets: MNIST, STL-10, and NUS-WIDE.

Experimental Results on Complete Datasets

This section evaluates the clustering performance of the variational autoencoder based high-order fuzzy c-means algorithm (VAE-HOFCM) compared to other algorithms. The input dimensions of the three datasets are 784, 3072 and 500, respectively. The dimension of the VAE hidden layer is set to 25, and the number of training iterations on the training set to 50. After the low-dimensional features are obtained, clustering starts, with the membership factor set to 2.5. The required clustering centers are calculated and the final normalized membership matrix $U$ is returned to obtain the clustering result. The clustering results are shown in Tables 1 and 2.

Table 1 displays the best unsupervised clustering accuracy of each algorithm. For MNIST data clustering, the proposed VAE-HOFCM algorithm achieves the highest accuracy of 85.54%. Compared with VAE clustering, the sum of the VAE-HOFCM encoder training time and clustering running time is slightly larger, but the clustering accuracy is improved. Moreover, the clustering performance and running time of the VAE-HOFCM algorithm are generally better than those of traditional clustering algorithms, such as k-means and fuzzy c-means.
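The unsupervised clustering accuracy above maximizes agreement over one-to-one mappings between cluster labels and ground-truth labels. A small sketch using the Hungarian algorithm; the scipy-based implementation is our choice for illustration and is not specified in the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_labels):
    # Build the cluster-vs-label contingency matrix, then pick the one-to-one
    # mapping m that maximizes sum_i 1{l_i = m(c_i)} (Hungarian algorithm).
    l = np.asarray(true_labels)
    c = np.asarray(cluster_labels)
    n = max(l.max(), c.max()) + 1
    counts = np.zeros((n, n), dtype=int)
    for ci, li in zip(c, l):
        counts[ci, li] += 1
    rows, cols = linear_sum_assignment(-counts)   # negate to maximize matches
    return counts[rows, cols].sum() / l.size

# Cluster ids are arbitrary: clusters (1, 0, 2) map onto labels (0, 1, 2)
print(clustering_accuracy([0, 0, 1, 1, 2, 2], [1, 1, 0, 0, 2, 2]))  # 1.0
```

This makes ACC invariant to permutations of the cluster ids, which is why it is a fair metric for unsupervised methods.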
Since the dimension of the STL-10 dataset is higher and the information content larger, the running time for feature extraction and clustering is relatively long; nevertheless, the proposed algorithm still obtains the best results. Visual and text features are extracted from the NUS-WIDE dataset and concatenated to form feature vectors, which are then clustered; the clustering results again demonstrate the performance of the proposed algorithm. Table 2 shows the clustering results in terms of ARI(U, U'): VAE-HOFCM produces higher values than the other algorithms in most cases. K-means usually has the worst performance and the longest running time, whereas VAE and DEC achieve better results than HOPCM. ARI is not used as an indicator on the STL-10 dataset because its value may be negative when the clustering accuracy is low.

There are two reasons for these results in terms of ACC and ARI. On the one hand, HOFCM integrates the learned characteristics of different modalities, uses the cross product to model the nonlinear correlations among the various modalities, and uses the tensor distance as a measure to capture the high-dimensional distribution of multimedia data. On the other hand, the VAE successfully learns low-dimensional features and achieves the best performance in feature dimension reduction and clustering accuracy. The VAE has good data clustering and data generation performance. Feature extraction is carried out by the VAE to reduce the dimension to two. The resulting categories have clear boundaries, as shown in Figure 3, indicating that the VAE has effectively extracted low-dimensional features and has strong data feature expression ability. To obtain good performance under the three constraints of feature dimension, clustering performance and reconstruction quality, the quality of data reconstruction in different dimensions is compared.
Figure 4 shows the reconstruction performance of the learned generative model for different latent dimensions. When the latent space is set to 25, the method obtains a good reconstruction quality.

Experimental Results on Incomplete Datasets

To estimate the robustness of the proposed algorithm, each dataset is divided into complete and incomplete subsets, and the incomplete subsets are used for simulation analysis. Since clustering performance depends on the number of missing values, six miss rates are set: 5%, 10%, 15%, 20%, 25% and 30%. Figure 6 shows the clustering accuracy (ACC) as the missing ratio increases on the MNIST and NUS-WIDE datasets, and Figure 7 shows the corresponding average ARI values. The results show that an increasing missing rate leads to decreasing clustering accuracy; however, the proposed algorithm still achieves high accuracy, because the VAE successfully extracts features from incomplete data and reduces their discrepancy from the complete-data features. According to Figures 6 and 7, as the missing rate increases, the average ACC and ARI decrease, indicating that missing values destroy part of the original data content. The average ACC and ARI values of the VAE-HOFCM algorithm are significantly higher than those of the other three methods at all six missing rates. Therefore, VAE-HOFCM achieves the best clustering performance, indicating that it is also effective for clustering incomplete data. Data with different missing rates are then reconstructed, as shown in Figure 8: the inputs are incomplete data with different missing rates, and the outputs are the data recovered by the VAE. The reconstruction results show that the proposed algorithm not only improves the clustering accuracy, but also ensures that the data can be reconstructed with high quality.
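The incomplete-data setting above can be reproduced with a simple random mask; the helper below is an illustrative sketch (the paper does not specify the masking mechanism, so uniform entry-wise dropping with a zero placeholder is our assumption).

```python
import numpy as np

# the six experimental missing rates used in the paper
MISS_RATES = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]

def mask_missing(X, miss_rate, seed=0):
    """Randomly drop a fraction `miss_rate` of the entries of X
    (set to 0 here as a placeholder) and return the incomplete copy
    together with the boolean mask of removed entries."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < miss_rate
    X_inc = X.copy()
    X_inc[mask] = 0.0
    return X_inc, mask
```

The incomplete copies would then be fed to the trained VAE, whose reconstructions are clustered by HOFCM and scored against the labels of the untouched originals.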
The variational autoencoder also has a de-noising function. As shown in Figure 9, noise is added to the input data, and the VAE effectively de-noises it and restores the original input image.

Conclusions

In this paper, a VAE-HOFCM algorithm that improves the performance of multimedia data clustering has been proposed. Unlike many existing techniques, the VAE-HOFCM algorithm learns the data features with an improved VAE network and uses a tensor-based FCM algorithm to cluster them in the feature space. In addition, VAE-HOFCM captures as many features as possible of both high-quality and incomplete multimedia data. In the experiments, the performance of the proposed scheme has been evaluated on three heterogeneous datasets: MNIST, STL-10 and NUS-WIDE. Compared with traditional clustering algorithms, the results show that the VAE achieves a high compression rate of the data samples, saves memory space significantly without reducing clustering accuracy, and enables low-end devices in wireless multimedia sensor networks to cluster large-scale data. In addition, the VAE can effectively fill in missing data and generate specified data at the terminal, so that incomplete data can be better utilized and analyzed. Although the VAE needs to be trained first, the total time of training and clustering is still less than that of most clustering algorithms. Therefore, when performing clustering tasks on low-end equipment with limited computing power and memory space, the trained VAE-HOFCM can be adopted.
Nonlinear Dynamics of the Financial–Growth Nexus in African Emerging Economies: The Case of a Macroprudential Policy Regime

A panel data analysis of nonlinear financial-growth dynamics in a macroprudential policy regime was conducted in a panel of 10 African emerging countries from 1983–2020, where the period 1983–1999 was a non-prudential regime and the period 2000–2020 a prudential regime. The paper explored the validity of the inverted U-shape hypothesis in the prudential policy regime, as well as the threshold level above which finance boosts growth, using the panel smooth transition regression (PSTR) model. The PSTR model was adopted due to its ability to address the problems of endogeneity and heterogeneity in a nonlinear framework. The results reveal evidence of a nonlinear effect between financial development and economic growth, where the minimum level of financial development is found to be 60.5% of GDP, above which financial development increases growth in African emerging countries. The findings confirmed a U-shaped relationship, contradicting the inverted U-curve hypothesis. The focal policy recommendation is that the financial sector should be given adequate consideration and recognition by, for example, implementing appropriate financial reforms, developing a suitable investment portfolio, and keeping spending on technological investment in Africa's emerging countries below the threshold. Again, caution is needed when introducing macroprudential policies at a low level of the financial system.
Introduction

Over the past decade, numerous African countries' financial systems have experienced substantial changes in an attempt to transition the sector from a state-owned to a market-oriented one, allowing the financial sector to efficiently carry out its fundamental mandate of financial intermediation. The primary goal of these changes was to broaden financial development in order to mobilize more funds, projects and resources with the highest probability of maximization, thereby boosting economic growth and alleviating income inequality and poverty. In the case of African countries, however, these financial developments have resulted in sluggish growth as well as a high degree of poverty and inequality. To date, there have been controversies in both the theoretical predictions and the empirical literature regarding the role played by financial development in economic growth. Theories such as the Schumpeter hypothesis, later proposed by Levine (2003) as the 'more finance, more growth' hypothesis, postulate the importance of financial institutions in supporting productive investments and stimulating innovation, which would subsequently lead to growth. On the other hand, Robinson (1952) developed the "demand-following" theory, which contends that finance is led by, rather than leads, economic growth, and that finance plays a minor role in economic growth. According to the non-monotonic relationship, endogenously emerging financial institutions generally have a positive effect on growth, though the magnitude varies with the type of financial development, and as the level of finance increases, growth may increase as well before the threshold point (Greenwood and Jovanovic 1990). On the other hand, there is a strand of studies contending that financial development triggers growth through a variety of factors, rather than by itself.
These include trade openness, income per capita, government size, inflation (Yilmazkuday 2011), institutional quality (Law and Azman-Saini 2012) and financial sector policies (Ang 2008). As a result of these factors, the impact of financial development is determined by the prevailing economic conditions. The contradictions in these results may be due, among other things, to differences in model specifications, data sets, estimation techniques, or the level of the economy being studied. The recent global financial crisis has led to a reassessment of both policymakers' and academics' prior conclusions (Law and Singh 2014). As a result, central banks are now pursuing a robust expansionary monetary policy with new unconventional tools, necessitating the implementation of macroprudential policies to mitigate systemic risk. These policies are formulated and executed with a strong emphasis on the financial system and the economy as a whole. Emerging economies are second only to advanced countries in implementing macroprudential policies, with a 3.5 index on average, according to the World Bank. The analysis and formulation of these new policy programmes are still evolving, with very limited research. Considering the high average adoption of macroprudential policies in emerging economies' responses to numerous systemic risks (e.g., financial crises), and the contradictions emerging in the literature, there is a pressing need for this subject to be investigated in African emerging countries. This study extends the existing literature on the financial-growth relationship, following the seminal work of Ouedraogo and Sawadogo (2020), who employed the panel smooth transition regression (PSTR) in a panel of sub-Saharan African countries over the period 1980-2017.
In their model, real GDP was utilized to measure economic growth, while the ratio of bank credit to the private sector to GDP was used to capture financial development, controlling for inflation and government expenditure. Their findings supported the literature on the "more finance, more growth" hypothesis. The current study seeks to extend the existing debate on this subject, with roots going back to the seminal work of Greenwood and Jovanovic (1990), known as the inverted U-curve hypothesis, and many others on the so-called inverted U-shaped relationship, and then to add a twist by introducing a distinction between a macroprudential and a non-macroprudential policy regime, referring to the periods 1983-1999 and 2000-2020. This allows the researcher to assess whether these policies triggered the financial-growth relationship in African emerging economies. Furthermore, the author seeks to include major monetary policy variables, known as macroprudential monetary policy instruments (i.e., financial-institution-targeted instruments), that were adopted by central banks in various countries during the 2007 crisis. According to the existing monetary policy literature, macroprudential policy has been argued to have both direct and indirect effects on economic growth, which were not captured in the Ouedraogo and Sawadogo (2020) model. Furthermore, their study did not provide the threshold point above which financial development improves economic growth. The current study supports the view that, as countries switch from a non-macroprudential policy regime to a macroprudential policy regime, the correlation between financial development and economic growth might differ.
It is because of these inconclusive and sometimes conflicting views that this study seeks to fill the gap in the literature by incorporating and examining the impact of financial development and these monetary policy variables on economic growth in African emerging economies, to which most existing studies have not given attention, and also by providing the threshold level of financial development beyond which finance adversely affects growth. This will contribute to the body of evidence in the African literature. The researcher constructed a balanced panel of 10 African emerging markets covering the period 1983-2020. The period 1983-1999 is the non-macroprudential policy regime, while the period 2000-2020 is the macroprudential policy regime. The 10 African emerging countries are South Africa, Namibia, Botswana, Mali, Mozambique, Eswatini, Burkina Faso, Nigeria, Tanzania and Uganda. This study proposes to clarify the ongoing debate by analyzing the nonlinear effects of financial development on economic growth, employing the panel smooth transition regression (PSTR) model, as well as random and fixed effect estimators, as the baseline models. The PSTR is not a new model in the African context. However, in this study, the researcher aims to extend Ouedraogo and Sawadogo's (2020) African finance-growth model by introducing a lag to all variables, following González et al. (2017), and including macroprudential variables. Furthermore, this model allows for an examination of the impact of financial development across its various phases. The originality of the PSTR model lies in the fact that individuals can shift between groups and over time, based on changes in the threshold variable. Because parameters fluctuate smoothly as a function of a threshold variable, the PSTR model also gives a parametric solution to the cross-country variability and time instability of the finance-growth coefficients.
These features cannot be accounted for by dynamic or static panel techniques, nor by interaction effects. The PSTR model could provide new insights, since it endogenously identifies different regimes that correspond to distinct equations, as well as the optimal degree of financial development, i.e., the threshold value, with respect to which the sign of the relationship could differ. Lastly, the inspiration for this study emanated not from a lack of studies examining the nonlinear effect of financial development on growth in African countries, but more generally from the fact that this relationship may differ from the one that exists in the literature due to differences in the smoothness and the adopted financial and macroeconomic policies. In fact, the findings contradict Greenwood and Jovanovic's inverted U-curve in both the macroprudential and non-macroprudential policy regimes. What is interesting, however, is that the adoption of these policies triggered the finance-growth relationship in these countries. Moreover, during the non-macroprudential policy regime, the impact appears to be positive in the high regime, albeit statistically insignificant. The remainder of the paper is organized as follows: the literature on the subject is briefly reviewed in Section 2, while Section 3 gives an overview of the model. The results of the PSTR and FE are discussed in Section 4, while Section 5 provides concluding remarks and discusses policy implications.

Theoretical Debate on Financial Growth

The financial-growth relationship has spawned a slew of channels. The central theoretical debate continues around four hypotheses: the Schumpeter hypothesis (Schumpeter 1934), the "supply-leading" and "demand-following" hypotheses (Patrick 1966), the endogenous growth theory of Romer (1986) and the non-monotonic finance-growth relationship of Greenwood and Jovanovic (1990).
The current study builds on the non-monotonic relationship developed by Greenwood and Jovanovic (1990) as its theoretical basis, going as far back as the hypothesis developed by Schumpeter (1934), which emphasizes the importance of financial institutions in supporting productive investments and stimulating innovation and was subsequently renamed the "more finance, more growth" theory by Levine (2003). In contrast to the Schumpeter hypothesis (Schumpeter 1934), Robinson (1952) developed the "demand-following" theory, which contends that finance is led by, rather than leads, economic growth, and that finance plays a minor role in economic growth. Following this line of reasoning, finance is merely a by-product or an outcome of growth. The "supply-leading" theory, later developed by King and Levine (1993), contends that financial development is an essential precondition for economic growth; as a result, finance leads to economic growth and causation flows from financial development to economic growth. The quantity and composition of financial development variables, according to their proponents, affect economic growth by directly increasing savings in the form of financial assets, resulting in capital creation and hence economic expansion. The theoretical work of Romer (1986) contributed to the emergence of the endogenous growth theory by arguing that the financial sector plays an important role in boosting growth, particularly by mobilizing savings, allocating resources efficiently, monitoring costs, diversifying risks and facilitating the exchange of goods and services.
The non-monotonic relationship between financial development and growth, hypothesized by Greenwood and Jovanovic (1990), posits that endogenously emerging financial institutions generally have a positive effect on growth, although the magnitude varies with the level of economic development; that is, as financial development increases, economic growth may increase too before a certain level of financial development is reached.

Empirical Review

After scrutinizing the empirical literature on this subject, the researcher found that existing studies build on three strands: the Schumpeter hypothesis (Schumpeter 1934), later proposed by Levine (2003) as the 'more finance, more growth' hypothesis (Goldsmith 1969; King and Levine 1993; Arcand et al. 2012; Bist 2018; Jobarteh and Kaya 2019; Elijah and Hamza 2019; Abeka et al. 2021); the hypothesis that financial development leads to low growth (Gouider and Trabelsi 2006; Menyah et al. 2014; Elijah and Hamza 2019; Ho and Iyke 2020); and the non-monotonic relationship (Greenwood and Jovanovic 1990; Acemoglu and Zilibotti 1997; Rioja and Valev 2004; Arcand et al. 2012; Samargandi et al. 2015; Doumbia 2015; Ibrahim and Alagidede 2018; Oro and Alagidede 2018; Opoku et al. 2019; Swamy and Dharani 2019; Machado et al. 2021; Abu-Lila et al. 2021). There is inconsistency among these strands, as the Schumpeter hypothesis (Schumpeter 1934) posits 'more finance, more growth', while the Greenwood and Jovanovic hypothesis claims that there is a nonlinear relationship between financial development and economic growth. Even in the African literature, a strong paradox has emerged among the studies, as two different findings have been reported. Some support the nonlinearity hypothesis (Ibrahim and Alagidede 2018; Ouedraogo and Sawadogo 2020; Machado et al.
2021), while others claim that there is linearity between the two variables (Assefa and Mollick 2017; Bist 2018; Jobarteh and Kaya 2019; Elijah and Hamza 2019; Chen et al. 2020). In this section, both the global and the African literature are reviewed. Going as far back as the study by King and Levine (1993), which tested the Schumpeter hypothesis (Schumpeter 1934) over the period 1960-1989 using a two-stage least squares (2SLS) model in a panel of 57 countries, their findings confirmed the Schumpeter hypothesis (Schumpeter 1934). These findings contradict the results reported by Gouider and Trabelsi (2006) in a panel of 66 countries covering the period 1960-1999, using traditional cross-sectional simple panel and dynamic panel techniques. They used the standard deviation of per capita real GDP as a proxy for economic growth, while the broad money stock (M3) was used as a proxy for financial development. Their findings confirmed a negative relationship in developed countries, while in developing countries it was insignificant. Arcand et al. (2012) documented that finance starts having a negative effect on output growth when credit to the private sector reaches 100% of GDP. Later, the study by Menyah et al. (2014) examined the causal relationship in a panel of 21 African economies over the period 1965-2008, applying a panel bootstrapped approach to Granger causality. The empirical results show limited support for finance-led growth. Samargandi et al. (2015) used panel data on 52 middle-income countries over the period 1980-2008, applying pooled mean group estimation in a dynamic heterogeneous panel setting. A bank-based financial index was used to capture financial development.
They found an inverted U-shaped relationship, which contradicts the studies by King and Levine (1993) and Gouider and Trabelsi (2006), while the study by Doumbia (2015) found the saving channel to be the main determinant of the financial-growth relationship in a panel of 43 advanced and developing economies over the period 1975-2009. These results further contradict the Nigerian study by Adeniyi et al. (2015), which used time series data covering the period 1960-2010 and a nonlinear threshold model; their findings supported the literature on the U-shaped relationship. More recently, Assefa and Mollick (2017) revisited the relationship. In their model, economic growth was captured by real GDP, while international financial integration was used as a proxy for financial development. The results were similar to those of the study by King and Levine (1993), but contradicted those of Samargandi et al. (2015). The results documented by Assefa and Mollick (2017) were further supported by Bist (2018) in a panel of 16 African and non-African low-income countries, using the fully modified and dynamic OLS techniques. Financial development was captured using credit to the private sector, while the log of real GDP was used to capture economic growth. The results documented by Bist (2018) contradict those reported by Ibrahim and Alagidede (2018) in a panel of 29 sub-Saharan African (SSA) countries over the period 1981-2015, using the Hansen threshold model, as their findings support the non-monotonic hypothesis (U-shaped), which contradicts the findings documented by Samargandi et al. (2015) and Oro and Alagidede (2018). In their model, the ratio of private and domestic credit to GDP was used to capture financial development. The study by Oro and Alagidede (2018) utilized panel GMM on data for 30 non-oil-producing and 30 oil-producing countries grouped by the quality of their institutions over the period 2006 to 2015. Their findings confirmed the inverted U-shape.
Jobarteh and Kaya (2019) studied the same subject in African countries, using a PSTR model over the period 1980-2014. GDP per capita was used to capture economic growth, with the financial development index as a proxy for financial development. Their findings contradict the study by Ibrahim and Alagidede (2018), but support the empirical literature that believes in the 'more finance, more growth' hypothesis; they further reject the existence of nonlinearity in African economies. Elijah and Hamza (2019) used a vector error correction model (VECM) in Nigeria covering the period 1981-2015, using broad money supply as a proxy for financial development. Their finding contradicts the study by Samargandi et al. (2015), but supports that by Gouider and Trabelsi (2006). The study by Swamy and Dharani (2019) investigated the non-monotonic effect of finance on growth in a panel of 24 advanced economies over the period 1983-2013. In their model, economic growth was captured by GDP growth (annual %), while domestic credit to the private sector (% of GDP) was used to capture financial development. The results support the non-monotonic hypothesis (inverted U-shape), where the threshold was found to be 142% of GDP. These results contradict those reported by Ibrahim and Alagidede (2018), but support Samargandi et al. (2015). Asteriou and Spanos (2019) examined the finance-growth relationship during financial crises in a panel of 26 EU countries over the period 1990-2016. Their main aim was to find the impact of financial development on growth before and after financial crises. The results show that before a crisis, financial development promotes economic growth, while after a crisis it hinders economic activity. Chen et al. (2020) studied the asymmetric finance-growth relationship in Kenya using the Nonlinear Autoregressive Distributed Lag (NARDL) model, covering the period from 1972 to 2017.
The financial development depth indicator was used to capture financial development. Their findings document the inflation and government channels as the main determinants of the financial-growth relationship, claiming that no direct impact exists between the two variables in Kenya (Yilmazkuday 2011). However, the Ghanaian study by Ho and Iyke (2020) contradicts the Kenyan study, as it documents the existence of a negative direct effect between the two variables using the ARDL model over the period 1975-2014, which supports the studies by Elijah and Hamza (2019), Ibrahim and Alagidede (2020) and others. Ouedraogo and Sawadogo (2020) used PSTR estimates for SSA countries over the period 1980-2017, modelling the ratio of bank credit to the private sector to GDP as a proxy for financial development, with GDP per capita as a proxy for economic growth. Controlling for openness, inflation and government expenditure, their findings contradict the study by Ho and Iyke (2020). Aluko et al. (2020) studied the same subject in a panel of 33 SSA countries, using a panel causality test covering the period 1990-2015. Their findings document a bidirectional relationship between finance and growth, which contradicts the study by Ouedraogo and Sawadogo (2020). Abu-Lila et al. (2021) tested the non-monotonic hypothesis in Jordan during the period 1990-2019, using the Johansen cointegration test. Their results document evidence of a non-monotonic (inverted U-shaped) relationship, supporting the study by Swamy and Dharani (2019), which has lately been supported by Machado et al. (2021) in a panel of 36 SSA countries, covering the period 1980-2015 and using the SGMM. In their model, economic growth is captured by the log of real GDP per capita, while financial development is captured by the natural logarithm of domestic credit to the private sector. However, different results were documented by Ustarz et al. (2021) in the same region as adopted by Machado et al.
(2021), using the same model. However, Ustarz et al. (2021) used the financial development index, scaled from zero to 100, with agricultural value as a proxy for financial development, covering the period 1990-2018. Their findings support Schumpeter's hypothesis. The study by Ustarz et al. (2021) is further supported by Abeka et al. (2021), where financial development is captured by telecommunication infrastructure.

Research Methods and Data Adopted for This Study

This study uses variables suggested by both theory and the literature as those that explain the financial-growth relationship. However, the analysis was extended by adopting the concept of macroprudential policies in determining the nonlinear dynamic effect of financial growth in African emerging economies, as these policies were applied in these countries and it has been proposed that they may hamper economic activity (Caldera-Sánchez and Röhn 2016). Therefore, financial-institution-targeted instruments were included in the model to control for prudential policy effects, as they may affect growth explicitly or implicitly and were not captured in the Ouedraogo and Sawadogo (2020) model. To the researcher's knowledge, these policies, as well as their consequences for macroeconomic performance, remain a point of contention. Economic growth (measured by the log of GDP per capita at constant prices) (growth) was used, as Beck et al. (2007) argue that financial development affects growth through income levels (with financial development measured by domestic credit to the private sector as a share of GDP) (DCPS).
While private credit (DCPS1) was used as a robustness check, the control variables were macroprudential policies (i.e., financial-institution-targeted instruments) (MPIF), inflation (INFL), investment (measured by gross fixed-capital formation) (INV), trade openness (TR) and government expenditure (measured by government final consumption expenditure as a share of GDP) (G). For the sensitivity analysis, the researcher added tourism development (TOD), proxied by the number of international tourist arrivals. The related financial-institution measures are aimed at the balance sheets of banks, which influence the provision of credit to the economy. The buildup of extra capital may limit overly rapid credit expansion by raising the cost of providing new loans. These resources can be released in times of financial stress to avert credit constraints and absorb bank losses. Banks with larger credit provisions and higher capitalization reduce the likelihood of a financial crisis and improve the real economy's net benefits. All the control variables were expected to be positively related to economic growth. The variables were extracted from the WDI (2021) and the Cerutti et al. (2017) database. The unit-root test was not appropriate for this study, as the analysis deals with monotonic data and does not require integration or cointegration.

Panel Smooth Transition Regression Model

Following González et al. (2005), the study builds a PSTR model for the African emerging economies to evaluate the nonlinear dynamic effect of the financial-growth relationship. The simplest case of the PSTR model, with a single transition function in two regimes illustrating the threshold effect of financial development on economic growth, is as follows:

Growth_{it} = \mu_i + \lambda_t + \beta_0 \, DCPS_{it} + \beta_1 \, DCPS_{it} \, g(q_{it}; \gamma, c) + \theta' K_{it} + e_{it}, \qquad (1)

where Growth_{it} is the dependent variable captured by the log of GDP per capita at constant prices, then i = 1, . . . , N, and t = 1, . . .
, T indicate the cross-section and time dimensions of the panel, respectively, whereas λ_t and μ_i denote the time and fixed individual effects, correspondingly, K_it is the vector of control variables (MPIF, INFL, INV, TR and G) and the error term is denoted by e_it. Following the work documented by Granger and Terasvirta (1993) and González et al. (2017), the transition function in the logistic form g(q_it; γ, c) is a continuous function of the transition variable q_it, bounded between 0 and 1 and defined as:
g(q_it; γ, c) = (1 + exp(−γ ∏_{j=1}^{m} (q_it − c_j)))^{−1}, γ > 0. (2)
In (2), c = (c_1, . . . , c_m)′ is an m-dimensional vector of threshold parameters, where the slope parameter denoted by γ controls the smoothness of the transitions. Moreover, γ > 0 and c_1 < . . . < c_m are restrictions imposed for identification purposes. In practice, for m = 1 or m = 2, respectively, one or two thresholds of financial development occur, around which the impact on economic growth is nonlinear 1. This nonlinear impact is represented by a continuum of parameters between the extreme regimes. For m = 2, the transition function has a minimum at (c_1 + c_2)/2 and reaches a value of 1 for both the low and high values of q_it. Therefore, if γ tends to infinity, the model becomes a three-regime threshold model. However, it is reduced to a homogenous or linear fixed-effects panel regression when the transition function becomes constant, i.e., when γ tends to 0. As noted in González et al. (2017), before estimating Equation (1), there are three crucial tests that need to be undertaken, which are (1) testing for the appropriate transition variable among the set of variables included as candidates (DCPS, MPIF, INFL, INV, TR and G), (2) testing the monotonic hypothesis and (3) testing the sequence for selecting the order m of the transition function, using the LM-type test.
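The behaviour of the logistic transition function in Equation (2) can be sketched numerically. The following is an illustrative sketch only; the parameter values are hypothetical, and the 60.5% location parameter is used purely as an example placement:

```python
import numpy as np

def transition(q, gamma, c):
    """Logistic transition function g(q; gamma, c) of the PSTR model
    (Gonzalez et al. 2005): bounded between 0 and 1.

    q     : transition variable (e.g. credit to the private sector, % of GDP)
    gamma : slope parameter (> 0) controlling the smoothness of the transition
    c     : one or more location (threshold) parameters, c1 < ... < cm
    """
    c = np.atleast_1d(c)
    prod = np.ones_like(np.asarray(q, dtype=float))
    for cj in c:
        prod = prod * (q - cj)          # product over the m location parameters
    return 1.0 / (1.0 + np.exp(-gamma * prod))

q = np.linspace(0.0, 120.0, 7)          # illustrative credit-to-GDP values

# m = 1: a single threshold; small gamma gives a smooth transition,
# larger gamma approaches a sharp two-regime threshold model.
g_smooth = transition(q, gamma=0.05, c=60.5)
g_sharp = transition(q, gamma=1.0, c=60.5)

# m = 2: g has a minimum at (c1 + c2)/2 and equals 1 at both extremes,
# so a very large gamma would yield a three-regime threshold model.
g_m2 = transition(q, gamma=0.05, c=[40.0, 80.0])
```

For m = 1 the function rises monotonically through 0.5 exactly at the location parameter, which is the sense in which the "low" and "high" regimes of financial development are separated smoothly rather than abruptly.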
The LM-type test contains two groups of misspecification tests: group 1 comprises the Lagrange multiplier χ² (LM_χ) and Lagrange multiplier F (LM_F) tests, and group 2 comprises the wild bootstrap (WB) and wild-cluster bootstrap (WCB); their statistics, together with the corresponding p-values, will be used in the three tests mentioned above. The theoretical reasoning behind the LM-type test is that, for all these tests, the p-value should be small. Note that the WB and WCB will be utilized as robustness checks for linearity against the PSTR. To be precise, amongst the candidate transition variables, the variable to be used for testing for nonlinearity should have the smallest p-value compared to all the variables included as candidates. On the other hand, for the monotonic hypothesis between financial development and economic growth, the p-values of both LM_χ and LM_F should be zero or close to zero (below the chosen significance level), which would signify the rejection of the H_0 of linearity between financial development and economic growth. Then, the WB and WCB will support the argument that a nonlinearity still exists amongst the variables. Lastly, for the sequence for selecting the order m of the transition function, following González et al. (2017), the study will test up to order m = 3. The sequence of hypotheses under H_0* is given in note 2; if the sequential tests fail to reject, m = 1 will be selected as the default (Teräsvirta 1994; Teräsvirta et al. 2010). Finally, the author evaluated the correlation between financial development and economic growth using the fixed effect (FE) and random effect (RE) models. In these models, the author generated the squared term of domestic credit to the private sector as a share of GDP (financial development) to capture the nonlinear form of financial growth in the African emerging markets.
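The sequential selection of the order m described above can be summarized as a small decision rule. This is a sketch of the selection logic only, not of the LM statistics themselves; the rule of choosing the order with the strongest rejection follows Teräsvirta (1994), and the function name and the p-value inputs are illustrative assumptions:

```python
def select_order(p_joint, p_h03, p_h02, p_h01, alpha=0.05):
    """Sketch of the sequential order-selection rule for the PSTR
    transition function.

    p_joint : p-value of H0*: b3 = b2 = b1 = 0 (linearity, up to order 3)
    p_h03   : p-value of H03: b3 = 0
    p_h02   : p-value of H02: b2 = 0 given b3 = 0
    p_h01   : p-value of H01: b1 = 0 given b3 = b2 = 0
    Returns the selected order m (0 meaning a linear fixed-effects model).
    """
    if p_joint >= alpha:
        return 0                     # linearity not rejected: no transition
    # Pick the order whose conditional hypothesis is rejected most strongly;
    # if none of them rejects, fall back to the default m = 1.
    strongest_p, m = min((p_h03, 3), (p_h02, 2), (p_h01, 1))
    return m if strongest_p < alpha else 1
```

For example, a strongly rejected H02 alongside non-rejected H03 and H01 would point to m = 2, while large p-values throughout the conditional sequence would leave m = 1 as the default.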
The author estimates Equation (3), which includes interaction terms in order to account for nonlinearity:
Growth_it = μ_i + β_1 DCPS_it + β_2 DCPS_it² + φ′K_it + e_it. (3)
Equation (3) incorporates an interaction with a quadratic component to evaluate the nonlinear influence of the transition variable, which is financial development. With the addition of an interaction term, it is possible to see whether the marginal effect of financial development differs at greater levels of this variable. The other variables of Equation (3) are defined as in Equation (1). The Hausman test will be used in order to decide between the FE and random effects (RE) estimates, under the full set of random effects assumptions.
Empirical Analysis of the Study
The descriptive statistics of the different variables are reported in Appendix A (Table A1). As described previously, the PSTR contains three stages, which include finding the appropriate transition variable among all the candidate variables, testing the linearity and finding the sequence for selecting the order m of the transition function using the LM-type test, with the proposed WCB and WB serving as robustness checks, before estimating the PSTR model. The results of the three stages are presented separately in the sections that follow. Table 1 presents the results of all the stages of the PSTR. The first section of Table 1 shows the results for the appropriate transition variable in the panel regression of financial development and economic growth. The results show that the p-values of both the LM_F test (0.00009) and the LM_χ test (4.556 × 10^−10) signify DCPS as the most suitable choice of transition variable for this study, as these p-values are smaller than those of the other variables included as candidates. The results of the homogeneity test are then reported in the second section of Table 1. The author generates the statistics and p-values of both LM_F (0.00) and LM_χ (2.984 × 10^−16) to test the null hypothesis of linearity, while the proposed WCB (0.00) and WB (0.00) are robustness checks.
Both the p-values of LM_χ and LM_F indicate the rejection of the null hypothesis of linearity, confirming that there is indeed nonlinearity between financial development and economic growth in the selected African emerging countries. This was further supported by the WB and WCB, signifying that nonlinearity remains between the two variables. The homogeneity results support studies documented by Assefa and Mollick (2017), Ibrahim and Alagidede (2018) and Machado et al. (2021). Lastly, the third section of Table 1 reports the results of the sequence for choosing the order m in the PSTR 2. The results fail to reject H_0, as the p-values of both the LM_F (0.59) and LM_χ (0.43) tests are large when m = 1, signifying that, when DCPS_it−1 was selected as the best transition variable, the model had one regime change separating the low level from the high level of financial development. This concludes that the model has two regimes with one transition, rejecting m = 2 (H_01) and m = 3 (H_02). The results of the LM_F and LM_χ tests were further evaluated using the WCB and WB, following Teräsvirta (1994).
Model Evaluation and the Estimated Threshold of the PSTR Model
This section reports the results of the model evaluation and the estimated threshold of the PSTR. After estimating the baseline model, following Eitrheim and Terasvirta (1996), the author first evaluated the reliability of selecting the order m = 1 as the best specification for this model, using two classes of misspecification tests: Parameter Constancy (PC) and No Remaining Nonlinearity (NRN) (González et al. 2017). Table 2 presents the results of the PC, NRN and the estimated threshold. The first section of Table 2 reports the results of the PC.
The p-values of the LM_F and LM_χ tests for parameter constancy show that the parameters are constant, while the second section of Table 2 shows the results of both the WB and WCB tests, which take heteroskedasticity as well as possible within-cluster dependence into account, suggesting that the estimated model with one transition is adequate. Lastly, the third section of Table 2 contains the results of the estimated threshold for the baseline and robustness models. The results show that the estimated financial development threshold is 60.5% of GDP in the macroprudential policy regime, while in the non-macroprudential policy regime it is 52.9% of GDP; for the robustness model it is 59.2% of GDP, close to the baseline estimate. Hence, in the first regime, i.e., when the level of financial development is below the value of 60.5% as a share of GDP, financial development reduces the level of growth. This can be justified as, in the low regime of finance, financial development may decrease economic growth through increased economic fragility. Financial innovation and financial liberalization, both of which are captured by financial development, have accumulated systemic risk (see Gambacorta et al. 2014). Higher systemic risk means more frequent and/or severe crises, which have a detrimental impact on economic growth rates. However, when financial development is above the threshold of between 52.9% and 60.5% as a share of GDP, it promotes growth by promoting capital accumulation and technological advancement through accumulating savings, mobilizing and pooling savings, creating investment information, enabling and encouraging foreign capital inflows and optimizing capital allocation. Moreover, it will decrease inequality and poverty by broadening access to financing for the poor, facilitating risk management by lowering their vulnerability to shocks, and increasing investment and productivity, which leads to increased revenue creation.
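A threshold of this kind can be cross-checked against the quadratic fixed-effects specification: with FD and FD² terms in the model, the marginal effect of financial development is β1 + 2β2·FD, which changes sign at FD* = −β1/(2β2). A minimal sketch, with hypothetical coefficients chosen only so that the turning point lands near the estimated 60.5% threshold (these are not the study's estimates):

```python
def marginal_effect(fd, b1, b2):
    """Marginal effect of financial development on growth in a quadratic
    specification growth = ... + b1*FD + b2*FD**2 + ... (d growth / d FD)."""
    return b1 + 2.0 * b2 * fd

def turning_point(b1, b2):
    """Level of FD at which the marginal effect changes sign."""
    return -b1 / (2.0 * b2)

# Hypothetical coefficients consistent with a U-shape (b1 < 0, b2 > 0):
b1, b2 = -0.04, 0.00033
fd_star = turning_point(b1, b2)   # turning point near 60.6% of GDP
```

Below the turning point the marginal effect is negative (finance reduces growth), and above it the effect turns positive, which is exactly the U-shaped pattern the regime estimates describe.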
The mean of domestic credit to the private sector (DCPS) was calculated to obtain a clear picture of which countries are at the lower/higher end of the Greenwood and Jovanovic hypothesis of financial development and economic growth. Figure 1 illustrates that the African emerging countries are at the lower end of financial development, with the exception of South Africa, which has a mean DCPS of 135.86%. There are various dynamics that might lead these countries to be at the lower end of the Greenwood and Jovanovic curve, which might, for example, be the high level of inequality in these countries, as is evident from Greyling (2021a, 2021b). Another possible factor could be the adopted policies that do not benefit the people in improving their standards of living. It has been evident that per-capita income can be a good indicator of an institution's overall development and complexity. As a result, rapid financial development is correlated with high growth. Some countries below the threshold have an average GDP per capita below USD 3000, which further supports the argument that countries with a low level of development tend to be the ones that also have low levels of financial development.
Empirical Results of the PSTR and FE Models
The results of both the PSTR and the fixed effect models are reported in Table 3, where the baseline model is the PSTR, a two-regime model with a lagged transition variable, while the fixed effect model is utilized in supporting the results of the PSTR. First, in both Model I, the macroprudential policy regime, and Model II, the non-macroprudential policy regime, the results of the baseline model (PSTR) indicate that financial development reduces economic growth, measured by β_0j, and it is significant. Furthermore, this finding is supported by the results of the FE.
A strong nonlinearity is reported between financial development and economic growth, as the results in Table 1 confirm the nonlinearity between the variables by rejecting the null hypothesis of linearity. Therefore, the results of the homogeneity test allow the estimation to generate the coefficient (β_1j) that captures the nonlinear component, which is found to be positive and highly significant. Subsequently, the impact of financial development on economic growth is conditional on the level of financial development. As a result, the study's findings imply that the changes in economic growth in terms of financial development range from low to high. The shift between these extreme regimes occurs around the associated endogenous location parameter. When comparing the macroprudential policy regime with the non-macroprudential policy regime across all the estimation tools, the author finds that, while the direction of the impact is similar, the magnitude of the DCPS coefficient in the macroprudential policy regime, when the financial system starts to develop, is far larger than its counterpart in the non-macroprudential policy regime. On the other hand, when financial development is high, above the threshold, DCPS likewise has a far larger impact on growth in the macroprudential policy regime, compared to the non-macroprudential policy regime period. Focusing on the baseline model, the magnitude below the threshold is 4.62 and 0.88, while it is 3.62 and 1.03 above the threshold, respectively. The findings of this study contribute significantly to the existing literature in understanding the nonlinear dynamics impact of financial development on economic growth in countries that have implemented macroprudential policies, as they show that integrating these policies at a low level of financial development may cause the level of economic growth to crumble.
The argument for financial development being anti-growth might be that a lower level of the financial system promotes risk and ineffective resource allocation, which may reduce the rate of savings and risk, resulting in lower economic growth. Higher systemic risk means more frequent and/or severe crises, which would have a detrimental effect on growth rates. The results are in line with findings documented by Puatwoe and Piabuo (2017) in the case of Cameroon. Note: The dependent variable is growth. Standard errors in brackets are obtained using the cluster-robust and heteroskedasticity-consistent covariance estimator, allowing for error dependency within individual countries. The (***), (**) and (*) reflect the 1%, 5% and 10% levels of significance, respectively. ESD denotes the estimated standard deviation (residuals), while p-v are the p-values. Source: Author's calculation based on WDI (2021) and Cerutti (Cerutti et al. 2017) data. Furthermore, the deployment of these policies beyond a certain threshold of financial development is found to be growth-driven by the development of the financial system, while other determinants remain constant. The possible logic behind the positive relationship above the threshold could be that financial development improves growth through technological innovations, meaning that, when the level of finance is high, it will be able to provide sufficient funds to the firms that make the most productive use of them. It was further documented in the study by Levine (2005) that financial institutions and markets may stimulate economic growth through a variety of channels, including (i) the acquisition and processing of information, (ii) easing the exchange of goods and services through the provision of payment services and (iii) mobilizing and pooling savings from a large number of investors.
This finding is consistent with the previous empirical studies that demonstrated substantial positive and negative effects of financial development on economic growth; these are Ibrahim and Alagidede (2018) for SSA countries and Oro and Alagidede (2018) for Nigeria. Finally, the findings formulate a U-shaped relationship between the two variables of interest in African countries, which supports the findings reported by Ibrahim and Alagidede (2018). The theoretical justification for the U-shaped relationship in African countries is formulated in the same way as that for the lower regime and the high regime. The current study extended the existing debate in the literature by incorporating macroprudential policy instruments (i.e., financial-institution-targeted instruments) in exemplifying the finance-growth relationship. MPIF has a negative and statistically significant impact on growth in the low regime of financial development, while in the high regime it has a positive impact. This shows that tightening the financially related measures is bad for growth in these countries during the low level of financial development while, as the financial sector develops beyond a minimum of 60.5% of GDP, these policies cease to hamper economic activity, which then results in an increase in growth. Many African economies are prone to macroeconomic instability, which can manifest as inflationary pressures. Therefore, the study controls for inflation in the model. Inflation (INFL) has a positive and statistically significant impact on economic growth in the high regime of DCPS, while in the low regime it is negative and insignificant. In Model II, inflation has a negative impact on growth in the low regime. Even the estimates of the fixed effect model support the positive impact of inflation on growth.
While inflation was discovered to be detrimental to growth during the non-macroprudential policy regime, the results show that in the period of the macroprudential policy regime, inflation promotes growth, while in the non-policy regime inflation reduces the level of growth. This is supported by the logic behind the positive relationship between inflation and output, which could be explained as follows: when the economy is not operating at full capacity, which means there is underutilized labor or resources, inflation can potentially assist in improving output. More money equals greater spending, which equals more aggregate demand. High demand, in turn, leads to more production in order to fulfill that need. Overall, this would lead to high growth. In both the macroprudential policy regime and the non-macroprudential policy regime, INV has a positive and statistically significant impact on growth. However, during the non-macroprudential policy regime, above the threshold, investment becomes insignificant. Even the estimates of the FE model in the macroprudential policy regime show that investment improves growth, while in the non-macroprudential policy regime it is insignificant. In general, emerging countries invest a larger proportion of their GDP to facilitate rapid growth, which boosts aggregate demand, which in turn boosts future productive capability. The results confirmed the findings reported in the study by Boamah et al. (2019) in a panel of 18 Asian countries. For both the macroprudential and non-macroprudential policy regimes, TR has a statistically positive effect on growth in both the low and high regimes of DCPS. Furthermore, the magnitude of the TR coefficient shows that, in the macroprudential policy regime, TR has a far larger impact on growth compared to its impact in the non-macroprudential regime. The findings were further confirmed by the estimates of the FE model. The findings support the study by Keho (2017) in Cote d'Ivoire.
The logic behind the positive impact could be that trade enables integration with global trade and sources of innovation, and boosts FDI gains. Trade openness enables economies to grow output, improving returns to scale and specialization economies, which then leads to growth. Finally, in both policy regimes, G has a statistically significant impact, showing that below the threshold of DCPS it increases growth, while in the high regime beyond the threshold it decreases economic growth. Even the estimates of the FE support the negative effect of government expenditure on economic growth. The results are consistent with the results reported by Zungu et al. (2020) in the SADC region and Greyling (2021a, 2021b) in the African emerging countries. This may be due to a variety of factors, including an increase in government activity that could impede economic activities, such as transfer payments, which tend to discourage people from taking employment, thereby reducing the level of output. Additionally, it may appear when government expenditure is financed by tax revenues. The author checked the sensitivity of the findings in the baseline model by including an additional control variable. This helps establish whether the findings reported in the baseline models are sensitive to the variables included in the system as control variables. The estimation results demonstrate that the nonlinear effect of financial development on economic growth is not sensitive to the variable included in the system as a control variable, or to the variable used to measure financial development. Indeed, the findings are very similar to those initially obtained. The newly adopted variable is found to improve growth in all models and in both the macroprudential and non-macroprudential policy regimes.
Conclusions and Policy Recommendations
The relationship between financial development and economic growth is a source of contention in the theoretical and empirical literature.
This paper aims to overcome these inconclusive results by examining the dynamics of financial growth by focusing on a macroprudential policy regime and comparing it to a non-macroprudential policy regime; in brief, by examining how the financial-growth relationship in African emerging countries was triggered by macroprudential policies implemented during the financial crisis. Using panel smooth transition regression and fixed effect models, this study examined the nonlinear dynamics implications of financial development on economic growth in African emerging markets. The study further sought to test the existence of non-monotonic hypotheses in African emerging economies, as well as to determine the threshold at which the level of finance promotes economic growth. The estimation results strongly support the presence of nonlinearities in the financial-growth relationship in African emerging economies. The study's findings reveal that, depending on the degree of the financial system, there are two extreme regimes that differentiate the impact of financial development on economic growth in the case of African emerging economies. Firstly, below the threshold of 60.5% as a share of GDP, a lower level of the financial system promotes risk and ineffective resource allocation, which may reduce the rate of savings and risk, resulting in lower economic growth. In this case, more policies aimed at ensuring improvement/financial inclusion and increasing social mobility and investment are significant. Secondly, above the threshold, a high level of financial development is found to improve growth. More specifically, after passing the minimum threshold of 60.5% as a share of GDP, having more financial institutions/systems will ease the exchange of goods and services through the provision of payment services, and the mobilization and pooling of savings from a large number of investors, which then creates job opportunities, ultimately stimulating growth. 
The findings of this study were shown to be robust to the technique and control variables applied, since the author achieved the same results utilizing the fixed effect estimator methods, even when tourism development (TOD) was included in the system. Adopting macroprudential policies, such as financial-institution-targeted instruments aimed at the balance sheets of banks, which influence the provision of credit to the economy, was found to reduce growth in the lower regime, while improving it in the higher regime. What is interesting in this study is that, when comparing the macroprudential with the non-macroprudential policy regime, the magnitude of financial development was found to have a profound impact on growth during the macroprudential policy regime. As the study found, in the lower regime the magnitude was 4.64% in the policy regime and 0.88% in the non-policy regime. Furthermore, at a high level of finance, the magnitudes were found to be 3.63% and 1.03%, respectively. The impact in the non-prudential policy regime was found to be insignificant. It is evident that the adopted macroprudential policy triggered the financial-growth relationship in the African emerging countries. The study further documents that a surge in investment and trade openness increases the level of economic growth in both the macroprudential and non-macroprudential policy regimes. Government expenditure is found to improve the level of growth up to a certain threshold, but beyond that threshold it is found to have a detrimental effect on economic growth. From a policy perspective, the findings of this study may derive various policy implications. Firstly, the presence of a financial development threshold challenges the effectiveness of policies aimed at improving the financial system to attract investment and technological innovation in African emerging countries.
Secondly, countries that are situated just below the threshold level are encouraged to give the financial sector adequate consideration and proper recognition, such as the provision of appropriate financial reform, and also to work towards formulating policies that aim to develop a suitable investment portfolio, as well as spending on technological investment in these countries. Improving these activities will create job opportunities, which will boost the well-being of the citizens and thus increase economic growth. Thirdly, the findings may help policymakers in African emerging economies to be cautious when introducing macroprudential policies. In a nutshell, these policies are growth-driven when the level of the financial system is high, beyond the minimum of 60.5% of domestic credit to the private sector as a share of GDP. The author suggests that future research should focus on a comparative study, where African emerging countries are compared to European or other countries. Conducting a panel smooth transition vector error correction model (VECM) would be a significant contribution. However, this can only be conducted in a bivariate setting. The interesting feature of the latter methodology is a Granger causality test that is conducted in a non-linear framework. Again, because the current study provides the minimum level of financial development required for African countries to improve growth, future studies that aim to find the optimal point for financial growth will be important for understanding how much financial development is required for these countries.
Funding: This research received no external funding.
Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found here: [http://data.worldbank.org/data-catalog/world-development-indicators (accessed on 24 October 2021)]. Further inquiries can be directed to the corresponding author.
Acknowledgments: I thank everyone who attended the Management of Business and Legal Initiatives (MBALI) (2021) conference in Richards Bay for their invaluable input during the early stages of this research. I also thank the University of Zululand's Department of Economics staff for their constructive criticism and helpful suggestions for this paper. Last but not least, I would like to express my gratitude to my language editor, H. Henneke, hennekeh@wcsisp.co.za, for her valuable and consistent input. Thank you so much!
Conflicts of Interest: The author declares no conflict of interest. Additionally, the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
1 González et al. (2005) consider that it is sufficient to consider m = 1 or m = 2, as these values allow for commonly encountered types of variation in the parameters.
2 The sequence for selecting the order m of the transition function is as follows: test H_0*: β_3* = β_2* = β_1* = 0 for selecting m = 3. If it is rejected, continue to test H_03*: β_3* = 0, H_02*: β_2* = 0 | β_3* = 0 and H_01*: β_1* = 0 | β_3* = β_2* = 0, for selecting m = 2. If these still fail to reject, m = 1 will be selected as the default (Teräsvirta 1994; Teräsvirta et al. 2010).
Comparative evaluation of the efficacy of customized maxillary oral appliance with mandibular advancement appliance as a treatment modality for moderate obstructive sleep apnea patients—a randomized controlled trial
Background
Obstructive sleep apnea (OSA) is quite common among the adult population, according to recent epidemiological studies. The most frequently suggested alternative treatment for mild to moderate OSA is oral appliances (OA). The purpose of the present study was to assess as well as compare the effectiveness of custom-made maxillary oral appliances against mandibular advancement appliances in the care of individuals suffering from moderate obstructive sleep apnea.
Methods
A prospective interventional study was carried out with 40 participants. Polysomnography (PSG) was done, and participants with an apnea-hypopnea index (AHI) of >15–30 were included in the research. Study participants were randomly divided into two test groups: group I was the "Control Group" (treated with a mandibular advancement device (MAD), n=20), while group II received a "customized maxillary oral appliance" (CMOA, n=20). Both groups had reference measures for AHI, blood oxygen saturation (SpO2), oro-nasal airflow via the respiratory disturbance index (RDI), and the Epworth Sleepiness Scale (ESS). Appliances were fabricated and delivered to the respective study group participants. PSG was again conducted after a period of 1 and 3 months of appliance delivery, re-evaluation was done for all the parameters, and the results were compared with the reference measurements. The data were analyzed using descriptive and analytical statistical methods. The statistical program utilized in the study was "SPSS (Statistical Package for Social Sciences) Version 20.1." After 1 and 3 months, the statistical significance between the two study groups was assessed at P<0.05.
Results
The analysis of mean AHI, SpO2, RDI, and ESS for both test groups manifested statistically significant measures (P<0.001). The study results revealed a statistically significant depletion in mean AHI scores, improvement in mean SpO2 scores, and reduction in mean RDI and ESS scores when the reference measurements were compared with the 1-month values, the 1-month with the 3-month values, and the reference measurements with the 3-month values.
Conclusion
The CMOA was effective in managing moderate OSA and has great therapeutic potential. It can be an alternative to the MAD for treating patients suffering from moderate obstructive sleep apnea.
Trial registration
The study was registered under the Clinical Trials Registry-India and the registration number is CTRI/2020/07/026936. Registered on 31 July 2020.
Background
Obstructive sleep apnea (OSA) is a condition in which the upper airway is partially or completely blocked during sleep, resulting in arterial oxygen desaturation and arousals. Excessive daytime sleepiness, cognitive problems, obesity, type 2 diabetes mellitus, hypertension, exacerbation of chronic obstructive pulmonary disease, apnea, nocturnal awakening, episodes of choking during sleep, morning headache, and other manifestations and comorbidities are all linked to OSA [1][2][3]. Severe OSA is a significant risk factor for atherosclerosis, sudden myocardial infarction, and overall mortality [4]. OSA has also been found to be an independent risk factor for cardiovascular disease, ischemic stroke, and general mortality. Patients suffering from OSA have reported poor quality of life and have also shown a notable increase in road traffic accidents [5]. Several methods for treating OSA have been well-documented in the literature.
The most common among them are behavioral and surgical weight loss therapies, positional therapy, pharmacological therapy, surgical therapies (pharyngeal and maxillomandibular surgeries), continuous positive airway pressure (CPAP), and oral appliances (OA) such as the mandibular advancement device (MAD) [7][8][9]. Among all the listed non-surgical treatment options, only CPAP and OA are highly satisfactory. CPAP therapy is considered the gold standard treatment option for people with OSA and is universally approved. CPAP, on the other hand, has a slew of drawbacks, including muscle sagging, discomfort from pressure sensation and leakage, skin inflammation, machine noise, and other issues that make it unsuitable for many users [10][11][12]. MAD has emerged as a feasible replacement and the most accepted and chosen therapy for mild to moderate OSA patients. Many authors have confirmed the role of MAD, in comparison to CPAP, in lowering AHI episodes and enhancing quality of life among affected individuals [13,14]. MAD works by clasping the lower jaw in an advanced and descended position, which enlarges the upper airway space and substantially reduces the AHI [15]. However, various side effects of MAD have been observed in several long-term analysis studies, including dental pain, temporomandibular joint issues, xerostomia or excess salivation, and gum irritation [16]. Current analysis suggests that 936 million individuals globally are suffering from OSA. The elevated occurrence of OSA in the general public, as indicated by the World Health Organization, together with the complexities and complications of the existing devices used in managing OSA, has created a need for a new effective treatment modality in this field.
The customized maxillary oral appliance (CMOA) is an oral appliance designed to be anchored on the maxillary arch at an increased vertical dimension of 2 mm, which facilitates an advanced and descended position of the lower jaw and results in an enlarged upper airway. To treat OSA, it employs the principles of the mandibular advancement splint and the tongue-holding appliance. The CMOA increases the vertical dimension within limits at the present occlusion, and hence the chances of changes in the dentition, as seen with MAD, are eliminated [17]. Because MAD is the most widely used oral appliance for treating OSA, it is employed in this study as the "Control Group" to assess the efficiency of the newly created customized maxillary oral appliance against it. The null hypothesis of the current study was that there is no difference between the customized maxillary oral appliance and the mandibular advancement device in the effects of the treatment for moderate OSA. By comparing its efficacy to MAD, the current study intends to introduce this new oral appliance, the CMOA, as a unique remedial choice for individuals suffering from moderate OSA. Source of data This study enrolled patients of both genders, aged 30 to 50 years, who presented to the Sleep Medicine Department of AVBRH and JNMC, Wardha, and were diagnosed with moderate OSA. The duration of the study ranged from June 2020 to May 2021. Ethical aspects The study received approval from the Institutional Ethical Committee (Ref. no-DMIMS(DU)/IEC/2020-21/8811). The study was filed as a randomized controlled trial after receiving approval (CTRI/2020/07/026936). Before the study began, the participants were informed about the study, and signed informed consent forms were obtained from them. Study design (Fig. 1) This study was a two-armed (MAD and CMOA) randomized, controlled, parallel, double-blind clinical investigation. Sample size calculation The software used for sample size calculation was N Master V.2.0.
The sample size of the study was 40. The minimal sample size computed for each group based on the study's 80% power was 16. However, 20 samples were chosen for each group to reduce errors and to account for any cases lost during follow-up. The following formula was applied for sample size calculation: n1 = (σ1² + σ2²/κ) × (z_{1−α/2} + z_{1−β})² / Δ², where Δ = |μ2 − μ1| is the absolute difference between the two means, σ1 and σ2 are the standard deviations of groups #1 and #2, n1 and n2 are the sample sizes of groups #1 and #2, and κ = n2/n1 = 1. The values of the means and standard deviations were taken from the reference article [18]. Randomization and allocation concealment mechanism The study participants were divided into two groups: a control group that received MAD (n=20) and a test group that received CMOA (n=20). A randomization list was constructed by a computer. On the basis of successive enrolments, participants were assigned random numbers. The clinical site was contacted after confirmation of eligibility (subjects who met all inclusion criteria), and a centralized online randomization method (https://randomizer.at/) was used. Patients were randomly assigned to one of two arms, MAD or CMOA, utilizing block randomization. Masking Double-blind masking was used: neither the patient nor the investigators were told which test group a patient had been allocated to. Authors from the Sleep Medicine Department, DMIMS (DU), created the allocation sequence, enrolled participants, and assigned people to therapies. The record remained in the hands of the authors from the Sleep Medicine Department who were not in direct contact with the patients and investigators. Calibration of examiner For training purposes, examiners were provided a manual describing the study protocols, the examination criteria, and directions concerning the examination of the subjects.
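The sample size formula above can be illustrated with a short calculation. The Python sketch below implements the standard two-means formula; the means and standard deviations shown are hypothetical placeholders, since the actual values were taken from reference [18] and are not reproduced in the text.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(mu1, mu2, sd1, sd2, alpha=0.05, power=0.80, k=1.0):
    """Minimum n1 for detecting a difference between two means, with n2 = k * n1:
    n1 = (sd1^2 + sd2^2 / k) * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_beta = NormalDist().inv_cdf(power)
    delta = abs(mu2 - mu1)
    return ceil((sd1**2 + sd2**2 / k) * (z_alpha + z_beta) ** 2 / delta**2)

# Hypothetical means/SDs (the study took its actual values from reference [18]):
n_per_group = sample_size_two_means(mu1=20.0, mu2=14.0, sd1=6.0, sd2=6.0)
# With these illustrative inputs the formula gives 16 per group.
```

With 80% power and a two-sided α of 0.05, these placeholder inputs reproduce a minimum of 16 per group, consistent with the figure reported in the study; the study then enrolled 20 per group as a margin for drop-outs.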
Two examiners were selected each from the Department of Sleep Medicine for PSG reading and from the Prosthodontic and Orthodontic departments for the fabrication of the interventions, along with experts for CAD-CAM designing. Mandibular advancement device (MAD) (Figs. 2 and 3) Twenty participants of group I (the control group) received the MAD. The upper and lower arch impressions were recorded in irreversible hydrocolloid impression material (DPI Algitex), and the casts were poured in dental stone (Kalabhai). Using a George Gauge, the protrusion index was calculated for all study participants by extending the jaw to 60–80% of its maximal protrusion (roughly 6 mm). The upper and lower halves of the device were made of acrylic, and they were joined by moving the mandible 6 mm forward from its central position. Symptoms related to temporomandibular disorder (TMD) were assessed, but none of the study group participants had any complaints related to it. All patients were counseled to wear the appliance during sleep for a minimum of 6 h daily. Customized maxillary oral appliance (CMOA) (Figs. 4 and 5) Design The CMOA is a customized maxillary removable oral appliance with a "base plate" and a "counter plate." The base plate was adapted over the upper jaw, taking support from hard and soft tissues, namely the teeth and hard palate. The counter plate was adapted over the base plate with a space of 2 mm between the two plates. The space between these two plates was left empty or hollow. The upper plate had the anatomy of the occlusal surface of the upper teeth for occluding with the lower teeth in the present occlusal relation but at an increased vertical dimension, which is attained by hollowing the plate. The hollow upper plate had an opening or hole in the central incisor area, that is, in the anterior region, for the uninterrupted instreaming of fresh air towards the posterior region of the tongue.
Moreover, a bulge was outlined on the palatal aspect of the upper plate, in the most posterior region, to restrain the tongue from falling back. Fabrication of CMOA The upper and lower arches were scanned (3shape TRIOS 4). Computer-aided designing (CAD) was done using CAD software. The design was then 3D printed in 3D-printing epoxy resin material (eResin-PLA). This plate was customized for the 20 participants of group II. All patients were counseled to use the appliance during sleep for a minimum of 6 h daily. Polysomnography (PSG) A nocturnal PSG (EMBLA(R)S7000, EmblaSystem, Inc., Broomfield, CO., USA) was done in the sleep medicine department to obtain reference or baseline measures, and again after 1 month and 3 months of use of the MAD and CMOA, and the AHI was measured. Oxygen saturation in the blood (SpO2) Oxygen saturation in blood was calculated using a finger pulse oximeter (OTICA CONTEC CMS 5100) at the time of PSG, and its values were recorded as references and after 1 month and 3 months of use of the MAD and CMOA. Mean oxygen saturation and the proportion of time with SpO2 <90% were assessed. Oro-nasal airflow via a pressure transducer A pressure transducer (OTICA CONTEC CMS 5100) was used to determine the mean respiratory disturbance index (RDI). Epworth Sleepiness Scale (ESS) The ESS was applied for self-assessment of the level of daytime sleepiness. Safety evaluation No serious adverse events related to epoxy resin material sensitivity, TMJ pain, muscle pain due to the rise in vertical occlusal measures, pharyngeal or gag reflex, or others were found in any study participant. Statistical analysis The reference measures recorded before delivering the appliance were compared with the values seen 1 month and 3 months after appliance delivery. Descriptive and analytical statistics were performed. The values are presented as means and standard deviations.
The Shapiro-Wilk test was used to determine the normality of continuous data. Because the data had a normal distribution, parametric tests were used to analyze it. To assess the mean differences, the independent-sample t-test and the paired-sample t-test were used. The significance threshold was maintained at P<0.05. The statistical program used was "SPSS (Statistical Package for Social Sciences) Version 20.1" (IBM Corporation, Chicago, USA). Result The results of the nocturnal polysomnography are shown in Tables 1, 2. Discussion When compared to the baseline measurements, the results of the present study showed a significant improvement in the AHI due to a decrease in both groups' apnea and hypopnea events. In comparison to the 1-month follow-up measurements, the 3-month follow-up measurements revealed a more pronounced decline in the values and events. Similarly, SpO2, RDI, and the ESS also showed a significant change in values, indicating improvement after intervention with MAD and CMOA. The standard therapeutic approaches for OSA are CPAP and OA [19]. Zhang et al. and Schwartz et al., in 2019 and 2018 respectively, executed meta-analyses to compare the efficacy of OA against CPAP in managing OSA. Their results concluded that CPAP had better efficacy in lowering the AHI score; at the same time, it had notably lower compliance, which nullified its advantage over MAD in terms of quality of life and cognitive outcomes [20,21]. Several OAs are available on the market and recorded in the literature for treating "mild to moderate" OSA, with MAD having the most successful and recommended results [22,23]. The goal of this study was to see whether the CMOA could be a good oral appliance for people with moderate OSA.
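The statistical workflow described above (normality check, then paired and independent t-tests) reduces to standard formulas. The sketch below implements the two t statistics from scratch in Python on hypothetical AHI values; it is illustrative only and does not reproduce the study's data, which were analyzed in SPSS.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired-sample t statistic for a within-group change
    (e.g. baseline vs. 3-month AHI in the same patients)."""
    diffs = [b - a for b, a in zip(before, after)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

def independent_t(x, y):
    """Independent two-sample t statistic (pooled-variance form),
    e.g. MAD group vs. CMOA group at the same time point."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1.0 / nx + 1.0 / ny))

# Hypothetical AHI values (events/h), for illustration only:
baseline = [22, 25, 19, 27, 24, 21, 26, 23]
month3 = [12, 14, 10, 15, 13, 11, 16, 12]
t_within = paired_t(baseline, month3)   # large positive t -> significant drop
```

A pattern like the one in the study, significant within-group change but no significant between-group difference, corresponds to a large paired t for baseline vs. follow-up in each arm alongside a small independent t between the two arms at follow-up.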
The findings of this study confirmed the study's premise by demonstrating statistically significant differences between the reference measures of all measured parameters and the measures collected after 1 month and after 3 months of MAD and CMOA delivery. However, statistical analysis of the acquired data revealed no significant variations in measured values between the MAD- and CMOA-treated groups, confirming the efficacy of the CMOA in controlling OSA. The CMOA increases the vertical dimension by 2 mm, which results in an advanced and descended position of the lower jaw, which in turn increases the flow of air by maintaining the patency of the airway. Loss of the vertical dimension of occlusion is observed in the population between the ages of 40 and 50 years; this appliance can aid in regaining the vertical dimension and in re-establishing the actual centric relation [24]. Along with OSA, these appliances can also be used to treat the signs and symptoms of temporomandibular disorders (TMD) that are present because of vertical dimension loss. Within the study period, the CMOA did not cause any alteration in dentition, as was noticed in patients who received MAD; this is the major advantage of the CMOA over MAD, making it more compliant among patients. However, long-term studies are required to prove these findings. Cardinal features of the CMOA are: 1. Backfall of the tongue was prevented by the bulge designed over the palate of the appliance. 2. A constant influx of fresh air was facilitated by the hole provided in the anterior region of the appliance. 3. Airflow was directed towards the pharynx by making the appliance hollow. 4. Precision was maintained in designing and manufacturing by the use of CAD-CAM. In both devices, the vertical dimension is increased by 2–4 mm depending on the patient's tolerance and comfort. However, in MAD, this increase in vertical dimension is achieved not only by opening the bite but also by advancing the mandible.
This causes rotational as well as translational movement, resulting in remodeling of the temporomandibular joint in a new, unfavorable position, whereas the CMOA rotates the condyle by opening the bite at the existing occlusal relation with minimal or no translational movement of the joint. So there will be hardly any occlusal disharmony with the CMOA. Before and after the intervention, both study groups underwent PSG, which is the benchmark for diagnosing and grading OSA. Hypopnea (50% or less reduction in airflow) and obstructive apnea (10-s cessation of airflow) events were observed [25]. The results revealed a significant improvement in the AHI through the reduction of apnea and hypopnea events in both groups when compared to the baseline measures. Three-month follow-up measurements showed a more significant reduction in the values and events compared to the 1-month follow-up results. Similar results were reported by Guimaraes et al. in 2018 [26]. They found an improvement in the AHI from 80.5 to 14.6 events/h after successful MAD therapy. Basyuni, in an article updating MAD as a therapy for obstructive sleep apnea syndrome, stated that studies since 2005 on managing OSA with MAD have revealed a reduction in mean AHI of between 30 and 72% [27]. According to Otero et al., the AHI alone is insufficient to rate the severity of OSA [28]. As a result, in addition to the AHI, SpO2, oro-nasal airflow, and the ESS were also assessed in the current study to track changes in the severity of OSA before and after 1 and 3 months of MAD and CMOA treatment. Poor nocturnal blood oxygenation is strongly connected with severe OSA, and as a result it is a suggested predictor of severity. Because nocturnal hypoxia is connected to OSA morbidity and death, SpO2 is thought to have predictive value.
The data obtained from the present study showed a significant increase in SpO2 values after the intervention with MAD and CMOA. The current study's findings are consistent with those of Fietze et al. and Temirbekov et al., who investigated the oxygen desaturation index (ODI) [29,30]. The present study's results can serve as an evidence base to establish a correlation between SpO2 and AHI in OSA patients. The fluctuation in nasal pressure was detected using a nasal transducer. The RDI is the number of respiratory events per hour of total sleep time. Both test groups showed a notable reduction in RDI values from the baseline measures. Similar results were seen in the studies reported in [31][32][33][34][35]. Murray Johns introduced the ESS in early 1991 to assess daytime sleepiness, and it has been linked to OSA [36]. It is a numerical scale on which a score of more than 10 indicates the presence of sleepiness. This scale has been used in numerous studies to diagnose and measure therapy outcomes [37,38]. The data obtained from the present study showed ESS scores ranging from 13 to 15, and the score was highest for the question about sitting quietly after lunch, followed by watching TV. When administered sequentially, the ESS scores varied, which is likely attributable to the subjective nature of the scale. However, the data obtained after 3 months of intervention showed a drastic reduction in the ESS score compared to the baseline score, although the difference was not significant between the MAD and CMOA test groups. Although oral appliances can be employed in a variety of OSA patients, they have a number of limitations, including the absence of enough teeth in the maxillary and mandibular arches. Teeth are thought to be especially crucial for ensuring the stability and retention of the mandibular advancement device. The condition of edentulism inherently exacerbates OSA and limits the number of viable therapies [39].
However, a review of the literature finds that only a few papers describe the use of MAD in the treatment of edentulous patients, or patients with multiple missing teeth, with OSA [40]. Some modifications are a must in the basic design of MAD in such patients. But the CMOA can be successfully used not only in patients with fixed prostheses but also in patients rehabilitated with a removable partial prosthesis, as it takes its major retention from the palatal slopes, as removable partial dentures do. Limitations of the present study The study had a small sample size, which is a major limitation of the current study. Longitudinal studies involving other parameters such as rapid eye movement, non-rapid eye movement, electrocardiography, electroencephalogram, and the oxygen desaturation index also need to be evaluated to prove the authenticity of the CMOA. Further scope The CPAP machine can be easily attached to the CMOA with the help of a small connector. This could prove effective in managing severe OSA and would have a higher compliance rate, as the need for a mask would be eliminated and replaced by the CMOA. To assess the utility of this hybrid architecture (CMOA and CPAP), more research is required. The appliance is also being tested to see whether it can effectively manage COVID-19 patients. Conclusion To stratify such a treatment technique, reliance on a single tool ought to be avoided; hence, together with the AHI, SpO2, oro-nasal airflow, and the ESS were checked to corroborate the findings before introducing this novel design of custom-made maxillary oral appliance in the field of sleep medicine for the treatment of moderate OSA. Based on the findings of this study, it can be inferred that the CMOA is as effective as MAD in treating moderate OSA.
Prediction of Preheating Temperatures for S690QL High Strength Steel Using FEM-Simulation for High Power Laser Welding This study investigates a method for predicting the effect of preheating temperatures on the resulting hardness for high power laser welding of high strength steel. An FEM model is introduced containing a hardness calculation based on an existing model. Moreover, the hardness values of experimental results have been measured in order to show the performance of the model. The hardness calculation requires the chemical composition and the t8/5-time at the point of measurement. It is claimed that a calibration of the melt pool width and depth at room temperature only is enough to get reasonable results from the FEM-model for higher preheating temperatures. From a single experiment, the width and depth of the weld seam were deduced. In this study, experiments have been done at various preheating temperatures in order to show the correlation between the model and the experimental results at various temperatures. The hardness equation provides suitable results in the verification against the measurements. The prediction of the preheating temperature can be done with the resulting t8/5-time of the FEM-model. This method can decrease the time and cost in production by avoiding the testing and analysis of a matrix of process parameters. Moreover, it is concluded that this methodology might be used for single-item production. Introduction The use of high-strength steel (HSS) is becoming more important today. With increasing mechanical properties, the quantity of material and the weight of constructional parts can decrease [1]. Due to the reduction of the overall weight, economic and environmental advantages arise in industries such as the automotive industry. In industrial production process chains, the takt time determines the amount of time available for specific welding tasks.
Increasing productivity and decreasing takt times result in the demand for higher process speeds. More often, laser deep penetration welding processes are used in industrial applications in order to achieve elevated processing velocities. In addition, it is possible to generate deep welding joints with single passes in comparably small production times [2,3]. However, the laser welding process comes with a high power density inside the laser spot and a comparably small spot diameter. From this follows a strong temperature gradient with rapid cooling speeds inside the heat affected zone (HAZ). Martensitic phase transformation in the weld seam and the HAZ can be the consequence, depending on the material. The original microstructure inside the material will be lost in this region. Moreover, welding distortion and residual stresses are largely affected by phase transformation during cooling [4]. The amount of martensite and the size of the HAZ are a function of the temperature field and the cooling rates, respectively. In general, the type of welding process and the welding parameters can indirectly control the temperature field and cooling rates. Moreover, the temperature distribution follows the geometry of the joined parts themselves. Hence, the unwanted effects are a function of the process conditions. For instance, cracks can occur at different process parameters [5], and the formation of pores can be reduced by the process parameters as well [6]. The hardness in laser beam welded high strength steel can be comparably high. Often a post-treatment of weld seams is necessary to restore the mechanical properties of the material. This increases the production time and cost of laser beam welded constructions [7]. However, the hardness can be controlled by adapted temperature fields. Strategies like preheating can help to reduce the cooling speeds and hence decrease the drawbacks of laser deep penetration welding [8].
An investigation showed that the effect of increasing hardness at the welding seam and HAZ can be reduced by preheating. Correlations between hardness and preheating temperature have been reported in several studies through empirical analysis of measurements and process parameters [9,10]. Nevertheless, finding the right preheating temperature can be a cost-intensive experiment. In that method, the estimation of the weld seam hardness as a function of the preheating temperature is reported. It was possible to derive an empirical equation for the calculation of the hardness. However, experiments at different preheating temperatures had to be done. Another investigation shows the prediction of hardness using a neural network model [11]. However, the amount of required data is high, and the transferability of this method to other materials is doubtful. Moreover, the model is not suitable for preheating temperature prediction. An investigation showed a method to calculate the hardness in the HAZ from the chemical composition and the cooling time between 800°C and 500°C (t8/5-time) [12]. Measuring the temperature within the weld seam and HAZ under practical conditions is often not possible. Moreover, the measurements would be affected by the measurement method itself. Therefore, this prediction method is less useful for practical tasks. Using the finite element method (FEM) can provide a realistic temperature field within three-dimensional parts. Increasing the number of requested values also increases the calculation time. Moreover, the results are heavily affected by the quality of the supplied process parameters. Using default settings for complex geometries or process parameters can provide insufficient results. Welding simulations can be subdivided into three parts [13]. A process simulation can provide the weld pool profile and dynamics according to the process parameters. The heat source is defined by the interaction of the laser beam profile and the material parameters [14].
Structure and material simulations can provide insights into residual stresses and distortions within three-dimensional parts. The heat source is parametrized beforehand and remains static during the simulation [15,16]. The target of the following investigation is to create a thermo-mechanical calculation model that provides preheating temperatures according to a defined hardness in the weld seam. Therefore, an empirical hardness calculation and an FEM model are used. The model is to be validated by experiments. The resulting methodology can be used in any industry working with high strength steel in order to calculate the correct preheating temperature, e.g. the crane industry. Methodology of Validation The basic idea is to use modern FEM-simulation tools for the prediction of properties as a function of preheating temperatures. In practice, the validation of the temperature field in the process can be a laborious task. Hence, in this study it is claimed that a calibration of the melt pool width and depth at room temperature only is enough in order to get reasonable results from the model determining the properties at higher preheating temperatures. From a single experiment, the width and depth of the weld seam were deduced. This was used to model the heat source in the FEM calculation. The methodology of this study is divided into two parts. First, the actual method for the prediction of preheating temperatures is described. The second part of the investigation is the validation of the method. Therefore, the results of the method's calculation are compared with the experimental measurements. This validation is not part of the actual prediction method and has only been done in order to show the performance of the model. For the prediction method, a welding sample at room temperature (20°C) is made. A metallographic analysis provides the cross-sectional dimensions.
Therefore, the depth and the width at the top and bottom of the weld seam are measured. The heat source within the FEM model is calibrated according to this measurement. Furthermore, the welding velocity and laser power are integrated into the model. The FEM calculation is done for various preheating temperatures. The resulting t8/5-times for defined elements within the weld seam are exported to the hardness calculation model. Additionally, the chemical composition of the welding sample is provided by an energy dispersive X-ray analysis. The hardness can then be calculated for a specific preheating temperature. For the validation of the prediction method, welding samples are made at various preheating temperatures. The hardness is measured in the weld seam. These measurements are compared with the calculated values for the specific preheating temperatures. The flow chart in Fig. 1 describes the method of preheating temperature prediction and the validation of the method itself. Material Metal sheets made of S690QL were used for the experiments. The metal sheet thickness is 6 mm. The plates have a length of 200 mm and a width of 50 mm. The chemical composition has been determined for the base material and the weld seam by an energy dispersive X-ray analysis (EDXA). This analysis was made with an acceleration voltage of 20 kV. The results are shown in Table 1. Due to the insufficient detectability of carbon, the value was set to 0.16 wt% according to [17]. This relates to the material composition in the FEM-model. The absolute error was set to ±0.04 wt%. The differences between the chemical composition of the base material and the weld seam are negligible. Accordingly, the following calculations are based on the chemical composition of the base material. The two plates were clamped in the lap joint configuration. The length of the weld seam is 150 mm, leaving 25 mm from each side. The laser source is an IPG YLS-10000.
This laser is a fiber laser with a maximum output power of 10 kW. The fiber diameter is 200 μm. A collimator lens of 160 mm and a focusing lens of 300 mm were used. The focus was set 4 mm below the surface of the upper metal sheet. The laser power was 6 kW and the travel speed 1 m/min. The experimental setup can be seen in Fig. 2. To investigate the accuracy of the validation, the results of the model were compared to the measurements of the experimental welds. Agreement between the calculated and measured values is the proof of the model. For this, the welding experiments have been performed with and without preheating. For the preheating, the specimens were put into an oven. After the heating process, the specimens were placed into the welding setup. The temperature was measured with a tactile thermometer. The welding process was performed after the plates reached the desired temperature. Experiments have been made at room temperature and at 100°C, 150°C, and 200°C, respectively. The specimens have been investigated by metallographic inspection. Hardness measurements were made in the weld seams at 3 mm below the surface. The Vickers hardness was measured with a 10 kg load. FEM-Model The FEM welding model has been set up with a software named Simufact Welding. The model is shown in Fig. 4. Simufact has already been reported in industrial applications [18]. The temperature-dependent thermophysical and mechanical properties of the material have been provided by the software [17,19,20]. The geometry of the heat source was adapted according to the laser beam parameters and the cross-sectional area. The width and depth of the weld pool have been taken from the experiment without preheating. Also, the process parameters of the experiment are taken into account. See Table 2. The configuration of the heat source is calibrated with the measured depth and width of the weld seam cross section for room temperature. See Table 3.
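Simufact's internal heat source implementation is not documented in this text. As an illustration of how a volumetric source can be calibrated to measured seam dimensions, the sketch below evaluates a conical heat source with a Gaussian radial distribution whose radius shrinks linearly from the top to the bottom width, normalized so that the deposited power equals efficiency × laser power. This is a generic parametrization, not Simufact's actual model, and the efficiency value of 0.8 is a hypothetical placeholder rather than the value used in the study.

```python
from math import exp, pi

def conical_gaussian_source(r, z, power, efficiency, r_top, r_bottom, depth):
    """Volumetric power density q(r, z) in W/m^3 of a conical heat source with a
    Gaussian radial distribution. z runs from 0 (top surface) to `depth`; the
    source radius shrinks linearly from r_top to r_bottom. The prefactor is
    chosen so the volume integral of q equals efficiency * power exactly."""
    if not 0.0 <= z <= depth:
        return 0.0
    rz = r_top + (r_bottom - r_top) * z / depth   # local source radius at z
    # integral of exp(-3 r^2/rz^2) over the plane is pi*rz^2/3; integrating
    # rz^2 over the cone height gives depth*(a^2 + a*b + b^2)/3
    v_eff = pi * depth * (r_top**2 + r_top * r_bottom + r_bottom**2) / 9.0
    return efficiency * power / v_eff * exp(-3.0 * r**2 / rz**2)

# Calibration to the measured seam: 3.6 mm top width, 1.6 mm bottom width,
# 10 mm depth; 6 kW laser power. Efficiency 0.8 is a hypothetical placeholder.
q_peak = conical_gaussian_source(0.0, 0.0, 6000.0, 0.8, 1.8e-3, 0.8e-3, 10e-3)
```

The key design point mirrored from the paper is that the cone geometry is fixed entirely by the measured room-temperature cross section, so no further fitting is needed for the preheated cases.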
The efficiency was set to a typical value of the overall absorption for deep penetration welding processes [21]. Hardness Calculation The model describes the calculation of the maximum hardness in the HAZ of weld seams. This method is adapted here for hardness calculations in the weld seam. According to this method, the hardness is connected to the t8/5-time by means of the arctangent function. It is necessary to calculate the maximum hardness and t8/5-time of a fully martensitic and a fully bainitic structure. These are based on specific carbon equivalents. The final equation is described as follows, Eqs. 1 and 2, with the specific calculations according to [12]. With this equation it is possible to calculate the maximum hardness for a specific t8/5-time. The t8/5-times are given by the FEM simulation in this study. Metallographic Analysis All weld seams show a sound bond without any kind of defects, such as cracks or pores, in the metallographic cross-sectional investigation. The width of the weld seam at the surface is 3.6 mm, 1.6 mm at the bottom, and the depth is 10 mm. In Fig. 5 an exemplary cross section without preheating can be seen. The measured width and depth are used for the calibration of the FEM-simulation. The depth could also be calculated by analytical approaches [22], but it was decided to go for a practical solution and measure it, in order to be able to transfer the model to more complex geometries. See Table 3. The measured weld seam geometry and the calculated seam geometry show good agreement. The microstructure shows a martensitic structure in the weld seam. Hardness Measurement The Vickers hardness of all specimens has been measured with a 10 kg load. The results are shown in Table 4. The relative error was set to 1.0% of the measured value. The maximum measured Vickers hardness occurs in the weld seam without preheating, at 407 HV10. The higher the preheating temperature, the lower the hardness in the weld seam.
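Since Eqs. 1 and 2 are not reproduced in this text, the following sketch shows only the generic arctangent interpolation form described above, in which the hardness runs between a full-martensite plateau (for short t8/5-times) and a full-bainite plateau (for long t8/5-times). The divisor 2.2 and the log-scaled argument follow Yurioka-type formulations; the plateau hardness values and transition times are placeholders here, whereas in the actual model of [12] they follow from carbon-equivalent formulas.

```python
from math import atan, log10

def max_hardness(t85, hm, hb, tm, tb):
    """Arctangent interpolation of maximum hardness between a full-martensite
    plateau hm (reached for t8/5 <= tm) and a full-bainite plateau hb
    (reached for t8/5 >= tb). Sketch of the general form only; hm, hb, tm
    and tb are placeholders, not the study's carbon-equivalent-based values."""
    x = 4.0 * (log10(t85) - log10(tm)) / (log10(tb) - log10(tm)) - 2.0
    return (hm + hb) / 2.0 - (hm - hb) / 2.2 * atan(x)

# Illustrative placeholder values (not the study's Eqs. 1 and 2):
hv_fast = max_hardness(2.0, hm=450.0, hb=300.0, tm=1.0, tb=30.0)   # closer to hm
hv_slow = max_hardness(20.0, hm=450.0, hb=300.0, tm=1.0, tb=30.0)  # closer to hb
```

The form makes the observed trend explicit: a longer t8/5-time (slower cooling, i.e. higher preheating temperature) moves the predicted hardness monotonically from the martensite plateau towards the bainite plateau.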
At a preheating temperature of 200 °C the measured Vickers hardness drops to 325 HV10. The initial measured Vickers hardness of the specimens is 285 HV10.

FEM-Simulation Results

The requested t8/5 time is provided by the temperature of specific nodes in the simulation. In Fig. 6 the result of the FEM simulation without preheating is shown. Each node of the FEM model carries a separate temperature for each time step. The following data correspond to the node in the middle of the weld seam 3 mm below the surface. The results for each preheating simulation are shown in Fig. 7. From the temperature-time datasets the t8/5 times have been calculated; the exact times at which 800 °C and 500 °C are passed are obtained by linear interpolation. With increasing preheating temperature, the t8/5 times also increase. Due to the chosen time increments, the total error for the t8/5 time is set to 0.2 s (see Table 5).

Hardness Calculation

The calculation uses the chemical composition of the base material and the t8/5 time from the FEM model. All calculated values are listed in Table 6. The hardness decreases with increasing preheating temperature. Error propagation according to Eq. 3 and [23] has been used to calculate the error in the hardness: the error of the calculated value f is given by Δf, and the individual variables and their errors are given by x_n and Δx_n respectively. All values have been included in this calculation. The calculated hardness at a preheating temperature of 20 °C is 396 HV10; this is the maximum hardness that has been calculated. With increasing preheating temperature, the calculated hardness decreases; at 200 °C preheating temperature it drops to 342 HV10. This is shown in Fig. 8 together with the arctangent function. With increasing preheating temperature, the absolute error of the hardness calculation decreases.

Conclusion

In this investigation the potential of welding simulation has been shown.
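The two numerical steps described here, namely the linear interpolation of the 800 °C and 500 °C crossing times from the discrete node temperature history, and the Gaussian error propagation of Eq. 3, can be sketched as follows; the function names and the synthetic cooling history are mine, not the paper's.

```python
import math

def t85_from_history(times, temps):
    """Cooling time from 800 degC to 500 degC for one node, using linear
    interpolation between the discrete FEM time steps, as in the text."""
    def crossing(level):
        for (t0, T0), (t1, T1) in zip(zip(times, temps),
                                      zip(times[1:], temps[1:])):
            if T0 >= level > T1:  # node cools through the level here
                return t0 + (T0 - level) * (t1 - t0) / (T0 - T1)
        raise ValueError(f"history never cools through {level} degC")
    return crossing(500.0) - crossing(800.0)

def propagated_error(partials, errors):
    """Gaussian error propagation (Eq. 3):
    delta_f = sqrt(sum_n (df/dx_n * delta_x_n)^2)."""
    return math.sqrt(sum((p * dx) ** 2 for p, dx in zip(partials, errors)))
```

For a synthetic history cooling linearly at 100 °C/s, the sketch returns t8/5 = 3 s; in the study the input would instead be the temperature-time dataset of the mid-seam node 3 mm below the surface.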
The combination of validating the FEM model with measurements and the use of calculation models can provide solid results. In Fig. 10 the calculated and the measured hardness are compared as a function of the preheating temperature. With this method it is possible to predict preheating temperatures for specific materials. Modern computers and user-friendly simulation software make such calculations possible in reasonable times. The prediction of the proper preheating temperature can thus be realized with comparably low effort. Moreover, the experimental validation can be done via microstructural investigation of the weld seam geometry and a hardness test; no further analysis is necessary. The essential results of this study are the following:

- Preheating can decrease the maximum hardness within the weld seam of S690 high strength steel in high power laser welding.
- The error of the chemical composition measurement has a major effect on the calculated hardness; an accurate measurement is required to provide suitable results for the hardness calculation.
- The calibration of the FEM model with experimental dimensions is needed to provide realistic results.
ON ONE OSCILLATORY CRITERION FOR THE SECOND ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS

Gevorg Avagovich Grigorian. © AGH University of Science and Technology Press, Krakow 2016.

The Riccati equation method is used to establish an oscillatory criterion for second order linear ordinary differential equations. An oscillatory condition is obtained for the generalized Hill's equation. By means of examples the obtained results are compared with some known oscillatory criteria.

Let q(t) be a continuous real function on [t0; +∞). Consider the equation

φ″(t) + q(t)φ(t) = 0. (1.1)

Throughout the following we assume that the solutions of the considered equations are real-valued.

Definition 1.1. Equation (1.1) is said to be oscillatory if each of its solutions has arbitrarily large zeroes.

The study of the oscillatory behavior of second order linear ordinary differential equations has developed in two directions. The goal of the first one is to derive the oscillatory property of the equation from the properties of its coefficients on the whole half line (integral oscillatory criteria: see, for example, Leighton's theorem in [16], Wintner's theorem in [11], Hartman's theorem in [10, Theorem 52], and the works of I.V. Kamenev [8], J. Yan [17], W.-L. Liu and H.-J. Li [14], J. Deng [2], A. Elbert [4], H.Kh. Abdullah [1], G.A. Grigorian [5]). The second, more radical, one studies the oscillatory behavior of equations on finite intervals (interval oscillatory criteria: see Wong's theorem in [10], G.A. Grigorian [5], Sturm's theorem in [7], Q. Kong [9], J.G. Sun, C.H. Ou and J.S.W. Wong [15], M.K. Kwong, J.S.W. Wong [12]). In this approach the equation is oscillatory if it is oscillatory on a countable set of intervals. The feature of this direction is that outside of the countable set of intervals no condition (except conditions like local integrability or continuity) is posed on the coefficients of the equation. Probably this fact explains the phenomenon of the existence of oscillations of Eq. (1.1) with extremal effect ([9]). (It is easy to construct an example of such an effect by using the Sturm comparison theorem.) In many cases the integral oscillatory criteria allow us to establish the oscillatory behavior of linear equations easily. Recently M.K. Kwong [11] obtained new integral criteria, describing broad classes of oscillatory equations in terms of q(t). We note his following result. Let …

Theorem ([11, Theorem 11]). Let the following conditions be satisfied: 1) for some k > 0, α > 2 and for sufficiently large T the inequality … holds; 2) there exist δ > ε > 0 and an infinite number of intervals [s_n; … Then Eq. (1.1) is oscillatory.

In this paper we prove an oscillatory criterion for Eq. (1.1). The proof is based on the Riccati equation method. As a consequence, an oscillatory condition for the generalized Hill's equation is derived from this criterion. By means of examples the obtained result is compared with some known oscillatory criteria.

RICCATI EQUATION

Consider the Riccati equation … See the proof in [6]. Let x(t) be a t1-regular solution of Eq. (2.1). Consider the integral …

OSCILLATORY CRITERION

Denote by Ω the set of functions which are positive and continuously differentiable on [t0; +∞). For any f ∈ Ω denote …

Theorem 3.1. For some f ∈ Ω let the following conditions be satisfied: there exists an infinitely large sequence {θ_n}, n = 1, 2, …, such that conditions 1) and 2) hold, and let for some λ ∈ R conditions 3) and 4) hold. Then Eq. (1.1) is oscillatory.

Proof. Suppose Eq. (1.1) is not oscillatory. Then Eq. (2.1) has a t1-regular solution for some t1 ≥ t0 (see [7, p. 332]). In Eq.
(2.1) we make a change … We arrive at the equation … Two cases are possible: … Let case a) hold. Then it follows from (3.5) that y*(t) ≤ −ε, t ≥ t2, for some ε > 0 and t2 ≥ t1. From here, from condition 3) and from (3.4), it follows that y*(t) → −∞ as t → +∞, which contradicts (3.5). Let case b) hold. Then from (3.5) it follows that y*(t) ≥ 0, t ≥ t1. From here, from condition 4) and from (3.4), it follows that y*(t) → −∞ as t → +∞, which again contradicts (3.5). So relation (3.3) holds. By condition 2) and relation (3.5), choose n = n0 so large that … and put t2 ≡ θ_{n0}. We show that the solution x0(t) of Eq. (3.1) with … From this, (3.6) and (3.7), it follows that x*(t2) < x0(t2). By virtue of Lemma 2.3, it follows from here that x0(t) is t2-normal. By virtue of (2.1), we have … Integrating this equality from t0 to t we obtain … Completing the square on the left-hand side of this equality and dividing both sides of the result by f(t), we come to the equality … By virtue of (3.7), c = 0. Therefore, from (3.8) we get … Then …, where … Since x0(t) is t2-normal, by virtue of Theorem 2.4 the left-hand side of inequality (3.9) is finite, whereas from condition 1) it follows that its right-hand side is equal to +∞. The obtained contradiction proves the theorem.

Example 3.3. Consider the equation …, where … It is not difficult to see that … Therefore, without loss of generality we may assume that c0(t0) = 0, t0 > 0. Then from (3.11) we get … Hence it is clear that for λ = 0 conditions 3) and 4) of Theorem 3.1 are fulfilled. From (3.11) we derive …, where … for t → +∞. Then, taking f(t) ≡ 1 and taking into account Remark 3.2, we conclude that conditions 1) and 2) of Theorem 3.1 are fulfilled for Eq. (3.10). Therefore, Eq. (3.10) is oscillatory.

Proof. We prove this only for the case … The proof in the general case can be derived from the given proof by using the Sturm comparison criterion (see [7, p. 334]). Denote … It is easy to derive from (3.13) that h_k(t) is a periodic function with period T_k (k = 1, 2). Denote …, where … By virtue of the mean value theorem, the equality … In view of this, we choose ξ2 such that … for some δ > 0. Denote …, where … It follows from (3.15) that M > 0. Then from (3.17) we have that for a sufficiently small δ > 0 the following inequalities hold: … In view of this, we choose the natural numbers n0 and m0 such that … From (3.16) we see that …, where … We have … Then from (3.15) … for which the solution x(t) of Eq. (2.1) with x(t1) = x(0) is t1-regular. If Eq. (2.1) has a t1-regular solution, then it has a sole t1-extremal solution x*(t), and reg(t1) = [x*(t1); +∞).
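Definition 1.1 can be illustrated numerically: integrating Eq. (1.1) on a finite window and counting the sign changes of a solution gives a heuristic, finite-window check of oscillation. The following sketch is my own illustration, not part of the paper's method; with q(t) ≡ 1 the solution cos t oscillates, while q(t) ≡ −1 gives the non-oscillatory cosh t.

```python
def count_zeros(q, t0=0.0, t1=60.0, dt=1e-2, phi0=1.0, dphi0=0.0):
    """Integrate phi'' + q(t) phi = 0 with classical RK4 and count the
    sign changes of phi on [t0, t1]. This is only a finite-window proxy
    for Definition 1.1: a truly oscillatory equation has arbitrarily
    large zeroes, which no finite integration can fully confirm."""
    def f(t, y):
        phi, dphi = y
        return (dphi, -q(t) * phi)

    t, y = t0, (phi0, dphi0)
    zeros, prev = 0, phi0 >= 0
    while t < t1:
        k1 = f(t, y)
        k2 = f(t + dt / 2, (y[0] + dt / 2 * k1[0], y[1] + dt / 2 * k1[1]))
        k3 = f(t + dt / 2, (y[0] + dt / 2 * k2[0], y[1] + dt / 2 * k2[1]))
        k4 = f(t + dt, (y[0] + dt * k3[0], y[1] + dt * k3[1]))
        y = (y[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        t += dt
        sign = y[0] >= 0
        if sign != prev:
            zeros += 1
            prev = sign
    return zeros
```

Such an experiment can only suggest oscillation on the chosen window; the criteria of the paper establish it rigorously.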
Subsidia dominationi: The Early Careers of Tiberius Whereas many aspects of the Augustan age continue to enjoy ongoing or renewed interest, the early careers of Tiberius Claudius Nero (born 16 November 42 BCE) and Nero Claudius Drusus (March/April 38 BCE), Livia’s sons from her marriage to Ti. Claudius Nero (pr. 42), have not been subject to much discussion or controversy of late. On the one hand, this could, perhaps, be explained in that they were quite young during the formative stages of the so-called Augustan monarchy, the critical settlements being those of 27, 23 and 19 BCE, the eye-catchers par excellence in the political history of the early Augustan era. On the other hand, Livia’s sons only really emerge into the spotlight of both ancient sources and modern scholarship after the untimely passing of M. Vipsanius Agrippa in 12 BCE. This paper aims at revisiting the evidence for Tiberius’ and Drusus’ careers in the decade or so before the latter’s premature death in Germany in 9 BCE, the period preceding the rapid rise (and demise) of Gaius and Lucius Caesar. There are, indeed, strong indications that Livia’s sons played a far more important part than has hitherto been recognized, both in terms of their official position and their role in assisting Augustus with one of his most important political objectives, namely the imperial monopolization of the public triumph. Introduction In the last two decades, the matter of the early careers of Tiberius Claudius Nero and Nero Claudius Drusus has been somewhat neglected in modern scholarship. 
Furthermore, there continues to exist considerable disagreement on the main stages of their cursus honorum -as well as such key details as, for example, their first imperatorial salutations -until the latter's untimely death in 9 BCE and the former's second consulship in 7 BCE.1 In her incisive and inspiring study of the Boscoreale Cups, Ann Kuttner makes the following discerning observation, well worth quoting in full: "the place of Tiberius and Drusus in Augustus' political projects and dynastic plans has seldom been properly estimated; the position of Drusus the Elder has hardly been considered at all. This has been a flaw in historical, literary, and art-historical scholarship, misdirection and omission in any one sphere tending to reinforce those faults in other spheres. Hence this chapter [viz. chapter 8, 'Tiberius and Drusus in Augustan Propaganda and the Prototype for the Boscoreale Cups'] […]. Some of this presentation has a polemic character. If it provokes anyone to a more broadly founded analysis of the roles of Augustus' chosen assistants and relatives, and of the artistic evidence for this, I will be delighted. I am not trying to make out that Drusus and the younger Tiberius were the heirs of Augustus, at the expense of Agrippa or Gaius and Lucius, for instance; rather, in a given period they were preeminent, as others necessarily were at other times. The mechanisms by which Augustus delegated power, and tried by its orderly transmission to assure the continuation of pax after his own death, cannot be understood if the Claudii Nerones are ignored. Most of all, I wish here to say of historical interpretation what I maintain throughout of iconographic interpretation: hindsight is a dangerous, usually illegitimate tool for analysis of motive and intention. 
Failed projects, cropped-off careers, cannot be treated by the serious historian as if they had never been; yet this has overwhelmingly been the case with the career of Drusus the Elder up to his death in 9 B.C. and with that of his brother Tiberius up to his self-imposed exile in 4 B.C. On Augustus' predilections in these years, let Plutarch speak: first place in Augustus' estimation was held by Agrippa, but next after Agrippa he esteemed the sons of Livia [i.e., Plut. Ant. 87.1]."2 Held against the light of chapter 4 of my monograph on the Roman high command ("The summum imperium auspiciumque and the ius triumphi"), a comprehensive reappraisal of the extant sources should indeed significantly alter our current understanding of the issue, reinforcing Kuttner's contention that the role of the Claudii Nerones within the domus Augusta and Augustan dynastic policy in the period here considered has been significantly underrated. If it could be argued that Livia's sons held far more prominent positions in the Augustan military and political machinery than hitherto accepted, this would have important further ramifications for our understanding of early imperial history. First, this would require us to recalibrate our views on their relative position of power within the domus Augusta. Second, it would also cast a new light on how Augustus converted the public triumph into the exclusive reserve of the imperial house.3 Although Augustus had ambitious plans for his nephew M. Claudius Marcellus (42), son of his sister Octavia Minor and C. Claudius Marcellus (cos. 50), there is every indication that he also envisaged a brilliant future for Livia's offspring. In 33 BCE, at the age of nine, Tiberius made his first public performance, delivering a eulogy for his dead father from the Rostra.
In spite of their fathers' strong hostility towards Iulius Caesar and Caesar Octavianus successively, Marcellus and Tiberius alike were given the extraordinary honour of riding the right and the left trace-horse respectively in Octavianus' triple triumph of 13-15 August 29. Shortly after Octavianus' third triumph, Tiberius presided at the City festival (the ludi astici) and prominently featured in the game of Troy during the performances in the circus, leading the turma puerorum maiorum.5 In 27, following the historic creation of the Augustan New Order in January, Tiberius assumed the toga uirilis and staged a series of large-scale public displays, including a couple of gladiatorial shows in memory of his father and his maternal grandfather, M. Livius Drusus Claudianus († 42, Philippi), in the Forum and the amphitheatre, at the expense of his mother and his stepfather.6 Both Marcellus and Tiberius served as tribuni militum in Augustus' campaign against the Cantabrians and received the honour of presiding over the ludi castrenses he put on in 25 in honour of his 'victory', "as though they were aediles"7. It was, however, Marcellus' star par excellence that was in the ascendancy. Back in Rome, Marcellus was married to none other than Iulia, Augustus' daughter by Scribonia, his second wife, and his only biological child. As illness prevented Augustus from attending the marriage in Rome, he had it celebrated by Agrippa in his absence.8 In 24, no doubt on the motion of Augustus, the Senate granted Marcellus membership of the Senate with praetorian rank as well as the privilege to stand for the consulship ten years before the statutory minimum age. After this, Marcellus was promptly elected curule aedile.9 In 23, Marcellus handsomely used this office to give exceptionally magnificent games. In the course of the same year, he was furthermore co-opted into the prestigious pontifical college, where he would rank alongside his father-in-law Augustus.
If we may believe Cassius Dio, Marcellus' rise was so meteoric that the young prince reportedly began to take offence at Agrippa's prominence, causing Augustus to invest the latter with a special command in Syria in the late spring of 23 BCE.10 As Magie suggests in his study of Agrippa's first independent imperium since the triumviral era,11 he was probably sent on a discreet diplomatic mission to pave the way for Phraates' return of the Roman standards and men taken in the defeats suffered by Crassus and M. Antonius.12 Although Marcellus was the clear favourite, the very beginnings of Tiberius' public life show that this did not prevent Augustus from rapidly advancing the careers of Livia's sons, too.13 In 24, at the same time it granted the aforementioned privileges to Marcellus, the Senate also decreed that Tiberius, then in his nineteenth year, could stand for each office five years in advance of the age prescribed by law, after which he was at once elected quaestor.14 In this capacity, he assumed his first major public commissions well before Marcellus' untimely death late in 23, as he was put in charge of the faltering grain supply as well as a wide-ranging investigation into malpractices in Italy's slave-prisons.15 According to Velleius Paterculus, Tiberius acted on his stepfather's direct orders and impressed with the way in which he relieved the corn scarcity at Ostia and in Rome.16 Since Augustus himself would assume the powerful cura annonae in 22 in a sort of carefully planned 'coup de théâtre', Tiberius' real brief clearly was to set the stage for a flawless and efficacious takeover by his stepfather.17 In 22, Tiberius also played a notable role in the resolution of another delicate crisis of the greatest importance, as he personally arraigned Fannius Caepio, who had conspired with L.
Licinius Varro Murena against Augustus, before the quaestio maiestatis, easily securing his condemnation in absentia.18 Throughout 23 and 22, Tiberius thus acted as Augustus' close confidant and trusted aide in difficult and important matters of state, conspicuously displaying his loyalty to Rome's new strongman.19 The strongest possible indication that the premature death of Marcellus would boost the fortunes and positions of Livia's sons, however, was to be Tiberius' next brief, a commission of the utmost importance to his stepfather. Tiberius' success in Armenia probably earned Augustus his ninth imperatorial salutation.23 Thanks to Suetonius (Tib. 9.1), we furthermore know that the young prince, who was twenty-one years of age at the time, played a prominent role in what Augustus was quick to sell as the glorious highlight of his Parthian settlement,24 viz. the return of the standards lost in the defeats and reverses suffered by M. Licinius Crassus (cos. 70, 55) in 53, L. Decidius Saxa in 40, and M. Antonius (cos. 44, 34, des. 31; triumuir r.p.c. 43-30) in 36, an event that in all likelihood took place shortly after he had put Tigranes (III) on the Armenian throne:25 dein ducto ad Orientem exercitu regnum Armeniae Tigrani restituit ac pro tribunali diadema imposuit. Recepit et signa, quae M. Crasso ademerant Parthi. "then [i.e., after his first stint of military service in Spain] he led an army to the Orient and restored the throne of Armenia to Tigranes, crowning him on the tribunal. He also recovered the standards which the Parthians had taken from Marcus Crassus." All other sources invariably feature Augustus as the recipient of the standards.26 This impression is reinforced by Augustus' Res Gestae, where he carefully distinguishes between the Parthian and Armenian settlements. On the one hand, he in 27.1 duly acknowledges Tiberius' assistance in sorting out the situation in Armenia Maior: 23 RIC I 2 Aug. nos. 518-520 = BMC 1 Aug. nos. 671-678 (comp.
679-682, 703). Mommsen 1883, 13 and Barnes 1974, 21 attribute this salutation to the Parthian settlement. Syme 1979, 310, however, believes the salutation was for the joint settlements, as the eastern mints celebrated both signis receptis and Armenia capta in conjunction with Augustus' ninth salutation. I am inclined to follow Combès 1966, 461; Ritter 1978, 380 f. and Rich 1998, 76 f., who argue that the salutation was triggered by the recovery of Armenia: this settlement had more of a military aspect, as Tiberius invaded the kingdom at the head of an army, and the appearance of IMP VIIII on Pergamene denarii celebrating the recovery of Armenia may imply that this success occasioned the salutation; comp. also BMC loc. cit., nos. 676-678. Kienast - Eck - Heil 2017, 58 tentatively date Augustus' ninth salutation to "12. Mai (?) 20 v. Chr. (vgl. Ovid. Fasti 5, 545 ff.)". 24 For the fact that Augustus made wide use of literary, numismatic and artistic means to propagate his bloodless 'victory' over the Parthians, see Rich 1998, 73. 25 On the basis of Ovid, Fasti 5.545-598, Barnes 1974, 21 and Levick 1976, 234 n. 38 suggest 12 May 20 as the day these standards were recovered. As cogently argued by Rich 1998, 83-85, however, Ovid here refers to 12 May 2 BCE as the day the temple of Mars in the Forum Augustum was dedicated, not coincidentally the day of the (only) festival of Mars Ultor. 26 Strab. 16.1.28; Vell. 2.91.1; Suet. Aug. 21.3; Dio 54.8. Whereas the summaries of Books 136 and 137 are missing, Liv. Per. 141 mentions the return of the standards lost by Crassus and M. Antonius out of context, under the years 11 and 10 BCE, with no mention of either Augustus or Tiberius. Armeniam maiorem interfecto rege eius Artaxe, c[u]m possem facere prouinciam malui maiorem nostrorum exemplo regn[u]m id Tigrani, regis Artauasdis filio, nepoti autem Tigranis regis, per T[i(berium) Ne]ronem trade[r]e qui tum mihi priu[ig]nus erat.
"Greater Armenia I might have made a province after its king Artaxes had been killed, but I preferred, following the model set by our ancestors, to hand over that kingdom to Tigranes, son of King Artavasdes and grandson of King Tigranes, with the assistance of Tiberius Nero, who was then my stepson." On the other, he unflinchingly takes sole credit for the return of the standards captured by the Parthians in R. Gest. div. Aug. 29.1-2: Signa militaria complura per alios duces amissa devictis hostibus reciperavi ex Hispania et Gallia et a Dalmateis. Parthos trium exercitum Romanorum spolia et signa reddere mihi supplicesque amicitiam populi Romani petere coegi. Ea autem signa in penetrali, quod est in templo Martis Vltoris, reposui.27 "By victories over enemies I recovered in Spain and in Gaul and from the Dalmatians several standards lost by other commanders. I compelled the Parthians to restore to me the spoils and standards of three Roman armies and to ask as suppliants for the friendship of the Roman People. Those standards I deposited in the innermost shrine of the temple of Mars Ultor." Given that Tiberius' eulogist Velleius Paterculus, too, remains silent on the issue, some scholars believe Suetonius to be mistaken.28 If, however, we accept that Augustus granted Tiberius the signal honour of receiving the standards on his behalf, in his presence and that of his legions, there need be no contradiction whatsoever.29 27 As in all my recent work, I have used (and much benefitted from) John Scheid's outstanding edition of the Res Gestae (2007, Paris - Les Belles Lettres). 28 So, for example, Gelzer 1918, col. 481; Krämer 1973, 363; Levick 1976, 234 with n. 38; Woodman 1977, 98 n. 2; Syme 1978, 32; Rich 1990, 181 and 1998, 77 n. 24. Cooley 2009 remains silent on the issue. 29 Seager 1977, 201 f. mounts a convincing argument to accept Suetonius' version of events, amongst other things pointing to Woodman's discerning observation (1977, 101) that "there is almost certainly a lacuna in [Vell. Pat.] 2.94.4 at precisely the point where a mention of the recovery of the standards would appear".
In my view, however, this conclusion runs counter to Seager's argument in 2005, 14 that Velleius exaggerates Tiberius' brief and that his mission was strictly limited to Armenia. That Velleius' narrative of Tiberius' activities in 20 BCE is not infallible is also shown by the fact that he mistakes Tigranes III for Artavasdes II in 2.94.4 (cf. supra p. 127). Kienast - Eck - Heil 2017, 70 likewise deem it possible that Tiberius received the signa in 20: "Empfang der Partherfeldzeichen?" ("receipt of the Parthian standards?"). Dio's summary of what happened in the eastern Mediterranean and along the eastern frontier in 21 and 20 likewise centres almost entirely on Augustus. After narrating his activities in Sicily, Greece, Asia, Bithynia and Syria in 22, 21 and 20 and Phraates' decision to return the lost standards, Dio briefly digresses on the ensuing Parthian honours and some other business in the City (e.g. the birth of Gaius Caesar).30 Only as he resumes his survey of Augustus' arrangements for the Eastern provinces does Dio recount how Augustus decided to send Tiberius to Armenia Maior, because the Armenians there had denounced Artaxes and sent for his brother Tigranes, who was in Rome, with the brief to expel the former and restore the latter to the throne. Although Dio downplays Tiberius' achievements, he nonetheless provides us with a further useful clue as to his official position: καὶ ἐπράχθη μὲν οὐδὲν τῆς παρασκευῆς αὐτοῦ ἄξιον· οἱ γὰρ Ἀρμένιοι τὸν Ἀρτάξην προαπέκτειναν· ὁ δ᾿ οὖν Τιβέριος, ἄλλως τε καὶ ἐπειδὴ θυσίαι ἐπὶ τούτῳ ἐψηφίσθησαν, ἐσεμνύνετο ὡς καὶ κατ᾿ ἀρετήν τι ποιήσας. καὶ ἤδη γε καὶ περὶ τῆς μοναρχίας ἐνενόει, ἐπειδὴ πρὸς τοὺς Φιλίππους αὐτοῦ προσελαύνοντος θόρυβός τέ τις ἐκ τοῦ τῆς μάχης χωρίου ὡς καὶ ἐκ στρατοπέδου ἠκούσθη, καὶ πῦρ ἐκ τῶν βωμῶν τῶν ὑπὸ τοῦ Ἀντωνίου ἐν τῷ ταφρεύματι ἱδρυθέντων αὐτόματον ἀνέλαμψε. Τιβέριος μὲν δὴ ἐκ τούτων ἐγαυροῦτο. "Tiberius accomplished nothing worthy of his preparations, for the Armenians killed Artaxes before he arrived.
However, he gave himself airs as though he had achieved some feat of valour, especially as supplicationes (θυσίαι) were voted for the event. The thought that he might attain the monarchy had already occurred to him by now, for, when he was approaching Philippi, a noise like that of an army was heard from the battle-site and fire flared up of its own accord from the altars which Antonius had set up in the camp entrenchment. So Tiberius was exultant over these events."31 Since the sources regrettably maintain a deafening silence on Tiberius' official position as he led his legions East in 20 BCE, those few scholars who hazard a guess assume that he did so as a legatus Augusti pro praetore.32 In my view, there are nonetheless serious grounds for abandoning this well-entrenched view in 30 Dio 54.3.4-8, 4.2-4, 6.1, 7-8; comp. Dio 53.33.1-2; continuation of Augustus' reorganization of the East: Dio 54.9. On Augustus' Parthian honours, see Rich 1998. 31 Quoted from Dio 54.9.5-7. Dio's representation leaves no doubt that these supplicationes were voted in his name, too, and not only in that of summus imperator Augustus; comp. also 54.24.7, where Dio similarly records that "supplicationes were made in Agrippa's name" (καὶ ἐπ᾿ αὐτοῖς θυσίαι μὲν τῷ τοῦ Ἀγρίππου ὀνόματι ἐγένοντο) for his success against the Bosporans in 14 BCE. 32 See, for example, Levick 1976, 26 ("no doubt with the title legatus Augusti pro praetore"); Kuttner 1995, 123 ("the special legate responsible for bestowing the kingship of Armenia on Rome's candidate for rule there"); and Hurlet 1997, 84 n. 35: "La fonction de légat n'est pas attestée par les sources, mais elle ne fait aucun doute." ("The function of legate is not attested by the sources, but it is not in any doubt.")
Bleicken 1998, 357 even suggests that Tiberius "die Interventionsarmee von sechs Legionen ohne Zweifel unter dem Kommando erfahrener Legaten befehligte" ("doubtless commanded the intervention army of six legions through experienced legates") favour of Weingärtner's clever conjecture that Tiberius undertook his first major military command as a proconsul.33 First, it is important to consider that he would shoulder an immense responsibility in achieving a mission that was of the utmost importance to his stepfather. The young prince would, therefore, need every bit of official authority and dignity Augustus could muster on his behalf, especially as Tiberius would be dealing directly with the royalty of Armenia and the Parthian Empire. Second, one should not forget that in terms of dynastic hierarchy, Tiberius now ranked second only to Agrippa, who had received an extraordinary proconsulship in 23 and went on to marry Iulia in 21 BCE, at the behest of Augustus himself.34 In both these respects, it is well worth pointing to a couple of close parallels. In 1 BCE, the Armenians rebelled and promptly received Parthian support. Augustus, greatly distressed at this news, eventually decided to send out his (grand)son Gaius Caesar (cos. 1 CE), as Tiberius was still in Rhodes and he "did not dare send any other influential man" (ἄλλον δέ τινα πέμψαι τῶν δυνατῶν οὐκ ἐτόλμα). Amongst other things, Augustus promptly invested Gaius with full imperium pro consule so as to make sure he would go to Syria with the necessary authority.35 Although the Armenians themselves would not give in without a fight,36 the Parthian king Phrataces quickly came to terms with the Romans, "hearing that Gaius was in Syria and holding the consulship" (τὸν Γάιον ἐν τῇ Συρίᾳ ὄντα καὶ ὑπατεύοντα).37 That Gaius' position was further enhanced with an eponymous consulship in 1 CE should not surprise at all, since Dio explains that he was deemed still "young and inexperienced in public affairs" when entrusted with the Armenian question38 and happened to be Augustus' grandson and adopted son.
Some five years later, in 4 CE, a more or less similar situation would again force Augustus' hand in taking consequential decisions to shore up his regime. After the untimely demise of Gaius and Lucius Caesar and faced with the outbreak of war in Germany, the ageing princeps decided to adopt Tiberius, who was promptly entrusted with the command against the Germans as well as invested with a decennial grant of tribunicia potestas.39 In a further move to strengthen the position of his prospective adoptive father, Tiberius himself had first adopted Drusus' eldest son Nero Claudius Drusus, who assumed the name of Germanicus Iulius Caesar following Augustus' ensuing adoption of Tiberius. Only now, so we are told by Dio, did Augustus take courage, "feeling that he had successors and supporters", and set about reorganizing the Senate once more.40 38 Dio 55.10.18: in contrast to his appraisal of Tiberius in 21, Augustus in 1 BCE deemed both Gaius and Lucius Caesar young and inexperienced in public affairs (νέοι καὶ πραγμάτων ἐτύγχανον ἄπειροι). 39 Velleius 2.103.1-3 records that Tiberius returned to Rome in the consulship of P. Vinicius (ord. January-June 2 CE), before the death of L. and C. Caesar, and that he was adopted on 26 June 4 CE. Velleius' narrative suggests that Tiberius had received the tribunicia potestas after Gaius' death but shortly before his adoption, a possibility that cannot be ruled out altogether. For Tiberius' adoption being announced as rei publicae causa by Augustus, in contrast to his simultaneous adoption of Agrippa Postumus, see Vell. 2.104.1-2. Like Dio, Velleius recounts in 2.104.2 that Tiberius was promptly sent to Germany following his adoption. According to Velleius, the war in Germany had broken out three years before and had already earned M. Vinicius (suff. 19) the ornamenta triumphalia. 40 Tacitus (Ann. 1.3.5: Germanicum […] per adoptionem a Tiberio iussit), Suetonius (Tib.
15.2: coactus prius ipse Germanicum fratris sui filium adoptare), and Dio 55.13.1a-3 all suggest that Augustus compelled Tiberius to adopt Germanicus before he would adopt Livia's only surviving son. Dio further suggests that Tiberius' own adoption was due to Augustus' old age, the outbreak of war in Germany and his daughter Iulia's influence(!), and that he forced Tiberius to adopt his nephew because he feared Tiberius too "would lose his poise somehow or other" and feared he might even commence a rebellion. Although this representation is often adopted uncritically by modern scholars (e.g. Hurlet 1997, 165 and Eck 1998, 111), Levick 1966 (esp. 227-233) and 1972a (comp. also 2010 f.) explodes this representation, arguing amongst other things that Tiberius was legally debarred from adopting anyone after his own adoption. In my view, it simply strains belief that Augustus would have had Tiberius adopt Germanicus against his will. Tiberius' adoption of Drusus' eldest son doubtlessly was the result of a gentleman's agreement made between him and the ageing Augustus and served to kill two birds with one stone: yet another signal posthumous indication of Tiberius' (and, for that matter, Augustus') undying affection for his late sibling (in 10 CE, Tiberius had restored and dedicated the temples of Concordia as well as that of Castor and Pollux in his own name and that of his late brother: Suet. Tib. 20; Dio 56.25.1 and Ovid. Fast. 1.637) as well as a significant reinforcement of the Augustan dynasty, since Tiberius could now boast two adult sons rather than one, a rationale recognized by Tacitus (loc. cit.: sed quo pluribus munimentis insisteret: "but so that he might depend on additional bulwarks") - Levick 1976, 240 n. 7 rather needlessly dismisses the suggestion (made by Timpe 1962, 29 and Seager 1972, 37 on the strength of Ann. 1.3.5) that the adoption of Germanicus was intended to give additional security to Tiberius since this certainly was an additional benefit of the adoptive arrangements of 4 CE). For a forceful and compelling reappraisal of the relationship between Tiberius and Germanicus Caesar following the death of Augustus, see Drogula 2015, whilst Levick 1976 observes that "Germanicus' relations with Drusus Caesar […] remained untroubled to the end of his life".

Third, the fine detail provided by Dio (supra p. 130) indicates that the supplications decreed on account of the successful investiture of Tigranes in Armenia were decreed on behalf of Tiberius, too, and not just in honour of Augustus as holder of the summum imperium auspiciumque.41 Traditionally, only holders of independent imperium auspiciumque qualified for such honours as supplications, imperatorial salutations, ovations and triumphs, and this was after all a time when Augustus was trying hard to administer the provinces more maiorum.42 In light of these considerations, it is, therefore, far more likely that Augustus himself or, perhaps more probably, the consuls of 21, M. Lollius and Q. Aemilius Lepidus, passed a law investing Livia's oldest son with full consular imperium. In all likelihood Tiberius' first extraordinary consulare imperium was modelled on that granted to Agrippa in the late spring of 23, and this in terms of both duration and scope. As such, he would have received it in quinquennium, rather than in annum or ad tempus incertum.43 Consequently, Tiberius would continue to hold his consular imperium from 1 January 20 to 31 December 16. On the analogy of Germanicus' authority under the (in all likelihood consular) law documented in the Senatus Consultum de Cn. Pisone Patre (of 10 December 20 CE), Augustus' imperium would certainly have been defined as maius quam in any official business vis-à-vis that of Tiberius.44 However, since Agrippa's imperium was probably not statutorily defined as maius quam that of the regular proconsuls before possibly 18 and certainly 13 BCE,45 Tiberius' imperium would not have been legally maius quam that of the proconsuls in charge of the individual public provinces.

41 This vote of supplications on account of events in Armenia further substantiates the suggestion that Augustus had received his ninth imperatorial salutation on account of this victory: cf. supra n. 23. 42 Comp. Dio 54.9.1 (20 BCE). For independent imperium, complete with the ius auspicii, being condition sine qua non for supplications, imperatorial acclamations and triumphal honours, see Vervaet 2014, 78-93. For Augustus' position that his solution of the Armenian question aligned with ancestral precedent, see R. Gest. div. Aug. 27.1, quoted in full supra p. 129. 43 The first legally guaranteed quinquennial command had been created by virtue of the Vatinian Law on behalf of Caesar in the early months of his first consulship in 59 BCE: see MRR, 190. For the historic precedent, see Vervaet 2011. 44 For the evidence on the superiority of Tiberius' imperium vis-à-vis that of Germanicus as per the statute law recorded in the S.C. de Cn. Pisone patre, see the quote in n. 47. 45 Cf. n. 90 f. infra.
Rather, he would have been entitled to the summum imperium in whatever province he found himself in all affairs pertaining to his prouincia, his official administrative brief, and this either e lege, ex iussu Augusti, or ex s.c., on the formal proviso that Augustus' imperium invariably remained maius.46 In all likelihood, the law creating Tiberius' imperium would also have left the matter of his prouincia(e) entirely to the discretion of Augustus and the Senate, who could define and redefine his operational brief as dictated by circumstances and expediency.47

46 It follows from this argument that I do not accept Eck's suggestion in 2018a, 28 f. that the clause in the law empowering Germanicus which stipulated the unqualified superiority of Tiberius' imperium vis-à-vis that of Germanicus (see the following note) was tantamount to the "rechtliche Fixierung der Angst des Tiberius vor Germanicus" (i.e., triggered by the attempt of the Rhine legions in 14 CE to make Germanicus emperor) and therefore unprecedented. For an earlier version of this argument, see also SCPP, 160, where it is argued that the de facto relationship that had existed earlier between for example Augustus and Tiberius and Tiberius and Germanicus (i.e., resulting from the latter's investiture in 14 CE, cf. n. 123 infra) was now for the first time hardened into statute law because of what had happened in Germany in 14-16 CE: "Aber möglicherweise war durch die Vorgänge in Germanien in den Jahren 14-16 das Bedürfnis entstanden, das faktische Hierarchieverhältnis auch in rechtliche Formen zu kleiden." Neither do I believe that the statute constituting Germanicus' imperium was passed on the occasion of Tiberius' decision to send Germanicus to the transmarine provinces, as surmised in e.g., SCPP, 159 (comp. also Kienast - Eck - Heil 2017, 73 f.: Germanicus received his first imperium maius quam on "17 Sept. 14" and his second in "Herbst 17") and Hurlet 1997, 190 f. (comp. 568).
Whilst Germanicus' official brief was indeed formally (re)defined in 17 by decree of the Senate (see the ensuing note), he was still holding the quinquennial imperium he had assumed in 16 CE: as recorded (rather inaccurately) in Tac. Ann. 1.14.3 (quoted in n. 48 infra), Tiberius shortly after Augustus' decease and his own accession asked SPQR for a cumulative second quinquennial imperium (i.e., commencing in January 16, after the expiry of the first one granted in 10 CE and beginning in January 11 CE: Dio 56.25.2). See Dio 54.12.4-5 (quoted in n. 65 infra) for the fact that Augustus had received two consecutive and cumulative quinquennial extensions of his provincial command in 18 BCE.

47 For a substantial discussion of the stages of investiture with imperium (i.e., extra ordinem, outside of the regular cursus honorum) and prouinciae, see Hurlet 1997, 240-277 (comp. 190-195 / um Ti. Caesari Aug(usto) quam Germanico Caesari esset (etc.) - "who, when he should have remembered that he had been given as an assistant to Germanicus Caesar (who had been sent by our princeps in accordance with the authority of this order to settle overseas affairs that required the presence of either Ti. Caesar Augustus himself or of one or the other of his two sons), ignoring the majesty of the imperial house, and also ignoring public law - having been attached to a proconsul and indeed to a proconsul about whom a law was put before the People providing that in whatever province he entered he had greater imperium than the proconsul (officially) in charge of that province, with the proviso that in every case Ti. Caesar had greater imperium than Germanicus Caesar (etc.)". Comp. also Tab(ula) Siar(ensis) frg. 1, 23-24 (after Lebek 1991, 114): [in iis regionibus, quarum] curam et tutelam Germanico Caesari ex auctori[tate senatus mandasset] and Tac. Ann.
2.43.1: tunc decreto patrum permissae Germanico prouinciae quae mari diuiduntur, maiusque imperium quoquo adisset, quam iis, qui sorte aut missu principis obtinerent. On the discrepancies between Tacitus' representation and the text of the SCPP, see SCPP, 116 and Hurlet 1997, 195-197. That Tiberius would also continue to issue precise directives (mandata) to Germanicus concerning the exercise of his provincial mandate is attested in, e.g., Tab. Siar. frg. 1, 21-22 (AE 1984, 508).

48 Like Agrippa before him in 23 BCE, and - as will be discussed shortly - a few years after him also his younger brother Drusus, and later also Gaius Caesar, Germanicus (cf. infra p. 158 f.), and Drusus Caesar (comp. Ann. 2.44.1-2, 64.1; 3.11.1, 19.2-3) and even L. Aelius Seianus (Dio 58.7.4, 31 CE). Following amongst others Levick 1966, 240, Syme 1979, 325, and Hurlet 1997 for some further scholarship in this sense - comp. also Pettinger 2012, 187 n. 10) and 571, I at first believed (see Vervaet 2014, 85) that Drusus Caesar (ord. 14, II 21) received his first quinquennial consular imperium at some point in the spring or summer of 17 CE. However, as his career closely - and no doubt intentionally - mirrored that of his slightly older adoptive brother Germanicus Caesar (ord. 12, II 18), his first quinquennial imperium was probably awarded at some point in 13 CE (perhaps when Tiberius' imperium was substantially enhanced, on which see Vervaet 2014, 272 f.) and would have spanned the years 14-18 CE. Tacitus' representation of events in Ann. 1.14.3 (At Germanico Caesari proconsulare imperium petiuit, missique legati qui deferrent […] Quo minus idem pro Druso postularetur, ea causa quod designatus consul Drusus […] erat) betrays his confusion by the fact that Tiberius only asked for a second quinquennial imperium for Germanicus Caesar in September 14 CE, causing him to believe that this was Germanicus' first such grant and that Drusus was yet to receive this authority, and to explain the seeming omission of a similar request for the latter on the grounds that he was consul designatus - comp. also the observations in n. 46 supra and 123 f. infra. That Tiberius at this stage only asked for a further quinquennium for Germanicus can be explained in that the latter's first quinquennium was to expire in December of the next year, whereas Drusus Caesar was only into the first year of his (first) quinquennium. An early and cumulative renewal of Germanicus' authority would have had the advantage of strengthening the position of Tiberius and the domus Augusta at a tricky time of transition and amounts to a clever exploitation of the young prince's popularity.

50 Vervaet 2014, 265-275 and n. 156 of p. 261. Nero Caesar doubtlessly received these additional prerogatives as part of the senatus consulta passed in the afternoon of 13 October 54 following his acclamation by the praetorian cohorts: sententiam militum secuta patrum consulta (Ann. 12.69.2). At all events, we are told in Suet. Nero 7.2 that, following the measures of 51 CE, Nero led a parade of the praetorian guard, shield in hand, and Tacitus (Ann. 12.41.2) further adds that he subsequently appeared in the pompa circensis in triumphal dress, decked out like an imperator (Nero triumphali ueste […] decore imperatorio) - on this episode and the circus procession as dynastic ceremony in the court of Claudius, see G. Sumi (forthcoming). For a grant of imperium similar to that held by Tiberius following the measure of 12 or 13 CE, comp. also SHA Ant. Pius 4.6-8: adoptatus est V. kal. Mart. die, in senatu gratias agens quod de se ita sensisset Hadrianus, factusque est patri et in imperio proconsulari et in tribunicia potestate collega - "He was adopted on the fifth day before the Kalends of March [i.e., 25 February 138 CE], while returning thanks in the Senate for Hadrian's opinion concerning him, and he was made colleague to his father in both the proconsular and the tribunician power." This brief analysis of the imperium of Tiberius (and others before and after him) significantly qualifies the views of, amongst others, Béranger 1953, 95 f., who is adamant that such imperium "n'est valable que hors la Ville", and not, as opposed to the imperium proconsulare of the emperor, "à l'intérieur du pomérium"; comp. also id. 1980 (where Béranger develops an artificial distinction between the proconsulare ius = the limited imperium extra urbem proconsulare and the full proconsulare imperium of the emperors) and Syme 1958, 409 n. 3: "In 51 the boy Nero was granted proconsulare imperium extra urbem (XII. 41. 1). Now proconsular imperium by its very nature ought to be valid only extra urbem. The addition of the phrase in this passage implies that the central imperial power was normally conceived as a proconsular imperium which had been domiciled and legitimized there." Koestermann 1967, 178, for his part, seems somewhat surprised that the term "proconsulare imperium extra urbem wird in keinen Inschriften erwähnt". Since it concerns a literary paraphrase rather than a technical term this is entirely unsurprising.

51 Tac. 53 It is quite likely that Domitianus' consular imperium too was granted in quinquennium with the proviso that he could use it intra urbem for the exercise of his duties as urban praetor. By virtue of this unprecedented measure, he was thus authorized to exercise his urban praetorship on a par with the regular consuls, exercising consular rather than praetorian imperium, whereas Tiberius would have exercised his praetura urbana with praetorian imperium, since he was not legally authorized to exercise any magisterial, 'urban' prerogatives by virtue of his extraordinary proconsulship, largely intended for extra-urban use in the provinces as per the instructions of the Senate and Augustus (cf. supra n. 47). 54 Dio 54.19.1-6, esp. 6: τὸν δὲ δὴ Τιβέριον καίτοι στρατηγοῦντα παραλαβὼν ἐξώρμησεν. ἐστρατήγησε γὰρ καίπερ τὰς στρατηγικὰς τιμὰς ἔχων· καὶ τήν γε ἀρχὴν αὐτοῦ πᾶσαν ὁ Δροῦσος ἐκ δόγματος διήγαγεν. On Maecenas' position in this period, see now Mountford 2019, 22 f., 40-42 and 58-62. 55 Dio 54.18.1-2. 56 See Thomasson 1991, 33 for the fact that, during his triennial stay in Gaul, Augustus oversaw its division into three separate provinces, Aquitania, Gallia Lugdunensis and Belgica, each administered by a legatus Augusti pro praetore. In the context of a discussion of Germanicus' imperium in 14 CE, Syme 1979, 320 discerningly observes that "he held authority over Tres Galliae. Proconsular imperium was requisite". By analogy, the same was true for Tiberius in Gaul in 16 BCE.

"would possess the imperium of a proconsul outside the City as consul designate and further would hold the title Prince of Youth."49 Tacitus' careful wording strongly suggests that Nero's (doubtlessly quinquennial) consulare imperium did not entail any of the prerogatives strictly tied to the nomen consulis, the consulship proper, such as the ius habendi senatus and agendi cum populo. The imperia of special proconsuls like Agrippa, Tiberius and Nero could, however, be carried intra urbem and, provided they received express authorization ex s.c./iussu principis, used to command troops or exercise any other proconsular prerogatives there, but no more. Here lies the key distinction with the privileged consular imperium of Augustus, who had ever since its enhancement by virtue of the measures of 19 BCE possessed all the prerogatives of the consuls as well as the continuous summum imperium auspiciumque in the City in the capacity of proconsul, symbolized by the right to carry the fasces anywhere and any time and sit between the consuls of the day. This also explains why Tiberius convened the Senate shortly after Augustus' decease by virtue of his tribunicia potestas rather than his enhanced consular imperium: whereas his imperium had indeed been made equal to that of Augustus in omnibus provinciis in 12 or 13 CE, he still lacked the extraordinary prerogatives added to Augustus' imperium in 19.50 At some point early in 19, shortly after he had designated Q. Lucretius to the vacant consulship he himself had refused to accept, Augustus had SPQR confer praetorian rank on Tiberius and grant Drusus the right to stand for the various offices five years earlier than was legally permissible.51 In 17, Tiberius was nonetheless elected to hold the office of praetor urbanus, a magistracy he would thus combine with the consular imperium he could only lawfully exercise in his designated prouincia of the moment, if any.52 Much later, in 70 CE, Domitianus Caesar would find himself in a not entirely dissimilar position when he received the praetura urbana consulari potestate - the main difference being that he would have been authorized to exercise his enhanced imperium in his capacity as urban praetor.53 After Tiberius and Drusus had been allowed by the Senate to stage gladiatorial combats on his behalf at some point early in 16, Augustus entrusted the City and Italy to Ti. Statilius Taurus (suff. 37, II ord. 26) (since he had sent Agrippa to Syria again and had - allegedly - fallen out with Maecenas over the latter's wife) and departed for Gaul, on the pretext of the renewed outbreak of war there. He also decided to take Tiberius with him, despite the fact that he was holding the praetorship. By decree of the Senate, Drusus was authorized to carry out all the duties of his brother's office, acting as a result as a sort of pro praetore urbano.54 These events further underscore that Tiberius came second only to Agrippa and that Drusus' star, too, was now rising rapidly, regardless of the birth of Agrippa's second son Lucius early in 17 BCE and Augustus' prompt - and highly significant - decision to adopt both Gaius and Lucius "as successors to his powers, so that there might be less plotting against him"55. As regards the precise nature of Tiberius' position in Gaul in between his service in the East (supra) and his campaigns against the Raeti and the Vindelici (infra), the only tangible clue is offered by Suetonius in Tib. 9.1. Here it is recounted that he governed Gallia Comata for about a year, which was in a state of unrest through German inroads and the dissensions of its chiefs at the time: Post hoc Comatam Galliam anno fere rexit et barbarorum incursionibus et principum discordia inquietam. It arguably strains belief that Tiberius would have governed the whole of Gallia Comata on behalf of Augustus in the mere capacity of legatus Augusti pro praetore.56 In this respect, it is also well worth noting that, in 20 BCE, the proconsul Agrippa had likewise briefly been charged with the command of Gaul. According to Dio, Agrippa had been sent there from Rome in order to complete the task of quelling internal unrest and ending Germanic harassment before he was tasked with the command against the Cantabrians.57 Eck, however, cogently argues that Agrippa's commission in Gaul was to commence preparations for Augustus' (and his) grand plan to secure Italy's northern and eastern periphery for once and all by conquering transrhenian Germania and expanding Roman power in Illyricum all the way to the Danube, in the aftermath of Augustus' conquest of northern Hispania in 25 BCE.
Amongst other indications, Strabo (in 4.6.11) records that Agrippa built four major roads from Lugdunum, chosen because of its strategic and central location: one to Aquitania, another to the Rhine, a third to the Channel coast and a fourth to the Massilian seaboard.58 Since Tiberius' brief in Gaul must have been very similar to Agrippa's, furthering his work, and both men operated under Augustus' overarching summum imperium auspiciumque59, it is not unreasonable to suggest they both held the rank of proconsul.

57 Dio 54.11.1-2. Although Dio recounts this in the course of his narrative of events of 19 BCE (which spans 54.10.1 to 12.3), it clearly concerns a commission held between his conduct of affairs in Rome and his stint in Spain to crush the Cantabrian revolt. In 21, Agrippa had indeed been sent to Rome from Sicily to smother the civil strife surrounding the election of a colleague for the consul M. Lollius, who alone had assumed that office following Augustus' refusal to accept his election to the office: Dio 54.6. Following his Roman and Gallic commissions, Augustus in 19 decided to send Agrippa to Spain to crush the final uprising of the Cantabri: Dio 54.11.1-6. 58 Eck 2018a, 8 f., 2018b, 131 f. Agrippa also resettled (part of) the Ubians on the western shores of the Rhine, and probably oversaw the relocation of a number of legions into Gaul, some coming from Hispania after the end of major warfare there, some of which were stationed along the Rhine. Comp. also Roddaz 1984, 389-394 and Dalla Rosa 2015, 464: "Il est hors de doute qu'Auguste et Agrippa avaient élaboré un plan pour la conquête de ce territoire, car autrement on ne pourrait pas expliquer les efforts coordonnés et systématiques sur les fronts du Rhin, des Alpes et du Danube dans les années 19-18 av. J.-C." For a further number of scholars who argue that Augustus decided on a great expansion in central Europe as early as c. 20 BCE, see Rich 2012, 76 n. 103 and Dalla Rosa 2015, 464 n.
2 - Wolters 2017, however, believes the movement of troops to the Rhine occurred in the aftermath of the clades Lolliana. Dalla Rosa (op. cit. 475) plausibly suggests that the sweeping consolidation of Rome's northern frontiers provided due justification for the renewal of Augustus' vast provincial command in 18. Rich 2012, 75 f., however, argues that Augustus would have requested another ten-year extension of his provincial command in 18, rather than the five years he then asked for, if he had already been planning on conquering Germany at that time, and suggests that he only changed his plans, including a request for an additional five years, following the invasion of Gaul by the Sugambri, Usipetes and Tencteri in 16. Cf. infra p. 140 f. (with n. 65) for some further discussion of the issue of, and motives for, the staggered renewal of Augustus' decennial provincial command in 18 and shortly thereafter. 59 See Vervaet 2014, 253-288.

The rationale for Augustus' speedy adoption of Agrippa's sons early in June 17 BCE60 invites closer scrutiny of the political constellation at the time of Tiberius' appointment to, and his tenure of, an extraordinary quinquennial proconsulate. A careful look at the precise circumstances of this period indeed lends further credibility to the suggestion that the domestic situation, too, offered good reasons for Augustus to have Tiberius invested with consular imperium, on the model of what had already been decreed in 23 on behalf of Agrippa. As recorded in Cassius Dio, the years 22-18 were marked by political upheaval and real or alleged conspiracies. After Augustus had assumed the cura annonae in 22 amidst popular riots, the consular elections for both 21 and 19 were marred by violence and civil strife.61 In 21, the electoral violence was such that Augustus decided to commission none other than the proconsul Agrippa with the task of restoring law and order in the City.
Whilst he succeeded in quelling these disorders, even he failed to check a disturbance about the appointment of a praefectus feriarum Latinarum, resulting in the post remaining vacant for the entire year.62 In addition, the years 22 and 18 saw political trials and conspiracies, probably triggered by Augustus' assumption of lifelong and therefore plainly autocratic powers and privileges in 23 and 19.63 In 22, M. Primus stood trial for having attacked the Thracian Odrysae as proconsul of Macedonia and brazenly claimed he had been authorized to do so by both Augustus and the late Marcellus. Primus' counsel Licinius Murena stood in open defiance of Augustus and quite a few of the jurors even voted for acquittal. This incident was followed by - and may well have been the immediate cause of - the conspiracy of Fannius Caepio, who was joined by others, allegedly also Murena. All of these men eventually fled and were tried and convicted in absentia, only to be killed shortly after. Augustus reportedly incurred sharp criticism for allowing supplications to be voted and carried out "as though for a victory"64. At the end of his summary of events of 19, Dio furthermore recounts that Augustus was in fear of conspiracy and therefore felt that the breastplate which he often wore under his robe, even in the Senate, provided insufficient protection. "Accordingly," Dio goes on to explain, "he first took the leadership for a further five-year term, since his ten-year term was now about to expire (for these developments took place in the consulship of Publius and Gnaeus Lentulus), and then gave Agrippa various powers almost equal to his own, including the tribunician power for the same period.

60 Hurlet 1997, 113, 428
He said then that this number of years would be enough for them, but not long afterwards he took the other five years of the imperial power, so that the total became ten again."65 His subsequent lectio senatus of 18 BCE, carried out in particularly difficult and tense circumstances, reportedly triggered a great many accusations of conspiracy against Augustus or Agrippa, whether true or false.66 The unambiguous association between continued opposition and Augustus' decision to shore up the position of Agrippa is important for the sake of this argument. As for Agrippa: he had already become so powerful that Maecenas in 21 reportedly advised Augustus that he had "made him so great that he must either become your son-in-law or be killed"67. Significantly, Agrippa's position as second-in-command had also caused Marcellus enough grief for Augustus to decide to send the former to Syria in the late spring of 23, "for fear that, if they remained in the same place, some quarrel or altercation might occur". Agrippa, however, reportedly kept a low profile, as he instead decided to stay in Lesbos and sent out his legates to administer the province.68 65 Dio 54.12.3-5: πρῶτον μὲν αὐτὸς πέντε τῆς προστασίας ἔτη, ἐπειδήπερ ὁ δεκέτης χρόνος ἐξήκων ἦν, προσέθετο (ταῦτα γὰρ Πουπλίου τε καὶ Γναίου Λεντούλων ὑπατευόντων ἐγένετο), ἔπειτα δὲ καὶ τῷ Ἀγρίππᾳ ἄλλα τε ἐξ ἴσου πῃ ἑαυτῷ καὶ τὴν ἐξουσίαν τὴν δημαρχικὴν ἐς τὸν αὐτὸν χρόνον ἔδωκε. τοσαῦτα γάρ σφισιν ἔτη τότε ἐπαρκέσειν ἔφη· ὕστερον γὰρ οὐ πολλῷ καὶ τὰ ἄλλα πέντε τῆς αὐτοκράτορος ἡγεμονίας προσέλαβεν, ὥστε αὐτὰ δέκα αὖθις γενέσθαι. 
Contra Rich 2012, 71-75, who argues that Augustus in 18 BCE genuinely believed he only required five more years "to finish the job", his notable decision to secure his second decennial provincial command by virtue of two successive and cumulative five-year grants should rather be interpreted as a deliberate political manoeuvre to reduce domestic tensions following his plainly unrepublican and lifelong extraordinary empowerments of 23, 22 and 19 BCE: cf. Vervaet 2014, 258-272; comp. Vervaet 2010, 136-145. 66 Dio 54.13-15.4. 67 Dio 54.6.5: τηλικοῦτον αὐτὸν πεποίηκας ὥστ᾿ ἢ γαμβρόν σου γενέσθαι ἢ φονευθῆναι [in direct speech]. 68 Dio 54.31.3-32.1. Suetonius Aug. 66.3 rather makes Agrippa take offence because the young Marcellus was exalted above him, whilst Velleius 2.93.1 says that men did not trust Agrippa's intentions and thought that he might well contest Marcellus' succession to the imperial purple - the sources are agreed that both men were not on good terms, despite Agrippa's marriage to Augustus' niece Marcella: comp. also Plin. Nat. 7.149 (pudenda Agrippae ablegatio). At all events, shortly thereafter, in June 23, Augustus abdicated his eleventh consulship on the Alban Mount (cf. supra n. 10). Earlier in 23, Augustus had fallen so gravely ill that he despaired of recovering and, as Dio narrates in 53.30.1-2, "made arrangements for everything as though he were about to die. Having gathered around him the magistrates and the other leading senators and equites, he did not appoint any successor, although all expected that Marcellus would be chosen for this, but addressed them about the affairs of state and then gave Piso [his colleague in the consulship of that year] a book in which he had listed the forces and the public revenues and entrusted his ring to Agrippa".

In the face of all these genuine and potential challenges to his supremacy, Augustus' decision to bolster his regime by means of investing his loyal stepson Tiberius with a special proconsulate for the years 20 up to and including 16 makes perfect sense. Even if Agrippa would always remain Augustus' foremost support, often in a conspicuously self-effacing fashion,69 there was no harm whatsoever in buttressing the position of another energetic and reliable mainstay.70 That Tiberius' first quinquennium did not entirely coincide with that of Agrippa, which spanned the years 23-18, represented an additional safeguard.71 The objection that Tiberius only was a quaestorian of 22 at the time he received his first proconsular command can easily be discarded. First, one should not forget that young C. Octavius himself was not yet twenty years of age when he first took the consular fasces on 19 August 43 BCE,72 whilst Marcellus had been authorized to stand for consul around the age of 30. Much earlier, in 210 BCE, P. Cornelius Scipio Africanus (cos. 205, II 194) had already been invested with an extraordinary consular imperium at the age of 26 as an aedilicius,73 whilst

69 For Augustus' strong attachment to Agrippa, see, e.g., Dio 54.28.3-29.1-6 and 54.31.1, with Dio's own glowing eulogy for the man as Augustus' most zealous and excellent supporter at 29.1-3. 70 Compare also the discerning observations of Kuttner 1995, 181 f.: given that there was no such thing as an "office of emperor" (Augustus' powers being a cumulation of special prerogatives granted to him nominatim), and "Augustus himself and the Roman people at large knew that Augustus was especially vulnerable to sudden sickness and death: he had always to keep in mind the contingency that the near-fatal illnesses of his youth and early middle age might recur. The principle in any Roman clan, and quite evidently in Augustus', was to have as many arrows in the quiver as possible at any one time. Sons, stepsons, nephews, sons-in-law, were to be trained to ensure family dominance against the death of any one individual.
Augustus did mark out single individuals to share the institutional bases of his power, especially the tribunicia potestas, and the holder of this at any one time can be regarded as the current 'heir'; he must equally have expected that should he die, this 'heir' would soon himself find a colleague". 71 All of this is not to say that Augustus would be able to put a definitive end to the problem of political conspiracy in this way. In Dio 55.4.4, for example, we are told that in 9 BCE he punished an untold number of senators reported to be conspiring against him. Having more than one extraordinary proconsul at his disposal, however, would significantly strengthen his regime and the survival of his dynasty in the event of serious difficulties or even an outright challenge. 72 MRR 336. 73 See Vervaet 2012, 47-58.

C. Caesar, Augustus' adoptive son, born between 14 August and 13 September 20 BCE, would receive a special grant of quinquennial consular imperium in January 1 BCE and assume the consulship one year thereafter in 1 CE.74 In January 11 CE, when he still was only twenty-five years of age, Germanicus Caesar, too, was invested with a similar grant of consulare imperium.75 In 12, he furthermore held the prior consulship throughout the year, without ever having held the office of praetor.76 At all events, a thorough reappraisal of the evidence concerning the official position of Tiberius and Drusus for the years 15 up to and including 11 BCE further corroborates the argument that Livia's sons held authority far more powerful than has hitherto been assumed.

Gallia, Germania and Pannonia (15-11 BCE)

According to Cassius Dio, the crisis in Gaul triggered in 16 BCE by the so-called clades Lolliana quickly subsided as the invading Germanic tribes withdrew beyond the Rhine in the face of Lollius' renewed preparations and Augustus' decision to take the field himself, probably at some point shortly after 29 June of that year.77 Instead, they made peace and gave hostages.
Augustus consequently spent the remainder of that year as well as the following one settling other Gallic business.78 Given the resounding success of the Armenian and Parthian settlements, it should come as no surprise that Augustus now entrusted Tiberius with another formidable commission, this time in conjunction with his equally capable younger brother Drusus, viz. the conquest of much of the Alps and their immediate 74 Dio 55.10.17 and 10a 4 and Hurlet 1997, 127-141 (comp. 559). On Gaius Caesar's date of birth, see Hurlet op. cit. 113. 75 Dio 56.25.1-2 (quoted in full infra p. 159) and Hurlet 1997, 168 f. (comp. 567). See n. 48 supra for the suggestion that Drusus Caesar in all likelihood became proconsul in January 14 CE, at the age of about twenty-seven. 76 Dio 56.26.1. In this respect, it is also well worth pointing to Augustus' preference to work with those in the senatorial order who were thirty-five or younger: Dio 54.26.8 (in the context of his lectio senatus of 13 BCE). 77 As plausibly suggested by Rich 1990, 196 "On Nero's return Caesar resolved to test his powers in a war of no slight magnitude. In this work he gave him as a helper his own brother Drusus Claudius, to whom Livia gave birth when already in the house of Caesar. The two brothers attacked the Raeti and Vindelici from different directions, and after storming many towns and strongholds, as well as engaging successfully in pitched battles, with more danger than real loss to the Roman army, though with much bloodshed on the part of the enemy, they thoroughly subdued these peoples, protected as they were by the nature of the country, difficult of access, strong in numbers, and fiercely warlike." Suetonius rather typically dedicates few words to this campaign and integrates it into his short précis of Tiberius' military campaigns and distinctions prior to his notorious decision to retire to Rhodes in 6 BCE. For the sake of this argument, it is again useful to quote the relevant section from Tib.
9.1-2: "Next [i.e., following his stint in Gallia Comata] he carried on war with the Raeti and Vindelici, then in Pannonia, and finally in Germany. In the first of these wars, he subdued the Alpine tribes, in the second the Breuci and Dalmatians, and in the third he brought forty thousand prisoners of war into Gaul and assigned them homes near the bank of the Rhine. Because of these exploits he entered the City both in ovation and riding in a chariot, having previously, as some think, been honoured with the triumphal regalia, a new kind of distinction never before conferred upon anyone." 79 In 26, Augustus' plans to make an expedition to Britain had been thwarted by the outbreak of war with the Cantabri and the Astures as well as a revolt of the Alpine Salassi: Dio 53.25.2-5. 80 Comp. Eck 2018a, 11: "Die Kämpfe in den Alpen müssen zum Teil sehr heftig gewesen sein; doch der Erfolg der römischen Truppen war durchschlagend." The fullest account of this war can be found in Dio, on whom we invariably rely when it comes to much of the fine detail of the Augustan era. As the predominantly Celtic Raeti, who inhabited the lands north of the Alps between Noricum and Gaul, were raiding northern Italy, harassing the Romans and their allies travelling through their territory, and allegedly engaging in acts of outrageous cruelty, Augustus first sent Drusus against them. After he defeated one of their forces that had come to meet him in the Tridentine mountains, the Alpine range adjoining Italy, he was elevated to praetorian rank, again no doubt by decree of the Senate at the behest of Augustus. Despite being repulsed from northern Italy, the Raeti continued to press on Gaul, causing Augustus to send out Tiberius against them as well.
As Dio goes on to recount, "the pair then launched simultaneous invasions of the enemy's territory from different directions, both under their own command and that of subordinate commanders (αὐτοί τε καὶ διὰ τῶν ὑποστρατήγων), with Tiberius even crossing the lake in boats". By means of this multipronged strategy, Tiberius and Drusus were able to defeat the Raeti piecemeal, in a series of set battles. Since the Raeti were rich in manpower and were considered prone to revolt, the brothers deported most of the adult males, including the fittest, and left behind enough men to populate the land but too few to rebel.81 Although Dio narrates these events in his summary of 15 BCE and Horace Odes 4.14.14-40 records a great victory won by Tiberius over the Raeti in the summer of 15,82 it is quite possible that the Raetian wars of Drusus and Tiberius spanned the years 15 and 14, when Roman troops scored another major victory on the shores of Lake Constance.83 1986, 43-56 and Bernecker 1989, 1-97. On the grounds of the relatively late date of the Tropaeum Alpium, Eck 2018a, 11 suggests that "die lokale Unterwerfung, trotz der Verkündung des Sieges auf den Münzen, vielleicht doch etwas länger als ein Jahr gedauert hat". In this respect, it is well worth noting that Dio records in 54.24.3 that the Maritime Alps, still under the independent rule of their inhabitants, the so-called Long-haired Ligurians, were reduced to subjection in 14 BCE. At all events, Dio's testimony that Tiberius and Drusus commanded a number of subordinate legati (ὑποστρατήγοι: presumably pro praetore) strongly suggests that both commanded as proconsuls in their own right.84 In 53.32.1, Dio records that Agrippa, following his investment with a special command in Syria in 23 BCE, "sent his legates there and stayed himself on Lesbos": τοὺς ὑποστρατήγους ἔπεμψεν, αὐτὸς δὲ ἐν Λέσβῳ διέτριψε. In 54.20.1-3, in his narrative of 16 BCE, Dio likewise recounts that P. Silius Nerva (cos.
20), epigraphically attested as proconsul (of Illyricum),85 first (i.e., probably in 16) defeated the Camunni and Vennii, Alpine tribes. Thereafter (i.e., probably in 15), Dio goes on to say, he "and his subordinate commanders" defeated the Pannonians as they overran Istria with the Noricans (καὶ αὐτοί τε πρός τε τοῦ Σιλίου καὶ τῶν ὑποστρατήγων αὐτοῦ κακωθέντες αὖθις ὡμολόγησαν), following which he may well have crushed a rebellion in Dalmatia too.86 Furthermore, since Silius' campaigns against a number of Alpine tribes probably represented the first stage of the plan to conquer the entire Alpine region87 and he undertook these in the capacity of proconsul, it made perfect sense for the commanders involved with the next, arguably more challenging phase of operations, viz. the conquest of Raetia and Vindelicia, to hold the same position, especially as they were ranking members of the domus Augusta. 84 A conclusion that runs counter to the communis opinio that the Claudian brothers fought this war as legati Augusti pro praetore: see, e.g., Rich 1990, 202 ("achievements won by Tiberius and Drusus as Aug.'s legates"); Crook 1996, 96 ("Augustus took an imperatorial salutation; the stepsons could have neither triumph nor ovation, for they were only legati Augusti"); Hurlet 1997, 86 ("En ce début de l'année 12, Tibère et Drusus portaient toujours le titre de légat d'Auguste, qui leur avait été conféré depuis le début de la campagne de pacification des Alpes, mais qui devenait désormais peu approprié et peu conforme à leur nouvelle position au sein de l'État et la famille impériale depuis la mort d'Agrippa."); Dettenhofer 2000, 148 ("nur als legati Augusti"); Dalla Rosa 2015, 473, 482 ("en 15 av. J.-C., ils n'étaient que des simples légats") and 483 n. 74; Kienast -Eck -Heil 2017, 61 and 70. Comp. also n. 141 infra. 85 CIL III, 10017 = ILS 899 (Aenona, Dalmatia): Hurlet 2006, 88: "Pour ce qui est de P.
Silius Nerva, son statut et celui de sa province sont plus clairs: une inscription provenant d'Aenona fournit un renseignement capital en le qualifiant de proconsul d'Illyrie; s'y ajoute que selon le témoignage de Dion, il possédait ses propres légats, privilège qui n'était accordé qu'aux titulaires d'un imperium indépendant et était refusé à ce titre aux légats impériaux."; and Dalla Rosa 2015, 470: "Cassius Dion parle explicitement de l'action de ses légats contre les populations du Norique et de la Pannonie. Or un legatus Augusti pro praetore, étant lui-même un mandataire de l'empereur, n'avait pas la possibilité d'effectuer une délégation d'imperium; au contraire, un proconsul avait cette capacité en raison de son imperium consulaire." 87 Eck 2018a, 10: "Nach unseren heutigen Kenntnissen darf man die Eroberung des Alpengebiets in den Jahren 16 und 15 v. Chr. als Vorspiel der Eroberung des Landes bis zur Donau und östlich des Rheins auffassen." Having completed his extensive business in the Gallic, German and Hispanic provinces, Augustus left Drusus in charge of Gaul and Germany and returned to Rome in the consulship of Tiberius (for the first time) and P. Quinctilius Varus [i.e., 13 BCE]. Amongst other things, he went up to the Capitol on the day after arriving, removed the laurels from his fasces and placed them at Jupiter's feet.88 While Tiberius zealously executed his functions as consul,89 Agrippa, now back from Syria, saw his status as first of the strongmen further enhanced. Before the close of 13, his tribunicia potestas was renewed for a further five years.
Still according to Dio, he was also granted "power superior to that of the governors in every place outside Italy" and sent to Pannonia, then on the verge of war.90 Perhaps on the model of what had been decreed in 18 BCE, Agrippa thus received another quinquennial grant of consular imperium defined as maius quam that of any proconsul whose province he would enter in the course of his tenure, provided his own imperium remained subordinate to that of Augustus, who had been invested with a universal imperium maius quam in the summer of 23 BCE.91 Regardless 88 Dio 54.25.1 and 4. Although Dio says ἐν τῇ Γερμανίᾳ (54.25.1) it is clear from 54.32.1 that he was also in command of the Tres Galliae. On Augustus' activities with regard to these provinces, see also 54.23.7-8. Tiberius' rapid progression in the cursus honorum is neatly summarized in Suet. Tib. 9.3: Magistratus et maturius incohauit et paene iunctim percucurrit, quaesturam praeturam consulatum. 89 See, e.g., Dio 54.25.2-3 and 27.1, where it is recorded that Augustus censured Tiberius for having seated Gaius at his side when giving the votive games for Augustus' return. As Augustus had been invested for life with all the consuls' prerogatives as well as the summum imperium auspiciumque in Rome and Italy in 19 BCE, he was perfectly entitled to do so, being now consul maior in all but name: Vervaet 2014, 265-275. For the fact that he had already been invested with lifelong, privileged tribunicia potestas in 23, see Vervaet op. cit. 259 f. 90 Dio 54.28.1: Κἀν τούτῳ τὸν Ἀγρίππαν ἐκ τῆς Συρίας ἐλθόντα τῇ τε δημαρχικῇ ἐξουσίᾳ αὖθις ἐς ἄλλα ἔτη πέντε ἐμεγάλυνε καὶ ἐς τὴν Παννονίαν πολεμησείουσαν ἐξέπεμψε, μεῖζον αὐτῷ τῶν ἑκασταχόθι ἔξω τῆς Ἰταλίας ἀρχόντων ἰσχῦσαι ἐπιτρέψας. 91 Vervaet 2014, 262 f. n. 158, and, esp., 273 f. n. 187.
Lacking more precise indications in the sources, the possibility that Agrippa ever received an imperium similar to that given to Tiberius in 12/13 CE cannot be ruled out altogether -on Tiberius' empowerment of 12/13 CE, see Vervaet 2014, 272 f.; on the scope of Augustus' imperium auspiciumque in the provinces as legally defined in 27 and 23 BCE, see Vervaet 2014, 254-263. In 23 BCE, shortly before the further enhancement of Augustus' own imperium in the provinces, Agrippa had probably merely received the right to exercise the summum imperium auspiciumque in those public provinces he would visit during his first quinquennial tenure of consular imperium (Dio 53.32.1: Agrippa; 32.5: Augustus), provided his own imperium auspiciumque remained subordinate to Augustus' universal summum imperium auspiciumque, a bit on the model of the naval high command granted by Cn. Pompeius (in his capacity as overall commander-in-chief of the anti-Caesarian coalition) to the proconsul M. Calpurnius Bibulus (cos. 59) in 49 BCE (Vervaet 2006, 940). In my view, the fact that Dio uses almost identical language in defining (the terms of) Augustus' maius imperium of 23 BCE (53.32.5: καὶ ἐν τῷ ὑπηκόῳ τὸ πλεῖον τῶν ἑκασταχόθι ἀρχόντων ἰσχύειν ἐπέτρεψεν -compare 54.28.1, quoted in the previous note) does not necessarily signify that their maiora imperia had identical geographical scopes. In 23, Augustus was authorized to exercise his maius imperium within the framework of the vast provincial command he had received in decennium in January 27. This comprised his own provinces (the so-called prouinciae Caesaris, governed through legati Augusti pro praetore) as well as the right to wield the summum imperium auspiciumque in all public provinces alike, technically administered by other proconsuls (i.e., in alienis prouinciis). Since his provincial command was universal in that it thus spanned the entire Empire, Augustus' maius imperium would automatically apply across all individual public provinces alike, regardless of where he physically found himself. Perhaps already in 18 BCE and certainly in 13 BCE, Agrippa, for his part, was authorized to exercise his maius imperium within the framework of whatever prouincia(e) he would receive from the Senate at the behest of Augustus. Whenever this official provincial mandate would include one or more public provinces, the clause paraphrased by Dio would automatically authorize him to exercise maius imperium vis-à-vis the incumbent proconsul(s), regardless of their locality, under the overarching (and now maius) imperium auspiciumque of Augustus. Since Agrippa in 23 BCE was authorized to govern Syria in absentia through legati (cf. Dio 53.32.1, discussed supra p. 141), the possibility that, at some stage, he also received the power to do so in regard to any public province(s) that happened to be part of his provincial brief cannot be excluded.

92 According to Dio 54.28.1, Agrippa set out on campaign in "the winter during which Marcus Valerius and Publius Sulpicius became consuls". Although Dio in 54.28.2 suggests that Agrippa fell ill only after reaching Campania, his decision to return may well have been partially inspired by failing health and the intent to regain strength in a more wholesome environment. Whereas Syme 1979, 309 argues that Agrippa died on 12 March, Hurlet 1997, 78 (with n. 299) makes a more compelling case for 19/24 March.

of the fact that winter had already set in, Agrippa dutifully set out on campaign early in 12 BCE. However, before he reached his destination, the now terrified Pannonians abandoned their rebellion. Agrippa decided as a result to return and went to Campania, where he fell ill and died in March.92 Given the innocent ages of Gaius and Lucius Caesar, Livia's sons now became the mainstays of the domus Augusta. As primus inter pares, Tiberius was now to bear the brunt. That much is clear from Dio 54.31.1-2, regardless of the historiographer's discernable hostility vis-à-vis Tiberius: Ὡς δ᾿ οὖν ὁ Ἀγρίππας, ὅνπερ που δι᾿ ἀρετὴν ἀλλ᾿ οὐ δι᾿ ἀνάγκην τινὰ ἠγάπα, ἐτεθνήκει, καὶ συνεργοῦ πρὸς τὰ πράγματα πολὺ τῶν ἄλλων καὶ τῇ τιμῇ καὶ τῇ δυνάμει προφέροντος, ὥστε καὶ ἐν καιρῷ καὶ ἄνευ φθόνου καὶ ἐπιβουλῆς πάντα διάγεσθαι, ἐδεῖτο, τὸν Τιβέριον καὶ ἄκων προσείλετο· οἱ γὰρ ἔγγονοι αὐτοῦ ἐν παισὶν ἔτι καὶ τότε ἦσαν. καὶ προαποσπάσας καὶ ἐκείνου τὴν γυναῖκα, καίτοι τοῦ τε Ἀγρίππου θυγατέρα ἐξ ἄλλης τινὸς γαμετῆς οὖσαν, καὶ τέκνον τὸ μὲν ἤδη τρέφουσαν τὸ δὲ ἐν γαστρὶ ἔχουσαν, τήν τε Ἰουλίαν οἱ ἠγγύησε καὶ ἐπὶ τοὺς Παννονίους αὐτὸν ἐξέπεμψε· τέως μὲν γὰρ τὸν Ἀγρίππαν φοβηθέντες ἡσύχασαν, τότε δὲ τελευτήσαντος αὐτοῦ ἐπανέστησαν. "Now that Agrippa, whom he loved for his outstanding qualities rather than from any obligation, was dead, Augustus needed as a collaborator someone who was much superior to everyone else in honour and power and so able to deal with everything promptly and without becoming the object of jealousy and intrigue. Reluctantly, he chose Tiberius, for his grandsons were then still boys. He obliged him to divorce his wife, although she was the daughter of Agrippa by a previous marriage and was bringing up one child and pregnant with another. Then he betrothed Iulia to him and sent him out against the Pannonians.
This people had been quiet for a while from fear of Agrippa, but after his death rebelled again."93 Despite his misgivings about being forced to divorce Vipsania and betroth Iulia, Tiberius acquitted himself rather well of his task in Pannonia, where he probably commanded no fewer than five legions, the equivalent of Roman forces in Gaul.94 Aided by the neighbouring Scordisci, he laid the land to waste and reportedly inflicted much suffering on its inhabitants. He subsequently disarmed the Pannonians and sold the majority of the adult males as slaves for deportation. As recorded by Dio in 54.31.3-4, the Senate consequently "voted him a triumph for this, but Augustus would not allow him to celebrate it and granted him the ornamenta triumphalia instead": καὶ αὐτῷ διὰ ταῦτα ἡ μὲν βουλὴ τά γε ἐπινίκια ἐψηφίσατο, ὁ δ᾽ Αὔγουστος ταῦτα μὲν οὐκ ἐπέτρεψεν ἑορτάσαι, τὰς δὲ τιμὰς τὰς ἐπινικίους ἀντέδωκε.95 According to Dio (54.32.1), "the same thing happened to Drusus as well" [i.e., in the same year 12 BCE]: τὸ δ᾽ αὐτὸ τοῦτο καὶ τῷ Δρούσῳ συνέβη. As indicated above, Augustus had left Drusus in charge of the newly reorganized Tres Galliae and their Germanic periphery in 13 BCE. As Kuttner observes, "Drusus' mandate was an important one, with three major parts. First, he was charged to initiate in 13 a full census of Gaul (Livy Per. 138); Augustus had carried out one in 27, but Drusus' census was to include for the first time property and class evaluation. This task was not only formidable in purely bureaucratic terms, it also required firm, but sensitive, political handling: the first-time imposition of such a census in the new German province by Varus some twenty years later was to provoke unrest so severe as to destroy Roman rule altogether, and the Gauls did not take kindly to the new ways either. Second, he was charged to handle the preliminary organization of a new cult of Rome and Augustus at Lugdunum, a project brought to completion in 10 B.C.
with the inauguration of the cult. It was to serve as a focus for Gallic loyalties to the Empire and to enhance a sense of solidarity among the tribes of three provinces: in it the primores Galliarum gathered together headed by priests chosen on a rotating basis from their number, and with it was to be associated the administratively empowered assembly of these primores, whose first recorded actions were connected, ironically, with funeral honors decreed for Drusus in 9. Finally, the best-known portion of Drusus' mandate was to organize for a campaign across the Rhine into Germany, to implement a plan of conquest designed to bring Germany into the Empire as a province."96 In 12 BCE, Drusus had to master a twofold threat as the Sugambri and their allies took to arms because of Augustus' departure and the Gauls rebelled against the census. Drusus first defused internal unrest by summoning the Gallic chiefs to Lugdunum on the pretext of the dedication of the altar of the divine Caesar there. He then attacked the Germans on both sides of the Rhine and invaded the territory of the Usipetes. From there he advanced alongside the Rhine into the land of the Sugambri, where he caused widespread destruction. Next, he sailed down the Rhine to the sea, won over the Frisii and with their aid invaded the land of the Chauci. Thanks to his Frisian infantry he was able to extricate his army from a tricky situation as his fleet had been stranded by the tide. As winter had set in, he withdrew and returned to Rome, where, "in the consulship of Q. Aelius Tubero and Paullus Fabius Maximus [i.e., 11 BCE], he was appointed praetor urbanus, although he already held praetorian rank"97. 93 At this stage, Augustus would have had no cause for reluctance, as the zealously loyal Tiberius made for a natural choice. 94 As plausibly argued by Hurlet 2006, 142-144 and accepted by Dalla Rosa 2015, 465. 95 Quoted from 31.4.
Drusus' praetura urbana would prove to be almost entirely honorary, as the year of his tenure again saw vigorous military activity on the part of both Claudii.98 As soon as spring arrived, Drusus set out again for the war in Germany. He crossed the Rhine, subdued the Usipetes, bridged the Lippe and invaded the territory of the Sugambri. Crossing it unopposed, he entered the land of the Cherusci and advanced as far as the Weser. Drusus' deep penetration of Germany was greatly facilitated by the fact that the Sugambri had invaded the Chatti, who alone among their neighbours had refused to ally with them. Dio goes on to recount that Drusus would have crossed this river, too, had not circumstances forced his hand. First and foremost, he ran out of supplies as winter set in. In addition, a swarm of bees was seen in his camp. At all events, on his way back, Drusus' forces reportedly came close to complete destruction. The enemy (either the Sugambri and/or the Cherusci and their allies) continuously harassed him with ambushes and eventually managed to trap his army in a narrow valley. Had the Germans not become overconfident, charging in disorder in the conviction that the Romans were all but defeated, the result might well have been a catastrophe of the scale of what was to transpire in the Teutoburg forest some eighteen years later.99 After he had managed to turn the tables on his enemies in a remarkable victory and make a safe return to friendly territory, Drusus constructed a fort at the confluence of the Lippe and the Eliso, and another on the Rhine in the territory of the Chatti.100 As what follows is of particular interest to this enquiry, it is well worth quoting Dio's summary in 54.33.5 in full: διὰ μὲν οὖν ταῦτα τάς τε ἐπινικίους τιμὰς καὶ τὸ ἐπὶ κέλητος ἐς τὸ ἄστυ ἐσελάσαι, τῇ τε τοῦ ἀνθυπάτου ἐξουσίᾳ, ἐπειδὰν διαστρατηγήσῃ, χρήσασθαι ἒλαβε.
τὸ γὰρ ὄνομα τὸ τοῦ αὐτοκράτορος ἐπεφημίσθη μὲν ὑπὸ τῶν στρατιωτῶν καὶ ἐκείνῳ τότε καὶ τῷ Τιβερίῳ πρότερον, οὐ μέντοι παρὰ τοῦ Αὐγούστου ἐδόθη, καίπερ αὐτοῦ ἀπ᾽ ἀμφοτέρων τῶν ἔργων τὸν ἀριθμὸν τῆς ἐπικλήσεως αὐξήσαντος. "For these achievements he received the ornamenta triumphalia and the right to enter the City on horseback and to exercise the imperium of a proconsul when his term of office as praetor expired. Drusus then and Tiberius earlier were hailed as Imperator by their troops, but were not granted the title by Augustus, although he increased the number of his own salutations for both their campaigns."101 99 Pliny in Nat. 11.55 records the name of Arbalo for the site of this encounter. As Velleius records in 2.118.2 that Arminius had been fighting with the Romans for a long time (adsiduus militiae nostrae prioris comes) before he turned hostile and the Germans would not repeat this mistake as they methodically destroyed Varus' army over the span of three days (Dio 56.18-24 -contra Wells 2003, who instead argues for a single, decisive engagement) some twenty years later, I believe Arminius may have put the experience of 11 BCE to good use in the so-called battle of the Teutoburg Forest at Kalkriese. Even though he would have been young in 11 BCE, Arminius would have heard first-hand accounts of the battle. Wolters 2017, 43 suggests that Drusus' army was ambushed by the Sugambri and possibly the Chatti. I am inclined to believe this near-catastrophic encounter occurred in the more remote territory of the Cherusci. For an excellent reappraisal of Varus' generalship and tactics at Kalkriese, see Morgan 2019. 100 Dio 54.33.1-4. In Liv. Per. 140, we are merely told that Drusus subjugated the Cherusci, Tencteri, Chauci and other Germanic people across the Rhine. 
101 Although Syme 1979, 310 correctly argues that the campaigns of Tiberius and Drusus in 12 and 11 BCE earned Augustus his 11th and 12th imperatorial salutations successively (a chronology also accepted in Kienast -Eck -Heil 2017, 58), Dio unequivocally records that Augustus took these salutations on account of the successes of both his stepsons in these years, and not first because of Tiberius' victories of 12 and thereafter those of Drusus in the next year. Augustus could indeed well have decided to accept four salutations on account of the victories of his stepsons in 12 and 11, and it follows that his decision to take only two for their combined successes over these years accounts for another show of modesty on his part.

As Drusus was thus engaged in Germany, the games attached to his praetorship were celebrated in the most costly way, while the birthday of Augustus was commemorated by the slaughter of wild beasts in the Circus and in many other parts of the City -this occasion reportedly marking the first time the Augustalia were held by formal decree of the Senate rather than through the voluntary initiative of one of the praetors, as had happened before 11 BCE. Also at this time, Dio goes on to recount, Tiberius subdued both the Dalmatians, who had risen in revolt, as well as the Pannonians, who had rebelled after them, taking advantage of the absence of the Roman commander and most of his army. Tiberius made war against both peoples simultaneously, shifting between the two fronts, "and so won the same rewards as Drusus" -ὥστε καὶ τῶν ἄθλων τῶν αὐτῶν τῷ Δρούσῳ τυχεῖν. As Cooley discerningly observes, Dio "is mistaken in presenting these campaigns as mere suppression of revolts, misled by the assumption that Augustus' earlier campaigns in the area in 35-33 BCE had advanced further than they had. Tiberius' campaigns advanced Roman control considerably in the region to the south of the Danube, conquering the Breuci in the Save valley with the help of the Scordisci".102 After Tiberius' victories, Dalmatia was transferred to the prouinciae Caesaris, "on the grounds that it required a permanent garrison both for its own sake and because of the neighbouring Pannonians"103. In his summaries of 12 and 11 BCE, Dio thus provides some vital clues as to the official position of both Tiberius and Drusus, the sum of which would suggest the following reconstruction. In 12 BCE, Tiberius and next also Drusus were granted full public triumphs by the Senate, votes that no doubt also endeavoured to confirm their respective salutationes imperatoriae. Augustus, however, vetoed these motions and instead moved to award first Tiberius and then also Drusus with the ornamenta triumphalia. In Tib. 9.2 (supra, p. 144), Suetonius expressly records that Tiberius was the first ever recipient of this novel distinction.104 In 11, then, Augustus prevented the ratification of the imperatorial salutations they had received in the field on account of their successes of that year. As it strains belief that the Senate would decree triumphs without ratifying preliminary imperatorial salutations, it follows that Tiberius and subsequently Drusus, too, had also been denied senatorial ratification of their imperatorial salutations in 12. It is, moreover, quite likely that the Senate had again wanted to award Livia's sons with curule triumphs in 11 as they moved to ratify their salutations of that year.
Instead, doubtlessly at the motion of Augustus, both men were, yet again, granted the triumphal ornaments.105 This time, however, they also received the right to celebrate an ovation106 as well as an imperium pro consule from 1 January 10 -the first two distinctions also being recorded in Suet. Claud. 1.2-3.107 Given Tiberius' seniority, he had probably received these honours before they were decreed to his younger brother, Drusus.108 That Tiberius received (and later celebrated) a 105 Many scholars will only allow for one denied salutation and one triumph as well as one grant of ornamenta triumphalia to both brothers over the years 12 and 11 BCE. An early example is Stein. In PIR 2 C 941 (p. 221), he suggests that Tiberius received the ornamenta in 12 whilst being denied a triumph by Augustus, and that the latter in 11 refused to recognize Tiberius' imperatorial salutation by his army in the field. In PIR 2 C 857 (p. 197), he asserts that Drusus received the ornamenta triumphalia in 11 whilst being denied his imperatorial salutation by the army. On the basis of Tac. Ann. 1.3 and Val. Max. 5.5.3 as well as (posthumous) numismatic and epigraphical evidence (respectively discussed on pp. 165-167, 180 f., and in n. 113 infra), however, Stein suggests that Augustus later in 11 BCE eventually moved to recognize this imperatorial salutation: postea tamen Augustus eum imperatorio nomine auxit. 106 Contra Gruen 1996, 175, who believes that Tiberius was awarded his ovation on account of his campaigns of 10 and/or 9 BCE. 107 Is Drusus in quaesturae praeturaeque honore dux Raetici, deinde Germanici belli Oceanum septemtrionalem primus Romanorum ducum nauigauit transque Rhenum fossas noui et immensi operis effecit, quae nunc adhuc Drusinae uocantur. Hostem etiam frequenter caesum ac penitus in intimas solitudines actum non prius destitit insequi, quam species barbarae mulieris humana amplior uictorem tendere ultra sermone Latino prohibuisset.
Quas ob res ouandi ius et triumphalia ornamenta percepit -"This Drusus, while holding the offices of quaestor and praetor, was in charge of the war in Raetia and later of that in Germany. He was the first of Roman commanders to sail the northern Ocean, and beyond the Rhine with prodigious labour he constructed the huge canals which to this very day are called by his name. Even after he had defeated the enemy in many battles and driven them far into the wilds of the interior, he did not cease his pursuit until the apparition of a barbarian woman of greater than human size, speaking in the Latin tongue, forbade him to push his victory further. For these exploits he received the honour of an ovation as well as the triumphal regalia." The incident allegedly involving the tall Germanic woman, however, took place in 9 BCE: Dio 55.1.3. On this episode and the circumstances and representation of Drusus' decision to halt at the Elbe, see Timpe 1967, who argues that this unmistakable instance of imitatio Alexandri (halting the advance following a prodigious omen) is to be interpreted as indirect contemporary criticism of his unfettered aggression. 108 That Dio first mentions the honours voted to Drusus in 11 may well be ascribed to the thematic organization of the source he was drawing from, where Drusus' campaigns in Germania during 12 and 11 were narrated as a unity, breaking up the narrative of Tiberius' wars against the Pannonians and the Dalmatians. contemporary ovation on account of his successes over the Pannonians and the Dalmatians is also on record in Suetonius (Tib. 9.2, quoted supra p. 144), Dio 55.2.4 (infra p. 178 with n. 165), and Velleius 2.96.2.109 The precedence of Tiberius also makes perfect sense in that the Pannonian theatre of war arguably was more important, as recently argued by Eck: none other than Agrippa himself had been tasked with this commission in 13 BCE (supra, p. 147) and its strategic location close to Italy readily
This organization also indicates his (or his source's) bias in favour of Drusus. In this respect, it is also worth calling to mind that Drusus only won his decisive victory late in the year (11 BCE), as autumn was giving way to winter. 109 In Per. 141, Livy's epitomator merely records that Tiberius subdued the Dalmatians and Pannonians. Although Mommsen 1878, 466 n. 1 accepts Dio's representation that first Tiberius and then also Drusus were first awarded with the ornamenta triumphalia in both 12 and 11 BCE, he seems to believe that only the latter was voted an ovation in 11: "Damals wurden sie wenigstens für Drusus mit der Ovation zugleich decretirt". Neither can I accept Syme's reconstruction of events. In 1979, 310 f., Syme interprets Dio 54.31.4, 32.1 and 33.5 (cf. supra pp. 149-152) as recording that the Senate first voted Tiberius a triumph in 12 following his acclamation by his army in Pannonia, a decision thwarted by Augustus who instead honoured Tiberius with the novel distinction of the ornamenta triumphalia (comp. also 314: "devised in the first instance for Tiberius in 12 B.C."), and that "the same procedure followed for Drusus in the next year". In n. 15, Syme accordingly explains that both Gelzer 1918, c. 483 and Jones 1934, 153 -and, apparently unbeknownst to Syme, also Boyce 1942 f. -are wrong to believe that Tiberius was presented with the ornamenta twice (Jones, however, believes Drusus only to have received the ornaments once, in 11 BCE), observing that the "same interpretation of Dio would produce the same result for Drusus." On the grounds of the correct "axiom" that "no triumph can be celebrated without an antecedent acclamation, no acclamation taken without the possession of a proconsul's imperium" (comp. also 324, where it is correctly posited that "ovations", too, "presuppose […] imperium"; comp. 
also Syme 1978, 60: "No triumph, it is clear, can be awarded without a salutation, no salutation accepted without possession of the imperium of a proconsul", termed an "axiom" here), Syme furthermore argues that the honours voted in 11 and recorded in Dio 54.33.5 applied to Drusus only ("for he is named first") and were "for the future", since Drusus and Tiberius were merely "legates in the provincia of Caesar" in 11, and "further defined the scope and potential of the honour (i.e., an ovation, not the full triumph) that might fall to Drusus if and when he earned a salutation. Compare the phrase of Suetonius [in Claud. 1.3, cf. supra n. 107]: Drusus before his consulate (in 9) had received the ouandi ius." Strangely enough, Syme recognizes that Dio goes on to state (in 54.34.3, supra) that Tiberius "received the same honours as Drusus" and accepts that this entailed "proconsular imperium, likewise from the beginning of 10". On the strength of Dio 54.36.4 (cf. infra p. 177), Syme nonetheless suggests that Tiberius earned his first imperatorial salutation as well as the ovation he would celebrate on 16 January 9 (cf. infra n. 166) in his "third campaign", i.e., in 10 BCE. Combès 1966, 175 f., for his part, believes both Tiberius and Drusus secured their first official nomina imperatoria and ovations on account of victories won in Pannonia and Germany in the summer of 9 BCE. Hurlet 1997, 87 and 97 accepts Syme's view that the ovation awarded to Drusus in 11 as recorded in Dio 54.33.5 concerned a future privilege, to be awarded following future successes in Germany. For a discussion of an inscription featuring Drusus as IMP III, cf. infra n. 117. 
explains why the Romans spared no costs or efforts to regain full control of these lands during the great Dalmatian and Pannonian revolt of 6-9 CE, as opposed to their eventual retreat from transrhenian Germany following the clades Variana.110 In light of this evidence, there is every indication that both Tiberius and Drusus operated as proconsuls in their own right in 12 and 11 BCE. Under the republican ius triumphi, only holders of independent imperium auspiciumque qualified for such honours as supplications, salutationes Imperatoriae, ovations and curule triumphs. The army and then the Senate so moved after their respective victories because they met this condition sine qua non in terms of official position. In 45 BCE, Caesar the dictator had admittedly allowed two of his legati pro praetore to receive and celebrate public triumphs, but this noted breach of custom had caused significant senatorial indignation.111 A repeat of this distasteful episode would, therefore, not have been in the best interest of Augustus and his crafty strategy of upholding mos maiorum whenever possible and politically expedient. The year 47 CE would witness the only known instance of a legatus Augusti pro praetore being granted the privilege of an ovation, viz. A. Plautius (suff. 29 CE), the man who had helped Claudius secure the military success he so desperately needed, namely the conquest of Britain.112 That Drusus is epigraphically and numismatically recorded as IMP and IMP II (and, in one instance, even IMP III) further substantiates rather than complicates this reconstruction.113 As amply recorded in the sources, Augustus and the Senate
.28 that no other Roman general was ever honoured with an honorific name ex prouincia by decree of the Senate is probably to be amended in that Drusus was the first ever Roman commander to receive this honour by decree of the Senate.
In this respect, it is well worth recalling that Augustus' triple Actium arch in the Forum Romanum was the first set up in Rome by public decree rather than at the initiative of the honorand or his family: Wallace-Hadrill 1990, 143-147, followed by Rich 1998, 114. 115 Suet. Claud. 1.3 and 5 and Dio 55.2.2-3. According to Tacitus Ann. 3.5, "every distinction which our ancestors had discovered, or their posterity invented, was showered upon him" - cuncta a maioribus reperta aut quae posteri inuenerint cumulata. Suetonius and Dio also recount (in Claud. 1.3 and 55.2 respectively) that Augustus himself sent Tiberius to Drusus when he learned of the latter's illness. After his brother had died, Tiberius had the body carried to Rome, first by the centurions and military tribunes as far as the winter quarters of the army, and thereafter by the foremost men of the municipia and the coloniae, where it was received by the decuries of the scribes - for the roughly similar procedures followed in the repatriation of the bodily remains of Gaius and Lucius Caesar and Augustus himself, see Dio 55.12.1 and 56.31.2. When the body was laid in state in the Forum, Tiberius pronounced the first eulogy there (also on record in Ann. 3.5), a second one being delivered by Augustus himself in the Circus Flaminius, since custom dictated that he could not conduct the customary intrapomerial rites in honour of his exploits while he was in mourning (comp. also 55.4.4-5.1: "at the time in question he was unwilling to enter the City because of Drusus' death"). The body was then borne to the Campus Martius by the equestrians, including those of senatorial families, after which the ashes were deposited in the sepulchre of Augustus. Comp. also the extremely terse summary in Liv. Per. 142: Corpus a Nerone fratre, qui nuntio ualetudinis euocatus raptim adcucurrerat, Romam peruectum et in tumulo C. Iulii reconditum. Laudatus est a Caesare Augusto uitrico. Et supremis eius plures honores dati.
Although it is impossible to rule out that Drusus had again been saluted Imperator by his army in his final campaign, regardless of the lack of any evidence whatsoever, his tragic misfortune probably badly damaged morale in his army. That the army was grief-stricken may be inferred from Consolatio ad Liuiam 169-172, where we are told that they had wanted to burn their commander on a funeral pyre in the camp in full armour, and that only Tiberius' firm resolve ensured Drusus' remains were returned to Rome for proper rites. The affection of the soldiers can also be inferred from Suet. Claud. 1.3: Drusus' death in his summer camp caused it to be given the name of "Accursed" (Scelerata), and after the departure of Drusus' remains, the army raised a monument in his honour about which the soldiers were to make a ceremonial run each year thereafter, which, no doubt by decree of the Senate, the cities of Gaul were to observe with prayers and sacrifices.
trium liberorum by way of consolation.116 The above analysis suggests that Drusus was twice saluted Imperator by his victorious army in Germany, first in 12, when he was also decreed a curule triumph, and then again in 11. In both instances, his stepfather interfered to prevent senatorial ratification. Following his untimely demise, however, these Germanic salutations were posthumously ratified by the Senate as part of a wider package of posthumous triumphal honours, no doubt at the behest of Augustus himself.117
116 Dio 55.2.5-6. 117 Contra Syme 1979, 313 f., who believes the epigraphic evidence quoted in n. 113 supra (which, incidentally, records two, rather than one, imperatorial salutations for Drusus) records a single acclamation that "may go back to 10 B.C. or belong to the last campaign, when Drusus set up a trophy at the Elbe".
The analysis above also signifies that I am at variance with Stylow 1977, 489, who argues that Drusus was twice saluted Imperator, in 11 and 9 BCE (speculating that Augustus hesitantly dropped his initial opposition against the first salutatio as recorded in Dio 54.33.5), "einmal mehr als sein älterer Bruder, der erst 8 v. Chr. diese Ehrung erhielt", and Rich 1990, 231, who believes that Drusus received an imperatorial salutation in both 11 and 9, as opposed to his older brother, who only received his first salutation in 9, and consequently struggles with the epigraphic record of Drusus' salutations, observing that his first salutation "is recognized on inscriptions at Saepinum" whereas "only his salutation in 9 […] is recognized on his elogium in the Forum of Aug." Comp. also p. 220: Tiberius "must have received his first salutation as imperator in 9"; Drusus "must have been hailed imperator in the course of this year [i.e., also 9 BCE], his only officially recognized salutation". Like e.g. Kienast 1990, 69, however, Hurlet 1997, who duly accepts the "axiom" that "on ne peut être acclamé imperator et a fortiori célébrer une ovation ou un triomphe que si on possède un imperium en propre" (p. 59, n. 182), suggests that Drusus was indeed twice saluted Imperator by his army: first at the end of his campaign of 10 BCE and then again following his expedition to the Elbe in 9 BCE. In my view, the discrepancy between Augustus' veto against Drusus' salutations in his lifetime and the Senate's posthumous ratification of both his Germanic salutations may help to explain the variation in the epigraphic record with regard to the precise number of acclamations: cf. n. 113 supra. 
As opposed to the detailed Tiberius-inscriptions from Saepinum, where Tiberius was clearly keen to record that his brother too had earned an equal number of imperatorial salutations, Drusus' elogium in the Forum Augusti puts the emphasis on the fact that he was acclaimed Imperator in Germany, not on his actual number of salutations. Although the funerary honours decreed in 9 BCE concern Drusus' victories in Germania Magna in 12/9 BCE, both Tiberius and Drusus had possibly been already saluted Imperator by their armies following their victories in Raetia and Vindelicia. AE 1959, 278, an inscription found on the forum of Saepinum, may well provide some epigraphic evidence as it features Neroni Claudio / Ti. f. Druso Germ. / auguri cos. imp. III. Stylow 1977, 489 f. is skeptical, observing that imp III is incompatible with the Tiberius-inscription. Nonetheless, it may well be that the dedicators in their zeal decided to record all of Drusus' salutations by the army in the field. Compare the commentary in AE loc. cit. that "La reconnaissance des habitants de Saepinum est très compréhensible, car les deux frères, comme il est remarqué plus haut, ont fait construire les murs du municipe".
The conclusion that Drusus operated in the Tres Galliae and its vast Germanic hinterland as proconsul during the years 13/11 BCE sits well with what we know about his impressive raft of activities and responsibilities there. In this respect, it is well worth calling to mind that, a few years before him, both Agrippa (in 20 BCE) and Tiberius (in 16 BCE) had held very similar briefs in the capacity of proconsul (supra, p. 138 f.). Furthermore, in the summer of 14 CE Germanicus Caesar, too, held a special proconsulship as he exercised the supreme command of the legions in both Germania Inferior and Superior whilst conducting another census in the three Gauls.118 How, then, should we explain the fact that Dio's representation in 54.33.5 (supra, p. 151) seemingly suggests that both men would hold their first proconsulates only from January 10 BCE?
First, Dio's wording here is quite different from the passages where he records Gaius Caesar and Germanicus' first grants of imperium pro consule in unequivocal terms: Τῶν Ἀρμενίων δὲ νεωτερισάντων καὶ τῶν Πάρθων αὐτοῖς συνεργούντων ἀλγῶν ἐπὶ τούτοις ὁ Αὔγουστος ἠπόρει τί ἂν πράξῃ· οὔτε γὰρ αὐτὸς στρατεῦσαι οἷός τε ἦν διὰ γῆρας, ὅ τε Τιβέριος, ὡς εἴρηται, μετέστη ἤδη, ἄλλον δέ τινα πέμψαι τῶν δυνατῶν οὐκ ἐτόλμα, ὁ Γάιος δὲ καὶ ὁ Λούκιος νέοι καὶ πραγμάτων ἐτύγχανον ἄπειροι. ἀνάγκης δ᾿ ἐπικειμένης τὸν Γάιον εἵλετο, καὶ τήν τε ἐξουσίαν αὐτῷ τὴν ἀνθύπατον καὶ γυναῖκα ἔδωκεν, ἵνα κἀκ τούτου τι προσλάβῃ ἀξίωμα, καί οἱ καὶ συμβούλους προσέταξε. [1 BCE] "When the Armenians revolted and the Parthians joined with them, Augustus was distressed and at a loss what to do. For he himself was not fit for campaigning by reason of age, while Tiberius, as has been stated, had already withdrawn, and he did not dare send any other influential man; as for Gaius and Lucius, they were young and inexperienced in affairs. Nevertheless, under the stress of necessity, he chose Gaius, gave him the proconsular authority and a wife, -in order that he might also have the increased dignity that attached to a married man, -and appointed advisers to him."119
118 See Ann. 1.31.2 and 33.1. Compare also the observation of Syme as quoted in n. 56 supra. Although Kuttner 1995 aligns with the common opinion (cf. also n. 141 infra) that Augustus left Drusus behind in Gaul as he returned to Rome in 13 "as legate of the tres Galliae" (p. 118; comp. also 123: "Augustus' legate in the West", and 177: "the status of such Augustan legati"), the sheer magnitude of his stepson's responsibilities surely warranted an extraordinary proconsulship. In point of fact, Kuttner's attractive suggestion (p.
123) that the major event behind the Boscoreale Cups' depiction may well have been chosen "to sum up Augustus' achievements in the West, in a way similar to the use of the return of Crassus' standards to sum up Augustan sway in the East" further strengthens the case for Drusus commanding as proconsul in Germany and Gaul as from January 15. 119 Dio 55.10.18. Μάρκου δὲ Αἰμιλίου μετὰ Στατιλίου Ταύρου ὑπατεύσαντος, Τιβέριος μὲν καὶ Γερμανικὸς ἀντὶ ὑπάτου ἄρχων ἔς τε τὴν Κελτικὴν ἐσέβαλον καὶ κατέδραμόν τινα αὐτῆς. "In the consulship of Marcus Aemilius and Statilius Taurus [= 11 CE], Tiberius and Germanicus, the latter acting as proconsul, invaded Germany and overran portions of it."120 In the latter passage, any reader ignorant of Tiberius' official position might just as well erroneously infer that, unlike his adoptive son Germanicus, Tiberius did somehow not hold proconsular power when they launched their joint invasion of Germany, especially as Dio in his account of Tiberius' adoption and re-empowerment in 4 CE only mentions his decennial grant of tribunicia potestas.121 And surely no one would read Dio 55.13.5-6, where we are told that Augustus in 4 CE "assumed proconsular power for the purpose of completing the census and performing the lustratio"122, as evidence that he did not already hold the office of proconsul, a position he had occupied almost continuously ever since abdicating his 11th consulship in June 23 BCE. Second, Dio in 54.33.5 simply paraphrases two distinct, and yet very similar, packages of honours and privileges awarded successively to Tiberius and Drusus at different moments in the second half of 11 BCE. His précis does not at all preclude that both brothers already held quinquennial proconsulates when these honours were decreed: it merely shows that they received a(nother) five-year proconsular command as from the start of next year.123 In this respect, it is also well worth calling to mind that even in the case of
120 Dio 56.25.1-2.
121 Dio 55.13.1a-2. The same is true for Dio 56.28.1, where Dio duly mentions the renewal of Tiberius' tribunician power in 13 CE but fails to say anything about the corresponding grant of overriding imperium over all the provinces and the armies recorded in both Vell. 2.121.1 and Suet. Tib. 21.1 (on which Vervaet 2014, 272 f.). Dio also records that Drusus received the privilege to stand for the consulship two years later without ever having held the praetorship. 122 ἀνθύπατον ἐξουσίαν πρός τε τὸ τέλος τῶν ἀπογραφῶν καὶ πρὸς τὴν τοῦ καθαρσίου ποίησιν προσέθετο. According to Dio (comp. also 54.1.5-2.1) Augustus decided to act as such as he was keen not to appear to be conducting this business in the capacity of censor.
Agrippa's successive empowerments with consular imperium in 23, 18 and 13 BCE, Dio only provides very little information. Whereas he remains entirely silent as to Agrippa's first extraordinary proconsulate of 23, he is tantalizingly vague about his second such commission of 18, and only provides more or less unequivocal evidence as regards the final grant of 13.124 That Dio bothers to mention at all that Tiberius and Drusus were to be invested (yet again) with consular imperium as from January 10 should probably be explained in terms of their enhanced relative importance following the untimely death of Agrippa the year before: regardless of Augustus' plans for Gaius and Lucius Caesar, Livia's sons now played first and second fiddle in his regime. It was also in 11 BCE that Augustus married Iulia to Tiberius and that, when his sister Octavia died, Drusus was granted the honour of delivering a second funeral oration for her from the Rostra.125 Velleius Paterculus and Tacitus furthermore offer proof positive that Tiberius and Drusus had already been invested with independent imperium auspiciumque well before Agrippa's death in 12 BCE. First, there is Vell. 2.122, again worth quoting in full given its particular interest to this argument: Quis non inter reliqua, quibus singularis moderatio Ti. Caesaris elucet atque eminet, hoc quoque miretur, quod, cum sine ulla dubitatione septem triumphos meruerit, tribus contentus
123 Compare also Hurlet 1997, 168 f., who rightly contests the view of a large number of scholars (referenced in n. 28 of p. 168) asserting that Ann. 1.14.3 (quoted in n. 48 supra), where Tacitus records that Tiberius asked the Senate to invest Germanicus with proconsular imperium following the death of Augustus, invalidates Dio's evidence that Germanicus already held such imperium since 11 CE with the argument that "une telle interprétation donne toutefois au témoignage de Tacite un sens qu'il n'a pas: l'historien romain ne dit pas que Germanicus reçut alors un imperium pour la première fois; il rappelle plus exactement que 'Tibère demanda pour Germanicus l'imperium proconsulaire', ce qui est sensiblement différent. L'existence d'une salutation impériale décernée à Germanicus dès 13 infirme en outre l'idée que la première investiture date de septembre 14, puisque l'octroi d'une telle distinction militaire prouve que celui-ci était en possession de ses propres auspices avant cette dernière date". Just as Tacitus' (arguably confused and confusing: see n. 48 supra) representation here cannot be interpreted as conclusive evidence that Germanicus did not yet hold independent imperium in August 14 CE, especially in the face of Dio's evidence in 56.25.2, so can Dio 54.33.5 (supra, p. 151) not be interpreted as showing that Tiberius and Drusus first received such imperium in 11/10 BCE, especially not in the light of a disjointed if significant body of indirect evidence that they had both long been invested with such authority. Similarly, one might just as well -but equally incorrectly, comp. nn. 46-48 and 121 f. supra -interpret Tab. Siar. frg.
1, 19-20 (AE 1984, 508) as evidence that Germanicus only received consular imperium in 17 CE: ordinato / statu Galliarum, proco(n)s(ul) missus in transmarinas pro[uincias (etc.). I do not, however, accept Hurlet's suggestion (p. 168) that Germanicus campaigned in Germany from 11 CE onwards "sous ses propres auspices": he indeed commanded propriis auspiciis but did so under the overarching auspices of Augustus; comp. also p. 161 f. and n. 129. 124 Dio 53.32.1 (23 BCE); 54.12.4 (18 BCE) and 28.1 (12 BCE): cf. also n. 91 supra. Likewise, as Syme discerningly observes in 1979, 324, Tacitus did not bother registering that Drusus Caesar was sent to Illyricum in the capacity of proconsul in 17 CE even though he does record that both Germanicus and he were voted ovations in the course of 19 CE on account of their diplomatic successes in their respective provinces (Ann. 2.64.1). As Syme correctly notes, "ovations presuppose that imperium". On the likely chronology of Drusus Caesar's first quinquennial proconsulship, see also n. 48 supra.
"Among the other acts of Tiberius Caesar, wherein his remarkable moderation shines forth conspicuously, who does not wonder at this also, that, although he unquestionably earned seven triumphs, he was satisfied with three? For who can doubt that, when he had recovered Armenia, had placed over it a king upon whose head he had with his own hand set the mark of royalty, and had put in order the affairs of the east, he ought to have received an ovation; and that after his conquest of the Vindelici and the Raeti he should have entered the City as victor in a triumphal chariot? Or that, after his adoption, when he had broken the power of the Germans in three successive campaigns, the same honour should have been bestowed upon him and should have been accepted by him?
And that, after the disaster received under Varus, when this same Germany was crushed by a course of events which, sooner than was expected, came to a happy issue, the honour of a triumph should have been awarded to this consummate general? But, in the case of this man, one does not know which to admire the more, that in courting toils and danger he went beyond all bounds or that in accepting honours he kept within them." It is important to observe that Velleius elsewhere in his narrative unequivocally demonstrates his perfect knowledge of the fact that a Roman commander had to have conquered as a holder of independent imperium auspiciumque in order to qualify for triumphal honours.126 As regards the brilliant successes gained by M. Aemilius Lepidus (ord. 6 CE) during his service in Pannonia as legatus Augusti pro praetore under the immediate command of Tiberius in 9 CE,127 Velleius indeed makes the following insightful observation in 2.115.2-3: "In the beginning of summer Lepidus led his army out of winter quarters, in an effort to make his way to Tiberius the commander, through the midst of peoples that were as yet unaffected and untouched by the disasters of war and therefore still fierce and warlike; after a struggle in which he had to contend with the difficulties of the country as well as the attacks of the enemy, and after inflicting great loss on those who barred his way, by the devastation of fields, burning of houses, and slaying of the inhabitants, he succeeded in reaching Caesar, rejoicing in victory and laden with booty. For these feats, for which, if he had achieved them with auspices of his own, he would duly have received a triumph, he was granted the ornaments of a triumph, the will of the Senate endorsing the recommendation of the Caesars." 
It was doubtlessly at the same meeting of the Senate that they duly granted Tiberius a public triumph over the Pannonians and the Dalmatians.128 Although Tiberius Caesar, too, was fighting under the auspices of Caesar Augustus at the time (i.e., alienis auspiciis), he as proconsul still held independent imperium auspiciumque of his own and therefore met all of the basic requirements. Regardless of the fact that ordinary senators had ceased to celebrate curule triumphs or ovations since 19 BCE, Lepidus, by contrast, could not even be considered for full triumphal honours, as he had conquered sine propriis auspiciis, without auspices of his own.129 In light of these considerations, Velleius' glowing testimony in 2.122 that Tiberius ought to have been awarded an ovation on account of his bloodless successes
128 Cf. infra, p. 185, for the fact that, by force of circumstances, Tiberius would not celebrate this triumph before 23 October 12 CE, almost three years after it had been decreed. 129 See Vervaet 2014, chapter 7, sections 6 f. for a comprehensive discussion of the high command under Imperator Caesar Augustus. Compare also Hurlet 2015, 290: "Cette précision [i.e. Vell. 2.115.3] signifie a contrario que tous les généraux qui triomphèrent ou qui furent au moins salués imperator au début de l'époque impériale étaient en possession de leur propre imperium et avaient ainsi pris leurs auspices pendant leurs campagnes."
For the fact that Velleius was perfectly aware of the key distinction between proconsuls (normally appointed by the Senate using sortition) and imperial legates, see, e.g., 2.99.4, where he records that Tiberius was visited in Rhodes by "all who departed for the provinces across the sea, whether proconsuls or legates" (ut omnes, qui pro consulibus legatique in transmarinas sunt profecti prouincias, uisendi eius gratia Rhodum deuerterint) and 2.112.5 and 113.3, where he distinguishes between Tiberius, termed imperator, and his legates, termed duces or legati, in his narrative of the great Pannonian revolt of 6-9 CE.
in the East in 20,130 a distinction ranking well above the supplications recorded in Dio (supra, p. 130; 133), and that his conquest of the Vindelici and the Raeti should have earned him a full public triumph, further confirms that he achieved these feats as a proconsul, propriis cum auspiciis.131 A few other passages scattered across his second book further confirm that Tiberius had held independent imperium auspiciumque from his Armenian campaign in 20 BCE down to his first command in Germania in 8 BCE. In 2.96.3, Velleius attests that, following Agrippa's death, Tiberius took over the war effort in Pannonia as imperator, i.e., as holder of independent imperium auspiciumque, and that his victory in this formidable war earned him an ovation.132 That he had already held that status during his first stint as commander in Gaul in 15 can also be inferred from 2.104.3.
Velleius here claims that in 4 CE, when Tiberius was travelling north to resume aggressive operations in Germania, the inhabitants of the Gallic provinces were overjoyed at the sight of their old commander: "Indeed, words cannot express the feelings of the soldiers at their meeting, and perhaps my account will scarcely be believed -the tears which sprang to their eyes in their joy at the sight of him, their eagerness, their strange transports in saluting him, their longing to touch his hand, and their inability to restrain such cries as 'Is it really you that we see, imperator?' 'Have we received you safely back among us?' 'I served with you, imperator, in Armenia!' 'And I in Raetia!' 'I received my decoration from you in Vindelicia!' 'And I mine in Pannonia!' 'And I in Germany!' "
130 The ovation (or lesser triumph) being the customary reward for 'dustless' (i.e., bloodless) victories: Gell. 5.6. 6, 111.4, 112.5, 113.2, 114.4, 115.2 and 5. It is generally accepted that he led these operations as proconsul.
Whilst the vocative imperator strictly refers to the present situation, Velleius' representation unmistakably suggests that all these men had previously served under Tiberius as their imperator and proudly acknowledged him as such. That Velleius in a single instance (2.120.5) uses the term imperator in a non-technical sense with respect to P. Quinctilius Varus might cause some to doubt this interpretation.133 The detail that Tiberius had granted dona militaria to some of his men during his commands against the Vindelici and the Pannonians in the years 15 to 9 BCE, however, provides further evidence: traditionally, such was the exclusive preserve of imperatores in the technical sense of the word as commanders invested with independent imperium and the corresponding auspices.
In the period here considered, that would have been either Augustus himself or anyone else with proconsular imperium.134 133 One should, however, note that Velleius otherwise consistently terms Varus dux in 2.117-120 (see 118.2, 119.2 and 3), and that in his narratives of the campaigns he conducted as proconsul after his adoption by Augustus in 4 CE and before his assumption of the imperial purple (2.104-121) he often terms Tiberius dux (2.106.1 and 3, 111.2 and 4, 112.3 and 5, 113.1, 115.5) as well as imperator (2.104.3-4, 106.1, 111.4, 112.5, 113.2, 114.4, 115.2 and 5, 117.1, 125.3). 134 See Maxfield 1981, 115-118 (esp. 117): "The epigraphic evidence for the entire imperial period is unanimous on the point that it was the emperor or a member of the imperial family who granted dona, whatever discretionary powers their agents may in theory have had. No inscription records a provincial governor granting dona: the vast majority of cases record the emperor as the awarding authority, while just a few name a member of the imperial family, for example Tiberius Caesar, stepson and heir to Augustus, and Germanicus Caesar, nephew of Tiberius. The situation in the senatorial provinces was rather different. Here it was the senate who appointed governors, proconsuls, to act on their behalf: the proconsular imperium gave these men the same rights in the matter of awarding dona as had been granted to their republican predecessors. There is just one example of this theory being put into practice and that was in the province of Africa Proconsularis when the soldier Helvius Rufus distinguished himself during an encounter with the rebel leader Tacfarinas and was awarded a torques and hasta not by the emperor Tiberius but by the governor Apronius [Ann. 3.21]. That no further examples of decorations awarded by proconsular governors are recorded could be due to the fact that their powers in this respect were eroded away in favour of the emperor. 
It is, however, much more likely to be a simple case of desuetude" (quoted from p. 117). In Aug. 25.3 Suetonius expressly records that Augustus did not deem "those who had celebrated triumphs" eligible for dona as they themselves had "the privilege of bestowing such honours wherever they wished", which indicated his respect for the republican tradition that any holder of independent imperium could award military decorations (for examples of which, see, e.g. Liv. 42.34.11 and Plin. Nat. 22.7). In Tib. 32.1, we are told that Tiberius rebuked some proconsuls still commanding military forces for not writing their reports to the Senate and "for referring to him the award of some military prizes (quibusdam militaribus donis), as if they had not themselves the right to bestow everything of the kind".
In this respect, it is also well worth noting that Florus, too, in 2.30.31 records that all of Germany between the Rhine and the Elbe (in 2.30.23-26, he successively mentions the Usipetes, Tencturi, Catthi, Marcomanni, Cherusci, Suebi and Sicambri) had been conquered sub imperatore Druso, at the behest of Caesar. That Florus, too, was well aware of the key distinctions between imperatores and legati can be gleaned from 2.33.51, where we are told that the final stages of the war against the Cantabrians were carried out by Augustus' "legates Antistius and Furnius as well as Agrippa while he was wintering on the coast at Tarrago
"Meanwhile, to consolidate his power, Augustus raised Claudius Marcellus, his sister's son and a mere stripling, to the pontificate and curule aedileship; Marcus Agrippa, no aristocrat, but a good soldier and his partner in victory, he honoured with two successive consulates, and a little later, on the death of Marcellus, selected him as a son-in-law. Each of his step-children, Tiberius Nero and Claudius Drusus, was given the name of imperator,
Although Suetonius uses the term consulares exercitibus praepositos, there is no doubt that Tiberius' reprimand concerns the consular proconsul Africae, who alone continued to command a legion until the reign of Caligula. 135 In 2.33.48, we are told that after arriving in Segisama and pitching his camp there, Augustus had divided his army into three parts in order to seal off the whole of Cantabria and mount a three-pronged offensive against their mountainous strongholds.
though his family proper was still intact: for he had admitted Agrippa's children, Gaius and Lucius, to the Caesarian hearth, and even during their minority had shown, under a veil of reluctance, a consuming desire to see them consuls designate with the title Princes of the Youth. When Agrippa gave up the ghost, untimely fate, or the treachery of their stepmother Livia, cut off both Lucius and Gaius Caesar, Lucius on his road to the Spanish armies, Gaius -wounded and sick -on his return from Armenia. Drusus had long been dead, and of the stepsons Nero survived alone. On him all centred. Adopted as son, as colleague in the supreme command, as consort of the tribunician power, he was paraded through all the armies, not as before by the secret diplomacy of his mother, but openly at her injunction." Tacitus here produces an undoubtedly chronological summary of events. Since Dio expressly attests that Tiberius and Drusus were not allowed their salutationes imperatoriae of 12 and 11 BCE respectively, imperatoria nomina are here meant in a technical sense: Augustus had already made them imperatores, i.e., holders of independent imperium auspiciumque, when his family was still intact, well before the death of Agrippa (in March 12), and, in the case of Tiberius, even before he had decided to adopt Agrippa's young sons Gaius and Lucius (in June 17).136 In this particular instance, it should be noted that Ann. 1.2 offers a striking parallel. Gelzer 1918, col.
484; Stylow 1977, 489, and Radice-Mayer 2016, 57 ("Tacitus offers in this chapter a sweeping and fast-paced summary of Augustus' attempts to ensure a likely successor. It is worth noting that strict chronology is often ignored"; comp. also p. 59, where it is incorrectly asserted that "the title 'imperator' brought with it no official imperium […] but did increase the recipients' auctoritas"). Barnes' assertion in 1974, 22 that Tacitus "may be guilty of a misconception: the best evidence seems to reveal that Drusus was not allowed the title during his lifetime" is correct only in the sense that Drusus was probably never made IMP I during his lifetime (cf. infra pp. 179-182). Syme 1979, 314 on the one hand rightly argues that "there are no grounds for perplexity. It is a question of style. The historian is deliberately avoiding the term imperium proconsulare. That power and that alone confers the right to accept and bear the title imp. The theme concerns high politics as well as warfare and ceremonial. The princeps was eager to promote his stepsons. They accede to the consulship at twenty-eight, four years earlier than normal for a nobilis in this epoch; and both are invested with proconsular imperium". On the other, however, he sees this decision as a direct consequence of the death of Agrippa (comp. p. 309: "Hence a problem, to reward their ambition but not to incite it detrimentally; and it was a question how soon they might accede to an imperium proconsulare") and firmly dates it to 10 BCE (pp. 310-314). In their respective commentaries, Koestermann (1963, 67: Tiberius and Drusus bore the title "wegen ihrer Verdienste um die Ausdehnung und Befestigung der römischen Macht in den Alpen und in Germanien seit dem J. 16 v. Chr.") and Goodyear (1972, 109: "the exact date of the first conferment […] is not known, but 9 B.C. or not much earlier seems probable") remain tantalizingly vague on the issue.
Barnes, Koestermann and Goodyear moreover apparently fail to distinguish between the term imperator in its technical sense of commander with independent imperium and the nomen Imperatoris accorded by the troops in the field through imperatorial salutation. Contra

In the context of a chronological précis of Augustus' own political career, Tacitus here similarly uses the term triumuiri nomen in a technical sense, to designate the office of triumuir rei publicae constituendae, indicating that Caesar Octavianus continued to hold this plenipotentiary magistracy beyond the death of Marcus Antonius on 1 August 30 BCE: "When the killing of Brutus and Cassius had disarmed the Republic; when [Sextus] Pompeius had been crushed in Sicily and, with Lepidus thrown aside and Antonius slain, even the Julian party was leaderless but for Caesar; after laying down his triumviral title, he conducted his business as a simple consul content with tribunician authority to safeguard the plebs."137

That Augustus designates Tiberius as legatus in regard to his operations in Pannonia and Illyricum from 12 to 9 BCE in his Res Gestae does not present an insurmountable obstacle.138 First, the term legatus here features in a non-technical sense of envoy, indicating that Augustus had personally dispatched Tiberius to conduct these operations, doubtlessly by virtue of a motion carried in the Senate at his behest. In this matter, it is, moreover, important to note that many attested proconsuls of the triumviral and Augustan period are likewise termed legati in the literary sources and the Res Gestae.139 Second, Dio unequivocally attests that Tiberius had completed his summer campaign against the Dalmatians and Pannonians of 9 BCE before Drusus' misfortune compelled him to race north to Germania.140 That Tiberius held the office of proconsul at the start of 10 BCE is unequivocally attested in Dio 54.33.5 (supra, p.
151).

138 2009, 247 notes, the addition of the clause concerning Tiberius' status "directs the reader away from assuming that Augustus is referring to Tiberius' suppression of the Pannonian revolt in AD 6-9, towards his initial conquest of the region in 12-9 BCE, since Tiberius was adopted by Augustus in AD 4" - comp. also Ridley 2003, 85-88. 139 See Vervaet 2014, 239-252, esp. 247.

The above reappraisal of the extant sources invalidates the consensus that Tiberius and Drusus were denied curule triumphs and imperatorial salutations in 12 and 11 and instead received ornamenta triumphalia because they had conquered as mere legati Augusti pro praetore, a supposition that furthermore fails to account for their respective ovations.141 The literary evidence strongly suggests that, from 20 and 15 successively, both of Livia's sons had invariably commanded as proconsuls, and thus as imperatores in their own right.142 As further discussed below, Augustus had a very different rationale for moderating the honours heaped on his stepsons.

141 To quote just a few examples of scholars arguing or suggesting that Tiberius and/or Drusus were denied triumphs and salutations in 12 and/or 11 respectively because they were mere legati Augusti pro praetore, many of whom also invoke R. Gest. div. Aug. 30.1 and Dio 54.33.5 (cf. supra, p. 151) as proof positive that they were not invested with consular imperium before January 10 BCE: Mommsen 1878, 1, 126 n. 1 (where Mommsen even invokes R. Gest. div. Aug. loc. cit. as evidence that Tiberius continued to command as a mere imperial legate in Pannonia in 9 BCE) and 131 n. 2, comp. 2, 852 n. 3 and 1152 n. 2; Stein 1899, col. 2709 and 2712 (comp. C 941, p. 221); Gelzer 1918, c. 483 (Gelzer merely notes that Tiberius was sent to Pannonia in 12 as legatus pro praetore and that Augustus would only allow him the ornamenta triumphalia whilst vetoing his imperatorial salutation and the Senate's vote of a triumph); Jones 1934, 153 (in n. 3, Jones asserts that Dio in 54.35.5 (sic) "tells us […] that the Senate decreed that Drusus, at the close of his term of office as praetor, should rank pro consule"); Alföldy 1974, 55; Barnes 1974, 22 n. 13; Levick 1976, 35 (comp. 1999); Syme 1978, 60 n. 2 ("The stepsons of the Princeps first received proconsular imperium after the campaigns of 11, to be valid for Drusus from January of the next year", with reference to Dio 54.33.5 and 34.3); Syme 1979, 310-314 (comp. 309, where Syme asserts that "no instance" of Augustus taking imperatorial salutations for victories won by proconsuls "is discoverable after the return of normal government in 28 and 27"; in 1939, 394 n. 2 Syme had already suggested that, "though it cannot be proved", "M. Vinicius was the last proconsul, Tiberius the first imperial legate of Illyricum" - in 1986, 332, however, Syme is inclined to consider Vinicius the first of the imperial legates to govern Illyricum, from 15 or 14 to 12 BCE, a suggestion rightly contested by Hurlet 2006, 86 and 145-147, who cogently argues that Dio's testimony in 54.34.4 that Illyricum ceased to be a public province only in 11 BCE should not be called into question); Castritius 1982, 46; Syme 1986, 334; Zanker 1987, 226; Rich 1990, 213 (arguing also that only Drusus as praetor in 11 satisfied the requirement that only holders of independent imperium could receive imperatorial salutations and celebrate ovations, a requirement waived by the Senate when granting Tiberius an ovation as well (Dio 54.34.3), and that only Drusus consequently received an imperatorial salutation in 11, whereas Tiberius had to wait until 9 (comp. also n. 176 infra)); Kienast 1990, 69; Thomasson 1991, 34; Hickson 1991, 129; Hurlet 1997, esp. 87-89, comp. also 90 f. and 95 f. and the useful tables on pp. 556 and 562 (on pp. 86 and 88 f., Hurlet suggests that the promotion of Tiberius and Drusus to proconsul following Agrippa's death suffered a slight delay as Augustus' "respect pour des institutions républicaines toujours bien vivantes" had him insist on Drusus first holding the praetorship in 11 BCE: consequently, Drusus had to wait until 1 January 10 BCE, whereas Tiberius, as Hurlet argues on the basis of Dio 55.2.4 and 54.34.3, was invested with his first "imperium proconsulaire" at the end of 11 BCE - "de la fin de l'année" [comp. also 2006, 145: "dans le courant de l'année 11"]; on p. 97 and esp. 101, Hurlet suggests that Tiberius was voted his first and only ovation at some point in 9 BCE); (implicitly) Eck 1998, 58 f.; Bleicken 1998, 579 f.; Dettenhofer 2000, 162-167 (she even believes that Tiberius was decreed two ovations on account of his first victories in Pannonia following Agrippa's death but was only allowed to celebrate the second and furthermore argues that Augustus consistently favoured Drusus over Tiberius, investing the former with 'proconsular' imperium in 11, one year before the latter); Kehne 2002, 310 n. 102 (where Kehne nonetheless also deems it possible that Drusus administered his extra-urban province "mit eigenem prätorischen Imperium" during his praetura); Rich 1990, 210 f. and 1998, 120 n. 157.
A significant number of scholars even believe that Tiberius was only invested with an extraordinary consular imperium in 9 (following the death of Drusus) or, on the basis of an incorrect interpretation of Dio 55.6.5 (a suggestion rightly exploded by Hurlet 1997, 88), even as late as 8 BCE, as opposed to his younger brother Drusus, who allegedly became proconsul in January 10 BCE: Dupraz 1963, 181 (9 BCE); gouarc'h 1982, 241 (8 BCE); Gallotta 1987, 122 (8 BCE); Jacques-Scheid 1990, 21 (8 BCE); Kienast 1990, 76 (8 BCE); comp. also Fitz 1993, 50-56, who argues that Tiberius was the first legatus Augusti pro praetore of Illyricum from 12 through 9 BCE, regardless of the fact that his brother Drusus had been invested with imperium proconsulare "seit 11 v. Chr.", suppositions that fly in the face of the extant evidence. According to Fitz, who consequently believes that Drusus in 11 received a number of honours refused to Tiberius because of the former's "höheren Rang", we should not be guided by the 20th century's 'sense of justice' in supposing that the latter had been promoted at the same time as his younger brother, and that it was not customary in the early Principate to award more than one 'secondary proconsulship' at the same time. According to Fitz, it was only as Tiberius took over from Drusus in Germania in "10/9 v. Chr." (sic, p. 54) that he acquired "prokonsularische Machtsbefugnisse" - comp. also p. 56: "Nach dem Tod von Drusus schickte ihn Augustus nach Germanien mit den Machtsbefugnissen eines Prokonsul." On p. 55, however, Fitz is adamant that it is impossible to establish "ob Tiberius noch in Illyricum, zu Lebzeiten von Drusus, die prokonsularische Machtsbefugnisse erhielt oder erst nach dem Tod seines Bruders, als ihm diese mit dem germanischen Auftrag zukamen."
For a more or less similar position, see also Kienast-Eck-Heil 2017, 61 and 70, who likewise suggest that Drusus received and celebrated an ovation in 11 BCE (while still legatus Augusti pro praetore) and subsequently became a proconsul as from 1 January 9 BCE, whereas Tiberius campaigned in Pannonia and Dalmatia as a legatus Augusti pro praetore from 13 up to and including 9 BCE and only became proconsul in Germania in 8 BCE. The misconception that Augustus strongly favoured Drusus over Tiberius, considering the former as his preferred successor, can also be found in Wolters 2017, 45-52, esp. 50 f.

142 In Aug. 25.1, Suetonius seems to produce some further evidence for this, as he recounts here how Augustus after the civil wars invariably addressed his troops as soldiers and expected as much from "his sons or stepsons who held military commands" (ac ne a filiis quidem aut priuignis suis imperio praeditis aliter appellari passus est). The context suggests that Suetonius here means independent imperium rather than delegated authority.

Hammering Home the Message: the Numismatic Evidence

Quite intriguingly, the disjointed - if unequivocal - literary evidence that Tiberius and Drusus held extraordinary, quinquennial proconsulships from respectively 20 and 15 BCE is hardly reflected in the extant epigraphic record.143 Fortunately, however, we do have a remarkable series of denarii and aurei commemorating Augustus' tenth imperatorial salutation (15-13 BCE) on account of Tiberius and Drusus' decisive victories over the Raeti and the Vindelici in 15 and 14 BCE.144 Minted in Lugdunum, the administrative seat of Gaul, and mainly intended to pay the armed forces who were fighting these wars, the reverse of one series of these coins (viz. RIC I² Aug. nos. 162a-165b = BMC 1 Aug. nos.
443 and 446-449) strikingly displays either one or both brothers, bareheaded, wearing the general's paludamentum and holding a parazonium, handing over the olive-branches (with the right hand) to Caesar Augustus, seated on a sella curulis on a dais at right in his capacity of universal supreme commander, bareheaded, wearing the toga praetexta, and holding out his right hand.145 (Images 1-5)

143 Hurlet (comp. also n. 111 of p. 190) observes that this is also the case with Germanicus, who is termed proconsul in all known contemporary official sources but never in dedications established by individuals or local communities at their own initiative.

144 As Barnes 1974, 22 n. 12 observes, ILS 5816 (13/12 BCE) seems to be the earliest epigraphical attestation of Augustus as IMP X. That Augustus took his tenth salutation in 15 BCE on account of the victories of Tiberius and Drusus in their Alpine campaign is generally accepted; comp., e.g., Syme 1979, 310 and Kienast-Eck-Heil 2017, 58.

145 Kuttner 1995 speculates that the first "must refer to Drusus, who initiated the Alpine campaigns, the second to Tiberius and Drusus in their joint fighting in the latter part of the Alpine campaign". I do not, however, accept Kuttner's suggestion (187 f.) that Augustus is being presented with palm-branches rather than olive-branches. Whilst Mattingly in BMC locc. cit. has olive branches, RIC remains silent on the issue. The possibility that it concerns laurel branches cannot be ruled out positively. Predominantly on the strength of Oros. 6

The reverse of another series of denarii and aurei commemorating Augustus' tenth imperatorial salutation (RIC I² Aug. nos. 166a-169 = BMC 1 Aug. nos. 450-451, 454-455 and 457-458) features a bull, the customary sacrificial animal of the curule triumph.
In the Republic, the imperator holding the summum imperium and the prevailing auspices would take credit for victories won even in his absence by subordinate commanders, regardless of the question whether these, too, held independent (i.e., non-derivative) imperium and auspices of their own. If, however, the subordinate commanders who had conducted the actual ductus, the personal leadership, also held independent imperium and auspices, they too perfectly qualified for salutationes imperatoriae, ovations and curule triumphs, as opposed to commanders who merely held derivative praetorium imperium and no auspices whatsoever, such as legati or (pro)quaestores pro praetore. 146 Therefore, the true significance of this remarkable coinage is twofold. First, it further suggests that Augustus' stepsons gained their victories in Raetia and Vindelicia as proconsuls in their own right. There would have been no point in their conspicuously ceding the symbolical trappings of victory and triumph to Augustus had they merely acted as his legati pro praetore: such would have amounted to a pompous exercise in stating the bleeding obvious. Consequently, this coinage corroborates, and perfectly fits, Cassius Dio's note (cf. supra p. 145) that Tiberius and Drusus commanded a number of legati pro praetore in Raetia as well as his evidence on the triumphal honours refused in 12 and 11 BCE. Second, these coins also strikingly confirm how Tiberius and Drusus made a signal contribution to Augustus' policy of converting the triumph into an imperial monopoly. The message was unequivocal: even though both noble proconsuls had gained substantial successes worthy of all triumphal honours they notably ceded all credit to Imperator Caesar Augustus, who alone received an imperatorial salutation as he no doubt turned down the corresponding triumph decreed by the Senate. This picture is substantiated by some further literary and epigraphical evidence. In Odes 4.4 and esp. 
14, Horace, too, gives Augustus all the credit for what he terms the recent victories in Raetia and Vindelicia as he proclaims that Drusus destroyed the Vindelician Genauni and Breuni "with your soldiers" (milite nam tuo), and that Tiberius shortly after defeated the savage Raeti in a bloody battle "under your happy auspices": mox graue proelium / commisit immanisque Raetos / auspiciis pepulit secundis.147 The famous inscription on the Tropaea Augusti at La Turbie, set up in the Alpes Maritimae between 1 July 7 and 30 June 6 BCE, unambiguously records how Augustus unhesitatingly took all the credit for the military successes of P. Silius Nerva, Tiberius and Drusus as won "under his personal leadership and his auspices", eius ductu auspiciisque.148 Though not quite false, the claim that the extensive series of conquests enumerated in the inscription were made under his personal leadership (ductus) stretches the truth, as he was in every instance physically absent from the actual fighting.149

ifici) Max(imo) imp(eratori) XIIII trib(unicia) pot(estate) XVII | Senatus Populusque Romanus | quod eius ductu auspiciisque gentes Alpinae omnes quae a mari supero ad inferum pertinebant sub imperium p(opuli) R(omani) sunt redactae | gentes Alpinae deuictae Trumpilini Camunni Vennonetes Venostes Isarci Breuni Genaunes Focunates | Vindelicorum gentes quattuor Cosuanetes Rucinates Licates Catenates Ambisontes Rugusci Suanetes Calucones | Brixentes Leponti Viberi Nantuates Seduni Veragri Salassi Acitauones Medulli Ucenni Caturiges Brigiani | Sogiontii Brodionti Nemaloni Edenates (V)esubiani Veamini Gallitae Triullatti Ectini | Vergunni Eguituri Nemeturi Oratelli Nerusi Velauni Suetri. In Nat. 3.136-137, Pliny the Elder takes the trouble to append this inscription: Imp. Caesari Diui filio Aug. Pont. Max., imp. xiv, tr. pot. xvii, S.P.Q.R., quod eius ductu auspiciisque gentes Alpinae omnes quae a mari supero ad inferum pertinebant sub imperium p. R. sunt redactae.
Gentes Alpinae devictae Triumpilini, Camunni, Venostes, Vennonetes, Isarchi, Breuni, Genaunes, Focunates, four tribes of the Vindelici, the Cosuanetes, Rucinates, Licates, Catenates, Ambisontes, Rugusci, Suanetes, Calucones, Brixentes, Leponti, Uberi, Nantuates, Seduni, Varagri, Salassi, Acitavones, Medulli, Ucenni, Caturiges, Brigiani, Sobionti, Brodionti, Nemaloni, Edenates, Vesubiani, Veamini, Gallitae, Triullati, Ecdini, Vergunni, Eguituri, Nematuri, Oratelli, Nerusi, Velauni, Suetri. Augustus repeats the claim that he had conquered all of the Alpine lands in R. Gest. div. Aug. 26.3: Alpes a re]gione ea, quae proxima est Hadriano mari, [ad Tuscum pacari fec]i nulli genti bello per iniuriam inlato.

149 Augustus indeed spent the years 16, 15 and 14 BCE in Gaul. He set out for Gaul during the consulship of L. Domitius and P. Scipio, "making the wars in that region his excuse" (Dio 54.19.1). He returned to Rome in the consulship of Tiberius and Quinctilius Varus, having "finished all the business which occupied him in Gaul, Germany and Hispania" (Dio 54.25.1), leaving Drusus in Germany. From a purely technical point of view, the successive military operations in different parts of these lands thus took place under his ductus, even though in reality, he was far removed from the actual fighting in Noricum, Raetia and Vindelicia: see, e.g., Dio 54.21-22.1, 23.7. Compare also Dio 54.36.2-4 and 55.6.1-2 for Augustus remaining far from the fighting in campaigns he officially led in person in 10 and 8 BCE. As an autocrat in command of all Rome's armed forces, Augustus could, of course, get away with this generous and expansive interpretation of the concept of ductus, in stark contrast to republican practice, when any imperator claiming some significant military success to have been achieved suo ductu had to have been personally involved with commanding the victorious Roman forces concerned on the field of battle: see Vervaet 2014, chapters 1 and 4.
The other senatorial proconsuls and the ranking equestrian officers would have certainly got the message, especially as Agrippa's third and final recusatio triumphi of 14 BCE (cf. infra pp. 186-188) and the ensuing developments of 12 and 11 concerning Tiberius and Drusus rapidly followed suit. If even special and privileged proconsuls closely associated with Augustus declined or were denied triumphs and even imperial salutations, invariably matched by his own refusals to accept the corresponding triumphs decreed him in his capacity as summus imperator, there would be little point for any of the 'ordinary' proconsuls to petition for, or expect, any of the traditional triumphal honours.150 The fact that the Claudians "were not identified by legend, and there was no precedent in Roman coinage for historical scenes featuring several members of the Imperial family", as Rose observes, this series being minted "in a period where the iconography of the Imperial family on Roman coins was suggestive rather than explicit"151, reinforces the universal applicability of its symbolism and Augustus' novel triumphal monopolism. Although the literary sources remain silent on the issue, this coinage furthermore suggests that Tiberius and Drusus may well have been denied senatorial ratification of the imperatorial salutations they doubtlessly received from the army in the field following their victories in Raetia and Vindelicia. Likewise, Tiberius had possibly already been saluted Imperator by his legions in Armenia in 20 BCE. On that occasion, too, the Senate would have ratified only Augustus' salutation.
The panegyric nature of Velleius' work easily explains his silence on probable denied salutations for Tiberius in 20 and 15 as well as the attested instances of 12 and 11 BCE.152 Dio's failure to mention Augustus' 9th and 10th salutations of 20 and 15 BCE respectively as well as any corresponding salutations for Tiberius and Drusus by their armies in the field should probably be explained differently. As recorded in 54.24.7-8 (quoted and discussed infra, p. 186 f.), the Severan historiographer considers Agrippa's third and final recusatio triumphi of 14 BCE as the decisive moment in the establishment of an imperial monopoly over the public triumph. This arguably simplified representation explains why Dio only pays close attention to what happened in this sphere in regard to Tiberius and Drusus in the ensuing years. In the eyes of Dio, it was only in the context of a seemingly abruptly established new triumphal policy that the relevant actions and votes concerning Livia's stepsons came into sharp relief.

150 Comp. Eck's observations concerning the political ramifications of Agrippa's refusals of 19 and 14 BCE in n. 188 infra.

151 Rose 1997, 15. Rose further argues that if "the coins were used as donatives for the troops who fought with Tiberius and Drusus, which seems likely, then the meaning could have been clarified orally at the time of distribution", suggesting that it is, however, "unreasonable to think that the scene would have been clear to everyone who handled the coin", and observes in n. 69 (on p. 219) that the "closest numismatic parallel is the type of Sulla being offered a laurel branch by the Mauretanian king Bocchus: RRC no. 426.1". In my view, the symbolism of the scenes and the identity of the commanders involved would have been quite clear to the armies in Gaul and Germany as well as the contemporary senatorial elite and those equestrians involved with the armies in this sphere of operations.
The above analysis also explains why the major victories of P. Silius Nerva in 16 and 15 BCE over a number of Alpine tribes and subsequently the Pannonians, the Noricans and possibly also the Dalmatians did not spawn any imperatorial salutations or triumphal honours. As Dalla Rosa correctly observes, "l'année 16 av. J.-C. peut […] être considérée à juste titre comme la date à laquelle les campagnes romaines commencèrent dans les Alpes"153, and the lands of the Noricans were now for the first time incorporated into the Roman Empire.154 Rather than just another instance of Augustan triumphal abstinence, the Senate's inaction on behalf of Silius Nerva, too, suggests that Augustus had already made the decision to exclude 'regular' proconsuls from the public triumph and its associated rituals whilst drastically reducing their frequency too. Following the conviction of M. Primus, who was tried de maiestate in 22 BCE for having waged an unauthorized war of aggression as proconsul of Macedonia (cf. supra, p. 140), none of his peers would have risked taking significant military action without prior authorization on the part of Augustus and his Senate. Silius therefore doubtlessly acted on Augustus' orders throughout his tenure in Illyricum, especially as regards his offensive operations against some Alpine tribes and subsequently also the Noricans following his defeat of their allies, the Pannonians. Dalla Rosa is therefore right to note that "le cas de Silius Nerva dut jouer un rôle décisif, car il fut l'un des premiers proconsuls à subir les conséquences du durcissement de la politique augustéenne en matière de concession du triomphe"155. Since we also know from Suetonius (Aug. 71.2) that he was one of Augustus' intimate friends,156 being consul throughout 20 BCE, he may very well even have been aware of Augustus' decision to transform the triumphal ritual into the exclusive monopoly of his dynasty. At all events, the remarkable good fortunes of his three sons attest to the fact that his unassuming loyalty did not go unnoticed: whilst P. Silius held a suffect consulship in 3 CE, his siblings A. Licinius Nerva Silianus and C. Silius were honoured with ordinary consulships in 7 and 13 CE respectively, the former with Q. Caecilius Metellus Creticus Silanus as his (prior) colleague and the latter as consul prior with L. Munatius Plancus as his colleague.157 Since M. Lollius (consul prior throughout 21 BCE, holding the office alone at the start of the year: Dio 54.6.2) and M. Vinicius (suff. 19 BCE), too, had gained their significant successes as proconsuls in Macedonia (c. 19/18 BCE) and Illyricum (c. 14/13 BCE) respectively, much the same can be said of these trusted new men, especially since they, too, had received their important provincial commands extra sortem, i.e., by decree of the Senate on the motion of Augustus.158

153 Dalla Rosa 2015, 469; on the importance of Silius' Alpine campaign, see also van Berchem 1968 and Wells 1972, 66.

154 Comp. Eck 2018a, 10: "Das Königreich Noricum […] fand damals ein Ende und wurde wahrscheinlich dem Statthalter von Illyricum unterstellt; vielleicht amtierte in Noricum ein praefectus als unmittelbarer Vertreter Roms." For a similar appraisal, see Dalla Rosa 2015, 467, with further scholarship on the issue as well as the authority and prerogatives of such praefecti in n. 16.

155 Dalla Rosa 2015, 482. Given the argument of this study, I cannot, however, accept Dalla Rosa's (loc. cit.) argument that the Senate could not possibly have awarded Silius with a triumph because such would have elevated the proconsul above Augustus' stepsons, allegedly unable to receive curule triumphs in 15 as mere legati Augusti pro praetore - comp. also Rich 1990, 202 for a similar argument: a number of proconsuls could have claimed triumphs during the years c. 19-14 "but they may have been discouraged from applying, perhaps so as not to overshadow the achievements won by Tiberius and Drusus as Aug.'s legates". Although there is more merit in the arguments that "récompenser Silius Nerva avec un triomphe aurait signifié mettre ses exploits sur le même plan que la conquête de la Pannonie ou de la Germanie, alors que l'annexion du Norique ne fut qu'une opération préliminaire en vue de l'avancée au-delà des Alpes" and "Silius Nerva ne triompha jamais parce que sa victoire ne fut pas jugée suffisante et non du fait d'une supériorité auspiciale générale du prince", the foremost reason for the conspicuous inaction following Silius' victories was Augustus' new triumphal paradigm: as holder of independent imperium and auspices of his own and responsible for the ductus of his victorious troops in 16 and 15, he perfectly qualified for full triumphal honours, regardless of Augustus' summum imperium auspiciumque.

156 P. Silius Nerva had also been legatus Augusti pro praetore in Hispania Citerior and may even have served under Agrippa there in 19 BCE: see Syme 1939, 333 n. 1. For a summary of Silius' career, see PIR² S 729.

157 Dalla Rosa 2015, 466 (with n. 10) likewise counts Silius "au cercle des hommes de confiance d'Auguste" and terms his descendants "parmi les sénateurs les plus importants de l'époque julio-claudienne (stemma dans PIR² S, p. 271)"; comp. also 474: "L'envoi de ce consulaire en Illyrie, une province normalement administrée par d'anciens préteurs, indique qu'Auguste avait besoin d'un homme de confiance, parce que la garnison de la province était sûrement en cours d'accroissement numérique."

It would, however, be up
to the privileged extraordinary proconsuls Tiberius, Drusus (supra) and Agrippa (infra) to set a number of explicit examples in the ensuing years 15-11 BCE.

The Years 10 to 7 BCE

Although 11 BCE had been very busy for both of Livia's sons, the following years would bring no reprieve. Sometime in the winter of 11/10 BCE, the Senate voted that the temple of Janus Geminus be closed again on the ground that the wars had ended. Incursions of the Dacians into Pannonia and a fiscal rebellion of the Dalmatians, however, prevented the execution of the decree. Augustus promptly sent Tiberius, who had accompanied him to Gaul, to quell these disturbances whilst Drusus subjugated the Germanic Chatti, who had allied with the Sugambri. Afterwards (i.e., in the winter of 10/9 BCE), so Dio goes on to relate, both brothers returned to Rome with Augustus, who had remained in Lugdunum to monitor the situation in Germania from nearby,159 and "carried out whatever decrees had been passed in honour of their victories or did whatever else devolved upon them": καὶ ὅσα ἐπὶ ταῖς νίκαις ἐψήφιστο ἤ καὶ ἄλλως καθήκοντα ἦν γενέσθαι, ἐπετέλεσαν.160 As he held the eponymous consulship with T. Quinctius Crispinus Sulpicianus (i.e., in 9 BCE), Drusus again invaded the lands of the Chatti, conquered the territory of the Suebi, their allies, with great difficulty and bloodshed, and ransacked the country of the Cherusci, crossing the Weser and advancing as far as the Elbe.161 Dio recounts that he failed to cross this river and decided to make his way back to the Rhine after setting up trophies, and, reportedly, receiving a prophetic warning from a native woman.162 He would never make it back there, however, as he prematurely died in his summer camp from complications arising from an accident with his horse.163 After recounting events regarding the repatriation of Drusus' body and his posthumous honours,164 Dio calls to mind that, "while Drusus was yet alive" (i.e., in the summer of 9 BCE), Tiberius had quashed another revolt on the part of the Dalmatians and the Pannonians, "celebrated the equestrian triumph, and feasted the people, some on the Capitol and the rest in many other places" whilst Livia and Iulia dined the women. Dio goes on to recount that "the same festivities were being prepared for Drusus: even the Feriae were to be held a second time on his account, so that he might celebrate his triumph on that occasion", all thwarted by his untimely death.165 That Tiberius

161 Eck 2018a, 12 (comp. 2018b) suggests that Drusus' brief in 12/9 BCE was to conquer transrhenian Germany at the very least all the way up to the Elbe. Wolters 2017, 45-52 argues that the second phase of Drusus' conquests, commencing in 10 BCE and targeting the southern lands of transrhenian Germany (and to be concluded by Tiberius in 8 BCE, infra pp. 182-185), amounted to an outright war of aggression facilitated by improved geographical knowledge of Germany and prompted by Augustus' wish to ensure Tiberius' renewed campaigning in Pannonia and Dalmatia was duly matched in Germania. Since both brothers had received a quinquennial extension of their extraordinary proconsulships in 11 BCE, the renewal of major fighting in 10 may also have been politically expedient in that it brought welcome legitimation for their continuous special empowerment.

162 Comp. also Suet. Claud. 1.2-3: quam species barbarae mulieris humana amplior uictorem tendere ultra sermone Latino prohibuisset.

163 Dio 55.1. Thanks to Suet. Claud. 1.3 we know that Drusus died in his summer camp (in aestiuis castris), whereas we are told in Liv. Per. 142 that Drusus died from complications arising from a broken leg, sustained when his horse fell on it, on the thirtieth day after the accident.
Dio 55.1.2-4 merely recounts that Drusus died "of some disease" on his way back from the Elbe before he reached the Rhine. Levick 1972b, 783 n. 5 suggests that Drusus died around the onset of autumn. Tacitus' observation in Ann. 3.5.1 that ipsum [i.e., Augustus] quippe asperrimo hiemis Ticinum usque progressum neque abscedentem a corpore simul urbem intrauisse ("In the bitterest of the winter, he had gone in person as far as Ticinum, and, never stirring from the corpse, had entered the City along with it") should certainly be taken with a grain of salt as he is prone to exaggeration in a context where he attacks the alleged comparative lack of respect for Germanicus' returning remains by Tiberius and his son Drusus -comp. also Syme 1979, 321 n. 21: "it is not safe to press an allegation reported by Tacitus." We also know from Val. Max. 5.5.3 (infra, p. 180) that Augustus and Livia were already in Ticinum when Tiberius arrived there following his victories in Illyricum only to learn of his brother's critical condition. This suggests that Augustus and Livia had decided to remain in Ticinum after Tiberius' hasty departure until the arrival of Drusus' bodily remains. Thanks to Plin. Nat. 7.84, finally, we know that Tiberius "completed by carriage the longest twenty-four hours' journey on record when hastening to Germany to his brother Drusus who was ill: this measured 182 miles". As Tiberius covered the entire distance in a full day and factoring in a somewhat slower pace on the way back to Ticinum (on an estimate of c. 20 days), the combined evidence suggests that Drusus had died sometime around the start of November: comp. also Hurlet 1997, 93 f.: "les derniers jours du mois d'octobre ou les premiers jours du mois de novembre", with the funeral taking place some time in December.

164 55.2.1-3.
celebrated his ovation ex Pannonia first, in all likelihood on 16 January 9 BCE,166 is consistent with what we know about the honours decreed to both brothers in 12 and 11 BCE, when Tiberius as the older sibling invariably enjoyed due priority. Although Dio in 55.2.4 mentions Tiberius' victories of 9 BCE over the rebellious Dalmatians and Pannonians before his ovation, the date of 16 January 9 also sits well with his ensuing note (§ 5) that the Feriae Latinae, normally (though not invariably) held around March,167 were to be held a second time for the sake of the ovation planned for Drusus.168 Dio's slightly confusing narrative thus suggests that the Feriae had been celebrated as early as on 16 January in 9 BCE, so as to coincide with Tiberius' ovation,169 and that they were to be repeated later in that year on the occasion of Drusus' ovation, in both instances a striking indication of imperial favour. At all events, there is compelling evidence that Tiberius' victories in Illyricum in the summer of 9 must have earned him his first officially sanctioned

165 τὰς γυναῖκας εἱστίασε. τὰ δ᾿ αὐτὰ ταῦτα καὶ τῷ Δρούσῳ ἡτοιμάζετο· καί γε αἱ ἀνοχαὶ δεύτερον τὴν χάριν αὐτοῦ, πρὸς τὸ τὰ νικητήρια ἐν ἐκείναις αὐτὸν ἑορτάσαι, γενήσεσθαι ἔμελλον. ἀλλ᾿ ὁ μὲν προαπώλετο. Syme 1979, 311, followed by Kehne 2002, 311 n. 107, rightly discards the erroneous assumption in Stein 1899, col. 2712 and PIR 2 C 857 (p. 197) and Rohde 1942, col. 1902 that Drusus actually held his ovation in 11, a mistake repeated by Seager 1972, 26 f.; Bleicken 1998, 579; Seager 2005, 21; Itgenshorst 2008, 30 and 52 (cf. also n. 168 infra), and 2017, 66 n. 27, 66 nn. 34, 71 and 80; and Havener 2016, 335.

166 Gelzer 1918, col. 484 and Stein, PIR 2 C 481 (p. 221) too had already put Tiberius' ovation in 9, before the death of Drusus. As Syme discerningly observes in 1979, 313, "the day was auspicious, made memorable forever by assumption of the ruler's august cognomen".
Syme's argument found acceptance in, amongst others, Hurlet 1997, 97-100 (with due reference to Dio 54.36.4) and Seager 2005, 214, where the latter abandons his older position (21 f.) that Tiberius celebrated two ovations over the Pannonians, one in 11 and then another one following the funeral of his brother in 9. Although Fitz 1993, 55, too dates Tiberius' ovation to 9 BCE, he believes it is impossible to discern whether the celebration took place before or after Drusus' death. Kienast -Eck -Heil 2017, 70 tentatively accept 16 January 9 BCE as the date of Tiberius' ovation.

167 Pina Polo 2011, 256 f.

168 There are, at any rate, no grounds for Levick's view (in 1976, 35) that "in 12 BC Tiberius and in 11 Nero Drusus were allowed the insignia of a triumph" followed by the award of an ovation to Tiberius in 10 BCE and one to Drusus in the following year. Comp. also p. 236, n. 6: "Drusus too, though closer to his own fate than to duty towards anyone, in the collapse of spiritual vigour and bodily strength, yet at the very moment that separates life from death ordered his legions with their ensigns to go to meet his brother, so that he be saluted as Imperator. He further gave orders that a headquarters be set up for him to the right of his own and wished him to use the title of consul and Imperator. He bowed to his brother's majesty and out of his own life at the same time."

170 Hurlet 1997, 97 dates Tiberius' first imperatorial salutation to either 10 or 9 BCE: "au terme de l'une des campagnes victorieuses des années 10-9".
Although Hurlet correctly associates Tiberius' first imperatorial salutation with Augustus' 13th, I cannot accept his suggestion that "avec la première salutation impériale de Tibère est née la pratique d'associer le prince à une victoire qu'il n'avait pas remportée sous ses propres auspices" since Tiberius, like all other proconsuls of the Augustan era, invariably conducted all his military campaigns under Augustus' auspices: see Vervaet 2014, 253-292. This alleged breach of custom also sits uneasily with Hurlet's assertion (p. 96 with n. 85) that as late as 12 BCE, Augustus had acted as "le garant des traditions ancestrales" as he vetoed the Senate's decision to award Tiberius with a triumph. Kienast -Eck -Heil 2017, 72 more precisely date Tiberius' first (officially sanctioned) salutation to September of 9 BCE but are to my thinking wrong to suggest (on p. 62) that his younger brother Drusus had already received his first and second (officially sanctioned) salutations in 11 (the first) and "10 oder 9 v. Chr." (the second).

171 ILS 95, inscribed on a marble base found in the Campus Martius.

172 Valerius' account is slightly exaggerated here as we know from Plin. Nat. 7.84 and SC ad Pol. 34 that Tiberius made the journey of precisely 182 miles tribus uehiculis. That Tiberius was devastated by the loss of his younger brother is also on record in Consolatio ad

In other words, Drusus insisted on Tiberius using the nomen consulis as well as that of imperator while in his camp, after he had already ordered his legions to salute his brother as such. Since Tiberius had held the consulship in 13 BCE, it follows that he had already been awarded the nomen Imperatoris, too, before he made it to his brother's summer camp. Although Dio remains silent in his briefest of mentions of Tiberius' victory over the Dalmatians and Pannonians in 9 BCE173 and his narrative of the greater honours he gained in 8 BCE (infra p.
183) seemingly creates the impression that he only then won his first officially sanctioned imperatorial salutation, there should be no doubt that this had already happened sometime before he met with Augustus and Livia at Ticinum.174 As had been the case in 12 and 11, Tiberius' seniority again earned him due precedence in formally receiving the nomen Imperatoris from Augustus and the Senate.175 This course of events also further explains the posthumous ratification of Drusus' Germanic imperatorial salutations of 12 and 11. By a cruel stroke of fate, he had died before he could celebrate his ovation and officially accept his first nomen Imperatoris, and the actions of Augustus and the Senate on the occasion of his funeral sought 173 55.2.4, cf. supra, p. 178. In 54.36.2, Dio likewise gives the scantest of attention to Tiberius' successes there the year before, as his narrative of these years systematically favours Drusus. 174 Contra Barnes 1974, 22, who misinterprets Drusus' personal and self-deprecating homage to his brother to suggest that "in the sequel, both Augustus and Tiberius took the imperatorial title, respectively for the thirteenth and the first time (ILS 93 f.), and it was posthumously conferred on, or posthumously acknowledged for, Drusus (AE 1934, 151 = Inscr. Ital. xiii, 3, p. 15, no. 9)". As argued in the above, Augustus and the Senate would posthumously ratify both imperatorial salutations Drusus had received from his army in the field in 12 and 11 BCE. Although Syme 1979, 313 f. rightly doubts Barnes' "hazardous conjecture", he, for his part, wrongly believes (311, 314) that Tiberius earned his first, and Augustus his thirteenth, salutation on account of victories won by the former during his campaign of 10 BCE: cf. also n. 109 supra. 175 As demonstrated in the above, Tiberius had been decreed triumphal and other honours before Drusus in both 12 and 11, and his ovation likewise was scheduled to precede that of his younger brother. 
Syme 1979, 313, is, however, wrong to conceive of Tiberius' early ovation as "a modest reparation" for the alleged fact that "Augustus in his marked predilection for Drusus put him level with his consular brother (although four years junior) through the proconsular imperium decreed at the end of 11 B.C.". In this respect, it is also well worth calling to mind that Augustus had made Tiberius pontifex at some point before 31 December 16 BCE, Drusus' appointment to an augurate following some time before 31 December 11 BCE: Hurlet 1997, 562 and 556. Stein 1899, col. 2713 believes Drusus obtained the augurate around the same time as the consulship. Kienast -Eck -Heil 2017, 61, however, tentatively date Drusus' appointment to the augurate to 19 BCE, when he received the priuilegium annorum, whereas they suggest that Tiberius received his pontificate "at the latest" in 15 BCE, i.e., almost a decade after he received the priuilegium annorum in 24 BCE. In my view, it strains belief that Drusus would have been appointed to a major priesthood before his older brother.

to offer posthumous compensation, no doubt with the unqualified support of a deeply mournful Tiberius, then already officially IMP I.176 After refusing to celebrate any festival on account of Drusus' victories (and, for that matter, Tiberius'), instead honouring his late stepson with a rare visit to the temple of Jupiter Feretrius and leaving all other customary formalities to the consuls of 8 BCE, Augustus again took the field to campaign against the Germans, his very last military campaign.177 As was his wont, he remained behind in Roman territory whilst Tiberius, whom he had appointed to succeed Drusus, crossed the Rhine. As the Germanic tribes all fell in line or suffered further misfortune, awed by Augustus' calculated show of force, Tiberius was awarded with the nomen Imperatoris as well as his first curule triumph and designated to a second consulship.
All the fine detail is yet again produced by Cassius Dio, who also records some further measures and honours flowing from Augustus' Germanic campaign (55.6.4-7):

176 Comp. also Kuttner 1995, 185: Augustus, to console himself and preserve Drusus' memory, "decreed him all the paraphernalia of a real triumphator", and "himself must have been the one to direct that Drusus' statue be placed in the Forum Augustum with the inscribed record that he had been proclaimed imperator, an acclamation that in fact Augustus had not allowed him to recognize officially". Contra Siber 1940, 91 and 94 and Combès 1966, 176, who believe that Drusus officially became IMP I on account of military success won in 9 BCE. Comp. also Crook 1996, 98, who suggests that, in addition to Augustus, both Tiberius and Drusus took imperatorial salutations on account of their respective successes of 9 BCE in Illyricum and Germania -on p. 97, Crook correctly dates the vote of ovations and ornamenta triumphalia to both Drusus and Tiberius (whom he lists in that order) to 11 BCE, without making any comment with regard to their official positions: legati Augusti (comp. his observations on p. 96, quoted supra in n. 84) or proconsuls? Much earlier, Stein 1899, col. 2712 had already suggested that both Tiberius and Drusus became IMP I in 9 BCE (see, however, n. 105 supra for the fact that in PIR 2 C 857 and 941, he suggests that Drusus received his first in 11 BCE, roughly two years before his older sibling). Since Augustus is on record as IMP XII in 10/9 (ILS 91, trib. pot. XIV, July 10 to June 9) and IMP XIII (ILS 93 f., trib. pot.
XV, July 9 to June 8) as well as IMP XIV (IRT 319) in 9/8, it follows that Tiberius' first salutation in 9 also occasioned Augustus' thirteenth, as already argued by Mommsen 1883, 14, and that, as seen correctly by Syme 1979, 313, the latter's fourteenth followed swiftly in result of Tiberius' successes in Germany in the early summer of 8 BCE, which also earned the latter his second imperatorial salutation -comp. also Hurlet 1997, 100, with n. 112, where it is cleverly argued that IRT 319 indicates that these imperatorial salutations must have taken place between the spring and the early summer of 8 BCE. At any rate, ILS 147 unequivocally attests that Tiberius was still IMP II in or soon after 2/1 BCE. That Tiberius gained his first officially recognized imperatorial salutation in 9 BCE is also accepted by Gelzer 1918, col. 484. Kienast -Eck -Heil 2017, 58 date Augustus' 13th and 14th salutations to "10 oder 9 v. Chr." and "Frühsommer 8 v. Chr." respectively.

177 As observed by Hurlet 1997, 100 and Halfmann 1986, 628.

ὁ δ᾿ οὖν Αὔγουστος τοῦτό τε οὕτως ἐποίησε, καὶ τοῖς στρατιώταις ἀργύριον, οὐχ ὡς καὶ κεκρατηκόσι, καίτοι τὸ τοῦ αὐτοκράτορος ὄνομα καὶ αὐτὸς λαβὼν καὶ τῷ Τιβερίῳ δούς, ἀλλ᾿ ὅτι τὸν Γάιον ἐν ταῖς γυμνασίαις τότε πρῶτον συνεξεταζόμενόν σφισιν ἔσχον, ἐχαρίσατο. τὸν δ᾿ οὖν Τιβέριον ἐς τὴν τοῦ αὐτοκράτορος ἀρχὴν ἀντὶ τοῦ Δρούσου προαγαγὼν τῇ τε ἐπικλήσει ἐκείνῃ ἐγαύρωσε καὶ ὕπατον αὖθις ἀπέδειξε, γράμματά τε κατὰ τὸ ἀρχαῖον ἔθος, καὶ πρὶν ἐς τὴν ἀρχὴν ἐσελθεῖν, ἐκθεῖναι πρὸς τὸ κοινὸν ἐποίησε, καὶ προσέτι καὶ τοῖς ἐπινικίοις ἐσέμνυνεν· αὐτὸς γὰρ ἐκεῖνα μὲν οὐκ ἠθέλησε πέμψαι, ἐς δὲ δὴ τὰ γενέθλια ἱπποδρομίαν ἀίδιον ἔλαβε. τά τε τοῦ πωμηρίου ὅρια ἐπηύξησε, καὶ τὸν μῆνα τὸν Σεξτίλιον ἐπικαλούμενον Αὔγουστον ἀντωνόμασε· τῶν γὰρ ἄλλων τὸν Σεπτέμβριον οὕτως, ἐπειδήπερ ἐν αὐτῷ ἐγεγέννητο, προσαγορεῦσαι ἐθελησάντων ἐκεῖνον αὐτοῦ προετίμησεν, ὅτι καὶ ὕπατος ἐν αὐτῷ τὸ πρῶτον ἀπεδέδεικτο καὶ μάχας πολλὰς καὶ μεγάλας ἐνενικήκει.

"Besides doing this, Augustus granted money to the soldiers, not as to victors, though he himself had taken the title of Imperator and had also conferred it upon Tiberius, but because then for the first time they had Gaius taking part with them in their exercises. So he advanced Tiberius to the position of commander in place of Drusus, and besides distinguishing him with the title of Imperator, appointed him consul once more, and in accordance with the ancient practice caused him to post up a proclamation before entering upon the office. He also accorded him the distinction of a triumph; for he did not wish to celebrate one himself, though he accepted the privilege of having his birthday permanently commemorated by Circensian games. He enlarged the pomerium and changed the name of the month called Sextilis to August. The people generally wanted September to be so named, because he had been born in that month; but he preferred the other month in which he had first been elected consul and had won many great battles." After recounting the passing of Maecenas and digressing on his character and the nature of his association with Augustus, Dio does not fail to follow up on the actual celebration, which took place on January 1 of the next year, the very day Tiberius assumed his second consulship, his colleague being the noble Cn. Calpurnius Piso. "Tiberius on the first day of the year in which he was consul with Gnaeus Piso convened the Senate in the Curia Octaviae, because it was outside the pomerium. After assigning to himself the duty of repairing the temple of Concord, in order that he might inscribe upon it his own name and that of Drusus, he celebrated his triumph, and in company with his mother dedicated the precinct called the precinct of Livia. He gave a banquet to the Senate on the Capitol, and she gave one on her own account to the women somewhere or other. A little later, when there was some disturbance in the province of Germany, he took the field.
The festival held in honour of the return of Augustus was directed by Gaius, in place of Tiberius, with the assistance of Piso."178 A little less than twelve years following Cornelius Balbus' triumph of 27 March 19 BCE,179 Tiberius' award at long last broke the longest curule triumphal dearth since the Second Punic War.180 None of our sources, however, record a major battle in Germany for 8 BCE and Tiberius even had to return there "not long after his triumph" because of renewed unrest.181 Therefore, the timing and circumstances show Augustus' decision to honour Livia's eldest son with the nomen Imperatoris and a curule triumph over Germany, officially signaling its conquest was now considered complete,182 was clearly predominantly politically motivated, especially as he had personally vetoed (and refused) Germanic triumphs in 12 and 11 BCE. Though arguably also intended to give due lustre to his last military campaign and belated recognition of Tiberius' entire military track record since 20 BCE, the untimely successive deaths of Agrippa and Drusus doubtlessly

178 Dio 55.8.1-3. On (the passing of) Maecenas: see 55.7. As attested in ILS 95 (quoted on p. 180 supra), Tiberius nonetheless took the credit for presiding over the festival celebrating Augustus' return, indicating that Gaius Caesar had merely acted as his proxy. In line with traditional practice, Tiberius remained outside of the pomerium for the sake of preserving his auspicia militaria: see Hurlet 1997, 315 and, esp., Vervaet 2014

"The rewards for this season's work were out of all proportion to the practical results: salutations for both Augustus and Tiberius, and for Tiberius at last a triumph and a second consulship at thirty-four."

182 Though Eck 2018a, 13, seems to believe Tiberius' armies gained some decisive successes in Germany in 8 BCE (comp. 2018b, 133 f.), he conclusively argues (esp. 14-27; comp.
also 2018b, 133-137) that Augustus from now on considered transrhenian Germania a proper Roman province, with the oppidum Ubiorum as its administrative and religious capital. Compare also Wolters 2017, 36 and 71-74, who likewise argues that Tiberius' triumph of 1 January 7 marked the completion of the Roman conquest of transrhenian Germany, and that the corresponding extension of the pomerium and the emergence of the Ara Germaniae further suggest that the newly conquered territories were now considered prouincia Germania.

prompted Augustus to shore up the position and auctoritas of his only surviving stepson. As Gaius Caesar would not receive his first major military command before January 1 BCE,183 Tiberius was now by default the most formidable mainstay of the domus Augusta.184 Remarkably, he would not celebrate another curule triumph before 23 October 12 CE, roughly twenty years later, when he was readying to take over the reins from the elderly Augustus, since 26 June 4 CE his adoptive father.185 That Tiberius only ever celebrated two curule triumphs, the only such triumphs staged in Augustus' reign after March 19 BCE, and that these twice took place at politically expedient moments, powerfully underscores how Augustus had converted the curule triumph into an imperial monopoly, used sparingly to serve the political interests of his house rather than reflecting the military situation in the field.

Closing Observations: Tiberius, Drusus and Augustan Dynastic and Triumphal Policy

First and foremost, the conclusion that Tiberius and Drusus had already been invested with extraordinary proconsular commands well before Agrippa's premature passing in March 12 BCE significantly alters our understanding of the balance of power within the imperial family in this period.
Though Agrippa arguably ever remained the first of the specially empowered strongmen, a fortiori after Augustus' adoption of his sons by Iulia in June 17 BCE, he was closely flanked by Livia's ambitious and capable sons from 20 and 15 BCE successively, who likewise held special proconsulships as from those dates. Augustus clearly hedged his bets 183 Cf. supra p. 158. 184 In 55.9.1-6, Dio in his summary of 6 BCE recounts how Augustus was vexed at their insolent and luxurious lifestyles and lack of appetite to emulate his own conduct. Dio suggests that Augustus invested Tiberius with quinquennial tribunicia potestas as well as another important mission in Armenia precisely to bring his adopted sons to their senses, which reportedly slighted Gaius and Lucius and caused Tiberius to fear their resentment and retreat to Rhodes. The grant of tribunicia potestas especially put Tiberius on a par with Agrippa, the princes' biological father, and so created the prospect of an alternative successor, rendering Dio's "truest explanation" of Tiberius' self-imposed exile to Rhodes (ἡ μὲν οὖν ἀληθεστάτη αἰτία τῆς ἐκδημίας αὐτοῦ τοιαύτη ἐστί) perfectly credible. In § 7, Dio produces some further possible grounds for Tiberius' retirement to Rhodes. All speculation aside, the impression remains that Augustus ever hedged his bets and did not flinch from playing out his favourites against each other if need be. 
and Agrippa is consequently no longer to be considered as the sole proconsular guardian of the regime during his lifetime.186 On the one hand, the combined evidence on the early careers of Tiberius and Drusus confirms that Dio's testimony on Agrippa's third and final refusal of a decreed public triumph should not be rejected out of hand.187 In 54.24.7-8, Dio records the following response to Agrippa's successful repression of a tribal revolt in the Cimmerian Bosporus in the summer of 14 BCE:

καὶ ἐπ᾿ αὐτοῖς θυσίαι μὲν τῷ τοῦ Ἀγρίππου ὀνόματι ἐγένοντο, οὐ μέντοι καὶ τὰ ἐπινίκια καίτοι ψηφισθέντα αὐτῷ ἐπέμφθη· οὔτε γὰρ ἔγραψεν ἀρχὴν ἐς τὸ συνέδριον ὑπὲρ τῶν πραχθέντων οὐδέν, ἀφ᾿ οὗ δὴ καὶ οἱ μετὰ ταῦτα, νόμῳ τινὶ τῷ ἐκείνου τρόπῳ χρώμενοι, οὐδ᾿ αὐτοί τι τῷ κοινῷ ἔτ᾿ ἐπέστελλον, οὔτε τὴν πέμψιν τῶν νικητηρίων ἐδέξατο· καὶ διὰ τοῦτο οὐδ᾿ ἄλλῳ τινὶ ἔτι τῶν ὁμοίων αὐτῷ, ὥς γε καὶ ἐγὼ κρίνω, ποιῆσαι τοῦτο ἐδόθη, ἀλλὰ μόναις ταῖς ἐπινικίοις τιμαῖς ἐγαυροῦντο.

"For these successes, supplications were offered in the name of Agrippa, but the triumph which was voted him was not celebrated. Indeed, he did not so much as notify the Senate of what had been accomplished, and in consequence subsequent conquerors, treating his course as a precedent, also gave up the practice of sending reports to the public; and he would not accept the celebration of the triumph. For this reason, -at least, such is my opinion, -no one else of his peers was permitted to do so any longer, either, but they enjoyed merely the distinction of triumphal honours."

The above reappraisal of the early careers of the Claudians suggests that Agrippa's conduct in 14 BCE indeed made for a significant episode in Augustus' new triumphal policy and the wider history of the Roman triumph.188 That Agrippa's final recusatio triumphi indeed was a carefully orchestrated affair with a distinct

186 Contra, e.g., Dettenhofer 2000, 162-167, who suggests that Augustus held back the consular Tiberius even after the death of Agrippa, refusing to invest him with the sort of military authority granted to the latter until 10 BCE, when he at long last received 'imperium proconsulare'; and Seager 2005, 17 f. (comp. also p. 20): "Out of this expedient [i.e., his marriage to Iulia in 21 BCE] devised to neutralize Agrippa there developed the concept of the guardian or regent, whose task was to rule until the time was ripe for the power to be passed on to a direct descendant of Augustus. This was the position for which, after the death of Agrippa, Tiberius was chosen by Augustus, and it is only if the repeated pattern is studied from its inception that the role of Tiberius in the overall design can be fully understood." The view that Livia's sons only rose to real prominence and extraordinary proconsulships following the death of Agrippa perhaps finds its clearest expression in Hurlet 1997, 79 f. and 85 f. I do, however, unreservedly accept Hurlet's argument (op. cit., 81 f.) that Drusus was no second-tier figure vis-à-vis his older brother in terms of prestige and popularity with the people and the court, and that "au contraire […] ils suivaient une carrière parallèle, aussi bien avant la mort d'Agrippa qu'après." Comp. also 84: "L'intervalle [i.e., between Tiberius and Drusus' successive urban praetorships] étant naturellement justifié par leur différence d'âge, il n'était pas douteux aux yeux des Romains qu'Auguste faisait suivre à ses beaux-fils une carrière politique en parallèle. Le même s'observe pour leur carrière militaire." The results of the analysis here firmly substantiate Hurlet's sharp observations as well as those of Kuttner's 8th chapter (1995). On pp. 182-184, Kuttner produces a compelling explanation of why Augustus never adopted Marcellus, Agrippa, the preexile Tiberius and Drusus, amongst other things observing (p. 184) "that Tiberius and Drusus remained Augustus' stepsons in this period says nothing about their position in his dynastic plans. Not adopting them made them, in fact, more useful agents; acting for their stepfather while remaining Claudii, they added the prestige of the patrician Claudii to the supremacy of the Julian house […]. Plutarch (Ant. 87) was quoted above on Augustus' reliance on Tiberius and Drusus next after Agrippa: in 13-9 B.C. the Ara Pacis is structured in such a way as to delineate this hierarchy visually, for on the south frieze Augustus capite velato is echoed by Agrippa similarly posed, after whom the next male portraits are the consul Tiberius and the imperator Drusus; Gaius (and probably his little brother) is on the other side of the altar (fig. 71)". The very fact that Tiberius features as consul on the altar whilst Drusus wore the paludamentum further suggests he was present at the ceremony as proconsul in 13 BCE.

187 On his first and second refusals of 37 and 19 BCE, see Dio 48.49.3-4 (37 BCE; comp. App. B.C. 5.92) and 54.11.6 (19 BCE; comp. 12.1-2). I intend to discuss the altogether different circumstances of these earlier refusals as well as Agrippa's motivation in a wider study on Augustus and the public triumph.

188 Some scholars believe Agrippa's second refusal of 19 BCE set the decisive precedent for all other proconsuls to follow: e.g. Kuttner 1995, 190 ("Agrippa opened this new phase when in 19 B.C. he ostentatiously refused the triumph voted him by the Senate in favor of Augustus, whom he acknowledged as supreme commander"; comp. 191, where she emphasizes "the importance of Agrippa's ceremony of refusal") and Hurlet 2001, 174 f. as well as 2006, 171 f. Others, however, give equal weight to both his second and third refusals: e.g., Eck 1984, 139, who observes that "Agrippa's conduct set the tone. He declined a triumph in 19 BC and 14 BC, although possessed of independent imperium" (comp.
also Eck 1998, 58 f.: "Wenn ein Mann wie Agrippa, der an Machtfülle und Prestige dem Princeps so nahe kam wie niemand sonst, vor diesem zurücktrat und ihn dadurch als alleinige Quelle aller römischen Sieghaftigkeit erscheinen ließ, wer konnte dann noch Anspruch auf einen Triumph erheben?" Eck here also points to Tiberius and Drusus being denied triumphs voted by the Senate but implicitly suggests that, unlike Agrippa, they lacked independent imperium); Itgenshorst 2008, 39 f., who, whilst likewise ignoring Agrippa's first recusatio of 37 BCE, considers the refusals of 19 and 14 as equally important in terms of precedent value: "Die beiden Zurückweisungen des Triumphes durch den angesehenen Feldherrn und engen Vertrauten des Princeps hatten offensichtlich Vorbildcharakter; entscheidend war hierbei, daß Agrippa (im Gegensatz zu den Legaten des Augustus) in beiden Fällen ein eigenständiges Imperium besessen hatte, was -zumindestens nach republikanischen Maßstäben -eine zentrale Voraussetzung für die Gewährung eines Triumphes darstellte." (comp. also id. 2017, 67, where she casts doubt on Agrippa's first recusatio of 37, attested in Dio 48.49.4); and Levick 2010, 93. Boyce 1942, 139 f., by contrast, suggests it was Augustus' own refusal of a Parthian triumph in 19 that set the decisive precedent. Dalla Rosa 2015, 464 and 481, however, considers the recusationes triumphi of both Augustus (Dio 54.10.4) and Agrippa of 19 BCE (Dio 54.11.6) as the combined decisive precedent: "Le refus, précisément en 19 av. J.-C., des deux plus importants hommes politiques et chefs militaires de célébrer le triomphe pour leurs succès en Orient et en Occident allait peser lourdement sur les futures chances des proconsuls de se voir attribuer ce même honneur." political purpose also follows from the remarkable oddity that neither he nor any of the Roman citizen forces at his disposal had been involved in the actual suppression of the revolt.
According to Dio, news of an unwelcome (Roman) usurper to the Bosporan throne had prompted Agrippa to send king Polemo I of Pontus, who managed to defeat the rebels in battle but nonetheless failed to force them into surrender. The rebellious Bosporans only dropped their opposition to Polemo upon learning that Agrippa himself had arrived at Sinope to prepare for a further expedition against them. That Augustus was directly managing the Bosporan succession crisis and its immediate aftermath can be inferred from Dio's note that Polemo subsequently took possession of the Bosporan throne by marrying queen Dynamis, who had married the slain usurper Scribonius following the decease of her husband Asander, as per Augustus' wishes: τοῦ Αὐγούστου δῆλον ὅτι ταῦτα δικαιώσαντος.189 At all events, Agrippa's example of 14 BCE was not lost on the 'regular' proconsuls: in Tib. 32.1, we are told by Suetonius that Tiberius rebuked some proconsuls still commanding military forces (i.e. predominantly the proconsul of Africa) for not writing their reports to the Senate and even referring to him the award of the more prestigious dona militaria.190 On the other hand, however, this reappraisal also significantly qualifies Dio's picture in that this watershed in the history of the triumph and Rome's senatorial aristocracy was not achieved by virtue of a single recusatio triumphi on the part of the most powerful man in the Empire next to Augustus himself.191 The first demonstrable step on the road towards a new triumphal paradigm was Augustus' notable decision not to share the imperatorial salutation occasioned by Tiberius' successful military diplomacy in Armenia in 20 BCE. Regardless of his status as proconsul -a legally and socially privileged one, for that matter -he had to content himself with a share in the ensuing supplications only. As argued above,

189 Dio 54.24.4-7: "naturally not without the sanction of Augustus".
That Augustus always retained full control over his extraordinary proconsuls can also be gleaned from Dio 55.10a.8. Here we are told that the ailing Gaius Caesar late in 3 CE (comp. Hurlet 1997, 560) begged Augustus for permission to retire to private life and convalesce in Syria, following which Augustus communicated his wish to the Senate, effecting a decree that released him of his prouincia and authorized him to return to Italy to do there as he saw fit.

190 Corripuit consulares exercitibus praepositos, quod non de rebus gestis senatui scriberent quodque de tribuendis quibusdam militaribus donis ad se referrent, quasi non omnium tribuendorum ipsi ius haberent. It follows that I do not accept Rich's argument in 1990, 202 that "Agrippa followed his earlier practice in not seeking or accepting a triumph […] -he could hardly have accepted one now for a success won without fighting. Dio's claim that 'others like him' did not triumph is incorrect".

191 For Dio's tendency to compress historical events, see Rich -Williams 1999, 194-199 and 212 f., with due qualification in Vervaet 2010b, 138 f.

however, the short decade from 15 to 7 BCE represents the period par excellence for the methodical and conspicuous implementation of Augustus' new triumphal policy. During these years especially, Augustus orchestrated a coherent set of measures to bring about the most decisive rupture in the history of the triumphal ritual.192 The triumphal honours awarded to Tiberius and/or Drusus in 11 and 8 are, however, just as important as the preceding refusals or denials of curule triumphs and such triumphal honours as imperatorial salutations since they demonstrate that the exclusion of the vast majority of the senatorial aristocracy from one of their foremost privileges and Augustus' consistent refusal to celebrate any further triumphs was not tantamount to the ritual's complete termination.
By allowing the Senate to decree ovations to both Tiberius and Drusus in 11 and subsequently, at long last, the nomen Imperatoris as well as a curule triumph to Tiberius in 9 and 8 respectively whilst consistently upholding his own policy of refusing triumphs, Augustus made it clear that the public triumph was henceforth to be the exclusive if scarce privilege of the domus Augusta. These insights further substantiate Kuttner's wider suggestion that, "the new Augustan theology of triumph, then, is very closely tied to the firm establishment of his dynasty. The sons, stepsons, and sons-in-law of his house may be only his agents, but they form the group singled out to serve as such agents. As long as he rules, celebration of their capability is carefully subordinated to the proclamation of his own preeminence, but care is also taken to give them scope to build the talents and reputation that enable them to act convincingly for him in his lifetime and to perpetuate his system after his inevitable death. This is a succession policy, in other words, formulated by a shrewd ruler who planned as if he could live indefinitely and at the same time as if he could die tomorrow -the latter event being one that, with Augustus' early propensity to serious illness, was always a real threat."193 In light of these considerations, it should also come as no surprise that the period here considered also saw the emphatic and strategic introduction of the ornamenta triumphalia.194 Officially awarded by decree of the Senate, probably invariably on the motion of Augustus and/or another qualified member of the domus Augusta,195 this distinction was for all intents and purposes designed as a prestigious Ersatz for the triumph proper.196 In 12, and then again in 11, both Tiberius and Drusus twice received the award as Augustus interfered with the Senate's decree ratifying their nomina Imperatoris and curule triumphs.197 Their status as special proconsuls as well as Augustus' stepsons would have immediately enhanced the prestige of the ornamenta. Significantly, the year 11 also saw this distinction awarded to none less than the noble L. Calpurnius Piso 'Pontifex' (ord. 15 BCE), who as legatus Augusti pro praetore managed to defeat the Bessi and a number of other Thracian tribes across three years of brutal warfare.198 Piso's award is on record in both Tacitus (Ann. 6.10.3: decus triumphale in Thraecia meruerat) and Dio 54.34.7, who adds the interesting detail that Piso was honoured with both supplicationes and the ornamenta triumphalia on account of his victories: καὶ αὐτῷ διὰ ταῦτα καὶ ἱερομηνίαι καὶ τιμαὶ ἐπινίκιοι ἐδόθησαν -"for these successes he was granted supplications and triumphal honours"199. These supplicationes were doubtlessly decreed to Augustus in the first place in his capacity of supreme commander and holder of the imperium auspiciumque under which Piso had conquered, possibly along with (yet) another declined triumph. 
192 It is one of the foremost merits of Itgenshorst 2005 to have demonstrated that Augustus' establishment of an imperial triumphal monopoly amounts to a watershed of paramount importance in Roman political history. 193 Kuttner 1995, 192. 194 Boyce 1942 believes that the practice of conferring ornamenta triumphalia should be traced to Augustus' honours of 19 BCE. Rich 1998, 120 n. 157 objects that "the recipients of ornamenta triumphalia enjoyed the same entitlement to triumphal dress as those who had celebrated triumphs, chiefly the right to wear the laurel crown at the games. Octavian had been granted this right in 40 BC (Dio 48.16.1). The origins of ornamenta triumphalia are to be found in the grants made to Tiberius, Drusus and L. Calpurnius Piso in 12-11 BCE". Whilst accepting that 
Dio's representation, however, strongly suggests that they were also voted in honour of Piso, regardless of the fact that he as legatus Augusti pro praetore lacked independent imperium and the corresponding auspices. The very fact that the ornamenta were thus for the first time ever awarded to a commander who did not normally qualify for a public triumph (be it curule or minor) as well as this novel, more liberal, usage of supplicationes may well account for Dio's express mention, in keeping with his amply documented interest in triumphal honours and constitutional innovation. 200 In all likelihood, Augustus' decision to have one of his noblest and most trusted legati share in the honour of his supplicationes accounts for another conspicuous sweetener for the senatorial aristocracy to accept his blatantly autocratic new triumphal policy. 201 In the context of a discussion of how Augustus promoted his stepsons in the immediate aftermath of Agrippa's untimely decease, allegedly culminating in their promotions to proconsul in 10 BCE, Syme cleverly discerns two further Augustan ploys to avoid antagonizing "the high aristocracy, the decus ac robur of the renovated Republic". First, he observes how "a resplendent collection is on show, the coevals of the Claudii, consuls in the decade 16-7: a Scipio, two Fabii, two Pisones, and so on […]. Furthermore, a new distinction emerges in these years, pleasing to some at least of the nobiles. Cities in Asia and Africa now put on their coins the names and images of proconsuls.202 The first to acquire the honour is Paullus Fabius Maximus (cos. 
11), proconsul of Asia by special appointment in 10/9."203 Since imperatorial salutations by the Late Republic often represented the necessary first step on the road to a curule triumph,204 it should neither surprise that Tiberius and Drusus in 12 and 11 BCE were twice denied their imperatorial salutations.205 That Augustus would indeed not hesitate to cast his veto against (or otherwise neutralize) motions awarding honours he deemed unwelcome or excessive is on record in Suet. Tib. 17.2, where we are told that he prevented the Senate from awarding Tiberius with a special honorific cognomen on account of his hard-won victory in Pannonia in 9 CE (Pannonicus, Invictus or Pius).206 Although Dio has 202 See Grant 1946, 139 f., 224, 228-233 and 387 f. 203 Syme 1979, 314 f. On p. 315, however, Syme needlessly downplays this development: "Too much should not be made of this phenomenon -and it was in fact sporadic. No coins commemorate the proconsuls L. Piso and Iullus Antonius." On the procedure of extra sortem appointments (i.e., by decree of the Senate on the motion of Augustus) of proconsuls and the case of Paullus Fabius Maximus: see Hurlet 2006, 82 f. and 89 f. For a fine study of gubernatorial portraits on Roman provincial coinage, see Erkelenz 2002. 204 Supplicationes were so frequently the forerunner of a triumph (comp. Livy 26.21.3-4) that Cato thought it necessary to remind Cicero that it was not invariably so (Fam. 15.5.2, end of April 50). 205 Contra Kehne 2002, 311, who suggests that Augustus denied Drusus the nomen Imperatoris in 11 BCE as a rebuke of his alleged and costly recklessness as a field commander (with reference to Dio 54.33.3 and 55.1.2 and Vell. 2.97.4). Apart from the fact that such would have been incongruent with the other honours conferred upon Drusus on account of his military exploits of that year, Kehne's speculative suggestion also fails to explain Tiberius' identical treatment by the Senate and the princeps: cf. supra pp. 148-152. 
Velleius' repeated claims that Tiberius' army never suffered any significant losses during his campaigns in Germany (2.97.4; 107.3 and 120.2) should probably not be taken as veiled criticism of a man Tiberius always held in the highest regard -to quote just one example of Tiberius' profound love of his younger sibling: in Tib. 7.3, we are told that he conveyed Drusus' body from Germany to Rome "going before it on foot all the way". It more likely concerns subtle criticism of Germanicus Caesar, whose campaigns had been deemed costly by Tiberius in 16 CE: Tac. Ann. 2.26 and Suet. Tib. 52.2. 206 Contra Levick 2010, 93, who suggests that the triumphs offered in 12 to Tiberius and Drusus by the Senate were merely "refused on their behalf by Augustus". roughly one year after the untimely passing of Drusus, would Tiberius be granted the honour of a curule triumph ex Germania, for predominantly political reasons. Since Dio is adamant in 53.21.6 that "nothing was done [in the Senate] that did not please Caesar"210, it should, furthermore, not be doubted that the consuls moving the vetoed motions on behalf of Augustus' stepsons in 12 and 11 were really acting in accordance with his wishes.211 The paramount importance of the Claudian princes as mouthpieces and instruments of Augustus' will in key matters of state involving the senatorial order is directly attested in the -equally sensitive -issue of Augustan fiscal policy. In 13 CE, shortly after the fifth and final decennial renewal of his provincial command, the elderly emperor faced widespread and tenacious opposition against the (conditional) five percent tax on inheritances and bequests.212 On the one hand, he asked the Senate to suggest any other viable sources of revenue. According to Dio, he did this with the intention that they would ratify the existing tax for want of any better method, without bringing any censure upon him. 
On the other, he also ordered both Germanicus and Drusus Caesar not to make any statement in the matter, "for fear that if they expressed an opinion it should be suspected that this had been done at his command, and the Senate would therefore 210 οὐ μέντοι καὶ ἐπράττετό τι ὅ μὴ καὶ ἐκεῖνον ἤρεσκε. 211 Comp. also Dio 53.21.4-5, where we are told that Augustus established a new advisory body in 27, taking as "advisers for periods of six months the consuls (or the other consul when he himself also held the office), one of each of the other magistracies, and fifteen men chosen by lot from the remainder of the senatorial body, with the result that all legislation proposed by the emperors is usually communicated after a fashion through this body to all the other senators; for although he brought certain matters before the whole Senate, yet he generally followed this plan considering it better to take under preliminary advisement most matters and the most important ones in consultation with a few; and sometimes he even sat with these men in the trial of cases"; a measure distinct from the institution of the so-called Consilium Principis in 13 CE (on which Dio 56.28.1-3). It is, therefore, unlikely that the consuls of 12 and 11 had moved to ratify Tiberius and Drusus' salutations and award them with curule triumphs without the connivance of Augustus, who could then use these sententiae as springboards for his own alternative proposals. Contra Havener 2016, 337, who suggests that the Senate moved "eigenständig", either in a proactive attempt to please Augustus or to 'test' his triumphal policy. At least three of the consuls of 12 and 11 BCE were zealous supporters and close confidants of Augustus, viz. P. Sulpicius Quirinius, L. Volusius Saturninus (who both served in 12) and Paullus Fabius Maximus (who served as consul ordinarius throughout 11 BCE, alongside consul prior Q. Aelius Tubero). The fact that P. Sulpicius Quirinius declined a cognomen ex prouincia (viz. 
Marmaricus: Flor. 2.31), suggests he was well aware of the key tenets of Augustus' new triumphal policy. In 1 BCE, Augustus would assign this seasoned military man to the entourage of C. Caesar as one of his rectores in the tricky task of restoring Roman influence in Armenia: Ann. 3.48.1. 212 Reintroduced by Augustus in 6 CE: Dio 55.25.5.
Embedding partial Latin squares in Latin squares with many mutually orthogonal mates We show that any partial Latin square of order $n$ can be embedded in a Latin square of order at most $16n^2$ which has at least $2n$ mutually orthogonal mates. We also show that for any $t\geq 2$, a pair of orthogonal partial Latin squares of order $n$ can be embedded into a set of $t$ mutually orthogonal Latin squares (MOLS) of order polynomial in $n$. Furthermore, the constructions that we provide show that MOLS($n^2$)$\geq$MOLS($n$)+2, consequently we give a set of $9$ MOLS($576$). The maximum known size of a set of MOLS($576$) was previously given as $8$ in the literature. Introduction In 1960 Evans [4] showed that it was possible to embed any partial Latin square of order n in some Latin square of order t, for every t ≥ 2n, where 2n is a tight bound. In the same paper Evans raised the question of embedding orthogonal partial Latin squares in sets of mutually orthogonal Latin squares. The importance and relevance of this question is demonstrated by the prevalence and application of orthogonal Latin squares to other areas of mathematics (see [2]). For instance, the existence of a set of n − 1 mutually orthogonal Latin squares of order n is equivalent to the existence of a projective plane of order n (see [10] for a relevant construction). Thus results on the embedding of orthogonal partial Latin squares provide information on the embedding of sets of partial lines in finite geometries. In addition, early embedding results for partial Steiner triple systems utilised embeddings of partial idempotent Latin squares (see for example [8]). It has also been suggested that embeddings of block designs with block size 4 and embeddings of Kirkman triple systems may make use of embeddings of pairs of orthogonal partial Latin squares (see [6]). 
In 1976 Lindner [9] showed that a pair of orthogonal partial Latin squares can always be finitely embedded in a pair of orthogonal Latin squares. However, there was no known method for obtaining an embedding of polynomial order (with respect to the order of the partial arrays). In [6], Hilton et al. formulate some necessary conditions for a pair of orthogonal partial Latin squares to be embedded in a pair of orthogonal Latin squares. Then in [7] Jenkins developed a construction for embedding a single partial Latin square of order n in a Latin square of order 4n^2 for which there exists an orthogonal mate. In 2014, Donovan and Yazıcı developed a construction that verified that a pair of orthogonal partial Latin squares, of order n, can be embedded in a pair of orthogonal Latin squares of order at most 16n^4. This paper seeks to extend these results, providing new constructions that show that a partial Latin square, of order n, can be embedded in a Latin square, of order at most 16n^2, with many mutually orthogonal mates. Further, we develop a second construction for embedding a pair of orthogonal partial Latin squares of order n in sets of mutually orthogonal Latin squares of any size where the Latin squares have polynomial order with respect to n. Also, as a corollary, the construction can be used to increase the best known lower bound for the largest set of MOLS(576). In the literature the existence of 8 MOLS(576) is established. However, we construct 9 MOLS(576). We preface the discussion of our main result with some necessary definitions. Definitions Let N = {α_1, α_2, . . . , α_n} represent a set of n distinct elements. A non-empty subset P of N × N × N is said to be a partial Latin square (PLS(n)), of order n, if for all (x_1, x_2, x_3), (y_1, y_2, y_3) ∈ P and for all distinct i, j, k ∈ {1, 2, 3}, x_i = y_i and x_j = y_j implies x_k = y_k. We say that P is indexed by N. 
We may think of P as an n × n array where symbol e ∈ N occurs in cell (r, c), whenever (r, c, e) ∈ P, and we will write e = P(r, c). We say that cell (r, c) is empty in P if, for all e ∈ N, (r, c, e) ∉ P. The volume of P is |P|. If |P| = n^2, then we say that P is a Latin square (LS(n)), of order n. If for all 1 ≤ i ≤ n, (α_i, α_i, α_i) ∈ P, then P is said to be idempotent. The set of elements {(x_1, x_2, x_3) ∈ P | x_1 = x_2} forms the main diagonal of P. Two partial Latin squares P and Q, of the same order n, are said to be orthogonal, denoted OPLS(n), if they have the same non-empty cells and for all r_1, c_1, r_2, c_2, x, y ∈ N, {(r_1, c_1, x), (r_2, c_2, x)} ⊆ P and {(r_1, c_1, y), (r_2, c_2, y)} ⊆ Q implies (r_1, c_1) = (r_2, c_2). This definition extends in the obvious way to a pair of orthogonal Latin squares of order n. A set of t Latin squares, of order n, which are pairwise orthogonal are said to be a set of t mutually orthogonal Latin squares, denoted MOLS(n). A set T ⊆ A, where A is a Latin square of order n, is said to be a transversal, if • |T| = n, and • for all distinct (r_1, c_1, x_1), (r_2, c_2, x_2) ∈ T, r_1 ≠ r_2, c_1 ≠ c_2 and x_1 ≠ x_2. Note that a Latin square has an orthogonal mate if and only if it can be partitioned into disjoint transversals. We say that a partial Latin square P on the set N can be embedded in a Latin square L on the set M if there exist one-to-one mappings A pair of orthogonal partial Latin squares (P_1, P_2) is said to be embedded in a pair of orthogonal Latin squares (L_1, L_2) if P_1 is embedded in L_1 and P_2 is embedded in L_2. A set of orthogonal partial Latin squares (P_1, P_2, . . . , P_n) is embedded in a set of mutually orthogonal Latin squares (L_1, L_2, . . . , L_n) if each P_i is embedded in L_i. This paper will make extensive use of Evans' embedding result, which is stated as: Theorem 2.2. A partial Latin square of order n can be embedded in a Latin square of order t, for any t ≥ 2n. The following is a similar embedding result for partial idempotent Latin squares. 
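The definitions above translate directly into code. The following minimal sketch is our own illustration (the order-3 squares L_k(i, j) = (k·i + j) mod 3 are standard examples, not taken from the paper): it checks orthogonality by counting superimposed ordered pairs, and verifies the three transversal conditions.

```python
def latin(k, n):
    # Cyclic Latin square L_k(i, j) = (k*i + j) mod n; this is a Latin
    # square whenever gcd(k, n) = 1.
    return [[(k * i + j) % n for j in range(n)] for i in range(n)]

def orthogonal(P, Q):
    # Two Latin squares of order n are orthogonal iff superimposing them
    # yields n^2 distinct ordered pairs of symbols.
    n = len(P)
    return len({(P[r][c], Q[r][c]) for r in range(n) for c in range(n)}) == n * n

L1, L2 = latin(1, 3), latin(2, 3)
print(orthogonal(L1, L2))  # True: a pair of MOLS(3)

# The main diagonal of L1 is a transversal: one cell per row, with rows,
# columns and symbols pairwise distinct.
T = [(i, i, L1[i][i]) for i in range(3)]
rows, cols, syms = zip(*T)
print(len(set(rows)) == len(set(cols)) == len(set(syms)) == 3)  # True
```

Note how the pair-counting form of the check is equivalent to the set-theoretic definition above: two distinct cells agreeing in both squares would collapse two superimposed pairs into one.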
Theorem 2.3 ([1]). A partial idempotent Latin square of order n can be embedded in an idempotent Latin square of order t, for any t ≥ 2n + 1. It is also worth noting the following well known result which is the culmination of results from a series of papers by many authors, see for example [5]. Embedding a PLS in a set of MOLS We begin by assuming that there exists a set of t MOLS(n) and show that any Latin square L, of order n, can be embedded in a Latin square B, of order n^2, with the additional property that B has t mutually orthogonal mates. This result will then allow us to show that any PLS(s) where s ≤ n/2 can be embedded in a Latin square B of order n^2 such that B has t mutually orthogonal mates. Thus this result, and the associated construction, allows us to generalise Jenkins' result which is stated as: ([7]) Let L be a Latin square of order n with n ≥ 3 and n ≠ 6. Then L can be embedded in a Latin square of order n^2 which has an orthogonal mate. Proof. For completeness we begin by showing these arrays are Latin squares, then that X_k, 1 ≤ k ≤ t, are mutually orthogonal and finally that for each k, X_k and B are orthogonal. Finally assume that for some k ∈ {1, . . . , t}, X_k and B are not orthogonal. Thus there exist distinct cells ((p, r), (q, c)) and ((p′, r′), (q′, c′)) such that Since F_k is a Latin square, Equation (7) substituted into Equation (6) gives c = c′. Then Equation (8) gives F_1(p, r) = F_1(p′, r′) and when substituted into Equation (5) gives q = q′. Returning to Equation (7) we get p = p′ and consequently r = r′. So ((p, r), (q, c)) = ((p′, r′), (q′, c′)), a contradiction. Hence for all 1 ≤ k ≤ t, X_k is orthogonal to B, and the result follows. Corollary 3.3. Let P be a partial Latin square of order n, n ≥ 3. Then P can be embedded in a Latin square B of order at most 16n^2, where B has at least 2n mutually orthogonal mates. Furthermore if P is idempotent then B can be constructed to be idempotent. Proof. 
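The standing hypothesis "there exists a set of t MOLS(n)" can be supplied concretely, when n is prime, by the classical finite-field construction L_k(i, j) = (k·i + j) mod n for k = 1, …, n−1. This is our own illustrative sketch of that standard fact; the paper itself only invokes the existence of such sets.

```python
def mols_prime(p):
    # For prime p, the squares L_k(i, j) = (k*i + j) mod p, k = 1..p-1,
    # form a set of p - 1 mutually orthogonal Latin squares of order p:
    # for k1 != k2 the map (i, j) -> (k1*i + j, k2*i + j) is invertible mod p.
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

def orthogonal(P, Q):
    n = len(P)
    return len({(P[r][c], Q[r][c]) for r in range(n) for c in range(n)}) == n * n

squares = mols_prime(7)
print(len(squares))  # 6, i.e. 7 - 1 squares
print(all(orthogonal(squares[a], squares[b])
          for a in range(6) for b in range(a + 1, 6)))  # True
```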
We will first embed P in a Latin square L of order m where 2^k = m > 2n ≥ 2^{k−1}, which is always possible given Evans' result, Theorem 2.2. We can also assume that L is Observe that since F_1(0, r) = r, the construction places a copy of P in the sub-array defined by p = 0 and q = 0 and so P has been embedded in B which has been shown to have m − 1 mutually orthogonal mates. As 2^k = m > 2n ≥ 2^{k−1} we have 2^{k+1} > 4n ≥ 2^k = m, so 16n^2 ≥ m^2. Hence every partial Latin square of order n embeds in a Latin square of order at most 16n^2 for which there exist at least 2n mutually orthogonal mates. Now, one can make sure B is idempotent if P is idempotent. When embedding P, ensure that L is idempotent, which can be guaranteed by Theorem 2.3 because m ≥ 2n + 1. Note that F_1 is in standard form and is decomposable into transversals. So there exists a transversal of F_1 involving the element (0, 0, 0). Without loss of generality one can assume that this transversal is on the main diagonal of F_1. So the values F_1(p, p) are pairwise distinct, and the cells ((p, r), (p, r)) and ((p′, r), (p′, r)) of B contain elements with different first coordinates. The second coordinate in cell ((p, r), (p, r)) of B is L(F_1(p, r), r). So for each fixed p, these second coordinates form a row-permuted copy of L. Now consider the subsquare S_p of B formed by the cells ((p, r), (p, r′)) for 0 ≤ r, r′ ≤ m−1. The entries in S_p all have the same first coordinate F_1(p, p), and the second coordinates form a row-permuted copy of L. Since L is idempotent, L has a transversal and by permuting the rows {(p, 0), (p, 1), . . . , (p, m − 1)} of B we can arrange for this transversal of S_p to lie on the main diagonal of B. This can be done independently for each p = 0, 1, . . . , m − 1, and the result is a transversal of B on its main diagonal. By suitable renaming of the elements of B we can then arrange for B to be idempotent. 
In the case p = 0, the original entry in the cell ((0, r), (0, r′)) of B is (0, L(r, r′)), so no permuting of the rows of S_0 or renaming of elements (0, x) is required (strictly speaking we apply the identity permutation and the identity renaming here). Hence B retains a copy of L in the subsquare S_0. Finally, to complete the proof, we apply the same permutation of the rows and renaming of elements to each X_k as were applied to B. Note that one can increase the number of mutually orthogonal Latin squares that are orthogonal to B as much as one likes by increasing the order of the embedding Latin square L to guarantee the existence of a larger number of mutually orthogonal Latin squares of the same order as L. Proof. We know by [2] (Section III-3-4) and [11] that if n ≥ 7 and n ≠ 10, 18 or 22, there exist four mutually orthogonal Latin squares of order n. Use these Latin squares to form B, X_1, X_2, X_3 and X_4. A bachelor Latin square is a Latin square which has no orthogonal mate; equivalently, it is a Latin square with no decomposition into disjoint transversals. A confirmed bachelor Latin square is a Latin square that contains an entry through which no transversal passes. Wanless and Webb [12] have established the existence of confirmed bachelor Latin squares for all possible orders n, n ∉ {1, 3}. So it is interesting to note that the above results (including Jenkins' result) established that when one essentially "squares" a bachelor, it is possible to find an orthogonal mate. Embedding a pair of OPLS in a set of MOLS In this section we make use of the embedding result of Donovan and Yazıcı, [3], to show that a pair of orthogonal partial Latin squares can be embedded in a pair of orthogonal Latin squares which have many orthogonal mates. Theorem 4.1 ([3]). Let P and Q be a pair of orthogonal partial Latin squares of order n. 
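The bachelor property is easy to exhibit by brute force. The Cayley table of Z_4 is a standard example (our own choice, not discussed in the paper) of a Latin square with no transversal at all, and hence with no orthogonal mate:

```python
from itertools import permutations

n = 4
L = [[(i + j) % n for j in range(n)] for i in range(n)]  # Cayley table of Z_4

# A transversal picks one cell per row so that the columns and symbols are
# all distinct, i.e. a column permutation c with the n symbols L[r][c[r]]
# pairwise distinct.
has_transversal = any(len({L[r][c[r]] for r in range(n)}) == n
                      for c in permutations(range(n)))
print(has_transversal)  # False: L is a bachelor square
```

The obstruction behind this small search is the classical parity argument: for the cyclic group of even order, the symbols of any putative transversal would have to take two incompatible values mod n.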
Then P and Q can be embedded in orthogonal Latin squares of order k^4 and any order greater than or equal to 3k^4 where 2^a = k ≥ 2n > 2^{a−1} for some integer a. ] be a pair of orthogonal Latin squares of order n. Let C_1 = [C_1(i, j)], . . . , C_t = [C_t(i, j)] be t mutually orthogonal Latin squares of order n. Then the squares , r), (q, c), (A_1(p, q), B_1(r, c)))}, is a bijection, form a set of t + 2 mutually orthogonal Latin squares of order n^2. Proof. The arrays B_1 and B_2 may be obtained by taking direct products, so it is clear that they are orthogonal Latin squares. But C_α is orthogonal to C_γ and so Equations (13) and (15) imply p = p′ and B_1(r, c) = B_1(r′, c′). Further C_β is orthogonal to C_δ and so Equations (14) and (16) imply q = q′ and B_2(r, c) = B_2(r′, c′). Finally B_1 and B_2 are orthogonal and so r = r′ and c = c′. But this contradicts the assumption that the cells ((p, r), (q, c)) and ((p′, r′), (q′, c′)) are distinct. Hence X_{α,β} and X_{γ,δ} are orthogonal. Finally we prove that B_1 and X_{α,β} are orthogonal. Assume this is not the case and that there exist distinct cells ((p, r), (q, c)) and ((p′, r′), (q′, c′)) such that (C_α(p, B_1(r, c)), C_β(q, B_2(r, c))) = (C_α(p′, B_1(r′, c′)), C_β(q′, B_2(r′, c′))). Then Since C_α is a Latin square substituting Equation (18) into Equation (19) implies p = p′. Now since A_1 is a Latin square Equation (17) implies q = q′. Then, since C_β is a Latin square, Equation (20) implies B_2(r, c) = B_2(r′, c′). But B_1 and B_2 are orthogonal so Equation (18) then gives r = r′ and c = c′. Consequently B_1 and X_{α,β} are orthogonal. Similarly we can show B_2 and X_{α,β} are orthogonal. Proof. Let A_1 and A_2 be two orthogonal partial Latin squares of order n. By [3] we can embed them into two orthogonal Latin squares A_1 and A_2 of order k^4 where 2^a = k ≥ 2n > 2^{a−1}. As k is a power of a prime, there are at least k^4 − 1 MOLS(k^4). 
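The direct-product step invoked at the start of the proof ("the arrays B_1 and B_2 may be obtained by taking direct products") can be sketched as follows; the function names and the order-3 input squares are our own illustrative choices, not the paper's notation:

```python
def direct_product(A, B):
    # (A x B)((p, r), (q, c)) = (A(p, q), B(r, c)): a Latin square of order
    # len(A) * len(B) whose symbols are pairs of symbols.
    m, n = len(A), len(B)
    idx = [(p, r) for p in range(m) for r in range(n)]
    return [[(A[p][q], B[r][c]) for (q, c) in idx] for (p, r) in idx]

def orthogonal(X, Y):
    N = len(X)
    return len({(X[i][j], Y[i][j]) for i in range(N) for j in range(N)}) == N * N

A1 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
A2 = [[(2 * i + j) % 3 for j in range(3)] for i in range(3)]

# Direct products of an orthogonal pair are again orthogonal, giving a
# pair of orthogonal Latin squares of order 9 from a pair of order 3.
P1, P2 = direct_product(A1, A1), direct_product(A2, A2)
print(len(P1), orthogonal(P1, P2))  # 9 True
```

The orthogonality of the products reduces coordinatewise to the orthogonality of the factors, which is exactly how the pair (B_1, B_2) enters the proof above.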
So there are at least (k^4 − 1 + 2) MOLS(k^8), two of which contain the copies of A_1 and A_2. Similarly by choosing the order of A_1 and A_2 larger, one can obtain as many orthogonal mates as one wants at the expense of increasing the order of the squares into which the partial Latin squares are embedded. Obviously Theorem 4.2 can also be used to construct mutually orthogonal Latin squares of order n^2 for a given integer n. For example, in the literature only 8 mutually orthogonal Latin squares of order 576 were known to exist, but the following corollary constructs 9 MOLS(576). Proof. By [2] Table 3.87 there are at least 7 mutually orthogonal Latin squares of order 24. When applied in the construction given in Theorem 4.2, we may obtain 7 + 2 = 9 mutually orthogonal Latin squares of order 24^2 = 576.
Ablowitz-Ladik system with discrete potential. I. Extended resolvent Ablowitz-Ladik linear system with range of potential equal to {0,1} is considered. The extended resolvent operator of this system is constructed and the singularities of this operator are analyzed in detail. Introduction Our aim in this article is to study the spectral theory of the matrix operator L(w),

L_{m,n}(w) = \delta_{m,n-1} - \begin{pmatrix} w & r_n \\ s_n & 1/w \end{pmatrix} \delta_{m,n},   (1.1)

m, n ∈ Z, w ∈ C, every element of which is a 2 × 2 matrix, δ_{m,n} is the Kronecker symbol and we omitted a 2 × 2 unit matrix factor in the term δ_{m,n−1}. Our attention is concentrated on the case where values of both potentials, r_n and s_n, are equal to 0 and 1:

r_n, s_n ∈ {0, 1}, n ∈ Z.   (1.2)

Moreover, we consider here the case of potentials with finite support, i.e., for every given potential there exist finite k and K, k ≤ K, k, K ∈ Z -lower and upper borders of the support- such that

r_n = s_n = 0, n ≤ k − 1, n ≥ K + 1.   (1.3)

The corresponding linear problem,

L(w)χ = 0,   (1.4)

is the Ablowitz-Ladik problem [1,2] which is known to be a discretized version of the Zakharov-Shabat linear problem. And like the latter the Ablowitz-Ladik problem is associated to a variety of differential-difference integrable equations, such as the discrete mKdV equation, difference KdV, Toda chain, etc., [3]. Problem (1.4) describes also discrete systems with nonanalytic dispersion relations [4]. The Ablowitz-Ladik problem is also known [5,6] to be associated to difference-difference nonlinear equations, that are related to some class of cellular automata, i.e., dynamical systems in a discrete space-time with values belonging to some finite field, say, F_2. Cellular automata attract great interest in the literature because of the wide range of their applications in different sciences, from physics to biology, from chemistry to social sciences. Detailed references for these applications can be found in [7][8][9][10][11]. 
These automata are also subject to intensive mathematical study, see for example [12][13][14][15][16][17][18][19][20][21][22]. It is just this kind of application of problem (1.4) that motivated our specific choice of condition (1.2) on the potential. The problem of the investigation of (1.4) by means of the inverse scattering transform, as it was performed in [3], becomes obvious if we write down this equation explicitly:

\chi_{n+1} = \begin{pmatrix} w & r_n \\ s_n & 1/w \end{pmatrix} \chi_n.   (1.5)

In the standard approach to the study of the spectral problems, the main objects of the theory-the Jost solutions-are determined by their asymptotics at n → +∞ and n → −∞. A solution given by its asymptotics at n → −∞ is swept from the left by (1.5). But in order to construct the Jost solution given by its asymptotics at n → +∞, one has to invert the matrix in the r.h.s. of (1.5). The determinant of this matrix is equal to 1 − r_n s_n, so in the standard approach the condition r_n s_n ≠ 1 must be fulfilled. In the case where the potential satisfies (1.2) this means that for every n either r_n or s_n must be equal to zero [6]. Such a condition drastically restricts the class of potentials of the type (1.2), so our aim in this and forthcoming publications is to elaborate an extension of the inverse scattering transform method to the case where both r_n and s_n can be equal to 1. Let us also emphasize that, imposing condition (1.2) on the potentials, we do not use here the condition r_n, s_n ∈ F_2. As was speculated in [23] the problem of the integrability of the cellular automata or, more precisely, the problem of the existence of the Lax representations must be solved in terms of exact equalities, and not in terms of equalities on some finite field. The fact that some matrix(-matrix) operator L is analogous to a differential one is reflected in the property that the matrix elements L_{m,n} are different from zero only for uniformly bounded values of |m − n|. In the case of (1.1) we have 1 ≥ n − m ≥ 0. 
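The asymmetry between sweeping from the left and inverting the recursion can be made concrete in a few lines. This is an illustrative sketch only: the recursion below is the one-step form implied by (1.1), and the sample potentials and the initial vector are our own choices, not taken from the text.

```python
import numpy as np

def u(w, r, s):
    # The 2x2 matrix [[w, r_n], [s_n, 1/w]] from (1.1); its determinant
    # is 1 - r_n * s_n.
    return np.array([[w, r], [s, 1.0 / w]])

w = 2.0
r = [0, 1, 1, 0]   # sample potentials with values in {0, 1}
s = [0, 1, 0, 0]

chi = np.array([1.0, 0.0])          # illustrative initial data
for n in range(len(r)):
    chi = u(w, r[n], s[n]) @ chi    # sweeping from the left: always defined

# Inverting a step (sweeping from the right) needs det u_n = 1 - r_n s_n
# to be nonzero, which fails exactly at sites with r_n = s_n = 1.
dets = [1 - r[n] * s[n] for n in range(len(r))]
print(dets)  # [1, 0, 1, 1] -> the step at n = 1 cannot be inverted
```

This is precisely the obstruction that motivates the regularization and the extended-resolvent machinery developed below.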
Consequently, we can apply the resolvent approach [24], [25] to the investigation of the Ablowitz-Ladik problem. The preliminary results of our investigation were published in [26]. The resolvent approach is based on the following extension of the operator L(w):

L_{m,n}(w, h) = h^{n−m} L_{m,n}(w),   (1.6)

where h is a real non-negative parameter. In particular for the operator (1.1) we have

L_{m,n}(w, h) = h \delta_{m,n-1} - u_n(w) \delta_{m,n},   (1.7)

where we introduced

u_n(w) = \begin{pmatrix} w & r_n \\ s_n & 1/w \end{pmatrix} \equiv w^{\sigma} + \begin{pmatrix} 0 & r_n \\ s_n & 0 \end{pmatrix}, n ∈ Z,   (1.8)

and σ is the Pauli matrix σ_3. If we have some infinite matrix-matrix operator A_{m,n}(h) depending on a parameter h we can associate to it the Laurent series In what follows we consider matrices A_{m,n}(h) such that the series (1.9) are convergent in the sense of Schwartz distributions in ζ, ζ′ (|ζ| = |ζ′| = 1) and h (h ≥ 0). The elements A_{m,n}(h) are reconstructed by means of the formula (1.10) In order to explain the meaning of the extension (1.6) let us introduce the function (distribution) where ζ, z ∈ C, |ζ| = 1; by (1.6) and (1.9) (1.12) Then the above mentioned similarity of matrix and differential operators means that A(ζ, z) depends on z and z^{−1} polynomially. Let us mention that if we have two objects of this kind, A and B, their product (composition) is defined as follows: where the left hand sides of these equations are related through (1.9)-(1.12). The main object of our investigation is the inverse M(w) of the operator L(w) extended by (1.6), In matrix notation the first equality thanks to (1.7) has the form In order to define this inversion in a unique way we introduce Definition 1. A solution M(w) of (1.16) is called the extended resolvent of the operator L(w) if M(w, ζ, ζ′, h) is a Schwartz distribution with respect to ζ and ζ′ and a sectionally continuous function of h, h ≥ 0. Let us first consider the case of zero potential, i.e., r_n ≡ s_n ≡ 0. 
Then the resolvent, which we denote by M_0(w), obeys the following equation It is convenient to rewrite this equation using representation (1.12): where we introduced the δ-function on |ζ| = 1, for an arbitrary test function f(ζ) on the contour. Then where we introduced the matrices (1.25). Here we have to make some comments. First, by (1.6), all expressions h^{m−n} L_{m,n}(w, h) are independent of h and equal to L_{m,n}(w), see (1.1). On the contrary, h^{m−n} M_{0,m,n}(w, h) essentially depends on h, and it is just this dependence that guarantees that M_0(w, ζ, ζ′, h) exists as a distribution in ζ, ζ′. Second, any solution of the homogeneous equation

Extended resolvent of the regularized operator The specific problem connected with equation (1.17) is, as was mentioned above in the discussion of Eq. (1.4), that if r_n = s_n = 1 the matrix u_n(w) is not invertible. Thus, first of all we have to introduce some regularization of u_n(w), say, This substitution regularizes only the singular u_n (i.e., those with det u_n = 0), leaving all other u_n untouched. Indeed, by (1.2) det u_n equals either 0 or 1. Then Thus we start with the regularized operator where we introduced the diagonal operator Correspondingly, we denote the extended resolvent of the regularized operator as M(w, λ), that by means of (2.3) can be written in the form Properties of M(w, λ) in the limit λ → 0 are studied in the next section. For simplicity let us write i.e., we omit for a while the dependencies on w, λ, and h. Then Eq. (2.6) takes the form where the dependence of u_m on w and λ is also omitted. It is easy to check that for any m ≥ m′ we have from (2.9) i.e., u is a diagonal matrix independent of the regularization parameter λ. Let us consider first m ≤ k. Then by (2.13) we can rewrite (2.10) in the form We see that both sides of this equality are independent either of m or of m′; we denote them as F_n and thus we get (2.14) Now we choose in (2.10) m′ = k and substitute M_{k,n} in the r.h.s.
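Since (2.1) itself is not reproduced in this extraction, the sketch below uses a hypothetical stand-in regularization, u_n → u_n + λI applied only at the degenerate sites. It reproduces the two properties stated in the text: non-degenerate u_n are left untouched, and the regularized determinant vanishes linearly as λ → 0 (the source of the pole at λ = 0 discussed in the next section):

```python
from fractions import Fraction

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def regularize(u_mat, lam):
    # hypothetical stand-in for (2.1): perturb only the degenerate u_n
    if det(u_mat) != 0:          # det u_n = 1: leave untouched
        return u_mat
    (a, b), (c, d) = u_mat
    return ((a + lam, b), (c, d + lam))

w, lam = Fraction(2), Fraction(1, 100)
u_sing = ((w, 1), (1, 1 / w))            # r_n = s_n = 1, so det = 0
u_reg = regularize(u_sing, lam)
# the regularized determinant is O(lambda), hence invertible for lambda > 0
assert det(u_reg) == lam * (w + 1 / w + lam)
```

Any regularization with these two properties would serve the same illustrative purpose; the exact form used in the paper is given by the elided equation (2.1).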
using (2.14), then where m ≥ k. Thus the second term also obeys the condition m ≥ n + 1, so that taking (1.3) into account we can write M_{m,n} = θ(m ≥ n + 1) Eqs. (2.14) and (2.15) give the general solution of (1.17) for any F_n. In order to fix it, we use the two conditions formulated above. First of all it is necessary to guarantee the convergence, for any n, of the series Σ_m ζ^{−m} M_{m,n} = Σ_m (hζ)^{−m} M_{m,n}, where Eq. (2.8) was used. Let us consider first the sum from −∞ to k. Using (2.14) we see that the sum of the first terms is finite due to the θ-function. The sum of the second terms in (2.14) is equal (up to a constant factor) to Σ_{m=−∞}^{k} u^m (hζ)^{−m} F_n. Thanks to (2.13) this sum converges iff the first (second) row of the matrix F_n is equal to zero when |w| < h (1/|w| < h, correspondingly). Thus the condition of convergence of this series can be written as Then M_{m,n} is constructed explicitly using Eqs. (2.14) and (2.15). Let us introduce the (infinite) matrix column In what follows we also use X_m(w, λ, h) = h^{−m} x_m(w, λ), Y_n(w, λ, h) = h^{n} y_n(w, λ). (2.29) In the region (2.20) where Eq. (2.16) was used and by (2.28) Analogously in the region (2.21) we have that where we introduced the matrix Γ, independent of m and n, (2.40) In other words, x_m and y_n are solutions of the equation (1.5) regularized by (2.1) and of its dual. By means of (2.25) we can also write these equations as We see that formally X and Y are right and left annihilators of the operator L. The existence of these annihilators does not contradict (2.5), i.e., the existence of the inversion of L, as both series Σ_m ζ^{−m} X_m(w, λ, h) and Σ_n ζ^{n} Y_n(w, λ, h) are divergent, so X_m and Y_n do not belong to the space mentioned in the discussion of Eq. (1.9) and in Definition 1.
The use of such quantities can be avoided if, say, in the region |w| > h, 1/|w| > h we use instead of (2.37) the equality

Extended resolvent of the original operator In order to get the resolvent of the extended original operator (1.7) we need to consider the behavior of (2.37) in the limit λ → 0. The existence of this limit depends on the regions (2.19)-(2.22). Indeed, the only origin of a singularity in (2.37) is the matrix Γ, as follows from (2.23) and (2.24). Its limits in the first three regions of (2.34) exist by (2.26). Let This expression is finite and nonzero for generic w. Zeroes of a_{1,1}(w) and a_{2,2}(w), if they exist in the corresponding regions, give bound states of the operator L(w) and will be studied in the following publication. In the region (2.22) a^{−1}(w, λ) has a pole at λ = 0, as follows from the last line of (2.34). To describe the multiplicity of this pole we introduce q(m, n), m ≤ n, the number of degenerate matrices u_l(w) on the interval [m, n], i.e., q(m, n) = Σ_{l=m}^{n} (1 − det u_l(w)), (3.2) which is independent of w, as det u_l(w) equals either 0 or 1. Let also Then by (2.1) and (2.26) we have that where we introduce the matrices which are the inverses of u_n(w) in the case where det u_n = 1 (cf. (1.8)). From (3.4) it follows that a^{−1} has a pole of order Q at λ = 0 and we can write The residues are equal to where by definition for any m ≤ n. By (2.23), (2.25), and (2.24), X_m(w, λ) and Y_n(w, λ) are polynomials in λ, so that we have the Laurent expansion (3.10). Now the resolvent M(w, h) of the original (extended) L-operator (1.7) can be defined as |w| > h and 1/|w| > h. (3.11) Let us consider the region |w| > h, 1/|w| > h in detail. Inserting the expression (3.9) into Eqs. (2.6) and (2.7) and using (3.11) we derive that Thus we see that in this region the equations (3.12) defining the resolvent are modified in comparison with the standard Eqs. (1.16). In [26] it was shown that a solution of (1.16) does not exist in this region.
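For the binary potential (1.2), det u_l(w) = 1 − r_l s_l is either 0 or 1, so the count (3.2) reduces to counting the sites with r_l = s_l = 1. A small sketch (the potential below is made up, and the definition of Q in terms of q is elided in this extraction; per the text, Q fixes the order of the pole of a^{−1}(w, λ) at λ = 0):

```python
# Made-up binary potential on sites 0..3, cf. condition (1.2).
r = {0: 1, 1: 0, 2: 1, 3: 1}
s = {0: 1, 1: 1, 2: 0, 3: 1}

def det_u(l):
    # det u_l(w) = 1 - r_l * s_l, independent of w
    return 1 - r[l] * s[l]

def q(m, n):
    # number of degenerate u_l on the interval [m, n], as in (3.2)
    return sum(1 - det_u(l) for l in range(m, n + 1))

print(q(0, 3))   # sites 0 and 3 are the degenerate ones here
```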
In order to study the properties of the residues M^{(j)} we can use the Hilbert identity that follows from (2.5). Taking into account (2.6) and (2.7) we can rewrite it in the form Substituting M(w, λ′) as in Eq. (3.9), we get in the limit λ′ → 0 that where j = 1, . . . , Q, and we put by definition Now we insert the expansion (3.11) in the Eqs. where j, l = 1, . . . , Q, k ≥ 0, j − k ≥ 1, l + k ≤ Q + 1. If j and l are such that j − k ≥ 1 and l + k = Q + 1, i.e., l + j ≥ Q + 2, then the r.h.s. of (3.28) is equal to zero thanks to (3.19). On the other hand, if l + j ≤ Q + 1 we can choose in (3.28) k = j − 1, and then by (3.25) This concludes the construction of the resolvent M_{m,n}(w, h) of the extended L-operator (1.7). As we have seen, this resolvent is discontinuous at |w| = h and |w| = 1/h. In a forthcoming paper we show that the study of this discontinuity leads us to a modification of the Jost solutions and spectral data corresponding to the case of the discrete potential (1.2).
Association between Lipids, Apolipoproteins and Telomere Length: A Mendelian Randomization Study (1) Background: The relationship between lipids, apolipoproteins, and telomere length (TL) has been explored in previous studies; however, the causal relationship between the two remains unclear. This study aims to assess the causal relationship between lipids, apolipoproteins, and TL using the two-sample Mendelian randomization (MR) approach; (2) Methods: This study comprehensively employed both univariate MR (uvMR) and multivariate MR (mvMR) methods to genetically evaluate the associations between 21 exposures related to lipids and apolipoproteins and the outcome of TL. During the analysis process, we utilized various statistical methods, including Inverse Variance Weighting (IVW), Weighted Median, MR-Egger regression, MR-PRESSO, and outlier tests. Furthermore, to confirm the robustness of the results, we conducted several sensitivity analyses to explore potential heterogeneity; (3) Results: The uvMR analysis indicated that an increase in MUFA, MUFA/FA ratio, LDL-C, VLDL-C, total cholesterol, ApoB, and triglycerides (TG) was associated with an increase in TL. However, this relationship did not manifest in the mvMR analysis, suggesting that this association may be based on preliminary evidence; (4) Conclusions: MR analysis results suggest potential suggestive positive causal relationships between genetically predicted MUFA, MUFA/FA ratio, LDL-C, VLDL-C, total cholesterol, ApoB, and TG with TL. 
Introduction Telomeres are specialized nucleoprotein structures closely associated with age-related diseases, making them a potential biological marker of aging. Human telomeres consist of highly conserved G-rich hexanucleotide repeat sequences (TTAGGG) at the ends of eukaryotic chromosomes, together with associated proteins, forming a protective cap at the chromosome ends called the T-loop. This structure keeps the chromosome ends from activating DNA damage pathways or undergoing chromosome fusion events. However, because the cell replication machinery cannot fully replicate the chromosome ends, 50-100 base pairs are lost with each cell division, resulting in the gradual erosion of telomeres. Consequently, as cells age, TL diminishes.

Telomeres are essential for maintaining genomic stability at the ends of chromosomes. As cells divide and DNA replicates, telomeres gradually shorten due to the "end replication problem." To some extent, TL is heritable and influenced by factors such as gender, race, and paternal age. Factors negatively correlated with TL include prenatal [1] and childhood [2] stress, chronic stress in adult life [3], as well as conditions like depression [4], smoking [5], obesity [6], and alcohol consumption [7], all of which accelerate telomere shortening. Telomere shortening can be prevented by dietary restriction [8] and increased intake of dietary antioxidants [9]. Dietary intake is a significant determinant of cellular TL.
TL is primarily regulated by telomerase. Telomerase is a ribonucleoprotein that can compensate for the telomeric loss incurred during cellular division; it is a complex consisting of a catalytic reverse transcriptase subunit (TERT) and an RNA component (TERC). At the same time, it is also subject to modulation by specific proteins, such as WRAP53. In a cross-sectional analysis, researchers examined the relationship between lipoprotein subfractions, TL, and the expression of TERT and WRAP53 in 54 prediabetic individuals from the EPIRDEM study. The findings revealed a positive correlation between smaller-sized high-density lipoprotein (HDL) particles and shorter telomeres, along with lower TERT and WRAP53 expression levels. Conversely, larger-sized HDL particles were positively associated with longer TL, although unrelated to TERT. Hence, the study concluded that a correlation exists between the lipoprotein profile and TL, as well as the expression of TERT and WRAP53 [10].

Prior metabolomics research has indicated that lipid metabolism plays a pivotal role in regulating TL. Various metabolites derived from fatty acids, such as glycerophosphocholine, glycerophosphoethanolamine, lysophospholipids, glycerides, and phosphatidylcholine, are closely associated with TL [11]. Additionally, lipoproteins, particularly HDL-C, along with total cholesterol and TG, have been consistently found to be linked to TL in multiple studies [12,13]. These findings underscore the intricate relationship between lipid metabolism and TL, providing a robust foundation for further exploration in this field.
A study of 174 healthy adults revealed a positive correlation between TL and polyunsaturated fatty acids (PUFAs), including linoleic acid [14]. Another analysis of data from 11,775 individuals across six independent population cohorts found positive associations with TL for the ratio of total cholesterol in small VLDL to total lipids, the ratio of n-6 fatty acids to total fatty acids, and the ratio of 18:2 linoleic acid to total fatty acids [15]. However, these relationships have not been conclusively confirmed.

Within the PUFA category, the n-3 and n-6 PUFAs are the two major families closely related to human health [16,17]. In the n-3 PUFA family, alpha-linolenic acid (ALA) is considered an essential fatty acid. In healthy young males, approximately 8% of dietary ALA is converted to eicosapentaenoic acid (EPA), and up to 4% is converted to docosahexaenoic acid (DHA). Long-chain fatty acids from marine sources, such as EPA and DHA, have shown significant benefits in maintaining balance and preventing disease, and have received extensive research attention. It is important to note that n-3 PUFAs serve not only as an energy source but also as major biological factors in normal growth, development, and disease regulation.
Animal studies suggest that feeding rats diets rich in n-3 PUFAs can slow down telomere attrition and extend telomeres [18]. Another study indicated that supplementing with n-3 PUFAs could improve liver TL in the offspring of mothers with gestational diabetes [19]. However, there is still controversy regarding the benefits of n-3 PUFAs in humans. One study found a significant association between a higher n-6/n-3 PUFAs ratio and shorter TL; although this association was related to increased n-3 PUFAs, it appears that n-3 PUFAs have no actual impact on telomeres [20]. Moreover, epidemiological studies have shown no significant association between the combined levels of ALA, EPA, and DHA in red blood cells and leukocyte TL [21].

Hence, it is crucial to gain deeper insights into the causal relationships between lipids, apolipoproteins, and TL. MR, as an emerging epidemiological method, assesses causal relationships between exposures and outcomes by utilizing genetic variants as instrumental variables. MR's advantage lies in greatly reducing interference from confounding variables between exposure and outcome [22,23]. To investigate the potential causal relationship between PUFAs and TL, we conducted MR analyses using summary-level data from genome-wide association studies (GWAS) on two samples and validated the findings using other datasets.
Study Design Our study is based on three fundamental assumptions, similar to most MR analyses. These assumptions are as follows: (1) There is a strong correlation between genetic variants and exposures; (2) Genetic variants are unrelated to the exposure-outcome association and are not influenced by confounding factors; (3) Genetic variants exert their effects solely through the association between exposure and outcome [24]. Figure 1 provides an overview of our study design. We utilized de-identified data openly available from participant studies, which have received ethical committee approval for human experimentation. This study did not require separate ethical approval. The reporting of the study follows the requirements of the STROBE-MR guidelines.

(5) Glycerolipid categories involving TG (Figure 2). We employed two distinct MR analysis methods. First, we conducted univariate MR analyses for each of the 21 exposure factors individually to investigate their independent effects on the outcome. Subsequently, we constructed four models and applied the mvMR analysis method to examine the relationships between the exposure factors.

Lipid and Apolipoprotein Data Sources and Instrumental Variables Regarding the lipid and apolipoprotein data used in this study, GWAS summary data were sourced from the UK Biobank for all exposures except myo-inositol, which was obtained from the human blood metabolites analyzed by Shin et al. in 2014 [25], covering 7803 samples, as detailed in the supplementary table. We first selected genome-wide significant single nucleotide polymorphisms (SNPs) (p < 5 × 10^−8) from the GWAS and excluded SNPs in linkage disequilibrium (R^2 < 0.001). Additionally, we utilized the PhenoScanner V2 database, which provides comprehensive genotype and phenotype association information, to assess and exclude SNPs related to other phenotypes, including potential confounders and intermediate variables. When considering the concordance of PUFA with the outcome, we also removed SNPs with palindromic or incompatible alleles. The strength of each Instrumental Variable (IV) was assessed using the F-statistic, with an F-statistic below 10 considered a weak IV [26].

Outcome Data Genetic variants associated with TL were extracted from the largest GWAS dataset to date, obtained from the UK Biobank [27] (Table S1). This dataset represents a massive cohort study analyzing 20,134,421 SNPs and includes 472,174 individuals aged between 40 and 69 years. The outcome dataset has undergone statistical adjustment for age, removing the influence of age on TL. Among the participants, 45.8% were male and 54.2% were female, a balanced gender ratio. The dataset's racial composition is primarily European Caucasian, with 94.3% being white, 1.9% Asian, 1.5% Black, 0.3% Chinese, 0.6% mixed race, and 0.9% other ethnicities. DNA was extracted from peripheral blood leukocytes in the UK Biobank cohort, and TL was measured as the T/S ratio using quantitative polymerase chain reaction methods.

Univariate MR Analysis We employed various methods for testing, including IVW, weighted median, MR-Egger regression, and MR-PRESSO. The IVW method served as the primary statistical approach to estimate potential causal relationships between lipids, apolipoproteins, and TL. To assess the significance of heterogeneity at the multivariable level, we first used Cochran's Q-test to evaluate heterogeneity, with Cochran's Q yielding a p-value less than 0.05 indicating IV heterogeneity. Subsequently, we further assessed the absence of horizontal pleiotropy using MR-Egger intercept tests and exclusion analysis, with a p-value greater than 0.05 suggesting no horizontal pleiotropy. Additionally, we employed the MR-PRESSO method to detect and correct for horizontal pleiotropy outliers in all reported results from the multivariable summary-level MR tests [28]. After excluding outlier SNPs, we conducted robust MR calculations. Finally, funnel plots were used to assess potential directional pleiotropy, and leave-one-out analysis evaluated whether the association was driven by individual SNPs [29].

Multivariate MR Analysis To account for pleiotropy across lipid traits, we conducted mvMR analysis, constructing four models primarily using the multivariable IVW method. Model 1 included n-3 PUFAs and n-6 PUFAs, as these fatty acids share a relationship in structure and function, and both belong to the PUFA category. Model 2 included LDL-C, ApoB, and TG, as ApoB forms particles when encapsulating LDL-C and TG [30,31]. Model 3 included HDL-C and ApoA1, as ApoA-I is the major structural and functional protein of HDL, constituting 60% of total protein. Model 4 included FA, MUFA, SFA, and PUFA. Similar to the univariate MR, we employed Cochran's Q-test and MR-Egger intercept tests to detect heterogeneity and pleiotropy.

All MR analyses were conducted in the R software (version 4.3.0) using the R package "TwoSampleMR" (version 0.5.6). The forest plot in Figure 3 was created using R software.

Instrumental Variables IVs used in this study for uvMR are detailed in Table S2. We also conducted a strength analysis of these instrumental variables, and the results indicated that all SNPs had F-statistics greater than 10 (Table S2), suggesting strong predictive power of our instrumental variables.
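The F-statistic screen and the primary IVW estimate described above can be sketched in a few lines. This is plain Python for illustration, not the TwoSampleMR code actually used in the study, and all effect sizes below are made-up numbers:

```python
import math

def f_statistic(beta_x, se_x):
    # per-SNP instrument strength of the SNP-exposure association;
    # F < 10 flags a weak instrument
    return [(b / s) ** 2 for b, s in zip(beta_x, se_x)]

def ivw(beta_x, beta_y, se_y):
    # fixed-effect IVW: the Wald ratio beta_y/beta_x of each SNP is
    # weighted by the inverse variance of its SNP-outcome effect
    # (first-order weights)
    weights = [bx ** 2 / sy ** 2 for bx, sy in zip(beta_x, se_y)]
    ratios = [by / bx for bx, by in zip(beta_x, beta_y)]
    est = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

beta_x, se_x = [0.10, 0.20], [0.01, 0.02]   # SNP-exposure effects (made up)
beta_y, se_y = [0.05, 0.10], [0.01, 0.02]   # SNP-outcome effects (made up)
print(f_statistic(beta_x, se_x))            # both instruments are strong here
print(ivw(beta_x, beta_y, se_y))
```

With real data the same estimate is obtained from a weighted regression of beta_y on beta_x through the origin, which is how IVW is usually implemented.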
Fatty Acids For univariate analysis of the 12 categories of fatty acids, we conducted IVW analysis. The results showed a significant correlation between MUFA and TL (p = 0.016, Figure 3). Specifically, for each increase of one standard deviation in MUFA, the odds ratio (OR) for TL was 1.017 (95% CI: 1.003-1.032). However, after Bonferroni correction, this result did not reach statistical significance, suggesting a suggestive causal relationship. Furthermore, the ratio of MUFA to total fatty acids also showed a significant correlation with TL (p = 0.041, Figure 3), while total fatty acids were not significantly correlated with TL (p = 0.129, Figure 3). Therefore, this relationship with TL may be primarily driven by the MUFA variable. Even after Bonferroni correction, the p-value for the MUFA/FA ratio did not reach statistical significance, indicating only a suggestive causal relationship. In contrast, n-3 PUFAs, n-6 PUFAs, and other variables showed no significant correlations with TL (Figure 3).

Lipoproteins and Cholesterol In investigating the associations between three lipoproteins and cholesterol and TL, we used the IVW analysis method. The results demonstrated a significant correlation between LDL-C (p = 7.178 × 10^−12, Figure 3) and TL. Additionally, VLDL-C (p = 2.444 × 10^−3, Figure 3) showed a significant correlation with TL. Moreover, the association between total cholesterol and TL also reached statistical significance (p = 8.088 × 10^−8, Figure 3).

Apolipoproteins, Phospholipids, and Glycerolipids Through the application of the IVW analysis method, we found that ApoB (p = 5.609 × 10^−12, Figure 3) and TG (p = 1.006 × 10^−5, Figure 3) were significantly associated with TL. Specifically, for ApoB, each one-standard-deviation increase was associated with an OR of 1.040 (95% CI: 1.029-1.052) for TL. Each increase of one standard deviation in TG was associated with an OR of 1.029 (95% CI: 1.016-1.042) for TL.
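The per-SD odds ratios and intervals above are exp-transformed log-scale estimates, so the conversion is a one-liner. In the sketch below, beta and se are illustrative values back-solved to roughly reproduce the MUFA row (OR 1.017, 95% CI 1.003-1.032), not numbers taken from the study's data:

```python
import math

def or_with_ci(beta, se, z=1.959964):
    """OR with 95% CI from a log-scale estimate and its standard error."""
    return tuple(round(math.exp(beta + k * z * se), 3) for k in (0, -1, 1))

# illustrative log-OR and SE, chosen to match the reported MUFA row
print(or_with_ci(0.01686, 0.00728))
```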
Multivariate MR Analysis Results In the multivariate MR analysis of Model 1, which assessed the association between n-3 PUFAs and n-6 PUFAs and TL, we used the multivariable IVW method. The results indicated a positive correlation between n-6 PUFAs and TL (OR: 1.037, 95% CI: 1.015-1.059, p = 0.001, Table S5). However, the mvMR-Egger intercept results suggested the presence of pleiotropy (p = 0.020), and in conjunction with the univariate MR results for fatty acids, there was no significant causal relationship between PUFAs and TL. Therefore, we consider the causal relationship between n-6 PUFAs and TL to be potentially unstable. No significant correlation was observed between n-3 PUFAs and TL (p = 0.128).

In Model 2, the results showed that LDL-C, VLDL-C, ApoB, TG, and total cholesterol were not significantly correlated with TL (Table S5). For the other exposures, no significant correlations were found either. The results for Models 3 and 4 also indicated no significant correlation between HDL-C and ApoA1 and TL.

Discussion Our study utilized comprehensive MR methods, including uvMR and mvMR, to analyze the causal effects between 21 lipids, apolipoproteins, and TL. By combining the results of univariate and multivariate MR, we found that MUFA, the MUFA/FA ratio, LDL-C, VLDL-C, total cholesterol, ApoB, and TG may have suggestive positive causal relationships with TL. According to our MR findings, we can deduce discrepancies in the causal associations between this study's five major categories of exposure factors and TL. Phospholipids and glycerides do not manifest a statistically significant causal link with TL. However, fatty acids, apolipoproteins, lipoproteins, and cholesterol exhibit a multifaceted pattern of causality with TL rather than a uniform causal model. Hence, our study concludes that categorizing fatty acids alone is insufficient for establishing their causal relationship with TL.
To date, despite earlier research into the relationship between lipids, apolipoproteins, and TL, epidemiological and clinical studies have not reached conclusive findings, and relevant research remains relatively limited. Regarding the mechanisms of action between lipids and TL, despite discussions, the specific mechanisms are far from clear. Possible mechanisms involve multiple aspects. Firstly, lipid and fatty acid metabolism may affect TL through oxidative stress. Oxidative stress is considered one of the factors contributing to aging [32][33][34], yet it may also slow telomere attrition [35,36]. Previous studies have suggested that the accumulation of fat in the body may be related to oxidative stress [37], and oxidative stress plays a role in the development of age-related diseases such as metabolic syndrome [37,38]. Metabolic syndrome, in turn, is associated with oxidative damage to DNA and lipid levels as well as telomere shortening [38]. Additionally, lipid metabolism is closely related to the inflammatory process, and inflammation is one factor affecting TL in the bloodstream [39][40][41]. It is worth noting that there exists a complex interplay between oxidative stress and inflammation [33,39].

The relationship between n-3 PUFAs and TL has been contradictory in epidemiological and clinical studies. As an essential dietary component, n-3 PUFAs, due to their unique biochemical properties, may influence telomere biology. The study by Farzaneh-Far et al.
[42] has laid a critical foundation for our understanding of the impact of n-3 PUFAs on TL. They conducted a prospective study involving 608 patients with stable coronary artery disease. The study results demonstrated a negative correlation between baseline blood levels of n-3 PUFAs (including DHA and EPA) and the rate of telomere shortening over five years (OR, 0.68; 95% CI, 0.47-0.98). In another cross-sectional study measuring leukocyte TL in 2284 women through survey questionnaires, it was found that while total fat intake was unrelated to TL, the intake of PUFAs, particularly linoleic acid, was negatively correlated with TL [43]. However, another study, a randomized controlled trial involving 344 participants, found no significant correlation between dietary PUFAs and TL [44]. Furthermore, additional research findings suggest a positive correlation between n-3 PUFA levels and TL. Chang et al. [20] conducted a case-control study of patients with coronary artery disease. They used linear regression analysis to assess the relationship between plasma PUFAs and genetic variations. The study concluded that a higher n-6/n-3 PUFAs ratio in plasma and lower levels of EPA and DHA were positively associated with shorter TL in the Chinese population. In a study encompassing 46 obese children aged 3-4 years [45], researchers measured leukocyte telomere length and employed gas chromatography to determine the levels of six fatty acids in red blood cells, including SFAs, n-3 PUFAs, n-6 PUFAs, arachidonic acid (AA), and DHA. Their study results indicated that a reduction in DHA content and an increase in the AA/DHA ratio may be associated with telomere shortening. Furthermore, in another randomized double-blind controlled trial [46], researchers recruited 85 participants aged 25 to 75 with chronic kidney function impairment and categorized them into different groups based on their dietary habits. The study found that, among
patients with chronic kidney function impairment, those who supplemented with n-3 PUFAs exhibited an increase in neutrophil TL compared to other groups.

Several factors may contribute to the differences in these study results. Firstly, there is a wide variety of methods for measuring fatty acid levels, including gas chromatography, mass spectrometry, and food frequency questionnaires, and different measurement methods can lead to differences in results. Additionally, some studies have small sample sizes or lack long-term dietary information, making it difficult to adequately reflect changes in n-3 PUFA levels in study participants over many years, which is crucial for telomere biology research. Our study found no significant correlation between n-3 PUFAs and TL, consistent with related research conclusions.

In our study, no causal relationship was found between n-6 PUFAs and TL. One study's findings indicated that the intake of total n-6 PUFAs, which includes 98.9% linoleic acid (LA), was unrelated to TL [47]. Additionally, research has shown that LA intake does not correlate with any inflammatory markers [48]. Finally, few studies have demonstrated adverse effects of n-6 PUFAs or their association with the risk of chronic diseases [49]. These studies may contribute to the explanation of our conclusions. It is worth noting that this study suggests a potential association between n-6 PUFAs and TL; however, our multivariable MR results indicate no significant correlation between PUFA, PUFA/FA, and TL, and there is a pleiotropy issue in the multi-instrument MR results for n-6 PUFAs. Therefore, collectively, this study suggests that both n-3 PUFAs and n-6 PUFAs may not have significant correlations with TL.
Research on the correlation between lipoproteins and TL is limited and has yielded inconsistent conclusions. One study involving 4944 participants, with TL measured by real-time quantitative polymerase chain reaction, found an association between TL and LDL-C levels in the oldest age group; however, no correlation was observed with TG or HDL-C [50]. In contrast, another study based on the NHANES database showed no association between TL and LDL-C or TG but demonstrated a positive correlation with HDL-C when TL was less than 1.25 [51]. Notably, Nawrot et al.'s research indicated a link between higher levels of oxidized low-density lipoprotein and shorter leukocyte telomeres [52]. Furthermore, a study involving 82 healthy subjects found a negative correlation between TL and TG and a positive correlation with HDL-C [53]. Aulinas et al. [54] investigated the relationship between TL and adipokine balance in 154 patients with Cushing's syndrome; their study showed a negative correlation between total cholesterol, TG, and TL. In another study examining the relationship between TL and inflammation in 83 elderly women aged 65 to 74, HDL-C, LDL-C, and TG were found to be unrelated to TL [55]. Similarly, Katarina et al. found no association between TL and cholesterol, serum LDL-C, or serum TG [56]. Thus, the relationship between TG and TL still lacks a definitive conclusion. Our study suggests that there may be a potential positive causal relationship between TG and TL, although further research is needed to substantiate this evidence. Lee et al.
[57] conducted a cross-sectional study involving 309 participants aged between 8 and 80 years, measuring average TL and examining BMI and cardiovascular risk factors; their study concluded that TL and ApoB were negatively correlated. Research on the relationship between ApoB and TL is relatively limited; however, our results suggest that there may be a potential positive causal relationship between ApoB and TL. This conclusion provides intriguing avenues for future research to further elucidate the mechanisms and biological significance of this association.

In summary, consensus on the relationship between lipids and TL has not yet been reached. However, our study suggests potential positive causal relationships between LDL-C, VLDL-C, total cholesterol, ApoB, TG, and TL. The erosion of telomeres is closely associated with the aging process and an increased risk of age-related diseases. Consequently, there is widespread interest in the potential of lipid supplementation to slow telomere attrition. This study further substantiates the beneficial role of lipids and lipoproteins in TL. This finding could heighten clinical attention to lipids and contribute to the prevention of aging and associated diseases. Patients at risk may potentially benefit from preventive lipid supplementation.
Our study has several strengths. Firstly, we used MR to estimate causal relationships, effectively mitigating the impact of confounding bias compared to traditional observational studies and addressing the challenge of establishing causality in previous cross-sectional research. Secondly, we employed comprehensive GWAS data for MR analysis to enhance the accuracy of our estimates; genetic information is determined before any confounding clinical factors come into play, reducing their influence. Thirdly, to ensure result accuracy, we applied various statistical tools, such as the F-statistic to exclude the influence of weak instrumental variables and the MR-PRESSO method to identify potential outliers. Additionally, we conducted heterogeneity and pleiotropy analyses to ensure the reliability of the MR results.

However, our study also has limitations. Firstly, our data are limited to individuals of European ancestry, so caution should be exercised when extrapolating these findings to populations of other ancestries. Secondly, despite conducting various sensitivity analyses to test the MR assumptions, we cannot completely rule out the influence of confounding bias and/or horizontal pleiotropy. Lastly, as we used genetic data, we can only infer causal relationships between exposure and outcome without specifying the exact mechanisms involved.

Conclusions

Our study, utilizing genetic data, provides preliminary evidence supporting potential positive causal relationships between MUFA, MUFA/FA, LDL-C, VLDL-C, total cholesterol, ApoB, TG, and TL.

Figure 2. Core Figure of the Study: Lipids and Apolipoproteins Impact Telomere Length.

Figure 3. Results of univariate MR for 21 Exposures and Telomere Length. A red dashed line in the figure represents the odds ratio (OR) value of 1.
nSNP = number of SNPs. (A) uvMR results for fatty acids and TL; (B) uvMR results for n-3 PUFAs, n-6 PUFAs, and TL; (C) uvMR results for lipoprotein cholesterol and TL; (D) uvMR results for glycerophospholipids, glycerolipids, and TL.
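The univariable MR estimates reported above can be sketched numerically. The snippet below is a minimal illustration, not the authors' actual pipeline: it computes the per-SNP F-statistic used to screen for weak instruments and the inverse-variance weighted (IVW) causal estimate from per-SNP exposure and outcome effect sizes, assuming hypothetical GWAS summary statistics; the function names are our own.

```python
import math

def f_statistic(beta_x, se_x):
    # Approximate per-SNP F-statistic for instrument strength;
    # F > 10 is the conventional threshold for a non-weak instrument.
    return (beta_x / se_x) ** 2

def ivw_estimate(beta_x, beta_y, se_y):
    # IVW estimate: an inverse-variance weighted mean of per-SNP
    # Wald ratios (beta_y / beta_x), using the first-order
    # approximation se(ratio) ~= se_y / |beta_x|.
    ratios = [by / bx for bx, by in zip(beta_x, beta_y)]
    weights = [(bx / sy) ** 2 for bx, sy in zip(beta_x, se_y)]
    est = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Hypothetical summary statistics for two SNPs:
est, se = ivw_estimate([0.1, 0.2], [0.05, 0.1], [0.01, 0.02])
```

In practice the exposure-side standard errors also enter more refined weighting schemes, and outlier-robust variants such as MR-PRESSO remove SNPs whose Wald ratios deviate strongly from the IVW fit before re-estimating.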
Immunotherapy and pancreatic cancer: unique challenges and potential opportunities

Despite decades of research, pancreatic ductal adenocarcinoma (PDAC) continues to have the worst 5-year survival of any malignancy. With 338,000 new cases diagnosed and over 300,000 deaths per year globally, there is an urgent unmet need to improve the therapeutic options available. Novel immunotherapies have shown promising results across multiple solid tumours, in a number of cases surpassing chemotherapy as a first-line therapeutic option. However, to date, trials of single-agent immunotherapies in PDAC have been disappointing and PDAC has been labelled as a nonimmunogenic cancer. This lack of response may in part be attributed to PDAC's unique tumour microenvironment (TME), consisting of a dense fibrotic stroma and a scarcity of tumour infiltrating lymphocytes. However, as our understanding of the PDAC TME evolves, it is becoming apparent that the problem is not simply the immune system failing to recognize the cancer. There is a highly complex interplay between stromal signals, the immune system and tumour cells, at times possibly restraining tumour growth and at others supporting growth and metastasis. Understanding this complexity will enable the development of rational combinations with immunotherapy, priming the TME to offer immunotherapy the best chance of success. This review seeks to describe the unique challenges of the PDAC TME, the potential opportunities it may afford and the trials in progress capitalizing on recent insights in this area.

Introduction

Pancreatic ductal adenocarcinoma (PDAC) is predicted to become the second leading cause of cancer-related deaths in America by 2030. 1 Despite a multitude of clinical trials, prognosis remains dismal, with a median overall survival (OS) of 4-6 months, having not significantly improved over the last 40 years.
2 This poor prognosis is multifactorial, attributed to PDAC's systemic and aggressive nature, its complex mutational landscape, its desmoplastic stroma, a potently immunosuppressive tumour microenvironment (TME) and a current lack of effective therapies. Surgery remains the only curative treatment for PDAC, but few patients present with operable disease and approximately 80% of patients who undergo curative-intent surgery ultimately relapse and succumb to their disease. 3,4 The majority of patients present with advanced disease, where the standard of care is chemotherapy, but PDAC is a relatively chemotherapy-resistant cancer. Even for the fittest patients, able to tolerate the triplet chemotherapy regimen FOLFIRINOX (5-fluorouracil, irinotecan and oxaliplatin), OS is only extended to 11 months. 5 Such palliative chemotherapy may be associated with significant toxicity and its impact on quality of life must be carefully considered. Targeted therapies in unselected PDAC patients have not fared any better than chemotherapy in clinical trials and have been unable to offer any clinically meaningful benefit to date. 6 There is therefore an urgent unmet need to develop novel, effective, well-tolerated treatments for this disease, and immunotherapy is an obvious area for exploration. Immunotherapy has resulted in a paradigm shift in the treatment of a number of solid tumours, including melanoma, non-small cell lung cancer (NSCLC), gastric cancer, genitourinary cancers, head and neck cancer and selected colorectal cancers. 7 However, as yet PDAC has proved more of a challenge, with disappointing results from early trials of single-agent immune checkpoint blockade 8,9 (Table 1). This failure is likely due to a combination of mechanisms of immune escape in PDAC.
These mechanisms range from a potential lack of antigenicity and a low mutational burden to the complex interactions between tumour cells, the desmoplastic stroma and immune cells in PDAC creating a highly immunosuppressive TME, making this disease insusceptible to single-agent immunotherapy. This review seeks to describe PDAC's immune escape mechanisms, focusing on recent insights into the interplay between various elements of its immune-excluded TME, and consider how these insights may be leveraged into combination immunotherapy studies with a sound scientific basis.

Antigenicity and tumour mutational burden in PDAC

As described in the cancer immunity cycle, an effective anticancer immune response requires multiple steps. 10 The first steps require the release and presentation of neoantigens: tumour-associated antigens or tumour-specific antigens. The presence of these neoantigens has been associated with increased numbers of tumour infiltrating lymphocytes (TILs) and enhanced sensitivity to checkpoint blockade. For example, a higher neoantigen burden and nonsynonymous mutational load have been associated with improved efficacy of pembrolizumab treatment in patients with NSCLC 11 and a higher mutational burden with increased clinical benefit from ipilimumab/tremelimumab in melanoma. 12 The relationship between high mutational burden, increased TILs and efficacy of immunotherapy is also seen in tumours associated with mismatch repair (MMR) deficiency, 13 noting approximately 9-17% of PDACs may have an MMR deficiency. [14][15][16] While these microsatellite instability high PDAC patients may be considered for immunotherapy under the tumour-agnostic approval for pembrolizumab, this is not the case for the majority of patients.
Pancreatic cancer is reported to have a relatively low mutational load, with a median somatic mutational prevalence of only 1 mutation/megabase, contrasting with over 10 mutations/megabase in melanoma and just under 10 for lung cancer and bladder cancer. 17 A value of 10 somatic mutations/megabase of DNA corresponds to approximately 150 nonsynonymous mutations within expressed genes, and the formation of neoantigens is common in tumours with such a mutational load. 18 How effective neoantigen formation is with lower mutational loads of one or less is less clear. 18 Despite PDAC having a comparatively low mutational load, there is evidence that nearly all cases do express some candidate neoantigens, including quality neoantigens predicted to have a robust level of expression on human leukocyte antigen. 20 However, these neoantigens then require efficient presentation by antigen-presenting cells to stimulate a T-cell response, which appears to be problematic in PDAC. Dendritic cells (DCs), a form of antigen-presenting cell, respond to neoantigen recognition with upregulation of the major histocompatibility complex (MHC) I and II and costimulatory molecules that interact with and activate T-cells. DCs in PDAC tend to be scarce and, if present, immature, resulting in impaired early tumour antigen recognition and a subsequently impaired T-cell response. 21 In addition to the low mutational load and impaired antigen recognition, immunosuppression is also a particularly dominant force in PDAC, leading to actively suppressed T-cells with a reduced activation signature. 19 The TME plays a key role in this immunosuppression.

The tumour microenvironment in PDAC

The TME in PDAC is characterized by a desmoplastic reaction, a growth of fibrous tissue, surrounding the malignant epithelial cells.
22 This reaction is composed of cancer-associated fibroblasts, arising from pancreatic stellate cells, which produce several extracellular matrix proteins and cytokines, and vascular endothelial cells, all infiltrated by a variety of immune cells (lymphocytes, mast cells and macrophages; Figure 1). The highly fibrotic stroma is seen surrounding both primary and metastatic lesions 23 and is thought to play an important role in PDAC growth, metastasis and resistance to treatment, as well as promoting a hypoxic microenvironment. 24 Indeed, co-injection of human pancreatic stellate cells with tumour cells has been shown to result in increased primary tumour incidence, size, and metastasis in orthotopic mouse models. 25 However, there is considerable complexity to the interactions between the stroma and the tumour cells. The stroma appears to have a dual nature, at times even restraining pancreatic cancer progression. [26][27][28] Moffitt and colleagues defined 'normal' and 'activated' stromal subtypes, based on stromal gene expression. The 'activated' stromal subtype is associated with a worse median OS versus the 'normal' stromal subtype [hazard ratio (HR) 1.94, confidence interval (CI) 1.11-3.37, p = 0.019]. 29 The group postulated that the existence of these two subtypes might help to explain the differential effects of stroma seen in some preclinical models and indeed in clinical trials. Activated stroma was characterized by a diverse set of genes associated with macrophages, such as ITGAM, an integrin, and the chemokine ligands CCL13 and CCL18. Unpicking such stromal signalling is of paramount importance in understanding how the immunosuppressive TME develops and is maintained. While PDAC has been described as a nonimmunogenic cancer, a robust infiltrate of immune cells has been documented, usually dominated by myeloid derived suppressor cells (MDSCs), tumour-associated macrophages (TAMs) and neutrophils, with TILs present but in smaller numbers.
19,30,31 The immunosuppressive MDSCs and TAMs are attracted to the TME by granulocyte macrophage colony-stimulating factor (GM-CSF) and chemokine (C-C motif) ligand 2 (CCL2) secreted by the tumour cells respectively. 32 The presence of these myeloid cells is associated with a worse prognosis in patients with resected disease, as are regulatory T-cells. On the other hand, the presence of effector (CD8+ and CD4+) T-cells may be associated with a favourable prognosis. [33][34][35][36] The B-cells present are also thought to be important, with an interleukin (IL)35-producing CD1d(hi)CD5(+) subset demonstrated to accumulate in the TME during early neoplasia, supporting tumour cell growth. 37 Despite the presence of these immune cells and a theoretically inflamed TME, PDAC is still considered an immune-excluded tumour, meaning that while some TILs may be present they are prevented from directly interacting with the tumour cells, existing as clusters, tertiary lymphoid aggregates or trapped within the stroma. 38,39 Those T-cells, which are present in the TME, may also not be able to mount a full immune response to the tumour cells, being hindered by the secretion of immunosuppressive cytokines such as IL-10 and transforming growth factor (TGF)-β, and inactivated by the loss of CD3 zeta, a signal transducing chain in TILs. 40 In addition, regulatory (FOXP3+) T-cells in the area block effector T-cell division and both macrophages and γδ T-cells, another type of immunosuppressive T-cell, prevent effector T-cells entering the TME through mechanisms including programmed cell death (PD)-1/programmed death ligand (PD-L)1 signalling. 41 Such escape mechanisms have been well documented in cancers with T-cell inflamed TMEs. 
42 The type of T-cells present appears to be dynamic through the course of disease, with the prevalence of regulatory T-cells increasing from premalignant pancreatic lesions to advanced PDAC, and certain chemotherapies, such as gemcitabine and cyclophosphamide, are able to transiently reduce the number of regulatory T-cells present. 35,43,44 Understanding the type, location and functionality of the immune cells in the TME at different times in the disease process and during treatment will be particularly important when it comes to considering combination therapies to overcome the immunosuppressive tumour milieu.

Combinatorial strategies to overcome the immunosuppressive nature of PDAC

As described, there are multiple reasons why single-agent immunotherapy may fail in this tumour type, with its low tumour antigenicity, poor presentation of neoantigens and a TME where the few T-cells present are prevented from interacting with the tumour and are suppressed by TAMs, MDSCs, regulatory T-cells and cytokines. As our understanding of this complex and challenging situation develops, novel combination strategies are being considered to target these various elements in an attempt to maximize the chance of success with immunotherapy.

Combination of chemotherapy with immunotherapy

Chemotherapies, including anthracyclines, gemcitabine and oxaliplatin, have been implicated in DC recruitment and activation. 45,46 Induction chemotherapy may also trigger tumour-specific antigen release. 47,48 This, and the transient reduction of regulatory T-cells seen with chemotherapies such as gemcitabine and cyclophosphamide, provides a sound rationale for using chemotherapy to prime the immune system, supporting the premise of a combined or possibly staggered chemotherapy and immunotherapy approach to treatment.
This strategy has been explored in a number of small early studies with some interesting results to date, noting these were mainly dose-escalation studies, often in combination with single-agent gemcitabine (Table 2). In addition to the more commonly investigated checkpoint inhibitors, such as CTLA-4 inhibitors (ipilimumab and tremelimumab) and anti-PD-1/PD-L1 antibodies (nivolumab, pembrolizumab and durvalumab), these trials also involve some novel immunotherapy approaches. For example, targeting CD40 with an agonistic antibody aims to stimulate antigen-presenting cells, such as DCs and B-cells, and promote antitumour T-cell responses. 52 In PDAC mouse models, CD40 agonists have been demonstrated to enhance chemotherapy efficacy by redirecting TAMs to induce fibrosis degradation through interferon (IFN) and CCL2 signalling. 53 In a small clinical trial (n = 22 patients) the addition of a CD40 agonist to gemcitabine resulted in a median OS of 8.4 months and a response rate of 19%, which compared favourably against historical controls of single-agent gemcitabine, and further study is warranted. 50 C-C chemokine receptor type 2 (CCR2), a receptor for CCL2, is also a target of interest, as it is involved in the recruitment of immunosuppressive TAMs into the PDAC TME. Colony-stimulating factor-1 receptor (CSF1R) provides another option, mediating the biological effects of CSF1, namely the production and differentiation of macrophages, including TAMs. Preclinical experiments have shown that inhibiting either CSF1R or CCR2 decreases the numbers of pancreatic tumour-initiating cells and improves chemotherapeutic efficacy. 54 Indoleamine-pyrrole 2,3-dioxygenase (IDO) is an enzyme implicated in the generation of an immunosuppressive TME, converting antigen-presenting cells from immunogenic to tolerogenic, producing inhibitory cytokines and activating regulatory T-cells, and provides another focus of study.
56 In PDAC, upregulation of IDO has been associated with an increased number of regulatory T-cells. 57 The interim analysis of the clinical trial of indoximod in combination with gemcitabine/Nab-paclitaxel listed below [ClinicalTrials.gov identifier: NCT02077881] reported a 37% response rate, with one patient having a confirmed partial response, 58 and the final results are awaited with interest. Ibrutinib, a small-molecule inhibitor of Bruton's tyrosine kinase which blocks B-cell receptor signalling, used in the treatment of various haematological malignancies, is also under investigation in PDAC. In mouse models of PDAC, ibrutinib limits tumour growth, diminishes fibrosis, extends survival and improves the response to chemotherapy, 59 and the results of a number of clinical studies are expected shortly. It remains to be seen whether the future treatment of PDAC will involve immunotherapy in combination with chemotherapy and its attendant toxicities. At the moment, while these novel immune targets are being assessed, a chemotherapy-immunotherapy combination appears to be a judicious approach for clinical trials. This is especially true with the newer combination chemotherapy regimens with a reasonable response rate, which can be important in patients with bulky disease. Once the most promising targets are selected it will be interesting to see if chemotherapy-free treatment options become a reality.

Vaccine combinations

A multitude of vaccines have been studied in PDAC, including whole-cell, DC, specific peptide and virus-based vaccines. Multiple antigen targets have been investigated, with mesothelin, mucin-1 (MUC1), Wilms' tumour 1 (WT1), carcinoembryonic antigen (CEA) and mutated KRAS making some of the most appealing targets 60,61 (Table 3).
While a couple of small studies have demonstrated that personalized peptide vaccination in combination with chemotherapy may be a well-tolerated and potentially interesting approach in this disease, we are some way away from this becoming a practical therapy. 62,63 One of the most studied vaccines to date is GVAX, a whole-cell vaccine. Whole-cell vaccines enable multiple antigens to be targeted simultaneously and as such may result in an expanded T-cell repertoire. Such vaccines may be derived from a specific patient's tumour (autologous vaccines), or from another patient's tumour (allogeneic vaccines). Allogeneic whole-cell vaccines appear to provide a more pragmatic approach and multiple studies have been conducted using the GVAX vaccine in a variety of settings, with some interesting if mixed results. GVAX is an irradiated whole-cell tumour vaccine which has been genetically modified to release GM-CSF, a cytokine that mobilizes leukocytes to the TME and induces significant immunoglobulin (Ig)G and IgM responses. 73 GVAX has been shown to induce T-cell infiltration and the formation of tertiary lymphoid aggregates in patients with PDAC when administered prior to resection, possibly converting a 'nonimmunogenic' tumour into a more 'immunogenic' tumour type. 74 GVAX therapy has also been associated with a significant upregulation of PD-L1 expression in PDAC mouse models; when combined with an anti-PD-1 antibody, the mice were found to have increased CD8+ T-cells in the TME. Algenpantucel-L is another allogeneic whole-cell vaccine, engineered to express alpha-Gal (mouse alpha-1,3-galactosyltransferase gene) in two human PDAC cell lines. Here recent trial results have also been less than encouraging, despite earlier positive results, with the phase III IMPRESS study failing to reach its primary endpoint of improving OS.
A press release by NewLink Genetics confirmed OS for patients with resectable PDAC treated with surgery, standard of care and adjuvant algenpantucel was 27.3 months versus 30.4 months for those treated with surgery and standard of care alone. The varied results from preclinical and clinical vaccine studies suggest that, while some vaccines may be active in this disease, it is unlikely that a single-agent vaccine approach will be able to successfully overcome the level of immunosuppression seen in PDAC. The results of the various ongoing checkpoint inhibitor/vaccine combination studies are awaited with interest (Table 4).

Adoptive T-cell strategies

Adoptive T-cell strategies, or cellular adoptive immunotherapy, involve collecting tumour-reactive T-cells, modifying them ex vivo and infusing them to generate an optimized immune response; the approach has been most extensively investigated in haematological cancers. The T-cells may be derived from an endogenous source, autologous or allogeneic cytotoxic T lymphocytes (CTLs), or be engineered to recognize a specific tumour antigen via a chimeric antigen receptor (CAR-T-cell) or a cloned T-cell receptor. A number of preclinical and small clinical trials of a CTL infusion have been completed in PDAC. For example, MUC1-reactive CTLs, generated by exposing T-cells from healthy volunteers' peripheral blood samples to a MUC1-expressing human PDAC cell line, have been shown to be cytotoxic against MUC1-expressing PDAC cell lines. 79 In a clinical study of CTLs in combination with pulsed MUC1 DCs, 5/20 patients with unresectable or recurrent PDAC had stable disease and 1 patient with multiple lung metastases had a complete response, with a mean OS of 9.8 months and no grade 2-4 toxicity reported. 80 A further retrospective study investigated the outcomes for patients with unresectable or recurrent PDAC treated with MUC1-DCs, MUC1-CTLs and gemcitabine in combination.
In the 42 patients analyzed, median survival was 13.9 months with a disease control rate of over 60%, and no severe toxicities were reported. 81 Further prospective randomized study appears to be warranted. MUC1-targeting CAR-T-cells have also been investigated. In a PDAC xenograft model, CAR-T-cells engineered to recognize the tumour-specific Tn glycoform of MUC1, a neoantigen, demonstrated target-specific activity, controlled tumour growth and improved survival. 82 While adoptive T-cell strategies provide a novel and exciting approach, there are many hurdles to overcome before this treatment reaches the clinic. Both infused TILs and CAR-T-cells have been shown to become progressively dysfunctional over time and to upregulate various inhibitory receptors, including PD-1 and LAG3. 83 Further, depending upon the antigen selected for CAR-T therapy, there is a risk of low-level expression on normal tissues and the development of toxicity and autoimmunity, in addition to the risk of cytokine release syndrome. As with the other combination approaches discussed herein, choosing the correct partner for CAR-T therapy, as well as the most effective and safest antigen, will be of paramount importance.

Combination of agents targeting the stroma and immunotherapy

As discussed, the TME plays a critical role in PDAC and much effort has been spent in developing therapies to target its desmoplastic stroma. While some have been disappointing, notably the hedgehog inhibitors, 85 a number of more recent studies have been more promising, and based on these results future combinations of immunotherapy with agents aiming to remodel or reprogram the stroma appear likely. Hyaluronic acid (HA) is a large glycosaminoglycan, abundant in the PDAC extracellular matrix and correlated with a poor prognosis.
86 Following mouse studies demonstrating low vascularity and high interstitial pressure associated with high HA expression responding to treatment with hyaluronidase, clinical studies of PEGPH20, a pegylated recombinant human hyaluronidase, commenced. 87,88 The latest to report is the randomized phase II HALO 202 study of PEGPH20 in combination with gemcitabine and nab-paclitaxel as a first-line treatment for metastatic PDAC versus gemcitabine and nab-paclitaxel alone. 89 A total of 34% of patients were found to be HA-high (defined as over 50% HA tumour surface staining). Progression-free survival was increased in the triplet regimen in all patients, but the largest improvement was seen in the HA-high patients, with an objective response of 45% versus 31%, and an OS of 11.5 versus 8.5 months (HR, 0.96; 95% CI, 0.57-1.61). Thromboembolic events were significantly increased in the triplet arm such that the study was put on hold and, in a second phase of the study, prophylactic enoxaparin was added. Following this amendment, the combination had a manageable toxicity profile, with thromboembolic event frequency reduced and no increase in bleeding, and a phase III study is underway [ClinicalTrials.gov identifier: NCT02715804]. Preclinical in vitro and in vivo studies have demonstrated that the barrier formed by high levels of HA in the TME, inhibits access of monoclonal antibodies and natural killer cells, and that combination therapy with PEGPH20 may enhance the anti-tumour effects of the monoclonal antibodies. 90 Further, tumour growth inhibition by anti-PD-L1 and anti-PD-1 drugs has been found to be enhanced by PEGPH20 in mouse HA-high PDAC models. 
91 A phase Ib study of PEGPH20 in combination with pembrolizumab is underway in NSCLC and gastric cancer [ClinicalTrials.gov identifier: NCT02563548] and a phase I dose-escalation study of VCN-01, a genetically modified human adenovirus encoding human PH20 hyaluronidase, alone or in combination with gemcitabine/nab-paclitaxel is recruiting patients with advanced solid tumours, including PDAC [ClinicalTrials.gov identifier: NCT02045602]. Focal adhesion kinase (FAK), a nonreceptor cytoplasmic tyrosine kinase, provides another stromal target. FAK promotes tumour progression and metastasis through its effects both on cancer cells and on the stromal cells of the TME, where FAK phosphorylation aids epithelial-to-mesenchymal transition. Through kinase-dependent and -independent processes, FAK integrates signals from integrins and growth factor receptors to regulate cell proliferation and survival, and to promote angiogenesis, migration, invasion and cancer stem cell (CSC) renewal; its expression has been demonstrated in pancreatic cell lines and resected PDAC, where expression was correlated with tumour size and stage. [92][93][94] While no clinical trials have yet demonstrated a response to single-agent FAK inhibition in PDAC, a synergistic effect was demonstrated preclinically when FAK inhibition was combined with chemotherapy and a PD-1 antagonist, and a phase I study of this combination is underway [ClinicalTrials.gov identifier: NCT02546531; gemcitabine, defactinib and pembrolizumab]. [95][96][97] While the PDAC stroma clearly plays an important role in restricting the access of various therapies to the tumour, it is also thought to restrain tumour invasion and metastasis, and novel targets have been sought to reprogram rather than ablate the stroma. The C-X-C motif chemokine receptor type 4 (CXCR4)/stromal derived factor-1 (CXCL12) axis provides such a target.
It is thought to be important in driving invasion and metastasis in PDAC, with CXCR4 strongly expressed at the tumour's leading edge in CSCs 98 and CXCL12 secreted by cancer-associated fibroblasts. This role may be mediated in part through CXCR4/CXCL12 activation of the Wnt/β-catenin axis and nuclear factor (NF)-κB, which results in increased matrix metalloprotein secretion and a resulting decomposition of the extracellular matrix, enabling invasion. 99 In a preclinical study, CXCR4+ CSCs have been shown to be required for the development of liver metastases, and the blockade of CXCR4 was found to significantly reduce metastasis in orthotopic mouse models of PDAC. 98 In addition, this axis has been implicated in mediating immunosuppression by cancer-associated fibroblasts and its inhibition with AMD3100 has been shown to act synergistically with an anti-PD-L1 therapy in a PDAC mouse model. 100 A number of early-phase clinical studies are underway testing the combination of a CXCR4 antagonist and a checkpoint inhibitor, such as COMBAT, a phase II study assessing the combination of BL-8040 and pembrolizumab in patients with metastatic PDAC [ClinicalTrials.gov identifier: NCT02826486]. However, the CXCessoR4 phase I/II study of the anti-CXCR4 antibody ulocuplumab and nivolumab in PDAC and small cell lung cancer (SCLC) was terminated early due to a lack of efficacy [ClinicalTrials.gov identifier: NCT02472977], and the success of this approach remains to be seen. Another stromal target thought to play an important role in invasion and metastasis is retinoic acid. In PDAC, quiescent pancreatic stellate cells transform into activated cancer-associated fibroblasts, secreting extracellular matrix and remodelling and stiffening the TME. 101 All-trans retinoic acid (ATRA), a physiologically active form of vitamin A, and retinoic acid receptors are reduced in PDAC tissue and associated with worse patient survival outcomes.
102 It has been demonstrated that ATRA can be used to restore mechanical quiescence and reduce the motility of pancreatic stellate cells, suppress extracellular matrix remodelling to inhibit invasion, reduce proliferation and increase cancer cell apoptosis in three-dimensional (3D) organotypic and mouse PDAC models. 101,103 The STARPAC clinical study is in progress looking at the combination of ATRA, gemcitabine and nab-paclitaxel. Given the contributions of retinoic acid to immunological tolerance and the elicitation of adaptive immune responses, should this approach prove active, future combinations with checkpoint blockade may prove interesting. 104,105 Radiotherapy combinations Historically, radiotherapy had been considered to compromise the immune system, as white blood cells are highly sensitive to irradiation and the large fields delivered often caused damage to local lymphatics. However, with more modern highly localised techniques and a greater understanding of radiotherapy's immunomodulatory and abscopal effects, where a patient may show disease regression at a site distant to the irradiated area, the role of radiotherapy as an immune-priming treatment is now being explored across multiple solid tumours. 106 In a preclinical PDAC study, checkpoint inhibition with a PD-L1 inhibitor significantly improved tumour response to high-dose radiotherapy by altering the phenotype of the TME to be more 'antitumorigenic'. 107 In this study, anti-PD-L1 therapy alone and in combination with radiotherapy significantly increased the CD8+ve/Treg ratio and enhanced the effect of radiotherapy, preventing the formation of liver metastases. Further, Azad and colleagues demonstrated that PD-L1 inhibition also improved tumour response after gemcitabine-based chemoradiation in a PDAC mouse model. Other novel targets are being considered for use in combination with radiotherapy.
Stimulator of interferon genes (STING) is a transmembrane protein implicated in the production of type 1 interferons and inflammatory cytokines in response to viral infections, which also appears to play a role in the adaptive immune response against cancer. 108,109 In a mouse model of PDAC, STING ligands have been shown to synergize with computed tomography (CT)-guided radiotherapy to control local and distant tumours through early tumour necrosis factor (TNF) α-dependent necrosis followed by later CD8+ T-cell-dependent control of remaining disease. 110 The authors suggest that the STING ligand converts cell death mediated by radiotherapy into an endogenous vaccine, enhancing the adaptive immune response and controlling local and distant disease. STING is expressed by human PDAC and stromal cells, and it will be interesting to see if these results translate into positive clinical trials, either in combination with radiotherapy or with checkpoint blockade. In a similar vein, Toll-like receptors (TLRs), transmembrane proteins which play an important role in tissue repair and injury-induced inflammation, may provide another novel target in combination with radiotherapy. In cancer, TLR agonists are thought to upregulate the adaptive immune response, induce vascular permeability and recruit leukocytes to the TME, but have also been associated with promoting cancer survival and progression. 111 TLRs 7/8 are highly expressed in human PDAC, and TLR 7/8 agonists have been shown to boost DC antigen-presenting activity as an adjuvant to radiotherapy in mouse models of PDAC. 112 However, expression and stimulation of TLRs 7/8 have also been associated with cancer progression and resistance to fluorouracil (5-FU) in cell lines, possibly through Notch-2 signalling. 113 Many other novel immune targets are also being considered in combination with checkpoint blockade, such as signalling via the C-X-C chemokine receptor type 2 (CXCR2) axis or IL-10.
The CXCR2 axis is an inflammatory signalling pathway involved in neutrophil recruitment, migration and tumour cell proliferation. In human PDAC, CXCR2 signalling at the tumour border has been associated with a poor outcome. 117,118 Interest in CXCR2/checkpoint inhibition was piqued by a mouse study in which CXCR2 inhibition was demonstrated to promote T-cell tumour infiltration and increased sensitivity to anti-PD-1 immunotherapy. 119 A phase Ib/II trial is currently evaluating durvalumab in combination with either chemotherapy (nab-paclitaxel and gemcitabine) or a CXCR2 inhibitor (AZD5069) in metastatic PDAC [ClinicalTrials.gov identifier: NCT02583477]. IL-10 has been considered to be an anti-inflammatory, protumourigenic cytokine, mainly secreted by M2-macrophages, regulatory T-cells and T helper 2 cells, with elevated levels of circulating IL-10 associated with a poor outcome in various cancers. 120 However, additional studies have suggested an anti-tumour role for IL-10, with IL-10 able to boost anti-tumour immunity in mouse studies, expanding CD8+ TILs and inhibiting inflammatory CD4+ T-cells. 121 The phase I study of AM0010, a pegylated recombinant human IL-10, suggests that IL-10 can act as an immune-activating cytokine in human solid tumours, leading to systemic immune activation with increased immune-stimulatory cytokines and reduced TGFβ in patients' serum. 121 In PDAC, AM0010 has been investigated alone and in combination with chemotherapy, demonstrating clinical activity and immune stimulation, with AM0010 increasing PD-1+ activated CD8 T-cells and stimulating an oligoclonal expansion of T-cell clones in the blood. 122 A phase I dose-escalation trial is currently underway with PDAC arms investigating the combination of daily AM0010 with chemotherapy or the anti-PD-1 antibodies pembrolizumab or nivolumab [ClinicalTrials.gov identifier: NCT02009449].
Still more therapies targeting other novel checkpoints are in early development, including drugs directed at lymphocyte activation gene 3 (LAG3), T-cell immunoglobulin and mucin-domain-containing-3 (TIM3), T-cell immunoglobulin and immune-tyrosine inhibitory motif domain (TIGIT) and glucocorticoid-induced TNFR-related protein. Strategies to harness costimulatory molecules are also being considered, for example targeting CD137 or OX-40, both members of the TNF superfamily. 123,124 A number of these targets are already being assessed in phase I studies, including in PDAC patients. Once the optimal targets have been selected, and potentially combined with other immunotherapies in a rational manner, it will be interesting to see if such combinations are sufficient to augment anti-tumour immunity without the need for chemotherapy in this most immunosuppressive disease. An alternative approach to combination immunotherapy is the development of single drugs able to target more than one epitope. Bi-specific and multi-specific antibodies are engineered to have such multi-functionality, and in cancer they have been designed to block particular pathways more completely or to deliver effector immune cells efficiently to tumours. 125 For example, a phase I study of epidermal growth factor receptor (EGFR) bispecific antibody armed T-cells (BATs), anti-CD3 × anti-EGFR BATs, demonstrated clinical activity with a median OS of 14.5 months in five patients with locally advanced or metastatic PDAC, and a phase Ib/II study in the maintenance setting is ongoing [ClinicalTrials.gov identifier: NCT03269526]. 126 This field is very much in its infancy, but early results are encouraging. Combinations and DNA damage repair pathways There is accumulating evidence that continued DNA damage in tumour cells results in a proinflammatory, immunologically active tumour environment.
127,128 This effect may be heightened in tumours with deficient DNA damage repair, and indeed such tumours have been demonstrated to be more sensitive to immunotherapy, as exemplified by the sensitivity of MMR-deficient colorectal cancer to checkpoint blockade versus MMR-proficient colorectal cancer. 13 Approximately 15% of PDAC patients fall into an 'unstable' molecular subtype, which is associated with deficiencies in DNA maintenance and a sensitivity to platinum agents. 129 Such features may be used to select patients for immunotherapy in the future. This approach is being investigated in a phase II study of the IDO inhibitor epacadostat in combination with pembrolizumab in PDAC patients with chromosomal instability or homologous recombination deficiency [ClinicalTrials.gov identifier: NCT03432676]. DNA repair deficiencies may also be used as targets themselves. Various poly ADP ribose polymerase (PARP) inhibitors have been investigated in PDAC, alone and in combination with chemotherapy, with some promising results. [130][131][132] Further studies combining immunotherapy with PARP inhibitors are underway, such as Parpvax, a phase Ib/II study of niraparib plus either ipilimumab or nivolumab in patients with advanced PDAC whose disease has not progressed on platinum-based therapy [ClinicalTrials.gov identifier: NCT03404960]. Conclusion PDAC presents an extremely difficult malignancy to treat. Its poor immunogenicity, unique TME and high levels of immunosuppression provide significant challenges when considering immunotherapy as a therapeutic option. However, as our depth of understanding increases, methods to overcome these hurdles are presenting themselves, and a multiplicity of immunotherapy studies in PDAC are underway, considering innovative targets and scientifically sound combinations. Appropriate patient selection for these novel combination approaches will be of paramount importance, and advances in molecular subtyping in PDAC may also be significant.
14,133 Overall, with continued progress in understanding the immunobiology of this disease, there are reasons to be optimistic that immunotherapy may well play an important role in the treatment of PDAC in the future.
Single-stage “Fix and Flap” gives Good Outcomes in Grade 3B/C Open Tibial Fractures: A Prospective Study Introduction: Grade 3B/C open tibial fractures with grossly contaminated degloving injuries have poor outcomes, with or without vascular injuries. The treatment decision oscillates between limb salvage and amputation. The standard protocol of repeated debridement and delayed wound cover is a challenge in developing countries due to overcrowded emergencies and limited operating room availability. We present the results of our modified protocol involving primary stabilisation with external fixation and immediate wound cover as an aggressive modality of treatment. Material and Methods: Thirty-three patients with severe open tibial shaft fractures were managed using a standardised protocol of emergent debridement, external fixation and immediate wound cover with free distant/local rotational muscle flaps and fasciocutaneous flaps, and with vascular repair in Grade 3C fractures. Intra-articular fractures were excluded. Patients were followed for a minimum of three years, with an assessment of clinical, radiological and functional outcomes. Results: Wound cover was achieved with 24 distant free muscle flaps, four local rotational muscle flaps and five fasciocutaneous flaps. All fractures united, with an average time to union of 40.3 weeks (16-88). Fifteen patients (45.4%) underwent only a single major surgery using primary definitive external fixation. Deep infection was seen in four patients (12.1%). Nineteen patients had excellent to good outcomes, six were fair, and eight were poor. Conclusion: “Fix and Flap” in the same sitting, using immediate wound cover and external fixation, has given good results in our hands despite the delayed presentation, the neurovascular deficit and the degloving injury. This may be a better management strategy in overcrowded tertiary care centres of developing countries, with a single surgical procedure in almost half the cases.
INTRODUCTION A Grade 3 open fracture is a high-energy injury, which may threaten the limb and occasionally even the life of the patient 1,2 . The tibia is the commonest site for open fractures 3 , with grade 3B/C injuries being complicated by degloving of soft tissues, gross contamination, possible vascular injuries and often poor outcomes. The management decision oscillates between limb salvage and amputation 4 , where many authors [5][6][7][8][9] project a dismal outcome due to the medium- and long-term problems with soft-tissue cover, infection and union, ending with serious disability. The situation is even more challenging in developing countries like India, where late presentation of patients, lack of adequate tertiary care facilities, complex fracture patterns associated with poly-trauma, poor hygiene, poly-microbially infected wounds and antibiotic resistance determine the final outcome 10 . The standard protocols of repeated serial debridement and delayed wound cover 11 present a challenge in underdeveloped countries, with overcrowding and long waiting lists for operations, even for emergencies. Wound coverage is delayed to allow for swelling and to facilitate a second-look procedure to reassess tissue viability. Using standard protocols, grade 3B/C open tibial fractures often require 30 to 50 weeks for consolidation 12 , with a 46% incidence of delayed union if accompanied by a neurovascular deficit, as opposed to 16% if the vessels are intact 13 . Rates of deep infection can range from 10% to 50% depending on the nature of the injury, the degree of contamination, as well as the age of the patient and associated co-morbidities 14 . Although widely accepted, this treatment protocol has recently been challenged, as repeated debridement and delayed closure could lead to additional tissue loss from desiccation and infection [15][16][17][18] .
Orthopaedic, plastic and vascular intervention must go hand in hand to treat these complex injuries 15,16,18 . The orthopaedic management of these severe injuries is undergoing a progressive change from external to internal fixation with increasing experience. Primary plating or primary nailing is now being preferred in Grade-1 to Grade-3A fractures, and sometimes even in grade-3B injuries [19][20][21] , despite the significantly high infection rates and the high re-operation rates compared to external fixation [22][23][24] . Primary interlocking nails remain confined to Grade 1 to Grade 3A open fractures without significant bone loss and simple fracture patterns. Some studies [25][26][27] have also correlated reaming with thermal damage to the bone, increased infection rates and decreased union rates. The staged conversion from temporary external fixator to interlocking nail is being recommended for poly-trauma patients 25 . However, in isolated severe Grade 3B/C open tibial fractures in the developing countries, this staged surgery often becomes a logistic problem due to overcrowding of operating theatres. Additionally, questions regarding the best time to convert an external fixator to an interlocking nail remain unanswered 28 , with the literature highlighting deep infection rates of up to 17% 29 due to residual pin tract infection. External fixators are perhaps the most preferred initial treatment modality in Grade 3B/C open tibial fractures; these have sometimes been used as the primary definitive fixation 30 , with no significant difference in non-union and deep infection rates as compared to interlocking nails 24 . Modern plastic surgery has advanced from the complexities of the pedicled to micro-vascular techniques, including free tissue transfers 31 . This allows aggressive use of flaps to rapidly and reliably convert a severe open fracture to closed injury in a single intervention, even with an external fixator in situ. 
Using these techniques, we employed a single-stage protocol of emergent radical debridement and primary stabilisation with locally sourced external fixator constructs, along with immediate wound coverage using various types of flaps and vascular surgery intervention, when required. This protocol was used to manage grade 3B/C open tibia fractures at our tertiary care centre by a multidisciplinary team of orthopaedic surgeons, plastic surgeons and vascular surgeons. Our experiences and outcomes over a five-year study period are presented. MATERIALS AND METHODS The study was performed at Advanced Trauma Center, PGIMER, Chandigarh, starting in January 2013 and spanning five years, during which 38 patients with isolated severe open tibial shaft fractures (Grade 3B and 3C) without systemic injury, presenting to our centre, were enrolled in the initial two years of the study. All cases were followed up and assessed for functional outcomes at three years from enrolment. Diaphyseal and metaphyseal open tibial shaft fractures were included, and intra-articular fractures were excluded. Ethical clearance from the ethics committee of the institute for this study was obtained. Consent was taken from all the patients before enrolling for the study. A multidisciplinary team consisting of orthopaedic, plastic and vascular surgeons collectively managed all patients. Preoperative wound toiletry, antibiotics and tetanus prophylaxis were administered to all patients. Penicillin was added if anaerobic contamination was suspected, especially in farmyard injuries. All patients were taken to the operation theatre within one hour of presentation to our hospital after the necessary investigations, including an angiogram if needed. Meticulous radical wound debridement was performed, both inside and outside the zone of injury, with liberal application of lavage as per standard protocols.
As viability and vascularity of the soft tissues were of prime importance during debridement, all devitalised tissues were excised freely inside the zone of injury. Debridement was further extended outside the zone of injury until adequate bleeding and viable tissues were encountered, to provide a healthy bed for tissue transfer. Skeletal stabilisation was done using stainless steel external fixator constructs appropriate to the fracture morphology from INOR, India. After debridement, intra-operative wound cultures were sent, for targeted postoperative antibiotic administration with the results, in the wards. After achieving physiological length, alignment and rotation, fractures were stabilised with maximum possible cortical contact achievable intra-operatively without significant shortening. Immediate wound coverage was provided in the same sitting by plastic surgeons. The choice between a fasciocutaneous flap or muscle flap, either local or distant, was based on the injury and soft tissue status. Distant micro-vascular free muscle flaps were preferred due to the fear of vascular compromise and poor viability at the zone of injury. The urgent vascular repair was done in Grade 3C open fractures as a priority along with the initial fixation and wound coverage using the same protocol, in the same sitting. Post-operative intravenous antibiotics were used in all patients for the initial three days, including the higher spectrum antibiotics of piperacillin-tazobactam, linezolid and clindamycin for highly contaminated wounds followed by oral antibiotics till sutures were removed. Subsequent antibiotics were given according to the results of the intraoperative microbiological cultures. 
An aggressive postoperative dressing regimen was followed which consisted of regular wound examination, minor local debridement when necessary, close look for flap viability, customised aggressive wound care with foam dressings for exudating wounds, hydrogels to remove slough and promote autolytic debridement, and silver dressings. Negative pressure wound therapy (NPWT) was used in consultation with plastic surgeons for the removal of exudates, minimising venous congestion of flaps and promoting granulation tissue. Repeat debridement was done only in cases with elevated leukocyte counts along with clinical signs of infection. Patients were discharged after suture removal and were followed weekly for two months, and after that at two-monthly intervals till they recovered. Fixators were kept in-situ for a minimum of four months in patients with at least two or more cortices in contact at the time of initial stabilisation. The early movement of the knee and ankle joints were encouraged; axial dynamisation and loading were individualised. Toe-touch was encouraged as soon as possible post-operatively depending on the wound status, and partial weight-bearing was started at six weeks, going to full weight bearing by three months. Final clinico-radiological outcomes were assessed at the three-year follow-up using the Johner and Wruhs criteria 32 ( Table I). The results were compiled at the end of the fiveyear study period and compared with those of conventional protocol available in the literature, as well as with previous studies based on a similar concept. Clinical criteria for union were the ability of the patient to bear weight on the injured limb and perform activities of daily living, with no pain at the fracture site on palpation and physical stress. Radiological bridging of at least three cortices on standard AP and lateral views, with partial obliteration of the fracture line, was taken as a reliable criterion for fracture healing. 
After confirmation of union, the external fixator rods alone were removed with the pins left in place, and the patients were then instructed to bear full weight. If there were no symptoms or pain, the pins were subsequently removed after four days. Patients with persistent pain at the fracture site and with no evidence of callus formation at six months follow-up were labelled as delayed union or non-union and were planned for a second-stage surgery. Functional and social outcomes were further documented based on subjective limitation of the activities of daily living, such as household work, family and leisure activities, along with self-care. They were graded as having no difficulty, having some difficulty, or having an inability to perform these activities. RESULTS Four patients (grade 3B) were lost to follow-up within a month of enrolment, and one patient (grade 3C), with injury to the tibio-peroneal trunk presenting after five days of injury, underwent primary amputation. The remaining 33 patients were included in the analysis (Table II). Immediate soft-tissue cover was achieved in 24 patients with distant free muscle flaps (15 anterolateral thigh, five latissimus dorsi, two gracilis and two radial artery-based forearm flaps); in four patients with local rotational muscle flaps (three gastrocnemius, one hemi-soleus); and in five patients with fasciocutaneous flaps (two perforator-based, two cross-leg and one reverse sural artery-based) (Fig. 1-3). The average duration of surgery was 10.7 hours. Seven patients who presented later than 24 hours after injury were also provided with an immediate flap cover at the time of initial surgery. Eighteen patients had an associated arterial injury, six to the tibio-peroneal trunk, eight to the posterior tibial artery and four to the anterior tibial artery, which was repaired urgently after initial fracture stabilisation, before proceeding to wound coverage. Vascular surgeons carried out all these repairs below the level of the popliteal trifurcation (Fig. 4 and 5).
Bony stabilisation was achieved using simple uniplanar external fixator constructs in 14 patients, hybrid multiplanar constructs with ring and tubular rods in 11 patients, and T-type bi-planar constructs with convergent pins in 8 patients. Early fixator removal along with a secondary stabilisation procedure was done in 8 patients who had an initial joint-spanning fixation with bone loss, after soft tissue healing was obtained. Limb salvage and union were achieved in all 33 patients at the end of the five-year study period. Average time to union was 40.3 weeks (16-88), which was comparable to the literature-based results of the standard protocols (30-50 weeks) 12,13 , with no statistically significant difference noted. Fifteen patients (45.4%) united with only a single major surgical procedure utilising primary external fixators as the definitive fixation (Fig. 6, 7 and 8). Flap failure was seen in three patients (9%) within a week of surgery, necessitating revision coverage with a cross-leg flap as a salvage procedure; the other 30 flaps settled in due course. Superficial infection was seen in eight patients (24.2%), which resolved with the aggressive postoperative dressing regimen of microbiological culture-based antibiotic usage, foam/hydrogel/silver dressings and negative pressure wound therapy (NPWT) in consultation with the plastic surgeons. Pin tract infection was seen in two patients (6%), who had to undergo antibiotic infiltration and minor debridement. Deep infection was seen in four patients (12.1%), with Staphylococcus aureus isolated in two, and Klebsiella and Pseudomonas in one patient each. Of these four patients, two with a united fracture but a chronic discharging sinus refused any further intervention; the remaining two (6%) were managed by debridement and sequestrectomy, followed by Ilizarov reconstruction with bone transport for a final union.
These two patients had grossly contaminated wounds with extensive degloving at the time of initial presentation. Both superficial and deep infection rates in our study were similar to the results of the standard published protocols 14,33 without any statistically significant difference. Sixteen patients (48.4%) underwent secondary stabilisation procedures with plating, ILN or Ilizarov reconstruction. Eight of these patients had a neurovascular deficit at initial presentation. Bone grafting with fibula or iliac crest was required in ten patients. Outcomes of various surgical procedures are shown in Fig. 9. Malunion with varus deformity (>10°) was seen in two patients and procurvatum deformity (>20°) in one patient. However, malunion rates (9%) in our study were significantly lower than those reported in the literature using standard protocols (33.3%) 24,33 . Shortening >2.5cm was seen in three patients with comminuted fractures and severe bone loss from the initial injury. Using the Johner and Wruhs criteria, 19 patients had excellent to good outcomes (57.5%), six patients had fair outcomes (18.1%), and eight patients had poor outcomes (24.2%) (Table III). A total of 23 patients (69.6%) faced occasional difficulties in carrying out the activities of daily living of household work, family or leisure activities. None of the patients was bedridden or had severe pain at the fracture site. DISCUSSION Severe open tibial shaft fractures are a major management challenge, especially in developing countries with limited access to tertiary care facilities. A major advance in management came in 1973 with the introduction of microvascular free flaps by Daniel and Taylor 34 . Godina 31 brought a new dimension to the treatment of these injuries by advocating early free tissue transfer within five days of trauma. This reduced the time to union and the incidence of infection 35,36,37 .
It should be emphasised that the timing of early soft tissue reconstruction is still debatable, with some studies advocating coverage within three to five days of injury 29,34,37,38,39 and others favouring immediate wound cover at the time of initial surgery 21,40 . Our five-year study supported the observation that if the patient were haemodynamically stable, delay in soft tissue cover was unnecessary, as it could lead to additional soft tissue loss and a further increase in the chances of wound contamination. Hence, there should be aggressive surgical management to tackle these complex injuries without delay 41 . As local flaps have a four times higher risk of wound complications than free flaps 42 , we preferred distant free muscle flaps (24) in the majority of our patients, followed by fasciocutaneous (5) and local rotational muscle flaps (4) for immediate wound cover. Our study showed that this aggressive management of severe open tibial shaft fractures was an effective modality, with favourable outcomes in the majority of patients. We accept that this approach was radical and that immediate wound coverage along with debridement and initial fixation in the same sitting had many potential complications 2,35,36,38,42,43,44 . However, due to financial as well as logistic issues, including the long operation wait-lists endemic in underdeveloped countries, this single-stage approach would be better suited for overcrowded tertiary care centres in developing countries. Recent studies [45][46][47] also emphasised the importance of single-stage definitive ortho-plastic reconstruction in severe open tibial fractures, leading to good outcomes and significantly decreased infection rates. Our patients presented at a mean time of 15.8 hours from injury to our centre and were subsequently taken to the operation theatre within an hour of arrival as a priority.
All cases were managed with external fixation, wound coverage and vascular intervention, if needed, at the time of the initial surgical intervention, using a single-stage standardised protocol by a multidisciplinary team of orthopaedic, plastic and vascular surgeons. We compared the results of our five-year study with those of conventional standard protocols advocating serial debridement and delayed wound cover 11,12,14,48 and with a study, with a follow-up of 46 months, advocating early wound coverage along with primary internal fixation. Forty per cent of their cases underwent a single definitive surgical procedure of emergent internal fixation and flap coverage. Their infection rates (6.1% vs 12.1%) and functional outcomes (joint stiffness, union rates and pain) were comparable to our study. A slightly higher initial flap failure rate requiring revision flap surgery in our study (9% vs 3.5%) could be due to gross contamination combined with a delayed presentation in many of our cases. Nevertheless, all flaps settled in due course. Toia et al 50 retrospectively compared the results of a combined orthoplastic versus a staged protocol and concluded that external fixation and free flaps could be successfully integrated to give better outcomes as compared to the staged protocol. Their average time to union was ten months, as it was in our study, and infection rates of 17% and 12.1% were comparable in both studies. Primary definitive external fixation had been used successfully as a treatment modality in the literature 30,51 with good union rates and fewer complications, comparable to our study; however, a combined protocol defining the best choice of fixation and the timing of wound coverage in these severe injuries was still missing in the literature.
It was important to note that we preferred the low-cost but equally effective primary modality of stable external fixation in all our cases, which provided similar long-term results, especially when used as a primary definitive surgical procedure in 15 patients (45.4%). Six of these patients also underwent vascular repair for Grade 3C injury. Excellent to good outcomes were seen in the majority of our patients (57.5%). We also encountered problems associated with external fixators, like pin tract infection (6%) and malunion (9%). These warranted minor surgeries but overall had no impact on fracture healing or deep wound infections. Factors associated with an increased risk of infections and other complications following open tibial fractures included Grade 3B/C injuries, gross contamination and systemic comorbidities 52 . Prevention of infection and early fracture healing depended on the adequacy of the debridement, targeted antibiotic usage, stable skeletal fixation 53 and immediate obliteration of the dead space by a healthy soft tissue cover. Keeping these principles in mind, a team of surgeons using their clinical skills at the first stage itself, as part of a clear, standardised management protocol, played a major role in guiding these severe injuries on the road to favourable outcomes. The relevance of this study arose from the fact that a long-term prospective study from the developing countries, reporting outcomes of severe Grade 3B/C open fractures managed with a single-stage standardised "fix and flap" protocol using external fixators and immediate wound coverage, was missing in the literature. Most of our problems were associated with the delayed presentation of patients (mean time 15.8 hours), extensive degloving of soft tissues (33%), associated neurovascular deficit (54.5%) and gross contamination.
Despite these factors, this single-stage management protocol for complex fractures conventionally treated by multiple surgeries would be a good option at overcrowded tertiary care centres with long waiting lists for surgery.

CONCLUSION

The use of a single-stage radical protocol of emergent "fix and flap", using external fixation and immediate wound coverage as the primary modality, gives good results in complicated scenarios of delayed presentation, gross contamination, neurovascular deficit and extensive degloving. This single-stage protocol gave excellent to good outcomes in the majority of patients and, requiring only a single major intervention, may be a better management strategy in the overcrowded tertiary care centres of developing countries.
Surfacing Misconceptions Through Visualization Critique

Students of visualization come to formal education with an abundance of personal experience. However, one's exposure to graphics through media and education may not be sufficiently diverse to appreciate the nuance and complexity required to design and evaluate effective representations. While many introductory courses in visualization address best practices for visual encoding of data based on perceptual characteristics, as cognitive scientists, we place equal value on representational decisions based on communicative context: how the representation is intended to be used. In this pedagogical activity, we aim to surface learners' preconceived notions about what makes a visualization effective. Here we describe the structure and context of an introductory-level visualization activity and how it might be conducted in individual or group settings, report our experience with the common misconceptions the activity can reveal, and conclude with recommendations on how they might be addressed.

INTRODUCTION

Before misconceptions can be corrected, they need to be identified. As instructors of introductory courses on Information Visualization, we are astonished when we learn that, despite our best efforts, students have failed to grasp fundamental concepts about the variety of purposes that visualizations might serve. Often, introductory courses in visualization address best practices for visual encoding of data based on perceptual characteristics. This is an effective starting point for visualization design. However, rankings of visual variables and perceptual discriminability do not tell the whole story of our field.
When designing and evaluating representations, we want our students to deeply consider the context of artifacts, specifically with respect to: (1) the audience (their prior knowledge, beliefs and skills), (2) the task: what the reader is expected to do with the representation, and (3) the communicative context: the designer's goal in communication, be it to inform, persuade, mislead, record, or educate. In our experience, it is common for novice learners to persist with narrow assumptions about "good" visualizations being dense, easy-to-read reflections of "truth" for some set of data. Such misconceptions are often implicit, largely unquestioned, and deeply held. They arise as preconceived notions rooted in everyday experience, conceptual misunderstandings from inaccurate application of prior knowledge, and factual misconceptions in the form of unchallenged beliefs. Best practices in science education suggest that to supplant misconceptions with correct conceptual models, the inaccurate ideas must be identified and confronted [3]. Here we present a pedagogical activity that prompts learners to externalize their ideas about what makes visualizations effective or ineffective by preparing a design critique. Learners are challenged to explore the context of a visualization artifact and make inferences as to its audience and intended purpose, before evaluating the effectiveness of its design. Importantly, students are directed to give evidence to support their explanations, thereby prompting them to explore the sources of their prior knowledge. By prompting students to make these ideas explicit, instructors can develop targeted examples to counter the most common misconceptions in follow-up instruction.

ACTIVITY DESCRIPTION

Synopsis: Students are asked to find two visualizations from educational materials or news media.
Using the context of where the graphics are located, students make inferences as to the designer's intended audience and purpose. Students then provide an objective critique of the effectiveness of each visualization.

Preparation

As this activity is designed to elicit prior knowledge of visualization concepts, it can be performed early in an instructional session, with little to no prior instruction. In the Spring of 2020 this activity was conducted as the first assignment in an online, introductory course on Information Visualization in the Department of Cognitive Science at the University of California, San Diego. The assignment was released during Week 2 (of a 10-week session), after lectures and required readings on: the history and modern context of visualization as a discipline [1], distributed cognition as a theoretical framework [2], information and representation [6], and visual variables and encoding [6].

Learning Goals

1. The student's attention is brought to the context in which visualizations are embedded in communication media, such as news articles, textbooks, and scholarly literature.
2. The student develops awareness of the decisions a designer must make when developing a graphic to support a communicative goal.
3. The student makes explicit their criteria for assessing a representation as effective with respect to its communicative goal.

Context

Asynchronous Activity. Structured as a graded assignment, we asked the 38 students in our course to complete the activity independently over the course of one week. We expected students would spend 1-2 hours on the assignment, including searching for and critiquing their chosen visualizations. Students posted their responses on a course discussion forum, and were encouraged to review and comment on the submissions by classmates.
Two weeks later, students completed a follow-up assignment where they reflected on their initial critiques and posted a response describing how their understanding of effective visualization had evolved. We expect the follow-up assignment took students between 15-30 minutes of reflection and writing.

Synchronous Activity. With minimal modification, the assignment can be conducted as an interactive, synchronous activity. In this case, we recommend providing learners with a small corpus of example visualizations, curated by the facilitator to include graphics that range in efficacy along any dimensions the instructor wishes to reinforce, based on the context of the instructional session and the background knowledge of participants. Depending on the number of participants, it may be most effective to have students form small groups where they collectively construct a critique, which can then be shared and discussed with the class. In the case of live sessions, it is necessary for facilitators to anticipate the most common misconceptions learners are likely to surface, and to have prepared strategic counter-examples to address them.

Instructions

The following instructions are provided to learners.

Background. The use of visualization is pervasive in media: explanatory diagrams in magazines and online articles, graphs describing the projected impact of a new state budget, new experimental data plotted against theoretical expectations, demographic information, and, of course, information on current events. In each case, the author of the visualization tries to convey a point of view by choosing which data to present, emphasizing some aspects of the data while minimizing others. The result of these decisions can vary widely, from informative and enlightening, to confusing, or misleading.

Requirements. Select two visualizations from any source (print or electronic).
For each visualization, consider its context in order to make a subjective judgement as to the designer's purpose in creating it. Then, develop a critique: an objective assessment of how well the visualization functions with respect to its intended purpose.

• You should aim to select one visualization you judge to be effective, and one you judge to be ineffective.
• You should find visualizations "in the wild", rather than in texts or blogs on information visualization, where the graphic has already been analyzed and/or critiqued.
• In your description of each visualization, address the questions: Who is the intended audience? What is the purpose? What type of data is shown? How is this data represented? What was the goal of the designer in representing the data in this way?
• In your critique of each visualization, use the concepts covered in the course to date to evaluate how well you think the design of the graphic functions in relation to its purpose. Are the data encoded effectively? What is the message the reader will likely take away? How are cognitive and perceptual principles being applied (or violated)?

EVALUATION: MISCONCEPTIONS, REVEALED

We developed this activity as a formative, rather than summative, assessment, aimed at scaffolding learners in their exploration of the purposes visualizations might serve, and their explanation of the criteria on which they should be evaluated. Although we did provide grades on the assignment (based primarily on effort, completeness of explanation, and citations to relevant course literature), the most impactful feedback to learners was provided in the lesson that followed the assignment, which directly addressed the most common misconceptions present in student critiques through (theoretical) reinforcement of concepts and targeted examples.
We conducted this as a remote, synchronous lecture in our online course, though in a live setting this could be conducted as a "debriefing" session, provided the facilitator has anticipated the most likely misconceptions. In the sections that follow, we describe the most common misconceptions revealed by our Spring 2020 course, and recommendations for how they can be addressed.

Good visualizations are (immediately) easy to understand.

The most pervasive misconception we observed in student critiques was the idea that to be effective, a visualization must be immediately easy to understand. This intuition seems to follow directly from popular discourse about the purpose of visualizations being to "show" or "reveal" data, and the adage that "a picture is worth one thousand words". These ideas are evident in responses like:

• "The visualization is good because it is very intuitive and easy to understand."
• "This visualization is good because it is easy to make sense of the data at a glance..."
• "The external representation of data isn't simple enough for someone to read and understand instantly..."
• "There are too many visual marks to encode the visualization in a short amount of time."
• "I would consider this a bad visualization because at first glance there is just too much going on."

While each of these statements might have been accurate with respect to the particular visualization the student was critiquing, in their formulation of criteria the learners have revealed that their conceptual model of efficacy does not appropriately rely on identification of the intended audience and task. At the heart of this misconception are false assumptions about general audiences and quick-reading tasks. Not all visualizations are intended to be read by novices, or the general public. Nor are all visualizations intended for the purpose of "informing" or "quick reading".
We want students to understand that a seemingly complex visualization, one that is not easy to understand at first glance, might be very effective if the intended audience and task require some degree of expertise and specialized prior knowledge of the domain or graphical formalism. To address this misconception, we recommend presenting learners with example visualizations coming from the same information domain, with different analytical purposes. In the domain of weather, for example, one might contrast: (1) a 10-day weather forecast designed to inform about temperature and rain predictions, to help the general public make decisions about upcoming activities, with (2) a meteorological visualization such as a spaghetti plot [5], designed to support meteorologists interpreting surface fronts for forecasting of cyclones. The key concepts to reinforce are: (1) the ease of readability should be evaluated with respect to the prior knowledge of the intended audience, and (2) the encoding structure should be judged with respect to the computational requirements of the expected task. An appropriate theoretical text on this concept is Stephen Palmer's "Fundamental Aspects of Cognitive Representation" [4], and in particular, the explanation of figure 9.1 (page 263).

The intended reader for a visualization is people who are interested in [topic of the visualization].

To evaluate the communicative efficacy of a visualization, we teach students they must consider the intended audience of the graphic. Making inferences about an intended audience, however, proved to be difficult for our students, as we saw an over-emphasis on the topic (or domain) of the data. For example:

• Regarding a graphic depicting COVID-19 diagnosis rates: "The intended audience is anyone who wants information about the current situation regarding COVID-19 because this news article can be viewed online without a subscription."
Although the student has correctly noticed that the article is available without subscription, they fail to note the content of the news article, which focused on the economic impact of COVID-19. In this case, the graph was being used to support the journalist's arguments about economic impact in different countries.

• Regarding a graph depicting causes of death worldwide, from https://ourworldindata.org/: "The intended audience would be the entire world as it pertains to everyone."

The student has confounded the potential audience to whom the graphic might be relevant with the likely audience for whom the author designed the graphic. Conversely, in critiquing a geothermal map depicting the crustal thickness of areas of the moon, one student concluded that the graphic was "most likely intended for astronomers, [or] researchers." In this assertion, the learner reveals they have considered not only the data domain of the visualization, but also the media in which it was embedded, making reasonable inferences about what subpopulation of readers interested in the topic the designer likely had in mind when designing the visualization. In another example, a student makes a reasonable inference about the intended audience based on the terminology used in labelling a figure (from The Economist magazine), but disagrees with the design decision, suggesting instead that the graphic would be more effective if designed for a general audience. "From the presentation of the line graphs, it is evident that the intended audience is not made to be public-friendly but targeted towards a specific group of people that are familiar with economic terms (e.g. maximum interest), such as people in the business field. Although one aspect that this visual lacks and would have made the visualization more effective is that it should have been targeted to be understood by everyone."
A sophisticated understanding was demonstrated by a student who noted that a choropleth map of COVID-19 cases within Cuyahoga county: (1) appeared in a locally-distributed newspaper and (2) included district labels that would only be relevant to locals. The student correctly inferred that the audience was highly targeted and likely aimed at helping county residents make informed decisions about social distancing and other precautions against the virus. To confront the misconception that the audience of a graphic is the population of readers interested in the data being depicted, in follow-up lessons we emphasized the importance of communicative context: Where is the artifact located, and what can that location tell us about the intention of the designer? It can be effective to draw a contrast between conceptual artifacts, like diagrams used to teach a concept, and analytical artifacts, like statistical graphics used to communicate results in a scholarly paper. In turn, we contrast these with an example of a persuasive graphic, where decomposition of the encoding structures and the choice of data to include (and exclude) directly supports the narrative structure of the media in which the graphic is embedded. In each case we prompt students to reflect on how the communicative context serves to narrow the potential audience for whom the graphic needs to be effective.

"The purpose of the visualization is to convey [the data values in the visualization]."

This common misconception is similar to the last, but subtly different in an important way. Much like novices might presume that the intended audience of a visualization is as broad as those interested in the topic, they might similarly presume that the purpose of the visualization is to convey the data in the graphic. The important component here regards what about the data the author wishes to convey.
Wainer [7] fruitfully distinguishes between "levels of reading", where a first-order reading involves extracting the value of an individual datum, and second-order readings involve observing the relationships between data points: perceiving trends. In our experience, novices often presume that for a visualization to be effective, it must readily afford first-order readings.

• Regarding a geographic heatmap visualization of COVID-19 cases in the United States: "If the graph had used different colors maybe the audience can decipher the information better instead of the immediate thought of this [area] is bad".

Here, the student has presumed that in order to be effective, the graphic should support a first-order reading, allowing the user to extract the precise number of cases in a particular area. In fact, the context of the news article would suggest that the purpose of the graphic was to reinforce the author's narrative that Florida was a "hot mess" of COVID-19 cases prior to Spring Break. One can confront this misconception by directly teaching the "levels of reading" concept (see [7]). It is also useful to present students with examples of statistical graphics from news articles that are aimed at supporting a narrative (the graphic quickly tells a single story), vs. those that are designed as exploratory (numerous outstanding examples from the New York Times Graphics team) and do not tell a single story "at a glance", but rather afford multiple readings.

CONCLUSION

By conducting this activity during our Spring 2020 introductory course on Information Visualization, we successfully brought students' attention to pervasive misconceptions they held about the relationship between the purpose, task, and audience of communicative graphics.
We scaffolded their exploration of visualizations in news and educational media to help them make reasonable inferences about designers' intentions, and prompted them to make explicit their understanding of the criteria on which visualizations should be evaluated. This allowed us to directly challenge misconceptions in our debriefing instructional sessions. In a follow-up assignment, we asked learners to revisit their responses and "critique" their critiques, reflecting on how their understanding of visualization purpose had evolved. We hope the structure of this activity, and the description of common misconceptions, will be useful for fellow instructors teaching the nuanced fundamentals of how visualizations can succeed as communicative artifacts.
Nitrogen-doped porous carbon materials generated via conjugated microporous polymer precursors for CO2 capture and energy storage†

Heteroatom doping and well-tuned porosity are regarded as two important factors of porous carbon materials (PCMs) for various applications. However, it is still difficult to tune a single variable while keeping the other factors unchanged, which restricts rational and systematic research on PCMs. In this work, an in situ nitrogen-doped porous carbon material (NPCM-1) and its non-doped analogue PCM-1 were prepared by direct pyrolysis of conjugated microporous polymer precursors (TCMP-1 and CMP-1, respectively) with the same skeleton structure. It was found that the CO2 adsorption capability of the PCMs was significantly enhanced compared with their CMP precursors thanks to the optimized pore configuration. Meanwhile, NPCM-1 exhibits much better performance in supercapacitive energy storage than PCM-1 even though these two PCMs possess comparable porosity properties, which is probably due to the much improved electrical conductivity and wettability with the electrolytes resulting from the nitrogen doping. Thus, this work provides valuable insight into the design and preparation of high-performance PCMs for CO2 capture and energy storage applications.

Introduction

Porous carbon materials (PCMs) have been extensively studied in a range of energy- and environment-related applications due to the abundance of raw materials, thermal and chemical stability and structural diversity. 1-3 Traditionally, PCMs are prepared by physical or chemical activation of biomass feedstocks, e.g. wood, coconut shells and rice husk. 4,5 Recently, newly developed carbon materials such as carbon nanotubes 6 and graphene 7 have also been used as precursors to prepare PCMs by activation.
Heteroatom doping has been found to be a rational method to improve the application properties of PCMs through the enhancement of interactions between PCMs and the adsorbates. 8,9 However, PCMs prepared by traditional activation methods often suffer from broad pore size distributions 10 and the difficulty of introducing precisely located heteroatoms onto the skeletons of PCMs, 11 which restricts their efficient applications in both gas adsorption and energy storage. 12 The rapid development of microporous organic polymers (MOPs), which possess intrinsic microporosity and tunable chemical structures, has provided another option for the construction of PCMs. 13 A variety of MOPs including porous aromatic frameworks (PAFs), 14 conjugated microporous polymers (CMPs), 15 covalent triazine-based frameworks (CTFs) 16 and hypercrosslinked polymers (HCPs) 17 have been used as pyrolysis precursors for the preparation of PCMs with precisely controlled chemical and porous structures for various applications. Among them, CMPs are particularly interesting thanks to their unique properties derived from the combination of extended conjugation with permanent microporosity. 18 Through direct carbonization, the electrical conductivity of CMPs can be greatly enhanced and their pore structures can be further tuned as well. 19 Also, CMPs synthesized from heteroatom-containing monomers can be used as carbonization precursors to prepare heteroatom-doped PCMs, 20 in which the concentration, location and configuration of the heteroatoms can be fine-tuned by judicious selection of monomers. 18 Although there have been a few reports related to CMP-derived PCMs, 21,22 the effects of porosity and heteroatom doping on their applications in CO2 capture and energy storage have not been systematically studied.
In this context, we prepared two porous carbon materials, PCM-1 and nitrogen-doped NPCM-1, by the carbonization of CMP precursors with the same skeleton structure and different chemical compositions. The separate influence of either porosity or nitrogen doping on the CO2 capture and energy storage performances of the PCMs is studied in detail. It is found that carbonization is an efficient method to optimize the porosity of the CMP materials, which is highly advantageous for the improvement of CO2 adsorption capability, and that the supercapacitive energy storage performance of the PCMs can be greatly enhanced by the in situ nitrogen doping strategy.

Synthesis of CMP-1

To a flame-dried 3-necked flask under Ar atmosphere were added 1,3,5-triethynylbenzene (150 mg, 1 mmol), 1,3,5-tribromobenzene (315 mg, 1 mmol), tetrakis(triphenylphosphine)palladium (58 mg, 0.05 mmol) and copper(I) iodide (19 mg, 0.1 mmol). The mixture was evacuated and purged with Ar three times before DMF (30 mL) and triethylamine (10 mL) were added into the flask. The dark brown mixture was heated to 150 °C and stirred for 72 h under Ar. The mixture was cooled to room temperature and the insoluble precipitate was filtered and washed with distilled water, ethanol, acetone, CHCl3 and methanol. The product was further purified by Soxhlet extraction with methanol, THF and CHCl3 for 24 h each. The product was then dried in a vacuum oven at 100 °C overnight to yield a light yellow powder (yield: 204.2 mg, 92%).

Preparation of PCM-1

CMP-1 in a ceramic boat was put into a tube furnace, evacuated and purged with N2 three times at room temperature, and then heated to 700 °C at a heating rate of 5 °C min⁻¹. The sample was kept in the furnace for an additional 2 h before being cooled down to room temperature to give PCM-1 as a black powder.

Preparation of NPCM-1

The preparation of NPCM-1 was similar to that of PCM-1, but TCMP-1 was used as the precursor.
Preparation of electrodes

The electrode material was prepared by grinding active material (35 mg, 70 wt%), conductive additive (acetylene black: 10 mg, 20 wt%), and binder (polyvinylidene fluoride in N-methylpyrrolidinone solvent: 5 mg, 10 wt%) in a mortar. The Ni foam current collector was cast with the slurry, dried under vacuum at 120 °C for 5 h and then cut into circular working electrodes with a diameter of 12 mm.

Gas sorption analysis

Surface areas and pore size distributions of the samples were measured by N2 adsorption and desorption at 77 K using a BELSORP Max (BEL Japan Inc.). Samples were degassed at 120 °C for 12 h under vacuum before analysis. The surface areas were calculated using the BET model in the pressure range P/P0 from 0.05-0.1. The total pore volume was determined at a relative pressure of 0.99. Pore size distributions were derived from the isotherms using the nonlocal density functional theory (NL-DFT) pore model for carbon with cylindrical and slit pore geometries. CO2 isotherms were measured at 273 K on a BELSORP Max (BEL Japan Inc.).

Electrochemical measurements

All electrochemical performances were measured using coin-type cells of 2032 size with 6 M potassium hydroxide (KOH) as the electrolyte. The electrochemical measurements were conducted by cyclic voltammetry (CV), galvanostatic charge/discharge (GCD) experiments and electrochemical impedance spectroscopy (EIS), using a CHI660E electrochemical workstation. Cyclic voltammograms were obtained over the potential range of −1 to 0 V at scan rates from 5 to 500 mV s⁻¹. Galvanostatic charge/discharge experiments were performed at current rates from 0.1 to 10 A g⁻¹ in the voltage range of −1 to 0 V. EIS measurements of the electrodes were recorded by applying a sine wave with an amplitude of 5.0 mV over the frequency range from 100 kHz to 10 mHz. Cycling performances of the cells were measured by a LAND CT2001A battery test system.
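The BET surface-area determination described in the gas sorption analysis above (a linear fit over the relative-pressure window P/P0 = 0.05-0.1) can be sketched as follows. This is a hedged illustration: the isotherm points, the plain least-squares fit, and the helper name `bet_surface_area` are invented for demonstration and are not taken from the paper or the instrument's software.

```python
# Hedged sketch of a BET surface-area fit over the relative-pressure
# window P/P0 = 0.05-0.1, as described in the gas sorption analysis.
# The isotherm points below are invented for illustration only.

N_A = 6.022e23        # Avogadro constant (1/mol)
SIGMA_N2 = 0.162e-18  # cross-sectional area of one adsorbed N2 molecule (m^2)
V_MOLAR = 22414.0     # molar gas volume at STP (cm^3/mol)

def bet_surface_area(p_rel, v_ads):
    """Apparent BET surface area (m^2/g) from an N2 isotherm slice.

    p_rel -- relative pressures P/P0 (ideally within 0.05-0.1)
    v_ads -- adsorbed volumes at those pressures (cm^3 STP per g)
    """
    # Linearised BET form: 1/(v*((P0/P)-1)) = slope*(P/P0) + intercept
    ys = [1.0 / (v * (1.0 / p - 1.0)) for p, v in zip(p_rel, v_ads)]
    n = len(p_rel)
    mx = sum(p_rel) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(p_rel, ys))
             / sum((x - mx) ** 2 for x in p_rel))
    intercept = my - slope * mx
    v_m = 1.0 / (slope + intercept)        # monolayer capacity (cm^3 STP/g)
    return v_m * N_A * SIGMA_N2 / V_MOLAR  # convert to m^2 per gram

# Invented isotherm points, chosen to land near the apparent surface
# areas reported for the four materials (roughly 610-718 m^2/g):
p = [0.05, 0.06, 0.07, 0.08, 0.09, 0.10]
v = [150.0, 152.0, 154.0, 155.5, 157.0, 158.5]
print(round(bet_surface_area(p, v)))  # a few hundred m^2/g
```

In practice the instrument software performs this fit; the sketch only shows why the narrow 0.05-0.1 window matters, since the linearised BET form holds there for microporous carbons.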
The specic capacitance C s (F g À1 ) of electrode materials was calculated by the discharge curve according to C s ¼ 4 Â C cell ¼ 4 Â IDt/mDV. The specic energy density E (W h kg À1 ) and specic power density P (W kg À1 ) were calculated by the following equations, respectively: is the experimentally determined specic capacitance, I (A) is the discharge current, and Dt (s), m (mg), and DV (V) are the total discharge time, the mass of active materials on the two electrodes, and the potential drop during discharge, respectively. Results and discussion As shown in Scheme 1, porous carbon material PCM-1 and its nitrogen doped counterpart NPCM-1 were prepared by the pyrolysis of two conjugated microporous polymer precursors possessing the same skeleton structure and different chemical compositions. The CMP precursors CMP-1 and TCMP-1 were both synthesized through A3 + B3 Sonogashira-Hagihara crosscoupling polymerization under the same reaction conditions. Different from CMP-1 which possesses a 1,3,5-benzene knot, nitrogen-rich TCMP-1 was constructed by alkyne-type monomer M1 and a 1,3,5-triazine containing monomer M3. Then, CMP-1 and TCMP-1 were carbonized at 700 C for 2 h under N 2 atmosphere, and the resulting porous carbon materials were labeled as PCM-1 and NPCM-1 respectively. The PCMs were obtained as black powders and insoluble in all solvents tested including THF, toluene, DMF, and chloroform, and were also found to be chemically stable in various aqueous conditions. Infrared spectra for the samples are shown in Fig. S1 (see ESI †). For the CMP precursors, the peaks at around 2200 cm À1 , which are characteristic of bis-substituted acetylenes, are easily detected, indicating successful cross-coupling reactions during the polymerization. 
The characteristic terminal C-C triple bond stretching vibration peak at about 3300 cm⁻¹ of TCMP-1 exhibits a much lower intensity than that of CMP-1, suggesting a higher degree of polymerization of the triazine-based polymer. 23 After heat treatment, the infrared absorption peaks of PCM-1 and NPCM-1 flatten out, suggesting the carbonization of the CMP precursors during the pyrolysis process. 16 Raman spectroscopy was used to further confirm the structures of the as-prepared PCMs (Fig. S2†). The Raman spectrum of PCM-1 presents two typical peaks, the D band at 1320 cm⁻¹ attributed to structural defects and the G band at around 1590 cm⁻¹ associated with highly ordered graphitic carbon. 24 The D band and G band of NPCM-1 shift slightly to 1329 and 1583 cm⁻¹, and the ID/IG ratio of NPCM-1 increases to 1.08 from 0.83 for PCM-1, indicating that the introduction of nitrogen could increase the degree of defects in the porous carbon structure. 25 The X-ray photoelectron spectroscopy (XPS) spectra of the samples are presented in Fig. 1a. The O 1s peak located at 532 eV can mostly be assigned to absorbed atmospheric H2O and O2 molecules. 26 The N 1s peak can only be observed in TCMP-1 and NPCM-1, verifying the existence of nitrogen atoms in these two samples (5.5 at% for TCMP-1 and 1.5 at% for NPCM-1). Compared with TCMP-1, the nitrogen content of NPCM-1 decreases dramatically after carbonization, and the decrease of the C-N bond in the C 1s spectra also corroborates the drop in N content (Fig. S3†). Further analysis of the N 1s spectra of the two samples shows that the nitrogen configurations are different for TCMP-1 and NPCM-1 (Fig. 1b and c). The single nitrogen peak in TCMP-1 splits into three peaks in NPCM-1, including pyridinic N (398.8 eV), graphitic N (401.2 eV) and a small oxidized N component (405.6 eV). 16 These results show that the nitrogen concentration and configurations can change during the pyrolysis process.
The emergence of graphitic N and pyridinic N in NPCM-1 can be attributed to structural reconstruction and electron redistribution during carbonization and is potentially beneficial for supercapacitive energy storage. 27 The surface morphology and crystallinity of the CMP and PCM samples were investigated by scanning electron microscopy (SEM) and X-ray diffraction (XRD), respectively. As shown in the SEM images (Fig. 2, S4 and S5†), a three-dimensional network with interconnected pores can be found in CMP-1 and PCM-1 (Fig. 2a and b), while a bulk structure with a relatively smooth surface is observed in TCMP-1 and NPCM-1 (Fig. 2c and d). Powder XRD patterns of all the samples show a broad peak at approximately 22.5° (Fig. S6†), indicating the amorphous structures of both the CMPs and PCMs. 7 These results suggest that the macroscopic structure and morphology of the CMPs are largely maintained after the pyrolysis process. The surface area and pore structure of the samples were evaluated by nitrogen adsorption/desorption measurements at 77 K. The isotherms are shown in Fig. 3a and the porosity data are listed in Table 1. The four samples show comparable apparent BET surface areas ranging from 610 m² g⁻¹ (TCMP-1) to 718 m² g⁻¹ (NPCM-1). All samples give rise to a combination of type I and type IV adsorption isotherms according to the IUPAC classifications, 28 indicating the existence of both micropores and mesopores in the networks. 29 It is interesting to note that the shapes of the adsorption isotherms of PCM-1 and NPCM-1 remain almost unchanged compared with those of their carbonization precursors, CMP-1 and TCMP-1, respectively, although the adsorption quantities display some slight variations.
The fact that the porosity of the carbonized materials corresponds well with that of their CMP precursors presents a significant advantage: it should be possible to prepare PCMs with fine-tuned porosity by selecting proper CMP precursors, whose porosity can in turn be precisely controlled through the rational selection of synthetic monomers. 30 Pore size distribution curves of the four samples, calculated using nonlocal density functional theory (NL-DFT), are shown in Fig. 3b and c. Compared with the CMP precursors CMP-1 and TCMP-1, which possess two micropore diameters centering around 0.8 nm and 1.4 nm and a proportion of mesopores around 2.0 to 5.0 nm, both of the carbonized materials PCM-1 and NPCM-1 exhibit a predominant micropore with a smaller pore size centering around 0.5 nm and far fewer mesopores. The level of microporosity in the materials is assessed by the ratio of micropore volume to total pore volume (V0.1/Vtot). Accordingly, the carbonized PCMs show higher V0.1/Vtot values (0.48 for PCM-1 and 0.69 for NPCM-1) than their CMP precursors (0.40 for CMP-1 and 0.56 for TCMP-1). The main conclusion of the porosity analysis is that PCMs with comparable porosity can be prepared by the carbonization of CMP precursors with the same skeleton structure, and that the pyrolysis process is an efficient way to induce further micropore development, 31 which could be of great benefit for the adsorption of small gases such as H₂ and CO₂. 32 Moreover, the fact that non-doped and N-doped PCMs with similar porosity can be obtained makes it possible to study the exact effect of N-doping on the application performance of PCMs, since traditional N-doping processes often alter the porosity as well. The CO₂ uptake of the samples was measured at 273 K and the adsorption isotherms are shown in Fig. 3d.
The 1,3,5-triazine-based conjugated microporous polymer TCMP-1 shows a higher CO₂ uptake (2 mmol g⁻¹) than the 1,3,5-benzene-based CMP-1 (1.6 mmol g⁻¹) at 273 K and 1 bar, although the two polymers possess similar porosity. This result agrees with our previous findings that the introduction of N-rich units can enhance the interactions between CO₂ molecules and the polymer network, increasing the CO₂ uptake capability. 23 Compared with their CMP precursors, the carbonized materials PCM-1 and NPCM-1 show much higher CO₂ capture capabilities of 3.6 mmol g⁻¹ and 3.9 mmol g⁻¹, respectively, at 273 K and 1 bar. This significant improvement in CO₂ capture can be ascribed to micropore redevelopment and the decrease in pore size during the pyrolysis process. 32 Thus, from the above results we can see that both nitrogen doping and the optimization of the pore structure are beneficial for enhancing the CO₂ capture of the PCMs, and proper control over pore size seems to be particularly important. This could provide a reference for the design of materials for high-performance CO₂ capture. The supercapacitive energy storage performance of the samples was evaluated in a two-electrode symmetric supercapacitor system with 6 M KOH as the electrolyte. As shown in Fig. S7†, the cyclic voltammetry (CV) curves of the samples at different scan rates (5 to 500 mV s⁻¹) between −1 and 0 V exhibit a typical quasi-rectangular shape with good symmetry, indicative of a double-layer capacitive nature. 33 CV curves of the four samples at the same scan rate of 100 mV s⁻¹ are shown in Fig. 4a for better comparison. Compared with the carbonized materials PCM-1 and NPCM-1, the CV curves of the CMP precursors CMP-1 and TCMP-1 exhibit almost negligible integrated areas, corresponding to much lower specific capacitances, which is probably due to the poor electrical conductivity of the pristine CMPs.
The CV curve of NPCM-1 shows a larger integrated area than that of PCM-1 at the same scan rate, suggesting a higher specific capacitance for the N-doped material. Galvanostatic charge–discharge (GCD) measurements were carried out to further evaluate the electrochemical performances of PCM-1 and NPCM-1. As shown in Fig. S8†, the GCD curves of the PCM-1 and NPCM-1 electrodes at different current densities exhibit triangular shapes with high symmetry and nearly linear slopes, corresponding to ideal electrochemical double-layer capacitors and corroborating the CV curves well. 20 The specific capacitances at different current densities, calculated from the GCD curves, are much higher for NPCM-1 (e.g. 264 F g⁻¹ at 0.1 A g⁻¹) than for PCM-1 (e.g. 90 F g⁻¹ at 0.1 A g⁻¹) (Fig. 4b and c). The Ragone plot (Fig. 4d) reveals that NPCM-1 exhibits a maximum energy density of 9.0 W h kg⁻¹, two times higher than that of PCM-1 (3.1 W h kg⁻¹). The energy density of NPCM-1 remains at 3.6 W h kg⁻¹ when the power density is raised to 5.0 kW kg⁻¹, while that of PCM-1 decreases dramatically as the power density increases (0.3 W h kg⁻¹ at 5.0 kW kg⁻¹). The significant improvement in the specific capacitance and energy density of NPCM-1 compared with PCM-1 is probably due to the enhanced electron transport ability and higher ion diffusion rate induced by nitrogen doping. 19 Electrochemical impedance spectroscopy (EIS) measurements were conducted to investigate the kinetic behavior of the PCM electrodes. The Nyquist plots of PCM-1 and NPCM-1 show a small semicircle in the high-frequency region and a straight sloped line in the low-frequency range (Fig. 4e). The diameter of the semicircle corresponds to the charge-transfer resistance (RCT) of the electrode materials. 20 NPCM-1 possesses an RCT of 1.0 Ω, lower than that of PCM-1 (1.5 Ω).
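As a sanity check, the reported NPCM-1 figures are mutually consistent with the formulas given in the Experimental section. The short sketch below recomputes them; the 660 s discharge time is back-calculated from the reported 264 F g⁻¹ at 0.1 A g⁻¹ over a 1 V window, so treat the numerical inputs as illustrative rather than measured values.

```python
def specific_capacitance(i_a_per_g, dt_s, dv_v):
    """Single-electrode specific capacitance (F/g) of a symmetric
    two-electrode cell: Cs = 4 * Ccell = 4 * I*dt / (m*dV), with the
    current already normalized by the total electrode mass."""
    return 4.0 * i_a_per_g * dt_s / dv_v

def energy_density_wh_per_kg(cs_f_per_g, dv_v):
    """E = Cs * dV^2 / (8 * 3.6); the factor 3.6 converts J/g to Wh/kg."""
    return cs_f_per_g * dv_v ** 2 / (8.0 * 3.6)

def power_density_w_per_kg(e_wh_per_kg, dt_s):
    """P = 3600 * E / dt (average power over the discharge)."""
    return 3600.0 * e_wh_per_kg / dt_s

cs = specific_capacitance(0.1, 660.0, 1.0)  # 264 F/g, as reported for NPCM-1
e = energy_density_wh_per_kg(cs, 1.0)       # ~9.2 Wh/kg, close to the reported 9.0
p = power_density_w_per_kg(e, 660.0)        # ~50 W/kg at this low current density
```

At higher current densities the discharge time shrinks, which is why the Ragone plot trades energy density for power density.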
Furthermore, compared with PCM-1, NPCM-1 exhibits a more vertical curve in the low-frequency region, indicating better capacitive behavior and a lower diffusion resistance of electrolyte ions into the N-doped material. The lower overall resistance of NPCM-1 can be attributed to the improved surface wettability induced by nitrogen doping, which is beneficial for easier access of electrolyte ions to the electrode. 34 The long-term stability of NPCM-1 was investigated by cycling experiments at a current density of 1 A g⁻¹. As demonstrated in Fig. 4f, although there are some fluctuations, mainly caused by the activation process in the first 3000 cycles, the specific capacitance of NPCM-1 after 10 000 cycles remains almost unchanged and the GCD curves retain their symmetry throughout the entire test, indicating highly reversible electrochemical properties and outstanding long-term stability of the N-doped porous carbon material. Conclusions In summary, two PCMs (PCM-1 and N-doped NPCM-1) with comparable porosity were prepared by the carbonization of rationally designed CMP precursors with the same skeleton structure and different chemical compositions. The carbonized materials PCM-1 and NPCM-1 show much higher CO₂ capture capabilities than their CMP precursors thanks to the optimized pore structure. Owing to the cooperative effects of nitrogen doping and small pore size, NPCM-1 shows a high CO₂ adsorption capability of 3.9 mmol g⁻¹ at 273 K and 1 bar, which is quite promising considering its moderate specific surface area. Meanwhile, NPCM-1 exhibits much better supercapacitive energy storage performance than PCM-1 even though the two PCMs possess similar porosity, probably due to the much improved electrical and surface properties induced by nitrogen doping. Thus, NPCM-1 displays a decent specific capacitance of 264 F g⁻¹ at a current density of 0.1 A g⁻¹ and excellent cycling stability.
Overall, we can conclude that carbonization is an efficient method to significantly improve the CO₂ capture capability of CMP materials through the optimization of the pore structure, and that nitrogen doping is a highly favorable strategy to enhance the supercapacitive energy storage properties of porous carbon materials. This could provide a rational design principle for the construction of high-performance porous materials. Fig. 4 (a) Cyclic voltammograms of CMPs and PCMs at a scan rate of 100 mV s⁻¹, (b) galvanostatic charge–discharge curves of CMPs and PCMs at a current density of 0.1 A g⁻¹, (c) gravimetric capacitances (Cs) of PCM-1 and NPCM-1 at different current densities, (d) Ragone plots of gravimetric energy density versus power density for PCM-1 and NPCM-1-based supercapacitors, (e) Nyquist plots, with the inset showing an enlargement of the high-frequency region, (f) capacitance of NPCM-1 over a 10 000-cycle charge–discharge test at a current density of 1 A g⁻¹; the inset shows the charge–discharge curves of NPCM-1 during the 10 000-cycle test.
2019-04-09T13:02:10.751Z
2017-06-23T00:00:00.000
{ "year": 2017, "sha1": "3eb27760efa700c182981dfd500bac14e95d509f", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra05551j", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "776c08eeb8f82430bd32ba28a4c959715c1b5538", "s2fieldsofstudy": [ "Environmental Science", "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
235287375
pes2o/s2orc
v3-fos-license
Organic Hydroponic Farming Incorporated with Recycled Water The cultivation of plants with their roots placed in a liquid nutrient solution is referred to as hydroponics. Agriculture is the backbone of developing countries like India, yet it is hampered by a number of issues, such as manures, pesticides, small and fragmented land holdings, and the chemicals used for plant growth. Hydroponics can be a better approach to resolving these difficulties, and it also allows cultivation to go organic. Hydroponics is the fastest-growing sector of agriculture; it is the technique of growing plants in a liquid with added nutrients but without soil. The whole process can be carried out on a room terrace, on a balcony, or in a closed environment. Since the system can be extended vertically, it has the advantage that many plants can be grown in a limited space. Plant growth is monitored weekly for yield purposes. Since the flow of used and clean water for watering the plants is automated, a few essential components are required: a water level indicator, a pH sensor, an automatic water dropper, and a DC motor. These components are controlled by an Arduino board. Hence the plants are grown by an eco-friendly method. Introduction Hydroponics is the growing of plants in a nutrient water solution. Commonly used media include vermiculite, brick shards, styrene packing peanuts, and wood fibre. It has proven a practicable method for producing lettuce and cucumbers as well as ornamental plants such as herbs and foliage plants. Following the ban on bromide in soil, the demand for hydroponically grown produce has increased rapidly in recent years [1]. A greenhouse is a structure whose walls and roof are made chiefly of a transparent material such as glass. It reduces the energy consumption needed for plant growth. The use of a greenhouse has many benefits.
Using a greenhouse, plants can be grown regardless of the climate. The proposed technique uses sensors to control the water and maintain the pH level. In hydroponic farming, plant growth is faster and fully free from pests. Production is double that of the conventional farming method. Additionally, more seeds can be planted in the same space, so the yield is higher [2]. Many micropollutants are widely discussed as a result of the continued influx of pharmaceutical and personal care products (PPCPs). PPCP residues are commonly found in water resources, sewage treatment plants (STPs), and water treatment plants (WTPs) due to their widespread use, incomplete metabolism in humans, and improper disposal. A large fraction of PPCPs is transferred to STPs, where they can adversely affect biological treatment; therefore, standard STPs are insufficient for PPCP removal. In addition, excreted metabolites can be further transformed in receiving water bodies. Many advanced treatment systems, combining membrane filtration, granular activated carbon, and advanced oxidation processes, are used for the removal of individual PPCPs. That review summarizes patterns of PPCP occurrence in watersheds and the accepted methods for their treatment in the STP/WTP unit processes applied in many countries, with the aim of providing a comprehensive framework for the fate and removal of PPCPs in STP and WTP programs [3]. In hydroponics and aquaponics, the germination time of spinach was much longer than in the conventional farming technique.
Since water and nutrients were delivered directly to the roots in hydroponics and aquaponics, they were taken up effectively. The height of traditionally cultivated spinach was the greatest (23 cm on the 60th day), compared with hydroponically (18 cm) and aquaponically (20.5 cm) cultivated spinach. Aquaponically grown spinach was only a little taller than hydroponically grown spinach. The greater height of traditionally grown spinach may be due to the fact that the space for roots in hydroponics and aquaponics is more limited, stunting growth. Traditional spinach also had the largest leaf area: 10 sq. cm on the 10th day and 79 sq. cm on the 60th day, larger than that of hydroponically grown spinach (6 sq. cm on the 10th day and in the 70s sq. cm on the 60th day) as well as aquaponically grown spinach (8 sq. cm on the 10th day and 72 sq. cm on the 60th day) at maturity. The leaf area of aquaponically grown spinach was somewhat smaller than that of hydroponically grown spinach. The yield of aquaponically grown spinach (4455 kg) was higher than that of hydroponically and traditionally grown spinach, while the yield of hydroponically grown spinach (3780 kg) was slightly higher than that of conventionally grown spinach. [4] Proposed System Hydroponic farming is the fastest-growing sector of agriculture and may well come to dominate food production. Hydroponic farms use 90-95% less water than traditional farms, and a farm may be placed anywhere since no soil is needed. In our project we propose a system that controls certain parameters automatically. The 230 V mains supply is stepped down to 12 V by a step-down transformer.
The transformer has two windings, 9 V and 15 V; since the components here operate at 12 V, the 15 V winding is chosen. The 12 V AC is converted to DC using a bridge rectifier. The Arduino controller operates well at 12 V. The pH sensor, water level indicator, and automatic water dropper are controlled using the Arduino. A motor is used to pump the recycled drained water into the hydroponic system. The water drained from the system is recycled by using it as feed water for the fish. After a week the pH of the water is sensed; if the water is basic, the required amount of citric acid is mixed into it automatically. The neutralized water (pH = 7) is filled into the container. Then the organic nutrient solution is mixed with the water to nourish the plant saplings in the hydroponic system. This process is cycled over three yields of the farm. Because the water is recirculated, hydroponic plants consume only about 10% of the water used by field-grown plants. An efficient hydroponic setup minimizes water loss to a great extent [6][7][8]. Figure 1 shows the block diagram. Result The goal is to see whether growing plants in a water-nutrient solution rather than soil results in healthier plants. Since there are no particles of unnecessary material obstructing the plant's roots, nutrients can be absorbed more quickly, allowing it to grow faster and healthier. Because of the constant feeding of nutrients and water, the hydroponic plants grew much taller and developed more leaves than the plants grown in normal soil. Aquaponics (fish) is used to disinfect the water in this case. As a result, the null hypothesis is rejected because the data contradict it. The application of fertiliser at a consistent rate throughout the day allowed the plants to grow at a controlled and consistent rate.
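The weekly pH-correction routine described above can be sketched as a simple control rule. The function names and the dosing coefficient below are illustrative assumptions, not the paper's actual Arduino firmware:

```python
TARGET_PH = 7.0

def citric_acid_dose_ml(ph, tank_volume_l, ml_per_ph_unit_per_l=1.5):
    """Citric acid (mL) to add to neutralize basic water; zero if the
    water is already neutral or acidic. The dosing coefficient is an
    illustrative assumption, not a measured value."""
    excess = max(0.0, ph - TARGET_PH)
    return excess * tank_volume_l * ml_per_ph_unit_per_l

def weekly_cycle(ph_reading, tank_volume_l):
    """One weekly iteration: sense pH, dose acid if the water is basic,
    then mix the organic nutrient solution into the neutralized water."""
    dose = citric_acid_dose_ml(ph_reading, tank_volume_l)
    return {"citric_acid_ml": dose, "mix_nutrients": True}
```

In the real system the pH reading would come from the pH sensor via the Arduino's analog input, and the computed dose would drive the automatic dropper.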
Figure 2 shows the model block diagram. Conclusion and Future Scope With the help of sensors and an Arduino microcontroller, an automated device for hydroponic gardening was successfully designed. It effectively regulates the nutrient solution's pH. It also provides gardening information to the user and saves that information for future use. This programme helps the user increase efficiency and produce the healthiest possible crop in hydroponic gardening. The first task was to develop a simple agriculture automation system that farmers could use without any previous technical expertise, i.e., an automated system for a layperson. Science is only useful if it can be applied in real-life situations. The second challenge was to create a
2021-06-03T00:19:11.341Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "7b72d755e94caa7f0c2cb91d0982f822b5f51561", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1916/1/012105", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "7b72d755e94caa7f0c2cb91d0982f822b5f51561", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
56492584
pes2o/s2orc
v3-fos-license
Epilithic Biofilms in Lake Baikal: Screening and Diversity of PKS and NRPS Genes in the Genomes of Heterotrophic Bacteria. Abstract A collection of heterotrophic bacteria consisting of 167 strains was obtained from microbial communities of biofilms formed on solid substrates in the littoral zone of Lake Baikal. Based on the analysis of 16S rRNA gene fragments, the isolates were classified into four phyla: Proteobacteria, Firmicutes, Actinobacteria, and Bacteroidetes. To assess their biotechnological potential, the bacteria were screened for the presence of PKS (polyketide synthase) and NRPS (non-ribosomal peptide synthetase) genes. PKS genes were detected in 41 strains (25%) and NRPS genes in 73 strains (43%) by PCR analysis. The occurrence of PKS genes in members of the phylum Firmicutes (the genera Bacillus and Paenibacillus) was 34%, and NRPS genes were found in 78% of them. In Proteobacteria, PKS and NRPS genes were found in 20% and 32% of strains, and in Actinobacteria in 22% and 22%, respectively. For further analysis of PKS and NRPS genes, six Bacillus and Paenibacillus strains with antagonistic activity were selected and subjected to phylogenetic analysis of their 16S rRNA genes. The PKS and NRPS genes identified in these strains were homologous to genes involved in the biosynthesis of antibiotics (bacillaene, difficidine, erythromycin, bacitracin, tridecaptin, and fusaricidin), biosurfactants (iturin, bacillomycin, plipastatin, fengycin, and surfactin), and antitumor agents (epothilone, calyculin, and bryostatin). The Bacillus spp. 9A and 2A strains showed the highest diversity of PKS and NRPS genes. Bacillus and Paenibacillus strains isolated from epilithic biofilms in Lake Baikal are potential producers of antimicrobial compounds and may be of practical interest for biotechnological purposes. Introduction Microorganisms from various ecological niches are the most important source of antibiotic substances and other bioactive metabolites (Sponga et al.
1999; Lorentz et al. 2006; Wu et al. 2011; Mondol et al. 2013; Palomo et al. 2013). To date, 95-99% of microorganisms in natural biotopes exist in the form of biofilms, since this facilitates access to nutrients, promotes cooperation between microorganisms, and protects cells from negative environmental effects (Costerton et al. 1987). A biofilm is a microbially derived sessile community characterized by cells that are irreversibly attached to a substratum, an interface, or to each other, embedded in a matrix of extracellular polymeric substances that they have produced, and exhibiting an altered phenotype with respect to growth rate and gene transcription (Donlan and Costerton 2002). Biofilms are a type of microbial consortium that plays an important role in biogeochemical processes in the biosphere. In the aqueous environment, biofilms exist in several types depending on the substrate on which they are formed: epilithic (rock surfaces), epipsammic (attached to sediment particles), epixylic (on dead plant material), epiphytic (on living plants), marine or lake snow (on organic and inorganic particles), and biofouling (artificial surfaces) (Romaní et al. 2016). Compared to other biofilms, epilithic biofilms have a more complex heterogeneous structure with a higher algal biomass and a larger repertory; they are also more independent of seasonal fluctuations (Romaní and Sabater 2001; Bartrons et al. 2012). Obviously, the search for biologically active substances (BAS) among the bacteria inhabiting epilithic biofilms is promising. Multidomain enzymatic 'megasynthases', including PKS, NRPS, and their hybrid NRPS/PKS complexes, synthesise a wide range of secondary metabolites of bacterial origin (Staunton and Wilkinson 2001). A diverse chemical structure and functional activity characterize the polyketides, among which there are antibiotics, statins, tumor growth inhibitors, and other pharmaceutically significant compounds.
There are three types of PKS (I, II, and III), which differ in structure and mechanism of catalysis. Type I PKS are organized into modules consisting of at least three functional domains: ketosynthase (KS), acyltransferase (AT), and acyl carrier protein (ACP). Each module is responsible for one elongation cycle of the polyketide chain. Type II PKS are large multienzyme complexes of small, discrete enzymes with particular functions. The pivotal component responsible for the condensing activity resembles β-ketoacyl synthase II of the type II FAS found in bacteria and plants. This class of PKS is responsible for the biosynthesis of bacterial aromatic polyketides, such as oxytetracycline and pradimicin. Type III PKS are self-contained enzymes that form homodimers. The single active site in each monomer catalyzes the priming, extension, and cyclization reactions iteratively to form polyketide products. Despite their structural simplicity, type III PKS produce a wide array of compounds such as chalcones, pyrones, acridones, phloroglucinols, stilbenes, and resorcinolic lipids (Dayu et al. 2012). NRPS synthesize a variety of natural compounds with a wide range of biological activities and various medicinal properties. Amino acid monomers serve as substrates for the synthesis of NRPS peptides. The modules contain an ATP-dependent adenylation (A) domain, a peptidyl carrier protein (PCP) domain, and a condensation (C) domain. The assembled molecule is released from the enzyme complex through a thioesterase (TE) domain. The A-domain is the most conserved (Staunton and Wilkinson 2001). The natural products obtained by these biosynthetic pathways have been widely described for cultured and uncultured strains (Wu et al. 2011; Fickers 2012). Molecular methods have been successfully used to detect and identify target genes in organisms as indicators of the production of novel secondary metabolites (Banskota et al. 2006; Palomo et al. 2013).
There have been multiple studies on secondary metabolites synthesized by PKS and NRPS gene clusters in members of the phylum Firmicutes (Lorentz et al. 2006; Wu et al. 2011; Fickers 2012; Mondol et al. 2013; Zhang et al. 2013). Natural strains of the genera Bacillus and Paenibacillus have in their genomes clusters of genes responsible for the synthesis of several active compounds (antibiotics and biosurfactants), which act synergistically, thus showing high antagonistic activity against various pathogens (Ongena and Jacques 2008; Chen et al. 2009; Kim et al. 2010; Li et al. 2012). Therefore, natural isolates of bacilli represent a rich source of new antimicrobial substances of great importance for biotechnology. Lake Baikal, one of the largest (area of 31 722 km²) and the deepest (1637 m) freshwater reservoirs in the world, has a significant biodiversity and high endemism of hydrobionts, unique ecological peculiarities, and rich biotopes. It is a kind of natural laboratory for studying the metabolic potential of microbial communities. Its littoral zone occupies 7% of the total area; the coastline is 2000 km. Previously, strains of the genera Streptomyces and Micromonospora were isolated from water, sponges, and sediments in Lake Baikal. They showed antagonistic activity against potentially pathogenic microorganisms resistant to a number of antibiotics (Terkina et al. 2006). The authors suggested that Baikal actinomycetes can be used as producers of new BAS. Quite recently, polyketide synthase genes were identified in the metagenome community of the endemic sponges Lubomirskia baicalensis and Swartschewskia papyracea. Among the closest relatives, there were genes involved in the biosynthesis of the metabolites curacin A, stigmatellin, and nostophycin (Kaluzhnaya et al. 2012; Kaluzhnaya and Itskovich 2016).
In the genome of the Baikal strain Pseudomonas fluorescens 28Bb-06, PKS genes were identified that are 50-66% homologous to the gene clusters involved in the biosynthesis of yersiniabactin, rhizoxin, disorazol, and epothilone (Lipko et al. 2012). In strains isolated from the freshwater sponge L. baicalensis, PKS and NRPS genes were detected in nine out of 14 cultures of the genera Bacillus, Pseudomonas, Variovorax, Curtobacterium, and Rhodococcus (Kalyuzhnaya et al. 2013). The formation of hydrobiont communities on various geological rocks has been studied in Lake Baikal since 2000 (Timoshkin et al. 2003). These studies showed that the development and activity of organisms depended on the chemical composition of the rocks and their structure. They also showed the high selectivity of these organisms in terms of the occupation of different substrates (Parfenova et al. 2008). For the first time, bacterial communities of water and of biofilms formed on solid substrates in Lake Baikal were studied by pyrosequencing of a 16S rRNA gene fragment. Bacterial communities of biofilms showed high taxonomic diversity, represented by Cyanobacteria, Bacteroidetes, and Proteobacteria; the contribution of other groups did not exceed 1%. The genomes of the bacteria Serratia, Pseudomonas, Rheinheimera, and Flavobacterium isolated from epilithic biofilms in Lake Baikal showed diversity in their PKS genes, which are responsible for the synthesis of antibiotics and cytostatics (Sukhanova et al. 2017). Previously, we determined the antimicrobial activity of Bacillus and Paenibacillus strains isolated from biofilms (Zimens et al. 2014). This work aimed to detect and evaluate the diversity of the PKS and NRPS genes in the genomes of heterotrophic bacteria isolated from epilithic biofilms in Lake Baikal. Experimental Materials and Methods Sampling.
Samples of epilithic biofilms were taken from the littoral zone of Lake Baikal near the settlement of Listvyanka (Cape Beryozovy, 51°50ʹ41.04˝, 104°54ʹ05.82˝). Biofilms were sampled from plates (rocks and minerals) with a thickness of 0.5-1 cm that had been prepared in advance, immersed in 2011 by divers at a depth of 7-8 m, and exposed under the natural conditions of the lake for a year. In May 2012, the plates covered with biofilms were lifted from the bottom of the lake, put in sterile containers with Baikal water, and transported to the laboratory at a temperature of 10°C. Under aseptic conditions, fouling with an area of 2 cm² was scraped off and used for cultivation on nutrient media. Isolation of heterotrophic bacteria. The samples of biofilms were suspended in 50 ml of sterile Baikal water and shaken for 30 min on a shaker at 120 rpm. A 1 ml aliquot was added to 100 ml of sterile Baikal water, then 1 ml of the resulting suspension was plated in three replicates using the pour plate method onto solid nutrient media with different contents of organic matter. To isolate pure cultures, the following nutrient media were used: R2A (Fluka Analytical, USA), NSY (g/l: nutrient broth 1, soy peptone 1, yeast extract 1, and agar 15), PCA (HiMedia, India), and TSA (HiMedia, India). The duration of incubation was 5-7 days at 20-22°C. Pure cultures were obtained by depleting inoculations to individual colonies. Molecular genetic identification of strains by the 16S rRNA gene fragment. DNA from day-old bacterial cultures was isolated using the DNA-sorb-B kit according to the manufacturer's protocol (PE CRIE of Rospotrebnadzor, Moscow, Russia). The obtained template was used in the polymerase chain reaction (PCR); target amplicons of the 16S rRNA gene fragment were obtained using the conserved bacterial primers 27L (5'-AGAGTTTGATCATGGCTCAG-3') and 1542R (5'-AAGGAGGTGATCCAGCCS-3') (Brosius et al. 1981).
The nucleotide sequences of the 16S rRNA gene fragments were determined on an ABI PRISM 310A Genetic Analyser automatic sequencer (Perkin Elmer, USA) at the SB RAS Genomics Core Facility (Novosibirsk). Comparative analysis of the sequences obtained with previously published ones was carried out using the FASTA and BLAST software packages. Nucleotide sequences of 167 strains were registered in GenBank under the following numbers: HF548373-HF548383, HF548386-HF548401, HF678874-HF678892, HF678894-HF678990, HF947322-HF947328, LT555292, and LT601385-LT601400 (personal results; unpublished data). Study of enzymatic activity in members of the genera Bacillus and Paenibacillus. The ability of the strains studied to utilize carbon compounds (Hiss medium) and organic nitrogen-containing substances (amino acids) was assayed. Proteolytic extracellular enzymes were detected on media with casein and gelatine, lipolytic enzymes with tributyrin and lecithin, and amylolytic enzymes with starch (Netrusov 2005). Phosphatase activity was detected using the Alkaline Phosphatase-VITAL kit (Vital Development Corporation, Russia). Phylogenetic analysis of the 16S rRNA gene sequences from Bacillus and Paenibacillus. For the species identification of Bacillus and Paenibacillus isolates, the sequences were aligned in the Clustal-W program. Phylogenetic analysis of nucleotide sequences of the 16S rRNA gene (length of 1360 bp) was carried out using the Mega 6.06 program, the Maximum Likelihood method, and the Kimura 2-parameter model. Bootstrap support was computed for 1000 replicates. Identification of PKS and NRPS genes in the genomes of Bacillus and Paenibacillus. Amplicons of the gene fragments were visualized in 1% agarose gel using a transilluminator (VL-6.MC, France). The PCR fragments were cloned in the vector pJET1.2/blunt (CloneJET PCR Cloning Kit, Fermentas, Lithuania), then the amplicons were transformed into competent cells of E.
coli DH-5α and XL-1 strains. Nucleotide sequences were determined on a genetic analyzer (Applied Biosystems, USA) in Irkutsk (Russia) and at the research and production company Sintol (Moscow, Russia). To translate the nucleotide sequences of the PKS and NRPS genes into amino acid sequences, we used the BioEdit 7.2.5 program. A comparative analysis of the sequences obtained was carried out using the BLASTX and BLASTP software packages. Phylogenetic analysis of amino acid sequences of the KS-domain fragments of PKS genes and the A-domain of NRPS genes was carried out using the Mega 6.06 program, the Neighbor-joining method, and the Kimura 2-parameter model. Bootstrap support was computed for 1000 replicates. The sequences were aligned in the Clustal-W program. Results Table I shows the results of strain isolation from epilithic biofilms in Lake Baikal. We obtained a collection of heterotrophic bacteria consisting of 167 strains. The isolates classified by a comparative analysis of the 16S rRNA gene fragment belonged to four phyla, Proteobacteria, Firmicutes, Actinobacteria, and Bacteroidetes, and to 32 genera of bacteria. The members of the genera Aeromonas, Pseudomonas, and Bacillus were the dominant strains (Table I). PCR screening of the isolates for the presence of PKS and NRPS genes. Screening of PKS genes in the genomes of heterotrophic bacteria revealed their presence in 41 strains belonging to 14 genera: Bacillus, Paenibacillus, Pseudomonas, Aeromonas, Serratia, Rhizobium, Devosia, Yersinia, Iodobacter, Kocuria, Pseudoclavibacter, Microbacterium, Brachybacterium, and Flavobacterium (Table I). The total percentage of the strains with PKS genes was 25%. The occurrence of PKS genes in members of the phylum Firmicutes (Bacillus, Paenibacillus) was 34%, 20% in Proteobacteria and 22% in Actinobacteria (Table I).
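The reported occurrence figures follow directly from the screening counts given in the text; a quick arithmetic check (counts from the text, percentages rounded as in the paper):

```python
# Occurrence of biosynthetic gene clusters among the 167 isolates,
# recomputed from the counts stated in the Results section.
TOTAL_STRAINS = 167
PKS_POSITIVE = 41   # strains with a positive PCR signal for PKS genes
NRPS_POSITIVE = 73  # strains with a positive PCR signal for NRPS genes

pks_pct = 100.0 * PKS_POSITIVE / TOTAL_STRAINS
nrps_pct = 100.0 * NRPS_POSITIVE / TOTAL_STRAINS
print(f"PKS: {pks_pct:.1f}%")   # ~24.6%, reported as 25%
print(f"NRPS: {nrps_pct:.1f}%") # ~43.7%, reported as 43%
```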
The screening of 167 strains showed a positive PCR signal for the presence of NRPS genes in 73 strains of 11 genera: Bacillus, Paenibacillus, Staphylococcus, Pseudomonas, Aeromonas, Serratia, Rhodococcus, Kocuria, Microbacterium, Streptomyces, and Flavobacterium (Table I). The total percentage of strains containing NRPS genes was 43%. A high percentage of these genes was found in the genus Pseudomonas (57%). At the same time, the occurrence of NRPS genes in members of the phylum Firmicutes (Bacillus and Paenibacillus) reached 78% of the total number of strains from this group. These genes were found in 32% of Proteobacteria and 22% of Actinobacteria (Table I). The phylogenetic diversity of the genera Bacillus and Paenibacillus isolated from epilithic biofilms in Lake Baikal and the presence of PKS and NRPS genes in their genomes are shown in Fig. 1. Thus, NRPS genes were more commonly found in Bacillus and Pseudomonas; the members of the phylum Firmicutes (Bacillus, Paenibacillus) also had a high percentage of PCR-positive strains with both PKS and NRPS genes. Physiological and biochemical characteristics of Bacillus and Paenibacillus strains. At the next stage, based on the obtained results of PCR screening, we selected six cultures: Paenibacillus spp. 5A, 12A, and 7A and Bacillus spp. 2A, 2B, and 9A. Previously, these strains showed antagonistic activity (Table II) (Zimens et al. 2014). Among them, there were the highly active Paenibacillus spp. 5A and 12A and Bacillus sp. 9A, which simultaneously suppressed the growth of test cultures from different taxonomic groups (Gram-positive and Gram-negative bacteria, as well as fungi). Hence, we can assume that the strains studied can produce several different antimicrobial compounds (Zimens et al. 2014). The selected isolates were tested for the ability to produce extracellular enzymes (Table III). We found that Paenibacillus spp. strains most actively utilized carbohydrates and polyatomic alcohols, and Bacillus spp.
strains used amino acids. All cultures showed the ability to utilize starch and casein (Table III). The data on the physiological and biochemical characteristics of Paenibacillus spp. 5A and 12A and Bacillus spp. 2A and 9A were consistent with the data from the phylogenetic analysis. Phylogenetic analysis of the nucleotide sequences of the 16S rRNA gene from Bacillus and Paenibacillus strains. Phylogenetic analysis indicated that the nucleotide sequences of the 16S rRNA gene of Paenibacillus spp. 5A and 12A strains formed a separate sister cluster with the type strain Paenibacillus peoriae KCTC 3763T (Fig. 2). This strain, isolated from soil, is antagonistic against phytopathogenic bacteria and fungi (Jeong et al.). The sequence of Paenibacillus sp. 7A clustered with the type strain of Paenibacillus graminis (Berge et al. 2002), which allowed us to preliminarily classify this strain as P. graminis 7A. Nucleotide sequences of the 16S rRNA gene from Bacillus spp. 2A and 9A strains formed a joint cluster with the type strain Bacillus amyloliquefaciens NBRC 15535T (Fig. 2) isolated from fermented locust bean fruits (Africa) (Meerak et al. 2008). We determined eight nucleotide sequences of PKS genes for both Paenibacillus spp. 5A and 12A (Table IV); the closest homologues were obtained from Paenibacillus polymyxa and P. peoriae. Among the homologous sequences obtained from the 12A and 5A strains, there were genes for the synthesis of antibiotics (difficidine, erythromycin, bacillaene, batumin, and sorangicin) and antitumor agents (calyculin and bryostatin) (Table V). We identified eight nucleotide sequences of polyketide synthases for the Paenibacillus sp. 7A strain (Table IV). Comparative analysis indicated that all the closest homologues were obtained from P. graminis. Among them, we detected erythronolide synthase with high homology (97%). In addition, PKS sequences of Paenibacillus sp. 7A differed in structure from genes isolated from the Paenibacillus spp. 5A and 12A strains.
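The Kimura 2-parameter model used in the phylogenetic analyses above corrects observed sequence differences for the different rates of transitions and transversions. A minimal sketch of the standard pairwise K2P distance (the aligned sequences here are a made-up toy example, not data from the study):

```python
import math

PURINES = {"A", "G"}  # A<->G and C<->T substitutions are transitions

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned sequences."""
    assert len(seq1) == len(seq2)
    n = transitions = transversions = 0
    for a, b in zip(seq1, seq2):
        if "-" in (a, b):
            continue  # skip alignment gaps
        n += 1
        if a != b:
            if (a in PURINES) == (b in PURINES):
                transitions += 1   # purine-purine or pyrimidine-pyrimidine
            else:
                transversions += 1
    p, q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

print(round(k2p_distance("AAGACGT", "AAAACGT"), 4))  # 0.1682
```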
Among the similar sequences, there were synthases of the antibiotic plipastatin and the antitumor agent epothilone with 43-53% homology (Table V). In the genomes of Bacillus spp. 2A and 2B strains, we identified seven and eight nucleotide sequences of the PKS gene fragment, respectively (Table IV). Interestingly, the PKS sequences obtained from Bacillus strains had the closest relatives isolated from marine sponges (Zhang et al. 2009). Moreover, we detected genes of enzymes involved in the production of antibiotics (bacillaene and difficidine) and an antitumor agent (calyculin); homology was 70-87% (Table V). For the Bacillus sp. 9A strain, we detected four PKS sequences (Table IV). They were homologous to the PKS genes obtained from strains of the genus Bacillus, including the species B. amyloliquefaciens. Additionally, among the related sequences, there were PKS genes with high homology (94-98%) that were responsible for the synthesis of antibiotics (difficidine and bacillaene) (Table V). Identification of NRPS genes in the genomes of Bacillus and Paenibacillus strains (Table VI). For the Paenibacillus sp. 12A strain, we obtained five nucleotide sequences of the NRPS gene fragment which had homologues isolated from Paenibacillus polymyxa (Table VI). The homologous sequences included genes for the synthesis of antibiotics (bacitracin, fusaricidin, tridecaptin, and bacillorin). In Paenibacillus sp. 5A, two sequences of the NRPS gene fragment were detected. The homologous sequences included genes for the synthesis of antibiotics (bacitracin and fusaricidin) and showed low homology to fengycin (Table VI). We identified two sequences of the NRPS gene fragment in the Bacillus sp. 2B strain. Among the homologues, there were genes coding for the enzymes responsible for the synthesis of biosurfactants (plipastatin and surfactin) (Table VI). Three sequences of the NRPS gene fragment were determined for the Bacillus sp. 2A strain.
The homologous sequences included genes for the synthesis of an antibiotic (bacillaene) and biosurfactants (plipastatin, fengycin, and surfactin) (Table VI). We detected four sequences of the NRPS gene fragment in the Bacillus sp. 9A strain. The homologous sequences included genes responsible for the synthesis of an antibiotic (bacillaene) and biosurfactants (bacillomycin, surfactin, and iturin) (Table VI). Phylogenetic analysis of amino acid sequences of the KS-domain fragments of PKS genes (Fig. 3) and the A-domain of NRPS genes (Fig. 4) in the bacteria isolated from the epilithic biofilms of Lake Baikal showed that sequences from different strains clustered together. This means that the enzyme complexes of such strains as Bacillus spp. 2A and 2B and Paenibacillus spp. 5A and 12A were similar. On the other hand, different sequences were obtained from one strain. This means that such a strain, e.g. Bacillus sp. 9A, possessed several enzyme complexes. Discussion The results of PCR screening showed that PKS and NRPS genes in members of Bacillus, Paenibacillus, and Pseudomonas from Lake Baikal were more frequent than in other heterotrophic bacteria isolated from biofilms. The high occurrence of the BAS genes found in Baikal isolates is typical of the members belonging to these genera, since they are well-known producers of various secondary metabolites. For example, many Bacillus species produce such antibiotics as bacillaene, difficidine, macrolactin, mycosubtilin, bacillomycin, iturin, bacitracin, and gramicidin C (Fickers 2012). Paenibacillus strains isolated from various habitats synthesize antibiotics of a peptide or macrolide nature: polymyxins A-E, paenibacillin, jolipeptin, gavaserin, saltavalin, fusaricidin A-D, gatavalin, paenimacrolidine, paenilamicin, and others (Wu et al. 2011; Aleti et al. 2015). A review by Zhao and Kuipers (2016) found that only 50% of the analyzed species had genes encoding PKS.
In total, 1231 gene clusters for putative non-ribosomal antimicrobials were identified and combined into 23 types of NRPS, five types of PKS, and three types of hybrid NRPS/PKS compounds distributed across 49 Bacillales species. Previously, other authors also noted the high content of NRPS and PKS genes in bacilli (Aleti et al. 2015). In addition, a high percentage of isolates (85%) containing one or both metabolic clusters were isolated from the rhizosphere (Aleti et al. 2015). The authors noted that this was due to a more detailed study of the rhizosphere as an important subject in agriculture; hence, these genes may also be characteristic of bacilli from other ecological niches. For instance, this study on Bacillus and Paenibacillus strains from freshwater reservoirs has shown that they also contain NRPS and PKS genes. Metabolites produced by B. amyloliquefaciens and B. subtilis represent the bulk of the studied diversity of polyketides and lipopeptides from the genus Bacillus (Aleti et al. 2015). These two species are used to obtain most of the commercially available substances contributing to plant growth and biocontrol (against phytopathogens) in agriculture. They produce three types of polyene polyketides, including bacillaene, difficidine, and macrolactin. At present, two polyketides (paenimacrolidine and paenilamicin) have been described for the genus Paenibacillus (Aleti et al. 2015). In Bacillus and Pseudomonas, the NRPS genes mainly encode the synthesis of lipopeptide biosurfactants (LPBS) (Roongsawang et al. 2010). Due to their complex and diverse structures, lipopeptides demonstrate various biological activities, including surface activity, as well as anticellular and antienzymatic activity. Lipopeptides are involved in multicellular behaviour, such as swarming motility and biofilm formation.
Among the producers, the genera Bacillus and Pseudomonas are of special interest, since they produce a wide range of effective LPBS, which are potentially useful for the agricultural, chemical, food, and pharmaceutical industries (Roongsawang et al. 2010). NRPS clusters of the genus Bacillus encode the lipopeptide families of surfactin, fengycin, iturin, and kurstatin (Aleti et al. 2015). The results of this study indicate that heterotrophic bacteria isolated from epilithic biofilms in Lake Baikal are potential producers of secondary metabolites whose synthesis involves PKS and NRPS gene clusters. Identification of PKS genes has shown that the Bacillus sp. 9A strain contains sequences in the genome that are related to the genes known for the synthesis of the antibiotics bacillaene (baeL, baeN) and difficidine, which can indicate its ability to produce these compounds, whereas Bacillus spp. 2A and 2B contain only bacillaene genes (baeL, baeM, baeN). Bacillaene is a polyene antibiotic, and it was first found in the culture medium of B. subtilis 3610 and 55422 strains (Fickers 2012; Aleti et al. 2015). Its biosynthesis was described in B. amyloliquefaciens FZB42 and is encoded by a hybrid cluster of PKS-NRPS genes called bae. This cluster has a structure similar to the pksX cluster of the B. subtilis 168 strain, which is also likely to encode bacillaene. The bae gene cluster contains five open reading frames, i.e. baeJ, baeL, baeM, baeN, and baeR (Aleti et al. 2015). Difficidine is a macrocyclic polyene synthesized by B. amyloliquefaciens ATCC 39320 and ATCC 39374 strains. It is encoded by the dif gene cluster with 14 open reading frames, from difA to difN and difY. Difficidine and bacillaene exhibit antimicrobial activity against a wide range of pathogenic bacteria by inhibiting protein synthesis (Fickers 2012; Aleti et al. 2015). Another strain, Paenibacillus sp.
7A, has genes with high homology to erythronolide synthase, responsible for the biosynthesis of the macrolide 6-deoxyerythronolide B, which is the precursor of the well-studied and widely known antibiotic erythromycin. It was first isolated in 1949 from the culture liquid of a Saccharopolyspora erythraea strain (Liu et al. 2013). The effect of this antibiotic is due to binding to the 50S ribosome subunit, which disrupts the formation of peptide bonds between amino acid molecules and blocks peptide synthesis in microorganisms. Despite the high percentage of similarity (96-100%) with the closest relatives of PKS genes from Paenibacillus spp. 5A and 12A strains, the homologues had low similarity with the identified polyketide synthases (69-75%). It is likely that these genes have not been characterized yet, and these strains can produce novel and previously undescribed secondary metabolites. Identification of NRPS genes showed that the sequences from Paenibacillus spp. 5A and 12A had high homology with their closest relatives, among which there were genes encoding the synthesis of peptide and lipopeptide antibiotics (bacitracin, bacillorin, fusaricidin, and tridecaptin). Bacitracin is a polypeptide antibiotic and a mixture of related cyclic peptides produced by B. subtilis strains. Bacitracin is active against Gram-positive bacteria. It was first isolated in 1945. It is usually used for topical treatment of skin, eye or nose diseases, but it can also be used internally in the form of an injection as an intestinal antiseptic. Due to its toxic effect on kidneys, bacitracin is used only when other antibiotics are ineffective. Its action involves disrupting the synthesis of the cell wall by inhibiting lipid carriers (Johnson et al. 1945; Karala and Ruddock 2010; Ciesiołka et al. 2014). Moreover, bacitracin degrades nucleic acids, in particular RNA, through a hydrolytic mechanism (Ciesiołka et al. 2014).
Bacillorin and bacillomycin L should be considered as synonymous names for a single molecule. Fusaricidins are depsipeptide antibiotics synthesized by the members of the genus Paenibacillus. They have a ring structure. These antibiotics have high antifungal activity against plant pathogenic fungi, such as Fusarium oxysporum, Aspergillus niger, Aspergillus oryzae, and Penicillium thomii. Fusaricidins also have good bactericidal activity against Gram-positive bacteria, such as Staphylococcus aureus (Li et al. 2007; Choi et al. 2008). Tridecaptins are a class of linear cationic lipopeptides exhibiting strong activity against multidrug-resistant Gram-negative bacteria. At the same time, they show low cytotoxicity and hemolytic activity. Tridecaptins are produced by a Paenibacillus polymyxa strain (Cochrane et al. 2015). Most NRPS gene sequences from Bacillus spp. 9A, 2A and 2B strains were homologous with the sequences responsible for the synthesis of different lipopeptide biosurfactants, such as fengycin, bacillomycin, plipastatin, surfactin, and iturin. Notably, the closest relatives of the sequences of NRPS gene fragments from Bacillus spp. 9A and 2A strains included PKS genes responsible for the synthesis of bacillaene. The identification of PKS genes also indicated the genes responsible for the synthesis of this antibiotic. As mentioned above, a type I PKS-NRPS hybrid gene cluster is responsible for its synthesis. Therefore, using two different pairs of primers, we detected in the strains studied the genes responsible for the synthesis of bacillaene. The fengycin family includes fengycin and plipastatin, which are cyclic lipopeptides produced by B. subtilis (Bie et al. 2009). Natural fengycin is a mixture of isoforms, which differ slightly in their physicochemical properties due to variations in the chain length and branching of its hydroxy fatty acid component (Bie et al. 2009).
Fengycin specifically inhibits filamentous fungi; its hemolytic activity is 40-fold less than that of surfactin (Bie et al. 2009). Plipastatin, an antifungal antibiotic, is one of the most important non-ribosomal lipopeptides produced by B. subtilis. Plipastatin is involved in the inhibition of phospholipase A2 and biofilm formation (Batool et al. 2011). It is produced by different strains of Bacillus species and shows moderate surfactant properties. It is an antifungal metabolite and inhibits filamentous fungi, but it has no effect on yeast and bacteria (Romero et al. 2007; Chen et al. 2009). The iturin family includes the compounds iturin and bacillomycin. Both are cyclic lipopeptides produced by B. subtilis, and they exhibit strong antifungal properties (Peypoux et al. 1981; Zhang et al. 2013). Iturin has low toxicity in mammals and shows strong antibiotic activity, thus making it a potentially useful and effective substance for biological control to reduce the use of chemical pesticides in agriculture (Romero et al. 2007; Ongena and Jacques 2008; Kim et al. 2010; Zhang et al. 2013). The surfactin family comprises structurally cyclic peptides with multiple biological activities produced by some B. subtilis strains (Cosmina et al. 1993; Ongena and Jacques 2008). Surfactin is a strong surface-active compound. It can lyse erythrocytes and protoplasts of bacteria. Additionally, surfactin inhibits the thrombin-fibrinogen interaction, thus slowing the formation of fibrin. This property defines it as a possible component in the development of anticoagulants for the prevention of thromboses and diseases such as myocardial infarction, pulmonary embolism, etc. Surfactin exhibits anticholesterol activity and decreases the level of cholesterol in the plasma and liver. It has antitumor, fungicidal, and antibiotic activity. Many useful physicochemical characteristics of this substance indicate that it can be widely used in the pharmaceutical, technical, and environmental fields.
In this study, we showed the presence of PKS and NRPS genes in the genomes of heterotrophic bacteria isolated from epilithic biofilms in Lake Baikal. The occurrence of these genes in bacteria of the genera Bacillus and Paenibacillus was higher than in other bacterial groups. Comparative analysis of the obtained amino acid sequences showed a wide variety of the genes. These sequences were related to the genes involved in the biosynthesis of antibiotics (bacillaene, difficidine, erythromycin, sorangicin, and batumin), biosurfactants (fengycin, bacillomycin, plipastatin, surfactin, and iturin) and antitumor agents (epothilone, calyculin, and bryostatin). Bacillus sp. 9A (iturin, bacillomycin, surfactin, bacillaene, and difficidine) and Bacillus sp. 2A (plipastatin, bacillaene, surfactin, fengycin, and difficidine) showed the highest variety of PKS and NRPS genes. Furthermore, the investigated strains exhibited multiple enzymatic and antagonistic activities, indicating that they are potential producers of bioactive metabolites. Therefore, Baikal representatives of the genera Bacillus and Paenibacillus can be of practical interest for biotechnological purposes. To confirm our assumptions, it is necessary to obtain individual compounds and determine their structure, as well as study their biological activity.
Correct Microkernel Primitives. Primitives are basic means provided by a microkernel to implementors of operating system services. Intensively used within every OS and commonly implemented in a mixture of high-level and assembly programming languages, primitives are meaningful and challenging candidates for formal verification. We report on the accomplished correctness proof of academic microkernel primitives. We describe how a novel approach to verification of programs written in C with inline assembler is successfully applied to a piece of realistic system software. Necessary and sufficient criteria covering functional correctness and requirements for the integration into a formal model of memory virtualization are determined and formally proven. The presented results are important milestones on the way to a pervasively verified operating system. Introduction Correctness guarantees for computer systems are a hot research topic. Since the correctness of individual computer components has been successfully established in many cases, the formal verification of an entire industrial-size system is being brought to the forefront. In [8] Moore, the head of the famous CLI project, proposes the grand challenge of whole computer system pervasive verification.
Verisoft [13] is a research project inspired by the problem of complete computer system correctness. The project aims at the development of a pervasive verification technology [10] and at demonstrating it by applying it to an exemplary computer system. A prototypic system comprises (i) a pipelined microprocessor with memory management units, (ii) a number of devices, in particular, a hard disk, (iii) a microkernel, (iv) a simple operating system, and (v) an exemplary user application. Pervasive formal verification of the whole system is attempted. The process is supported by a variety of computer-aided verification tools, both interactive and automated, in order to minimize the possibility of errors induced by verification engineers. This work relates to the problem of operating system microkernel correctness. A microkernel is the minimal kernel which, basically, provides no operating system services at all, but only the mechanisms necessary to implement such services. The mechanisms include process and memory management, address spaces, low-level IPC, and I/O. Usually, they are implemented in the form of primitives, microkernel routines which provide this functionality to the upper layers. Since every service of an operating system makes use of primitives, the correctness of the latter becomes of special importance. In the current paper we discuss the correctness issues of primitives of an academic operating system microkernel. We describe how the methodology for system software verification developed in the frame of Verisoft is successfully applied to primitives implemented in C with inline assembler. We outline the correctness criteria of microkernel primitives. Stating the correctness theorems, we show what it means that a primitive fulfills these correctness criteria. We sketch a general idea of how such theorems are proven. In a case study we elaborate on the example-specific details of specifications and proofs.
The contribution of this paper is that (i) all necessary and sufficient correctness criteria of primitives of a microkernel for a pervasively verified system are determined and formally proven, (ii) a novel approach, convenient for formal use, to verification of C programs with inline assembler is presented, and (iii) an important part of a realistic microkernel is proven correct, showing that seamless formal verification of crucial parts of operating systems is feasible. All material presented in the paper is supported by formal theories in a computer-aided theorem prover. Related Work. A number of research projects contribute ideas to microkernel verification. Choosing reasoning either in C or assembler semantics, to the best of our knowledge, nobody exploits their combination. The L4.verified project targets constructing seL4 [4], a formally verified operating system kernel. From the system's prototype designed in Haskell, both a formal model and a C implementation are generated. A richer subset of C than in Verisoft, including pointer arithmetic, is used, which, however, provides less expressive semantics than inline assembler, as the latter makes it possible to access even registers of a processor. Substantial progress seems to have been achieved in the verification of the model, but only exemplary parts of the source code are reported verified. The FLINT project exploits an x86 assembly code verification environment for certification of context switching routines [9], an important microkernel part. No results on integration of object code correctness into a high-level programming language are reported. The recent Robin project aims at the verification of the Nova microhypervisor [12]. Although the implementation is in (a subset of) C++ with inline assembler, the verification is planned to cover only the C++ parts. Currently there is no connection to real object code, which seems to be planned for the (far) future. It is planned to build a model precise enough to catch virtual memory aliasing
and address space separation errors; however, it is unclear whether these properties will be shown to be respected by the hypervisor's implementation. Outline. In Sect. 2 we discuss implementation issues and the formal model of a microkernel. We briefly formalize all concepts necessary to present the microkernel correctness criteria which have to be satisfied by its primitives. Next, in Sect. 3, we elaborate on our verification methodology and sketch the semantics of C programs with inline assembler parts. In Sect. 4 we proceed with the correctness theorem for a primitive. The presented approach is supported by the case study in Sect. 5, for which the primitive that copies data between processes is selected. We conclude in Sect. 6. Notation. We denote the set of boolean values by B and the set of natural numbers including zero by N. We denote the set of natural numbers less than x by N_x. We denote the list of n elements of type T by T^n. The elements of a list x are accessed by x[i]; its length is denoted by |x|. The operator ⟨x⟩ yields for a bit string x ∈ B^n the natural number represented by x. We allow to interchange a bitvector x with its value ⟨x⟩. The set of all possible configurations of a concept x is denoted by C_x. An Academic Operating System Microkernel We consider an exemplary academic microkernel which provides mechanisms for (i) process and memory management, (ii) address spaces, (iii) IPC, and (iv) device communication. Implementation Issues The microkernel implements the Communicating Virtual Machines (CVM) [3] model which defines the parallel execution of concurrent user processes interacting with a kernel. According to the model, the microkernel is split into two logical parts: (i) the abstract kernel, which provides an interface to a user or an operating system and could be implemented in a pure high-level programming language, and (ii) the lower layers, which implement the desired functionality stated in the beginning of Sect. 2.
The implementation of the low-level functionality necessarily contains assembler portions because processor registers and user processes cannot be accessed by ordinary C variables. By linking the two kernel parts together, the concrete kernel, a program which can run on a target machine, is obtained. The kernel lower layers can be split into three logical parts: (i) primitives, (ii) a page fault handler, and (iii) context switch routines. Within the paper we discuss the correctness of primitives. They are implemented in the C0 programming language [7], a slightly restricted C, with inline assembler parts. In brief, the limitations of C0 compared to standard C are as follows. Prefix and postfix arithmetic expressions, e.g., i++, are forbidden, as well as function calls as parts of expressions. Pointers are typed and do not point to local variables or to functions. Void pointers and pointer arithmetic are not supported. The size of arrays has to be statically defined. Primitives The academic microkernel contains 16 primitives described in Table 1. The comment 'A' denotes that a primitive has an inline assembler portion. The comment 'D' designates that a primitive accesses devices. Thus, the primitives can be divided into three groups: (i) 7 primitives implemented in pure C0, (ii) 4 primitives which have assembler portions, and (iii) 5 primitives which have assembler portions and access devices. In this paper we give the methodology for verification of code written in C0 with inline assembler. It is applicable to all the primitives. However, we have verified so far primitives from the second group.
A Formal Model The CVM model defines a parallel execution of the kernel and N user processes on an underlying physical machine with a hard disk. According to CVM, the C0 language semantics is used to model the computation of the kernel, and the semantics of virtual machines models the computation of user processes. In the following, we outline the necessary concepts of the model: (i) physical and virtual machines [3], (ii) a hard disk [5], and (iii) C0 machines [7]. Having them, we sketch the CVM semantics and give its correctness criteria. For details cf. [6]. Memories of physical and virtual machines are conceptually organized in pages of P machine words. Physical Machines Physical machines are the sequential programming model of the VAMP hardware [2] as seen by a system software programmer. They are parameterized by (i) the set SAP ⊆ B^5 of special purpose register addresses visible to physical machines, and (ii) the number TPP of total physical memory pages which defines the set PMA = {a | 0 ≤ a < TPP · P} ⊆ B^30 of accessible physical memory addresses. The machines are records pm = (pc, dpc, gpr, spr, m) with the following components: (i) the normal pm.pc ∈ B^32 and the delayed pm.dpc ∈ B^32 program counters used to implement the delayed branch mechanism, (ii) the general purpose pm.gpr ∈ B^5 → B^32 and the special purpose pm.spr ∈ SAP → B^32 register files, and (iii) the word addressable physical memory pm.m ∈ PMA → B^32. The computation is possible in two modes: user and system. In user mode a memory access to a virtual address va is subject to address translation. It either redirects to the translated physical memory address or generates a page fault interrupt which signals that the desired page is not in the physical memory. The decision is made by examining the valid bit v(pm, va) maintained by the memory management unit of the physical machine. When on, it signals that the page storing the virtual address va resides in the main memory; otherwise, it is on a hard disk.
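The physical machine configuration and the valid-bit check translate naturally into a data structure. A minimal Python sketch, purely illustrative: the paper's actual model lives in a theorem prover, and the page size and translation function here are assumptions, not the VAMP definitions:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalMachine:
    """Record pm = (pc, dpc, gpr, spr, m); bitvectors modelled as ints."""
    pc: int = 0
    dpc: int = 0                               # delayed program counter
    gpr: dict = field(default_factory=dict)    # general purpose registers
    spr: dict = field(default_factory=dict)    # special purpose registers
    m: dict = field(default_factory=dict)      # word-addressable memory
    valid: set = field(default_factory=set)    # pages resident in memory

P = 1024  # words per page (illustrative value, not from the paper)

def translate(pm, va):
    # Placeholder identity translation; the real MMU walks page tables.
    return va

def user_mode_read(pm, va):
    """Translated read: either a memory word or a page fault signal."""
    page = va // P
    if page not in pm.valid:        # valid bit off: page is on the disk
        return ("page_fault", page)
    return ("value", pm.m.get(translate(pm, va), 0))

pm = PhysicalMachine(valid={0})
pm.m[5] = 99
print(user_mode_read(pm, 5), user_mode_read(pm, 2048))
```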
The semantics of an uninterrupted execution is defined by the underlying instruction set architecture (ISA). On an interrupt signal, which could be internal or external, the machine switches to the system mode and invokes a special piece of software, an interrupt handler. Within the paper, we are interested in two particular kinds of interrupts: (i) page faults, and (ii) system call exceptions. A page fault is treated by the page fault handler, a routine which translates addresses and loads missing pages from a hard disk into the physical memory. Its implementation serves several purposes. For instance, it could be used to handle a page fault and to guarantee that no page fault will occur within a certain period in the future. The latter property is needed for the primitives; thus, they heavily call the handler (for details cf. Sect. 2.5). System call exceptions occur due to a special instruction, called the trap. It is used by an assembler programmer in order to invoke one of the system calls provided by the operating system microkernel. System calls, viewed from a simplified perspective, are just wrappers around the primitives.
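The two interrupt kinds discussed above suggest a simple dispatch shape. A hedged sketch, with made-up handler names and primitive table: the actual CVM dispatch is specified formally in the model, not as code like this:

```python
# Illustrative interrupt dispatch for the two cases discussed in the text:
# a page fault goes to the page fault handler; a trap goes to a system
# call wrapper that invokes the corresponding primitive.

PRIMITIVES = {0: lambda args: ("copy", args)}  # hypothetical primitive table

def page_fault_handler(page):
    return ("loaded", page)  # stand-in for the real swap-in routine

def handle_interrupt(kind, payload):
    if kind == "page_fault":
        # Load the missing page from disk so the access can be retried.
        return page_fault_handler(payload)
    if kind == "trap":
        # A system call is a thin wrapper around a primitive.
        number, args = payload
        return PRIMITIVES[number](args)
    raise ValueError(f"unhandled interrupt kind: {kind}")

print(handle_interrupt("trap", (0, [1, 2])))  # ('copy', [1, 2])
```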
Virtual Machines

Virtual machines are the hardware model visible to user processes. They give the user the illusion of an address space exceeding the physical memory. No address translation is required, hence page faults are invisible. The virtual machine's parameters are: (i) the number TVP of total virtual memory pages, which defines the set of accessible virtual memory word addresses VMA = {a | 0 ≤ a < TVP · P} ⊆ B^30, and (ii) the set SAV ⊆ SAP of special purpose register addresses visible to virtual machines. Their configuration, formally, is a record vm = (pc, dpc, gpr, spr, m) where only vm.spr ∈ SAV → B^32 and vm.m ∈ VMA → B^32 differ from the physical machines. The semantics is completely specified by the ISA with the following exception. For safety reasons we split the set SAV into two parts: (i) the set SAV_R of read-only register addresses, and (ii) the set SAV_W of addresses of registers that can be fully accessed by a user. A write attempt to a register vm.spr[x] with x ∈ SAV_R has no effect. The set SAV_R contains the register ptl (page table length). It stores the amount of virtual memory allocated to the process, measured in pages. We abbreviate vm.spr[ptl] = vm.ptl.
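The SAV_R / SAV_W split above, under which a user write to a read-only register such as ptl has no effect, can be captured in a minimal sketch. The register names sr and esr and the dict representation are assumptions of the sketch; the paper addresses registers by bit strings.

```python
# Sketch of the read-only / writable special purpose register split.
# Register names other than ptl are illustrative only.
SAV_R = {"ptl"}            # read-only for user processes (page table length)
SAV_W = {"sr", "esr"}      # illustrative writable registers

def write_spr(spr: dict, x: str, value: int) -> dict:
    """Return the register file after a user write attempt to spr[x]."""
    if x in SAV_R:
        return spr                 # write attempt has no effect
    new = dict(spr)
    new[x] = value
    return new

spr = {"ptl": 16, "sr": 0}
spr = write_spr(spr, "ptl", 999)   # ignored: ptl is read-only
spr = write_spr(spr, "sr", 1)
print(spr)  # {'ptl': 16, 'sr': 1}
```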
Integrating a Hard Disk

We use a formal model of a hard disk based on the ATA/ATAPI protocol. We denote the configuration of the hard disk by hd. Only the component hd.sm ∈ N_{2^30} → N_{2^32}, which models the disk content as a word-addressable memory, is of interest here. A step of the system (pm, hd), comprising the physical machine and the hard disk, is denoted by the function δ(pm, hd) = (pm′, hd′). If no write instruction to the disk is executed, only the physical machine is updated according to its semantics. Otherwise, both pm and hd are changed.

C0 Machines

A C0 machine is a record c = (pr, tt, ft, rd, lms, hm). Its components are: (i) the program rest c.pr, a sequence of statements which still has to be executed, (ii) the type table c.tt, which collects information about the types used in the program, (iii) the function table c.ft, storing information about the functions of the program, (iv) the recursion depth c.rd, (v) the local memory stack c.lms, mapping numbers i ≤ c.rd to memory frames, which implement a relatively low-level memory model and comprise components for the number of variables in a frame, their names, types, and contents, and (vi) the heap memory c.hm, which is a memory frame as well.

The global memory of a C0 machine c is c.lms(0). The top local memory frame is denoted by top(c) = c.lms(c.rd). A memory frame first includes the parameters of the corresponding function. A variable of a machine c is a pair (m, i), where m is a memory frame and i is the number of the variable in the frame. By va(c, i) = (top(c), i) we denote the i-th variable of the current function context.
Communicating Virtual Machines

The CVM configuration is formally a record cvm = (up, ak, cp) with the following components: (i) the list of N user processes cvm.up ∈ C_vm^N, represented by virtual machines, (ii) the abstract kernel cvm.ak ∈ C_c, modeled by a C0 machine, and (iii) the current process identifier cvm.cp ∈ {0, …, N}, where cvm.cp = 0 stands for the kernel. The CVM semantics distinguishes user and kernel computations. In case cvm.cp ≠ 0 the user process pid = cvm.cp is intended to make a step. In case no interrupt occurs, this boils down to a step of the virtual machine cvm.up[pid]. Otherwise, the kernel dispatcher is invoked and the kernel computation starts. The kernel dispatcher handles possible page faults and determines whether a primitive f is meant to be executed. In case it is, the parameters p_f of the primitive are extracted by means of the system call mechanism. The specification f_S is applied to the user processes: cvm′.up = f_S(cvm.up, p_f). Next, the user computation resumes.

Correctness Criteria

Microkernel correctness requirements have to relate: (i) the implementation of the kernel lower layers, encoded by the C0 machine c, (ii) the CVM model cvm, and (iii) the physical machine with the hard disk (pm, hd).

The implementation c is related to the CVM model by means of linking. We use the formal specification of the linking operator link(cvm.ak, c) = k. It takes two C0 machines, encoding the abstract kernel and the implementation of its lower layers, respectively, and produces the concrete kernel k, also a C0 machine. We state that the concrete kernel k correctly runs on the physical machine pm by means of the C0 compiler consistency relation (cf. Sect. 3.2).
The correctness criterion for the user processes is hidden inside the memory virtualization relation. This simulation relation, called the B-relation, specifies a parallel execution of the user processes cvm.up on one physical machine pm. In order to specify the B-relation, let us first give the notion of a process control block (PCB). The PCBs are C0 data structures permanently residing in the memory of the underlying physical machine. They store the information about the visible registers of all user processes. Thus, we are able to reconstruct user virtual machines from the contexts stored in the PCBs. The function virt(pid, pm, hd) = vm yields the virtual machine for the process pid by taking the register values from the corresponding PCB fields. The memory component vm.m of the built virtual machine is constructed out of the physical memory and the data on the hard disk, depending on where a certain memory page lies:

vm.m(a) = pm.m(pma(pid, a)) if the page of a resides in the physical memory, and vm.m(a) = hd.sm(sma(pid, a)) otherwise.

The physical memory address is computed by the function pma(pid, a), while the swap memory address is yielded by the function sma(pid, a) (for the definitions cf. [1,6]). Then, the B-relation is defined formally as follows:

B(cvm.up, pm, hd) ⟺ ∀pid ∈ [1 : N) : cvm.up[pid] = virt(pid, pm, hd).

There is a number of additional correctness demands omitted due to space limitations.

A Page Fault Handler

The B-relation can only be maintained with an appropriate page fault handler. The page fault handler is a routine which serves two purposes. Called for a virtual address va and a process identifier pid, it (i) yields to the caller the translated physical memory address pma(pid, va), and (ii) guarantees that the page storing pma(pid, va) resides in the physical memory of the machine running the handler.
Possibly called twice in a primitive in order to translate addresses for different processes, the handler must respect the following: an appropriate page fault handler must not swap out the memory page that was swapped in during a previous call to it. In order to guarantee this, a proper page eviction strategy must be used. We maintain two lists, called active and free, for the page management. Together they describe all pages of physical memory accessible to a user. Items of the free list describe the pages that could immediately be given to a user, i.e., without swapping out a page to the hard disk. The active list describes physical pages that store a virtual page. When all physical memory is occupied, a page from the active list is evicted and replaced by the one loaded from the hard disk, according to the FIFO strategy. For formal details and correctness issues cf. [1].

Verification Approach

There are several possibilities to argue about the correctness of the kernel lower layers, and in particular of the primitives. One might have the idea to reason about their object code in the machine language semantics. Due to the huge size of the target code (the kernel lower layers translated by the C0 compiler are 11K lines long) this approach seems to be unfeasible for interactive verification.
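The active/free list bookkeeping with FIFO eviction described above can be sketched as follows. The actual swapping of page contents to the hard disk is abstracted away; only the list management follows the text, and the function name get_page is our own.

```python
# Sketch of page management with an active and a free list and FIFO eviction.
# Swapping page contents is omitted; only the bookkeeping is modeled.
from collections import deque

def get_page(free: deque, active: deque, vpage):
    """Return (physical_page, evicted_vpage_or_None) for a faulting vpage."""
    if free:                               # a page can be given out immediately
        ppage = free.popleft()
        active.append((ppage, vpage))
        return ppage, None
    ppage, victim = active.popleft()       # FIFO: evict the oldest active page
    active.append((ppage, vpage))
    return ppage, victim

free, active = deque([0, 1]), deque()
print(get_page(free, active, "A"))  # (0, None)
print(get_page(free, active, "B"))  # (1, None)
print(get_page(free, active, "C"))  # (0, 'A') -- A was swapped in first
```

Note how FIFO makes the handler safe for a primitive's double call: the page swapped in by the first call becomes the newest active entry, so it is the last candidate for eviction.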
At the other extreme, one can try to verify system code on a very high level of abstraction, e.g., by means of a generic verification environment for imperative programs [11], and then transfer the results down to the necessary level by introducing refinement theorems. However, techniques that allow reasoning about a mixture of C and assembler code in such environments were only recently invented (the approach is used in [1]). They basically aim at big C programs with assembler portions isolated in separate functions. Since this is not the case for the primitives (they are relatively small C functions which can have several inline assembler parts) we decided to do the formal verification in a mixture of the C0 small-step and inline assembler semantics.

Verification Environment

We use the Isabelle/HOL theorem prover as the basis for the verification environment. All the concepts and their semantics listed in Sect. 2.3 were formalized in Isabelle by colleagues in the Verisoft project. The source code of the primitives is automatically translated by a tool into the C0 small-step semantics in Isabelle.
C with Inline Assembler Semantics

A C0 configuration c is related to the underlying physical machine pm by the compiler simulation relation consis(alloc)(c, pm), parameterized over an allocation function alloc which maps C0 variables to physical memory cells. Essentially, the relation is a conjunction of the following facts: (i) value consistency: the respective variables of c and pm have the same values and the reachable portions of the heaps in c.hm and pm.m are isomorphic, (ii) control consistency: the delayed program counter pm.dpc points to the start of the translated code of the first statement of c.pr and pm.pc = pm.dpc + 4, (iii) code consistency: the compiled code lies at the correct address in the memory pm.m, and (iv) stack consistency: the heap resp. stack pointers, which reside in the registers pm.gpr[29] resp. pm.gpr[30], point to the first free address of c.hm resp. to the beginning of top(c). For details cf. [7].

An assembler instruction list il can be integrated into the C0 code by a special statement asm(il). As long as no such statement occurs, the C0 semantics is applied. An earlier approach to the verification of an assembler statement is to maintain the compiler consistency relation after the execution of every single instruction from il (cf. Sect. 4.3 of [3]). This method turned out to be inconvenient due to the excessive complexity of the formal proofs; therefore a new one was developed and used.

In brief, the novel approach is as follows. On an assembler statement the execution is switched to the consistent underlying physical machine and continues directly there. When the assembler instructions have been executed, we switch back to the C0 level. For this we have to update the C0 machine possibly affected by the assembler instructions. The allocation function alloc makes it possible to determine which variables of the C0 machine have changed. We retrieve their values from the physical machine and write them back to the C0 memory configuration.
Let c be the C0 configuration with c.pr = asm(il); r, and let pm be the physical machine consistent with c w.r.t. the allocation function alloc, i.e., consis(alloc)(c, pm). From the consistency relation we have that the program counters of pm point to the address of the assembler statement: pm.dpc = ad(asm(il)) and pm.pc = pm.dpc + 4, where ad(s) yields for a statement s its address in the memory of pm. This allows us to start reasoning about the correctness of the assembler code il directly in the semantics of the physical machine. Let pm′ be the physical machine's configuration after executing il. In order to formally specify the effect of an execution of asm(il) on the C0 machine c, we define the function upd(c, pm, pm′) = c′ which analyzes the difference between pm and pm′ and projects it to the C0 level, updating the configuration c to c′ (cf. Fig. 1). A number of restrictions are imposed on the changes in the physical machine, which guarantee that the C0 machine is not destroyed by the assembler portion il, namely: (i) the program pointers after the execution of il point to the end of il: pm′.dpc = pm.dpc + 4 · |il|, (ii) the memory region where the compiled code is stored stays the same, i.e., we forbid self-modifying code, (iii) the stack and heap pointers are unchanged: pm′.gpr[x] = pm.gpr[x] for x ∈ {29, 30}, (iv) the memory occupied by the local memory frames remains the same except for top(c), and (v) changing pointers is forbidden, except for setting them to null. We formally prove that we deal with assembler portions which meet these restrictions.
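The projection performed by upd(c, pm, pm′) can be sketched over dict-based memories: compare the two physical memories and write every changed cell back to the C0 variable that alloc maps there. The flat variable environment and the reverse lookup of alloc are our simplifications of the formal definition, which updates the global, top local, and heap memories separately.

```python
# Sketch of upd(c, pm, pm'): project changed physical memory cells back to
# the C0 variables via the allocation function. Representation is illustrative.
def upd(c_vars: dict, alloc: dict, m_before: dict, m_after: dict) -> dict:
    """alloc maps variable name -> address; returns the updated C0 variables."""
    addr_to_var = {a: x for x, a in alloc.items()}
    new_vars = dict(c_vars)
    for a, value in m_after.items():
        if m_before.get(a) != value and a in addr_to_var:
            new_vars[addr_to_var[a]] = value   # project the change to C0 level
    return new_vars

alloc = {"x": 100, "y": 104}
before = {100: 1, 104: 2, 200: 9}
after = {100: 1, 104: 7, 200: 9}    # the assembler portion changed address 104
print(upd({"x": 1, "y": 2}, alloc, before, after))  # {'x': 1, 'y': 7}
```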
The program rest is updated straightforwardly: the assembler statement is removed, i.e., c′.pr = r. The memory update proceeds separately for the global, the top local, and the heap memories. For each of them the respective memory cells of the physical machine configurations pm and pm′ are compared. In case a memory cell at an address a has changed, the value of the variable x s.t. alloc(c′, x) = a is updated with pm′.m[a]. However, the compiler correctness relation does not necessarily hold between the C0 configuration c′ and the physical machine pm′. The control consistency will be broken if the assembler statement asm(il) is either (i) the last statement of a loop body, or (ii) the last statement of the 'then' part of a conditional statement. The translation of these statements to the target code results in a trailing list of assembler instructions which has to be executed by the machine pm′ in order to reach a state consistent with c′. Note that this list contains only control instructions and hence does not affect any C0 variables. Executing it, we transit from pm′ to pm″, updating the program counters, and regain the consistency consis(alloc)(c′, pm″).
Correctness of a Primitive

Since primitives are parts of the microkernel, their correctness is closely related to the correctness of the whole kernel. Execution of a primitive is one of the induction cases of the overall kernel correctness theorem [6]. We distinguish two main theorems for each primitive: (i) the primitive's functional correctness, and (ii) the top-level correctness of the primitive. The latter is used to prove the induction case of the overall kernel correctness theorem and, therefore, claims the correctness criteria needed for the integration, for instance that the abstract kernel data is not corrupted. The former is used as an auxiliary theorem to prove the latter. It states the correctness of the input-output relation of a primitive call. Such modularization increases the robustness of the formal theories to possible code changes, e.g., due to errors disclosed during the verification. In this case, one has to adapt only the proofs of the first theorem, which is much simpler than the second one. Next, we present the general idea behind these theorems and discuss their formal proofs.

Functional Correctness

The functional correctness justifies the input/output relation of a primitive. We start in some C0 state k encoding the concrete kernel and consistent with the underlying physical machine pm, and claim the requirements pre_f(k, pm) on the caller of a primitive f. We end in the resulting state, obtaining the desired values post_f(k′, pm′) of C0 variables and memory cells of the physical machine. Note that both pre- and postconditions, in general, speak not only about the values of C0 variables, but also about the memory parts of the underlying machine which are not accessible via variables but are subject to change by inline assembler code. The straightforward idea of the functional correctness is reflected in the next theorem.
Theorem 1 (Functional Correctness of a Primitive). Let k be the concrete kernel calling the primitive f with the parameters p_f: k.pr = f(p_f); r. Let (pm, hd) be the configuration of the underlying physical machine with the hard disk, s.t. it is consistent with the concrete kernel: consis(alloc)(k, pm). Assume that the precondition pre_f(k, pm) of the primitive is satisfied. Then there exist (i) a number of steps T of the physical machine with the hard disk, s.t. (pm′, hd′) = δ^T(pm, hd), and (ii) a configuration of the concrete kernel k′ with an appropriate allocation function alloc′, s.t. they are consistent with the physical machine: consis(alloc′)(k′, pm′), and the desired postcondition post_f(k′, pm′) holds.

In our experience it is inconvenient to prove such theorems directly. We rather create several separate lemmas of the same form but speaking about the code in different semantics. For example, if a primitive contains a number of assembler instructions wrapped both at the beginning and at the end in C0 code, we create three lemmas: one for the C0 part before the assembler, one for the assembler portion, and, finally, one for the remaining C0 part. This simple idea is easily scalable to arbitrary combinations of C and assembler. We prove such lemmas by applying the C0 and inline assembler semantics. The crucial point is the construction of a consistent C0 machine after the execution of the assembler part. We proceed as described in Sect. 3.2.
The verification proceeds, certainly, with respect to total correctness criteria, i.e., we show termination and the absence of run-time errors. The machinery for this is hidden inside the C0 small-step semantics. The set C_c of all possible C0 configurations is represented formally in Isabelle by the option type, which extends C_c with an additional error state. The semantics is constructed in such a way that the computation ends in a non-error state only in case no run-time errors occur. We formally show that the resulting configuration of a primitive call is not the faulting state. We do this in an iterative fashion, i.e., we show that the execution of every single statement yields some sensible configuration. This can happen only in case all expressions of the statements are correctly evaluated. The correctness demands on the expression evaluation force us to show formally that neither a null-pointer dereference nor an out-of-boundary array access happens. This also proves the termination of single statements. In order to guarantee the termination of a whole program, provided that its statements terminate, we have to show that neither infinite loops nor infinite recursive calls occur. Since we do not use recursion in the kernel implementation, we pay attention only to loops. Their termination is closely related to the way loops are verified in the C0 semantics. The correctness of a loop is established by an inductive lemma. We formally specify the number of loop iterations by a ranking function over the variables modified in the loop. Since we proceed by induction on the result yielded by the ranking function, the termination follows. We give details in the example (cf. Sect. 5.3). The absence of run-time errors in assembler portions boils down to the absence of interrupt conditions, which is required to be proven by the inline assembler semantics. The termination of assembler loops is proven analogously to C0 loops. The correctness criteria needed for the integration
are, basically, split into two parts: (i) the kernel correctness requirements stated in Sect. 2.4, and (ii) the kernel invariant, which turns out to be necessary to prove first. The kernel invariant inv(cvm, k, pm, hd) is the conjunction of the following statements: (i) the memory map properties, (ii) the page fault handler invariants, (iii) the validity of the C0 machine encoding the concrete kernel, and (iv) the hard disk properties and liveness requirements for the system 'hard disk - physical machine'.

Memory Map Properties

The kernel code has a particular alignment in the memory. Its data structures lie both in the global and the heap memories. For safety reasons we must know these regions, and know which of their parts could be changed with every step of the kernel, for instance with the execution of a primitive. Fig. 2 depicts the memory structure which we describe formally.

Page Fault Handler Invariants

As mentioned in Sect. 2.5, the page fault handler is (heavily) called by the kernel. The handler maintains a variety of global data structures, in particular the lists for page management. Therefore, we must claim that no functions besides the page fault handler are allowed to modify its data structures. Due to the complexity of the page fault handler, its verification is attempted by means of the refinement technique, which connects its representations on several semantical layers. In order to support that approach, we formally preserve: (i) the mapping between the implementation of the kernel lower layers c, which contains the handler, and the PFH abstraction, and (ii) the validity properties over the handler abstraction. The page fault handler properties relevant for the correctness of the primitives comprise: (i) for distinct pairs (pid_1, va_1) ≠ (pid_2, va_2) the translated physical addresses are distinct: pma(pid_1, va_1) ≠ pma(pid_2, va_2), (ii) every physical address is associated with exactly one pair (pid, va), and (iii) all translated addresses lie
outside the kernel range: ∀(pid, va) : pma(pid, va) ∉ [0 : KERNEL_END).

Next, we present the top-level correctness theorem of a primitive execution. It turns out that its proof requires several static properties pro(cvm.ak, c) over the abstract kernel and the implementation of the kernel lower layers. They are the necessary preconditions for correct linking and state, not exclusively, the following: (i) the function tables cvm.ak.ft and c.ft encode the same function signatures, (ii) all external function declarations in cvm.ak.ft have an implementation in c.ft and vice versa, and (iii) the type tables cvm.ak.tt and c.tt encode the same types.

Theorem 2 (Top-level Correctness of a Primitive). Let k be the concrete kernel calling the primitive f with the parameters p_f: k.pr = f(p_f); r. Let cvm be the configuration of the CVM model, and (pm, hd) be the configuration of the underlying physical machine with the hard disk. Assume that (i) the concrete kernel is consistent with the physical machine: consis(alloc)(k, pm), (ii) the relation B(cvm.up, pm, hd) holds, (iii) the preconditions pre_f(k, pm) of the primitive are satisfied, (iv) the kernel invariant inv(cvm, k, pm, hd) holds, and (v) the kernel static properties pro(cvm.ak, c) are satisfied. Then there exists a number of steps T of the physical machine with the hard disk, s.t. (pm′, hd′) = δ^T(pm, hd), after which (i) the CVM model executes the primitive and the relation B(f_S(cvm.up, p_f), pm′, hd′) still holds, (ii) the concrete kernel executes the primitive and is still consistent with the physical machine: ∃k′, alloc′ : consis(alloc′)(k′, pm′) ∧ k′.pr = r, and (iii) the kernel invariant is preserved: inv(cvm′, k′, pm′, hd′).
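The page fault handler properties above (injectivity of pma on (pid, va) pairs and translation outside the kernel range) admit a direct executable reading, sketched below. KERNEL_END, the toy pma, and the finite sample of pairs are assumptions of the sketch; the formal invariant quantifies over all pairs.

```python
# Executable reading (our sketch) of the page fault handler invariants:
# pma must be injective and must translate outside [0, KERNEL_END).
KERNEL_END = 0x1000  # illustrative kernel range bound

def pma_invariants(pma, pairs) -> bool:
    images = [pma(pid, va) for pid, va in pairs]
    injective = len(set(images)) == len(images)   # distinct pairs -> distinct pa
    outside_kernel = all(pa >= KERNEL_END for pa in images)
    return injective and outside_kernel

toy_pma = lambda pid, va: KERNEL_END + pid * 0x10000 + va
pairs = [(1, 0), (1, 4), (2, 0)]
print(pma_invariants(toy_pma, pairs))            # True
print(pma_invariants(lambda pid, va: 0, pairs))  # False: collides, kernel range
```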
Case Study: Copying Data Between Processes

As an application of the developed approach we show how we establish the correctness of the copy primitive. It is intended to copy n words from a process pid_1 at address a_1 to a process pid_2 at address a_2. In the context of an operating system it is widely used to implement process management routines, as well as IPC. The correctness is justified by the instances of Theorems 1 and 2, where f = copy and p_f = p_copy = (pid_1, pid_2, a_1, a_2, n).

Algorithm

Let copy_asm(pa_1, pa_2, s) be an assembler fragment that copies s words in the memory from a physical address pa_1 to pa_2. The algorithm behind the copy primitive is as follows. In a loop, until n words are processed, we compute the size s of the portion to be copied, respecting the page borders of both processes. The crucial observation is that both pages, from and to which we copy, must be present in the physical memory. This is achieved by two consecutive calls to the page fault handler, which compute the physical addresses pa_1 = pma(pid_1, a_1) and pa_2 = pma(pid_2, a_2) and guarantee that both pages containing pa_1 and pa_2 reside in the main memory. We proceed with the copying by executing copy_asm(pa_1, pa_2, s). The idea is depicted in Fig. 3.

Specification

The specification of the primitive has to reflect the changes on (i) the user processes cvm.up of the model, (ii) the concrete kernel k, and (iii) the underlying physical machine. The effect of the primitive execution on the model is given by the function copy_S(cvm.up, pid_1, pid_2, a_1, a_2, n) = cvm′.up, which updates the memory of the user process pid_2, i.e., the virtual machine cvm.up[pid_2]:

cvm′.up[pid_2].m(a_2 + 4i) = cvm.up[pid_1].m(a_1 + 4i) for 0 ≤ i < n, and the memory remains unchanged otherwise.
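The chunking loop of the copy algorithm above can be sketched as follows. The page-fault-handler calls that compute pma and pin both pages are omitted; only the computation of the portion size s, which never crosses a page border on either side, follows the text. PAGE_BYTES and the generator form are our own choices.

```python
# Sketch of the copy-primitive loop: split the n-word transfer into portions
# that respect the page borders of both source and destination addresses.
PAGE_BYTES = 4096  # bytes per page (illustrative)

def copy_chunks(a1: int, a2: int, n: int):
    """Yield (a1, a2, s) portions, s in words, respecting both page borders."""
    words_left = n
    while words_left > 0:
        room1 = (PAGE_BYTES - a1 % PAGE_BYTES) // 4   # words left on a1's page
        room2 = (PAGE_BYTES - a2 % PAGE_BYTES) // 4   # words left on a2's page
        s = min(words_left, room1, room2)
        # here the handler would pin both pages, then copy_asm(pa1, pa2, s) runs
        yield a1, a2, s
        a1, a2, words_left = a1 + 4 * s, a2 + 4 * s, words_left - s

chunks = list(copy_chunks(4088, 0, 10))
print(chunks)  # [(4088, 0, 2), (4096, 8, 8)]
```

The first portion stops after two words because the source address reaches a page border; the remainder is copied in a second portion.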
The result of copy_S is well-defined only if the preconditions pre_copy_S(cvm.up, pid_1, pid_2, a_1, a_2, n) are satisfied. Otherwise, the same trick as with C0 machines is used: the model state space C_cvm is extended with a single error state which signals, in particular, that the preconditions of a primitive are not justified. The validity requirements over a model run prevent error states. The predicate pre_copy_S encodes formally the following: (i) the amount to be copied is reasonable: n > 0, (ii) we copy between different processes: pid_1 ≠ pid_2, (iii) since the memories of virtual machines are word-addressable, the addresses a_1 and a_2 are divisible by 4, (iv) the process identifiers pid_1 and pid_2 lie in the interval [1, N), and (v) the virtual machines vm_1 = cvm.up[pid_1] resp. vm_2 = cvm.up[pid_2] have an amount of virtual memory storing resp. sufficient to store the desired portion, i.e., a_x/4 + n < vm_x.ptl · P for x ∈ {1, 2}.

Effects on the Implementation. The intended modifications of the physical machine pm, on top of which the concrete kernel k runs, are defined by the postcondition post_copy(k′, pm′). First, it claims the value of the result variable of the call. Next, it describes the changes in the physical memory of the updated machine pm′. Recall that a virtual address va of a process pid is translated to a physical one by means of the function pma(pid, va). Then, the step-by-step changes over the physical memory are:

pm′.m(pma(pid_2, a_2 + 4i)) = pm.m(pma(pid_1, a_1 + 4i)) for 0 ≤ i < n.

We also gain from methods of automated verification while proving the functional correctness of source code. We used the ML code generation mechanism for the proof of the microkernel source code well-formedness properties required by the C0 compiler correctness theorem. That saved about 1K proof commands. The next possible candidate for proof automation are the assembler portions. Due to the relatively simple finite memory model, it might be possible to obtain the values of desired memory cells by means of model checking. In order to ease the C0 part verification, one
can think of a Hoare logic environment for the C0 small-step semantics which will automatically generate the verification conditions to be proven.

Fig. 1. Updating a C0 configuration across an inline assembler portion: consis(alloc)(c, pm), upd(c, pm, pm′), consis(alloc)(c′, pm″).

Fig. 2. Memory structure of the microkernel claimed by the kernel invariant.

Table 1. List of primitives of the microkernel.
An image analysis toolbox for high-throughput C. elegans assays

We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available via the open-source CellProfiler project and enables objective scoring of whole-animal high-throughput image-based assays of C. elegans for the study of diverse biological pathways relevant to human disease.

Much progress has been made in automating the analysis of particular types of C. elegans experiments, such as those involving low-throughput, high-resolution, 3-D, or time-lapse images, or images of embryos [7-11]. However, there is still a strong need to automate the analysis of high-throughput, static images of adult worms in liquid culture, a common screening output. For most assays, the density of worms per microplate well causes the worms to touch or cluster, so that automated analysis has been limited to population-averaged measurements [12,13], hiding population heterogeneity and prohibiting measurements on individual animals. An alternative to microscopy is flow systems adapted for worms (e.g., COPAS, Union Biometrica), which measure length, optical density, and fluorescence emission at transverse slices along the length of individual worms. However, image-based screens have several benefits: they allow detection of more complex phenotypes by two-dimensional analysis of shape and signal patterns, and do not require re-suspension of worms in additional liquid prior to analysis, allowing smaller sample volumes and closed culture conditions, an important factor when screening large libraries of small molecules and RNAi clones, and when using pathogenic microbes.
Image-based screening also allows for visual confirmation of results, the images form a permanent record that can be re-screened for additional phenotypes, and low-throughput experiments require no more equipment than a microscope and a digital camera.

To improve C. elegans phenotype scoring from images of adult worms in liquid, we developed an image-analysis toolbox that can detect individual worms regardless of crossing or clustering. It can measure hundreds of phenotypes related to shape, biomarker intensity, and staining pattern in relation to the anatomy of the animals. A typical workflow starts with bright-field images (Fig. 1a). We pre-process to compensate for illumination variations, detect well edges, and binarize the image (Fig. 1b). The next step, and the major challenge, is "untangling", i.e., detecting individual worms among clustered worms and debris. To address this, we first construct a model of the variability in worm size and shape from a representative set of training worms (Fig. 1c). The model is then used to untangle and identify individual worms (Fig. 1d). A large number of measurements, such as size, shape, intensity, texture, and spot counts, can thereafter be made on a per-worm basis using all available image channels, as is common for cell-based assays [14]. Many phenotypes, such as spot area per animal, can be scored directly by such measurements; more complex phenotypes, such as subtle or complex changes in protein expression patterns, can be scored using a combination of measurements and machine learning [15]. If the location of a reporter signal is of interest, we map each worm to a low-resolution atlas, allowing quantification correlated to the worm's anatomy. We evaluated the untangling performance using images from our prior work [8], where 15 worms were placed in each well of a 384-well plate.
Approximately 1,500 worms from 100 wells were manually delineated, revealing that 46% of the worms were clustered or touching other worms (Supplementary Fig. 1). Compared to manual delineation, 51% of the worms were correctly detected with automated foreground-background segmentation followed by connected-component labeling. When applying the untangling algorithms of the WormToolbox, the performance increased to 81%, which proved sufficient for the assays presented here. The major source of error was poor image contrast close to well edges; performance improved to 94% when the foreground-background segmentation was manually corrected, decoupling errors caused by untangling from errors in the initial segmentation. We also tested the performance of the untangling in relation to the size of the training set, and found that performance plateaus using a worm model constructed from 50 randomly selected training worms. This means that training can be done on a relatively small number of samples representing the phenotypic variation of a given experiment (Supplementary Fig. 2).

We first evaluated the toolbox on data from a different laboratory and imaging system [13]. The challenge was to detect individual adult worms that were partly clustered and mixed with eggs and progeny. We trained the worm model on L4 and adult worms only, and observed that untangling improved the accuracy of finding individual adult worms as compared to thresholding and size-sorting alone (Fig. 1e and Supplementary Fig. 3). The model efficiently excluded smaller larvae (L1, L2, and L3) and eggs, and performance was relatively robust in the presence of up to 6-fold more progeny than adults (Supplementary Fig. 4). We also evaluated the performance of worm untangling as the number of worms per well increased. Wells contained either L1, L3, or adult worms at increasing concentrations, and we created a separate worm model for each developmental stage.
As expected, the performance was higher for the slightly smaller L3 worms as more space between worms leads to less clustering, but untangling became unstable when the worms were so small (L1) that the image resolution only allowed a few pixels per worm (Supplementary Figs. 5 and 6).

In the second assay, we evaluated the toolbox for scoring viability, which can be read out as a morphological phenotype in bright field images alone, without the need for a viability stain. Worms in liquid culture tend to be curved and evenly opaque when alive but become rod-shaped and textured when they die (Fig. 1f). We untangled high-throughput images of worms infected with Enterococcus faecalis and either mock-treated with DMSO or treated with ampicillin 12 . After making shape, intensity and texture measurements of each untangled worm, we manually selected 150 live and dead training examples from one 384-well plate. We thereafter used the gentle-boosting classifier of CellProfiler Analyst 15 (Supplementary Fig. 7) to identify a combination of measurements that discriminates live and dead worms. Finally, we applied the classifier to 1,500 worms from a different 384-well plate, and verified that it distinguished live and dead worms as well as humans can (Fig. 1g). To evaluate the performance of the viability scoring on more heterogeneous data from a real high-throughput experiment, we selected 1,766 random images and 200 hits from a 37,200-compound screen 12 and compared the automated scoring with visual scoring based on bright field images (Supplementary Fig. 8). We achieved an accuracy of 97% and a precision of 83%, indicating that morphology-based viability screening could be a feasible alternative to the viability stain (SYTOX) used in the original screen.

In the third assay, we evaluated how well the toolbox could differentiate between a positive and a negative control from an RNAi screen for regulators of fat accumulation 16 .
The positive control down-regulated daf-2, and the negative control was an empty vector. We compared two different approaches for pattern quantification: per-well measurements (using the basic functionality of CellProfiler), where no effort was made to assign fatty regions to individual worms, yielded a false discovery rate (FDR) of 22.2% (Supplementary Fig. 9), whereas per-worm measurements (using the untangling functionality of the WormToolbox) yielded an FDR of 4.5% (Supplementary Fig. 10). The per-worm measurements were superior because they captured the heterogeneity of the population, which was lost in the population averages from per-well measurements.

Finally, we evaluated the toolbox's ability to detect worms with a change in the location of GFP expression (Fig. 2a). We used a C. elegans strain where GFP expression in the intestine is under the control of a promoter that responds to Staphylococcus aureus infection 17 . A pharyngeal stain (mCherry) served as an internal control. The assay could not be scored using simple approaches, such as measuring the total intensity of GFP expression per well or per worm, or counting the number of GFP spots (Supplementary Fig. 11). However, using worm straightening (Fig. 2b) and our atlas mapping (Fig. 2c), we were able to quantitatively detect elevated expression of clec-60::GFP in the anterior intestine (Fig. 2d) and separate positive and negative controls with a Z'-factor of 0.21. Here we focused on the location of signal along the length of the worm, but asymmetric signal distribution across the width of the worm (e.g., fluorescence in the full worm as compared to only eggs, or only gut) could also be discerned, using the outline of the worm as a spatial reference for the atlas.

The WormToolbox is the first system to automatically, quantitatively, and objectively score a variety of phenotypes in individual C. elegans in static, high-throughput images.
The toolbox is implemented as modules for the open-source CellProfiler 14,18 software, emphasizing ease-of-use, is compatible with cluster computing to speed analysis, and is flexible to new assays developed by the scientific community. Training the worm model takes less than an hour, and once an image analysis pipeline is set up for an assay, a typical analysis takes 10-30 s per image; much less if a computing cluster is available. The performance of the WormToolbox depends on the contrast between worms and the surrounding background, making it sensitive to large variations in background illumination and to the worm-like tracks sometimes formed when growing worms on agar medium. The WormToolbox can handle images of worms on agar in large plates, but further optimization is needed for worms on solid medium in 384-well plates. In liquid culture, the untangling can handle up to 20 adult worms per well in 384-well format, and is designed to detect worms of the size and shape range of the training worms used to create the worm model. Unexpected phenotypes are likely to be discarded as debris, but wells with a low fraction of correctly detected worms may be flagged for visual examination. In future work we will extend the WormToolbox by adding further worm-specific measurements based on their unique anatomy and better handling of mixed worms at various stages of development.

Online methods

The open-source code of the CellProfiler WormToolbox algorithms described here is available as Supplementary Software 1 and for download at http://www.cellprofiler.org. Example pipelines for worm model training, worm untangling, and feature extraction are available as Supplementary Software 2, and on the CellProfiler website. Compiled versions of the code and updates are available at http://www.cellprofiler.org. Here we describe the steps of the workflow as well as our four sample assays. Instructions for how to get started using the WormToolbox are provided in Supplementary Methods 1.
Compensating for uneven illumination

Uneven illumination often distorts bright field microscopy images of the multi-well plates typically used in high-throughput chemical and genetic screens, making foreground-background intensity thresholding difficult. Our novel approach for approximating background illumination and well edge position is based on the convexity of both the well and the illumination field (Supplementary Fig. 12). The algorithm is as follows: choose 256 evenly spaced intensity levels between the minimum and maximum intensity of the image. Starting from the lowest intensity, for each intensity, find all pixels with equal or higher intensity. Find the convex hull that encloses those pixels, set the pixels of the output image within the convex hull to the current intensity, and continue to the next intensity level. If the well edges are dark and the well has a convex shape, this approach removes the well edge and compensates for uneven illumination without the need for any input parameters, making it robust to the variations often present in high-throughput experiments. The final result is thresholded using Otsu's 19 method, resulting in a binary image that serves as input for the worm untangling step.

Worm detection

Following illumination correction and thresholding, we create a mathematical description of each worm cluster (Supplementary Fig. 13). We reduce each binary object to its morphological skeleton and let each segment of the skeleton represent a worm segment, and each branch point represent a point where worms touch or intersect. This way, the segments and branch points comprising the worm cluster can be described as a mathematical graph, and untangling becomes a search for paths through the graph that are likely to represent complete worms. More precisely, we search for the ensemble of paths through a cluster that best represents the true worms as compared to a worm model, limiting worm overlap and maximizing cluster coverage.
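The convex-hull background approximation from the "Compensating for uneven illumination" step above can be sketched in a few lines. This is a minimal sketch: the subtraction-based correction and thresholding the dark side of the Otsu cut are assumptions, since the text does not spell out exactly how the approximated background is combined with the original image.

```python
import numpy as np
from skimage.morphology import convex_hull_image
from skimage.filters import threshold_otsu

def approximate_background(image, n_levels=256):
    """Approximate the illumination field by stacking convex hulls.

    For each of n_levels intensities (ascending), all pixels at or above
    that intensity are found, and output pixels inside their convex hull
    are set to that intensity. Convexity of the well and the illumination
    field makes the procedure parameter-free.
    """
    levels = np.linspace(image.min(), image.max(), n_levels)
    background = np.full(image.shape, levels[0])
    for level in levels[1:]:
        mask = image >= level
        if mask.sum() < 3:                 # a convex hull needs >= 3 pixels
            break
        background[convex_hull_image(mask)] = level
    return background

def segment_worms(image):
    """Illumination-correct (assumed here: background subtraction) and
    Otsu-threshold to get the binary image used for worm untangling;
    worms are darker than the background in bright field images."""
    corrected = image - approximate_background(image)
    return corrected < threshold_otsu(corrected)
```

On a synthetic bright-field-like image (illumination gradient plus a dark object), the background estimate tracks the gradient, so the dark object stands out cleanly after subtraction.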
Our advancements as compared to our previous work 20-21 are described in the next three sections.

Worm model construction and shape cost

The worm model is created from a comprehensive set of non-touching training worms essentially as in our prior work 20 , here with the shape descriptor based on angles rather than spatial coordinates. We sample equidistant control points along the morphological skeleton of each training worm using cubic spline interpolation. Each of the control points other than the first and last is at the vertex of an angle formed by the lines from its predecessor and successor. These angles and the path's length form a feature vector functioning as our shape descriptor. Worm width, length, and area are also extracted, and we make the training set symmetric by mirroring all samples along the x- and y-axes. The shape cost of a path potentially representing a worm in a cluster is given by the dot product of the feature vector describing the path and a cross-correlation matrix derived from the training data. Note that we are only looking at overall body shape when training the model, which does not vary as much as other features of worms such as fluorescent markers or bright field stains and texture. The training worms should represent the worms in the data set, and the variation in size and shape must be within certain limits; for example, a variation in length by a factor of 2 might cause the untangling step to divide some long worms in half or exclude some short worms as debris. Any worms that deviate in shape and posture from that expected by the model will be discarded as debris. It is therefore feasible to flag wells with few detected worms as compared to foreground pixels so they can be screened visually (or with an improved worm model), to detect unexpected body shape or size phenotypes (or failures in worm detection due to large amounts of debris or other problems).
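A minimal sketch of the angle-based shape descriptor and a quadratic shape cost follows. The turning-angle convention (zero for a straight worm), the reading of the "cross-correlation matrix" as an inverse covariance (a Mahalanobis-style cost), and the function names are assumptions; the toolbox's exact formulation is only loosely described above.

```python
import numpy as np

def shape_descriptor(control_points):
    """Angle-based shape descriptor for a candidate worm path.

    control_points: (n, 2) array of equidistant points sampled along the
    skeleton. Every interior point is the vertex of an angle formed with
    its predecessor and successor; those angles plus the total path
    length form the feature vector.
    """
    p = np.asarray(control_points, dtype=float)
    v_in = p[1:-1] - p[:-2]              # vectors from each predecessor
    v_out = p[2:] - p[1:-1]              # vectors to each successor
    cos = (v_in * v_out).sum(axis=1) / (
        np.linalg.norm(v_in, axis=1) * np.linalg.norm(v_out, axis=1))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    length = np.linalg.norm(np.diff(p, axis=0), axis=1).sum()
    return np.concatenate([angles, [length]])

def train_worm_model(training_descriptors):
    """Mean descriptor and (pseudo-)inverse covariance from non-touching
    training worms; mirrored samples would be added here in the toolbox."""
    X = np.asarray(training_descriptors, dtype=float)
    return X.mean(axis=0), np.linalg.pinv(np.cov(X, rowvar=False))

def shape_cost(descriptor, mean, inv_cov):
    """Quadratic cost of a candidate path against the worm model; low
    cost means worm-like shape."""
    d = descriptor - mean
    return float(d @ inv_cov @ d)
```

For a perfectly straight path, all angle components are zero and the last component is the path length; a path identical to the model mean has zero cost.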
It is also worth noting that, due to similarity in worm size and shape, we were able to re-use the worm model created for the second assay in both the third and fourth assays, which consisted of images captured over several years and on different microscope systems.

Preprocessing of cluster skeletons

Artifacts appear when two or more adjacent worms form regions wider than the worm width, resulting in a skeleton no longer centered on a true worm. In the two-worm case (Supplementary Fig. 14), the skeleton is composed of the two segments that enter the area where the worms touch, the two segments that leave the area, and a single segment running the length of the area where the worms touch. To improve alignment of the segmentation result with the true worms we introduce a preprocessing step: touching areas are defined by a circular structuring element whose diameter is the maximum width of a worm. All skeleton ends adjacent to the area are connected with new paths, and the best paths are selected by the path search described below. To improve worm detection in cases where two worms touch end-to-end without producing a branch point, we add branch points at an average worm's length starting from each endpoint of every skeleton segment longer than the longest training worm.

Worm untangling by path search

Once the skeleton has been preprocessed, we consider the combined cost of different ensembles of paths representing worms. Conceptually, the algorithm is composed of three steps: enumeration of paths, calculation of costs of individual paths, and calculation of costs of ensembles of paths. The first step is to generate all paths whose lengths are between the minimum and maximum acceptable length, as defined by the worm model, and to discard a path if its shape cost exceeds the maximum acceptable cost.
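The path-enumeration step just described can be sketched as a depth-first search over skeleton segments. The graph encoding (`segments` as id -> (node, node, length)) is an assumed simplification; the shape-cost pruning mentioned above would be applied on top of the length filter.

```python
def enumerate_paths(segments, min_len, max_len):
    """Enumerate candidate worm paths through one cluster skeleton.

    segments: dict segment_id -> (node_a, node_b, length), where nodes
    are branch points or skeleton endpoints. Returns the set of
    segment-id tuples whose summed length lies within [min_len, max_len].
    """
    adjacency = {}
    for seg_id, (a, b, length) in segments.items():
        adjacency.setdefault(a, []).append((seg_id, b, length))
        adjacency.setdefault(b, []).append((seg_id, a, length))

    paths = set()

    def dfs(node, used, total):
        if used and min_len <= total <= max_len:
            paths.add(tuple(sorted(used)))   # canonical order dedups A->B vs B->A
        if total >= max_len:
            return                            # no admissible extension left
        for seg_id, nxt, length in adjacency[node]:
            if seg_id not in used:
                dfs(nxt, used | {seg_id}, total + length)

    for start in adjacency:
        dfs(start, frozenset(), 0.0)
    return paths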
There are three parts to the cost of a particular ensemble of paths: the sum of costs of the individual paths in the ensemble, a penalty cost that is proportional to the length of all segments that are shared by paths in the ensemble and a penalty cost that is proportional to the length of all segments that do not appear in any path in the ensemble. Details on the algorithm are described in Supplementary Methods 2, and the open source of the code (www.cellprofiler.org). Straightening and atlas-based feature extraction To extract reporter signal location, worms are transformed to a straight shape by re-sampling the image data along lines perpendicular to the central axis of the worm, much as previously described 21 . However, here we make the processing much faster by re-using the spline function describing the path through the worm during untangling. If a head-or tail-specific marker is available, worms flipped so that they all have an intensity distribution skewed in the same direction. Our low-resolution worm atlas consists of a user-defined number of transversal and longitudinal segments. Intensity mean and standard deviation is extracted from each sub-segment in any number of image channels. Evaluation of worm detection We evaluated the performance of the untangling on 100 bright field images from our published high-throughput experiment 12 . Each image of a well from a 384-well plate contains approximately 15 worms. Ground-truth was created by manually delineating all worms and saving them as individual binary masks to enable evaluation of worm detection, clustering, and overlap (provided though www.broadinstitute.org/bbbc). In this data set, 46% of the worms touch or overlap, with most of the worms in clusters of two ( Supplementary Fig. 1). 
We calculated accuracy, precision, recall and F-factor for individual worms, and a threshold on F-factor of 0.8 yields 81% correctly segmented worms by automated foreground-background segmentation followed by untangling, and 94% correct segmentation if the foreground-background segmentation was manually corrected before untangling. If worms are defined by conventional intensity thresholding and connected component labeling, only 51% of the worms are correctly segmented. The performance is generally higher for smaller clusters (Supplementary Fig. 2). Performance is also affected by the size of the training set, and plateaus at about 50 training worms. Processing speed has been reduced by about ten-fold as compared to our previously published implementation 17 , and all steps combined, including image preprocessing, worm untangling and straightening, typically take less than 10s. Assay 1: Finding individual adult worms in the presence of eggs and progeny Images were kindly provided by Gosai et al. 13 Briefly, C. elegans were cultured at 22 °C on nematode growth medium (NGM) plates seeded with E. coli strain OP50. Next, 36 animals, with a predetermined percentage (0, 25, 50, 75 and 100%) of adult worms, were dispensed into each well of a 384-well plate. Images were acquired on the ArrayScan V TI HCS Reader (Cellomics, ThermoFisher) fitted with a 2.5x objective and a 0.63x coupler. We compensated for uneven background illumination using the convex-hull approach and identified objects by automated intensity thresholding. We constructed a worm model from the non-touching, adult worms, which we identified based on their areas and maximum widths. We thereafter untangled all worms ( Supplementary Fig. 3) and counted adult worms per image (Fig. 1e). Next we tested the limit of adult worm detection at increasing concentration of progeny ( Supplementary Fig. 4), concluding that the untangling is stable to about six-fold more progeny than adults. 
Finally we found the limits in worms per well using a COPAS worm sorter to seed 1-96 L1, L3, or adult worms per well in a 384 well plate, in four replicates ( Supplementary Fig. 5). For adult worms, the untangling works well until reaching about 20 worms per well, while for L3 worms, the limit is reached at about 30 worms per well, which is to be expected as the worms are smaller and further apart in the well. For the small L1 worms, the image resolution only allows a few pixels per worm, making the model-based worm untangling unstable, particularly in the presence of small bubbles, which can be confused with small worms at this resolution. In order to explore the modes of failure, we examined a subset of segmentation results visually, and saw that the initial illumination correction and intensity thresholding have a large effect on the resulting segmentation ( Supplementary Fig. 6). Assay 2: Live/dead scoring based on bright field morphology We cultured C. elegans (glp-4(bn2);sek-1(km4) mutant) on plates seeded with E. coli strain HB101. We infected sterile adult worms by pipetting them onto a lawn of E. faecalis and incubating for 15 h at 15 °C. Using a COPAS worm sorter, we transferred 15 of the infected worms to each well of a 384-well plate. As a positive control, we added 21 μg/ml Ampicillin to 192 wells and mock-treated 192 wells with an equal volume of DMSO 12 . We captured bright field transmitted-light images showing the entire well using a Molecular Devices Discovery-1 microscope with a transmitted light module, a 2x low-magnification objective, and MetaXpress software. We manually delineated 60 worms from positive and negative control wells and used them to construct a worm model. After untangling, we measured worm shape, intensity and texture. Using CellProfiler Analyst 15 , we trained a classifier to distinguish the live and dead phenotype based on 150 training examples. 
The classifier used 8 shape-, intensity-, and texture features ( Supplementary Fig. 7). We applied the classifier to a set of images that did not include the training examples, classifying each worm as live or dead. Finally, we scored wells by the fraction of live worms and compared to the majority vote of three C. elegans specialists scoring by visual inspection (Fig. 1g). We also compared automated and visual scoring of images randomly selected from a full-scale HTS experiment (Supplementary Fig. 8). Assay 3: Fat accumulation scoring based on staining pattern We treated animals with either an empty vector (L4440) or RNAi against the insulin receptor (daf-2) according to standard procedures. We stained them with the fat-specific stain oil red O. Using an upgraded Axioscope microscope (Zeiss) with automated hardware (Biovision Inc.) and Surveyor software (Objective Imaging Ltd.), we acquired six bright field color images per well. The original images were 2782×3091 pixels, but we scaled them to 690×765 pixels to speed analysis. Before detecting and untangling worms, we combined the color channels into a single gray scale image. We used the same worm model as for Assay 2, and achieved satisfactory segmentation results without any adjustments. We defined fat regions by intensity thresholding and quantified fat patterns by measuring the extent of the fatty regions ( Supplementary Fig. 9). We thereafter compared per-well, percluster, and per-worm measurements ( Supplementary Fig. 10), finding the latter to be ideal for the assay. Assay 4: Reporter pattern detection by worm straightening and atlas mapping We used a transgenic strain of C. elegans that expresses GFP from the promoter of the gene clec-60 and myo-2::mCherry for labeling the pharynx. We cultured the worms on NGM plates seeded with E. coli OP50 at 15-20°C according to standard procedures, and sorted 15 worms into each well of a 384-well plate. 
Of these, 48 wells received wild-type worms (L4440) expressing clec-60::GFP and 48 wells received pmk-1(km25) mutants. Using a Discovery-1 microscope (Molecular Devices), we acquired bright field images as well as fluorescence images at two wavelengths (for GFP and mCherry). The images were 696×520 pixels 17 . We tested three approaches to phenotype scoring ( Supplementary Fig. 11). First, we defined spots of GFP signal by intensity thresholding, and likewise approximated worm count by thresholding intensities in the image channel showing the pharynx (mCherry). Because large variations in GFP expression and touching worm heads lead us to underestimate the number of worms, this approach was not successful. Instead, we proceeded to untangle the worms using the same worm model as for Assay 2, and achieved satisfactory segmentation results without any adjustments. The mean and standard deviation of the GFP expression in individual worms was also insufficient to separate the two phenotypes, so we continued the analysis by straightening the worms, aligning them to the low-resolution worm atlas and measuring mean and standard deviation of GFP expression from each of six transversal sub-segments evenly spread along the length of the worm. Instead of examining each measurement separately, we trained CellProfiler Analyst software 15 to distinguish the phenotypes by presenting it with examples of mutant and wild-type worms. The resulting classifier relied primarily on the difference in standard deviation of GFP fluorescence in transversal segment number two (second from head, T2 of 6) to distinguish wild-type and mutant worms. Based on this, we labeled as mutant worms those with a standard deviation in GFP expression in T2 of 6 greater than 0.4. Finally, we scored each well by the percentage of mutant worms, and achieved a Z'-factor of 0.21. 
A list of general aspects of assay design and error handling for image based screening is presented in Supplementary Table 1, and details regarding image settings are given in Supplementary Note 1. Supplementary Material Refer to Web version on PubMed Central for supplementary material.
2015-03-07T18:39:34.000Z
2012-04-01T00:00:00.000
{ "year": 2012, "sha1": "a1da365647689a80472f28dd8b8a3f70342059a4", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc3433711?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "712366eccbd0b553093fcae72d51f35b8d80862d", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Biology", "Medicine", "Computer Science" ] }
58573491
pes2o/s2orc
v3-fos-license
Glutathione S-transferase pi 1 variant and squamous cell carcinoma susceptibility: a meta-analysis of 52 case-control studies Background There are several meta-analyses on the genetic relationship between the rs1695 polymorphism within the GSTP1 (glutathione S-transferase pi 1) gene and the risk of different SCC (squamous cell carcinoma) diseases, such as ESCC (oesophageal SCC), HNSCC (head and neck SCC), LSCC (lung SCC), and SSCC (skin SCC). Nevertheless, no unified conclusions have been drawn. Methods Herein, an updated meta-analysis was performed to evaluate the probable impact of GSTP1 rs1695 on the susceptibility to different SCC diseases under six genetic models (allele, carrier, homozygote, heterozygote, dominant, and recessive). Three online databases, namely, PubMed, WOS (Web of Science), and Embase (Excerpta Medica Database), were searched. Results Initially, we obtained a total of 497 articles. Based on our selection criteria, we eventually included 52 case-control studies (9763 cases/15,028 controls) from 47 eligible articles. As shown in the pooling analysis, there was no difference in the risk of overall SCC disease between cases and controls [allele, Pa (P value of association test) = 0.601; carrier, Pa = 0.587; homozygote, Pa = 0.689; heterozygote, Pa = 0.167; dominant, Pa = 0.289; dominant, Pa = 0.548]. Similar results were obtained after stratification by race (Asian/Caucasian), genotyping, control source, and disease type (ESCC/HNSCC/LSCC/SSCC) (all Pa > 0.05). Conclusion The rs1695 polymorphism within the GSTP1 gene is not associated with the risk of overall SCC or a specific SCC type, including ESCC, HNSCC, LSCC, and SSCC. Background SCC (squamous cell carcinoma), also termed "epidermal carcinoma," is a malignant tumour that takes part in epidermis or adnexal cells and exhibits distinct degrees of keratosis [1][2][3]. 
SCC exists in the squamous epithelium of several places, e.g., skin, mouth, lung, lips, oesophagus, cervix, and vagina [4][5][6]. Based on GWAS (genome-wide association study) data, more and more reported genetic polymorphisms are believed to contribute to the aetiologies of different SCC types. For instance, a series of genes, including CADM1 (cell adhesion molecule 1), AHR (aryl hydrocarbon receptor), and SEC16A (SEC16 homolog A, endoplasmic reticulum export factor), may be related with the risk of SCC [7]. Two variants within the KLF5 (Kruppel-like factor 5) gene on chromosome 13q22.1, namely, rs1924966 and rs115797771, may be relevant to ESCC (oesophageal SCC) susceptibility [8]. Herein, we determined whether GSTP1 (glutathione S-transferase pi 1) gene polymorphism is associated with the susceptibility to different SCC patterns. GSTP1, a member of the GST (glutathione S-transferase) family in humans, is associated with the biological detoxification or biotransformation process through catalysing the conjugation of many hydrophobic and electrophilic compounds with reduced glutathione [9,10]. The GSTP1 gene, which is located on human chromosome 11q13, comprises seven exons and six introns [11]. Two common polymorphisms, namely, rs1695 A/G polymorphism in exon five (p.Ile105Val) and rs1138272 C/T polymorphism in exon six (p.Ala114Val), have been reported [12,13]. Several SCC/GSTP1 rs1695-associated meta-analyses with conflicting conclusions have been reported. For instance, in 2009, Zendehdel et al. enrolled three case-control studies [14][15][16], performed a meta-analysis to assess the association between GSTP1 rs1695 and ESCC risk in Caucasian populations, and found a borderline significant association [16]. In 2014, Song et al. enrolled 21 case-control studies to perform a meta-analysis concerning the role of the GSTP1 rs1695 polymorphism in the risk of oesophageal cancers, including EAC (oesophageal adenocarcinoma) and ESCC [17]. 
The subgroup meta-analysis of ESCC containing thirteen case-control studies showed a positive correlation, particularly in the Caucasian population [17]. However, in 2015, Tan et al. performed another meta-analysis with twenty case-control studies on overall oesophageal cancer and reported negative results in both ESCC and EAC subgroups [18]. Accordingly, we performed an updated meta-analysis with a relatively larger sample size to reevaluate the potential impact of the GSTP1 rs1695 A/G polymorphism on the susceptibility to SCC diseases, mainly including ESCC, SSCC, HNSCC (head and neck SCC), and LSCC (lung SCC). Eligible article screening We performed a literature search and screened the retrieved articles as per the PRISMA (preferred reporting items for systematic reviews and meta-analyses) guidelines [19]. Selection criteria included duplicated articles; data from animal or cell experiments; meeting abstract or meta-analysis; review, trials or case reports; data of GSTP1 expression; not SCC or GSTP1; lack confirmed histopathological data; combined GA + AA genotype frequency; without the control data; and P value of HWE (Hardy-Weinberg equilibrium) less than 0.05. Eligible case-control studies provided sufficient genotype frequency data of the GSTP1 gene rs1695 polymorphism in each case and control group. Data extraction Two investigators independently extracted the data and evaluated the methodological quality of each article by means of the NOS (Newcastle-Ottawa Scale) system. One table contains the following basic information: first author, publication year, region, race, genotyping assay, genotype frequency, disease type, control source, P values of HWE, study number, and sample size of the case/control. Data synthesis We utilized STATA software (StataCorp LP, College Station, TX, USA) for the following statistical analyses. The allele (allele G vs. A), carrier (carrier G vs. A), homozygote (GG vs. AA), heterozygote (AG vs. AA), dominant (AG + GG vs. 
AA), and recessive (GG vs. AA+AG) models were utilized to target the GSTP1 gene rs1695 G/A polymorphism. We calculated the OR (odds ratio), 95% CIs (confidence intervals) and P a (P value of association test) values to estimate the association. When the P h (P value of heterogeneity) was > 0.1 or I 2 was < 50.0%, a fixed-effects model was adopted. Otherwise, a random-effects model was selected. Considering the factors of race, genotyping assay, control source, and disease type, we performed the corresponding subgroup meta-analyses. We also carried out Egger's/Begg's tests to determine a potential publication bias. The presence of a publication bias was considered when P E (P value of Egger's test) and P B (P value of Begg's test) were below 0.05. Sensitivity analysis was applied to assess data stability and robustness. Article retrieval and screening The article retrieval and selection processes during our meta-analysis were conducted as described in the flow chart shown in Fig. 1. After our literature search, a total of 497 articles were obtained. Then, 168 articles with duplicated data and 214 articles meeting the exclusion criteria were excluded. Next, we assessed the eligibility of the remaining 115 full-text articles. After the exclusion of 68 ineligible articles, a total of 47 articles containing 52 case-control studies [14][15][16] were ultimately recruited for our meta-analysis. Table 1 summarizes the extracted basic information. Overall meta-analysis First, we performed the overall meta-analysis, which included 52 case-control studies with 9763 cases and 15,028 controls ( Table 2). The fixed-effects model was applied in all meta-analyses, because no substantial between-study heterogeneity was detected [ Table 2, I 2 value < 50.0%, P h > 0.1]. 
As shown in Table 2, no altered susceptibility to SCC disease in cases was observed compared with controls [allele, P a = 0.601; carrier, P a = 0.587; homozygote, P a = 0.689; heterozygote, P a = 0.167; dominant, P a = 0.289; dominant, P a = 0.548]. These data suggest that the rs1695 polymorphism within the GSTP1 gene does not contribute to the risk of overall SCC. Subgroup analysis Next, we performed additional subgroup meta-analyses according to the factors of race (Asian/Caucasian), genotyping assay (PCR-RFLP), control source (PB/HB), and disease type (ESCC/HNSCC/LSCC/SSCC). As shown in Tables 3 and 4, there were no significant associations in any subgroup analysis for all genetic models tested (all P a > 0.05). The forest plot of the subgroup analysis by disease type under the allele model is shown in Fig. 2. Furthermore, we included all case-controls studies regarding the specific SCC type and conducted a series of subgroup analyses by race and control source. However, similar results were obtained (data not shown). As a result, the GSTP1 gene rs1695 polymorphism is not likely related to the genetic susceptibility of a specific SCC type, including ESCC, HNSCC, LSCC, and SSCC. Publication bias and sensitivity analysis The publication bias analysis data obtained from Egger's and Begg's tests are shown in Table 2. There was no remarkable publication bias in most genetic models (P E > 0.05, P B > 0.05), except for the heterozygote (P E = 0.022, P B = 0.049) and dominant (P E = 0.036) models. The funnel plot (allele model) is displayed in Fig. 3a-b. Moreover, our sensitivity analysis led us to consider the stability of the data. Figure 4 shows a representative example of the sensitivity analysis (allele model). Discussion In the current meta-analysis, we first focused on the genetic relationship between the GSTP1 rs1695 A/G polymorphism and the risk of overall SCC and then conducted subgroup analyses by the specific histological status. 
After rigorous screening, four main types of SCC, namely, ESCC, HNSCC, ESCC, and SSCC, were targeted. ESCC, a type of squamous epithelium differentiation of a malignant tumour within the oesophagus, accounts for the vast majority of oesophageal cancers [64,65]. ESCC often presents in physiological or pathological stenosis of the oesophagus, and genetic factors, carcinogens, and/or chronic irritants may contribute to the pathogenesis of ESCC [64,65]. The GSTP1 rs1695 A/G polymorphism is significantly related to the risk of ESCC in the Kashmiri population [42]. Similarly, GSTP1 rs1695 may be an independent risk factor for ESCC in Western populations [53]. Nevertheless, different associations were detected in other reports. For instance, no difference between unrelated controls and ESCC cases was observed in a French population [14] or a Chinese population [61]. Therefore, a meta-analysis was required to comprehensively evaluate the role of the GSTP1 rs1695 A/G polymorphism in ESCC risk. Herein, we recruited 15 case-control studies involving 1934 cases and 3951 controls and performed a new meta-analysis to examine the association between the GSTP1 rs1695 A/G polymorphism and ESCC susceptibility. The carrier (carrier G vs. A) model, as well as the allele, homozygote, heterozygote, dominant and recessive genetic models, was used. Our results in the stratified analysis of specific ESCCs are consistent with the data of Tan et al. [18]. Similarly, inconsistent results regarding an association between the GSTP1 rs1695 A/G polymorphism and LSCC risk have been reported in different races and geographical locations [24,31,33,34,37,40,45,47,52,56,57,60,63]. Here, we failed to detect a positive correlation between GSTP1 rs1695 and LSCC susceptibility, consistent with the prior meta-analysis of Feng in 2013 [66] and Xu in 2014 [67]. 
Head and neck cancer comprises cancers of the mouth, nose, sinuses, salivary glands, throat, and lymph nodes in the neck, and HNSCC is the major pathologic type [68]. In 2012, Lang et al. enrolled 28 case-control studies to perform a meta-analysis regarding the genetic effect of the GSTP1 rs1695 A/G polymorphism on overall head and neck cancer [69]. The authors were unable to identify a positive association between the GSTP1 rs1695 A/G polymorphism and the risk of overall head and neck cancer. Nevertheless, the potential role of GSTP1 rs1695 in the susceptibility to HNSCC was not assessed. Therefore, we performed a subgroup meta-analysis of HNSCC involving 18 case-control studies, but did not identify an association between GSTP1 rs1695 and HNSCC risk. SSCC, SBCC (skin basal cell carcinoma) and MM (malignant melanoma) are the three main types of cutaneous cancer [4]. Herein, we did not identify an association between the GSTP1 rs1695 A/G polymorphism and SSCC risk, consistent with the prior meta-analyses regarding the correlation between GSTP1 rs1695 and the susceptibility to cutaneous cancer in 2015 [70,71]. Human GST family genes, mainly including GSTA (glutathione S-transferase alpha), GSTM1 (glutathione S-transferase mu 1), GSTT1 (glutathione S-transferase theta 1) and GSTP1, encode phase II enzymes and are thus important for the body's defence, metabolic detoxification of mutagens or chemical drugs, and cellular elimination of carcinogens [9,10]. The rs1695 A/G polymorphism within the GSTP1 gene results in the substitution of Val (valine) for Ile (isoleucine) at amino acid position 105, which may lower the cytosolic enzyme activity of the GSTP1 protein [72,73]. Although significant associations were not obtained in our overall meta-analysis or subgroup analyses by pathological type, we cannot rule out the potential genetic effect of the GSTP1 rs1695 A/G polymorphism. There are still some limitations to our meta-analysis that should be clarified. 
Even though our findings were considered reliable by our sensitivity analysis and publication bias assessment, more eligible investigations are still warranted to further enhance the statistical power. We note that population-based controls were not utilized in every case-control study. The currently available data on genotypic and allelic frequency from the online databases led us to target only the rs1695 polymorphism of the GSTP1 gene. Other possibly functional polymorphisms of the GSTP1 gene, such as rs1138272, or related haplotypes will be important to examine in the future. We should also pay attention to the genetic relationship between GSTP1/GSTM1/GSTT1 polymorphisms and the risk of SCC. Conclusion In general, based on the currently published data, the GSTP1 gene rs1695 polymorphism is not associated with the susceptibility to overall SCC diseases, including ESCC, HNSCC, LSCC, and skin SCC. The confirmation or refutation of this conclusion merits further evidence.
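For readers unfamiliar with the genetic-model contrasts used throughout this meta-analysis (allele, homozygote, heterozygote, dominant, and recessive), the sketch below makes them concrete. The genotype counts, function names, and the standard log-OR confidence-interval formula are illustrative assumptions, not data from any of the included studies.

```python
import math

def odds_ratio(a, b, c, d):
    """OR with a 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical genotype counts (AA, AG, GG) for cases and controls.
cases = {"AA": 500, "AG": 350, "GG": 80}
controls = {"AA": 1000, "AG": 700, "GG": 160}

def models(ca, co):
    # Allele model: G allele vs. A allele (each person contributes 2 alleles).
    yield "allele", (2 * ca["GG"] + ca["AG"], 2 * co["GG"] + co["AG"],
                     2 * ca["AA"] + ca["AG"], 2 * co["AA"] + co["AG"])
    # Homozygote: GG vs. AA; heterozygote: AG vs. AA.
    yield "homozygote", (ca["GG"], co["GG"], ca["AA"], co["AA"])
    yield "heterozygote", (ca["AG"], co["AG"], ca["AA"], co["AA"])
    # Dominant: (GG + AG) vs. AA; recessive: GG vs. (AG + AA).
    yield "dominant", (ca["GG"] + ca["AG"], co["GG"] + co["AG"],
                       ca["AA"], co["AA"])
    yield "recessive", (ca["GG"], co["GG"],
                        ca["AG"] + ca["AA"], co["AG"] + co["AA"])

for name, table in models(cases, controls):
    or_, lo, hi = odds_ratio(*table)
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these toy counts the control genotype distribution is exactly twice that of the cases, so every model yields an OR of 1.00, mirroring the null associations reported above.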
Choroidal Perfusion Changes After Vitrectomy for Myopic Traction Maculopathy ABSTRACT Background The choroidal vasculature supplies the outer retina and is altered in many retinal diseases, including myopic traction maculopathy (MTM). Choroidal health is typically assessed by measuring choroidal thickness; however, this method has substantial limitations. The choroidal vascularity index (CVI) was recently introduced to provide quantitative information on the vascular flow in the choroid. This index has been evaluated in a wide range of diseases but has not been extensively used to characterize MTM. Aim This study aimed to investigate the CVI across different stages of MTM and the influence of macular surgery on choroidal perfusion markers in different surgically resolved MTM stages. Methods Eighteen healthy myopic eyes in the control group and forty-six MTM eyes in the surgical group were evaluated using enhanced optical coherence tomography (OCT) imaging. Binarized OCT images were processed to obtain the luminal choroidal area (LCA) and stromal choroidal area (SCA), which were used to calculate the CVI as a percentage ratio. CVI data were collected at baseline, at one and four months postoperatively, and at the final clinical visit. MTM eyes were divided into four stages based on disease severity. The choriocapillaris flow area (CFA) and central subfield thickness (CSFT) were measured alongside the CVI. Results No significant differences were observed between the two groups at baseline, except for visual acuity (p < 0.0001). Surgery significantly improved vision at all postoperative time points (p < 0.0001). At baseline, there were no significant differences in CVI, CFA, or CSFT scores between the control and surgical groups. However, all three measurements were lower at the final visit in the surgical group (p ≤ 0.0001). 
Ultimately, correlation and multivariate linear regression analyses did not reveal any significant association between CVI and visual acuity. Conclusions This study did not find significant preoperative differences in CVI between healthy myopic eyes and eyes with MTM. However, the postoperative CVI and CFA values were significantly lower than those of the control eyes. Thus, CVI may not be a good biomarker for surgical outcomes, as the correlation between CVI and visual acuity was not statistically significant. The CVI and CFA decreased after surgery, providing evidence of choroidal changes after surgical management. BACKGROUND The choroid is a vascular tissue that supplies oxygen and nutrients to the retinal pigment epithelium (RPE) and the outer retina. 1 This tissue plays a crucial role in maintaining retinal homeostasis, and abnormalities in the structure of the choroid may indicate the presence of an underlying disease. Choroidal thickness measurements are widely used in clinical research to meaningfully evaluate the state of the choroid in healthy and diseased retinas. 2 However, choroidal thickness alone has important limitations, such as the inability to characterize the vascular flow between the stromal and luminal vascular areas. 5 CVI can be calculated from optical coherence tomography (OCT) images via a series of digital binarization and quantification steps. 6,7 As CVI captures both vascular and stromal changes within the choroid, it provides a more informative characterization of the state of the choroidal structure than choroidal thickness does. Since the initial introduction of CVI, a series of studies have linked changes in this index to the pathogenesis and progression of various retinal diseases. 5,8,9 The CVI has been shown to capture certain microcirculatory changes in the retina [10][11][12] and has been correlated with visual function after vitrectomy. 
13 As retinal diseases typically involve multiple processes that affect the choroid (e.g., inflammation, edema, and leakage), CVI has been suggested as a potentially useful biomarker for assessing the integrity of the vascular network within the choroid. 8,14 Furthermore, earlier studies found that CVI is not sensitive to various confounding factors, such as axial length, blood pressure, or intraocular pressure. 15 Myopic traction maculopathy (MTM) is a vision-threatening condition associated with pathologic myopia (PM), the presence of posterior staphyloma (PS), evidence of inner and/or outer retinal layer-like thickening with or without epiretinal membrane (ERM) proliferation, or tractional elevation of Henle's layer with or without evidence of a champagne flute-shaped schisis appearance, stretched retinal vessels, and an abnormally rigid inner limiting membrane (ILM), and it may evolve into forms with more severe retinal complications. 16 Frequently, eyes with myopic foveoschisis (FS) and foveoretinal detachment (FRD) progress, leading to the formation of macular holes (MHs). 17,18,20,21 Due to the progressive nature of MTM, a system of four stages has been adopted to categorize the disease 21,22: myopic FS at stage 1, FRD at stage 2, myopic MHs at stage 3, and MH retinal detachment (RD) at stage 4. In this context, monitoring disease and treatment outcomes through noninvasive methods such as CVI could substantially improve the management of MTM. 23 In a previous case series involving four eyes with MTM, we found that patients with a more advanced disease stage tended to have lower CVIs. 
24 In this study, we evaluated a more comprehensive series of healthy myopic control eyes and MTM eyes that underwent macular surgery and measured their CVIs at four different time points. This study aimed to investigate CVI values across different stages of MTM and the influence of macular surgery on choroidal perfusion markers in successfully resolved surgical MTM stages. We further assessed the differences between the preoperative and postoperative CVI, choriocapillaris flow area (CFA), and central subfield thickness (CSFT) and the correlation between CVI and other variables using multivariate linear regression analysis. Study Design We conducted a nonrandomized retrospective analysis of the medical charts of successfully operated patients with different stages of MTM. All patients were treated between August 2016 and June 2022 and were operated on by the same surgeon (MAQR). This retrospective analysis was conducted in the Retina Department of Oftalmologia Integral ABC (Mexico City). The institutional review board approved the study design, and written informed consent was obtained from all patients. This study adhered to the guidelines outlined in the Declaration of Helsinki. No reference number was provided by the institution, owing to the retrospective nature of the study. The inclusion criteria were as follows: patients over 18 years of age with a spherical equivalent refractive error of > −6.0 diopters or an axial length > 26.5 mm; presence of any detected structural MTM stage due to PM; having undergone vitrectomy with successful and uncomplicated macular surgery using different ILM peeling techniques for symptomatic MTM; at least six months of follow-up; and perfusional evaluation during follow-up, with serial CVI, choriocapillaris flow area (CFA), and automated CSFT measurements of the macula according to the study protocol. 
The exclusion criteria included patients with evidence of diffuse macular chorioretinal atrophy, patchy foveal-affected chorioretinal atrophy, or evidence of involuted or active myopic choroidal neovascularization based on the atrophy/traction/neovascularization (ATN) classification. 25 Additionally, eyes treated with macular laser photocoagulation or intravitreal injections during the study period, previous complicated vitreomacular surgeries, postoperative complications such as glaucoma and endophthalmitis, the presence of intraocular silicone oil, or failure to fulfill the minimum postoperative functional, structural, and perfusional evaluation criteria and follow-up study protocols were also grounds for exclusion. Eighteen healthy myopic eyes were included in the control group. Eyes were matched for sex and age. The study groups and their corresponding inclusion criteria are listed in Table 1. Surgical Procedure and Study Protocol Examinations The surgical procedures were performed by a single highly experienced retina surgeon (MAQR). The surgical techniques used in this study have been previously described in detail by the authors. 26 According to the study protocol, preoperative evaluation of the MTM stage was performed using a high-resolution spectral-domain (SD) optical coherence tomography (OCT) Spectralis HRA OCT system (Heidelberg Engineering, Heidelberg, Germany), with 25-line horizontal volume scans covering the area centered on the fovea. The CSFT measurements were obtained using standardized algorithms contained in the software of the instrument and were automatically generated. Perfusional choroidal evaluations were completed following a previously published protocol by the study authors using an OCT angiography (OCT-A) device (RTVue XR Avanti, OptoVue, Inc., Fremont, CA, USA). 
26,27 CFA was measured by segmenting the choriocapillaris subfoveal plexus (CSP) slabs using the RTVue XR OCT Avanti with AngioVue software (OptoVue Inc., Fremont, CA, USA) and was automatically calculated from a 3.142 mm² evaluation area. Brief Description of the CVI Quantification Method by Image Binarization Briefly, CVI values were derived from images obtained using SD-OCT of the macula. High-resolution 9-mm horizontal OCT B-scan images were selected and uploaded to the ImageJ analysis software (version 1.53, http://imagej.nih.gov/ij/). The images were first converted into an 8-bit format and adjusted using the Niblack automatic local threshold. Next, an area of the subfoveal choroid was manually selected using the adaptable geometric polygon tool to delineate the total choroidal area (TCA) from 750 µm nasal to 750 µm temporal of the foveal center in the horizontal plane and from the RPE-Bruch membrane to the scleral border in the vertical plane. Subsequently, the stromal tissue area was determined by the number of white pixels, and the luminal area (LA) of the enhanced choroid was determined by applying the threshold tool and quantifying the number of dark pixels. Finally, the dark-to-light pixel ratio was expressed as a percentage and defined as the CVI, as previously described by Agrawal et al. 4,8 The protocol method for binarization is illustrated in Figure 1. Outcome Measures The primary outcome of this study was the pre- and postoperative quantitative evaluation and comparison of macular CVI, CFA, and CSFT values and their correlation with visual changes. The secondary outcomes included investigating the potential effects of vitrectomy and macular surgery on choroidal perfusion markers by comparing preoperative and postoperative values and correlating them with those obtained in the control group. 
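As a rough numerical illustration of the binarization arithmetic described above (not the actual ImageJ workflow used in the study), the sketch below applies a single global Niblack-style threshold to a toy grayscale patch and reports the dark-pixel (luminal) fraction of the selected area as the CVI, following Agrawal et al.'s definition of CVI as luminal area over total choroidal area. The patch values, the use of one global threshold instead of a per-pixel sliding window, and the k parameter are all illustrative assumptions.

```python
def niblack_threshold(window, k=-0.2):
    """Niblack-style threshold: mean + k * std of the window.
    Hypothetical simplification: ImageJ applies this per pixel over a
    local sliding window; here one window covers the whole patch."""
    n = len(window)
    mean = sum(window) / n
    var = sum((p - mean) ** 2 for p in window) / n
    return mean + k * var ** 0.5

def cvi(region, threshold):
    """CVI (%) = dark (luminal) pixels / total pixels in the selected
    subfoveal choroidal area."""
    pixels = [p for row in region for p in row]
    dark = sum(1 for p in pixels if p < threshold)
    return 100.0 * dark / len(pixels)

# Toy 4x5 grayscale patch standing in for the selected TCA (a real input
# would be an 8-bit OCT B-scan region from 750 µm nasal to 750 µm temporal
# of the fovea, RPE-Bruch membrane to the scleral border).
patch = [
    [30, 40, 200, 210, 35],
    [25, 45, 190, 220, 30],
    [35, 50, 180, 215, 40],
    [20, 55, 205, 225, 45],
]
t = niblack_threshold([p for row in patch for p in row])
print(f"threshold = {t:.1f}, CVI = {cvi(patch, t):.1f}%")
```

In this toy patch 12 of 20 pixels fall below the threshold, giving a CVI of 60.0%, in the same range as the subfoveal CVIs reported for the control eyes in Figure 1.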
Statistical Analysis All statistical tests were performed using GraphPad Prism software (version 9.2.0), with the significance threshold for all tests set at p < .05. Where appropriate, nonparametric tests were used because tests for normality showed that the data were not normally distributed. Fisher's exact test was used to test for differences in sex, eye laterality, and lens status between groups. The Mann-Whitney U test was used to test for differences in age, axial length, and baseline and postoperative best-corrected visual acuity (BCVA) between the groups. The paired Wilcoxon signed-rank test was used to assess the changes in BCVA after surgery. The Kruskal-Wallis test was used to detect significant differences in the CVI, CFA, and CSFT. Pearson's correlation analysis and multivariate linear regression were performed using the R programming environment (version 4.1.1). General Characteristics of the Study Groups The present study included 64 eyes, with 18 eyes in the healthy myopic control group and 46 eyes in the surgical group that underwent successful MTM surgery. No statistically significant differences were found between the two groups in terms of sex, eye laterality, lens status, age, or axial length. In the surgical group, the mean preoperative MTM duration was 11.35 months. Forty-one (89.0%) eyes showed an improvement in vision, with CVI values lower than those in the control group. Twenty-eight (60.8%) eyes showed a BCVA worse than 20/60 (0.48 logMAR units), and all of them showed CVI values below the mean of the control group; the reduction in the CVI was significant but did not correlate with the final BCVA (for further details of the data analysis, consult the supplementary file). The eyes were further divided into four stages according to the severity of MTM (Table 1), with 11, 18, 9, and 8 eyes in stages 1, 2, 3, and 4, respectively. Demographic data are summarized in Table 2. 
Surgical and Visual Outcomes The baseline BCVA was significantly worse in the surgical group than in the healthy myopic group (p < .0001) (Table 3). After surgery, the BCVA improved from 1.179 to 0.753 logMAR at one month (p < .0001), 0.63 logMAR at four months (p < .0001), and 0.592 logMAR at the final visit (p < .0001). However, the postoperative BCVA in the surgical group remained worse than that in the healthy myopic group (p < .0001). After surgery, MTM resolved in an average of 5.50 weeks, and the patients were followed up for a mean period of 25.41 months (Table 3). Half of the eyes were treated using fovea-sparing (FS) ILM peeling, followed by an inverted flap in 17 eyes and classical ILM peeling in six eyes. Most eyes (63%) underwent gas tamponade, while the remaining eyes underwent silicone oil tamponade. More than half (52.2%) of the eyes did not require additional surgery; the majority of the second surgeries were performed to extract the silicone oil tamponade (37%). Ten (22%) surgical eyes developed postoperative complications, with MHs being the most common (10.9%), followed by residual extrafoveal FS (6.5%), rhegmatogenous RD (RRD) (2.2%), and diffuse chorioretinal atrophy (DCRA) (2.2%). 
Retinal and Choroidal Characterization SD-OCT and OCT-A were used to measure the CVI, CFA, and CSFT in both groups at baseline and, in the surgical group, at one month after surgery, four months after surgery, and the final visit. The analysis showed that in the first postoperative month, the choroidal perfusion markers did not differ from the preoperative values, apart from an increase in CSFT. However, at four months and in the last quantification, corresponding to the final visit, a significant tendency toward a reduction in all three markers was observed (Table 4). Compared with the healthy myopic group, the surgical group had a significantly lower CVI (p = .0001), smaller CFA (p < .0001), and smaller CSFT (p < .0001) at the final visit; however, there were no significant differences at baseline (Figure 2a, c, and e). In the surgical group, SD-OCT and OCT-A measurements showed significant differences across the time points (p < .0001) (Figure 2b, d, and f). Specifically, all three values decreased postoperatively. When the surgical group was separated according to the MTM stage (Figure 3), no significant differences in CVI were identified at any of the time points among the stages (p = .164, .375, .820, and .432 at baseline, one month after surgery, four months after surgery, and at the final visit, respectively). Correlation and Linear Regression Analyses Analysis of the correlation between the three OCT measurements and BCVA did not reveal any significant correlation with the final surgical outcome. In the surgical group at the final visit, the Pearson's correlation coefficient between CVI and BCVA was 0.096 (p = .527), that between CFA and BCVA was 0.016 (p = .915), and that between CSFT and BCVA was 0.183 (p = .224). 
Multivariate linear regression analysis was performed to identify any relationship between the selected patient factors and the final postoperative BCVA in surgically treated eyes. The model contained six variables: age, preoperative MTM duration, postoperative time to MTM resolution, final CVI, final CFA, and final CSFT. None of the patient factors showed a significant relationship with the final postoperative BCVA (p > .05) (Table 5). Preoperative MTM duration tended to have the strongest relationship with BCVA (p = .051), and the corresponding coefficient was positive (i.e., a longer preoperative MTM duration was associated with a higher logMAR value and therefore a worse BCVA). Representative images from the surgical participants who underwent enhanced OCT imaging evaluation and CVI calculations are shown in Figure 4. DISCUSSION MTM is a progressive and debilitating condition that can lead to severe loss of visual function without proper treatment. 16,28 CVI has recently been introduced as a promising biomarker for choroidal health and has potential applications in disease diagnosis and management. 
8 Since then, numerous studies have evaluated CVI in various retinal diseases; however, its detailed application in MTM is lacking. In this study, we report CVI-quantified findings across both treatment times and MTM stages to evaluate their potential as biomarkers for the disease and the possible deleterious effects of surgery. We believe that the findings of this study can provide insights into the suitability of preoperative and postoperative CVI and CFA as choroidal perfusion markers and how they behave after uncomplicated macular surgery in different MTM surgical stages. The relationships among CVI, CFA, CSFT, visual function, and other postoperative outcomes of MTM were also explored. Collectively, these results provide novel insights into the choroidal state of eyes with MTM before and after surgery and may help guide the potential use of CVI as a biomarker for the clinical management of the disease. The study consisted of two cohorts: healthy myopic eyes and MTM eyes, the latter of which were further divided into four stages based on disease severity. Except for worse visual acuity in the eyes with MTM, there were no significant differences between the two groups in terms of sex composition, eye laterality, lens status, age, or axial length. In eyes with MTM, the surgical procedure quickly improved visual function one month postsurgery, with the average vision improving progressively through the last clinical visit. However, the visual acuity at the last visit was worse than that in the healthy myopic control group, indicating that some visual loss was irreversible, which is consistent with previously reported surgical outcomes. 
29 The CVI, CFA, and CSFT were used to assess choroidal and retinal health across the study groups. Intriguingly, none of the three parameters was significantly different between the myopic controls and the preoperative values in the surgical group. Furthermore, all three parameters showed a notable trend of reduction, with values at the final visit significantly lower than those in the healthy control myopic group. Comparisons across the time series confirmed that these three parameters decreased after surgery. This finding contrasts with an earlier study that reported an increase in CVI at both one and three months after macular buckling surgery, accompanied by choroidal thickening in the initial postoperative period; 30 however, that report did not mention the axial length of the eyes, whereas the eyes in our study belong to a highly myopic spectrum, given the presence of PS and complex vitreoretinal interfaces, and no equatorial or macular buckling elements were used. Some of these changes may be attributed to the macular surgical procedures performed, which can have different effects on the long-term postoperative perfusion markers of the choroid due to transurgical vitrectomy-related perfusion changes at the level of the microcirculation in these highly myopic eyes. Additionally, a reduction in CSFT after surgery in MTM has been previously reported by Yi et al, 31 whereas changes in CVI and CFA associated with surgery in MTM have not yet been described. Therefore, we hypothesized that both the reduced thickness and the poor final postoperative vision are the result of damage to retinal-choroidal perfusion, represented by lower CVI and CFA values, as a consequence of the degenerative and tractional processes inherent in high myopia or of the changes in retinochoroidal perfusion due to macular surgery. However, prospective, randomized, and multicenter studies are required to 
define these concepts in the absence of meta-analyses and systematic studies. In a previous preliminary study, we reported a progressively smaller CVI at more advanced stages of MTM. 24 However, when the same assessment was performed for the cohorts in this study, no significant differences were found among the four stages of MTM at any time point evaluated, although CVI tended to decrease over time. Given the significantly larger sample size in the present study, the previously observed variations in CVI may have been due to sampling noise rather than biological variation. Collectively, these results suggest that the CVI was not significantly different between the MTM stages but was significantly reduced after surgery. A possible implication is that the structural worsening in these highly myopic eyes depends more on the long-term stage and chronic tractional component of the MTM, in combination with the lower choriocapillaris and choroidal perfusion values observed in our postoperative quantitative evaluation of these biomarkers, or on the impact of the surgery on the perfusion of the posterior pole. However, further long-term prospective studies are required to confirm this hypothesis. 33,34 In retinitis pigmentosa (RP), choriocapillaris loss has been reported in extracted human eyes, 35 and Tan et al 36 reported significantly lower CVIs in eyes with RP. Similarly, Wei et al 32 found a lower CVI in a cohort of eyes with retinal dystrophies. In another study, Ratra et al 37 found that CVI was a more robust biomarker than CSFT for capturing choroidal alterations in eyes with Stargardt disease. Furthermore, that study found no significant correlation between visual acuity and CVI. 
In the present study, there were no significant differences in the CVI between the control and MTM eyes before surgery, suggesting that it may not be useful for the diagnosis of MTM. In contrast, Wang et al 38 reported a greater subfoveal choroidal capillary vessel density in eyes with MTM and retinoschisis. However, choroidal atrophy is a known part of the pathogenesis of MTM, along with RPE atrophy and a reduction in adhesion between the RPE and retina. 39 It is unclear why there were no significant differences in CVI between the control eyes and eyes with different stages of MTM before surgery in this study. A study with a larger sample size, especially including more severe cases of MTM, is needed to identify these differences. However, it is important to note the significant differences in the postoperative perfusion evaluations, where lower quantified CVI and CFA values were observed compared with those estimated in the highly myopic eyes of the control group. Consideration of the impact of complex surgery on the microcirculation of the posterior pole in these eyes may be warranted, as it could lead to the development of more sophisticated low-pressure, perfusion-controlled vitrectomy techniques with detailed intraoperative control of perfusion to keep the choroid and choriocapillaris intact and potentially obtain better functional results. However, this hypothesis needs to be proven in future randomized controlled trials. 
Another important question is whether any relevant patient characteristics and choroidal/retinal parameters are correlated with visual acuity. In this study, in a series of analyses, we found that none of the relevant variables, including CVI, was predictive of final visual acuity. Notably, the duration of MTM before treatment, and consequently a more advanced stage of MTM, tended toward significance (p = .051); that is, a longer preoperative duration was associated with worse visual outcomes. Currently, the timing of MTM treatment remains controversial because of its progressive pathogenesis. 40 Although no statistically significant difference was found when we analyzed the values among the different MTM stages in the long term, these observations highlight the need for early diagnosis and treatment to preserve vision. The present study has several limitations that are worth addressing. First, these findings should be confirmed in a larger cohort of healthy eyes with myopia or MTM. Second, additional assessments of the retina-choroid complex, such as multifocal electroretinography, autofluorescence imaging, and microperimetry, may further enhance the current understanding of CVI as a potential biomarker for MTM. Third, more detailed studies are required to assess the cause of the postoperative CVI reduction. Despite these limitations, the present findings are consistent across the various comparisons and parameters among the study groups. Although we did not find convincing evidence for CVI as a biomarker for MTM, it may be used in combination with other anatomical features to provide a more comprehensive assessment of choroidal health. 
CONCLUSION In conclusion, MTM is a progressive degenerative disease characterized by alterations in the choroidal vasculature. The CVI was significantly reduced after surgery, but there was no statistically significant difference between MTM stages or between control and MTM eyes before surgery. However, significant differences were found between the long-term postoperative CVI and the CVI in the healthy myopic control group. Further research is required to better understand these complex conditions. These observations suggest that long-term CVI measurement alone may not be a reliable biomarker for the presence of MTM, although it can capture changes in the choroidal vasculature after surgical management. Additional studies are needed to confirm these findings and to further evaluate the utility of CVI as a biomarker for MTM. Figure 1. Protocol depicting the method used for quantifying the CVI in healthy myopic eyes. (a) Binarized image designed to depict the intraretinal structure and choroidal layers in greater detail in a healthy, moderately myopic eye with an axial length of 27.8 mm. (a1) Magnified image within the yellow square showing binarized processing of the subfoveal choroidal stroma and luminal vascular visualization of the subfoveal choroidal vessels, yielding a choroidal vascularity index (CVI) of 62.8% in a healthy, moderately myopic eye. The selected subfoveal area of choroidal flow is clearly delineated by the white dotted line. (a2) Quantified choriocapillaris flow area (CFA) of 2.308 mm² in the protocol-selected area of 3.142 mm² in this healthy myopic eye. (b) Binarized processing of choroidal flow in a healthy highly myopic eye with an axial length of 30.8 mm and enhanced choroidal vessel visualization, yielding a CVI of 59.4%. (b1) The magnified image within the white-yellow dotted line clearly delineates a CFA of 1.972 mm². (c) Binarized image corresponding to a healthy, highly myopic eye in the control group; the CVI was 63.4% inside the selected 
choroidal flow area. (c1) Magnified image depicting the selected area for the CVI measurements. The white and yellow dotted lines depict the binarized choroidal flow area selected to calculate the CVI. (c2) The CFA was 2.173 mm² for the selected choriocapillaris. (d) Binarized image of a healthy highly myopic eye with an axial length of 29.6 mm and a CVI of 58.2%. (d1) Magnified image of the selected central subfoveal area clearly depicting the CVI area defined by the white dotted line. (d2) The CFA was 1.737 mm². Figure 2. OCT measurements across study groups. OCT imaging was used to obtain the choroidal vascularity index (CVI, a-b), choriocapillaris flow area (CFA, c-d), and central subfield thickness (CSFT, e-f). (a, c, e) Measurements across the healthy myopia group and surgical group at baseline and final visits. For all three measurements, no significant differences were detected at baseline between the study groups, but values were lower in the surgical group at the final visit. (b, d, f) Measurements across the time series in the surgical group. Significant changes were detected across time in all three measurements. p values are indicated by * (ns = not significant, *** = p ≤ .001, **** = p ≤ .0001). CFA, choriocapillaris flow area; CSFT, central subfield thickness; CVI, choroidal vascularity index. Figure 3. CVI measurements across MTM stages. The surgical group was separated based on the MTM stage. CVI at baseline (a), one month postsurgery (b), four months postsurgery (c), and the final visit (d) are plotted. p values are indicated by * (ns = not significant). CVI, choroidal vascularity index; MTM, myopic traction maculopathy. Figure 4. 
Surgical cases. (a) Preoperative OCT findings consistent with myopic foveoschisis (FS) due to macular thickening and schisis-like thickening of the inner and outer retinal layers. (a1) Magnified image from the yellow inset depicting a preoperative choroidal vascularity index (CVI) of 49.8% calculated within the area clearly delineated with the yellow dotted line. (a2) Long-term postoperative binarized image, showing a CVI of 47.3% within the area delineated with the red dotted line. (a3) The choriocapillaris flow area (CFA) was 2.308 mm² at the selected subfoveal choriocapillaris area of 3.142 mm². (b) Preoperative image of a symptomatic female patient with FS and an axial length of 29.7 mm. (b1) Corresponding binarized image of this complex FS eye with outer and inner retinal layer-like thickening, tractional elongation of Henle's layer, and a thin superficial foveal layer, with a preoperative CVI of 40.4%. (b2) Long-term postoperative horizontal B-scan image following binarization showing an almost normal foveal profile, diffuse thinning of the retinal layers, and no evidence of outer or inner retinal layer thickening. The postoperative CVI was 49.6% in the subfoveal choroidal flow area, as delineated by the red dotted line. (b3) The image shows a CFA of 2.138 mm² at the selected subfoveal choriocapillaris area of 3.142 mm². (c) Highly myopic eye with an axial length of 31.2 mm and a moderate posterior staphyloma. There is evidence of a full-thickness myopic macular hole (MH) with tractional elongation of Henle's layer and thickening of the macula without evidence of macular detachment. (c1) Magnified preoperative binarized image with a CVI of 56.6%. (c2) Postoperative structural evaluation showed a flat macula with a recovered foveal profile, an external limiting membrane (ELM) line lucency defect, and a well-preserved retinal pigment epithelium (RPE) layer. The calculated postoperative CVI was 54.4%. (c3) CFA of 1.808 mm² at the selected subfoveal choriocapillaris area of 3.142 mm². (d)
preoperative image of an eye with extensive macular hole retinal detachment (MHRD). (d1) The preoperative CVI was 44.7%. (d2) Postoperative appearance shows extrafoveal chorioretinal atrophic areas, an irregular foveal profile, a thin foveal roof, a closed macular hole, macular reattachment without evidence of residual subretinal fluid (SRF), attenuated internal and external retinal layers, and a subfoveal ellipsoid zone (EZ). The quantified choroidal perfusion indices were lower than those obtained in the control myopic group, with a CVI of 42.7%. (d3) The CFA value was 1.704 mm² at the selected subfoveal choriocapillaris area of 3.142 mm². Table 1. Summary of study groups and inclusion criteria. Table 2. Patient demographic data and clinical characteristics. Table 4. Choroidal perfusion markers and OCT measurements for both study groups. CFA, choriocapillaris flow area; CSFT, central subfield thickness; CVI, choroidal vascularity index. *Comparison between healthy myopic and surgical groups at the final visit. None of the comparisons between the healthy myopic and surgical groups at baseline were statistically significant. Table 5. Multivariate linear model relating final postoperative BCVA with patient factors.
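The CVI quantification described in the figure protocols reduces, after binarization, to a pixel ratio: luminal (vascular) pixels over all pixels of the selected subfoveal choroidal area. A minimal sketch, assuming the selected-area mask and the binarized luminal mask have already been segmented (the function and variable names are illustrative, not from the article):

```python
import numpy as np

def choroidal_vascularity_index(selected_area_mask, luminal_mask):
    """CVI (%) = luminal (vascular) area / total choroidal area in the selected region.

    selected_area_mask: boolean array marking the protocol-selected subfoveal area.
    luminal_mask: boolean array marking binarized dark (luminal) pixels.
    """
    total_pixels = np.count_nonzero(selected_area_mask)
    luminal_pixels = np.count_nonzero(luminal_mask & selected_area_mask)
    return 100.0 * luminal_pixels / total_pixels
```

With the pixel spacing of the OCT scan known, the same masks also yield absolute flow areas in mm², as reported for the CFA panels.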
Research on corrosion mechanism of suspension insulator steel foot of direct current system and measures for corrosion inhibition Electrocorrosion of insulator hardware caused by direct current transmission has become increasingly serious with the wide-scale deployment of extra-high-voltage direct current transmission projects in China. Steel foot corrosion is the main form of corrosion for insulators on the positive polarity side of transmission lines. On one hand, the corrosion gradually reduces the steel foot diameter, directly degrading the mechanical properties of the insulator; on the other hand, when corrosion occurs on the part of the steel foot wrapped in the porcelain ware, the volume of the corrosion product is at least 50% greater than that of the original steel, which can burst the porcelain ware and threaten the safe operation of transmission lines. It is therefore necessary to study this phenomenon and propose feasible corrosion inhibition measures. Starting from the corrosion mechanism, this article proposes two corrosion inhibition measures and verifies their effect under laboratory conditions, providing a reference for engineering application. Introduction Extra-high-voltage direct current transmission has advantages such as long transmission distance, large transmission capacity, saving of transmission corridor space and low comprehensive cost. It effectively addresses the mismatch between resource reserves in western China and energy consumption in eastern China, playing an important role in promoting the rational allocation of energy resources and improving the national economy [1][2][3]. The ±800 kV Chusui direct current transmission project, put into operation in June 2009, is the world's first ±800 kV direct current transmission project independently designed and developed by China [4][5][6].
It is a key line in the south China power grid, transmitting power from Yunnan in the west to Guangdong in the east. V-type insulator strings are adopted in large quantity on the ±800 kV Chusui direct current transmission line to improve mechanical performance and ensure transmission safety. However, in operation sections with damp weather and heavy precipitation, this structure is prone to local corrosion, because moisture easily accumulates on the lower edge of the insulator hardware; together with surface contaminants it forms an electrolyte and, under the applied direct voltage, a conducting loop. Insulator corrosion is frequently observed on various direct current transmission lines. In some operation sections with an extremely high corrosion ratio, corrosion of the insulator hardware degrades the mechanical and electrical performance of the insulator to different degrees, and the degradation of mechanical performance may result in serious transmission accidents such as flashover or breakdown, threatening the safe and stable operation of the power system. It is therefore necessary to study the corrosion mechanism of the insulator further and to propose targeted inhibition measures for sections with serious corrosion, in order to ensure the safe operation of the power system. Fig.1: Typical Inducing Environment for Steel foot Corrosion A comprehensive analysis of the typical topographic and meteorological conditions in regions with serious insulator steel foot corrosion yields the following typical characteristics, shown in Fig.1: (1) located on the shaded side of plateau mountainous regions; (2) high air humidity and plentiful rainfall; (3) frequent, persistent heavy fog that is difficult to disperse.
Galvanic Corrosion along the Surface Leakage Path This refers to corrosion caused by the leakage current circulation path formed along the surface of the insulator; its basic principle is electrochemical corrosion, as shown in Fig.2. An external direct current power supply, positive and negative metal electrodes and a conducting solution form an electrolytic cell circuit. The metal electrode connected to the positive pole of the direct current power supply undergoes an oxidizing reaction under the direct electromotive force and loses electrons; the corresponding positive ions are formed and break away from the surface of the metal. This process is known as anodic corrosion [7][8][9]. In Fig.3, the metal on the positive polarity side of the power supply is the positive electrode, and the metal on the negative polarity side is the negative electrode. The leakage current path is formed when the surface of the porcelain ware becomes damp and the contaminant dissolves in the moisture. The metal at the positive electrode then undergoes the oxidation reaction, losing electrons and changing into high-valence positive ions, while the metal at the negative electrode undergoes the reduction reaction, acquiring electrons as negative ions form. The positive and negative ions combine in the electrolyte and generate a water-insoluble hydroxide, which is further oxidized by oxygen and water. Rust products with different element proportions are obtained under the combined action of factors such as temperature, pH value and oxygen content in the moisture. The composition of rust can be expressed as mFeO·nFe₂O₃·pH₂O, where the values of m, n and p vary with conditions. The insulator with steel foot corrosion on a direct current transmission line is on the positive polarity side.
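The half-reactions underlying this anodic corrosion process can be written out explicitly (standard iron electrochemistry in a neutral, aerated electrolyte; these equations are supplied for clarity and are not quoted from the article):

```latex
\begin{align*}
\text{Anode (steel foot):}\quad & \mathrm{Fe \longrightarrow Fe^{2+} + 2e^{-}} \\
\text{Cathode:}\quad & \mathrm{O_2 + 2H_2O + 4e^{-} \longrightarrow 4OH^{-}} \\
\text{In the electrolyte:}\quad & \mathrm{Fe^{2+} + 2OH^{-} \longrightarrow Fe(OH)_2} \\
\text{Further oxidation:}\quad & \mathrm{4Fe(OH)_2 + O_2 + 2H_2O \longrightarrow 4Fe(OH)_3}
\end{align*}
```

The mixed hydroxides then dehydrate to varying degrees, which is why the rust composition carries the variable coefficients described in the text.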
Considering that the steel foot side of the insulator on the positive polarity side is connected to the transmission line, the steel foot is at a high potential relative to the iron hat. When the insulator is exposed to rain or its surface becomes damp, the contaminant accumulated on the surface combines with moisture to form an electrolyte environment, creating electrolytic cell conditions with the iron hat and steel foot as the two electrodes. The steel foot, as the positive electrode, loses electrons through the oxidation reaction and changes into bivalent cations dissolved in the electrolyte, resulting in corrosion of the positive electrode steel foot. Analysis on Accelerated Corrosion Test The simulation test methods currently used for research on insulator electrolytic corrosion fall into two categories: long-term field simulation, and accelerated simulation on test platforms under laboratory conditions, the latter mainly including the water spray method, salt fog method, electrolytic bath method and solid dirt layer method. Field simulation costs too much time and spans too long a period, which is unfavorable for practical engineering application. Accelerated corrosion methods satisfy the time requirement, but indexes such as how well they fit field conditions must be taken into consideration. Several methods are introduced below and their advantages and disadvantages compared, in order to select the optimal test method. Water spray method The water spray method suspends the insulator in the same way as in actual field operation and sprays NaCl solution onto the corrosion position on the steel foot.
It then applies a direct current voltage to form the conductive path, with the steel foot end connected to the positive electrode of the power supply, so that positive electrode corrosion occurs and an accelerated corrosive environment is formed. The flow rate and conductivity of the conducting solution have a large influence on the corrosion rate; the flow rate is set by adjusting the spray rate of the sprayer, and the conductivity is obtained by calculating the concentration of the NaCl solution. Salt fog method NaCl solution is also adopted for the salt fog method. Unlike the water spray method, it forms a salt fog using a high pressure electric spray gun, developing a conductive film directly on the surface of the insulator, which generates a certain leakage current under the external direct voltage. The test must be conducted in a special fog room equipped with a dedicated fogging device. Solid dirt layer method This method coats a layer of solid dirt on the surface of the porcelain insulator, with available solid dirt layers shown in Table 4.2. Artificial damping is applied after coating, to generate a leakage current of a certain value under the direct voltage; this is continued for a while to achieve accelerated electrolytic corrosion of the test object. The solid dirt layer material must form a conductive film with strong adhesion on the surface of the porcelain insulator after damping, so that it is not lost too early once moistened. According to existing research findings, only about 3 C of leakage charge can be simulated per coating; therefore, more than ten thousand coatings would be needed to simulate an insulator with an operating time of more than 15 years.
Electrolytic bath method Add 3% NaCl solution to the electrolytic bath as the electrolyte, adopt a copper bar as the negative electrode, and connect the steel foot test object to the positive electrode. Before soaking the steel foot test object in the electrolyte, wrap the positions that are not to be corroded with insulating material, so that the simulated corrosion position is consistent with the actual corroded position on the steel foot of a suspension insulator. Then adjust the direct current power supply to keep a constant current of 5 A, record the ampere-hour value accumulated during electrolysis, and measure the corresponding corroded depth and corrosion volume of the steel foot and the zinc cover. Comparisons among different schemes According to existing research findings, of the four schemes above, the water spray method has comparatively low equipment requirements; its test platform is easy to set up, and it matches field operating conditions. The salt fog method also matches field operating conditions, but it has comparatively high requirements on the test platform, needing a special fog room and a supporting fogging device. The solid dirt layer method does not have high equipment requirements, but tens of thousands of coatings are required, so it is only applicable to tests with a small leakage charge quantity. The electrolytic bath method has simple equipment requirements, but its simulation differs from the actual field corrosion condition of the insulator. Based on a comprehensive comparison of the above test methods, this article selects the water spray method for the accelerated corrosion test. Sample Pretreatment Make a semicircular arc copper sheet electrode with an inner diameter of 1.5 cm, an outer diameter of 3.5 cm and a width of 2 cm. Weld a metal wire to the middle of the outside of the copper sheet.
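For the electrolytic bath method above, the recorded ampere-hour (charge) values can be related to a theoretical upper bound on metal loss through Faraday's law. A minimal sketch (the constants are standard physical values and the function is illustrative, not from the article):

```python
# Theoretical anodic metal loss from accumulated charge (Faraday's law),
# assuming 100% current efficiency at the anode.
F = 96485.0  # Faraday constant, C/mol

def metal_loss_grams(charge_c, molar_mass_g_mol, valence):
    """Mass of metal oxidized by a given charge passed through the cell."""
    return molar_mass_g_mol * charge_c / (valence * F)

# Example: one hour at the constant 5 A used in the test -> 18000 C.
charge = 5.0 * 3600.0
loss_fe = metal_loss_grams(charge, 55.85, 2)  # iron base, Fe -> Fe2+ + 2e-
loss_zn = metal_loss_grams(charge, 65.38, 2)  # zinc coating, Zn -> Zn2+ + 2e-
```

In practice the measured corroded depth and volume will fall below this bound, since part of the leakage charge is carried by other reactions.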
The method for short-circuiting part of the creepage distance is as follows: fix the manufactured copper sheet electrode 1 cm away from the steel foot with waterproof glue, and fix the other end of the metal wire to the locking pin. Development of the Test Select 5 brand new porcelain insulators and short-circuit part of the creepage distance with the metal copper sheet, suspending them in the same V-string manner as in field operation (with an included angle of 76° between the axis of the insulator and the horizontal plane). Samples 1#, 2# and 3# receive no further treatment; sample 4# is coated with RTV hydrophobic coating in the region outside the short circuit; and sample 5# is fitted with an umbrella-type nonmetallic protective hat over the zinc cover. Prepare the electrolyte with a certain conductivity, checking it with a conductivity tester during preparation; the electrolyte solute is refined sodium chloride with a purity higher than 99%. The accelerated corrosion test aims to accelerate the steel foot corrosion. According to the electrochemical corrosion principle, the electrode connected to the positive pole of the power supply undergoes the oxidizing reaction; therefore, the steel foot is connected to the high potential and the iron hat (the copper sheet used to short-circuit the creepage distance) to the low potential. Start the test units in the following operation sequence: check wiring → inlet the electrolyte → disconnect the ground connection → enable the power supply → boost to the designated value, with a target voltage of 400 V. Read the units' leakage charge display, recording the accumulated leakage charge every 1 h and the real-time value of the leakage current over a period.
According to the results of the hardware dissection test on the XZP2-300 model operating insulator taken down from the ±800 kV Chusui direct current transmission line, the annual average maximum leakage charge at the steel foot position is 1479 C, which is taken as the benchmark charge quantity for the test. Record the data as per step (4) after the start of the test. Suspend the test when rust first appears on the 1# insulator; take down the 1# insulator after the outage, and record the time node, including the operating time and the accumulated leakage charge at that node. Take down the 2# insulator at a leakage charge of 8135 C (equivalent to 5.5 years of field operation). Take down the 3# and 4# samples at a leakage charge of 44370 C on 3# (equivalent to 30 years of field operation). Take down the 5# sample when its leakage charge equals that of the 1# sample, and record the corresponding time point and leakage charge. Analysis on Test Result 1. The leakage charge of the 2# sample is 8135 C (equivalent to 5.5 years of field operation). Comparisons are made with an insulator (6#) taken down from the Chusui direct current line after an actual operating period of 5.5 years. 2. Stop the test at a leakage charge of 44370 C on the 3# sample (equivalent to 30 years of field operation), then take down the 3# and 4# samples. The scene drawings of the two insulator samples are shown in Fig.8, and the corresponding relationship between leakage charge and time is shown in Fig.9. Comparing the corrosion state and leakage charge of the 3# and 4# insulators: (1) Both are seriously corroded; the outermost zinc cover of the 3# insulator has completely fallen off, with the most serious corrosion near the cement junction.
The surface layer of the zinc cover of the 4# insulator is also largely corroded, and large quantities of zinc hydroxide products are attached to the surface of the zinc cover away from the cement junction. (2) Comparing the leakage charge at take-down and the corrosion rate during the test, the hydrophobic material plays a certain role in delaying corrosion, but the effect is not significant. When the leakage charge of the 5# insulator reached the value at which the 1# insulator had been taken down, the 5# insulator was taken down and the corresponding time point and leakage charge recorded: 108.8 h and 20795 C. Refer to Fig.9 for comparisons between sample drawings and Fig.10 for comparisons between corrosion rates. (1) The insulator corrosion regions are the junctions among the steel foot, air and cement (protective cover); i.e., installing the umbrella-shaped protective cover can effectively change the corrosion positions on the insulator. (2) According to the relation between leakage charge and time, installing the umbrella-shaped protective cover can effectively delay corrosion, reducing the corrosion rate to 0.6% of the original rate. (3) The samples in the figures received equivalent treatment; compared with the steel foot treatment method in the ideal state, the thickness of the exposed zinc layer should be further increased in order to further delay corrosion of the iron base. Conclusion 1. According to the comparisons among the accelerated corrosion test methods, the water spray method has significant advantages when indexes such as fit to field conditions, operability and economy are comprehensively considered, and it is applicable to continuous tests with large leakage charge quantities; 2.
For the XZP2-300 model porcelain insulator, when the leakage charge reaches 20795 C, the zinc cover loses its protective function and the iron base of the insulator steel foot begins to corrode, directly affecting the mechanical strength of the insulator; 3. Coating the surface of the insulator porcelain ware with hydrophobic material plays a certain role in delaying corrosion of the insulator steel foot, but the effect is not significant; 4. Installing the umbrella-shaped protective cover can effectively change the corrosion position and delay corrosion, reducing the corrosion rate to 0.6 times the original rate.
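The charge-to-time equivalence used throughout the test, based on the 1479 C per year benchmark measured on the dissected XZP2-300 insulator, can be sketched as:

```python
# Convert accumulated leakage charge to equivalent field operating years,
# using the 1479 C/year benchmark from the XZP2-300 dissection test.
ANNUAL_CHARGE_C = 1479.0

def equivalent_years(leakage_charge_c):
    return leakage_charge_c / ANNUAL_CHARGE_C

# Charge targets used in the test:
#   8135 C  -> ~5.5 years of field operation (2# sample)
#   44370 C -> ~30 years of field operation (3#/4# samples)
```

This linear equivalence is the stated basis for the test; it assumes the annual leakage charge of the field insulator stays near the measured benchmark.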
The characteristics of circular disposable devices and in situ devices for optimizing male circumcision: a network meta-analysis In situ devices (ISDs) and circular disposable devices (CDDs) are used for optimizing male circumcision (MC), but evidence exploring the characteristics of these two devices is insufficient. In order to explore this issue systematically and provide reliable evidence, ten published randomized controlled trials (RCTs) exploring the safety and efficacy of ISDs and CDDs were included (involving 4649 men). The included RCTs were of moderate quality after assessment. Pairwise meta-analyses and network meta-analyses were processed in Stata 13.0 and ADDIS v1.16.6, respectively. According to the outcomes that were statistically significant in both pairwise and network meta-analyses, ISD was found to have less intraoperative blood loss (IB), less operative time (OT) and a lower incidence of wound bleeding (WB) than conventional circumcision (CC); ISD was found to have less WB but more wound healing time (WHT) than CDD; and CDD was found to have less IB and less OT than CC. CDD tended to have the best wound healing condition and the least pain experience; ISD tended to have the least IB, least OT, least WB, and highest satisfaction rate. With their own superiorities in many aspects, CDD and ISD are both safe and effective devices for optimizing MC. Scientific Reports | 6:25514 | DOI: 10.1038/srep25514 Therefore, we conducted this systematic review and network meta-analysis to assess the safety and efficacy of ISDs and CDDs. Furthermore, we evaluated the characteristics of these devices for optimizing MC. Results Selected trials. The PubMed and Ovid databases were searched and 491 and 838 references were evaluated, respectively. After 775 duplicates were removed, the titles and abstracts of 554 records were read, and 539 records that did not meet the inclusion criteria were excluded.
Fifteen trials were retrieved for a detailed evaluation and five were excluded: two non-randomized controlled trials, one trial that compared two application methods of one device, and two trials that used immature 15 and harmful devices 22 . Ten randomized controlled trials (RCTs) were identified (4649 men) 12,18,20,21,[23][24][25][26][27][28] and included in our meta-analysis (Fig. 1); their moderate methodological quality is shown in Fig. 2. Seven trials 12,18,21,23,[26][27][28] had a low risk of bias in random sequence generation; only three trials 18,26,28 had a low risk of bias in allocation concealment; and none of the trials had a low risk of performance bias or detection bias. However, there was a low risk of attrition bias, reporting bias, and other biases in all trials except one 26 that had incomplete outcome data. Of all the RCTs, six were conducted in China 20,21,24,25,27,28 , and the remaining four were conducted in South Africa 12 , Uganda 23 , Rwanda 18 , and Kenya and Zambia 26 . Nine RCTs were two-arm trials: circular stapler versus CC (three studies), Shang Ring versus CC (four studies), Unicirc versus CC (one study), and PrePex versus CC (one study). One RCT was a three-arm trial: circular stapler versus Shang Ring versus CC. More RCT characteristics are shown in Table 1. The general network of eligible comparisons in this meta-analysis is shown in Fig. 3. Five studies compared CDD versus CC, six compared ISD versus CC, and one compared CDD versus ISD directly. However, the numbers of comparisons were variable and fewer than ten for each analyzed outcome (Tables 2 and 3). Thus, we considered the publication bias in each comparison. Credibility of network meta-analysis. We ran adequate iterations for each model to ensure that every Markov chain was similar and that all potential scale reduction factors (PSRF) were close to 1 (details not shown). This ensured that every model in this network meta-analysis converged.
Consistency and inconsistency analyses were applied to each outcome, and no great difference between the random effects standard deviation (RESD) and the inconsistency standard deviation (ICSD) was found (Table 4). As the CDD versus ISD comparison involved both direct and indirect evidence, node-splitting models were estimated for the outcomes of this comparison (Table 4); we found that most of the evidence was exchangeable, except for WB. Comparisons between CDD and CC. Five studies involving 2026 men were included in pairwise meta-analyses (Table 2). The statistically significant outcomes were IB, OT, mean pain score on the operation day (PO), mean pain score on postoperation days (PP), and wound healing time (WHT); CDD showed less IB and OT than CC. Results of the corresponding network meta-analyses are given in Table 3. Comparisons between ISD and CC. Five studies involving 2937 men were included in pairwise meta-analyses (Table 2); results of the corresponding network meta-analyses are given in Table 3. Comparisons between CDD and ISD. Only one study involving 628 men was included in pairwise meta-analyses (Table 2). Statistically significant outcomes were observed for IB, OT, PO, PP, SR, incidence of wound adverse event (WAE), WB, WE, WHT, and incidence of wound infection (WI); CDD showed more IB than ISD. Results of the corresponding network meta-analyses are given in Table 3. Abbreviations: PO, mean pain score on the operation day; PP, mean pain score on postoperation days; Com, overall incidence of complication; WAE, incidence of wound adverse event; WB, incidence of wound bleeding; WD, incidence of wound dehiscence; WE, incidence of wound edema; WHT, wound healing time (days); WI, incidence of wound infection; Cost, overall expenditure; SR, satisfaction rate. The surface under the cumulative ranking curve and treatment ranks (Table 5). The outcomes of CDD with a surface under the cumulative ranking curve (SUCRA) ≥ 80% were PO, PP, WE, WHT, and WI, and those with a SUCRA ≤ 20% were Cost and WB.
CDD had a 72%, 73%, 90%, 91%, and 70% possibility for the least PO, PP, WE, WHT, and WI, respectively; CDD had a 97% and 74% possibility to have the highest Cost and WB, respectively. The outcomes of ISD whose SUCRA were ≥ 80% were IB, OT, SR, and WB and whose SUCRA were ≤ 20% was WHT. ISD had a 70%, 62%, and 77% possibility to have the least IB, OT, and WB, respectively, and a 97% possibility to have the highest SR; ISD had a 95% possibility to have the longest WHT. Although direct and indirect evidence were unexchangeable when comparing WB of CDD and ISD, a sensitivity analysis was applied to explore the reliability of the ranks' order, and we found that the conclusion was stable (the details are not shown). Discussion The WHO and Joint United Nations Programme on HIV/AIDS (UNAIDS) recommend a voluntary medical MC to be considered as a part of a comprehensive HIV prevention package in countries with generalized epidemics. MC has been proven to have the greatest public health impact and provide the largest cost efficiency if the services are rapidly scaled up. One efficacious way includes using disposable devices in MC; this may lower the surgical skill required and accelerate the pace of delivery of voluntary medical MC while maintaining the safety of the procedure 11,19 . To our knowledge, this is the first network meta-analysis concerning disposable devices in MC. Although our analysis was based on ten studies, it included 4649 individuals who were randomly assigned to three different kinds of MC methods. These trials were conducted in China and some countries in Africa. The methodological quality of RCTs was moderate due to inadequacies in allocation concealment and blinding because of ethical issues and properties of the surgical studies. In our meta-analysis, devices were classified into one of two categories: ISD (PrePex and Shang Ring) and CDD (circular stapler and Unicirc), according to their operation principles. 
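The SUCRA values and rank possibilities quoted above are computed from each treatment's posterior rank probabilities. A minimal sketch of the standard SUCRA definition (illustrative code, not taken from the study's software):

```python
# SUCRA for one treatment, given its rank probabilities p[0..a-1],
# where p[k] is the probability of holding rank k+1 (rank 1 = best).
def sucra(rank_probs):
    a = len(rank_probs)
    cumulative = 0.0
    total = 0.0
    for k in range(a - 1):        # sum cumulative probabilities over ranks 1..a-1
        cumulative += rank_probs[k]
        total += cumulative
    return total / (a - 1)
```

A treatment certain to rank first scores 1 and one certain to rank last scores 0, which is how the ≥ 80% and ≤ 20% thresholds above are interpreted.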
An ISD consists of an inner and an outer ring. The inner ring is a frame onto which the outer ring can lock to clamp the foreskin 26,29 . Excess foreskin is removed immediately after the rings are locked firmly 23 or at the time when the rings are removed 29 . Rings are removed according to the surgeon's assessment of whether the ischemic foreskin has necrosed and the wound has healed. Without a ring in situ, a CDD has a circular glans pedestal on which the excess foreskin can be incised smoothly and a fastening part that fixes the reserved foreskin to prevent shifting. The wound is stapled simultaneously 21 with incising the foreskin or glued with some type of biogel 12 . The staples will, in theory, fall off when the wound has healed. Some trials involving ISD in Africa were reviewed by the Technical Advisory Group on Innovations in Male Circumcision of the WHO. The outcomes, based on a total of 1983 Shang Ring and 2417 PrePex procedures, were that ISD was easier to perform, had higher surgical success rates and lower total procedure times, eliminated the need for suturing, possibly had fewer complications, caused less bleeding, gave better cosmetic results, potentially reduced the time taken for recovery after surgery, and eliminated the need for routine injectable anesthesia (PrePex only) compared with other methods. However, wound healing took about 1-2 weeks longer on average than with CC, and pain varied by method type 19 . We had previously conducted a meta-analysis of RCTs that compared the safety and efficacy of Shang Ring with CC and drew similar conclusions 13 . The superiority of disposable devices was also confirmed in our meta-analysis. According to the stable outcomes, ISD was found to have less intraoperative blood loss, a lower operative time, and a lower incidence of wound bleeding than CC. In addition, ISD showed a lower incidence of wound bleeding but needed more wound healing time than CDD.
CDD was found to have less intraoperative blood loss and a lower operative time than CC. After assessing the results of SUCRA and treatment ranks, we found that CDD tended to be the treatment with the best wound healing condition (lowest incidence of wound edema/infection and shortest wound healing time) and the least pain experience (lowest pain score). However, it may be the most expensive device, with the highest incidence of wound bleeding after surgery. Among all the techniques, ISD circumcision tended to have the lowest operative time and bleeding volume intraoperatively and the lowest incidence of bleeding postoperatively. Furthermore, ISD had the highest satisfaction rate despite requiring the longest wound healing time relative to the other techniques. CC showed no advantages other than a minor trend to be the cheapest MC method (78.5% SUCRA, 57% possibility). Lv et al. 21 conducted a survey of 508 men who had recently undergone circumcision. They found that safety and pain were the men's main concerns before MC, while pain and penile appearance were their main postoperative concerns. Pain levels varied based on individual tolerance, methods of anesthesia and analgesia, wound condition (e.g., infection or edema would result in more pain), and so on. In our analysis, both CDD and ISD demonstrated shorter operative times than CC, a result of fast excision of the excess foreskin (or no excision at all) and the avoidance of suturing. This could also be the reason for their lower intraoperative pain scores. CDD was also likely to have a better wound condition, which could be the reason for its lower postoperative pain and faster wound healing. Men who chose ISD circumcision were required to wear the ring for 1-2 weeks longer, and pain during an erection was reported as being somewhat higher than at comparable times following a CC (Shang Ring, in particular). However, ISD had a 97% possibility to be the device that best satisfied the men in our analysis.
ISD left a neat circumferential wound with no suture marks at six weeks postoperatively 19 . In contrast, staples and stitches are used to anastomose the foreskin in circular stapler circumcision and CC, respectively, and an unaesthetic appearance may result from the imprints and pinholes. Without a doubt, safety, the most basic and important requirement, has been proven for CDD and ISD in many studies and in this meta-analysis. Nevertheless, the devices still have their own specific adverse events, such as dislocation of the frenulum and wound disruption (associated with all devices), inconvenience (ISD), unpleasant odor (PrePex), and a potential need for intraoperative suturing because of bleeding or incomplete wound closure (CDD) 11,12,19,25,29 . We could not explore these issues in our analysis because of the limited number of included RCTs. There are some limitations in our study. We classified devices into two categories, but differences still existed within the same category; for example, Shang Ring requires a sterile field and injection of local anesthetic at placement, whereas PrePex requires neither. We failed to obtain missing data, had a limited number of studies (ten RCTs), and had only one direct comparison between CDD and ISD, which implies the possibility of publication bias. Some of the outcomes measured in this meta-analysis were based on only two studies. The included studies often reported their outcomes in different ways, such as follow-up periods, pain scores, and definitions of complications. Therefore, all of the results from our meta-analysis should be interpreted with caution. In conclusion, the clinical performance of disposable devices used in adult MC exceeded that of CC. CDD circumcision tends to have the best wound healing condition and the least pain.
ISD circumcision tends to have the shortest operative time, the least intraoperative blood loss, the lowest incidence of wound bleeding, and the highest satisfaction rate. Each device has its own advantages, and these should be discussed with men prior to their circumcision. Material and Methods Search strategy. A systematic bibliographic search of the PubMed, Embase, and Cochrane Library databases (the Cochrane Central Register of Controlled Trials and the Cochrane Database of Systematic Reviews) via Ovid was performed from inception to 12 January 2016 for RCTs that reported using disposable devices to complete adult MCs. The keywords used for searching were "circumcision," "randomized controlled trial," and their variant expressions. The reference sections of original papers were scanned to look for missing trials. Study eligibility criteria. There was no language restriction, and published RCTs that compared the efficacy and safety of one kind of disposable device with CC or another device in adult MC were included, regardless of whether concealment of allocation or blinding was carried out. Studies of devices that had been confirmed to be immature or harmful and are therefore not currently used were excluded. Duplicate publications of two or more studies investigating the same sample were excluded. Interventions and comparisons. Men were divided into different groups according to the operating principle of the device, irrespective of brand names. Shang Ring and PrePex were classified as ISD; circular stapler and Unicirc were classified as CDD; and all non-device MCs (e.g., sleeve and dorsal slit) were classified as CC. The comparisons were between at least two of ISD, CDD, and CC. Outcome measures.
The outcomes measured in this review included the following: IB-intraoperative blood loss (ml), OT-operative time (min), PO-mean pain score on the operation day, PP-mean pain score on postoperative days (after 24 h), Com-overall incidence of complications, WAE-incidence of wound adverse events, WB-incidence of wound bleeding, WD-incidence of wound dehiscence, WE-incidence of wound edema, WHT-wound healing time (day), WI-incidence of wound infection, Cost-overall expenditure, SR-satisfaction rate. Literature screening and data extraction. Two reviewers (Yu Fan and Dehong Cao) read the titles and abstracts of all potential trials and independently selected eligible RCTs according to the predetermined inclusion and exclusion criteria. Any discrepancy was resolved by discussion between the two reviewers or with the assistance of a third reviewer (Qiang Wei). Data were extracted and entered by the first two reviewers through careful reading, using a standardized form. The third reviewer (Qiang Wei) checked the information and resolved any differences by discussion with the other authors. If a study provided medians and interquartile ranges instead of means and standard deviations (SDs), we imputed the means and SDs as described by Hozo et al. 30 In cases of multiple pain scores at different time points, we calculated the mean ± SD pain score to obtain PO and PP. In the presence of missing data, efforts were made to contact the authors for the information. Extracted data mainly included general trial information, selected outcomes, and methodological characteristics. The methodological quality of each selected trial was assessed by the same two reviewers according to the Cochrane Collaboration Risk of Bias Tool in Review Manager 5.3. A funnel plot was used to test for publication bias only when the number of studies in a comparison was no less than ten.
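The imputation step cited above can be sketched as follows. This is a minimal illustration of the range-based rules from Hozo et al.; the function name and the exact sample-size cut-offs are assumptions based on the commonly cited form of those rules, not the review's own implementation.

```python
def hozo_mean_sd(minimum, median, maximum, n):
    """Estimate mean and SD from the median and range (after Hozo et al., 2005)."""
    mean = (minimum + 2 * median + maximum) / 4
    if n <= 15:
        # small samples: variance formula from the original paper
        var = ((minimum - 2 * median + maximum) ** 2 / 4
               + (maximum - minimum) ** 2) / 12
        sd = var ** 0.5
    elif n <= 70:
        sd = (maximum - minimum) / 4  # range/4 rule for moderate n
    else:
        sd = (maximum - minimum) / 6  # range/6 rule for large n
    return mean, sd
```

For example, a study reporting a median pain score of 3 with range 1-9 in 30 men would be imputed as mean 4.0 and SD 2.0.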
Otherwise, the existence of potential publication bias was assumed. Data processing and statistical analysis. A network plot of devices was drawn in Stata 13.0 to visually display the number of studies involved in each direct comparison. A pairwise meta-analysis was then performed by synthesizing the studies that compared the same interventions with a random-effects model, which incorporates the assumption that different studies assess different yet related treatment effects 31 . Dichotomous and continuous data were expressed as RR and SMD, respectively, both with 95% confidence intervals (CIs). The network meta-analysis was performed in ADDIS v. 1.16.6, whose network meta-analysis models are implemented in the Bayesian framework and estimated by Markov chain Monte Carlo methods. All data processing procedures in ADDIS followed the methods described by Zhao et al. 32 Using the Brooks-Gelman-Rubin diagnostic, we considered the model to have converged when all Markov chains were similar and the potential scale reduction factor (PSRF) was close to 1; otherwise, additional iterations were run until the model had converged. In the network meta-analysis, we chose the OR with its 95% credibility interval (CrI) to express dichotomous data, because logOR has better mathematical properties than logRR and often reflects the underlying mechanisms more effectively. Consistency and inconsistency analyses were applied to check whether the trials in the network were consistent; an inconsistency standard deviation (ICSD) greater than the random-effects standard deviation (RESD) indicated an inconsistency problem. In that case, we investigated the source of inconsistency among the studies, excluded the offending studies, and reran the model until there was no significant inconsistency. A node-splitting model was estimated for each comparison involving both direct and indirect evidence. Whether the evidence was exchangeable was judged from the corresponding inconsistency parameter, which should be zero when evidence is completely exchangeable (p > 0.05).
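The convergence check described above can be illustrated with a simplified potential scale reduction factor. This sketch omits the degrees-of-freedom correction used in the full Brooks-Gelman-Rubin diagnostic and is not the software's actual implementation; values close to 1 indicate that the chains agree.

```python
import statistics

def psrf(chains):
    """Simplified potential scale reduction factor.

    chains: list of equal-length lists of MCMC samples of one parameter.
    """
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    # W: average within-chain variance; B: between-chain variance scaled by n
    w = statistics.fmean(statistics.variance(c) for c in chains)
    b = n * statistics.variance(means)
    # pooled estimate of the posterior variance
    var_hat = (n - 1) / n * w + b / n
    return (var_hat / w) ** 0.5
```

Two chains wandering the same region give a PSRF near 1, whereas chains stuck in different regions give a value well above 1, signalling that more iterations are needed.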
When there was no relevant inconsistency and the evidence was exchangeable, a consistency model was used to draw conclusions on the relative effects (OR or SMD, with CrI) of the included treatments and their rank effects with probabilities. Outcomes were considered stable if they were statistically significant in both the pairwise and the network meta-analysis. The aforementioned treatment ranks were processed in Stata 13.0 to calculate SUCRA. The larger the SUCRA value, the more strongly a treatment is recommended as the best ranked, and vice versa. We defined a SUCRA ≥ 80% as a significant recommendation of the best rank and a SUCRA ≤ 20% as a significant indication of the worst rank.
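SUCRA is computed from a treatment's rank probabilities (the posterior probability of achieving each rank, produced by the Bayesian model). A minimal sketch, assuming that input format; real software derives the probabilities from the MCMC samples:

```python
def sucra(rank_probs):
    """Surface under the cumulative ranking curve (rank 1 = best)."""
    a = len(rank_probs)          # number of treatments
    cumulative = 0.0
    total = 0.0
    for p in rank_probs[:-1]:    # cumulative probabilities over the first a-1 ranks
        cumulative += p
        total += cumulative
    return total / (a - 1)
```

A treatment certain to rank first scores 1.0, one certain to rank last scores 0.0, and a treatment with uniform rank probabilities scores 0.5, matching the interpretation that larger SUCRA values indicate better ranks.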
Association of Diaphragm Thickness and Respiratory Muscle Strength With Indices of Sarcopenia Objective To evaluate the relationship between respiratory muscle strength, diaphragm thickness (DT), and indices of sarcopenia. Methods This study included 45 healthy elderly volunteers (21 male and 24 female) aged 65 years or older. Sarcopenia indices, including hand grip strength (HGS) and appendicular skeletal muscle mass/body mass index (ASM/BMI), were measured using a hand grip dynamometer and bioimpedance analysis, respectively. Calf circumference (CC) and gait speed were also measured. Maximal inspiratory pressure (MIP) and maximal expiratory pressure (MEP) were obtained using a spirometer, as a measure of respiratory muscle strength. DT was evaluated through ultrasonography. The association between indices of sarcopenia, respiratory muscle strength, and DT was evaluated using Spearman’s rank correlation test, and univariate and multiple regression analysis. Results ASM/BMI (r=0.609, p<0.01), CC (r=0.499, p<0.01), HGS (r=0.759, p<0.01), and gait speed (r=0.319, p<0.05) were significantly correlated with DT. In the univariate linear regression analysis, MIP was significantly associated with age (p=0.003), DT (p<0.001), HGS (p=0.002), CC (p=0.013), and gait speed (p=0.026). MEP was significantly associated with sex (p=0.001), BMI (p=0.033), ASM/BMI (p=0.003), DT (p<0.001), HGS (p<0.001), CC (p=0.001) and gait speed (p=0.004). In the multiple linear regression analysis, age (p=0.001), DT (p<0.001), and ASM/BMI (p=0.008) showed significant association with MIP. DT (p<0.001) and gait speed (p=0.050) were associated with MEP. Conclusion Our findings suggest that respiratory muscle strength is associated with DT and indices of sarcopenia. Further prospective studies with larger sample sizes are needed to confirm these findings. 
INTRODUCTION Sarcopenia is a geriatric syndrome associated with loss of skeletal muscle mass and muscle strength [1]. Sarcopenia commonly occurs as an age-related process and is also influenced by malnutrition, inactivity, disease, and other iatrogenic factors [2]. Sarcopenia is associated with low quality of life, increased risk of falls and fractures, disability, and loss of independence [3]. The association between sarcopenia and respiratory diseases has been previously documented. One study reported a high prevalence of about 60% of sarcopenia in patients with respiratory failure [4]. On the other hand, a cross-sectional study based on the 2008-2011 Korean National Health and Nutritional Examination Survey showed that lower skeletal muscle mass is associated with reduced respiratory function in the elderly [5]. Reduced respiratory muscle strength can also impact respiratory health. A study conducted in Japan reported that respiratory muscle weakness and lower body trunk muscle mass increased the risk for pneumonia in older people [6]. A systematic review of adults with respiratory muscle weakness after stroke showed that respiratory muscle strength training decreased the risk of respiratory complications [7]. Diaphragm thickness (DT) can be decreased in a range of disease states, including sarcopenia. Ultrasonographic evaluation of the diaphragm showed that DT is reduced in patients on prolonged mechanical ventilation [8]. A study by Deniz et al.
[9] revealed a significant reduction in DT among individuals with sarcopenia compared to non-sarcopenic elderly individuals. This reduction in DT is concerning, as it can contribute to diaphragm dysfunction and respiratory complications [10]. Recently, the concept of respiratory sarcopenia has emerged, and ultrasonographic evaluation of DT has been proposed as a measure of respiratory muscle mass [11]. However, evaluation of the diaphragm by ultrasound is usually performed in the intensive care unit setting and is not routinely measured in sarcopenia patients [12]. While previous studies have examined the correlation between respiratory muscle strength and sarcopenia [13], as well as DT and sarcopenia [9], there is a dearth of research investigating the association among all three factors simultaneously. Hence, our study aimed to assess the relationship between respiratory muscle strength, the diaphragm muscle, and indices of sarcopenia within a single investigation. Study population Healthy adult volunteers (25 male and 25 female) aged 65 years or older were consecutively recruited for this cross-sectional study through advertisements. Participants with functional problems due to lung disease (such as lung cancer, history of lung surgery, chronic obstructive pulmonary disease, asthma, or tuberculosis), diseases which can affect sarcopenia (such as stroke, spinal cord injury, or peripheral neuropathy), or a history of major joint surgery were excluded. They were informed of the purpose and nature of the study and signed the written consent form. The study was approved by the Institutional Review Board of Chung-Ang University Hospital (No. 1751-003-281).
Skeletal muscle mass assessment Bioelectrical impedance analysis (BIA) with the InBody S10 (Biospace) was used to measure skeletal muscle mass. BIA is a non-invasive, easy-to-administer tool for measuring body composition [14]. The participants were instructed to avoid eating or exercising for at least 8 hours before the study. After measuring height and weight, electrodes were attached to the four extremities of the participants in the supine position. Appendicular skeletal muscle mass (ASM) was obtained through the body composition analysis. Appendicular skeletal muscle mass/body mass index (ASM/BMI) was calculated as follows [15]: ASM/BMI = appendicular skeletal muscle mass (kg) / body mass index, where body mass index = weight (kg) / height (m)². Thigh and calf circumference measurement Thigh and calf circumference (CC) were measured with the patient in the supine position. The left knee was raised to form a 90° angle between the calf and the thigh [16]. The tape measure was placed around the left calf and thigh, and the maximal circumference was measured without compressing the subcutaneous tissue. Muscle strength and physical performance measurements Handgrip strength (HGS) was measured using a hand-grip dynamometer, T.K.K.5401 (Takei Scientific Instruments). Participants were asked to assume the following position: shoulder adducted and neutrally rotated, elbow flexed to 90°, forearm in a neutral position, with the wrist between 0° and 30° of extension and between 0° and 15° of ulnar deviation, while sitting in a straight-backed chair [17]. Participants were instructed to squeeze the grip handle as hard as possible for 3 seconds, and the maximum contraction force (kg) was recorded. The tests were performed three times on each hand with a 60-second rest between trials. The average of the three values was used for the analysis.
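The ASM/BMI calculation above amounts to two divisions; a minimal sketch (function and parameter names are illustrative):

```python
def asm_over_bmi(asm_kg, weight_kg, height_m):
    """ASM/BMI index: appendicular skeletal muscle mass divided by BMI."""
    bmi = weight_kg / height_m ** 2   # BMI = weight (kg) / height (m)^2
    return asm_kg / bmi
```

For a participant with 20 kg of appendicular skeletal muscle, weighing 70 kg at 1.75 m, the index is 20 / 22.86 ≈ 0.875.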
Gait speed was measured to evaluate physical performance and the function of the lower extremities. Gait speed was evaluated on a hard surface by measuring the time taken to walk 4 m at one's usual walking pace [18]. The participant walked a total distance of 9 m, with 2.5 m at the start and end used for acceleration and deceleration. The measurements of three trials were averaged and used for the analysis. Respiratory muscle strength Maximal expiratory pressure (MEP) and maximal inspiratory pressure (MIP) were used as measures of expiratory and inspiratory muscle strength [19]. MEP and MIP were measured in the sitting position using a portable spirometer (Pony FX; COSMED) [20]. To minimize errors, an experienced operator coached the subjects to completely seal their lips around the mouthpiece to prevent air leakage. Participants were encouraged to maximally expire for MEP measurements and to maximally inspire for MIP measurements. At least five trials were performed under supervision, and the maximum value among trials that varied by less than 20% was recorded [21]. Each test was performed with a 1-minute break. DT measurement DT was measured by B-mode ultrasound using a 7.5 MHz linear transducer (SONOACE R7, Samsung Medison Inc.). The measurement of the diaphragm was conducted on the right side, at the zone of apposition in the 8th or 9th intercostal space, as described by De Bruin et al. [22]. The probe was placed between the anterior and mid-axillary lines. The participant was in the sitting position, and measurements of the diaphragm were taken at the end of expiration (DTe) and inspiration (DTi) during quiet breathing by a single experienced physician. The mean value of DTe and DTi was used as the measure of DT. The thickening fraction (TF) of the diaphragm during quiet breathing was also calculated as TF = (DTi − DTe)/DTe × 100% [23].
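The diaphragm measurements described above can be combined into the thickening fraction. This sketch assumes the conventional definition of TF (relative thickening from end-expiration to end-inspiration); the function name is illustrative:

```python
def thickening_fraction(dti_mm, dte_mm):
    """Thickening fraction (%) from end-inspiratory and end-expiratory DT."""
    return (dti_mm - dte_mm) / dte_mm * 100
```

A diaphragm that thickens from 2.0 mm at end-expiration to 2.2 mm at end-inspiration has a TF of 10%.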
Statistical analysis The baseline characteristics and measurements of participants are presented as the mean ± standard deviation. The Mann-Whitney test was used to compare differences between the sexes. Spearman's rank correlation test was used to evaluate the correlation between DT and other indices of sarcopenia. Linear regression analysis was used to evaluate associations between respiratory muscle strength, DT, and indices of sarcopenia. Multiple linear regression analysis with backward elimination was performed to identify factors predictive of respiratory muscle strength. Statistical significance was defined as a p-value of less than 0.05. Statistical analysis was performed using IBM SPSS Statistics ver. 19.0 (IBM Corp.). Baseline characteristics A total of 25 male and 25 female participants were recruited. Five participants with a history of lung disease (chronic obstructive pulmonary disease and asthma) were excluded. The baseline characteristics of the participants are shown in Table 1. The mean age was 76.76 ± 1.13 years for males (n = 21) and 76.42 ± 1.03 years for females (n = 24). Height, weight, HGS, CC, MEP, DT, and ASM/BMI were significantly different according to sex. Measures of gait speed, HGS, and CC were comparable to previously published normal ranges for age and sex [24]. DISCUSSION In this study we have demonstrated that indices of sarcopenia, DT, MIP, and MEP were intercorrelated with each other. In the univariate analysis, DTi, DTe, DT, HGS, CC, and gait speed were significantly associated with both MIP and MEP. Age was significantly associated with MIP, while sex and ASM/BMI were associated with MEP only. In the multivariate linear regression analysis, DT showed a significant association with both MIP and MEP. Age and ASM/BMI were significantly associated with MIP.
The diaphragm is a skeletal muscle, like the limb muscles. It is composed of roughly equal proportions of slow and fast fibers [25]. The main difference lies in structure, not composition. Compared to limb muscles, diaphragm fibers have a smaller cross-sectional area, allowing efficient oxygen supply and increased resistance to fatigue [25]. Because of this similarity in fiber composition, the diaphragm muscle may be affected in situations where skeletal muscle wasting occurs, such as sarcopenia. An animal study found that sarcopenic rats induced by genetic modification have thinner diaphragms and weaker respiratory muscle strength than normal rats [26]. In their study of 30 sarcopenic and 30 non-sarcopenic elderly patients aged over 65, Deniz et al. [9] reported that DT was significantly reduced in the sarcopenic compared to the non-sarcopenic elderly individuals. Similarly, in our study of healthy elderly people, indices of sarcopenia were significantly associated with DT. Our findings suggest that the diaphragm may be affected in sarcopenia. Respiratory muscle strength is closely related to diaphragmatic, abdominal, and intercostal muscle strength [21] and can be reduced in a range of diseases such as stroke [27], spinal cord injury [28], and neuromuscular disease [29]. Findings from previous studies suggest that respiratory muscle strength may also be reduced in sarcopenia. In a study of the healthy elderly by Shin et al. [30], skeletal muscle mass index was significantly associated with MIP. In a cross-sectional study by Ohara et al. [31], MEP and MIP were associated with sarcopenia indicators such as muscle mass, hand grip strength, and gait speed. Our study findings were similar to prior research, illustrating a correlation between respiratory muscle strength and indicators of sarcopenia.
The diaphragm is the primary inspiratory muscle, which contracts during inhalation and relaxes during exhalation. Its correlation with MIP is apparent, given its fundamental role, and has been reported previously [32]. However, the reason DT was associated with MEP is not readily discernible. One possible explanation is that expiration was enhanced by greater elastic recoil of the rib cage and lung after stronger inspiration. Another possibility is that DT was increased in subjects with already strong expiratory muscles. In a study by Souza et al. [33], elderly females who underwent inspiratory muscle strengthening training showed significant increases in MIP, MEP, and DT compared to the control group, indicating a positive correlation between respiratory muscle strength and DT. Our study demonstrated that DT is significantly associated with both inspiratory and expiratory muscle strength. In the multiple regression analysis, MIP was negatively correlated with age and ASM/BMI, and positively correlated with DT. An age-related decrease in MIP has been described before [34]. Possible explanations include age-related muscle atrophy and loss of fast-twitch fibers [34]. The association between MIP and skeletal muscle mass has also been reported in previous studies. In a study by Ro et al. [35], skeletal muscle mass was significantly associated with MIP in both young males and females. Similarly, Shin et al.
[30] also reported that skeletal muscle mass showed a significant correlation with MIP in the healthy elderly. Contrary to our results, in both studies the correlation was positive. However, those studies did not evaluate DT, a significant determinant of inspiratory strength. The reason skeletal muscle mass, which is a measure of limb muscle, was significantly negatively associated with MIP is unclear. Obesity may have been a contributing factor. In general, people who are obese tend to have larger muscle mass [36]. Obesity can affect respiratory function through mechanical factors and metabolic effects associated with a proinflammatory state [37]. However, measures of obesity and central obesity, such as waist circumference and lipid profile, were not evaluated. Skeletal muscle mass was measured using BIA, which may not be as accurate as dual-energy X-ray absorptiometry (DXA) [38]. If surrogate measures such as DXA, limb muscle ultrasound, or magnetic resonance imaging had been used, the results could have been different. Further studies are needed to validate this. MEP was also significantly correlated with DT but not ASM/BMI in the multiple regression analysis. The significant positive correlation between ASM/BMI and MEP in the univariate linear regression analysis was not observed in the multiple regression analysis. This may be because ASM/BMI primarily reflects limb muscle mass rather than the trunk muscles, which are more closely associated with expiratory muscle strength [21]. The results seem to indicate that ASM/BMI has a lesser role in predicting expiratory strength. Therefore, it may be necessary to measure DT separately as an additional indicator of respiratory sarcopenia [39].
There are some limitations to this study. First, this was a cross-sectional study of healthy elderly volunteers with a small sample size. Causal relationships cannot be confirmed, and caution is needed in generalizing the findings. Second, skeletal muscle mass was measured by BIA. BIA may overestimate skeletal muscle mass compared to DXA [38]. However, studies have shown that BIA is reliable in measuring muscle mass and strongly correlated with skeletal muscle measurement by DXA [40]. Third, TF of the diaphragm was measured only during quiet breathing and not during maximal breathing. TF during maximal breathing may correlate better with maximal respiratory pressures. Lastly, other possible confounding factors, such as the abdominal muscles involved in respiration, were not evaluated. This was the first study to demonstrate a correlation between respiratory muscle strength, DT, and skeletal muscle mass in the healthy Korean elderly. Sarcopenia patients may have decreased respiratory muscle strength associated with reduced DT. Therefore, assessment of respiratory muscle strength and DT may be needed in sarcopenia patients to prevent respiratory functional decline. Further studies are necessary to evaluate changes in DT in patients with sarcopenia and whether early interventions may help prevent pulmonary complications.
Table 2. Spearman's correlation analysis of indices of sarcopenia, respiratory muscle strength, and DT.
Table 3. Associations of age, sex, indices of sarcopenia, and DT with MIP and MEP by univariate linear regression analysis. (Abbreviations fragment: expiration; TF, thickening fraction; HGS, hand grip strength; CC, calf circumference.)
Table 4. Associations of age, sex, indices of sarcopenia, and DT with MIP and MEP by multiple linear regression analysis.
Adiponectin/Leptin Ratio as an Index to Determine Metabolic Risk in Patients after Kidney Transplantation Background and Objectives: It has been confirmed that the adiponectin/leptin (A/L) ratio correlates better with cardiometabolic risk factors than the hormone levels alone. The aim of our study was to determine the risk of developing post-transplant diabetes mellitus (PTDM) and other metabolic conditions depending on the A/L ratio after kidney transplantation (KT). Material and Methods: In a prospective analysis, the studied sample was divided into three groups: a control group, a prediabetes group, and a PTDM group. Pre-transplantation and at 3, 6 and 12 months after KT, we recorded the basic characteristics of donor and recipient. We also monitored levels of adipocytokines and calculated the A/L ratio. Results: During the observed period, we recorded a significant increase in the A/L ratio in the control group (p = 0.0013) and, on the contrary, a significant decrease in the PTDM group (p = 0.0003). Using a Cox regression hazard model, we identified age at the time of KT (HR 2.8226, p = 0.0225), triglycerides at 1 year (HR 3.5735, p = 0.0174) and an A/L ratio < 0.5 (HR 3.1724, p = 0.0114) as independent risk factors for prediabetes and PTDM 1 year post-transplant. Conclusions: This is the first study to evaluate the relationship between the A/L ratio and the risk of PTDM and associated metabolic states after KT. We found that an A/L ratio < 0.5 is an independent risk factor for prediabetes and PTDM 1 year post-transplant. Introduction Post-transplant diabetes mellitus (PTDM) represents a frequent metabolic complication in kidney transplant (KT) recipients and is a serious risk factor for patient and graft survival. Even impaired glucose tolerance seems to have as significant an impact on mortality after KT as PTDM itself [1,2]. According to available studies, PTDM or a prediabetic condition develops in more than one third of KT recipients who have not previously suffered from diabetes [3].
In addition to glucose metabolism disorders, weight gain is almost the rule after KT, with half of the patients suffering from central obesity. Increased appetite, improved perception of taste due to the disappearance of uremia, liberalization of dietary restrictions, and a sedentary lifestyle with poor overall physical condition are the main contributing factors [4]. Central obesity is associated with hypertriglyceridemia, adipocyte-driven cytokine release, and subclinical inflammation, all of which induce insulin resistance with a high risk of PTDM development [5]. Low levels of adiponectin, which can be observed in obese patients, are closely related to insulin resistance and significantly increase the risk of developing PTDM independently of sex, age and type of immunosuppression. On the contrary, the production of leptin increases in obese patients. In our previous study, we confirmed that its increased level was significantly associated with the development of PTDM after KT [6]. Previous data also show that leptin is an independent risk factor for diseases of the cardiovascular system [7]. Although adiponectin and leptin were independently associated with the development of metabolic syndrome (MS), type II diabetes mellitus (DM) and cardiovascular diseases, the adiponectin/leptin (A/L) ratio showed a stronger association with these pathological conditions than the individual hormones [8]. The A/L ratio can be considered a marker of adipose tissue dysfunction. Under the influence of dysfunctional adipose tissue, the number of cardiometabolic risk markers increases, which is manifested by a decrease in the A/L ratio [9]. A significant decrease was observed in patients with metabolic syndrome, and the decrease correlated with an increase in the number of risk factors for MS, on the basis of which it can be considered a predictive marker of MS [9][10][11][12].
For the above reasons, the A/L ratio can serve as a practical marker characterizing adipose tissue dysfunction and can identify persons at increased risk of cardiometabolic diseases [13]. Results from the general population show that an A/L ratio > 1 can be considered normal, an A/L ratio of 0.5-1 indicates a moderate increase, and an A/L ratio < 0.5 a strong increase in cardiometabolic risk [14]. The aim of our study was to determine the risk of developing PTDM, prediabetic conditions and other metabolic risk factors depending on the A/L ratio in the first year after KT. Material and Methods In our prospective study, we monitored patients actively enrolled on the waiting list for a primary KT at the Martin Transplant Center who underwent KT during the observation period. Patients who already had confirmed type I or type II DM were not included in the follow-up. Patients with infectious complications, patients who did not undergo protocol biopsy (screening hospitalization) and those who died during the study were also excluded. In the third month of follow-up, screening of adipocytokine levels was performed during a short diagnostic hospitalization (graft protocol biopsy); therefore, patients who did not undergo a protocol biopsy (poor anatomical conditions, infectious complications) were not included in the follow-up, as this would bias the results at 3 months. We divided the studied sample of patients into three groups: 1. a control group; 2. patients who developed prediabetes after KT (fasting hyperglycemia, impaired glucose tolerance); and 3. patients who developed de novo PTDM after KT. In KT recipients, initial serum levels of leptin, adiponectin, and interleukins 6 and 10 were measured at the time of flow cytometry crossmatch (FCXM), i.e., approximately 4 to 5 h before the procedure, and then at 3, 6 and 12 months after KT. The levels of adipocytokines and interleukins were evaluated using the ELISA method (Biomedica kits).
The A/L ratio was calculated from the measured values at each time point. Based on the results of a previous study by Frühbeck et al. in the general population, we consider an A/L ratio above 1.0 normal, a ratio of 0.5-1.0 to indicate medium metabolic risk, and a ratio < 0.5 high metabolic risk [14]. All participants received the same immunosuppressive protocol: for induction, antithymocyte immunoglobulin at a cumulative dose of 3.5 mg/kg of body weight; for maintenance, tacrolimus and mycophenolic acid at standard dosages. Regarding corticosteroids, methylprednisolone was administered at a dose of 500 mg intravenously before transplantation and on the first day after KT, followed by a switch to oral prednisone. At the time of KT, we recorded in all patients the basic characteristics of the donor (extended-criteria donor, cold ischemia time) and of the recipient (age, sex, length of dialysis treatment, underlying cause of kidney failure, delayed graft function, panel-reactive antibodies, number of HLA mismatches at the A, B, DR and DQ loci). At prescribed intervals after KT, we monitored risk factors for PTDM such as waist circumference, body mass index (BMI), C-peptide and immunoreactive insulin levels, lipid profile (total cholesterol, low-density (LDL) and high-density (HDL) cholesterol, triglycerides), vitamin D, tacrolimus level, and parameters reflecting graft function such as the glomerular filtration rate determined using the CKD-EPI (Chronic Kidney Disease-Epidemiology Collaboration) formula and quantitative proteinuria from 24 h collected urine.
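The risk bands just described (the Frühbeck et al. cutoffs [14]) amount to a simple classification rule. The following is a minimal sketch; the function name is ours, and we assume both hormone levels are supplied in the same units:

```python
def al_ratio_risk(adiponectin, leptin):
    """Classify cardiometabolic risk from the adiponectin/leptin (A/L) ratio.

    Cutoffs follow Fruhbeck et al.: > 1.0 normal, 0.5-1.0 medium risk,
    < 0.5 high risk. Both hormone levels must be in the same units.
    """
    ratio = adiponectin / leptin
    if ratio > 1.0:
        return ratio, "normal"
    if ratio >= 0.5:
        return ratio, "medium risk"
    return ratio, "high risk"

# Example: adiponectin 12, leptin 30 (same units) gives a ratio of 0.4,
# which falls in the high-risk band.
ratio, band = al_ratio_risk(12.0, 30.0)
```

Note that the boundary values 0.5 and 1.0 are assigned to the medium-risk band here; the source does not state how boundary values were handled.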
For diagnosing PTDM and prediabetic conditions, we used the current criteria of the American Diabetes Association (ADA): fasting blood glucose > 126 mg/dL (7 mmol/L) on more than one occasion, random blood glucose > 200 mg/dL (11.1 mmol/L) with symptoms, or blood glucose two hours after administration of 75 g of glucose in the oral glucose tolerance test (oGTT) > 200 mg/dL (11.1 mmol/L). The total length of follow-up was one year. We used a certified statistical program, MedCalc version 13.1.2 (MedCalc Software, VAT registration number BE 0809 344 640, Member of the International Association for Statistical Computing, Ostend, Belgium). Comparisons of continuous variables between groups were carried out using parametric (t-test) or non-parametric (Mann-Whitney) tests; associations between categorical variables were analyzed using the χ2 test and Fisher's exact test, as appropriate. A Cox proportional hazards regression model was used for multivariate analysis of independent risk factors for PTDM one year after KT. A p-value < 0.05 was considered statistically significant.

Ethical Approval

All procedures involving human participants were approved according to the ethical standards of the institutional and/or national research committee, including the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent for included participants was checked and approved by the ethical committees of the University Hospital and the Jessenius Faculty of Medicine (EK 33/2018), and all signed informed consents have been archived for at least 20 years after research completion. The clinical and research activities being reported are consistent with the Principles of the Declaration of Istanbul as outlined in the Declaration of Istanbul on Organ Trafficking and Transplant Tourism.

Results

A total of 170 patients after primary deceased donor KT were included in the study.
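For reference, the ADA thresholds listed in the Methods can be expressed as a single check. This is a sketch only; the function name and parameter layout are ours, and glucose values are taken in mmol/L:

```python
def meets_ada_criteria(fasting_confirmed=None, random_glucose=None,
                       symptomatic=False, ogtt_2h=None):
    """Return True if any ADA criterion for diabetes is met (mmol/L).

    - fasting glucose > 7.0 mmol/L, confirmed on more than one occasion
      (the caller passes the confirmed value, or None),
    - random glucose > 11.1 mmol/L together with symptoms,
    - 2-h glucose after a 75 g oGTT > 11.1 mmol/L.
    """
    if fasting_confirmed is not None and fasting_confirmed > 7.0:
        return True
    if random_glucose is not None and symptomatic and random_glucose > 11.1:
        return True
    if ogtt_2h is not None and ogtt_2h > 11.1:
        return True
    return False
```

A random glucose above 11.1 mmol/L without symptoms does not, on its own, satisfy any of the three criteria as stated.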
A total of 28 patients were excluded for known type I or II DM; subsequently, during follow-up, another 38 patients were excluded for other reasons (infectious complications, death, missing protocol biopsy). Thus, 104 patients were selected for prospective follow-up (Figure 1). The level of tacrolimus was maintained in the range of 3.0 to 6.0 ng/L, and during the monitored period we did not observe differences in its level between the studied groups. Likewise, there was no significant difference in the daily dose of prednisone. The control group consisted of 40 patients; during the monitored period, we identified a prediabetic condition in 42 patients and PTDM in 22 patients. In the individual groups, we determined the average A/L ratio at defined time intervals and compared it between them (Table 1). The A/L ratio was significantly lower in the PTDM group than in the control group throughout the observation period, significantly lower in the prediabetes group than in the control group at baseline and at the 3rd and 12th months of follow-up, and lower in the PTDM group than in the prediabetes group at 6 and 12 months of follow-up. Figure 2 shows the development of the A/L ratio in all three groups over the entire monitored period. During the 12 months of follow-up, the A/L ratio increased statistically significantly in the control group (p = 0.0013); conversely, it decreased significantly in the group of patients who developed PTDM (p = 0.0003). In the prediabetes group, the A/L ratio also tended to decrease, but this decrease did not reach statistical significance. Figure 3 illustrates the change in the distribution of the individual groups according to the A/L ratio over the course of 12 months. In the control group, there was a significant decrease in the number of patients with an A/L ratio in the medium-risk range.
In the prediabetes and PTDM groups, the changes were not statistically significant, but in both there was a decrease in the proportion with a normal A/L ratio and an increase in the proportion with a high-risk A/L ratio. Patients were then divided into groups according to the A/L ratio at 1 year after KT and compared with each other in terms of the other recorded characteristics (Table 2). We found that patients with an A/L ratio < 0.5, compared with those with an A/L ratio > 1, had spent significantly longer in the dialysis program and had a higher BMI, waist circumference, insulin and triacylglycerol levels, worse graft function, and a higher prevalence of prediabetes and PTDM at 1 year after KT. In comparison with the medium-risk group (A/L ratio 0.5-1), we did not observe significant differences in the incidence of prediabetes and PTDM; compared to the normal A/L ratio group, patients in the medium-risk group were significantly older and had a higher BMI, waist circumference and cholesterol. Comparing the two risk groups, patients with an A/L ratio < 0.5 had spent significantly longer in the dialysis program, had higher triacylglycerol levels, lower HDL cholesterol and vitamin D levels, and were younger. Regarding interleukin levels, patients at high risk (A/L ratio < 0.5) had a significantly lower level of protective IL-10 than the other two groups, while the group with an A/L ratio > 1 showed significantly lower levels of pro-inflammatory IL-6 at 1 year after KT than the other two groups. Using the Cox proportional hazards regression model, we identified age at the time of KT ≥ 50 years (HR 2.8226, p = 0.0225), triglyceride level > 1.7 (HR 3.5735, p = 0.0174) and A/L ratio < 0.5 at 1 year after KT (HR 3.1724, p = 0.0114) as independent risk factors for the development of prediabetes and PTDM 1 year after KT (Table 3).
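For illustration, the hazard ratios reported above can be related back to the underlying Cox regression coefficients, since in a proportional hazards model HR = exp(beta). The combined figure below is our sketch and assumes the three effects enter the model without interaction terms:

```python
import math

# Hazard ratios from the multivariate Cox model (Table 3).
reported_hr = {
    "age >= 50 years at KT": 2.8226,
    "triglycerides > 1.7": 3.5735,
    "A/L ratio < 0.5 at 1 year": 3.1724,
}

# The Cox coefficient behind each hazard ratio is its natural log.
betas = {name: math.log(hr) for name, hr in reported_hr.items()}

# Under the model (and with no interaction terms), a patient carrying
# all three risk factors has a hazard ratio equal to the product of
# the individual ratios, i.e. exp(sum of the coefficients): roughly 32.
combined_hr = math.exp(sum(betas.values()))
```

This multiplicative reading is what makes the individual hazard ratios directly interpretable as independent risk factors.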
At the same time, we found that the pre-transplant A/L ratio was negatively correlated with BMI (p = 0.0013), and the A/L ratio 1 year after KT was negatively correlated with BMI (p = 0.0111), waist circumference (p = 0.0108) and triglyceride level (p = 0.0261) 1 year after KT. The probability of developing PTDM and prediabetic conditions at 1 year after KT decreases significantly with an increasing A/L ratio (Figure 4).

Discussion

To our knowledge, this is the first study to investigate the A/L ratio in patients after KT in the context of the risk of developing metabolic complications and cardiometabolic risk factors. We found that an A/L ratio < 0.5 represents an independent risk factor for the development of PTDM and prediabetic conditions in this group of patients 1 year after KT, and that a higher A/L ratio both before transplantation and 1 year after KT statistically significantly reduced the probability of developing PTDM and prediabetes by the end of the study period. Monitoring adipose tissue hormones in association with cardiometabolic risk factors has only a short history in the transplant population, and until now these hormones have been used as separate variables. In previous studies, we confirmed that hyperleptinemia is an independent risk factor for the development of PTDM, and that a low level of adiponectin was associated with insulin resistance and obesity [15]. Frühbeck et al., in a 2019 cross-sectional study of 292 patients, evaluated the A/L ratio as a predictor of dysfunctional adipose tissue. Patients were divided into the same groups according to the A/L value (<0.5, 0.5-1, >1). In the group with high cardiometabolic risk, the authors identified significantly more patients with obesity, type II DM and MS.
At the same time, however, they noted that the A/L ratio correlated more strongly with anthropometric parameters, such as BMI, waist circumference and body fat stores, than with parameters of metabolism and inflammation [16]. Inoue et al.
confirmed as early as 2005 that the A/L ratio correlates with insulin resistance, even better than the Homeostatic Model Assessment (HOMA) index [17]. In our study, a high-risk A/L ratio (<0.5) was significantly associated with the incidence of prediabetes and PTDM. Since the probability of developing PTDM decreases significantly as the A/L ratio increases, the evolution of the ratio in the post-transplantation period is extremely important. We assume that over time after a successful KT, as oxidative stress and systemic inflammation subside, immunosuppressive drug doses are reduced, and physical activity is initiated and increased, conditions are created for an increase in the A/L ratio. As in the above-mentioned study, we also noted a significantly higher proportion of obese patients, as expressed by BMI and waist circumference, in the group with an A/L ratio < 0.5 than in the group with an A/L ratio > 1. Both of these indicators were negatively correlated with the A/L ratio, which agrees with results in the general population [16,18]. Frühbeck et al. confirmed in a 2017 study that a low A/L ratio is an indicator of dysfunctional adipose tissue, and that these patients show higher cardiometabolic risk as a result of increased systemic inflammation and oxidative stress. The authors found that the levels of proinflammatory markers produced by adipose tissue, such as serum amyloid A (SAA) and C-reactive protein (CRP), correlated strongly with the A/L ratio: increased serum concentrations of these markers were associated with a lower A/L ratio. Based on these findings, they hypothesized that the A/L ratio may reflect the presence of systemic inflammation caused by adipose tissue dysfunction. Proinflammatory factors released by dysfunctional adipose tissue are thus mediators in the etiopathogenesis of MS [10]. For this reason, we also included interleukin levels in our follow-up.
As one of the main pro-inflammatory cytokines, IL-6 is heavily involved in systemic inflammation and thus in the development of insulin resistance and DM [19]. IL-10, with its anti-inflammatory effects, has been found at lower concentrations in patients with type II DM [20]. In our sample, the risk group with an A/L ratio < 0.5 showed significantly lower levels of IL-10, while, conversely, patients with a normal A/L ratio had significantly lower blood concentrations of IL-6. Both hormones, leptin and adiponectin, also contribute to the development of lipid metabolism disorders and represent a risk for fat-induced dyslipidemia [21]. We mainly identified differences in the concentrations of triacylglycerols, whose level was significantly higher in the high-metabolic-risk group (A/L < 0.5) than in the groups with a normal and a medium-risk A/L ratio; at the same time, patients in this group showed low HDL cholesterol levels. This finding agrees with the conclusions of previous studies, which confirmed a negative correlation between the A/L ratio and triacylglycerol levels [9,17]. Senkus et al., on the contrary, detected a significant correlation of the A/L ratio only with the level of HDL cholesterol [13]. Elevated HDL cholesterol is thought to increase the secretion of adiponectin from adipose tissue, thereby promoting its anti-inflammatory and insulin-sensitizing effects [21]. In our study, the triacylglycerol level at 1 year after KT was an independent risk factor for the development of prediabetes and PTDM, a relationship that has been confirmed previously [22]. An interesting finding was that the monitored patients with an A/L ratio in the high- and moderate-risk ranges had spent a significantly longer time in the chronic hemodialysis program than the group with a normal A/L ratio.
We consider the cause to be chronic inflammation and oxidative stress, which in long-term dialysis patients represent one of the basic aspects of their cardiovascular morbidity and mortality [23]. Limitations of our study include the size of the investigated patient sample, the total length of follow-up and the inclusion of patients from only one transplant center (monocentric study). An additional limiting factor may be the lack of previous studies on this topic.

Conclusions

This is the first study to evaluate the relationship between the A/L ratio and the risk of PTDM and associated metabolic states in patients after KT. We found that an A/L ratio < 0.5 is an independent risk factor for the development of prediabetes and PTDM 1 year after KT. The A/L ratio can be considered an indicator of adipose tissue dysfunction. The A/L ratio ranges used here can be a useful indicator of metabolic status and of the risk of cardiometabolic complications in the post-transplantation period, and could also be extrapolated to the general population. Further studies will be needed to confirm our findings.

Funding: This study was supported by grant VEGA-1/0238/21: Continual glucose monitoring and glycemic variability in the early post-transplantation period as a predictor of complications after kidney transplantation.

Data Availability Statement: The data that support the findings of this study are available from the first author upon reasonable request.

Conflicts of Interest: The authors declare no conflict of interest.
Brain Wave Frequency Measurement in Gamma Wave Range for Accurate and Early Detection of Depression

Introduction

According to the World Health Organization factsheet published in April 2016, one suicide is committed every 40 seconds, which averages 2,160 suicides per day and over 800,000 suicides every year. Moreover, suicide is the third leading cause of death in the world for those aged 15-44 years. Research has consistently shown a strong link between suicide and a mental illness called depression, with 90% of the people who die by suicide having an existing mental illness [1]. It is depression that causes people to commit suicide 95% of the time. We tend to lose about 3% of our population each year to depression. This establishes a thorough background of how severe this disease is and how it is taking our own species away from us. We are losing our workforce and human resources to a mental illness. Often called the "cold" of the brain, depression, like any other disease, has its own symptoms, causes, diagnosis, treatments, and complications. We are well aware of the treatments, yet we lose 2,160 humans
to suicide every day. One of the major causes of this is late diagnosis. Depression turns severe over time: minor depression turns into psychotic depression. People often overlook the symptoms of minor depression, considering their feelings to be a mere sign of slight sadness. Sometimes this feeling of "slight sadness" persists over a long duration of time and turns into a major depression. The earlier depression is diagnosed, the easier it is to treat effectively. This work focuses on the same: the timely and accurate diagnosis of depression. The project establishes a relation between brainwaves and depression, and introduces the idea of a device which uses the concept of brainwaves to diagnose depression. The work of this project is divided into two main themes: one is the correlation of brainwaves with depression; the other is the proposal of a device based on brainwaves which can be used to diagnose it. It is important to know some basic concepts before beginning with the core concepts of the project. Sections 2 and 3 thoroughly explain the various concepts, both basic and core, and help in the easy and proper understanding of the topic.

*Corresponding Author, E-mail address: kumari.naresh01@gmail.com. All rights reserved: http://www.ijari.org

This project is inspired by the recent rise in teenage suicides among high school students. Many online forums have shown teenagers venting out their feelings as being misunderstood in school and feeling low, since they feel as if their sadness doesn't matter. Due to high school stress and various other factors, the majority of teenagers get depressed; some come out of
the disease with the help of their parents or the school counselors, but others are unable to do so and their depression prolongs into their adult life. The device which this project suggests can be used to solve this problem. Among students in Grades 9-12 in 2013, 17.03% seriously considered attempting suicide.

State of Depression

To begin with, depression, as a word in general, is synonymous with "low". Geographically it means a "low"-lying area, financially it means a "low" stock of monetary funds, and medically it means the "low" of the mind. Sadness, grief and guilt are all normal human emotions. One experiences these feelings from time to time; they usually go away within a single day or a short span of days. But depression is something more: it is a period of overwhelming sadness. Depression, as defined by the American Psychiatric Association on their official webpage www.psychiatry.org, is: "Depression (Major Depressive Disorder) is a common and serious medical illness that negatively affects how one feels, thinks and acts, and interferes with the everyday life of a person for weeks or more." Untreated depression can cause complications which further put the individual's life at risk. Studies show that teenagers are more prone to depression than any other age group; this is due to the hormonal changes that occur in the body during puberty [2]. When it comes to gender, studies show that women are more likely to be diagnosed with depression than men.

Symptoms and Statistics of Depression

Often called the "cold" of the brain, depression, like any other disease, has its own symptoms. Feelings of sadness or emptiness that don't go away within a few weeks may be a sign of depression. Some other emotional symptoms may include: (i) extreme irritability over minor things. Although the presence of these symptoms is used as a tool in the diagnosis of depression, they sometimes might be misleading, as they might be only temporarily present, i.e.
when the person is feeling low only for that certain period of time [4]. Yet males take their own lives at nearly 4 times the rate of females and represent 77.9% of all suicides. According to the World Health Organization, 350 million people suffer from depression worldwide; this is about 5% of the human population. It is depression that causes people to commit suicide 95% of the time. The percentage of people who experience depression in various countries: Germany 9.9%, Israel 10.2%, United States 19.2%, France 21%, and India 9%.

4. Waves and Brain Activity

All our thoughts, emotions and even behaviors are caused by the communication between neurons in our brains. The brain is made up of millions of neurons. These neurons use electric impulses to communicate with each other. This activity results in the formation of waves in the brain; these waves are, logically, called brainwaves. Brainwaves are produced by synchronized electrical pulses from masses of neurons communicating with each other [7,8]. It is important to know that all living Homo sapiens display five different types of electrical patterns or "brainwaves" across the cortex, i.e.
the brainwaves from the brain can be divided into five categories [3]. Each of these five types of brainwaves has a purpose and helps to serve us in optimal mental functioning. Our brain's capability to transition through various brainwave frequencies plays an important role in determining our stress management, concentration, and even our sleep quality. If even one of the five types of brainwaves is overproduced or underproduced in our brain, it can cause problems. These five types of brainwaves are categorized into groups on the basis of their frequencies (ranging from 0 Hz to 300 Hz). It is important to know that all five types of brainwaves are being produced all the time; what distinguishes "Wave A" from "Wave B" is the difference in frequency. The five brainwaves in decreasing order of frequency are: Gamma, Beta, Alpha, Theta and Delta. Even though all five types are produced throughout the day, one particular brainwave will be dominant over the other four depending upon the state of mind and consciousness. Table 1 thoroughly explains the emission cause, range, purpose, and the effects of various amounts of each of the waves. We observe that depression is witnessed when a person's brainwaves exist in the lower region of the gamma wave band, i.e. around 25 Hz. Hence, to diagnose depression we only need to focus on the gamma waves; therefore a deeper understanding of the gamma waves is important for a thorough understanding of the project as a whole.
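The five bands just described can be captured in a small classifier. This is a sketch: the band edges below are the approximate values commonly cited, exact boundaries vary between sources, and the lower edge of gamma is set to 25 Hz as in the text:

```python
def classify_brainwave(freq_hz):
    """Map a dominant EEG frequency (Hz) to its brainwave band.

    Approximate edges, with gamma starting at 25 Hz as in the text;
    published sources differ slightly on the exact boundaries.
    """
    if freq_hz < 4:
        return "delta"
    if freq_hz < 8:
        return "theta"
    if freq_hz < 12:
        return "alpha"
    if freq_hz < 25:
        return "beta"
    return "gamma"

# A 25 Hz dominant frequency lands at the bottom of the gamma band,
# the region the text associates with depression.
```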
1 Brain Function and Gamma Waves

One must not confuse gamma waves with gamma rays. Gamma rays are the EM waves emitted from fusion, fission, alpha decay or gamma decay of the atomic nucleus; they are produced in the sun, to cite an example, and have the highest frequency in the electromagnetic spectrum. Gamma brain waves are a frequency pattern of brain activity that measures between 25 and 100 Hz, with around 40 Hz being typical in humans. The similarity between gamma rays and gamma waves is that their frequency is the highest in the electromagnetic spectrum and the brainwave spectrum respectively, accompanied by a low amplitude. The gamma wave originates in the thalamus and moves from the back of the brain to the front and back again 40 times per second; not only that, but the entire brain is influenced by the gamma wave. Everyone has gamma brainwave activity, but the amount of gamma waves produced varies. Low amounts of gamma brainwave activity have been linked to depression, as Table 1 suggests, and high amounts of gamma waves are related to peace. According to neuroscientists, higher gamma activity is linked to higher focus. People with high gamma activity are naturally happier, calmer and more at peace; this is nature's best anti-depressant. Being the highest of all the brainwaves, increased production of gamma brainwaves can help boost your energy levels, enabling peak performance, physically and mentally. In depression, a person's focus is majorly affected: they are not able to concentrate easily due to the disease. If the gamma activity is very low, the focus of the person will be very low, thus indicating depression. Since more gamma activity means a happier person, the lack of gamma activity would mean a lack of happiness; it would directly mean that the "anti-depressant" is not working the way it was supposed to, thus indicating depression.

Table 1: Brain wave frequency ranges and the symptoms of patients

A depressed person, usually, is seen as a bit more lethargic compared to an average mentally healthy person in the same circumstances. This means that it would be right to conclude that a person having very low energy levels, showing the symptoms of frequent tiredness or extreme laziness, would probably be in a depressed state. So we can conclude, from the neuroscience research as well as from the data provided above, that if we wish to diagnose depression we have to detect a brainwave which lies in the interval of 20 to 30 Hz, since that relates to the lower range of the gamma waves, the range where depression is diagnosed. The various
sections of the human brain are shown in Fig. 2.

Diagnosis of Depression

In today's time, the trend of going to WebMD and taking a depression quiz is up for the diagnosis of depression, and in the psychiatrist's office it is more or less the "questionnaire" method of diagnosis [6]. One of the major drawbacks of such a method is that it is not the most reliable form: it does not analyze the person scientifically for depression, but takes their current mood into perspective and attempts to diagnose the disease. The depression questionnaires usually have questions such as "in the past few months have you been-" or "has the past week been difficult", etc. These questions refer to the past, but the answers are dependent on the person's current mood. If a person is feeling low while attempting the questionnaire, they tend to project their past as worse than it was; if they are feeling happy, they might say that everything was okay in the past. There is a problem with both these scenarios: the person's current mood serves as an obstacle to the accurate diagnosis of depression [10]. The brainwaves, however, do not do this: a depressed person will show low gamma wave activity no matter what their current mood is.

Proposed Device for Detection of Depression

The development of a device which detects only the frequencies ranging from 20 Hz to 30 Hz would be exactly fitting for the accurate and timely detection of depression. The first part of the project was correlating brainwaves with depression; that part has already been explained in the above sections. The following section explains the second part of the project, i.e.
the development of the device. If the device detects frequencies ranging from 20 Hz to 30 Hz, we will be able to say that it is a tool which helps in the diagnosis of one of the deadliest illnesses: depression. The device indicates the presence of a brainwave with a frequency in the 20 Hz to 30 Hz range only. If a frequency of, for example, 45 Hz is presented to the device, the device will show no activity. The presence of a brainwave in the 20-30 Hz interval can be indicated by lighting a bulb whenever such a wave is detected. Perhaps one of the most important concepts here is that a brainwave is not an ordinary wave. The term can be misleading, as it suggests a system of waves originating in the brain and travelling elsewhere. This, however, is not the case. What happens in the brain is that millions of neurons fire electrical impulses over extremely short time spans. What we call a wave is the graphical representation of these electrical impulses plotted along the time axis. So when we say that a wave of 22 Hz is being produced by the brain, we mean that in 1 second, 22 electrical impulses travel into the electrodes attached to the scalp.
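The impulses-per-second definition above can be put into code. Below is a minimal sketch (all names are illustrative, not from the project): estimating a frequency from timestamped impulses and mapping it to the conventional brainwave bands, whose edges are assumptions drawn from common EEG conventions rather than values stated in this paper.

```python
# Minimal sketch of the impulses-per-second definition given above.
# Band edges follow common EEG conventions and are assumptions,
# not measurements from this project.

def impulse_frequency(timestamps_s, window_s=1.0):
    """Count impulses inside the first `window_s` seconds -> Hz."""
    count = sum(1 for t in timestamps_s if 0.0 <= t < window_s)
    return count / window_s

def classify_band(freq_hz):
    """Map a frequency in Hz to its conventional brainwave band."""
    if freq_hz < 4:
        return "delta"
    if freq_hz < 8:
        return "theta"
    if freq_hz < 13:
        return "alpha"
    if freq_hz < 25:
        return "beta"
    return "gamma"

# 22 impulses spread over one second correspond to a 22 Hz wave,
# which lies inside the project's 20-30 Hz screening interval.
ticks = [i / 22 for i in range(22)]
```

Here `impulse_frequency(ticks)` evaluates to 22.0, and 22 Hz falls within the 20 to 30 Hz interval that the project associates with depression.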
Working of the Proposed Depression Detection Device

About 20 small electrodes are attached to the head with washable glue. An electrode is a conductor that carries current; it can be used in diagnostic testing to receive and record the electrical activity of nerves. The device contains two parts: a transmitter and a receiver. The transmitter is connected to the electrodes attached to the scalp, whereas the receiver stands alone, connected wirelessly. At any given time, the brain is sending electrical impulses into the electrodes. The electrodes act as mediators, passing these electrical impulses to the transmitter. So the electrical impulses originate in the brain and travel all the way to the transmitter through the electrodes. The transmitter captures each impulse, encodes it, and sends it wirelessly to the receiver using a carrier frequency. It sends the impulse as soon as the electrode supplies it. Many such impulses are sent over a period of time and reach the receiver in wave form. The receiver decodes the impulses and determines whether their frequency lies within the desirable range of 20 Hz to 30 Hz. If the frequency does fall within the desirable range, a bulb attached to the receiver starts glowing; if it is outside the range, the bulb shows no activity.

This device can also be used in schools to analyse students who are lagging behind in academics. It has been noted that a person suffering from depression loses intellectual ability, which in turn affects their studies. Teachers usually take a student's poor performance as laziness, but it can be much more: it could be an indication of depression. If this device were introduced in schools, we would be able to diagnose students with depression and treat them accordingly for better performance, instead of labelling them lazy.
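The receiver logic described above (decode the impulses, estimate their frequency, and drive the indicator bulb only for 20-30 Hz) can be sketched as follows; the `Receiver` class and `bulb_on` flag are hypothetical names for illustration, not a real device API.

```python
# Sketch of the receiver stage: given decoded impulse timestamps
# (seconds), estimate the wave frequency and set the indicator bulb
# only when the frequency falls in the 20-30 Hz interval.
# All names here are illustrative, not from an actual device.

class Receiver:
    LOW_HZ, HIGH_HZ = 20.0, 30.0

    def __init__(self):
        self.bulb_on = False

    def process(self, impulse_times_s):
        """Estimate frequency over the recorded span; update the bulb."""
        if len(impulse_times_s) < 2:
            self.bulb_on = False
            return 0.0
        span = impulse_times_s[-1] - impulse_times_s[0]
        # n timestamps bound n-1 inter-impulse intervals.
        freq = (len(impulse_times_s) - 1) / span
        self.bulb_on = self.LOW_HZ <= freq <= self.HIGH_HZ
        return freq
```

Feeding 26 impulse timestamps spread evenly over one second yields an estimate of 25 Hz and turns the bulb on; 46 timestamps over the same second yield 45 Hz and leave it off, matching the behaviour described in the text.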
Treatment Methods of Depression

The cure for depression, although not a technical part of the project, is one of its most necessary parts and is explained in this section. Some of the treatment methods are:

(i) Anti-depressant medication, which is available in the market. These drugs usually have some side effects, but they are effective in curing depression. Antidepressants primarily act on brain chemicals known as neurotransmitters, such as serotonin, norepinephrine, and dopamine, which are involved in regulating mood.

(ii) Regular exercise. Exercise enhances the action of endorphins, which improve natural immunity and reduce the perception of pain.

(iii) Listening to alpha waves regularly, which recent studies have found to reduce depression. Alpha wave recordings are openly available on the internet and can also be used for purposes such as boosting creativity.

(iv) Psychiatric help. Therapy sessions as well as counseling are widely available. Depression, even in the most severe cases, can be effectively treated with professional help. Up to 80% of those treated for depression show an improvement in their symptoms, generally within 4 to 6 weeks of beginning professional treatment, and the earlier a treatment begins, the more effective it is. A depressed person might be reluctant to seek help at first, held back by the stigma associated with depression in certain societies. Society, too, must rise above these illogical stigmas and help those in need. Patients often feel that help is of no use, but that is not true: therapy sessions, behavioral therapy, medication, and exercise have all been scientifically shown to improve a depressed person's state of mind. No matter how hopeless a person might feel, one thing they must never forget is that there is always help.
Conclusions

There is a tremendous rise in cases of depression worldwide, and we are losing our efficient workforce and human resources to this mental illness. Often called the "cold" of the brain, depression, like any other disease, has its own symptoms, causes, diagnosis, treatments, and complications. We are well aware of the treatments, yet they are sometimes ineffective. One of the major reasons for the failure of treatment is the late diagnosis of depression: the earlier depression is diagnosed, the easier and more effective the treatment is. Late diagnosis usually happens when people are unsure of their feelings and dismiss their sadness as simply a bummer. Currently, questionnaires are often used for the diagnosis of depression, which is not a very reliable method. The method proposed in this project is instead based on brainwaves.

The millions of neurons in our brain send electric currents over short spans of time. This activity results in the formation of waves, namely delta, theta, alpha, beta, and gamma waves. Considering symptoms of depression such as reduced focus, lethargy, and the lack of a natural anti-depressant, this project concludes that lower gamma wave activity in the brain indicates depression. Gamma waves range from 25 Hz to 100 Hz; lower gamma activity corresponds to brainwaves emitted in the range from 20 Hz to 30 Hz. A proposal for a device is also laid down in the project. The device uses electrodes, a transmitter, and a receiver to detect whether the impulses sent by the brain are in the 20 Hz to 30 Hz range, indicating depression accordingly. When the detected frequency falls within this desirable range, a bulb attached to the receiver starts glowing; when it is outside the range, the receiver's bulb shows no activity.

(ii) Anxiety and restlessness. (iii) Anger management issues. (iv) Loss of interest in favourite activities. (v) Fixation on the past or on things that have gone wrong. (vi) Thoughts of death and suicide.

Fig. 1: Waveform diagrams for various frequency bands
Fig. 3: Block diagram of the proposed device
Consumer Perception and the Evaluation to Adopt Augmented Reality in Furniture Retail Mobile Application

The importance for retailers of utilizing interactive technology, such as Augmented Reality (AR), in their mobile applications has grown as consumer behavior shifts from in-store to online shopping. However, there are limited studies examining consumer perception to evaluate the effectiveness of AR implemented by retailers during the COVID-19 pandemic in developing countries like Indonesia. The research examines the relationship between AR characteristics, consumer perception, and attitude toward AR in mobile furniture retail applications. The intention to adopt is also included to measure behavioral responses. Using 383 valid responses, the researchers empirically test the insights through Partial Least Square-Structural Equation Modelling (PLS-SEM). The results reveal that AR characteristics have a significant influence on consumer perception. In addition, perceived functional benefit and trust in AR relate directly to attitude toward AR and indirectly to the intention to adopt AR applications. Thus, the research provides managerial implications for retailers adopting AR technology as interactive media to enhance the customer experience during online shopping, both during and after the pandemic. It is also expected to inform government regulation of the digital infrastructure needed to support AR implementation in industry and of users' data privacy. In addition, the research contributes to theoretical development in AR adoption, interactive marketing, and consumer behavior.

INTRODUCTION

There has been an increase in the utilization of technological advancement in many countries as people practice physical distancing after the World Health Organization (WHO) declared COVID-19 a global pandemic in March 2020. Particularly in developing countries such as Indonesia, most people stay at home.
They do their activities virtually, since the central government has enforced social restrictions to reduce the spread of COVID-19 infection (BPS, 2020). Virtual activity during the COVID-19 pandemic has also changed consumer shopping behavior from conventional to online. According to the Central Bureau of Statistics (BPS, 2020), 72% of consumers shopped online during the pandemic, and around 31% of them experienced an increase in shopping activity. The growth of online shopping in Indonesia during the pandemic has reinforced consumers' consistency in shopping online. Moreover, a survey by McKinsey reveals that 60% of respondents plan to continue their online purchase activity after the COVID-19 pandemic (Potia & Praseco, 2020). Although a high percentage of people shop online, it is a challenge for digital retailers to provide a shopping experience and journey as satisfying as in-store shopping. The utilization of technology has given retailers various options to present their products virtually and increase customer experience (Kowalczuk, Siepmann, & Adler, 2021). For example, in furniture retail, IKEA has launched IKEA Place, which adopts Augmented Reality (AR) to attract digital shoppers by letting them create their own experience in a mobile application. As one of the most promising options in recent years, AR is defined as a technology (on mobile phones or supported devices) that allows a 3D virtual object to be presented in the real-world environment (Fan, Chai, Deng, & Dong, 2020; Yavuz, Çorbacıoğlu, Başoğlu, Daim, & Shaygan, 2021). With the AR adopted in IKEA Place, consumers can modify the 3D furniture by zooming in (or out), moving it to a certain location, and rotating it to the appropriate position to find a suitable product during selection (Fan et al., 2020). Therefore, they can make sure that the furniture fits and functions well in their room.
Previous researchers have attempted to examine the level of consumer adoption and intention to use AR in retail by incorporating perception, augmentation, consumer emotion, evaluation, and other psychological predictors (Barhorst, McLean, Shah, & Mack, 2021; Jung, Park, Moon, & Lee, 2021; Kowalczuk et al., 2021; Nikhashemi, Knight, Nusair, & Liat, 2021; Park & Yoo, 2020; Qin, Peak, & Prybutok, 2021; Yim, Chu, & Sauer, 2017). However, the implication of AR for consumer perception is still debatable, particularly in developing countries (Saleem, Kamarudin, Shoaibb, & Nasar, 2021). Moreover, although some researchers have discussed the effectiveness of AR, there is limited understanding of which key factors make consumers adopt and accept this technology (Thaichon, Phau, & Weaven, 2020). In particular, it remains unclear how AR applications can best be used by retailers to respond to the change in consumer behavior from in-store shopping to online. In addition, from a practical view, the adoption of AR in mobile retail is below 35% (Park & Yoo, 2020). Yet the use of AR technology can help retailers fulfill consumers' desire for a pleasant experience when shopping online through virtual interaction with 3D products (Dehghani, Lee, & Mashatan, 2020; Kowalczuk et al., 2021). Therefore, considering that the use of AR is an upcoming trend in retail (Thaichon et al., 2020), researchers need to understand consumer perception to evaluate the effectiveness of AR as implemented by retailers in the pandemic era. The potential impact of studying AR in mobile retail applications also needs to be considered, especially how consumers respond to the adoption of new technologies for shopping. The research attempts to add to the literature and find evidence from Indonesia (representing developing countries) regarding the intention to adopt AR in mobile applications, to fill this gap.
The research is also encouraged by the Indonesian government's active prioritization of digitalization to stimulate post-COVID-19 economic recovery (Devanesan, 2020). The implementation of AR in business is expected to thrive thanks to government support for making digital technology a profitable economic booster; understanding the consumer perspective is therefore necessary. However, the research is limited to furniture products, considering that home decoration and interior design have become viral and trending topics on social media (particularly Twitter) in recent years. The trend of home decoration took center stage in 2020 due to changing home design needs and preferences, as many people were homebound during the pandemic (Dzulkifly, 2020). Moreover, in furniture retail, AR-based product presentation in a mobile application has been successfully implemented by IKEA through IKEA Place. Therefore, the research aims to examine consumer perception and the intention to adopt AR in furniture retail mobile applications. According to Kowalczuk et al. (2021), AR characteristics can be classified into five categories. First, interaction with virtual products covers the constructs contributing to users' interaction with virtual products, such as rotation, positioning, and zooming in (or out). Second, processing quality is how accurately, reliably, and quickly the system provides the requested service. In AR mobile applications, processing quality is important for creating the user experience during shopping, particularly its technical and functional quality. Third, information about virtual products refers to the amount of information supplied by the AR. Fourth, the quality of virtual product presentation denotes the graphical visualization quality of the virtual product presentation.
Fifth, handling of personal information is the consumer's perception of overall security and privacy when using AR. The AR characteristics mentioned influence users' perception and bear on the final evaluation of using this technology for shopping. In the context of mobile applications, especially retail mobile applications, AR has produced both positive and negative consumer perceptions. For example, a study of AR in digital retail finds that AR interactivity affects South Korean consumers' mental imagery, which in turn drives their attitude toward AR (Park & Yoo, 2020). Similarly, it is also found that AR encourages positive perception and attitude, impacting the behavioral intention to use AR (Qin et al., 2021). Perception in the study of Qin et al. (2021) relates to the consumer's perceived ease of using the technology and consumer gratification, both utilitarian and hedonic. Moreover, in China, previous researchers find that in addition to positive perceptions (ease of use and usefulness), a negative perception such as risk leads consumers to have no intention of using AR technology (Zhuang, Hou, Feng, Lin, & Li, 2021). Furthermore, according to Dehghani et al. (2020), a study of mixed reality (a combination of virtual and augmented reality) in retail services shows that perceptions have an indirect relationship to behavioral intentions, with perceived functional benefit being the most significant construct influencing behavior. Lastly, it is also revealed that a positive perception of AR as effective and efficient to use directly affects consumer attitude and intention, which leads to consumer experiential value and the behavioral intention to use AR (Wu, Chiu, & Chen, 2020). In Indonesia, research on the implementation of AR in shopping applications is still limited, so it is important to identify consumer perception of AR by asking consumers directly.
These considerations motivate the researchers to conduct a Focus Group Discussion (FGD) to build the relationship model. The FGD is conducted to achieve an in-depth understanding of Indonesian consumers' points of view on the implementation of AR (Malhotra, 2010). Moreover, this approach is used to find perception variables appropriate for the Indonesian context and to quantify the variables found in the FGD by relating them to behavioral outcomes using survey data. The FGD consists of eight participants (three females, five males) who regularly shop online, are technology literate (advanced smartphone users), have used or are familiar with AR, and are interested in furniture or home decoration. After the researchers analyze and categorize the most frequently mentioned words using conventional notes, the FGD yields three factors reflecting consumer perception of AR in furniture retail mobile applications: perceived functional benefit, perceived trust, and perceived product risk. These factors match findings of prior studies on technology adoption, in which they influence consumer attitude and behavioral intention (Gupta & Duggal, 2021; Ho, Wu, Lee, & Pham, 2020; Kaushik, Mohan, & Kumar, 2019; Um, 2019). To investigate the relations among these factors comprehensively, the researchers also conduct a literature review and formulate hypotheses. The value that consumers perceive in AR comes from how realistically the virtual presentation fits the actual product (Kowalczuk et al., 2021). This reality congruence is significantly related to enhancing the functional benefit of AR when purchasing a product online, a statement supported by Nikhashemi et al. (2021). A clear representation of an image in AR stimulates consumers to feel excitement and control over the virtual and real-world environments.
Moreover, the vividness of the presentation in AR leads to a higher level of consumer motivation to process the information (Barhorst et al., 2021). A clear visual presentation influences consumer trust and reduces the risk of using online shopping technology. Since there is no research examining the effect of reality congruence on consumer trust and product risk, the researchers argue that a high quality of virtual product presentation increases consumer trust in AR when purchasing and decreases consumers' risk perception of the product presented in the application. Based on these arguments, the following hypotheses are proposed.

H1a: There is a significant relationship between the AR characteristic of reality congruence and perceived functional benefit.
H1b: There is a significant relationship between the AR characteristic of reality congruence and perceived trust.
H1c: There is a significant relationship between the AR characteristic of reality congruence and perceived product risk.

System quality in AR relates to how responsively the technology provides the requested services (Kowalczuk et al., 2021). Technological responsiveness in mobile shopping benefits consumers when they purchase online. The relation of system quality to usefulness, as its functional benefit (Tseng & Lee, 2018), and to consumer trust has been examined by previous studies in the context of technology services and found to be positively significant (Luo, Wang, Zhang, Niu, & Tu, 2020; Nguyen, Chiu, & Le, 2021; Sarkar, Chauhan, & Khare, 2020). Although there is no study on the impact of AR system quality on consumers' perceived product risk, system quality captures the capacity of AR technology systems to deliver reliable and accurate product presentations, which is expected to reduce perceived product risk. The following hypotheses are proposed in light of these relations.
H2a: There is a significant relationship between the AR characteristic of system quality and perceived functional benefit.
H2b: There is a significant relationship between the AR characteristic of system quality and perceived trust.
H2c: There is a significant relationship between the AR characteristic of system quality and perceived product risk.

AR is a technology that enables consumers to interact with virtual products because of its ability to create a virtual product that looks like the actual product (Kowalczuk et al., 2021). This advantage of AR increases consumer trust in the product (Saprikis, Avlogiaris, & Katarachia, 2021) and reduces the risk consumers perceive in the product when purchasing digitally. The quality of AR product information, which delivers interactive and real-time product presentation, is considered the most significant benefit of using AR when shopping (Smink, Frowijn, Reijmersdal, Noort, & Neijens, 2019). Moreover, a previous study has found this perceived informativeness to be a significant factor in consumer trust (Liu, Bao, & Zheng, 2019). Just as product interaction in AR shopping applications reduces consumers' perceived product risk (Park & Yoo, 2020), the quality of product information in AR shopping applications is hypothesized to reduce risk perception. Considering the limited prior research on the AR characteristics of interaction with virtual products and product informativeness in relation to consumer perception, the researchers propose the hypotheses for the last dimensions of AR characteristics.

H3a: There is a significant relationship between the AR characteristic of interaction with virtual products and perceived functional benefit.
H3b: There is a significant relationship between the AR characteristic of interaction with virtual products and perceived trust.
H3c: There is a significant relationship between the AR characteristic of interaction with virtual products and perceived product risk.
H4a: There is a significant relationship between the AR characteristic of product informativeness and perceived functional benefit.
H4b: There is a significant relationship between the AR characteristic of product informativeness and perceived trust.
H4c: There is a significant relationship between the AR characteristic of product informativeness and perceived product risk.

Perceived functional benefit is defined as consumers' perception of the overall benefit of a technology's functions (absolute and relative), including an effective and efficient process, cost, and time consumption compared to conventional alternatives (Althunibat et al., 2021; Shareef, Baabdullah, Dutta, Kumar, & Dwivedi, 2018). In most previous studies, perceived functional benefit (also called utilitarian benefit) is a significant perceptual predictor of consumer motivation to make a final decision (to use or purchase). For example, the use of technology in banking (mobile banking) provides functional benefits in the form of simple and easy transactions instead of visits to a physical bank (Shareef et al., 2018). Such perceptions of positive functional benefit are found to drive consumers' intention to adopt mobile banking at all consumer service phases (static, interaction, and transaction). This is supported by a recent study of smart-government service adoption, in which perceived functional benefit is significantly related to a person's final decision (Althunibat et al., 2021). In a study of mixed reality, perceived functional benefit increases consumer satisfaction and has an indirect relation to behavioral intention (Dehghani et al., 2020).
Although the examination of functional benefit in relation to consumer attitude is limited, consumers' evaluation of the technology applied in mobile applications as good and favorable is repeatedly mentioned in the group discussion. Many studies that apply the theory of planned behavior by Ajzen (1991) find that attitude directly influences behavioral intention (Gupta & Duggal, 2021; Kaushik et al., 2019; Sadiq, Dogra, Adil, & Bharti, 2021; Troise, O'Driscoll, Tani, & Prisco, 2021; Wan, Shen, & Choi, 2017). It is usually used as the first factor in examining consumer evaluation toward a final decision. Therefore, the research proposes a hypothesis that there is a relationship between consumers' perceived functional benefit and attitude toward AR in mobile furniture applications.

H5: There is a significant relationship between customers' perceived functional benefit and attitude toward AR in mobile applications.

Perceived trust is defined as the degree of attitudinal confidence in the integrity, credibility, reliability, and safety of mobile applications from a technical and organizational standpoint, and in customer service value if required (Dehghani et al., 2020; Shareef et al., 2018). In digital applications, consumers' perceived trust can help to reduce the complexity and uncertainty of online purchasing decisions (Um, 2019). Trust also plays a crucial role in purchase decisions in retail mobile applications: customers are unlikely to purchase a product through a mobile application (online) if they do not trust it (Kaushik et al., 2019). Trust is a crucial construct influencing consumer attitude and final decision-making (Sarkar et al., 2020). For instance, previous studies explain that perceived trust is the dominant variable influencing users' attitudes toward digital applications (Kaushik et al., 2019; Mufarih, Jayadi, & Sugandi, 2020; Sarkar et al., 2020; Um, 2019).
In addition, previous research has found that trust in an application weakens consumers' perception of risk (Marriott & Williams, 2018; Mufarih et al., 2020). However, to the best of the researchers' knowledge, the relationship of consumers' perceived trust to attitude and product risk in the context of AR mobile applications has not yet been discussed in previous studies. The research therefore attempts to present a novel result by examining the relationship of perceived trust to consumer attitude toward AR and to perceived product risk. The hypotheses refer to previous studies in which perceived trust correlates with consumer attitude and risk perception in mobile technology (Mufarih et al., 2020). The following hypotheses are proposed.

H6: There is a significant relationship between consumers' perceived trust and attitude toward AR in mobile applications.
H7: There is a significant relationship between perceived trust and perceived product risk in AR mobile applications.

The main cause of perceived risk in online shopping activities is that consumers cannot interact with (touch, feel, and try) the product before purchasing (Bonnin, 2020). According to Dowling and Staelin in Bonnin (2020), perceived risk is defined as consumers' perception of uncertainty and adverse consequences for the product or service during their purchase (shopping) activities. Perceived risk is also related to consumers' concerns about the quality of the product (Vonkeman, Verhagen, & Dolen, 2017); it means that the probability and outcomes of purchase activities are uncertain. Meanwhile, according to Ariffin, Mohan, and Goh (2018), perceived risk is divided into two parts: indecision (the probability and favorability of outcomes) and consequences (the importance of losses). In online shopping, perceived product risk is the risk consumers are most concerned about when purchasing a product.
It comes from the potential loss when products do not meet consumers' expectations of standard and quality (Ariffin et al., 2018). AR applied in retail mobile applications can help consumers reduce their perceived risk of purchasing products online (Beck & Crié, 2018). Previous studies in online shopping and digital technology find that perceived risk (mainly product risk) relates negatively to consumer attitude (Gupta & Duggal, 2021; Ho et al., 2020; Sadiq, Dogra, Adil, & Bharti, 2021; Troise et al., 2021). Therefore, the research assumes that the perceived product risk of products presented by AR in mobile furniture applications decreases consumer attitude toward AR. The following hypothesis is proposed.

H8: There is a significant relationship between consumers' perceived product risk and attitude toward AR in mobile apps.

In the psychological context, the behavioral theory of final decision-making has been discussed by Ajzen (1991) in the theory of planned behavior. The theory explains attitude as one of the direct influences on consumers' intention to perform a certain behavior. Based on Ajzen (1991), attitude is defined as the favorable or unfavorable disposition of an individual toward the specific behavior in question. In other words, when consumers respond positively to a stimulus, it can directly induce a positive attitude, which relates to individuals' intention to behave in the future. According to Ajzen (2002), measuring attitude should include experiential (affective) and instrumental (benefit, function) dimensions. Based on Wan, Shen, and Choi (2017), experiential attitude is labeled hedonic because it is operationalized by asking consumers for behavioral ratings such as sound, pleasant, and sensible. Meanwhile, instrumental attitude is labeled utilitarian attitude because it comes from the functional performance of the product. According to Ajzen (1991), behavioral intention refers to the degree of an individual's motivation to behave.
This motivation depicts the effort people are willing to expend to perform the behavior. Measuring behavioral intention is a way to evaluate the future behavior resulting from consumers' decisions to act (Chennamaneni, Teng, & Raja, 2012). Although behavioral intention is only the preliminary stage of actual consumer behavior, intention is still considered a strong predictor of future consumer behavior (Sheeran, 2002). Previous studies have discussed the relationship, association, and impact between attitude and behavioral intention in technology implementation and adoption, and the result is found to be significantly positive (Gupta & Duggal, 2021; Ho et al., 2020; Kaushik et al., 2019; Lee, Xu, & Porterfield, 2021; Lee & Cho, 2019; Mufarih et al., 2020; Sadiq et al., 2021; Um, 2019; Yavuz, 2021). Therefore, overall consumer attitude has a relationship with and influence on the intention to behave, a significant predictor as described by Ajzen (1991). The last hypothesis is as follows.

H9: There is a significant relationship between consumer attitude toward AR and the intention to adopt AR in mobile applications.

Based on the literature review above, a comprehensive conceptual model is illustrated in Figure 1 (see Appendices). Ellipses denote latent variables, dashed rectangles and arrows denote the grouped factors of the AR characteristics variable, and solid arrows denote the relationships between variables.

METHODS

The research is classified as mixed-method research, consisting of qualitative and quantitative parts. In the qualitative part, the researchers conduct the FGD described in the previous section. In the quantitative part, cross-sectional survey data are collected from online buyers in Indonesia through a structured self-administered questionnaire. Online data collection is chosen because the social distancing policy during the COVID-19 pandemic limits physical contact with the respondents.
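Before estimation, the hypothesized paths (H1a-H9) of the conceptual model can be written down as a plain edge list; the construct labels below are shortened, illustrative names, not those used in the study's questionnaire or software.

```python
# The study's hypothesized paths (H1a-H9) as an edge list
# (source construct -> target construct). Labels are shortened
# for illustration only.
HYPOTHESES = {
    "H1a": ("reality_congruence", "functional_benefit"),
    "H1b": ("reality_congruence", "trust"),
    "H1c": ("reality_congruence", "product_risk"),
    "H2a": ("system_quality", "functional_benefit"),
    "H2b": ("system_quality", "trust"),
    "H2c": ("system_quality", "product_risk"),
    "H3a": ("product_interaction", "functional_benefit"),
    "H3b": ("product_interaction", "trust"),
    "H3c": ("product_interaction", "product_risk"),
    "H4a": ("product_informativeness", "functional_benefit"),
    "H4b": ("product_informativeness", "trust"),
    "H4c": ("product_informativeness", "product_risk"),
    "H5": ("functional_benefit", "attitude"),
    "H6": ("trust", "attitude"),
    "H7": ("trust", "product_risk"),
    "H8": ("product_risk", "attitude"),
    "H9": ("attitude", "intention_to_adopt"),
}
```

Writing the model down this way makes the structure easy to check: four AR characteristics each feed three perception constructs, the perceptions feed attitude, and attitude feeds the intention to adopt.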
The online questionnaire is uploaded on Google Forms and administered for two months using the purposive sampling technique. The criteria for respondents are: (1) Indonesians with a minimum age of 18 years who consistently stay at home during the pandemic (occasionally going out is tolerated); (2) they are technology literate; (3) they are familiar with online shopping platforms; (4) they are familiar with AR technology; and (5) they have purchased at least one household appliance or piece of furniture online during the COVID-19 pandemic. The perception of the AR application from the survey data only evaluates the visual and cognitive perception obtained through detailed text and a video in the survey, combined with the perception generated by respondents' prior experience of using AR technology. The questionnaire design is adjusted based on this consideration, adapted from several sources, and tailored to the context of AR and the characteristics of Indonesian consumers. All construct items in the research are measured using a seven-point Likert scale (1 = "strongly disagree" to 7 = "strongly agree"). The researchers also conduct a comprehensive discussion to find the best wording for each sentence and to ease understanding of the main idea of the questions. Table 1 (see Appendices) presents a summary of the items and the source of each item. A total of 397 responses are collected from the respondents. After discarding inappropriate and incomplete responses, 383 valid responses remain for analysis. To analyze the primary data, the researchers utilize Partial Least Squares-Structural Equation Modelling (PLS-SEM) in SmartPLS 3.3.3 for both the measurement and structural models. Since SmartPLS can analyze complex models, PLS-SEM is more appropriate than other statistical tools here.

RESULTS AND DISCUSSIONS

In the descriptive analysis, the researchers find that the majority of respondents are female (66%), possibly because more women purchased online during the pandemic (BPS, 2020).
Most respondents are between 23 and 27 years old. Most of them have high school to undergraduate degrees, and around 30% are employees. Moreover, more than 80% of respondents live in Java and have an income of 1 to 5 million IDR (about 70.11 to 350 USD) per month. Furthermore, the majority of respondents shop online one to ten times during the pandemic, and specifically, 89.8% indicate that they shop for furniture products online with a similar frequency. In the PLS-SEM analysis, the first step in the measurement model is assessing the indicator loadings. According to Hair Jr., Howard, and Nitzl (2020), the minimum level of acceptance should be at least 0.707. As shown in Table 1 (see Appendices), most indicator loadings are above the acceptable threshold. However, two indicators of intention to adopt have values of about 0.6, so the researchers remove them. The next step is assessing reliability to measure the consistency of the indicators in explaining the constructs. According to Hair Jr. et al. (2020), reliability is evaluated in two ways, namely Cronbach's alpha (α) and Composite Reliability (CR). The results displayed in Table 1 (see Appendices) indicate a high level of reliability, with values ranging from 0.88 to 0.98. Next, convergent validity analyzes whether the indicators of a construct measure the same thing (Hair Jr., Black, Babin, & Anderson, 2019). Average Variance Extracted (AVE) is the common method for measuring convergent validity. In the research, all the AVE values are more than 0.5, indicating convergent validity of the constructs. Discriminant validity is established by the Fornell-Larcker criterion (Hair Jr. et al., 2019). In Table 2 (see Appendices), the Fornell-Larcker criterion is supported since the square roots of the AVE values (in bold) exceed the correlations of each construct with the other constructs in the model.
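As an illustrative check of the reliability and convergent-validity criteria above, composite reliability and AVE can be computed directly from standardized indicator loadings, and the Fornell-Larcker rule reduces to comparing sqrt(AVE) with construct correlations. The loadings and correlations below are hypothetical, not the study's data (a minimal Python sketch):

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (CR) from standardized indicator loadings."""
    loadings = np.asarray(loadings, dtype=float)
    errors = 1.0 - loadings**2            # indicator error variances
    num = loadings.sum() ** 2
    return num / (num + errors.sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading of a construct."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings**2))

# Hypothetical standardized loadings for one construct (not the paper's data)
trust_loadings = [0.85, 0.88, 0.91, 0.82]

cr = composite_reliability(trust_loadings)
ave = average_variance_extracted(trust_loadings)

print(cr >= 0.70)    # CR threshold -> True
print(ave >= 0.50)   # convergent-validity threshold -> True

# Fornell-Larcker: sqrt(AVE) must exceed the construct's correlations
# with every other construct (correlations here are hypothetical)
corr_with_others = [0.62, 0.55]
print(ave**0.5 > max(corr_with_others))  # -> True
```

All indicator loadings are kept above 0.707 here, mirroring the acceptance threshold the paper cites from Hair Jr. et al. (2020).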
The PLS algorithm and blindfolding are utilized to assess the quality of the structural model by evaluating the coefficient of determination (R²), effect size (f²), and predictive relevance (Q²). As displayed in Table 3 (see Appendices), the research finds that the predictive power of the model is moderate, as the R² values are higher than 0.5, except for attitude toward AR and intention to adopt AR. Following Hair Jr. et al. (2019), the researchers conduct the effect size analysis, which quantifies the effect of omitting a specific exogenous construct from the model, interpreted as 0.02 (weak), 0.15 (medium), and 0.35 (large). Besides, Stone-Geisser's Q² test is used for the predictive relevance assessment by conducting blindfolding based on cross-validated redundancy (Hair Jr. et al., 2020). The analysis reveals evidence of the model's predictive relevance, with values of more than zero (see Table 3 in Appendices). A bootstrap procedure with 5,000 resamples is conducted to test the hypotheses by assessing the significance of the structural path coefficients. The research uses a 5% two-tailed significance level (t-value = 1.96) as the statistical decision rule, based on Hair Jr. et al. (2019). The researchers find that most of the relationships support the hypotheses, except for H1c, H3c, H4c, and H8, as shown in Table 4 (see Appendices) and illustrated in Figure 2 (see Appendices). The significant path coefficients further show that the AR characteristic of reality congruence increases consumer perception of AR functional benefit and trust while shopping online in the pandemic era. In many countries, particularly Indonesia, the increase in COVID-19 cases made the central government issue a policy to temporarily close non-essential stores (non-grocery and non-pharmacy) and encourage taking advantage of online commerce.
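The bootstrap significance test described above (5,000 resamples, two-tailed 5% rule) can be sketched as follows. The construct scores are simulated, and the single-predictor "path" is just a correlation, so this is a simplified stand-in for the full PLS-SEM procedure rather than a reproduction of SmartPLS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical construct scores (n = 383, as in the study's sample size);
# x = perceived functional benefit, y = attitude toward AR (simulated data)
n = 383
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)

def path_coefficient(x, y):
    """Standardized single-predictor coefficient (= correlation)."""
    return np.corrcoef(x, y)[0, 1]

# Bootstrap with 5,000 resamples, as in the paper's procedure
boot = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n, size=n)
    boot[b] = path_coefficient(x[idx], y[idx])

estimate = path_coefficient(x, y)
se = boot.std(ddof=1)          # bootstrap standard error
t_value = estimate / se

# Two-tailed 5% decision rule: |t| > 1.96 -> path is significant
print(abs(t_value) > 1.96)
```

With a true path of 0.5 and n = 383, the bootstrap t-value comfortably exceeds 1.96, illustrating how a supported hypothesis looks under this decision rule.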
However, purchasing goods like furniture online is not as easy as purchasing groceries. It requires physical inspection to ensure that the material, size, and design match the room where the furniture will be placed. In the research, using IKEA Place, the researchers show that the reality congruence of the presented products fits the actual products better than other digital channels, such as websites and e-catalogs. This advantage is related to AR allowing consumers to virtually furnish the room with 3D virtual products. The technology presents virtual products as if their shape, size, and design were confirmed (Qin et al., 2021). This benefit implies that the quality of reality congruence in presenting the product shapes consumer perception that AR applications are functionally effective and efficient to use. Thus, it increases trust in the apps when purchasing furniture products online. Prior studies have explained that system quality in service technology increases overall positive consumer perception of technology use (Baabdullah, Alalwan, Rana, Kizgin, & Patil, 2019). The findings in the research support previous studies like Luo et al. (2020), Nguyen et al. (2021), and Sarkar et al. (2020) and develop further insight into system quality, which increases consumers' perceived functional benefit and trust in online shopping. AR system quality also decreases consumer perception that a product purchased through AR-based applications is riskier. Because system quality is identified as the main driver of IKEA Place functionality (Kowalczuk et al., 2021), the prompt response and reliable performance of AR technology (compared to web-based product presentations) generate consumers' belief in usability benefits and relieve the feeling of uncertainty about the product they purchase using the AR application.
Therefore, consumers evaluate that AR with high system quality increases their perception of functional benefit and trust in the application for online shopping. Besides, it decreases their perceived risk regarding product accuracy. Moreover, the AR application's quality of virtual product information generates a high perception of functional benefit and trust in the AR application when purchasing furniture online. This implication is supported by the fact that AR-based product presentation, proposed as a new alternative for consumers purchasing furniture during COVID-19, has the advantage of providing clear product information. It can be seen in the IKEA Place feature called 'For You Feed', which offers IKEA product suggestions and daily inspiration for designing room interiors based on users' interests (Miller, 2019). The 'Browse' feature also provides thousands of furniture products by selected collection or category, with specific information for each product. In the visual interpretation of the promotional video during the survey, these two features are seen when the model chooses a product. The application clearly presents product information, such as name, category, and price, and recommends similar products. This product information ensures that products ordered online meet users' expectations, given the limited possibility of coming to the store. This finding is consistent with Yim et al. (2017): AR provides effective communication benefits. The last AR characteristic is product interaction, which also significantly increases perceived functional benefit and trust. Unlike product information, product interaction allows consumers to interact with the virtual products and respond to stimuli from the AR technological system.
Moreover, the AR application provides a 360° 3D virtual product feature that enables consumers to rotate, zoom, and move virtual products to specific points in the actual environment, compared to other channels, such as web-based product presentations that only provide 2D visual products (Qin et al., 2021). Consumers' visual perception generated from the promotional video of IKEA Place shows that AR presents the product interactively. Allowing consumers to control the product improves their perception of the application as a beneficial and trustworthy medium during online shopping. Furthermore, the feature strengthens consumer perception that the application is beneficial because it helps consumers decide on products quickly and precisely and encourages effective and efficient online purchasing. Therefore, the more users can interact with the virtual product, the more they trust the overall AR application features when utilizing them before purchasing online. Consumers' visual perception of the functional benefit of using IKEA Place and their trust in the overall AR features before purchasing furniture online increase consumer attitude toward the AR application. Previous studies in the technological context have confirmed the direct relation of perceived functional benefit and trust to attitude (Gupta & Duggal, 2021; Kaushik et al., 2019; Mufarih et al., 2020). Specifically for AR, the result is consistent with the research by Yim et al. (2017): the benefits of AR in generating medium usefulness result in consumers' positive attitudes. Therefore, this finding implies that when consumers look at an application that can display realistic 3D furniture and digitally select, place, and move virtual products with one touch (Dehghani et al., 2020), their evaluation of the application is positive. Similarly, perceived trust in IKEA Place is significantly positive for consumer attitude.
It shows consumers' belief in AR performance, since the application provided and developed by IKEA can display realistic furniture products (size, color, and shapes) and information (price and product details). Thus, it increases overall cognition, emotion, and behavior. The hypothesis result indicates that the more consumers think that the AR application is trustworthy for shopping for furniture, the less risk they perceive in the offered product. This finding supports the significant relationship between trust and perceived risk reported by Kaushik et al. (2019). It accords with consumers' visual perception that the AR application has quality in its processing system and in its virtual product presentation and information, and that it allows consumers to interact with virtual products. Therefore, trust in the AR application attenuates the risk that consumers perceive about the products. The last result further reveals that a positive consumer attitude has a significant relationship with the intention to adopt AR applications when purchasing a furniture product. This finding supports previous research on consumer evaluation of AR having implications for behavioral intention, by Park and Yoo (2020), Qin et al. (2021), and Zhuang et al. (2021). According to Manchanda and Deb (2021), consumer attitude toward AR positively affects the intention to adopt m-commerce. This finding indicates that consumers who form a positive attitude toward the AR application have a greater intention to download, use, and recommend the app in the future. Based on the research findings, the research has numerous managerial implications for retailers and application developers regarding consumers' responses to AR technology in mobile applications. The findings describe the importance of implementing AR technology, particularly in furniture retail mobile applications, to create a customer experience not found among competitors.
The research suggests that reality congruence and product informativeness in AR can be effective tools to enhance consumers' perceived functional benefit and trust in applications. Thus, if furniture retailers implement AR technology, retailers and application developers should provide quality virtual product presentations and key product information in the AR interface. Improvements can target graphical and pixel quality, the size accuracy of augmented products relative to their real products, and detailed information based on consumers' frequently asked questions when purchasing online. The results also indicate the importance of managing system quality in AR to facilitate better overall consumer perception of the AR, such as functional benefit, trust, and reduced risk perception toward virtual products. Application managers or developers should develop AR shopping mobile applications with good processing speed (minimum response delay), accurate and reliable service as requested, and trouble-free presentation of virtual products in real environments. Thus, when furniture (and other) retailers use AR as an alternative marketing and sales channel, maintaining AR system quality periodically can benefit their brands. It should be done to create a customer experience during the purchase of a product and to avoid the risk that consumers perceive in product performance. Encouraging consumers' positive perception that AR applications are functionally beneficial and trustworthy for shopping is also important. It is crucial for application managers and developers to provide excellent quality in virtual product interaction. Practitioners should enable consumers to interact seamlessly with virtual products by improving the quality of human-computer interaction functions such as rotation, zoom (in and out), color change, and well-defined 360-degree 3D product presentation.
The research also shows that it is advantageous for retail managers to cultivate consumer attitude toward AR through consumers' perceived functional benefit and trust in AR. When retailers apply AR and manage consumer attitude toward their AR applications positively, it will evoke consumer intention to download the AR application, treat it as a priority channel for shopping online, and recommend it to other people. Persuading consumers to use AR must be carried out intensively, particularly in emerging markets that rarely utilize AR mobile applications in online shopping. Retailers can take advantage of social media marketing (on Instagram, Twitter, YouTube, or TikTok) by creating attractive, innovative, and informative content regarding the use of AR-based applications when shopping online. Persuasive communication on social media platforms should focus on how the application provides functionally beneficial features when shopping online. Creating social media content that can evoke consumer trust in AR applications includes reposting Instagram and Twitter users' content made while using the company's AR apps during shopping. Moreover, promoting applications with the help of influencers is recommended to increase consumers' sensory experience that AR is functionally beneficial and trustworthy for online shopping. This marketing strategy can also counter consumers' perception that shopping by using AR is riskier because the products they receive may not look as shown in the applications. Lastly, as the research utilizes IKEA Place, which is AR-based for product presentations, the researchers suggest that IKEA in emerging markets integrate the online store into the application, because it is currently unavailable in some emerging countries such as Indonesia; there, consumers can only utilize the application to visualize AR-based furniture.
This consideration would make it easier for consumers to purchase IKEA furniture products through the AR application. IKEA should also align the information presented on IKEA Place with the local showroom (price, product availability, and collection). Since the application is not available for some smartphone types, the researchers suggest that IKEA and AR application developers conduct technical research to ensure that consumers can easily access the apps. In addition, the research is addressed to the government to produce regulations and policies on infrastructure support, personal data privacy, and other stimuli that can help the development of AR implementation in the business sector.

CONCLUSIONS

The research, which focuses on consumer perception, attitude, and adoption of technology in retail, provides a comprehensive study of consumer perception and the evaluation to adopt AR mobile applications as part of consumers' online shopping experiences. A total of 383 valid responses are collected using an online survey. Then, PLS-SEM is applied to analyze data with the model developed through the FGD. The result finds that AR characteristics, such as reality congruence, system quality, product informativeness, and product interaction, are significantly related to perceived functional benefit and trust, based on consumer evaluation of the IKEA Place promotional video. Besides, only the AR characteristic of system quality has a significant negative relationship with perceived product risk, supporting the hypothesis. The result also shows the significant relation of perceived functional benefit and trust to attitude toward AR, which indirectly impacts the intention to adopt AR. The contribution of the research to the literature is three-fold. First, the researchers build a comprehensive theoretical model based on the results of the FGD combined with models that have been developed in the literature.
Several of the variable relationships have never been discussed by previous studies in the context of AR-based retailer applications. Accordingly, apart from extending the correlations of existing variables in AR studies, the researchers also present new ones. Second, to the best of the researchers' knowledge, previous studies have not explored consumer adoption of AR retail mobile applications in the furniture context in Indonesia, from a set of AR characteristics through to users' perception and behavioral responses. The research can be considered among the pioneering research presenting the psychological consequences of implementing AR-based technology in mobile retail applications. Third, the research examines the relationship between AR characteristics and consumer perception, which is indirectly related to consumer attitude and behavioral adoption. Since there are no studies on this relationship, the researchers shed new light for other researchers to assess consumer perceptions of AR as influenced by the characteristics of the technology. Therefore, these three important aspects further enrich the literature on AR, consumer behavior in retail, and technological adoption from an emerging-country perspective during the COVID-19 pandemic. Despite the contribution, the research has some limitations that can guide the agenda for future research. First, consumer evaluation of AR-based product presentations on IKEA Place is only based on the promotional video of IKEA Place and the customers' experience of using AR technology from other platforms. Future research that uses IKEA Place as a sample platform may collect data from customers who use IKEA Place regularly. Second, the research only collects data from one emerging market. It is suggested for future research to conduct comparative data collection, such as across developing countries or comparing developing and developed countries' datasets.
Third, questionnaire bias cannot be avoided since the research applies a survey strategy. Future research is suggested to conduct experimental studies comparing participants' perceptions of and attitudes toward AR mobile applications and other mobile applications. Lastly, the research focuses solely on AR in furniture retail mobile applications. The results may be impactful for furniture retail practitioners and the literature. Thus, future research may explore different product categories, such as clothing, accessories, cosmetics, and others, to broaden the understanding in the literature of AR mobile applications, consumer behavior, and technological adoption.
Experiences and perspectives of cancer stakeholders regarding COVID-19 vaccination

Abstract

Aim: The risk of dying from COVID-19 is higher for those who are older, immune-compromised, or chronically ill. Vaccines are an effective strategy in reducing mortality and morbidity from COVID-19. However, for COVID-19 vaccination programs to reach full potential, vaccines must be taken up by those at greatest risk, such as cancer patients. Understanding the perspectives of all stakeholders involved in cancer patient COVID-19 vaccine uptake will be critical to ensuring appropriate support and information is provided to facilitate vaccination. The aim of this research was to explore the longitudinal views of cancer stakeholders regarding COVID-19 vaccination.

Methods: Semistructured interviews were conducted with cancer patients (n = 23), family members (n = 10), cancer health professionals (n = 19), and representatives of cancer nongovernment organizations (n = 7) across Australia 6 and 12 months postrecruitment. Transcripts were thematically analyzed, using an inductive approach.

Results: All stakeholder groups expressed mostly positive attitudes toward COVID-19 vaccination, with the following key themes identified: (1) high motivation—vaccination perceived as offering health protection and hope; (2) hesitancy—concern about vaccine hesitancy among the general population, with a minority hesitant themselves; (3) confusion and frustration—regarding the vaccine rollout and patient eligibility; (4) uncertainty—about vaccination in the context of cancer; (5) access to vaccination; and (6) desire for expert individualized advice—on vaccine interaction with cancer treatments.

Conclusion: These findings highlight the COVID-19 vaccine concerns and information needs of cancer stakeholders. Policymakers need to provide clear tailored information regarding vaccine eligibility, accessibility, benefits, and risks to facilitate vaccine uptake.
METHODS

This was a substudy of a longitudinal qualitative study involving semistructured interviews with four groups of cancer stakeholders, exploring attitudes to and experiences of COVID-19 vaccination. Eligible participants were adult cancer patients (18 years and over) currently receiving treatment (chemotherapy, radiation therapy, hormone therapy, targeted therapy, immunotherapy, or surgery) or within 6 months of treatment (except ongoing hormone therapy); family members of adult cancer patients currently receiving treatment; oncology HPs; and representatives of cancer NGOs. Non-English speaking or incapacity to give informed consent were exclusion criteria. Participants were recruited through an email invitation via national professional or consumer organizations, two NSW hospital-based oncology services, and via snowballing (HPs forwarding the email to colleagues nationally). A participant information sheet and consent form were accessible via a link embedded in the email. The research team contacted interested participants to schedule a telephone interview. Recruitment continued until theoretical saturation (no new themes emerging after three consecutive interviews).31 The longitudinal qualitative study collected qualitative data at baseline (consent) and two timepoints post consent: 6 months (T1; March-June 2021) and 10-12 months (T2; August-October 2021).

Analyses

Interviews were audio-recorded, transcribed verbatim, anonymized, uploaded to NVIVO 12, and subjected to thematic analysis using framework analysis to compare and contrast themes across stakeholder groups and timepoints.
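The theoretical-saturation stopping rule used for recruitment (no new themes emerging after three consecutive interviews) can be sketched as a simple check over coded interviews. The theme sets below are hypothetical, not the study's coding:

```python
def reached_saturation(theme_sets, window=3):
    """True once `window` consecutive interviews add no previously unseen theme."""
    seen = set()
    streak = 0
    for themes in theme_sets:
        new = set(themes) - seen
        seen |= set(themes)
        streak = 0 if new else streak + 1
        if streak >= window:
            return True
    return False

# Hypothetical coded interviews: each set holds the themes raised
interviews = [
    {"motivation", "hesitancy"},
    {"access", "motivation"},
    {"uncertainty"},
    {"motivation"},           # nothing new (1)
    {"hesitancy", "access"},  # nothing new (2)
    {"uncertainty"},          # nothing new (3) -> saturated
]
print(reached_saturation(interviews))  # -> True
```

In practice the decision is made by analysts during concurrent coding; the sketch only formalizes the counting rule.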
Thematic analysis of the interviews revealed six main themes: (1) high motivation—vaccination perceived as offering health protection and hope; (2) hesitancy—concern about vaccine hesitancy among the general population, with a minority hesitant themselves; (3) confusion and frustration—regarding the vaccine rollout and patient eligibility; (4) uncertainty—about vaccination in the context of cancer; (5) access to vaccination; and (6) desire for expert individualized advice.

High motivation

All stakeholder groups expressed pro COVID-19 vaccination attitudes and were eager to be vaccinated.

Hesitancy

A few patients/family did express hesitancy about the vaccine at 6 months, and some HP/NGO participants reported hesitancy in patients. Hesitancy was due to a belief that COVID-19 is a conspiracy or benign, a perception of insufficient research about vaccine outcomes, fear of side effects, concern about treatment interactions, and impacts on treatment scheduling. Some patients/family acknowledged they were late adopters generally, which carried over to this vaccine. Some patients were advised by their oncologist/GP to put off vaccination and prioritize the flu vaccination. 'I've got one patient who doesn't believe COVID exists...' 'There will still be 5 to 10 percent of people who refuse to get the vaccine for whatever daft reason they believe in... But there will be another 15 or so percent of people who are not bothering now because they're too lazy or too selfish or whatever, who will want to then go and get it once COVID gets here, and they need to wake up to themselves.' HP100, Medical Oncologist, T2

Accessibility

Most interviewees experienced easy, streamlined access to the vaccine. HPs had access through their workplaces, and patients were called by GPs or proactively asked their GP/oncologist about vaccination. At 6 months, some patients/family noted their HPs exhibited a lack of urgency to have patients vaccinated, prioritizing the flu vaccine.
However, none discussed this at the 12-month interviews. The information needs identified in the interviews included accessibility and eligibility, risks, effectiveness and timing of the vaccine, and long-term implications of the vaccine (Table 2).

DISCUSSION

This is the first study to examine attitudes toward COVID-19 vaccination from multiple cancer stakeholder perspectives over time. All stakeholder groups reported barriers to COVID-19 vaccination uptake in this population, supporting previous research. Barriers included lack of confidence in science and vaccine efficacy,16,19,21,23,24 fear of side effects,16,21-26,30 belief that COVID-19 is benign,16,23,25 and concern about vaccine impact on treatment.18 To address this, once vaccines have been proven to be safe and effective in the general population, clinical trials should focus on vulnerable population groups, such as those with compromised immune systems. Research is currently underway in Australia to investigate the safety and efficacy of COVID-19 vaccines in cancer patients, the results of which will inform tailored messaging for cancer patients regarding COVID-19 vaccination.38 Further, research35,39 indicates that education interventions, such as webinars delivered by experts (oncology and disease specialists), can influence patient perspectives regarding COVID-19 vaccine safety and effectiveness, as well as shift intentions toward vaccination. Potter and colleagues37 suggest that government agencies and healthcare organizations can also play an important role in media and education campaigns to provide evidence-based information and prevent the spread of misinformation. In addition to information and communication needs, this research highlighted stakeholder confusion and frustration with the national COVID-19 vaccine rollout.
The COVID-19 vaccine rollout in Australia was delayed due to slow and inadequate vaccine supply and inefficient distribution networks within Australia.40
Fatigue Response of MoS2 with Controlled Introduction of Atomic Vacancies

Fatigue-induced failure resulting from repetitive stress–strain cycles is a critical concern in the development of robust and durable nanoelectromechanical devices founded on 2D semiconductors. Defects, such as vacancies and grain boundaries, inherent in scalable materials can act as stress concentrators and accelerate fatigue fracture. Here, we investigate MoS2 with controlled atomic vacancies to elucidate its mechanical reliability and fatigue response as a function of atomic defect density. High-quality MoS2 demonstrates an exceptional fatigue response, enduring 10^9 cycles at 80% of its breaking strength (13.5 GPa), surpassing the fatigue resistance of steel and approaching that of graphene. The introduction of atomic defect densities akin to those generated during scalable synthesis processes (∼10^12 cm^-2) reduces the fatigue strength to half the breaking strength. Our findings also point toward a sudden defect reconfiguration prior to global failure as the primary fatigue mechanism, offering valuable insights into structure–property relationships.

The mechanical endurance of materials is typically limited by their ultimate breaking strength or their lifetime due to fatigue induced by cyclic loading at stresses below the ultimate tensile strength. Notably, over 80% of fracture incidents occur as a result of fatigue.1
The achievement of a sustainable future requires the use of durable materials that can withstand repeated mechanical stress. Two-dimensional (2D) materials such as graphene and transition metal dichalcogenides (TMDCs), particularly MoS2, are being investigated as active components in a variety of electromechanical devices, i.e., flexible displays, mechanical sensors, and nanomechanical resonators,2,3 due to their unique electronic and mechanical properties, such as an appropriate band gap, exceptional electrostatic gate coupling, high flexibility, and ultrahigh strength. While graphene and MoS2 have been widely employed to enhance the fatigue resistance of bulk materials and structures,4-7 experimental works on the service life of atomically thin layers have only been conducted in recent years,8,9 owing to the challenges of performing such experiments. However, with the increasing adoption of few-layered devices in practical applications, their mechanical reliability and service life have become critical concerns.10-12 Unfortunately, every scalable method for the production of these materials involves a certain (usually low) density of imperfections in the atomic lattice. Therefore, systematic studies of physical magnitudes as a function of defect content should clarify the tolerance of these materials on the road to real-life applications.13
Here we evaluate the mechanical reliability and fatigue response of MoS2 by means of indentations with atomic force microscopy (AFM) on suspended membranes. By analyzing multiple breaking events, we determine that monolayer MoS2 has a reliability similar to that of engineered ceramics. We demonstrate that the dynamic fatigue life of high-quality CVD-grown monolayer MoS2 is greater than 10^9 cycles for a stress of 13.5 GPa, which is 0.8 times its ultimate breaking strength. Upon the controlled introduction of atomic vacancies, we perform a systematic study of these magnitudes as a function of defect density. Lateral force microscopy images acquired before and after fatigue testing of the membranes reveal that fatigue results from a sudden defect reconfiguration prior to global failure.

We performed fatigue measurements on MoS2 monolayer drumheads with diameters ranging from 0.5 to 2 μm. Our starting MoS2 monolayers were grown by CVD (see SI1) and then transferred by an all-dry technique [14] onto SiO2/Si substrates with predefined micrometric circular wells, yielding suspended MoS2 membranes well anchored on the circular perimeter [15]. We confirmed the presence of the MoS2 monolayers using photoluminescence microscopy [16] (data provided in SI2) and imaged them using AFM in dynamic mode. For this study, we selected only single-layer drumheads exhibiting no observable slack or wrinkling (Figure 1a). We estimated the defect density using micro-Raman spectroscopy (details in SI2) [11,17,18]. We obtained native defect densities of 0.4 × 10^12 and 0.25 × 10^12 cm^-2 for two different batches of as-grown samples, corresponding to mean distances between defects of ⟨l_d⟩ = 15.7 nm and ⟨l_d⟩ = 20 nm, respectively. These defect densities are typical of ultra-high-quality CVD-grown MoS2 samples [19].
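The quoted mean defect distances follow from the measured densities if one assumes a roughly uniform distribution of point defects, so that ⟨l_d⟩ ≈ 1/√n_d. A quick sanity check in Python (the helper name is ours, not from the paper):

```python
import math

def mean_defect_distance_nm(density_per_cm2: float) -> float:
    """Mean distance between point defects (nm), assuming a uniform
    distribution so that <l_d> ~ 1/sqrt(n_d)."""
    distance_cm = 1.0 / math.sqrt(density_per_cm2)
    return distance_cm * 1e7  # 1 cm = 1e7 nm

# Densities reported for the two as-grown batches:
print(round(mean_defect_distance_nm(0.4e12), 1))   # ≈ 15.8 nm (paper: 15.7 nm)
print(round(mean_defect_distance_nm(0.25e12), 1))  # = 20.0 nm (paper: 20 nm)
```

The same relation reproduces the irradiated-sample values: 1.4 × 10^12 cm^-2 gives ≈ 8.5 nm and 2.4 × 10^12 cm^-2 gives ≈ 6.5 nm, matching the distances quoted later in the text.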
Experiments were performed under ambient conditions (21 °C and ∼30% humidity) using a custom-made AFM. Prior to fatigue testing, we conducted regular indentations at the center of the drumheads. From these indentation curves, we calibrated the force and indentation as shown in Figure 1b and discarded slipping of the MoS2 at high loading force. We also estimated the elastic modulus of the MoS2 layers and the residual stress in the membranes, yielding values of 200 N/m and ∼0.15 N/m, respectively (see SI3 for details). Subsequently, we applied a static force (with a corresponding static stress σ_DC) at the center of the suspended membrane and oscillated the AFM probe at a prefixed amplitude around the static load at a frequency of 100 kHz, inducing a dynamic stress σ_AC. We maintained these conditions until fracture. We detected fatigue failure by observing an abrupt increase in the cantilever deflection and a sudden increase in the cantilever amplitude, as shown in Figure 1c,d. We confirmed the membrane failure using AFM images acquired after this event (SI4, fatigue protocol).

Prior to conducting fatigue tests, we indented numerous as-grown MoS2 monolayer drumheads until they fractured. This allowed us to determine the fracture force of the membranes. Then, we estimated the ultimate breaking strength σ_F using [20]

σ_F = (F_break E_2D / 4π R_tip)^{1/2},

where E_2D is the two-dimensional Young's modulus, F_break is the fracture force, and R_tip is the radius of the indentation tip. This expression ignores nonlinear elasticity, and the derived value is known to overestimate the strength by about 10%; however, it has been widely used in the literature [21,22]. Our measurements yielded an average breaking strength of ⟨σ_F⟩ = 17 ± 1 GPa, with no dependence on the residual stress of each membrane (data in SI5), consistent with previous studies [11,23].
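The point-load strength estimate described above, σ_F = √(F_break·E_2D/(4π·R_tip)), the standard expression for centrally indented circular membranes, can be evaluated directly. In the sketch below the fracture force and tip radius are illustrative placeholders (not values reported in the text), and the 2D stress is converted to GPa via the commonly used MoS2 monolayer thickness of 0.65 nm:

```python
import math

def breaking_strength_gpa(f_break_nN, e2d_N_per_m, r_tip_nm, thickness_nm=0.65):
    """sigma_F = sqrt(F_break * E_2D / (4*pi*R_tip)) gives a 2D stress in N/m;
    dividing by the monolayer thickness converts it to a 3D stress in GPa."""
    f = f_break_nN * 1e-9          # N
    r = r_tip_nm * 1e-9            # m
    sigma_2d = math.sqrt(f * e2d_N_per_m / (4 * math.pi * r))  # N/m
    return sigma_2d / (thickness_nm * 1e-9) / 1e9              # GPa

# Illustrative numbers: E_2D = 200 N/m is from the text; the 150 nN fracture
# force and 20 nm tip radius are assumptions for this example.
print(round(breaking_strength_gpa(150, 200, 20), 1))  # ≈ 16.8 GPa
```

With these assumed inputs the result lands close to the measured ⟨σ_F⟩ = 17 GPa, which is only meant to show that the magnitudes are mutually consistent.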
In what follows, all strength values will be normalized to the average ultimate breaking strength of as-grown monolayer MoS2, i.e., 17 GPa. As shown in Figure 2a, our data are well described by a two-parameter (m, ⟨σ_F⟩) nanoscale Weibull distribution [24,25],

P_f(σ_F) = 1 − exp[−(σ_F/⟨σ_F⟩)^m],

where σ_F is the fracture strength measured for each indentation and ⟨σ_F⟩ is the average value of all σ_F measured in the experiments.

We characterized two batches of as-grown drumheads with the above-mentioned native defect densities, finding Weibull modulus values of m = 14 and m = 25 for ⟨l_d⟩ = 15.7 nm and ⟨l_d⟩ = 20 nm, respectively. The Weibull modulus describes the variability in material strength and, in bulk materials, is used as an indicator of mechanical reliability. Although the direct applicability of this analysis to nanostructures still has some limitations (detailed discussion in SI6), we compared our results to those reported previously. Our Weibull modulus is lower than the typical values for metals (m ∼ 100) and that reported for graphene (m ∼ 16-44) [8,21]. However, it is higher than that of the best engineered ceramics (m ∼ 10) and similar to that reported for as-grown MoS2 in a very recent study (m ∼ 22) [26].
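The Weibull modulus can be extracted from a set of measured strengths via the standard probability-plot linearization, ln(−ln(1−P)) = m·ln σ − m·ln σ0, so the modulus is the slope of a straight-line fit. A minimal sketch with synthetic data (NumPy assumed available; the sample size and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
m_true, sigma0 = 14.0, 17.0                        # Weibull modulus and scale (GPa)
sigma_f = sigma0 * rng.weibull(m_true, size=200)   # synthetic breaking strengths

# Weibull probability plot: ln(-ln(1-P)) is linear in ln(sigma) with slope m.
s = np.sort(sigma_f)
p = (np.arange(1, len(s) + 1) - 0.5) / len(s)      # median-rank style estimate
slope, intercept = np.polyfit(np.log(s), np.log(-np.log(1 - p)), 1)
print(round(slope, 1))  # estimated Weibull modulus, close to the true m = 14
```

With a few tens of breaking events per batch, as in the experiments, the scatter on the recovered modulus is correspondingly larger, which is one reason the SI6 caveats matter.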
We performed fatigue characterization by applying σ_DC and σ_AC and measuring the number of cycles of drumhead survival before failure. We used Goodman diagrams to visualize the fracture statistics. Figure 2b shows our data for as-grown monolayer MoS2, where black dots represent membranes that did not break after 4.3 × 10^9 loading cycles and white dots represent those that failed right after reaching the load conditions. From this plot, we extracted stress-number of cycles (S−N) graphs at constant σ_AC with varying σ_DC, and at constant σ_DC with varying σ_AC. These results are depicted in Figure 2c,d, respectively, where the fatigue life of MoS2 is shown to be strongly dependent on both σ_DC and σ_AC.

Our data reveal a fatigue strength of 0.8⟨σ_F⟩ at 10^9 cycles. These results place high-quality CVD-grown MoS2 as one of the best materials in terms of dynamic fatigue response, with a high level of survival, one order of magnitude higher than those of high-strength steels in absolute and relative terms. The best alloys show a fatigue endurance of about 0.5⟨σ_F⟩, corresponding to 0.5 GPa for the case of steel. As-grown MoS2 also far exceeds the fatigue lifetime of other nanostructures such as Si nanobeams [27,28]. Comparable values to those reported here have been recently reported only for graphene [8]. It is worth noting that our normalized S−N plots superpose those of graphene (see SI7).
We can also define Goodman lines from our data. These lines delimit the regions where the membranes do not break after a certain number of fatigue cycles and are commonly expressed through the linear Goodman relation [29], where the parameter σ_lim is the maximum σ_AC that the material can withstand without breaking when σ_DC = 0, and c is known as the safety factor, which indicates how many times a component is safer than required for a given use [30]. For the case of 4.3 × 10^9 cycles, we obtain σ_lim = 0.45⟨σ_F⟩ (7.7 GPa) and c = 1.5. Goodman lines for 4.3 × 10^9, 10^9, and 10^8 cycles, depicted in Figure 2e, show a very high tolerance to a large number of cycles, a characteristic that is otherwise only achieved by metal alloys.

Scanning electron images of the membranes fractured by fatigue tests showed micrometer-length tears with straight and sharp edges (starting at the center of the drumhead and reaching the walls of the wells) and crack propagation along crystallographic directions, indicating global and catastrophic failure (images provided in SI8). For those drumheads that survived 4.3 × 10^9 cycles, AFM topography images acquired after fatigue testing did not show any evident change. Moreover, subsequent indentations also yielded breaking strengths and elastic responses similar to those of the nonirradiated membranes. Since the strength of two-dimensional materials is highly dependent on the size of defects [11], this result suggests that the dimensions of flaws in the most strained region (under the tip) rarely undergo significant alterations during the fatigue process, and points toward an abrupt atomistic mechanism of fatigue without progressive damage. It also establishes dynamic fatigue proof testing as a noninvasive approach for high-reliability sample selection.

We expanded our dynamic fatigue study to incorporate static loading conditions, a key factor in determining the service life of materials. These results are included in Figure 2f.
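A Goodman safety check can be sketched numerically. The sketch below assumes the textbook linear form with a safety factor, σ_AC/σ_lim + σ_DC/⟨σ_F⟩ ≤ 1/c; the exact algebraic form fitted in the experiments may differ, and the function name and test points are ours:

```python
def goodman_safe(sigma_dc, sigma_ac, sigma_lim, sigma_f_mean, c=1.0):
    """True if (sigma_DC, sigma_AC) lies inside the Goodman safety region.
    Assumed linear form: sigma_AC/sigma_lim + sigma_DC/<sigma_F> <= 1/c."""
    return sigma_ac / sigma_lim + sigma_dc / sigma_f_mean <= 1.0 / c

# Using the 4.3e9-cycle parameters quoted in the text (sigma_lim = 0.45<sF>,
# c = 1.5), with stresses normalized to <sigma_F> = 1:
print(goodman_safe(0.3, 0.1, sigma_lim=0.45, sigma_f_mean=1.0, c=1.5))  # True
print(goodman_safe(0.6, 0.2, sigma_lim=0.45, sigma_f_mean=1.0, c=1.5))  # False
```

The check is deliberately dimensionless: because all stresses in the paper are normalized to ⟨σ_F⟩, the same function works in GPa as long as all four stress arguments use the same units.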
In classical fracture mechanics, fatigue cracks start at the site of the highest local stress in a device [1]. In macrostructures, this usually happens at holes or notches; in microstructures, at inclusions, voids, cavities, or scratches. For highly crystalline, atomically thin materials such as TMDCs, imperfections in the atomic lattice are the expected root cause of fatigue initiation. The most common atomic defects in MoS2 are single sulfur vacancies [19,31,32], which are inherent to any large-scale production method due to their low defect formation energy [33]. Single sulfur vacancies reduce the strength of MoS2 and increase its fracture toughness [11]. However, the influence of defects on fatigue lifetime is still unexplored. In what follows, we report our results on this topic.

We produced MoS2 samples with a controlled type and density of defects by irradiating samples with Ar+ ions at 500 eV at perpendicular incidence, using different irradiation doses. The techniques used to characterize the samples are described in our previous study [11] and in SI2. In summary, irradiation generated homogeneous densities of vacancies, mainly sulfur monovacancies (∼80% of created defects), with smaller percentages of single Mo vacancies and double sulfur vacancies. Consecutive doses resulted in higher defect densities. We estimated defect densities of 0.4 × 10^12 cm^-2 for the as-grown sample and 1.4 × 10^12 and 2.4 × 10^12 cm^-2 for two consecutive irradiations, corresponding to mean defect distances of ⟨l_d⟩ = 15.7 nm, ⟨l_d⟩ = 8.6 nm, and ⟨l_d⟩ = 6.5 nm, respectively.

The fatigue response of samples with controlled densities of atomic vacancies is depicted in Figure 3. To enable direct comparison, we also included the results for as-grown MoS2 in these plots. Figure 3a shows that the ultimate breaking strength of the irradiated samples decreased from 17 GPa for the as-grown samples to 10 and 9.7 GPa for the two consecutive irradiation doses, as previously reported [11].
Weibull plots in Figure 3b show that the introduction of atomic vacancies decreases reliability, lowering the Weibull modulus from m ∼ 14 for the as-grown samples to m ∼ 11 and m ∼ 8 for the two irradiated batches, respectively, a clear trend. However, it should be noted that the decrease in Weibull modulus measured by nanoindentation cannot be directly extrapolated to globally stressed samples (see SI6 for a detailed discussion).

The Goodman diagram in Figure 3c summarizes the results of dynamic fatigue tests conducted on both pristine and irradiated drumheads. The measurements obtained at constant σ_DC, enclosed in the green ellipse of Figure 3c, are presented in Figure 3d, with similar plots available in SI9. Linear fits are drawn as continuous lines. Despite the dispersion of the experimental data for irradiated samples, due to the stochastic nature of brittle failure in MoS2, the slopes and y-axis intercepts of the linear fits in Figure 3d reveal a robust trend, as shown in Figure 3e. The y-axis intercept indicates the maximum sustainable value of σ_AC, which decreases from 12 to 6 and 3 GPa as the defect density increases. The slope reflects the sensitivity of fracture strength to the number of cycles, i.e., the change in survival stress per order of magnitude in the number of cycles. Figure 3f illustrates the lifespan of the samples at a fixed σ_AC of 0.125⟨σ_F⟩ (encircled by the yellow ellipse in Figure 3c). It is evident from the plot that the number of survival cycles at constant σ_AC decreases with increasing defect density. Again, the slopes and y-axis intercepts are presented in Figure 3g, demonstrating robust trends, as seen in Figure 3e,g. These trends permit extrapolation of the fatigue response for densities of atomic vacancies within the standard range for MoS2 produced by scalable methods. Interestingly, we did not observe a fatigue endurance limit for either pristine or defective drumheads.
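The slopes and intercepts plotted in Figure 3e,g come from straight-line fits of survival stress against the logarithm of the cycle number. A minimal sketch of such an S−N fit, with made-up (not measured) data points:

```python
import numpy as np

# Hypothetical S-N data: survival stress (GPa) vs number of cycles.
# These four points are illustrative only, not values from the experiments.
cycles = np.array([1e6, 1e7, 1e8, 1e9])
stress = np.array([14.0, 13.2, 12.4, 11.6])

# Fit stress as a linear function of log10(N): the slope is the change in
# survival stress per decade of cycles, the intercept is sigma at N = 1.
slope, intercept = np.polyfit(np.log10(cycles), stress, 1)
print(round(slope, 2), round(intercept, 2))  # -0.8 18.8
```

The absence of a plateau in such fits, at any defect density, is what the text means by "no fatigue endurance limit": the fitted line keeps descending rather than flattening at large N.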
Based on our results, the fracture strength at 10^9 cycles appears to be a suitable comparison point for samples with varying densities of atomic vacancies. As-grown MoS2 displays a fatigue strength of 0.8⟨σ_F⟩ (σ_DC = 13.6 GPa, σ_AC = 0.45 GPa), which decreases to 0.6⟨σ_F⟩ (σ_DC = 10.2 GPa, σ_AC = 0.45 GPa) and 0.5⟨σ_F⟩ (σ_DC = 8.5 GPa, σ_AC = 0.45 GPa) upon the introduction of atomic defect densities of 1.4 × 10^12 and 2.4 × 10^12 cm^-2, respectively, primarily sulfur vacancies. This line of reasoning allows plotting Goodman lines that define safety regions for both as-grown and irradiated samples. Figure 3h illustrates these lines for the as-grown and the most irradiated sample. For the irradiated sample with 2.4 × 10^12 cm^-2 (⟨l_d⟩ = 6.5 nm), Goodman lines at 4.3 × 10^9 cycles yield σ_lim = 0.27⟨σ_F⟩ (4.6 GPa) and c = 1.4. These safety lines, even for irradiated samples, demonstrate the high fatigue resistance of MoS2 relative to the best bulk materials, such as high-strength steel. Goodman lines for steel at 10^7 cycles lie in the range of hundreds of MPa, implying an improvement of two orders of magnitude in the number of cycles and one order of magnitude in typical stresses.
In Figure 3i, we observe a trend of diminishing breaking strength over extended periods of static loading, with decreasing breaking stresses for increasing defect density. This observation points to the influence of thermal fluctuations under ambient conditions, which mimic the effects of small-amplitude stress cycles but at a much higher frequency. For a comparative perspective, considering a characteristic phonon frequency of 10^13 Hz for thermal fluctuations, achieving membrane failure through thermal fluctuations at a given σ_DC/⟨σ_F⟩ would require 10^8 more cycles than those induced by σ_AC = 0.05⟨σ_F⟩, as expected for the picometer-sized fluctuations caused by phonons at room temperature. This outcome aligns with prior research, corroborating that thermal fluctuations, while exerting a lesser impact than induced cycling, can indeed contribute to the rupture of covalent bonds when the applied stress is below the fracture threshold [8,26].

The failure time τ of a material under an applied stress σ was described decades ago for polymers by Zhurkov [34],

τ = τ_0 exp[(U_0 − γσ)/(kT)],

where τ_0 is the reciprocal of the natural frequency of the atoms (about 10^13 Hz), U_0 is the average energy required to break atomic bonds, and γ is a coefficient that translates stress into energy and decreases proportionally with disorder. By fitting the data to this expression, we obtained an average binding energy of 190 kJ/mol for the as-grown samples, comparable to the value of 160 kJ/mol for sulfur bonds in bulk MoS2, suggesting that this empirical model can also be extrapolated to covalent materials. As expected, we also found that γ and U_0 decrease with increasing defect density.
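Zhurkov's expression is easy to evaluate numerically. In the sketch below, U_0 = 190 kJ/mol and τ_0 ≈ 10^-13 s come from the text; the value of γ is an assumed placeholder (the fitted γ values are not quoted), so the absolute lifetimes are illustrative and only the steep stress dependence is the point:

```python
import math

R = 8.314     # gas constant, J/(mol K), since U0 is given per mole
TAU0 = 1e-13  # s, reciprocal of the atomic vibration frequency (~1e13 Hz)

def zhurkov_lifetime(stress_gpa, u0_kj_mol, gamma_kj_mol_per_gpa, t_kelvin=294.0):
    """Zhurkov kinetic lifetime: tau = tau0 * exp((U0 - gamma*sigma) / (R*T)).
    gamma is expressed here in kJ/(mol GPa); its value below is an assumption."""
    u = (u0_kj_mol - gamma_kj_mol_per_gpa * stress_gpa) * 1e3  # J/mol
    return TAU0 * math.exp(u / (R * t_kelvin))

# Lifetime drops by orders of magnitude for a modest stress increase:
print(zhurkov_lifetime(13.0, 190, 10) > 1e3 * zhurkov_lifetime(15.0, 190, 10))  # True
```

This exponential sensitivity is why, in Figure 3i, log-lifetime is linear in the applied static stress, and why a reduced γ or U_0 at higher defect density shifts the whole line down.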
Scanning electron microscopy images of irradiated samples after fatigue failure also showed tears propagating to the edges of the wells. A batch of samples was subjected to fatigue testing almost reaching their expected failure conditions, according to the graphs depicted in Figure 3d,f. These membranes showed neither changes in elastic response nor obvious topographic changes after fatigue testing. However, regular AFM topographic images acquired in dynamic mode do not provide enough resolution to resolve atomic-scale processes. To gain further insight into the atomistic mechanism of fatigue, we also acquired lateral force microscopy (LFM) images of some membranes before and after fatigue testing (see SI10 for conditions). LFM has been shown to resolve single atomic defects when applied to 2D materials [35]; vacancy-type defects in MoS2 appear in LFM images as high-friction regions. Figure 4a,b depicts LFM images of a MoS2 drumhead with an induced defect density of 1.4 × 10^12 cm^-2 (i.e., ⟨l_d⟩ = 8.6 nm), where defects appear as darker regions in the image. This membrane was subjected to fatigue testing approaching its expected failure conditions (5 × 10^6 cycles with σ_DC = 0.7⟨σ_F⟩ and σ_AC = 0.2⟨σ_F⟩) and subsequently imaged again by LFM under the same conditions and using the same AFM probe, within a time interval of a few hours after fatigue testing. For this membrane we did observe at least one detectable change. As highlighted in Figure 4c,d, after fatigue testing we observed a dark feature that revealed the emergence of a multiatomic defect. Among the seven membranes measured using this protocol, only one (that depicted in Figure 4) showed detectable changes. Although LFM can provide atomic resolution and resolve individual vacancies in small images, it is very difficult to account for these defects across the entire suspended membrane. The darker regions in these images are likely double sulfur or molybdenum vacancies rather than sulfur monovacancies. Despite this limitation, our results support previous molecular dynamics simulations [8] in which failure upon fatigue in graphene is shown to be preceded by stress-mediated bond reconfiguration at vacancy defects and clustering of atomic vacancies into multiatomic ones. The fact that we only observe such changes in a reduced number of membranes also supports the idea of an abrupt atomistic mechanism of fatigue, very different from the progressive damage observed in conventional materials.

Very recent molecular dynamics simulations concluded that the reliability of MoS2 results from a cooperative effect of three major ingredients: defect configuration, defect density, and thermal fluctuations [26]. Our results quantify the influence of the density of atomic defects and point toward a non-negligible influence of thermal fluctuations under static loading. The influence of defect configuration cannot be directly derived from the present results, but we envision the creation of different kinds of atomic defects, such as multivacancies by Ga irradiation in a focused ion beam [11], or the controlled passivation of atomic vacancies [36], to further explore the relevance of defect configuration.
Summarizing, by means of nanomechanical indentations with an AFM tip, we evaluated the mechanical reliability, dynamic and static fatigue lifespan, and safety regions of monolayer MoS2. The controlled introduction of atomic vacancies allowed a systematic study of these magnitudes as a function of defect content. We observe that the mechanical reliability of MoS2 decreases as defects are introduced. Dynamic fatigue testing places MoS2 among the best materials, showing ultrahigh dynamic fatigue strength, with a strain tolerance at 10^9 cycles and fatigue safety lines achieved before only by graphene and metal alloys. This tolerance decreases with defect introduction, but even the most defective samples evaluated here still exhibit a fatigue response and safety lines comparable to metal alloys. We also provide insights into the atomistic mechanism of fatigue, indicating sudden atomic reconfiguration before global failure. The results presented here, together with previous works reporting improved fracture toughness upon controlled defect creation [11],
Figure 1. (a) AFM image of a representative MoS2 microdrum. (b) Force vs indentation curve on a MoS2 microdrum where the DC and AC forces are marked, along with the corresponding indentation. (c) Upper panel: illustration of an AFM tip indenting a microdrum. Lower panel: sharp decrease in the deflection of the cantilever at the fracture point. (d) Representative data observed near the fracture point. Before failure, the cantilever amplitude is low as a consequence of the reacting force of the suspended membrane. After failure, the cantilever amplitude increases to the free-oscillation amplitude.

Figure 2. (a) Survival probability of MoS2 drumheads at different stresses and corresponding Weibull fitting for the two batches of as-grown samples. (b) Goodman diagram representing the applied static (horizontal axis) and dynamic (vertical axis) stress, normalized to the mean breaking strength. Black circles correspond to drumheads that survived after 4.3 × 10^9 cycles, gray circles represent those that fractured between 1 and 4.3 × 10^9 cycles, and white dots represent those that broke just after reaching the DC load. (c) S−N diagram with varying σ_DC at two different σ_AC of 0.125⟨σ_F⟩ and 0.025⟨σ_F⟩. (d) S−N diagram of microdrums at σ_DC of 0.6⟨σ_F⟩ and 0.45⟨σ_F⟩ with varying σ_AC. (e) Goodman lines for microdrums that did not fracture (solid line, σ_lim = 0.45⟨σ_F⟩, c = 1.5), for 10^9 cycles (dashed line, σ_lim = 0.26⟨σ_F⟩, c = 1), and for 10^8 cycles (dotted line, σ_lim = 0.36⟨σ_F⟩, c = 1.1). Note that, for nonbroken microdrums, we only provide an upper limit. (f) Static fatigue for as-grown MoS2. Red dots: microdrums that broke in less than 12 h. Gray dots: microdrums that did not break after 12 h of static loading.

Figure 3.
(a) Breaking strength as a function of the mean distance between defects. (b) Survival probability and Weibull fitting for as-grown drumheads and those subjected to two consecutive irradiation doses. (c) Goodman diagram showing all results for as-grown and irradiated drumheads. Green and yellow regions indicate the data selected for the plots in panels d and f, respectively. (d) Data obtained at a constant σ_DC of 0.45⟨σ_F⟩ with varying σ_AC. (e) Intercept and slope of the linear fittings in panel d. (f) Data obtained at a constant σ_AC of 0.05⟨σ_F⟩ with varying σ_DC. (g) Intercept and slope of the linear fittings in panel f. (h) Goodman lines for drums that did not break (solid lines), those that failed at 10^9 cycles (dashed lines), and microdrums fractured at 10^8 cycles, for the as-grown (black) and irradiated sample with ⟨l_d⟩ = 6.5 nm (blue). (i) Static fatigue lifetime of microdrums with different defect densities. Solid lines are linear fits to the data of the corresponding color.

Figure 4. (a) 1024 × 1024 pixel LFM image of a MoS2 drumhead with a defect density of 1.4 × 10^12 cm^-2 (i.e., ⟨l_d⟩ = 8.6 nm). (b) LFM image of the region marked with a blue square in panel a. Yellow ellipses encircle regions that allowed localizing the desired region before and after fatigue testing and highlight regions where changes were not observed. The green circle guides the eye to where the change was observed. (c) 1024 × 1024 pixel LFM image of the same drumhead shown in panel a after performing fatigue testing for 4 × 10^6 cycles with σ_DC of 0.7⟨σ_F⟩ and σ_AC of 0.2⟨σ_F⟩. (d) LFM image of the region marked with a blue square in panel c. As with every scanning probe microscopy technique, LFM images are highly dependent on the precise atomic state of the tip apex; this accounts for slight deviations between pre- and post-fatigue images (usually changes in the contrast and position of features). However, the darker region emerging between the images in panels c and d cannot be ascribed to tip changes.
provide a clear understanding of how atomic defects in monolayer MoS2 influence its mechanical resilience.

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.nanolett.3c02479: SI1, CVD growth of MoS2; SI2, photoluminescence and Raman spectra of MoS2; SI3, elastic characterization by nanoindentation prior to fatigue testing; SI4, fatigue protocol; SI5, dependence of breaking strength on the prestress of the samples; SI6, Weibull distribution; SI7, comparison of the fatigue response of MoS2 and graphene; SI8, scanning electron microscopy images of MoS2 membranes fractured by fatigue; SI9, complementary S−N plots; SI10, lateral force microscopy details (PDF).
Carriage of extended-spectrum beta-lactamase-producing Enterobacteriaceae by healthy school children from two remote villages in western Cameroon

Background: High carriage rates of extended-spectrum beta-lactamase (ESBL)-producing Enterobacteriaceae have already been reported among healthy community children, which increases the risk of developing pathological infections. Since children are the population most exposed, owing to a lack of hygiene knowledge, determining their carriage prevalence can help limit the development or progression of such pathologies. The objective of this study was to determine the prevalence of carriage of ESBL-producing Enterobacteriaceae among children in remote villages of western Cameroon, where healthcare structures are absent and antibiotic consumption is rare.

Methods: A total of 110 fresh stool samples were collected from 110 healthy primary school children aged 2 to 5 years from two remote villages. After screening on ESBL-selective agar media, Enterobacteriaceae were identified using the Api 20E gallery. Antibiotic susceptibility was investigated using the disc diffusion technique, and ESBL production was determined using the double-disc synergy test. The chi-square test was used for comparisons.

Results: The children had no history of hospitalization and had not received antibiotic treatment in the three months prior to this study. Data analysis indicated a 22% carriage rate of ESBL-producing Enterobacteriaceae among school children. Overall, 24 (67%) of 36 isolates were ESBL producers, 15 (61%) of the 24 being Escherichia coli. Other ESBL-producing bacteria were Klebsiella pneumoniae (3%) and Kluyvera spp. (3%). We also isolated a small proportion of bacteria exhibiting high-level cephalosporinase-mediated resistance, which overall represented 33% of the total bacterial isolates.
Conclusions: The high carriage rate of ESBL-producing Enterobacteriaceae in children from isolated villages devoid of healthcare structures highlights the risk of resistance transmission between pathogenic and non-pathogenic bacteria. This study also indicates that farming conditions can induce resistance. The current results may contribute to designing a therapeutic policy to curtail the emergence of ESBL-producing Enterobacteriaceae in remote villages in western Cameroon.

Introduction

The resistance of Gram-negative bacilli to antibiotics is considered a global healthcare challenge due to limited treatment options and is also associated with high mortality [1,2,3]. The production of extended-spectrum β-lactamases (ESBL) is a major mechanism by which Enterobacteriaceae species acquire antibiotic resistance [4]. Whereas infections with multidrug-resistant organisms were initially associated with the hospital environment, there is now increasing evidence of high carriage rates of ESBL-producing microorganisms in community settings [5,6]. It is therefore evident that communities are becoming important reservoirs of antibiotic-resistant bacteria. Recent investigations suggest that ESBL-producing Escherichia coli strains are the Enterobacteriaceae responsible for community infections [7,8,9], and their prevalence is increasing in resource-limited countries where infectious diseases, poverty, and malnutrition are endemic [10].
Whether the infection is acquired in hospital or in the community, the digestive tract is the main reservoir from which Enterobacteriaceae originate [11,12]. Moreover, the digestive tract is where resistance genes are exchanged between bacteria and where antibiotic treatment selects for the overgrowth of resistant bacteria [5,13]. Intestinal carriage of such bacteria is common in resource-limited countries due to poverty and poor hygiene conditions [14]. Colonized persons are therefore at risk of subsequent infection [5,15,16], and this impacts the prevalence of ESBL-producing Enterobacteriaceae among adults in rural Africa, where hygiene is almost inexistent [17]. Moreover, a high prevalence of fecal carriage of ESBL-producing Enterobacteriaceae is also observed among children living in rural Africa, where poverty is high and hygienic conditions are lacking [17,18,19], and this is therefore one of the reasons for the higher mortality observed among children in rural Africa.

Given the above, we set out to determine the prevalence of carriage of ESBL-producing Enterobacteriaceae among school children from two remote villages in western Cameroon. In these villages basic healthcare structures are absent; the absence of prior hospitalization and of antibiotic treatment during the last three months were the main selection criteria. Recovered Enterobacteriaceae isolates were tested for susceptibility to relevant antibiotic classes. Data analysis indicated 22% carriage of ESBL-producing Enterobacteriaceae among the investigated children. The data gathered in this study are of paramount importance, since they may contribute to designing strategies to curtail the emergence and spread of ESBL-producing Enterobacteriaceae among children in rural Africa and to devising innovative therapeutic approaches against multidrug-resistant organisms.
In addition, ethic approval for the current study was given by the "Université des Montagnes" Ethical Committee (Autorization N°2017/087/UdM/PR/CAB/CIE).Written informed consent was obtained from the parents or guardian on behalf of all the children enrolled in the study. Study setting and population This prospective study was performed between October 2017 and July 2018 in two primary schools (Moineaux de Bafou Ballefer and oiselets de Nzi) of the Bafou village near Dschang, the largest city of the Menoua subdivision and in one primary school (École publique projet route du Noun 2) of a remote village (route du Noun) near Bangangté, the largest city of the Ndé subdivision.Healthy children between 2 to 5 years old were included (n = 110).A standardized questionnaire was performed for collection of demographic information on children (age, gender, antibiotic treatment during the last 3 months and never been hospitalized). Sample collection and bacterial isolation A freshly emitted stool specimens from each child and contained in the coproculture pots was stored in icebox and send to the laboratory microbiology at the ¨Clinique Universitaire des Montagnes (CUMs)¨for analysis.Fecal specimen from each child was collected and cultured on MacConkey agar within six hours as follow.A total of 0.5 g of fecal sample was suspended in 5 mL of sterile 0.9 % saline.Each suspension was seeded on McConkey agars supplemented with céfotaxime a 1mg/L in order to select the Enterobacteriaceae resistant to third-generation cephalosporins (3GC).After seeding, the plates were incubated for 48h at 37°C.One colony representing each distinct colonial morphotype was isolated from supplemented MacConkey agar and further analyzed by gram coloration and oxidase test.Bacilli gram-negative and negative in oxidase test were seeded on a nutrient agar and incubated for 24 hours at 37 o C.After 24 hours we collected the colonies and prepared a suspension having a turbidity equivalent to that of the 
0.5 McFarland standard, for biochemical identification and antibiotic susceptibility testing.

Biochemical identification of Enterobacteriaceae
Biochemical identification was carried out according to the manufacturer's recommendations using the API 20E gallery (bioMérieux, Marcy l'Étoile, France), a standardized system for the identification of Enterobacteriaceae.

Antimicrobial susceptibility testing
Susceptibility tests were carried out by the Kirby-Bauer disk diffusion method using 15 conventional antibacterial agents commonly used in Cameroon. In short, testing was conducted on 24 h pure bacterial cultures obtained by streaking the isolates on fresh nutrient agar and incubating overnight aerobically at 37°C. From the resulting growth, a suspension adjusted to the density of a McFarland 0.5 turbidity standard was prepared in 0.9% saline, at the opacity recommended for susceptibility testing by the agar diffusion technique on Mueller-Hinton agar. Test procedures and interpretations were done according to the standard guidelines of the "Comité de l'Antibiogramme de la Société Française de Microbiologie" [20]. Discs of 30 µg were used for amoxicillin, cefoxitin, cefotaxime, ceftazidime, and nalidixic acid. Discs of 10 µg were used for cefepime and ertapenem, and discs of 5 µg for gentamicin, kanamycin, amikacin, ciprofloxacin, and ofloxacin. A fosfomycin disc was used at 50 µg. The combinations trimethoprim-sulfamethoxazole and amoxicillin/clavulanic acid were used at 23.75/1.25 µg and 20/10 µg, respectively. Escherichia coli ATCC 25922 (American Type Culture Collection, Manassas, Virginia, USA) was used as the reference strain for quality control.
Phenotypic screening for ESBL-producing Enterobacteriaceae
The detection of ESBL production in Enterobacteriaceae was performed using the double-disc synergy test as previously described [21]. Briefly, an amoxicillin-clavulanic acid (20/10 µg) disc was placed at the center of a Mueller-Hinton agar plate. Cefotaxime, ceftazidime, and cefepime discs were placed around it at a distance of 3.0 centimeters (cm) from the center. Enhancement of the inhibition zone (in the form of a "champagne cork") towards the clavulanic acid-containing disc after incubation at 37°C for 24 hours was indicative of a potential ESBL-positive Enterobacteriaceae.

Statistical analysis
Statistical analysis was performed using EPI Info version 7.1.3.3 software (USD, Inc., Stone Mountain, GA, USA). The chi-square test was used to compare proportions. Univariate and multivariate analyses were performed using logistic regression. The multivariate analysis of characteristics associated with carriage of ESBL-producing Enterobacteriaceae included the following variables: sex, age, parents' level of education, underweight, stunting, wasting, and use of antibiotics. A P-value <0.05 was considered statistically significant.

Demographic characteristics of the school children
Of the 110 school children enrolled, 36.36% (40 children) were from the École publique projet route du Noun 2 in the Ndé subdivision and 63.64% (70 children) were from the Moineaux de Bafou Ballefer and Oiselets de Nzi schools in the Menoua subdivision. The children's ages ranged from 2 to 5 years; 41% (45 children) were male and 59% (65 children) were female. In addition, all parents reported that their children had never been hospitalized and had not taken antibiotics during the 3 months prior to the study.
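As an illustrative check, the chi-square comparison of proportions described in the Statistical analysis section can be reproduced with a short pure-Python script. The counts used below (17 of 70 culture-positive children in the Menoua subdivision, 14 of 40 in the Ndé subdivision, and an assumed 24 carrier children out of 110 behind the reported 22% figure) are taken from the Results; the hand-rolled chi-square (without continuity correction) is a sketch of the kind of computation EPI Info performs, not the authors' exact procedure.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # upper-tail p-value for 1 degree of freedom
    return chi2, p

# Culture-positive vs culture-negative children, by subdivision (from the Results)
menoua_pos, menoua_neg = 17, 70 - 17
nde_pos, nde_neg = 14, 40 - 14

chi2, p = chi2_2x2(menoua_pos, menoua_neg, nde_pos, nde_neg)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Headline carriage figure, assuming 24 carrier children among the 110 enrolled
carriage = 24 / 110 * 100
print(f"ESBL carriage = {carriage:.1f}%")
```

On these counts the difference between subdivisions is not significant at the 0.05 level, and 24/110 reproduces the reported 22% carriage to within rounding.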
Bacterial isolation and identification
Screening of the fecal flora of the 110 school children resulted in 31 positive cultures, each indicating the presence of at least one bacterial strain. Among these, 17 (15%) came from fecal samples collected in the Menoua subdivision (Moineaux de Bafou Ballefer and Oiselets de Nzi) and 14 (13%) from samples collected in the Ndé subdivision (École publique projet route du Noun 2). In total, selection on cefotaxime allowed the isolation of 36 strains, of which 64% were E. coli. The other bacterial species identified were K. pneumoniae (9%), E. sakazakii (6%), S. liquefaciens (6%), Kluyvera spp. (6%), E. agglomerans (6%), and, least represented, E. intermedium (3%) (Table 1). Further analysis showed that some children were colonized by more than one microorganism.

Table 1: Distribution of Enterobacteriaceae strains isolated from school children's stools.

Antimicrobial susceptibility and carriage of extended-spectrum beta-lactamase (ESBL)-producing Enterobacteriaceae
The disc diffusion test is a method that confirms the presence of ESBL-producing Enterobacteriaceae [22]. In the current investigation, 15 different antibiotic discs were used, and the results of these tests are presented in Table 2.
Data analysis showed that all bacteria isolated from feces were susceptible to ertapenem and resistant to amoxicillin. In addition, these bacteria showed a high level of resistance to ciprofloxacin (90%) and ofloxacin (81%). Resistance to kanamycin (43%), amikacin (42%), and gentamicin (45%) was also observed (Table 2). Based on the antimicrobial susceptibility tests, 24 of the 36 isolated Enterobacteriaceae (67%) were ESBL-producing. Of these 24, 17 (47% of all isolates) were from samples collected in the Menoua subdivision and 7 (20% of all isolates) from samples collected in the Ndé subdivision. Escherichia coli was the most abundant Enterobacteriaceae isolated with an ESBL phenotype (61%). Other ESBL-producing species were Klebsiella pneumoniae (3%) and Kluyvera spp. (Table 3). Finally, additional analysis of the data collected from participants indicated that 22% of the community children analyzed were carriers of ESBL-producing Enterobacteriaceae.

Phenotypic characterization of the isolated Enterobacteriaceae
Further analysis of the antibiotic susceptibility data showed that some of the isolated Enterobacteriaceae displayed a high-level cephalosporinase (HLC) phenotype (33%). In contrast, none of the isolates displayed a carbapenemase phenotype (0%). The majority of the isolated E. coli displayed an ESBL phenotype (61%), while only 3% displayed the HLC phenotype. The only other microorganism displaying both HLC and ESBL phenotypes was Kluyvera spp. The distribution of resistance phenotypes displayed by the Enterobacterial isolates is shown in Table 3. […] of genetic elements responsible for the resistance [36]. In this study, we did not carry out extensive molecular characterization to determine the genotype of each isolate.
However, it has been demonstrated that the CTX-M-15-like genotype is the dominant CTX-M enzyme among carriers worldwide [28]. Therefore, the CTX-M-15-like genotype might be the one present in the ESBL-positive isolates recorded in our community. In addition, SHV-type ESBLs might also be considered possible genotypes of the ESBL-positive isolates.

The main ESBL-producing Enterobacteriaceae species isolated in this study was E. coli, which is the most frequently reported Enterobacteriaceae in hospital-based [37,38,39] and community-based [40,41] studies in other African countries. Although the current study found a high prevalence of ESBL-producing Enterobacteriaceae among healthy community children in a remote region of western Cameroon, the clinical impact of multidrug-resistant bacteremia has yet to be investigated. Our study appears to be the only one conducted in Cameroon targeting children in remote villages where the use of antibiotics is rare. The clinical consequences of ESBL bacteremia in such remote regions remain to be evaluated, as does the impact of these multi-resistant infections on childhood mortality in the region. These results should encourage health authorities to investigate whether multidrug-resistant bacteremia is among the causes of death in children aged 0-5 years recorded in remote villages of rural Cameroon.

Conclusion
Our study shows the presence of Enterobacteriaceae with a high level of antibiotic resistance among subjects who had, in all probability, never taken antibiotics. This result can be explained by the fact that environmental conditions play a major role in the transmission of resistant Enterobacteriaceae. Therefore, we believe that, on the one hand, farmers must avoid the indiscriminate use of pesticides and herbicides for crop treatment and, on the other hand, everyone in such areas must apply strict hygiene rules to avoid infection.
Table 2: Antibiotic susceptibility rates of Enterobacteriaceae isolated from healthy school children's feces.

Table 3: Distribution of resistance phenotypes of Enterobacterial isolates resistant to third-generation cephalosporins.
LAST MILE LOGISTICS IN THE FRAMEWORK OF SMART CITIES: A TYPOLOGY OF CITY LOGISTICS SCHEMES*
As today's cities are characterized by a particular dynamism, driven by population density and increasing urbanization, urban areas face challenges for city logistics in terms of economic, environmental, and social impact. In particular, debates over last-mile logistics are arising from inefficiencies in delivery cost (half-empty trucks on delivery) and delivery time per parcel (unnecessary waiting and loading periods at multiple stops), while inner-urban areas particularly suffer from traffic congestion, emissions, and noise pollution. In this regard, the smart city, as a concept with the potential to produce sustainable solutions to urban problems, brings with it the need for innovative urban logistics systems to bring the city's conventional distribution channels up to date. The key objective of this paper is the identification of city logistics schemes, highlighting current approaches in smart cities. The study adopts a systemic approach based on a typology of consolidation-distribution schemes in city logistics to assess the feasibility of micro-logistics initiatives from the smart city perspective of mobility, sustainability, and liveability. Through a detailed examination of city logistics dynamics, this study contributes theoretically to the smart city logistics literature as well as practically to the logistics sector.

INTRODUCTION
The smart city concept promises a more liveable city economically, environmentally, and socially, addressing the interconnected challenges among complex city elements (infrastructures, networks, and environments). In particular, with ideas such as solving transportation problems and increasing energy efficiency, the smart city concept symbolizes a new urban utopia driven by IT systems (Pinochet et al., 2019).
According to Cohen (2012), instead of highlighting the relationship of the smart city with the technology sector alone, a comprehensive approach should be encouraged. In this line, the idea of smartness should be integrated across six dimensions (people, economy, environment, living, governance, and mobility) to figure out what a city should look like in the 21st century. Recently, one of the important topics on the smart city agenda has been the need for more sustainable processes in urban freight transport, both in city logistics (macro level) and last-mile logistics (micro level) (Mangiaracina et al., 2019). While city logistics deals with logistics as a whole system (actors, infrastructures, policies, etc.) at the urban scale, last-mile logistics refers to the delivery operations and strategies of the last step of the distribution process in the inner-urban area. Day by day, challenges are emerging for city actors in the urban freight transport ecosystem, for reasons such as: […] 8. Rising trends toward multi-channel distribution (a need for value-added business models and strategies such as micro-logistics networks). Concerning the smart city concept, city logistics should be improved in response to the above-mentioned problems, in line with the vision of the smart city as a merging of ideas to create better urban areas promoting the overall wellbeing of all city actors (Yigitcanlar et al., 2018). Especially in the city logistics framework, the concepts of mobility, sustainability, and liveability can be used to design the structure of the distribution network in line with the aims of a smart city (Malindretos et al., 2018).

METHODOLOGY
The consolidation strategy in the distribution network has been broadly adopted because of its alignment with smart logistics solutions.
The basic logic of consolidation facilities is to address problems such as urban traffic congestion and the high cost of small shipments by bundling goods to increase load factors on the final distribution leg. The study uses a systemic approach based on a typology of consolidation-distribution schemes in city logistics. This approach identifies tiers of the delivery pattern by which goods are moved from outside the urban area to inner-urban areas, based on the consolidation facilities present in the distribution network (Staricco and Brovarone, 2016). According to this approach, city logistics schemes can be classified as: 1. Conventional distribution (without consolidation); 2. Urban consolidation centres; 3. Micro consolidation centres; 4. Mobile depots. This paper proceeds in two steps. First, an overview of consolidation-distribution schemes along the last leg of delivery is given. Second, these schemes are evaluated to explain the last-mile delivery ecosystem's compliance with the smart city concept from three aspects: mobility, sustainability, and liveability.

CITY LOGISTICS SCHEMES
Conventional distribution refers to distributing goods directly from point A to point B. This strategy can be beneficial depending on the characteristics of the goods (bulk freight deliveries), consumers (commercial), or place (close-range delivery). In B2B settings in particular, vehicles are already full because of the volume of goods delivered, and the delivery point is fixed and unique. In this case, goods can be distributed from factories to commercial customers without the need for any consolidation facility. Nevertheless, this strategy can lead to inefficient mobility operations and resource utilization when parcels are delivered to end consumers at multiple locations.
From the perspectives of sustainability and liveability, the variety of vehicle types on the roads (generally with high emission rates), the high frequency of delivery trips (traffic congestion), and a focus on profit over the benefits to society all push this system to be transformed into a smarter layout.

Figure 1. Conventional distribution (without consolidation) versus distribution from an urban consolidation center, adapted from Allen et al.

Urban consolidation centers are facilities located relatively close to the city center that function as a distribution channel by creating an integrated logistics system for different companies, providing storage, classification, consolidation, and deconsolidation, as well as several value-added services such as accounting, legal consultancy, and brokerage. The main purpose of such a facility is to achieve a high level of load utilization while vehicles distribute goods to the target area. By bundling goods into vehicles efficiently, challenges such as the distance traveled per parcel delivered, the environmental issues resulting from numerous vehicles on the road, and the negative impact of freight operations on traffic congestion can be reduced in city logistics. Studies of urban consolidation centers show that these facilities can be beneficial from a sustainability standpoint, but they are mostly not financially viable over long periods: running such a large facility costs its users more than it benefits them. At this point, micro consolidation center initiatives have started to attract more attention from the perspective of feasibility. Micro consolidation centers can be defined as smaller-scale versions of urban consolidation centers. Their specific features are: 1. Mostly suitable for the last leg of delivery (facilities are set up in very central urban areas, close to reception points); 2.
Usually involving light freight (as opposed to heavy urban freight, deliveries consist of smaller packages); 3. The opportunity to deliver with sustainable vehicles (such as cargo cycles operating from a central point); 4. Ease of loading/unloading in the urban area (small vehicles reduce the negative effects on urban planning); 5. Integration with innovative business models (such as the click-and-collect model).

Figure 2. TNT Express Mobile Depot, adapted from Verlinde et al. (2014)

Mobile depots can be seen as an innovative solution both for city logistics at the macro level and for last-mile delivery at the micro level. A mobile depot is a vehicle consisting of a trailer fitted with a loading dock, warehousing facilities, and an office. Thanks to its capacity to serve multi-channel distribution, the vehicle can act as a pick-up point at a central parking location, while parcels are delivered from that point by electrically assisted cargo cycles. As seen in Figure 2, TNT Express tested this innovative concept in Brussels as part of the European project STRAIGHTSOL. As one of the important smart logistics solutions, the integration of mobile depots into city logistics operations is expected to widen in the coming years. City logistics schemes can be evaluated from a smart perspective mainly along three dimensions: mobility (smooth delivery operations in the inner-urban area), sustainability (decreasing negative economic, social, and environmental impacts), and liveability (responding to the expectations of all city actors, such as logistics service providers, manufacturers, government, and city residents) (He, 2020). The evaluation of city logistics schemes on these dimensions, rated poor/fair/good, can be seen in Table 1.
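The consolidation logic described above (bundling flows to raise load factors on the final leg) can be illustrated with a toy trip count. The carrier counts, parcel volumes, and vehicle capacity below are invented purely for illustration; the point is the mechanism, not the numbers.

```python
import math

def trips_without_consolidation(parcels_per_carrier, capacity):
    # Each carrier drives its own vehicles into the inner-urban area.
    return sum(math.ceil(p / capacity) for p in parcels_per_carrier)

def trips_with_consolidation(parcels_per_carrier, capacity):
    # Flows are bundled at a consolidation centre, then delivered together.
    return math.ceil(sum(parcels_per_carrier) / capacity)

# Illustrative scenario: five carriers, 60 parcels each, vans holding 100 parcels
carriers = [60, 60, 60, 60, 60]
capacity = 100
without = trips_without_consolidation(carriers, capacity)  # five trips, each 60% full
with_uc = trips_with_consolidation(carriers, capacity)     # three full(er) trips
print(without, with_uc)
```

In this toy scenario the consolidation centre cuts inner-urban trips from 5 to 3 and raises the average load factor from 60% to 100%, which is the mechanism behind the mobility and sustainability gains discussed above.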
CONCLUSION
Based on the research conducted in the literature and recent developments in the sector, this study highlights the last-mile delivery ecosystem through a systemic approach based on a typology of consolidation-distribution schemes. From the perspective of micro-logistics networks, the facilities serving the inner-urban area can be urban consolidation centers, micro consolidation centers, or mobile depots. While these facilities present different features in terms of cost/benefit, service quality, and operational processes, the important point is improving the mobility, sustainability, and liveability of cities in line with the goals of smart city logistics. As a result of this study, it is found that, among last-mile logistics initiatives, both micro consolidation centers and mobile depots are the most compliant with the aims of the smart city concept.
A Study on the Supply Chain Structure of Rural Talents and Its Optimization Measures in the Context of Rural Revitalization—Take Putian City as an Example
: Against the background of the rural revitalization strategy, this paper takes Putian City, Fujian Province as its research object. Drawing on the research results of other scholars and briefly analyzing the current situation of talent supply and demand in rural areas, it argues for the significance of building a rural talent supply chain. At the same time, we analyze the reasons for the contradiction between rural talent supply and demand, which include the constraints of traditional concepts, imperfect rural talent policies, and the overall low level of rural economic construction. To this end, the authors explore a sustainable talent development model through a combination of fieldwork and a questionnaire survey. Taking the above attribution as the starting point and combining it with the survey results, we construct five links: taking stock of existing talents, determining talent demand, attracting talent back, strengthening talent management, and conducting regular talent inventories, with a view to providing theoretical support for the structure of the rural talent supply chain in the context of the rural revitalization strategy.

A total of 98.99 million rural poor people have been lifted out of poverty under the current standards. The completion of the historical task of eliminating absolute poverty means that the focus of the "three rural areas" has shifted from poverty eradication to the comprehensive promotion of rural revitalization
[1]. The revitalization of the countryside cannot be achieved without the support of talents, but the current talent situation in China's rural areas can hardly meet the needs of rural revitalization. According to Han Shifeng and others, rural areas have long suffered a continuous loss of outstanding talents; the total stock of rural human resources is insufficient, its structure unbalanced, its quality low, and its age profile aging, leaving a large gap between the level of rural human resources and the demands of rural revitalization [2]. The contradiction between the supply and demand of rural human resources has emerged, and their revitalization faces major problems. Taking rural practical talents as an example, statistics from the Ministry of Agriculture and Rural Affairs (MOAR) show that there are about 22.54 million rural practical talents in China [3], accounting for only 4.42% of the total rural population of about 509.79 million [4]. Tang Yuchi and others likewise believe that the overall scale of rural talents is small, their overall quality low, the reserve force of rural talents insufficient, and the disconnect between talent training and actual demand prominent
[5]. It can be seen that the current quantity and quality of rural talents are insufficient to support the comprehensive promotion of the rural revitalization strategy. Therefore, attracting talents back to rural areas and retaining them, so as to provide a sustainable talent guarantee for rural revitalization, is a problem that urgently needs to be solved, and building a rural talent supply chain is an effective way to solve it.

Feasibility analysis of the rural talent supply chain
The No. 1 document of the Central Government in 2023 pointed out that before a country can be strong, agriculture must first be strengthened; when agriculture is strong, the country is strong. It is necessary to strengthen the building of the rural human resources team. The relevant programs aim to revitalize rural human resources; organize and guide talents in education, health, science and technology, culture, social work, and spiritual civilization construction to work at the grassroots level; support the development of human resources urgently needed in rural areas; and implement programs to train highly qualified farmers and develop rural entrepreneurs and improve their education level. At the same time, the "Doctors in the Countryside" program, the "Rural Women's Revitalization Initiative", and the "Youth Talent Training Initiative" are being implemented [6]. This shows that the state has given strong policy support to the construction of rural talents.
Basic information of the questionnaire survey
The theme of this questionnaire is "Questionnaire on the willingness of rural talents to return to their hometowns in the context of rural revitalization". The questionnaire has 18 questions, comprising 11 single-choice questions, 5 multiple-choice questions, 1 ranking question, and 1 fill-in-the-blank question. It contains three parts: the first part covers the basic information of the respondents; the second, the respondents' views on the current situation of rural development and the important factors affecting the development of rural talents; and the third, the respondents' outlook on the future development of rural talents.

The questionnaire was distributed mainly via QR code and electronic link, through Questionnaire Star, WeChat, Xiaohongshu, and other online platforms, combined with offline channels. A total of 227 questionnaires were collected, of which 41 were invalid, giving a validity rate of 81.93% (see Table 1).

Analysis of the current situation
Through field visits and research, we found problems on both the supply and demand sides of rural talent construction. On the supply side, two aspects stand out. First, the total supply of rural talents is insufficient. The serious outflow of rural talents is one of the important reasons for this. With the continuous advance of urbanization, exchanges between urban and rural areas are increasing; in this context, rural land is less attractive to farmers, and farmers depend on it less. In particular, rural talents with relatively high levels of education are flocking to the cities for better development. In terms of employment and entrepreneurship, cities and towns have a natural advantage over rural areas.
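The questionnaire validity figure quoted above can be checked arithmetically: 186 valid responses out of 227 give about 81.9%, consistent to within rounding with the reported 81.93%. A trivial sketch:

```python
collected = 227
invalid = 41
valid = collected - invalid           # valid questionnaires retained for analysis
validity_rate = valid / collected * 100
print(valid, round(validity_rate, 2))
```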
In addition, the difficulty of returning to rural areas is an important reason for the insufficient supply of rural talent. Those who leave the rural areas tend to be more capable of integrating into the cities and becoming urban residents than those who remain. After integrating into the city and raising a family there, "it is easy to go out but difficult to return to the countryside". In the process of collecting the questionnaires, it was found that respondents with agricultural and non-agricultural household registrations each accounted for about half (see Figure 1), but only 32.26% of the participants were willing to return to rural areas (see Figure 2). Thus it can be seen that, for practical and forward-looking reasons, most rural talents prefer to stay in cities because of the higher incomes, better platforms, better development prospects, and wider social resources that cities provide. In contrast, rural areas are more backward in terms of development, employment, and entrepreneurship, provide fewer social resources, and find it difficult to form a platform for human resources.
Analysis of the problems and reasons for building a rural talent supply chain
4.1 The constraints of traditional concepts
Under the influence of China's thousands of years of agrarian civilization, backward and conservative thinking is an important reason for the shortage of rural talents. Traditional farmers believe that being a farmer has no future and that inputs are not proportional to outputs. Moreover, many people hold prejudiced views of farmers, believing them to be unhygienic and uncivilized. Society has formed a culture in which rural areas are seen as backward and farmers as outdated. Secondly, in rural education, both teachers and parents instill discriminatory ideas about the countryside in children, forming the belief that to live better one must leave the countryside. At the same time, as modernization advances, urban development relies on surplus rural labor, resulting in a large outflow of labor. Farmers who have left the countryside for better development opportunities are unwilling to return, which further reinforces the idea that the countryside has no development prospects. This is also reflected in the questionnaire survey (Figure 3).

Imperfect rural talent policies
Compared with urban talent policies, rural talent policies are inherently deficient in terms of wages, working conditions, social security, and talent mechanisms and institutions. For example, there are relatively few civil service and public institution positions in rural areas, and applications and recruitment still fall short. The existing rural talent policy lacks coherence in talent mobilization and assessment, so rural talents are mostly managerial and rarely technical. At the same time, the low income of rural talents means that, in rural construction, they must rely on their enthusiasm for and love of rural areas. For example,
in the evaluation of teachers, senior teacher titles are few, and even fewer are allocated to rural areas. As a result, low-income rural teachers have little incentive to improve their skills in order to obtain higher titles, and the quality of education in rural areas remains low. Thus, the imperfection of rural talent policies also contributes to the contradiction between the supply of and demand for rural talent.

Rural talent supply chain architecture and optimization path
By combing and analyzing the existing research data and combining it with our survey, the main structure of the rural talent supply chain in this paper comprises five links: talent combing, demand determination, attracting talent back, talent management, and regular talent inventory. The main purpose of talent combing is to match existing talents with jobs by mapping them, so that the value of talents can be maximized. The determination of talent demand builds on talent combing, clarifying current and future needs based on the current situation and future direction of rural development. Attracting talent back is the important and central link in realizing the sustainable development of rural talents. Talent management is the scientific and effective management of rural talents to enhance their creativity and guarantee their long-term development in rural areas. Finally, regular talent inventories allow talent trends to be grasped in real time and provide a basis for determining talent needs in the next cycle, as shown in Figure 4.
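The five links above form a closed loop: the regular inventory feeds the next round of demand determination. A minimal sketch of this cycle (the data structure and function names are ours, for illustration only):

```python
# The five links of the rural talent supply chain, in order
SUPPLY_CHAIN_LINKS = [
    "talent combing",            # map existing talents to posts
    "demand determination",      # current and future needs
    "attracting talent back",    # the central link
    "talent management",         # retain and develop talents
    "regular talent inventory",  # feeds the next cycle
]

def next_link(current: str) -> str:
    """Return the link that follows `current`; the inventory loops back to combing."""
    i = SUPPLY_CHAIN_LINKS.index(current)
    return SUPPLY_CHAIN_LINKS[(i + 1) % len(SUPPLY_CHAIN_LINKS)]
```

Modeling the chain as a cycle rather than a one-way pipeline reflects the paper's point that the inventory link provides the basis for the next round of demand determination.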
Consider the current situation and conduct a full range of talent sorting
Rural talents refer to all kinds of talents with certain skills and knowledge in the fields of agricultural production, technical services, science and technology promotion, processing and circulation, and ecological and environmental protection, who directly serve the economic and social development of rural areas. Specifically, they include agricultural technicians, business management talents, production talents, rural brokers, and farmer entrepreneurs. Based on the current situation in rural areas, existing talents can be sorted by number of personnel, personnel types, talent categories, and talent distribution. The structure of rural organizations is generally complex, which is also a key consideration when sorting talents. Talent sorting should focus on the needs of different positions; it is a continuous, systematic process of understanding the development status of rural talent, designed to provide a basis for formulating a reasonable talent plan.

Determine the demand for talents, taking into account the reality and future needs of rural areas
Combining the reality of rural development with the needs of future development, talent demand can be determined and a talent plan reasonably formulated. Overall, this falls into two categories: first, based on the current situation, job vacancies can be used to fill gaps in existing talent, forming a complete talent ecology and driving rural development through talent development; second, based on the blueprint for the future development of the rural economy, future talent development needs can be formulated.
Accelerate the integration of urban and rural industries and increase employment opportunities

Rural enterprises are an important channel for employing rural talent. As part of the national rural revitalization strategy focusing on agriculture and rural development, local governments should increase financial and policy investment in light of rural realities, so that urban enterprises can reorient their development toward the quantity and quality of local agricultural production. In particular, they should vigorously develop agricultural product processing, cold chains and transportation, gradually forming a new round of growth for urban enterprises and promoting the integration of the agriculture-breeding-processing, production-marketing, and agriculture-industry-commerce chains.

Improve supporting rural infrastructure to retain talents

Developing human resources in rural areas requires considering not only how to increase their numbers but also how to recruit them more effectively. To break the vicious circle of rural exodus, the construction of supporting facilities in rural areas must be strengthened: human, material and financial resources should be invested to improve rural environmental conditions in culture, education, health and other respects, to narrow the gap between urban and rural areas, and to improve the general comfort of rural life. This will attract and retain talents, enhance their sense of belonging and identity with rural areas, and lay a good foundation for escaping poverty and becoming prosperous.
Strengthen and improve the training mechanism for rural talents

To actively cultivate the high-quality professionals needed for rural economic construction, the vocational environment of colleges and universities should be adjusted accordingly, the vocational training function of vocational colleges given full play, the human resources support mechanism improved, and the innovative vitality of human resources promoted. First, agricultural colleges and universities should optimize the educational environment, strengthen teacher training, improve teaching conditions, increase the number of agricultural teachers, improve students' literacy and vocational skills, expand students' opportunities to participate in social activities, and prepare them for future employment. Second, higher education institutions should increase the quantity and quality of agricultural disciplines, optimize curricula according to the needs of rural economic development, and actively carry out off-campus practical activities that combine rural talents' theoretical knowledge with practical skills.

Build a big data platform to keep talent demand up to date in real time

Rural talents span many fields and a wide geographical range, so they cannot be managed centrally and uniformly the way enterprise personnel are; a data platform can instead provide real-time talent management. Information technology can be used to build a large-scale matching platform that breaks the bottleneck of information asymmetry between supply and demand. Both the training and supply of talents and the development needs of rural areas themselves require relatively accurate market demand data to support timely dynamic adjustment and to make rural development visible.
Take regular talent inventory to achieve the long-term development of talent

Regular talent inventory is an important tool for improving the return on talent. It can take two forms. The first is periodic: existing rural talents are inventoried at fixed intervals to fully understand the state of the talent team, the number in each talent category and the gaps between talents. Targeted training can then be carried out to avoid supply-demand gaps and redundancy in some positions, to give direction to talent ladder construction and to adjust the talent structure at the right time. The inventory also lets talents understand their own value and future development prospects. The second is ad hoc: the state of talent development is followed in real time, and the talent structure, positions and other arrangements are adjusted to existing rural needs, so that talent is supplied without delay and rural development requirements are met.

A regular talent inventory can proceed in the following steps. First, in line with the rural development strategy, turn the rural strategy map into a talent map that clarifies the direction of rural development and of the human resources reserve. Second, based on the talent map, recruit externally and train internally according to the needs of rural development. Third, take stock of organizational structure, organizational climate, personnel composition and human resources quality to improve the overall efficiency of rural organizations. Fourth, update human resource standards, identify high-potential talent, develop succession plans for key positions and establish a dynamic talent pool. Fifth, develop training plans for key positions and levels to accelerate the development of high-potential talent. Sixth, relevant institutions can establish mechanisms to identify organizational and human resources and integrate them into the overall activities of the village, so that human resource management supports the implementation of the village revitalization strategy. Through regular talent inventory, the full picture of talent is understood and mapped, providing a basis for the next appointments and indicating the direction for building the rural talent echelon. Most importantly, the inventory can reasonably determine the next step of talent development needs, thereby achieving the long-term development of talent, as shown in Figure 5.

Conclusion

To implement the rural revitalization strategy, we must treat talent as the first resource, establish a long-term institutional mechanism for talent revitalization, cultivate and deploy talents within a sound talent structure, make up for the talent loss that constrains rural revitalization, and contribute more to the overall revitalization of the countryside. The most urgent and crucial link in realizing rural revitalization is implementing the talent revitalization strategy. Revitalizing rural talents requires not only bringing talents in but also retaining them and achieving sustainable talent building. This paper builds a supply chain suited to the development of rural talents and pursues their sustainable development through five links: sorting existing talents, determining talent demand, attracting talents back, strengthening talent management and taking regular talent inventory. Sustainable talent revitalization cannot rely on unilateral effort alone; it requires joint efforts across multiple dimensions and by multiple subjects. The government needs to further optimize
the social environment, rural areas need to improve their institutional systems, and individuals need to correct biases in consciousness; only through these combined effects can the sustainable development of rural talent revitalization move beyond words. The rural talent supply chain is a broad topic with rich connotations and difficult tasks. Owing to the limits of the author's knowledge, some issues in strengthening the construction of the rural talent supply chain have not yet been systematically clarified and will need further exploration in the future.

Figure 1: Distribution of household registration in the questionnaire survey
Figure 2: Willingness of talents to return to their hometown for development
Figure 3: Concerns of returning talents about the state of rural development
Figure 4: Rural talent supply chain architecture. Based on this, the five areas of optimization responses above are given.
Figure 5: Ideas of countermeasures for the rural talent supply chain
Table 1: Questionnaire completion table
Distribution of stress in greenhouse frames estimated by aerodynamic coefficients

Widely disseminated in both national and international scenarios, greenhouses are agribusiness solutions designed to allow greater efficiency and control in the cultivation of plants. Their construction should therefore take the incidence of wind into consideration, along with aspects of comfort and safety, and ensure these are factored into the design of the structural elements. In this study, we evaluated the effects of the pressure coefficients established by the European standard EN 13031-1 (2001) and the Brazilian standard ABNT (1988), which are applicable to the structures of greenhouses with pitched roofs, taking into account the following variables: roof slope, external and internal pressure coefficients, and the height-span ratio of the structure. Using the ANSYS™ computer program, the column and roof zones were discretized with the BEAM44 finite element to identify the maximum and minimum stress portions connected to the aerodynamic coefficients. With this analysis, we found that, at the smaller roof slope (α equal to 20°), the frame stresses were quite similar for the standards adopted. At the greater inclination (α equal to 26°), on the other hand, the stresses were consistently lower under the Brazilian standard. In view of this, we concluded that the differences between the stresses obtained with the two standards were more significant at the higher degrees of height-span ratio and roof slope.
Introduction

Agricultural construction technology, especially for growing plants, faces great challenges when it comes to designing protected environments capable of allowing greater production efficiency in smaller areas. According to Shamshiri and Ismail (2013), these construction systems aim to improve the quality and predictability of crops, entailing the control of a number of factors, such as humidity, temperature, solar radiation and internal carbon dioxide levels, and the protection of crops from the action of rain, strong winds and pests (Straten et al., 2010; Emekli et al., 2010; Ali-Nezhad and Eskandari, 2012). Thus, the optimal growth of a given crop is affected by the greenhouse's architectural design. This being the case, studies of the structural design of greenhouses based on the applicable normative instructions take on great importance (Iribarne et al., 2007; Ali-Nezhad and Eskandari, 2012).

The extensive use of this agricultural construction technique has increased concern over its safety against structural damage. As pointed out by von Zabeltitz (2011), such safety concerns are already under discussion in countries where the protected crop technique is widely used. In this sense, ABNT (2012) was recently published to standardize procedures for greenhouse design in Brazil. This standard is based on the wording of the standard of the European Committee for Standardization (CEN), EN 13031-1 (2001), but takes Brazilian geographical conditions into account.
According to Buyuktas et al. (2011), the failure to make static calculations and consider existing environmental factors may lead to damage to greenhouses in adverse weather conditions. Because greenhouses are light constructions, stress from extreme wind speeds can damage the greenhouse structure itself (Elsner et al., 2000). The effects of wind are taking on increasing importance in the structural design of greenhouses, given the need for larger facilities that allow a more favorable internal climate for cultivation. Therefore, the main purpose of this study was to analyze, through computer modeling, the behavior of the stresses in the structures of pitched-roof greenhouses resulting from the differences between the wind pressure coefficients established by the Brazilian standard (ABNT, 1988) and the European standard (EN 13031-1, 2001).

Materials and Methods

The structural behavior of a greenhouse frame subject to wind action was simulated considering the ratio between column height (h) and structure span (s), as well as the roof slope (α), as set out in EN (2001) and ABNT (1988). The structure span (Figure 1) was set at 8 m and, under the extreme conditions of EN (2001), with h/s ≤ 0.3 and h/s ≥ 0.6, the resulting column heights were 2.40 m and 4.80 m, respectively. Using these heights and varying the roof slope, we obtained the reference heights (from base to roof ridge) at the extreme levels of 3.86 m and 4.35 m, and 6.26 m and 6.75 m, respectively. Based on these conditions, the external and internal pressure coefficients were extracted from both standards.
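The reference heights quoted above follow directly from the frame geometry. The short sketch below (our own helper, not from the paper) reproduces them, assuming the ridge sits at half-span above the column tops:

```python
import math

def ridge_height(h: float, s: float, alpha_deg: float) -> float:
    """Height from base to roof ridge [m] for column height h [m],
    span s [m] and roof slope alpha [degrees]."""
    return h + (s / 2.0) * math.tan(math.radians(alpha_deg))

# Reproduces the extreme reference heights given in the text (span 8 m):
for h in (2.40, 4.80):
    for alpha in (20.0, 26.0):
        print(f"h = {h:.2f} m, alpha = {alpha:.0f} deg -> "
              f"{ridge_height(h, 8.0, alpha):.2f} m")
# -> 3.86 m, 4.35 m, 6.26 m and 6.75 m
```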
External and Internal Pressure Coefficients

The aerodynamic coefficients were obtained from the geometric characteristics of the preset models, with the wind applied perpendicularly to the roof ridge, so that the wind pressure developed in the plane of the frame (Figure 1). We should emphasize that, for the wind direction parallel to the roof ridge, the pressure coefficient variations would be less significant; however, this condition should also be taken into account in agricultural greenhouse design.

In accordance with EN 13031-1 (2001), with the wind applied perpendicularly to the roof ridge and column heights of 2.40 m and 4.80 m, we obtained the external pressure coefficients shown in Table 1. To obtain these coefficients, we varied the angle α within the limits established by the European standard, 20° and 26°. The h/s ratios and the slopes previously established were considered for the standardization of results. According to ABNT (1988), with the wind applied perpendicularly to the roof ridge, and for heights of 2.40 m and 4.80 m, the external coefficients (Table 1) are associated with the ratios h/s ≤ 0.5 and 0.5 < h/s < 1.5, respectively. The Brazilian standard also considers the ratio of building length to width, which in this study ranged from two to four.

The external coefficients for the roof planes, according to the Brazilian standard, are associated with angles ranging from 0° to 60°. For the purposes of comparison with the European standard, however, we applied the external coefficients for 20° and 26° (Table 1).

With regard to the internal pressure coefficients under the European standard, both the internal pressure (Cpi equal to +0.2) and internal suction (Cpi equal to -0.4) situations should be considered. These amounts apply to wind perpendicular to the roof ridge and to single-span greenhouses.
In the Brazilian standard ABNT (1988), the internal pressure coefficients are calculated considering the dominant openings, these coefficients being governed by the ratio of upwind to downwind dominant openings. For comparison purposes, therefore, the amounts established by the European standard EN 13031-1 (2001) were adopted, i.e. +0.2 and -0.4. As asserted by Mistriotis and Briassoulis (2002), few experiments have been conducted on internal pressure coefficients for greenhouses, especially in situations with different openings, windows and fans, which require different system settings. In addressing this issue, computational modeling can offer an important alternative for ascertaining these coefficients.

Combining the internal and external coefficients yields the pressure coefficient (C) to be applied to each structure zone. Typically, wind speed characteristics (which depend on the conditions of each design, i.e. wind speed and the basic factors connected to topography, roughness and construction dimensions, together with probabilistic concepts) are used to ascertain the load on each part of the agricultural greenhouse frame by applying Equation 1. By applying the actions F to the respective frame zones, for each discretized finite element, we obtained results linked to the pressure coefficient (C). Therefore, considering that the dynamic pressure and the distance between frames are characteristics of each design, unit amounts were adopted to account for the wind action. In this way, the action accounted only for the component C (i.e., the pressure coefficient itself), and the results obtained qualitatively represent the effects of this coefficient on the stresses; the stress portions are therefore shown without units.
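Equation 1 can be sketched as follows. The function names are ours, and the numeric inputs are arbitrary examples rather than values from the paper (the study itself adopts unit amounts for q and L):

```python
def dynamic_pressure(vk: float) -> float:
    """Wind dynamic pressure q = 0.613 * Vk**2 [N m^-2], Vk in m s^-1."""
    return 0.613 * vk ** 2

def line_load(c: float, vk: float, frame_spacing: float) -> float:
    """Wind action F = C * q * L [N m^-1] applied along a frame element
    (Equation 1 in the text); C combines external and internal coefficients."""
    return c * dynamic_pressure(vk) * frame_spacing

# Example: a suction zone with C = -1.0, Vk = 30 m/s, frames 3 m apart
print(line_load(-1.0, 30.0, 3.0))  # -> about -1655.1 N/m
```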
Computational modeling

To analyze the influence of the pressure coefficients on the stress state of the structure, we applied the finite element method using the ANSYS program (ANSYS™, version 10.0). On each geometrically modeled frame line, a mesh of three-dimensional BEAM44 elements was generated, with three elements per line (Figure 2A). The BEAM44 element was used to represent a tubular steel profile with a commercial square section of 60 mm × 60 mm and a thickness of 2 mm, applied to all of the frame zones. The use of profiles of greater or smaller stiffness, allowing suitable internal stresses in the structural elements, is a particularity of each design and could be evaluated by means of a structural optimization analysis.

Full restraint of the column bases was assumed. The action of the component C was applied linearly along the length of each structural element (Figure 2B) and, to account for the structure's self-weight, the gravitational acceleration (9.81 m s⁻²) was activated. For the characterization of the steel, we applied a Young's modulus of 21 × 10¹⁰ Pa, a Poisson's ratio of 0.30 and a density of 7.86 × 10³ kg m⁻³.
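For reference, the geometric properties a beam element needs for the 60 mm × 60 mm × 2 mm tube can be computed as below; this helper is illustrative and ours, not part of the paper's model input:

```python
def hollow_square_properties(b: float, t: float):
    """Cross-sectional area [m^2] and second moment of area [m^4]
    of a square tube with outer side b [m] and wall thickness t [m]."""
    bi = b - 2.0 * t                      # inner side
    area = b ** 2 - bi ** 2
    inertia = (b ** 4 - bi ** 4) / 12.0   # identical about both principal axes
    return area, inertia

area, inertia = hollow_square_properties(0.060, 0.002)
print(area, inertia)  # -> about 4.64e-4 m^2 and 2.60e-7 m^4
```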
Distribution of stress portions

As a result of the modeling, we obtained extreme levels consisting of the maximum and minimum stress portions (direct stress plus bending stress). Figures 3A and 3B were obtained from the wind and the structure's self-weight amounts. For an objective analysis of the ANSYS™ results, the extreme levels of the maximum and minimum stress portions were extracted for the finite elements generated in each frame zone (K, A, B and L). These results, shown in Figures 4A to D and 5A to D and corresponding to the researched cases (Table 2), were obtained by setting the roof slopes α (20° and 26°), the ratios h/s (0.3 and 0.6) and the Cpi coefficients (-0.4 and +0.2), as well as the external coefficients stipulated by the European and Brazilian standards. In the zone most influenced by the aerodynamic coefficients (zone K), with the roof slope set at 20° (Figures 4A and B, and 5A and B), for each ratio h/s (0.3 or 0.6) and each Cpi value (suction or pressure), the extreme stresses were close under both standards. Therefore, with α equal to 20°, the stress distribution in the column (zone K) is similar under both the European and Brazilian standards.
On the other hand, with α equal to 26° (Figures 4C and D, and 5C and D), the stress portions in zone K were greater when calculated in accordance with the European standard. For h/s equal to 0.3, the differences in the maximum stresses were 14 % and 11 % for Cpi equal to -0.4 and 0.2, respectively. In the same order of Cpi and with h/s equal to 0.6, the maximum stress portions obtained under the European standard were 5 % and 6 % greater than those obtained under the Brazilian standard. The differences found for the minimum stresses, with the ratio h/s equal to 0.3, were 15 % and 13 % for Cpi equal to -0.4 and 0.2, respectively; for h/s equal to 0.6, the difference in the minimum stresses was 5 % for both Cpi amounts. In this situation (α equal to 26°), the use of the Brazilian standard allows less rigid profiles to be used for the columns.

In zone B of the roof (where the lowest maximum and minimum stress portions occur), the highest values were obtained when the European standard was applied, except for the case calculated with h/s equal to 0.3, α equal to 20° and Cpi equal to +0.2 (Figures 4A and 5A). In zone A, the calculations under the European standard also resulted in higher stress portions, except where h/s equal to 0.3, α equal to 26° and Cpi equal to +0.2 were adopted (Figures 4C and 5C).
In the column zones, where K is more critical than L, the stress portions were more intense at the higher roof slope (α equal to 26°) for each of the h/s ratios. With regard to the influence of the roof slope in the simulation with the Brazilian standard coefficients, with h/s equal to 0.3 and Cpi equal to +0.2, setting α equal to 26° reduced the maximum and minimum stress portions in zone A by 25 % compared with the simulation at α equal to 20°; for the same analysis under the European standard, the reduction was 45 %. Setting α equal to 26° and Cpi equal to +0.2 to evaluate the parameter h/s, the amount of 0.6 led to reductions in the stress portions of 10 % and 12 % in the simulations with the Brazilian and European standards, respectively. For the same h/s (equal to 0.6) and Cpi equal to -0.4, the stress reductions for these standards were 2 % and 7 %, respectively.

The variation in the minimum stress portions with the roof slope was similar to that of the maximum stress portions, but slightly higher for the ratio h/s equal to 0.3 and Cpi equal to -0.4 (the only combination with an unfavorable outcome when changing the angle from 20° to 26°). The increases in the maximum stress portion for ABNT (1988) and EN (2001) were 17 % and 53 %, respectively, whereas the minimum stress portions increased by 19 % and 57 %, respectively.
When comparing calculation results for single- and multi-span greenhouses under EN 13031-1 (2001) and the Chinese standard, Tong et al. (2013) reported that the European standard establishes greater pressure coefficients and a more severe wind profile than the Chinese standard; the internal forces were therefore higher when applying the European standard's procedures. These authors concluded that the European standard is more comprehensive for dimensioning with respect to wind.

Distribution of Solar Radiation

In addition to the construction's position in relation to the hemisphere (latitude), the roof slope also affects the incidence of solar radiation, since slightly sloping roofs are unsuitable owing to reflection and absorption losses (Garg and Prakash, 2000). According to these authors and to Critten (1993), radiation incidence angles smaller than 30° are appropriate, and roof inclinations close to 30° are described as ideal for maximizing solar radiation in the Mediterranean region (Soriano et al., 2009).

Of the roof slopes evaluated in this study (20° and 26°), the amount of 26° was favorable for locations at higher latitudes, because the higher roof slope reduces the sunlight incidence angle and favors the distribution of solar radiation and the internal microclimate of the greenhouse. In regions closer to the Equator, a roof slope of 20° could be further investigated in view of the smaller zenith angle; however, an angle of 26° favors solar irradiance over the entire year, when the solstice occurring in the opposite hemisphere brings a higher zenith angle. The results of this analysis bear directly on the dimensioning of the structural elements of greenhouses, as well as on the aspect of comfort. Therefore, when an environment needs either a greater volume or a reduction in the load on the roof's structural elements, the angle should be set at 26°.
Future perspectives

Research should be conducted experimentally, with prototypes tested in a wind tunnel, to confront the differences in the stress distributions obtained by computational modeling. In addition to the aspects discussed in this study, structural optimization is relevant to achieving consistency with the particular characteristics of each greenhouse (including aspects of strength and stability within secure limits), thus widening the sustainable use of this type of rural construction.

Conclusion

The differences between the stresses estimated in greenhouse frames using the coefficients established by the Brazilian and European standards were more significant for the greater amounts of height-span ratio and roof slope. The maximum and minimum stress portions occurred in the column's most critical zone (zone K) for both values of the height-span ratio; for a roof slope of 20°, the two standards gave similar values, whereas for a roof slope of 26° these stress portions were lower when the Brazilian standard was applied. The main similarities in the stress portions were found in the simulations with a roof slope of 20°, a height-span ratio of 0.3 and an internal pressure coefficient of -0.4. Setting the roof plane slope at the highest amount (26°) is a strategy that allows a larger greenhouse volume with better thermal comfort conditions, while simultaneously reducing the wind stresses on the roof zone.

Figure 1 - External (Cpe) and internal (Cpi) pressure coefficients applied to the wall and roof zones. A) internal pressure, B) internal suction. K and A represent the column and roof zones in the upwind position, and L and B the column and roof zones in the downwind position.
Notes to Table 1: ¹ Values for the ratios shown were obtained graphically or interpolated. ² For the ratios established under the European standard, the amount of Cpe is uniform for a given ratio, and zone A presents the highest suction coefficient.
Sci. Agric. v.73, n.2, p.97-102, March/April 2016
Equation 1: F = C · q · L, where F is the wind action applied perpendicularly along the axis of each frame element [N m⁻¹]; C is the pressure coefficient, based on the external and internal coefficients; q is the wind dynamic pressure, q = 0.613 Vk² [N m⁻²], with Vk the characteristic wind speed [m s⁻¹]; and L is the distance between the frames [m].
Table 2 -
Figure 2 - Finite element modeling of a typical frame. A) Mesh of elements, B) Loading structure.
Figure 3 - Distribution of stresses resulting from the aerodynamic coefficients and the self-weight of the structure. A) Maximum stress portion, B) Minimum stress portion.
Figure 4 - Maximum stress portions influenced by the aerodynamic coefficient. A) α equal to 20° and h/s equal to 0.3, B) α equal to 20° and h/s equal to 0.6, C) α equal to 26° and h/s equal to 0.3, D) α equal to 26° and h/s equal to 0.6. K and A represent the column and roof zones in the upwind position, and L and B the column and roof zones in the downwind position.
Figure 5 - Minimum stress portions influenced by the aerodynamic coefficient. A) α equal to 20° and h/s equal to 0.3, B) α equal to 20° and h/s equal to 0.6, C) α equal to 26° and h/s equal to 0.3, D) α equal to 26° and h/s equal to 0.6.
Table 1 - External pressure coefficients based on the Brazilian and European standards, where K and A represent the column and roof zones in the upwind position, and L and B the column and roof zones in the downwind position.
The Deficiency of Hypusinated eIF5A Decreases the Putrescine/Spermidine Ratio and Inhibits +1 Programmed Ribosomal Frameshifting during the Translation of Ty1 Retrotransposon in Saccharomyces cerevisiae

Programmed ribosomal frameshifting (PRF) exists in all branches of life and regulates gene expression at the translational level. The eukaryotic translation initiation factor 5A (eIF5A) is a highly conserved protein essential in all eukaryotes. It was initially identified as an initiation factor and functions broadly in translation elongation and termination. The hypusination of eIF5A is specifically required for +1 PRF at the shifty site derived from ornithine decarboxylase antizyme 1 (OAZ1) in Saccharomyces cerevisiae. However, whether the regulation of +1 PRF by yeast eIF5A is universal remains unknown. Here, we found that Sc-eIF5A depletion decreased the putrescine/spermidine ratio, and the re-introduction of Sc-eIF5A into yeast eIF5A mutants recovered it. In addition, Sc-eIF5A depletion decreased +1 PRF during the decoding of Ty1 retrotransposon mRNA, but had no effect on -1 PRF during the decoding of L-A virus mRNA; the re-introduction of Sc-eIF5A into yeast eIF5A mutants restored the +1 PRF rate of Ty1. Inhibiting the hypusine modification of yeast eIF5A by GC7 treatment, or by mutating the hypusination-site Lys to Arg, decreased the +1 PRF rates of the Ty1 retrotransposon. Furthermore, mutational studies of the Ty1 frameshifting element support a model in which the efficient removal of ribosomal subunits at the first Ty1 frame 0 stop codon is required for the frameshifting of trailing ribosomes. This dependency is likely due to the unique distance of the frame 0 stop codon from the slippery sequence of Ty1. The results showed that eIF5A is a trans-regulator of +1 PRF for the Ty1 retrotransposon and could function universally in yeast.
Introduction

Programmed ribosomal frameshifting (PRF) is a recoding event by which the translating ribosome switches from the initial (0) reading frame to the +1 or -1 reading frame at a specific position and then continues its translation [1,2]. Unlike frameshift mutations, PRF can be regulated by cis-acting elements and trans-acting factors, and has important biological functions [3]. This phenomenon was first discovered in viruses [4]. The efficiency of PRF determines the stoichiometric ratio between the viral Gag (structural) and Gag-Pol fusion (enzymatic) proteins, and it has been demonstrated in many different viral systems that viral particle assembly and propagation are inhibited when PRF efficiencies are changed [5-10]. PRF is widespread and likely exists from bacteria to higher eukaryotes [11-13].

The efficiency of PRF is regulated not only by cis-regulatory elements in the mRNA but also by trans-acting factors, such as tRNAs [14-16], polyamines [17,18], antibiotics [19,20] and proteins [11,21]. In yeast, the cis-regulatory elements are the slippery sequence and stimulatory RNA secondary structures; the latter act as a roadblock to rapid translation. Both the slippery sequence and the induction of a ribosomal pause are required to promote efficient frameshifting [22]. Trans-acting factors have also been reported to modulate PRF in yeast: a rare tRNA-Arg (CCU) that regulates ribosomal frameshifting on the Ty1 element is essential for Ty1 retrotransposition [14], and high polyamine levels that regulate ribosomal frameshifting on the OAZ1 element are essential for OAZ1 expression [23].
The eukaryotic translation initiation factor 5A (eIF5A) is essential for cell viability and is highly conserved in all eukaryotes [24]. It is the only protein known to carry hypusine, an unusual post-translational modification [24]. Hypusine is a modified lysine residue found in eIF5A that is required for its activity. Although the hydroxylation of deoxyhypusinated eIF5A, the last step of hypusination, is not essential in Saccharomyces cerevisiae, the complete post-translational modification of eIF5A is strictly required for its function in higher eukaryotes [25,26]. eIF5A was originally thought to stimulate the formation of the first peptide bond during the translation initiation phase [24]. Later, its involvement in the translation of polyproline-containing proteins was discovered [27]. Recent studies based on ribosome profiling data suggest that eIF5A works more generally at many ribosome-stalled sites [28,29]. eIF5A binds to the ribosomal E site to promote peptide bond formation between sterically unfavorable amino acid combinations and plays a critical role in peptidyl-tRNA hydrolysis after stop codon recognition. Furthermore, the hypusination of eIF5A is specifically required for +1 PRF at the shifty site derived from OAZ1 in S. cerevisiae [30]. In addition, eIF5A is also associated with the synthesis of proteins involved in polyamine synthesis and transport [31-34]. In addition to OAZ1, the L-A virus and Ty1 retrotransposon of the yeast S.
cerevisiae have been especially useful in characterizing the molecular genetics and biochemistry of PRF [22]. A −1 PRF event is responsible for producing the Gag-Pol fusion protein of the L-A virus of yeast [35,36]. The 5′ gag gene encodes the major coat protein, and the 3′ pol gene encodes a multifunctional protein domain, which includes the RNA-dependent RNA polymerase and a domain required for viral RNA packaging [22]. The promotion of efficient −1 PRF in the L-A virus of yeast requires a special sequence, X XXY YYZ (the 0-frame is indicated by spaces), called the 'slippery site' [22]. The simultaneous slippage of ribosome-bound A- and P-site tRNAs by one base in the 5′ direction still leaves their non-wobble bases correctly paired in the new reading frame [22]. A second promoting element, usually an mRNA pseudoknot, is located immediately 3′ of the slippery site [22]. The role of the mRNA pseudoknot is thought to be to induce elongating ribosomes to pause over the slippery site [22]. Furthermore, a +1 PRF event, directed by the heptanucleotide sequence CUU AGG C, is responsible for producing the Gag-Pol fusion protein of the yeast retrotransposon Ty1 [22,37]. Although both +1 and −1 ribosomal frameshifting occur at heptameric "slippery sites", the nature of these sites is entirely different. Unlike in −1 ribosomal frameshifting, the simultaneous slippage of ribosome-bound A- and P-site tRNAs from the 0-frame to the +1 frame would not allow their non-wobble bases to re-pair. Also, in −1 ribosomal frameshifting, the downstream sequence required to promote efficient frameshifting is the mRNA pseudoknot; although a potential pseudoknot structure can be inferred in Ty1, the structure is not required [22]. In addition, a −1 PRF event in CTS2, predicted to direct ribosomes to a premature termination signal, was also identified in yeast [38].
To explore the function of Sc-eIF5A in +1 PRF and −1 PRF, two eIF5A temperature-sensitive yeast strains, tif51A-1 and tif51A-3, were used in this study. The loss of Sc-eIF5A reduced the putrescine/spermidine ratio, and the re-introduction of Sc-eIF5A into the yeast mutants recovered it. Moreover, Sc-eIF5A depletion decreased +1 PRF in Ty1 but had no effect on −1 PRF in L-A, and the re-introduction of Sc-eIF5A into the yeast eIF5A mutants restored the efficiency of +1 PRF in Ty1. In addition, impairing the hypusine modification of yeast eIF5A by GC7 treatment or by mutating the hypusination site led to decreases in +1 PRF in Ty1. Mutational studies of the Ty1 frameshifting element suggested a model in which the efficient removal of a post-termination ribosome from the Ty1 frame 0 stop codon by Sc-eIF5A is necessary for a trailing ribosome to stall at the slippery sequence and undergo frameshifting. These findings show that eIF5A-polyamine feedback regulation is essential for +1 PRF in yeast.
Sc-eIF5A Depletion Decreases the Putrescine/Spermidine Ratio

The hypusination of eIF5A is a multi-step process during which a 4-aminobutyl moiety, derived from spermidine, is transferred to a specific lysine residue of eIF5A (K51 in yeast Hyp2 or Anb1). High levels of polyamines were shown to stimulate ribosomal frameshifting during the decoding of OAZ1 mRNA and Ty1 mRNA in yeast [18,23]. Furthermore, eIF5A is also associated with the synthesis of proteins involved in polyamine synthesis and transport [31-34]. Polyamine levels were therefore monitored in the eIF5A-deficient and eIF5A-complemented strains by HPLC. The putrescine level was lower in the tif51A-1 and tif51A-3 strains than in the WT strain (Figure 1A). After the re-introduction of WT Sc-eIF5A into the tif51A-1 and tif51A-3 strains, the putrescine content was restored to that of the WT strain (Figure 1A,B). In the absence or complementation of Sc-eIF5A, the spermidine level did not change significantly in the tif51A-1 strain compared to that in the WT. In contrast, the spermidine level was significantly lower in the tif51A-3 strain than in the WT, and it increased after Sc-eIF5A complementation. Given the known importance of the putrescine/spermidine ratio in modulating ribosomal frameshifting in yeast [18], the putrescine/spermidine ratios in the WT, eIF5A-deficient and eIF5A-complemented strains were calculated. The putrescine/spermidine ratios of the tif51A-1 and tif51A-3 strains were both much lower than that of the WT strain. After the re-introduction of WT Sc-eIF5A into the tif51A-1 and tif51A-3 strains, however, the putrescine/spermidine ratios increased significantly and even exceeded the level of the WT strain (Figure 1C).
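The ratio comparison above is simple arithmetic; as an illustrative sketch (the strain names follow the text, but all numeric values below are hypothetical, not the measured HPLC data):

```python
# Illustrative sketch of the putrescine/spermidine ratio comparison.
# Values are hypothetical placeholders, not the paper's measurements.
def put_spd_ratio(putrescine: float, spermidine: float) -> float:
    """Ratio of putrescine to spermidine levels (same units)."""
    return putrescine / spermidine

# Hypothetical polyamine levels (arbitrary units) per strain:
# (putrescine, spermidine)
strains = {"WT": (12.0, 30.0), "tif51A-1": (4.0, 28.0)}
ratios = {name: put_spd_ratio(p, s) for name, (p, s) in strains.items()}

# With these made-up numbers the eIF5A-deficient strain shows the
# lower ratio, mirroring the trend reported in Figure 1C.
assert ratios["tif51A-1"] < ratios["WT"]
```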
Sc-eIF5A Depletion Decreases +1 Programmed Ribosomal Frameshifting Efficiency

To investigate the influence of Sc-eIF5A on +1 PRF and −1 PRF, three PRF reporter constructs, pDB722-Ty1, pDB722-CTS2, and pDB722-L-A, were generated (Figures 2A,B and S1A). In the frameshift reporter plasmids, a Renilla luciferase gene is followed by the complete PRF element and a sequence encoding Firefly luciferase. Firefly luciferase production depends on +1 or −1 PRF, while Renilla luciferase serves as the internal control. An in-frame control reporter, pDB722, in which the Fluc is in the same reading frame as Rluc, provides baseline data. Frameshift efficiencies were calculated by dividing the Fluc/Rluc activity ratio generated from the frameshift reporter by the same ratio generated from the in-frame control reporter (Figure 2C).

Since eIF5A is an essential protein in yeast, two eIF5A temperature-sensitive strains, tif51A-1 and tif51A-3, were used [39]. The two eIF5A mutant strains were incubated at 37 °C for 5 h, which led to a significant reduction in eIF5A. The three PRF reporter plasmids (Figures 2A,B and S1A) or the in-frame control reporter plasmid were transformed into the WT strain and the two eIF5A yeast mutants, respectively. Then, after these yeast strains were incubated at 37 °C for 5 h, Fluc and Rluc activities were assayed. The WT strain containing the Ty1 PRF construct exhibited a frameshifting rate of approximately 7.52%, whereas the +1 PRF rates from the tif51A-1 and tif51A-3 strains containing the Ty1 PRF construct were approximately 2.5% and 3.17%, respectively (Figure 3A). The −1 PRF efficiencies from the tif51A-1 and tif51A-3 strains containing the CTS2 PRF construct were decreased by 1.82% and 3.24%, respectively (Figure S1B). However, the −1 PRF efficiencies from the tif51A-1 and tif51A-3 strains containing the L-A PRF construct were similar to that of the WT strain containing the L-A PRF construct (Figure 3B). These results indicated that Sc-eIF5A promotes the translation of the +1 PRF gene of Ty1 and the −1 PRF gene of CTS2, but does not influence the translation of the −1 PRF gene of L-A.

To further investigate the effects of Sc-eIF5A on −1 and +1 PRF, the gene encoding yeast HYP2 was cloned into the pRS315 vector. The pRS315-Sc-HYP2 was transformed into the tif51A-1 and tif51A-3 strains, and Sc-eIF5A-complemented tif51A-1 and tif51A-3 strains that expressed Sc-eIF5A-C-HA fusion proteins were obtained (Figure 4A,B). Following Sc-eIF5A complementation, the strains were harvested after incubation at 37 °C for 5 h. After the re-introduction of Sc-eIF5A into the tif51A-1 and tif51A-3 strains, +1 PRF of Ty1 was restored to that of the WT strain (Figure 5A,B). Notably, the re-introduction of Sc-eIF5A into the tif51A-1 and tif51A-3 strains had minimal effects on −1 PRF at the shifty sites of CTS2 and L-A (Figures 5A,B and S1D,E), indicating that Sc-eIF5A promotes the translation of the +1, but not the −1, PRF genes.

To verify whether the decrease in programmed ribosomal frameshifting efficiency was caused by a decrease in the transcript levels of the fusion genes from the constructs, total RNA was extracted from the WT, tif51A-1, tif51A-3, Sc-eIF5A-complemented tif51A-1 and Sc-eIF5A-complemented tif51A-3 strains containing pDB722-Ty1, pDB722-CTS2, or pDB722-L-A after incubation at 37 °C for 5 h, and equal amounts of cDNA were analyzed by qPCR (Figures 5C,D and S1C). There were no significant differences in Ty1-Fluc, CTS2-Fluc, and L-A-Fluc mRNA expression levels among the WT, yeast mutant, and Sc-eIF5A-complemented strains, indicating that the Ty1, CTS2, and L-A transcription rates were essentially equivalent in the WT, mutant, and Sc-eIF5A-complemented strains.

The reference luciferase (Renilla) activities were dramatically reduced in S. cerevisiae transformed with reporters containing the CTS2 signal compared to the in-frame control (Figures S2C and S3E,G). However, the Renilla luciferases of the Ty1 and L-A reporters displayed similar activities for all constructs (Figures S2A,E and S3A,C,I,K). Therefore, this dual-luciferase-based reporter system is unsuitable for detecting −1 programmed ribosomal frameshifting efficiency during the decoding of CTS2 mRNA in yeast, which is concordant with previous reports showing that absolute luciferase activities were reduced in HeLa cells transfected with reporters containing the CCR5 sequence compared to the HIV-1 control and IFC [40,41].

The Hypusine Modification of Sc-eIF5A Influences +1 Programmed Ribosomal Frameshifting Efficiency

The hypusine modification in eukaryotes is achieved by sequential reactions catalyzed by two enzymes: deoxyhypusine synthase (DHS) and deoxyhypusine hydroxylase (DOHH). To investigate whether the hypusine modification is sufficient for the Sc-eIF5A control of +1 PRF, we took advantage of N1-guanyl-1,7-diaminoheptane (GC7), a potent inhibitor of DHS. Upon treatment of the WT strain with the DHS inhibitor GC7 at 37 °C for 5 h, the hypusine modification of Sc-eIF5A was completely inhibited (Figures 6A and S5A). Ty1 +1 PRF was reduced from 7.34% to 2.79% (Figure 6B), and the luciferase values of Ty1 for each experiment are shown in Figure S4A,B. These results suggest that the hypusine modification of Sc-eIF5A plays a crucial role in +1 PRF at the shifty site of Ty1.

Figure 6. The hypusination depletion of Sc-eIF5A decreases the Ty1 +1 PRF. (A) WT and GC7-treated WT strains were grown at 37 °C for 5 h, and the hypusine levels of Sc-eIF5A in WT and GC7-treated WT strains were shown by Western blot analysis. Cultures contained 1 mM GC7. (B) WT and GC7-treated WT strains were grown at 37 °C for 5 h, and Ty1 +1 PRF was detected. Cultures contained 1 mM GC7.
(C) WT, tif51A-1-HYP2-K51R and tif51A-3-HYP2-K51R strains were grown at 37 °C for 5 h, and the levels of Sc-eIF5A and its hypusine modification in the WT, tif51A-1-HYP2-K51R and tif51A-3-HYP2-K51R strains were shown by Western blot analysis, respectively. (D) WT, tif51A-1-HYP2-K51R and tif51A-3-HYP2-K51R strains were grown at 37 °C for 5 h, and Ty1 +1 PRF was detected. Dual-luciferase reporter plasmids containing Fluc and Rluc coding regions separated by the +1 PRF signal from the yeast Ty1 retrotransposon or the 0-frame control were introduced into the WT strain. PRF efficiencies (%) were calculated by dividing the ratio of Fluc to Rluc obtained with the frameshift reporter by the same ratio obtained with the in-frame control reporter.

Further, the unhypusinated Sc-eIF5A K51R was expressed in the tif51A-1 and tif51A-3 strains. Western blot analysis indicated that Sc-eIF5A-K51R-C-HA, whose HA tag does not interfere with the hypusination of Sc-eIF5A [42], was produced in the tif51A-1 and tif51A-3 strains. The hypusine modification of Sc-eIF5A was completely inhibited (Figures 6C and S5B). In addition, the tif51A-1-HYP2-K51R and tif51A-3-HYP2-K51R strains showed specific inhibition of +1 PRF at the shifty site of Ty1 (Figure 6D), and the luciferase values of Ty1 for each experiment are shown in Figure S4C,D. These results suggest that the hypusine modification of Sc-eIF5A plays a crucial role in +1 PRF at the shifty site of Ty1.
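The dual-luciferase normalization described above can be sketched as a short calculation (the formula follows the text; the luminescence readings below are hypothetical, not the paper's raw data):

```python
# Sketch of the dual-luciferase PRF efficiency calculation:
# PRF % = (Fluc/Rluc of frameshift reporter)
#         / (Fluc/Rluc of in-frame control) * 100
def prf_efficiency(fluc_test: float, rluc_test: float,
                   fluc_ctrl: float, rluc_ctrl: float) -> float:
    """Frameshift efficiency in percent, normalized to the
    in-frame (0-frame) control reporter."""
    return (fluc_test / rluc_test) / (fluc_ctrl / rluc_ctrl) * 100

# Hypothetical luminescence readings (arbitrary units).
eff = prf_efficiency(fluc_test=1.5e4, rluc_test=2.0e6,
                     fluc_ctrl=1.0e6, rluc_ctrl=1.0e6)
print(f"{eff:.2f}%")  # 0.75% with these made-up numbers
```

Normalizing to the in-frame control cancels differences in overall translation and transfection efficiency between cultures, which is why the qPCR and Renilla controls above matter for interpreting the ratio.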
The Ty1 Frame 0 Stop Codon Position Confers the Dependency of +1 Programmed Ribosomal Frameshifting on Sc-eIF5A

Given the robust requirement of hypusinated Sc-eIF5A for +1 PRF of reporter transcripts carrying the Ty1 ribosomal frameshifting element, we next investigated its possible regulatory mechanism. In SARS-CoV-2, the proximity of the ORF1a stop codon to the slippery sequence (18 nucleotides), much less than a ribosomal footprint (approximately 30 nucleotides), confers the dependency of −1 PRF on eIF5A [43]. The distance between the first Ty1 frame 0 stop codon and the slippery sequence is also 18 nucleotides, much less than a single ribosomal footprint in yeast. We therefore investigated whether the proximity of the first Ty1 frame 0 stop codon to the slippery sequence confers the dependency of Ty1 +1 PRF on Sc-eIF5A. We generated a mutant version of the Ty1 frameshifting element in which the first frame 0 stop codon was mutated into a sense codon. Two nucleotide substitutions left the secondary structure and the free energy of the first stem of the pseudoknot unaltered while increasing the distance between the slippery sequence and the first frame 0 stop codon to 39 nucleotides, a distance greater than a ribosomal footprint. This mutation did not significantly alter the baseline rate of frameshifting compared with the wild-type frameshifting element (Figure S6). Nevertheless, the dependency of frameshifting on Sc-eIF5A was entirely abolished by the Ty1-UGU-UUC mutation (Figure 7B). Therefore, these results implicate the proximity of the stop codon to the slippery sequence as the key feature that necessitates the dependency of Ty1 +1 PRF on Sc-eIF5A.
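The distance reasoning above can be stated as a one-line rule (the 18 nt and 39 nt distances come from the text; the ~30 nt footprint size is the approximate value cited there):

```python
# Sketch of the stop-codon proximity rule: a terminating or
# post-termination ribosome occludes the slippery sequence only
# if the frame 0 stop codon lies within one ribosomal footprint
# (approximately 30 nt) of it.
FOOTPRINT_NT = 30  # approximate yeast ribosomal footprint

def eif5a_dependent(stop_distance_nt: int) -> bool:
    """True if frameshifting is predicted to depend on eIF5A-mediated
    clearance of ribosomes from the frame 0 stop codon."""
    return stop_distance_nt < FOOTPRINT_NT

assert eif5a_dependent(18)       # wild-type Ty1 element (18 nt)
assert not eif5a_dependent(39)   # Ty1-UGU-UUC mutant element (39 nt)
```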
Discussion

The translation factor eIF5A, originally identified as an initiation factor, was shown to function broadly in translation elongation and termination [28,29]. Recent studies demonstrate that eIF5A and its hypusination are required for the efficient PRF of OAZ1 mRNA in S.
cerevisiae [30]. In this report, we employed congenic sets of tif51A-1 and tif51A-3 strains expressing the Ty1, CTS2, L-A or in-frame reporters to investigate whether the hypusine modification of Sc-eIF5A is vital for the translation of each of the reporters. The results showed that the hypusine modification of Sc-eIF5A is required for +1 PRF of Ty1 but has no effect on −1 PRF of L-A. This study represents the first evidence that the hypusine modification of eIF5A plays an essential role in the +1 PRF of Ty1 mRNA in S. cerevisiae, indicating that the regulation of +1 PRF by hypusinated yeast eIF5A is universal. These data therefore provide a basis for the in-depth exploration of the +1 PRF mechanism in eukaryotic cells. Here, the deficiency of Sc-eIF5A in the tif51A-1 and tif51A-3 strains had no effect on the −1 PRF of the L-A mRNA but decreased the −1 PRF of the CTS2 mRNA (Figures 3B and S1B). Furthermore, the Renilla luciferase of L-A, but not CTS2, displayed similar activities for all constructs (Figures S2C and S3E,G). A greater decrease in the Fluc than in the Rluc of CTS2 lowered the Fluc to Rluc ratios and thus deflated the estimated −1 PRF (Figures S2C,D and S3E-H). These results are similar to previous reports showing that absolute luciferase activities were reduced in HeLa cells transfected with reporters containing the CCR5 sequence compared to the HIV-1 control and IFC [40,41]. In those reports, a greater decrease in Rluc than in Fluc increased the Fluc to Rluc ratios and thus inflated the estimated −1 PRF [41]. The observed effect was due to the cryptic splicing of the reporter RNA [41], suggesting the possibility that the reporter RNA containing the CTS2 sequence could be cryptically spliced in S. cerevisiae.
Hypusine-modified eIF5A is important for efficient translation termination; its loss of function results in the accumulation of ribosomes at termination codons [43]. Previous studies have shown that the depletion of hypusine-modified eIF5A impairs the −1 PRF of coronavirus SARS-CoV-2 mRNA in human cells [43]. The proximity of the stop codon to the slippery sequence (18 nucleotides), less than one ribosomal footprint upstream (approximately 30 nucleotides), is the key feature that necessitates efficient termination for frameshifting at the SARS-CoV-2 frameshifting element [43]. Nevertheless, other betacoronaviruses whose frame 0 stop codons are naturally located farther downstream within the frameshifting element do not require eIF5A for efficient frameshifting [43]. In our studies, the deficiency of hypusine-modified eIF5A in the tif51A-1 and tif51A-3 strains also had no effect on the −1 PRF of the L-A mRNA in S. cerevisiae (Figure 3B). The L-A frame 0 stop codon is located farther downstream within the frameshifting element (117 nucleotides), much more than a single ribosomal footprint away, suggesting that the L-A virus of yeast likewise does not require eIF5A for efficient −1 PRF. However, our results show that Sc-eIF5A is essential for efficient Ty1 +1 PRF. This dependency is likely due to the proximity of the Ty1 stop codon to the slippery sequence, less than one ribosomal footprint upstream. Thus, a trailing ribosome might be sterically inhibited from reaching the slippery sequence if a terminating or post-termination ribosome in the frameshifting element is not rapidly removed, as in the SARS-CoV-2 PRF model [43]. In support of this model, we demonstrated that relocating the frame 0 termination codon farther downstream eliminates the requirement for efficient translation termination and ribosome recycling. This result implicates the proximity of the stop codon to the slippery sequence as the key feature that necessitates the efficient clearance of
ribosomes from the Ty1 frame 0 stop codon, promoting the frameshifting of trailing ribosomes. Our finding therefore points toward a mechanism in which the stop codon in the first stem of the pseudoknot of the Ty1 frameshifting element, in concert with the activity of the ribosome recycling machinery, plays a key role in the efficient removal of non-frameshifted ribosomes from the secondary structure and in subsequent frameshifting by incoming ribosomes. Hence, we speculate that if a mutant version of the L-A frameshifting element were generated in which the frame 0 stop codon lies less than one ribosomal footprint upstream of the slippery sequence, Sc-eIF5A would also likely promote PRF at that mutant frameshifting element. Polyamine biosynthesis is under feedback control, with the synthesis of multiple enzymes and regulators inhibited by polyamines at the translational level. High polyamine levels promote +1 ribosomal frameshifting during the decoding of OAZ in eukaryotes from yeast to humans [23,44,45]. OAZ binds to ODC, targets it for ubiquitin-independent degradation [46,47], and thereby inhibits putrescine synthesis. In addition, high polyamine levels inhibit eIF5A-dependent translation termination on the PS* uORF to repress the synthesis of the ODC antizyme inhibitor (AZIN1), a catalytically defective form of ODC that still binds to OAZ [31-33]. The reduced titration of OAZ by AZIN1 frees OAZ to target ODC for degradation, reducing ODC and inhibiting putrescine synthesis. Moreover, high polyamine levels inhibit translation termination on the MAGDIS uORF to repress the synthesis of S-adenosylmethionine decarboxylase (AdoMetDC), and vice versa [48-51]. In addition to synthesizing polyamines, cells also import polyamines. High polyamine levels also inhibit eIF5A-dependent translation termination on the MLLLPS* uORF to repress the synthesis of the polyamine transporter Hol1, and vice versa [34]. Interestingly,
polyamines universally promote +1 ribosomal frameshifting efficiency [17,18,52,53]. Based on these properties of eIF5A and polyamines, we speculate that hypusinated eIF5A promotes +1 PRF by increasing the putrescine/spermidine ratio. We propose the following model (Figure 8) for the translational control of +1 PRF in yeast. Under the condition of Sc-eIF5A depletion, PPW motif synthesis is inhibited [32]. Ribosomes that initiate at the weak start site of the uCC pause when translating the highly conserved C-terminal PPWxxPS* motif (* = stop codon). The stalled ribosome impedes scanning, and subsequent scanning ribosomes that leaky-scan past the uCC start codon without initiating are proposed to form a queue behind the stalled elongating ribosome. Eventually, the queue extends back to the uCC start codon, poising a scanning ribosome over the uCC start codon for a longer time and enhancing initiation on the uCC. Because ribosomes that translate the uCC do not reinitiate downstream, the increased translation of the uCC represses the synthesis of AZIN [31,32]. The reduced titration of OAZ by AZIN leads to the enhanced degradation of OAZ-targeted ODC, reducing ODC and inhibiting putrescine synthesis [33,46,47]; this is thought to inhibit +1 PRF, and vice versa [17,18,52,53]. On the other hand, Sc-eIF5A depletion can also impair translation termination at a Pro-Ser-stop motif in a conserved upstream open reading frame of the HOL1 mRNA to repress HOL1 synthesis [34], leading to reduced polyamine levels; this, too, is thought to inhibit +1 PRF, and vice versa. The regulation of polyamines by Sc-eIF5A is thus the result of two regulatory pathways. Taken together, eIF5A is a trans-regulator of +1 PRF for the Ty1 retrotransposon and could function universally in yeast.
The Generation of Ribosomal Frameshift Reporters

To construct the yeast Ty1, CTS2 and L-A test reporters, the respective frameshift signals were cloned into the polylinker region of pDB722. The Ty1 and CTS2 frameshift signals were amplified from the S. cerevisiae genome (TransGen Biotech, Beijing, China). The primers are shown in Table 1. The PCR products were digested with Sal I and ligated into pDB722 to create pDB722-Ty1 and pDB722-CTS2, respectively. The coding sequence of the L-A frameshift signal containing the Sal I restriction site was synthesized and subcloned into pUC57 to create pUC57-L-A. pUC57-L-A was digested with Sal I and the insert was ligated into pDB722 to create pDB722-L-A. The synthesized L-A sequence is shown in Table 2. The PRF reporter plasmids were transformed into the yeast strains, respectively.

The Generation of HYP2 Complementarity Strains

Sc-HYP2 (NC_001137.3 in the GenBank database) was amplified by PCR. Sc-HYP2 was then cloned into pRS315 to create pRS315-Sc-HYP2, which was transformed into the tif51A-1 and tif51A-3 strains harboring the pDB722 series of plasmids, respectively. The primers are shown in Table 1.

The Inhibition of HYP2 Hypusination in the WT Strain by GC7 Treatment

The WT strains harboring the pDB722 series of plasmids were grown in 5 mL of YPD medium at 25 °C until they reached approximately 1–2 × 10⁷ cells·mL⁻¹. The yeast cells were then grown in 5 mL of YPD medium containing 1 mM GC7 at 37 °C for 5 h.

The Generation of Ty1 Frame 0 Stop Codon Mutant Strains

Two nucleotides of the first stem of the pseudoknot of the Ty1 frameshifting element were substituted by directed point mutation [54]; the resulting element was named Ty1-UGU-UUC. Ty1-UGU-UUC was then cloned into pDB722 to create pDB722-Ty1-UGU-UUC, which was transformed into the WT, tif51A-1 and tif51A-3 strains, respectively. The primers are shown in Table 1.
The Detection of Polyamines in Yeast Strains by High-Performance Liquid Chromatography (HPLC)

Putrescine, spermidine, and spermine were measured by HPLC using a Supersil ODS2 5 µm column (Elite, Dalian, China). The WT, tif51A-1 and tif51A-3 strains and the tif51A-1 and tif51A-3 HYP2 complementarity strains were grown in YPD media (15 mL) at 37 °C for 5 h. Yeast cells were then collected by centrifugation at 3000× g for 5 min. Polyamines were extracted from the yeast cell lysate with 5% trichloroacetic acid (TCA) [55], and after centrifugation at 18,000× g for 5 min the supernatant was used for HPLC analysis. A reaction mixture containing 400 µL of supernatant, 1 mL of 2 M NaOH and 30 µL of benzoyl chloride was incubated at 37 °C for 20 min [56]. The benzoylated polyamines were extracted with 2 mL of ethyl ether, dried under a nitrogen flow, and dissolved in 200 µL of methanol. Aliquots (20 µL) of each sample were injected onto an ODS-C18 column (Elite, 4.6 × 150 mm) and the benzoylated products were separated at a flow rate of 1 mL/min at 30 °C with a mobile phase of 60/40 (v/v) methanol/water. The benzoylated products were monitored through changes in absorption at 254 nm.
Analysis of Ribosomal Frameshift Efficiency in tif51A-1, tif51A-3 and HYP2 Complementarity Strains

The tif51A-1 and tif51A-3 strains harboring the pDB722 series of plasmids and the HYP2 complementarity strains were grown in 5 mL of YPD medium at 25 °C until they reached approximately 1-2 × 10^7 cells·mL^−1. The tif51A-1 and tif51A-3 strains harboring the pDB722 series of plasmids were then grown in uracil− liquid media (15 mL) at 37 °C for 5 h, and the HYP2 complementarity strains were grown in uracil− and leucine− liquid media (15 mL) at 37 °C for 5 h. The yeast cells were collected for subsequent protein analyses by Western blot, mRNA analyses by quantitative real-time PCR, and ribosomal frameshift efficiency analyses by dual-luciferase assays. The WT strains harboring the pDB722 series of plasmids and the HYP2 hypusination site mutant strains were grown in 5 mL of YPD medium at 25 °C until they reached approximately 1-2 × 10^7 cells·mL^−1. The HYP2 hypusination site mutant strains were then grown in uracil− and leucine− liquid media (15 mL) at 37 °C for 5 h, and the GC7-treated WT strains were grown in YPD liquid media (15 mL) at 37 °C for 5 h. The yeast cells were collected for subsequent protein analyses by Western blot and ribosomal frameshift efficiency analyses by dual-luciferase assays.

Analysis of Ribosomal Frameshift Efficiency in Ty1 Frame 0 Stop Codon Mutant Strains

The Ty1 frame 0 stop codon mutant strains were grown in 5 mL of uracil− liquid medium at 25 °C until they reached approximately 1-2 × 10^7 cells·mL^−1, and then in uracil− liquid media (15 mL) at 37 °C for 5 h. The yeast cells were collected for subsequent ribosomal frameshift efficiency analyses by dual-luciferase assays.

Quantitative Real-Time PCR

Total RNA was isolated from yeast cells using the E.Z.N.A.
® Yeast RNA Kit (Omega, Norcross, GA, USA). For cDNA synthesis, the extracted total RNA (2 µg) was treated in a Quantscript RT Kit reaction system containing 1 µM Oligo(dT) primer, 20 U TransScript® RT/RI Enzyme Mix, 2× TS Reaction Mix and gDNA Remover (TRAN, Shanghai, China). Using the resulting cDNA as template and Sc-actin as an internal reference gene, FL was analyzed by the quantitative real-time PCR (qPCR) method. The primers are shown in Table 1. All qPCR reactions were set up using TransStart® Green qPCR SuperMix (TRAN, Shanghai, China) and performed on an Applied Biosystems 7500 Fast Real-Time PCR system (Waltham, MA, USA).

Western Blot

Cells were lysed in RIPA lysis buffer (Beyotime, Jiangsu, China). Samples were frozen and thawed for 1 h, followed by centrifugation at 17,000× g for 30 min at 4 °C. The cleared protein lysate was denatured with 5× loading buffer for 10 min at 95 °C and loaded on precast 10% to 15% Bis-Tris protein gels. Proteins were transferred onto nitrocellulose membranes using the iBLOT 2 system (BIO-RAD, Hercules, CA, USA) following the manufacturer's protocols. Membranes were blocked with 5% w/v milk and 0.1% Tween-20 in PBS for 1 h. The membranes were then incubated with anti-HA (1:100, Sangon Biotech, Shanghai, China), anti-α-tubulin (1:400, Sangon Biotech, Shanghai, China) or anti-hypusine (1:400, AtaGenix, Wuhan, China) overnight at 4 °C. The membranes were subsequently incubated with secondary antibody (Abcam, Cambridge, UK) in 5% milk and 0.1% Tween-20 in PBS for 2 h and visualized using a Licor Odyssey infrared scanner (Odyssey Clx, Gene Company Limited, Shanghai, China). The optical density of the signals was quantified using grayscale measurements in ImageJ software V1.8.0 and converted to fold change. The monoclonal antibody against hypusine was produced by immunizing rabbits with the synthetic peptide "C-Ahx-STSKTG[hypusine]HGHAKV-amide".
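The band-quantification step above (grayscale optical density converted to fold change) can be sketched in Python. The normalization scheme shown — dividing each band by its loading control (α-tubulin) and then by the reference lane — is an assumption about typical densitometry practice, and the density values are purely illustrative, not measured data.

```python
def fold_change(target, loading, target_ref, loading_ref):
    """Normalize a band's grayscale density to its loading control
    (e.g. alpha-tubulin), then express it relative to the reference
    lane (e.g. the WT sample)."""
    return (target / loading) / (target_ref / loading_ref)

# Illustrative densities (arbitrary grayscale units, not real data):
print(round(fold_change(target=4200, loading=10000,
                        target_ref=2100, loading_ref=10000), 2))  # 2.0
```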
Dual-Luciferase Assays

The yeast cells were harvested and lysed with 1 mL of ice-cold 1× passive lysis buffer from the Dual-Luciferase® Reporter Assay System (Promega, Madison, WI, USA). Lysates were cleared by centrifugation at 15,000× g for 2 min, and the supernatant was assayed for Renilla luciferase (Rluc) and Firefly luciferase (Fluc) activities by adding 10 µL of lysate and 10 µL of each reagent, as per the Promega protocol, using a GloMax™ 20/20 Assay System (Promega, Madison, WI, USA). Frameshift efficiencies were calculated by dividing Fluc values by Rluc values and then dividing the relative ratios by the average Fluc to Rluc ratio of the in-frame control reporter.

Figure 2. Secondary structural representation of the PRF signals used in this study. Graphical representation of the pDB722-Ty1 (A) and pDB722-L-A (B). (C) The formula for calculating the +1 PRF efficiency. Red and green underlined text represent the slippery sequences of Ty1 and L-A, respectively.

Figure 3. Depletions of Sc-eIF5A in the tif51A-1 and tif51A-3 strains decrease +1, but not −1, PRF, respectively. (A) Dual-luciferase reporter plasmids containing Fluc and Rluc coding regions separated by the +1 PRF signal from the yeast Ty1 retrotransposon, or (B) the −1 PRF signal from the yeast L-A virus, or the 0-frame control were introduced into WT, tif51A-1 and tif51A-3 strains. PRF efficiencies (%) were calculated by dividing the ratio of Fluc to Rluc obtained with the reporter versus the 0-frame control plasmid. Error bars denote SD. * p < 0.05, ns, not significant (Student's two-tailed t test, n = 3, assayed in duplicate).

Figure 4. Sc-eIF5A could complement the functional loss of the tif51A-1 and tif51A-3 mutant strains. (A) Expression of the eIF5A-HA fusion gene in tif51A-1 strains containing the Ty1, CTS2 or L-A PRF construct, shown by Western blot analysis. (B) Expression of the eIF5A-HA fusion gene in tif51A-3 strains containing the Ty1, CTS2 or L-A PRF construct, shown by Western blot analysis. "+" and "−" denote that Sc-eIF5A was transformed or untransformed in the Western blot analysis.

Figure 5. Re-introduction of Sc-eIF5A in the tif51A-1 and tif51A-3 strains enhanced Ty1 +1 PRF. (A) tif51A-1 and Sc-eIF5A-complemented tif51A-1 strains were grown at 37 °C for 5 h. (B) tif51A-3 and Sc-eIF5A-complemented tif51A-3 strains were grown at 37 °C for 5 h. Dual-luciferase reporter plasmids containing Fluc and Rluc coding regions separated by the +1 PRF signal from the yeast Ty1 retrotransposon, or the −1 PRF signal from the yeast L-A virus, or the 0-frame control were introduced into WT, tif51A-1 and tif51A-3 mutant strains. PRF efficiencies (%) were calculated by dividing the ratio of Fluc to Rluc obtained with the reporter versus the 0-frame control plasmid. Results are the average of at least three independent experiments. Relative Ty1-Fluc (C) and L-A-Fluc (D) mRNA levels were determined by qPCR and first normalized to actin mRNA. Then, for each panel, measurements were normalized to WT samples. Error bars denote SD. * p < 0.05, ** p < 0.01, ns, not significant (Student's two-tailed t test, n = 3, assayed in duplicate).

Figure 7. The dependency of Ty1 +1 PRF on Sc-eIF5A is determined by the distance between the frame 0 stop codon and the slippery sequence. (A,B) Upper, the sequence and secondary structure of the tested frameshifting elements. Lower, the effect of the loss of Sc-eIF5A on the frameshifting of each construct. Red underlined text represents the slippery sequence of Ty1. Purple underlined text represents Ty1 frame 0 stop codons. Error bars denote SD. * p < 0.05, ns, not significant (Student's two-tailed t test, n = 3, assayed in duplicate).

Figure 8. Schematic model of eIF5A and its target genes' AZIN1 and HOL1 functions. Under the condition of eIF5A depletion, AZIN synthesis is suppressed. Down-regulation of the titration of OAZ via AZIN leads to enhanced degradation of OAZ-targeted ODC, reducing ODC, inhibiting putrescine synthesis, and suppressing further +1 PRF. In addition, eIF5A depletion also leads to decreased translation of HOL1 mRNA, reducing the levels of polyamines and repressing the +1 PRF. Under the condition of eIF5A complementarity, AZIN synthesis is promoted. Titration of OAZ via AZIN prevents OAZ from targeting ODC for degradation, stabilizing ODC, enhancing putrescine synthesis, and promoting further +1 PRF. Moreover, eIF5A complementarity also leads to increased translation of HOL1 mRNA, increasing the levels of polyamines and promoting the +1 PRF. Solid arrows represent promotion of the downstream protein by the upstream protein, whereas dashed arrows represent inhibition.

4.9. Analysis of Ribosomal Frameshift Efficiency in HYP2 Hypusination Site Mutant Strains and WT Strain by GC7 Treatment

Table 1. Sequence of primers.
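The frameshift-efficiency normalization described in the Dual-Luciferase Assays section (the Fluc/Rluc ratio of the test reporter divided by the Fluc/Rluc ratio of the in-frame control) can be sketched as follows. The luminescence values below are illustrative placeholders, not measured data.

```python
def frameshift_efficiency(fluc_test, rluc_test, fluc_ctrl, rluc_ctrl):
    """Percent PRF efficiency: (Fluc/Rluc) of the test reporter relative
    to the in-frame (0-frame) control reporter, expressed as a percentage."""
    return 100.0 * (fluc_test / rluc_test) / (fluc_ctrl / rluc_ctrl)

# Illustrative readings (arbitrary luminescence units, not real data).
# The 0-frame control translates Fluc in frame, so its ratio defines 100%.
eff = frameshift_efficiency(fluc_test=1.2e4, rluc_test=3.0e5,
                            fluc_ctrl=2.4e5, rluc_ctrl=3.0e5)
print(round(eff, 1))  # 5.0 -> a 5% +1 PRF efficiency
```

In practice the control ratio would be the average over replicate control measurements, as stated in the protocol.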
Railway Track Crack Detection Vehicle

In this paper, we present a railway track crack detection patrolling vehicle. Indian Railways has one of the world's largest railway networks, comprising 92,081 km (57,216 mi) of track over a route of 66,687 km (41,437 mi) and 7,216 stations. Manually inspecting these railway tracks and detecting cracks is therefore a very tedious process that consumes a lot of time and human resources. This paper proposes a cost-effective solution to the problem of railway track crack detection utilizing an IR sensor array assembly which tracks the exact location of the faulty track and then informs the nearby railway station through a short-messaging application, so that many lives can be saved. There are many advantages to this system when compared with traditional detection techniques, including lower cost, low power consumption and less analysis time.

I. INTRODUCTION

India has the fourth largest railway network in the world. The Indian rail network is still on a growth trajectory, trying to fuel the economic needs of our nation. However, in terms of reliability and safety parameters, we have not yet reached truly global standards. Cracks in rails have been identified as the main cause of derailments in the past, yet no cheap automated solutions have been available for testing purposes. On further analysis of the factors that cause these rail accidents, recent statistics reveal that approximately 60% of all rail accidents have derailments as their cause, of which about 90% are due to cracks on the rails, either from natural causes (like excessive expansion due to heat) or from antisocial elements. This track detection vehicle system is basically an electronic device that detects cracks in the track and provides the location of the crack to nearby stations.
This method, which utilizes simple components including a Zigbee transmitter and receiver, a microcontroller, IR sensors, a photodiode and an LED-display-based crack detector assembly, is very useful in railway crack detection. The vehicle works as a patrolling vehicle on the tracks at regular intervals and transmits updated information regarding track conditions. If any fault is detected, it transmits the fault location as well as the distance of that particular fault from the starting point by means of the Zigbee transmitter to nearby stations. Hence, the necessary steps can be taken as soon as possible by the railway authorities to overcome the conditions. This setup is very cost-effective as well as highly reliable. The proposed setup would make the inspection and maintenance of railway tracks easier and help monitor them efficiently by replacing the human inspection that is currently followed. The design of the vehicle and the software related to it are very simple and can be easily adopted by the present system. This idea can be implemented in the long run to facilitate better safety standards and provide effective testing infrastructure for achieving better results in the future.

II. BLOCK DIAGRAM

This is the basic block diagram of our proposed project. It shows the fundamental working process of this patrolling vehicle. The assembly consists of two DC motors for movement of the vehicle over the railway track, IR (LED) sensors and a photodiode for crack detection, and an LCD that shows the fault location with its distance from the starting point. The transmitter transmits the information to nearby stations. All components are connected to the microcontroller.

1) Micro controller: This section forms the control unit of the whole project. It basically consists of a microcontroller with its associated circuitry, such as a crystal with capacitors, reset circuitry, pull-up resistors and so on.
The microcontroller forms the heart of the project because it controls the devices being interfaced and communicates with them according to the program being executed.

2) Photodiode: A photodiode is a light-sensitive semiconductor diode which converts light energy into voltage or current, depending on the mode of operation. In general, photodiodes are operated in reverse bias. A clear photodiode can detect both visible and IR rays; to limit the photodiode to detecting only IR rays, a black coating is applied to its glass. The photodiode allows current to pass through it when exposed to IR rays and blocks current when no IR rays fall on it. The amount of current passed through the photodiode is directly proportional to the amount of IR radiation falling on it.

3) Liquid-crystal display (LCD): This is a flat-panel electronic visual display that uses the light-modulating properties of liquid crystals. Liquid crystals do not emit light directly. LCDs are available to display arbitrary images or fixed images which can be displayed or hidden, such as preset words, digits, and 7-segment displays as in a digital clock. They use the same basic technology, except that arbitrary images are made up of a large number of small pixels, while other displays have larger elements.

4) IR obstacle sensor: This sensor is a short-range obstacle detector with no dead zone. It has a reasonably narrow detection area, which can be increased using the dual version. The range can also be increased by raising the power to the IR LEDs or by adding more IR LEDs. The test setup uses IR LEDs as a light source and two phototransistors in parallel as the receiver; one of each would suffice, but spreading them out covers a wider area. It has a range of about 10-15 cm (4-6 inches) with a hand as the object being detected.
5) Zigbee transmitter and receiver: Zigbee devices are low-power, low-cost electronic devices intended to create personal area networks with low-power radios. The major advantage of a Zigbee transmitter is that it can transfer data to multiple receivers simultaneously. The physical range of a Zigbee transceiver is approximately 10 to 20 metres. Due to the low power consumption, the transmission distance is limited to 10-100 metres line-of-sight, depending on power output and environmental characteristics. Zigbee devices can transmit data over long distances by passing it through a mesh network of intermediate devices to reach more distant ones.

6) DC motor: A DC motor's speed can be controlled over a wide range, using either a variable supply voltage or by changing the strength of the current in its field windings, and it has the highest starting torque. Therefore, two DC motors are used here to run the vehicle on the tracks.

IV. WORKING

The vehicle used in this system has an IR sensor on each side. As the vehicle travels over the track, the IR transmitter transmits an IR signal continuously. When there is a crack in the track, the transmitted IR signal passes through the crack and is received by the receiver. The IR receiver is initially active-low; when the transmitted IR signal is received, the IR receiver switches to active-high. This signal is fed to the comparator, which compares the transmitted and received signals. When the signal is high, the comparator passes the signal to the microcontroller, which indicates that a crack has been detected. At this instant the motor is stopped. The microcontroller also tracks the distance travelled by the vehicle from a fixed point and sends this distance of the crack to the nearby base station using the Zigbee transmitter. The Zigbee receivers installed at the several base stations receive this information, and the nearest base station then takes the suitable and necessary steps to prevent any calamities.

V.
FLOW CHART

The flow chart shows the step-wise working of the railway track crack detection vehicle.

VII. APPLICATIONS

The project can be used as a patrolling vehicle for the inspection of cracks, tear and wear at various places, such as:
- Automatic detection of cracks on railway tracks.
- Calculation of the distance of the crack from the origin.
- Automatic crack detection in forged metal parts.
- Detection of cracks in concrete pipes.

VIII. CONCLUSION

In this project, using this autonomous patrolling vehicle for railway track inspection and crack detection will have a great impact on the maintenance of the tracks, which will help in preventing train accidents to a very large extent. Hence, owing to the crucial nature of this problem, we have worked on implementing an efficient and cost-effective solution suitable for this application. This system automatically detects the faulty rail track without any human intervention. There are many advantages to the proposed system when compared with traditional detection techniques, including lower cost, low power consumption and less analysis time. The railway track crack detection vehicle is designed in such a way that it detects cracks or deformities on the track which, when rectified in time, will reduce train accidents. With the proposed system, the exact location of the faulty rail track can easily be located and mended immediately, so that many lives can be saved. Using the LED-photodiode assembly for the railway track crack detection system, we obtained an accuracy of up to 80%.
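The detection logic in the Working section (IR receiver goes high when transmitted light passes through a crack; the controller then stops the motors and reports the distance from the origin over Zigbee) can be sketched as pure decision functions. The ADC threshold and the alert message format below are assumptions for illustration, not part of the actual firmware.

```python
def crack_detected(adc_reading, threshold=512):
    """The IR receiver is active-low on sound track: a reading above the
    (assumed) threshold means transmitted IR light reached the photodiode
    through a crack in the rail."""
    return adc_reading > threshold

def build_alert(distance_m):
    """Assumed payload format for the Zigbee transmitter: the distance of
    the fault from the starting point, in metres."""
    return f"CRACK at {distance_m} m from origin"

# On a sound track the receiver stays low; over a crack it goes high.
assert not crack_detected(120)
print(build_alert(1540) if crack_detected(800) else "track OK")
# -> CRACK at 1540 m from origin
```

On real hardware these functions would sit inside the main polling loop, with `adc_reading` coming from the comparator output and `distance_m` from the wheel-rotation count.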
Explaining the magnetic moment reduction of fullerene-encapsulated Gadolinium through a theoretical model

We propose a theoretical model accounting for the recently observed reduced magnetic moment of Gadolinium in fullerenes. While this reduction has also been observed for other trivalent rare-earth atoms (Dy3+, Er3+, Ho3+) in fullerenes and can be ascribed to crystal field effects, the explanation of this phenomenon for Gd3+ is not straightforward due to the sphericity of its ground state (S = 7/2, L = 0). In our model the moment lowering is the result of a subtle interplay between hybridisation and spin-orbit interaction.

INTRODUCTION

Endohedral metallofullerenes M@C82 are novel materials that have attracted wide interest in physics and chemistry, but also in material and biological sciences, for the large variety of promising applications of their peculiar properties [1,2,3]. In endohedral metallofullerenes, a positively charged core metal sits off-center in a negatively charged, strong carbon cage, resulting in strong metal-cage interaction and intrafullerene charge transfer from the metal to the cage [1,4,5,6]. The magnetism of these systems is mainly due to the spin of the entrapped metals. In a series of average magnetization measurements a paramagnetic behaviour has been observed [7,8,9,10], with negative Weiss temperatures. The negative Weiss temperature indicates the presence of a weak antiferromagnetic interaction between the cage and the metal, and between neighbouring cages, but for heavy rare-earth (RE) endofullerenes [10] ferromagnetic coupling has been mentioned in the sub-20 K range. In the case of heavy RE these experiments gave a number of magnetons per encaged ion that is smaller than for the free ion. This result has been phenomenologically ascribed to the cage crystal field interaction for high-L ions and, for the L = 0 Gd case, to the antiferromagnetic interaction between the ion and the cage.
A recent work [11] has used x-ray magnetic circular dichroism (XMCD) and x-ray absorption spectroscopy (XAS) to characterize the local magnetic properties of heavy RE metallo centers, using the M4 and M5 resonances (3d → 4f transitions). The absorption spectra of this work were very well fitted assuming trivalent ions (4f^n electronic structure with n = 7 for Gd, n = 9 for Dy, ...), while XMCD confirmed that there is a strong reduction of the measured ion magnetisation compared to the free-ion case. For L ≠ 0 ions the reduction was reproduced by a model Hamiltonian where a weak crystal field prevents the ion total angular moment J from aligning completely along the magnetic field. The case L = 0 of trivalent Gadolinium was more difficult, and the hybridisation model did not give a satisfactory explanation. In fact, although hybridisation gives antiferromagnetic coupling with the cage and accounts (in the Gd case) for a 14% reduction of the average moment (Gd + cage), it cannot explain the reduction of the moment localized on the Gd ion (see next section). What we show in the present paper is that a combined action of hybridisation and spin-orbit interaction can have a dramatic effect on the observed magnetic moment. This effect is not trivial and is significant only in a restricted parameter region that was not discovered in the previous numerical study [11]. In the present paper we give a complete analytical discussion of this effect. In the next section we introduce a simple model with an anisotropic hybridisation where the Gd ground state is basically 4f^7 with a small 4f^8 component due to backdonation from the cage. We then consider the spin-orbit interaction at first order and show the dramatic changes in magnetisation. Analytical formulas are compared to exact numerical solutions. Finally we discuss the possible experimental manifestations of the studied phenomena.

HYBRIDISATION MODEL AND SPIN-ORBIT

Back donation concerns a cage unpaired electron.
As the RE ion is off-center, we consider the hopping to be non-isotropic and choose to restrict the transfer of the cage backdonated electron to the 4f orbital that is closest to the cage, the m_z = 0 one, where the z-axis is parallel to the off-center displacement. The Gd ground state has S = 7/2, L = 0. Adding one more electron to this state, one can access only the quantum numbers S = 3, L = 3 of the 4f^8 configuration [12]. The energy difference between the 4f^8 level and the 4f^7 ground state will be named, without SO coupling, ∆E. This is a positive scalar quantity and must be large in magnitude compared to the hopping strength t, because the fractional backdonation has been observed to be very small. We consider an effective interaction term proportional to t^2. This interaction transforms the state (S_z, s) (where S_z is the spin z-component of the 4f^7 shell and s that of the cage unpaired electron) into the state (S_z + 2s, −s), and the other way round. We can therefore restrict the effective Hamiltonian to the two-dimensional space spanned by (S_z, s) and (S_z + 2s, −s). The elements of the effective interaction are given by equation (1), where E_0 is the 4f^7 ground-state energy, c†_{ms} and c_{ms} are creation/annihilation operators for a 4f electron with m_z = m and s_z = s, and the sum runs over the states η of the 4f^8 configuration. The case of Gd is quite simple because the only 4f^8 state having a non-zero parentage coefficient with the 4f^7 ground state is the S = 3, L = 3 one. The result of operating on the 4f^7 ground state can be expressed in terms of the parentage coefficients and angular recoupling factors using well-known formulas [13]. Putting this formula into (1), the Hamiltonian can be written as in equation (3), in terms of the versor (unit vector) v_{S_z} of equation (4). For positive ∆E the ground state of H is v_{S_z}; it corresponds to antiferromagnetic alignment (total angular moment J = 3) and has energy −(8/7) t^2/∆E. The state perpendicular to v_{S_z} has energy zero and corresponds to total angular moment J = 4.
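This two-level structure can be illustrated numerically by writing the effective Hamiltonian as a rank-1 projector, H = −(8/7)(t²/ΔE)|v⟩⟨v|. The mixing weights 7/8 and 1/8 used for v below are an assumption read off from the saturation value S_z = (7·7/2 + 5/2)/8 of equation (5); the sketch simply checks the two eigenvalues and the reduced local Gd spin in the antiferromagnetic ground state.

```python
import numpy as np

t, dE = 0.05, 1.0          # hopping and 4f8-4f7 splitting, in eV
# Assumed mixing amplitudes, from the saturation value in equation (5):
# weight 7/8 on (S_z = 7/2, s = -1/2) and 1/8 on (S_z = 5/2, s = +1/2).
v = np.array([np.sqrt(7 / 8), -np.sqrt(1 / 8)])
H = -(8 / 7) * t**2 / dE * np.outer(v, v)   # rank-1 effective Hamiltonian

E, U = np.linalg.eigh(H)    # eigenvalues in ascending order
E0 = E[0]                   # antiferromagnetic ground state, -(8/7) t^2/dE
gs = U[:, 0]
Sz = gs[0]**2 * 3.5 + gs[1]**2 * 2.5        # local Gd spin in the ground state

print(round(E0, 6), round(Sz, 4))           # -0.002857 3.375
```

The second eigenvalue is zero, reproducing the perpendicular (J = 4) state of the text, and the ground-state spin 3.375 matches equation (5).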
There is no energy dependency on S_z, as could have been expected on the basis of rotational invariance in spin space, as long as the spin-orbit interaction is not included. The local moment of Gd, in the antiferromagnetic ground state, can be almost fully aligned along the magnetic field. On the basis of equation (4) one should observe, at saturation,

S_z = (7/2 × 7 + 5/2)/8 = 3.375   (5)

corresponding to a 3.6% reduction, which is very far from the observed 20%. The antiferromagnetic metal-cage coupling alone cannot explain the moment reduction observed with the XAS and XMCD techniques. Therefore we add the spin-orbit (SO) coupling to the picture, which breaks rotational invariance in spin space. At first order, SO splits the energies of the S′ = 3, L′ = 3 state according to the total J′, and affects the denominator involved in equation (1). As a result the equation for H, given by equation (3) in the zero-SO case, must be rewritten in the form of equations (6)-(7), where S′_z = S_z − 1/2. From a formal point of view SO gives, at first order, also another contribution besides affecting the propagator denominator. In fact the 4f^7 ground state, considering the SO interaction, is not a pure S = 7/2, L = 0 state, but also has a small component of the S = 5/2, L = 1 state, whose amplitude is first order in the SO strength. It is this perturbed ground state that should be considered in equation (1). However, for symmetry reasons, in the framework of our model where backdonation affects only the m_z = 0 orbital, the first-order SO contribution coming from this S = 5/2, L = 1 component is zero. For the moment we therefore restrict the discussion to equation (7), which contains all the important physics of the studied phenomena. The S_z dependency in equation (7) is given by the J′ dependency of E^{SO}_{J′}. If the denominator in equation (7) were constant, one could factor it out (equation (8)) and obtain again equation (3).
For the 4f⁸, S′ = 3, L′ = 3 state, the energy E_SO^J′ is given by equation (9), E_SO^J′ = −(ζ_nl/12)[J′(J′ + 1) − 24], where ζ_nl is the strength of the SO interaction. This equation can be obtained in a simple way: the state obtained by putting seven spin-up electrons in the 4f shell, plus one spin-down electron in the m_z = 3 state, has S = 3, L = 3, J = 6. It is very easy to calculate for this mono-determinantal state the expectation value of the S·L scalar operator; it is −1.5 (the m_z = 3 of the eighth electron times its spin). The expectation values for the other J′ can be calculated by observing that the expectation value of a scalar product of two L = 1 tensors must be proportional to the 6j factor for the angular-momentum sextet (3, 3, J′, 1, 1, 0). By a quick glance at 6j tables, equation (9) is readily obtained. The energy correction E_SO^J′, considering ζ_nl = 0.1975 eV [14], has the negative value of about −0.3 eV for J′ = 6 and a positive value of about 0.4 eV for J′ = 0. These values have to be compared to ∆E. Taking ∆E of the order of 1 eV, the effect of E_SO^J′ is not negligible, and the dependence on S_z is given mainly by the term with the smaller denominator, the J′ = 6 term. The Wigner symbol has its largest value for the smallest S′_z, as one can understand classically by considering that to get J′ = 6 one has to align S′ and L′. Therefore the ground state has the smallest S′_z and, unless the polarising magnetic field is sufficiently strong, the observed local magnetic moment will be zero. The J′ = 6 term is the only one to consider for ∆E approaching the value of 0.3 eV, because its denominator in equation (7) tends to zero. But a small denominator means strong hybridisation, while hybridisation is weak because the encaged Gd is an almost pure 4f⁷ configuration. One should therefore consider the region where ∆E + E_SO^J′ is large compared to the hopping strength. For ∆E going to infinity the studied effect cancels out (see equation (8)).
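The first-order splitting can be checked numerically; the sketch below uses the Landé interval form E_SO(J′) = −(ζ_nl/12)[J′(J′+1) − 24], an assumption consistent with the two anchor values quoted in the text (−1.5ζ for J′ = 6 and +2ζ for J′ = 0), with ζ_nl taken from reference [14]:

```python
zeta = 0.1975  # eV, 4f spin-orbit strength from reference [14]

def e_so(jp, zeta=zeta):
    """First-order SO energy of the 4f8 S' = 3, L' = 3 term.

    <L.S> = (1/2)[J'(J'+1) - L'(L'+1) - S'(S'+1)] with L' = S' = 3, and an
    effective coupling lambda = -zeta/6 for the more-than-half-filled shell.
    """
    return -(zeta / 12) * (jp * (jp + 1) - 24)

for jp in range(7):
    print(jp, round(e_so(jp), 3))
# J' = 6 gives about -0.30 eV and J' = 0 about +0.40 eV, as quoted above.
```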
So we are going to study a region where the effect results from an imperfect cancellation of the different terms involved in the sum of equation (7). As a cancellation is involved, one has to be very precise in evaluating each single term in the sum. Therefore we devote particular attention to the exact values of E_SO^J′. These values could be obtained at second order using the Racah formalism, summing contributions from all the 4f⁸ states accessible by operating with the spin-orbit interaction on the 4f⁸, S = 3, L = 3 state. However, for the scope of this paper, which is to clarify the consequences of equation (7), it is sufficient to plug into the sum the energies obtained by exact diagonalisation of the Gd²⁺ ion. We show in table (1) the comparison of the E_SO^J′ energies calculated at first order by equation (9) with the exact numerical result. The numerical value of E_SO^J′ is obtained by calculating numerically the energy eigenvalues of the 4f⁸ Hamiltonian and subtracting the ground-state energy of the 4f⁷ one, where the SO interaction is accounted for in both Hamiltonians. In the numerical calculation, F₂, F₄, F₆ are taken from Thole [14]. The parameter F₀ is already contained in ∆E, so we take F₀ equal to zero in the numerical calculations.

TABLE I: Dependence of the 4f⁸ ground-state energies on the total moment J. The first-order formula (first line) is compared to numerical results (second line) obtained using parameters from reference [14]. Units are eV.

In figure (1) we show the energies, as a function of ∆E, of the antiferromagnetic eigenstates of H_SO(S_z) (equation (6)) for the different S′_z; the energy for S′_z = 0 has been subtracted (it is taken as the origin of the energy scale). In the left panel the 4f⁸ energies entering equation (6) are calculated at first order in SO, while in the right panel numerical 4f⁸ energies for an isolated ion are used. The parameters used in the calculation are t = 0.05 eV and ∆E between 0.4 and 1 eV.
One can observe that the simple formula (6) is an excellent approximation when exact energies are considered in the denominator. The energy splitting has to be compared with the magnetic-field strength. Considering a typical XAS-XMCD experimental case [11] with a 7 Tesla field, the energy gained by aligning about 7 Bohr magnetons from the perpendicular to the parallel direction with respect to the field is 0.4 meV. This energy is of the same order of magnitude as, or lower than, the splitting caused by hybridisation plus spin-orbit. Depending on t and ∆E, this effect can therefore prevail over the magnetic polarizing field and suppress the magnetization partially or completely.

DISCUSSION AND CONCLUSIONS

We have shown in the previous section that a very small anisotropic hybridisation (t = 0.05 eV) can give, for weak magnetic fields, a complete suppression of the magnetization along the encaged-metal displacement axis. At zero temperature the magnetization would be a discontinuous function of the magnetizing field. For a polarizing field perpendicular to the displacement axis, the magnetization curve would instead be continuous. In a real system one should take into account temperature and disorder. Temperature would tend to smear the discontinuities. Disorder may be of different kinds. One kind of disorder consists of fluctuations in ∆E due to inhomogeneities of the cage environments. As ∆E affects the propagator denominator of equation (7), such fluctuations might greatly influence the experimental result: discontinuities could be smeared out because the moment of cages having lower ∆E is depressed more than that of higher-∆E ones. Disorder in the direction of the displacement axis in the sample would have a similar effect. These considerations could explain why experiments show magnetization curves that are continuous and saturate at reduced values. As an example, we calculate magnetization curves at different temperatures in the case of random orientation of the displacement axis.
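The temperature smearing invoked above can be sketched with a generic Boltzmann average over discrete levels. The true level energies would come from diagonalizing H_SO(S_z) in equation (6), which is not reproduced here, so the function below simply takes (energy, moment) pairs as input; the doublet used in the example is an illustrative assumption, not the paper's actual calculation:

```python
import math

MU_B = 5.7884e-5  # Bohr magneton in eV/T
K_B = 8.6173e-5   # Boltzmann constant in eV/K

def mean_moment(levels, B, T):
    """Boltzmann-averaged moment (in Bohr magnetons) of discrete levels.

    levels: list of (zero-field energy in eV, moment along the field in mu_B).
    Each level is Zeeman-shifted by -m * mu_B * B before thermal averaging.
    """
    energies = [e - m * MU_B * B for e, m in levels]
    e0 = min(energies)
    weights = [math.exp(-(e - e0) / (K_B * T)) for e in energies]
    return sum(m * w for (_, m), w in zip(levels, weights)) / sum(weights)

# Illustrative doublet of +/- 3.375 mu_B split by 0.4 meV (cf. the text).
levels = [(0.0, 3.375), (0.0004, -3.375)]
print(mean_moment(levels, B=7.0, T=1.8))
```

At low temperature the lower level dominates and the moment saturates; raising the temperature (or increasing the splitting) smears the magnetization curve, as discussed in the text.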
We consider the parameters ∆E = 1 eV and t = 0.2 eV. The magnetization is shown in figure 2 for 1.8, 3, 6, and 8 K. These data have to be compared with figure 8 of reference [10]. The experimental behaviour is reproduced. The above discussion leaves the problem still open. First of all, further investigation is needed to better evaluate the real values of ∆E and t, which in this work we have chosen arbitrarily, with the only criterion of giving a numerical example based on conservative values (small t and non-negligible ∆E). Second, a comparison with a complete set of data, using a realistic modelling of the sample, should be done. However, we can conclude that the effect is important even for very small hybridisation, and therefore cannot be ignored if one really wants to understand the magnetic properties of encaged RE. I am grateful to the people of the ID08 beamline at ESRF, in particular Nick Brookes and Celine De Nadai, for introducing me to this subject and motivating this analysis, and for the very fruitful discussions.
Serotonin Neuronal Function from the Bed to the Bench: Is This Really a Mirrored Way?

Significance Statement

Induced pluripotent stem cells (iPSCs) offer a great opportunity to recapitulate both normal and pathological development of brain tissues. Recently, three research teams have developed human-PSC technology and direct somatic cell reprogramming to allow induction of human serotonin (5-HT; 5-hydroxytryptamine) neurons in vitro. While preclinical studies have repeatedly shown that 5-HT suppresses 5-HT neuronal firing activity, one group has tested the effect of 5-HT on the neuronal activity of those 5-HT-like cells and found a paradoxical excitatory action of 5-HT. Here, we argue that a few cautions in translational interpretations have to be taken into account. Nonetheless, utilizing patient-derived cells for generating disease-relevant cell types truly offers a new and powerful approach for investigating mechanisms playing fundamental roles in psychiatric disorders.

Induced pluripotent stem cells (iPSCs) offer a great opportunity to recapitulate both normal and pathological development of brain tissues and may as well provide essential strategies toward cell-based therapy of neuropsychiatric diseases (Vadodaria et al., 2018). Successfully, in 2016, three research teams developed human-PSC technology (Lu et al., 2016) and direct somatic cell reprogramming (Vadodaria et al., 2016a; Xu et al., 2016) to allow induction of human serotonin neurons in vitro for the first time (for review, see Vadodaria et al., 2016b). Remarkably, Lu et al. (2016) demonstrated the accurate, timely regulation of the WNT, SHH, and FGF4 signaling pathways during serotonergic (5-HT) neuron differentiation and generated an enriched population of 5-HT neurons from human embryonic stem cells (ESCs) and iPSCs. These human 5-HT neurons not only express specific biomarkers (TPH2, 5-HT, GATA3, GATA2, FEV, LMX1B, SERT, AADC, and VMAT2) but also show electrophysiological activities and release 5-HT in response to stimuli in a dose-dependent and time-dependent manner (Lu et al., 2016).
Subsequently, this group further analyzed the features of human iPSC-derived 5-HT neurons both in vitro and in vivo. They found that these human 5-HT neurons are sensitive to the specific neurotoxin 5,7-dihydroxytryptamine in vitro. After being transplanted into newborn mice, the cells not only expressed their typical molecular markers but also showed migration and projection to the cerebellum, hindbrain, and spinal cord. Clearly, the obtained human iPSC-derived neurons exhibit the typical features of the 5-HT neurons in the brain (Cao et al., 2017). As observed in vivo, a recent study also described a selective serotonin reuptake inhibitor (SSRI)-dependent elevation of extracellular 5-HT concentrations, caused by exposure of human iPSC-derived 5-HT neurons to the antidepressant citalopram (Vadodaria et al., 2019). Accordingly, somatic cells were also shown to be directly converted to functional neurons (directly induced neurons) through ectopic expression of neural conversion factors.
Consequently, dopaminergic, cholinergic, or striatal medium spiny neurons have recently been generated directly from human fibroblasts by using forced expression of lineage-specific transcription factors acting during brain development (Miskinyte et al., 2017). Following this approach, Xu et al. (2016) demonstrated the efficient conversion of human fibroblasts to serotonin induced neurons following expression of the transcription factors Ascl1, Foxa2, Lmx1b, and FEV. The authors showed that the transdifferentiation was enhanced by p53 knock-down and suitable culture conditions (including hypoxia, which was shown to increase the yield of 5-HT neurons). Importantly, Xu et al. (2016) verified that serotonin induced neurons were able to express markers for mature 5-HT neurons, presented Ca²⁺-dependent 5-HT release and selective 5-HT uptake, and exhibited spontaneous action potentials and spontaneous excitatory postsynaptic currents. Surprisingly, however, bath application of 5-HT significantly increased the firing rate of spontaneous action potentials. In parallel, Vadodaria et al. (2016a) showed that overexpressing a different combination of 5-HT phenotype-specific transcription factors (NKX2.2, FEV, GATA2, and LMX1B), in combination with the neuronal transcription factors ASCL1 and NGN2, directly and efficiently generated 5-HT neurons from human fibroblasts. Induced 5-HT neurons showed increased expression of specific serotonergic genes known to be expressed in the raphe nuclei, displayed spontaneous action potentials, released serotonin in vitro, and functionally responded to SSRIs. Noticeably, the results from Xu and co-workers on the functional effect of 5-HT on spontaneous action potentials of induced 5-HT neurons appear to be in discrepancy with all the preclinical data obtained so far.
Indeed, animal studies, mostly conducted in rodents, have demonstrated that this neurotransmitter exerts an inhibitory influence on the firing activity of mature 5-HT neurons (for review, see Blier and El Mansari, 2013). 5-HT neurons exist in nearly all animal taxa, from invertebrate nervous systems to mammalian brains. The 5-HT system in the vertebrate brain is implicated in various behaviors and diseases. In mammals, the cell bodies of 5-HT neurons are located in the brainstem, near or on the midline. The dorsal raphe nucleus (DRN) contains ~50% of the total 5-HT neurons in both the rat and human CNS (Piñeyro and Blier, 1999). In rodents, the 5-HT-containing cells have been shown to exhibit a slow (1-2 Hz) and regular firing rate, with a long-duration positive action potential. This regular discharge pattern results from a pacemaker cycle attributed to a Ca²⁺-dependent K⁺ outward current. The depolarization is followed by a long afterhyperpolarization (AHP) period, which diminishes slowly during the interspike interval. During the depolarization, extracellular Ca²⁺ enters the neuron via a voltage-dependent Ca²⁺ channel, activating a K⁺ outward conductance leading to an AHP. Ca²⁺ is then sequestered/extruded and the AHP diminishes slowly. When the membrane potential reaches the low-threshold Ca²⁺ conductance, a new action potential is triggered (Piñeyro and Blier, 1999). Around five decades ago, Aghajanian et al. (1970) were the first to assess electrophysiologically, in anesthetized rodents, the effects of monoamine oxidase inhibitors (MAOIs), the first class of antidepressant medications, on the firing activity of single serotonin-containing neurons of the midbrain raphe nuclei. All MAOIs tested caused a depression of raphe unit firing rate by increasing endogenous 5-HT, and such suppressant effects were prevented by prior treatment with an inhibitor of 5-HT synthesis.
Similarly, in vitro and in vivo, direct application of exogenous 5-HT suppresses 5-HT neuronal firing activity (Piñeyro and Blier, 1999). Numerous rodent studies have shown that this net effect of 5-HT is mediated via the activation of somatodendritic 5-HT1A autoreceptors (for review, see Piñeyro and Blier, 1999). This 5-HT1A autoreceptor receives an increased activation by endogenous 5-HT at the beginning of a treatment with an SSRI or an MAOI and, consequently, a decreased 5-HT neuronal firing activity is obtained. Indeed, when activated by 5-HT, Gαi/o-coupled 5-HT1A autoreceptors trigger a strong reduction of 5-HT impulse flow through the opening of inwardly rectifying K⁺ channels and the inhibition of voltage-dependent Ca²⁺ channels (Piñeyro and Blier, 1999). By reducing pacemaker firing, 5-HT1A autoreceptors regulate 5-HT levels both locally in the DRN and in terminal projection regions (Courtney and Ford, 2016). As the SSRI or MAOI treatment is prolonged, the 5-HT1A autoreceptor desensitizes and firing activity is restored in the presence of the SSRI or MAOI. This adaptive change has been proposed to underlie, at least in part, the delayed therapeutic effect of SSRIs or MAOIs in major depression (Piñeyro and Blier, 1999). However, only very few studies have been conducted in humans to directly address the role of 5-HT1A autoreceptors in 5-HT neuronal activity. One of the reasons resides in the small size of the DRN, which renders it virtually invisible for MRI-based in vivo imaging studies (Sibon et al., 2008). Interestingly, still, human EEG studies have reported that the stimulation of presynaptic 5-HT1A receptors induces a shift of the frequency spectrum (McAllister-Williams and Massey, 2003), an effect reflecting the inhibitory action of these receptors on 5-HT activity (Seifritz et al., 1996, 1998).
More recently, clinical studies have shown that the 5-HT1A agonist buspirone produces a more pronounced shift in medication-free depressed patients, supporting the hypothesis that at least some depressive disorders may be related to an abnormally enhanced functional status of 5-HT1A autoreceptors, leading to a hypo-function of the 5-HT system (McAllister-Williams et al., 2014). Also of note, several PET studies have shown that an enhanced binding potential at DRN 5-HT1A sites correlates with a reduced 5-HT transmission within the amygdala, thus providing indirect but strong evidence that these receptors inhibit terminal 5-HT release (Fisher et al., 2006). Clearly, the reason for the discrepant electrophysiological findings mentioned above appears to be puzzling. For that reason, the net effect of 5-HT on the spontaneous action potentials of the induced 5-HT neurons obtained by both Lu et al. (2016) and Vadodaria et al. (2016a) would be extremely interesting to assess and compare. Indeed, a role of the chosen transcription factors in this opposing electrophysiological result cannot be fully ruled out (Vadodaria et al., 2018). The different combinations of transcription factors employed may cause differential maturation stages of induced 5-HT neurons. In rodents, the 5-HT1A autoreceptor-mediated inhibition was shown to vary with age and was absent/reduced until postnatal day 21 (Rood et al., 2014). Xu and co-workers employed the transcription factor Ascl1, involved in rostral and caudal neurogenesis of 5-HT neurons; Foxa2, activated by sonic hedgehog signaling to induce 5-HT neuronal fate by suppression of ventral motor neuron generation; as well as Fev and Lmx1b, which are essential for the expression of the 5-HT neurochemical phenotype (Kiyasova and Gaspar, 2011). In contrast to this, Vadodaria and co-workers established the generation of induced 5-HT neurons by overexpression of the 5-HT phenotype-specific transcription factors Fev, Lmx1b, Gata2, and Nkx2.2.
The latter has been discussed as having a cluster-specific function in 5-HT neurogenesis (Kiyasova and Gaspar, 2011). Therefore, an excitatory action of 5-HT may reflect differential maturation stages of induced 5-HT neurons, and in vitro maturation may be enhanced by forced expression of a larger number of neuronal and 5-HT-specific transcription factors. Actually, a thorough examination of the supplementary data provided by Xu et al. (2016) indicates that even when considered mature (i.e., >46 d old), their induced 5-HT neurons display a resting membrane potential remaining as high as −42 mV, a value quite remote from those classically measured in vivo in preclinical studies, i.e., below −60 mV (Liu et al., 2002). Another possibility would reside in the fact that the protocol chosen by Xu and co-workers triggered a modified maturation of 5-HT1A autoreceptors, leading to an alternative coupling of these receptors and preventing them from activating the Gαi/o subunit. In this context, the use of Patch-Seq (Fuzik et al., 2016), a recent method for obtaining full transcriptome data from single cells after whole-cell patch-clamp recordings of induced 5-HT neurons, should be very helpful to provide critical clues to these paradoxical electrophysiological results. Finally, it has to be kept in mind that in vivo, 5-HT neurons are part of a mature circuitry that obviously cannot be fully recapitulated in vitro, which might also impair the efficacy of 5-HT1A autoinhibition. Alternatively, the discrepancy between the results of Xu et al. (2016) and those observed in rodents may be related to a differential sensitivity toward distinct kinds of 5-HT autoregulation. Indeed, it has recently been proposed that 5-HT2B receptors may constitute a new class of autoreceptors that would actually be excitatory, therefore counteracting the influence of the 5-HT1A ones (Belmer et al., 2018).
In mice, this positive autoregulation appears to be negligible with respect to the 5-HT1A-related autoinhibition, requiring the use of specific 5-HT2B agonists to be unmasked (Belmer et al., 2018). It remains possible that the induced 5-HT neurons obtained by Xu et al. (2016) express a higher proportion of 5-HT2B receptors, rendering the net influence of 5-HT on them positive. Thus, it would be very informative to assess the excitatory action exerted by 5-HT on the spontaneous action potentials of these cells with both selective 5-HT1A and 5-HT2B receptor antagonists. If this latter hypothesis were to be confirmed, the next step would be to determine whether such a higher expression of 5-HT2B receptors constitutes a distinct feature of human 5-HT neurons, or whether it results from the induction technique. In summary, even if several advantages and disadvantages can be identified in the use of iPSCs versus induced neurons, in terms of cell source, time and cost efficiency, as well as expandability (Mertens et al., 2018), all three groups have provided, in the same year, important and robust data on the conversion of human cells to induced 5-HT neurons (Lu et al., 2016; Vadodaria et al., 2016a; Xu et al., 2016). In contrast to the electrophysiological results of Xu et al. (2016), preclinical studies have repeatedly shown that 5-HT suppresses 5-HT neuronal firing activity. Significantly, this inhibitory action of 5-HT is frequently related to the well-described therapeutic delay of antidepressant action, has been recurrently considered as a "brake" on the antidepressant response, and has initiated numerous studies on the development of new and effective therapeutic strategies (Artigas et al., 2017). Furthermore, learning more about the electrophysiological properties of human iPSC-derived 5-HT neurons will not only help to understand serotonergic autoregulation, but also significantly contribute to understanding 5-HT neuromodulation of neuronal circuits.
Even if a few cautions in translational interpretations have to be taken into account, as for data obtained in animal studies, using patient-derived cells for generating disease-relevant cell types truly offers a new and powerful approach for investigating the genetic and cellular mechanisms that may play fundamental roles in psychiatric disorders (Vadodaria et al., 2018).
Active Q-switched Fiber Lasers with Single and Dual-wavelength Operation

A brief explanation of the Q-switched fiber laser operating principle for the active technique, in terms of operation characteristics, is presented. An experimental analysis of the proposed pulsed fiber lasers based on the active Q-switching technique is demonstrated. The experimental setups include the use of Er/Yb-doped fiber as a gain medium and an acousto-optic modulator as cavity elements. Setup variations include the use of fiber Bragg gratings for wavelength selection and tuning, and a Sagnac interferometer for wavelength selection in single-wavelength operation and for cavity loss adjustment in dual-wavelength operation. The experimental analysis of the principal characteristics of single-wavelength operation of the fiber laser and of the cavity-loss-adjustment method for dual-wavelength laser operation is discussed. As can be observed, pulse duration and pulse energy present a behavior typically obtained in actively Q-switched lasers. The documentary investigation is focused on reported approaches to Q-switched fiber lasers, taking into account cavity elements, configurations, experimental results, and the incorporation of new fiber technologies. The setups use an Er/Yb-doped double-clad fiber (EYDCF) as the gain medium and apply the active Q-switching technique.

Introduction

Fiber lasers have been studied almost from the onset of laser demonstration. Research on the development of innovative laser systems has been of constant interest in optics and photonics, showing fast growth and becoming a central research area in scientific and industrial implementations. Fiber lasers have been widely studied because of their unique characteristics of high power confinement, high beam quality, low insertion loss, compactness, and ruggedness.
In general, they are attractive for different application areas such as medicine, telecommunications, optical sensing, and industrial material processing. Fiber lasers make use of stimulated emission to generate light by using an active medium for gain supply. The use of fibers doped with rare-earth elements provides a gain medium with great thermal and optical properties for fiber laser development, in contrast to solid-state lasers. Erbium-doped fibers (EDFs) have been widely used for fiber laser implementations; however, in the last decade, the constant search for efficiency improvement in terms of very high gain with low pumping thresholds has significantly increased the use of ytterbium-doped fibers, because they offer an efficiency above 80% [1]. Moreover, high-power fiber lasers are also of high interest for different applications such as spectroscopy, pump sources, and the study of nonlinear phenomena. In contrast with solid-state lasers, a fiber laser requires longer interaction lengths, favoring the occurrence of nonlinear effects when high power is achieved and making fiber lasers desirable for optical switching, nonlinear frequency conversion, solitons, and supercontinuum generation, among other applications. As is known, pump diodes provide pump power limited to a few watts. This restriction also limits the fiber laser output power when conventional (single-clad) doped fibers are used. With the development of double-clad fibers (DCFs), high-power fiber lasers experienced significant advances, since DCFs make an output power increase attainable. In conjunction with clad-pumping techniques, the DCF feature of a large surface-area-to-gain-volume ratio, in addition to high doping concentration, offers high output power with improved spatial beam confinement, in contrast with the use of single-clad doped fibers [2]. However, achieving high-power continuous-wave (CW) operation of a fiber laser without output power fluctuations is not straightforward.
Taking this fact into account, the development of fiber lasers in the pulsed regime provides a feasible alternative. In comparison with CW fiber lasers, pulsed fiber lasers provide high peak power that can be used at the generated wavelength or shifted to another wavelength range by nonlinear frequency conversion. The most important pulsed regimes are Q-switching and mode-locking. In contrast with CW operation, in pulsed regimes the output is time dependent. In pulsed lasers based on the Q-switching technique, stable and regular short pulses are obtained with pulse durations in the nanosecond range, corresponding to several cavity round trips, in contrast with the ultrashort pulses obtained by mode-locking techniques. Q-switching can be implemented by passive and active techniques. Passive Q-switching is performed by using a saturable absorber element placed inside the cavity, including graphene [3-5], carbon nanotubes (CNTs) [6-8], transition-metal-doped crystals [9-11], and semiconductor saturable absorber mirrors (SESAMs) [12,13]. On the other hand, the active Q-switching technique is based on the use of a modulator driven by an external electrical generator. Cavity loss modulation is typically performed by electro-optic modulators (EOMs) [14,15] and acousto-optic modulators (AOMs) [16-18]. The EOM and the AOM are based on completely different principles of operation. While the EOM is based on the Pockels effect, the AOM relies on sound waves that modulate the refractive index, generating a periodic grating as they propagate through the medium. In terms of operation, the main difference is the modulation bandwidth. Typically, the modulation bandwidth of an EOM is 500 kHz to 1 MHz, while for the AOM it is in the range of 50 to 100 MHz. The use of the active Q-switching technique for pulsed laser operation allows higher-energy pulses and stability. These advantages are increased in lasers based on integrated optics (all-optic) or all-fiber setups.
Otherwise, from the advent of fiber Bragg gratings (FBGs), their incorporation into the design of optical fiber lasers was almost immediate, contributing significantly to progress in this particular area. FBGs have been widely used as narrow-band reflectors for selection of the generated laser wavelength. FBGs have unique advantages as optical devices, including easy manufacture, fiber compatibility, low cost, and wavelength selection, among others. Moreover, the FBG central wavelength can be displaced or modified by mechanical strain or temperature application [19,20]. This feature makes them capable devices for fiber laser wavelength tuning [21] and for all-fiber modulation techniques [22,23]. Moreover, dual-wavelength fiber lasers (DWFLs) have been studied previously [24-26]. Obtaining two wavelengths by using a single laser cavity is attractive for potential applications in areas such as optical sources, optical communications, optical instrumentation, and others. The phenomenon of obtaining two wavelengths simultaneously with equal powers has been studied in terms of the competition between the generated laser lines, in order to improve the stability of DWFLs and their emission control methods. The use of variable optical attenuators (VOAs) and high-birefringence fiber loop Sagnac interferometers (high-birefringence fiber optical loop mirror, Hi-Bi FOLM) have been demonstrated as efficient methods for generating two laser lines simultaneously through the adjustment of losses within the cavity. Furthermore, wavelength tuning in pulsed DWFL development suggests their possible application in microwave and, mainly, terahertz generation. For DWFL improvement, different techniques for tuning and setting the separation between generated laser lines have been developed. The main goal in DWFL wavelength tuning is to obtain wide separation and continuous wavelength tuning.
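The FBG tuning mentioned above follows the Bragg condition λ_B = 2 n_eff Λ; shifts under strain ε and temperature change ∆T obey ∆λ_B/λ_B = (1 − p_e)ε + (α + ξ)∆T. A small sketch with typical silica-fiber coefficients (the numerical values are generic textbook assumptions, not parameters from the setups in this chapter):

```python
def bragg_shift(lam_b, strain=0.0, dT=0.0,
                p_e=0.22,       # effective photo-elastic coefficient (silica)
                alpha=0.55e-6,  # thermal expansion coefficient, 1/K
                xi=8.6e-6):     # thermo-optic coefficient, 1/K
    """Bragg wavelength shift: dlam/lam = (1 - p_e)*strain + (alpha + xi)*dT."""
    return lam_b * ((1 - p_e) * strain + (alpha + xi) * dT)

lam = 1550e-9  # m, a typical Er-band Bragg wavelength
print(bragg_shift(lam, strain=1e-3) * 1e9)  # about 1.2 nm per millistrain
print(bragg_shift(lam, dT=10.0) * 1e9)      # about 0.14 nm for a 10 K change
```

Both mechanical strain and heating therefore shift the selected laser line continuously, which is the basis of the FBG tuning reported in references [27-29].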
A reliable approach for wavelength tuning in DWFLs is the use of FBGs, where the Bragg wavelength is shifted by temperature changes [27] or by mechanical strain application [28,29]. In this chapter, a brief explanation of the Q-switched fiber laser operating principle for the active technique is presented, together with a description of the operation characteristics of Q-switched lasers, mainly for the active Q-switching technique. Additionally, the current state of the art (to our knowledge) regarding DWFLs in the actively Q-switched pulsed regime is reviewed. Furthermore, experimental setups for a dual-wavelength actively Q-switched fiber laser and an actively Q-switched fiber laser with single- and dual-wavelength operation are experimentally demonstrated and analyzed. The experimental results of the lasers are discussed and compared in terms of operation characteristics, including repetition rate, pulse duration, pulse energy, average power, and peak power.

Q-switched fiber lasers: A review from operating principle to single- and dual-wavelength operation

Q-switching is a widely employed technique in fiber laser development. Q-switching is suitable for obtaining powerful pulses at low repetition rates, typically from a few kHz to 100 kHz; it yields short pulses in the nanosecond range, corresponding to several cavity round trips. This is in contrast with the mode-locking technique, in which ultrashort pulses are obtained. In recent years, Q-switched fiber lasers have been attractive due to their applications in medicine, optical time-domain reflectometry, terahertz generation, optical instrumentation, remote sensing, and materials processing in industry. Q-switching is performed by modulation of the cavity losses. The intracavity losses are maintained at a high level until the gain medium accumulates a considerable amount of energy supplied by the pumping source. Then the losses are abruptly minimized to build up the laser radiation within the cavity.
As a result, a pulse with energy in the range of microjoules (even millijoules) is generated. Thus, the variation of the intracavity losses corresponds to a modulation of the laser resonator Q-factor. In general, Q-switched fiber lasers can operate with continuous or pulsed pump power. In the case of continuous pumping, an important condition must be considered: a long gain-medium upper-state lifetime is required to avoid energy loss by fluorescence emission and thereby reach a high stored energy. Particularly, in fiber lasers the saturation energy has to be high to prevent excessive gain that can lead to an early onset of laser generation. The pulse energy is commonly higher than the gain-medium saturation energy. Although Q-switched lasers based on bulk optics are regularly preferred over fiber lasers because their larger mode areas allow more energy to be stored, the incorporation of bulk components into fiber lasers comes at the expense of simplicity, robustness, and alignment. Moreover, the use of bulk elements in fiber lasers degrades the beam quality and adds high cavity losses, resulting in a decrease of laser performance and efficiency. Thus, in fiber laser approaches using bulk components, higher pump powers are required to increase the laser output power [23]. The Q-switching technique can be performed by passive and active methods. Passive Q-switching is performed by placing a saturable absorber element inside the cavity, which automatically modulates the losses within the laser cavity. As already mentioned, the saturable absorber elements used in passively Q-switched fiber lasers commonly include graphene, CNTs, metal-doped crystals, and SESAMs [3–13]. The pulse repetition rate is determined and varied by the applied pump power, while the pulse duration and pulse energy are set by the cavity and Q-switching element parameters and commonly remain fixed.
Thus, the repetition frequency cannot be modified independently of the other operation parameters [30]. To reach efficient performance, the saturable absorber recovery time commonly has to be longer than the pulse duration and shorter than the gain-medium upper-state lifetime. Laser pulses generated with passive Q-switching typically present a low repetition-rate range because of the applied pump power range. The main advantages of passively Q-switched lasers are their simple design and low cost, since external modulators and their driving electronics are not required. On the other hand, active Q-switching is based on modulating the Q-factor with a modulator included in the fiber laser design. The modulation element is driven by an external electrical generator. In the active Q-switching technique, the energy stored while the cavity loss is high generates a pulse soon after an external electrical signal arrives at the modulator, driving the intracavity losses to a low level. In this case, the pulse duration and pulse energy depend on the energy stored by the gain medium. Hence, variations of the pump power and repetition rate modify the achieved pulse duration and pulse energy. For the active technique, the modulator switching time does not have to be comparable to the pulse duration; the pulse duration is in any case of the order of the laser resonator round-trip time. As mentioned previously, active cavity-loss modulation is typically performed by EOMs and AOMs [14–18]. Following technological progress, the modulators employed have undergone important changes. Early actively Q-switched laser approaches mainly used modulators based on bulk components. Later, modulators were designed using integrated optics coupled to optical fibers. Recently, all-fiber modulators have been included in fiber laser designs to increase the overall performance of the laser.
In acousto-optic Q-switching, a radio frequency (RF) power drives a transducer. The generated acoustic wave produces a sinusoidal modulation of the density of the acousto-optic medium, resulting in an intracavity loss modulation. AOMs can modulate the cavity losses rapidly, which allows the generation of Q-switched pulses with durations of hundreds of nanoseconds. The shortest pulse durations and the highest pulse energies are obtained at the lowest allowed repetition rate, however, at the cost of a low output average power. The use of the active Q-switching technique for pulsed laser operation allows higher pulse energies and better stability. These advantages increase in laser designs based on integrated optics or all-fiber components. Moreover, most Q-switched fiber laser approaches are focused on the use of single-clad fibers as the gain medium. In comparison with single-clad Er- or Yb-doped fibers, Er/Yb double-clad co-doped fibers (EYDCF) can be used to suppress the self-pulsing caused by Er ion pairs [4]. This effect, in addition to cladding-pumping techniques, can be used to increase the pump power efficiency, minimizing gain degradation when EYDCF is used as the gain medium. Regarding passively Q-switched fiber lasers, different approaches using EYDCF have been reported [30–33]. Laroche et al. [30] in 2002 presented a pioneering experimental setup of a passively Q-switched pulsed laser using EYDCF as the gain medium; they presented an open-cavity laser using Cr2+:ZnSe as a saturable absorber. V. Philippov et al. [33] reported a similar configuration using Cr2+:ZnSe and Co2+:MgAl2O4 as saturable absorbers. A maximum average power of 1.4 W was achieved, with pulse durations from 370 to 700 ns for repetition rates between 20 kHz and 85 kHz. In the case of actively Q-switched lasers using EYDCF, to our knowledge, only a small number of approaches have been reported [34,35]. Recently, González-García et al.
[34] reported a linear-cavity EYDCF laser Q-switched by an acousto-optic modulator. The pump power is introduced into the EYDCF by a free-space subsystem carefully optimized using a lens design. Nowadays, the development of DWFLs is of interest because of their ability to generate two laser wavelengths simultaneously in a single cavity. The advantages of DWFLs are low cost, simple design, fiber compatibility, and low-insertion-loss devices, making the design of more complex optical devices feasible. Since their first approaches in CW fiber lasers, DWFL research has grown because of potential applications in different areas such as optical communication systems, optical instrumentation, optical sources, and spectral analysis. In recent years, the experience gained in the study of DWFLs in the CW regime has been carried over to the implementation of DWFLs in the pulsed regime. This advancement has opened the possibility of new applications where high output power is required, such as the study of nonlinear phenomena and remote sensing. The main issue in DWFL operation is the difficulty of obtaining two stable wavelengths simultaneously, because doped fibers behave as a homogeneous gain medium at room temperature, causing a competition between the generated laser lines that leads to their instability. Commonly used methods to balance the generated wavelengths include polarization controllers (PCs) and variable optical attenuators (VOAs), among others; however, most of them are arbitrary methods lacking a measurable physical variable for analyzing and characterizing the wavelength competition, which affects the repeatability of the laser performance. These methods focus on adjusting the laser intracavity losses to balance the simultaneously generated wavelengths.
In previously reported investigations, the capability of the Sagnac interferometer with a high-birefringence (Hi-Bi) fiber loop has been theoretically and experimentally demonstrated as a trustworthy alternative for the adjustment of losses within the cavity [21,36–38], since the periodic Hi-Bi FOLM spectrum can be finely displaced in wavelength by temperature variations applied to the Hi-Bi fiber loop [37]. Different experimental setups of DWFLs based on the passive Q-switching technique have been reported [39–41]; however, to our knowledge, no approaches using EYDCF as the gain medium have been reported. Concerning cavity-loss adjustment for dual-wavelength laser operation, the most frequent method is the use of a PC in ring-cavity fiber lasers. H. Ahmad et al. [40] reported a ring-cavity passively Q-switched DWFL operating at 1557.8 nm and 1559 nm, using a PC for dual-wavelength generation. A nonlinear optical loop mirror (NOLM) with a dispersion-decreasing taper fiber (DDTF) in the fiber loop is used as the passive Q-switching element. Regarding actively Q-switched fiber lasers, only a few attempts in which dual-wavelength emission is obtained have been reported. In 2013, G. Shayeganrad [42] reported a compact linear-cavity actively Q-switched DWFL. The Q-switching is performed by an AOM. The gain medium is a c-cut Nd:YVO4 crystal with the feature of dual-wavelength generation in the Q-switched regime without adjustment elements. An undoped YVO4 crystal is used to enhance the stimulated Raman scattering (SRS) effect. The simultaneous SRS wavelengths at 1066.7 and 1083 nm are shifted to 1178 and 1199.9 nm. S.-T. Lin et al. [43] reported a selectable dual-wavelength actively Q-switched laser. By using two electro-optic periodically poled lithium niobate (PPLN) integrated crystals, output wavelengths between 1063 and 1342 nm are selected by voltage variations on the electro-optic PPLN Bragg modulator (EPBM) sections.
It is worth mentioning that both cited experimental setups are designed with bulk optic elements requiring high pump powers, around 20 W. As has been said above, such designs require fine alignment, so efficiency and instability problems typically arise. All-fiber or fiber-coupled laser systems promise to be an option for solving alignment issues while minimizing losses within the laser cavity. The pump-to-signal efficiency can be increased and, consequently, a highly increased pump power is not required to obtain more energetic pulses. However, for such designs, the output power is typically limited by the maximum signal power handled by the employed components. Therefore, the use of double-clad doped fibers provides a stable and straightforward way to generate high-energy nanosecond pulses in actively Q-switched dual-wavelength fiber lasers. From reported investigations, EYDCF offers high conversion efficiency for the generation of high-energy pulses [44,45]. Regarding the use of EYDCF, in 2014 an actively Q-switched wavelength-tunable DWFL using EYDCF as the gain medium was reported [44]. The linear-cavity laser incorporates bulk components to introduce the pump power into the EYDCF. The laser wavelengths are generated and simultaneously tuned by using a polarization-maintaining fiber Bragg grating (PM-FBG). The maximal separation between generated wavelengths of 0.4 nm is set by aligning the polarization axes with a PC. A simultaneous wavelength tuning range of ~11.8 nm is obtained by axial strain applied to the PM-FBG. A maximal average power of 22 mW is obtained at a repetition rate of 120 kHz with a pump power of 1.5 W. Recently, a self-Q-switched (SQS) EYDCF laser with tunable single-wavelength and dual-wavelength operation using a Hi-Bi FOLM as a spectral filter was experimentally demonstrated [45].
The wavelength tuning in single-wavelength operation and the cavity-loss adjustment for dual-wavelength operation are performed by temperature variations applied to the FOLM Hi-Bi fiber loop, allowing the Hi-Bi FOLM spectrum to be shifted in wavelength. Stable SQS pulses with an energy of 4.1 µJ and a repetition rate of 25 kHz are obtained with a pump power of 575 mW. A single-wavelength tuning range over 8.4 nm is obtained with a FOLM Hi-Bi fiber loop temperature variation in a range of ~7.2 °C. The separation between the simultaneously generated dual wavelengths is 10.3 nm. Accordingly, we propose the use of EYDCF as the gain medium for the design of actively Q-switched lasers with single- and dual-wavelength operation. We also propose the use of FBGs and a Hi-Bi FOLM as cavity elements that allow the laser operation characteristics to be modified and its performance improved by straightforward methods.

Actively Q-switched dual-wavelength fiber laser based on fiber Bragg gratings

In this section, an experimental analysis of a ring-cavity dual-wavelength actively Q-switched fiber laser with an EYDCF as the gain medium is presented. A pair of FBGs is used for separately tuning the generated laser lines by mechanical compression/stretching applied to the FBGs. Tuning of the simultaneously generated dual-wavelength laser lines is presented with wavelength separations from 1 nm to a maximal separation of 4 nm (without the need for cavity-loss adjustment). The experimental setup is presented in Figure 1. The fiber ring-cavity laser is based on 3 m of EYDCF as the gain medium. The EYDCF is pumped by a laser source at 978 nm through a beam combiner. The pump power of 5 W is limited by the maximal AOM signal power of 1 W. An optical isolator with a maximal output power of 5 W is used to ensure unidirectional operation.
An optical subsystem formed by a 50/50 optical coupler with output ports connected to FBG1 and FBG2, with central wavelengths at 1543 and 1548 nm respectively, allows dual-wavelength emission at the FBG reflected wavelengths; it is also used for monitoring each laser wavelength separately at outputs 1 and 2. The FBGs, with approximately 99% maximum reflectance, are placed on mechanical devices for tuning the generated laser wavelengths by applying axial strain to the gratings. The simultaneously generated laser wavelengths are measured at output 3 of a 90/10 coupler. A fiber-pigtailed AOM driven by an RF signal is included for actively Q-switched pulsed laser operation. The output spectra monitored at the output ports (1, 2, and 3) are measured with an OSA, and the Q-switched pulses are detected and observed with a photodetector and an oscilloscope, respectively. Figure 2 shows the experimental results of the dual-wavelength fiber laser spectrum measurements with a fixed pump power of 5 W. The measurements were obtained at output 3 with an OSA with attenuation. Output power results are presented on a linear scale to demonstrate the achievement of two simultaneous laser wavelengths with equal powers. Two simultaneous wavelengths are obtained without requiring cavity-loss adjustment within the presented wavelength-separation tuning range; however, we noticed that cavity-loss adjustment is required for wavelength separations above 4 nm. Results for dual-wavelength operation with cavity-loss adjustment (wavelength separations above 4 nm) are not presented, since the adjustment was performed by introducing curvature losses through fiber bending applied between the 50/50 output ports and the FBG connections, an arbitrary method with which it is not possible to characterize the competition between the generated laser lines. Figure 2(a) shows the spectrum measurements of the generated laser lines for dual-wavelength Q-switched laser operation with different wavelength separations.
The separation tuning from 1 to 4 nm is achieved by mechanical compression/stretching applied to the FBGs. The repetition rate remained fixed at 70 kHz. As shown, dual-wavelength laser operation is generated simultaneously with approximately equal laser-line output powers without an adjustment of the losses within the cavity. It can be seen that, for these repetition rate and pump power settings, a preference exists for generating the longer wavelength during the competition between the laser lines. It can also be observed that the output powers of both simultaneous wavelengths increase when the repetition rate is increased. The competition between the generated laser lines shows a preference toward the longer wavelength as the repetition rate is increased; however, dual-wavelength laser operation is maintained over the repetition-rate range without cavity-loss adjustment. Figure 3 shows the measured output power ratio of the two simultaneously generated laser lines, P(λ2)/P(λ1), where λ1 and λ2 are the shorter and the longer laser wavelengths, respectively. The spectrum measurements were performed at output 3 with an OSA, and the output powers were individually monitored at outputs 1 and 2 with a photodetector and a power meter. The measurement of the power ratio between the generated laser lines is a straightforward method for numerically analyzing the competition behavior between the laser lines. For an output power ratio 0 < P(λ2)/P(λ1) < 1, the shorter wavelength is generated with a power above that of the longer wavelength. On the other hand, for P(λ2)/P(λ1) > 1, the longer wavelength presents an output power above that of the shorter wavelength. As previously shown for the proposed experimental setup, there exists a preference for generating the longer wavelength. Furthermore, it was shown that the Q-switched dual-wavelength fiber laser output powers are modified by repetition-rate and tuned-wavelength variations.
In Figure 3, it can be clearly observed that, with increasing repetition rate, the competition between the laser lines develops an imbalance in which the longer wavelength is preferentially generated. Strong competition allowing dual-wavelength laser operation with almost equal output powers can be observed from 20 kHz to about 60 kHz. With repetition-rate variations from 60 kHz to 100 kHz, the longer-wavelength output power increases significantly at the expense of the shorter-wavelength output power. It can also be observed that the range of repetition-rate values over which the longer wavelength starts growing significantly is shortened when the separation between the generated laser wavelengths is increased. As shown, for a wavelength separation of 1 nm, the maximum power ratio is about 2, at a repetition rate of 100 kHz. However, for a wavelength separation of 4 nm and a repetition rate of 70 kHz, an output power ratio in which λ2 is 9.5 times greater than λ1 is observed. Figure 4 presents a group of experimental results for actively Q-switched dual-wavelength laser pulses. The results show pulse profiles for different repetition rates, a comparison between pulses measured at different outputs, and an experimental analysis of the pulse time shift under repetition-rate variations. Figure 4(a) shows the optical pulse measurements for actively Q-switched dual-wavelength laser operation. The wavelength separation between the simultaneously generated laser lines remains fixed at 4 nm. Using a photodetector and an oscilloscope, the pulse traces, together with the leading pulse of the signal applied to the AOM, were obtained at output 3, where both generated wavelengths are measured simultaneously. The pulses were obtained for different repetition rates from 50 to 100 kHz. For actively Q-switched operation, as the repetition rate increases, the pulse duration typically increases while the pulse amplitude decreases.
As can be seen, at a repetition rate of 50 kHz there is a time shift of 93.7 ns between the leading edge of the electrical pulse applied to the AOM and the generated laser pulse. As we can observe, the time shift depends on the repetition rate. The dependence of the temporal pulse shift on the repetition rate is shown in Figure 4(b). As shown, the pulse time shift increases as the repetition rate increases. Thus, it can be observed that, for a repetition rate of 100 kHz, the temporal shift between the leading edge of the electrical modulation signal and the generated pulse increases to ~2.3 µs. Figure 4(c) shows the pulse traces corresponding to the same dual-wavelength generation with a wavelength separation of 4 nm and a repetition rate of 50 kHz. Since the FBGs have a reflectance close to 100% at their central wavelengths, it is possible to independently obtain the single-wavelength laser signal for each of the generated wavelengths at outputs 1 and 2, as a result of the signal transmitted by each FBG. Thereby, the pulses generated at the laser wavelength λ1 = 1543.5 nm (blue line) obtained at output 2 and the optical pulses for the wavelength λ2 = 1547.5 nm (red line) acquired at output 1 are shown together with the optical pulse for both λ1 and λ2 measured at output 3. As shown, a slight time shift and pulse widening are observed in the measurement containing both wavelengths (output 3) compared to the individual pulses observed for λ1 and λ2. Figure 5 shows the output power in dual-wavelength operation for generated laser wavelength separations Δλ = 1 nm (λ1 = 1545.2 nm and λ2 = 1546.2 nm) and Δλ = 4 nm (λ1 = 1543.5 nm and λ2 = 1547.5 nm) as a function of the repetition rate over the range from 30 kHz to 100 kHz, with the pump power of 5 W. The difference between the average powers measured for both wavelength separations, P(Δλ = 1 nm) − P(Δλ = 4 nm), at the same repetition rate is also shown.
The average power was measured at output 3 with a power meter. As typically occurs in actively Q-switched fiber lasers, the average power is observed to increase with increasing repetition rate. As can be seen, the maximal average power is obtained at a repetition rate of 100 kHz. For dual-wavelength operation with a laser-line separation of 1 nm, the maximal average power (red line, square symbols) is 496 mW, while it is 490 mW for a separation of 4 nm (blue line, circle symbols). The difference between the average powers measured for the two wavelength separations remains within a range from −10 mW to 10 mW. It can also be observed that the dependence of the average power on the repetition rate shows no significant variations with respect to the tuned wavelength separation between the generated laser lines. Figure 6 shows the measured pulse duration and the estimated pulse energy as functions of the repetition rate, together with the estimated pulse peak power, for dual-wavelength laser operation. Results are obtained for wavelength separations between the generated laser lines of 1 nm and 4 nm. Pulse profiles for Q-switched dual-wavelength operation with both wavelength separations were measured with a photodetector and monitored on an oscilloscope. The pulse duration was obtained from the pulse-shape measurements. The pulse energy for each wavelength separation is estimated from the repetition rate and the average power results shown in Figure 5. The pulse peak power is estimated from the obtained pulse energy and pulse duration. Typically, for actively Q-switched lasers, as the repetition rate increases, the obtained pulses widen, increasing the pulse duration. Thus, although the average power of the pulse train increases with increasing repetition rate (see Figure 5), the optical pulses become less energetic through the widening and the resulting reduction of the pulse peak power (see Figure 4(a)).
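The estimation chain described above can be made explicit: for a Q-switched pulse train, the per-pulse energy is the average power divided by the repetition rate, and the peak power is roughly the energy divided by the duration (rectangular-pulse approximation). A short sketch, checked against the values reported later for the linear-cavity setup (58.3 mW at 35 kHz and 84.9 mW at 75 kHz):

```python
# Illustrative check (not code from the chapter): Q-switched pulse parameters
# follow from E = P_avg / f_rep and, for a roughly rectangular pulse,
# P_peak ~ E / tau.

def pulse_energy_uj(p_avg_mw, rep_rate_khz):
    """Per-pulse energy in microjoules from average power and repetition rate."""
    return (p_avg_mw * 1e-3) / (rep_rate_khz * 1e3) * 1e6

def peak_power_w(energy_uj, duration_ns):
    """Peak power (W) assuming a rectangular pulse of the given duration."""
    return (energy_uj * 1e-6) / (duration_ns * 1e-9)

# Reported average powers: 58.3 mW at 35 kHz and 84.9 mW at 75 kHz
print(round(pulse_energy_uj(58.3, 35), 2))   # 1.67 µJ, as in Figure 10(a)
print(round(pulse_energy_uj(84.9, 75), 2))   # 1.13 µJ
print(round(peak_power_w(1.67, 213), 1))     # ~7.8 W (derived estimate, not reported)
```

The first two results reproduce the pulse-energy endpoints quoted for Figure 10(a), confirming that the plotted energies are obtained by this division rather than by direct measurement.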
Similarly, for a wavelength separation of 4 nm at the same repetition rate, a maximal pulse energy (blue line, square symbols) of 5.98 µJ and a minimal pulse duration of 295 ns are obtained. The estimated pulse peak power as a function of the repetition rate is shown in Figure 6(b). As can be observed, the pulse peak power (pulse amplitude) for the wavelength separation of 1 nm is higher than that observed for the wavelength separation of 4 nm. This result is essentially attributed to a smaller increase in pulse duration for the Δλ = 1 nm measurements as the repetition rate is increased (shown in Figure 6(a)). For Δλ = 1 nm and Δλ = 4 nm, maximal pulse peak powers of ~26.6 W and ~20.27 W, respectively, are obtained at the minimal repetition rate of 50 kHz, where the pulse widening is minimal. In this section, an experimental analysis of an actively Q-switched ring-cavity fiber laser has been presented. Through experimental and estimated results of the laser emission spectra and the generated laser pulses, the behavior of dual-wavelength laser operation, including the competition between the simultaneously generated laser lines and the evolution of the generated laser pulses, has been analyzed. Actively Q-switched pulsed laser parameters such as repetition rate, pulse duration, pulse energy, average power of the laser emission, and pulse peak power have also been experimentally studied in terms of different tuned separations between the two simultaneously generated wavelengths and variations of the repetition rate of the electrical signal applied to the AOM. The results shown can be generalized to any actively Q-switched laser, and particularly to lasers with dual-wavelength operation.
It is worth mentioning that, for the proposed experimental setup, it is not necessary to implement a cavity-loss adjustment method within the shown operation tuning range (wavelength separations from 1 to 4 nm); however, a cavity-loss adjustment method is required when, during the competition between the generated laser wavelengths, one wavelength is preferred for the laser emission.

Actively Q-switched dual-wavelength fiber laser with a Sagnac interferometer for cavity-loss adjustment

In this section, a linear-cavity actively Q-switched fiber laser is proposed for experimental analysis. In contrast to the laser setup demonstrated in the previous section (which was intended to explain the parameters of actively Q-switched fiber lasers), this experimental setup is a linear-cavity configuration in which a method to adjust the losses within the cavity (when required) for simultaneous dual-wavelength laser operation is presented. The proposed configuration includes a Sagnac interferometer with a high-birefringence fiber in the loop (Hi-Bi FOLM) used as a spectral mirror and, mainly, for cavity-loss adjustment during the laser-line competition in simultaneous dual-wavelength generation. The use of the Hi-Bi FOLM as a reliable method of cavity-loss adjustment for lasers with dual-wavelength operation has been extensively studied by our research group [21,37,38,45]. The main objective of this section is to illustrate, through a proposed experimental setup, that the Hi-Bi FOLM can also be used to implement dual-wavelength fiber lasers in the pulsed regime with the active Q-switching technique, as well as to present an experimental analysis of the dual-wavelength laser operating parameters. The proposed actively Q-switched fiber laser experimental setup is shown in Figure 7. The linear cavity is bounded by two FBGs at one end and a Hi-Bi FOLM at the other end.
A 3 m length of EYDCF used as the gain medium is pumped by a laser source at 978 nm through a beam combiner. The pump power was fixed at 1.5 W. An AOM driven by an RF signal generator is used to apply the active Q-switching technique. FBG1 and FBG2, with 99% reflectance at central wavelengths tuned to 1542.7 nm and 1552.7 nm, respectively, are used as narrow-band mirrors for selecting the generated laser wavelengths. With the selected FBG central wavelengths, the separation between the generated laser lines is ~10 nm. The Hi-Bi FOLM is formed by a 50/50 coupler whose output ports are interconnected through a Hi-Bi fiber segment of ~56 cm. The Hi-Bi FOLM acts as a wide-band mirror with a periodic spectrum. With the selected Hi-Bi fiber segment, the spectrum period is ~10.3 nm [35]. A Peltier device controlling the Hi-Bi fiber temperature is used to shift the Hi-Bi FOLM spectrum in wavelength. This Hi-Bi FOLM spectrum displacement is the method used for cavity-loss adjustment in dual-wavelength laser operation [35]. The splices between the Hi-Bi fiber ends and the 50/50 output ports are placed on mechanical rotation stages for adjusting the amplitude of the Hi-Bi FOLM transmission spectrum [35]. The Hi-Bi FOLM amplitude was adjusted to near-maximal contrast. The unconnected 50/50 coupler port (output port) is used for measuring the Hi-Bi FOLM transmission spectrum (with the pump power below the laser generation threshold) and for measuring the laser spectrum with an OSA. The output port is also used for pulse detection with a photodetector and observation on an oscilloscope. Figure 8 shows the cavity-loss adjustment performance for single- and dual-wavelength laser operation. The adjustment is performed by temperature changes of the Hi-Bi FOLM fiber loop. The temperature meter and controller have a resolution of 0.06 °C. The repetition rate was set to 60 kHz.
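The quoted ~10.3 nm period is consistent with the standard Hi-Bi loop-mirror relation Δλ ≈ λ²/(B·L). A sketch, assuming a group birefringence B ≈ 4.2×10⁻⁴ (a typical value for commercial polarization-maintaining fiber, not a figure stated in this chapter):

```python
# Spectral period of a Hi-Bi fiber Sagnac loop mirror: d_lambda = lambda^2 / (B * L).
# The birefringence value used below is an assumed typical figure, not from the chapter.

def folm_period_nm(lambda_nm, birefringence, loop_length_m):
    """Wavelength spacing between transmission minima of a Hi-Bi FOLM."""
    lam_m = lambda_nm * 1e-9
    return (lam_m ** 2) / (birefringence * loop_length_m) * 1e9

# ~56 cm Hi-Bi loop, as in the setup of Figure 7:
print(round(folm_period_nm(1550.0, 4.2e-4, 0.56), 1))  # ~10.2 nm, close to the quoted ~10.3 nm
```

Inverting the relation with the measured ~10.3 nm period and the 56 cm loop gives B ≈ 4.2×10⁻⁴, within the typical range for polarization-maintaining fiber, which supports the quoted period.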
Figure 8(a) shows the three Hi-Bi FOLM transmission spectra at the Hi-Bi fiber loop temperatures for which single-wavelength operation at λ1 or λ2 and dual-wavelength operation are generated. The Hi-Bi FOLM spectrum measurements were performed with the pump power below the laser generation threshold, at the output port with an OSA. As can be seen, dual-wavelength laser operation is obtained at a Hi-Bi fiber loop temperature of 25.9 °C, at which the cavity losses are balanced. As the Hi-Bi loop temperature increases, the spectrum shifts toward shorter wavelengths, producing an imbalance in the competition between the laser lines; thus, the laser emission at the shorter wavelength (λ1 = 1542.7 nm) is favored. On the other hand, a decrease in the Hi-Bi FOLM loop temperature favors the emission of the longer wavelength (λ2 = 1552.7 nm). In Figure 8(b), the laser emission spectra for dual-wavelength operation and for single-wavelength operation at λ1 and λ2 are shown. The measurements were performed with a pump power of 1.5 W. As shown, single-wavelength laser operation at λ1 and λ2 is obtained at temperatures of 26.6 °C and 25.1 °C, respectively. Dual-wavelength operation with approximately equal powers is obtained at a Hi-Bi fiber loop temperature of 25.9 °C. The temperature operation range is ~1.5 °C. In Figure 9, pulsed-regime measurements for actively Q-switched dual-wavelength laser operation are presented. The pulse-train profile and the average power as a function of the repetition rate are shown. Figure 9(a) shows the pulse train in the time domain for dual-wavelength laser operation at a repetition rate of 60 kHz, measured at the output port. The Hi-Bi fiber loop temperature was set to 25.9 °C for dual-wavelength operation with equal powers, as shown in Figure 8(b). For repetition rates below 35 kHz and above 75 kHz, unstable pulses are generated, since the laser pulses shift outside the time window of the AOM modulating electrical signal.
The inset in Figure 9(a) shows a sample pulse from the pulse train. The estimated pulse duration is ~448 ns. In Figure 9(b), the average power as a function of the repetition rate for dual-wavelength operation is shown. The measurements, obtained with a pump power of 1.5 W and repetition rates from 35 to 75 kHz, were performed at the output port with a power meter. As can be seen, the average output power increases with the repetition rate, from 58.3 to 84.9 mW. Figure 10 shows the experimental results for the pulse parameters of the actively Q-switched laser in dual-wavelength operation. The measured pulse duration and the estimated pulse energy and pulse peak power as functions of the repetition rate are shown. In Figure 10(a), results for the pulse duration and pulse energy over repetition-rate variations from 35 to 75 kHz are presented. As can be observed, the pulse duration and pulse energy present the behavior typically obtained in actively Q-switched lasers: the pulse duration increases and the pulse energy decreases as the repetition rate increases. The pulse duration widens over a range from 213 to 586 ns, and the pulse energy decreases from 1.67 to 1.13 µJ as the pulse widens. Figure 10(b) shows the dependence of the pulse peak power on the repetition rate. As shown, the pulses undergo a peak-power decrease as the repetition rate increases. At lower repetition rates, the pulses have shorter durations, are more energetic, and have higher peak powers.

Conclusions

In this chapter, actively Q-switched fiber lasers for single- and dual-wavelength operation have been experimentally investigated. The documentary review focused on reported approaches to Q-switched fiber lasers, taking into account cavity elements, configurations, experimental results, and the incorporation of new fiber technologies.
A review was presented, covering pulsed laser operating principles in the Q-switching technique through single and dual wavelength operation, with emphasis on lasers that use an active Q-switching method. The review converged on configurations in which double clad fibers (specifically EYDCF) are used as the gain medium and active Q-switching is applied with an AOM. The main parameters of actively Q-switched fiber lasers, including the repetition rate, pulse duration, pulse energy, average power, and peak power, were discussed experimentally. This experimental study was presented in terms of two proposed actively Q-switched fiber laser setups. The typical behavior of the actively Q-switched parameters was mainly discussed for the first experimental setup, a ring cavity dual wavelength actively Q-switched fiber laser based on fiber Bragg gratings for wavelength selection. The second experimental setup is a linear cavity actively Q-switched fiber laser with single and dual wavelength operation using a Hi-Bi FOLM. The use of the Hi-Bi FOLM as a method to adjust the losses within the cavity (when required) for simultaneous dual wavelength laser operation was discussed.
Genome-wide analysis of lipolytic enzymes and characterization of a high-tolerant carboxylesterase from Sorangium cellulosum Microorganisms are important sources of lipolytic enzymes with characteristics promising for wide use in industrial biotechnology. The cellulolytic myxobacterium Sorangium cellulosum is rich in lipolytic enzymes in its genome, but few have been investigated. Here, we discerned 406 potential lipolytic enzymes in 13 sequenced S. cellulosum genomes. These lipolytic enzymes belonged to 12 families, and most are novel, with low identities (14–37%) to those reported. We characterized a new carboxylesterase, LipB, from the alkaline-adaptive So0157-2. This enzyme, belonging to family VIII, hydrolyzed glyceryl tributyrate and p-nitrophenyl esters with short chain fatty acids (≤C12), and exhibited the highest activity against p-nitrophenyl butyrate. It retained over 50% of its activity across a broad temperature range (20°C to 60°C) and under alkaline conditions (pH 8.0–9.5); the enzymatic activity was stable in methanol, ethanol and isopropanol, and was significantly stimulated in the presence of 5 mM Ni2+. LipB also exhibited β-lactamase activity on nitrocefin, but not on ampicillin, cefotaxime or imipenem. The bioinformatic analysis and specific enzymatic characteristics indicate that S. cellulosum is a promising resource for exploring lipolytic enzymes for industrial applications.
Introduction Lipolytic enzymes represent a group of proteins catalyzing the hydrolysis and formation of ester bonds of a structurally diverse array of compounds, with no requirement for cofactors (Bornscheuer, 2002). Lipolytic enzymes can be employed for the synthesis of structurally diverse polymeric materials by catalyzing free combinations of diester and diol monomers (Kim and Dordick, 2001; Ning et al., 2022), and for forming chiral and enantioselective intermediates in the production of agrochemicals, flavoring compounds and pharmaceuticals (Tanaka et al., 2002; Athawale et al., 2003). Lipolytic enzymes are also used to degrade environmentally toxic pesticides such as pyrethroids, carbamates and organophosphates in an effective and green manner (Diegelmann et al., 2015; Sirajuddin et al., 2020). Significantly, lipolytic enzymes with high-tolerance characteristics (thermophilic, cold-adaptive, alkaline, salt-tolerant, or stable in organic solvents) can bring higher yields and fewer by-products in the production of foods, detergents, fragrances and pharmaceuticals than enzymes working under mesophilic conditions (Priyanka et al., 2019; Al-Ghanayem and Joseph, 2020; Johan et al., 2021). With the increasing requirement for lipolytic enzymes in industrial biocatalysis, discovering novel lipolytic enzymes or remolding existing ones has attracted a lot of interest. Microbial lipolytic enzymes are widely used in industrial processes because of their potentially broad substrate specificity, high regio- and stereoselectivity, and remarkable stability in organic solvents (Jaeger and Eggert, 2002; Panda and Gowrishankar, 2005). Exploring microbial genomic resources provides opportunities for deep excavation of novel lipolytic enzymes (Johan et al., 2021).
The lipolytic enzymes include two types: carboxylesterases (EC 3.1.1.1), which hydrolyze small water-soluble esters or triglycerides with fatty acids shorter than C6, and lipases (EC 3.1.1.3), which hydrolyze triglycerides composed of long-chain fatty acids. Both carboxylesterases and lipases belong to the alpha/beta-hydrolase superfamily and are characterized by a catalytic triad composed of Ser, His and Asp (or Glu) residues and a conserved G-x-S-x-G, G-D-S-L or S-x-x-K motif around the nucleophilic serine at the active site (Holmquist, 2000; Bornscheuer, 2002). The classification system of bacterial lipolytic enzymes was first proposed from 53 enzymatic proteins by Arpigny and Jaeger, and comprised 8 families defined by biochemical properties and sequence identities (Arpigny and Jaeger, 1999). With the discovery of more lipolytic enzymes, the bacterial lipolytic enzymes have been expanded to 19 families based on phylogenetic criteria, conserved motifs and biological characteristics (Kovacic et al., 2018; Johan et al., 2021). Lipases are grouped in family I, which includes eight subfamilies, while carboxylesterases are reported in the remaining 18 families. Among these lipolytic enzyme families, family VIII carboxylesterases are unique in displaying both esterase and β-lactamase activities (Biver and Vandenbol, 2013; Mokoena et al., 2013; Jeon et al., 2016; Kwon et al., 2019), making them promising in the synthesis and modification of β-lactam antibiotics (Mokoena et al., 2013). The active serine residues of family VIII carboxylesterases lie in the S-x-x-K motif, instead of the typical G-x-S-x-G pentapeptide, forming the catalytic triad with the lysine and another conserved tyrosine in the Y-x-x motif, the same as in β-lactamases (Petersen et al., 2001; Cha et al., 2013).
The cellulolytic myxobacterium Sorangium cellulosum is not only extremely attractive for drug screening (Bollag et al., 1995; Gerth et al., 2003), but also exhibits extensive degradation abilities on a wide range of macromolecules, such as lipids and polysaccharides. In recent years, some novel glycoside hydrolases have been reported from this cellulolytic myxobacterium (Wang et al., 2012; Li et al., 2022), but little attention has been paid to lipolytic enzymes. S. cellulosum genomes have many ORFs (open reading frames) predicted to encode various hydrolytic enzymes (Schneiker et al., 2007; Han et al., 2013), and four lipolytic enzymes have been characterized (Cheng et al., 2011; Wu et al., 2012; Udatha et al., 2015), including the cold-adapted lipase LipA previously reported in the So0157-2 strain. Studying lipolytic enzymes with promiscuous activities will be helpful for our understanding of the cellulolytic myxobacteria and for potential applications of these diverse enzyme resources. In this study, we identified the lipases and carboxylesterases from 13 available sequenced S. cellulosum genomes and characterized a novel family VIII carboxylesterase, LipB, which was alkali-tolerant, active over a wide temperature range, and notably stimulated by specific alcohols, suggesting potential in industrial processing associated with alcohol or detergent production. Diverse lipases and carboxylesterases with potential tolerance to adverse conditions from S. cellulosum genomes will contribute to lipolytic enzyme applications in industrial production. Strains, plasmids, culture media and chemicals Strains and plasmids used in this study are listed in Supplementary Table S1. Escherichia coli strains DH5α and BL21 (DE3) were used to clone plasmids and express the recombinant protein. E.
coli strains were grown in Luria-Bertani (LB) broth at 37°C. Myxococcus xanthus strains were grown at 30°C in CYE medium [10 g/L casitone, 5 g/L yeast extract, 10 mM 3-(N-morpholino)propanesulfonic acid (MOPS) and 4 mM MgSO4, pH 7.6]. The media were supplemented with 40 μg/mL kanamycin, 30 μg/mL apramycin, or 10 μg/mL tetracycline if required. We employed the plasmids pET-28a and pET-29b as expression vectors, while pBJ113 and pSWU30 served as the knock-out plasmid and the overexpression plasmid, respectively. Primers used in constructing plasmids are listed in Supplementary Table S2. Bioinformatics analysis of lipolytic enzymes in Sorangium cellulosum genomes Lipolytic enzymes were identified from 13 S. cellulosum genomes by PSI-BLAST searches using representative enzymes of the 19 families as queries (num_iterations = 3, E-value cut-off = 10−5). The protein sequences were obtained from the GenBank assemblies of S. cellulosum genomes (Supplementary Table S3). The identified proteins were further filtered by analysis of the characteristic conserved motifs with FIMO, and the retained lipolytic enzymes were classified based on sequence identities with the query sequences. The information on query sequences and consensus motifs of each lipolytic enzyme family is listed in Supplementary Table S4. The sequence similarity network of the 406 predicted S. cellulosum lipolytic enzymes was constructed together with 171 studied lipolytic enzymes by EFI-EST (Oberg et al., 2023); the E-value for BLAST was set to 5 and the alignment score threshold was set at 10. Deduced amino acid sequences of the family VIII carboxylesterases were further aligned with the MAFFT online version and rendered with ESPript (Robert and Gouet, 2014). The phylogenetic tree was constructed using the maximum likelihood method in IQ-TREE 2 (Minh et al., 2020) and modified with iTOL (Letunic and Bork, 2021).
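The motif-filtering step described above (PSI-BLAST hits retained only if they contain a family's conserved motif) can be sketched with simple regular expressions. The paper used FIMO with the consensus motifs of all 19 families, so the patterns below are simplified stand-ins for illustration only:

```python
import re

# Simplified stand-ins for the conserved motifs named in the Introduction;
# the actual filtering used FIMO against family-specific consensus motifs.
MOTIF_PATTERNS = {
    "G-x-S-x-G": r"G.S.G",  # classical carboxylesterase/lipase pentapeptide
    "G-D-S-L": r"GDSL",
    "S-x-x-K": r"S..K",     # family VIII / class C beta-lactamase motif
}

def has_conserved_motif(seq: str, pattern: str) -> bool:
    """True if the sequence contains the given motif pattern."""
    return re.search(pattern, seq) is not None

def filter_candidates(candidates: dict) -> dict:
    """Keep only PSI-BLAST hits containing at least one conserved motif."""
    return {
        name: seq
        for name, seq in candidates.items()
        if any(has_conserved_motif(seq, p) for p in MOTIF_PATTERNS.values())
    }
```

For example, a hypothetical hit containing "GHSAG" passes the G-x-S-x-G check, while a sequence lacking all three patterns is discarded, mirroring the reduction from 1,084 raw hits to 406 motif-confirmed enzymes.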
Three-dimensional structure and docking analysis of LipB To model the three-dimensional (3D) structure of the LipB protein, we submitted the amino acid sequence to the I-TASSER online program, which is based on a threading approach (Yang and Zhang, 2015), and visualized the result with PyMOL (Lilkova, 2015). AlphaFold2 (Cramer, 2021) was also applied to build the 3D structure of LipB, and models with a predicted local-distance difference test (pLDDT) value above 70 at major sites were considered credible (Jumper et al., 2021). The accuracy of the predicted structural models was assessed with SAVES v6.0. For molecular docking with AutoDock Vina (Trott and Olson, 2010; Eberhardt et al., 2021), the structure of LipB predicted by AlphaFold2 was employed as the receptor protein, and ligand molecules were downloaded in mol2 format from the PubChem database. The docking results were visualized using PyMOL. Expression and purification of recombinant LipB The codon-optimized lipB sequence (Supplementary Table S5) was synthesized by GENEWIZ (Suzhou, China), amplified with the lipB F1/R1 and lipB F2/R2 primer pairs and cloned into the expression vectors pET-28a and pET-29b by homologous recombination with the ClonExpress® MultiS One Step Cloning Kit (Vazyme, China) to generate the recombinant plasmids pET-28a-lipB and pET-29b-lipB. For expression of the LipB protein, E. coli BL21 (DE3) harboring the recombinant plasmid was grown in 50 mL LB medium with 40 μg/mL kanamycin at 37°C to an OD600 of 0.6. Then isopropyl-β-D-thiogalactoside (IPTG) was added to the culture at a final concentration of 1 mM for an additional 6 h of incubation at 37°C, or 0.1 mM for an additional 22 h at 16°C. The cells were collected by centrifugation and resuspended in lysis buffer (25 mM Tris, 200 mM NaCl, 10% glycerin, pH 8.0), then disrupted with an ultrasonic cell disruptor, and the supernatant was obtained by centrifugation at 12,000 × g and 4°C for 30 min. The expression of the LipB protein was examined by SDS-PAGE.
To prepare the recombinant protein (LipB tagged with maltose-binding protein, MBP-LipB), E. coli BL21 (DE3) cells harboring the recombinant vector pET29b-lipB were cultured in 3 L of LB broth and induced with 0.1 mM IPTG at 16°C for 22 h. The supernatant was incubated with an amylose affinity column (GE Healthcare, America), which was pre-equilibrated with lysis buffer, and then eluted with elution buffer containing 10 mM maltose. The soluble MBP-LipB protein was further purified by gel permeation chromatography to remove non-target proteins and finally resuspended in lysis buffer. Esterase activity assay of MBP-LipB To assay the crude enzymatic activity, E. coli BL21 (DE3) cells harboring the recombinant vector pET29b-lipB, without and with induction by 0.1 mM IPTG at 16°C for 22 h, were harvested and resuspended in fresh LB broth at a concentration of 10 OD/mL, then inoculated on a plate with glyceryl tributyrate, incubated overnight and observed with a stereo microscope (Nikon, Japan). The standard assay for esterase activity was carried out spectrophotometrically with a reaction mixture containing 1 mM p-NP esters, 1 μL (0.67 μg) of purified MBP-LipB and 1% acetonitrile in a total volume of 1 mL of 50 mM Tris-HCl buffer (pH 8.0) (Petersen et al., 2001; Gupta et al., 2002). The reaction mixture was incubated for 10 min and the reaction terminated by the addition of 20 μL of 10% SDS. The enzymatic activity was measured by monitoring the change of absorbance at 405 nm. All measurements were performed in triplicate. Similarly, the optimal temperature was determined at temperatures ranging from 20°C to 70°C. The thermostability was determined by incubating the reaction mixtures at 35°C, 45°C or 55°C for different times up to 1 h, and the residual activity was measured.
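The A405 readout converts to units of esterase activity via the molar absorptivity of the released p-nitrophenol. A minimal calculation, assuming a 1 cm light path and an extinction coefficient of ~18,000 L mol⁻¹ cm⁻¹ at alkaline pH (a commonly used literature value, not one stated in the paper), with one unit (U) defined as 1 µmol product per minute:

```python
# Assumed values (not from the paper): epsilon of p-nitrophenol at ~pH 8
# and a 1 cm cuvette path length.
EPSILON_PNP = 18000.0  # L mol^-1 cm^-1
PATH_CM = 1.0

def specific_activity_u_per_mg(delta_a405, minutes, volume_ml, enzyme_mg):
    """One unit (U) = 1 umol p-nitrophenol released per minute."""
    conc_m = delta_a405 / (EPSILON_PNP * PATH_CM)   # mol/L released (Beer-Lambert)
    umol = conc_m * 1e6 * (volume_ml / 1000.0)      # umol in the reaction volume
    return (umol / minutes) / enzyme_mg             # U per mg enzyme

# Hypothetical reading mirroring the assay setup (1 mL, 10 min, 0.67 ug enzyme):
act = specific_activity_u_per_mg(0.36, 10.0, 1.0, 0.67e-3)  # ~3 U/mg here
```

The same conversion, normalized to the p-NP butyrate value as 100%, underlies the relative-activity figures quoted in the Results.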
The effects of metal ions (MnCl2, MgCl2, CaCl2, CuCl2, CoCl2, ZnCl2 and NiCl2) or organic solvents (methanol, ethanol, acetone, trichloromethane, acetonitrile and isopropanol) on LipB esterase activity were determined by incubating the reaction mixture with the metal ions or organic solvents under the reaction conditions mentioned above for 1 h, after which the residual activity was tested. The concentration of each metal ion was 5 mM, and the final concentrations of the organic solvents were 5%, 10% or 15%. The enzymatic activity of the protein without additives was defined as 100%. β-lactamase activity assay of MBP-LipB The β-lactamase activity of LipB was determined using nitrocefin as the substrate by the method of O'Callaghan et al. (1972) with small modifications. Briefly, a reaction mixture containing 1 mM nitrocefin, 1 μL of purified enzyme (0.67 μg), 500 μL of 200 mM Tris-HCl buffer (pH 7.0) and double-distilled water (ddH2O) in a total volume of 1 mL was incubated at 30°C and measured spectrophotometrically at 482 nm every 1 h. To exclude the influence of the MBP-tag, we used the MBP protein alone in the β-lactamase activity assay with nitrocefin as the substrate.
Ampicillin, cefotaxime or imipenem was also used as substrate to detect the β-lactamase activity of MBP-LipB, using the same reaction mixture without nitrocefin, incubated at 30°C for 24 h. The reaction mixtures were analyzed by HPLC equipped with a C18 reverse-phase column (Thermo Fisher Scientific, Boston, USA). The elution condition was a constant concentration gradient of phosphate buffer and acetonitrile (HPLC grade): at a flow rate of 0.5 mL/min for 20 min with detection at 230 nm for ampicillin, at a flow rate of 0.8 mL/min for 20 min with detection at 254 nm for cefotaxime, and at a flow rate of 1 mL/min for 15 min with detection at 295 nm for imipenem. The reaction metabolites were identified by comparing the retention times and UV-visible spectra with those of the negative control, in which ddH2O replaced the enzyme. The optimal temperature and pH for LipB β-lactamase activity were also determined. Because nitrocefin was unstable under thermal (≥55°C) or alkaline (pH ≥8.0) conditions, the detection was conducted over the temperature range from 20°C to 50°C and the pH range from 3.0 to 7.5. Epothilone hydrolase activity assay of MBP-LipB To analyze the hydrolase activity of MBP-LipB against epothilones, 0.5 mM epothilone A or epothilone B and 2 μg of purified enzyme were added to 50 mM Tris-HCl (pH 9.0) in a final volume of 100 μL and incubated at 30°C for 24 h. An equal volume of ethyl acetate was added to stop the reaction; the mixture was evaporated under reduced pressure and dissolved in 50 μL methanol. The remaining epothilone A or epothilone B was determined by HPLC. The elution condition was a programmed concentration gradient of 60% methanol (HPLC grade) and 40% ddH2O (HPLC grade) at a flow rate of 1 mL/min for 25 min with detection at 249 nm. Primers lipB-up F and lipB-up R, and lipB-down F and lipB-down R, were used to amplify the upstream and downstream homologous arms of lipB from the S.
cellulosum So0157-2 genome, respectively. The arms were ligated into pBJ113 to obtain the knockout plasmid pBJ-lipB. The lipB gene was amplified with the lipB F3 and lipB R3 primers, digested with KpnI and EcoRI and then cloned into pSWU30-pilA, resulting in the overexpression plasmid pSWU30-pilA-lipB. pBJ-lipB and pSWU30-pilA-lipB were introduced into the epothilone-producing strain ZE9 (Zhu et al., 2015) by electroporation, and the positive mutant strains ZE9∆lipB and ZE9 + lipB were screened as previously reported (Yue et al., 2017). ZE9 and the mutants were cultivated overnight in 50 mL of CYE medium, then inoculated at a ratio of 0.04 OD/mL into 50 mL of medium containing 2% XAD-16 resin and fermented at 30°C for 7 days. The resin was harvested with a strainer and extracted with 3 mL methanol by shaking overnight at room temperature (Gong et al., 2007). The supernatant was examined by HPLC. The yield of epothilones was quantified from the peak area in the UV chromatogram by reference to a calibration standard. Sorangium cellulosum We searched the 13 available S. cellulosum genomes (Supplementary Table S3) by PSI-BLAST with 19 representatives of the identified lipolytic enzymes of different families as query sequences, and discerned 1,084 non-redundant lipolytic enzymes belonging to 14 families (Supplementary Table S6). These putative enzymes were filtered with FIMO to determine the existence of the typical motifs conserved in lipolytic enzyme families (Johan et al., 2021). After removing the sequences without the conserved motifs, we obtained 406 lipolytic enzymes (Supplementary Table S7). Notably, because the conserved motifs in families III, VI, XV and XIX, or in families IV and VII, are closely similar, 61 of the 406 proteins appeared in several families; their family assignment was determined by the BLASTP similarity values (Supplementary Table S8). Finally, these S. cellulosum lipolytic enzymes were classified into 12 families (Figure 1A). The S.
cellulosum genomes each contained multiple genes (22-44) encoding lipolytic enzymes (Supplementary Table S9), with varied compositions across families (Figure 1B). According to the sequence similarity network analysis, the lipolytic enzymes belonging to families I, IV, VII, VIII, and XVII showed high similarities, but many others (up to 60% of the 406 enzymes) exhibited low similarities with the reported representatives (Figure 1C), showing that S. cellulosum is rich in novel lipolytic enzymes. The family VIII carboxylesterases were the most abundant lipolytic enzymes occurring in S. cellulosum. These 74 predicted family VIII carboxylesterases, together with 29 reported ones, could be divided into four groups (Figure 1D). To understand lipolytic enzymes in S. cellulosum, we further investigated the sequence and functional characteristics of LipB (AKI82204.1), a family VIII carboxylesterase in S. cellulosum So0157-2, an alkaline epothilone-producing strain with the largest known S. cellulosum genome (Han et al., 2013). One more reason for the choice of LipB is that the lipB gene is adjacent to the biosynthetic gene cluster of epothilones, and the LipB protein was once predicted to be an esterase responsible for the hydrolysis of epothilones to prevent self-toxicity (Gerth et al., 2002; Zhao et al., 2010; Li et al., 2017).
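The ambiguity-resolution step described earlier, in which the 61 proteins matching motifs of several families were assigned by their best BLASTP identity, amounts to a simple arg-max. A sketch with hypothetical identity values:

```python
# Illustrative arg-max over hypothetical BLASTP identities; in the paper,
# proteins matching motifs of several families (e.g. III/VI/XV/XIX) were
# assigned to the family whose query enzyme gave the highest identity.
def assign_family(blastp_identities):
    """blastp_identities: {family_name: percent identity to that family's query}."""
    return max(blastp_identities, key=blastp_identities.get)
```

For example, `assign_family({"III": 24.0, "VI": 31.5, "XV": 18.2})` returns "VI", since family VI's representative scored highest.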
Sequence alignment and three-dimensional structure of LipB So0157-2 contained 34 lipolytic enzymes, 7 of which belonged to the family VIII carboxylesterases. The lipB gene encodes a protein of 454 amino acid residues with a predicted molecular weight of 48.4 kDa. Multiple amino acid sequence alignment revealed that LipB contains the conserved S-x-x-K motif (residues 118-121) and Y-x-x motif (residues 239-241), which are commonly observed in class C β-lactamases, penicillin-binding proteins and family VIII carboxylesterases (Figure 2). Besides, the W-x-G motif, conserved in the oxyanion hole of family VIII carboxylesterases (Nan et al., 2019; Park et al., 2020), was also observed in the C-terminal region of LipB (Trp407-Asp408-Gly409). These sequence characteristics suggested that LipB is a dual-functional enzyme with class C β-lactamase and family VIII carboxylesterase activities. We constructed the 3D structure of the LipB protein using the I-TASSER online program, which revealed that LipB is structurally close to some carboxylesterases (PDB IDs: 4IVI, 1CI8, and 3ZYT) and several penicillin-binding proteins (PDB IDs: 4P6B, 5GKV, and 2QMI) (Supplementary Table S10). The optimal structures predicted by I-TASSER and AlphaFold2 (model 1 and rank_1) were aligned and matched well with each other (Supplementary Figure S1A), and Ramachandran plot analysis revealed that 65.1% of residues in model 1 and 85.1% in rank_1 were in the most favored regions (Supplementary Figure S1B). According to Verify3D, 68.9% of residues in model 1 scored ≥0.2 in the 3D-1D profile, while 70.9% of residues in rank_1 scored ≥0.2 (Supplementary Figure S1C). Therefore, the rank_1 model estimated by AlphaFold2 (pLDDT: 87.3, pTM: 0.857) was adopted as the putative 3D structure of LipB (Figure 3A).
Similar to the class C β-lactamases and family VIII carboxylesterases (Wagner et al., 2002), LipB is composed of two domains, a small helical domain (residues 136-257, painted orange in Figure 3A) and an alpha/beta-domain (residues 1-135 and 258-454, painted green). The helical domain consists of four alpha-helices and a short two-stranded antiparallel beta-sheet. The alpha/beta-domain has five long antiparallel beta-sheets, two pairs of short two-stranded antiparallel beta-sheets and 10 alpha-helices (7 on one side and 3 on the other). These two domains shape a catalytic active pocket, into which the three conserved motifs (S-x-x-K, Y-x-x, W-x-G, painted red in the surface view) fit precisely (Figure 3B). As shown in Figure 3C, a structure superimposition (root-mean-square deviation (RMSD) of 1.1 Å) was observed between LipB and EstU1 (PDB ID: 4IVI), a characterized family VIII carboxylesterase with β-lactamase activity on first-generation cephalosporins, and the key active sites essential for the β-lactam hydrolytic activity overlapped well in LipB (Ser118, Lys121 and Tyr239) and EstU1 (Ser100, Lys103 and Tyr218). The above bioinformatics analysis further suggested that LipB might display both esterase and β-lactamase activities. Expression and purification of recombinant MBP-LipB To investigate the biological activity of this enzyme, we expressed the recombinant LipB in E. coli. The codon-optimized lipB gene was cloned into different expression vectors and transformed into E.
coli BL21 (DE3). With the expression plasmid pET28a-lipB, the His-LipB recombinant proteins were expressed in an insoluble form, even after optimization of the induction conditions (Supplementary Figure S2A; the band of His-LipB is marked with red arrows). When LipB was labeled with the MBP-tag at its N-terminus, MBP-LipB was solubly expressed in cells harboring pET-29b-lipB; more recombinant protein was obtained with induction by 0.1 mM IPTG at 16°C for 22 h than by 1 mM IPTG at 37°C for 6 h (Supplementary Figure S2B). The recombinant proteins were purified with amylose affinity chromatography and gel permeation chromatography. Notably, if the MBP-tag was truncated from MBP-LipB, the LipB protein became insoluble. Thus, the recombinant MBP-LipB protein was employed in the following assays. Biochemical characterization of LipB as an esterase To determine the esterase activity of LipB in E. coli, the IPTG-induced and uninduced cells harboring pET-29b-lipB were inoculated on plates supplemented with glyceryl tributyrate. After overnight incubation, an obvious transparent zone was observed around the induced colonies, but not the uninduced ones, indicating that the induced LipB could hydrolyze glyceryl tributyrate (Figure 4A). Subsequently, we purified the MBP-LipB proteins and assayed the esterase activity with p-NP ester substrates with various fatty acid chain lengths (from C2 to C12). As shown in Figure 4B, LipB efficiently hydrolyzed p-NP esters with short-chain fatty acids and exhibited the highest activity toward p-NP butyrate (C4). Defining the hydrolytic activity toward p-NP butyrate as 100%, LipB maintained more than 70% of its activity against p-NP acetate (C2) and p-NP hexanoate (C6). With longer chain lengths (C8 and C10), the esterase activities were less than 40%, and dropped to 20% with p-NP laurate (C12) as substrate. Incubation of the p-NP esters with the MBP protein alone showed no activity,
which excluded the influence of the MBP-tag on the esterase activity of LipB (Supplementary Figure S3A). In addition, the structure modeling also showed that the MBP and LipB fragments formed two separate parts with no significant interactions (Supplementary Figure S3B). Notably, the activities of LipB were somewhat different from those of LipA from the same strain, which belongs to family XV and exhibited the highest activity toward p-NP acetate (C2) under various pH and temperature conditions (Cheng et al., 2011). The effect of pH on the esterase activity of LipB was measured over a pH range from 3 to 10 with p-NP butyrate as the substrate. As shown in Figure 4C, the LipB protein exhibited high activity under alkaline conditions (pH 8.0-9.5), with maximum activity at pH 9.0, corresponding to the natural growth environment of S. cellulosum So0157-2 (Han et al., 2013). At pH values lower than 7.0 or higher than 10.0, LipB lost more than 80% of its activity. LipB retained high activity over a broad temperature range (from 20°C to 60°C) and exhibited maximum activity at 50°C. When the temperature increased to 70°C, the enzyme lost its activity almost completely (Figure 4D). Thus, unlike the cold-adapted lipase LipA identified previously, LipB works as a thermophilic family VIII carboxylesterase. As to temperature stability, LipB retained about 60% of its activity after incubation at 35°C or 45°C for 60 min. However, a 30-min incubation at 55°C diminished more than 80% of the enzymatic activity (Figure 4E).
The activities of LipB in the presence of different metal ions are shown in Table 1. In general, lipolytic enzymes do not need cofactors for hydrolysis of the ester bond. Nevertheless, it has been reported that the activities of lipases and carboxylesterases are enhanced by some divalent cations such as Ca2+, Zn2+, and Mg2+ (Choi et al., 2004; Gao et al., 2016; Araujo et al., 2020). For LipB, addition of Ca2+, Cu2+ or Zn2+ reduced the activity approximately by half, whereas the presence of 5 mM Mg2+, Co2+ or Ni2+ enhanced the esterase activity, with the highest increase, to 142.8%, obtained by the addition of Ni2+. As suggested by Araujo et al. (2020), the activity of LipB might be strengthened by these divalent cations promoting rapid product release or cleaning of the enzyme's active sites. More interestingly, LipB exhibited superior tolerance to organic solvents. The activity of LipB was stimulated 2-fold, 2.7-fold and 1.6-fold by the presence of 15% methanol, 10% ethanol or 5% isopropanol, respectively, but suppressed by acetone, trichloromethane and acetonitrile; 10% trichloromethane or 15% acetonitrile inactivated LipB completely (Table 2). Organic solvent stability is an important criterion for industrial esterases (Gorman and Dordick, 1992). The excellent organic solvent stability of LipB implied its potential in industrial applications for biotransformation and bioremediation associated with organic solvents.
β-lactamase activity of LipB According to their β-lactamase activity, family VIII carboxylesterases are classified into three types: having no β-lactamase activity, represented by EstB (Petersen et al., 2001); active only toward nitrocefin, represented by EstC (Rashamuse et al., 2009); and having β-lactamase activities toward different β-lactam antibiotics including cephaloridine, cefazolin, cephalothin and nitrocefin, represented by EstU1 (Jeon et al., 2011). We used four β-lactam antibiotics as substrates for the β-lactamase activity assay: ampicillin, nitrocefin, cefotaxime and imipenem (Supplementary Figure S4). As shown in Figure 5A, the absorbance of nitrocefin at 482 nm rose to 1.42 after incubation with MBP-LipB, and the color of the solution changed from yellow to red, indicating that LipB could hydrolyze the amide bond of nitrocefin (Lee et al., 2005); LipB exhibited its maximum β-lactamase activity at 40°C and pH 7.0 (Supplementary Figure S5). The β-lactamase activities of LipB against ampicillin, cefotaxime and imipenem were also analyzed by HPLC, but no new peaks derived from substrate hydrolysis were observed (Figure 5B), illustrating that LipB had no β-lactamase activity toward these three antibiotics. Thus, LipB mimicked the β-lactamase activity of EstC, although the structure of LipB was closer to EstU1 (Figure 3C). It was previously reported that appropriate lengths of the Ω-loop and the R1 segment (the connecting region between α6 and α8) in EstU1 play a critical role in its substrate promiscuity toward multiple β-lactam antibiotics, and that a long Ω-loop might cover the R1 site and block access to the catalytic triad (Cha et al., 2013). We aligned the amino acid sequences of LipB, EstU1, EstC and EstB, and found that the Ω-loop of LipB was 7 residues longer than that of EstU1, 4 residues longer than that of EstC, and 9 residues shorter than that of EstB (Figure 5C), which suggested that, compared with EstU1 and EstB, the intermediate length of the Ω-loop
might prompt LipB to form a conformation that allows moderate β-lactamase activity. In addition, based on the docking model estimated by AutoDock, nitrocefin was well located in the active pocket without blocking by the moderate Ω-loop of LipB. The serine in the S-x-x-K motif, the tyrosine in the Y-x-x motif, and the tryptophan in the W-x-G motif formed hydrogen-bond interactions with the ligand nitrocefin, and the carbonyl oxygen in the β-lactam ring of nitrocefin was located at the oxyanion hole formed by Ser118 and Gly410 (Figure 5D). In the docking model of LipB/ampicillin, by contrast, the carbonyl oxygen of the opened β-lactam ring of ampicillin interacted with Ser118 and Tyr239 instead of the oxyanion hole (Supplementary Figure S6A), which may prevent the second step of the hydrolysis reaction. For cefotaxime and imipenem, no key residue in the active pocket was linked to the lactam ring of the substrates in the docking models with LipB (Supplementary Figures S6B,C), explaining why they could not be hydrolyzed by LipB. Discussion Diverse microbial lipolytic enzymes exhibit versatile application potential through their catalytic activities on various substrates under adverse conditions (Bornscheuer, 2002). S. cellulosum is an intriguing but unexploited resource for screening lipolytic enzymes. In the 13 sequenced S. cellulosum genomes, we discerned hundreds of lipolytic enzymes belonging to 12 families. In addition to the LipA previously reported in S. cellulosum, LipB also exhibited excellent properties with potential for specific industrial processing. Hence, deep exploration of S. cellulosum promises to provide more novel candidate lipolytic enzymes for the various requirements of industrial biotechnology. Notably, although LipB was once predicted to be responsible for the hydrolysis of epothilones (Gerth et al., 2002; Zhao et al., 2010; Li et al., 2017), our in vitro and in vivo analyses indicated that the enzyme is unable to hydrolyze epothilones (Supplementary Figure S7).
The broad substrate spectra and tolerance of harsh environments make family VIII carboxylesterases potentially applicable in pharmaceutical, organic synthesis and other industrial production, but only dozens of enzymes of this family have been investigated so far. Bioinformatics analysis showed that there are normally many family VIII carboxylesterases in S. cellulosum genomes, which deserve further investigation. As exemplified in this study, we analyzed the sequence and functional characteristics of LipB, a family VIII carboxylesterase. LipB had esterase activity toward glyceryl tributyrate and p-NP esters with short aliphatic side chains, and weak β-lactamase activity against nitrocefin. The enzyme preferred alkaline conditions and exhibited excellent activities over a wide temperature range. Moreover, LipB was well tolerant to organic solvents, and was even stimulated by methanol, ethanol, and isopropanol, which might indicate potential application in specific industrial processes associated with alcohol solvents. Similarly, several family VIII carboxylesterases were reported to be stimulated by methanol (Rashamuse et al., 2009; Selvin et al., 2012; Ouyang et al., 2013; Lee et al., 2016). It was confirmed by Müller et al. that some methanol-stimulated esterases could catalyze the acylation of methanol, and the acyl-enzyme intermediate would rapidly dissociate to accelerate the release of p-nitrophenol, resulting in higher hydrolysis rates (Müller et al., 2021).

Yuan et al. 10.3389/fmicb.2023.1304233 Frontiers in Microbiology frontiersin.org

Family VIII carboxylesterases show different hydrolase activities against different types of β-lactam antibiotics. Although the key active sites (Ser, Lys and Tyr) essential for β-lactamase activity overlapped well in the three-dimensional structures of LipB and EstU1, LipB catalyzed the hydrolysis of only nitrocefin, not ampicillin, cefotaxime or imipenem. According to the results of the sequence alignment and AutoDock, the spatial adaptation of the substrate in LipB might be the more essential criterion for β-lactamase activity.

Conclusion

In this study, we discerned a total of 406 lipolytic enzymes in 13 S. cellulosum genomes, most of which exhibited low sequence similarity to those previously reported. We characterized LipB, a family VIII carboxylesterase that prefers alkaline conditions, is active over a wide temperature range, and is notably stimulated by organic solvents such as methanol, ethanol and isopropanol. We propose that S. cellulosum strains are a treasure trove for mining novel and promising industrial lipolytic enzymes.

FIGURE 1 Identification and amino acid sequence analysis of lipolytic enzymes from S. cellulosum. (A) The number of lipolytic enzymes of each gene family from S. cellulosum is plotted as a histogram and labeled on top of each bar. (B) The number of enzymes of each lipolytic family in the various S. cellulosum genomes is shown as a heat map. Numbers larger than or equal to 5 are marked in white, otherwise in black. (C) The sequence similarity network constructed by EFI-EST. Dots represent enzymes identified from S.
cellulosum, with different enzyme families shown in various colors. Reported enzymes are displayed as gray diamonds. (D) The phylogenetic tree of family VIII carboxylesterases established by IQ-TREE. These enzymes were divided into four groups. The branches of the 29 reported family VIII esterases were painted red. Seven enzymes from S. cellulosum So0157-2 are shown in bold and highlighted with asterisks, and LipB is highlighted with a red asterisk.

FIGURE 2 Multiple amino acid sequence alignment of LipB and reported homologs. Identical residues are indicated by white text on a red background and similar residues are shown in red text on a white background. Groups of residues with a global similarity score above 0.7 are framed in blue. The pivotal conserved motifs S-x-x-K, Y-x-x, and W-x-G are marked as Block I, Block II and Block III, respectively. The putative catalytic triad (Ser118, Lys121 and Tyr239) is indicated by blue asterisks.

FIGURE 3 3D structure modeling of LipB. (A) A ribbon diagram of LipB shown with transparent surface structures. The secondary structures are labeled in black. The protein is divided into two domains: an alpha/beta-domain (residues 1-135 and residues 258-454) shown in green and a helical domain (residues 136-257) shown in orange. (B) The three conserved motifs are shown in red in the surface drawing, and the key residues (S118 and K121 in the S-x-x-K motif, Y239 in the Y-x-x motif, W407 and G409 in the W-x-G motif) are shown as sticks and enlarged in the ribbon diagram of LipB on the right. (C) Alignment of the 3D structures of EstU1 and LipB. EstU1 (PDB code: 4IVI) and LipB are colored green and cyan, respectively. The key residues of the catalytic triad related to β-lactam hydrolytic activity are shown as sticks and enlarged on the right.
FIGURE 4 Esterase activity assay and biochemical characterization of recombinant LipB. (A) Hydrolysis of glyceryl tributyrate by LipB. A transparent zone was observed around the induced colonies. (B) Relative activity of LipB toward p-NP esters with fatty acid chains of different lengths. The substrates used were p-NP acetate (C2), p-NP butyrate (C4), p-NP hexanoate (C6), p-NP octanoate (C8), p-NP decanoate (C10) and p-NP laurate (C12). (C) Effect of pH on the activity of LipB. Activities under the various pH conditions are marked in black (pH 3.0-5.0), wine (pH 5.0-7.0), navy (pH 7.0-9.0) and dark yellow (pH 9.0-10.0). (D) Effect of temperature on the activity of LipB. (E) Thermostability of LipB. The enzyme was preincubated at different temperatures for different time periods, and the residual activity was determined. The residual activities after incubation at 35°C, 45°C and 55°C are marked in black, wine and navy, respectively.

FIGURE 5 Activity of LipB toward β-lactam antibiotics. (A) Color change of the nitrocefin-containing solution with MBP and MBP-LipB, using the reaction mixture without proteins as a blank control. Absorbance values at 482 nm are shown as a histogram. (B) High performance liquid chromatography profiles for β-lactam antibiotics (upper) and β-lactam antibiotics incubated with LipB (lower). The substrate peaks of ampicillin, cefotaxime and imipenem are at 15.4 min, 7.9 min and 3.5 min, and are marked with arrows. (C) Sequence alignment among LipB, EstU1 (AFU54388.1), EstC (ACH88047.1) and EstB (AAF59826.1). The secondary structure assignment corresponds to EstU1. Ω represents the Ω-loop and R1 represents the R1 segment. (D) Molecular docking of LipB with the substrate nitrocefin and interactions of the key sites. The predicted LipB/nitrocefin complex is shown in the upper image; Ω and R1 represent the Ω-loop (yellow, residues 277-325) and the R1 segment (red, residues 181-216), respectively. The S-x-x-K, Y-x-x and W-x-G motifs are marked in cyan. The potential hydrogen bond
interactions of serine, tyrosine, tryptophan and glycine with nitrocefin are displayed below.

TABLE 1 Effects of metal ions on the esterase activity of LipB. a The relative activities are given as a percentage of the activity in the absence of cations.

TABLE 2 Effects of organic solvents on the esterase activity of LipB. a The relative activities are given as a percentage of the activity in the absence of organic solvents. b "-" represents the inactive state of the enzyme.
News from the Adriatico Research Conference on "Superconductivity, Andreev Reflection, and Proximity Effect in Mesoscopic Structures" The Adriatico Research Conference on Superconductivity, Andreev Reflection, and Proximity Effect in Mesoscopic Structures took place at the International Center for Theoretical Physics in Trieste, Italy, July 8-11, 1997. The organizers were Elias Burstein, Leonid Glazman, Teun Klapwijk, and Subodh R. Shenoy. We describe some of the central issues discussed at the conference, along with more personal reflections prompted by new developments. Introductory treatments of the superconducting proximity effect - how electrons behave in the vicinity of an interface between a normal metal and a superconductor - typically follow one of two tracks. One is to consider a Ginzburg-Landau picture of a superconductor/normal metal (SN) contact [1]. In this treatment, which is accurate for temperatures close to the superconducting critical temperature T_c, one may define a position-dependent electron-pair correlation function ⟨Ψ_↓(x)Ψ_↑(x)⟩ which extends from the superconductor into the normal metal, decreasing exponentially (in diffusive materials) on the thermal diffusion length scale L_T = √(ℏD/(k_B T)) (here D is the diffusion constant and T is the temperature). This is perhaps a useful pedagogical approach, in that it allows one to think in a simple way that the Cooper pairs in the superconductor may "leak" through a nonzero thickness of normal metal. However, this framework is of limited practical value for temperatures much lower than T_c, in that it does not provide a way to calculate experimental quantities such as supercurrents, the tunneling density of states in the normal metal, or the conductance properties for various geometries of superconductors and normal metals made into devices. The problem is that the simple Ginzburg-Landau theory does not properly reflect the energy dependence of electronic properties.
In fact, in the low-temperature regime where the electron phase-breaking length is long enough to be ignored, the appropriate length scale for describing the range of pair-correlated electrons diffusing inside a normal metal that is in contact with a superconductor is given by the energy-dependent quantity L_ε = √(ℏD/ε), where ε is the electronic energy measured with respect to the Fermi level. This can be much longer than L_T. A theory which properly takes this energy dependence into account is the "quasiclassical Green's function theory", formulated in general by Eilenberger [2], and specialized to diffusive systems by Usadel [3] (some of their work being done during post-doctoral stays at our home institution - Cornell!). "Quasiclassical" means that the full, non-equilibrium Gorkov equations of superconductivity are coarse-grained so as to eliminate quantum features on the fine scale of 1/k_F. An overview of the current status of these methods was given at the conference by Gerd Schön. A second pedagogical approach to understanding the proximity effect is Andreev reflection [4,5]. In this picture one notes that if an electron is traveling from a normal metal into a superconductor, and it has an energy within a range of the superconducting gap Δ about the Fermi energy, then it cannot simply be transmitted into the superconductor because of the superconducting gap. Instead, one may take the view that the electron, upon encountering the superconducting interface, produces a Cooper pair which is transmitted into the superconductor, and in the process a hole is retroreflected back into the normal metal (in this way conserving charge, energy, and transverse momentum). The physics of Andreev reflection is contained within the quasiclassical Green's function treatment.
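The relative size of these two length scales is easy to check numerically. A minimal sketch (the diffusion constant D = 100 cm²/s, the temperature of 1 K and the probe energy of 1 µeV are illustrative values of our choosing, not numbers quoted above):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def thermal_length(D, T):
    """Thermal diffusion length L_T = sqrt(hbar*D / (k_B*T)), in meters."""
    return math.sqrt(HBAR * D / (KB * T))

def energy_length(D, eps):
    """Energy-dependent pair-correlation length L_eps = sqrt(hbar*D / eps),
    with eps (measured from the Fermi level) in joules."""
    return math.sqrt(HBAR * D / eps)

D = 1e-2                  # m^2/s, i.e. 100 cm^2/s (illustrative)
eV = 1.602176634e-19      # joules per electronvolt

L_T = thermal_length(D, 1.0)          # at T = 1 K
L_eps = energy_length(D, 1e-6 * eV)   # at eps = 1 microelectronvolt

print(f"L_T(1 K)     = {L_T * 1e6:.2f} um")
print(f"L_eps(1 ueV) = {L_eps * 1e6:.2f} um")
```

With these (assumed) numbers, electrons probed at ε = 1 µeV stay pair-correlated roughly ten times farther into the normal metal than the thermal length at 1 K, which is why L_ε rather than L_T sets the relevant scale in the low-temperature regime.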
However, in addition, more recent formulations have extended a purely scattering-theory approach to include coherent multiple processes in which both Andreev and normal reflections occur at imperfect SN interfaces, and electrons may also be scattered from defects within the metals. The overall transport properties are then calculated in the spirit of the Landauer-Büttiker formula, as a function of the transmission eigenvalues of an overall scattering matrix. Colin Lambert reviewed the development of these methods, and Carlo Beenakker described related results which take into account statistical random-matrix properties of the scattering matrix. The central theme of the conference was a clear consensus that both flavors of theory, the quasiclassical Green's function method and the scattering matrix approach, are equivalent in the regimes in which they are both applicable, and that their main results are well-supported by recent experiments. In the words of Michel Devoret, the proximity effect and Andreev reflection are "two sides of the same coin." Nathan Argaman went so far as to show that the Usadel equations of the Green's function theory for diffusive metals may be derived within a purely scattering-matrix approach based on multiple Andreev scattering. The two types of theories do have slightly different ranges of applicability. Green's function methods begin to break down in applications to nm-scale devices with just a few conducting channels, a regime in which the scattering matrix methods are particularly suited. Scattering matrix methods can also be used more easily to model sample-to-sample variations in the mesoscopic size regime. However, the Green's function methods are otherwise more general, as they may include effects of a wide range of interactions, and also possibilities for the modification of superconducting regions due to their contact with normal metals, not included in present scattering-matrix treatments.
The Green's functions are the sole means to calculate quantities, such as magnetization, that cannot be related directly to transmission coefficients. The driving force for renewed interest in the proximity effect in the last 5 years is that new technologies for the fabrication of small electronic devices have allowed SN devices to be studied in size regimes that have never before been accessible - smaller than both the low-temperature phase-breaking length for electrons (L_φ), and also L_T. These samples have been used to test the proximity-effect theories through measurements of densities of states, transport properties, and magnetization.

Density of states: Michel Devoret reported tunneling measurements on small copper wires connected at one end to superconducting aluminum pads. The density of states at different distances from the interface was probed with tunnel junctions a few tens of nm wide. The results displayed excellent agreement with calculations performed by Gerd Schön's group within the quasiclassical theory. As predicted, the density of states in the normal metal was depressed for energies below a scale corresponding to ℏD/x², where x was the distance from the interface. This is the energy range over which electrons diffusing a distance x will all remain in phase. For the best fit to theory, an effective spin-flip scattering time of approximately 65 ps was required. The explanation of this somewhat unexpectedly short time is perhaps at this moment unclear.

Reentrant resistance: Measurements of the transport properties of well-characterized diffusive SN devices having a variety of different geometries were described by representatives of several groups, including M. H. Devoret, H. Courtois, and B. J. van Wees.
In accord with theory, the resistance of diffusive SN wires shows a "reentrance effect" as a function of temperature, meaning that as the temperature is lowered below T_c, the resistance first decreases and then rises to begin to approach the normal-state value as the temperature goes to zero. Both quasiclassical theory and scattering matrix approaches predict that at T = 0 the resistance of a disordered SN wire is precisely the normal-state resistance, and the temperature scale for the minimum in the resistance is the Thouless energy ℏD/L², divided by k_B (L is the length of the normal region of the wire).

Interferometer devices: A clever trick employed by many groups (including V. Petrashov, M. H. Devoret, H. Courtois, and B. J. van Wees) is to make "interferometer" devices which consist of an open loop of superconductor whose ends are attached to different points of a normal metal conductor. An applied magnetic field then acts to change the relative superconducting phase of the two ends, and allows phase-dependent measurements of Andreev-scattering effects. The results of these experiments are all apparently in good qualitative accord with predictions. In particular, for interferometer devices in which there is good metallic contact between the superconductors and the normal metals, the conductance oscillates periodically with the superconducting phase difference, with a relative amplitude which scales roughly as ℏD/(L²k_B T), so that as a function of increasing temperature the oscillation amplitude falls slowly as 1/T rather than exponentially. The interpretation of this result is straightforward, in that while electrons within an energy window of k_B T about the Fermi energy contribute to the total conductance, only those within an energy window of ℏD/L² remain in phase over the sample.
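Both temperature scales above follow from the same combination ℏD/L². A short sketch, with an illustrative geometry (L = 1 µm, D = 100 cm²/s) that we choose for definiteness rather than take from the experiments:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def reentrance_temperature(D, L):
    """Temperature scale of the resistance minimum, T* = hbar*D / (L^2 * k_B)."""
    return HBAR * D / (L**2 * KB)

def oscillation_amplitude(D, L, T):
    """Relative amplitude of the conductance oscillations, ~ hbar*D / (L^2 * k_B * T).
    Meaningful for T above T*, where the ratio is below unity."""
    return HBAR * D / (L**2 * KB * T)

D, L = 1e-2, 1e-6  # m^2/s and m (illustrative values)
print(f"T* ~ {reentrance_temperature(D, L) * 1e3:.0f} mK")
# Doubling the temperature halves the oscillation amplitude: a 1/T falloff,
# not an exponential one.
print(oscillation_amplitude(D, L, 0.4) / oscillation_amplitude(D, L, 0.2))
```

For these numbers the resistance minimum sits near 76 mK, comfortably within dilution-refrigerator range, which is consistent with the reentrance effect only becoming observable once sub-100-mK measurements on µm-scale wires were possible.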
Disorder-enhanced Andreev reflection at tunnel barriers: Devices in which a superconductor and a normal metal are not in metallic contact, but are separated by a tunnel barrier, can also exhibit Andreev reflection processes. For ballistic metal samples joined by tunnel barriers, this effect is predicted to be very weak [5]. However, disorder in the normal metal can enhance Andreev processes by orders of magnitude (a factor of 1000 for the Saclay group), due to an effect dubbed "reflectionless tunneling". The mechanism may be understood in analogy to the Fabry-Perot interferometer in optics. In a disordered sample, an electron of energy ε may be viewed as taking a path in which it undergoes many ordinary reflections from the SN tunnel junction and the disorder, thereby returning to the tunnel junction many times. At each reflection from the SN interface, there will be a small amplitude for Andreev reflection. However, because Andreev reflection is a retro-reflection process, the reflected hole state (corresponding at V = 0 to the electron energy −ε) will have almost precisely the same trajectory as the electron path (but in reverse), and the quantum-mechanical phase accumulated by the hole between reflections at the SN interface will match that of the electron. The end result is that the amplitudes for Andreev reflection at each scattering event at the SN tunnel junction will add constructively, producing a much larger tunneling signal than if the processes were added incoherently. With a voltage applied across the tunnel junction, the differences in energy (and hence wavelength) of the electron and reflected hole states will grow and the constructive interference will gradually be degraded, leaving a zero-bias peak in the conductance. Another related effect, "giant Andreev reflection", was predicted by Beenakker and observed by van Wees for the geometry of a ballistic constriction in series with a disordered conductor.
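The size of the reflectionless-tunneling enhancement can be illustrated with a toy amplitude sum. Assuming N coherent returns to the interface, each contributing the same small Andreev amplitude a (both numbers below are invented for illustration and are not from the Saclay experiment), the coherent probability grows as N² while the incoherent one grows only as N:

```python
def andreev_probability(a, n_returns, coherent):
    """Toy model: total Andreev-reflection probability after n_returns visits
    to the SN interface, each with elementary amplitude a.
    Coherent case: amplitudes add first, |n*a|^2.
    Incoherent case: probabilities add, n*|a|^2."""
    if coherent:
        return abs(n_returns * a) ** 2
    return n_returns * abs(a) ** 2

a = 0.001   # single-pass Andreev amplitude (illustrative)
n = 1000    # number of phase-coherent returns to the interface (illustrative)

p_coh = andreev_probability(a, n, coherent=True)
p_inc = andreev_probability(a, n, coherent=False)
print(p_coh / p_inc)  # enhancement factor equals n in this toy model
```

The ratio is just n, so a thousand phase-coherent returns reproduce the three-orders-of-magnitude scale quoted above; applying a bias voltage spoils the electron-hole phase matching and pushes the system toward the incoherent sum, degrading the zero-bias peak.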
Resistance increases due to superconductivity: Frank Wilhelm proposed a theory to describe the counterintuitive experimental result (Petrashov) that the resistance of a wide, diffusive normal metal wire in contact with a superconductor can increase as the sample is cooled through the superconductor's T_c. The explanation appears to be a geometrical effect - a consequence of the quasi-2-dimensional nature of a wide wire and the fact that current and voltage probes were positioned on opposite sides of the wire.

Supercurrents: Experiments on diffusive SNS devices exhibiting supercurrents were not discussed at the conference in as much depth as conductance measurements. However, from the work of Courtois, it seems clear that supercurrents (at least in "long" devices where the N region is longer than the coherence length) are governed by the same energy scale, the Thouless energy E_c = ℏD/L², which plays the central role in conductance measurements. The typical magnitude of the critical current at T = 0 is I_c ≈ E_c/(eR_n), where R_n is the normal-state resistance of the device and the length scale in E_c is the extent of the normal region.

Ballistic samples: Conductance measurements for 2-dimensional electron gas (2DEG) samples in which electron motion is ballistic, or quasiballistic (as opposed to diffusive), were presented by H. Takayanagi and A. F. Morpurgo. Morpurgo described a breakdown of the idea of simple retroreflection of the hole in Andreev scattering, when the interface contains disorder on the scale of the electron wavelength. Nevertheless, he argued that his experiments could still be described well by a semiclassical ray-tracing procedure in which the possible paths for electrons and holes were added coherently. It appears that additional work is still required to test the behavior of even smaller and cleaner devices (such as those containing ballistic point contacts) where semiclassical ideas break down and a fully quantum picture is required.
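The critical-current estimate lends itself to a quick order-of-magnitude check. In the sketch below the parameters (D = 100 cm²/s, L = 1 µm, R_n = 10 Ω) are again illustrative assumptions, and the relation I_c ~ E_c/(eR_n) is used only up to a numerical prefactor of order one:

```python
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def thouless_energy(D, L):
    """Thouless energy E_c = hbar*D / L^2, in joules."""
    return HBAR * D / L**2

def critical_current(D, L, R_n):
    """Order-of-magnitude T = 0 critical current of a long diffusive SNS
    junction, I_c ~ E_c / (e * R_n), ignoring numerical prefactors."""
    return thouless_energy(D, L) / (E_CHARGE * R_n)

D, L, R_n = 1e-2, 1e-6, 10.0  # illustrative values
E_c = thouless_energy(D, L)
print(f"E_c ~ {E_c / E_CHARGE * 1e6:.1f} ueV")
print(f"I_c ~ {critical_current(D, L, R_n) * 1e6:.2f} uA")
```

For these numbers E_c is a few µeV and I_c lands in the sub-µA range, the scale typical of the long diffusive SNS devices discussed above; note that E_c, not the superconducting gap Δ, controls the result in the long-junction limit.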
It also seems that the characterization of ballistic 2DEG samples can be quite difficult, particularly concerning the quality of the coupling between the superconductor and the 2DEG. In present-generation devices, electron scattering is likely much stronger at this interface than in the 2DEG away from the superconductor.

Magnetization: Magnetization measurements appear to be an area which may provide a challenge for existing theory. Joe Imry described measurements made by A. C. Mota (Zurich) on the susceptibility of fine superconducting wires (Nb) with a thin coating of normal metal (Cu or Ag). As a function of decreasing temperature, the susceptibility becomes increasingly diamagnetic as the sample is cooled below the superconducting T_c, as is expected due to the proximity effect in the normal metal. However, when the samples are cooled even farther, to the low mK range and below, the susceptibility in some wires reaches an extremum and then turns around, so that in the T = 0 limit some wires display a susceptibility that is even more paramagnetic than at the superconductor's T_c. Imry speculated that this behavior may be related to specific "whispering gallery" modes in the normal metal which may decouple from the superconductor well below T_c.

II. SUPERCONDUCTING BREAK JUNCTIONS AND MULTIPLE ANDREEV REFLECTION

One of the most beautiful and interesting experiments discussed at the conference was the work of Elke Scheer and collaborators at Saclay, reported by Christian Urbina. They were able to make detailed measurements of the current traveling via discrete quantum mechanical modes in atomic-scale superconducting (Al) break junctions. This work may be viewed as a continuing development of the point-contact spectroscopy technique pioneered by Igor Yanson, and reviewed by him at the conference.
The beauty of the new Saclay work is that an analysis of the conductance for applied voltages less than the superconducting gap of the electrodes (i.e., the subharmonic gap structure) was used to characterize the transmission coefficient for each of the active transport channels in the atomic-scale contact. The theory of the subharmonic gap structure, developed by J. C. Cuevas and collaborators (Madrid) using a Hamiltonian approach, and described by V. Shumeiko and D. V. Averin in a scattering-theory picture, is remarkably in such good shape that the transmission coefficients of 6 or more different quantum channels may be determined simultaneously. For aluminum break junctions, the Saclay group found, using fits to the theory of the Madrid group, that at least three partially-transmitting channels were required to describe their data, no matter how small they made their contact. Atomic-scale aluminum contacts are therefore different than 2DEG point contacts, where transport may occur via a single, fully transmitting conductance channel. Urbina (referencing Cuevas et al.) speculated that the behavior of the aluminum contacts was rooted in chemistry - each aluminum atom can contribute 3 combinations of hybridized s and p atomic orbitals which lead to transport channels in a wire. Aluminum electrodes joined together by (purely s-like) gold atoms, on the other hand, can be reduced in size to the point that only a single quantum mode contributes to the conductance. In view of the fact that all the channels observed so far in the aluminum break junctions are only partially transmitting, it seems very puzzling that histograms of the values of the total conductance in these samples still show peaks near quantized values of conductance (integer multiples of 2e²/h), corresponding to individual, fully transmitting quantum channels. In a poster presentation, P. Dieleman et al. from T. M.
Klapwijk's group described shot noise measurements of superconductor-insulator-superconductor tunnel junctions which display subharmonic gap structure due to multiple Andreev reflection. At values of voltage corresponding to subharmonic peaks, they found an increase in the magnitude of the shot noise above the classical value, consistent with the view that in multiple Andreev reflection several electrons are transmitted simultaneously.

III. SUPERCONDUCTIVITY IN NM-SCALE PARTICLES

One afternoon of the conference was given over to consideration of the nature of superconductivity in nm-scale metal particles, small enough that the discrete electrons-in-a-box energy-level spacing is comparable to the superconducting gap. One of us (D.C.R.) described experiments in which these discrete levels were measured in single aluminum nanoparticles by a tunneling technique, and the effects on the spectra of a variety of different forces and interactions, including superconducting pairing, were analyzed. Andrei Zaikin reviewed theoretical results that as the level spacing in a metal sample is increased to approach the superconducting gap, the pairing parameter within BCS theory should be different for even and odd numbers of electrons. It was not clear how this effect might be measured experimentally, for two reasons. The first is that the even and odd pairing parameters may not be observables - what is measured in a tunneling experiment are energy differences between states with even and odd numbers of electrons, so that it is not a trivial matter to separate the different pairing parameters. Also, for nanoparticles in the size range where even and odd differences become interesting, the level spacing due to independent-electron quantum confinement becomes comparable to gaps due to superconductivity, and it is not clear how to separate these two effects.
Jan von Delft described a model for how spin pair-breaking due to an applied magnetic field will affect the eigenstates in a small superconducting particle. Von Delft found that the discreteness of the electronic spectrum may change the nature of the superconducting transition, compared to the theory of Clogston and Chandrasekhar (C&C) [6], which describes well the transitions observed in thin-film samples where the electronic spectrum is effectively a continuum. In the C&C theory, the tunneling threshold changes discontinuously at the transition field, but for a small particle this change may be continuous, if the transition from the superconducting state involves the flipping of just a single electron spin.

IV. MEAN FIELD THEORIES AND BEYOND

Igor Aleiner described recent work with Altshuler on the theory of a tunneling anomaly in a superconductor in a magnetic field above the paramagnetic limit. The interesting result is that even though the mean-field order parameter is zero in this regime, there is a singularity in the density of states due to fluctuations in the order parameter. A. F. Andreev reported on his published work in which broken gauge symmetry is taken as the central defining characteristic of superconductivity, requiring a modified form of statistical mechanics. Some in the audience (including one of the authors, V.A., in closing remarks) expressed the view that "broken symmetry" is merely a mean-field treatment of the interacting system, and that residual effects such as those discussed by Aleiner (above) also contain important physics. In his closing remarks, V.A. also said that since the conference was in some ways in honor of Andreev, it might be worth noting that a superconducting quasiparticle is built up from repeated virtual Andreev scattering against the mean-field order parameter.
This can be seen by writing the electron propagator as

G(ε) = 1 / [ε − ξ − Δ (1/(ε + ξ)) Δ].

The last term in the last denominator (the electron self-energy) shows a normal electron being converted into a normal hole via the Andreev process, which here cannot be energy conserving because there is no analog of the voltage across an interface.

V. LOOKING TO THE FUTURE

Also presented at the conference were topics that look more to the future, in that mysteries remain which suggest avenues for future work.

Andreev processes involving localized states: Z. Ovadyahu described data from superconductor-insulator-normal metal tunnel junctions in which the barrier material was the Anderson insulator indium oxide, which contains a high density of localized electronic states. Conductance measurements showed unusual zero-bias signals, and also features at voltages well above the superconducting gap. Igor Yanson noted that the above-gap features are similar to signals seen in metallic SN point contacts; however, it seems to us that the details of the zero-bias signals are at least suggestive that something more interesting than pinholes may be at work. Andreev reflection involving localized states could conceivably serve as an excellent model system for exploring the interplay of Coulomb charging effects, Kondo physics, and superconducting pairing. Similar themes were touched upon by A. Golub in his talk, and R. Fazio and R. Raimondi in a poster.

Nonequilibrium effects: The study of nonequilibrium processes such as charge imbalance and phase slip centers has a proud history that was reviewed in a talk by M. Tinkham. However, new non-equilibrium experiments in the mesoscopic regime continue to show unanticipated behavior, particularly when the samples are exposed to AC signals, as described by V. Chandrasekhar.

d-wave superconductors: Thus far all the proximity-effect devices that we have described utilized conventional s-wave superconductors. Yu. S. Barash and Y.
Tanaka provided a theoretical discussion of a variety of ways in which tunnel junctions made using high-T_c d-wave superconductors would produce qualitatively different results. These include an anomalous temperature dependence for the Josephson current, the existence of quasiparticle states bound to the tunnel barrier, and surface pair breaking. Actually fabricating well-controlled high-T_c tunnel junctions will be a daunting task because of their difficult chemistry, but O. Fischer showed that low-temperature STM studies of the superconducting cuprates are already providing interesting results. He demonstrated striking differences in tunneling spectra for the electronic states in the cores of magnetic vortices in high-T_c materials, as compared to s-wave NbSe_2.

Electron-electron interactions: The subject of electron-electron interactions in metals was not the focus of any scheduled presentations, but a recent paper by Mohanty, Jariwala, and Webb [7] was the object of much informal debate. This work proposes that zero-point fluctuations of the electromagnetic environment can produce electron dephasing in mesoscopic devices. Michel Devoret also mentioned puzzling results out of Saclay, where direct measurements of electron energy relaxation processes have suggested a scaling form that is not compatible with a present understanding of interaction processes. Looking perhaps even farther into the future, F. Hekking reported calculations of the properties of superconductors coupled to the interacting electrons in one-dimensional Luttinger liquids, and H. Mooij speculated as to the use of Josephson-junction devices for quantum computations. There is no way to know at this point the prospects for whether the quantum coherence of Josephson junctions can be controlled sufficiently to allow for real quantum computations, but we expect that the macroscopic-quantum physics to be learned in this effort will be fascinating.
An important intermediate goal on the way to computation will be to attain sufficiently long coherence times to produce a quantum clock using Josephson junctions, something long sought but without success to date.

VI. PERSONAL REFLECTIONS

Perhaps one way to summarize the present status of the theory of the superconducting proximity effect might be to paraphrase a remark made by Gerd Schön concerning the quasiclassical theory, "You give the theory to students, they solve some differential equations, and before long they come back with results!" Good agreement between recent experiments and theory gives considerable confidence that a reasonably comprehensive understanding of (s-wave) superconducting/normal metal interfaces is close at hand. Before embarking upon triumphalism, however, we note that while the Green's function theory needed to explain most of the recent generation of proximity-effect experiments was complete long before the experiments were begun, there was still a delay of some years after the first experiments before their explanation was generally appreciated. Part of the difficulty undoubtedly lay in uncertainty about experimental parameters (especially the quality of the interfaces between the superconductors and normal metals), but we also believe that there continue to be important issues of accessibility in the theory. We suggest that one of our goals, as a field of study, should be the further development of reliable tools for working intuition, so that those who live happy lives without benefit of Green's functions may have good pictures with which to begin to understand superconducting/normal metal devices, and a clear prescription for how to proceed in reliable modeling. We need popularizers, not prophets. Important steps along these lines were reported at the conference.
Yuli Nazarov's "circuit theory" formulation for the quasiclassical Green's functions is a valuable contribution, though it still cannot be said to be optimally "user friendly". We find the scattering-matrix theories of Andreev reflection in SN devices to be very important as a more intuitive approach than the Green's function theory in many situations. Nathan Argaman's poster was particularly interesting in this regard, as it showed explicitly that the main formulas describing the proximity effect in the quasiclassical Green's function theory can in fact be derived from a simple picture involving nothing more than multiple Andreev reflection. In addition, we appreciated the strategy that Bart van Wees took in his talk to help build intuition. In the spirit of Nazarov's circuit theory, he considered the nature of Andreev scattering in the important cases of a tunnel barrier, a disordered wire, and a ballistic constriction, and then he considered what happens when these elements are combined.
Investigation of SHOX Gene Mutations in Turkish Patients with Idiopathic Short Stature

Objective: The frequency of mutations in the short stature homeobox (SHOX) gene in patients with idiopathic short stature (ISS) ranges widely, depending mostly on the mutation detection technique and inclusion criteria. We present phenotypic and genotypic data on 38 Turkish patients with ISS and the distinctive features of 1 patient with a SHOX deletion. Methods: Microsatellite markers (MSMs) DXYS10092 (GA repeats) and DXYS10093 (CT repeats) were used to select patients for fluorescent in situ hybridisation (FISH) analysis and to screen for deletions in the SHOX gene. The FISH analysis was applied to patients homozygous for at least one MSM. A Sanger sequencing analysis was performed on patients with no deletions according to FISH to investigate point mutations in the SHOX gene. Results: One patient (2.6%) had a SHOX mutation. Conclusion: Although the number of cases was limited and the mutation analysis techniques we used cannot detect all mutations, our findings emphasize the importance of the difference in arm span and height when selecting patients for SHOX gene testing.

Introduction

Idiopathic short stature (ISS) is defined as a condition in which a person's height is more than two standard deviations (SDs) below the average height for a specific age, gender, and population, with no other systemic, endocrine, nutritional, or chromosomal abnormalities and no history of intrauterine growth retardation or low weight for gestational age (1,2). ISS is thus a diagnosis of exclusion rather than one based on positive specific signs. Height has a high degree of heritability and is a polygenic quantitative trait that shows both complex and monogenic Mendelian inheritance patterns (3). One study reported that hundreds of variants clustered in specific genomic loci play roles in the human height trait (4).
A clearly relevant gene that strongly affects height is the short stature homeobox (SHOX) gene, mapped to pseudoautosomal region 1 (PAR1) of the X and Y chromosomes. The SHOX gene has been reported to cause ISS and the short stature seen in patients with Turner's syndrome, Leri-Weill dyschondrosteosis, and Langer mesomelic dysplasia (5,6,7,8). A high recombination rate in PAR1 is associated with mandatory crossover between the X and Y chromosomes during meiosis (9,10,11). All 24 genes in the PAR1 region escape X inactivation (12). As a result, all genes located in the PAR1 region have two functional copies in humans and show a pseudoautosomal inheritance pattern (10,13). The only gene in the PAR1 region clearly associated with a disease is SHOX (14). The frequency of mutations in the SHOX gene in patients with ISS varies widely, depending mainly on the mutation detection technique and inclusion criteria. In one study, approximately 2.4% of a large cohort of patients with ISS had SHOX mutations, of which 80% were complete gene deletions (15). Stuppia et al (16) reported a 12.5% frequency of SHOX mutations in 56 patients with ISS. In this study, we evaluated the frequency of mutations in the SHOX gene in patients with ISS and discussed the distinctive clinical and radiological features of patients with such mutations.

Methods

The study was approved by the Ethics Committee of the Ankara University Faculty of Medicine. Written informed consent was obtained from all patients and their legal guardians. In all, 38 patients (34 females and 4 males; mean age, 11.84 years; range, 6.5-17 years) were included in the study.
We used the following criteria based on the definition of ISS: height <-2 SD of the mean height for a given age, sex, and population group; normal karyotype (for girls); no evidence of chronic disease (e.g., chronic renal failure, chronic anaemia, celiac disease, malabsorption, malnutrition, chronic hepatic disease, chronic infectious disease, or congestive heart failure); no growth hormone (GH) deficiency and/or GH resistance based on the routine provocation test (peak GH>10 ng/mL) and normal insulin-like growth factor-1 level; no history of low birth weight; and no apparent skeletal disease. The clinical assessment included measurements of height, weight, and sitting height, as well as the lengths of the upper segment (US), lower segment (LS), forearm, upper arm, hands, and feet. Furthermore, the degree of short stature, US/LS ratio, difference between arm span and height, body proportions, extremities/trunk ratio (ETR; sum of leg length and arm span divided by sitting height), relative body mass index (RBMI), and the presence of additional features (e.g., appearance of muscular hypertrophy, cubitus valgus, forearm bowing) were evaluated.

Mutation Analysis

Genomic DNA was extracted from 1 mL peripheral blood using the Magna Pure LC instrument (Roche Applied Science, Mannheim, Germany). We used an approach similar to the study of Chen et al (17), in which microsatellite markers (MSMs) were used to select patients for multiplex ligation-dependent probe amplification (MLPA) analysis to screen for deletions in the SHOX gene. We used DXYS10092 (GA repeats) and DXYS10093 (CT repeats) to select patients for fluorescent in situ hybridisation (FISH) analysis to screen for SHOX gene deletions (Figure 1). Benito-Sanz et al (18) reported heterozygosity values of 0.96 and 0.69 for DXYS10092 and DXYS10093, respectively, and the repeat ranges were 18 and 14, respectively.
Both MSMs were amplified by polymerase chain reaction and analysed on 8% polyacrylamide gels (see Supplementary Material). The FISH analysis was applied to patients homozygous for at least one MSM using lymphocyte metaphase spreads and the Aquarius SHOX probe (cat no: LPU 025; Cytocell, Cambridge, UK).

Results

In all, 36 index cases and an additional two children (patient 2 was a monozygotic twin brother of patient 1, and patient 34 was a sister of patient 33) were evaluated. All patient heights were <-2 SD (Figure 2). Mean height SD was -2.76±0.46. Height measurements and additional anthropometric data are shown in Figure 2 and Table 1. One patient (2.6%, patient 12) had a SHOX deletion detected by FISH analysis (Figure 3). Patient 12 was an 11.5-year-old girl. She had a sister and two brothers with normal height, and her parents were first cousins. Her mother's height was 153 cm and the father's height was 178 cm. The mother's SHOX FISH analysis was normal. Patient 12's main clinical findings were short stature (height, 137 cm; -2.02 SD), disproportionate body measurements (arm span-height difference: -7, <-2 SD), obesity (RBMI, 126.1%), short forearms, cubitus valgus, muscular hypertrophy, genu valgus, micrognathia, high palate, and bilateral epicanthus. Hand and forearm radiography of the patient showed minimal bowing and mild wedging of the radius (Figure 4).

Discussion

GH treatment is quite effective for patients with ISS and a mutation in the SHOX gene (19). Thus, it is important to demonstrate genetic aetiology in these cases. The frequency of mutations in the SHOX gene in patients with ISS is 2-15% (15,16,20,21,22,23). According to our results, this frequency was 2.6% in children with ISS. Rappold et al (15) screened intragenic mutations using single-strand conformation polymorphism analysis in 900 patients, followed by sequencing in 750 patients, and detected 3 patients (0.4%) with functional mutations.
They also analysed complete gene deletions using FISH in 150 patients and detected 3 patients (2%) with deletions. Another study on 56 patients with ISS reported a 12.5% (n=7) frequency of SHOX mutations (16). Jorge et al (21) reported a rate of 3.2% (2/63 patients with ISS). A large study that included 1534 patients with ISS reported a rate of 2.2% (n=34) (22). This wide range is mainly due to the mutation detection technique and the case inclusion criteria. Our results are compatible with the findings in these studies. The clinical expression of SHOX deficiency is highly variable, as short stature is frequently nonspecific in preschool children. SHOX deficiency is more severe in females than males. Young children with SHOX deficiency may not have any specific clinical findings, but the phenotype usually becomes more pronounced with age, and characteristic signs appear over time (21,24,25). The most prominent features besides short stature are a Madelung deformity, short fourth and fifth metacarpals, high arched palate, increased carrying angle of the elbow, scoliosis, and micrognathia. Rappold et al (22) investigated the presence of SHOX defects in a large cohort of 1608 children with short stature. The mean SD in height was not different between the participants with short stature with or without identified defects in the SHOX gene in that study.

(Figure 2 caption, Delil K et al.: Height, upper segment/lower segment ratio, arm span-height difference, and extremities-trunk ratio, together with standard deviation scores, for all patients. Males are shown as squares, females as circles; patients are ordered by patient number from left to right. Grey: P6; black: P9; red: P12; green: P13; yellow: P14; brown: P21; purple: P25; pink: P32. US: upper segment; LS: lower segment; ETR: extremities-trunk ratio.)
The authors created an evidence-based scoring system based on the clinical features of 68 patients with SHOX defects to identify the most appropriate children for testing. They concluded that some clinical findings were useful as clues to distinguish patients with a SHOX mutation among patients with short stature, and that the presence of any combination of reduced arm span/height ratio, increased sitting height/height ratio, above average body mass index (BMI), a Madelung deformity, cubitus valgus, short or bowed forearms, dislocation of the ulna at the elbow, or muscular hypertrophy should prompt the clinician to conduct a molecular analysis for the SHOX gene. An increased sitting height/height ratio, above average BMI, cubitus valgus, short forearms, and muscular hypertrophy were noted in our case with a SHOX gene deletion. Binder et al (24) used ETR to select patients more likely to have a SHOX mutation. They suggested that screening for SHOX mutations should be limited to patients whose ETR is <1.95 + ½ height (m), combined with close inspection of a hand radiograph to detect the main characteristics of SHOX deficiency (pyramidalisation of the carpal row, radiolucency of the distal radius at the ulnar border, and triangularisation of the distal radius) in school-age children. Jorge et al (21) confirmed the usefulness of this approach and recommended using the sitting height/height ratio because it is easier to use than ETR. Our results suggest that the ETR and the difference in arm span and height are useful parameters. The US/LS ratio was not reliable alone, as this parameter was normal in our patients (Figure 2). A radiographic examination of a patient with a SHOX gene mutation may demonstrate abnormal carpal wedging, triangularisation of the distal radial epiphysis, radial lucency, shortening of the fourth and fifth metacarpals, and radial bowing (26).
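The ETR definition and the Binder et al cut-off described above reduce to simple arithmetic. A minimal Python sketch makes them concrete; the function names and example measurements are ours (not from the study), and this is an illustration of the published formula, not a clinical decision tool:

```python
def extremities_trunk_ratio(leg_length_cm, arm_span_cm, sitting_height_cm):
    """ETR: sum of leg length and arm span, divided by sitting height."""
    return (leg_length_cm + arm_span_cm) / sitting_height_cm

def binder_screen(etr, height_m):
    """Binder et al.'s proposed criterion: consider SHOX testing when
    ETR < 1.95 + 0.5 * height (in metres). Illustrative sketch only."""
    return etr < 1.95 + 0.5 * height_m
```

With illustrative measurements (leg length 75 cm, arm span 130 cm, sitting height 80 cm), ETR = 205/80 ≈ 2.56; for a height of 1.37 m the cut-off is 1.95 + 0.685 = 2.635, so this rule would flag the child for testing.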
We did not detect any striking findings on a radiograph of the left hand in our patient; she had only minimal bowing of the radius and mild wedging. It is not possible to analyse every child with ISS for a SHOX gene mutation because of its low incidence. Phenotypic variation in short children can affect the decision to perform a genetic analysis. Beyond the typical dysmorphic signs, a positive family history, careful anthropometric measurements, and an x-ray evaluation of the hand and wrist can be used to support this decision. Although we had a limited number of cases and the mutation analysis techniques used could not detect all mutations, our findings emphasize the importance of the difference between arm span and height when selecting patients for SHOX gene testing. Nevertheless, more extensive studies with larger groups of patients and a wider range of mutation screening techniques are needed. Deletions are the most frequently detected SHOX gene mutations (15). In our study, we first performed MSM analysis and then a FISH analysis to screen for SHOX gene deletions. Funari et al (27) suggested that MLPA should be the first molecular method used to screen for SHOX gene deletions. We also suggest using MLPA first because SHOX deletions are highly heterogeneous, so numerous MSM loci may need to be studied, and MLPA can detect smaller deletions than FISH. In summary, our patient with a SHOX mutation had no obvious findings associated with such a gene deletion. She had a disproportionate body, which could easily go unnoticed, but she had no obvious Madelung deformity. In conclusion, we detected a SHOX gene deletion in 1 of 38 children with ISS. Short children should be carefully investigated with respect to these mutations, even if they have only mildly disproportionate stature.

Ethics

Ethics Committee Approval: The study was approved by the Ethics Committee of the Ankara University Faculty of Medicine, Ankara University (04/10/2010).
Informed Consent: Obtained from all patients and their legal guardians.
Sorting It Out: Identifying and Addressing Conflicts and Business Ethics in Global Value Networks

Global value networks are often large, complex, and opaque. Understanding the relationships among stakeholders involved in these networks or organizations can be challenging. This card sort task provides an interactive way to engage participants in questioning the roles of stakeholders who are involved in a business ethics dilemma or an organizational product failure. This card sort task and discussion activity encourages participants to recognize that stakeholders may hold different knowledge, responsibility, or power; identify competing, conflicting, or complementary interests across stakeholders; articulate logical arguments; and engage in debate, compromise, and critical evaluation. This technique has been used successfully with undergraduate and postgraduate business, management, and social science students and is suitable for in-person and remote classes.

Global value networks are exposed to diverging stakeholder expectations and pressures and to multiple legal, regulatory, and soft law requirements. Yet when something goes wrong, the largest company within the network or the supposedly most powerful organization is typically held to account by media, the public, nongovernmental organizations (NGOs), or government. For example, most consumers may expect the fashion brand they buy from to be responsible for working conditions across the whole supply network, for example, to ensure that no child labor is present in the production of their jeans (Enderwick, 2018). How realistic is this expectation when there are many tiers of suppliers involved across different countries, from cotton picking to stitching the pairs of jeans together? Rather than relying on heuristics of organizational size, we designed this exercise to challenge preconceptions (e.g., that the largest organization always has more power within a value chain).
The exercise involves students engaging with a case study in which multiple stakeholders within a global value chain (GVC) are involved in a complex ethical situation. Students work in small groups on a card sort task to identify and understand which actors within the case study's GVC (and potentially beyond) could be considered accountable, knowledgeable, or able to effect change. Students must discuss and debate to reach a decision within their group and then articulate their position as part of a class discussion. We have designed the exercise to be flexible, and it can be applied to a wide range of case studies and problems. The exercise is suitable for Business Ethics, Corporate Social Responsibility, International Business, and Supply Chain undergraduate and postgraduate courses. With a single instructor, the exercise can be run with eight to 40 students. The exercise can be run in a single 60-minute session, extended to 90 minutes, or divided across three sessions. While the activity has been delivered with an online class, the available virtual learning platform may reduce the interactive nature of the card sort task, and therefore in-person delivery is recommended. We originally developed this exercise as part of an international multidisciplinary research project. Our research explored how modern slavery was identified and tackled within GVCs, and we used this exercise with an international fashion GVC spanning retailers in the United Kingdom through to small-scale cotton spinners in India, including policy makers and NGOs from both countries. We have used the activity successfully in the United Kingdom with Japanese pupils, undergraduate students from management and social science backgrounds, international business masters students, and at international management conferences with academic colleagues.
In the appendices, we provide detailed instructions on how to set up, run, and debrief the activity (A-C); examples of activities relating to the fashion industry, the Boeing 737 Max, and a polluting factory in India (D-F); a general debrief and ranking sheet (G); as well as instructions on how to create your own card sorting activity (H) and how to use the activity remotely (I) (see Table 1).

Stakeholder Theory

The card sorting task is based on an understanding of the role of, and relationships between, stakeholders in global value networks. The dimensions of knowledge, responsibility, and power have been identified as key distinctions between stakeholders within GVCs (cf. Enderwick, 2018; Gereffi et al., 2005). This brief overview of stakeholder theory will familiarize participants and facilitators with its essential tenets. Stakeholder theory was proposed by Freeman (1984) as a broader and more pluralistic approach to managing organizations. He emphasized the potential value organizations can create when they know and work collaboratively with their stakeholders. The stakeholder perspective argues that stakeholders include employees, investors, shareholders, customers, business partners, and societal stakeholders that represent the natural environment, local communities, government agencies, media, and academia (Birte et al., 2020; Bocken et al., 2013). These stakeholders can be situated close to a company's headquarters or be globally dispersed along the global value network of the company, spanning the production, consumption, and recycling/disposal of the goods and services the organization provides. Consequently, a firm's stakeholders may reside in countries where the company does not operate, produce, or sell but where its waste washes up or the impact of its services is felt.
The theory thus argues that organizations should not solely focus on shareholders and shareholder maximization but consider the interests of all the parties that are directly and indirectly affected by the organization (Freeman et al., 2004). Donaldson and Preston (1995) categorize stakeholder theory into descriptive, instrumental, and normative aspects. The descriptive aspect focuses on how stakeholders are managed in practice. The instrumental aspect considers only the "primary" stakeholder groups, those with a direct economic connection to the firm such as employees and investors. The normative aspect is rooted in the moral intuition that believes a firm's responsibilities to its various stakeholders should go significantly beyond what is accepted by contemporary shareholder/stockholder approaches. Donaldson and Preston (1995) claim that the normative aspect of stakeholder theory is its core and that the other aspects of the theory play a subordinate role. An organization can, and should, maintain support from its stakeholders by considering and balancing their relevant interests (Reynolds et al., 2006). Organizations need to understand that some stakeholders, at a certain point in time and space, are more influential and relevant than others and that the amount of influence may change through interactions with stakeholders or through externalities and wider institutional support (Friedman & Miles, 2002). Identification of relevant stakeholders is therefore a key issue. Because organizations encounter a multitude of stakeholders with varying and at times conflicting interests, understanding which stakeholders to prioritize at any given time is a challenging task that can be assisted by the identification of salient stakeholders (Mitchell et al., 1997). A variety of approaches exist to help identify key stakeholders and inform judgments regarding prioritization of interests. For example, Mitchell et al. 
(1997) provide a framework of urgency, legitimacy, and power to support the identification of salient stakeholders. Alternatively, Reynolds et al. (2006) present two methods (the within-decision approach and the across-decision approach) to identify stakeholder interests and impacts. The within-decision approach treats every decision as a singular, independent unit, whereas the across-decision approach balances stakeholder interests across the system (a series of decisions over time) rather than on a decision-by-decision basis (for further suggested readings, see Appendix J).

Learning Objectives

After participating in this exercise, students will be able to: recognize that stakeholders may hold different knowledge, responsibility, or power; identify competing, conflicting, or complementary interests across stakeholders; articulate logical arguments; and engage in debate, compromise, and critical evaluation.

Exercise Overview

The exercise utilizes a card sorting methodology, which has been used extensively in psychological research to study managers' decision making, belief structures, and mental models (e.g., Barnett, 2008; Budhwar, 2000; Hodgkinson et al., 2004; Lantz et al., 2019). Card sorting tasks require participants to sort cards, each labeled with one item such as tasks, objects, stakeholders, scenarios, or outcomes. Participants are instructed to sort the cards in a specific manner, for example, to indicate the different levels of responsibility of stakeholders in a supply chain (see Appendix A). The approach forces participants to make discrete choices between options and construct hierarchies, and can be used to prompt reflection on unconscious beliefs or knowledge that drive decision making and on the assumptions held about particular problems or topics. The exercise involves participants reading, ahead of the exercise, a case study relating to a business problem or failure that involves a complex GVC or a diverse set of stakeholders. Participants work in small groups to identify key stakeholders in the case who may have knowledge, responsibility, and power regarding the situation.
Participants are required to debate the role of the different stakeholders within their groups and to come to a shared agreement regarding the rank order of the stakeholders (from most to least knowledgeable, responsible, and powerful). Participants record their rankings and share and justify their decisions as part of a class discussion. The rankings help participants to recognize where they have made contradictory evaluations. The exercise requires participants to articulate logical arguments and engage in debate, compromise, and critical evaluation.

Debriefing

The debriefing provides an opportunity for participants to deepen their understanding of the competing interests and tensions within supply chains or among stakeholders by drawing out tensions between knowledge, responsibility, and power; to highlight the individual differences that arise regarding these attributions; and to identify competing, conflicting, or complementary interests across stakeholders. It also provides an opportunity to follow up on the industry- or topic-specific nature of the case study, for example, to provide additional context or to link back to other course material. The debriefing can follow a three-step approach: ask participants about any additional stakeholders they added using the blank cards and discuss why they did so; ask how they attributed knowledge, responsibility, and power to stakeholders and how they justify the allocation; and, finally, recognize and highlight the diversity of perspectives on ethics and responsibility (see Appendix C).

Single or Multiple Sessions

The activity can be executed in one session or over three sessions. If the activity is conducted within a single session, then at least 60 minutes are suggested for briefing, the activity, discussion, and debriefing. Ask students to familiarize themselves with the case material in advance.
When the activity is spread over three sessions, focus each session on one of the three dimensions (knowledge, responsibility, or power). Spreading the activity over multiple sessions allows focus on one dimension and thus usually results in deeper discussion.

Practical Actions

We have extended the activity in some classes to include an additional small group discussion task after the third card sort, in which we ask participants to consider what practical steps could be taken to address the focal problem. For example, what practical actions could be taken, and who should take these, to reduce modern slavery within t-shirt production?

Student Led

Set teams, or individual students, the task of researching a GVC and identifying five to 10 stakeholders. Students could then lead their peers through the card sort activity-providing a summary of their GVC, creating a card deck, leading the card sort, and then asking debrief questions supported by the instructor (see Appendix H).

Online Teaching

The activity is suitable for in-person and remote classes (see Appendix I).

Conclusion

This card sorting exercise sensitizes students to such situations by forcing them to take a step back to question their own assumptions and those of other students, discover the assumptions and objectives of the stakeholders discussed, and identify relationships and interdependencies between the categories. The tasks involved in this exercise are thus a great stimulus for sensemaking of the business environment firms operate in and allow for a re-assessment of the framing students have used to understand business operations, GVCs, and the role of stakeholders.

Instructions for Running the Exercise

Prework. Provide participants with the case study in advance of the activity so that they are familiar with the problem and who is involved (see Appendix B).
If the focus is on a specific issue such as modern slavery, bribery, environmental standards, or product innovation, provide preparatory reading that provides background (e.g., for modern slavery, Caruana et al., 2021; Crane, 2013; Voss et al., 2019).

Room Set-Up. Organize participants into small groups of five to six people. We find it aids discussion and debate to mix participants up, for example, in terms of industry experience, discipline background, and culture. If possible, have groups spread out so that they are able to have independent discussion and have space to lay out their cards and ranking sheet.

Exercise Instructions for a 60-Minute Session

1. Introduction (5 minutes). Introduce the exercise by providing an overview of the problem being addressed (e.g., modern slavery; see Appendices D-F) and a synopsis of the case study.
2. Distribute materials (2 minutes). Hand out a set of cards, a ranking sheet, and pens.
3. Card sort instructions (5 minutes). Explain to participants that they will be asked to do three card sorting tasks. They will sort their deck of cards to decide which stakeholders had most to least (a) knowledge, (b) responsibility, and (c) power regarding the problem. If participants believe that there are other stakeholders who should be included (e.g., an NGO that has knowledge), they can write them in on one of the blank cards. The small groups must discuss the case, reach a single decision regarding their ranking for each card sort, and record it using their ranking sheet.
4. First card sort: Knowledge (10 minutes). Instruct participants to discuss the stakeholders on their cards and discard those that have no knowledge of the problem. Participants should then rank the stakeholders in descending order from who knows the most to least. The agreed ranking order should be entered in the knowledge column on the ranking sheet (Appendix G).
5. Second card sort: Responsibility (10 minutes).
Ask participants to reassemble all of their cards into one deck. They should now discuss the stakeholders on their cards and discard those that have no responsibility for the problem. Participants should then rank the stakeholders in descending order from those with the most responsibility to least. The agreed ranking order should be entered in the responsibility column on the ranking sheet.
6. Third card sort: Power (10 minutes). Ask participants to reassemble all of their cards into one deck. They should now discuss the stakeholders on their cards and discard those that have no power to effect change. Participants should then rank the stakeholders in descending order from those with the most power to least. The agreed ranking order should be entered in the power column on the ranking sheet.
7. Intermittently check whether all groups have completed rankings (0 minutes).
8. Card sort results (8 minutes). Ask the groups to explain and justify why they have decided on their rankings. Capture the ranking results and annotate them with key points of explanation on a whiteboard or flip chart as groups report back; a student could act as a scribe.

Option 1. Focus on one group at a time; each group provides their rankings and explains why they differ.
Option 2. Focus on one dimension at a time and ask each group to share their ranking for it. Then call on individual groups to explain their differing positions.

Prompt justification of decisions by asking:
• "Why do you feel that X has most knowledge/responsibility/power?"
• "Why have you not included [choose stakeholder ranked highly by other groups]?"
• "How many people agree with X being most responsible?" Ask for a show of hands and then ask participants who disagreed why they disagreed.
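A completed ranking sheet lends itself to quick comparison across the three dimensions. The short Python sketch below (the stakeholder names and ranks are invented for illustration, not drawn from any of the case studies) shows one way an instructor could represent the sheet and surface the knowledge/responsibility/power asymmetries that groups are asked to discuss:

```python
# Hypothetical ranking sheet: each dimension maps stakeholders to ranks
# (1 = most knowledgeable/responsible/powerful).
rankings = {
    "knowledge":      {"Supplier": 1, "Brand": 2, "Regulator": 3, "Consumer": 4},
    "responsibility": {"Brand": 1, "Supplier": 2, "Regulator": 3, "Consumer": 4},
    "power":          {"Regulator": 1, "Brand": 2, "Consumer": 3, "Supplier": 4},
}

def rank_gaps(rankings):
    """For each stakeholder, return the spread between their best and
    worst rank across the three dimensions; large gaps flag asymmetries
    (e.g., ranked high on responsibility but low on power) worth raising
    in the debrief."""
    stakeholders = set().union(*(dim.keys() for dim in rankings.values()))
    gaps = {}
    for s in stakeholders:
        ranks = [dim[s] for dim in rankings.values() if s in dim]
        gaps[s] = max(ranks) - min(ranks)
    return gaps
```

In this invented example, the supplier is ranked most knowledgeable but least powerful (a gap of 3), exactly the kind of contradiction the debrief questions are designed to probe.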
Pre-Task Arrangements and Logistics Prepare the following materials: 1. A set of cards for each group, listing the names of the stakeholders from the case study or problem description; only one stakeholder is to be listed per card (see Appendices C-E for exemplars). 2. At least five blank cards per group so that participants can write in the names of any other stakeholders they consider relevant. 3. A ranking sheet for each group. This ranking sheet is a table with three columns, labeled knowledge, responsibility, and power, respectively (Appendix G). 4. A flip chart, whiteboard, text file, or slide deck for the instructor to capture overall rankings and discussion points. Debriefing We suggest starting the debrief by asking whether students included additional stakeholders before discussing how and why they allocated knowledge, responsibility, and power. Throughout the debrief, it is important to acknowledge and welcome diversity of perspectives on ethics and responsibility. Additional Stakeholders. Begin the debriefing by identifying additional stakeholders. Ask participants, "Was there anyone missing from the supply chain?" or "Did you need to create any cards?" Then ask, "Who is the stakeholder? Why do you feel they were important to add to the case?" Often participants with greater industry or subject knowledge will add stakeholders. Asking participants to describe the new stakeholders may provide additional insights for the rest of the group. If additional stakeholders have been introduced, use this as an example of the difficulties in bounding any supply network and distinguishing who may be a salient stakeholder. Following the discussion, ask the participants to reflect on whether there are other stakeholders they now feel should be included; this could be a post-exercise reflection task. Knowledge, Responsibility, and Power. Refer to the most frequently identified stakeholders across the groups' rankings and remind participants of the key differences observed during the discussion.
Use the rankings recorded on a flip chart/whiteboard as a visual aid. We find that participants commonly agree on rankings that (a) differ between groups and (b) often do not assign the same actor knowledge and responsibility, knowledge and power, or responsibility and power. These points are important for the class discussion and the application of theory. Ask students, "Are you surprised that a stakeholder can have so much responsibility but so little power to change/control things?," "How could you hold [stakeholder] responsible if they have so little knowledge of what is happening?" and "Do you believe that the largest organization will have greater influence over its smaller suppliers?" These questions often lead to discussions around the asymmetries between knowledge, responsibility, and power. If participants have not mentioned the role of governments or regulators, prompt them by asking, "What responsibility do governments or regulators have in this case, and how do you think that they could take action?" Follow up by asking, "Is government action an appropriate substitute for corporate action?" Participants often produce contradictory rankings. They often disagree over whether a stakeholder (e.g., an international brand in a supply chain or a CEO) can be held responsible for what happens upstream in a supply chain (or among front-line employees in a large organization) if their suppliers or managers hide poor practice. Use this tension to discuss issues relating to due diligence, risk management, and ethical obligations. Diversity of Perspectives on Ethics and Responsibility. Finally, ask participants, "Was it difficult to come to a group decision?" Use their answers to explain that there is a diversity of views on ethics and responsibility and that this diversity makes judgments over responsibility, blame, and action complex. Be sensitive to where groups have experienced difficulty in reaching a collective decision.
Do not draw public attention to them if it may cause embarrassment. Also be careful when monitoring small group discussion and the debriefing so that participants feel that their views are respected and that individual discussions are not closed down by dominant voices. To facilitate open discussion, it is useful to engage with the small groups during the card sorts and identify participants who have interesting viewpoints that differ from their peers'. Notify them during the activity that you would like to call upon them during the debriefing so that they can prepare if they are comfortable sharing their thoughts, or can let you know if they are not. Photograph the annotated table (written on the flip chart/whiteboard at step 8) and each group's table and make them available through a virtual learning environment. This allows participants to obtain a record of all rankings as discussed in class and their justifications. Exemplar Activity-Fashion Industry The following case and related card sorting activity discuss modern slavery in the fashion GVC. They highlight the complexities of modern value chains and attempts by governments to reduce exploitation by holding brands responsible. The case and the cards are also available as supplemental materials (available online). Fashion Industry. The modern fashion industry is a global and complex assembly of large and small, national and international organizations that combine to account for almost US$2 trillion of trade every year (Lehman et al., 2019). The basic premise of the industry is the conversion of raw materials into finished garments through a series of relatively simple manufacturing stages within the supply chain. In general, the majority of fashion manufacturing occurs in developing nations, with countries such as Bangladesh, Cambodia, or India heavily reliant on the international trade generated by their fashion supply chains.
The management of these supply chains is complex due to the global nature of the industry and the huge diversity of different supply chain structures (Frostenson & Prenkert, 2015). However, in spite of this diversity, the supply chain can be broadly described in the following terms. Fashion brand: A business contracting a Tier 1 business to manufacture fashion items for them and selling them to consumers. Tier 1 Garment Makers: Garment panels are cut from the fabric and sewn together with trims (buttons, zips, etc.) to create the finished garment. Tier 2 Dyehouses: Color and functional chemistry are applied to the fabric. Tier 3 Fabric Mills: Processes such as knitting or weaving are used to create fabrics from yarns. Tier 4 Spinning Mills: A process of producing yarns by twisting assemblies of fibers together. Tier 5 Fiber Cultivation or Production: This includes agricultural processes for growing cotton, timber (for viscose rayon), wool or cashmere, and the production of man-made materials such as polyester, nylon, and elastane. The tiering for any specific supply chain can vary depending on a wide range of factors. For example, the wool supply chain may consist of up to 20 different processes, with each process completed by independent business organizations. Highly vertical chains may have all the tiers within one organization, while, in other chains, even the ownership of materials and production outputs may be obscured by the use of sub-contracting between processes and subprocesses. To add further complication, the industry relies on the use of trading agents between different tiers; this is particularly common between Tier 1 (garment making) and the fashion brand. The contractual structure between tiers is an important characteristic of the fashion industry. Contracts tend to exist between, but not beyond, adjacent tiers.
Therefore, a brand will hold a contract for purchasing finished goods from Tier 1, or their trading agent, but they will not have contracts with any other tiers in the chain. Associated with this contractual structure, there is very limited transparency across the supply chain, leading to a situation where a brand will not know who their Tier 2, 3, 4, or 5 suppliers are. And equally, Tier 2, 3, 4, and 5 suppliers may have very limited visibility of the final customer for their production (Environmental Justice Foundation, n.d.; Wilhelm, Blome, Wieck, & Xiao, 2016). The contractual structure and lack of transparency have major implications for the level of knowledge, control, and power that a brand can wield over its supply chain. Although there is no one definition of modern slavery (Kara, 2017), it has been generally accepted that modern slavery refers to and includes issues such as forced labor, bonded labor, debt bondage, human trafficking, forced and early marriage, poor pay, and child labor (Anti-Slavery International, n.d.). Management of modern slavery by fashion brands is a complex problem, which can be demonstrated by considering an example of a U.K. fashion brand attempting to comply with the U.K.'s Modern Slavery Act 2015 for its Indian supply chains. First, the extended global supply chain and the lack of transparency inhibit the ability of brands to identify their supply chain and, therefore, their ability to detect and respond to modern slavery issues. Second, different aspects of modern slavery will be prevalent within different tiers of an Indian supply chain due to the nature of the work and the regional and social norms of the location of that tier. For example, forced and child labor can be common in some cotton growing areas; bonded labor can be found in spinning mills, while issues of worker safety exist in the dyeing tier. Third, the U.K.
government's definition of modern slavery differs from that of the Indian government, with different policies in place at a state level as well (U.K. Government, n.d.). Furthermore, the colonial history between these two nations adds additional cultural tension regarding the definition and application of modern slavery legislation. Unions and civil society also play an important role in defining modern slavery and identifying breaches of policies and legislation. This example explores the challenges and difficulties of creating a strategy to eradicate modern slavery from the typical supply chains that Western fashion brands are reliant on. Figure D1 displays a set of cards identifying the actors for the card sorting activity and includes blank cards for additional actors. Debrief. The case and card sort activity should encourage the classroom to discuss the issues of defining global standards for complex issues such as modern slavery and the implications of culture on those definitions. Students should explore and question the allocation of responsibility for eradicating modern slavery at each tier, noting the potential power imbalance between each tier and the lack of transparency across the chain. They should also question the ability of a brand to influence and lever changes to the supply chain in a situation of opaqueness and complex supply chains. a. See Appendix C to debrief the card sort activity. b. Additional debrief questions for this case: Who is responsible for ensuring that there is no modern slavery in each tier? Who has responsibility for ensuring that there is no modern slavery across the whole supply chain? What level of influence does a brand have on Tiers 2, 3, 4, and 5? c. Follow-up questions for this case: Which definition of modern slavery takes precedence: the U.K. government definition or that of the Indian government? What are the cultural implications of the United Kingdom applying its laws for modern slavery in a country like India?
Updating the Case. The case and card sorting activity could be further developed as new information about the industry and its regulation emerges. Exemplar Activity-Pond's Thermometer Factory The following case and related card sorting activity discuss long-lasting environmental pollution as well as the responsibilities of an acquirer of a polluting company. The case and the cards are also available as supplemental materials (available online). Pond's Thermometer Factory. Pond's India established a thermometer factory in Kodaikanal in the southern Indian state of Tamil Nadu in 1983. In 1987, Pond's India came into the fold of the Anglo-Dutch company Unilever through its acquisition of Pond's India's American parent company Chesebrough-Pond's. Pond's India, and with it, the thermometer factory, merged with Hindustan Unilever Limited in 1998. The factory imported mercury for its thermometers from the United States and exported finished thermometers to markets in the United States and Europe. By 2001, the factory had 400 workers operating in two shifts of 200 each (Kodai Mercury, n.d.). In early 2001, factory workers of Hindustan Unilever Limited complained of health problems. Nongovernmental organizations, such as Greenpeace, alleged that Hindustan Unilever Limited was not handling mercury, the third most toxic element, properly. Hindustan Unilever Limited was directed by the Tamil Nadu Pollution Control Board in 2001 to shut down the factory after Palani Hills Conservation Council and Greenpeace exposed the company's attempt to sell glass contaminated with mercury to a scrap dealer. Former employees and activists alleged that in 18 years of operation, the factory exposed more than 600 workers to toxic mercury; at least 45 workers have died prematurely, and hundreds are suffering from nervous disorders, dental problems, vision and hearing impairments, skin problems, and memory loss (Rajgopal, 2003; Shah, 2021; Sharma, 2003).
Two years after the shutdown of the factory, around 300 tons of contaminated waste generated by the factory over 18 years, including glass scrap with residual mercury, semifinished and finished thermometers, effluent treatment plant waste, and elemental mercury, were extracted from the site. The waste was packed under the supervision of Tamil Nadu Pollution Control Board (TNPCB) officials and sent to the United States in 2003 (Kodai Mercury, n.d.). In 2006, former employees filed a petition in the Madras High Court seeking economic rehabilitation for the damage they incurred from working at the factory. In the same year, Hindustan Unilever Limited decontaminated the plant, machinery, and materials used in thermometer manufacturing at the site and disposed of them as scrap to industrial recyclers. Ten years after filing the petition and facing class action litigation, Hindustan Unilever and former employees of the Kodaikanal factory signed a settlement in March 2016 (Hindustan Unilever, n.d.; Sureshkumar, 2016; Unilever, n.d.; Unnikrishnan, 2017). Actors for the Card Sorting Activity. Unilever headquarters: Acquired the American company Chesebrough-Pond's and, through this acquisition, its overseas subsidiaries, including Pond's India. It is the ultimate parent company of Pond's India. Unilever India: Immediate owner of Pond's India. Pond's India: Established and ran the thermometer factory in Kodaikanal, Tamil Nadu. The thermometer factory had huge potential to generate earnings through export. The Indian government also attached high importance to enhancing export earnings. The factory produced 163 million thermometers using about 900 kg of mercury annually. TNAAC: Tamil Nadu Alliance Against Mercury (TNAAC) is a group that alleged that Pond's had been disposing of mercury waste without following proper protocols. Mercury can cause severe health hazards, which include cancer and kidney ailments.
It affects not only the workers in the factory but also the people, flora, and fauna in the surroundings. Public Health Department: A noticeable increase in kidney ailments was reported in the area surrounding the thermometer factory, with many of these cases concerning workers of the factory. Mercury vapor is absorbed through the mucous membrane, gets into the blood stream, and goes straight into the brain. Tamil Nadu Pollution Control Board (TNPCB): Received information that Hindustan Unilever disposed of mercury waste without following proper protocols. Figure E1 displays these actors in a set of cards for the card sorting activity and includes blank cards for additional actors. Debrief. The case and card sort activity highlight internal and external actors' (stakeholders') perspectives on the mercury thermometer factory that was set up in Kodaikanal, India in 1983. The case presents how NGOs and other bodies demanded the closure of the factory due to various problems, including the health impact on workers and environmental damage. The case and the card sorting activity allow a classroom assessment and discussion about various stakeholders' claims and help students to understand priorities among those claims. Discussions should help them to be aware of the choices and the sustainability consequences in the business context. a. See Appendix C to debrief the card sort activity. b. Additional debrief questions for this case: Who within the GVC should have known about the soil and water pollution? Who had the responsibility to ensure that the toxic waste was handled properly? Who had the power to enforce operational change earlier? How could the situation have been handled differently? c. Follow-up questions for this case: What happened to the workers who suffered from mercury poisoning? What is salient stakeholder theory, and how can companies benefit from it? What are the sustainability consequences of various choices in the business context? Updating the Case.
The case and the card sorting activity could be continuously updated by integrating more information on the 2016 settlement and the continuing demands from activists. The case has been written up as a case study by van Tulder and van der Zwart (2005). Business reporting and material by activists and the involved companies are also available. Exemplar Activity-Boeing 737 Max The following case and related card sorting activity discuss the development and certification of the Boeing 737 Max, which was grounded worldwide after two fatal crashes in 2018 and 2019. Since then, the development and certification of the plane have come under intense scrutiny. During the certification process of the 737 Max, the FAA outsourced certain evaluation processes to Boeing and relied on the company's own assessment. Former Boeing engineer Adam Dickson claimed that during the certification process, Boeing intentionally labeled new features of the 737 Max as "minor" changes to avoid stricter scrutiny by the FAA. Consequently, the Inspector General of the U.S. Transportation Department concluded that information concerning the novel flight control software, the Maneuvering Characteristics Augmentation System (MCAS), was not fully shared with the FAA. FAA engineers responsible for approving pilot training requirements worked with incomplete information. Information about the technological details of the 737 Max and its safety measures was also not fully disclosed internally or taken notice of (Levin & Johnsson, 2020; Nicas, Gelles, & Glanz, 2019). In late 2019, CEO Muilenburg acknowledged that he had only recently been made aware of and read an email exchange from 2016 between Boeing's chief technical pilot and another technical pilot about the egregious handling of the plane. Mr. Teal, Vice President and chief engineer on the 737 Max program who signed off on the jet's technical configuration, stated, "The technical leaders well below my level would have gone into that level of detail [concerning safety measures]" (Levin, 2020, p. 1). Boeing defended this position by releasing a statement that Given the breadth of their responsibilities, Mr. Leverkuhn and Mr.
Teal [the Vice-Presidents responsible for the 737 Max] were not, and could not have been, involved in every design decision and necessarily relied on engineering specialists to perform the detailed design and certification work associated with individual systems. (Levin, 2020, p. 3) According to former Boeing engineers such as Adam Dickson, Rick Ludtke, and Mark Rabin, the development and certification process of the 737 Max was further compounded by an excessive focus on reducing costs (BBC, 2019). The MCAS was developed by Boeing with support from international suppliers and their sub-contractors, including software engineers from Rockwell Collins and HCL Technologies Ltd, India, and hardware engineers from Rosemount (Johnston & Harris, 2019; MacMillan & Gregg, 2019). Software engineers in India cost about US$10/hour, compared with the US$35 to US$40 Boeing would have to pay for similarly qualified personnel in the United States (Robinson, 2019). The cost focus in the development of the plane has been corroborated by Leverkuhn, who stated that Boeing pushed to minimize pilot training and thereby reduce operational costs for airlines. This was achieved by designing the 737 Max to be (or arguing that it was) as similar as possible to previous versions of the 737. Figure F1 displays the cards for the card sorting activity (including, among others, Rockwell Collins, Rosemount, the European Union Aviation Safety Agency (EASA), airline pilots' unions and associations, lower-level Boeing design engineers, and software engineers) and includes blank cards for additional actors. Figure F1. Cards for the Boeing 737 Max case card sort. Debrief. The case and the card sorting activity allow a classroom assessment and discussion about which actor should take responsibility for product failures. Product failures are common yet seldom as visibly lethal as in this case.
The case highlights internal and external actors that were involved in the certification of the plane, thereby drawing attention to an organization's hierarchical structure, oversight, monitoring, and information sharing as well as its relationship with external bodies. Students should question and interrogate the internal and the external arrangements. Internally, core concerns relate to the chain of command, decision making, and absolution of responsibility. The development of the MCAS is seen here as an internal process because it was developed according to specifications set by Boeing. Externally, core concerns relate to the outsourcing of regulatory functions to the business that is seeking approval for a product. The process contains a conflict of interest and increases information asymmetry between the regulator and the business. a. See Appendix C to debrief the card sort activity. b. Additional debrief questions for this case: Who should have known about the 737 Max problems? Who had responsibility to ensure that the 737 Max was problem free? Who would have had the power to ensure that the 737 Max was problem free? c. Follow-up questions for this case: What has since happened to the CEO and the Vice Presidents? How has the Boeing-FAA relationship evolved since the crashes? What has happened to the 737 Max? Updating the Case. The case and the card sorting activity could be further developed by integrating more businesses from Boeing's global supply chain. Some 900 suppliers are involved in the development and production of the 737 Max. The case and card sort discussion could also reflect on the decision of global aviation regulators to re-certify the plane in 2020 and 2021. To what extent does the re-certification of the plane affect any previously reached conclusions? For online delivery, materials can be distributed through a virtual learning environment.
Groups of five to six people can be virtually assigned as well, and participants can be requested to make their own arrangements to find time to assess the material and sort the cards. When the class meets for a shorter synchronous session, results can be shared as described above and/or through the chat function. For sessions that are completely asynchronous, the above recommendations still hold, but the delivery of the results requires adjustment. For these cases, we suggest that each group prepare a video summary of their discussion and their results. The video format may vary and include voice-over slide decks or talking heads. Videos need to be uploaded by a specified deadline. All participants are then asked to view every video. Participants can leave comments and contribute to a discussion forum.
Clinical Impact of Vertical Artifacts Changing with Frequency in Lung Ultrasound Background: This study concerns the application of lung ultrasound (LUS) for the evaluation of the significance of vertical artifact changes with frequency and pleural line abnormalities in differentiating pulmonary edema from pulmonary fibrosis. Study Design and Methods: The study was designed as a diagnostic test. After patients qualified for the study, an ultrasound examination was performed, consistent with a predetermined protocol, employing convex and linear transducers. We investigated the possibility of B-line artifact conversion depending on the set frequency (2 MHz and 6 MHz), and examined pleural line abnormalities. Results: The study group comprised 32 patients with interstitial lung disease (ILD) (and fibrosis) and 30 patients with pulmonary edema. In total, 1941 cineloops were obtained from both groups and analyzed. The employment of both types of transducers (linear and convex) was most effective (sensitivity 91%, specificity 97%, positive predictive value (PPV) 97%, negative predictive value (NPV) 91%, LR(+) 27.19, LR(−) 0.097, area under curve (AUC) = 0.936, p = 7 × 10⁻⁶). Interpretation: The best accuracy in differentiating the etiology of B-line artifacts was obtained with the use of both types of transducers (linear and convex), complemented with the observation of the conversion of B-line artifacts to Z-lines. Introduction: Lesions affecting the interstitium are most frequently caused by pulmonary edema (cardiogenic and non-cardiogenic) and interstitial lung disease (ILD) [1]. Such lesions are easily detected in a lung ultrasound (LUS), where B-line artifacts are searched for. However, without considering clinical data, differentiating the etiology of lesions affecting the interstitium is much more difficult [2][3][4][5].
Consequently, searching for further possibilities for differentiating a pulmonary from a cardiogenic etiology of interstitial lesions using LUS is well grounded. In this study, vertical artifacts were analyzed (depending on the set operating frequency), as well as pleural line abnormalities. The first goal was to compare the length of vertical artifacts evaluated with a convex transducer at two extreme frequencies: 2 MHz, and then 6 MHz. The second goal was to assess pleural line abnormalities, with the employment of a linear transducer, in both patient groups. B-line artifacts are significant in diagnosing many diseases that affect the pulmonary interstitial space and alveoli [1,6]. These artifacts are defined as laser-like vertical reverberation artifacts arising from the pleural line, extending to the bottom of the screen (irrespective of the set depth), moving along with the lung slide, and leading to the disappearance of A-lines [7]. Z-line artifacts belong to the same family of vertical artifacts as B-line artifacts; however, they are much shorter and do not extend to the bottom of the screen [8][9][10]. The mechanism of B-line and Z-line formation is still not fully understood. Study Design The study was conducted as a prospective cohort study. Approval from the local ethics committee (numbers: NKBBN/474/2018 and NKBBN/473/2018) and the informed consent of all participants in the study were duly obtained. The approval date for both was 10 October 2018. Study Population Two groups of patients were examined: patients diagnosed with ILD secondary to systemic sclerosis (group A), and patients diagnosed with pulmonary edema due to the exacerbation of congestive heart failure or to acute heart failure (group B). The exclusion criteria for patients with recognized ILD were as follows: comorbidity of congestive heart failure, pneumonia, and noncardiogenic edema.
For patients diagnosed with pulmonary edema, the exclusion criteria were: pulmonary fibrosis, ILD, pneumonia, and noncardiogenic edema. The findings were anonymized and entered into a database by independent members of the research project. Written informed consent was obtained from those patients who agreed to participate. Duration of symptoms, examination findings, comorbidities, treatment, laboratory test results and echocardiography examination results, chest X-rays, and (in the case of ILD) high resolution computed tomography (HRCT) results were recorded. Patients were evaluated with LUS, and findings were recorded on standardized forms. Study Protocol LUS examinations were performed by three independent operators who are clinicians experienced in sonography (4 years, 10 years, and 10 years). Ultrasound examinations were recorded and re-analyzed by clinicians and physics specialists. An ultrasonography device (Philips Sparq, made in Bothell, WA, USA, 2013), with a 2-6 MHz convex curved transducer, and a 4-12 MHz linear transducer, was used. Patients were evaluated with the application of LUS performed in the same manner, and with the same technical criteria: (a) speckle reduction, compound imaging, and tissue harmonic imaging were switched off; (b) the focus of the image was positioned at the pleural line level; (c) imaging depth was set at 15 cm for a convex transducer, and at 6 cm for a linear transducer; (d) gain and time gain compensation (TGC) were adjusted in mid-scale. Moreover, when the lungs were examined with a convex transducer, vertical artifacts were evaluated with two extreme frequencies: 2 MHz, and then 6 MHz. When a convex transducer was employed, the sonomorphology of all artifacts, in both groups, was compared to each other, and analyzed statistically. When a linear transducer was employed, pleural line abnormalities were evaluated. 
Sonographic examinations were performed in the supine position and through the intercostal spaces on both sides of the chest. The probes were applied at four points over the front of the chest, and eight points over the posterior-lateral part of the chest. Statistical Analysis Data analyses were performed in R, version 3.6.0 (an open-source (GNU license) statistical environment available at www.r-project.org (accessed on 26 February 2021)), using the packages stat, plyr, ggplot2, and pROC. The results were presented as the mean (standard deviation) for continuous variables and count (frequency) for discrete data. A p-value < 0.05 was regarded as statistically significant. Discrete data were compared for the groups with Pearson's χ² test, with appropriate modifications (i.e., Yates's correction, Fisher's exact test, or V² test). For ultrasonographic features differentiating pulmonary fibrosis from heart failure, independently and in complex models, a receiver operating characteristic (ROC) curve was plotted, and the area under the curve (AUC) was calculated, determining whether it differed statistically from 0.5 with the application of the DeLong test. AUCs for differentiating parameters and predictive models were compared with the DeLong test. For quasi-continuous variables (e.g., the total number of intercostal spaces containing consolidations), optimal cut-off points were determined with two methods ("closest top-left" and Youden). For all diagnostic parameters, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and the likelihood ratios for a positive LR(+) and negative result LR(−) were calculated. For statistically significant models, logistic regression was performed, calculating the odds ratio for a positive result of the tested model and a respective Akaike information criterion (AIC) value.
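To make the diagnostic parameters concrete, the sketch below computes them from a 2×2 confusion matrix in Python (the study itself used R with pROC). The counts are hypothetical, chosen only to reproduce the headline figures from the abstract; the per-patient classification data are not given in the text.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard 2x2 diagnostic-test metrics (disease present vs. absent)."""
    se = tp / (tp + fn)                  # sensitivity (true-positive rate)
    sp = tn / (tn + fp)                  # specificity (true-negative rate)
    return {
        "SE": se,
        "SP": sp,
        "PPV": tp / (tp + fp),           # positive predictive value
        "NPV": tn / (tn + fn),           # negative predictive value
        "LR+": se / (1 - sp),            # positive likelihood ratio
        "LR-": (1 - se) / sp,            # negative likelihood ratio
    }

# Hypothetical counts: 29 of 32 fibrosis patients classified as positive,
# 1 of 30 edema patients falsely positive.
m = diagnostic_metrics(tp=29, fn=3, fp=1, tn=29)
print({k: round(v, 3) for k, v in m.items()})
```

With these assumed counts the function returns values close to those reported for the combined convex-plus-linear model (sensitivity 91%, specificity 97%, LR(+) 27.19, LR(−) 0.097).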
AIC allows for the comparison of different models: the lower its value, the better a given model fits the experimental data. A Differentiating Model's Assumption A complete predictive model should take into account as many differentiating elements as possible in different chest areas, as well as the examination technique. We considered three complex models: A: a change in the length of the vertical artifacts in three or more areas, and lack of consolidation, as features of fibrosis; B: a change in the length of the vertical artifacts in three or more areas, lack of consolidation, and irregularity, fragmentation, or blurring of the pleural line in at least two points, as features of pulmonary fibrosis; C: a change in the length of the vertical artifacts in three or more areas, or irregularity, fragmentation, or blurring of the pleural line in at least two points, as a feature of pulmonary fibrosis. All models were created by taking into account the ultrasound protocol: use of the convex transducer to visualize vertical artifacts, followed by the linear one and the assessment of the pleural line. Models were limited to the anterior and posterolateral areas of the chest, which makes it possible to use them in the diagnosis of severe conditions, where ultrasound evaluation of the posterior surface of the chest is impossible. All models require the assessment of vertical artifacts and the pleural line in at least six points: upper, lower, and posterolateral, bilaterally. The statistical properties of these models are presented in Table 1. Table 1. Sensitivity and specificity of findings differentiating pulmonary edema from pulmonary fibrosis in interstitial lung disease (ILD).
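The three models reduce to simple boolean rules over per-patient summaries of the LUS findings. A minimal Python sketch, under one plausible reading of the model descriptions; the input fields are hypothetical summaries introduced for illustration, not part of the original protocol:

```python
def predicts_fibrosis(model, conversion_areas, has_consolidation, pla_points):
    """Classify a scan as pulmonary fibrosis (True) vs. edema (False).

    conversion_areas  -- areas showing a change in vertical-artifact length
                         (B-line to Z-line conversion) between 2 and 6 MHz
    has_consolidation -- whether any consolidation was found
    pla_points        -- points with an irregular, fragmented, or blurred
                         pleural line (linear transducer)
    """
    change = conversion_areas >= 3   # conversion in three or more areas
    pla = pla_points >= 2            # pleural line abnormalities in >= 2 points
    if model == "A":
        return change and not has_consolidation
    if model == "B":
        return change and not has_consolidation and pla
    if model == "C":
        return change or pla
    raise ValueError(f"unknown model: {model}")

print(predicts_fibrosis("A", conversion_areas=4, has_consolidation=False, pla_points=0))  # True
print(predicts_fibrosis("C", conversion_areas=0, has_consolidation=True, pla_points=3))   # True
```

Model C is the most permissive (either finding suffices), while model B requires all three conditions, which is consistent with the trade-off between sensitivity and specificity reported in Table 1.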
PLA - pleural line abnormalities; B to Z - conversion of B-line artifacts to Z-lines (with the frequency changed from 2 MHz to 6 MHz); * - single examination zone; SE - sensitivity; SP - specificity; PPV - positive predictive value; NPV - negative predictive value; LR(+) - positive likelihood ratio; LR(−) - negative likelihood ratio; AUC - area under the curve; AIC - Akaike information criterion; OR - odds ratio; CI - confidence interval.

Group A Characteristics - Patients with ILD
A total of 32 consecutive patients diagnosed with ILD secondary to systemic sclerosis qualified for the study: 17 females and 15 males, with an average age of 56 (SD 21.2) years. Diagnosis of systemic sclerosis was based on the current ACR/EULAR 2013 criteria. ILD in this patient group was diagnosed on the basis of the patient's clinical picture, abnormalities in immune tests, HRCT, as well as pulmonary function tests, bronchofiberoscopy, and echocardiography. Infection as the cause of lesions affecting the interstitium was excluded based on microbiological tests.

Group B Characteristics - Patients with Pulmonary Edema
Thirty consecutive patients with a clinical diagnosis of exacerbation of left ventricular failure and acute pulmonary edema qualified for the study: 13 females and 17 males, with an average age of 69 (SD 21) years. Pulmonary edema was diagnosed on the basis of clinical symptoms (dyspnea, orthopnea, bilateral abnormalities on auscultation: crackles), a high NT-proBNP (N-terminal pro-brain natriuretic peptide) level, and typical abnormalities indicating edema visible in a chest X-ray and echocardiography.

Analysis of LUS Findings
During the study, 789 video clips (cineloops) were obtained and analyzed from patients with pulmonary edema, as well as 1152 cineloops from patients with ILD secondary to systemic sclerosis.
Of the collected material, 876 cineloops containing vertical artifacts evaluated with a convex transducer, and 644 cineloops assessing a specific area with a linear transducer, were selected. The recorded cineloops were assessed by clinicians and an engineer specializing in physics. Following the analysis of the video recordings, 128 cineloops were rejected due to inappropriate ultrasound device settings. The remaining cineloops were analyzed as regards the length of the artifacts (convex transducer), determining whether a given artifact meets the definitional criteria of a B-line (long artifact) or a Z-line (short artifact). Moreover, the sonomorphology of the pleural line, evaluated with a linear transducer, was analyzed in all patients.

LUS Findings - Convex Transducer
B-line artifacts are detected in patients with cardiogenic pulmonary edema and ILD. In this study, B-line artifacts were evaluated with a convex transducer, at a depth of 15 cm, consistently with the settings described in the methodology section. The ultrasound frequency was changed during the examination, and the evaluation was performed at two extreme values: 2 MHz and 6 MHz. In the cineloops obtained from patients with both pulmonary edema and ILD, B-line artifacts were almost always visualized at the frequency of 2 MHz (consistent with the adopted definition of the B-line artifact): in 68% of the examined points in patients with pulmonary edema, and 63% of the examined points in patients with ILD, respectively. Z-lines were visible only in single points: one (0.4%) in heart failure, and 11 (3%) in pulmonary fibrosis, whereas in three cases (0.8%) Z-lines coexisted with B-lines. At the frequency of 6 MHz, cineloops recorded for patients with ILD presented Z-lines in 62% of the evaluated points and B-lines in 13%, whereas in 10% of the examined points the findings were mixed (Figure 1).
In patients with cardiogenic pulmonary edema, at the frequency of 6 MHz, B-line artifacts were present in 62% of the evaluated points, and Z-lines in 24%, including a mixed profile of B- and Z-lines in 16% of the examined areas (Figure 2). Consequently, the change in frequency leads to a change in the profile of vertical artifacts, and this phenomenon is much more frequent in patients with pulmonary fibrosis secondary to ILD. Collected data are demonstrated in Table 2. The change of the ultrasound frequency from 2 to 6 MHz leads to a shortening or even the disappearance of vertical artifacts (conversion to A-lines was observed in three cases), and this phenomenon is more characteristic of pulmonary fibrosis than edema (61% vs. 24% of the examined areas, p < 10⁻⁶).

LUS Findings - Linear Transducer
Both groups (A and B), having been examined with a convex transducer, were reevaluated with a linear transducer.
Complete data obtained from 641 projections were analyzed statistically. In patients with cardiogenic pulmonary edema, the pleural line was evaluated in 260 points. In 257 (98.8%) points, no pleural line abnormalities were detected; only in three (1.2%) points were irregularities in the pleural line observed. Moreover, in 23% (60) of the evaluated areas, subpleural consolidations (up to 2-3 mm in diameter) were found in patients with cardiogenic pulmonary edema. These small consolidations correlated statistically significantly with vertical artifacts that, in the majority of cases, converted from B-lines to Z-lines when the frequency was changed from 2 MHz to 6 MHz in the convex probe (in 85%, p < 10⁻⁶). In patients with pulmonary fibrosis, in all 381 points evaluating the pleural line, the following abnormalities were detected: coexisting irregularity and fragmentation of the pleural line in 68% (259 localizations), and a blurred pleural line in 22% (84). These findings indicate that an irregular and fragmented pleural line is a feature that differentiates pulmonary fibrosis from cardiogenic pulmonary edema. Detection of this feature in a single evaluated point allows for a diagnosis of pulmonary fibrosis with a specificity of 99%, a sensitivity of 68%, PPV 99%, NPV 68%, LR(+) 60, and LR(−) 0.32, at AUC = 0.836 and p = 0.0002.
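The single-point figures quoted here follow directly from the reported counts, assuming the "test" is detection of an irregular, fragmented pleural line at one point (259 of 381 fibrosis points positive; 3 of 260 edema points positive):

```python
# Diagnostic accuracy of the single-point pleural-line sign,
# reconstructed from the counts reported in the text.
tp, fn = 259, 381 - 259   # fibrosis points: test positive / negative
fp, tn = 3, 260 - 3       # edema points: test positive / negative

sensitivity = tp / (tp + fn)               # 0.68
specificity = tn / (tn + fp)               # 0.99
ppv = tp / (tp + fp)                       # 0.99
npv = tn / (tn + fn)                       # 0.68
lr_pos = sensitivity / (1 - specificity)   # ~59, reported rounded to 60
lr_neg = (1 - sensitivity) / specificity   # 0.32

print(round(sensitivity, 2), round(specificity, 2), round(lr_neg, 2))  # 0.68 0.99 0.32
```

The computed values match the reported SE, SP, PPV, NPV, and LR(−); LR(+) comes out near 59, which the paper rounds to 60.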
An Attempt at Diagnosis: A Differentiating Model
Although both features described above differentiate fibrosis from edema, a single observation point is not sufficient. We propose model B, shown in Figure 4 as a decision tree graph, as the best final model. It is characterized not only by excellent SP, SE, PPV, and NPV, but also by the lowest Akaike criterion value. A low negative (<0.1) and high positive (>10) likelihood ratio indicates a high discriminatory value of the model. For example, suppose that a patient has an a priori probability of fibrosis of 50%. If the test result is positive for fibrosis, the a posteriori probability increases to 96%; if it is negative, it falls to 9%. A comparative graph of the tested ROC curves is presented in Figure 5.
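The worked example above is Bayes' rule on the odds scale; a minimal sketch, with LR values (24 and 0.1) chosen only to reproduce the quoted 96% and 9%, not taken from Table 1:

```python
# Post-test probability via likelihood ratios (Bayes on the odds scale).
# LR values are illustrative, picked to match the worked example.
def post_test_prob(prior, lr):
    odds = prior / (1 - prior)          # pre-test odds
    post_odds = odds * lr               # Bayes update with the likelihood ratio
    return post_odds / (1 + post_odds)  # back to a probability

print(round(post_test_prob(0.5, 24.0), 2))  # 0.96 -> positive result
print(round(post_test_prob(0.5, 0.1), 2))   # 0.09 -> negative result
```

With a 50% prior the pre-test odds are 1, so the post-test odds equal the likelihood ratio itself, which makes the quoted 96%/9% easy to verify by hand.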
Physical Hypothesis
It has been suggested in previous papers [11,12] that every vertical artifact which can be observed in a LUS image is probably generated by multiple reflections between the walls of the lung aerated spaces. It is highly unlikely that a vertical artifact can be generated by a vibrating air bubble (an alveolus, or an alveolus partially filled with water) [12]. An acoustic trap is needed to generate a vertical artifact: (a) an acoustic pulse is transmitted from the thoracic wall to the trap through a thickened interstitial space; (b) multiple reflections between the walls of the aerated spaces which surround the trap generate an acoustic perturbation inside the trap; (c) such an acoustic perturbation acts as an ultrasound source, and gradually re-radiates the trapped acoustic energy to the transducer [12,13]. Figure 6 shows two types of acoustic trap. The panel on the left shows a medium (water, blood, tissue, etc.) which is connected to the thoracic wall by means of a single channel. The panel on the right shows a more complex acoustic trap, which is formed by sparse media connected to the thoracic wall by means of multiple channels. In the first case, the aperture of the acoustic trap is given by a single channel, while in the second case by multiple channels.
The characteristics which distinguish vertical artifacts are: brightness, length, lateral width, and internal structure. In this study, only the length of vertical artifacts has been analyzed, depending on the change in ultrasound frequency. The length of an artifact is an interesting parameter, even though it represents really complex information. It depends on the duration of the trap response which, from a theoretical point of view, is infinite. Once a US pulse has been partially trapped by an acoustic trap, the latter re-radiates the trapped energy during an infinite time interval. Therefore, the question is: Why do we sometimes observe artifacts which reach the bottom of the screen, and sometimes shorter artifacts? The answer lies in the signal-to-noise ratio (SNR). The amplitude of the trap signal decreases during a time interval until the signal is no longer distinguishable from the noise, and there are no possibilities for the time gain compensation (TGC) to make it visible.
Therefore, now the question becomes: How much does the trap signal decrease during a time interval? This is an interesting question, and the answer should open our minds to the world of ultrasound artifacts. As an example, Figure 7 shows three different acoustic traps with a single-channel aperture at the top. The left panel shows an almost closed trap. Once the acoustic energy is transmitted to this trap, the trapped energy escapes slowly, since there is an impenetrable air barrier everywhere except at the small aperture at the top. In this case, a long artifact is expected. The central panel displays a trap which is closed at the bottom and open at the top. Once the acoustic energy is transmitted to the trap, the trapped energy meets an impenetrable barrier at the bottom and at the sides of the trap. However, a large aperture with a low impedance mismatch exists at the top, which allows both a minimal reflection (towards the bottom of the trap) and a significant transmission (towards the transducer) of the trapped energy. In this case, a short artifact is expected. The right panel shows a few traps with single apertures at the top, where the traps are connected by secondary channels. Once acoustic energy is transmitted to one of these traps through a single aperture at the top, the trapped energy can escape the trap both through the aperture at the top (in this way providing the artifact which is visualized by the transducer) and through the secondary channels. Due to the energy loss through the secondary channels, a shorter artifact is expected in this case also.
From the Physical to the Clinical Significance of the Study Results
The analysis of LUS findings in the case of diseases involving the interstitial space presently poses a serious challenge for clinicians. In many diseases that affect the interstitium and alveoli, vertical reverberation artifacts are detected. B-line artifacts are found, for instance, in cardiogenic pulmonary edema, noncardiogenic pulmonary edema, and ILD (both in the active phase, when ground-glass opacities are found in HRCT, and in the fibrotic phase, in which honeycombing is the corresponding HRCT finding) [10,14-17]. Z-line artifacts have not yet been described as having clinical significance. So far, the literature on the diagnosis of pulmonary fibrosis has been based mainly on B-line artifacts [18]. It has been proved, inter alia, that the more severe the pulmonary fibrosis, the more B-line artifacts are visible on lung ultrasound images [19-22]. The publications also emphasized the importance of the coexistence of B-line artifacts with lesions in the pleural line [23-28]. Consideration should be given here to the diversity of the study protocols, depending on the investigator. The use of convex and linear probes allows for a general and detailed assessment of lesions on the pleural line and the subpleural area [28]. This shows how technical aspects can influence the quality of the ultrasound examination.
Optimizing the settings of the ultrasound machine becomes another link in improving the quality of ultrasound in the diagnosis of diseases that affect the interstitial space of the lungs.

Pulmonary Fibrosis
We observed a change in the length of vertical artifacts in both examined groups, and the concurrent presence of pleural line abnormalities.
In the first examined group (A), conversion of B-lines to Z-lines was usually accompanied by pleural line abnormalities and/or small subpleural lesions. In this patient group, the conversion of B-lines to Z-lines at the two extreme frequencies (2 MHz and 6 MHz) most likely results from the generation of artifacts in acoustic traps adjacent to the pleural line, by means of both a single large channel and multiple channels. Probably both types of traps are present at the pleural line, given the apparent shape of the pleural line, which appears irregular and blurred.

Pulmonary Edema
Changes that occurred during the conversion of B-line artifacts in the second patient group (B) were much more diversified. A large number of B-lines did not undergo conversion with a change in ultrasound frequency. Concurrently, some B-line artifacts converted to Z-lines, or to B- and Z-lines. However, a correlation between the conversion of B-line artifacts to Z-lines (or B- and Z-lines) and coexisting pleural line abnormalities was observed in this group as well. This was most likely associated with the presence of microconsolidations (for instance, due to small areas of atelectasis caused by pulmonary edema). This study also demonstrates that differentiating a cardiogenic etiology of B-line artifacts from a pulmonary etiology should be based both on the analysis of the artifact length after the conversion and on the evaluation of pleural line abnormalities. Accounting for both findings results in a much higher accuracy in differentiating the causes of lesions affecting the interstitium.

Study Limitations
The only analyzed feature was the artifact length. It is possible that more factors (brightness, lateral width, internal structure) are significant in differentiating the etiology of vertical artifacts; however, these factors were not analyzed in this study. The second limitation is the selection of patients for the study group.
Patients with chronic pulmonary congestion, but in a stable clinical condition, were not included in the study. The third limitation is the lack of information on the pulse power spectrum and on the modulation transfer functions of the probes.

Conclusions
The visualization and sonomorphological analysis of vertical artifacts with the application of a convex transducer employing two different, extreme frequencies (2 MHz and 6 MHz) may be useful in differentiating lesions affecting the interstitium (cardiogenic pulmonary edema vs. lesions due to ILD). However, a higher accuracy is achieved when the pleural line and subpleural lesions are concurrently evaluated with a linear transducer.
From Mundane to Meaningful: AI's Influence on Work Dynamics - Evidence from ChatGPT and Stack Overflow

This paper illustrates how generative AI could bring big productivity gains but also opens up questions about the impact of these new, powerful technologies on the way we work and share knowledge. More specifically, we explore how ChatGPT changed a fundamental aspect of coding: problem-solving. To do so, we exploit the effect of the sudden release of ChatGPT on the 30th of November 2022 on the usage of the largest online community for coders: Stack Overflow. Using quasi-experimental methods (Difference-in-Difference), we find a significant drop in the number of questions. In addition, the questions are better documented after the release of ChatGPT. Finally, we find evidence that the remaining questions are more complex. These findings suggest not only productivity gains but also a fundamental change in the way we work, where routine inquiries are solved by AI, allowing humans to focus on more complex tasks.

Introduction
"We can only see a short distance ahead, but we can see plenty that needs to be done." - Alan Turing

ChatGPT 3.5, a chatbot produced by the company OpenAI and released in November 2022, broke the record for the fastest-growing consumer application in history, with 100 million monthly active users in two months. The fascination with this novel app is also accompanied by fears. Following this release, Goldman Sachs published a report claiming that such innovation could replace more than 300 million jobs globally (1). In addition, more than 1,000 tech leaders and researchers signed a letter calling for a pause on the most advanced AI developments. Following the citation by Alan Turing above, this article does not intend to speculate heroically on the far future of AI and its consequences, but rather to delve into one of the major observable current consequences of ChatGPT.
ChatGPT is particularly good at helping us to code, from code production to debugging. A significant amount of time, if not most of the time, is spent on the internet looking for commands or solutions to problems while coding. Debugging alone is estimated to represent about half of the time spent coding (2,3). Hence, any improvement in this key aspect would have important consequences for productivity, as coding is nowadays widely spread across numerous sectors, from finance to scientific research, including data science. In this article, we explore how the release of ChatGPT 3.5 affected the usage of the largest online coding community: Stack Overflow. The first key aspect is that the release of ChatGPT was sudden, public (free access), and occurred without the presence of any comparable model at the time. Later, other models by OpenAI (ChatGPT 4 or Code Interpreter) or by competitors (Bard by Google) were released. Hence, focusing on ChatGPT 3.5 allows us to observe the initial shock of such models on the worldwide coding community (not only paid users). It is natural to focus on Python, as it affects a large share of the coding community across several sectors due to its versatility. In addition, it is likely that vast resources were available to train ChatGPT to answer questions on this specific language due to its popularity. On the other hand, R, another freely accessible programming language, is often compared to Python but is somewhat less versatile (initially designed for statistics) and not as widely used (e.g., 16th in the TIOBE Programming Community index). More importantly, anecdotal evidence revealed that ChatGPT was not very efficient at answering questions on R. Hence, R is a good potential 'control', as it is subject to seasonality or other time-varying effects on the platform while not being substantially impacted by ChatGPT. In order to test how ChatGPT affects the way we code, we test three hypotheses.
Diff-in-Diff reveals that the quality of the questions (measured by a score on the platform) increased and that a higher proportion are left without answers. In addition, our statistical model is unable to reject the null hypothesis that there is no change in the number of views per question (p-value = 0.477), and hence supports our conjecture that the complexity of the questions increased. Hence, this paper provides evidence for the three hypotheses defined above. These findings suggest not only productivity gains but also a shift towards more meaningful work. Indeed, by solving routine inquiries, generative AI allows humans to focus on more demanding tasks requiring expertise. From the industrial revolution (5) to the effect of AI (6-8), including robotization at the end of the last millennium (9), technological change is known to reshape the job market significantly. Recent research revealed the effect of ChatGPT on productivity for text-writing tasks (10). Additionally, a research paper found evidence of productivity gains in coding using data from GitHub (an online platform for coders to store and manage their code), exploiting the ban of ChatGPT in Italy, which led to a 50% loss of productivity two business days after the ban (11). An analysis of the capacity of ChatGPT for automatic bug fixing revealed that it was competitive with other state-of-the-art models (12). However, another study estimated that approximately 50% of ChatGPT coding answers had inaccuracies (13). Moreover, despite the promising productivity gains presented by AI, we often fail to observe those in measures of productivity growth (14). The current paper enriches the literature by highlighting the potential significant productivity improvements caused by generative AI models and how AI-human interactions might affect the way we work. To test the last hypothesis (H3), we use the weekly proportion of unanswered questions.
Working with the proportion has the advantage that it is not affected by the fact that the stock of questions for Python is reduced after the release of ChatGPT.

Method
In order to address endogeneity issues, we use a Difference-in-Difference model. The first key aspect is that we exploit the sudden release of ChatGPT 3.5 on the 30th of November 2022. At the time, no other similar apps were publicly available. Moreover, this app was publicly released and freely accessible. Hence, we can observe a global shock affecting the online coding community Stack Overflow. Despite the exogeneity of the shock, seasonality and time could affect the activity on the online platform, as explained in the introduction, and hence could be confounded with the effect of the ChatGPT release. To address this issue, we use a Diff-in-Diff model to compare publications on R and Python. On the one hand, Python is often cited as the best substitute for R. On the other hand, anecdotal evidence suggests that the results of ChatGPT in answering coding questions are significantly better for Python than for R. One reason could be that the vast amount of data available online for Python offered a richer training set for ChatGPT. The econometric model interacts a Python indicator with a post-release indicator, the coefficient on the interaction term capturing the effect of the ChatGPT release (for the number of views per question, this coefficient is insignificant, with a p-value of 0.477; see Figure 5).

Limitations
The current analysis does not dive into the nature of the users, which opens several questions. First, we do not know if the reduction in the number of questions asked online concerns any profile, or more or less skilled workers. Recent findings on the effect of ChatGPT on text-writing tasks revealed that low-skilled workers benefited the most from such tools (10). Secondly, if routine tasks are solved by AI, will it boost the efficiency of lower-skilled jobs or will it replace them? Previous research findings favored the former (15,16). However, given the novelty of such technology, it would be safer to reassess this particular case.
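With a binary treatment group (Python vs. R) and a binary period (before vs. after the release), the Diff-in-Diff estimate reduces to a difference of group-mean differences; a minimal sketch with made-up weekly counts (not the Stack Overflow data):

```python
# Difference-in-Differences from group means: the interaction effect equals
# (treated post - treated pre) - (control post - control pre).
# Weekly question counts below are invented for illustration.
weeks = [
    # (language, post_chatgpt, questions)
    ("python", 0, 100), ("python", 0, 104), ("python", 1, 80), ("python", 1, 76),
    ("r",      0,  50), ("r",      0,  52), ("r",      1, 49), ("r",      1, 51),
]

def mean(rows, lang, post):
    vals = [q for l, p, q in rows if l == lang and p == post]
    return sum(vals) / len(vals)

did = (mean(weeks, "python", 1) - mean(weeks, "python", 0)) \
    - (mean(weeks, "r", 1) - mean(weeks, "r", 0))
print(did)  # -23.0: Python drops by 23 questions/week relative to R
```

Subtracting the R (control) change nets out seasonality and other time-varying effects common to both languages, which is exactly the identification argument made in the text.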
Third, does it help particularly in the initial learning phase, rather than while practicing, or in both situations? Again, this consideration is important to establish who is benefiting the most from such tools and for what usage. Hence, once we have a deeper understanding of who benefits and how, a quantification of productivity gains could be made.

Implications
The reduction in the volume of questions with an increase in the quality and potentially of the

B. Parallel Trends Assumption
In order to test if the trends are parallel in the pre-ChatGPT period, we run two placebo tests. The sample is restricted to the pre-ChatGPT period. Then, we define an indicator variable equal to one from week three onward, as well as a second one from week four onward. Finally, we run our Diff-in-Diff model on this sub-sample using the two different placebo periods as treatments. Based on the results of the regressions, it is not possible to reject the parallel trends assumption (the p-values of the two coefficients are respectively 0.722 and 0.397).
Prediction of Liver Disease using Regression Tree. Data Mining plays a decisive role, especially in the medical domain. Decision trees are a predominant model in machine learning: a simple and very effective classification approach. The decision tree identifies the most important features of a given problem. One of the most common diseases in India is liver cirrhosis. It is distinctly difficult to uncover liver cirrhosis in its initial stage; however, early diagnosis of liver cirrhosis is highly important. The liver disease data set has a collection of distinguishing features that affect the healthy state of a patient. Machine learning methods enable knowledge acquisition in the early stages, and the use of this acquired knowledge plays an important role in solving problems such as predicting whether a patient with liver cirrhosis is also suffering from hepatitis C. In order to easily arrive at this knowledge, a fully integrated system is certainly needed. In this paper, the collected liver disease data set is analyzed to predict whether the patient is suffering from liver cirrhosis or not.
Introduction Machine Learning [1] is an approach to enhance the performance of machines. This is done by developing efficient algorithms which make the system learn from experience for a given task. Classification is one such method that makes machines learn. The well-known procedure for classification is the decision tree, which has the capability to recognize and split the data into separate classes. Some other classification learning techniques are C4.5, ID3, and boosted decision trees. The human liver is among the largest organs: it weighs around 1500 grams and is located at the right side of the abdominal cavity, just below the diaphragm. There are two large vessels, the hepatic artery and the portal vein, which are in charge of transporting blood to the liver; the hepatic artery supplies oxygen-rich arterial blood. The hepatic artery and portal vein branch within the liver in a continuous manner and end in extremely tiny capillaries. Any disorganization of the liver may lead to weakness of the liver, associated serious or chronic inflammation, and sometimes harm to other organs inside the body. Cirrhosis is a slowly progressive disease in which healthy liver tissue is replaced with scar tissue, preventing the liver from functioning accurately. The damaged tissue blocks the flow of blood through the liver, and this slows the processing of nutrients, hormones, drugs, and naturally produced nutrients. Liver cirrhosis is the 12th leading cause of death by disease according to the National Institutes of Health. One of the main reasons for liver cirrhosis is over-consumption of alcohol for a long duration of time. Hence it is required to predict any illness related to this organ very effectively, which motivates the development of the proposed work. Related Work Manish Varma Datla et al. [2] compared two machine learning algorithms, the decision tree and the regression
tree. After carrying out the study, they concluded that the decision tree works well with small data sets, whereas the regression tree gives better results for huge data sets. For the prediction of liver disease, researchers have used unsupervised machine learning algorithms [3]. The prediction is grounded on the performance of the implementations of different techniques. A multitude of factors, such as Adjusted Mutual Information, Homogeneity, Completeness, V-measure, and Adjusted Rand Index, were used for measuring the performance. Varun Vats et al. [4] used three machine learning techniques: K-Means, DBSCAN, and affinity propagation. These three algorithms were compared in terms of computational complexity and prediction accuracy on the liver disease data set. The Silhouette coefficient was used to assess the prediction accuracy; out of the three techniques, K-Means was found to be the optimal method. A model was proposed by Kanza Hamid et al. [5] which abstains from generating the label of a test example when the prediction is unlikely to be correct. A novel stochastic gradient descent-based solver was proposed for learning with the abstention paradigm, and this was used to construct a practical state-of-the-art model for classifying the liver disease data set. A model was developed by R. H. Lin et al. [6] that performs the task in two stages: in the first stage, the presence of disease is identified, and in the second stage, the type of liver disease is recognized. CART and CBR techniques were integrated in the proposed intelligent model, which was used for the prognosis of liver disease. CART was used to diagnose whether a patient is suffering from liver disease or not, and CBR was used to identify the liver disease type.
Sina Bahramirad et al. [7] applied eleven data mining classification algorithms on a data set containing four hundred and sixteen liver patient records and one hundred and sixty-seven non-liver patient records. Out of six hundred and twenty-seven records, four hundred and forty-one male patient records and one hundred and forty-two female patient records were taken. The measures Precision, Recall, and Accuracy were used to measure the performance. P. Rajeswari et al. [8] carried out data classification on the liver disorder data set collected from the UCI repository. A total of three hundred and forty-five records with seven different attributes were taken. The WEKA tool was used to classify the data, and tenfold cross-validation was done in order to assess the results. In this paper, regression tree learning is used to inspect the collected liver disease data set. From the literature survey, it is evident that one should make a good choice of features, which play an important role in deciding whether the patient suffers from liver cirrhosis or not. The paper is organized as follows: section 3 discusses the classification methods, section 4 briefs feature engineering, section 5 deals with the empirical analysis of prediction, and section 6 summarizes and discusses future work. Classification Methods Classification [9][10][11] is an important procedure in machine learning. It has three forms: 1. supervised learning, 2. unsupervised learning, and 3.
Semi-supervised learning. In supervised learning, the procedure works with a group of examples whose labels are known. The classification learning approach considers categorical values, whereas the regression procedure takes numerical values. In unsupervised learning, the class labels are unknown in advance, but the examples are grouped into clusters as per their attribute characteristics. Semi-supervised learning utilizes both labeled and unlabeled data. Classification learning is normally a supervised procedure that takes an example in the data set and assigns it to a class. An example has two parts: the predictor attribute values and the target attribute value. The predictor attribute values are used to predict the value of the target attribute, that is, the class of the example. In the classification learning process, the collected data set is split into two sets, the training data set and the test data set. The classification process consists of two stages: at the training stage, the model is obtained using the training data set; at the testing stage, the model is applied to the test data set to predict the target attribute value. Since the examples in the test set are unseen during training, they give an unbiased estimate of the classifier's predictive accuracy. The knowledge learnt by the classification procedure can be represented in different ways, such as association rule learning, decision tree learning, and artificial neural network learning.
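The two-stage process just described (train on labeled examples, then predict the target attribute of unseen test examples) can be sketched as follows. The deliberately trivial majority-class learner and the toy data are invented here for illustration; they are not the paper's model.

```python
from collections import Counter

def train(examples):
    """Training stage: 'learn' the most frequent class label
    among the training examples (a deliberately trivial model)."""
    labels = [label for _, label in examples]
    return Counter(labels).most_common(1)[0][0]

def predict(model, test_examples):
    """Testing stage: predict the target attribute for unseen examples."""
    return [model for _ in test_examples]

# toy data: (predictor attribute values, target attribute value)
training_set = [([1, 0], "yes"), ([0, 1], "no"), ([1, 1], "yes")]
test_set = [([0, 0], "no"), ([1, 0], "yes")]

model = train(training_set)
predictions = predict(model, test_set)
# accuracy on the held-out test set
accuracy = sum(p == t for p, (_, t) in zip(predictions, test_set)) / len(test_set)
print(predictions, accuracy)
```

Any real classifier (decision tree, neural network, etc.) slots into the same train/predict skeleton.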
Decision tree A decision tree [12][13][14][15][16] is utilized for classification in the decision-making process. It consists of two distinct kinds of nodes, internal nodes and leaf nodes. One of the internal nodes is designated as the root node. The internal nodes are related to attributes, whereas the leaf nodes represent class names. Every non-leaf node has outgoing branches. To find the class name for a new record in the data set, the search process starts at the root node; subsequent internal nodes are traversed until a leaf node is reached. At every internal node on the path from the root, a test on the corresponding attribute is performed, and the class of the new record is assigned as the class of the leaf node reached. Regression tree A regression tree [13] may be observed as a variant of the decision tree. It is designed to approximate real-valued functions instead of being used for classification. Regression trees are used especially for prediction-type problems, whereas classification trees are used for classification-type problems, where the data set needs to be partitioned into classes of a response variable. The construction of a regression tree is carried out with a binary recursive partitioning process. This is an iterative splitting method in which each partition is split into smaller groups, and this splitting continues along each branch.
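One step of the binary recursive partitioning described above can be sketched as a search for the single-feature threshold minimizing the summed squared error (SSE) of the two resulting partitions. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on one feature that minimizes the summed
    squared error of the two partitions it induces, i.e. one step
    of binary recursive partitioning for a regression tree."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, np.inf)
    # candidate thresholds: midpoints between consecutive distinct values
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        thr = (x[i] + x[i - 1]) / 2
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = (thr, sse)
    return best  # (threshold, sse)
```

A full regression tree would apply this search recursively to each partition until a stopping criterion (e.g. minimum node size) is met.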
Feature Engineering The collected liver data set is taken for the purpose of studying the classification process. The methods used for data collection are: (1) direct interaction with the patients, (2) recording the outcomes of blood tests, and (3) recording the outcomes of scanning. A total of four hundred and thirty-five records were collected. This collected data set is partitioned into two sets, a training data set and a testing data set. The procedure and the associated feature engineering are performed on the training data set, and this results in a classification model. The obtained model is then applied on the test data set in order to predict whether or not the patient is suffering from liver disease. In this study, we have used three measures of performance for the purpose of analysis: the Root Mean Squared Error (RMSE), the Mean Squared Error (MSE), and the Mean Absolute Error (MAE). RMSE is the square root of the average of the squared errors. It is given by equation 1,

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(f_i - o_i)^2}  (1)

where RMSE is the root mean squared error, n is the number of samples, f_i is the i-th predicted value, and o_i is the i-th actual value in the data set. MSE is stated as the mean of the squared differences between the actual and predicted values of the instances in the data set. It is given by equation 2,

MSE = \frac{1}{n}\sum_{i=1}^{n}(y_i - o_i)^2  (2)

where MSE is the mean squared error, n is the number of samples, y_i is the i-th predicted value, and o_i is the i-th actual value in the data set. MAE is defined as the mean of the absolute differences between the actual and predicted values of the records in the data set. It is given by equation 3,

MAE = \frac{1}{n}\sum_{i=1}^{n}|y_i - o_i|  (3)

where MAE is the mean absolute error, y_i is the i-th predicted value, and o_i is the i-th actual value in the data set.
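The three error measures of equations (1)-(3) can be sketched directly in Python. The synthetic arrays below stand in for the real data set and model output; they are illustrative only.

```python
import numpy as np

def mae(pred, actual):
    # equation (3): mean absolute error
    return float(np.mean(np.abs(pred - actual)))

def mse(pred, actual):
    # equation (2): mean squared error
    return float(np.mean((pred - actual) ** 2))

def rmse(pred, actual):
    # equation (1): root mean squared error
    return mse(pred, actual) ** 0.5

# synthetic stand-in for the 435-record data set and a fitted model's output
rng = np.random.default_rng(0)
actual = rng.integers(0, 2, size=435).astype(float)              # lc_class: yes (1) / no (0)
predicted = np.where(rng.random(435) < 0.9, actual, 1 - actual)  # ~90% correct

print(mae(predicted, actual), mse(predicted, actual), rmse(predicted, actual))
```

The same three functions, computed for each candidate train/test split, reproduce the kind of comparison summarized in Table 1.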
The features that are considered for our study are shown below:
lc_age: age of the patient, expressed in number of years
lc_gen: gender of the patient, expressed as male (1) or female (0)
lc_dalc: duration of alcohol consumption, expressed in years
lc_qalc: quantity of alcohol consumption, expressed in quarters per day
lc_mcv: mean corpuscular volume, expressed in femtoliters per cell
lc_plcnt: total platelet count, expressed in lakhs per mm
lc_alb: albumin, expressed in gm per dl
lc_tpn: total protein, expressed in gm per dl
lc_gln: globulin, expressed in gm per dl
lc_sgotast: SGOT/AST, expressed in U/L
lc_agratio: albumin/globulin ratio
lc_sgptatl: SGPT/ALT
lc_dia: whether the patient is suffering from diabetes, expressed as yes or no
lc_obe: whether the patient is suffering from obesity, expressed as yes or no
lc_class: whether the patient is suffering from liver cirrhosis or not, expressed as yes or no
The liver data set containing 435 records is taken; the data set is divided into a training data set of 348 records and the remaining 87 records into test data, that is, 80% training data and 20% testing data. To choose how to partition the data set into training and testing sets, we have used the measures mean absolute error, mean squared error, and root mean squared error. Observing Table 1, we find that MAE, MSE, and RMSE are high when 70% of the data set is taken for training and 30% for testing; moderate when 60% is taken for training and 40% for testing; and comparatively low when 80% is taken for training and 20% for testing. Hence it is appropriate to take 80% of the data set for training the model, with the remaining 20% taken as the test data set. The regression tree is then constructed and used for prediction of the class for the test data set. Prediction Analysis Before the prediction analysis is carried out, the liver disease data set is preprocessed: missing values are filled with the mean of the attribute. For the collected data set, MAE, MSE, and RMSE are calculated, and the obtained results are recorded in Tables 2, 3, 4, and 5. Analyzing Table 2, we observe that MAE has the value 0.29 for female and 0.54 for male, and MSE has the value 0.44 for female and 0.68 for male. MAE and MSE are lower when the gender attribute lc_gen is 0, that is, for female. RMSE is also lower for female, that is, when lc_gen is 0.
This supports the fact that the model predicts with higher accuracy when the gender attribute lc_gen is female. Going through Table 3, we observe equal values of MAE, MSE, and RMSE for both male and female. This occurs when the total platelet count attribute lc_plcnt is less than 1.5 for female and less than 1.25 for male; hence male and female have equal chances of liver disease for the corresponding values of the total platelet count attribute lc_plcnt. Going through Table 4, we observe that MAE and MSE are 0.17 and 0.20, which are the lowest values, for female. This occurs when the albumin attribute lc_alb is less than 4 and lc_gen is 0. We also observe that RMSE is lower when lc_gen is 0, that is, female. This affirms that the model predicts with higher accuracy when the attribute lc_gen is female. We define a measure p to estimate the effect of alcohol consumption for male and female: p is stated as the absolute difference between the duration of alcohol consumption and the quantity of alcohol consumption. We consider that alcohol consumption by a female affects the liver at 70% of the level of a male. For given values of p of 6 for male and 4.2 for female, the values of the performance measures MAE, MSE, and RMSE are calculated and tabulated in Table 5. Inspecting Table 5, we notice that MAE and MSE have their lowest values, 0.23 and 0.17, when the gender attribute lc_gen is 0. We also observe that RMSE is lower when the attribute lc_gen is 0, that is, female. The outcomes of Tables 2 to 5 support the affirmation that the model predicts with higher accuracy when lc_gen is 0; that is, women have higher chances of getting liver cirrhosis.
Conclusion and Future Work In our work, prediction is carried out using a regression tree on the liver disease data set. The collected liver cirrhosis data set has attributes such as gender, obesity, age, quantity and duration of alcohol consumption, platelet count, albumin, and globulin. The MAE, MSE, and RMSE are calculated. It is found that the MAE for males is higher than for females for attributes such as diabetes, albumin, platelet count, and duration of alcohol consumption in the data set. From the analysis of the regression tree, we find that the prediction model performs better for lc_gen = 0, that is, for the female attribute, in terms of MAE and MSE. The model predicts that females have higher chances of being affected by liver disease than males; from the results in section 5, it is clear that females are more prone to liver cirrhosis than males. In future work, we plan to apply various other machine learning techniques, such as Support Vector Machines, Artificial Neural Networks, and Genetic Algorithms, and also to take a larger number of medical attributes, such as mean corpuscular volume (mcv), globulin, albumin/globulin ratio (a/g ratio), and obesity, that have a direct impact on liver disease. Table 1. Type of error for different splits of the training and testing data set. Table 3. MAE, MSE, and RMSE for male and female w.r.t. lc_plcnt. Table 5. MAE, MSE, and RMSE for male and female w.r.t. p.
Antimicrobial Activity of Some Essential Oils against Methicillin-Susceptible and Methicillin-Resistant Staphylococcus pseudintermedius-Associated Pyoderma in Dogs Simple Summary Pyoderma is one of the most common diseases in dogs, and Staphylococcus pseudintermedius, a Gram-positive coagulase-positive bacterium, represents the most common infectious agent causing canine pyoderma. Since multidrug-resistant S. pseudintermedius strains have become a relevant threat in veterinary medicine, this study aimed to test the antimicrobial properties of some essential oils (EOs) against S. pseudintermedius strains isolated from dogs suffering from pyoderma. The obtained findings demonstrated a clear in vitro efficacy of some tested EOs against clinical methicillin-resistant and methicillin-susceptible S. pseudintermedius strains. The applicability and efficacy of EOs in cases of canine pyoderma supported by S. pseudintermedius could be beneficial for both dogs and pet owners, who are inevitably exposed to this zoonotic bacterium. Abstract This study aimed to test in vitro the antimicrobial activity of 11 essential oils (EOs) against four methicillin-resistant Staphylococcus pseudintermedius (MRSP) and four methicillin-susceptible S. pseudintermedius (MSSP) clinical isolates. The obtained findings demonstrated a clear in vitro efficacy of some tested EOs against both MRSP and MSSP strains. Particularly, modal minimum inhibitory concentration (MIC) values ranging from 1:2048 v/v for Melissa officinalis against an MSSP strain to 1:256 v/v for Cymbopogon citratus against all MRSP strains were observed. The best results, highlighting a modal MIC value of 1:1024 v/v for all tested isolates, were provided by Cinnamomum zeylanicum. Intriguingly, Cinnamomum zeylanicum showed, in many cases, a correspondence between minimum bactericidal concentration (MBC) and MIC values, indicating that the inhibiting dose is also often bactericidal.
Moreover, a mild antibacterial and bactericidal activity against both MRSP and MSSP isolates was detected for the other tested EOs. Considering the zoonotic potential of S. pseudintermedius and the increased dissemination of multidrug-resistant strains, the employment of EOs could be useful for the treatment of canine pyoderma. Since antibiotic resistance has become the most urgent issue, from the perspective of the One Health initiative, alternative therapeutic approaches are desirable to limit the use of antibiotics or to improve the efficacy of conventional therapies. Introduction In recent years, alternative treatments, including essential oils (EOs), have become very popular as natural remedies in human and veterinary medicine. The establishment of new approaches to conventional therapies, using selected EOs, for the treatment of canine skin disorders was the objective of this study. Skin disorders are very common in pet animals, and the most frequent causes are allergies from parasites such as fleas, environmental allergies, and adverse food reactions. However, all alterations of the skin surface microenvironment promote bacterial multiplication [1]. It is known that Staphylococcus pseudintermedius is the staphylococcal species most frequently isolated from dogs suffering from pyoderma. This coagulase-positive bacterium is an opportunistic canine skin pathogen that inhabits healthy dogs, and its nasal carriage has also been demonstrated in healthy pet-owning household members [2]. In the past, S. pseudintermedius isolates were generally susceptible to β-lactam antibiotics; however, over the last decade, methicillin-resistant strains (MRSP; methicillin-resistant S. pseudintermedius) have emerged as a significant health problem in pet animals. Over the years, MRSP has been reported with increasing frequency [3][4][5]. Furthermore, MRSP strains often show multidrug resistance profiles worldwide, including resistance to several classes of antimicrobial drugs [6].
In recent years, several studies were carried out, both in vivo and in vitro, on the efficacy of some EOs against the etiological agents of pyoderma in dogs [7][8][9]. Many EOs can be used in these skin disorders; thanks to their bioactive chemical compounds, some of them are effective tools especially against Gram-positive bacteria [10]. In particular, several EOs derived from plants belonging to the Lamiaceae family have shown a significant antibacterial activity [11]. Moreover, EOs characterized by high percentages of thymol and carvacrol show a remarkable membrane-damaging activity in bacteria. In this work, the EOs of savory, lemon balm, and basil were selected as representatives of this important family of medicinal plants. The other EOs selected for this research were obtained from plants whose antibacterial activity has been less studied than that of the botanical species belonging to the Labiatae family. With regard to the antibacterial activity of manuka essential oil, not many data are available; however, some recent studies reported good activity against Staphylococcus spp. and in general against Gram-positive bacteria, thanks to the presence of compounds such as leptospermone and isoleptospermone [8,12]. In particular, one study analyzed the efficacy of manuka EO against S. pseudintermedius isolated from canine pyoderma and otitis samples, highlighting its excellent activity against all these bacterial isolates [13]. Few scientific works have reported the antibacterial activity of resins such as myrrh, although many important biological activities are traditionally attributed to them [14,15]. Cinnamon EO is effective against many Gram-positive and Gram-negative bacteria, and it is also used in the food industry with considerable results [16]. The antibacterial activity of eucalyptus and lemongrass EOs has been reported in numerous studies available in the literature [17,18].
On the other hand, less experimental evidence is available to demonstrate the antibacterial efficacy of verbena EO [19,20]. Recent studies supported the antibacterial effectiveness of the EOs obtained from many citrus fruits, including Citrus aurantium, even though they did not show a particularly high activity [21,22]. The antibacterial activity of Cannabis sativa EO is one of the aspects considered most recently, since other biological activities of this plant have received more attention from the scientific world. A recent study conducted in Italy showed how the presence of some compounds, such as α- and β-pinene, β-myrcene, and β-caryophyllene, promotes the antibacterial activity of essential oils derived from Cannabis sativa against different microorganisms [23]. The topical application of EOs could be a promising alternative therapeutic tool in dog skin disorders, such as pyoderma. For this reason, the main purpose of this research was to evaluate the inhibitory and bactericidal activity of different commercially available EOs potentially viable in therapy against methicillin-susceptible and methicillin-resistant S. pseudintermedius isolates from canine pyoderma. Essential Oils The EOs of Citrus aurantium L. […]. According to the indications on the label, EOs were obtained by steam distillation, except for the Citrus aurantium L. EO, which was obtained by cold pressing. Chemical Composition of the Tested EOs A chemical characterization of the EOs was carried out by GC-EIMS (gas chromatography coupled with electron impact mass spectrometry) at the Department of Pharmacy, University of Pisa (Pisa, Italy). Each EO was diluted to 5% in HPLC-grade n-hexane and then injected into a GC-EIMS apparatus.
GC-EIMS analyses were performed with an Agilent 7890B gas chromatograph (Agilent Technologies Inc., Santa Clara, CA, USA) equipped with an Agilent HP-5MS (Agilent Technologies Inc., Santa Clara, CA, USA) capillary column (30 m × 0.25 mm; coating thickness 0.25 µm) and an Agilent 5977B single-quadrupole mass detector (Agilent Technologies Inc., Santa Clara, CA, USA). Analytical conditions were as follows: injector and transfer line temperatures of 220 °C and 240 °C, respectively; oven temperature programmed from 60 °C to 240 °C at 3 °C/min; carrier gas helium at 1 mL/min; injection of 1 µL; split ratio 1:25. The acquisition parameters were the following: full scan; scan range: 30-300 m/z; scan time: 1.0 s. Identification of the constituents was based on a comparison of the retention times with those of authentic samples, comparing their linear retention indices relative to a series of n-hydrocarbons. Computer matching was also used against commercial (NIST 14 and ADAMS) and laboratory-developed mass spectral libraries built up from pure substances, components of known oils, and MS literature data [24][25][26][27][28][29]. EOs were stored at 4 ± 2 °C in the dark until their use. Phenotypic and Genotypic Identification of Bacterial Isolates Eight veterinary clinical isolates, named from 1 to 8, comprising four MRSP (methicillin-resistant S. pseudintermedius) and four MSSP (methicillin-susceptible S. pseudintermedius) strains, were selected from the bacterial stocks stored at −80 °C in Microbank™ vials (Pro-lab Diagnostics, Richmond Hill, ON, Canada) belonging to the Microbiology Laboratory of the Department of Veterinary Medicine and Animal Production of the University of Naples Federico II (Naples, Italy). Briefly, skin samples were collected from dogs attending the Veterinary University Teaching Hospital of Naples to perform bacteriological analysis and antimicrobial susceptibility tests.
Upon arrival at the laboratory, specimens were cultured on Columbia Nalidixic Acid agar (CNA) with 5% sheep blood and on mannitol salt agar (MSA) plates (Oxoid, Milan, Italy) and incubated aerobically at 37 °C for 24 h. Staphylococcus spp. presumptive colonies were subjected to a first identification using standard techniques: colony morphology, Gram staining, and coagulase and catalase tests. Then, all the isolates were identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) (Bruker Daltonik, Germany) using fresh colonies grown on Columbia CNA agar. Specifically, each bacterial colony was first inoculated on the plate for mass spectrometry and, then, 1 µL of the organic matrix, cinnamic acid, was added to the sample. Afterward, the plate was placed in the equipment for MALDI-TOF-MS analysis. The identification was based on the score value, according to the manufacturer's instructions: values from 1.9 to 2.3 indicated the best identification of genus and species [30]. For the molecular characterization of the stored strains, each S. pseudintermedius isolate was cultured again on MSA plates with incubation at 37 °C overnight. The bacterial DNA extraction of the isolates was carried out by using the commercial Isolate II Genomic DNA kit (Bioline, London, UK), following the manufacturer's instructions. The obtained bacterial DNA was stored at −20 °C. All isolates were tested by polymerase chain reaction (PCR) for the species-specific nuc and hlb genes (Table 1) to further confirm the proteomic identification by MALDI-TOF-MS. S. pseudintermedius ATCC® 49444™ was used as positive control. Indeed, to distinguish the species belonging to the Staphylococcus intermedius group (SIG), a species-specific multiplex PCR targeting the thermonuclease (nuc) gene is generally performed [31]. S. pseudintermedius constitutively produces β-hemolysin. On the basis of the S.
pseudintermedius ED99 complete genome, deposited in GenBank, a new pair of primers for the hlb gene, which enables the analysis of S. pseudintermedius strains, was designed [32]. These investigations allow a better identification of S. pseudintermedius and distinguish it from the other members of the SIG group. Minimum Inhibitory Concentration (MIC) and Minimal Bactericidal Concentration (MBC) Determinations The minimal inhibitory concentration (MIC) was determined using a twofold serial microdilution method, as previously described [36], at the Department of Veterinary Sciences, University of Pisa (Pisa, Italy). Ninety-five microliters of BHI (Brain Heart Infusion, Thermo Fisher, Milan, Italy) broth was distributed in a 96-well microtiter plate; the EO dilution stock was prepared in BHI broth with dimethyl sulfoxide (DMSO) added to a final ratio of 1:3:4 (EO:DMSO:BHI, v/v/v). Ninety-five microliters of the EO dilution was dispensed in the first well of each series, and then twofold dilutions were performed. Bacterial suspensions, adjusted to 0.5 on the McFarland standard turbidity scale (approximately 1.5 × 10^8 colony-forming units (CFU)/mL), were added to each well to reach a final volume of 100 µL. Wells containing bacterial suspension and BHI or BHI alone were employed as positive and negative controls, respectively. Microplates were incubated at 37 °C for 24 h in a humid chamber. EO MIC determinations were performed in triplicate. The minimal bactericidal concentration (MBC) was determined by streaking one drop from each well showing a concentration of EO equal to or higher than the MIC value on TSA (Trypticase Soy Agar, Thermo Fisher Scientific, Milan, Italy). TSA plates were incubated at 37 °C for 24 h. MBC values were determined as the lowest concentrations that did not allow colony growth. S. pseudintermedius Strain Identification The eight isolated strains were identified, with a log(score) of ≥2.0, as S. pseudintermedius by MALDI-TOF-MS.
Moreover, all isolates harbored the species-specific nuc and hlb genes, thus confirming the proteomic identification by MALDI-TOF-MS. Antibiotic Resistance Patterns of the S. pseudintermedius Isolates Four isolates were MRSP strains carrying the mecA gene. Interestingly, they also displayed multidrug-resistant profiles, showing resistance to at least three different antibiotic classes. In fact, the MRSP antimicrobial susceptibility results (Table 2), obtained from Kirby-Bauer disc diffusion testing, showed complete resistance to amoxicillin-clavulanate, ampicillin, ceftriaxone, ciprofloxacin, erythromycin, and sulfamethoxazole-trimethoprim (100%). The MSSP isolates displayed broad resistance to ampicillin and penicillin (100%) but revealed broad susceptibility to the other tested antibiotics, as shown in Table 2. No resistance to vancomycin or linezolid was observed for either the MRSP or the MSSP isolates. Essential Oil Composition The percentage of identified compounds ranged from 87.6% for Leptospermum scoparium to 100% for Citrus aurantium (Table 3). Limonene was the main compound identified in Citrus aurantium, with a percentage of 92.6%, followed by 1,8-cineole (84.2%) in Eucalyptus globulus and by trans-cinnamaldehyde (63.2%) in Cinnamomum zeylanicum. Discussion Canine bacterial skin infections represent the main reason behind presentation in small animal practice. S. pseudintermedius, a normal inhabitant of the skin and mucosa of dogs, is the major causative agent of superficial pyoderma [4]. The increasing spread of multidrug-resistant S. pseudintermedius strains has become a relevant challenge in veterinary medicine [4]. Repeated antibiotic treatments may then increase the risk of selecting for multidrug-resistant bacteria, one of the most relevant current threats to public health. The close contact between animals and their owners provides opportunities for bacterial transmission, including MRSP strains [39].
Alternative nonantibiotic substances need to be explored in order to develop new therapies for disease treatment. In the present paper, the promising in vitro results obtained demonstrate a clear efficacy of some EOs against canine MRSP and MSSP. In particular, several of the tested EOs showed relevant antibacterial activity against all tested strains. Cinnamomum zeylanicum EO provided the best results against both MRSP and MSSP, almost always showing concordant MIC and MBC values. This finding confirms the efficacy of Cinnamomum zeylanicum EO, whose antibacterial activity was already reported against bacterial isolates from human orofacial infections [40] and against the food-borne pathogens Staphylococcus aureus and Escherichia coli [41]. Moreover, in vivo studies also reported the activity of Cinnamomum zeylanicum EO against both planktonic and biofilm forms of Gram-positive and Gram-negative bacteria [42]. Herein, Melissa officinalis EO showed similar antibacterial activity against both MRSP and MSSP, and a more effective bactericidal activity against MSSP isolates. Melissa officinalis EO properties are already known in veterinary medicine. Indeed, Ehsani et al. [43] reported the possible application of Melissa officinalis EO in the food industry, owing to its antioxidant and antibacterial properties against four important food-borne bacteria (Salmonella typhimurium, Escherichia coli, Listeria monocytogenes, and Staphylococcus aureus). Furthermore, a strong antimicrobial activity of Melissa officinalis EO against bacterial microflora isolated from fish was also described [44]. In this study, we also obtained good results for Leptospermum scoparium, Satureja montana, and Cymbopogon citratus EOs against all selected S. pseudintermedius strains.
Since this preliminary investigation highlighted that some of the tested EOs could be valuable tools in pyoderma therapy, further studies are desirable to assess their efficacy not only in vitro but also in in vivo trials. In particular, Cinnamomum zeylanicum, Melissa officinalis, Cymbopogon citratus, and Satureja montana EOs may represent promising and valid candidates for in vivo use. Interestingly, the efficacy demonstrated by Melissa officinalis EO makes it the best prospect for in vivo use. However, the essential oil yield from this plant is extremely low, often below 0.1%; thus, it would be desirable to use this essential oil in a mixture with other oils [45]. From some of the tested EOs, a greater antibacterial effectiveness could have been expected in view of the data reported in the literature; however, the differences among the compounds are probably linked to their different biological activities [46]. Hence, mixtures of the EOs could also be considered to determine their potential synergistic action. The extremely low dosages needed for EOs allow minimizing any adverse effects, offering effective alternatives to topical treatment with antibiotics. It is worth noting that these nonantibiotic treatment strategies might help to reduce the severity of canine S. pseudintermedius infections and to limit further colonization, thereby also preserving the health of pet owners.

Conclusions

To our knowledge, the present study revealed for the first time the antimicrobial properties of the selected EOs against both MRSP and MSSP strains isolated from dogs suffering from pyoderma. In particular, Cinnamomum zeylanicum and Melissa officinalis showed the strongest antibacterial activity. Our results underline that EOs may be considered promising therapeutic agents to treat infections caused by zoonotic multidrug-resistant S.
pseudintermedius strains, which are becoming more and more difficult to manage.
Vaginal microbial dynamics and pathogen colonization in a humanized microbiota mouse model

Vaginal microbial composition is associated with differential risk of urogenital infection. Although Lactobacillus spp. are thought to confer protection against infection, the lack of in vivo models resembling the human vaginal microbiota remains a prominent barrier to mechanistic discovery. Using 16S rRNA amplicon sequencing of C57BL/6J female mice, we found that vaginal microbial composition varies within and between colonies across three vivaria. Noting vaginal microbial plasticity in conventional mice, we assessed the vaginal microbiome of humanized microbiota mice (HMb mice). Like the community structure in conventional mice, HMb mice vaginal microbiota clustered into community state types but, uniquely, HMb mice communities were frequently dominated by Lactobacillus or Enterobacteriaceae. Compared to conventional mice, HMb mice were less susceptible to uterine ascension by the urogenital pathobionts group B Streptococcus (GBS) and Prevotella bivia. Although Escherichia and Lactobacillus both correlated with the absence of uterine GBS, vaginal pre-inoculation with exogenous HMb mouse-derived E. coli, but not Ligilactobacillus murinus, reduced vaginal GBS burden. Overall, HMb mice serve as a useful model to elucidate the role of endogenous microbes in conferring protection against urogenital pathogens.
IMPORTANCE An altered vaginal microbiota, typically with little to no Lactobacillus, is associated with increased susceptibility to urogenital infections, although the mechanisms driving this vulnerability are not fully understood. Despite the known inhibitory properties of Lactobacillus against urogenital pathogens, clinical studies with Lactobacillus probiotics have shown mixed success. In this study, we characterize the impact of the vaginal microbiota on urogenital pathogen colonization using a humanized microbiota mouse model that more closely mimics the human vaginal microbiota. We found several vaginal bacterial taxa that correlated with reduced pathogen levels but showed discordant effects in pathogen inhibition between in vitro and in vivo assays. We propose that this humanized microbiota mouse platform is an improved model to describe the role of the vaginal microbiota in protection against urogenital pathogens. Furthermore, this model will be useful in testing the efficacy of new probiotic strategies in the complex vaginal environment.

interactions and their interplay with vaginal physiology and host immunity. Lack of an in vivo model system that closely resembles the human vaginal microbiota remains a prominent barrier to mechanistic discovery. While mouse models have served a seminal role in delineating host-microbe interactions in reproductive diseases, the murine vaginal microbiota has only recently been defined and is quite distinct from that of women; Staphylococcus succinus and Enterococcus spp. are the most common members in C57BL/6J mice (23, 24), Enterobacteriaceae and Proteus spp. are dominant in CD-1 mice (25), and Streptococcus spp. and Proteus spp. have been observed in FVB mice (26).
Conventional mice are poorly colonized by human vaginal Lactobacillus spp. and require multiple, high-inoculum doses to observe in vivo effects (27-29). Several studies have evaluated the ability of human vaginal microbial communities to colonize mice with low or no endogenous microbiota, but stable colonization by these human communities was not achieved (30, 31). Not only is there a need to better understand the dynamics of the vaginal microbiota in conventional mice, especially as it relates to disease modeling, there is also a dire need for an animal model that better recapitulates the human vaginal microbial environment and provides translational relevance (32, 33).

Here, we evaluated the impact of environment on the vaginal microbiome in conventional C57BL/6J mice born and raised at three distinct vivaria. To circumvent challenges with transient colonization of human vaginal microbes, we assessed whether a mouse model that achieved stable colonization with human microbes, the humanized microbiota mouse (HMb mice) (34), would exhibit a more human-like vaginal microbiota. We investigated the impact of environment (vivarium room) and estrous on the vaginal microbiome composition of HMb mice and determined susceptibility to vaginal colonization by three human pathobionts. We found that the murine vaginal microbiota is malleable in composition and that the distinct HMb mouse vaginal microbiota is capable of conferring protection against the human pathobionts group B Streptococcus and Prevotella bivia compared to mice with conventional vaginal microbiota.

[bioRxiv preprint; this version posted February 9, 2023; https://doi.org/10.1101/2023.02.09.527909. The copyright holder for this preprint (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission.]
The vaginal microbiota differs between vivaria and demonstrates high intra-colony variability

Both human and mouse studies have shown that, despite host genetic selection for certain microbiome features (35), environmental factors play a dominant role in the microbial composition of the gut (36-43); however, it is unknown whether the vaginal microbiota is likewise affected. To test this, vaginal swabs collected from conventional C57BL/6J mouse colonies bred at three different institutions [Baylor College of Medicine (BCM), Jackson Labs, and the University of California San Diego (UCSD)] were subjected to 16S v4 rRNA amplicon sequencing. As in prior studies, mice from Jackson Labs were primarily dominated by Staphylococcus succinus and Enterococcus spp. (23, 24), with occasional appearance of Lactobacillus spp., Corynebacterium spp., or Acinetobacter spp. dominant communities (Fig. 1A). Similarly, BCM mice displayed S. succinus, Acinetobacter spp., and Pseudomonadaceae dominant communities, with rare occurrence of Lactobacillus spp. dominance. In contrast, UCSD mice vaginal samples were primarily composed of Pseudomonadaceae or a combination of Pseudomonadaceae and Pasteurellaceae. Primary taxa driving differences between individual mice were S. succinus and Enterococcus. The number of observed OTUs was highest in the BCM colony, reaching statistical significance compared to the Jackson Labs colony (Fig. 1C), but no differences in Shannon entropy, a weighted alpha diversity metric, were observed (Fig. 1D). Intra-colony variability was significantly different within each site, with UCSD having the most intra-colony similarity [Bray-Curtis distance (BCmed) = 0.639] compared to BCM (BCmed = 0.978) and Jackson (BCmed = 0.998) (Fig. 1E). Inter-site dissimilarity was greater than intra-colony variability across institutional comparisons, achieving statistical significance for most comparisons (Fig.
1F-H). Together, these data support that vaginal microbiota composition is heavily influenced by environment in the C57BL/6J genetic background, and thus manipulations of microbial or environmental exposures may alter the vaginal microbiota.

HMb mice have distinct vaginal communities compared to conventional mice and are enriched in Lactobacillus-dominant communities

To determine whether stable colonization of human-derived microbes in mice would alter the vaginal microbiota, we defined the vaginal microbiome of HMb mice. HMb mice, founded from germ-free WT C57BL/6J mice colonized with human fecal microbiota via oral gavage, display a more human-like gastrointestinal community compared to conventionally raised mice (34). The fecal microbiota of HMb mice remains humanized over their lifespan and demonstrates generational stability of these communities among offspring (34). Vaginal swabs collected from multiple generations of HMb mice over two years were subjected to 16S rRNA v4 amplicon sequencing. The first cohort of mice, twelve generations removed from founder mice, demonstrated Lactobacillus dominance (>70% relative abundance) in most samples (Fig. 2A). Subsequent cohorts had at least one mouse with Lactobacillus colonization (2.7-99% relative abundance), but the proportion of mice with Lactobacillus dominance decreased in cohorts 2-5 (Fig. 2A). Importantly, vaginal Lactobacillus in HMb mice represent multiple species according to 16S v4 sequences. The most frequent Lactobacillus OTU sequences were identical to L. fermentum, L. gasseri, or murine-associated L. murinus by BLAST search (Table S1). The HMb mice vaginal microbiota was notably distinct from the fecal microbiota in terms of composition and clustering by sample type on a PCoA of Bray-Curtis dissimilarities (Fig.
S1A-B). Vaginal samples had decreased richness (median = 6.5 OTUs) compared to fecal pellets (median = 217 OTUs) (Fig. S1C) and lower alpha diversity determined by Shannon entropy (Fig. S1D). Additionally, vaginal communities demonstrated greater [...]

Compared to conventional mice, Bray-Curtis dissimilarity PERMANOVA measured high dissimilarity between HMb mice and conventional mouse colonies for most permutations (Fig. S2A). PCoA visualization of HMb mice overlaid with conventional mice showed separation of HMb mice from UCSD and BCM mice, with some overlap of HMb mice and Jackson mice that was partially driven by Lactobacillus spp. (Fig. S2B). Yet, as seen in conventional mice (Fig. 1E), HMb mice also demonstrated high variability across cohorts (Fig. 2C). The extent of dissimilarity between cohorts did not correspond with time between sampling periods. For example, dissimilarity between longitudinal samples from Cohort 5 taken only one week apart (Cohort 5a and 5b) was not different from that between two distinct sets of mice (Cohort 4 and Cohort 5a) taken six months apart (Fig. 2C). As part of the experimental design, animals were transferred from a colony (breeding) room to an experimental (biohazard) room in a detached vivarium, providing the opportunity to examine whether the vaginal composition shifted as the mice acclimated to the new vivarium. Although dissimilarity between mice within each room was high (BCavg = 0.937 in the breeding room and BCavg = 0.928 in the biohazard room), dissimilarity between rooms was even higher (BCavg = 0.963), suggesting divergent vaginal microbiota (Fig. 2D). Indeed, mice in the breeding room had higher abundances of Lactobacillus and Enterobacteriaceae than mice housed in the biohazard room for durations beyond one week (Fig. 2E).
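The Bray-Curtis dissimilarities and Shannon entropy values reported above follow standard definitions; a minimal stdlib sketch (function names are illustrative, not from the authors' QIIME2 pipeline, and this Shannon variant uses the natural log, whereas some pipelines default to base 2):

```python
import math

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors.
    0 means identical communities; 1 means no shared taxa."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

def shannon(counts):
    """Shannon entropy (natural log) of a community from taxon counts."""
    total = sum(counts)
    props = (c / total for c in counts if c > 0)
    return -sum(p * math.log(p) for p in props)
```

A community compared with itself scores 0, and two communities sharing no taxa score 1, which is why the intra-colony medians above (0.639-0.998) indicate highly variable compositions.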
The vaginal microbiota in HMb mice clusters into unique community states and minimally correlates with reproductive factors such as estrous stage

Vaginal microbial composition is implicated in birth outcomes in humans (47, 48) and fecundity in mice; reproductive success improves when germ-free mice become colonized with bacteria (49). To resolve whether HMb mice have altered reproductive capacity, reproductive performance data were compared between conventional C57BL/6J mice at Jackson Labs (50) and genetically matched conventional, germ-free, and HMb mouse colonies at BCM. The average age at first weaned litter in HMb mice (14.4 weeks) was intermediate between conventional BCM mice (12.6 weeks) and germ-free BCM mice (20 weeks) (Table 1). Additionally, HMb mice litter size (5.8 pups) was closest to that of conventional BCM mice (5.6 pups) compared to Jackson mice or BCM germ-free mice. HMb mice were most similar to BCM germ-free mice in terms of total number of litters and gestational interval. Together, the HMb mice data fall within the range observed between conventional and germ-free mice, implying that a humanized microbiota has minimal impact on reproductive performance. Reproductive differences between conventional BCM and Jackson mice were likely the result of colony management methods rather than biological divergence.
The human vaginal microbiota displays modest fluctuations in composition and stability over the course of the menstrual cycle, including increased alpha diversity and decreased Lactobacillus relative abundance during menses (51-56). To determine if the HMb mice vaginal microbiota is influenced by estrous stage, five HMb mice were swabbed daily for one week. Relative abundances of taxa changed daily, with each mouse displaying at least two different dominant taxa (>60% relative abundance) over the course of the week (Fig. 3A). Estrous stages were assigned by visualizing wet smears of vaginal samples as described previously (24) (Fig. 3B). To determine if estrous stage influenced vaginal microbiota composition, pairwise Bray-Curtis distances were calculated for vaginal samples (n = 34 mice, 1-7 samples/mouse) and grouped by the corresponding estrous stage at the time of sample collection (Fig. 3C). Although some clustering of similar communities was observed, these clusters were not associated with a particular estrous stage (Fig. 3C). Vaginal communities within each stage showed high dissimilarity, ranging from diestrus (BCmed = 0.93) to metestrus (BCmed = 0.993) (Fig. 3D). While richness did not differ between stages (Fig. 3E), alpha diversity was higher for mice in proestrus (Shannonmed = 2.13) and diestrus (Shannonmed = 2.08) compared to estrus (Shannonmed = 0.44) (Fig. 3F). Vaginal samples across cohorts were hierarchically clustered into murine community state types using Ward's linkage of Euclidean distances as done previously (23, 24).
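The dominance criterion above (a taxon exceeding a relative-abundance threshold) can be sketched as follows. This simple rule only illustrates the >60% dominance call used to describe daily samples; it is not the Ward's-linkage hierarchical clustering the authors actually used to define community state types:

```python
def dominant_taxon(rel_abundances, threshold=0.6):
    """Return the dominant taxon if one exceeds `threshold` relative
    abundance, else None. `rel_abundances` maps taxon -> fraction."""
    taxon, frac = max(rel_abundances.items(), key=lambda kv: kv[1])
    return taxon if frac > threshold else None

# A Lactobacillus-dominant sample versus an even ("heterogeneous") one:
dominant_taxon({"Lactobacillus": 0.8, "Enterococcus": 0.2})  # "Lactobacillus"
dominant_taxon({"A": 0.35, "B": 0.35, "C": 0.30})            # None
```
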
Because the dominant taxa differed from conventional mice, we designated HMb mice profiles as "humanized mCSTs" (hmCSTs). Two communities resembled mCSTs of conventional mice: hmCST II (Staphylococcus succinus-dominant) and hmCST IV (heterogeneous taxa with an even composition). However, hmCST I was Lactobacillus-dominant, hmCST III was Enterobacteriaceae-dominant, and hmCST V contained either Enterococcus-, Streptococcus-, or Lactobacillus-dominant communities (Fig. 4A). Proportions of each hmCST were not significantly different between estrous stages (Fig. 4B). ANCOM was performed between combined proestrus and estrus (increased estrogen) stages and combined metestrus and diestrus (decreased estrogen) stages, or between the transition stages diestrus and proestrus versus estrus and metestrus; neither comparison revealed significantly different taxa between stages. When plotted by Bray-Curtis distances, samples did not separate into discrete clusters based on estrous stage but did cluster by hmCST, which is driven largely by taxonomy (Fig. 4C). Notably, there were two separate Enterobacteriaceae-driven clusters (Fig. 4C).

HMb mice exhibit decreased uterine ascension of group B Streptococcus compared to conventional mice

To determine whether the distinct HMb mice vaginal microbiota impacted colonization by potential pathogens, we used an established colonization model of the neonatal pathogen group B Streptococcus (GBS) (57). GBS asymptomatically colonizes the maternal vaginal tract, but perinatal exposure during pregnancy or labor and delivery can cause severe disease including stillbirth or neonatal sepsis (58). Conventional C57BL/6J mice and HMb mice were vaginally inoculated with 10^7 CFU of GBS and swabbed daily over seven days (Fig.
5A). At early time points, HMb mice had similar or higher GBS colonization compared to conventional mice. At later time points, however, some HMb mice cleared GBS below the limit of detection, resulting in significantly lower vaginal GBS burdens than conventional mice at Day 7 (Fig. 5B). To assess GBS ascension, reproductive tract tissues were harvested at Day 3 and Day 7. Uterine GBS burdens were significantly lower in HMb mice compared to conventional mice at both time points, while vaginal and cervical GBS burdens were not different between groups (Fig. 5C-D). To determine if vaginal microbial profiles correlated with GBS burdens, HMb mice GBS burdens were replotted according to the hmCST assigned at Day 0, immediately prior to GBS inoculation. GBS vaginal burdens were not significantly different across hmCST groups at Day 2 or Day 7 (Fig. 5E-F). Furthermore, no significant differences in GBS uterine burdens from combined Day 3 and 7 samples were detected between hmCSTs (Fig. 5G). To determine whether specific vaginal taxa were associated with GBS uterine ascension, HMb mice were binned into two categories across both time points: those with no detectable GBS uterine CFU (GBS-) or those with detectable GBS uterine CFU (GBS+). Corresponding vaginal swab 16S sequences from all timepoints were then probed for differentially abundant taxa by ANCOM. Mice with no detectable uterine GBS exhibited an enrichment of Enterobacteriaceae, Acinetobacter, Pseudomonadaceae, Pseudomonas, Comamonadaceae, or Lactobacillus (Fig. 5H-M).

Lactobacillus murinus and E.
coli display discordant phenotypes towards GBS in competition assays in vitro and GBS vaginal colonization in vivo

To gain insight into mechanisms of differentially abundant taxa between groups, bacterial isolates were collected from mice with hmCST I and hmCST III communities as described in Methods. Two isolates, identified as L. murinus and E. coli by full-length 16S sequencing, were each cultured in MRS broth in competition with GBS at two timepoints across five different starting ratios. Minimal differences in competitive index were observed between L. murinus and GBS, with GBS displaying a significantly increased advantage at the 1:2 and 1:10 ratios at 3 h (P = 0.0102 and 0.0002, respectively), which was retained at the 18 h timepoint in the 1:10 condition (P = 0.0165) (Fig. 6A). To determine how coculture impacted growth of each organism, viable CFU of each organism in coculture was compared to CFU recovered from monoculture. L. murinus was minimally impacted (Fig. 6B); however, GBS growth at 18 h was impaired in the presence of L. murinus at all but the highest GBS starting inoculum (1:10 L. murinus to GBS) (Fig. 6C). Conversely, GBS demonstrated a strong competitive advantage in coculture with E. coli, which was significant in the 1:2 and 1:10 conditions at 3 h (P = 0.001 and <0.0001, respectively) and in all five ratios at the 18 h timepoint (P ≤ 0.0286) (Fig. 6D). Again, growth of each organism in coculture was compared to growth in monoculture. No differences were observed at the 3 h timepoint (Fig. 6E). At 18 h, E. coli growth was significantly impaired in the presence of GBS in all conditions (Fig. 6F). Raw viable CFU values for each organism are provided in Fig. S3. To validate ANCOM findings and determine whether pre-existing vaginal taxa could confer protection against GBS, L. murinus or E. coli were separately vaginally inoculated into HMb mice prior to GBS challenge (Fig. 7A). Despite decreased growth of GBS in the presence of L.
murinus in vitro, GBS vaginal colonization and dissemination into the upper reproductive tract was unaffected in vivo (Fig. 7B-C). Prophylactic inoculation with E. coli reduced GBS vaginal burden on Day 1 (2-log reduction) and Day 2 (3-log reduction) (Fig. 7D) but did not influence tissue burdens at Day 7 (Fig. 7E). To determine whether HMb mice were protected against multiple vaginal pathogens or selectively resistant to GBS, we challenged HMb mice with two additional vaginal pathobionts associated with vaginal dysbiosis: Prevotella bivia, which is increased in women diagnosed with bacterial vaginosis (59), and uropathogenic E. coli (UPEC), a causative agent of urinary tract infection, which can establish vaginal reservoirs (60) or cause aerobic vaginitis (61). Unlike GBS, there was no difference in vaginal swab P. bivia burdens between conventional and HMb mice at any timepoint (Fig. 8A). However, at Day 7, P. bivia burdens in cervical and uterine tissues, but not vaginal tissues, were significantly lower in HMb mice compared to conventional mice (Fig. 8B). In comparison, no differences in vaginal swab (Fig. 8C) or tissue (Fig. 8D) burdens were observed between HMb mice and conventional mice vaginally inoculated with UPEC.
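The competitive index used in the coculture assays above is conventionally the output ratio of the two organisms normalized to their input ratio. The authors do not spell out their exact formula, so this is a sketch of the standard definition:

```python
def competitive_index(cfu_out_a, cfu_out_b, cfu_in_a, cfu_in_b):
    """Competitive index (CI) of organism A versus organism B.

    CI = (output A/B ratio) / (input A/B ratio).
    CI > 1: A outcompeted B relative to the starting inoculum;
    CI < 1: A was outcompeted.
    """
    return (cfu_out_a / cfu_out_b) / (cfu_in_a / cfu_in_b)

# Illustration (hypothetical CFU values): GBS recovered at twice the
# competitor's CFU after starting at a 1:1 ratio gives CI = 2.0.
ci = competitive_index(2e8, 1e8, 5e6, 5e6)
```

Normalizing to the input ratio is what lets CI be compared across the five different starting ratios used in the assays.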
DISCUSSION

Despite strong clinical correlations between the vaginal microbiota and women's health outcomes, the ability to ascribe function to the microbiota in vaginal physiology, immunity, and susceptibility to disease is impaired by the lack of an animal model that recapitulates the human vaginal microbiota. Lactobacillus spp. dominate ~73% of human vaginal communities and comprise 70% of the total community abundance in humans (18) but are rare (<1%) in the vaginal communities of other mammals (62). Attempts to colonize animal models such as non-human primates (63) or laboratory mice (27-31) with human vaginal bacteria have failed to achieve long-term colonization. Key goals of this study were to examine the impact of host environment, microbial exposure, and estrous cycle on the vaginal microbiota on the same murine genetic background and to establish the influence of the vaginal microbiota on pathogen introduction in mice exposed to human microbes.

To our knowledge, this is the first comparison of the vaginal microbiota across mice of the same genetic background within and between vivaria. Similar to murine fecal communities (43, 64), we observed a strong influence of vivaria on vaginal microbiota composition in conventional C57BL/6J mice across three distinct facilities (Fig. 1). Moreover, we discovered high variability between vaginal compositions of mice within the same colony and effects of moving mice between rooms. Factors contributing to the microbial variation are likely a culmination of small differences in housing maintenance, diet, and bedding between facilities, or whether mice are differentially exposed to stressors such as noise, frequency of handling or transportation, or exposure to light (43, 64, 65). Congruent with our study, distinct changes to the vaginal microbiota have also been reported in wild field mice upon captivity (66).
The vaginal microbiota of HMb mice, an existing humanized gut-microbiota murine model, recapitulated the community structure seen in conventional mice, characterized by low alpha diversity and dominance by a single taxon in the majority of mice (Fig. 2). This finding suggests that host selective pressures drive the vaginal community towards a skewed dominance of a single organism independent of microbial exposure (23-25). We observed high variability between cohorts of HMb mice sampled at different times from the same colony, but this variability could still be grouped into consistent hmCSTs (Fig. 4), suggesting continuity of dominant microbes over time. Vaginal microbial fluctuation was not explained by seasonal changes or convergence of microbial compositions as reported previously (42, 67, 68) over the two-year sampling period of this study. Compared to conventional mice, HMb mice were enriched in colonization by Lactobacillus spp.; however, there remain several limitations to this model. There is likely some "conventionalization" of HMb mice, since we detected a S. succinus-dominant community (hmCST II), which is the most common community present in conventional C57BL/6J Jackson mice (23, 24). Additionally, HMb mice were frequently colonized by Lactobacillus-dominant communities (hmCST I and hmCST V) with OTUs mapping to L. murinus. While not a human-associated Lactobacillus sp., L. murinus has been isolated from the vaginal tract of wild mice (66) and from the gut of conventional C57BL/6J mice (69). Lastly, another frequently observed HMb mice community was dominated by Enterobacteriaceae, including E. coli (hmCST III). Although E. coli are reported in human vaginal samples, they are typically at low relative abundance in 12-27% of non-pregnant women and 14-33% of pregnant women (70-73).
In humans, the vaginal microbiota composition fluctuates over the menstrual cycle, likely due to steroid hormone-mediated changes in glycogen availability, a key nutrient source for lactobacilli and other microbes (62, 74, 75). Observations in other mammals are mixed; reproductive cycle-associated fluctuations have been reported in some studies of non-human primates, bovine species, and rats (76-80) but not in other non-human primates, horses, or mini-pigs (81-84). Like conventional C57BL/6J mice (24), we did not observe a strong influence of estrous stage on vaginal microbial compositions or individual taxa in HMb mice. Counter to human studies (51), we observed a modest, but significant, fluctuation in Shannon diversity index over the estrous cycle, with the lowest diversity occurring during estrus (Fig. 3F). An important caveat to consider is that we only qualitatively determined estrous stage based on cytology of vaginal washes and did not measure hormone levels. It is also possible that this study was underpowered to delineate distinct microbial signatures. Still, our findings are consistent with previous studies in conventional C57BL/6 mice demonstrating minimal impacts of estrous stage on the fecal (85) and vaginal (24) microbiomes.

Because the vaginal microbiota is believed to play an important role in protection against pathogens (86), we tested the impact of the altered vaginal microbiota of HMb mice on vaginal colonization by GBS, a leading cause of neonatal invasive disease and an agent of aerobic vaginitis (58). GBS colonization is correlated with specific taxa, including Staphylococcus spp., P. bivia, and E.
coli in non-pregnant women (72, 87), and GBS uterine ascension is correlated with Staphylococcus-dominant vaginal microbiota in conventional mice (24). GBS uterine ascension is a mechanism for pregnancy complications including chorioamnionitis, preterm birth, or stillbirth (88-90). Although there were minimal differences in GBS vaginal burdens between HMb mice and conventional mice, HMb mice consistently demonstrated lower uterine burdens and revealed six different vaginal taxa that were inversely correlated with detection of uterine GBS (Fig. 5). In vitro, L. murinus, but not E. coli, reduced GBS growth in coculture experiments, while exogenous treatment with E. coli, but not L. murinus, reduced GBS colonization in HMb mice in vivo (Fig. 6-7). The discordance between in vivo and in vitro findings could be explained by attenuation of L. murinus anti-GBS activity in vivo due to [...]

HMb mice were not consistently protected against vaginal pathobionts compared to conventional mice; HMb mice displayed reduced cervical and uterine burdens of P. bivia, but not UPEC (Fig. 8). P. bivia uterine burdens in conventional C57BL/6J mice (10^4-10^5 CFU/g) were comparable to previous studies (95), suggesting that HMb mice may actively suppress P. bivia uterine ascension or persistence. No differences in UPEC colonization were seen between HMb mice and conventional mice; tissue burdens were consistent with previous findings (60, 96, 97). Counter to other murine gut colonization models (98), exogenous UPEC did not appear to be negatively impacted by the frequent endogenous vaginal Enterobacteriaceae or E. coli found in HMb mice; however, we did not assess changes to the vaginal microbiome following UPEC inoculation, so it is unknown whether UPEC had any impact on endogenous Enterobacteriaceae, including E. coli. GBS and UPEC, but not P.
bivia, induce vaginal immune responses in conventional murine models (96, 99, 100), and an important limitation of our study is that we did not assess host immune responses to these pathobionts.

Our results reveal the plasticity of the mouse vaginal microbiota in response to environmental exposures, perhaps a more potent driver of variability than host genetics or biological factors such as estrus. Even so, open questions remain regarding the biologic factors driving rapid changes to the vaginal microbiota in mice. Although not an exact representation of the human vaginal microbiota, the HMb mouse model described here is enriched in Lactobacillus-dominant communities and demonstrates the importance of the vaginal microbiota in shaping outcomes of reproductive tract infections. Continued improvement of humanized mouse models will provide a pathway to establish the functional role of the vaginal microbiota in health and disease and serve as an improved preclinical model for microbe-based therapies.

Bacterial strains

GBS strain COH1 (ATCC BAA-1176) was grown in Todd-Hewitt Broth (THB) for at least 16 h at 37°C. Overnight cultures were diluted 1:10 in fresh THB and incubated at 37°C until mid-log phase (OD600nm = 0.4). A spontaneous streptomycin-resistant mutant of UPEC strain UTI89 (101) was generated by plating an overnight culture on Luria Broth (LB) agar containing 1,000 μg/mL Streptomycin. UPEC StrepR was grown overnight in LB with 1,000 μg/mL Streptomycin and washed twice with PBS prior to inoculation. Prevotella bivia StrepR (100) was grown anaerobically (<100 ppm oxygen) in a Coy anaerobic chamber maintained at 37°C. P. bivia was cultured in Tryptic Soy Broth (TSB) with 5% laked, defibrinated sheep blood for three days. E. coli and L. murinus were isolated from HMb mice and were grown anaerobically in MRS overnight or over two days, respectively.
Animals

Animal experiments were approved by the Baylor College of Medicine and University of California San Diego Institutional Animal Care and Use Committees and conducted under accepted veterinary standards. Mice were allowed to eat and drink ad libitum. Humanized microbiota mice (HMb mice) were generated and maintained as described previously (34). WT C57BL/6J female mice (#000664) were purchased directly from Jackson Labs or from C57BL/6J stocks bred at BCM and UCSD. Prior to bacterial infections, mice were acclimated for one week in the biohazard room. Mice ranged in age from 2-6 months.

Sample collection and estrous stage assignment

Vaginal swabs for 16S sequencing and estrous staging were conducted as described previously (57). Wet mounts of vaginal swab samples were observed under brightfield 100X magnification on an Echo Revolve microscope. Estrous stages were delineated by three independent researchers according to parameters described previously (102, 103) and assigned with a consensus of at least two researchers. Mice were sampled at a single time point (n = 2), every three days (n = 32), or daily (n = 5) over the span of seven days.

DNA extraction and 16S rRNA V4 amplicon sequencing

DNA from vaginal swabs was extracted using the Quick-DNA Fungal/Bacterial Microprep Kit (Zymo Research) following the manufacturer's instructions with two deviations: samples were homogenized for 15 minutes during lysis, and DNA was eluted in 20 µL of water. Amplification and sequencing of the V4 region of the 16S rRNA gene were carried out by the BCM Center for Metagenomics and Microbiome Research or the UCSD Institute for Genomic Medicine using the Illumina 16Sv4 and Illumina MiSeq v2 2x250bp protocols as described (23, 24). Sequences were joined, trimmed to 150-bp reads, and denoised with Deblur through the QIIME2 pipeline, version 2022.2 (104).
Vaginal samples of Jackson Labs mice from our previous work (24) were downloaded from EBI accession number PRJEB25733 and included in our present study (EBI accession number PRJEB58804) for Fig. 1 and Fig. S2. Since many of the samples were low biomass, DNA contaminants from sequencing reagents and kits had a substantial impact on the dataset and necessitated filtering of feature IDs as presented in Fig. S4. First, feature IDs that appeared in fewer than seven samples were removed. Second, negative controls that went through the entire pipeline, from DNA extraction to sequencing, were run through the R package decontam (106) (R version 4.2.0 (2022-04-22), "Vigorous Calisthenics"), which identified 35 feature IDs that were subsequently removed from the feature table. Lastly, the feature table was re-imported into QIIME2, where other abundant contaminants (Streptophyta, Geobacillus, Thermus, Phyllobacteriaceae, Bradyrhizobium, and P. veronii) were filtered out.

Reproductive parameters and data

Data for Jackson Labs were taken from the Handbook on Genetically Standardized JAX Mice (50). Data for the BCM C57BL/6J and HMb mouse colonies were sourced from colony managers. Ranges were not provided by other vivaria but could be determined from the HMb mouse breeding data.

Murine pathogen colonization models

Vaginal colonization studies were conducted as described previously (57, 95). For GBS and UPEC colonization experiments, mice were synchronized with 0.5 mg β-estradiol administered intraperitoneally 24 h prior to inoculation (Fig. 5A). For P. bivia colonization experiments, mice received β-estradiol 48 h and 24 h prior to inoculation. Mice were vaginally inoculated with 10 μL of GBS COH1 (10^7 CFU), UPEC StrepR (10^7 CFU), or P. bivia StrepR (10^6 CFU). Vaginal swabs were collected at the indicated timepoints, and tissues were harvested on day 7 as previously described (23). CHROMagar StrepB Select (DRG International Inc.)
agar plates were used to quantify recovered GBS (identified as pink/mauve colonies). CHROMagar Orientation plates were used to quantify recovered UPEC (identified as pink colonies). P. bivia was quantified on blood agar containing 1,000 μg/mL Streptomycin. For pre-treatment experiments, 0.5 mg β-estradiol was given on Day -5 and -2, and either L. murinus (10^6 CFU), E. coli (10^6 CFU), or MRS media were administered on Day -4 and -1 before GBS challenge (Fig. 7A). Swabs and tissues were collected as stated above.

In vitro competition assays

GBS and murine isolates of L. murinus and E. coli were grown anaerobically in MRS media as cocultures or monocultures at the indicated concentrations. Samples collected at 3 and 18 hours were plated aerobically on THB (L. murinus competition) or CHROMagar Orientation (E. coli competition) plates cultured at 37°C for two days.

Data Availability

Sequencing data used in this study are available in EBI under accession number PRJEB58804. Scripts are accessible at GitHub under the project "MouseVaginalMicrobiota-HMb_filtering_CST".
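The prevalence- and contaminant-based feature filtering described in the methods can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' QIIME2/decontam scripts: the feature-table layout and helper names are hypothetical, while the `MIN_SAMPLES = 7` threshold and the contaminant taxa list come from the text above.

```python
# Minimal sketch of the three feature-ID filtering steps described in the methods.
# Table layout and function names are illustrative, not the published pipeline.

MIN_SAMPLES = 7  # features seen in fewer than seven samples are removed
CONTAMINANT_TAXA = {"Streptophyta", "Geobacillus", "Thermus",
                    "Phyllobacteriaceae", "Bradyrhizobium", "Pseudomonas veronii"}

def filter_feature_table(table, taxonomy, decontam_ids):
    """table: {feature_id: {sample_id: count}};
    taxonomy: {feature_id: taxon}; decontam_ids: set of flagged feature IDs."""
    kept = {}
    for fid, counts in table.items():
        prevalence = sum(1 for c in counts.values() if c > 0)
        if prevalence < MIN_SAMPLES:                # step 1: low-prevalence filter
            continue
        if fid in decontam_ids:                     # step 2: decontam-flagged features
            continue
        if taxonomy.get(fid) in CONTAMINANT_TAXA:   # step 3: known contaminant taxa
            continue
        kept[fid] = counts
    return kept
```

In a real analysis the prevalence filter and taxon exclusion would run inside QIIME2 (`qiime feature-table filter-features`) and the flagged IDs would come from decontam in R; the sketch only mirrors the logic.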
Statistics

All data were collected from at least two independent experiments unless otherwise stated. Mean values from independent experiment replicates, or biological replicates, are represented by medians with interquartile ranges or box-and-whisker plots with Tukey's whiskers as indicated in the figure legends. Independent vaginal swab microbial communities, some taken at multiple timepoints from the same mouse, are represented by each symbol.

Mice from BCM and Jackson clustered more closely together compared to the UCSD cohort (Fig. 1B). Analysis of Composition of Microbiomes (ANCOM) identified Pseudomonadaceae and Pasteurellaceae (most abundant in UCSD mice) and Comamonadaceae (most abundant in BCM mice) as significantly different OTUs between vivaria.

The in vivo success of exogenous E. coli could be explained through outcompeting GBS for key nutrients or attachment to host surfaces, or the elicitation of an altered immune response. These studies highlight the complexity of host and microbial factors dictating GBS colonization success and may explain why some probiotics with potent anti-GBS activity in vitro have failed to reduce GBS vaginal colonization in clinical trials (91-94).

Figure 2.
HMb mouse vaginal microbiota contains distinct taxa compared to conventional mice, is dynamic within the colony, and is sensitive to changes in vivarium. Vaginal swabs were collected from separate cohorts of HMb mice over the course of two years. (A) Vaginal microbial compositions of distinct cohorts with the duration since previous sampling noted. Samples from Cohort 1 - Cohort 4 represent baseline vaginal swabs from unique mice (n = 10-27). Cohort 5 was swabbed at baseline (Cohort 5a) and sampled again one week after acclimation to the biohazard room (Cohort 5b). (B) Clustering of individual mouse cohorts according to Bray-Curtis distances. (C) Dissimilarity between consecutive cohorts. (D) Dissimilarity between samples from mice swabbed upon transfer from the breeding facility (n = 38) compared to those acclimated to and swabbed in the biohazard facility (n = 59). (E) ANCOM of taxa differentially abundant in the biohazard room (positive axis) and the breeding room (negative axis). Red points represent taxa found significant by ANCOM. Each column (A) or symbol (B) represents a unique mouse. Symbols (C, D) represent pairwise Bray-Curtis distances generated using PERMANOVA, with Tukey's boxplots displayed. Data for C and D were statistically analyzed by Kruskal-Wallis with Dunn's multiple comparisons test, and statistically significant P values are reported.

Figure 3.
Vaginal microbiota dynamics over different estrous stages in HMb mice. (A) Vaginal microbial compositions of five individual HMb mice swabbed daily over the course of a week. The initials of the estrous stage assignment at the time of sampling are denoted below each sample (P = proestrus, E = estrus, M = metestrus, D = diestrus). Only Mouse 1 and Mouse 2 were co-housed. (B) Representative microscopic images of vaginal wet smears collected at each stage of the estrous cycle. Wet smears from vaginal swab samples were visualized at 10X on a brightfield microscope. (C) Bray-Curtis distance matrix of vaginal swab samples colored by estrous stage (n = 119). Samples include 1-7 swabs per mouse (n = 34 mice). (D) Bray-Curtis dissimilarity of microbial compositions between samples categorized in the same estrous stage. (E) Observed OTUs and (F) Shannon diversity of samples grouped by estrous stage. Each column (A) represents a sample from Day 1 through Day 7. Symbols (D) represent pairwise Bray-Curtis distances. Open circles (E, F) represent individual mice outside Tukey's whiskers. Data were statistically analyzed by Kruskal-Wallis with Dunn's multiple comparisons test, and statistically significant P values are reported.

Figure 4. HMb mice exhibit distinct community state types that are not associated with specific estrous stages. Vaginal swabs were subjected to paired estrous staging and 16S rRNA sequencing. (A) Community state type categorization for HMb mice (n = 183 samples) hierarchically clustered into humanized murine CSTs (hmCSTs, upper bar) and the associated estrous stage (lower bar). HMb mice include mice (n = 34) from Fig.
3C and additional mice (n = 64) that were sampled for 16S sequencing but were not staged. (B) Prevalence of hmCSTs in each estrous stage. (C) Clustering of HMb mouse (n = 34) vaginal communities sampled over the course of a week according to estrous stage (left panel) and hmCST (right panel). Each symbol (C) represents a unique mouse. Data in (B) were analyzed by Chi-square analysis of fractions. No values were statistically significant.

Figure 5. Increased vaginal clearance of GBS and restriction of uterine ascension in HMb mice is not due to hmCST but may be attributed to individual taxa. (A) HMb mice and conventional (Conv) mice were vaginally inoculated with 10^7 CFU of GBS (n = 10-27). (B) GBS CFU recovered from daily vaginal swabs. Vaginal, cervical, and uterine GBS tissue burdens were collected at (C) Day 3 and (D) Day 7 post-inoculation. GBS CFU counts from (E) Day 2 swabs, (F) Day 7 swabs, and (G) Day 3 and 7 uterine tissues delineated by the hmCST assignment of the respective mice on Day 0 prior to GBS inoculation. Relative abundances of vaginal (H) Enterobacteriaceae, (I) Acinetobacter, (J) Pseudomonadaceae, (K) Pseudomonas, (L) Comamonadaceae, and (M) Lactobacillus across all vaginal swabs in mice grouped into detectable uterine GBS (GBS+) or no detectable uterine GBS (GBS-) at the time of tissue collection. Symbols represent individual mice. Data were statistically analyzed by Mann-Whitney test (A-C, G-L) or Kruskal-Wallis with Dunn's multiple comparisons test (D-F), and statistically significant P values are reported.

Figure 6. In vitro competition assays demonstrate GBS inhibition by L. murinus but not E. coli.

Figure 7.

Figure 8. HMb mice have reduced cervical and uterine tissue burdens of Prevotella bivia, but not uropathogenic E. coli, compared to conventional mice.

Table 1.
Reproductive parameters a for C57BL/6J mice housed in different facilities and colonized by different microbial communities (50).

Community clustering was visualized on PCoA plots. Pathogen burdens between conventional and HMb mice were assessed by Mann-Whitney test. Alpha- and beta-diversity metrics, and GBS burdens by hmCST, were analyzed by Kruskal-Wallis with Dunn's multiple comparisons test. Competitive indexes were statistically analyzed by one-sample t test with a theoretical mean of 1.0. Coculture and monoculture comparisons were performed using two-way ANOVA with Šídák's multiple comparisons test. hmCST frequencies across estrous stages were compared by Chi-square test. Statistical analyses were performed using GraphPad Prism, version 9.4.0. P values < 0.05 were considered statistically significant. Data for Jackson Labs were taken from The Jackson Laboratory Handbook on Genetically Standardized Mice, 6th edition (50).

a Mean values are reported.

This version of the preprint was posted February 9, 2023 (https://doi.org/10.1101/2023.02.09.527909) and was not certified by peer review. The copyright holder for this preprint is the author/funder. All rights reserved. No reuse allowed without permission.
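The competitive-index analysis named in the statistics section can be illustrated with a short sketch. This is not the authors' GraphPad workflow: the CFU counts are hypothetical, the strain labels are generic, and only the one-sample t statistic against a theoretical mean of 1.0 is computed (not the full p-value).

```python
import math

def competitive_index(a_out, b_out, a_in, b_in):
    """CI = (strain A / strain B output ratio) / (strain A / strain B input ratio).
    A CI near 1.0 means neither strain outcompeted the other."""
    return (a_out / b_out) / (a_in / b_in)

def one_sample_t(values, mu=1.0):
    """t statistic for H0: mean(values) == mu (sample standard deviation, n-1)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return (mean - mu) / math.sqrt(var / n)

# Hypothetical CFU counts from three coculture replicates (equal inocula)
cis = [competitive_index(2e6, 8e6, 1e6, 1e6),
       competitive_index(3e6, 9e6, 1e6, 1e6),
       competitive_index(2.5e6, 1e7, 1e6, 1e6)]
t = one_sample_t(cis, mu=1.0)  # strongly negative: CI sits well below 1.0
```

A significantly negative t (with an appropriate p-value from the t distribution at n-1 degrees of freedom) would indicate that strain A was outcompeted in coculture.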
miRNA-146a rs2910164 C>G polymorphism increased the risk of esophagogastric junction adenocarcinoma: a case-control study involving 2,740 participants

Purpose: The miRNA-146a rs2910164 C>G polymorphism may contribute to the development of cancer. However, the association between this polymorphism and the risk of esophagogastric junction adenocarcinoma (EGJA) remains unclear. In the present study, we carried out a case-control study to explore the potential relationship between the miRNA-146a rs2910164 C>G polymorphism and EGJA risk. Patients and methods: In total, 1,063 EGJA patients and 1,677 cancer-free controls were enrolled. The SNPscan™ genotyping assay, a patented technology, was used to genotype the miRNA-146a rs2910164 C>G polymorphism. Results: We found that the miRNA-146a rs2910164 C>G polymorphism was associated with a risk of developing EGJA (additive model: adjusted odds ratio (OR), 1.27; 95% CI, 1.07-1.51; P=0.006; homozygote model: adjusted OR, 1.31; 95% CI, 1.03-1.65; P=0.027; and dominant model: adjusted OR, 1.36; 95% CI, 1.15-1.60; P<0.001). After adjustment with the Bonferroni correction, these associations remained in the additive and dominant genetic models. In the subgroup analyses, after adjustment for sex, age, alcohol consumption, and smoking status, results of multiple logistic regression analysis indicated that the miRNA-146a rs2910164 C>G polymorphism increased the risk of EGJA in the male, female, <64 years old, ≥64 years old, never smoking, and never drinking subgroups. Conclusion: The current study highlights that the miRNA-146a rs2910164 C>G polymorphism increased the risk of EGJA in an eastern Chinese Han population.

Introduction

Gastric carcinoma (GC) is the second most commonly diagnosed cancer and the second leading cause of cancer-related death in China, 1 with an estimated 679,100 new GC cases and 498,000 related deaths in 2015.
1 Esophagogastric junction adenocarcinoma (EGJA) was proposed by Siewert in 1999 as a unique disease: EGJA is considered a special clinical malignancy, and its clinicopathologic characteristics and biologic behavior are quite different from those of GC. EGJA may be a multifactorial disease caused by a number of potential susceptibility factors, involving genetic predisposition, overweight, obesity, and environmental factors (eg, foods preserved by salting, smoking, drinking, and so on). The incidence and prevalence of EGJA have been increasing worldwide in recent decades, 2-4 most likely as a result of increases in the prevalence of overweight/obesity and of chronic gastroesophageal reflux disease. 5 The increase may also be related to the decreasing prevalence of Helicobacter pylori infection, which may be a protective factor for EGJA. 6 Although these factors may contribute to the etiology of EGJA, hereditary factors may also influence its incidence. As malignancy-related deaths can be decreased by controlling susceptibility factors, early diagnosis, and more effective treatment, the identification of new biomarkers may be beneficial for early detection and prevention of EGJA.

MicroRNAs (miRNAs) are a class of single-stranded noncoding RNA molecules (about 22 nucleotides long) found in plants, animals, and some viruses. 7 In general, miRNAs are similar to small interfering RNAs. The functions of miRNAs include RNA silencing and suppression of translation. 8 Previous studies suggested that miRNAs are implicated in a number of complex biologic processes (eg, cell differentiation, development, apoptosis, proliferation, and so on). 9-12 Accumulating evidence demonstrates that the expression of many vital genes may be regulated by miRNAs. 13-15 It was reported that most miRNAs act on cancer-related genomic areas, which might contribute to oncogenesis.
16 Recently, Shin and Chu reported that miRNAs might act as important biomarkers and therapeutic targets of GC. 17 Single-nucleotide polymorphisms (SNPs) are a common form of genetic variation occurring at specific positions in the genome. SNPs occur more frequently in noncoding regions than in coding regions. Results of previous investigations indicated that SNPs may influence susceptibility to human diseases. SNPs in miRNAs can influence both their expression and function, 18 which might, therefore, alter the risk of cancer. 19,20 In addition, several case-control studies and functional investigations reported that miRNA SNPs could affect GC susceptibility and that their influence was closely related to their role in miRNA expression. 21,22 Although some case-control studies indicate that the rs2910164 C>G polymorphism in miRNA-146a could influence the risk for gastric cancer, 23-25 the association between this polymorphism and the risk of EGJA remains unclear. To shed some light on this issue, we enrolled 2,740 participants to investigate the potential relationship between the miRNA-146a rs2910164 C>G polymorphism and EGJA susceptibility.

Materials and methods

Subjects

This hospital-based case-control study included 280 EGJA patients consecutively recruited between January 2014 and May 2016 from the Affiliated Union Hospital and the Affiliated Cancer Hospital of Fujian Medical University. An additional 783 EGJA patients were consecutively recruited from the Affiliated People's Hospital of Jiangsu University from January 2008 to November 2016. The EGJA patients were enrolled without any restriction of age. We defined EGJA as tumors that have their center within 5 cm proximal or distal of the anatomical cardia. 26 Siewert type I EGJA has its center within 1-5 cm proximal of the anatomical cardia.
In addition, Siewert type II and III EGJA have their centers within 1 cm proximal and 2 cm distal, and 2-5 cm distal, of the anatomical cardia, respectively. In the present study, all Siewert type II EGJA cases were diagnosed by gastroscopy and during surgery. All of the cases were recruited before their operation and were pathologically confirmed. EGJA cases who had received chemotherapy or radiotherapy or had a history of other malignancy were excluded. For comparison, 1,677 cancer-free subjects matched with the EGJA cases were recruited as controls. All subjects were unrelated. Each participant answered a questionnaire in a face-to-face interview. Experienced doctors collected information on demographic variables and risk factors. The related data are listed in Table 1. A written informed consent was signed by each participant. The study protocol was in accordance with the Declaration of Helsinki and was approved by the ethics committees of Jiangsu University (Zhenjiang, China) and Fujian Medical University (Fuzhou, China). Each participant donated a blood sample, which was anticoagulated with EDTA.

Selection of SNPs

To determine the potential relationship between miRNA SNPs and EGJA risk, we selected the miRNA-146a rs2910164 C>G polymorphism according to the literature; in previous studies, it was significantly associated with cancer, 27,28 type 2 diabetes, 29,30 autoimmune diseases, 31-33 and coronary artery disease. 34,35 The corresponding information about the miRNA-146a rs2910164 C>G polymorphism is presented in Table 2.

DNA extraction and genotyping

Genomic DNA was extracted from peripheral blood samples collected in EDTA test tubes using a DNA Purification Kit (Promega, Madison, WI, USA). The SNPscan™ genotyping assay (Genesky Biotechnologies Inc., Shanghai, China) was used to analyze the genotyping of the miRNA-146a rs2910164 C>G polymorphism.
In brief, a 150 ng DNA sample was heated to 98°C and held for 5 minutes. The ligation reaction was carried out in an ABI 2720 thermal cycler. Then, a 48-plex fluorescence polymerase chain reaction (PCR) was conducted. Capillary electrophoresis on an ABI 3730XL sequencer was used to analyze the PCR products. GeneMapper 4.1 software (Applied Biosystems, Foster City, CA, USA) was used to call the genotypes. For quality control, 4% of the genomic DNA samples, selected at random, were genotyped by different technicians, and the results were in full accord with the findings of the first assays.

Statistical analysis

The distribution of age was expressed as the mean ± SD. The age difference between EGJA patients and cancer-free controls was evaluated using Student's t-test. Differences in the distributions of age, sex, smoking and drinking status, and frequencies of miRNA-146a rs2910164 C>G genotypes between EGJA cases and controls were assessed using the χ²-test (for categorical variables). We used an online calculator (http://ihg.gsf.de/cgi-bin/hw/hwa1.pl) to assess Hardy-Weinberg equilibrium (HWE) in controls. The relationship between the miRNA-146a rs2910164 C>G polymorphism and susceptibility to EGJA was estimated by calculating crude and adjusted odds ratios (ORs) and 95% CIs. Adjustments were performed for age, sex, and smoking and drinking status using a multiple logistic regression model. A P-value <0.05 (two-sided) was accepted as statistically significant. All analyses were conducted with SAS 9.4 (SAS Institute, Cary, NC, USA). The Power and Sample Size Calculator (http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/PowerSampleSize) 36 was used to obtain the power value (α=0.05). We used a Bonferroni correction to adjust for multiple testing.
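The Hardy-Weinberg check performed on the controls can be illustrated with a short sketch. This is a minimal textbook computation, not the online calculator's implementation, and the example genotype counts are hypothetical.

```python
# Hardy-Weinberg equilibrium: 1-df chi-square statistic from genotype counts.
# Example counts are hypothetical, for illustration only.

def hwe_chi_square(n_cc, n_cg, n_gg):
    """Compare observed genotype counts to those expected under HWE."""
    n = n_cc + n_cg + n_gg
    p = (2 * n_cc + n_cg) / (2 * n)  # frequency of the C allele
    q = 1.0 - p                      # frequency of the G allele
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_cc, n_cg, n_gg)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical control-like counts: 645 CC, 789 CG, 243 GG
chi2 = hwe_chi_square(645, 789, 243)
# chi2 below 3.84 (the 5% critical value at 1 df) indicates no departure from HWE
```

With these counts the statistic is close to zero, so the genotype distribution is compatible with HWE at the 5% level.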
37,38

Baseline characteristics

A total of 2,740 individuals were enrolled in the present case-control study; of those, 1,677 subjects were healthy participants (controls), with a mean age of 63.91±10.22 years (Table 1). For the 1,063 EGJA patients, the mean age at diagnosis was 64.19±8.63 years (Table 1). The study was well matched by age and sex (P=0.451 and 0.909, respectively). The minor allele frequency of miRNA-146a rs2910164 C>G was 0.38 (Table 2). The success rate of genotyping was 99.09%. The genotype distribution of the miRNA-146a rs2910164 C>G polymorphism is shown in Table 3. The frequencies of miRNA-146a rs2910164 CC, CG, and GG were 38.47%, 47.07%, and 14.52% in control subjects, compared to 31.41%, 52.16%, and 16.43% in EGJA patients, respectively. We found that the frequencies of the miRNA-146a rs2910164 CG, GG, and G allele were slightly higher in the EGJA cases than in the control group (52.16% vs 47.07%, 16.43% vs 14.52%, and 42.51% vs 38.06%, respectively).

Association of miRNA-146a rs2910164 C>G polymorphism with EGJA

When compared to the frequency of the miRNA-146a rs2910164 CC genotype, the miRNA-146a rs2910164 CG genotype was associated with the risk of EGJA (crude OR=1.28; 95% CI, 1.08-1.52; P=0.005). When the miRNA-146a rs2910164 CC genotype was used as the reference, there was a significant difference in the frequency of the miRNA-146a rs2910164 GG genotype between EGJA cases and cancer-free controls (P=0.027). When compared to the miRNA-146a rs2910164 CC genotype, the miRNA-146a rs2910164 CG/GG genotypes were also associated with a significantly increased risk of EGJA (P<0.001). After adjustment for age, sex, smoking, and drinking, an increased risk of EGJA was also found in these genetic models (CG vs CC: P=0.006; GG vs CC: P=0.027; and GG/CG vs CC: P<0.001, Table 4).
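A crude odds ratio of the kind reported above comes from a 2×2 genotype table. The sketch below shows the standard Woolf (log) method for the point estimate and 95% CI; the counts are hypothetical illustration values, not the exact Table 3 data, so the output is not meant to reproduce the published OR of 1.28.

```python
import math

def odds_ratio_ci(case_exposed, case_ref, ctrl_exposed, ctrl_ref):
    """Crude odds ratio with a 95% CI via the Woolf (log) method."""
    or_ = (case_exposed * ctrl_ref) / (case_ref * ctrl_exposed)
    se = math.sqrt(1 / case_exposed + 1 / case_ref
                   + 1 / ctrl_exposed + 1 / ctrl_ref)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical CG-vs-CC counts (cases: 554 CG / 334 CC; controls: 789 CG / 645 CC)
or_, lo, hi = odds_ratio_ci(554, 334, 789, 645)
# A lower CI bound above 1.0 marks the association as statistically significant.
```

In the paper the adjusted ORs additionally condition on age, sex, smoking, and drinking via multiple logistic regression, which this crude calculation does not do.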
After adjustment for multiple comparisons (Bonferroni correction), the association of the miRNA-146a rs2910164 C>G polymorphism with EGJA risk was still found in the additive and dominant genetic models. For the miRNA-146a rs2910164 C>G polymorphism, the power value was 0.808 in the additive model, 0.604 in the homozygote model, and 0.960 in the dominant model.

Association of miRNA-146a rs2910164 C>G polymorphism with EGJA in different subgroups

In the subgroup analyses, the genotype frequencies of the miRNA-146a rs2910164 C>G polymorphism in different sex, age, alcohol consumption, and smoking subgroups are summarized in Table 5.

Association of miRNA-146a rs2910164 C>G polymorphism with lymph node status in EGJA patients

We found no statistically significant difference in the genotype distribution of the miRNA-146a rs2910164 C>G polymorphism by lymph node status (Table 6).

Discussion

As miRNA SNPs potentially affect miRNA biogenesis and change target selection, 39 increasing attention has been paid to the relationship of miRNA polymorphisms with cancer risk. To the best of our knowledge, this case-control study has the largest sample size used to determine the association between the miRNA-146a rs2910164 C>G polymorphism and risk of EGJA.

In our study, we established that the miRNA-146a rs2910164 C>G polymorphism significantly increased the risk of EGJA in the overall comparison. Furthermore, in the subgroup analyses, results of multiple logistic regression analysis suggested that the miRNA-146a rs2910164 C>G polymorphism increased the risk of EGJA in the male, female, <64 years, ≥64 years, never smoking, and never drinking subgroups. With the growing application of gene-related studies, 40-42 it is highly encouraged to assess the association between the miRNA-146a rs2910164 C>G polymorphism and cancer risk to obtain robust and replicable results.
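The Bonferroni adjustment used above is simple to state in code. This is a generic sketch: the number of tests and the p-values are hypothetical, not the actual Table 4 values.

```python
def bonferroni(p_values):
    """Bonferroni adjustment: multiply each p-value by the number of tests,
    capping the result at 1.0. An association survives correction if its
    adjusted p-value remains below the significance threshold (here 0.05)."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical p-values for three genetic models (additive, homozygote, dominant)
adjusted = bonferroni([0.006, 0.027, 0.0009])
```

Because the homozygote model's p-value sits close to 0.05, it is exactly the kind of result that can lose significance after a Bonferroni correction while the other models survive, which mirrors the pattern reported in the text.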
Considering the fact that most genetic variants usually have a low or moderate influence on cancer susceptibility, this case-control study emphasizes the necessity of large sample sizes to obtain a sufficiently precise estimate of the association between miRNA-146a rs2910164 C>G variants and cancer risk. 43,44 In addition, several meta-analyses indicated that the miRNA-146a rs2910164 C>G polymorphism also increased the risk of GC. 45-48 However, the association between this polymorphism and the risk of EGJA remains controversial. Xia et al reported that the miRNA-146a rs2910164 C>G polymorphism was not correlated with the development of gastric cardia adenocarcinoma, 49 while Okubo et al found it associated with an increased risk of GC in upper-third anatomic locations. 50 Considering that only two case-control studies, both with relatively small sample sizes, have focused on the relationship of this SNP with risk of EGJA, the results are still obscure. We recruited 2,740 participants to determine the potential relationship between the miRNA-146a rs2910164 C>G polymorphism and EGJA susceptibility, and we found that the C>G polymorphism increased the risk of overall EGJA susceptibility, which is very similar to previous studies in Asians. 23,51,52 However, the observed results should be interpreted with caution. An evident variation in the allele frequency of miRNA-146a rs2910164 G has been identified across different populations, ranging from 0.362 in Asians to 0.774 in Caucasians. 53 In the future, more case-control studies with larger sample sizes and detailed gene-environment factors should be performed to confirm or refute these associations. Some limitations of our study must be acknowledged. Firstly, only the miRNA-146a rs2910164 C>G polymorphism was included for exploring the association between this SNP and EGJA risk, and other SNP loci in the miRNA gene were not considered.
Secondly, because of the lack of sufficient EGJA samples, a replication study was not conducted. Thirdly, the relationship of the miRNA-146a rs2910164 C>G polymorphism with cancer subtypes or tumor stages was not analyzed. These limitations might decrease the validity of the results because some potential susceptibility factors were not well considered. Finally, the controls, enrolled in local hospitals, might not fully represent the whole Chinese population, and these possible biases may result in spurious findings.

In summary, the current study identifies an association between the miRNA-146a rs2910164 C>G polymorphism and EGJA risk in the eastern Chinese Han population. We have provided evidence for a potential cancer biomarker for early detection of EGJA in the Chinese Han population and potentially for other countries. Well-designed case-control studies are needed to validate these primary findings and to explore the potential interactions of gene-gene and gene-environment factors involving the miRNA-146a rs2910164 C>G polymorphism and EGJA.
Pathology of Renal Transplantation

Introduction

Renal transplantation has become the treatment of choice for patients with end-stage renal disease (ESRD) resulting from a variety of causes. The short-term patient and graft outcomes have improved markedly over recent years (Hariharan et al., 2000). Renal transplant recipients are subject to all the diseases that affect the general population. In addition, like all other allograft recipients, renal transplant recipients are susceptible to a variety of unique pathological lesions not seen in the non-transplant population. These lesions may involve the transplanted organ or other native organs/systems of the transplant recipients. The focus of this chapter will be on the major pathological processes affecting the kidney allograft itself that are diagnosed on renal allograft biopsy. We will present a brief but comprehensive overview of the pathology of the renal allograft seen on allograft biopsies, supplemented by representative pictures.
Role of renal allograft biopsy in the management of renal transplant patients The renal allograft biopsy plays an important role in the diagnosis and management of causes of renal allograft dysfunction (Al-Awwa et al., 1998; Colvin, 1996; Gaber, 1998; Mazzali et al., 1999; Matas et al., 1983; Matas et al., 1985; Parfrey et al., 1984). Regarding indications, a biopsy is always performed to answer a clinical question. The question is formulated by the transplant physicians with knowledge of the patient's clinical scenario, the results of relevant laboratory and imaging studies, and the response to any therapeutic measures already instituted to remedy the problem. The established indications for performing renal allograft biopsies are shown in Table 1. Protocol biopsies These are the renal allograft biopsies which are performed at pre-determined intervals after transplantation in normally functioning allografts. These biopsies have provided marked insights into the subclinical processes affecting the graft, with implications for the long term graft outcome (Choi et al., 2005; Jain et al., 2000; Rush et al., 1998; Serón et al., 1997). Indeed, the concept of the Banff classification of renal allograft pathology originated from the experience with the use of, and the publication of studies related to, protocol biopsies. However, these biopsies have been done at only a few centers in the world and are not universal. Causes of renal allograft dysfunction The causes of renal allograft dysfunction can be conveniently divided into two categories depending on the time after transplantation, early and delayed, and generally follow the same pattern of etiologic factors as observed in native kidneys: pre-renal, renal, and post-renal types. The causes of renal allograft dysfunction according to time after transplantation are shown in Table 2. Acute or subacute renal allograft dysfunction generally manifests in the form of a sudden rise of serum creatinine.
It is quite common and occurs in roughly half of all patients with kidney transplants. In the immediate post-transplant period, ischemic injury is the major cause, but acute rejection may occur during this period, especially acute antibody-mediated rejection (ABMR) in pre-sensitized recipients. However, the majority of acute rejections manifest after one week. Over the first month, the risk of rejection is high, and it gradually decreases over the ensuing few months. Acute rejection is rare after six months of transplantation. In contrast, acute ischemic injury can continue to occur at any time. Drug toxicity caused by calcineurin inhibitors (CNI) can occur at any time after transplantation and should always be in the differential diagnosis. Rarely, thrombotic microangiopathy (TMA) may occur, mainly caused by CNI toxicity, but it has many other causes (Bergstrand et al., 1985; Pascual et al., 1999). Chronic allograft dysfunction, in contrast, manifests as a slowly rising serum creatinine. It is often also accompanied by low grade proteinuria and hypertension as the post transplant duration increases. This chronic allograft loss occurs at a relatively constant rate of 2-4% per year and is the major cause of graft failure throughout the world. It is caused by a multitude of causes; both the allo-immune and the non-immune causes contribute to this process. Chronic CNI toxicity and hypertension are among the major etiologic factors leading to chronic graft loss. In addition, chronic obstruction, reflux, and hyperlipidemia are also contributing factors. As post transplant duration increases, the risk of recurrence of the original renal disease, or de novo occurrence of the same, also increases. More recently, chronic allo-immune injury has been identified as a major cause of chronic graft loss. An acute rise in serum creatinine may occur during the late post transplant period, and in most instances is caused by patients stopping their drugs.
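The quoted 2-4% per year rate of chronic graft loss compounds multiplicatively over time. As a purely illustrative calculation of ours (not from the chapter), assuming a constant annual rate:

```python
def graft_survival(annual_loss_rate, years):
    """Fraction of grafts still functioning after `years`,
    assuming a constant annual loss rate (an idealization)."""
    return (1.0 - annual_loss_rate) ** years

# With the chapter's quoted 2-4% per year range over ten years:
low_attrition = graft_survival(0.02, 10)   # about 0.82 of grafts survive
high_attrition = graft_survival(0.04, 10)  # about 0.66
```

The point is simply that a "relatively constant" small annual loss translates into a substantial cumulative loss over a decade.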
Similarly, a chronically failing allograft may show an apparent acute rise in serum creatinine, resulting from diminished functional reserve, and precipitated by some acute insult (John & Herzenberg, 2010). It is worth reiterating that the causes of renal allograft dysfunction vary depending on the induction protocol, maintenance immunosuppression, living vs. cadaveric organ source, and many other factors (D'Alessandro et al., 1995; Farnsworth et al., 1984; Matas et al., 2001; Mihatsch et al., 1985; Mishra et al., 2004; Ratnakar et al., 2002; Rizvi et al., 2011; Verma et al., 2007). Procurement of renal allograft biopsy Renal allograft biopsy procurement should follow the same methodology as the native renal biopsy, discussed previously in chapter 1, especially if ABMR is suspected or proteinuria is the clinical indication. The timing of obtaining the biopsy is also important, especially for dysfunctional graft biopsies. Ideally, the biopsy should be obtained before any attempt at treatment of the suspected rejection process. It should be planned as an elective procedure, and a technician from the histopathology department should be present in the biopsy suite to examine the removed tissue under the dissection microscope for the adequacy of the tissue removed and for apportioning the removed tissue for immunofluorescence (IF) and EM study, if the latter are required. This allows fulfillment of adequacy criteria for the proper histopathological evaluation of the biopsy material and complete pathologic evaluation, including IF study for complement fragment C4d and renal panel IF. Two cores of renal graft tissue including both cortex and medulla should be obtained. The sensitivity of rejection diagnosis increases with the increasing number of cores. The rejection process can be patchy and can be missed if only a single core is obtained. The sensitivity for rejection diagnosis is estimated to be around 90% with one core, and reaches 99% if two cores of renal cortex are obtained.
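The jump from roughly 90% sensitivity with one core to 99% with two is exactly what a simple independence model of a patchy process predicts: 1 - (1 - 0.9)^2 = 0.99. The sketch below is our own back-of-envelope illustration (the function name and the independence assumption are ours, not from the chapter):

```python
def detection_sensitivity(per_core_sensitivity, n_cores):
    """Probability that at least one of n cores captures a patchy
    rejection process, assuming cores sample independently (an
    idealization we introduce for illustration)."""
    miss_all = (1.0 - per_core_sensitivity) ** n_cores
    return 1.0 - miss_all

detection_sensitivity(0.90, 1)  # 0.90, matching the one-core figure
detection_sensitivity(0.90, 2)  # 0.99, matching the two-core figure
```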
The sensitivity for rejection diagnosis varies from 75 to 80% if medulla alone is received. The specificity of diagnosis of rejection in the medullary tissue is even lower, as other causes of graft dysfunction such as infection, obstruction, or drug hypersensitivity may present with infiltrates and even tubulitis in the medulla (John & Herzenberg, 2010). Preparation of the biopsy for evaluation After the adequacy criteria are fulfilled, the graft biopsy material should be prepared with great care and dexterity. The biopsy should be processed and prepared according to the guidelines for allograft biopsy handling by the most experienced technologists. The quality of biopsy material available for pathologic study is of utmost importance in the correct interpretation of the abnormalities in the tissue (Serón et al., 2008). Many centers process the biopsy by urgent methods, including microwave oven method (John & Herzenberg, 2010). We also process the allograft biopsies by the rapid method using autoprocessor and report the biopsies on the same day. The quality of reagents is also very important. According to Banff schema, it is recommended to prepare at least seven slides, with multiple sections mounted on each slide. Three of these should be stained with hematoxylin and eosin (H&E), three with periodic acid-Schiff reagent (PAS), and one with a Masson's trichrome stain. The PAS and/or silver stains are very useful in delineating tubular basement membranes (TBMs) and in defining the severity of tubulitis, and for evaluating glomerulitis. The PAS stain is also useful in the identification of arteriolar hyalinosis (ah) and tubular atrophy and their semi-quantitative scoring. Trichrome stains help in assessing the chronic sclerosing changes in the interstitium and in the arterial intima. 
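The Banff slide-preparation recommendation above (at least seven slides: three H&E, three PAS, one trichrome) is a small protocol that can be written down and sanity-checked like any configuration; the structure and names below are our own illustration, not part of the Banff documents:

```python
# Banff-recommended staining protocol as described in the text
# (dictionary structure and names are ours, for illustration only).
BANFF_SLIDE_PROTOCOL = {
    "H&E": 3,                # hematoxylin and eosin
    "PAS": 3,                # periodic acid-Schiff: TBMs, tubulitis, glomerulitis
    "Masson trichrome": 1,   # chronic sclerosing changes
}

def total_slides(protocol):
    """Total number of slides the protocol asks for."""
    return sum(protocol.values())

# The schema recommends at least seven slides in total:
assert total_slides(BANFF_SLIDE_PROTOCOL) >= 7
```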
Banff schema recommends cutting tissue sections at a thickness of 3 to 4 microns for an accurate semiquantitative assessment of the morphological lesions in the biopsy sections (Racusen et al., 1999). Pathologic evaluation of allograft biopsy The accurate pathologic evaluation of renal allograft biopsy requires a well trained renal pathologist with a thorough knowledge of renal transplant pathology, and also of renal and transplant medicine, in order to correlate the morphologic abnormalities with the detailed clinical information. The importance of correlating morphological findings on the renal allograft biopsy with clinical data, and of a close liaison between the nephrologists and pathologists, cannot be overemphasized. However, the biopsy should initially be examined by the pathologist without reference to the available clinical information, and a morphological diagnosis formulated. This morphological diagnosis should be an objective and unbiased record of all abnormalities seen under the microscope. An attempt should then be made to correlate the clinical details provided with the morphological changes, preferably following discussion with the clinicians. A final diagnosis is then made and any available treatment given. Further, in an ideal situation, a follow up on the patient's progress is also communicated to the pathologist so that the predictions made from the biopsy can be confirmed or corrected if possible. Renal allograft biopsy interpretation is therefore developed out of a discussion between a clinician and the renal pathologist and is a learning process for both, based on the patient's clinical course. In this context, it is worth emphasizing that transplant pathology is the youngest discipline of surgical pathology and is evolving rapidly (John & Herzenberg, 2010). Diagnosis of acute graft dysfunction Acute graft dysfunction may be caused by acute ischemic injury, acute rejection, or drug toxicity.
Rare causes include infections, surgical complications, vascular complications, or obstruction. Acute ischemic injury with delayed graft function (DGF) is more common in the cadaveric setting and is recognized by degenerative and regenerative changes in the tubular epithelium. Renal graft biopsy is the gold standard test to identify many of these lesions. However, it is invasive, and not without risks (Vidhun et al., 2003; Wilckzek, 1990). Renal allograft biopsies are of three major types according to their indications: time zero biopsies or implantation biopsies; dysfunctional graft biopsies; and protocol biopsies. Among these, the second category is obviously the most common type in most of the centers around the world. Many centers do not perform routine implantation or protocol biopsies. Diagnosis of acute rejection Renal allograft biopsy is the gold standard procedure for the diagnosis of acute rejection. Acute rejection was traditionally classified on the basis of rapidity and severity of the process, as hyperacute, accelerated acute, and acute rejection. The Banff classification tried to classify rejection on the basis of pathological and pathogenetic mechanisms, with considerable refinements in the classification over the past 20 years (Solez et al., 1993; Racusen et al., 1999; Racusen et al., 2003; Solez et al., 2007; Solez et al., 2008). More recently, the Banff classification has categorized acute rejection on pathogenetic mechanisms, as acute ABMR and acute T cell mediated rejection (TCMR). Each of these types of rejection has unique morphological, immunohistochemical, and clinical features and different responses to therapy. Acute TCMR is diagnosed on the concurrent fulfillment of two key thresholds: significant interstitial lymphocytic infiltration (i2) associated with significant tubulitis (t2). If only one of these features is present, the diagnosis is made of borderline rejection. The borderline category exists only in type I or TCMR.
Once a diagnosis of acute TCMR is made, its severity is assessed mainly on the basis of severity of tubulitis, as Type IA and IB. Acute TCMR may also manifest as varying degrees of arterial inflammation and necrosis. It most often causes intimal arteritis, but occasional cases may manifest as a V3 lesion. Often the vascular involvement is accompanied by tubulo-interstitial inflammation. Mechanisms of rejection Rejection is a complex and somewhat redundant response of the specific and innate immune systems to the allograft tissue. The major targets of this response are the major histocompatibility complex (MHC) antigens, which are known as human leukocyte antigens (HLAs) in humans. The HLA genes on the short arm of chromosome 6 encode two structurally distinct classes of cell-surface antigens, known as class I (HLA-A, -B, and -C) and class II (HLA-DR, -DQ, -DP). The T lymphocytes recognize allograft antigens by one of two mechanisms: direct and indirect allorecognition. In the direct pathway, T cells recognize intact allogeneic MHC molecules on the surface of allogeneic donor cells. The T-cell response that results in early acute TCMR is caused mainly by direct allorecognition. In the indirect pathway, T cells recognize processed alloantigens in the context of self antigen presenting cells (APCs). Indirect presentation may be important in maintaining and amplifying the rejection response, especially in chronic rejection. In both pathways, T lymphocytes recognize foreign antigen only when the antigen is associated with HLA molecules on the surface of APCs. Helper T lymphocytes (CD4) are activated and they proliferate, differentiate, and secrete a variety of cytokines.
These cytokines increase expression of HLA class II antigens on the allograft tissues, stimulate B lymphocytes to produce antibodies against the graft antigens, and help cytotoxic T cells (CD8), macrophages, and natural killer cells to develop effective specific and innate immunity against the graft (Nankivell & Alexander, 2010). Semiquantitative assessment of histological changes - The mainstay of Banff schema The semiquantitative scoring of the acute and chronic structural changes in different compartments of the graft parenchyma forms the mainstay for the Banff classification of renal allograft pathology (Solez et al., 1993; Racusen et al., 1999; Racusen et al., 2003; Solez et al., 2007; Solez et al., 2008). Altogether, five categories of acute and four of chronic changes are assessed. These are given in Table 3. The focus of acute rejection diagnosis in the Banff schema is on tubulitis and intimal arteritis. However, it is worth emphasizing that, with the exception of arteritis, there is no single specific feature of rejection. The diagnosis of rejection depends on the concurrence of interstitial inflammation of at least i2 (>25% to <50% of the unscarred parenchyma) and a tubulitis of grade t2 (4-10 lymphocytes invading the tubule), as shown in Figures 1 and 2. The tubulitis grading is carried out on the most severely involved tubule. Most difficulty is encountered in the diagnosis of Type I acute cellular rejection, i.e., the tubulo-interstitial type, especially during very early stages of the process. The process starts and builds gradually with interstitial accumulation of progressively increasing numbers of inflammatory cells, which later invade and attack the tubules. Thus, if the biopsy is done at a very early stage, tubulitis may not be found (Kazi et al., 1998). The rejection also begins as a patchy process, which in later stages becomes diffuse.
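The concurrence rule described above (interstitial inflammation of at least i2 together with tubulitis of at least t2 for acute TCMR; only one of the two for borderline rejection) can be sketched as a simple decision function. This is our own illustrative simplification of the Banff thresholds, not a clinical tool, and the function name and labels are ours:

```python
def banff_tcmr_category(i_score, t_score):
    """Illustrative simplification of the Banff concurrence rule for
    acute T cell mediated rejection (TCMR).

    i_score: interstitial inflammation grade (0-3)
    t_score: tubulitis grade (0-3), scored on the worst tubule
    """
    if i_score >= 2 and t_score >= 2:
        return "acute TCMR"          # both thresholds met concurrently
    if i_score >= 2 or t_score >= 2:
        return "borderline"          # only one of the two features present
    return "no TCMR by these criteria"
```

For example, `banff_tcmr_category(2, 1)` returns `"borderline"`, reflecting that significant inflammation without matching tubulitis does not meet the threshold for acute TCMR.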
The clearly defined thresholds of rejection diagnosis, especially for interstitial inflammation and tubulitis, have helped in improving the interobserver reproducibility of diagnosis (Furness et al., 1997). The rationale behind this threshold setting is that some inflammatory changes are to be expected in any allograft, but do not signal rejection. At the same time, this has resulted in lower sensitivity of diagnosis of very early acute TCMR. For this reason, various investigators have tried alternative approaches for increasing the sensitivity of diagnosis of early acute TCMR. One such approach involves the use of a computer program, known as a Bayesian Belief Network (BBN), to record and analyze multiple biopsy features in order to diagnose more accurately the cases of early acute rejection. In one study involving 21 difficult cases of early acute rejection, the use of the computer program resulted in more correct diagnoses than any of the pathologists using the Banff criteria (Kazi et al., 1998). Moreover, there are interinstitutional differences in the quality and quantity of inflammatory infiltrates of rejection (Furness & Taub, 2001; Kazi et al., 1999). In spite of these limitations, the Banff schema has become the international benchmark for the pathologic interpretation of renal allograft biopsies. Topics in Renal Biopsy and Pathology 168 The diagnosis of acute vascular rejection (AVR) is most often straightforward. Detection of even a single lymphocyte in the arterial intima (intimal arteritis) is sufficient to diagnose a case as AVR. The severity of rejection is also graded on the basis of V scores. AVR may be a manifestation of TCMR or antibody-mediated rejection (ABMR). The latter mechanism of rejection most often results in V3 lesions, while the former pathway causes V1 and V2 lesions (Figures 3 to 7). Significant tubulointerstitial inflammation and vasculitis may also be a manifestation of recurrent or de novo development of renal disease in the allograft.
A good pretransplant clinical history is highly valuable in resolving this differential, the occurrence of which increases with increased post-transplant duration. Antibody-mediated rejection (ABMR) Recently, more attention has been focused on antibody-mediated rejection (ABMR) as a common cause of graft loss, and it is increasingly being recognized as an important cause of both acute and chronic renal allograft injury (Mauiyyedi et al., 2001; Mauiyyedi et al., 2002). This has been made possible by the discovery and the widespread use of C4d as a marker of ABMR. The detailed diagnostic criteria and classification of ABMR have been developed during recent updates of the Banff classification. A category of C4d-negative ABMR has also been included in the Banff 07 classification. The definite diagnosis of ABMR requires fulfillment of three criteria: the histological evidence of graft injury, the immunohistochemical evidence of C4d positivity, and the presence of donor specific antibodies (DSA). If only two of these criteria are present, the case is labeled as presumptive ABMR. The pathological changes of ABMR may coexist with other categories of alloimmune or non-immune injuries of the graft (Racusen et al., 1999; Racusen et al., 2003; Solez et al., 2007; Solez et al., 2008). [Figure caption: artery with a small area of fibrinoid necrosis. This is consistent with a V3 lesion and is categorized as acute vascular rejection, Banff category III. Although this morphological change may be seen in acute cellular rejection, it is typically seen in cases of antibody mediated rejection (H&E, ×200).] A variety of morphological changes have been described which, although not entirely specific, are found more commonly in cases of ABMR. These changes include polymorphonuclear glomerulitis, peritubular capillaritis, fibrin thrombi in glomerular capillaries, and fibrinoid necrosis of arteries.
More recent Banff updates have formulated criteria for scoring peritubular capillaritis and C4d positivity. These are undergoing clinical validation studies in many transplant centers in the world (Racusen et al., 1999; Racusen et al., 2003; Solez et al., 2007; Solez et al., 2008). Calcineurin inhibitor (CNI) drug toxicity Calcineurin inhibitors (CNIs), including cyclosporine (CsA) and tacrolimus, form the mainstay of maintenance immunosuppression. The discovery of CsA in 1979 revolutionized iatrogenic immunosuppressive protocols and the overall success rate of solid organ transplantation. However, the drugs are also potentially nephrotoxic, causing both acute and chronic nephrotoxicity. Acute CNI toxicity is one of the important causes of acute graft dysfunction. It also frequently poses differential diagnostic problems with acute TCMR. Toxic effects of CsA have been studied in detail; however, the toxicity profile of tacrolimus is still being defined. Both the mechanism of action and the toxicity profile of the two drugs show overlapping features (Figures 8 to 10). Acute tubular injury (ATI) is the most common lesion, accompanied by isometric vacuolization of tubular epithelial cell cytoplasm. This change is observed in both the proximal and distal convoluted tubules, and focal coalescence of vacuoles may yield larger vacuoles. Both drugs are also associated with microvascular toxicity characterized by damage to glomerular capillaries and renal arterioles. Acute arteriolar damage manifests in a variety of ways: there may be endothelial cell swelling, mucinous intimal thickening, nodular hyalinosis, and focal medial necrosis. Marked vacuolization of the media of arterioles is also frequently observed (Figure 9). Sometimes, CNI toxicity manifests itself in the form of thrombotic microangiopathy (TMA). Chronic CNI toxicity results in nodular arteriolar hyalinosis, characterized by hyaline, eosinophilic deposits encroaching onto the media.
These deposits consist of fibrin, IgM, C3, and C1q. This nodular hyalinosis differs from the circumferential arteriolar hyalinosis limited to the intima and found in aging, hypertension, and diabetes mellitus. We have observed nodular arteriolar hyalinosis in CNI toxicity as early as one week after transplantation (unpublished data). Drug induced vasculopathy leads to ischemic injury accentuated in the medullary rays, leading to striped or diffuse interstitial fibrosis (Myers et al., 1984). Infections Acute bacterial infections result in a mixed inflammatory cell infiltrate in the interstitium with a predominance of neutrophils, associated with tubular microabscesses (Figures 11 and 12). The infiltrate is usually localized in the medulla but may be found in the cortex. Sometimes, the infection may not be picked up on urine culture (Imtiaz et al., 2000; Oguz et al., 2002). Among the viral infections affecting the graft, CMV and polyoma viruses are of paramount importance (Nickeleit et al., 1999). Posttransplant Lymphoproliferative Disorder (PTLD) Although rare, this disorder is an important differential diagnosis with acute cellular rejection, especially as the posttransplant duration increases. An early diagnosis of this complication is necessary for its successful management. Although the disorder typically occurs many months to years after transplantation, there are many examples of its occurrence during the early posttransplant period. On light microscopy, PTLD is characterized by a monomorphic or polymorphic lymphocytic infiltrate containing plasma cells, many of which are atypical. There is typically a diffuse interstitial infiltrate without associated tubulitis or arteritis; the latter features help in its differential diagnosis from rejection. Occasionally, the two processes may be concurrent. Immunophenotyping of lymphocytes helps in the definite diagnosis of this concurrence.
Acute Tubular Necrosis (ATN) Acute tubular injury (ATI) or ATN is a common finding in renal biopsies from transplanted kidneys, especially in the cadaveric setting. It is the main cause of primary nonfunction of the allograft in this setting. ATI results from a multitude of causes and situations, including in situ injury in the donor; ischemia during organ harvesting, storage, or transportation of the organ; and ischemic injury incurred perioperatively in the recipient. The morphological picture is similar to that seen in the native kidneys and spans the whole spectrum from mild injury, which is difficult to identify, to severe flattening and loss of tubular epithelium from the tubular basement membrane. These degenerative changes in the tubular epithelial cells are accompanied by signs of regeneration, including mitoses. There may be accompanying interstitial edema and mild mixed inflammatory cell infiltration. Tubulitis is typically absent or only trivial. Other changes include tubular cell vacuolization and blebbing, and tubular dilatation reflecting downstream tubular obstruction. There are also deposits of calcium salts in tubular lumina in the form of dystrophic calcification. There is a poor correlation between the morphological changes of ATN and the allograft function. Although the morphological lesions of ATI or ATN in the transplanted kidneys are similar to those of native kidneys, some authors have noted a few differences in the morphological profile. Acute Tubulointerstitial Nephritis (ATIN) Non-immune related ATIN may occur in the transplanted kidneys and may be very difficult to distinguish from tubulointerstitial rejection. The disorder may result from a variety of insults to the transplanted kidneys, such as infection, drug hypersensitivity, viral infection, etc.
A predominance of neutrophils in the mixed inflammatory cell infiltrate in the interstitium, especially if associated with tubular microabscesses or leucocyte casts, favors the possibility of infection. A predominance of eosinophils raises the possibility of drug hypersensitivity. Viral infections are accompanied by appropriate viral cytopathic effects in addition to the infiltrate. It may be reiterated here that neutrophils and eosinophils may also be seen in rejection, and sometimes the above lesions are superimposed on an underlying rejection reaction. Diagnosis of chronic allograft dysfunction As is evident in Table 2, the causes of late allograft dysfunction are more varied than those of acute allograft dysfunction. Late graft dysfunction may manifest as an acute rise in serum creatinine or a slowly increasing serum creatinine, and the causes vary accordingly. An advanced failing allograft may show an apparent acute decline of graft function due to diminished renal reserve, as in native kidneys. Renal allograft biopsy is essential to diagnose the causes of late allograft dysfunction. In the past, all cases of chronic allograft dysfunction were labeled as "chronic allograft nephropathy" by pathologists, a "paper wastebasket" for all forms of chronic allograft damage (Cornell & Colvin, 2005; Ivanyi et al., 2001; Nankivell et al., 2003). This was mainly because the morphological features of various diseases were not clearly defined, and because the features of the primary pathology are lost in advanced stages of the sclerosing process. The main morphological changes of specific causes of chronic allograft dysfunction are shown in Table 4. The diagnosis of interstitial fibrosis/tubular atrophy, not otherwise specified, is reserved only for those cases which show no evidence of specific causes after a detailed and meticulous investigation of the allograft biopsy by morphology, immunohistochemistry, electron microscopy, and molecular genetic methods.
Recurrent and de novo renal diseases There are many renal diseases, especially glomerular diseases, which can recur in the transplanted kidneys after a variable period of time (Hariharan, 2000). Currently, glomerular diseases account for approximately 10-20% of cases of ESRD undergoing transplantation, and overall approximately 20% of these patients experience recurrence. The same disease can also occur as de novo disease in the transplanted kidneys. Disease characteristics of the recurrent disease are similar to those of the original disease, but are usually mild in nature. This may be due in part to the use of immunosuppressive agents in the transplant patients. De novo diseases generally occur later than the recurrent diseases. Almost all diseases that occur in the native kidneys can occur de novo in transplant kidneys. However, the two most common diseases are membranous glomerulonephritis and focal segmental glomerulosclerosis. The work up of renal allograft biopsies in cases suspicious for recurrent or de novo glomerulopathies should follow the approach used in native renal biopsy investigation. One important non-glomerular disease that frequently recurs in transplanted kidneys is primary hyperoxaluria, if kidney transplantation is carried out without concomitant liver transplantation.
Table 4. The morphological features of specific causes of chronic allograft dysfunction, other than chronic allo-immune causes:
- Chronic hypertension: fibrous thickening of the arterial intima with reduplication of elastic lamina, and arteriolar hyalinosis.
- Chronic calcineurin inhibitor toxicity: nodular peripheral arteriolar hyalinosis, and striped interstitial fibrosis.
- Chronic obstruction: prominent tubular dilation, and ruptured tubules with extravasated casts.
- Chronic pyelonephritis: chronic interstitial inflammation and fibrosis, out of proportion to vascular or glomerular changes, in the context of a clinical history of recurrent urinary tract infections.
- Polyomavirus nephropathy: tubular epithelial viral infection evidenced by typical viral inclusions on H&E stain, or positive staining for SV40 large T antigen.
- De novo/recurrent renal diseases: morphological features of the respective diseases.
Conclusion In conclusion, renal transplant pathology is a complex and rapidly evolving field, in which significant improvements have taken place in recent years in both the characterization and categorization of allo-immune mechanisms of injury. More refinement is expected to take place in the near future with the inclusion of molecular genetic and image analysis techniques into the Banff classification.
Random analytic functions via Gaussian multiplicative chaos We define a random analytic function $\varphi$ on the unit disc by letting a Gaussian multiplicative chaos measure be one of its Clark measures. We show that $\varphi$ is almost surely a Blaschke product and we provide rather sharp estimates for the density of its zeroes. Introduction Given an analytic self map ϕ : D → D of the unit disc in the complex plane, for each α ∈ T := ∂D the measure ν_α := ν_{ϕ,α} is defined via the Herglotz representation

(1) Re (α + ϕ(z))/(α − ϕ(z)) = ∫_T (1 − |z|²)/|e^{iθ} − z|² dν_α(θ), z ∈ D.

The measure ν_α, or especially its singular part, describes how strongly and where on the boundary the function ϕ takes the value α; note that this can happen only at the boundary. The Clark measures ν_α thus obtained for α ∈ T have been much studied, especially in connection with applications to perturbative operator theory, including the spectral theory of the 1-dimensional Schrödinger equation. Clark measures were defined in [Cla72] and further studied by Aleksandrov [Ale87, Ale89, Ale94, Ale95, Ale96], so they are also called Aleksandrov measures. We refer to the reviews [PS06] and [Sak07] for the basic properties of this interesting family of measures. In the present paper we will employ the Clark measures to construct random analytic functions on the unit disc by defining ϕ : D → D as the analytic self map of D such that its Clark measure at the point α = 1 equals a given random measure ν. Especially, we will consider the choice

(2) ν_{ϕ,1} = µ_γ := "exp(γX(θ))", γ ∈ (0, √2].

Here γ ∈ (0, √2] is a fixed parameter, and "exp(γX(θ))" stands for the (random) Gaussian multiplicative chaos measure corresponding to a log-correlated Gaussian field X. [Footnote: Y.H. is partially supported by National Key R&D Program of China (No. 2022YFA1006300), and is grateful for the support from ERC Advanced Grant 741487 QFPROBA, as most of this work was carried out at the University of Helsinki. E.S. is supported by the Finnish Academy grant 309940. * We have to admit that there is a little delay from our part. . .] The most
natural choice for X on T is the so-called "canonical log-correlated field"

X_c(θ) = Σ_{n=1}^∞ n^{−1/2} (A_n cos(nθ) + B_n sin(nθ)), θ ∈ [0, 2π),

where the A_n, B_n are independent, identically distributed standard Gaussians. In general, log-correlated Gaussian fields X on T have the covariance structure

(3) E[X(θ_1)X(θ_2)] = log 1/|e^{iθ_1} − e^{iθ_2}| + g(θ_1, θ_2),

where g : T² → R is a (symmetric) continuous function. For the canonical log-correlated field X_c, one has g ≡ 0. Multiplicative chaos measures form a natural and important class of random positive measures that has re-appeared during the last 15 years in various important roles in statistical physics and other applications. We recall their proper definition and basic properties in Section 2, and we also recall the definition of critical chaos measures at the threshold γ = √2 when it is needed. Overall, we refer the reader to the review [RV14] for a general background. It is known that the measures µ_γ for γ ∈ (0, √2) and γ = √2 are almost surely purely singular with respect to the Lebesgue measure on T. The basic theory of Clark measures then implies that ϕ is almost surely an inner function, i.e. |ϕ(e^{iθ})| = 1 for almost every θ, see Proposition 11. Any inner function ϕ admits the decomposition (see [Gar07, Theorem 5.5])

ϕ(z) = c B(z) exp(−∫_T (e^{iθ} + z)/(e^{iθ} − z) dη(θ)),

where |c| = 1, B is a Blaschke product, and the measure dη is positive and singular to dθ. The harmless convention Im(ϕ(0)) = 0 will be used in this paper. Since we focus on the case α = 1, by the Herglotz representation formula,

(4) (1 + ϕ(z))/(1 − ϕ(z)) = ∫_T (e^{iθ} + z)/(e^{iθ} − z) dν_1(θ), z ∈ D.

Our first result shows that the singular part of our random function ϕ is almost surely trivial: Theorem 1. Let γ ∈ (0, √2] (where γ = √2 refers to the critical chaos) and let µ_γ be the chaos measure on T corresponding to a log-correlated field X with the covariance structure (3). Assume that g ∈ W^{s,2}(T²) for some s > 1. Then, the inner function ϕ defined via (4) with the choice ν_1 = µ_γ is almost surely a Blaschke product.
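As a concrete illustration (not part of the paper), the canonical field X_c can be approximated by truncating its Fourier series, and the subcritical chaos measure µ_γ is the limit of normalized exponentials of such truncations. The sketch below samples a truncated field and forms the corresponding weights exp(γX_N − (γ²/2) Var X_N); the function names and truncation level are our own choices.

```python
import math
import random

def sample_truncated_field(thetas, n_terms, rng):
    """Sample X_N(theta) = sum_{n<=N} n^{-1/2} (A_n cos(n theta) + B_n sin(n theta)),
    a truncation of the canonical log-correlated field on the circle."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n_terms)]
    b = [rng.gauss(0.0, 1.0) for _ in range(n_terms)]
    return [
        sum((a[n - 1] * math.cos(n * t) + b[n - 1] * math.sin(n * t)) / math.sqrt(n)
            for n in range(1, n_terms + 1))
        for t in thetas
    ]

def chaos_weights(field_values, gamma, n_terms):
    """Normalized weights exp(gamma X_N - gamma^2/2 Var X_N); in the
    subcritical regime these densities converge to mu_gamma as N grows.
    Var X_N(theta) = sum_{n<=N} 1/n, since cos^2 + sin^2 = 1 at each mode."""
    var = sum(1.0 / n for n in range(1, n_terms + 1))
    return [math.exp(gamma * x - 0.5 * gamma ** 2 * var) for x in field_values]

rng = random.Random(0)
xs = sample_truncated_field([0.0, 1.0, 2.0], n_terms=200, rng=rng)
ws = chaos_weights(xs, gamma=1.0, n_terms=200)  # strictly positive weights
```

The normalization by exp(−γ² Var/2) is the standard martingale normalization that makes each weight have unit expectation.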
This result can be thought of as a random analogue of a famous result of Frostman [Gar07, Theorem 6.4], which states that given an inner function f, for almost every a ∈ D the "Frostman shift" τ_a ∘ f is a pure Blaschke product. Here τ_a is the Möbius automorphism τ_a : D → D, τ_a(z) = (a − z)/(1 − āz). We also provide an estimate for the density of the zeroes of the random Blaschke product ϕ:

Theorem 2. Assume that g ∈ W^{2,2}(T²) in (3) and let z₁, z₂, . . . stand for the zeroes of ϕ. Then we have the following phase transition for the density of zeroes:

The above result is slightly surprising in the sense that the density of the zeroes goes up when the parameter γ of the chaos measure decreases. On the other hand, it is natural when one notes that, in a heuristic way, the support of µ_γ "decreases" in a sense when γ increases. One may ask what happens in the supercritical case, especially what the analogue of Theorem 2 looks like for γ > √2. One of the questions that we plan to study in the future is how the zero set behaves realization-wise as the parameter γ varies.

The present note is a part of our long-term project to study some aspects of random spectral theory via Clark measures. We thank Alexei Poltoratski for reawakening our interest in these questions, and we are especially grateful for discussions with Håkan Hedenmalm, who independently suggested to us (during a visit to the University of Helsinki a couple of years ago) that one should expect a result in the direction of Theorem 1 above.

Log-correlated Gaussian fields and Gaussian multiplicative chaos

In the following, we consider several random (generalized) functions defined on the unit disc D ⊂ R², on the unit circle T = ∂D, or on an interval of the real axis, say [−1/2, 1/2] ⊂ R. We will usually denote by X, Y, Z, . . . some log-correlated Gaussian fields. The parameter γ ∈ (0, √2) will be fixed and µ := µ_γ will denote the (subcritical) Gaussian multiplicative chaos measure with parameter γ.
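The Frostman shift mentioned above is easy to experiment with numerically. The sketch below is our own illustration; we write the automorphism with a conjugate, τ_a(z) = (a − z)/(1 − āz), which agrees with the formula in the text for real a. It checks that τ_a is a self-inverse automorphism of D mapping the circle T to itself:

```python
import numpy as np

# tau_a(z) = (a - z) / (1 - conj(a) z); the conjugate is our convention,
# consistent with the text's formula when a is real.
def tau(a, z):
    return (a - z) / (1.0 - np.conj(a) * z)

a = 0.3 + 0.4j                                  # an arbitrary point of D
z = np.array([0.1 + 0.2j, -0.5j, 0.7, 0.0])     # sample points of D

assert np.allclose(tau(a, tau(a, z)), z)        # tau_a is an involution
assert np.all(np.abs(tau(a, z)) < 1)            # it maps D into D

theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
boundary = np.exp(1j * theta)
assert np.allclose(np.abs(tau(a, boundary)), 1.0)   # it maps T onto T
```

Composing such a shift with an inner function again gives an inner function, which is the setting of Frostman's theorem.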
The critical case γ = √2 will be recalled separately in the proofs. Since this note is intended for readers from both probability and analysis backgrounds, we recall some definitions and basic facts about log-correlated Gaussian fields and Gaussian multiplicative chaos.

2.1. Log-correlated Gaussian fields on the unit circle. Consider the Gaussian field X_c on the unit circle T defined by the random Fourier series X_c(θ) = Σ_{n=1}^∞ n^{−1/2}(A_n cos(nθ) + B_n sin(nθ)), where the A_n, B_n are i.i.d. standard Gaussian variables. We refer to this particular log-correlated field as the canonical field (on T). Let us recall why the Gaussian field X_c(θ) is log-correlated; actually its covariance kernel writes in the simple form

(6) E[X_c(θ₁)X_c(θ₂)] = log 1/|e^{iθ₁} − e^{iθ₂}|.

Indeed, a direct calculation yields E[X_c(θ₁)X_c(θ₂)] = Σ_{n=1}^∞ n^{−1} cos(n(θ₁ − θ₂)). To check that this is the same as (6), write log 1/|1 − e^{iθ}| = Re Σ_{n=1}^∞ n^{−1} e^{inθ} and develop the Taylor series of the logarithm.

Due to the divergence of −log|e^{iθ} − e^{iθ′}| on the diagonal θ = θ′, the Gaussian field X_c cannot be defined as a random function, but only as a random distribution in the sense of Schwartz. Indeed, one verifies that the field X_c almost surely lives in the negative order Sobolev space W^{−s,2} for all s > 0. Recall that the Sobolev spaces W^{s,2} for s ∈ R are defined as W^{s,2}(T) = { f : Σ_{n∈Z} (1 + n²)^s |f̂(n)|² < ∞ }. Then W^{−s,2} and W^{s,2} are mutual duals under the natural pairing. For relevant properties of these classical function spaces W^{s,2} with s ∈ R in relation to Gaussian multiplicative chaos, we refer to [JSW19, Section 2.2].

We will consider more general log-correlated covariance kernels of the form K(θ₁, θ₂) = log 1/|e^{iθ₁} − e^{iθ₂}| + g(θ₁, θ₂) with g ∈ W^{s,2}(T²) for some s > 1. The Gaussian field X on T with kernel K is also called a log-correlated Gaussian field, and the above regularity remark applies also to X.

2.2. Gaussian multiplicative chaos on the unit circle. Fix γ ∈ (0, √2). The theory of Gaussian multiplicative measures developed by Kahane [Kah85], as well as its recent developments, takes care of rigorously exponentiating a log-correlated Gaussian field such as γX.

Definition 3 (Gaussian multiplicative chaos measures).
Let Y denote any log-correlated Gaussian field on T with covariance kernel K_Y(θ₁, θ₂) = log 1/|e^{iθ₁} − e^{iθ₂}| + g(θ₁, θ₂). Let Y_ε be a standard ε-mollification of Y using a compact and smooth test function. The Gaussian multiplicative chaos measure µ_Y on T, associated to Y with parameter γ ∈ (0, √2), is defined as the following limit taken in probability:

µ_Y(dθ) := lim_{ε→0} exp( γY_ε(θ) − (γ²/2)E[Y_ε(θ)²] ) dθ.

Here the convergence of the measures is in the sense of weak* convergence, i.e. for any continuous test function φ : T → R the integrals ∫_T φ dµ_{Y_ε} converge to ∫_T φ dµ_Y.

The Gaussian multiplicative chaos measure µ_Y is unique (i.e. does not depend on the mollification) and non-trivial for all γ² < 2 (we refer to this as the subcritical regime), and degenerate (i.e. almost surely zero everywhere) if γ² ≥ 2. The convergence in law or in probability for Gaussian multiplicative chaos measures (i.e. that the above definition makes sense) was established essentially in [Kah85], and later on the theory has been developed in many works, e.g. [BM03, RV10, Sha16, Ber17].

Remark 4 (1d-Gaussian multiplicative chaos measures). The more classical setting, completely similar to the above definition, defines the Gaussian multiplicative chaos measures on a compact subset of the real line R. In this note, we will mostly use the interval [−1/2, 1/2] ⊂ R. Both of them are 1d-Gaussian multiplicative chaos measures, as are the chaos measures defined on T.

We first record some important known properties of the Gaussian multiplicative chaos measures in one dimension, and we will provide references after stating the results.

Fact 5 (Existence of moments). Let Y be a log-correlated Gaussian field on T or [−1/2, 1/2] and µ_Y the associated Gaussian multiplicative chaos measure with γ ∈ (0, √2) as in Definition 3. Then, for all sets T ⊂ T or T ⊂ [−1/2, 1/2] with non-empty interior, E[µ_Y(T)^p] < ∞ if and only if p < 2/γ² (this includes finiteness of all negative moments). Especially one may note that in the subcritical regime γ² < 2, the first moment of the mass of a 1d-Gaussian multiplicative chaos measure always exists.
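Definition 3 can be imitated numerically. The sketch below makes two simplifying choices of our own: the regularization is truncation of the canonical Fourier series rather than mollification, and we only look at the renormalized density exp(γX_N(θ) − (γ²/2)E[X_N(θ)²]) at a fixed point, whose expectation equals 1 by construction:

```python
import numpy as np

# Truncated canonical field X_N and the renormalized chaos density at a
# fixed angle theta0. E[X_N(theta0)^2] = 1 + 1/2 + ... + 1/N.
rng = np.random.default_rng(0)
gamma, n_modes, n_samples, theta0 = 0.5, 16, 20_000, 0.7

n = np.arange(1, n_modes + 1)
A = rng.standard_normal((n_samples, n_modes))
B = rng.standard_normal((n_samples, n_modes))
X = A @ (np.cos(n * theta0) / np.sqrt(n)) + B @ (np.sin(n * theta0) / np.sqrt(n))

var = np.sum(1.0 / n)                       # E[X_N(theta0)^2]
density = np.exp(gamma * X - 0.5 * gamma**2 * var)

assert density.min() > 0                    # approximating measures are positive
assert abs(density.mean() - 1.0) < 0.1      # normalization: E[density] = 1
```

Martingale arguments show that such truncations converge to the same chaos measure in the subcritical regime, though the paper works with mollifications as in Definition 3.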
Fact 6 (Exact scaling log-correlated fields). Consider the log-correlated Gaussian field Z on the interval [−1/2, 1/2] with covariance kernel E[Z(x)Z(y)] = log 1/|x − y|. Then Z is (locally) translation invariant and exactly scale invariant: in particular, for all 0 < r < 1, the following Gaussian fields are equal in law:

(Z(rx))_x and (Z(x) + √(log(1/r)) N)_x,

where N is a standard Gaussian independent of the field Z. As a consequence, the following equality in law holds for the Gaussian multiplicative chaos measure µ_Z associated to the field Z with parameter γ ∈ (0, √2):

µ_Z(rA) = r^{1+γ²/2} e^{γ√(log(1/r)) N} µ_Z(A) in law.

We refer to the last identity as the "exact scaling" relation in the sequel.

Fact 7 (Kahane's convexity inequality). Let Y₁, Y₂ be two continuous centered Gaussian processes indexed by T or by any interval I ⊂ R, such that for all θ, θ′, E[Y₁(θ)Y₁(θ′)] ≤ E[Y₂(θ)Y₂(θ′)]. Then, for all convex functions F with at most polynomial growth at infinity and any positive finite measure σ on T or on I,

E[ F( ∫ e^{γY₁(θ) − (γ²/2)E[Y₁(θ)²]} dσ(θ) ) ] ≤ E[ F( ∫ e^{γY₂(θ) − (γ²/2)E[Y₂(θ)²]} dσ(θ) ) ].

To apply the last convexity inequality to log-correlated Gaussian fields, we systematically use the regularization procedure of Definition 3 to meet the continuity assumption: this is standard and we will not write out the regularization every time.

Remark 8 (Comparison of moments). For readers with a background in analysis, we now demonstrate a standard use of Kahane's convexity inequality, mainly for comparing moments of two different chaos measures, since we need to do this later on with careful book-keeping of the constants involved. Consider thus p < 2/γ² and the function F(x) = x^p for x > 0. Then either F or −F is convex. Consider a general log-correlated Gaussian field Y as in Definition 3 and an exact scaling log-correlated Gaussian field Z as in Fact 6.
Since the correction term g(θ, θ′) in the covariance function of the field Y is bounded in absolute value by a finite constant C, we can apply Kahane's inequality to the Gaussian fields Y(θ) and Z(θ) + √C N (with N a standard Gaussian variable independent of Y and Z) to conclude that there exists some positive constant K such that, for all positive functions P(e^{iθ}),

K^{−1} E[ ( ∫ P(e^{iθ}) dµ_Z(θ) )^p ] ≤ E[ ( ∫ P(e^{iθ}) dµ_Y(θ) )^p ] ≤ K E[ ( ∫ P(e^{iθ}) dµ_Z(θ) )^p ].

This relation tells us that, modulo a multiplicative constant, the moments of the positively weighted mass of a general Gaussian multiplicative chaos measure associated to any log-correlated Gaussian field Y scale similarly to the exact scaling case with the field Z. We will write the above estimate as E[(∫P dµ_Y)^p] ≍ E[(∫P dµ_Z)^p]. The constant in ≍ only depends on the parameter γ, the correction term g(e^{iθ}, e^{iθ′}), and p.

To summarize, when estimating the moment of a positively weighted mass of a Gaussian multiplicative chaos measure µ_Y, we can switch the underlying Gaussian field to an exact scaling log-correlated Gaussian field Z of Fact 6, up to a deterministic multiplicative constant.

When µ = µ_{X_c} is the Gaussian multiplicative chaos measure on T associated to X_c with the canonical covariance kernel in (6), the measure dµ(θ) is invariant under rotations of the unit circle. In fact, with this particular choice, the measure µ transforms nicely under any Möbius transformation of the unit disc D. Since we do not need this fact, as we study general log-correlated Gaussian fields, we leave it to the interested reader to consult [Var17] for the fascinating connections between Gaussian multiplicative chaos measures and Conformal Field Theory.

We record a simple auxiliary result.

Lemma 10. The chaos measure µ_γ is almost surely singular with respect to the Lebesgue measure on T.

It follows immediately that the analytic self-map ϕ : D → D (defined via (2) and (1)) is a.s. an inner function (Proposition 11), as alluded to in the introduction.

Proof. It is a classical fact [Sak07, Theorem 2.2] on Clark measures that ϕ is an inner function if and only if the Clark measure ν_{ϕ,1} is singular with respect to the Lebesgue measure. The claim follows from Lemma 10.
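For readers who want to see the Clark correspondence in a concrete case, the following sketch (an illustration of ours, assuming the standard Herglotz normalization h = (1 + ϕ)/(1 − ϕ) with Im ϕ(0) = 0) recovers ϕ from a point-mass Clark measure ν₁ = δ_{t₀}; a short computation shows this gives the rotation ϕ(z) = z e^{−it₀}:

```python
import numpy as np

# For nu_1 = delta_{t0}: h(z) = (e^{i t0} + z)/(e^{i t0} - z) and
# phi = (h - 1)/(h + 1) = z e^{-i t0}, an inner function whose Clark
# measure at alpha = 1 is exactly delta_{t0}.
def phi_from_point_mass(t0, z):
    h = (np.exp(1j * t0) + z) / (np.exp(1j * t0) - z)
    return (h - 1.0) / (h + 1.0)

t0 = 0.7
z = np.array([0.2 + 0.1j, -0.5j, 0.9])
assert np.allclose(phi_from_point_mass(t0, z), z * np.exp(-1j * t0))
assert np.all(np.abs(phi_from_point_mass(t0, z)) < 1)   # maps D into D
```

Replacing the point mass by a singular random measure such as µ_γ is exactly the construction studied in this paper, although there is of course no closed form for ϕ in that case.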
Blaschke product property for the canonical field

We find it instrumental to first provide a short and simple proof of the Blaschke product property (see Theorem 1) in the case of the canonical log-correlated Gaussian field X_c. In addition, it prepares the reader gently for the perturbation philosophy applied also in the more involved proofs in the sequel.

3.1. A Frostman type lemma. To prove the Blaschke product property, we first provide a variant of the classical Frostman lemma [Gar07, Theorem II.6.4] adapted to our situation.

Lemma 12 (A Frostman type lemma). Let ϕ be a random inner function¹ on the unit disc. Then ϕ is almost surely a Blaschke product if for some ε > 0 the condition (7) holds.

¹ Naturally we assume that the map (z, ω) → ϕ(z) is jointly measurable, where ω stands for an element in the probability space.

Since almost surely ϕ is an inner function, and hence a.s. its boundary values are unimodular a.e. at the boundary ∂D, we deduce by Fubini that for a.e. θ ∈ [0, 2π] it holds that a.s. lim_{r→1} log(1/|ϕ(re^{iθ})|) = 0. By our assumption, we may apply uniform integrability for the expectation in the definition of F(re^{iθ}) and deduce that lim_{r→1} F(re^{iθ}) = 0 for a.e. θ. Thereafter (note that F is bounded under our assumption) we may apply the dominated convergence theorem to deduce that lim_{r→1⁻} ∫₀^{2π} F(re^{iθ}) dθ = 0, whereafter Fubini's theorem and Fatou's lemma yield the claim. This finishes the proof.

Since Re(h(z)) ≥ 0, the quantity inside the expectation only blows up when h(z) is close to 1. It is then an elementary exercise to verify that the following condition suffices:

Corollary 13. Let ϕ be defined via Equation (4) with a random singular measure µ. Then ϕ is almost surely a Blaschke product if for some ε > 0 the corresponding condition on µ holds.

Remark 14. Since in the proof of Lemma 12 we used the assumption only to guarantee uniform integrability under the expectation, and at the same time boundedness of F, we could equally well replace the condition (7) by a correspondingly weaker one.

3.2. Proof of the Blaschke product property.
We now prove Theorem 1 for the canonical log-correlated Gaussian field X_c. Since the measure µ is invariant under rotations of T, it is enough to verify the condition at the points z = r ∈ (0, 1). Since for any p ∈ (0, 1) and ε > 0 an elementary bound takes care of the real part, it is sufficient to consider the imaginary part Im(h(r)) and show that, for some p ∈ (0, 1), the relevant moments stay bounded as r → 1. Notice that the imaginary part Im(h(r)) writes as a signed Gaussian multiplicative chaos integral.

The key idea is to take out a Fourier mode in the canonical field X_c and write the following independent decomposition of X_c:

X_c(θ) = A₁ cos(θ) + B₁ sin(θ) + X̃(θ).

It follows (where µ̃ is the Gaussian multiplicative chaos measure associated with the log-correlated Gaussian field X̃) that the derivative of Im(h(r)) with respect to B₁ has constant sign. Now if B₁ ≥ 0, by looking at the interval where sin(θ) ≥ 0, and similarly if B₁ ≤ 0, by looking at the interval where sin(θ) ≤ 0, the integral is bigger than

2γr · ∫ sin²(θ)/|r − e^{iθ}|² dµ̃(θ).

In any case, this lower bound holds for all B₁ ∈ R and r ∈ [1/2, 1). The above is a lower bound on the derivative of Im(h(r)) with respect to the Gaussian variable B₁, using the auxiliary field X̃. We now record an elementary observation.

To conclude, let ρ parametrize the Gaussian variable B₁ and let b be the above lower bound independent of B₁, and apply Lemma 15. Taking the expectation with respect to B₁ is equivalent to integrating over ρ, and we obtain a bound for the conditional expectation on B₁ with p ∈ (0, 1). Now taking the expectation with respect to all other Gaussian variables, the last quantity, independent of r, is finite for all p ∈ (0, 1) by Lemma 9: this finishes the proof.

On the density of zeroes: preparations

In this section, we prepare our study for the density problem, Theorem 2. Section 4.1 first develops a quantitative version of the Frostman-type Lemma 12, and provides a sufficient condition to Theorem 2.
It turns out that we again need a uniform estimate for the expectation of a suitable functional with a log-singularity, acting on the real part of the Poisson extension of the measure. In Section 4.2 we then get rid of the log-singularity using a perturbative method, whose idea is similar to the one illustrated in the previous section, although the situation here is much more complicated, since we need to extract two independent perturbations with suitable properties. Their existence is nontrivial since we are dealing with a general log-correlated field, and the technical details are postponed to Section 4.3. The upshot of Section 4 is Corollary 23 below, which yields a sufficient condition for the first part of Theorem 2 in the form of a moment estimate of a weighted auxiliary positive Gaussian multiplicative chaos measure.

4.1. Lemma on the density of the zeroes. The goal here is to provide a simple quantitative version of Lemma 12. In the sequel, the symbol ≲ means smaller or equal to within a multiplicative constant independent of r = |z|.

Proof. Assume first that ϕ is a finite Blaschke product. We set v(z) = (a − |z|²)^β with 0 < β < 1 and first let the auxiliary parameter a satisfy a > 1. Compute ∆v, where the implicit constants in ≍ are independent of a > 1. We note that v is smooth on D up to the boundary, and so is z → log(1/|ϕ(z)|) apart from its poles, and the latter function vanishes on the boundary. Hence we may apply Green's formula and the fact that ∆ log(1/|ϕ|) = −2π Σ_{k≥1} δ_{z_k}; here ⟨·,·⟩ stands for the R² inner product. By letting a ↓ 1⁺ we obtain the claimed estimate. The same relation is then obtained for a general Blaschke product by applying it first to a partial Blaschke product and using monotone convergence. Finally, the lemma follows by noting that in our setup the series Σ_{k≥1}(1 − |z_k|²)^β and Σ_{k≥1}(1 − |z_k|)^β converge simultaneously.

Corollary 17.
The first part of Theorem 2 holds if, for all ε > 0 and all r close enough to 1, a suitable bound holds for our random inner function ϕ.

To establish the first part of Theorem 2, we use a slightly weaker form of Corollary 17. In order to state it, let us denote by x(z) and y(z) respectively the real and imaginary parts of h(z), i.e. h(z) = x(z) + iy(z). We can write them explicitly in terms of the Poisson kernel and its conjugate:

x(z) = ∫_T P_z(θ) dµ(θ), y(z) = ∫_T Q_z(θ) dµ(θ).

Corollary 18 (Upper bound for the density problem). The first part of Theorem 2 holds if, for all ε > 0, the corresponding moment bound holds. This follows directly from Lemma 16 by rewriting the integrand in terms of x and y.

4.2. Control of the log-singularity via perturbation. We first establish an auxiliary result in the spirit of Lemma 15, factorizing out this time two independent Gaussian components. The idea is the same: by establishing a lower bound on the derivatives in certain directions, we can reduce the estimate with the log-singularity to a relatively classical moment estimate. Now that we parametrize the Gaussian perturbation in higher dimensions, the estimation becomes considerably more complicated and we need to study level sets of convex functions with suitable derivative bounds. The proof of the following proposition is technical and will be postponed to Section 4.3.

Proposition 19. Consider a log-correlated Gaussian field X on T with kernel K(θ₁, θ₂) = log 1/|e^{iθ₁} − e^{iθ₂}| + g(θ₁, θ₂) with g ∈ W^{s,2}(T²) for some s > 1. There exist two continuous real functions f₁, f₂ on T satisfying f₁(θ)² + f₂(θ)² > 0 for all θ ∈ T, and a decomposition of X into three independent Gaussian components,

X(θ) = V₁f₁(θ) + V₂f₂(θ) + X̃(θ),

with V₁, V₂ standard Gaussians independent of the residual field X̃. If we denote by µ and µ̃ the Gaussian multiplicative chaos measures associated to X and X̃, then for z ∈ D,

x(z) = ∫_T P_z(θ) e^{γ(V₁f₁(θ) + V₂f₂(θ)) − (γ²/2)(f₁(θ)² + f₂(θ)²)} dµ̃(θ),

where P_z(θ) is the Poisson kernel as before. Let y₁, y₂ parametrize the Gaussians V₁, V₂ and define the random function

u(y₁, y₂) := ∫_T P_z(θ) e^{γ(y₁f₁(θ) + y₂f₂(θ)) − (γ²/2)(f₁(θ)² + f₂(θ)²)} dµ̃(θ).

Especially, by the tower law we obtain an upper bound by the integral

∫_{R²} log( 1 + 4u(y₁, y₂)/(u(y₁, y₂) − 1)² ) e^{−γ²(y₁² + y₂²)/2} dy₁ dy₂.

The key to the estimation of the above integral will be the following analytic proposition.
(ii) The following integral is controlled by the value of u(0, 0). The constants c₁ and c₂ depend only on κ, K and γ.

Before proving this result we first check that the function defined via (8) satisfies the conditions of the above proposition.

Proof. Note first that (y₁, y₂) → exp(ay₁ + by₂) is convex, and sums (or integrals of a parametrized family) of convex functions stay convex, so u is convex. Next, we may estimate the Laplacian of u from below; the upper bounds for the derivatives follow in a similar way, and finally the positivity of u is evident.

We write hereafter y = (y₁, y₂) ∈ R² for simplicity.

Proof. By translation we may assume that y₀ = 0. The assumption yields that ∆(u(y) − A|y|²) ≥ 0 in B for any A ≤ κm_u/4. Thus the subharmonic function y → u(y) − A|y|² achieves its maximum at the boundary. In particular, for some y₁ ∈ ∂B we have u(y₁) − Ar² ≥ u(0), which yields the claim.

Intuitively, we are interested in the geometry around the level set u(y) = 1, where the singularity is present. All non-degenerate level sets are convex Jordan curves by the convexity of u, and the above lemma can be transformed into an upper bound on the distance between the level sets of 1 and 1 ± t.

Proof of Proposition 20. We first prove item (i). According to Lemma 22, our function u is convex, tends to infinity and is not constant in any non-empty open set. Denote m = inf_{z∈R²} u(z) ≥ 0. The sublevel sets Ω_t := {y : u(y) < t} for t > m are convex, non-empty and open sets, increasing in t, and the level sets S_t := {y : u(y) = t} satisfy ∂Ω_t = S_t. In turn, S_m is either empty, a point, or a closed line (segment). Let m < t < t₂ and assume that y₀ ∈ ∂Ω_t. By convexity, for a given r > 0, we may pick an open ball B of radius r/2 such that B ⊂ C \ Ω_t and y₀ ∈ ∂B. Lemma 22 implies that B ∩ ∂Ω_{t₂} ≠ ∅ as soon as t(1 + κr²/16) ≥ t₂.
Denoting by A_ε := {y ∈ C : d(y, A) ≤ ε} the ε-fattening of a given set A ⊂ C, we have thus shown that ∂Ω_t is contained in a suitable fattening of ∂Ω_{t₂}. We now estimate the size of the intersection (Ω_{t₂} \ Ω_{t₁}) ∩ Q. Assuming ε < 1/2 we obviously have an inclusion into Ω_{t₂} ∩ 2Q, where 2Q is the twice dilated cube with the same center as Q. The domain Ω_{t₂} ∩ 2Q is convex, and its boundary has length at most 8 (one can define a contraction by sending each point of ∂(2Q) to its nearest point in ∂(Ω_{t₂} ∩ 2Q), and this shortens the boundary length), and we deduce by (9) an area bound, which implies for t ∈ (0, 1/2) an estimate with the constant c₁ := 160(κ^{−1/2} + κ^{−1}). Previously we were implicitly assuming that m ≤ 1/2, but the analysis goes through with obvious changes when m ∈ (1/2, 3/2), and the case m ≥ 3/2 is not needed. Finally, this enables us to estimate the integral in item (i), separating the cases |u(y) − 1| > 1/2 and |u(y) − 1| ≤ 1/2.

For item (ii), let us first assume that u(0) =: a < c₂, where c₂ ≤ e^{−4K}/2 will be fixed later. Then we partition R² into squares Q_{m,n} := (m, n) + [0, 1)², where (m, n) ∈ Z², and apply the estimate of item (i) on each of these squares separately. We invoke the knowledge |∇u| ≤ Ku, which implies in polar coordinates |du/dr| ≤ Ku, and u(y) ≤ e^{K|y|}a. Especially u(y) ≤ 1/2 inside the ball B(0, log(1/2a)/K). When dealing with the second domain, let l ≥ log(1/2a)/K ≥ 4 be an integer. We apply part (i) on the squares Q_{m,n} with Q_{m,n} ∩ ∂B(0, l) ≠ ∅, and see that the integral over each such Q_{m,n} admits an exponentially small bound. Every square Q_{m,n} intersects some ∂B(0, l) with an integer l, and for each fixed l ≠ 0, the number of such squares is at most 250861l. Summing up we obtain the desired upper bound in the second case, where the first inequality holds as x → (x + √2)e^{−γ²x²/2} is decreasing on the range of integration, and the last one holds e.g. if −log(2a) ≥ 6 + 2K²/γ². All the needed constraints are clearly satisfied for a < c₂ < 1 by choosing c₂ small enough depending on K and γ.
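To see that the log-singularity handled above is harmless, note that log(1 + 4u/(u − 1)²) grows only like 2 log(1/|u − 1|) near the level u = 1, which is locally integrable. A crude one-variable midpoint-rule check (purely illustrative, not part of the proof):

```python
import numpy as np

# Midpoint rule for  I = int_{1/2}^{3/2} log(1 + 4u / (u - 1)^2) du,
# i.e. the singular factor integrated across the level u = 1.
n = 2_000_000
h = 1.0 / n
u = 0.5 + (np.arange(n) + 0.5) * h          # midpoints; u never equals 1
integral = float(np.sum(np.log(1.0 + 4.0 * u / (u - 1.0)**2)) * h)
assert 3.0 < integral < 8.0                  # finite despite the singularity
```

The two-dimensional integral in the proof carries the additional Gaussian weight e^{−γ²|y|²/2}, which only improves the convergence at infinity.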
In the case a ≥ c₂, it suffices to show that I is bounded by a constant independent of a. This easily follows from item (i) by again summing over all Q_{m,n} with (m, n) ∈ Z².

The upshot of this section is:

Corollary 23. The first part of Theorem 2 holds if, for all z ∈ D and ε > 0, the moment bound below holds, with the notation from the beginning of this section.

Proof. Combine Corollary 18, Equation (8) and item (ii) of Proposition 20.

4.3. Existence of a rank two independent Gaussian summand. We now prove Proposition 19 by showing that one may actually choose f₁ and f₂ to be two trigonometric polynomials. We start by considering suitable compact perturbations of the identity operator.

Lemma 24. Assume that T is compact and self-adjoint on a separable Hilbert space H and Id + T ≥ 0. Then there are vectors ϕ₁, . . . , ϕ_{l₀} with the following property: for a given vector u ∈ H there exists ε > 0 such that Id + T − εu ⊗ u ≥ 0 if and only if ⟨u, ϕ_j⟩ = 0 for j = 1, . . . , l₀.

Proof. Let {ϕ_n}_{n=1}^∞ be an orthonormal basis of H consisting of eigenvectors of T, ordered so that the eigenvectors that correspond to the eigenvalue −1 are listed first, and write the spectral decomposition. Here we used the compactness of the operator T to deduce that l₀ may be taken finite, and we also see that there is ε > 0 such that λ_j > ε − 1 for all j > l₀. Assume then that u ∈ H is normalized and satisfies the stated condition, so that u = Σ_{j>l₀} u_j ϕ_j with Σ_{j>l₀} u_j² = 1. If x ∈ H is arbitrary, x = Σ_{j=1}^∞ x_j ϕ_j, we may compute the relevant quadratic form. In turn, the necessity of the condition is seen by direct inspection.

The following example shows that it is not always enough to consider perturbations with one base element. In the context of Proposition 19 this means that choosing f₁ to be simply a trigonometric monomial will not always suffice.

Example 25. Let (e_j)_{j≥1} be an orthonormal basis of H, and set a := Σ_{j=1}^∞ 2^{−j/2} e_j. Then Id − a ⊗ a ≥ 0. However, for all n ≥ 1 and ε > 0, the operator Id − a ⊗ a − εe_n ⊗ e_n fails to be non-negative.

In our next auxiliary result, we generalize Lemma 24 to cover operators of the form C + A, where the compact operator C is assumed to dominate A in a suitable sense (i.e. A is compact with respect to C, see condition (10) below).

Lemma 26. Assume that C, A are compact (symmetric) and self-adjoint operators so that both C ≥ 0 and C + A ≥ 0. Assume also that the kernel M := Ker(C) is finite-dimensional, and denote by P the orthogonal projection onto M. We write C₁ := C + P and assume that (10) holds.

Proof. Simply note that if u is a linear combination of eigenvectors of C₁, then b = C₁^{1/2}u is also. We then apply the previous observations to our situation.

We still need to find two trigonometric polynomials f₁ and f₂ such that they both satisfy (11) and have no common zeroes. For this purpose, we first fix f₁ to be any nontrivial trigonometric polynomial satisfying (11). This can obviously be done by choosing the degree of f₁ to be at least l + 1. The polynomial f₁ has only finitely many zeroes. Using a perturbative argument and induction on the number of zeroes, it clearly suffices to find f₂ such that it does not vanish at a given point θ₀ and satisfies (11). For any sequence y = (y_n)_{n≥0} denote by P_N y the finite sequence P_N y := (y_n)_{0≤n≤N}. A trigonometric polynomial f₂ = Σ_{j=0}^N c_j e_j vanishes at θ₀ exactly when ⟨f₂, P_N g₀⟩ = 0, where g₀ := (1, sin(θ₀), cos(θ₀), sin(2θ₀), . . .). Thus the choice of f₂ as a trigonometric polynomial is impossible only if every admissible polynomial vanishes at θ₀. However, then an elementary argument implies that g₀ ∈ span{ψ₁, . . . , ψ_l}, which is impossible since g₀ ∉ ℓ². Now the positivity of both perturbed operators follows, and this completes the proof.

Proof of the upper bound on the density of zeroes

In this section we verify the sufficient condition in Corollary 23, which implies the first part of Theorem 2, and also automatically implies the general form of Theorem 1.
In our next auxiliary result, we generalize Lemma 24 to cover operators of form C + A, where the compact operator C is assumed to dominate A in a suitable sense (i.e. A is compact with respect to C, see condition (10) below). Lemma 26. Assume that C, A are compact (symmetric) and self-adjoint operators so that both C ≥ 0 and C + A ≥ 0. Assume also that the kernel M := Ker(C) is finite-dimensional, and denote by P the orthogonal projection onto M . We write C 1 := C + P and assume that Proof. Simply note that if u is a linear combination of eigenvectors of C 1 , then b = C 1/2 1 u is also. We then apply the previous observations to our situation. We still need to find two trigonometric polynomials f 1 and f 2 , such that they both satisfy (11) and without common zeroes. For this purpose, we first fix f 1 to be any nontrivial trigonometric polynomial satisfying (11). This can obviously be done by choosing the degree of f 1 to be at least l + 1. The polynomial f 1 has only finitely many zeroes. Using a perturbative argument and induction on the number of zeroes, it clearly suffices to find f 2 such that it does vanish in a given point θ 0 and it satisfies (11). For any sequence y = (y n ) n≥0 denote by P N y the finite sequence P N y := (y n ) 0≤n≤N . A trigonometric polynomial f 2 = N j=0 c j e j vanishes at θ 0 exactly when f 2 , P N g 0 = 0, where g 0 := (1, sin(θ 0 ), cos(θ 0 ), sin(2θ 0 ), . . . ). Thus the choice of f 2 as a trigonometric polynomial is not possible if and only if However, then an elementary argument implies that g 0 ∈ span {ψ 1 , . . . , ψ l }, which is impossible since g 0 / ∈ ℓ 2 . Now the positivity of both and this completes the proof. Proof of the upper bound on the density of zeroes In this section we verify the sufficient condition in Corollary 23, which implies the first part of Theorem 2, and it also automatically implies the general form of Theorem 1. 
Since for all p ∈ (0, 1), it suffices to establish a p-moment bound on x(z) with p < 1. We only need the fact that x(z) is generated by a general log-correlated Gaussian field in the rest of the proof, so some notations are deliberately not strict (the proofs work for any Gaussian multiplicative chaos measures). In view of this and (13) we may estimate (use q < 2p for p ∈ (0, 1)) which yields the claim by the definition of q. Proof. This follows simply by choosing p = 1/2 in Lemma 29. This completes the proof for the first part of Theorem 2 in view of (12). We end this section by presenting an alternative to Corollary 30 that is based on the following known result: Lemma 31. Let Y be a log-correlated Gaussian field on the unit circle T and µ Y the associated Gaussian multiplicative chaos measure with parameter γ ∈ (0, For the upper bound we proceed as in (14) and estimate the action of Poisson kernel at z ′ := re iθ ′ by convex combinations of the averages A µ,2 k (1−r) (θ ′ ), where k = 0, 1, . . . satisfies 2 k (1 − r) ≤ 1. Using additonally the fact that x is positive harmonic function, Harnack's inequality allows us to estimate c ′ By Kahane's convexity inequality (see Fact 7 and the discussions thereafter), we can trade Y , restricted to I, for the exact scaling log-correlated fieldỸ with covariance kernel K(θ, θ ′ ) = log 1 |θ−θ ′ | , θ, θ ′ ∈ [− π 8 , π 8 ]. In order to use the good scaling properties of the field Y , consider a dyadic tiling of the interval [−π/8, π/8] by denoting Q k := [−2 −k π/8, 2 −k π/8], and R k := Q k \ Q k+1 for k ≥ 0. We recall that the exact scaling property states that for all 0 < r < 1, in law, (Ỹ (x + ry)) y∈J = (Ỹ (x + y)) y∈J + − log rN for any interval J ⊂ R with |J| < 1, where N is a standard Gaussian independent of
Co-Existence of Sickle Cell Disease with Rheumatic Heart Disease

Sickle Cell Disease (SCD) is common in some parts of India, especially areas inhabited by the Dravidian tribal population. Co-existence of Sickle Cell Disease with Rheumatic Heart Disease is not uncommon. The clinical findings in sickle cell anemia closely simulate those of rheumatic heart disease. This poses an important diagnostic challenge for clinicians. The availability of 2-D Echocardiography has made it simpler to diagnose rheumatic heart disease in patients with sickle cell anemia whenever a patient presents with mixed symptoms and signs. We hereby report a case of Sickle Cell Disease with Rheumatic Heart Disease.

An eighteen-year-old female, a native of the State of Chhattisgarh, India, presented to the outpatient clinic with chief complaints of pain in the entire back and shortness of breath for 2 days. The patient had been apparently well 2 days earlier, when she developed pain in the left shoulder gradually involving the whole back, squeezing in nature. She consulted a general practitioner, who advised pain killers; however, the pain was not relieved. As the pain persisted, she was hospitalized. The day after admission, the patient reported left elbow pain, gradual in onset and associated with swelling over the left elbow joint, which was relieved with medication. The patient developed jaundice and yellow discoloration of urine a day after admission. Past history revealed episodes of elbow and ankle joint swelling with pain, and chest and back pain, on and off for the past 10 years. She also gave a history of sore throat on and off for the last 8 years, and of bleeding from the nose on and off for the past 2 years. Family history revealed both parents to be healthy. The patient has four siblings. A younger brother had similar episodes of joint pain, body ache and jaundice, and died at the age of 7 years due to a "weak liver", as informed by the treating doctor.
On clinical examination, she was conscious, oriented, and well-hydrated. She was of thin build and had pallor, and an icteric tint was present in the conjunctiva. No clubbing or lymphadenopathy was found. Her JVP was normal, her pulse was 92 beats/min, and her blood pressure in the right arm was 110/70 mm Hg.

Cardiovascular examination revealed a normal precordium on inspection. The apex beat was present at the 5th intercostal space at the anterior clavicular line. There were no visible precordial pulsations. On palpation, the apex beat was of tapping type, a mild precordial heave was present, and no epigastric pulsations were noted. No palpable thrills were found. On percussion, there was normal cardiac dullness. On auscultation, the first heart sound (S1) was loud. There was a grade 3/6 mid-diastolic murmur at the apex, localized with no radiation, and a grade 2/6 mid-systolic murmur at the apex, non-radiating. There was a mid-systolic click, and the pulmonic component of the second heart sound (P2) was loud.

Keywords: sickle cell disease; rheumatic heart disease; 2-D echocardiography; anemia; cardiovascular examination

On abdominal examination, the liver was palpable 3 cm below the costal margin and tender. Splenomegaly was present. Respiratory system and neurological examinations were normal.
In view of the above-mentioned clinical features, a diagnosis of rheumatic heart disease with anemia was made, and the patient was investigated for the same. Hematological examination revealed Hb 8 (Fig. 3).

A final diagnosis of rheumatic heart disease with sickle cell anemia was made. The patient was treated with hydroxyurea and folate supplementation, besides a blood transfusion. She improved clinically and was discharged with advice to continue the above-mentioned treatment along with penicillin prophylaxis (1.2 lacs/month) for RHD, and to remain under regular follow-up.

Discussion

Sickle cell disease is particularly common among people whose ancestors come from sub-Saharan Africa, South America, Cuba, Central America, Saudi Arabia, India, Turkey, Greece, and Italy.

Indian Prevalence

First described in the Nilgiri Hills of northern Tamil Nadu in 1952, the sickle cell gene is now known to be widespread among people of the Deccan Plateau of Central India, with a smaller focus in the north of Kerala and Tamil Nadu. The Anthropological Survey of India has documented the distribution and frequency of the sickle cell trait, which reaches levels as high as 35% in some communities, especially in the tribal population (1, 2).

Haemoglobin S results from a single base mutation in the beta chain of haemoglobin: the adenine base is replaced by thymine, so glutamic acid is replaced by valine. The sickling test and haemoglobin electrophoresis help in diagnosis and differentiate the homozygous from the heterozygous types.
Sickle cell diseases (SCDs) are severe and chronic inflammatory processes on the vascular endothelium, terminating with end-organ insufficiencies in the early years of life. Haemoglobin S (HbS) causes loss of elasticity and of the biconcave disc-shaped structure of red blood cells (RBCs). Probably loss of elasticity, rather than shape, is the main problem, since sickling is rare in peripheral blood samples of SCD patients with associated thalassemia minor, and human survival is not so affected in hereditary spherocytosis or elliptocytosis. Loss of elasticity is present during the whole lifespan but is exaggerated by various stresses on the body. The hard RBCs induce severe and chronic vascular endothelial damage, inflammation, edema, and fibrosis, terminating in tissue hypoxia all over the body (3, 4). Capillary systems may mainly be involved in the process due to their distribution function for the hard bodies.

RHD is caused by an autoimmune reaction against Group A β-hemolytic streptococci. The majority of morbidity and mortality associated with rheumatic fever is caused by its destructive effects on cardiac valves. It is characterized by repeated inflammation with fibrinous repair. Fibrosis and scarring of valve leaflets, commissures, and cusps lead to abnormalities that can result in valvular stenosis or regurgitation. The valvular endothelium is a prominent site of lymphocyte-induced damage.

Moderate to severe anemias, auto-splenectomy, frequent painful crises, hospitalizations, invasive procedures, RBC transfusions, and a suppressed mood of the body may just be some of the possible reasons for immunosuppression in the SCDs (27-29). As a result, a significantly higher prevalence of RHD due to repeated bacterial infections is not an uncommon finding in the SCDs.
The confusing resemblance between the symptoms of sickle cell anaemia and those of rheumatic heart disease is well known. Yater and Hansmann have pointed out that the diagnosis of rheumatic heart disease in cases of sickle cell anaemia has been made many times from the history of joint pains, the presence of an enlarged heart, a systolic mitral or precordial murmur, and hepatomegaly. Other common symptoms are leg pain, ankle edema, pallor, and dyspnoea. Oftentimes organic heart disease is diagnosed clinically, only to find at autopsy that all changes noted are compatible with a severe anaemia. Rheumatic heart disease and sickle cell anaemia are frequently diagnosed in the same patient. The differentiation of sickle cell anaemia from rheumatic heart disease with mitral stenosis may be impossible on clinical grounds alone (Bland, White and Jones; McKusick) (4).

Klinefelter has shown that the clinical findings in sickle cell anaemia could closely simulate those of rheumatic heart disease, but noted that specific lesions have never been demonstrated at autopsy. Hansman emphasized the importance of being extremely cautious in making a non-anaemia diagnosis of heart disease when studying a patient with sickle cell anaemia. The clinical discussion as to whether sickle cell anaemia alone is present, or whether there is an accompanying rheumatic heart lesion, is made even more complex by Cooley's statement: "Organic heart disease is about as common here (sicklemia) as in any group of children subject to tonsillitis as these children often are."
A very recent study by Mehmet Rami Helvaci et al. in 428 patients with the SCDs (208 females) and 2,855 controls (1,620 females) found RHD in just 0.3% of controls (eight females and one male), whereas the figure was 6.5% (13 females and 15 males) in the SCDs (p<0.001). The mean ages at RHD were 48.2 and 32.2 years in the control and SCD groups, respectively. The mitral valve was involved in 58.8%, the aortic valve in 32.3%, and the tricuspid valve in 8.8% of cases with the SCDs. Interestingly, the tricuspid valve was never involved alone, but always together with the mitral valve (5).

Conclusion

Many of the clinical symptoms and signs of sickle cell disease and rheumatic heart disease are similar in nature. Differentiating these two conditions on clinical examination alone is a significant challenge for clinicians. However, meticulous history taking, especially probing the geographic origin of the patient and the family history, helps in getting clues about the possibility of sickle cell disease in a given patient. Echocardiography remains the gold standard for diagnosing concomitant rheumatic heart disease in a patient with sickle cell disease.

Fig. 1, Fig. 2: 2-D ECHO was suggestive of rheumatic heart disease. The anterior mitral leaflet (AML) was moderately thickened and showed diastolic doming; the posterior mitral leaflet (PML) showed restricted motion, with a mitral valve area of 2.1 cm². There was moderate mitral regurgitation (eccentric jet) and tricuspid regurgitation. Pulmonary artery systolic pressure was around 40 mm Hg with good right and left ventricular function. There was no pericardial effusion or clot (Fig. 3).
Copy-Move Image Forgery Detection Based on Evolving Circular Domains Coverage

The aim of this paper is to improve the accuracy of copy-move forgery detection (CMFD) in image forensics by proposing a novel scheme, whose main contribution is the evolving circular domains coverage (ECDC) algorithm. The proposed scheme integrates both block-based and keypoint-based forgery detection methods. Firstly, the speed-up robust feature (SURF) in log-polar space and the scale invariant feature transform (SIFT) are extracted from the entire image. Secondly, generalized 2 nearest neighbor (g2NN) is employed to obtain massive matched pairs. Then, the random sample consensus (RANSAC) algorithm is employed to filter out mismatched pairs, thus allowing rough localization of counterfeit areas. To delineate these forgery areas more accurately, we propose the efficient and accurate ECDC algorithm, which finds satisfactory threshold areas by extracting block features from jointly evolving circular domains centered on matched pairs. Finally, a morphological operation is applied to refine the detected forgery areas. Experimental results indicate that the proposed CMFD scheme achieves better detection performance under various attacks compared with other state-of-the-art CMFD schemes.

INTRODUCTION

With the development of computers and image processing software, digital image tampering has become much easier; therefore, many digital images lack authenticity and integrity, which poses a threat to many critical fields. For example, forged images used in medical fields may lead to misdiagnosis [1], and forged newspaper photographs may mislead people and cause unnecessary social unrest [2]. Hence, the ability to credibly authenticate an image has become a major focus of image forensics and security. The existing detection techniques fall into two main categories: active and passive.
Active forensics techniques ensure the authenticity of digital images by verifying the integrity of authentication information, such as digital watermarks [3]-[5] and digital signatures [6]-[8]. These active methods have strong detection abilities and cannot be easily evaded, but their main defect is that the watermark must be inserted into the image as a key in advance. Passive forensics techniques verify authenticity by analyzing the information and structure of the image itself, which overcomes this disadvantage of active forensics techniques.

There are two main forgeries that alter the contents of images: splicing and copy-move. The common splicing forgery consists in copying a part of one image and pasting it into another image, while copy-move forgery copies and pastes a part of an image within the same image. In recent years, copy-move forgery has become one of the most popular subtopics in forgery detection [9]. To make copy-move tampered images more trustworthy, some processing methods are often applied, including rotation, scaling, downsampling, JPEG compression, and noise addition. Considering that image copy-move forgery detection (CMFD) is a challenging topic, this paper focuses on CMFD algorithms.

The general steps of CMFD are feature extraction, feature matching, and postprocessing. Based on the extracted features, CMFD methods are divided into block-based, keypoint-based, and fusions of the two; the last category has become more popular in recent years. In this paper, we propose a CMFD scheme based on evolving circular domains coverage (ECDC), which combines block-based and keypoint-based methods. It extracts two different descriptors from an image, and then matches and filters those descriptors to obtain a rough localization. After that, we employ the proposed ECDC algorithm to cover the forgery areas. The refined forgery areas are finally obtained by postprocessing.
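To make the copy-move setting concrete, the following toy example (not the paper's method; a naive exact-match block search in pure NumPy, with arbitrary image and block sizes) forges an image by copying a patch within it and then detects the duplication:

```python
import numpy as np

def naive_block_cmfd(img, b=8):
    """Toy block-based CMFD: hash every b-by-b block of a grayscale image
    and report pairs of distinct positions whose blocks are identical.
    (Illustrative only; real schemes use robust features, not raw pixels,
    so they survive rotation, scaling, and noise.)"""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            key = img[y:y + b, x:x + b].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Synthetic forgery: copy the 8x8 patch at (2, 2) to (20, 20).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
img[20:28, 20:28] = img[2:10, 2:10]
m = naive_block_cmfd(img)
```

For random content, only the deliberately copied patch collides, so `m` contains exactly the pair of source and destination corners.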
The two main contributions of this paper are listed below: 1) We combine the speed-up robust feature (SURF) in log-polar space and the scale invariant feature transform (SIFT) as descriptors to depict a host image more accurately. This not only evidently raises the precision of the proposed scheme under plain copy-move forgery, but also improves its robustness to various geometric transformations and signal processing. 2) We propose a novel algorithm, named ECDC, to delineate forgery areas exactly. By comparing the differences of block features in jointly evolving circular domains, this algorithm, which builds on the pre-positioning of keypoints, can greatly reduce computational complexity and improve running efficiency. In addition, ECDC can not only cover large-scale tampered areas completely, but also depict small areas accurately.

The rest of this paper is organized as follows: Section 2 briefly reviews the related work on CMFD; Section 3 presents the framework of the proposed CMFD scheme and then explains each step in detail; Section 4 shows the experimental results of CMFD and their analysis; finally, Section 5 gives the conclusion.

RELATED WORK

In this section, we review some classic and state-of-the-art CMFD schemes. Based on the type of features extracted from the image, we divide this review into the following parts: block-based methods, keypoint-based methods, and fusions of the two.

Block-Based Methods

The block-based CMFD methods, in general, divide a host image into small, regular, and overlapping blocks. After extracting features from each subblock, results are obtained by matching and postprocessing those features. Fridrich et al. [10] proposed a CMFD algorithm that is a milestone in the field. They used quantized discrete cosine transform (DCT) coefficients as features. Then, a lexicographically ordered feature matrix, reducing the range of feature matching, was used to detect similar regions [9].
Popescu and Farid [11] used principal components analysis (PCA) features to detect tampered areas. Bayram et al. [12] proposed the Fourier-Mellin transform (FMT) to extract features; they applied counting Bloom filters instead of lexicographic sorting as a more efficient matching scheme. Wang et al. [13], [14] used the Gaussian pyramid to reduce the dimensions of images: the former used the Hu moments of blocks, and the latter employed the mean value of image pixels in circular blocks divided into concentric circles. Ryu et al. [15] proposed a method based on rotationally invariant Zernike moments, which can detect forged regions even when they are rotated. Li [16] proposed an algorithm that matched the polar cosine transform (PCT) with locality sensitive hashing (LSH), which requires simpler calculations than Zernike moments and excels at large-scale rotation. Similarly, the polar sine transform (PST) and polar complex exponential transform (PCET) also belong to the polar harmonic transforms (PHT) [17]. Bravo and Nandi [18] used colour-dependent feature vectors to perform a search that is efficient in terms of memory usage. Cozzolino et al. [19], [20] proposed a new matching method called PatchMatch and a fast postprocessing procedure based on dense linear fitting; this method greatly reduces computational complexity and is robust to various types of distortions.

Overall, although applying lexicographic sorting and reducing dimensions make block-based detection more efficient, it still has higher computational complexity than keypoint-based methods. In addition, when faced with large-scale scaling, the robustness of block-based methods is, in general, significantly reduced.

Keypoint-Based Methods

The keypoint-based CMFD methods usually extract features from the entire image, which is the main difference from block-based methods, and they effectively reduce computational complexity. Huang et al.
[21] proposed the best-bin-first nearest neighbor identification algorithm based on SIFT. Xu et al. [22] proposed SURF to extract features at a faster speed than SIFT. Amerini et al. [23] used generalized 2 nearest neighbors (g2NN) on SIFT descriptors to obtain qualified features; then the random sample consensus (RANSAC) was used to remove mismatched points. Shivakumar and Baboo [24] proposed a CMFD scheme based on SURF and a kd-tree for multidimensional data matching. In high-resolution image processing, this method can detect copied regions of different sizes with a minimum number of false matches. To delineate tampered areas accurately, Pan and Lyu [25] utilized RANSAC to estimate the affine transformation matrix, and then obtained correlation maps by calculating correlation coefficients to locate forged regions. Silva et al. [26] proposed to separate forged points and the corresponding original ones into different clusters by clustering matched keypoints based on their locations, with the final decision based on a voting process. Park et al. [27] utilized SIFT and a reduced local binary pattern (LBP) histogram to detect tampered areas. However, [23], [24] only roughly marked the detected regions with connections between matched pairs. Furthermore, when tampering occurs in low-entropy or small-size areas, the detection results of many keypoint-based methods are unsatisfying due to the small number of keypoints.

Fusion of Block-Based and Keypoint-Based Methods

For better detection performance, combining the advantages of block-based and keypoint-based methods has currently become a trend. Some researchers proposed to segment the host image into non-overlapping and irregular blocks and then to match features extracted from those segmented regions [28], [29], but their accuracy depends on the size of the superpixels, and the detected results may have fuzzy boundaries. Zheng et al.
[30] classified the host image into textured and smooth regions, in which SIFT and Zernike features were respectively extracted and matched. However, this method cannot accurately distinguish between smooth and textured areas, especially when tampered regions are attacked by noise. Zandi et al. [31] proposed a new interest point detector and used an effective filtering algorithm and an iterative algorithm to improve performance. Although they can effectively detect tampering in low-contrast areas, their detected results usually contain mismatches. Pun and Chung [32] proposed a two-stage localization for CMFD: the Weber local descriptor (WLD) was extracted from each superpixel in the rough localization stage, and the discrete analytic Fourier-Mellin transform (DAFMT) of roughly located areas was extracted in the precise localization stage. Li and Zhou [33] proposed a hierarchical matching strategy to improve keypoint matching and an iterative localization technique to localize the forged areas. Wang et al. [34] classified irregular and non-overlapping image blocks into smooth and textured regions, and combined the RANSAC algorithm with a filtering strategy to eliminate false matches; this method can detect a high-brightness smooth forgery. However, these methods achieve high detection accuracy at the expense of low efficiency.

In summary, the main problems faced by block-based CMFD methods are the inability to handle images with large-scale scaling and their high computational complexity, while the main problem of keypoint-based CMFD methods is that there are fewer keypoints in low-entropy areas, which leads to incomplete coverage of tampered areas. Fusing block-based and keypoint-based methods reasonably can preserve their advantages and avoid certain shortcomings at the same time. Our scheme integrates block-based and keypoint-based methods, which results in complete coverage of tampered areas and higher detection efficiency.
The algorithm is described in more detail in Section 3.

PROPOSED COPY-MOVE FORGERY DETECTION SCHEME

In this section, we explicate our CMFD scheme. The framework of the whole scheme is given in Fig. 1. Firstly, we extract both the SIFT descriptor and the log-polar SURF descriptor (LPSD) from the entire image. Secondly, g2NN is employed on each descriptor to obtain massive matched pairs. Then, we employ RANSAC to eliminate mismatched pairs. Finally, the ECDC algorithm is used to delineate the entire forgery regions from those matched pairs. In the rest of this section, Section 3.1 explains the feature extraction algorithm combining SIFT and LPSD; Section 3.2 introduces the keypoint matching algorithm using g2NN; Section 3.3 describes the process of eliminating mismatched pairs using RANSAC; Section 3.4 explains in detail how matched pairs are expanded to whole forgery regions using the ECDC algorithm.

Feature Extraction Using Combination of SIFT and LPSD

In this section, we explain how to extract keypoints as descriptors of the image. The SIFT and SURF algorithms have been widely used in the field of computer vision in recent years. Their keypoints are robust to various attacks, including rotation, scaling, downsampling, JPEG compression, and noise addition. As a result, SIFT and SURF are often used to extract keypoints in existing keypoint-based methods. In this paper, unlike general keypoint-based methods, we combine SIFT and LPSD to depict images.

SIFT

Lowe [35] decomposed the SIFT algorithm into the following four steps: firstly, extrema in scale space are located, with the computation searching over all scales and image locations; secondly, at each candidate location, keypoints are selected based on measures of their stability; then, based on local image gradient directions, one or more orientations are assigned to each keypoint location; finally, the local image gradients are measured at the selected scale in the region around each keypoint to generate descriptors.
In general, the extrema of a given image are detected at different scales in scale space, which is constructed using Gaussian pyramids with different Gaussian smoothing and resolution subsampling. These keypoints are extracted by applying the difference of Gaussians (DoG), and a DoG image D is given by [35]:

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ),

where L(x, y, kσ) is the convolution of the original image I(x, y) with the Gaussian blur G(x, y, kσ), and k is the scale-space factor. To ensure rotation invariance, the algorithm assigns each keypoint a canonical orientation, determined by calculating the gradient in its neighborhood. Specifically, for an image sample L(x, y, σ) at scale σ, the gradient magnitude m(x, y) and orientation θ(x, y) can be pre-computed using pixel differences as follows [35]:

m(x, y) = sqrt[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²],
θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))].

SURF

SURF, proposed by Bay et al. [36], is an improvement on SIFT, and its prominent characteristic is speed. By using a Hessian matrix for optimization, the SURF algorithm accelerates the SIFT detection process without reducing the quality of the detected points. Box filters of different sizes are then used to establish the scale space and are convolved with the integral image. Given a point x = (x, y) in an image I, the Hessian matrix H(x, σ) at x and scale σ is represented as follows [36]:

H(x, σ) = [ L_xx(x, σ)  L_xy(x, σ) ; L_xy(x, σ)  L_yy(x, σ) ],

where L_xx(x, σ) is the convolution of the second-order Gaussian derivative with the image I at point x, and similarly for L_xy(x, σ) and L_yy(x, σ). The Hessian matrix and non-maximum suppression are used to detect potential keypoints. For assigning one or more canonical orientations, the dominant orientation of the Gaussian-weighted Haar wavelet responses is detected by a sliding orientation window at every sample point within a circular neighborhood around the interest point.

Combination of SIFT and LPSD

Kaura and Dhavale [37] showed that the combination of SIFT and SURF improves the detection performance of keypoint-based methods.
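As a rough illustration of the DoG and gradient formulas above (a minimal NumPy sketch, not the full SIFT implementation; the kernel radius of 3σ and the choice k = √2 are conventional but arbitrary here):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: a minimal stand-in for L(x, y, sigma)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def dog(img, sigma, k=np.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)

def gradient_mag_ori(L):
    """Pixel-difference gradient magnitude m and orientation theta,
    matching the central differences in the formulas above."""
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[1:-1, :] = L[2:, :] - L[:-2, :]   # central difference along axis 0
    dy[:, 1:-1] = L[:, 2:] - L[:, :-2]   # central difference along axis 1
    return np.hypot(dx, dy), np.arctan2(dy, dx)
```

On a constant image the DoG response vanishes in the interior, and on a linear intensity ramp the interior gradient magnitude equals the central-difference step, as the formulas predict.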
In consideration of the lower detection accuracy of SURF compared with SIFT [38], we improve this accuracy by applying log-polar coordinates to it [39]. Experiments show that SURF in log-polar space, whose detection results are much more accurate than SIFT's, succeeds well in detecting plain copy-move forgery, especially for detailed objects. Fig. 2(a1)-(a3) and Fig. 2(b1)-(b3) show SIFT and LPSD matched results for plain copy-move forgery, respectively (the matching algorithm is explained in Section 3.2). From Fig. 2(a3) and (b3) we can observe that LPSD obtains more matched pairs on small or detailed areas. However, SIFT exhibits surprising stability when forgery regions are attacked by noise or other manipulations, as shown in Fig. 2(a4) and (b4); in these two figures, noise with a standard deviation of 0.1 has been added to the copied fragments. In this case, LPSD hardly detects any correct matched pairs, while SIFT performs well. Thus, we combine SIFT and LPSD to compensate for the instability of LPSD and the lower accuracy of SIFT.

g2NN

After feature extraction, two descriptor groups are obtained:

F_SIFT = {f_1^SIFT, f_2^SIFT, ..., f_n^SIFT},  F_LPSD = {f_1^LPSD, f_2^LPSD, ..., f_m^LPSD},

where F_SIFT contains the n SIFT descriptor vectors and F_LPSD the m LPSD descriptor vectors. To find similar descriptors in the image, we need to match them with each other. Lowe [40] employed the distance ratio between the nearest neighbor and the second-nearest neighbor, comparing it with a threshold T: only if the ratio is less than T are the keypoints matched. However, this matching process is unable to manage multiple keypoint matches. Since the same image area may be cloned over and over in a tampered image, we employ the g2NN algorithm [23], which can cope with multiple copies of the same descriptors.
Specifically, taking SIFT as an example, we define a sorted distance vector χ_i for f_i^SIFT to represent the Euclidean distances between f_i^SIFT and the other (n − 1) descriptors, i.e.,

χ_i = (d_{i,1}, d_{i,2}, ..., d_{i,n−1}), with d_{i,1} ≤ d_{i,2} ≤ ... ≤ d_{i,n−1},

where d_{i,j} (i, j = 1, 2, ..., n; i ≠ j) is the Euclidean distance between f_i^SIFT and f_j^SIFT, i.e.,

d_{i,j} = ||f_i^SIFT − f_j^SIFT||_2.

To facilitate the finding of an appropriate threshold, we measure the similarity between descriptors by using d²_{i,j} (the squared Euclidean distance). Thus, for all f^SIFT, an n × (n−1) matrix ξ is generated, whose i-th row is the sorted vector of squared distances for f_i^SIFT. We iterate the 2 nearest neighbor (2NN) test on every row of the distance matrix ξ to find multiple copies. Taking χ_i as an example, the iteration stops at the first k for which

d²_{i,k} / d²_{i,k+1} ≥ T.

If the iteration stops at d²_{i,k}, each keypoint corresponding to a distance in {d_{i,1}, ..., d_{i,k}} is considered a match for the inspected keypoint.

Threshold T

Huang et al. [21] analyzed that if the distance ratio threshold T is reduced, the number of matched keypoints is reduced, but the matching accuracy improves. To verify this conclusion, we set different thresholds and observe the numbers of matched and mismatched pairs of f^SIFT and f^LPSD under plain copy-move forgery. We use Figs. 3 and 4 to describe the result perceptibly and statistically. Fig. 3(a1)-(a4) and Fig. 3(b1)-(b4) show detection results of SIFT and LPSD separately, where thresholds range from 0.1 to 0.7 in steps of 0.2. To select an appropriate threshold, we randomly selected 100 images, including plain copy-move, rotation, scaling, noise, and other attacks, from the FAU dataset [9] for g2NN testing. Statistics of SIFT and LPSD correct and wrong matches at different thresholds are respectively plotted as line charts in Fig. 4(a) and (b). From Fig. 4, we can observe that as the threshold increases, correct matches tend toward a constant, while incorrect matches increase rapidly.
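A compact sketch of this g2NN loop (pure NumPy on squared distances; the threshold value is illustrative, and symmetric duplicate pairs are left for the caller to deduplicate):

```python
import numpy as np

def g2nn_matches(desc, T=0.5):
    """g2NN: for descriptor i, sort the squared distances to all other
    descriptors and accept neighbors d_1..d_k, where k is the first index
    at which d_k^2 / d_{k+1}^2 >= T. Returns (i, j) index pairs
    (symmetric duplicates are not removed here)."""
    desc = np.asarray(desc, dtype=float)
    n = len(desc)
    pairs = []
    for i in range(n):
        d2 = np.sum((desc - desc[i]) ** 2, axis=1)
        order = np.argsort(d2)
        order = order[order != i]          # drop the zero self-distance
        dists = d2[order]
        k = 0
        while k < len(dists) - 1 and dists[k] / dists[k + 1] < T:
            k += 1
        for j in order[:k]:
            pairs.append((i, int(j)))
    return pairs
```

With three toy descriptors where the first two are near-duplicates and the third is far away, only the near-duplicate pair survives the ratio test, in both directions.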
Thus, we come to two conclusions: 1) A higher threshold leads to more false matches, while a lower one may miss some correct matches. An appropriate threshold should not only obtain as many correct matches as possible, but also keep the number of incorrect matches within acceptable limits. 2) Because LPSD has more mismatches at lower thresholds than the SIFT descriptor, we set different g2NN thresholds T_SIFT and T_LPSD for them. The parameters used are presented in Section 4.1.

Multiple Keypoints Matching

After keypoint matching, we obtain a large number of matched pairs. Because adjacent keypoints have high similarity, we must remove matched pairs whose two keypoints are spatially too close, i.e., when the Euclidean distance between (x_a, y_a) and (x_b, y_b) falls below a small threshold, where (x_a, y_a) and (x_b, y_b) are the coordinates of the matched keypoints. Even after that, many mismatched pairs still remain, which would seriously impair the covering of forgery areas. Thus, we employ a widely used and robust algorithm named RANSAC [41] to eliminate them. The RANSAC algorithm can estimate model parameters precisely even when there are many mismatched pairs, dividing those pairs into inlier and outlier groups. To obtain enough matched pairs while eliminating mismatched pairs with high similarity, our RANSAC procedure is based on [34]. We set a threshold N and repeat the RANSAC algorithm until the number of points in the inlier group is less than N. The higher N is, the more mismatched pairs are eliminated; meanwhile, slight forgeries or low-entropy regions are more likely to be overlooked. By contrast, a lower N is better for detecting those regions, but makes it difficult to eliminate mismatched pairs with high similarity. Therefore, we must strike the right balance between the two.
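The inlier/outlier split can be sketched as follows (a minimal RANSAC for a pure-translation model in NumPy; the paper estimates a full affine model, and the error threshold and iteration count here are arbitrary choices):

```python
import numpy as np

def ransac_translation(src, dst, thresh=3.0, iters=200, seed=0):
    """Minimal RANSAC: hypothesize a translation from one sampled pair,
    count pairs whose reprojection error is below `thresh`, and keep the
    hypothesis with the most inliers. Returns a boolean inlier mask."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # model from a minimal sample
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 8 pairs follow the translation (10, 5); 2 pairs are mismatches.
src = np.array([[x, x] for x in range(10)], float)
dst = src + [10.0, 5.0]
dst[8] = [100.0, -40.0]
dst[9] = [-30.0, 60.0]
mask = ransac_translation(src, dst)
```

Because any sampled inlier reproduces the true translation exactly, the consensus set recovers the eight consistent pairs and rejects the two mismatches.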
Forgery Areas Coverage Algorithm

After postprocessing, we obtain a number of precisely matched pairs; however, these matched pairs can only cover tampered areas partially, which means that the full extent of those areas cannot be revealed. Hence, accurately covering tampered areas is pivotal in CMFD. In fact, the matching results of block-based and keypoint-based methods are essentially the positions of two sets of pixels. More specifically, to cover tampered areas, block-based methods compare many image block features centered on pixels: if the features of two blocks are sufficiently similar, their central pixels are recorded as a matched pair and their corresponding blocks are subsequently covered. Similarly, we consider that keypoint-based methods can also cover tampered areas by comparing features within a certain range centered on pixel points. With the help of keypoint pre-positioning, the algorithmic complexity can be greatly reduced, thereby improving detection speed. Thus, we propose a new algorithm to cover tampered areas, called ECDC.

Selection of an appropriate feature

For better coverage, we analyzed and compared a variety of features. Christlein et al. [9] listed most of the effective features in four types: moment-based, dimensionality-reduction-based, intensity-based, and frequency-domain-based features. Among the frequency-domain-based features, the DCT coefficients perform well against noise attacks. Wang et al. [34] concluded through experiments that PCET moments perform better than other moment-based features under various geometric transformations. Therefore, we chose DCT coefficients and PCET moments for subsequent experiments.

Block feature matching

We extract block features from two separate circular domains centered on a matched pair.
Then, we compare those features, and if they are similar enough, the corresponding circular domains are covered. However, different features have different similarity measures. We employ the Euclidean distance to measure the resemblance of PCET moments because their dimension is constant. If the Euclidean distance between F_1^PCET and F_2^PCET is smaller than the predefined threshold K_PCET, i.e.,

||F_1^PCET − F_2^PCET||_2 < K_PCET,

the domains are considered a matched pair. Concerning the DCT coefficients, the dimension of the matrices depends on the size of the sub image blocks; consequently, large sub image blocks are stored in large matrices, which is not conducive to computation. Thus, we use the singular value decomposition (SVD) [42], [43] to decompose the extracted DCT coefficient matrices, i.e.,

F_DCT = U Λ V^T,

where U and V are unitary matrices and Λ is a diagonal matrix whose entries are the singular values of F_DCT. Since Λ contains the basic information of F_DCT, and its maximum value carries most of that information, we choose the maximum value λ of Λ to represent the F_DCT of a circular domain, i.e.,

λ = max(Λ).

If the difference between λ_1 and λ_2 of two circular domains is less than the threshold K_DCT, i.e.,

|λ_1 − λ_2| < K_DCT,

we determine that these two circular domains are tampered areas. We take 48 images from the FAU dataset [9], crop their centers into sub image blocks of sizes 3 × 3, 39 × 39, and 75 × 75, and attack them in various ways. Then, we calculate the mean value of λ (denoted as λ̄) for these three sets of sub image blocks and list the results in Table 1. It shows that λ̄ differs only slightly under various attacks, which proves the feasibility of representing F_DCT by λ to depict sub image blocks. The selection of the aforementioned thresholds K_PCET and K_DCT has a great influence on the accuracy and robustness of our algorithm.
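The λ signature above can be sketched as follows (NumPy only; the 2-D DCT is built from the orthonormal DCT-II matrix rather than a library call, and the K_DCT value is an arbitrary placeholder):

```python
import numpy as np

def dct2(block):
    """2-D DCT-II via the orthonormal DCT matrix: F = C @ B @ C.T."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def block_signature(block):
    """lambda = largest singular value of the block's DCT coefficient
    matrix, i.e. max(diag(Lambda)) from F_DCT = U @ Lambda @ V^T."""
    F = dct2(np.asarray(block, dtype=float))
    return np.linalg.svd(F, compute_uv=False)[0]

def blocks_match(b1, b2, K_dct=1.0):
    """Declare two circular domains matched when |lambda_1 - lambda_2| < K_DCT."""
    return abs(block_signature(b1) - block_signature(b2)) < K_dct

# Toy blocks for illustration.
rng = np.random.default_rng(1)
b1 = rng.integers(0, 256, (8, 8)).astype(float)
```

Collapsing the DCT matrix to its largest singular value keeps the comparison a single scalar regardless of the block size, which is exactly why the paper prefers it to comparing full coefficient matrices.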
If they decrease, the criteria become more stringent and the coverage more precise; however, if the image is attacked by noise or geometric transformations, the algorithm will more easily miss or misjudge tampered areas. Conversely, larger thresholds make it more robust. These thresholds can be determined through extensive experiments and depend on the image resolution and the attack types in the dataset. For the FAU dataset [9], to make the scheme more robust to noise attacks, we set those thresholds as functions K_PCET(σ_s) and K_DCT(σ_s), where σ_s is the difference of variance between the two circular domains. Based on numerous experiments, we established two empirical piecewise formulas for K_PCET(σ_s) and K_DCT(σ_s).

Circular domains evolution

Since the sizes of tampered areas are uncertain, they may not be covered ideally if only the features within a single radius are used as the coverage basis. Therefore, we let the radius evolve in steps of τ through the vector

r = (r_1, r_2, ..., r_m), with r_1 < r_2 < ... < r_m and r_{i+1} = r_i + τ.

In this way, we can compare the features of matched pairs over an evolving radius range in a loop. The detail of ECDC is illustrated in Fig. 5, in which the radius-changing process for a keypoint of a matched pair is shown in close-up. For ease of interpretation, the rings in the close-up are labeled with different colors. In the first comparison, we compare the features in the red ring centered on one keypoint of the matched pair. When the threshold K ∈ {K_PCET, K_DCT} is met, the radius is enlarged to the size of the blue ring and a second round of comparison is made. If the difference between the features in the blue ring is still less than K, the radius continues to be enlarged until it reaches its maximum or the difference no longer fulfills the condition. Then, the previous radius is recorded and the loop is broken. After traversing all matched pairs with the above algorithm, their coverage is complete.
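The radius-growth loop can be sketched as follows (a simplified stand-in: the circular-domain feature here is just the mean intensity rather than DCT/PCET, and the radii and similarity threshold are arbitrary):

```python
import numpy as np

def mean_in_disk(img, c, r):
    """Toy circular-domain feature: mean intensity within radius r of center c."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    mask = (yy - c[0]) ** 2 + (xx - c[1]) ** 2 <= r * r
    return img[mask].mean()

def evolve_radius(img, center, other, similar, r_max=40, tau=4):
    """ECDC-style growth: enlarge the radius in steps of tau while the
    features of the two circular domains stay similar; return the last
    radius that satisfied the threshold (0 if none did)."""
    accepted = 0
    for r in range(tau, r_max + 1, tau):
        if similar(mean_in_disk(img, center, r), mean_in_disk(img, other, r)):
            accepted = r
        else:
            break
    return accepted

similar = lambda a, b: abs(a - b) < 1.0
flat = np.full((100, 100), 7.0)            # identical everywhere: grows to r_max
halves = np.zeros((100, 100))
halves[:, 50:] = 100.0                      # dissimilar from the first step
```

A pair deep inside a duplicated region keeps passing the comparison and reaches r_max, while a pair whose domains differ immediately stops at the first step, mirroring the red/green versus blue rings in Fig. 5.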
The position of matched pairs is also of great importance to radius expansion. Three expansion results are presented in Fig. 5. The red and green pairs are near the edges of the tampered areas; thus, their rings' extension ends before the radius is enlarged to its maximum r_m, which means that ECDC can accurately distinguish the edges. On the contrary, the blue pair is in the center of the tampered areas and, obviously, surrounded by them, so the expansion of the blue ring ends when the radius enlarges to its maximum r_m. Fig. 6 presents the flowchart of the ECDC algorithm, in which the middle image only partially shows the coverage of matched pairs. Furthermore, Fig. 5 is the enlarged and detailed diagram of the 'Threshold Comparison Repetition' step of the loop in Fig. 6.

Morphological postprocessing

Finally, depending on the image resolution, the disk size used for the closing operation varies. This step fills small holes and cracks in the merged areas while maintaining the overall outline of the areas, which helps to completely cover tampered areas.

EXPERIMENTAL RESULTS AND ANALYSIS

In this section, we conduct a series of experiments to compare the validity and robustness of our scheme with those of other state-of-the-art schemes. Section 4.1 presents the datasets we used, the experimental setup, and the parameters. Section 4.2 presents how we evaluated CMFD schemes. Section 4.3 compares the proposed scheme with other CMFD schemes at the pixel level. Section 4.4 compares them at the image level.

Image Datasets

For a comprehensive comparison, three datasets, i.e., FAU [9], GRIP [19], and COVERAGE [44], are used to demonstrate the effectiveness of our scheme. The FAU [9] dataset consists of 48 high-resolution images and contains sub-datasets under various image attacks, including scaling, rotation, noise, downsampling, and JPEG compression.
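The closing step can be sketched as dilation followed by erosion. The pure-Python sketch below uses a square structuring element and a simplified border rule (the paper uses a disk whose size depends on image resolution; both simplifications are ours):

```python
def dilate(mask, se=1):
    """Binary dilation with a (2*se+1)^2 square structuring element.
    Out-of-bounds neighbours are simply skipped (simplified border rule)."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[i + di][j + dj]
                      for di in range(-se, se + 1)
                      for dj in range(-se, se + 1)
                      if 0 <= i + di < h and 0 <= j + dj < w) else 0
             for j in range(w)] for i in range(h)]

def erode(mask, se=1):
    """Binary erosion with the same element and border rule."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(mask[i + di][j + dj]
                      for di in range(-se, se + 1)
                      for dj in range(-se, se + 1)
                      if 0 <= i + di < h and 0 <= j + dj < w) else 0
             for j in range(w)] for i in range(h)]

def close_mask(mask, se=1):
    """Morphological closing: fills small holes and cracks in the
    merged coverage while keeping the overall outline."""
    return erode(dilate(mask, se), se)
```

For example, a coverage mask with a one-pixel hole in its interior comes out of `close_mask` with the hole filled.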
GRIP [19] only contains plain copy-move images, but some of its tampered areas are very smooth, while COVERAGE [44] contains similar-but-genuine objects under a combination of different attacks. Hence, we chose FAU [9] to objectively evaluate CMFD schemes at the pixel level, and GRIP [19] and COVERAGE [44] to evaluate them at the image level. The detailed information of these three datasets is summarized in Table 2. The experiments in this paper were performed in MATLAB 2019b on a 64-bit Win10 PC with an Intel Core i7-8650 CPU and 8 GB RAM. Finally, the parameters used in the proposed scheme are listed in Table 3.

Evaluation Metrics

Some state-of-the-art schemes use the True Positive Rate (TPR), False Positive Rate (FPR), and Accuracy (ACC) [33], [45] to evaluate their performance, while others choose precision p, recall r [9], [29], [30], and the F1 score. To comprehensively evaluate CMFD methods, these two different sets of metrics are used at two different levels. At the image level, we focus on the practicality of our scheme, evaluating whether or not it can distinguish authentic images from forged images, as our original intention is to expose digital image forgery. In this case, the metrics TPR, FPR, and ACC are used. In CMFD schemes, the TPR t indicates the percentage of correctly classified copy-move regions, while the FPR f denotes that of incorrectly located cloned regions. They are defined as [33], [45] t = N_TP / (N_TP + N_FN) and f = N_FP / (N_FP + N_TN), where N_TP is the number of correctly detected forged images, N_TN indicates the number of correctly detected authentic images, N_FP denotes the number of authentic images which have been erroneously detected as forged, and N_FN denotes the number of forged images which have not been detected. The accuracy a summarizes the performance of a CMFD scheme based on TPR and FPR.
It is defined in [33], [45] in terms of t and f. However, at the pixel level, we should not only pay attention to whether the proposed scheme can distinguish forged images from authentic images, but also to whether it covers the detected forgery regions perfectly. In this case, precision p and recall r [9], [29], [30] are used to evaluate detection performance. The metrics p, r, and F1 are defined as follows [9]: p = N_TP / (N_TP + N_FP), r = N_TP / (N_TP + N_FN), and F1 = 2pr / (p + r), where N_TP denotes the number of correctly detected forged pixels, N_FP denotes the number of pixels which have been erroneously detected as forged, and N_FN is the number of forged pixels which have not been detected. p describes the percentage of correctly detected pixels; a higher value of p means there are fewer erroneous detections. r describes whether the forgery areas are completely covered or not; a higher value of r means more complete coverage of the forgery areas. By combining p with r, the F1 score is obtained [9]. The higher the F1 score, the better the performance. An intuitive illustration of the relationship between N_TP, N_FP, and N_FN is shown in Fig. 7. As the presentation in [31], [32] is clear, we adopt the same convention: green for correctly detected areas, red for incorrectly detected areas, and white for ground-truth areas in which forgeries have not been detected.

Detection Results Obtained on FAU at Pixel Level

In this section, we mainly examine the ability of CMFD schemes to distinguish both authentic and forged images from the FAU dataset [9] at the pixel level. They should be able to show the forged areas in detail, which means they can perfectly display the particulars in the ideal situation. The performance of the proposed scheme is compared with that of various state-of-the-art CMFD methods, including block-based methods (e.g. [14], [15], [19]), keypoint-based methods (e.g. [21]-[25]), and fusions of both (e.g. [29]-[31]).

Plain CMFD

We first evaluate their plain copy-move forgery detection performance.
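The evaluation metrics defined in the Evaluation Metrics section can be computed directly from the counts. A minimal Python sketch follows; note that the ACC shown here as the balanced mean of TPR and the true-negative rate is a common convention and our own assumption, since the paper's exact formula is not reproduced in the text:

```python
def pixel_metrics(n_tp, n_fp, n_fn):
    """Pixel-level precision, recall and F1 from the counts defined
    in the text (correctly detected, falsely detected, and missed
    forged pixels)."""
    p = n_tp / (n_tp + n_fp)
    r = n_tp / (n_tp + n_fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

def image_metrics(n_tp, n_tn, n_fp, n_fn):
    """Image-level TPR and FPR; ACC is computed as the balanced mean
    of TPR and (1 - FPR) -- an assumed convention, not quoted from
    the paper."""
    tpr = n_tp / (n_tp + n_fn)
    fpr = n_fp / (n_fp + n_tn)
    acc = (tpr + (1 - fpr)) / 2
    return tpr, fpr, acc
```
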
The detection results of the 48 images from the nul sub-dataset are listed in Table 4, in descending F1 order. The proposed scheme, when using DCT, achieves the optimal F1, with p = 92.61%, r = 91.48%, and F1 = 91.56%. It has better CMFD performance at the pixel level compared with the other algorithms. The highest r is achieved when using PCET, because the coverage is more comprehensive; however, this leads to more coverage errors. Wang et al. [35] achieved the highest p, which means they had the least number of detection errors. To sum up, our scheme reaches better results at the image level and the pixel level in the case of plain copy-move forgery. Table 4 excerpt: [31]: 83.65, 79.53, 79.66; SURF [22], [24]: 68.13, 76.43, 69.54; SIFT [21], [23], [25]: 60.80, 71.48, 63.10.

CMFD under Various Attacks

As images are not forged only by plain copy-move manipulations, the robustness of different schemes should be tested, especially when they are under various attacks. Therefore, sub-datasets of various attack types are used, including scaling, rotation, noise, JPEG compression, and downsampling, to present CMFD performance. Fig. 8 shows the detection results of our scheme under different attacks. The first and third columns represent forged images; the second and fourth columns are detection results. Fig. 9 shows the p results of the proposed scheme compared with the aforementioned schemes under different attacks. We can observe that the precision of the proposed scheme surpasses most of the others. Under small-scale rotation and scaling, the proposed scheme performs well, and its precision with DCT is higher than that with PCET. In terms of large-scale rotation and scaling attacks, the results of the proposed scheme are superior to most of the others. The test results are displayed in Fig. 8(b2). Remarkably, the precision of our method running with DCT reaches more than 80% at large-scale magnification. It also shows the highest results under the most severe global noise attacks.
However, our scheme is affected by JPEG compression because many inoperative keypoints are extracted, especially when the quality factor is below 30. In this situation, as shown in Fig. 8(f2), our scheme can only maintain good performance when detecting large forged areas, as it cannot filter out the invalid matches, which far outnumber the correct matches, when faced with small tampered areas. Of course, making the RANSAC parameters and the ECDC thresholds more stringent can reduce false coverage, and the precision under JPEG compression with a low quality factor can be significantly improved; nevertheless, as the number of effective keypoints decreases, the precision of these results will be considerably reduced under local noise and global noise attacks. At this point, after strict filtering and ECDC, only a few matched pairs would remain, which do not have the ability to completely cover tampered areas. To sum up, a compromise is required between performance under serious noise and performance under JPEG compression with an extremely low quality factor. From Fig. 10, it can also be observed that the recall of our scheme with PCET is higher than that with DCT; therefore, we recommend using ECDC with PCET in vulnerable situations to cover forged areas more completely. Considering Figs. 9 and 10, we note that higher recall leads to lower precision, which means that the larger the coverage is, the lower the accuracy of detection may be. If tampered areas only have to be precisely indicated and do not need to be presented perfectly, using ECDC with DCT would be the better option because it has higher detection precision with fewer mismatches. Fig. 11 depicts the comparison of all F1 scores. We can intuitively conclude that ECDC is robust against all kinds of attacks, whether it is combined with DCT or with PCET.
Though the robustness of ECDC against some attacks is slightly inferior to that of the scheme in [19], it is exceptionally better in large-scale detection than most of the classic and state-of-the-art schemes tested.

Running Time Comparison

To comprehensively evaluate a CMFD scheme, we should pay attention to its running efficiency in addition to its effectiveness and reliability; thus, we also evaluate the processing efficiency of the proposed scheme and others on the FAU dataset. As the experimental platform of Christlein et al. [9] is different from ours, we only compare the schemes available and implemented on the same platform, and record the average running time of each scheme in Table 5, in ascending order. Table 5 excerpt: Pun [29]: 128.45; Cozzolino [19]: 149.03; ECDC-DCT: 164.81; Zandi [31]: 192.23; ECDC-PCET: 275.87; Zheng [30]: 554.36. It can be observed that our running time is relatively fast and above average compared with the other solutions.

Detection Results Obtained on GRIP and COVERAGE at Image Level

In this section, for a comprehensive and fair comparison, other popular datasets, i.e., GRIP [19] and COVERAGE [44], and additional metrics are used to evaluate state-of-the-art CMFD methods at the image level. Fig. 12 illustrates several challenging forgery detection examples from these two datasets obtained using the proposed method. GRIP [19] contains some extremely smooth forged regions, which are challenging for many keypoint-based methods, as in the first three columns of Fig. 12. For comparison, keypoint-based methods [23], [26], [46], block-based methods [18], [19], and fusions of both [28], [31], [33] are used. Table 6 presents the detection performance on this dataset, in descending ACC order. As shown in Table 6, both Li [33] and Bravo [18] exhibit the highest ACC of 100%. The proposed CMFD algorithm using DCT and PCET ranks second and third respectively, with ACCs of 98.75% and 96.86%.
For this dataset, block-based methods [18], [19] and fusions of both [28], [31], [33] demonstrate generally better performance than keypoint-based methods [23], [26], [46], due to the challenging smooth tampered images. Table 6 excerpt: [23]: 70.00, 20.00, 75.00; Li [28]: 83.75, 35.00, 74.38. Each image in COVERAGE [44] contains similar-but-genuine objects, making the discrimination of forged from genuine objects highly challenging. Moreover, many of its images are forged under a combination of image attacks. For comparison, keypoint-based methods [23], [26], [27], block-based methods [18], [19], and fusions of both [28], [31], [33] are used. Table 7 shows the detection results on COVERAGE, in descending ACC order. It is obvious that all the algorithms perform poorly on this dataset. Silva [26] achieves the best TPR but also the highest FPR, while Bravo [18] does not wrongly detect any authentic image as a tampered one, but it has the lowest TPR. Compared with the other algorithms, our method using DCT obtains the best ACC of 75.50%, while using PCET it ranks third.

CONCLUSION

Nowadays, the easy falsification of images has become a hot spot in the fields of digital image forensics and information security. Copy-move forgery is one of the most common manipulations in image forgery. In this paper, we propose a new CMFD scheme based on ECDC. By using the combination of the SIFT and LPSD extraction algorithms, we obtain both SIFT and LPSD descriptors of the entire image. In this way, those descriptors capture more detailed features while being more robust to various attacks. Then we use g2NN to obtain a large number of matched pairs. After that, we use RANSAC to eliminate most of the mismatched pairs and obtain more precise matched pairs; thus, forgery regions are located roughly. Then, to obtain the accurate forgery regions, we propose the ECDC algorithm, which can cover forgery regions according to the block features of evolving circular domains.
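The g2NN matching step mentioned in the pipeline above generalizes Lowe's 2NN ratio test to multiple neighbours, which is what allows the same region to match several of its copies. A minimal sketch (the ratio threshold value here is an assumption, not the paper's parameter):

```python
def g2nn_matches(dists, ratio=0.5):
    """Generalized 2NN test: given a keypoint's distances to all other
    descriptors, sorted ascending, keep iterating the ratio test
    d_i / d_{i+1} < ratio and return the number k of accepted
    neighbours (0 means no reliable match)."""
    k = 0
    while k < len(dists) - 1 and dists[k] / dists[k + 1] < ratio:
        k += 1
    return k
```

For a copied region, the distance to each clone is small while the distance to the first unrelated descriptor is large, so the ratio test accepts exactly the clones.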
Finally, we use morphological operations to improve the results of the ECDC algorithm. Nowadays, as the resolution of images gets higher, their sizes get larger. Due to the complexity of the feature-matching step, block-based methods take too much time, while keypoint-based methods have difficulty in perfectly covering forgery regions. These two factors became our driving force to propose this scheme. In that way, we surmount the barriers caused by applying block features or keypoints alone. We conduct a large number of experiments on the proposed scheme, with satisfactory results, to verify that it is an advanced scheme in the CMFD field. Those results show both high effectiveness and efficiency, with a notable increase in evaluation metrics and running speed. In comparison with other state-of-the-art CMFD schemes, the proposed scheme achieves more outstanding performance, especially under plain copy-move forgery. In the future, we will strive to combine ECDC with more robust features and enable it to cope with more varied image attacks. Meanwhile, we will research a more flexible and reasonable way of fusing block-based and keypoint-based methods, so that it can achieve better performance and higher efficiency.
Phase ordering in bulk uniaxial nematic liquid crystals

The phase-ordering kinetics of a bulk uniaxial nematic liquid crystal is addressed using techniques that have been successfully applied to describe ordering in the O(n) model. The method involves constructing an appropriate mapping between the order-parameter tensor and a Gaussian auxiliary field. The mapping accounts both for the geometry of the director about the dominant charge 1/2 string defects and biaxiality near the string cores. At late times t following a quench, there exists a scaling regime where the bulk nematic liquid crystal and the three-dimensional O(2) model are found to be isomorphic, within the Gaussian approximation. As a consequence, the scaling function for order-parameter correlations in the nematic liquid crystal is exactly that of the O(2) model, and the length characteristic of the strings grows as t^{1/2}. These results are in accord with experiment and simulation. Related models dealing with thin films and monopole defects in the bulk are presented and discussed.

I. INTRODUCTION

Most phase-ordering systems studied to date support only one type of topologically stable defect species [1][2][3]. One example is the O(n) model with an n-component vector order-parameter. In three spatial dimensions, the defects formed at the quench are line-like strings for n = 2, and point-like monopoles for n = 3. Phase-ordering in bulk uniaxial nematic liquid crystals (nematics) provides the simplest scenario in which two defect species (monopoles and strings) are topologically stable. The stability of monopoles derives from the O(3) symmetry of the nematic director n̂( r, t). The additional invariance under the local inversion n̂( r, t) → −n̂( r, t) allows the nematic to support stable charge 1/2 disclination lines (strings) [4]. The issue of which defect species dominates the dynamics in bulk nematics at late times t following a quench has recently been settled.
Cell-dynamical simulations using spin models of bulk nematics [5,6] have computed the order-parameter correlation function and found it to be indistinguishable from that of the O(2) model, and consistent with a string-dominated late-time scaling regime. Experiments by Chuang et al. [7] directly imaged the bulk nematic, revealing an intricate, evolving defect tangle. While both types of defect were observed, the strings dominated at late times. The length scale L_s characterizing the typical separation of the strings was seen to grow as L_s ∼ t^{1/2}, while the average line density of string η decayed as η ∼ L_s^{-2} ∼ t^{-1}. The study of ordering in nematics is also of interest to cosmologists [8,9], since similar processes involving cosmic string and monopole evolution, thought to occur in the early universe, may be responsible for structure formation. In this paper a theory is presented that describes the dominant scaling behaviour of the bulk nematic in terms of a string-dominated late-time regime. Generalizing a successful method used to treat the ordering kinetics of the O(n) model, we map the nematic order-parameter tensor onto a two-component Gaussian auxiliary field [2]. The string defects explicitly appear in the construction of the mapping. As discussed below, this approach has several advantages over an earlier, semi-numerical theory by Bray et al. [10]. The auxiliary field approach is first applied to the straightforward case of phase-ordering in nematic films containing charge 1/2 vortices, which have been studied in simulations [5] and experiments [11]. As in the bulk nematic, the mapping is constructed to account for the rotation of the director by only π about the core of the defect. Once this is done, the theory reveals that phase-ordering in the nematic film is equivalent to phase-ordering in the two-dimensional O(2) model examined previously [2]. This is not surprising since the two systems are known to be isomorphic [5,12].
Constructing a theory for the bulk nematic is more challenging since the order-parameter tensor must include a biaxial piece near the core of the string. In the earlier theory of Bray et al. [10] this point was not addressed, since they used a "hard-spin" approximation for the dynamics of the nematic. However, the necessity of having a biaxial core region when treating the full equations has been noted in the numerical work of Schopohl and Sluckin [13] on bulk nematic string defects in equilibrium. The present theory successfully incorporates biaxiality and clarifies the role that it plays in the coarsening of the bulk nematic. The theory recovers the growing length L_s ∼ t^{1/2} seen in simulations [5,6] and experiments [7]. In the scaling regime, the order-parameter correlation function for the bulk nematic is found to be exactly that of the three-dimensional O(2) model [2], in excellent agreement with simulations [5] (Fig. 1). Although the theoretical results of Bray et al. [10] suggested agreement between the correlation function for the bulk nematic and the O(2) model, they were unable to demonstrate an exact equivalence since their theory was not based on a mapping that explicitly contained strings. The major accomplishment of this work is to analytically demonstrate the isomorphism between the dynamics of the bulk nematic and the dynamics of the three-dimensional O(2) model, within the Gaussian approximation. Through this isomorphism, the well-developed theory for the O(2) model [2,14,15] can be applied directly to the nematic. In particular, this theory predicts that the average line density of string decays as η ∼ L_s^{-2} ∼ t^{-1} [14,16], in accord with the experiments of Chuang et al. [7]. Although strings are generically present in bulk nematics, certain choices of experimental setup and sample material will produce copious amounts of monopoles at the quench [17]. The theory of Bray et al.
[10] is unable to address these experiments since in that theory there is no signature for monopoles. However, within the framework presented below it is relatively straightforward to develop a theory of nematics in which monopoles appear. In this theory the order-parameter correlation function is found to be similar to (but not exactly) that for the three-dimensional O(3) model [2] (Fig. 2). The characteristic monopole spacing L_m grows as L_m ∼ t^{1/2} and leads to a decaying average monopole density n ∼ L_m^{-3} ∼ t^{-3/2}. Experiments [17] that examine monopole-antimonopole annihilation in isolation from strings suggest that these growth laws should hold. However, experiments [7] also reveal that the average monopole density decays more rapidly in the presence of strings, with n ∼ t^{-3}. It appears that in order to account for this observation the theory presented here should be extended to consider the interactions between strings and monopoles [16].

II. MODELS

In this section the O(n) model and the Landau-de Gennes model of nematics are discussed. Since the former model is used as a guide in the treatment of the latter, the theory for ordering kinetics in the O(n) model is also reviewed. Initially, the structural features common to both models are emphasized. In later sections, the technical details specific to the ordering of nematics will be discussed.

A. The O(n) model

In the O(n) model the evolution of the non-conserved, n-component order-parameter field ψ is governed by the time-dependent Ginzburg-Landau (TDGL) equation ∂ ψ/∂t = −δF/δ ψ. (2.1) The free energy F[ ψ] has the form F[ ψ] = ∫ d^d x [ (1/2)(∇ ψ)² + V(ψ) ], (2.2) where the potential V(ψ), expressed in terms of ψ ≡ | ψ|, is O(n) symmetric with a degenerate ground state at non-zero ψ = ψ_0. In this model, as with the nematic liquid crystal, the disordered high-temperature initial state is rendered unstable by a quench to a low temperature where the usual noise term on the right-hand side of (2.1) can be ignored.
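The TDGL dynamics described above can be integrated directly on a grid. Below is a minimal explicit-Euler sketch for a two-component (n = 2) field with the standard quartic potential V = (1 − |ψ|²)²/4; the grid size, time step, and choice of potential are our own illustrative assumptions, and the sanity check is that the free energy decreases under the gradient flow:

```python
import random

def evolve(psi, dt, steps):
    """Explicit Euler integration of d(psi)/dt = lap(psi) + (1 - |psi|^2) psi
    (i.e. -dV/dpsi for V = (1 - |psi|^2)^2 / 4) on a periodic N x N grid;
    psi[i][j] is a 2-component list."""
    n = len(psi)
    for _ in range(steps):
        new = [[[0.0, 0.0] for _ in range(n)] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                mag2 = psi[i][j][0] ** 2 + psi[i][j][1] ** 2
                for a in range(2):
                    lap = (psi[(i + 1) % n][j][a] + psi[(i - 1) % n][j][a]
                           + psi[i][(j + 1) % n][a] + psi[i][(j - 1) % n][a]
                           - 4 * psi[i][j][a])
                    new[i][j][a] = psi[i][j][a] + dt * (lap + (1 - mag2) * psi[i][j][a])
        psi = new
    return psi

def free_energy(psi):
    """F = sum over sites of 0.5 |grad psi|^2 + 0.25 (1 - |psi|^2)^2,
    with periodic forward differences."""
    n = len(psi)
    f = 0.0
    for i in range(n):
        for j in range(n):
            mag2 = psi[i][j][0] ** 2 + psi[i][j][1] ** 2
            f += 0.25 * (1 - mag2) ** 2
            for a in range(2):
                dx = psi[(i + 1) % n][j][a] - psi[i][j][a]
                dy = psi[i][(j + 1) % n][a] - psi[i][j][a]
                f += 0.5 * (dx * dx + dy * dy)
    return f
```

Starting from small random initial conditions (the quench), the field orders locally toward |ψ| = ψ_0 = 1 and the free energy decreases monotonically for a sufficiently small time step.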
Substitution of (2.2) into (2.1) produces the explicit equation of motion ∂ ψ/∂t = ∇² ψ − ∂V/∂ ψ. (2.3) The evolution induced by (2.3) causes ψ to order and assume a distribution that is far from Gaussian. To make analytic progress it is by now standard [1] to introduce a mapping ψ = σ( m) (2.4) between the physical field ψ and an n-component auxiliary field m with analytically tractable statistics. The mapping σ is chosen to reflect the defect structure in the system and satisfies the Euler-Lagrange equation for a defect in equilibrium, ∇²_m σ = ∂V/∂ σ. (2.5) As shown below, (2.5) is also instrumental in treating the non-linear potential term in (2.3). Defects correspond to the non-uniform solutions of (2.5) which match on to the uniform solution far from the defect core. Since only the lowest-energy defects, those with unit topological charge, will survive until late times, the relevant solutions to (2.5) will be of the form [2] σ( m) = A(m) m̂, (2.6) where m = | m| and m̂ = m/m. Thus the magnitude of m represents the distance away from a defect core and its orientation corresponds to the orientation of the order-parameter field at that point. This geometrical interpretation will later be exploited when the generalization of (2.5) is used to choose the appropriate mapping, analogous to (2.6), for string defects in the nematic liquid crystal. The magnitude of m grows as the characteristic defect separation, L(t), becoming large in the late-time, scaling regime. Inserting (2.6) into (2.5) gives an equation for A, the order-parameter profile around a defect [2]: A'' + (n − 1)A'/m − (n − 1)A/m² = V'(A). (2.7) For small m an analysis of (2.7) yields the linear dependence A(m) ∼ m, characteristic of a unit-charge defect [18]. For large m the amplitude A approaches its ordered value A = ψ_0 algebraically, which is a feature common to both the O(n) model and the nematic. The order-parameter correlation function is C( r, t) = ⟨ σ( r, t) · σ(0, t)⟩ = ψ_0² ⟨m̂( r, t) · m̂(0, t)⟩, (2.8) where the last equality holds at late times and to leading order in 1/L.
To evaluate the last average in (2.8) we choose m to be a Gaussian field with zero mean. This Gaussian approximation forms the basis of almost all present analytical treatments of phase-ordering problems, and has had much quantitative success in describing the correlations in these systems [1][2][3]. Theories where m is a non-Gaussian field also exist [19,20]. In the Gaussian approximation the order-parameter correlation function (2.8) can be related to the normalized auxiliary field correlation function f, defined as f( r, t) = ⟨ m( r, t) · m(0, t)⟩ / ⟨ m²⟩. (2.9) The relation is [2,21] C = (n ψ_0² f / 2π) [B(1/2, (n + 1)/2)]² F(1/2, 1/2; (n + 2)/2; f²), (2.10, 2.11) where B is the beta function and F is the hypergeometric function. In the late-time scaling regime the functions F and f can be expressed solely in terms of the scaled length x = r/L(t), so that F = F(x). In this regime the equation of motion (2.3) can be written as a non-linear scaling equation (2.12) for F. In the derivation of (2.12) the relation (2.5) is used to replace the potential term in (2.3), and then the Gaussian identity (2.13) is used to get the last term on the left-hand side of (2.12). The constant µ enters through the definition (2.14) of the scaling length L, which gives the well-known [2,21] growth law L ∼ t^{1/2} for phase-ordering in non-conserved vector systems. Since the auxiliary field m is smooth [15], f is analytic at small x. This implies, through an examination of (2.12) in d spatial dimensions, that at small x the leading non-analytic behaviour of F is x² ln x for n = 2 and x³ for n = 3, the cases relevant to this paper. The non-analytic terms in F reflect the short-distance singularities in the order-parameter field produced by the defects, and lead to the Porod's law [22] power-law decay of the structure factor at large wavenumber. The x² ln x term in (2.15) is characteristic of string (or vortex) defects while the x³ term in (2.16) is due to monopole defects. For large x both F and f decay rapidly to zero. The eigenvalue µ is determined numerically by matching the short- and long-distance behaviours of the solution of (2.12).
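The relation between the order-parameter scaling function and the Gaussian auxiliary-field correlation can be evaluated numerically. The sketch below uses a truncated Gauss hypergeometric series (valid for f < 1 as needed here); the normalization follows the standard Gaussian-closure result, and the transcription of the prefactor is our own:

```python
import math

def hyp2f1(a, b, c, z):
    """Gauss hypergeometric function 2F1 via truncated series,
    adequate for |z| < 1 as used here."""
    s, term = 0.0, 1.0
    for k in range(100000):
        s += term
        term *= (a + k) * (b + k) * z / ((c + k) * (1 + k))
        if abs(term) < 1e-15:
            break
    return s

def beta(x, y):
    """Euler beta function from gamma functions."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def bpt_scaling(f, n=2):
    """Normalized order-parameter correlation C/psi_0^2 as a function
    of the auxiliary-field correlation f, for an n-component field."""
    pre = n * f / (2.0 * math.pi) * beta(0.5, (n + 1) / 2.0) ** 2
    return pre * hyp2f1(0.5, 0.5, (n + 2) / 2.0, f * f)
```

The function vanishes linearly as f → 0 and rises monotonically toward 1 as f → 1, as required of a normalized correlation.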
In this way the auxiliary field correlation function f is determined self-consistently along with F. In contrast, there is no such self-consistency in theories based on the Ohta-Jasnow-Kawasaki (OJK) approximation [10,23]. Values of µ at various n and d for the O(n) model have been determined [2]. The scaling functions F of this theory are in excellent agreement with the results of simulations [1,2].

B. Nematic Liquid Crystals

The order-parameter for a bulk nematic liquid crystal is a traceless, symmetric, 3 × 3 tensor Q_αβ, which measures the anisotropy of physical observables in the nematic phase. The tensor Q_αβ has the general form [24] Q_αβ = A (n_α n_β − δ_αβ/3) + (B/3)(g_α g_β − h_α h_β). (2.17) The unit 3-vectors n̂, ĝ and ĥ form an orthonormal triad. The amplitudes A and B are chosen to be non-negative. A is a measure of the degree of uniaxial order in the liquid crystal; it is zero in the isotropic phase and non-zero in the nematic phase. Biaxiality in the liquid crystal is measured by B. In the uniaxial nematic phase B is zero everywhere except near the string cores. The description of nematics in terms of Q_αβ reduces to the Frank continuum theory of elasticity in terms of a director n̂( r, t) [25] when A is set to its ordered value and B = 0. In the phase-ordering scenario, where defects occur, all of A, B, n̂, ĝ and ĥ are space and time dependent. In the tensor formulation the director, which measures the average local molecular orientation in the nematic, is the unit eigenvector of Q_αβ corresponding to the largest eigenvalue. The unit eigenvectors and associated eigenvalues of Q_αβ are n̂ with eigenvalue 2A/3, ĝ with eigenvalue −A/3 + B/3, and ĥ with eigenvalue −A/3 − B/3. (2.18) Since the nematic is uniaxial, B ≤ 3A and the director can be identified with n̂. The tensor formulation respects the full RP² symmetry of the uniaxial nematic since physical quantities, such as correlations, are written in terms of Q_αβ, which is invariant under the local inversion n̂( r, t) → −n̂( r, t). At a string core B = 3A > 0 and the eigensubspace corresponding to the largest eigenvalue 2A/3 is two-fold degenerate.
Thus in the plane perpendicular to ĥ, the tangent to the string, the orientation of the director is ambiguous. At the isotropic core of a monopole A = B = 0 and all three eigenvalues of Q_αβ are degenerate, so the orientation of the director is completely unspecified. The dynamics of the nematic is governed by the TDGL equation for Q_αβ, with the Lagrange multiplier λ_αβ included to enforce the traceless condition. The Landau-de Gennes free energy (2.20) has the potential (2.21). The coefficient of the quadratic term in (2.21) is chosen to be negative so that the bulk isotropic phase is unstable towards nematic ordering. The gradient term in (2.20) is written within the equal-constant approximation [25]. Substitution of the form (2.17) in (2.21) results in a useful expression (2.22) for the potential as a function of A and B, with the non-linear piece given by (2.23). Static solutions to (2.23) satisfy the Euler-Lagrange equation (2.25), and the order-parameter correlation function is defined in (2.26). Later, through a development that closely parallels that previously given for the O(n) model, it will be shown how (2.25) and (2.27) lead to a scaling equation for order-parameter correlations in the nematic.

III. STRING DEFECTS IN THE NEMATIC

At late times the dominant defects in the bulk nematic are strings with topological charge 1/2. Many of the main features of phase-ordering in the bulk nematic are described by the model containing strings which is presented in Sec. III.B below.

A. Vortices in thin films

To begin, a model applicable to nematic thin films, where the director is constrained to lie in a plane without breaking the n̂ → −n̂ symmetry, is examined. By restricting the director to a plane, the intricacies of how to map the order-parameter tensor onto an auxiliary field when the director rotates by only π about the vortex can be demonstrated without the additional complication of biaxiality, which appears near the string core in bulk samples.
For a uniaxial thin-film nematic the order-parameter is a 2 × 2 traceless symmetric tensor built from the two-component director n̂. In analogy to the theory of the O(2) model, the defects are incorporated through a mapping of the order-parameter tensor onto a two-component auxiliary field. The only defect species present at late times are charge 1/2 point vortices, with the property that the director rotates by only π around the vortex. This property is essential in constructing the mapping. Consider a charge 1/2 vortex at the origin with the typical director configuration n̂ = (cos(φ/2), sin(φ/2)), (3.2) where φ is the polar angle in the x−y plane. For future convenience we write the radial vector in the x−y plane as s and define angles in terms of ŝ through (3.3). The mapping (3.4) of the order-parameter tensor onto s involves P̂_αβ, which has a slightly modified definition from P_αβ (2.24) because Q_αβ is a 2 × 2 tensor. From (2.21) and (3.4) the potential U is given by (3.6). An examination of (3.7) at small s gives A ∼ s, indicative of charge 1/2 vortices [18]. At large s the amplitude A algebraically approaches its ordered value A = 2/3. To treat many such vortices in a phase-ordering context, s in (3.4) is taken to be a Gaussian field s( r, t) with zero mean. As in the O(2) model, s represents the distance to the nearest vortex, growing as the characteristic vortex spacing L_v(t) at late times. However, unlike in the O(2) model, the director is not mapped directly onto s: a 2π rotation of s about a vortex corresponds to a rotation of the director by π. At late times, the amplitude A approaches its ordered value, and from the definition (2.26) and equation (3.4) the order-parameter correlation function is seen to be C( r, t) = ⟨ŝ( r, t) · ŝ(0, t)⟩ (3.9) to leading order in L_v^{-1}. This is just the O(2) correlation function (2.8), and is related to f, the correlation function for the auxiliary field s defined in analogy to (2.9), through (2.10) and (2.11) for n = 2.
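The charge 1/2 geometry of the director configuration (3.2) can be checked numerically: the director flips sign after one full loop around the vortex, while the tensor built from it is single-valued. The sketch below assumes the standard 2D uniaxial form Q = n n^T − I/2 with unit amplitude for illustration (our assumption, not the paper's exact normalization):

```python
import math

def director(phi):
    """Charge 1/2 vortex director: rotates by pi as phi goes 0 -> 2*pi."""
    return (math.cos(phi / 2), math.sin(phi / 2))

def q_tensor(phi):
    """2D uniaxial tensor n n^T - I/2 (amplitude set to 1 for
    illustration); invariant under n -> -n, hence single-valued."""
    nx, ny = director(phi)
    return ((nx * nx - 0.5, nx * ny), (nx * ny, ny * ny - 0.5))
```

This is the content of the n̂ → −n̂ invariance: the director is double-valued around a charge 1/2 vortex, but every physical quantity built from Q is well defined.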
In the scaling regime the equation of motion (2.27) for C( r, t) becomes equation (2.12) for the O(2) scaling function F , expressed in terms of the scaled length x = r/L v (t). The length L v has the same definition as the length L in (2.14), with m replaced by s. The path from (2.27) to (2.12) is similar to that taken in the O(2) case [2]. The Euler-Lagrange equation (3.5) is used to replace P̂ αβ occurring in the last term in (2.27). The resulting expression is evaluated using the Gaussian identity (3.10) for Tr⟨Q( s( r, t)) Q( s(0, t))⟩, analogous to (2.13), and produces the last term on the left-hand side of (2.12). Thus the scaling function F for the order-parameter correlations and the growth law L v (t) ∼ t 1/2 for the nematic thin film are exactly those of the two-dimensional O(2) model. This correspondence, seen in simulations, can be simply understood as a consequence of the mapping of variables φ → 2φ between the two models [5,12]. This isomorphism is relevant to experimental efforts that use constrained nematics to study coarsening in the two-dimensional O(2) models [27], since it indicates that the existence of the local n → −n symmetry does not affect the leading order dynamics in the scaling regime.

B. Strings in the bulk nematic

In addition to the complication of having a director configuration with a charge 1/2 geometry, strings in a bulk nematic have a biaxial core. The form (2.17) for Q αβ contains the biaxiality that is required if an analytical solution to (2.25) is to be found. String defects enter the theory through the mapping of (2.17) onto a two-component auxiliary field. To motivate the form for the mapping, consider the geometry of the director field around a charge 1/2 string defect oriented along the z axis. Since locally the coordinate system can always be chosen so that the string has this geometry, the following development is quite general. The director n̂ is still given by (3.2).
The other members of the orthonormal triad in (2.17) are ĝ and ĥ = ẑ (3.12). With the notation (3.3) for the radial vector s in the x − y plane, the order-parameter tensor (2.17) becomes (3.13). This form for Q αβ is a solution of (2.25) written in terms of s, provided that (3.15) and (3.16) hold, where V (A, B) is given in (2.22). Note that equations (3.15) and (3.16) would be inconsistent had a uniaxial ansatz (B = 0) been assumed at the outset. For the potential (2.22) these equations are degenerate [28] and reduce to a single equation (3.17) for A after the identification B = 1 − A. At small s the solution to (3.17) is given by (3.18) and (3.19), where c is a constant, determined numerically. At large s the solution of (3.17) takes the form (3.20-3.21). As expected, the mapping (3.13) connects the biaxial saddle point on the potential surface V (A, B), representing the string core, to the uniaxial nematic minimum away from the string (see Fig. 3). The linear behaviour in (3.18) and (3.19) at small s is that expected for charge 1/2 strings in the nematic. Both the linear behaviour near the core and the algebraic relaxation (3.20-3.21) to the bulk uniaxial state are seen in the numerical results of [13]. Once again, to examine the statistical properties of the string defect tangle, s is taken to be a Gaussian auxiliary field with zero mean. The magnitude s grows as the characteristic string separation L s (t). Therefore, at late-times, s is large and the biaxial piece of Q αβ , with an amplitude B given by (3.21), is suppressed. This is physically reasonable since biaxiality occurs on length scales around the core size, while the late-time scaling properties are dominated by physics at the much larger scale of L s (t). At late-times, when A ≈ 1, the definition (2.26) and the mapping (3.13) show that the order-parameter correlation function reduces to C( r, t) = ⟨ŝ( r, t) · ŝ(0, t)⟩ (3.22), which is the O(2) correlation function (2.8).
As before, C( r, t) is related to f ( r, t), the normalized correlation function for the auxiliary field s, by relations (2.10) and (2.11) with n = 2. The dynamical equation (2.27) for C( r, t) reduces, in the scaling regime, to (2.12) for F from the three-dimensional O(2) model. Note that the spatial dimensionality enters through the Laplacian operator in (2.12). The scaled length in this case is x = r/L s (t), with L s defined as L in (2.14). The derivation of this correspondence parallels the steps taken in the O(2) model that lead to (2.12). The Euler-Lagrange equation (3.14) enables the non-linear quantity P αβ occurring in the last term of (2.27) to be replaced by ∇ 2 s Q αβ . The resulting average is then evaluated using (3.10) and produces the last term on the left-hand side of (2.12). The single-length scaling result L s ∼ t 1/2 is recovered for the phase ordering of the bulk nematic. In Fig. 1 the theoretical results for F in the three-dimensional O(2) model [2] and the F determined in cell-dynamical simulations of the bulk nematic [5] are compared. The agreement between the two is excellent. At short-scaled distances F has the form (2.15), which is also seen in the simulations and is an indication that string defects are the dominant disordering agent in the bulk nematic. The theory is now structured so that many well-established phase-ordering results for the O(2) model [2,14] can be applied to the bulk nematic. In particular, the string line density η is related to the auxiliary field s, whose zeros locate the positions of the strings, through [14,16,29] η = ⟨δ( s)| ω|⟩ (3.23), where ω, the tangent to the string, points in the direction of positive winding number. The calculation performed in Appendix A shows that the average line density of string obeys η ∼ L −2 s ∼ t −1 for late-times, in accord with experiments [7]. IV.
MONOPOLES IN THE BULK NEMATIC

To address experiments that are designed to produce copious amounts of monopoles at the quench [17], a theory for the ordering kinetics of bulk nematics is considered in which monopoles appear. The model consists of mapping the director n̂ near a monopole directly onto a three-component Gaussian auxiliary field m via n̂ = m̂. Thus the order-parameter is given by (4.1). Since the isotropic monopole core can be connected to the nematic minimum along the B = 0 line on the potential surface (Fig. 3), a biaxial piece does not appear in (4.1). Equation (4.1) is a solution provided the amplitude A satisfies (4.3). A similar result was obtained in [30] for equilibrium. For small m, (4.3) indicates that A ∼ m 2 , while for large m the amplitude A algebraically approaches its ordered value of 1. The m 2 dependence at small m indicates that (4.1) describes charge 1 monopoles in the nematic [18]. This is also evident geometrically, since m (and thus n̂) is a radial vector field near the monopole. At late-times, using (4.1) with A ≈ 1, the order-parameter correlation function (2.26) is given by (4.4). In contrast to the string models considered earlier, the expression (4.4) for the order-parameter correlation function in the monopole model is new. The Gaussian average in (4.4) is computed in Appendix B. In the late-time scaling regime C( r, t) can be written in terms of the scaled length x = r/L m (t), where L m (t) is the typical monopole separation. Thus C( r, t) = F (x), with F given by (4.5). The auxiliary field correlation function f is defined in (2.9). The scaling function F satisfies the scaling equation (2.12) with L m (t) ∼ t 1/2 . The development of this result closely parallels that of the string case considered earlier. The only difference between the scaling results for this model and those for the O(3) model is that the relation between F and f is (4.5) instead of (2.11). Since m is smooth, f has a power series expansion that is analytic at small x.
By using this expansion in (2.12) and (4.5), the small-x behaviour of F is found to be (4.6). The non-analytic x 3 term in F , also found in the O(3) model (2.16), is due to the presence of point monopole defects. Using a fourth-order Runge-Kutta scheme, the non-linear eigenvalue problem represented by (2.12) and (4.5) is solved for d = 3. The eigenvalue is µ = 1.27306 . . ., which differs from the value µ = 0.5558 . . . for the O(3) model [2]. The function F is plotted in Fig. 2 along with the scaling function for order-parameter correlations in the three-dimensional O(3) model. Fig. 2 also compares the cell-dynamical simulation data for the bulk nematic [5] to the function F , equation (4.5). The function F does not describe the simulation data as well as the string model, showing deviations at short distances. These deviations are expected, since the structure of the theory at short distances (4.6) represents the wrong defects (monopoles) instead of the correct ones (strings). Since the zeros of m locate the monopole cores, the monopole number density n can be expressed in terms of the auxiliary field m [14] as n = ⟨δ( m) |det(∂m µ /∂r ν )|⟩, where the quantity between the absolute value signs is the Jacobian for the transformation from real space coordinates to auxiliary field variables. From the development in [14], the average monopole number density obeys n ∼ L −3 m ∼ t −3/2 . This result holds only for monopole annihilation in the absence of strings, the case considered in this section. In the experiments of Chuang et al. [7], where monopole annihilation occurred in the presence of strings, the average monopole density was observed to decay faster, with n ∼ t −3 .

V. DISCUSSION

The dominant scaling behaviour observed during ordering in the bulk nematic is well-described by the theory presented here, in which string defects are the major disordering agents. The growth law L s ∼ t 1/2 is recovered, leading to an average string line density η that decays as η ∼ t −1 , as seen in experiments [7].
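The eigenvalue µ quoted above comes from solving a non-linear eigenvalue problem with a fourth-order Runge-Kutta scheme. The sketch below illustrates the same shooting technique on a deliberately simple toy problem (y'' = −λy with y(0) = y(π) = 0, smallest eigenvalue λ = 1), not the nematic scaling equation itself; the step count and bisection bracket are arbitrary choices.

```python
import math

def rk4_shoot(lam, n_steps=2000):
    # Integrate y'' = -lam * y from x = 0 to x = pi with y(0) = 0, y'(0) = 1
    # using classical fourth-order Runge-Kutta; return y(pi).
    h = math.pi / n_steps
    y, yp = 0.0, 1.0
    f = lambda y, yp: (yp, -lam * y)  # first-order system (y, y')
    for _ in range(n_steps):
        k1 = f(y, yp)
        k2 = f(y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        yp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return y

# Shoot: bisect on the eigenvalue until the far boundary condition
# y(pi) = 0 is satisfied.  y(pi) > 0 for lam = 0.5 and < 0 for lam = 1.5.
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rk4_shoot(mid) > 0.0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(round(lam, 6))  # close to the exact eigenvalue lam = 1
```

In the paper's problem the same loop structure applies, but the ODE is the scaling equation (2.12) with the non-linear relation (4.5) between F and f, and the shooting parameter is µ.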
The theoretically determined scaling form for order-parameter correlations in the bulk nematic is shown analytically to be exactly that for the three-dimensional O(2) model [2], and this is in excellent agreement with the simulation results [5] (Fig. 1). This paper addresses the issue of biaxiality near the string cores and demonstrates that it is irrelevant to the leading order scaling properties of the system. However, the theory is capable of being extended into the pre-scaling regime, where biaxiality may play a role in the dynamics. The major accomplishment of this work is the explicit demonstration of the isomorphism between the late-stage ordering in the bulk nematic and the late-stage ordering in the three-dimensional O(2) model, within the Gaussian approximation. It is shown that, in the scaling regime, the order-parameter equations of motion for the O(2) model (2.1) and the bulk nematic (2.19) produce the same scaling equation (2.12) for the correlation function. The essential element in the present theory, which was missing in earlier theories [10], is the mapping (3.13), which explicitly includes string defects and makes a direct connection with the O(2) model. As a consequence, results for the O(2) model, such as string and vortex density correlations [14,31] or conservation laws involving string densities [32], can be directly applied to the bulk nematic. This paper also presents a model for bulk nematics in which monopoles appear. The model is applicable to situations where monopole-antimonopole annihilations occur in isolation from string defects. Such scenarios have been realized experimentally [17], and the data is suggestive of the growth law L m ∼ t 1/2 predicted by the theory. However, to properly treat monopole dynamics in the presence of strings, theories that include interactions between string and monopole defects are required. This interesting aspect of the problem is under current investigation [16].
ACKNOWLEDGMENTS

The author thanks Gene Mazenko for guidance and for many stimulating discussions. The author also benefited from discussions with Alan Bray, Andrew Rutenberg, Bernard Yurke, and Martin Zapotocky. The simulation data shown in this paper was graciously provided by Rob Blundell. Support from the NSERC of Canada is gratefully acknowledged. This work was supported in part by the MRSEC Program of the National Science Foundation under Award Number DMR-9400379.

APPENDIX A:

The average line density of string is expressed as a Gaussian average (A5) involving the one-point reduced probability distribution G(ξ). The Gaussian average in (A5) is straightforward to evaluate by first writing the δ-functions in the integral representation and then performing the resulting standard Gaussian integrals. One finds

G(ξ) = (2π S_0(t))^{-n/2} (2π S^{(2)})^{-n(n+1)/2} \exp\Big[-\sum_{\mu=1}^{n+1}\sum_{\nu=1}^{n} \frac{(\xi_\mu^\nu)^2}{2 S^{(2)}}\Big]  (A6)

with the definition S_0(t) = (1/n)⟨[ s(0, t)]^2⟩ (A7). In this theory S^{(2)} = 1/(n + 1) [16]. Substitution of (A6) in (A3) produces the final form for the average line density of string, with the n-dependent constant C_n defined as C_n = π^{-n(n+1)/2} ⟨∏_{µ=1}^{n+1} ∏_{ν=1}^{n} …⟩. For n = 2 it can be shown that C_2 = 1 [16]. Since S_0(t) ∼ t at late-times, the average line density of string scales like η ∼ S_0(t)^{-n/2}. In particular, for n = 2, η ∼ t^{-1}.

APPENDIX B:

This appendix outlines the evaluation of the average A = ⟨[m̂( r, t) · m̂(0, t)]^2⟩ (B1) appearing in the correlation function (4.4) for the monopole model. For an n-component Gaussian m field, the average A can be written in the integral form in terms of the two-point reduced probability distribution [2]

Φ( x_1 , x_2 ) = (γ/2π)^n \exp\Big[-\frac{γ^2}{2}\big( x_1^2 + x_2^2 − 2 f\, x_1 · x_2 \big)\Big],

where the auxiliary field correlation function f is defined in (2.9) and γ = 1/(1 − f^2)^{1/2}. The integrals over x_1 and x_2 in (B5) are readily done. After differentiating with respect to λ and setting λ = 1, the integral over r_1 is performed. After a change of variables, y = (r_2)^2, the following integrals remain: 1/(γ^2 f^2) (F[1, 1; n/2; f^2] − 1) − 1 for n > 2.
(B9) In particular, for n = 3, equation (B9) gives a closed-form result which leads to (4.5) for F in the nematic with monopoles.
Intrahepatic Pseudoaneurysms Complicating Transjugular Liver Biopsy in Liver Transplantation Patients: Three Case Reports

Transjugular liver biopsy (TJLB) is an accepted alternative method of obtaining hepatic tissue for pathologic diagnosis in patients with parenchymal liver disease for whom conventional percutaneous transhepatic liver biopsy is contraindicated due to coagulopathy, a large amount of ascites, or massive obesity (1, 2). Transjugular liver biopsy is also indicated in patients with liver function deterioration following liver transplant and in patients with congenital clotting disorders (1, 2). There are generally few complications following TJLB; the liver-puncture-related major complication rate was reported to be 0.2% in a review of 64 published reports involving 7,493 adults who had undergone TJLBs (2). We have treated 3 patients with intrahepatic pseudoaneurysms following 503 TJLBs in 320 liver transplant recipients since 2005, and report our experience in managing intrahepatic pseudoaneurysms following TJLB.
An intrahepatic pseudoaneurysm is a rare complication following transjugular liver biopsy. Transarterial embolization is considered a safe and effective treatment for pseudoaneurysms. Herein we report three cases of intrahepatic pseudoaneurysms following transjugular liver biopsies. The three pseudoaneurysms were managed by the following methods: transarterial embolization, percutaneous transhepatic embolization, and close observation.

Index words: Liver Transplantation; Aneurysm

Case 1

A 34-year-old man who had undergone right lobe living donor liver transplantation (LDLT) 24 days previously was referred to our department for TJLB because of deteriorating liver function. The laboratory data were as follows: serum aspartate aminotransferase, 297 IU/L; alanine transaminase, 497 IU/L; total bilirubin, 8.7 mg/dL; hemoglobin, 10.4 g/dL; platelet count, 243 × 10 3 /mm 3 ; and international normalized ratio (INR), 1.02. Although coagulation function was normal, the clinician in charge preferred TJLB to percutaneous transhepatic biopsy in order to avoid the risk of post-biopsy bleeding. A pre-TJLB CT revealed no abnormal findings in the liver graft. A TJLB (5 passes) using an 18-gauge Quick-Core biopsy needle (Cook, Bjaeverskov, Denmark) was performed uneventfully in the right hepatic vein. A pathology examination revealed severe canalicular and ductular cholestasis without evidence of rejection. The liver function gradually improved; however, the hemoglobin level fluctuated between 7.9 and 8.9 g/dL from 4 days post-TJLB without evidence of internal bleeding. The usual post-LDLT follow-up CT was performed 7 days after TJLB, and a 12-mm-dimension pseudoaneurysm with a focal arterioportal shunt was noted in the inferior portion of the right hepatic vein (Figs.
1A, B). Transarterial embolization of the pseudoaneurysm was planned because of the high probability of rupture. Arteriography showed a pseudoaneurysm with an arterioportal shunt in one peripheral branch of the posterosuperior intrahepatic artery (Figs. 1C, D). The branch was then embolized through a microcatheter using a mixture (1:1; < 1 ml) of N-butyl cyanoacrylate (B. Braun, Melsungen, Germany) and lipiodol. A post-embolization arteriogram revealed that the pseudoaneurysm and arterioportal shunt were occluded (Fig. 1E). The hemoglobin level was restored following embolization without liver function deterioration, and the patient remains healthy.

Case 2

A pre-TJLB CT revealed no abnormal findings in the liver graft. A TJLB (six passes) in the right hepatic vein was successfully performed in an effort to determine the presence of acute rejection or hepatitis C reactivation. A pathology examination demonstrated favorable hepatitis C reactivation, and ribavirin (Robavin; Shinpoong, Ansan, Korea) was started. A follow-up CT was performed 20 days after the TJLB due to an elevated serum bilirubin level, and a 12-mm-dimension pseudoaneurysm was found in the inferior portion of the right hepatic vein. As the patient did not have any clinical signs or symptoms related to the presence of the pseudoaneurysm, we continued close observation, as we expected spontaneous thrombosis of the pseudoaneurysm. However, the post-TJLB, 27-day follow-up CT revealed the continued presence of the pseudoaneurysm, which had slightly increased in size (Fig.
2A). Owing to the risk of worsening of the pseudoaneurysm with rupture, hepatic arteriography was immediately performed to embolize the pseudoaneurysm. The arteriogram revealed that the pseudoaneurysm arose from a branch of the posterosuperior intrahepatic artery (Figs. 2B, C). However, super-selection of the branch using a microcatheter failed because of the small arterial diameter and acute angulation. As proximal embolization of the posterosuperior intrahepatic artery might have induced further deterioration of liver function if there were a large area of liver ischemia, percutaneous transhepatic puncture of the pseudoaneurysm using a 22-G Chiba needle was performed under fluoroscopy and ultrasonography guidance (Fig. 2D). One milliliter of thrombin (500 IU/mL; Reyon Pharmaceutical Co., LTD., Seoul, Korea) was then injected into the pseudoaneurysm. Completion hepatic arteriography (Fig. 1E) and CT obtained 1 day after thrombin injection showed disappearance of the pseudoaneurysm. The patient was discharged 3 days following the procedure with stable liver function; however, she died of chronic rejection 7 months later.

Case 3

A 32-year-old woman was referred to our department for TJLB and evaluation of hepatic venous outflow due to hyperbilirubinemia of unknown origin and ascites. She had undergone right-lobe LDLT 46 days previously. The laboratory data were as follows: serum aspartate aminotransferase, 48 IU/L; alanine transaminase, 62 IU/L; total bilirubin, 8. A pre-TJLB CT revealed no abnormal findings in the liver graft. A TJLB (three passes) and stent placement in the right hepatic vein were performed uneventfully. A pathology examination demonstrated severe cholestasis with mild centrilobular hepatocellular degeneration without evidence of rejection. A CT obtained 4 days after the TJLB to evaluate hepatic vein status revealed an 11-mm pseudoaneurysm in the anteroinferior portion of the right hepatic vein (Fig.
3A). As the patient did not have any clinical signs related to the presence of the pseudoaneurysm, close observation was performed. The 4-day follow-up CT (Fig. 3B) revealed partial thrombosis of the pseudoaneurysm, and the 18-day follow-up CT (Fig. 3C) revealed complete thrombosis of the pseudoaneurysm. The patient had an uneventful recovery and remains healthy.

Discussion

A TJLB reduces the risk of hemorrhage, as hepatic tissue is acquired through the hepatic vein, thereby avoiding any potential liver capsule damage and allowing associated bleeding to drain into the hepatic vein (3). However, various major complications, including extracapsular hemorrhage, hemobilia, and intrahepatic pseudoaneurysms, may occur following TJLB. The first two complications are usually detected within the first several hours following TJLB with serious bleeding. However, a pseudoaneurysm may not be detected until it ruptures or until follow-up CT or US demonstrates a pseudoaneurysm (4, 5). A hemoperitoneum and liver hematoma associated with a pseudoaneurysm 13 days following TJLB have been reported (5). Therefore, aggressive treatment is usually considered for intrahepatic pseudoaneurysms because of the risk of rupture, even when patients are hemodynamically stable (4, 6). Transarterial embolization of the bleeding site or pseudoaneurysm has been shown to be a safe and effective treatment for these arterial complications (4-8). However, transarterial embolization may induce liver function deterioration from liver ischemia during the early post-LDLT period, as hepatic arterial flow is important for regeneration of the engrafted liver (9). In addition, it may be technically impossible to selectively embolize only the end feeder artery if there is marked redundancy and tortuosity of the hepatic artery (9).
Therefore, we performed transarterial embolization immediately after the diagnosis of a pseudoaneurysm in only one patient, who had a clinical suspicion of internal bleeding. In the remaining two patients, our first choice for managing the pseudoaneurysms was close observation, as they did not have any clinical signs related to the pseudoaneurysm and there was a risk of liver ischemia after transarterial embolization. As the pseudoaneurysm in one patient disappeared with spontaneous thrombosis, we assume that close observation may be an effective alternative for managing a pseudoaneurysm if it is relatively small in size and without rupture. In the other patient, we treated the pseudoaneurysm with percutaneous thrombin injection, a well-documented procedure for treating femoral artery pseudoaneurysms (10). However, percutaneous thrombin injection is not well-established for treating intrahepatic pseudoaneurysms, probably due to the risk of formation of another pseudoaneurysm or bleeding. We identified only one case report in which an intrahepatic pseudoaneurysm was treated using percutaneous transhepatic embolization with thrombin and coils in a liver transplant patient following failed selective intraarterial embolization. In our patient (case 2), we assumed that percutaneous thrombin injection might be preferable to transarterial embolization in order to avoid creating a large area of liver ischemia. As the patient's pseudoaneurysm was successfully treated without sequelae following the procedure, we assume that percutaneous thrombin injection is another safe alternative for treating an intrahepatic pseudoaneurysm if transarterial embolization is difficult or contraindicated.
In summary, pseudoaneurysms may occur following TJLB and may be asymptomatic. Therefore, there should be a high index of suspicion regarding the area around the biopsy site following a TJLB. Although transarterial embolization is an established and relatively safe and effective method for treating pseudoaneurysms, close observation and percutaneous transhepatic thrombin injection may be other successful therapeutic options.

Seung-Won Jang, et al: Intrahepatic Pseudoaneurysms Complicating Transjugular Liver Biopsy in Liver Transplantation Patients

Fig. 1. A, B. Arterial phase axial (A) and delayed phase coronal (B) CT images show a pseudoaneurysm (arrow) in the graft liver. C, D. Arteriogram reveals the pseudoaneurysm (arrow) with an arterioportal shunt (arrowhead) from one peripheral branch of the posterosuperior intrahepatic artery. E. Post-embolization arteriogram shows that the pseudoaneurysm has disappeared. F. Enhanced CT obtained 1 week following embolization shows that the pseudoaneurysm has disappeared (arrow), and lipiodol is taken up in the pseudoaneurysm.

Fig. 3. A. Contrast-enhanced CT shows a pseudoaneurysm (arrow) in the graft liver. B, C. Four-day (B) and 18-day (C) follow-up CT images show gradual thrombosis of the pseudoaneurysm (arrow).
An international, interlaboratory ring trial confirms the feasibility of an extraction-less “direct” RT-qPCR method for reliable detection of SARS-CoV-2 RNA in clinical samples

Reverse transcription–quantitative polymerase chain reaction (RT-qPCR) is used worldwide to test and trace the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). “Extraction-less” or “direct” real time–reverse transcription polymerase chain reaction (RT-PCR) is a transparent and accessible qualitative method for SARS-CoV-2 detection from nasopharyngeal or oral pharyngeal samples with the potential to generate actionable data more quickly, at a lower cost, and with fewer experimental resources than full RT-qPCR. This study engaged 10 global testing sites, including laboratories currently experiencing testing limitations due to reagent or equipment shortages, in an international interlaboratory ring trial. Participating laboratories were provided a common protocol, common reagents, aliquots of identical pooled clinical samples, and purified nucleic acids, and used their existing in-house equipment. We observed 100% concordance across laboratories in the correct identification of all positive and negative samples, with highly similar cycle threshold values. The test also performed well when applied to locally collected patient nasopharyngeal samples, provided the viral transport media did not contain charcoal or guanidine, both of which appeared to potently inhibit the RT-PCR reaction. Our results suggest that direct RT-PCR assay methods can be clearly translated across sites utilizing readily available equipment and expertise and are thus a feasible option for more efficient COVID-19 testing as demanded by the continuing pandemic.

Introduction

The global coronavirus disease (COVID-19) pandemic response depends on effective rollout of recently approved vaccines and the use of nonpharmaceutical interventions to slow the spread of the disease.
Physical distancing supported by test-and-trace informed containment strategies has been promoted worldwide [1]. The effectiveness of testing as a containment strategy requires the implementation of accessible, affordable, reliable, and rapidly executable test methods that can meet the rapid pace of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission [2][3][4]. At present, this goal remains largely unmet. The majority of regional and national health laboratories around the world rely on reverse transcription-quantitative polymerase chain reaction (RT-qPCR) SARS-CoV-2 virologic testing methods such as those developed by the World Health Organization and the U.S. Centers for Disease Control and Prevention (CDC) to support their public health programs [5,6]. The methods themselves are robust and have proven to be useful standards for detection and reporting. However, sample processing time and a lack of supplies to support extraction as required to run this type of assay have resulted in widely reported backlogs and shortages in the United States and around the world [7]. In regions that also suffer from systemic financial and logistical challenges (e.g., Africa, the Caribbean, and South America), these hurdles will continue to consistently impair reliable procurement of consumables, support for staffing, and thus testing viability [8,9]. Although the diversity and efficiency of commercial virologic and serologic test methods expands weekly, most public health laboratories lack the resources (human and capital) or remit to pivot to novel commercial methods. To address these challenges, the nonprofit Health and Environmental Science Institute (HESI) convened an international network of public and academic COVID-19 testing laboratories-the Propagate Network-with the goal of collectively evaluating and disseminating practical, efficient, and impactful transparent and accessible methods for SARS-CoV-2 detection. 
The Propagate Network and others have identified “extraction-less” or “direct” real time–reverse transcription polymerase chain reaction (RT-PCR) as a transparent and accessible qualitative method for SARS-CoV-2 detection from nasopharyngeal samples, with the potential to generate actionable data more quickly, at a lower cost, and with fewer experimental resources than full RT-qPCR [10,11]. The method allows for detection of SARS-CoV-2 viral ribonucleic acid (RNA) with the omission of the most labor-intensive step, the RNA extraction step, and its associated extraction reagents. Published intralaboratory studies indicate that the technique is internally reproducible (with some loss of sensitivity compared to standard RT-PCR) and is effective in detecting both true negatives and positives. Notably, direct RT-qPCR remains sufficiently sensitive to detect viral RNA from patients most likely to be infectious (cycle threshold [Ct] < 33) [12][13][14][15]. The major goal of this Propagate Network study was to determine the practical utility of a transparent and accessible direct RT-PCR assay [10] via an international, interlaboratory ring trial. The study engaged 10 global sites, including laboratories currently experiencing many of the testing limitations described above, in a series of studies involving a common protocol, common reagents, aliquots of identical pooled clinical samples, and purified nucleic acids, using their existing in-house equipment. Our results suggest that transparent and accessible direct RT-PCR assays are a feasible option for more efficient COVID-19 testing as demanded by the growing pandemic.

Participants

The Propagate Network study was coordinated and partially funded by the international nonprofit HESI as part of its global public health mission and via voluntary contributions of time and effort from the participating partners.
Special acknowledgment is given to the University of Washington Virology Laboratory (UWVL) for their efforts to prepare and ship samples for this study and to the University of Vermont Larner School of Medicine for their support in refining the study protocols and recruiting partner laboratories. Ten laboratories were recruited to participate in the trial for the detection of SARS-CoV-2 RNA from patient nasopharyngeal swabs without RNA extraction using kits provided by UWVL (Table 1). Laboratories participated voluntarily and were not offered any compensation for their participation. Due to logistical shipping challenges, brought on in large part by the pandemic, samples could not be sent to Malawi or Nigeria, underscoring the hardships some areas face when testing relies on reagents or materials from other countries.

Ethical statement

Use of the samples was determined to be exempt under UW institutional guidelines because they were de-identified and pooled prior to inclusion in the test kits, and therefore were not considered human subjects because they contained no individually identifiable material. For Project C, participating laboratories sought the locally appropriate review and permissions for use of de-identified clinical samples as described below.

Study design

The Propagate ring trial consisted of three components. Projects A and B engaged participant laboratories in the analysis of pooled samples disseminated from the lead laboratory (UWVL) for the purpose of evaluating the cross-laboratory performance of the direct method with parallel (identical) samples. Project C characterized the feasibility of the direct method as applied to locally sourced samples collected as part of regional public health testing efforts (Fig 1). All laboratories were invited to participate in Projects A, B, and C. Logistical challenges due to COVID-related shipping restrictions prohibited the involvement of the Malawi and Nigeria laboratories in Projects A and B.
Project A. To confirm that all reagents arrived safely and that every laboratory could perform the direct RT-PCR method, each laboratory tested a set of eight nucleic acid samples purified from patients with COVID-19 and supplied by UWVL, which included six blinded samples (three positive and three negative), one sample identified as positive, and one sample identified as negative to serve as controls, plus a laboratory-supplied no-template water control. Each laboratory reviewed the results of Project A with the study coordinator to confirm that they had correctly identified 100% of the positive and negative blinded samples before proceeding.

Project B. Each laboratory then tested a set of 34 samples supplied by UWVL, including 30 blinded samples (25 positive and 5 negative) as well as 2 identified positive and 2 identified negative samples as controls, plus a laboratory-supplied no-template water control. Results for all Project B samples were shared with the study coordinator.

Project C. When and where possible, laboratories selected known-positive and known-negative, locally collected clinical samples and tested each by both their standard extraction method and the direct RT-PCR method. Results for all Project C samples were shared with the study coordinator.

Heat inactivation

To validate whether a 10-min heating step would inactivate infectious SARS-CoV-2 virus present in clinical samples, the University of Vermont team incubated high-titer stocks of authentic SARS-CoV-2 at 95˚C (in a heat block) or room temperature in 1.5-ml Eppendorf tubes for 10 min, spun samples briefly in a microcentrifuge, then measured the infectious units remaining in the heat-treated samples versus the untreated controls by immunofocus assay [16].
The stock virus (strain 2019-nCoV/USA_USA-WA1/2020 [WA1]) was graciously provided by Kenneth Plante and the World Reference Center for Emerging Viruses and Arboviruses at the University of Texas Medical Branch.

Sample preparation

Identical kits were prepared by UWVL and sent on dry ice to each participating laboratory. These kits included reagents for the three possible projects (Fig 1): blinded purified total nucleic acid samples plus positive and negative controls (Project A); blinded unpurified patient samples plus positive and negative controls (Project B); and sufficient enzyme, buffer, primers, and probes to test all samples for Projects A and B, as well as local patient samples (Project C). To make positive and negative control samples for the kits, nasopharyngeal patient samples in viral transport media were gathered from UWVL's clinical specimen collection. Three positive samples with a high concentration of SARS-CoV-2 viral RNA (Ct ~15) were pooled together and then diluted 1:32 in a pool of negative samples. Aliquots of positive pooled samples (Ct ~20) and of negative pooled samples were subjected to RNA extraction using a MagNA Pure LC (Roche) to generate purified positive and negative total nucleic acid for Project A. A total of four 50-μl aliquots each of positive and negative material were included in each kit for Project A: one identified positive and one identified negative control, and six blinded samples. The unpurified positive pool was diluted further in the negative pool to generate samples at 18 specific expected Ct values ranging from 21 to 32 for Project B. One 50-μl aliquot each of 11 of these samples, and two 50-μl aliquots each of seven of the samples (expected Ct 28-31), were included in the kit, for a total of 25 positive samples. Two identified negative and two identified positive aliquots (Ct ~21) were included as controls in each kit for Project B.
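The dilution arithmetic above follows from the exponential nature of PCR: under ideal (100%) amplification efficiency the template doubles each cycle, so an n-fold dilution shifts Ct up by log2(n), and a 1:32 dilution of a Ct ~15 pool lands near Ct ~20. A minimal back-of-envelope sketch (the function name and the idealized-efficiency assumption are ours, not part of the protocol):

```python
import math

def expected_ct_after_dilution(ct_initial, dilution_factor, efficiency=1.0):
    """Expected Ct after an n-fold dilution, assuming the template is
    multiplied by (1 + efficiency) each cycle (doubling at efficiency=1)."""
    return ct_initial + math.log(dilution_factor, 1 + efficiency)

# 1:32 dilution of a Ct ~15 pool -> Ct ~20, as in the kit preparation
print(expected_ct_after_dilution(15, 32))  # 20.0
```

Real assays run somewhat below 100% efficiency, so observed shifts can be slightly larger than this idealized estimate.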
A random number generator was used to determine the order of blinded samples within the kits. All samples were tested in parallel by the direct RT-PCR method and by MagNA Pure LC nucleic acid extraction followed by RT-PCR at UWVL to confirm the negative samples and the Ct range of the positive samples.

Testing method

The direct RT-PCR method is described in Bruce et al. [10]. Briefly, 20 μl of each sample was heat treated for 10 min at 95˚C, then vortexed and spun down. A Master Mix was made by combining 7 μl of water, 12.5 μl of buffer mix, 1.5 μl of primer/probe mix (IDT), and 1 μl of AgPath-ID enzyme (ThermoFisher) per reaction. In either 96-well optical PCR plates or optical strip tubes, 22 μl of Master Mix and 3 μl of heat-treated sample were added to each well or tube. All manipulations of clinical samples (transfer for heat inactivation as well as loading of the RT-PCR plate) were performed in a class IIA biosafety cabinet following biosafety level 2 practices. The plates or tubes were then covered with an optical adhesive cover or caps and spun down at 1000 rpm for 1 min. The RT-PCR reaction consisted of 10 min at 48˚C for reverse transcription, 10 min at 95˚C, and 40 cycles of 95˚C for 15 s followed by 60˚C for 45 s, with fluorescence measured at the end of each cycle. All samples were tested in duplicate, with water controls on each plate. Reactions to measure the SARS-CoV-2 N gene (using CDC N2 primers and FAM-labeled probe) and the human RNase P gene (using CDC RP primers and FAM-labeled probe) were carried out for each sample in parallel.

Data collection and analysis

For each sample, a mean Ct value was computed by averaging the individual Ct values from all laboratories. A Ct value residual (for a given laboratory and sample) was defined as the individual Ct value minus the associated mean Ct value. For data visualization, individual Ct values and residual Ct values were plotted against mean Ct values.
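The residual computation just described is straightforward to express in code. The sketch below uses invented Ct values purely for illustration; none of these numbers come from the study:

```python
# Residuals as defined above: a laboratory's Ct for a sample minus the
# cross-laboratory mean Ct for that sample. Values are illustrative only.
ct_values = {
    "sample_1": {"lab_1": 21.0, "lab_2": 22.0, "lab_3": 20.0},
    "sample_2": {"lab_1": 28.5, "lab_2": 29.5, "lab_3": 27.5},
}

residuals = {}
for sample, by_lab in ct_values.items():
    mean_ct = sum(by_lab.values()) / len(by_lab)
    for lab, ct in by_lab.items():
        residuals.setdefault(lab, []).append(ct - mean_ct)

# A laboratory whose residuals centre below zero reports systematically
# lower Ct values than the group average.
mean_residual = {lab: sum(r) / len(r) for lab, r in residuals.items()}
print(mean_residual)
```

Plotting each laboratory's residuals against the mean Ct, as the study does, then shows whether any offset depends on viral load.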
Assay specificity and sensitivity were evaluated using the negative and positive blinded samples.

Heat inactivation

High-titer stocks of SARS-CoV-2 were treated at 95˚C for 10 min. The stock virus had a titer of >10⁶ focus forming units (FFUs) per milliliter. After heat treatment there was more than a 5-log drop, with no detectable foci after 10 min at 95˚C (Fig 2).

Project A: Laboratory qualification

All participating laboratories correctly identified 100% of the positive (n = 3) and negative (n = 3) blinded samples sent for the purposes of confirming that the samples arrived safely and that the laboratory was able to run the direct RT-PCR method.

Project B: Interlaboratory agreement using blinded samples

Qualitative agreement between laboratories. The most critical performance measures of a SARS-CoV-2 test are its sensitivity and specificity (simply put, the ability to accurately distinguish the presence versus absence of the viral RNA) and the consistency of its performance across laboratories. As an initial approach, assay specificity and sensitivity were evaluated using 5 known-negative and 25 known-positive samples that were tested in a blinded fashion by the 10 laboratories. For the five negative samples, a total of 50 values were reported by the laboratories, all of which were reported as negative for virus. Thus, the assay demonstrated consistently high (100%) specificity across the laboratories for negative samples (Table 2). For the 25 positive samples, the 10 laboratories reported a total of 250 Ct values, all but one of which were reported as positive. Thus, all 10 laboratories were able to correctly detect the virus in 24 of 25 samples, and 9 of 10 laboratories were able to correctly detect the virus in all 25 samples, yielding consistently high [99.6% = (249/250) × 100%] sensitivity across the 10 laboratories for positive samples.
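The headline figures above are the standard sensitivity and specificity formulas applied to the pooled Project B counts. A quick sketch to make the arithmetic explicit (the function names are ours):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of truly positive results that were called positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of truly negative results that were called negative."""
    return true_neg / (true_neg + false_pos)

# Project B, pooled across the 10 laboratories: 250 results on positive
# samples (249 called positive) and 50 results on negatives (all negative).
print(f"sensitivity = {sensitivity(249, 1):.1%}")   # 99.6%
print(f"specificity = {specificity(50, 0):.1%}")    # 100.0%
```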
Quantitative agreement between laboratories. In addition to providing a qualitative determination of the presence versus absence of virus, RT-PCR tests for SARS-CoV-2 can provide additional value by reporting their Ct value, which serves as a proxy for the amount of viral RNA present. We therefore investigated the Ct values reported for the blinded positive samples tested by the participating laboratories. In general, the agreement between Ct values from different laboratories was good, with tighter agreement at lower average Ct (higher viral loads) than at higher average Ct (lower viral loads; Fig 3). We then evaluated the overall quantitative performance of each individual laboratory against the sample-specific average Ct value as determined by all 10 laboratories. The Ct value residual for a given laboratory and sample was defined as the Ct value for the corresponding laboratory and sample minus the sample-specific average Ct value; the narrower their distribution within a laboratory, the more consistent the relationship of that laboratory's Ct values with the average Ct value from all laboratories. Residual Ct values had overall similar variability across samples and were minimally affected by the actual viral load (Fig 4). The residuals appeared to be centered around zero for most laboratories (Fig 5), with the exception of laboratories 3 and 7, for which residuals appeared systematically negative (indicative of Ct values consistently lower than average), and laboratory 9, in which residuals tended to be positive (indicative of Ct values consistently higher than average). There was no evidence that the assay had lower sensitivity in that laboratory.

Project C: Application of direct RT-PCR on locally collected clinical samples

Seven Propagate partner laboratories conducted side-by-side comparisons of direct RT-PCR and extraction RT-PCR on clinical samples collected from their regions.
For four laboratories (2, 7, 8, and 10), these studies demonstrated average losses of sensitivity of between 1.5 and 3.8 cycles in RP Ct value (Fig 6A; Table 3) and between 2.6 and 4.8 cycles in N2 Ct value (Fig 6B; Table 3) for direct RT-PCR compared with extraction RT-PCR. For RP, this resulted in no failure to detect any sample from any of the four laboratories. For samples in which N2 was detectable by both direct and extraction RT-PCR, the difference in Ct values between the two methods did not correlate with the Ct value obtained by either method (Fig 6C). However, a few samples (6 of 93) that were detected at Ct values between 28 and 39 by extraction RT-PCR were undetectable by direct RT-PCR, while other samples in that range were still detectable by the same laboratories (Fig 6C). For laboratory 9, direct RT-PCR yielded lower Ct values for both RP and N2 than extraction RT-PCR, with average differences of −0.3 cycles and −1.6 cycles, respectively (Table 3). The reason for this unexpected result is not clear, but it may have been related to the laboratory's observation that samples became "highly viscous" after the heating step (an observation not reported by any of the other participating laboratories). As with any method, the authors recommend internal validation of the approach prior to clinical implementation. For two laboratories, direct amplification of both N2 and RP was unsuccessful in all samples, including samples with low N2 Ct values (high viral loads) as measured by extraction RT-PCR. These samples were later determined to have been collected in transport media containing ingredients that are inhibitory to PCR, including charcoal and guanidine (e.g., ManTacc UTM and Jiangsu Rongye Technology LinkGen media were reported as incompatible). Information on the brand/type of viral transport media was not available for all samples used in Project C (this information is often not reported with swabbed samples as provided to analysis laboratories).
However, the following media were specifically identified as compatible with this method: Hardy viral transport media, saline, and phosphate-buffered saline. Overall, the direct approach worked effectively to detect samples deemed positive by standard RT-qPCR when samples were collected in media lacking charcoal or guanidine.

Discussion

This study confirms that the direct RT-qPCR method, initially described by Bruce et al. [10], has the potential to meaningfully contribute to global efforts to detect and contain the COVID-19 pandemic. This study provides evidence that the direct RT-qPCR method is an efficient, reliable, and achievable method for detection of SARS-CoV-2. Although the reproducibility of the method has been reported in single-laboratory studies previously, this study is the first to demonstrate that a globally diverse set of laboratories operating with different equipment, clinical sample collection and handling conditions, resource limitations, and operating practices can successfully implement the method. As described above, when centrally disseminated pooled samples were evaluated with a common Master Mix and primers/probes (Projects A and B), the Propagate partner laboratory results were >99.5% concordant (all negatives and all but one positive correctly identified, with strong agreement on Ct values). This result demonstrates the robustness of the methodology. Although all Propagate partner laboratories had prior experience with standard RNA extraction RT-PCR analysis of SARS-CoV-2 test samples, they were able to adopt and implement the direct method on Project A and B samples with only a minimum of instruction (a brief written protocol and a few minutes of discussion via web meeting), showing that the method is easily transferable. Due to halts and delays in air shipments, the partner laboratories in Malawi and Nigeria were unable to receive or analyze the Project A/B sample kits.
While this was unfortunate, it is emblematic of the challenges that the African continent (among others) continues to face in receiving needed laboratory supplies, and of the importance of resource-sensitive methods development efforts such as these. In Project C of this study, Propagate partner laboratories were encouraged to use their own extraction methods and locally collected samples with RT-qPCR reagents supplied by UWVL to compare results from the direct method versus standard extraction-based PCR. The majority of the participating partner laboratories were successfully able to apply the method and reliably detect RT-PCR-positive samples. The importance of "ground testing" new methods was made evident when some of the laboratories were unable to detect any signal (N2 or RP) following the direct method despite using high-titer positive samples as detected by standard methods. In some cases, laboratories experiencing this problem were working with samples collected in commercial viral transport media found to contain charcoal, or in inactivating media such as those containing guanidine. We hypothesize that these are inhibitory to the RT-PCR reaction in the absence of an extraction phase. Similar inhibitory outcomes have subsequently been identified by other laboratories [11,17]. In other cases, the constituents of the media were unknown, so we were not able to hypothesize why the direct method was incompatible. Although our data suggest that the inhibitory factor(s) were site specific, not patient specific, prior studies have also suggested that mucosal material can contribute to signal inhibition [18]. We recommend that laboratories seeking to employ the direct method for SARS-CoV-2 detection conduct a small pilot run (comparing results from direct and full PCR analyses on the same samples) to ensure that sample media are compatible with this method. This pilot should be replicated if/when sample collection methods or media are changed.
The success of this ring trial is of critical importance given the growing calls for COVID-19 screening as a containment strategy. The growing pandemic requires that we supplement definitive clinical testing with scalable screening strategies that generate efficient, reliable results that can readily inform public health action (e.g., quarantine and isolation) [2]. Non-PCR immunoassay antigen screening kits have decreased sensitivity compared to standard PCR but are widely utilized depending on the country's COVID-19 pandemic testing strategy [19]. As anticipated from previous studies, the direct method as applied to SARS-CoV-2 results in some loss of sensitivity compared to standard PCR. One primary explanation for this observation is that RNA extraction typically concentrates the RNA present in the clinical sample (by eluting the sample in a smaller volume). In addition, there is a low level of inhibition seen in clinical nasopharyngeal samples loaded directly into an RT-PCR reaction, and the sensitivity of the approach drops when more than 3 μl of patient sample is used [10]. However, this loss is of lower significance to the method's potential value as a public health screening tool. The direct method succeeds in all of the areas of greatest contemporary need: 1) it reliably detects samples with RNA levels correlating to the presence of live virus (and thus the most potential for infectivity), 2) it provides the potential to optimize throughput and reduce costs/logistics for SARS-CoV-2 testing, 3) it is a methodology with no commercial barriers or de novo equipment investment hurdles, and 4) it can be readily adopted by most current public health or clinical laboratories with experience handling infectious samples [14,15].
We believe that the direct RT-qPCR method for SARS-CoV-2 screening is ripe for adoption in laboratories seeking to reduce turnaround time for processing samples, experiencing challenges in accessing extraction reagents, seeking to decrease costs, and/or looking to reduce the use, handling, and disposal of chemicals in their laboratory. We do not propose this method as a substitute for samples requiring ultrasensitive detection. As with the adoption of any new method, appropriate validation must be conducted by the host laboratory. As standard RNA extraction reagents for PCR can cost $5-$6 USD per extraction and millions of these tests are performed each day around the world, the potential savings are significant. The utilization of this method could lead to greater testing coverage of individuals per dollar invested, or alternatively a larger number of examinations per individual, either of which would allow for the follow-up of suspected cases. The opportunity and feasibility described here are not simply theoretical. At the time of publication, several of the Propagate Network partner laboratories (Brazil, France, United States) are promoting or exploring the broad-scale adoption and implementation of this method for ongoing SARS-CoV-2 public health screening efforts in their regions [20]. Additionally, in October 2020, the Infectious Disease Diagnostic Laboratory at the Children's Hospital of Philadelphia implemented an extraction-free protocol for routine diagnostic testing of SARS-CoV-2 [21]. In the 3 months following implementation, >40,000 samples were tested using this workflow. The laboratory observed several critical advantages with this approach, including dramatically reduced extraction reagent costs and a halving of the average laboratory turnaround time, despite increasing test volumes. Further, the independence from specialized extraction reagents for routine testing alleviated pressure on supply chains to meet the increased demand.
These same positive impacts on testing efficiency are expected to apply to other laboratories that adopt the method. While no current testing or screening method is optimal for all situations, the direct method should be considered as a viable, fit-for-purpose resource to address the growing need for population monitoring during a challenging vaccination rollout and amidst the emergence of increasingly virulent strains of SARS-CoV-2. In addition to the valuable data described above, the global viral testing network established for this study exemplifies the feasibility and importance of establishing transparent and accessible engagements in the public health sciences. Following this study, the Propagate Network will continue to serve as a forum for scientific information exchange and collaboration in the face of future pandemics or health challenges.

Conclusions

The need for testing for SARS-CoV-2 continues and in many regions is increasing dramatically. This study provides multisite evidence that the direct RT-PCR method can be employed for the detection of SARS-CoV-2 viral RNA with the omission of the RNA extraction step and its associated extraction reagents. This effort represents a first step toward simplifying detection of SARS-CoV-2 viral RNA for the global research community by leveraging evidence-based guidance such as the results presented herein. Many options for detecting SARS-CoV-2 have emerged recently, such as antibody testing, saliva testing, and point-of-care testing, which taken together support the urgent need for actionable viral testing. This work lays the foundation for an adoptable method for future viral outbreaks.
Parental psychological distress in the postnatal period in Japan: a population-based analysis of a national cross-sectional survey

Mental health assessments of both members of a couple are important when considering the child-rearing environment. The prevalence and factors associated with both parents' psychological distress have not been fully investigated. A nationally representative sample from the 2016 Comprehensive Survey of Living Conditions in Japan was used to examine the prevalence of moderate and severe psychological distress in parents in the first year after childbirth. In total, 3,514 two-parent households raising children under one year old met the study criteria. The Japanese version of Kessler 6 was used to assess moderate and severe psychological distress. The prevalence of either or both parents experiencing psychological distress in the first year after birth was 15.1% and 3.4%, respectively. A multivariate logistic regression analysis showed that fathers working ≥ 55 h a week, reduced duration of sleep in mothers, age in months of the youngest child, and high household expenditures were significantly associated with both parents simultaneously having moderate or severe psychological distress. This study implied the importance of prevention and early detection of parental psychological distress in both parents. Assessing parents' psychological distress and work-style reform in the childcare period is an urgent issue to improve their mental health conditions.

www.nature.com/scientificreports/

Few studies have examined the prevalence and factors associated with individual psychological distress and simultaneous psychological distress in both partners in the postpartum period.
Therefore, the purpose of this study was to describe the prevalence of paternal and maternal psychological distress, both at the individual level and as simultaneous psychological distress in both partners, and to explore the factors associated with both parents experiencing simultaneous psychological distress, using a nationwide cross-sectional survey in Japan.

Results

Descriptive information for the participants is presented in Table 1. The mean ages of fathers and mothers were 33.9 years (SD = 6.0) and 32.1 years (SD = 5.1), respectively. The number of boys was 1,765 (50.2%) and the number of girls was 1,749 (49.8%). The numbers of households with one child and those with two or more children were 1,593 (45.3%) and 1,921 (54.7%), respectively. Households in which parents were the main caregivers of children during the daytime numbered 2,257 (67.4%). Of the fathers, 3,479 (99.3%) were employed and 872 (26.4%) reported working more than 55 h per week. Among the mothers, those who were employed and those who worked one hour or more per week numbered 1,545 (44.0%) and 653 (19.6%), respectively.

Table 1. Basic characteristics of households, parents, and children (n = 3,514). a Monthly household expenditure per person in May, 2016 (10,000 JPY). b Education was defined as low for graduation from 2-year college, vocational school, or less, and high for 4-year college or graduate school.

The prevalence of psychological distress at both the individual and couple level in the first year after childbirth is shown in Table 2. The Japanese version of the Kessler Psychological Distress Scale (K6) was used to assess moderate psychological distress (MPD), defined as a score of 9-12, and severe psychological distress (SPD), defined as a score ≥ 13. The prevalence of MPD or SPD (i.e. K6 ≥ 9) during the first year after delivery was 11.0% for fathers and 10.8% for mothers. The prevalence of SPD (i.e. K6 ≥ 13) was 3.7% for fathers and 3.5% for mothers.
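The K6 cut-offs used in the study are easy to encode. A small sketch (the function is ours, not from the paper; the 0-24 range follows from the six K6 items each being scored 0-4):

```python
def k6_category(score):
    """Classify a Kessler-6 total using the study's cut-offs:
    9-12 -> moderate psychological distress (MPD),
    >= 13 -> severe psychological distress (SPD)."""
    if not 0 <= score <= 24:
        raise ValueError("K6 totals range from 0 to 24")
    if score >= 13:
        return "SPD"
    if score >= 9:
        return "MPD"
    return "none"

print(k6_category(10), k6_category(14), k6_category(5))  # MPD SPD none
```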
The prevalence of households in which either parent was assessed at a K6 rating ≥ 9 (i.e. MPD or SPD) was 15.1%, and at ≥ 13 (i.e. SPD) was 5.2%, in the first year after delivery. The prevalence of households in which both parents were assessed at a K6 rating ≥ 9 was 3.4%, and at a K6 rating ≥ 13 was 0.4%. The associations and crude odds ratios (COR) between paternal and maternal MPD or SPD and between paternal and maternal SPD using chi-square analyses are presented in Supplementary Tables 1 and 2. Both analyses showed significant associations between paternal and maternal psychological distress, and the crude odds ratios were 4.84 (95% CI, 3.76-6.22) and 3.66 (95% CI, 2.04-6.58), respectively. The results of the univariate and multivariate analyses used to explore the associations with both parents experiencing psychological distress in the same period are shown in Table 3. In the univariate analysis, both parents being assessed as having MPD or SPD at the same time was significantly associated with the age in months of the youngest child, monthly household expenditure per person, and mothers or fathers who slept less than 6 h per night. In a sensitivity analysis, fathers who were assessed as having MPD or SPD were significantly more likely in the multivariate analysis (Supplementary Table 3) to have slept less than 6 h per night, to work 55 h or more per week, and to have a high monthly household expenditure per person. Mothers who were assessed as having MPD or SPD were significantly more likely in the multivariate analysis (Supplementary Table 4) to have husbands who smoked, to have lower alcohol consumption, to have a child aged 6-12 months, and to have slept less than 6 h per night.

Discussion

This is the first study to describe trends in the prevalence of psychological distress in the first year after delivery and their associated factors, both at the individual level and for simultaneous psychological distress in both partners.
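The crude odds ratios reported above are the standard cross-product ratio of a 2×2 table (paternal distress × maternal distress). A sketch of the formula only; the example counts are invented for illustration, since the study's actual tables are in its supplementary material:

```python
def crude_odds_ratio(a, b, c, d):
    """Cross-product odds ratio for a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

# Invented counts for illustration: among fathers with MPD/SPD, 60 of 360
# partners also had MPD/SPD; among fathers without, 120 of 3,100 did.
print(crude_odds_ratio(60, 300, 120, 2980))
```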
We identified factors associated with both parents having MPD or SPD at the same time during the first year postpartum using data from a nationwide population-based survey. Overall, we found that approximately 11% of fathers and mothers were at risk for either MPD or SPD in the first year after the birth of their child based on ratings on the K6. We also found that the proportions of households in which either parent or both parents were identified as having MPD or SPD were 15.1% and 3.4%, respectively. When one parent developed MPD or SPD, the odds of the partner developing MPD or SPD were 4.84 times higher, and of developing SPD were 3.66 times higher, compared with partners without distress. The response rate of the Comprehensive Survey of Living Conditions in Japan (CSLC) in 2016 was 77.5%. Most of the non-respondents to the CSLC were young and living in single households, especially those living in urban areas, when compared to the distribution of the National Census 25. Thus, the representativeness of the population of this study compared to the census population is high.

Table 2. Prevalence of moderate and severe psychological distress among parents by age in months in the first year after birth (n = 3,514). MPD (moderate psychological distress): the prevalence of a K6 score between 9 and 12 points. SPD (severe psychological distress): the prevalence of a K6 score ≥ 13 points. MPD or SPD: the prevalence of a K6 score ≥ 9 points. Columns give the age in months of the youngest child: overall period (0-12 months), 0-3 months, 3-6 months, 6-9 months, and 9-12 months.

Table 3. Crude and adjusted odds ratios between parental psychological distress (both parents scoring K6 ≥ 9; MPD or SPD) and potential risk factors such as socio-economic status, health behavior, and working situation among Japanese parents (n = 3,514). 2,773 couples were included in the multivariate analysis.
In comparison with the prevalence of paternal, maternal, and both parents' depression at nine months postpartum reported in a previous study conducted in the United States (7.4%, 11.5%, and 2.9%, respectively) 9, the prevalence of fathers, mothers, and both members of the couple having MPD or SPD at 9-12 months postpartum in this study was higher (11.2%, 11.5%, and 3.6%, respectively); the prevalence for mothers was almost the same 9. The prevalence of psychological distress for fathers in this study was higher than the prevalence of paternal depression reported in a previous meta-analysis 15, but the prevalence of psychological distress for mothers in this study was similar to the prevalence of maternal depression reported in a previous systematic review 16. We found that the prevalence of MPD or SPD in this period might differ according to the age in months of the youngest child, because the prevalence at 0-6 months postpartum for both parents and for mothers was lower than for those parents whose youngest child was 6-12 months old. To the best of our knowledge, this is also the first study using a national survey to explore the factors associated with both parents experiencing psychological distress in the same period in the first year after birth. Fathers who worked 55 h or more per week, mothers who slept less than 6 h per night, the age in months of the youngest child, and high household expenditure per person were associated with both parents having MPD or SPD at the same time. The negative effects of long work hours and working weekends on the mental health status of both parents have previously been reported [26][27][28]. Our results show that 26.4% of fathers worked 55 h or more per week in the first year after delivery and that long work hours among fathers may raise the possibility of psychological distress among fathers as well as mothers.
A number of systematic reviews have examined poor and interrupted sleep and its negative effects on mental health in the postnatal period [29][30][31], and parents who are raising young children need time for housework, childcare, and rest at home. In particular, time demands related to childcare, such as feeding, changing diapers, and getting children to sleep, increase during the initial years after birth. If the father works long hours, the primary responsibility for most of the housework and childcare falls upon the mother, who may not receive enough support from her partner. As a result, both parents may be exhausted by the father's long working hours. The promotion of work-life balance is an urgent issue for parents with young children to improve their quality of life and mental health. Although a significant association between higher monthly household expenditure per person and both parents experiencing psychological distress in the same time period was observed, we cannot conclude a causal relationship. The relationship between mental health problems and economic burdens has been extensively examined at the global, national, household, and individual levels [32][33][34]. In addition to increased healthcare costs for society, factors linking mental health problems to economic burden include decreases in earnings, increases in household expenditures, and the unpaid costs of informal caregivers for those with mental illnesses. In this study, we speculate that the increase in household expenditure is not the cause of psychological distress but its result. Further longitudinal studies are necessary to show the effect of psychological distress on household expenditure. Even when only one parent experiences poor mental health, the quality of child-rearing in the home deteriorates 3,7,9,35, so the adverse effect on the child's environment may be even greater if both parents are psychologically distressed.
From the perspective of child development and parental quality of life, an environment in which both parents experience psychological distress may be a critical situation that should be addressed as soon as possible. Unintentionally, parental psychological distress may affect the level of care given to children, even leading to neglect. Neglect during early childhood is known to contribute to poor outcomes in later life, such as antisocial behaviour in adolescent boys and depression in adults 36,37. Limitations. The current study has several limitations that should be considered. First, the K6 in the CSLC was assessed not through a structured interview but by a self-administered questionnaire; the scale, when originally developed in the United States 38 and then validated in Japan 39, was administered using a structured interview. Second, parents and children were identified using a parent/child identification variable included in the CSLC dataset; therefore, we cannot be entirely certain of the biological relationship between parents and children, as foster parents and other familial arrangements may have been included in the original data collection. Third, reverse causation should be considered because the CSLC employed a cross-sectional design; however, the negative effects of long work hours [26][27][28] and poor sleep [29][30][31] on mental health have been well established in previous research. In addition, a child's age cannot change suddenly in response to an event, because it increases continuously and at a regular interval. Fourth, the effects of sparse-data bias in the multivariate analysis should be considered because of the large number of covariates compared to the number of outcome events. However, there were no noticeable differences in the direction or magnitude of the odds ratios between the univariate and multivariate analyses.
Finally, the effects of psychological history, including mental health status and psychiatric consultation in the prenatal period and before the pregnancy, which is well known to be an important risk factor for postnatal mental health in both mothers and fathers 40,41, were not adjusted for in the multivariate analysis because the relevant variables were not available in the CSLC. Despite these limitations, this study, which used a nationwide population-based dataset and included couples, identified several important findings. conclusion The prevalence for either or both parents experiencing MPD or SPD in the first year after birth was 15.1% and 3.4%, respectively. Both parents reporting MPD or SPD at the same time was associated with fathers working 55 h or more per week, reduced hours of sleep, the age in months of the youngest child, and high household expenditures. To prevent parental psychological distress, mental health assessments during the postpartum period should be promoted among both mothers and fathers. In particular, the workplace may have an important role in the assessment and support of psychological distress among fathers, because there are few other ways to reach them during this period. Along with enhancing the quality of health services, improving the promotion of work-life balance is another urgent issue for parents with young children to promote parental mental health.
Scientific Reports (2020) 10:13770 | https://doi.org/10.1038/s41598-020-70727-2
Methods Study population. We analysed data from the Comprehensive Survey of Living Conditions in Japan (CSLC), which is a repeated national cross-sectional survey conducted by the Ministry of Health, Labour, and Welfare of Japan (MHLW). A summary of the CSLC is published each year 42. The CSLC applied a stratified random-sampling method based on enumeration districts from the annual census.
In the 2016 survey, 5,410 enumeration districts were selected randomly, and all of the members of the 289,470 households within the selected districts were recruited for participation. Individuals who were hospitalized, institutionalized, or on long-term business trips were excluded from the CSLC. Valid responses were collected from 224,208 households (response rate: 77.5%), comprising 568,426 members. The survey was implemented on June 2, 2016. The data are government survey data with limited accessibility. The inclusion criteria in this study were (1) being a couple who participated in the CSLC in 2016; (2) either or both parents being 65 years old or younger; and (3) being a couple who had a child < 13 months old. The exclusion criterion was having missing values on the K6 scores for either or both parents. The flow chart of data extraction is shown in Fig. 1. Out of the 224,208 households responding to the CSLC, we extracted 3,871 households with at least one child under the age of 12 months for analysis in this study. There were 38 households with two children under one year old; of these, 34 households had twins, and the remaining four had non-twin siblings. In the case of non-twin siblings, only data from the younger child were included. Out of the 3,871 households, 149 households that had either no father or no mother, and 208 households that did not have a K6 score for either or both parents, were excluded from this study. In total, 3,514 households, composed of 3,514 fathers and 3,514 mothers, met the above criteria from the original CSLC data set and were included in the final analysis. Measurement. The individual and household data were collected via a self-administered questionnaire as part of the CSLC. A data collector distributed and collected the questionnaires during visits to participants' homes.
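As a quick sanity check, the extraction flow described above can be reproduced with simple arithmetic; a minimal sketch, with all counts taken from the text:

```python
# Cohort flow of the 2016 CSLC extraction described above.
households_with_infant = 3_871   # at least one child under 12 months
no_father_or_mother = 149        # households lacking one parent
missing_k6 = 208                 # K6 missing for either or both parents

final_sample = households_with_infant - no_father_or_mother - missing_k6
print(final_sample)  # → 3514 couples (3,514 fathers and 3,514 mothers)
```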
Psychological distress was assessed using the Japanese version of the Kessler Psychological Distress Scale (K6) in the CSLC questionnaire. The K6 consists of six questions that examine the frequency during the last 30 days of the following items: (1) nervousness, (2) hopelessness, (3) restlessness or fidgetiness, (4) being so depressed that nothing could cheer you up, (5) feeling that everything is an effort, and (6) feeling worthless 38,39. The optimal cut-off scores of the Japanese version of the K6 have been examined. A K6 score ≥ 13 is often used to indicate severe or serious psychological distress [43][44][45][46]. The performance of the Japanese version of the K6 was examined using the areas under the receiver operating characteristic curves (AUCs) and stratum-specific likelihood ratios (SSLRs) 39. The AUC was excellent, with a high value of 0.94 (95% CI, 0.88-0.99), and the SSLRs for scores of 6-8 points, 9-13 points, and 14-24 points on the K6 were 4.9 (95% CI, 1.7-11.2), 16 (95% CI, 6.1-34.0), and 110 (95% CI, 11-400), respectively 39. A likelihood ratio greater than 10 is considered an informative criterion in the diagnostic process for a disease 47,48. These results show that a K6 score ≥ 9 is one of the optimal cut-off points, although some previous studies have adopted a K6 score ≥ 5 as a cut-off point to define moderate psychological distress 46,49,50. The same cut-off scores on the K6 are typically adopted for Japanese men and women. Therefore, in the current study, for both fathers and mothers, we used a score between 9 and 12 to indicate moderate psychological distress (MPD), and a score of 13 or greater to indicate severe psychological distress (SPD). Age in months of the youngest child was calculated using the child's birth year and month and the date of survey implementation, June 2, 2016; the exact date of birth was not surveyed in the CSLC.
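The cut-offs above can be expressed as a small scoring helper; a minimal sketch in Python (the function name and category labels are ours, not part of the CSLC):

```python
def classify_k6(score: int) -> str:
    """Classify a K6 total (0-24) using the cut-offs adopted in this study:
    9-12 -> moderate psychological distress (MPD),
    >=13 -> severe psychological distress (SPD)."""
    if not 0 <= score <= 24:
        raise ValueError("K6 totals range from 0 to 24")
    if score >= 13:
        return "SPD"
    if score >= 9:
        return "MPD"
    return "none"

print(classify_k6(10))  # → MPD
```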
For example, children born in March 2016 and in September 2015 were classified as two months old and eight months old, respectively. As an exception, a child born in either May or June 2016 was classified as being less than one month of age because of the proximity to the date of the CSLC. The age in months of the youngest child was divided into two periods: birth to 6 months, 0 days postpartum (0-6 months); and 6 months, 1 day to 12 months, 0 days postpartum (6-12 months). In addition, birth to 12 months, 0 days was defined as the overall study period (0-12 months). This division was chosen to allow comparison with results presented in the latest meta-analysis regarding paternal depression 15. Participants were asked about their monthly household expenditure per person and employment status in the past month. The number of work hours per week was reported for the period between 16 May and 22 May. The average number of hours slept per night in May 2016 was also assessed. Different categorizations of weekly work hours were used for fathers and mothers in this study. In Japan, work hours are defined in the Labor Standards Act as 40 h or less per week, and overtime of up to 15 h per week is allowed under a specific labor-management agreement pursuant to Article 36 of the Act. Therefore, working 55 or more hours per week was defined as inappropriately long work hours for fathers in this study. In contrast, most women utilize childcare leave in the first year after delivery; thus, weekly work hours for mothers were dichotomized based on whether they worked one hour or more per week. Data analysis. We calculated the means and frequencies of the socioeconomic status, health behaviours, and working environments in the household for both fathers and mothers. The proportions of households in which either one or both parents were assessed as having MPD and SPD were also reported.
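One plausible reading of the age-classification rule described above can be sketched as follows: since the birth day was not surveyed, the child is treated as one calendar month younger than the month difference from the survey date, floored at zero. This is an illustrative reconstruction, not the authors' code:

```python
def age_in_months(birth_year: int, birth_month: int) -> int:
    """Age classification relative to the CSLC survey date (2 June 2016)."""
    diff = (2016 - birth_year) * 12 + (6 - birth_month)
    return max(0, diff - 1)

print(age_in_months(2016, 3))  # → 2  (born March 2016)
print(age_in_months(2015, 9))  # → 8  (born September 2015)
print(age_in_months(2016, 5))  # → 0  (May/June 2016: under one month)
```

This single rule reproduces all the examples given in the text, including the May/June exception.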
The crude odds ratio (COR) with 95% CI for the association between paternal and maternal psychological distress (K6 ≥ 9 and K6 ≥ 13) was calculated using univariate logistic regression. The crude odds ratios with 95% CI for the associations with both parents scoring K6 ≥ 9 and K6 ≥ 13 were likewise calculated using univariate analysis. Parity, monthly household expenditure, the main caregiver of children during the daytime, children's characteristics (age in months and sex), paternal characteristics (age, education, health condition, smoking, drinking, number of hours slept, and number of work hours), and maternal characteristics (health condition, smoking, drinking, number of hours slept, and number of work hours) were analysed using univariate and multivariate logistic regression with complete-case analysis. Parental, paternal, and maternal psychological distress were set as the dependent variables. In the multivariate analysis, 2,773 couples without any missing variables were included. As a sensitivity analysis, the COR and AOR with 95% CI were calculated to examine the associations between these factors and paternal and maternal psychological distress separately. The alpha level was set at 5%. No imputation was performed for missing data in this study. All statistical analyses were performed with IBM SPSS Statistics version 19.0 (IBM, Armonk, NY). ethical considerations. The use of de-identified individual-level data from the CSLC for scientific research was approved by the MHLW through the official application procedure under Article 33 of the Statistics Act (March 1, 2018). Informed consent was waived in the CSLC because this fundamental statistical survey was conducted based on the Statistics Act. We also did not obtain consent in this study because we performed only a secondary analysis of the national statistics.
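A crude odds ratio with a Wald 95% CI, equivalent to what a univariate logistic regression with a single binary exposure produces, can be computed directly from a 2×2 table; a minimal sketch (the example counts are hypothetical, not CSLC data):

```python
import math

def crude_or(a, b, c, d):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

print(crude_or(10, 90, 5, 95))  # hypothetical counts
```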
This study was approved by the Japanese National Center for Child Health and Development ethics committee (No. 1533). This study was conducted in accordance with the Ethical Guidelines for Medical and Health Research Involving Human Subjects in Japan.
Enhancing the electrical readout of the spin-dependent recombination current in SiC JFETs for EDMR based magnetometry using a tandem (de-)modulation technique
Electrically detected magnetic resonance (EDMR) is a promising method to read out spins in miniaturized devices utilized as quantum magnetometers. However, the sensitivity has remained challenging. In this study, we present a tandem (de-)modulation technique based on a combination of magnetic field and radio frequency modulation. By enabling higher demodulation frequencies to avoid 1/f noise, enhancing self-calibration capabilities, and suppressing background signals by 3 orders of magnitude, this technique represents a significant advancement in the field of EDMR-based sensors. This approach paves the way for EDMR to become an ideal candidate for ultra-sensitive magnetometry at ambient conditions without any optical components, bringing it one step closer to a chip-based quantum sensor for future applications. A more straightforward approach is EDMR, relying on the spin-dependent recombination (SDR) process and operating exclusively in the electrical domain, without the need for indirect processes such as optical absorption and emission 16. While EDMR is traditionally performed at low magnetic fields to explore spin-dependent phenomena, recent efforts have extended its application to high magnetic field regimes, aiming to uncover new insights into material properties and spin dynamics 17. Though EDMR may not currently achieve the same level of sensitivity as ODMR using NV diamond, two potential approaches arise to improve its sensitivity: linewidth reduction and signal-to-noise ratio (SNR) enhancement. Linewidth reduction can be achieved through sample engineering, notably using isotopically purified samples 18. SNR can be improved through various methods, including above-bandgap excitation (UV illumination), controlled defect creation via radiation, common mode rejection (CMR), and different
biasing methods to utilize, e.g., the bipolar amplification effect (BAE) for EDMR [19][20][21][22][23][24][25]. In this paper, we introduce a novel tandem (de-)modulation technique, performing two modulations, magnetic field modulation (BM) and RF amplitude modulation (AM), while demodulating solely at a single frequency (the sum or difference of the modulation frequencies). In contrast to tandem demodulation in other fields 26, this technique does not need a second demodulation step with extra filtering and can easily be implemented within a virtual lock-in. Thus, this approach can be readily incorporated into existing EDMR setups and has the potential to be simply combined with established SNR enhancement techniques. This new measurement technique offers three significant advantages compared to the conventional magnetic field modulation-based approach:
1. Higher demodulation frequencies: the demodulation frequency is no longer constrained by the modulation coils, potentially enabling the reduction of 1/f noise until measurements are limited solely by shot noise.
2. Reduced self-calibration requirements: the calibration process, traditionally reliant on bias fields in the 10 mT range, may be accomplished with smaller fields. This technique is blind to the inherent spin-dependent recombination magnetoresistive response that arises at zero magnetic field due to energy-level mixing, mitigating the calibration challenges associated with its presence.
3. Separation of the demodulation frequency from the modulation frequency: this separation eliminates the background interference waveform arising from the modulation coil, which induces an offset current in the sensor that manifests itself as a magnetometer offset, resulting in an enhancement of the signal-to-background ratio (SBR) by a factor of 1000.
Basic concept
The novel tandem demodulation technique can be seamlessly integrated into a standard electrically detected magnetic resonance (EDMR) setup. We therefore now focus on the main components of the setup, as illustrated in Fig. 1a:
Magnetic field components: the magnetic field is generated by a set of three Helmholtz coil pairs, two serving for field offset control and one dedicated to modulation purposes.
Radio frequency (RF) components: the RF system comprises an RF source and an amplifier, essential for achieving the requisite B1 fields. In this study, two coils were utilized, both oriented perpendicular to the offset magnetic field. Depending on the nature of the measurement, either a 250 MHz coil was employed to enhance the signal-to-noise ratio (SNR), owing to its strong B1 field conversion, or a broadband antenna loop was utilized, offering a frequency range between 2 MHz and 250 MHz (2 MHz being the lower limit of the RF amplifier employed).
Silicon carbide (SiC) device for spin-dependent recombination: the heart of the system is a wire-bonded SiC junction field-effect transistor (JFET) device (see "Methods" section and Aichinger et al. 27 for sample details) with a positively biased gate. Due to the highly doped n-channel region, source and drain are shorted; thus, either of the two ports can be used for current measurement, with the other left floating.
In Fig.
1b, the IV characteristic of the JFET device is presented in grey, which is essentially the DC offset current during the EDMR measurements. For bias voltages between 2.1 and 2.5 V, a magnetic field dependent current (pink symbols) due to spin-dependent recombination (SDR) of the involved electrons emerges when the device is close to zero magnetic field (see 28). The effective total SDR current is defined as the integrated EDMR current divided by the full magnetic field range (see 16 for details). In order to reveal these small magnetic field dependent current contributions, we added a transimpedance amplifier (Stanford Research Systems SR570) to the biasing scheme and recorded its output with an ADC (National Instruments USB 6216). To enhance the SNR, we modulate the magnetic field strength (grey magnetic field coils) with one of the DAC outputs. Using a virtual lock-in amplifier enables the extraction of current contributions oscillating at the modulation frequency, thus revealing the magnetic field dependent contributions of the current. The best SNR is obtained when applying 2.3 V, which is a trade-off between a high resonant SDR current and a sufficiently low DC device current, the latter driving measurement noise. We now want to focus on the different types of EDMR spectra: at the top of Fig. 1c we plot in black the energy levels of the involved triplet and in grey the energy levels when a 29Si isotope is involved in the process (simulations conducted with EasySpin 29, assuming a triplet S = 1 with g ≈ 2, D = 0). As soon as a magnetic field is applied, the energy levels of the triplet are detuned. The EDMR signal versus the applied magnetic field is shown in Fig. 1c (black). Note that, due to magnetic field modulation, the observed EDMR spectrum is recorded as the first derivative of the EDMR response.
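The virtual lock-in mentioned above amounts to multiplying the digitized current by quadrature references at the modulation frequency and low-pass filtering; a minimal sketch using a plain mean as the low-pass stage (an illustration of the principle, not the authors' implementation):

```python
import numpy as np

def virtual_lockin(signal, fs, f_ref):
    """Return amplitude and phase of the f_ref component of `signal`
    (sampled at fs), averaging over the record as the low-pass stage."""
    t = np.arange(len(signal)) / fs
    x = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    y = 2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return np.hypot(x, y), np.arctan2(y, x)

# A 500 Hz test tone of amplitude 0.5 is recovered exactly
# (the 1 s record holds an integer number of modulation cycles):
fs = 100_000
t = np.arange(fs) / fs
amp, _ = virtual_lockin(0.5 * np.cos(2 * np.pi * 500 * t), fs, 500)
print(round(amp, 3))  # → 0.5
```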
By introducing RF radiation at a specific frequency (e.g., ν = 250 MHz), additional energy transitions are induced. The associated energy transitions resulting from spin flips are illustrated in Fig. 1c (BM EDMR, red curve). The central main peak, as well as the two RF-induced peaks (referred to here as resonant SDR peaks), have minute shoulders (marked with grey stars), attributed to hyperfine peaks of 29Si. A comprehensive analysis of BM EDMR spectra, and more detailed energy schemata, can be found in Cochrane et al. and Harmon et al. 16,17,28. The resonant SDR transition can be leveraged to self-calibrate an EDMR magnetometer; however, characterizing the transitions in the presence of the zero-field spin-dependent recombination (ZFSDR) requires a significant field bias to resolve the resonant SDR from the ZFSDR transitions. From an application perspective, large field biasing for near-zero DC magnetometry would significantly drive instrument size, weight and power (SWaP), parameters of utmost importance, on par with sensitivity, for space and defense applications. One goal of this work is to reduce this bias requirement for self-calibrated magnetometer operation. To provide an overview of all modulation techniques involved in this study, we proceed with amplitude modulation (AM) of the RF, revealing contributions exclusively associated with RF interactions. Due to the square-wave modulation of the RF amplitude and continuous magnetic field sweeping, we observe the absorption line-shape rather than the first-derivative line-shape characteristic of B0 modulation (center measurement shown in green in Fig. 1c). We show below that the first-derivative zero-crossing of the EDMR signal amplitude is indispensable to implement resonance locking 30, and thus AM zero-order responses are not conducive to easy magnetometer implementation. One alternative is frequency modulation (FM), as depicted in Fig.
1c (FM, yellow curve) 31. Like AM EDMR, FM EDMR avoids a signal contribution from ZFSDR at zero field, while now offering a derivative signal. The phase shift of π between the two resonant SDR peaks occurs due to the symmetric energy level splitting around B0 = 0, resulting in a mirrored spectrum when only the frequency modulation technique is used. While this mode can be used for a scalar measurement of magnetic fields, the signal does not contain any information on the magnetic field orientation. This is because detuning the RF energy from resonance, unlike modulating the magnetic field, does not shift the Zeeman energy levels (E = g·μ_Bohr·B) themselves. However, to determine the external field orientation, we have to probe the energy level shift in three directions, which is unfeasible with a diffuse RF field. To recover field directionality, magnetic field modulation has to be implemented, as demonstrated in 16. Therefore, we have developed the tandem modulation (TM) technique (displayed in cyan in Fig. 1c). TM EDMR combines the advantages of magnetic field modulation (BM) and RF modulation (AM or FM). TM corresponds to the product of BM and AM spectra but is achieved through straightforward demodulation at a single frequency. This results in a spectrum similar to the BM EDMR spectrum, but with the ZFSDR signal disabled (as can be seen by comparing the cyan and red spectra in subfigure c). This phenomenon arises because TM involves magnetic fields and RF simultaneously, making it sensitive only to transitions that require both. While in ODMR using NV centers a bias can result in many transitions being addressed simultaneously (see Schloss et al. 9), in EDMR we cannot utilize both the resonant SDR transition and the ZFSDR transition at the same time due to the requirement of zero vs.
nonzero magnetic field. Therefore, it is advantageous to bring the transitions as close as possible without causing them to overlap. While BM EDMR without RF enables only the ZFSDR transition (shown in black), TM EDMR now offers a new possibility to address exclusively the resonant SDR transitions. Consequently, we can selectively address either ZFSDR or resonant SDR for measurement and calibration purposes, which will be further discussed in the "Tandem modulation and demodulation" section of the manuscript. Note that each of the subfigures in Fig. 1c was recorded within a 90-second interval to compare the quality of the data of each modulation method. Any additional noise observed in the TM spectrum can be attributed to phase noise, stemming from the lack of synchronization between the two modulation frequencies. The observed current noise δI in the signal I is directly projected onto the magnetic field axis B using the observed slope ∂I/∂B of the resonant SDR signal to obtain the sensitivity δB = δI / ((∂I/∂B) √Δf). Here, Δf is the standard bandwidth of 1 Hz given by the chosen acquisition time t = 1 s. We estimated the sensitivity of each modulation technique accordingly. Please note that these are only proof-of-concept results to provide an overview of all the modulation techniques used in this paper. While BM performs twice as well as FM and three times as well as TM, we want to emphasize that TM will perform as well as FM when using two sideband frequencies (as discussed later in the "Tandem modulation and demodulation" section). Furthermore, the true benefits of TM will be realized when higher demodulation frequencies are employed and devices with stronger signals are explored, as TM will then enable access to the full dynamic range, as explained in the "Enhancement of the signal background ratio" section. For an additional discussion of the observed linewidth, see the "Methods" section.
EDMR with arbitrary RF
The measurements presented in Fig.
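The projection of current noise onto the field axis can be written as a one-line helper; a minimal sketch (the numerical values in the example are hypothetical placeholders, not measured data):

```python
def field_sensitivity(delta_i, slope, delta_f=1.0):
    """delta_B = delta_I / ((dI/dB) * sqrt(delta_f)),
    with delta_f = 1/t the measurement bandwidth in Hz."""
    return delta_i / (slope * delta_f ** 0.5)

# e.g. 1 pA of current noise on a 1 uA/mT slope in a 1 Hz bandwidth:
print(field_sensitivity(1e-12, 1e-6))  # ≈ 1e-6 mT/√Hz, i.e. 1 nT/√Hz
```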
1c were conducted with an RF frequency of ν = 250 MHz, corresponding to a bias field of 9 mT. However, this level of bias field may not be practical for miniaturized devices. To enable the use of smaller fields, we must employ lower RF frequencies. The key advantage of lower RF frequencies lies in the reduced separation between the two resonant SDR peaks, thereby lowering the required field strength for the offset coils. In Fig. 2a, we showcase EDMR spectra acquired using the broadband loop antenna, spanning RF frequencies from 2 to 250 MHz. Notably, due to the less intense B1 field generated by this antenna, the resonant EDMR peaks appear smaller in amplitude compared to the central ZFSDR peak. A critical challenge emerges when using resonant EDMR transitions at lower fields: the EDMR spectrum becomes primarily dominated by the ZFSDR transition, rendering the resonant SDR peaks impractical to utilize. As previously discussed, we can introduce RF modulation to selectively highlight the resonant SDR contributions. However, the application of AM (see Fig. 2b) yields a non-derivative behavior, making it unsuitable for simple resonance locking. Deviations from resonance would result in the same sign of the EDMR current, thereby preventing discrimination of the necessary field strength. Additionally, AM alone lacks the ability to function as a vector magnetometer, as the demodulated signal does not contain information about the field orientation.
Despite these disadvantages of AM, it enables an analysis of the broadband antenna's capabilities, demonstrating the feasibility of using lower frequencies despite the strong ZFSDR signal. ZFSDR only damps the signal around zero field, since this effect introduces leakage channels that reduce the EDMR current when employing the lock-in technique. This effect is also known in hole-burning spectroscopy, which has its roots in laser spectroscopy, and it has been observed for various other materials using optically detected magnetic resonance (ODMR) and electroluminescence detected magnetic resonance (ELDMR) with two frequencies [32][33][34]. The peak amplitudes are analyzed in Fig. 2c, d, pertaining to the BM and AM methods, respectively. As mentioned, the reduction in signal amplitude at lower frequencies can be attributed to the ZFSDR signal. In contrast, the damping observed at higher frequencies results from the increased impedance of the coil due to its inductance, X_L = 2πfL ∝ f. At higher frequencies, the coil becomes less efficient, leading to a reduced B1 field for the same RF power. Consequently, we employ frequencies around ν = 50 MHz for low magnetic field bias measurements, as they yield maximum performance. For higher frequencies, we continue to utilize the ν = 250 MHz coil due to its superior performance. Remarkably, we observe a discrepancy in peak amplitudes between the positive and negative resonant SDR peaks for both the BM and AM methods. The peaks at positive magnetic field values appear larger than their negative counterparts across all measurements within this study. We can exclude a hysteresis effect, since identical behavior at positive magnetic field values is observed when the magnetic field is swept from the opposite direction. The origin of this phenomenon remains unclear and demands further examination in future studies using different setups.
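The inductive roll-off of the B1 field can be illustrated with a simple series R-L model driven at constant voltage; a sketch under assumed, purely illustrative component values (L = 1 µH, R = 50 Ω), not measured parameters of the antenna:

```python
import math

def b1_relative(f_hz, L=1e-6, R=50.0):
    """Relative coil current (and hence B1) versus frequency for a
    series R-L drive: I ∝ 1/|Z| with |Z| = sqrt(R^2 + (2*pi*f*L)^2)."""
    return R / math.hypot(R, 2 * math.pi * f_hz * L)

# compare the low and high ends of the broadband antenna's range:
print(round(b1_relative(2e6), 3), round(b1_relative(250e6), 3))
```

With these example values the drive current, and thus B1, drops by more than an order of magnitude between 2 MHz and 250 MHz, qualitatively matching the damping described above.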
Tandem modulation and demodulation
Following the efficiency assessment of the broadband antenna, this section focuses on the evaluation of the proposed TM technique at both high (ν = 250 MHz) and low (ν = 50 MHz) RF frequencies. The underlying principle of this technique centers on leveraging the benefits of both BM and AM, similar to fundamental modulation techniques used in communication. In this approach, one modulation frequency serves as the carrier frequency, while the other modulates the carrier signal. The resulting signal manifests itself in two sidebands. For this study, we used f_B = 500 Hz as the magnetic field modulation frequency and f_RF = 5100 Hz as the amplitude modulation frequency of the RF signal. The AM modulation frequency was set an order of magnitude higher than the BM frequency to clearly differentiate them. Furthermore, a small offset of 100 Hz was chosen to avoid higher-harmonic interference (e.g., 10 × 500 Hz). The standard demodulation frequency employed is therefore the sum of both modulation frequencies, f_demod+ = 5600 Hz. Importantly, we explore various modulation and demodulation frequencies in the following section to demonstrate the technique's adaptability across different combinations. In Fig.
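The sideband structure of tandem modulation can be reproduced numerically: multiplying a 500 Hz field-modulation tone by a 5100 Hz square-wave RF gate places the spectral weight at 4600 Hz and 5600 Hz, with nothing at 5100 Hz itself. A minimal simulation sketch of an idealized, noise-free detector response (not measured data):

```python
import numpy as np

fs = 100_000                       # sample rate (Hz), 1 s of data
t = np.arange(fs) / fs
f_b, f_rf = 500.0, 5100.0

bm = np.cos(2 * np.pi * f_b * t)                        # field modulation
am = (np.cos(2 * np.pi * f_rf * t) > 0).astype(float)   # square-wave RF gate
current = bm * am                                       # product response

spec = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for f in (f_rf - f_b, f_rf, f_rf + f_b):
    print(f, spec[np.argmin(np.abs(freqs - f))])
# sidebands appear at 4600 Hz and 5600 Hz; the 5100 Hz bin stays empty
```

Demodulating at either sideband therefore recovers only the component that requires both modulations at once, which is exactly why the ZFSDR contribution drops out.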
3a, b, we present the results for the BM (red) and TM (cyan) techniques, respectively. Figure 3b notably showcases only the resonant SDR peaks, with the characteristic first-derivative behavior typical of standard BM. Notably, even the half-field transitions around 5 mT are discernible (indicated by the blue arrows) and show a phase difference of π. This might provide access to the population difference of the triplet m_s = −1 and m_s = +1 states, but is beyond the scope of this work. Comparing TM with BM clearly shows an overall signal amplitude reduction for the TM measurements, due to the lower effective RF power (resulting from square-wave modulation instead of constant RF power) and the signal's distribution between the two sidebands. Remarkably, the second sideband could be employed concurrently, theoretically leading to an SNR enhancement of √2 (see 35 for a comparable approach), since the random noise of N measurements can be averaged and therefore reduced by a factor of √N. Notably, taking this factor into account, TM would result in the same sensitivity as previously observed for FM. It is important to note that the two modulation frequencies of the TM measurements presented here were not synchronized, introducing some additional phase noise. Consequently, the in-phase and out-of-phase data were phase corrected after demodulation. The primary motivation behind introducing the tandem modulation and demodulation technique was to mitigate the substantial signal arising from the ZFSDR effect. In Fig.
3c and d, we conducted analogous measurements at lower frequencies, where the ZFSDR response dominates. Employing TM, we observe two distinct resonances without the interference of the substantial ZFSDR signal. The ZFSDR primarily affects the signal's amplitude, serving as a leakage channel for the spins. Despite the minor reduction in signal amplitude due to this effect, resonance transitions are now available for potential use in future magnetometer applications, particularly for coil calibration, as elaborated in Cochrane et al. [16].

Sideband configurations for TM

After demonstrating the TM technique for arbitrary RF frequencies, we now examine the possibilities offered by arbitrary modulation frequencies. As previously noted, TM generates two sidebands at frequencies |f_RF ± f_B|. In all measurements in this paper, we used f_B = 500 Hz and f_RF = 5.1 kHz unless otherwise stated. The frequency configuration is visually depicted in Fig. 4a. Note that the measured fast Fourier transform (FFT) is more intricate, encompassing higher harmonics and interference frequencies from the line voltage, and the demodulation peaks are too narrow to resolve on the scale of the entire frequency spectrum. A comprehensive FFT is presented later in Fig. 5; for the current analysis, a simplified illustration suffices. In Fig. 4b, we provide a comparison of both sidebands, plotted on top of each other to highlight the similarity of the results. At both f_demod^+ = 5.6 kHz and f_demod^− = 4.6 kHz, clear EDMR signals emerge from the two resonant SDR transitions. To underscore the versatility of the TM technique, we interchanged the frequencies of f_RF and f_B in Fig. 4c and d. The signal exhibits increased noise, attributed to the modulation coil's lack of optimization for frequencies above 1 kHz, which leads to a reduced effective modulation depth.
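Since the two sidebands at |f_RF ± f_B| follow directly from multiplying the two modulation waveforms, they can be reproduced numerically. The sketch below is a toy model, not the measurement chain; the sampling rate and unit amplitudes are arbitrary assumptions. It multiplies a 500 Hz and a 5100 Hz cosine and locates the two dominant FFT components:

```python
import numpy as np

# Parameters mirroring the text: f_B = 500 Hz (field modulation),
# f_RF = 5100 Hz (RF amplitude modulation). Multiplying the two modulation
# waveforms produces sidebands at |f_RF - f_B| = 4600 Hz and f_RF + f_B = 5600 Hz.
fs = 100_000                       # sampling rate in Hz (assumption)
t = np.arange(0, 1.0, 1.0 / fs)    # 1 s of data -> 1 Hz FFT bin spacing
f_B, f_RF = 500.0, 5100.0

signal = np.cos(2 * np.pi * f_B * t) * np.cos(2 * np.pi * f_RF * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

# The two strongest spectral components sit exactly at the sideband frequencies.
peak_idx = np.argsort(spectrum)[-2:]
peaks = sorted(float(f) for f in freqs[peak_idx])
print(peaks)  # -> [4600.0, 5600.0]
```

As expected from the product-to-sum identity cos(a)cos(b) = ½cos(a−b) + ½cos(a+b), all of the power ends up in the two sideband bins.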
The true advantage of employing arbitrary modulation and demodulation frequencies becomes apparent when both frequencies are shifted to higher values, as showcased in Fig. 4e and f for the 5 kHz regime. As a proof of concept, we can demodulate the signal at f_demod^− = 100 Hz, although this has no practical benefit due to the higher noise floor. The real benefit emerges in the higher sideband (f_demod^+ = 10.1 kHz): with TM we are able to demodulate the signal within a range where the magnetic field coil becomes highly inefficient. Thus, TM empowers devices to operate far beyond the conventional modulation coil's operational range. In principle, this approach can entirely circumvent 1/f noise, making the measurement predominantly limited by shot noise. The maximum demodulation frequency is constrained by f_B^max + f_RF^max, which is limited by the inductance of the coil (or the RLC circuit) and the bandwidth limitations of the RF antenna.

Enhancement of the signal-to-background ratio

The TM technique not only shifts the signal to a less noisy frequency regime but also effectively separates it (in frequency) from the strong modulation-driven background current induced in the sensor by electromagnetic pickup of the electrical circuit. In Fig. 5a, we present the FFT of a typical BM EDMR measurement before demodulation. The upper panel exhibits a color map representing the FFT during a magnetic field sweep, while the lower panel of Fig. 5a displays cross-sections on resonance (red) and off resonance (grey). Conventional BM measurements typically exhibit the characteristic 1/f noise reduction at higher frequencies. However, despite the high SNR, the signal-to-background ratio (SBR) remains significantly smaller than the SNR. In Fig. 5b, we provide a zoomed-in view at the modulation frequency, where the SNR relative to the noise floor reaches approximately 10^5, but the SBR right at the demodulation frequency remains less than 2.
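To illustrate how demodulation at a sideband frequency works in practice, the following sketch implements a toy software lock-in: it mixes a simulated tandem-modulated current with a reference at f_demod^+ = f_RF + f_B and averages. The resonance amplitude and noise level are hypothetical, using the paper's standard f_B = 500 Hz and f_RF = 5.1 kHz; a real lock-in would also need phase handling, which this sketch sidesteps via the two-quadrature magnitude.

```python
import numpy as np

fs = 100_000
t = np.arange(0, 1.0, 1.0 / fs)
f_B, f_RF = 500.0, 5100.0
f_demod = f_RF + f_B              # upper sideband, 5600 Hz

# Toy sensor current: tandem-modulated resonance term plus broadband noise
# (amplitude and noise level are illustrative assumptions).
rng = np.random.default_rng(1)
amplitude = 0.02
current = (amplitude * np.cos(2 * np.pi * f_B * t) * np.cos(2 * np.pi * f_RF * t)
           + rng.normal(0.0, 0.1, t.size))

# Software lock-in: mix with the reference at f_demod and low-pass by averaging.
i_phase = np.mean(current * 2 * np.cos(2 * np.pi * f_demod * t))
q_phase = np.mean(current * 2 * np.sin(2 * np.pi * f_demod * t))
recovered = np.hypot(i_phase, q_phase)
print(recovered)   # close to amplitude / 2 = 0.01 (half the power per sideband)
```

The recovered value is half the resonance amplitude, consistent with the text's observation that the signal is distributed between the two sidebands.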
This background, frequently observed in lock-in measurements, becomes especially critical when attempting to use zero crossings for sensor applications. The unwanted background not only compromises sensor accuracy but also limits sensitivity, as the digitizer's dynamic range is largely consumed by this background. Without the background, the dynamic range can be matched to the noise floor, enabling the detection of smaller external fields. Shielding can help mitigate this background, since it is primarily generated by currents induced by the modulation coils in the sensor probe wires. With the TM technique, however, we can decouple the demodulation from the modulation frequencies, as illustrated in Fig. 5c. The modulation frequencies create a distinct, strong peak in the cross-section. This time, however, the resonant SDR peaks emerge from the noise floor when measured via the sidebands. A closer examination is provided in Fig. 5d. Even in the color map, we observe EDMR resonances as strong peaks. The cross-section of the zoomed-in region reveals a large SBR of approximately 2000, roughly three orders of magnitude stronger than the peak seen in Fig. 5b. It is important to note that the amplitudes of these peaks are still two orders of magnitude smaller than those created by BM, and three orders of magnitude smaller than the f_RF peak. Nevertheless, no discernible background is observed above the noise floor with this technique.
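The SBR argument can be made concrete with a toy spectrum: a large pickup component at f_B swamps the bin used for conventional BM demodulation, while the sideband bin used by TM contains only the resonance term. All amplitudes below are illustrative assumptions, not measured values, and `amp` is a helper defined only for this sketch.

```python
import numpy as np

fs = 100_000
t = np.arange(0, 1.0, 1.0 / fs)
f_B, f_RF = 500.0, 5100.0

# Toy detector trace: small tandem-modulated resonance signal plus a large
# background picked up directly from the field-modulation coil at f_B.
signal = 0.01 * np.cos(2 * np.pi * f_B * t) * np.cos(2 * np.pi * f_RF * t)
pickup = 10.0 * np.cos(2 * np.pi * f_B * t)
spectrum = np.abs(np.fft.rfft(signal + pickup)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

def amp(f):
    """Amplitude at the single FFT bin closest to frequency f."""
    return spectrum[np.searchsorted(freqs, f)]

# At f_B the spectrum is dominated by the pickup; at the sideband it is
# background-free and contains only the resonance term.
print(amp(f_B))          # pickup-dominated, about 5.0
print(amp(f_RF + f_B))   # resonance only, about 0.0025
```

Demodulating at f_B would sit on top of the large pickup component, while demodulating at the sideband sees only the resonance, mirroring the SBR improvement reported for Fig. 5d.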
Incorporating a filter that blocks the modulation peak at f_RF = 5.1 kHz would allow the dynamic range to be adjusted solely to the resonance signal, enhancing sensitivity by up to three orders of magnitude, assuming the current sensitivity is limited by the large background signal. This capability offers opportunities for enhanced signal detection in applications where background interference has traditionally posed a challenge. As a side note, TM can be realized not only by using the direct sideband frequencies (as depicted in subfigure (d)) but also through their higher harmonics. In subfigure (c), multiple higher harmonics are visible, including those of the sidebands as well as those of the magnetic field modulation (see the 1 kHz peak). Spectra obtained from these demodulation frequencies offer additional benefits, though they are not as straightforward to use, as detailed in Cochrane et al. [16].

Zero field application and vector magnetometer mode

The tandem modulation (TM) technique presented here primarily targets the resonant SDR transitions, which are crucial for the remote self-calibration of the solid-state magnetometer under development. In our final exploration, we examine whether this technique, designed for background avoidance, can also be applied at zero magnetic field, thus avoiding the zero-field mixing signal completely. Instead of relying on the ZFSDR to sense near-zero magnetic fields, we deliberately overlap the two resonant transitions, resulting in a new "resonant" slope centered at zero magnetic field. To achieve this, we select a low RF frequency (ν = 10 MHz), as depicted in Fig. 6a. A closer look at the two overlapping first-derivative peaks is presented in Fig.
6b. Due to the overlap, we create a new magnetic-field-dependent zero crossing manifesting as a linear slope at zero field, with no offset in the measurement. When the spin system resides between both resonant transitions, the EDMR response remains at zero. Any detuning caused by an external field induces a positive or negative TM EDMR current, depending on the field's direction. Notably, the field required for calibration is now reduced by a factor of 25, enabling self-calibrating EDMR devices with much smaller coils. This concept can potentially be extended to a vector mode by employing multiple pairs of modulation coils, one pair per orientation (see illustration in Fig. 6c, based on Cochrane et al. [16]). Each orientation can be modulated with a distinct f_B, as previously described. A common RF frequency of ν = 10 MHz can be used for all orientations simultaneously. By employing a high f_RF, exceeding the 1/f noise regime, all f_B frequencies are shifted into this regime, with sidebands symmetrically mirrored around f_RF. Demodulating each of these frequencies provides access to all three orientations of an external field. As long as the external field remains within the linear regime (i.e., below the modulation depths), the EDMR current of each demodulation is directly proportional to the field's strength in that particular orientation. For external fields that exceed this linear range, compensation fields can be applied, as elaborated in Cochrane et al. [16]. The originally proposed self-calibration process can be executed by introducing an additional current to the coil to align it with one of the resonant SDR peaks (e.g., 0.36 mT for ν = 10 MHz).
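A minimal numerical sketch of the proposed vector mode: each axis is tagged with its own f_B, all axes share one f_RF, and demodulating each upper sideband returns the field component of that axis. The field values and per-axis modulation frequencies below are hypothetical, and the sketch assumes synchronized, in-phase references, which the text notes was not the case in the actual experiment.

```python
import numpy as np

fs = 100_000
t = np.arange(0, 1.0, 1.0 / fs)
f_RF = 5100.0
axis_f_B = {"x": 300.0, "y": 500.0, "z": 700.0}   # distinct per-axis modulation
fields = {"x": 0.2, "y": -0.5, "z": 0.8}          # hypothetical field components

# Within the linear regime, the EDMR current of each axis is proportional to
# the field along that axis; all three axes share the common RF carrier.
current = sum(fields[a] * np.cos(2 * np.pi * axis_f_B[a] * t) for a in fields)
current = current * np.cos(2 * np.pi * f_RF * t)

recovered = {}
for a, fB in axis_f_B.items():
    ref = 4 * np.cos(2 * np.pi * (f_RF + fB) * t)  # demodulate the upper sideband
    recovered[a] = np.mean(current * ref)
print(recovered)   # each value matches the field component of that axis
```

Because each axis occupies its own sideband frequency, the three demodulations are orthogonal and recover the signed field components independently.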
Conclusion

In this work, we introduced a novel tandem demodulation technique for EDMR that addresses critical challenges in enhancing the sensitivity of miniaturized quantum devices, particularly magnetometers. Our approach enables the use of higher demodulation frequencies, effectively mitigating the influence of 1/f noise. Furthermore, it allows for arbitrary bias fields during calibration, even within the main peak of the EDMR spectrum. Most importantly, this technique eliminates background interference, enabling the full utilization of the sensor's dynamic range. This promises advancements not only in EDMR but also in related measurement techniques such as EPR, ODMR, and ELDMR. In addition to applying tandem (de-)modulation to

Figure 1. EDMR overview: (a) Setup configuration of the different magnetic field coils. The offset coils (black) are in line with the magnetic field modulation coils (grey). The two RF coils, broadband (orange) and 250 MHz specific (dark orange), are perpendicular. (b) JFET IV characteristics and sample photo (with 1 mm scale bar). The JFET is wire-bonded to apply a forward bias, which leads to the standard characteristics (grey). Between 2.1 and 2.5 V a zero-field spin-dependent current (ZFSDR), displayed in pink, is observed. (c) EDMR measurements using different modulations: magnetic field modulation (BM), RF amplitude modulation (AM), frequency modulation (FM) and the novel tandem modulation (TM). The top subfigure shows the calculated energy levels (in MHz) of the involved triplet (28Si in black, 29Si isotope in grey). The other subfigures present the various modulation techniques BM (red), AM (green), FM (yellow) and TM (cyan). Observable hyperfine peaks are marked with grey stars. All four measurements show the 250 MHz resonance peak, and even the spin-forbidden half-field transition (light grey) is visible for BM, AM and TM.

Figure 2.
Low frequency EDMR with broadband antenna loop: (a) BM EDMR with RF between ν = 2 MHz and ν = 250 MHz. The signal is dominated by the standard ZFSDR signal. (b) AM EDMR with different frequencies between ν = 2 MHz (light green) and ν = 250 MHz (dark green) to reveal only the RF-involved transitions. The peaks are damped by the ZFSDR leakage (pink) and the high impedance of the loop antenna due to its high inductance (black). (c) and (d) Peak amplitude of BM and AM, respectively. The broadband antenna is most efficient at ν = 50 MHz, which is used for the low-frequency proof-of-concept measurements.

Figure 3. BM vs TM EDMR: (a) Standard BM (red) EDMR for ν = 250 MHz. (b) Novel TM (cyan) EDMR, which avoids the strong ZFSDR signal peak. Interestingly, the half-field transitions (cyan arrows) show a phase flip of π compared to the main resonant SDR peak transitions, indicating different populations of the m_s = +1 and m_s = −1 states. (c) BM EDMR with ν = 50 MHz. (d) TM EDMR with 50 MHz. The novel TM EDMR approach is also applicable to small fields (frequencies), where the signal is usually dominated by ZFSDR. Multiple hyperfine peaks can be observed (cyan arrows), which are less suppressed than the resonant SDR peak located within the ZFSDR signal.

Figure 4. TM EDMR with different (de-)modulation frequencies: (a) FFT illustration of the standard TM EDMR of this paper. (b) TM EDMR spectra with higher (cyan) and lower (blue) demodulation frequency. The two spectra are plotted on top of each other to highlight the similarities of both sidebands. (c) and (d) TM EDMR with high BM and low AM. (e) and (f) With high BM and high AM, leading to a low sideband (blue) and a very high sideband (cyan). This approach is especially attractive since it can bring the demodulation frequency far above the operation regime of magnetic field coils, towards the shot-noise limit.

Figure 5.
FFTs during EDMR (before demodulation): (a) FFT of BM EDMR (red). The color map represents the absolute value of the FFT during BM with f_B = 5.1 kHz, with cross-sections in blue (on resonance) and grey (off resonance = noise floor). (b) Zoom of BM EDMR revealing a large background signal at the demodulation frequency. (c) FFT of TM EDMR (cyan). TM with f_B = 500 Hz and f_RF = 5.1 kHz, leading to a sideband in resonance. (d) Zoom of TM EDMR revealing the resonance peak far above the background noise.

Figure 6. TM EDMR for vector magnetometry: (a) and (b) Low-frequency TM EDMR with ν = 10 MHz. An overlap of the two resonant SDR peaks leads to a new B = 0 resonance condition, which can be employed for magnetometry. (c) Illustration of an extension to three dimensions. f_B^x, f_B^y and f_B^z have distinguishable values, but their maximum frequency is limited by the performance of the modulation coils. Using a common f_RF leads to high-frequency resonant transitions in the shot-noise regime, distinguishable for orientation information.
Every Testable (Infinite) Property of Bounded-Degree Graphs Contains an Infinite Hyperfinite Subproperty

One of the most fundamental questions in graph property testing is to characterize the combinatorial structure of properties that are testable with a constant number of queries. We work towards an answer to this question for the bounded-degree graph model introduced in [Goldreich, Ron, 2002], where the input graphs have maximum degree bounded by a constant $d$. In this model, it is known (among other results) that every \emph{hyperfinite} property is constant-query testable [Newman, Sohler, 2013], where, informally, a graph property is hyperfinite if for every $\delta>0$ every graph in the property can be partitioned into small connected components by removing $\delta n$ edges. In this paper we show that hyperfiniteness plays a role in \emph{every} testable property, i.e., we show that every testable property is either finite (which trivially implies hyperfiniteness and testability) or contains an infinite hyperfinite subproperty. A simple consequence of our result is that no infinite graph property that consists only of expander graphs is constant-query testable. Based on the above findings, one could ask whether every infinite testable non-hyperfinite property might contain an infinite family of expander (or near-expander) graphs. We show that this is not true. Motivated by our counter-example, we develop a theorem showing that the vertex set of every bounded-degree graph can be partitioned into a constant number of subsets and a separator set, such that the separator set is small and the distribution of $k$-disks on every subset of a partition class is roughly the same as that of the partition class, if the subset has small expansion.

Introduction

Understanding the structure of very large graphs like social networks or the webgraph is a challenging task. Given the size of these networks, it is often hopeless to compute structural information exactly.
A feasible approach is to design random sampling algorithms that inspect only a small portion of the graph and derive conclusions about the structure of the whole graph from this random sample. However, there are different ways to sample from graphs (random induced subgraphs, random sets of edges, random walks, random BFS, etc.) and also many structural graph properties. This raises the question which sampling approaches (if any) are suitable to detect or approximate which structural properties. Graph property testing provides a formal algorithmic framework that allows us to study the above setting from a complexity-theoretic point of view. In this framework, given oracle access to an input graph, our goal is to distinguish between the case that the graph satisfies some property and the case that it is "far from" having the property, by randomly sampling from the graph. Here, a graph property denotes a set of graphs that is invariant under graph isomorphism. Both the oracle access and the notion "far from" depend on the representation of the graph. Several models have been proposed in the past two decades for dealing with different types of graphs (see the recent book [Gol17]). For dense graphs, Goldreich et al. [GGR98] introduced the adjacency matrix model, in which the algorithm can perform any vertex-pair query to the oracle. That is, given an input vertex pair u, v, the oracle returns 1 if there is an edge between u and v, and 0 otherwise. A graph is called ε-far from having a property Π if one has to modify more than εn^2 edges to make it satisfy Π, for any small constant ε. Since the model was introduced, many properties Π have been found to be testable in the sense that there exists an algorithm, called a tester, that can distinguish whether a graph satisfies Π or is ε-far from having Π while making only a constant number of queries. The research in this model has culminated in the seminal work by Alon et al.
[AFNS09], who gave a full characterization of constant-query testable properties via the regularity lemma. Our understanding of property testing for sparse graphs (e.g., bounded-degree graphs) is much more limited. Goldreich and Ron [GR02] initiated the study of property testing for bounded-degree graphs in the adjacency list model. A graph G is called a d-bounded graph if its maximum degree is at most d, which is assumed to be a constant. The property tester for a d-bounded graph is given oracle access to the adjacency list of the graph; that is, given an input (u, i) with i ≤ d, the oracle returns the i-th neighbor of u if such a neighbor exists, and a special symbol otherwise. A d-bounded graph is said to be ε-far from having the property Π if one needs to modify more than εdn edges to obtain a graph that satisfies Π. In this model, several properties are known to be testable with a constant number of queries (see the discussion below). There also exist a number of properties that require Õ(√n) or Õ(n^{1/2+c}) queries, including bipartiteness [GR99], expansion [GR00, CS10, NS10, KS11], k-clusterability [CPS15] and one-sided-error minor-freeness [CGR+14, FLVW18, KSS18]. For the property of being 3-colorable there is a known Ω(n) lower bound on the number of queries needed to test the property [BOT02]. One of the most important questions in this area is to give a purely combinatorial characterization of the graph properties that are testable with a constant number of queries. Goldreich and Ron were the first to show that a number of fundamental graph properties, including connectivity, k-edge-connectivity, subgraph-freeness, cycle-freeness, being Eulerian, and degree regularity, can be tested with a constant number of queries in bounded-degree graphs [GR02]. A number of properties with small separators are now known to be testable with a constant number of queries, such as minor-closed properties [BSS10, HKNO09] and hyperfinite properties [NS13].
In particular, the latter work proves that every property is constant-query testable in hyperfinite graphs. There are also constant-query testable properties that are closed under edge insertions, including k-vertex-connectivity [YI12], perfect matching [YYI12], sparsity matroids [ITY12] and the supermodular-cut condition [TY15]. Furthermore, there exist global monotone properties that contain expander graphs and can be tested with a constant number of queries, including the property of being subdivision-free [KY13]. There is also some work on testable properties in special classes of bounded-degree graphs. For example, it is known that every hereditary property is testable with a constant number of queries in non-expanding d-bounded graphs [CSS09]. A property called the δ-robust spectral property is constant-query testable in the class of high-girth graphs [CKSV18]. However, very little is known about the characteristics of all testable properties in general.

Our Results

Although many properties are known to be constant-query testable in bounded-degree graphs, our knowledge of the characteristics of all testable properties is fairly restricted. One prominent example of testable properties is the family of hyperfinite properties [NS13], which includes planar graphs and graphs that exclude any fixed minor (see e.g., [BSS10, HKNO09]). For the statement of our results and the discussion of techniques, we state the definition of hyperfinite graphs here.

Definition 1.1. Let ε ∈ (0, 1] and k ≥ 1. A graph G with maximum degree bounded by d is called (ε, k)-hyperfinite if one can remove at most εd|V(G)| edges from G so that each connected component of the resulting graph has at most k vertices. For a function ρ : (0, 1] → N, a graph G is called ρ-hyperfinite if G is (ε, ρ(ε))-hyperfinite for every ε ∈ (0, 1].

Also, many testable properties are known that are not hyperfinite.
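Definition 1.1 can be illustrated on the simplest example, the n-vertex path (maximum degree d = 2): cutting every k-th edge removes about n/k ≤ εdn edges and leaves components of at most k vertices, so the path is (1/(2k), k)-hyperfinite for every k. A small self-check of this, with the union-find component bookkeeping being an implementation choice of the sketch:

```python
from collections import Counter

# The n-vertex path graph with degree bound d = 2.
n, k, d = 1000, 10, 2
edges = [(i, i + 1) for i in range(n - 1)]

# Cut the edges (k-1, k), (2k-1, 2k), ...: about n/k removals.
kept = [(u, v) for (u, v) in edges if v % k != 0]
removed = len(edges) - len(kept)

# Component sizes via union-find over the kept edges.
parent = list(range(n))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x
for u, v in kept:
    parent[find(u)] = find(v)
sizes = Counter(find(v) for v in range(n)).values()

eps = 1 / (2 * k)
print(removed <= eps * d * n, max(sizes) <= k)   # -> True True
```

Both conditions of Definition 1.1 hold: at most εd|V(G)| edges were removed and every remaining component has at most k vertices.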
Our main result is that, nevertheless, for infinite properties the existence of an infinite set of hyperfinite graphs in the property is a necessary condition for constant-query testability (finite properties are trivially hyperfinite). Since some of these testable properties, e.g., subdivision-freeness, contain expander graphs, a hyperfinite subproperty might seem somewhat surprising. (A subproperty of a property Π is a subset of graphs in Π that is also invariant under graph isomorphism.) Indeed, the complement of every non-trivially constant-query testable property also contains hyperfinite graphs, where a property is non-trivially testable if it is testable and there exists an ε > 0 such that there is an infinite number of graphs that are ε-far from Π.

Theorem 1.2. Every constant-query testable property Π of bounded-degree graphs is either finite or contains an infinite hyperfinite subproperty. Also, the complement of every non-trivially constant-query testable graph property contains an infinite hyperfinite subproperty.

To the best of our knowledge, our theorem gives the first non-trivial result on the combinatorial structure of every constant-query testable property of bounded-degree graphs. A direct corollary of our main result is that expansion and the k-clusterability property are not constant-query testable, as any hyperfinite graph has many small subsets with small expansion and thus does not satisfy these properties. Indeed, a much stronger lower bound of Ω(√n) on the query complexity of testing these two properties was already known prior to this work [GR00]. However, our result further implies that every infinite intersection of a family of expander graphs with any other property is also not testable.

Corollary 1.3. Let Π be a property that does not contain an infinite hyperfinite subproperty, and let Π′ be an arbitrary property such that Π ∩ Π′ is an infinite set. Then, Π ∩ Π′ is not testable.
Note that in general, the intersection of a property that is not constant-query testable with another property may be testable. For example, the property of being planar and bipartite is testable, since it is a hyperfinite property [NS13], while bipartiteness itself is not constant-query testable [GR02]. We then study the question whether a similar result can be obtained for expander or near-expander subproperties of testable non-hyperfinite properties. Expander graphs are those that are well connected everywhere, and can thus be thought of as anti-hyperfinite graphs. Indeed, many known testable but non-hyperfinite properties do contain infinite expander subproperties; typical examples include k-connectivity, subgraph-freeness and subdivision-freeness. However, this turns out not to be the case in general. We show that there exists a testable property that is not hyperfinite and such that every graph in the property has distance Ω(n) to being an expander graph: the property consists of all graphs that have a connected component on ⌈|V|/2⌉ vertices while all other vertices are isolated.

Theorem 1.4. There exists an infinite graph property Π of bounded-degree graphs such that
• Π is testable,
• Π is not hyperfinite,
• every graph in Π differs in Ω(n) edges from every connected graph.

Motivated by the above result, we also obtain a theorem (Theorem 5.1) showing that the vertex set of every bounded-degree graph can be partitioned into a constant number of subsets and a separator set, such that the separator set is small and the distribution of k-discs on every subset of a partition class is roughly the same as that of the partition class, if the subset has small expansion.

Our Techniques

It is well known that constant-time property testing in the bounded-degree graph model is closely connected to the distribution of k-disc isomorphism types (see, for example, [BSS10, NS13]). The k-disc of v ∈ V is the rooted subgraph that is induced by all vertices at distance at most k from v, with root v, i.e.
the local subgraph that can be explored by running a BFS up to depth k. Thus, the distribution of k-disc isomorphism types describes the local structure of the graph. We then show (in Theorem 3.2) that every constant-query property tester can be turned into a canonical tester that is based on approximating the k-disc distribution and that decides based on a net over the space of all distribution vectors. Technically, our proof of this result mostly follows an earlier construction of canonical testers introduced in [GR11] (see also [CPS16, MMPS17]). We then exploit a result by Alon [Lov12, Proposition 19.10] that is derived from open questions in graph limits theory. Alon proved that for every bounded-degree graph G, there exists a graph H of constant size whose k-disc distribution can be made arbitrarily close (in ℓ1-norm distance) to the k-disc distribution of G. Given a graph G on n vertices from some constant-query testable property Π, we can use multiple copies of H to obtain a graph that consists of connected components of constant size and whose distribution of k-discs is close to that of G. The latter implies that a canonical tester will behave similarly on H and G, and thus accepts with probability at least 2/3. Although H does not necessarily have the tested property, it must be close to it. This implies that there exists a graph H′ in Π from which we can remove εdn edges to partition it into small connected components. Thus, H′ is (ε, O_ε(1))-hyperfinite, where O_ε(1) is a constant depending on ε. However, H′ may not be (ε′, O_{ε′}(1))-hyperfinite for ε′ < ε. The challenge is how to construct such a graph. To do so, we proceed as follows. For every suitable choice of n, we construct a series of n-vertex graphs H_i such that each H_i approximately inherits the (ε, O_ε(1))-hyperfiniteness properties of all graphs H_{i′} for i′ < i.
The key idea is to maintain the hyperfiniteness properties of H_i while causing only a small perturbation of its k-disc vector. Carefully choosing the parameters of this process, at the end we obtain a graph H^(n) that is (ε, ρ(ε))-hyperfinite for a monotone function ρ(·) and every ε > 0. In order to show that we cannot obtain a similar result for expander graphs in non-hyperfinite properties, we designed the aforementioned property of graphs which consist of a connected component on half of the vertices while all other vertices are isolated. Our proof of testability combines earlier ideas for testing connectivity with a simple sampling-based estimation of the number of isolated vertices.

Other Related Work

Goldreich and Ron [GR11] gave characterizations of the graph properties that have constant-query proximity-oblivious testers for bounded-degree graphs and for dense graphs. As noted in [GR11], this class of properties is a rather restricted subset of the class of all constant-query testable properties. Hyperfiniteness is also closely related to graphings, which have been investigated in the theory of graph limits [Ele07, Sch08, Lov12].

Preliminaries

Let G = (V, E) be a graph with maximum degree bounded by d, which is assumed to be a constant. We also call G a d-bounded graph.

Definition 2.1. A graph property Π is a set of graphs that is invariant under graph isomorphism. If all the graphs in Π have maximum degree upper bounded by d, then we call Π a d-bounded graph property.

We let Π_n ⊆ Π denote the set of graphs in Π with n vertices. Note that Π = ∪_{n≥1} Π_n. Let Π̄ denote the complement of Π, i.e., Π̄ = U \ Π, where U denotes the set of all d-bounded graphs. Let Π̄_n denote the set of n-vertex graphs that are not in Π_n, i.e., Π̄_n = U_n \ Π_n, where U_n denotes the set of all d-bounded n-vertex graphs. We have the following definition of graphs that are far from having some property.

Definition 2.2. Let Π = ∪_{n≥1} Π_n be a d-bounded graph property.
An n-vertex graph is said to be ε-far from having property Π_n if one has to modify more than εdn edges to make it satisfy Π_n. Let Π̄_{n;>ε} denote the set of all n-vertex graphs that are ε-far from Π_n. Let Π̄_{>ε} ⊆ Π̄ be the set of all graphs that are ε-far from Π, i.e., Π̄_{>ε} = ∪_{n≥1} Π̄_{n;>ε}.

Given a property Π = ∪_{n≥1} Π_n, an algorithm is called a tester for Π if it takes as input parameters 0 < ε ≤ 1, n, d, has query access to the adjacency lists of an n-vertex d-bounded graph G, and, with probability at least 2/3, accepts G if G ∈ Π_n and rejects G if G ∈ Π̄_{n;>ε}. The following gives the definition of constant-query testable properties.

Definition 2.3. We call a d-bounded graph property Π = ∪_{n≥1} Π_n (constant-query) testable if there exists a tester for Π that makes at most q_Π = q_Π(ε, d) queries, for some function q_Π(·, ·) that depends only on ε and d.

k-Discs and frequency vectors. The notions of k-discs and frequency vectors play an important role in analyzing constant-query testable properties. For any vertex v ∈ V, we let disc_k(G, v) denote the subgraph rooted at v that is induced by all vertices at distance at most k from v. For any two rooted subgraphs H_1, H_2, we say H_1 is isomorphic to H_2, denoted by H_1 ≃ H_2, if there exists a root-preserving isomorphism Φ : V(H_1) → V(H_2). Note that for constant d, the total number of possible non-isomorphic k-discs is also a constant, denoted by N(d, k). Furthermore, we let T_k = {∆_1, . . . , ∆_N} be the set of all isomorphism types of k-discs of d-bounded graphs, where N = N(d, k). Finally, we let freq_k(G) denote the frequency vector of G, which is indexed by the k-disc types in T_k such that freq_k(G)_∆ = |{v ∈ V : disc_k(G, v) ≃ ∆}| / n for any ∆ ∈ T_k, i.e., freq_k(G)_∆ denotes the fraction of vertices in G whose k-discs are isomorphic to ∆. Furthermore, for any subset S of vertices of G, we let freq_k(S | G) denote the vector that is indexed by the types in T_k such that freq_k(S | G)_∆ = |{v ∈ S : disc_k(G, v) ≃ ∆}| / |S|. For any vector f, we let ‖f‖_1 denote its ℓ1-norm.
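The frequency vector freq_k(G) can be sketched in code. Exact rooted-isomorphism testing of k-discs is expensive, so the toy implementation below labels each disc by k rounds of a rooted color-refinement; this is an assumption of the sketch (refinement may in principle conflate rare non-isomorphic discs), though it is exact on simple examples such as the cycle.

```python
from collections import Counter, deque

def disc_type(adj, root, k):
    """A canonical-ish label for the k-disc of `root`: k rounds of rooted
    color refinement, where the initial color of a vertex is its distance
    to the root. Cheap to compute; may conflate rare non-isomorphic discs."""
    dist = {root: 0}                       # BFS to collect the k-disc vertices
    q = deque([root])
    while q:
        u = q.popleft()
        if dist[u] == k:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    color = {v: (dist[v],) for v in dist}
    for _ in range(k):                     # refine: own color + sorted neighbor colors
        color = {v: (color[v], tuple(sorted(color[w] for w in adj[v] if w in dist)))
                 for v in dist}
    return color[root]

def freq_k(adj, k):
    """Fraction of vertices per k-disc label (the frequency vector)."""
    counts = Counter(disc_type(adj, v, k) for v in adj)
    return {t: c / len(adj) for t, c in counts.items()}

# Example: on the n-cycle every vertex has the same k-disc (a path of length 2k),
# so the frequency vector has a single entry equal to 1.
n = 20
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(freq_k(cycle, 2))   # one type with frequency 1.0
```

On a path graph, by contrast, vertices near the endpoints receive different labels than interior vertices, so the frequency vector has several entries.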
We have the following simple lemma on the ℓ1-norm distance of the frequency vectors of two graphs that are ε-close to each other. The proof follows the proof of Corollary 3 in [FPS15]; we provide it here for the sake of completeness.

Lemma 2.4. Let G_1 and G_2 be two n-vertex d-bounded graphs such that G_1 is ε-close to G_2. Then ‖freq_k(G_1) − freq_k(G_2)‖_1 ≤ 6εd^{k+1}.

Proof. Let F := E(G_1) △ E(G_2) denote the set of edges that appear in only one of the two graphs G_1, G_2. Since G_1 is ε-close to G_2, it holds that |F| ≤ εdn. Note that for any e ∈ F, the total number of vertices that are within distance at most k of either of its endpoints is at most 2(1 + d + d(d − 1) + · · · + d(d − 1)^{k−1}) ≤ 3d^k. This further implies that the total number of vertices that may have different k-disc types in G_1 and G_2 is at most |F| · 3d^k ≤ 3εd^{k+1}n. Finally, we note that each vertex with different k-disc types in G_1 and G_2 contributes at most 2/n to the ℓ1-norm distance of freq_k(G_1) and freq_k(G_2), which implies that ‖freq_k(G_1) − freq_k(G_2)‖_1 ≤ 3εd^{k+1}n · (2/n) = 6εd^{k+1}. This completes the proof of the lemma.

The converse of the above lemma is not true in general; that is, it is not true that the closeness of the frequency vectors of two graphs implies the closeness of the two graphs. However, Benjamini et al. [BSS10] showed that a form of the converse still holds for hyperfinite graphs. More precisely, they proved the following result.

Frequency preservers and blow-up graphs. The following lemma is due to Alon, and it roughly says that for any n-vertex d-bounded graph, there always exists a "small" graph, whose size is independent of n, that preserves the local structure well, i.e., its k-disc frequencies.

Lemma 2.6 (Proposition 19.10 in [Lov12]). For any δ > 0 and d, k ≥ 1, there exists a function M_d(δ, k) such that for every n-vertex graph G, there exists a graph H of size at most M_d(δ, k) with ‖freq_k(G) − freq_k(H)‖_1 ≤ δ.

Definition 2.7 ((δ, k)-DFP). We call the small graph H obtained from Lemma 2.6 a (δ, k)-disc frequency preserver (abbreviated as (δ, k)-DFP) of G.
We remark that though we know the existence of the function M d (δ, k) that upper bounds the size of some (δ, k)-DFP, there is no known explicit bound on M d (δ, k) for arbitrary d-bound graphs (see [FPS15] for explicit bounds of M d (δ, k) for some special classes of graphs). We use DFPs as a building block to construct n-vertex graphs that have constant-size connected components and approximately preserve the k-disc frequencies of a given n-vertex graph G. More precisely, we have the following definition. . Let H ′ be the n-vertex graph that is composed of ⌊n/h⌋ disjoint copies of H and n − h · ⌊n/h⌋ isolated vertices. We call H ′ the (δ, k)-blow-up graph of G. The following lemma follows directly from the above definition of blow-up graphs and the fact that the blow-up graph contains at most h ≤ M d (δ, k) isolated vertices. Expansion and expander graphs. Let G = (V, E) be a d-bounded graph. Let S ⊂ V be a subset such that |S| ≤ |V |/2. The expansion or conductance of set S is defined to be φ G (S) = e(S,V \S) d|S| , where e(S, V \ S) denotes the number of crossing edges from S to V \ S. The expansion of G is defined as φ(G) := min S:|S|≤|V |/2 φ G (S). We call G a φ-expander if φ(G) ≥ φ. We simply call G an expander if G is a φ-expander for some universal constant φ. Constant-Query Testable Properties and Hyperfinite Properties In this section, we give the proof of main theorem, i.e., Theorem 1.2. We first give the necessary tools in Section 3.1, and then give the proof of the first part and second part of Theorem 1.2 in Section 3.2 and 3.3, respectively. Basic Tools The following is a direct corollary of Lemma 2.5 by Benjamini et al. [BSS10]. Our second tool is the following characterization of constant-query testable properties by the so-called canonical tester. Such a characterization is similar to the previous ones given in [GR11,CPS16] for bounded-degree testable graph properties. 
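To see why the (δ, k)-blow-up graph defined above approximately inherits the k-disc statistics of the original graph, one can bound the contributions of the copies and of the padding vertices directly. The following back-of-the-envelope estimate is our own annotation, not a statement from the text:

```latex
% Sketch: frequency error of the (\delta,k)-blow-up graph H' built from a
% (\delta,k)-DFP H of G with h := |V(H)| \le M_d(\delta,k).  H' consists of
% \lfloor n/h \rfloor disjoint copies of H plus at most h isolated
% "padding" vertices.  Every vertex inside a copy of H has the same k-disc
% in H' as in H (the copies are disjoint), so only the padding contributes:
\begin{align*}
\|\mathrm{freq}_k(H') - \mathrm{freq}_k(G)\|_1
  \;\le\; \underbrace{\|\mathrm{freq}_k(H) - \mathrm{freq}_k(G)\|_1}_{\le\,\delta
      \ \text{(DFP property)}}
   \;+\; \underbrace{\tfrac{2h}{n}}_{\text{padding vertices}} .
\end{align*}
% For n large enough relative to M_d(\delta,k) the right-hand side is at
% most 1.1\,\delta, consistent with the 1.1\delta bound invoked later.
```

This is the shape of bound that a Lemma 2.9-type statement needs in order to compare G with its blow-up graph for all sufficiently large n.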
The main difference here is that our canonical tester makes decisions based on the frequency vectors, instead of the forbidden subgraphs as considered in the previous work. We have the following theorem, whose proof is deferred to Section 3.4. Infinite Testable Property Contain Infinite Hyperfinite Subproperties We now prove the first part of Theorem 1.2, i. e., every infinite testable property contains an infinite hyperfinite subproperty. We start by showing that for any fixed ε, and any graph G in a testable property Π, we can find another graph G ′ such that G ′ is (ε, s)-hyperfinite and the frequency vectors of G and G ′ are close. Lemma 3.3. Let δ, ε, k > 0. Let ε ′ = min{ε, δ 18d k+1 }. Let n ≥ n 2 (ε, δ, d, k). Let Π be a testable graph property with query complexity q Π = q Π (ε, d) and let G ∈ Π n . Then, there exists G ′ ∈ Π n such that Proof. Let n 2 (ε, δ, d, k) = max{n 0 ( δ ′ 3 , d, k), n 1 (ε ′ , d)}, where n 0 , n 1 are the numbers in the statements of Lemma 2.9 and Theorem 3.2, respectively. Let t = c · q Π (ε ′ , d) for the constant c > 1 from Theorem 3.2. By definition, it holds that t ≤ k ′ . Let H be the ( δ ′ 3 , k ′ )-blow-up graph of G. By Lemma 2.9 and our assumption that n ≥ n 2 , it holds that freq as t satisfies that 1 5t ≥ δ ′ and that t ≤ k ′ . Let T C be the canonical tester for Π with parameter ε ′ with corresponding query complexity t =q Π (ε ′ , d). Then by Theorem 3.2, T C will accept H with probability at least 2/3. This implies that H is ε ′ -close to Π. Let G ′ ∈ Π such that H is ε ′ -close to G ′ . We claim that G ′ is the graph we are looking for. First, we show that G ′ is (ε, M d ( δ ′ 3 , k ′ )))-hyperfinite. Recall that by definition, H is composed of ⌊n/h⌋ disjoint copies of a graph of size h and n − h · ⌊n/h⌋ isolated vertices, where h ≤ M d ( δ ′ 3 , k ′ ). This implies that H is (0, M d ( δ ′ 3 , k ′ ))-hyperfinite. 
It follows that G ′ is (ε, M d ( δ ′ 3 , k ′ ))-hyperfinite because we can remove at most ε ′ dn ≤ εdn edges from G ′ to obtain a graph of which all connected components have size at most M d ( δ ′ 3 , k ′ ). Second, we prove that freq k (G) − freq k (G ′ ) 1 ≤ δ. Note that the bound given by inequality (1) implies as k ≤ k ′ and δ ≥ δ ′ . Now since H and G ′ are ε ′ -close to each other, by Lemma 2.4, we have that where the last inequality follows from our setting of parameters. The claim then follows by applying the triangle inequality. This completes the proof of the lemma. The above lemma only guarantees that for every fixed ε > 0, and graph G ∈ Π n , one can find a graph G ε ∈ Π n that is (ε, M d (δ ′ , k ′ ))-hyperfinite (for δ ′ and k ′ as in Lemma 3.3). However, we cannot directly use G ε to construct an infinite hyperfinite subproperty. Recall that a set Π of graphs is called to be a hyperfinite property if there exists a function ρ : (0, 1] → N such that Π is (ε, ρ(ε))-hyperfinite for every ε > 0. Now, for any ε ′ < ε, we cannot guarantee that after removing ε ′ dn edge from G ε , one can obtain a graph that is the union of connected components of constant size. Furthermore, it is not guaranteed that Our idea of overcoming the above difficulty is to start with the above hyperfinite graph G 0 := G ε ∈ Π n for some fixed ε > 0, and then iteratively construct a sequence of graphs G i ∈ Π n with i ≥ 1 from G i−1 . The constructed graph G i+1 is guaranteed to inherit hyperfinite properties from G i . The key idea is to maintain the hyperfinite properties of G i by causing only a small perturbation of its k-disc vector. Choosing the parameters in this process carefully, we can maintain these hyperfinite properties for the whole sequence of graphs. Now we give the details in the following lemma. Note that the first part of Theorem 1.2 follows from this lemma. Proof. Let X := {|V | : G = (V, E) ∈ Π} be the set of sizes |V (G)| of graphs G in Π. 
Since Π is an infinite graph property, X is also an infinite set. We show that there exists a monotonically decreasing function ρ : (0, 1] → ℕ such that for each n ∈ X, we can find a graph H^(n) ∈ Π_n that is (ε, ρ(ε))-hyperfinite for every ε > 0. This will imply that the set Π′ = {H^(n) : n ∈ X} is an infinite ρ-hyperfinite property, which will then prove the lemma.

Let us now fix an arbitrary n ∈ X and let G ∈ Π_n be an arbitrary graph in Π_n. We let FindHyper(G, δ, ε, k, Π_n) denote the graph G′ that is obtained by applying Lemma 3.3 to G ∈ Π_n with parameters δ, ε, k. Now we construct H^(n) as follows. Let G₀ = G. We start by applying Lemma 3.3 to G₀ with parameters δ = δ₁, ε = ε₁ and k = k₁ to obtain a graph G₁ that is (ε₁, s₁)-hyperfinite, where s₁ is the corresponding size bound M_d(·, ·) from Lemma 3.3. In the i-th iteration, we apply Lemma 3.3 to G_i with parameters ε = ε_{i+1}, δ = δ_{i+1} and k = k_{i+1} to obtain a graph G_{i+1} that is (ε_{i+1}, s_{i+1})-hyperfinite. Finally, we stop the process after the i′-th iteration such that ε_{i′} dn < 1, and we set H^(n) = G_{i′}. The pseudo-code of the whole process is given in Algorithm 1, which returns H^(n) ← G_{i′} and which invokes Algorithm 2, a procedure SetSize(ε, δ, k, Π_n) returning the size parameter s, as a subroutine for setting the parameters.

Now we also note that, by the construction and Lemma 3.3, it holds for any i ≥ 0 that

    ‖freq_{k_{i+1}}(G_i) − freq_{k_{i+1}}(G_{i+1})‖₁ ≤ δ_{i+1}.

By noting that k_j ≤ k_{i+1} for any j ≤ i + 1, we have that ‖freq_{k_j}(G_i) − freq_{k_j}(G_{i+1})‖₁ ≤ δ_{i+1} for every such j. Furthermore, we have the following claim.

Claim 3.1. For any i ≥ 1 and any j ≤ i, it holds that ‖freq_{k_j}(G_j) − freq_{k_j}(G_i)‖₁ ≤ δ_j = 4ε_j/(d log(4/3)).

Proof. Recall that ε_{i+1} = ε_i/2 and δ_{i+1} = δ_i/2 for all i ≥ 1. We have

    ‖freq_{k_j}(G_j) − freq_{k_j}(G_i)‖₁ ≤ Σ_{ℓ=j+1}^{i} ‖freq_{k_j}(G_{ℓ−1}) − freq_{k_j}(G_ℓ)‖₁ ≤ Σ_{ℓ=j+1}^{i} δ_ℓ ≤ 2δ_{j+1} = δ_j,

where the first inequality follows from the triangle inequality and the last inequality follows from the convergence of the geometric series Σ_{ℓ=0}^{∞} 2^{−ℓ} = 2. Since δ_j = δ₁/2^{j−1}, ε_j = ε₁/2^{j−1} and δ₁ = 4ε₁/(d log(4/3)), it holds that δ_j = 4ε_j/(d log(4/3)). This completes the proof of the claim.
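Two quantitative remarks on the iteration above may be helpful; both are our own annotations under the stated halving schedule ε_{i+1} = ε_i/2:

```latex
% The process stops at the first index i' with \varepsilon_{i'} d n < 1, so
\[
  i' \;\le\; \big\lceil \log_2(\varepsilon_1 d n) \big\rceil + 1 ,
\]
% i.e. the number of iterations grows only logarithmically in n.
% In contrast, for a fixed target \varepsilon > 0 the index
\[
  j_\varepsilon \;:=\; \min\Big\{\, j \ge 1 :\;
      4\varepsilon_j \log\tfrac{4d}{\varepsilon_j} \le \varepsilon \,\Big\}
\]
% depends only on \varepsilon and d, never on n; this is what makes
% \rho(\varepsilon) := s_{j_\varepsilon} a legitimate hyperfiniteness
% function for the whole family \{H^{(n)}\}_{n \in X}.
```

The second point is exactly the observation used below that j_ε is independent of n even though i′ may depend on n.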
Now by the fact that G j ∈ Π n is (ε j , s j )-hyperfinite, Claim 3.1 and Lemma 3.1, it follows that G i+1 is (4ε j log 4d ε j , s j )-hyperfinite, for any j ≤ i + 1. In particular, let i ′ denote the index such that our algorithm outputs G i ′ , i.e., It is important to note that even though i ′ might depend on n, the index j ε is always independent of n, and depends only on ε. By the above analysis, for any n ∈ X with n ≥ n 3 (d), we find an n-vertex graph H (n) ∈ Π n satisfying the following: for any ε > 0, there exists j ε such that by removing (4ε jε log 4d ε jε )·dn ≤ εdn edges, one can decompose H (n) into connected components each of which has size at most s jε ≤ ρ(ε). Thus, it holds that H (n) is (ε, ρ(ε))-hyperfinite for any ε > 0. This completes the proof of the lemma. Every Complement of a Non-Trivially Testable Property Contains a Hyperfinite Subproperty We now prove the second part of Theorem 1.2, i. e., the complement of every non-trivially testable property contains a hyperfinite subproperty. The formal definition of non-trivially testable property is given as follows. Definition 3.5 (non-trivially testable). A graph property Π is non-trivially testable if it is testable and there exists ε > 0 such that the set of graphs that is ε-far from Π is infinite. Note that for a property that is not non-trivially testable, for any ε > 0, we can always accept all graphs of size n ≥ n 4 , where n 4 := n 4 (ε) is a finite number (that might not be computable) such that there are at most n 4 graphs that are ε-far from having the property. For graphs of size smaller than n 4 , one can simply read the whole graph to test if the graph satisfies the property or not. The second part of Theorem 1.2 will follow from the following lemma. Lemma 3.6. The complement of every non-trivially testable d-bounded graph property Π contains an infinite (0, c)-hyperfinite subproperty, where c depends only on Π. Proof. 
Since Π is non-trivially testable, by Definition 3.5, there exists ε > 0 and an infinite set N ⊆ N such that for every n ∈ N , Π n;>ε is non-empty. Let ε > 0 be the largest value such that Π >ε contains an infinite number of graphs. Let δ = 1 13t , where t := q Π (ε, d) denotes the query complexity of Π. Let k =q Π (ε, d) = t 2t . Fix an arbitrary n ∈ N such that n ≥ n 0 , where n 0 = n 0 (δ, d, k) is the number given in Lemma 2.9. Let G n ∈ Π n;>ε be an arbitrary graph in Π n;>ε . Let H (n) be the (δ, k)-blow-up graph of G n . Note that H (n) is (0, k)-hyperfinite. Now we claim that H (n) / ∈ Π. Assume on the contrary that H (n) ∈ Π. By Lemma 2.9, freq k (G n ) − freq k H (n) 1 ≤ 1.1δ. Therefore, by Theorem 3.2, the canonical tester for Π accepts G n with probability at least 2/3, which is a contradiction to the fact that G n ∈ Π n;>ε . The lemma follows by defining the set Π ′ := {H (n) : n ∈ N } and c = k = q Π (ε, d) 2q Π (ε,d) . Proof of Theorem 3.2 In this section, we give the proof sketch of Theorem 3.2. The first part (i.e., the transformations from the original tester T to the canonical tester T C ) of the proof follows from the proof of the canonical testers in [GR11,CPS16], and we sketch the main ideas for the sake of completeness. The last part (i.e., how the behaviour of tester T C relates to the frequency vector) of the proof differs from previous work and it is tailored to obtain the characterization as stated in the theorem, which in turn will be suitable for our analysis of the structures of constant-query properties. Proof Sketch of Theorem 3.2. Let T be a tester for Π n on n-vertex graphs with error probability (reduced to) at most 1 24 . The query complexity of the tester T will be t := c · q Π (ε, d) for some constant c > 1, where q Π (ε, d) is the query complexity of the tester for Π with error probability at most 1 3 . We will then transform T to a canonical tester T C in the same way as in the proof of Lemma 3.1 in [CPS16] (see also [GR11]). 
Slightly more precisely, we first convert T into a tester T 1 that samples random t-discs of the input graph and answers all of T 's queries using the corresponding subgraph H. That is, it samples a set S of t vertices and then makes its decision on the basis of the t-discs rooted at vertices in S by using uniformly random ordering of vertices and emulating the execution of T accordingly on the permuted graph. Then, we convert T 1 into a tester T 2 whose output depends only on the edges and non-edges in the explored subgraph, the ordering of all explored vertices and its own random coins. This can be done by letting T 2 accept the input graph G with the average probability that T 1 accepts G over all possible labellings of H with corresponding sequences of queries and answers. Next, we convert T 2 into the final tester T 3 whose output is independent of the ordering of all explored vertices. This can be done by letting T 3 accept with probability that is equal to the average of all acceptance probabilities of T 2 over all possible relabellings of vertices in H. Finally, we convert T 3 into a tester T C that returns the output deterministically according to the unlabeled version of the explored subgraph and its roots. This can be done by letting T C accepts the input graph if and only if the probability associated with the explored subgraph H is at least 1/2. By similar arguments in the proof of Lemma 3.1 in [CPS16], we can show that T C is a tester for Π that has error probability at most 1/12. That is, for each G ∈ Π n , T C accepts G with probability at least 1 − 1 12 . For any graph G ∈ Π n;>ε , T C rejects G with probability at least 1 − 1 12 . Furthermore, note that the query complexity of T C is at most t · d t+2 . Now if we let n 1 := 12d 2t t 2 , then for any n ≥ n 1 , it holds that with probability at least 1− d 2t t 2 n ≥ 1 − 1 12 , none of the t sampled t-discs will intersect. 
That is, with probability 1 − 1 12 , the decision of the tester T C will only depend on the structure (or the isomorphic types) of the explored t disjoint t-discs. Let δ C = 1 12t . We now consider the input graph G satisfying that min G ′ ∈Πn freq t (G)−freq t (G ′ ) 1 ≤ δ C . Let G ′ ∈ Π n denote a graph for which this minimum is attained. Note that there is a bijection Φ(v)) for at most a δ C -fraction of the vertices v ∈ V (G). Recall that S denotes the sample set. Note that for any vertex v that is sampled independently and uniformly at random, the probability that disc t (G, v) ≇ disc t (G ′ , v) is bounded by the total variation distance of freq t (G) and freq t (G ′ ), which is at most δ C /2 by our assumption. By the union bound, the probability that there exists some vertex v ∈ S with disc t (G, v) ≇ disc t (G ′ , Φ(v)) is at most |S| · δ C ≤ t · 1 12t ≤ 1 12 . Since T C rejects G ′ with probability at most 1 12 and the probability that there exists some pair of all t sampled t-discs intersecting is at most 1 12 , T C rejects G with probability at most 1 12 + 1 12 + 1 12 = 1 4 . The case when G satisfying that min G ′ ∈Πn;>ε freq t (G) − freq t (G ′ ) 1 ≤ δ C can be analyzed analogously. In particular, if G satisfies this condition, then T C accepts G with probability at most 1 12 + 1 12 + 1 12 = 1 4 . Therefore, T C accepts (resp. rejects) G with probability at least 1− 1 This completes the proof of the theorem. Do Testable Non-Hyperfinite Properties Contain Infinitely Many Expanders? In the light of the previous result, a natural question is whether every testable infinite property that is not hyperfinite must contain an infinite subproperty that consists only of expander graphs or graphs that are close to an expander graph. Unfortunately, such a statement is not true as the aforementioned Theorem 1.4 shows. In the following, we present the proof of Theorem 1.4. Proof of Theorem 1.4. We start by defining the graph property. 
Π consists of all graphs G = (V, E) with maximum degree d that have a single connected component with ⌈|V|/2⌉ vertices, while the remaining ⌊|V|/2⌋ connected components are isolated vertices. We observe that Π is not hyperfinite, as the big connected component may be an expander graph, in which case one has to remove Ω(n) edges to partition it into small connected components. Furthermore, one has to insert Ω(n) edges to make the graph connected, which is a necessary condition for having expansion greater than 0. Finally, we show that the property can be tested with query complexity O(d/ε²). The algorithm consists of two stages. In the first stage, we sample O(1/ε²) vertices uniformly at random and estimate the number of isolated vertices. We reject if this estimate differs from ⌊|V|/2⌋ by more than ε|V|/8. In the second stage, we sample another O(1/ε) vertices and perform, for every sampled vertex v, a BFS until we have explored the whole connected component of v or we have explored more than 12/ε vertices. We may assume that the graph contains more than, say, 100/ε vertices, as otherwise we can simply query the whole graph. The tester rejects if it finds a connected component that is not an isolated vertex. We now prove that the above algorithm (with a proper choice of constants) is a property tester. Our analysis (in particular for the second stage) uses some ideas that were first introduced in the analysis of a connectivity tester in [GR02]. We first show that the tester accepts every G ∈ Π. For a sufficiently large constant in the O-notation, we obtain by Chernoff bounds that, with probability at least 9/10, the first stage of the tester approximates the number of isolated vertices in G within an additive error of ε|V|/8. If this approximation succeeds, the first stage of the tester does not reject. Furthermore, the second stage never rejects a graph G ∈ Π. Thus, the tester accepts with probability at least 9/10.
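The two-stage tester described above can be sketched in a few lines. This is our own illustrative implementation, not the authors' code: the adjacency-list oracle is modeled as a plain Python dict, the constants hidden in the O-notation are fixed arbitrarily (here 400), the helper names `two_stage_tester` and `component_size_capped` are hypothetical, and the "read the whole graph when n ≤ 100/ε" fallback is omitted for brevity.

```python
import random

def component_size_capped(adj, v, cap):
    """Size of v's connected component, exploring at most cap+1 vertices."""
    seen = {v}
    stack = [v]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
                if len(seen) > cap:
                    return len(seen)  # component is larger than the cap
    return len(seen)

def two_stage_tester(adj, eps, rng):
    """Sketch of the two-stage tester for the property Pi described above."""
    n = len(adj)
    verts = list(adj)
    # Stage 1: estimate the number of isolated vertices from O(1/eps^2)
    # samples; reject if the estimate is off from floor(n/2) by > eps*n/8.
    s1 = int(400 / eps ** 2) + 1
    hits = sum(1 for _ in range(s1) if not adj[rng.choice(verts)])
    if abs(n * hits / s1 - n // 2) > eps * n / 8:
        return False
    # Stage 2: O(1/eps) samples; reject on any non-isolated component that
    # is fully explored within the 12/eps budget.
    cap = int(12 / eps)
    for _ in range(int(400 / eps) + 1):
        size = component_size_capped(adj, rng.choice(verts), cap)
        if 1 < size <= cap:
            return False
    return True
```

A graph with one large component (larger than 12/ε) plus ⌊n/2⌋ isolated vertices passes both stages, while a perfect matching on n vertices already fails stage 1, since it has no isolated vertices at all.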
Next, consider a graph that is ε-far from Π. We begin with the following claim.

Claim 4.1. Let G be ε-far from Π. Then either the number of isolated vertices in G differs by more than ε|V|/4 from ⌊|V|/2⌋, or there are more than ε|V|/12 connected components of size at most 12/ε that are not isolated vertices.

Proof. Assume that the claim is not true, i.e., there is a graph G that is ε-far from Π such that the number of isolated vertices in G differs by at most ε|V|/4 from ⌊|V|/2⌋ and there are at most ε|V|/12 connected components of size at most 12/ε that are not isolated vertices. We will argue that in this case we can modify at most εdn edges to turn G into a graph that satisfies Π, which is a contradiction. We start with the connected components that are not isolated vertices. We can add a single edge to connect two such components. However, we must make sure that we are not violating the degree bound. If both connected components have a vertex of degree at most d − 1, we can simply add an edge to connect them. If all vertices of a connected component have degree d > 1, then the component contains a cycle, and we can remove an edge from the cycle without destroying connectivity. Thus, we need to modify at most 3 edges to connect two connected components. We observe that there are at most εn/12 connected components of size more than 12/ε, and so there are at most εn/6 connected components that are not isolated vertices. We can create a single connected component out of them by modifying at most εn/2 edges. These modifications did not change the number of isolated vertices in G, so it still differs by at most ε|V|/4 from ⌊|V|/2⌋. If there are too many isolated vertices, we can connect each of them to the big connected component with at most 2 edge modifications, resulting in at most εn/2 modifications in total. If there are too few isolated vertices, we need to disconnect vertices from the big connected component. For this purpose, consider a spanning tree T of the connected component.
We will remove a leave of T . This can be done with d edge modifications and does not change connectivity. Thus we can create exactly ⌊|V |/2⌋ isolated vertices using at most εdn/4 modifications. Overall, the number of modifications is at most εdn, which proves that the graph was not ε-far from Π. A contradiction. It remains to show that our tester rejects any G that is ε-far from Π. By Claim 4.1 we know that either the number of isolated vertices in G differs by more than ε|V |/4 from ⌊|V |/2⌋ or G has at least ε|V |/12 connected components of size at most 12/ε. In the first case, our algorithm rejects with probability at least 9/10 as it approximates the number of isolated vertices with additive error ε|V |/8 and rejects if the estimate differs by more than ε|V |/4 from ⌊|V |/2⌋. In the second case we observe that for sufficiently large constant in the O-notation with probability at least 9/10 we sample a connected component of size at most 12/ε. In this case our algorithm detects the component and rejects. Thus, with probability at least 9/10 the algorithm rejects. The query complexity and running time of the algorithm are dominated by the second stage, which can be done in O(d/ε 2 ) time. Since an expander graph is connected, it follows also that this property contains no graphs that are close to expander graphs. Consider the k-discs of graphs from the property Π in the proof of Theorem 1.4. Recall that the graphs from the property consist of a connected graph on ⌈|V |/2⌉ vertices and ⌊|V |/2⌋ isolated vertices. We may view graphs in Π as the union of two graphs G 1 and G 2 of roughly the same size that satisfy two different properties: G 1 is connected and the G 2 has no edges. The k-discs of these graphs have two interesting properties: • no k-disc in G 1 occurs in G 2 and vice versa, and • their centers cannot be adjacent in any graph. 
If G 1 and G 2 have the above properties then this means that the k-discs cannot "mix" in any connected component of another graph. Thus, we know whether they are supposed to come from G 1 or G 2 , which is helpful to design a property tester. We remark that this phenomenon can also happen for other k-discs like, for example, if G 1 is 4-regular and G 2 is 6-regular. We believe that understanding this phenomenon is important for a characterization of testable properties in bounded-degree graphs as we can use it to construct other testable properties in a similar way as above. This motivates the following definition: Definition 4.1. We call two k-disc isomorphism types D 1 , D 2 with roots u 1 , u 2 incompatible, if there exists no graph in which two adjacent vertices u 1 and u 2 have k-disc type D 1 and D 2 , respectively. Partitioning Theorem for Bounded-Degree Graphs The fact that there are testable properties that are composed of other properties with disjoint sets of incompatible k-discs (see Definition 4.1) leads to the question if we can always decompose the vertex set of a graph into sets such that the k-disc types behave "similarly" within each set. A simple partition would be to divide the vertex set according to its k-disc isomorphism type. But such a partition is meaningless. In the light of previous work, we decided to consider the case that a partition has to have only a small fraction of the edges between the partition classes. We would like to obtain a partition into sets S 1 , . . . , S r and a set T (which is a separator), such that no edges are between S i and S j for any i = j and T is of small size. The next question is to specify what it means to behave "similarly". One such specification is to ask that the k-disc distribution inside the partition is stable for every subset. Obviously, this cannot always be the case unless there is only one k-disc isomorphism type. Instead, we are only looking at sets that do not have too many outgoing edges. 
For these subsets we can show that they always have roughly the same k-disc distribution as their partition. The formal theorem we prove is the following. Theorem 5.1. Let G = (V, E) be a d-bounded graph. For every k ≥ 0 and every 1 ≥ δ > 0 the vertex set V can be partitioned into r ≤ f (δ, d, k) subsets S 1 , · · · , S r and a set T such that • for every i = j there are no edges between S i and S j , • |T | ≤ δd|V |, • and for every i and every subset X of S i with φ G (X) ≤ δ 2 it holds that freq k (X | G) − freq k (S i | G) 1 ≤ 3δ. It remains to construct the sets S i . For this purpose, we put a δ-net over the space of all k-disc frequency vectors, i.e. we compute a smallest set N = {v 1 , . . . , v |N | } of frequency vectors such that every frequency vector there exists a vector in N within l 1 distance at most δ. We observe that |N | is a function of k, d and δ. We then define S i to be the union of all A j that have v i as the closest vector to their frequency vector. It remains to prove that the S i satisfy the third property for δ 2 . For this purpose consider an arbitrary subset X ⊆ S i . We consider X ∩ A j for the sets A j whose union S i is. If X ∩ A j = A j then we know that φ G (X ∩ A j ) > δ. Recall that the edges that leave X ∩ A j either go to A j \ X or to T , where X ∩ T = ∅. If φ G (X) ≤ δ 2 , then it holds that at most a δ-fraction of the elements from X can be from a subset A j with φ G (X ∩ A j ) > δ. This is true as otherwise the number of edges crossing X and V \ X is at least δ|X| · δd, which contradicts the assumption that φ G (X) ≤ δ 2 . Let J be the set of all indices j such that A j ∩ X = A j . Hence we get Now let us define X 1 = {x ∈ X|x ∈ A j , j ∈ J} and X 2 = X \ X 1 . We know that |X 2 | ≤ δ|X|. We also observe that 1 since all frequency vectors have l 1 -norm 1. It follows that This completes the proof of the theorem. 
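The size of the δ-net N used in the proof above can be bounded by a standard volumetric covering argument. The following estimate is our own addition; the explicit constant is only indicative:

```latex
% The k-disc frequency vectors live in the probability simplex
%   S = \{ f \in \mathbb{R}^{N(d,k)}_{\ge 0} : \|f\|_1 = 1 \},
% which is contained in the unit \ell_1-ball of \mathbb{R}^{N(d,k)}.
% By the usual volume/packing bound for covering a unit norm ball by
% \delta-balls of the same norm, a \delta-net can be taken of size
\[
  |\mathcal{N}| \;\le\; \Big( 1 + \tfrac{2}{\delta} \Big)^{N(d,k)}
  \;\le\; \Big( \tfrac{3}{\delta} \Big)^{N(d,k)}
  \qquad (0 < \delta \le 1),
\]
% so |N| is a function of d, k and \delta only, as required for the
% bound r \le f(\delta, d, k) in Theorem 5.1.
```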
Conclusions We have shown that every constant-time testable property in the bounded-degree graph model is either finite or contains an infinite hyperfinite subproperty. We hope that this result is a first step to obtain a full characterization of all testable properties in bounded-degree graphs. Unfortunately, a similar result cannot be derived for expander graphs, i.e. it is not true that every testable infinite property that is not hyperfinite contains an infinite family of expander graphs or graphs that are close to expander graphs. The structure of this counter-example motivated us to study partitionings of bounded-degree graphs into sets of vertices such that the distribution of k-discs on any subset with bounded expansion is close to the distribution of the set. We hope that this partitioning will be helpful to make further progress towards a characterization of all testable properties in bounded-degree graphs.
Martingale property of exponential semimartingales: a note on explicit conditions and applications to financial models We give a collection of explicit sufficient conditions for the true martingale property of a wide class of exponentials of semimartingales. We express the conditions in terms of semimartingale characteristics. This turns out to be very convenient in financial modeling in general. Especially it allows us to carefully discuss the question of well-definedness of semimartingale Libor models, whose construction crucially relies on a sequence of measure changes. Introduction Local martingales are the core object of stochastic integration. Thus they provide a natural access to time evolutionary stochastic modeling, which is a cornerstone of mathematical finance. The fundamental theorem of asset pricing states that the absence of arbitrage is essentially equivalent to the local martingale property of discounted asset prices under some equivalent probability measure. One important benefit of the true martingale property of discounted asset price processes is their use for density processes of a change of measure. In financial terms this corresponds to a change of numeraire. Since the seminal work of Geman et al. (1995) this concept became indispensable for both computational and modeling aspects. Often a change of numeraire facilitates option pricing by reducing complexity of computations. Moreover, it is a building stone of the construction of Libor market models introduced by Brace et al. (1997) and Miltersen et al. (1997). More fundamentally, a change of measure connects historical and risk-neutral probability measures. On the other hand if the discounted asset price process is a strict local martingale, i.e. a local martingale which is not a true martingale, this is sometimes interpreted as financial bubble. 
However, the definition and existence of financial bubbles critically depend on the specific notion of the market price, arbitrage and admissible strategies, see for example Cox and Hobson (2005) and Jarrow et al. (2010). In a typical modeling situation it is desirable to work with true martingales. Usually price processes are non-negative and therefore are modeled as exponentials of semimartingales, which form a wide and flexible class of positive processes. One can characterize the local martingales in this class by a drift condition. It is, however, more involved to identify conditions for their true martingale property. In order to formulate the problem more precisely, denote by X an ℝ^d-valued semimartingale and by λ an ℝ^d-valued predictable process which is integrable with respect to X. Then

    λ · X := Σ_{i≤d} ∫₀^· λ^i dX^i

denotes the real-valued stochastic integral process of λ with respect to X. Moreover, let V be a predictable process with finite variation. We pose the following question: Under which conditions on the characteristics of X is a real-valued semimartingale Z of the form Z := e^{λ·X − V} a (uniformly integrable) martingale? If e^{λ·X} is a special semimartingale, there exists a unique predictable process of finite variation V such that Z is a local martingale. In this case, V is called the exponential compensator of λ · X, see Section 2 for details. Various criteria for the more delicate true martingale property of Z have been proposed. The seminal paper by Novikov (1972) treats the continuous semimartingale case. Sufficient conditions for general semimartingales are provided for example in Lepingle and Mémin (1978), Kallsen and Shiryaev (2002), Jacod (1979), Cheridito et al. (2005) and Protter and Shimbo (2008); see also a recent paper by Larsson and Ruf (2014) for further generalizations of Novikov-Kazamaki type conditions based on convergence results for local supermartingales.
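For orientation, the exponential compensator mentioned above admits an explicit expression in terms of the characteristics. The following sketch is our own addition, written for the quasi-left continuous case under the assumption that e^{λ·X} is special; the precise statement is in Kallsen and Shiryaev (2002), which should be consulted for the general case:

```latex
% If \lambda \cdot X has differential characteristics (b, c, F; A) relative
% to a truncation function h, and e^{\lambda \cdot X} is a special
% semimartingale, then the exponential compensator V of \lambda \cdot X is
\[
  V \;=\; \Big( \lambda^{\top} b
      \;+\; \tfrac{1}{2}\, \lambda^{\top} c\, \lambda
      \;+\; \int_{\mathbb{R}^d} \big( e^{\lambda^{\top} x} - 1
            - \lambda^{\top} h(x) \big)\, F(dx) \Big) \cdot A ,
\]
% so that Z = e^{\lambda \cdot X - V} is a local martingale.  The true
% martingale property of Z is the delicate question addressed by the
% criteria cited in the text.
```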
Moreover, we refer to Section 1 and Section 3 of Kallsen and Shiryaev (2002) for an exhaustive literature overview. In the special case when X is a process with independent increments and absolutely continuous characteristics and λ deterministic, show that if Z is a local martingale, it is also a true martingale. Deterministic conditions ensuring the martingale property of an exponential of an affine process are given in Kallsen and Muhle-Karbe (2010). The conditions for more general semimartingales are not as explicit. Our contribution is to give explicit conditions for the martingale property of an exponential quasi-left continuous semimartingale in terms of its characteristics. In Section 2 we introduce the notation and describe the general semimartingale setting following Jacod and Shiryaev (2003). Section 3 contains the main results. The advantage of the explicit conditions is their convenience for applications. We illustrate this by investigating the true martingale property of asset prices in semimartingale stochastic volatility models in Section 4.1. Finally, in Section 4.2 we prove the well-definedness of the backward construction of Lévy Libor models. More precisely, we show that the candidate density processes for the measure changes are indeed true martingales which has not been rigorously proved earlier. Moreover, we present a natural extension to the semimartingale Libor model. Semimartingale notation and preliminaries In this section we introduce the notation and summarize the basic notions and facts from the semimartingale theory in order to keep the paper self-contained. Our main reference is Jacod and Shiryaev (2003), whose notation we use throughout the paper. Other standard references for stochastic calculus and semimartingales are e.g. Jacod (1979), Métivier (1982) and Protter (2004). Let (Ω, F, (F t ) t≥0 , P) denote a stochastic basis, i.e. a filtered probability space with right-continuous filtration. 
For a class of processes C, we say that a process X is in the localized class C_loc if there exists a sequence of stopping times (τ_n)_{n∈N} such that a.s. τ_n ↑ ∞ as n → ∞ and X^{τ_n} ∈ C. Denote by M the class of càdlàg uniformly integrable martingales. The processes in the localized class M_loc are called local martingales. We denote by V^+ (resp. V) the set of all real-valued càdlàg processes starting from zero that have nondecreasing paths (resp. paths with finite variation over each finite interval [0, t]). Let A^+ denote the set of all processes A ∈ V^+ that are integrable, i.e. such that E[A_∞] < ∞, where A_∞(ω) := lim_{t→∞} A_t(ω) ∈ R_+ for every ω ∈ Ω. Moreover, let A denote the set of all A ∈ V that have integrable variation, i.e. Var(A) ∈ A^+, where for every t ≥ 0 and every ω ∈ Ω, Var(A)_t(ω) is defined as the total variation of the function s → A_s(ω) on [0, t]. A process X is called a semimartingale if it has a decomposition of the form

X = X_0 + M + A, (2.1)

where X_0 is finite-valued and F_0-measurable, M ∈ M_loc with M_0 = 0 and A ∈ V. If A in decomposition (2.1) is predictable, X is called a special semimartingale and the decomposition is unique. A semimartingale is called quasi-left continuous if a.s. ΔX_τ = 0 on the set {τ < ∞} for all predictable times τ. Let X be an R^d-valued semimartingale, i.e. each component of X satisfies (2.1). Denoting by ε_a the Dirac measure at point a, the random measure of jumps μ^X of X is the integer-valued random measure

μ^X(ω; dt, dx) := ∑_{s>0} 1_{{ΔX_s(ω) ≠ 0}} ε_{(s, ΔX_s(ω))}(dt, dx).

There is a version of the predictable compensator of μ^X, denoted by ν, such that the R^d-valued semimartingale X is quasi-left continuous if and only if ν(ω, {t} × R^d) = 0 for all ω ∈ Ω, cf. Jacod and Shiryaev (2003), Corollary II.1.19. In general, ν satisfies

(|x|² ∧ 1) * ν ∈ A_loc. (2.2)

The semimartingale X admits a canonical representation

X = X_0 + B(h) + X^c + h(x) * (μ^X − ν) + (x − h(x)) * μ^X,

where h : R^d → R^d is a truncation function, i.e.
a function that is bounded and behaves like h(x) = x around 0, B(h) is a predictable R^d-valued process with components in V, and X^c is the continuous martingale part of X. Denote by C the predictable R^d ⊗ R^d-valued covariation process with components C^{ij} := ⟨X^{i,c}, X^{j,c}⟩. Then the triplet (B(h), C, ν) is called the triplet of predictable characteristics of X (or simply the characteristics of X). It can be shown (see Proposition II.2.9 in Jacod and Shiryaev (2003)) that there exists a predictable process A ∈ A^+_loc such that

B(h) = b(h) · A, C = c · A, ν(ω; dt, dx) = F_t(ω, dx) dA_t(ω),

where b(h) is a d-dimensional predictable process, c is a predictable process taking values in the set of symmetric non-negative definite d × d-matrices, and F is a transition kernel from (Ω × R_+, P) into (R^d, B(R^d)). Here P denotes the predictable σ-field on Ω × R_+. We call (b(h), c, F; A) the triplet of differential (or local) characteristics of X. If X admits the choice A_t = t above, we say that X has absolutely continuous characteristics (shortly AC) and call X an Itô semimartingale. An important subclass of semimartingales is the class of Itô semimartingales with independent increments. These processes are known as time-inhomogeneous Lévy processes or as Processes with Independent Increments and Absolutely Continuous characteristics (PIIAC). The differential characteristics (b(h), c, F) of a PIIAC X, for every truncation function h, are deterministic and satisfy the following integrability assumption: for every T > 0,

∫_0^T (|b_t(h)| + ‖c_t‖ + ∫_{R^d} (|x|² ∧ 1) F_t(dx)) dt < ∞,

where ‖·‖ denotes any norm on the set of d × d-matrices. For every t > 0, the law of X_t is characterized by a Lévy-Khintchine type formula for its characteristic function. This property makes the class of PIIAC particularly suitable for applications. The following definition and results on exponentials of semimartingales are given in Definition 2.12, Lemma 2.13 and Lemma 2.15 in Kallsen and Shiryaev (2002). Remark 2.2.
Let Y be a real-valued semimartingale, denote by ν^Y the compensator of the random measure of jumps of Y, and let h be a truncation function. (a) The following statements are equivalent: Let X be an R^d-valued semimartingale with differential characteristics (b(h), c, F; A) and λ ∈ L(X), where L(X) denotes the set of predictable processes integrable with respect to X, cf. Jacod and Shiryaev (2003), page 207. Moreover, assume that λ · X is exponentially special. Following Jacod and Shiryaev (2003), Section III.7.7a, we define the Laplace cumulant process

K^X(λ) := (λ^⊤ b(h) + ½ λ^⊤ c λ + ∫ (e^{⟨λ,x⟩} − 1 − ⟨λ, h(x)⟩) F(dx)) · A

and the modified Laplace cumulant process K̃^X(λ) := ln(E(K^X(λ))), where E denotes the stochastic exponential. The following results are proved in Proposition III.7.14 and Theorem III.7.4 in Jacod and Shiryaev (2003):

Proposition 2.3. Let X be an R^d-valued semimartingale and λ ∈ L(X) such that λ · X is exponentially special. (i) The modified Laplace cumulant process K̃^X(λ) is the exponential compensator of λ · X, i.e. the process Z := e^{λ·X − K̃^X(λ)} is a local martingale. (ii) If X is quasi-left continuous, the Laplace cumulant process K^X(λ) and the modified Laplace cumulant process K̃^X(λ) coincide, i.e. K^X(λ) = K̃^X(λ).

In the following section we give sufficient conditions for the martingale property of exponential semimartingales.

The Martingale Property of Exponential Semimartingales

Integrability conditions ensuring the (UI) martingale property of a non-negative or positive local martingale have been studied from many perspectives and at various levels of generality. This started with the classical condition of Novikov (1972), which applies to continuous exponential local martingales. A natural generalization including jumps was given in the seminal paper of Lepingle and Mémin (1978). Various related conditions are given by Kallsen and Shiryaev (2002). A profound overview of Novikov-type conditions as well as boundedness conditions is given in the monograph of Jacod (1979).
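Before turning to the conditions, a quick numerical sanity check of the exponential compensation in Proposition 2.3(i) in the Lévy case (a sketch with simple, arbitrarily chosen parameters, not the paper's general setting): for X_t = σW_t + aN_t with N a Poisson process of rate μ, the Laplace cumulant is deterministic and linear in time, K^X(λ)_t = t(σ²λ²/2 + μ(e^{λa} − 1)), and e^{λX_t − K^X(λ)_t} should have expectation 1.

```python
import math
import random

def sample_poisson(rng, mu):
    """Knuth's inversion sampler for a Poisson(mu) variate (fine for small mu)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mean_compensated_exponential(lam=1.0, sigma=0.3, a=0.2, mu=2.0, t=1.0,
                                 n_paths=100_000, seed=7):
    """Monte Carlo estimate of E[exp(lam*X_t - K_t)] for X_t = sigma*W_t + a*N_t
    with Laplace cumulant K_t = t*(sigma^2*lam^2/2 + mu*(exp(lam*a) - 1))."""
    rng = random.Random(seed)
    K = t * (0.5 * sigma**2 * lam**2 + mu * (math.exp(lam * a) - 1.0))
    total = 0.0
    for _ in range(n_paths):
        x = sigma * rng.gauss(0.0, math.sqrt(t)) + a * sample_poisson(rng, mu * t)
        total += math.exp(lam * x - K)
    return total / n_paths

print(mean_compensated_exponential())  # close to 1.0
```

Here the Gaussian part contributes σ²λ²/2 and the jump part μ(e^{λa} − 1) to the cumulant, mirroring the diffusion and jump terms in the general formula for K^X(λ).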
In this section we collect conditions for exponential semimartingales and express them in terms of semimartingale characteristics. Thanks to this expression, such conditions are usually called predictable conditions. Let us start with a Novikov-type integrability condition which is based on the main result of Lepingle and Mémin (1978). We follow its statement as given by Jacod (1979), Corollary 8.44.

Proof: Note that the characteristics of X^T, for any T > 0, are obtained by stopping the characteristics of X at T. Now, since local martingales whose localizing sequence is deterministic are martingales, the first claim follows immediately from the second. Note that (A1) implies that X is exponentially special and hence that M is a local martingale. In view of Theorem 2.19 in Kallsen and Shiryaev (2002), the second claim follows from Jacod (1979), Corollary 8.44.

As an immediate corollary we derive the following sufficient conditions for the case where X is given as a stochastic integral.

Proof: The characteristics of λ · X are given by Proposition IX.5.3 in Jacod and Shiryaev (2003). Now the claim follows by an application of Proposition 3.1.

For applications the following boundedness condition turns out to be useful; see for instance Corollary 4.1 and Propositions 4.2 and 4.3 in the sections below.

Proposition 3.5. Let X be as in Corollary 3.3 and let λ ∈ L(X). If (C1) holds, i.e. for every T ≥ 0 there exists a non-negative constant κ(T) such that the corresponding bound on the characteristics is satisfied a.s., then Z := e^{λ·X − K̃^X(λ)} is a martingale.

Proof: Again, the first part is an immediate consequence of the second. Note that (C1) implies that λ · X is exponentially special. Hence, we can deduce the claim from Theorem 2.19 in Kallsen and Shiryaev (2002) together with Lemma 8.8 and Theorem 8.25 in Jacod (1979).

Remark 3.6. Clearly, thanks to Corollary 3.3, condition (C1) could be replaced by the following condition: for every T ≥ 0 there exists a constant κ(T) such that the bound (3.1) holds a.s. An elementary inequality, as for instance noted in Esche (2004), Lemma 2.13, shows that condition (C1) is an improvement on (3.1).
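Boundedness conditions of the type above are trivially verified when the integrand λ is a bounded predictable process. In the continuous Brownian case this is exactly the situation covered by Novikov's criterion. The following simulation (a sketch with an arbitrarily chosen bounded, path-dependent integrand, not taken from the paper) illustrates that the compensated stochastic exponential keeps mean 1; the left-point (predictable) evaluation of λ makes each discrete factor have conditional mean exactly 1:

```python
import math
import random

def mean_doleans_exponential(n_steps=100, T=1.0, n_paths=20_000, seed=1):
    """Simulate Z_T = prod_i exp(lam_i*dW_i - 0.5*lam_i^2*dt), where
    lam_t = 0.5 + 0.4*sin(W_t) is bounded, so a Novikov/boundedness
    condition holds trivially.  Because lam is evaluated at the left
    endpoint (predictably), the Monte Carlo average should be close to 1."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        w, log_z = 0.0, 0.0
        for _ in range(n_steps):
            lam = 0.5 + 0.4 * math.sin(w)   # predictable: uses W before the step
            dw = rng.gauss(0.0, math.sqrt(dt))
            log_z += lam * dw - 0.5 * lam * lam * dt
            w += dw
        total += math.exp(log_z)
    return total / n_paths

print(mean_doleans_exponential())  # close to 1.0
```

The point of the predictable ("characteristics-based") conditions is precisely that such boundedness can be checked pathwise, before any expectation is taken.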
Let us now briefly turn to the subclass of semimartingales with independent increments (SII processes), for which the situation is slightly different than in the more general case. For exponential SII processes the local martingale property is equivalent to the true martingale property. From a mathematical finance perspective this interesting fact implies, for instance, that exponential SII models cannot include bubbles, which are modeled as strict local martingales. Let us formalize this observation and add some simple deterministic conditions for the martingale property. The main implication (ii) ⇒ (i) is essentially due to Kallsen and Muhle-Karbe (2010), Proposition 3.12. Note that the assertion does not require quasi-left continuity.

Proposition 3.7. Let X be an R^d-valued semimartingale with deterministic characteristics (B^X, C^X, ν^X), let λ ∈ L(X) be deterministic, and set M := e^{λ·X − K^X(λ)}. The following are equivalent: (i) M is a martingale. (ii) M is a local martingale.

Proof: The implication (i) ⇒ (ii) is trivial and the equivalence (ii) ⇔ (iii) holds by definition. The equivalences of (iii), (iv) and (v) are due to Remark 2.2 and Jacod and Shiryaev (2003), Proposition IX.5.3, which shows that λ · X has deterministic characteristics. It is left to show the implication (iii) ⇒ (i). In view of (2.6) and since λ · X has deterministic characteristics, K^X(λ) is a deterministic process of finite variation and hence also has deterministic characteristics. Define f(x, y) := x − y; then Y := λ·X − K^X(λ) = f(λ·X, K^X(λ)). It follows from Goll and Kallsen (2000), Corollary 5.6, applied to f(λ·X, K^X(λ)), that Y also has deterministic characteristics. From the relationship between ordinary and stochastic exponentials given in Jacod and Shiryaev (2003), Theorem II.8.10, we obtain that M = e^Y = E(Ỹ), where Ỹ is a semimartingale with ΔỸ = e^{ΔY} − 1 > −1.
We deduce from Jacod and Shiryaev (2003), Equation II.8.14, that Ỹ inherits the property of deterministic characteristics from Y. Due to Remark 2.2(b), the condition e^{⟨λ,x⟩} 1_{{⟨λ,x⟩ > 1}} * ν ∈ V yields that λ · X is exponentially special. Thus, since K^X(λ) is the exponential compensator of λ · X, cf. Proposition 2.3(i), M is a local martingale. The claim now follows from Proposition 3.12 in Kallsen and Muhle-Karbe (2010).

Applications to Financial Models

In this section we present two applications of the results from Section 3 to financial modeling. A detailed overview concerning applications of general semimartingales in finance is provided, for instance, by the monographs of Shiryaev (1999), Cont and Tankov (2003) and Musiela and Rutkowski (2005).

4.1. Stochastic Volatility Asset Price Model. Here we illustrate how the conditions of Section 3 can be used to facilitate pricing in arbitrage-free models driven by semimartingales. Let (Ω, F, (F_t)_{0≤t≤T}, P) be a stochastic basis, where T > 0 denotes a finite time horizon. We model the asset price S and a bank account B with stochastic interest rate r by

S := S_0 e^{σ_S · X_S − V}, B := e^{σ_r · X_r}, (4.1)

with S_0 > 0, a d-dimensional semimartingale X := (X_S, X_r) with X_S d_1-dimensional and X_r d_2-dimensional such that d_1 + d_2 = d, and a d-dimensional predictable process σ := (σ_S, −σ_r) with σ_S ∈ L(X_S) and σ_r ∈ L(X_r) such that σ · X is exponentially special. We assume that the process V is the exponential compensator of σ_S · X_S − σ_r · X_r, cf. Proposition 2.3. Thanks to this assumption, the discounted stock price S̃ := B^{−1} S is a local martingale; in other words, P is a risk-neutral probability measure. According to the fundamental theorem of asset pricing for general semimartingales in Delbaen and Schachermayer (1998), No Free Lunch With Vanishing Risk (NFLVR) holds in this case. Note that in general the risk-neutral probability measure may not be unique and the model is incomplete.
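The gap between "local martingale" and "true martingale" is not a technicality. The classical textbook example of a positive strict local martingale (a standard illustration, not this paper's model) is the reciprocal of a 3-dimensional Bessel process, Z_t = 1/|B_t| with B a 3-dimensional Brownian motion started at (1, 0, 0): Z is a positive local martingale, yet E[Z_1] = 2Φ(1) − 1 ≈ 0.683 < 1 = Z_0, which is exactly the signature of a price bubble. A quick Monte Carlo confirmation:

```python
import math
import random

def mean_inverse_bessel(t=1.0, n_paths=200_000, seed=3):
    """E[1/|B_t|] for 3-dim Brownian motion started at (1, 0, 0).

    1/|B| is a strict local martingale: its expectation drops below the
    initial value 1/|B_0| = 1 (closed form: 2*Phi(1/sqrt(t)) - 1)."""
    rng = random.Random(seed)
    sd = math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        x = 1.0 + rng.gauss(0.0, sd)
        y = rng.gauss(0.0, sd)
        z = rng.gauss(0.0, sd)
        total += 1.0 / math.sqrt(x * x + y * y + z * z)
    return total / n_paths

print(mean_inverse_bessel())  # close to 0.683, strictly below Z_0 = 1
```

This is the kind of degenerate behaviour that the predictable conditions of Section 3 are designed to rule out for the discounted price S̃.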
Let us now consider a European call option with strike K > 0 and payoff (S_T − K)^+ at maturity T > 0. Its fundamental price under P, denoted by C*_t for any t ∈ [0, T], is given by

C*_t := B_t E_P[B_T^{−1}(S_T − K)^+ | F_t], (4.2)

which is well-defined and finite a.s. The inequality E[S̃_T | F_t] ≤ S̃_t is a consequence of S̃ being a positive local martingale and hence a supermartingale. The price C* is an arbitrage-free price which, even in the case of a complete model, might be non-unique. This subtle issue is closely related to financial bubbles; see for example Definition 3.6 in Jarrow et al. (2010) and Definition 2.10 in Biagini et al. (2014). For a detailed mathematical treatment we refer to Protter (2013). When S̃ is a true martingale, these delicate issues do not appear: the asset price has no bubble and the market prices coincide with the fundamental prices; this was proved, for example, in the setting of Jarrow et al. (2010). Thus, our Section 3 provides convenient conditions to exclude ambiguities in the pricing due to the possible presence of bubbles. To illustrate a further benefit of the explicit martingale conditions from Section 3, we use the true martingale property of the discounted asset price to perform a change of numeraire which reduces the complexity of the pricing problem at hand. Considering for example a call option as above, in order to compute the expectation in (4.2) directly, information on the joint distribution of S and B is required. Here the true martingale property of S̃ allows us to facilitate the computation of the expectation by a change of numeraire. More precisely, we can express the call price as a conditional expectation of a function of the asset value S_T solely. Defining a probability measure P̃ via dP̃/dP|_{F_t} := S̃_0^{−1} S̃_t for 0 ≤ t ≤ T, and denoting by E_P̃ the expectation under P̃, Bayes' formula yields

C_t = S_t E_P̃[(1 − K S_T^{−1})^+ | F_t]. (4.3)

Compared with the original pricing formula C_t = B_t E_P[B_T^{−1}(S_T − K)^+ | F_t], the random variable B_T does not appear in the conditional expectation in (4.3).
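In the Black-Scholes special case (constant r and σ, chosen here only as a sketch to illustrate the change of numeraire in (4.3), not the paper's general semimartingale model), the two pricing formulas can be compared directly by Monte Carlo: under P the log-price drift is r − σ²/2, while under the share measure P̃ it becomes r + σ²/2, and no discount factor appears inside the P̃-expectation.

```python
import math
import random

def call_price_two_ways(s0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                        n_paths=200_000, seed=11):
    """Estimate the call price both as C_0 = E_P[exp(-rT)(S_T-K)^+] and,
    via the change of numeraire, as C_0 = s0 * E_Ptilde[(1 - K/S_T)^+],
    where the log-price drift under Ptilde is (r + sigma^2/2).
    Both estimates should agree (with the Black-Scholes value)."""
    rng = random.Random(seed)
    sq = sigma * math.sqrt(T)
    sum_p, sum_share = 0.0, 0.0
    for _ in range(n_paths):
        g = rng.gauss(0.0, 1.0)
        st_p = s0 * math.exp((r - 0.5 * sigma**2) * T + sq * g)    # under P
        st_sh = s0 * math.exp((r + 0.5 * sigma**2) * T + sq * g)   # under Ptilde
        sum_p += math.exp(-r * T) * max(st_p - K, 0.0)
        sum_share += s0 * max(1.0 - K / st_sh, 0.0)
    return sum_p / n_paths, sum_share / n_paths
```

With the parameters above, both estimators converge to the Black-Scholes price of about 10.45; the second one never touches the bank account B inside the expectation, which is the computational benefit the text describes.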
This typically facilitates the computation, since the semimartingale characteristics of S are known under the new probability measure. By combining Corollary 3.3 and Proposition 3.5 we obtain the following characterization of the true martingale property for the semimartingale asset price model defined above.

Corollary 4.1. Assume that X is quasi-left continuous and denote its local characteristics by (b, c, F; A). If (b, c, F; A) and σ satisfy condition (B1) of Corollary 3.3, resp. condition (C1) of Proposition 3.5, then the discounted asset price process S̃ is a true martingale.

Under the conditions of Corollary 4.1, the fair price at time t of the call option with maturity T and strike K is therefore given by C_t in (4.3).

4.2. Semimartingale Libor model. In this subsection we apply the results from Section 3 to Libor models. These are models for discretely compounded forward interest rates known as Libor rates, where the term Libor stems from the London Interbank Offered Rate. Libor models were introduced in Brace et al. (1997) and Miltersen et al. (1997) and were later further developed and studied by many authors. We refer to Musiela and Rutkowski (2005), Section 12.4, for a detailed overview. The challenge in modeling Libor rates is to simultaneously define the rates for different maturities as local martingales under different equivalent measures, which ensures the absence of arbitrage. These measures are in fact forward measures, and they are interconnected via the Libor rates themselves. A convenient way to obtain such a model is the backward construction, following the pioneering work of Musiela and Rutkowski (1997). This construction relies on the martingale property of the Libor rates (under the corresponding forward measures), which allows one to define the changes of measure. In the backward construction the Libor rates thus have to be not only local, but true martingales under their corresponding forward measures.
When the model is driven by a continuous semimartingale this is standard, using Novikov-type conditions, but verifying that the Libor rates are true martingales in general semimartingale models including jumps is more involved and has not been properly addressed in the financial literature. Using the explicit conditions from Section 3, we study this issue in detail below and close this gap. Let us begin by describing a general semimartingale Libor model. Assume that T* > 0 is a fixed finite time horizon and that we are given a pre-determined collection of maturities 0 = T_0 < T_1 < . . . < T_n = T*, with δ_k := T_{k+1} − T_k for k = 0, . . . , n − 1. Moreover, let (Ω, F_{T*}, (F_t)_{0≤t≤T*}, P_{T*}) be a stochastic basis. A general semimartingale Libor model consists of a family of semimartingales (L(·, T_k))_{1≤k≤n−1} modeling the Libor rates for the lending periods ([T_k, T_{k+1}])_{1≤k≤n−1} and a family of probability measures (P_{T_k})_{1≤k≤n}, where L(·, T_k) and P_{T_k} are defined on (Ω, F_{T_k}, (F_t)_{0≤t≤T_k}) and P_{T_n} = P_{T*}, such that (SML1) L(·, T_k) is a P_{T_{k+1}}-martingale for all k = 1, . . . , n − 1. For each k, the probability measure P_{T_k} is called the forward Libor measure for maturity T_k, cf. Musiela and Rutkowski (2005), Definition 12.4.1. The measure P_{T_k} is in fact the forward martingale measure associated with maturity T_k, and the density process above is a forward price process. This can be seen from the link between forward Libor rates and zero-coupon bond prices; see Musiela and Rutkowski (2005), Sections 12.1.1 and 12.4.4. Below we present the main ideas of the backward construction of the Libor model in a semimartingale framework. We start by modeling the Libor rate with the most distant maturity under a given probability measure and then proceed backwards. In each step we define the next forward measure via a density process based on the previously modeled Libor rates, and we model the next Libor rate under this measure.
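The static bookkeeping behind this construction is the standard relation between the initial Libor curve and zero-coupon bond prices, 1 + δ_k L(0, T_k) = B(0, T_k)/B(0, T_{k+1}), which also anchors the forward measures. A minimal sketch (with made-up example rates, not data from the paper) of bootstrapping the bond curve from the initial Libor rates:

```python
def bonds_from_libors(libors, deltas, b0=1.0):
    """Bootstrap zero-coupon bond prices B(0, T_k) from initial Libor rates
    L(0, T_k) for accrual periods [T_k, T_{k+1}] of lengths delta_k,
    using 1 + delta_k * L(0, T_k) = B(0, T_k) / B(0, T_{k+1})."""
    bonds = [b0]  # B(0, T_0) with T_0 = 0
    for L, d in zip(libors, deltas):
        bonds.append(bonds[-1] / (1.0 + d * L))
    return bonds

# Hypothetical flat 3% semi-annual curve over two years:
print(bonds_from_libors([0.03, 0.03, 0.03, 0.03], [0.5, 0.5, 0.5, 0.5]))
# a strictly decreasing sequence of discount factors starting at 1.0
```

The dynamic part of the backward construction then makes each L(·, T_k) a martingale under its own forward measure P_{T_{k+1}}, which is exactly the property Proposition 4.2 below establishes.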
Now we recursively model the Libor rates L(·, T_k) for k = n − 2, . . . , 1 by

L(t, T_k) := L(0, T_k) e^{(λ(·,T_k) · X)_t − K^X(P_{T_{k+1}}, λ(·,T_k))_t}, t ≤ T_k, (4.7)

where L(0, T_k) > 0 and λ(·, T_k) ∈ L(X) is a volatility process such that λ(·, T_k) · X is P_{T_{k+1}}-exponentially special with P_{T_{k+1}}-exponential compensator K^X(P_{T_{k+1}}, λ(·, T_k)). As above, this means that L(·, T_k) is a P_{T_{k+1}}-local martingale. Note that the Libor rate for the interval starting at T_0 = 0 and ending at T_1 is simply the given spot Libor rate L(0, T_0) > 0. The probability measure P_{T_k} is defined on (Ω, F_{T_k}) by the Radon-Nikodym derivative in (4.8), where it has to be assumed that L(·, T_k) is a true P_{T_{k+1}}-martingale. We then obtain the density process in (4.9). Furthermore, the probability measure P_{T_{k+1}} is related to P_{T*} via (4.10). Note that the construction is well-defined if the Libor rates L(·, T_k) are P_{T_{k+1}}-martingales for all k = 1, . . . , n − 1. To justify the backward construction (4.4)-(4.10) of the measures (P_{T_k})_{1≤k≤n−1}, we prove the required martingale property of the Libor rates in the proposition below.

Proposition 4.2. Let X in equation (4.7) be an R^d-valued quasi-left continuous semimartingale with differential characteristics (b^{T*}, c, F^{T*}; A) with respect to P_{T*}, and let λ(·, T_k) ∈ L(X) be non-negative. Assume (SL): for all i = 1, . . . , n − 1 there exists a non-negative constant κ such that the corresponding bound on the characteristics holds a.s. Then the Libor rates L(·, T_k), k = 1, . . . , n − 1, are true martingales under the respective forward measures.

Proof: For k = n − 1, the assertion follows directly from assumption (SL) and Proposition 3.5.

Let us link our discussion to the Lévy Libor model of Eberlein and Özkan (2005), in which the driving process X is assumed to be an R^d-valued PIIAC with differential characteristics (0, c, F^{T*}) under P_{T*}. Eberlein and Özkan impose the following assumptions: for some M, ε > 0 and every k = 1, . . . , n − 1, λ(·, T_k) : [0, T*] → R^d_+ is a bounded, non-negative function such that λ(t, T_k) = 0 for t > T_k and ∑_{k=1}^{n−1} λ^j(t, T_k) ≤ M for all t ∈ [0, T*] and every coordinate j ∈ {1, . . . , d}; and (L3) λ(·, T_k) : [0, T*] → R^d_+ is deterministic. Let us point out that even when the driving process has deterministic characteristics under P_{T*} and λ is deterministic (as in the case above), the characteristics of X under P_{T_k} for k = 1, . . . , n − 1 are stochastic. We obtain the following sufficient conditions for the Lévy Libor model, where we also allow λ to be stochastic.

Corollary 4.3. Assume that ∑_{j=1}^n |λ(·, T_j)| ≤ N for a non-negative constant N, and that there exists a non-negative constant κ such that the corresponding integrability bound holds. Then for each k = 1, . . . , n − 1 the process L(·, T_k) defined in (4.7) is a martingale with respect to P_{T_{k+1}} given by (4.10).

Proof: It suffices to show that (SL) is satisfied. Note that we can find a non-negative constant K* such that for any i = 1, . . . , n − 1 and all x ∈ R^d with |x| ≤ 1,

(1 − e^{⟨λ(t,T_i),x⟩})² e^{∑_{k=i+1}^{n−1} ⟨λ(t,T_k),x⟩} ≤ K* |x|².

Next we bound the large jumps. Using the fact that (1 − √x)² ≤ 1 + x for x > 0 and some non-negative constant K, together with the Cauchy-Schwarz inequality, we obtain a bound in terms of ∫ e^{N|x|} F^{T*}_t(dx) dt. Finally, since the diffusion term is controlled by ‖c‖, where ‖·‖ denotes the operator norm of c, we conclude that (SL) holds. This concludes the proof.

As mentioned in the introduction of this section, the martingale property of the Libor rates under their corresponding measures is crucial for the validity of the backward construction of Libor models. Therefore, Proposition 4.2 and Corollary 4.3 provide a theoretical justification of the construction of the Lévy Libor model by Eberlein and Özkan (2005), and more generally of Libor models driven by quasi-left continuous semimartingales.
Synthesis of Hydantoin Androgen Receptor Antagonists and Study on Their Antagonistic Activity

Hydroxymethylthiohydantoin, hydroxymethylhydantoin and hydantoin derivatives containing a pyridine group were synthesized to study their androgen receptor antagonistic activities. Among them, compounds 6a/6c/7g/19a/19b exhibited excellent androgen receptor antagonistic activity, comparable to or even superior to that of enzalutamide. In addition, compounds 19a and 19b exhibited better antiproliferative activity than enzalutamide in prostate cancer cells. The results show that compound 19a has great potential as a new AR antagonist.

Introduction

Prostate cancer, fueled by the androgen axis, is a major public health problem and a leading cause of cancer death among men worldwide [1]. Although androgen deprivation therapy (ADT) has proved to be effective initially, the tumor eventually progresses and develops into lethal castration-resistant prostate cancer (CRPC) [2]. Most often, death occurs 2 to 4 years after the onset of the castration-resistant state. Notably, over-expression of the AR has been found in most CRPCs and is essential for CRPC to adapt to low levels of androgen. As AR activation plays a crucial role in the progression of CRPC, the receptor has been recognized as an attractive target for the treatment of CRPC [3]. AR antagonists are currently the mainstay of treatment for prostate cancer [4]. Bicalutamide, enzalutamide and apalutamide are marketed as nonsteroidal antiandrogens [5][6][7]. As a first-generation nonsteroidal AR antagonist, bicalutamide diminishes androgenic effects by competitively inhibiting androgen-AR binding. However, due to the occurrence of LBD point mutations and the expression of active AR splice variants, these antiandrogens may become partial AR agonists after a period of treatment (~2 years) [5] (Figure 1).
Enzalutamide and apalutamide (ARN-509) are second-generation nonsteroidal antiandrogens with high-affinity binding to the AR LBD. In particular, enzalutamide received FDA approval in 2012 for the treatment of patients with metastatic castration-resistant PC who have previously received docetaxel [8,9]. However, an F876L missense mutation in the AR LBD has been shown to confer resistance to enzalutamide and apalutamide (ARN-509) by switching their activity on the AR from antagonist to agonist [10,11] (Figure 1). Enzalutamide and apalutamide (ARN-509) are very similar to each other in structure and function. Both agents are potent AR antagonists with a high affinity for the AR: they bind to the AR and inhibit androgen-mediated gene transcription in AR-overexpressing prostate cancer cells, and they also impair the nuclear localization of the AR.

In this work, enzalutamide was used as the lead compound for structural modification. The structural transformation is as follows: (1) the aromatic ring on one side of the hydroxymethylthiohydantoin was transformed to contain both p-CN and m-CF3 groups, and an active derivatized group of enzalutamide was connected to the aromatic ring on the other side, which has been proven to have good properties [14][15][16]; (2) the sulfur atom in the hydroxymethylthiohydantoin was replaced by an oxygen atom, which was shown to have unexpected effects in our previous work [17]; (3) the aromatic ring on the right side of the hydroxymethylthiohydantoin ring was replaced by a pyridine ring, which also contains both p-CN and m-SF5. A total of 30 compounds with CF3 groups were designed and synthesized, and their in vitro activities were tested (Figure 2).

Chemistry

The design of the novel antiandrogen compounds was performed to explore several different chemical modifications around the thiohydantoin and hydantoin scaffolds, as depicted in Schemes 1-3. The various compounds (±)-6a-k and (±)-7a-j were synthesized as shown in Scheme 1 by deprotonation-hydroxyalkylation of the carbon of the thiohydantoin ring and conversion of the thiohydantoin to the corresponding hydantoin. The alkylation of the commercially available corresponding anilines 1a-k with 2-bromopropionic acid was carried out to obtain the corresponding intermediates.

We examined the AR antagonist activity of the representative compounds 6a-l, 7a-j, 9, 10, 11a-b and 19a-d with a luciferase gene reporter assay in mouse myoblast CV-1 cells. As shown in Table 1, the series of hydroxymethylthiohydantoin-like SARM compounds exhibited different degrees of AR antagonistic activity. Fortunately, compounds 6a and 6c are comparable to enzalutamide in both antagonistic activity and efficacy (IC50 = 46.8, 46.2 and 42.82 nM, respectively). What's more, when the aromatic ring contains a methyl or methoxy group, as in 6b and 6f, the antagonistic activity is significantly decreased (IC50 = 548.5 and 286.2 nM), which indicates that an electron-donating group on the aromatic ring has a negative effect on the antagonistic activity of the compound. A slight decrease in antagonistic activity was also observed when either chlorine or bromine was attached to the aromatic ring (6d, 6e; IC50 = 79.27 and 116.6 nM). When electron-withdrawing groups such as cyano, nitro and ester groups are linked to the aromatic ring, they do not show a positive effect on the antagonistic activity of the compound (6j, 6h, 6j; IC50 = 338.4, 79,616 and 531.9 nM). However, the trifluoromethyl-group-containing 6i (IC50 = 68.15 nM) retained the same order of magnitude of antagonistic activity as enzalutamide. No increase in antagonistic activity was observed for 6k, which contains both fluorine and acyl groups on the aromatic ring, nor for 6l, which contains a heterocycloalkoxy group on the aromatic ring (6k, 6l; IC50 = 125.75 and 337.35 nM). Compounds 9 and 11a, containing acyl or hydroxyl groups on the aromatic ring, although inferior to enzalutamide, still show a certain antagonistic activity (9, 11a; IC50 = 938.8 and 3641 nM). Data presented are the means ± SD of three independent experiments.

As shown in Table 2, after replacing the sulfur atoms on the thiohydantoin with oxygen atoms, although no derivatives were found to be more active than the positive compounds, more compounds showed the same magnitude of antagonistic activity as enzalutamide.
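IC50 values like those quoted from the reporter assay are read off dose-response curves. As a purely hypothetical illustration of how such a number is extracted (synthetic data and a simple log-linear interpolation at the 50% crossing, not the authors' actual fitting procedure):

```python
import math

def ic50_from_curve(concs_nM, responses):
    """Estimate the IC50 as the concentration where the response crosses 50%,
    by linear interpolation on log10(concentration).
    `responses` are fractional activities (1.0 = uninhibited control)."""
    for i in range(len(concs_nM) - 1):
        c1, c2 = concs_nM[i], concs_nM[i + 1]
        r1, r2 = responses[i], responses[i + 1]
        if r1 >= 0.5 >= r2:
            frac = (r1 - 0.5) / (r1 - r2)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("response never crosses 50%")

# Synthetic 10-fold dilution series whose activity halves near ~50 nM:
concs = [1, 10, 100, 1000]        # nM
resp = [0.97, 0.80, 0.35, 0.05]   # fraction of control luciferase signal
print(round(ic50_from_curve(concs, resp)))  # prints 46 (nM)
```

In practice a four-parameter logistic (Hill) fit over the full curve is the usual choice; the interpolation above is only a minimal sketch of the idea.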
To improve affinity and activity, the methylene group (CH) at the ortho position of the aryl nitrile was replaced with a nitrogen (N) atom, activating the cyano group on the aryl group to form a reversible covalent bond with the endogenous cysteine (Cys784) within the AR ligand-binding pocket. The results in Table 3 show that this strategy was effective. Compound 19a exhibits excellent antagonistic activity (19a, IC 50 = 18.4 nM), and the antagonistic activity of compound 19b is almost the same as that of enzalutamide (19b, IC 50 = 32.45 nM). However, the antagonistic activity of 19c is inferior to that of 6k (Table 1), which is exactly the opposite of that of 19d, which better than 6l (Table 1). Anti-Proliferative Activity of Compounds 6a, 6c, 19a and 19b in Prostate Cancer Cell Lines We performed experiments accordingly to detect the anti-proliferative effect of compounds using the prostate cancer LNCaP cell line. The measurement of proliferative activity was performed after treatment with a concentration range of each compound for 72 h. As can be seen from Table 4, all four compounds exhibited good antiproliferative activity, Molecules 2022, 27, 5867 6 of 22 especially 19b; its inhibitory effect on the proliferation of LNCAP cell line is even better than that of enzalutamide. Data presented are the means ±SD of three independent experiments. To improve affinity and activity, the methylene group (CH) at the ortho position of the aryl nitrile was replaced with a nitrogen (N) atom, activating the cyano group on the aryl group to form a reversible covalent bond with the endogenous cysteine (Cys784) within the AR ligand-binding pocket. The results in Table 3 show that this strategy was effective. Compound 19a exhibits excellent antagonistic activity (19a, IC50 = 18.4 nM), and the antagonistic activity of compound 19b is almost the same as that of enzalutamide (19b, IC50 = 32.45 nM). 
Data presented are the means ± SD of three independent experiments. In addition, by using Schrödinger to study the binding mode of compound 19b to AR (PDB ID: 2OZ7), it was found that the binding posture of 19b was very similar to that of enzalutamide. The critical interactions of 19b with the key residues are shown in Figure 3a.
The cyano group of 19b forms an important hydrogen bond with Gly708, Gln711, and Trp741 via a water molecule, similar to enzalutamide and apalutamide (Figure 3b,c). There is a π-π interaction between the pyridine ring and Phe764, which is similar to apalutamide (Figure 3b). Enzalutamide and apalutamide can form a hydrogen bond with Arg779 due to the presence of an acyl group; the absence of an acyl group in 19b prevents the formation of this hydrogen bond. However, the extra hydroxyl group in 19b can form a hydrogen bond with Asn705 in the AR, which may be the reason for the high potency of 19b toward AR. Data were normalized and plotted relative to DMSO-control treated (no compound) cells and expressed as the means ± SD. Conclusions In this work, a series of hydroxymethylthiohydantoins, hydroxymethylhydantoins and pyridine SARMs were synthesized, and the remarkable AR antagonistic activities of these compounds were revealed by in vitro cell experiments.
Among them, compounds 6a, 6c, 6d, 6i, 7a-e, 19a-b, etc., exhibited the same magnitude of antagonistic activity as enzalutamide. In addition, compounds 6a, 6c, 19a and 19b exhibited good anti-proliferative activity. The inhibitory effect of 19b on the proliferation of the LNCaP cell line is even better than that of enzalutamide. Materials and Methods All reagents are commercially available and were used without further purification. The solvents used were of analytical grade. Melting points were taken on a Fisher-Johns melting point apparatus, are uncorrected, and are reported in degrees Celsius. 1H NMR and 13C NMR spectra were recorded on a Bruker DRX-400 (400 MHz) using tetramethylsilane (TMS) as an internal standard and using one or two of the following solvents, DMSO-d6 and CDCl3. General Procedure for the Synthesis of 3a-l In a 100 mL round-bottomed flask, we added amines (1.0 eq), 2-bromopropanoic acid (1.5 eq) and TEA (3.0 eq) in 150 mL DCM to give a colorless suspension. The reaction mixture was stirred at room temperature for 3 days. The mixture was concentrated by rotary evaporation. One hundred milliliters of water was added, and 60 mL of 2 M HCl (aq) was added to adjust the pH to 5. The aqueous layer was extracted with EA. The organic layer was dried over Na2SO4, filtered, and concentrated to give the crude product. Then, 20 mL DCM and 50 mL Et2O were added.
The reaction mixture was filtered through a sintered glass funnel and washed with 50 mL Et2O to give 3a-l. General Procedure for the Synthesis of 5a-l and 18a-d In a 100 mL round-bottomed flask, we added 4 or 17 (1.0 eq) and TEA (1.5 eq) in 30 mL CHCl3 to give a yellow solution. The reaction vessel was purged with nitrogen. The reaction was heated to 65 °C and stirred for 1 h. The reaction mixture was cooled to 25 °C with stirring. Then, 3a-l (1.0 eq) was added. The reaction was heated to 65 °C and stirred for 16 h. The mixture was concentrated by rotary evaporation. The crude product was purified by column chromatography to give 5a-l and 18a-d. General Procedure for the Synthesis of 6a-l and 19a-d In a 50 mL round-bottomed flask, we added compound 5a-l or 18a-d (1.0 eq) in 10 mL THF to give a colorless solution. The reaction vessel was purged with nitrogen. The reaction mixture was cooled to −78 °C with stirring. LiHMDS (1.3 eq) was added. The reaction mixture was held at −78 °C and stirred for 10 min. Formaldehyde (3.0 eq) was added. The reaction mixture was warmed to rt and stirred for 30 min. Then, 10 mL sat. NH4Cl (aq) was added. The aqueous layer was extracted with EA. We combined the organic layers and washed them with brine. The organic layer was dried over Na2SO4, filtered, and concentrated to give the crude product. The crude product was purified by column chromatography to give 6a-l and 19a-d. 13C NMR: δ 180.36, 173.77, 153.28, 135.28, 135.25, 134.98, 133.12, 129.56, 129.52, 129.38, 128.85, 128.67, 128.34, 114.21, 71.95, 63.60. 13C NMR: δ 180.68, 173.71, 163.35, 160.90, 153.23, 135.23, 135.19, 133.06, 131.87, 131.77, 131.18, 131.15, 129.03, 128.91, 122.83, 120.11. General Procedure for the Synthesis of 7a-j and 11 In a 25 mL round-bottomed flask, we added 6a-j and 9 (1.0 eq) in 1 mL CCl4 and 1 mL MeCN to give a colorless solution. The reaction mixture was cooled to 0 °C with stirring. NaIO4 (4.0 eq) in 2 mL water was added. Ruthenium(III) chloride (0.05 eq) was added.
The reaction mixture was held at rt and stirred for 2 h. Then, 2 mL NaHCO3 (aq) was added. The aqueous layer was extracted with DCM. We combined the organic layers and washed them with brine. The organic layer was dried over Na2SO4, filtered, and concentrated to give a crude product. The crude product was purified by column chromatography to give 7a-j and 11. General Procedure for the Synthesis of 11a-b In a 10 mL round-bottomed flask, we added compound 6f or 7f (1.0 eq) in 5 mL DCM to give a colorless solution. BBr3 (5.0 eq) was added. The reaction mixture was held at rt and stirred for 2 h. Then, 5 mL NaHCO3 (aq) was added. The aqueous layer was extracted with DCM. We combined the organic layers and washed them with brine. The organic layer was dried over Na2SO4, filtered, and concentrated to obtain a crude product. The crude product was purified by column chromatography to give 11a-b. The reaction mixture was held at room temperature and stirred for 6 h. The mixture was poured into 500 mL of ice-water and stirred for 2 h. The reaction mixture was filtered through a Buchner funnel and washed with water to give 5-nitro-3-(trifluoromethyl)pyridin-2-ol (13). Synthesis of 2-bromo-5-nitro-3-(trifluoromethyl)pyridine (14): In a 100 mL round-bottomed flask, we added 5-nitro-3-(trifluoromethyl)pyridin-2-ol (13) (6 g, 28.8 mmol, 1.0 eq), phosphorus oxybromide (24.80 g, 86 mmol, 3.0 eq) and DMF (0.335 mL, 4.32 mmol, 0.15 eq) to give a yellow suspension. The reaction was heated to 110 °C and stirred for 3 h. The reaction mixture was added portionwise to 200 g of ice, and the pH was then adjusted to 7. The aqueous layer was extracted with EA. We combined the organic layers and washed them with water and brine. The organic layer was dried over Na2SO4, filtered, and concentrated to give a crude product.
The crude product was purified by column chromatography to give 2-bromo-5-nitro-3-(trifluoromethyl)pyridine (14). Synthesis of 6-bromo-5-(trifluoromethyl)pyridin-3-amine (15): In a 250 mL round-bottomed flask, we added 2-bromo-5-nitro-3-(trifluoromethyl)pyridine (14) (7.5 g, 27.7 mmol, 1.0 eq), iron (5.41 g, 97 mmol, 3.5 eq), and HOAc (23.77 mL, 415 mmol, 15 eq) in 25 mL ethyl acetate to give a black suspension. The reaction was heated to 65 °C and stirred for 2 h. Saturated Na2CO3 (aq) was added to adjust the pH to 10. The reaction mixture was filtered through a Buchner funnel. The aqueous layer was extracted with EA. We combined the organic layers and washed them with water and brine. The organic layer was dried over Na2SO4, filtered, and concentrated to give 6-bromo-5-(trifluoromethyl)pyridin-3-amine (15) (6.1 g, 91% yield) as a yellow solid. The product was used in the next step without further purification. 1H NMR (400 MHz, CDCl3) δ 7.97 (s, 1H), 7.26 (s, 1H), 3.81 (s, 2H). Synthesis of 5-amino-3-(trifluoromethyl)picolinonitrile (16): In a 250 mL round-bottomed flask, we added 6-bromo-5-(trifluoromethyl)pyridin-3-amine (15) (6 g, 24.90 mmol, 1.0 eq) and copper(I) cyanide (2.56 g, 28.6 mmol, 1.2 eq) in 60 mL NMP to give a black suspension. The reaction vessel was purged with nitrogen. The reaction mixture was heated to 160 °C and stirred for 2 h. Then, 200 mL 25% EDA (aq) was added. The aqueous layer was extracted with EA. We combined the organic layers and washed them with 25% EDA (aq) and brine. The organic layer was dried over Na2SO4, filtered, and concentrated to give a crude product (7 g). The crude product was purified by column chromatography to give 5-amino-3-(trifluoromethyl)picolinonitrile (16). Synthesis of 5-isothiocyanato-3-(trifluoromethyl)picolinonitrile (17): In a 100 mL round-bottomed flask, we added 5-amino-3-(trifluoromethyl)picolinonitrile (16) (1.0 g, 5.34 mmol, 1.0 eq) in 20 mL DCM and 10 mL water to give an orange solution. Thiophosgene (0.451 mL, 5.88 mmol, 1.1 eq) was added. The reaction mixture was held at rt and stirred for 16 h. The aqueous layer was extracted with DCM. We combined the organic layers and washed them with water and brine.
The organic layer was dried over Na2SO4, filtered, and concentrated to give a crude product. The crude product was purified by column chromatography to give 5-isothiocyanato-3-(trifluoromethyl)picolinonitrile (17). C2C12 cells were provided and certified by the Cell Bank at Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences and confirmed as being negative for mycoplasma contamination. C2C12 cells were cultured in phenol red-free HG-DMEM medium (Life Technologies, Carlsbad, CA, USA) supplemented with 10% (v/v) fetal bovine serum, 50 IU/mL penicillin, 50 µg/mL streptomycin and 1% sodium pyruvate. Cells were maintained at 37 °C in a 5% CO2 incubator and seeded onto 10 cm cell culture dishes before transfection. After overnight culture, when cells had reached 80-90% confluence, they were transiently co-transfected with an AR-expressing plasmid (pSVAR0) and a luciferase reporter gene vector (MMTV-Luc) using FuGENE® HD Transfection Reagent (Promega, Madison, WI, USA). After 18 h, the transfected cells were distributed to 384-well plates at a density of 15,000 cells per well and incubated for a further 6 h at 37 °C before compound treatment. Antagonist Assay Enzalutamide was used as a positive control. The antagonist activity of the testing compounds was measured with the Steadylite Plus Reporter Gene Assay System (PerkinElmer, Boston, MA, USA) according to the manufacturer's instructions. In brief, after incubation for 6 h as mentioned above, 5 µL of testing compounds diluted in culture medium at eight different working concentrations (384 pM to 30 µM; enzalutamide: 128 pM to 10 µM) was added to each well, followed by 5 µL DHT (final concentration at EC80). After 24 h incubation, Steadylite reagent (50 µL, equal volume) was introduced, gently shaken for 2 min and kept at room temperature for 15 min before luminescence measurement on an EnSpire multilabel plate reader (PerkinElmer). Cytotoxicity The CellTiter-Glo® 2.0 Assay (Promega) was applied to assess cytotoxicity.
In brief, cells were seeded onto 384-well plates at a density of 1500 cells per well and incubated for 24 h. Ten microliters of testing compounds diluted in culture medium was added and reacted for 24 h. CellTiter-Glo reagent was then introduced and the luminescence was measured as above. Anti-Proliferative Activity Assay LNCaP cells were cultured in RPMI-1640 medium (Life Technologies, Carlsbad, CA, USA) supplemented with 10% (v/v) fetal bovine serum, 1% sodium pyruvate and 1% L-glutamine. The cells were seeded into 96-well plates at a density of 8 × 10^4 cells per well. After 24 h incubation, the culture medium was removed and the cells were cultivated in medium supplemented with 2% (v/v) fetal bovine serum for 2 days and then treated with different concentrations of test compounds for 3 days. Cell viability was measured with the CCK-8 kit (Dojindo). Molecular Docking All docking procedures were completed with programs implemented in the Schrödinger Suite. The crystal structure of the androgen receptor (PDB: 2OZ7) was prepared in the Protein Preparation Wizard with all water molecules deleted and bond orders assigned. The 2D structures of 19b, enzalutamide and apalutamide were subjected to LigPrep, and possible tautomeric states at pH 7.0 ± 2.0 were generated using Epik. Induced-fit docking was performed using the Induced Fit program and briefly consisted of several steps: first, the initial docking was performed using Glide; then, sampling and minimization of side-chain residues within 5 Å of the docked ligand were carried out using Prime, followed by re-docking using Glide. In the initial docking, the receptor and ligand van der Waals radii were both scaled by 0.50 to soften the potentials. Finally, the binding energy for each output pose was estimated as the IFDScore.
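The eight working concentrations used in the antagonist assay above (384 pM to 30 µM for the testing compounds, 128 pM to 10 µM for enzalutamide) are consistent with a five-fold serial dilution, since (30 µM / 384 pM)^(1/7) = 5 and (10 µM / 128 pM)^(1/7) = 5. A minimal sketch of that arithmetic:

```python
def dilution_series(top_nM, bottom_nM, points):
    """Concentrations of a geometric (serial) dilution series, highest first.

    Returns the per-step dilution factor and the list of concentrations."""
    step = (top_nM / bottom_nM) ** (1.0 / (points - 1))
    return step, [top_nM / step ** i for i in range(points)]

# Testing compounds: 30 uM down to 384 pM over eight points (values in nM)
step, concs = dilution_series(30000.0, 0.384, 8)
# Enzalutamide: 10 uM down to 128 pM over eight points
step_enz, _ = dilution_series(10000.0, 0.128, 8)
```

Both ranges give a step of 5 (up to floating-point rounding), i.e. a five-fold dilution at each of the seven steps.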
2022-09-15T15:20:44.422Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "99fd420cd2f60a11f296d2e8764914f6f033e88c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/27/18/5867/pdf?version=1663047159", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "92860ba0996262371ccd1b22708e63d38f2ae540", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [] }
203623957
pes2o/s2orc
v3-fos-license
Mutational spectrum of Mexican patients with tyrosinemia type 1: In silico modeling and predicted pathogenic effect of a novel missense FAH variant Abstract Background Tyrosinemia type 1 (HT1, MIM#276700) is caused by a deficiency in fumarylacetoacetate hydrolase (FAH) and is associated with severe liver and renal dysfunction. At present, the mutational FAH (15q25.1, MIM*613871) spectrum underlying HT1 in the Mexican population is unknown. The objective of this study was to determine the FAH genotypes in eight nonrelated Mexican patients with HT1, who were diagnosed clinically. Methods Sequencing of the FAH exons and their exon–intron boundaries and in silico protein modeling based on the crystallographic structure of mouse FAH. Results We identified pathogenic variants in 15/16 studied alleles (93.8%). Nine different variants were found. The most commonly detected HT1‐causing allele was NM_000137.2(FAH):c.3G > A or p.(?) [rs766882348] (25%, n = 4/16). We also identified a novel missense variant NM_000137.2(FAH):c.36C > A or p.(Phe12Leu) in a homozygous patient with an early and fatal acute form. The latter was classified as a likely pathogenic variant, and in silico protein modeling showed that substitution of the Phe‐12 residue for Leu produces a repulsion in all possible Leu rotamers, which in turn would lead to a destabilization of the protein structure and possible loss‐of‐function. Conclusion HT1 patients had a heterogeneous mutational and clinical spectrum and no genotype–phenotype correlation could be established. Despite the fact that no clear correlation between genotype and phenotype has been established for HT1, analyses of the protein structure of pathogenic FAH alleles could facilitate a better understanding of their potential clinical effects. Furthermore, establishing the causative FAH genotype would be useful for accurate genetic counseling (Mayorandan et al., 2014) and prenatal diagnosis in families that require it.
Hence, the aim of the present study was to determine the mutational spectrum of the FAH in Mexican HT1 patients. | Patients In this study, we assessed the FAH genotype of eight (5 male/3 female) nonrelated Mexican HT1 patients, whose biochemical diagnosis was confirmed by quantitation of SA in blood by tandem mass spectrometry. We also analyzed the results for Tyr, phenylalanine (Phe), methionine (Met), alpha-fetoprotein (AFP), and liver function tests. Family history, including consanguinity and siblings with HT1, or suggestive symptoms were recorded. Nutritional management (Tyr and Phe restriction), nitisinone administration (1-2 mg kg−1 day−1), and liver transplant, or their combination, and clinical outcomes were also documented. The patients were classified into one of the following three clinical types proposed by Morrow and Tanguay in 2017, based on the age of symptom onset and clinical manifestations: (a) Acute form (onset before 2 months of age), (b) Subacute (symptoms appearing between 2 and 6 months), and (c) Chronic (symptoms present after 6 months of age). The study was approved by the Bioethics and Research Committees of the National Institute of Pediatrics (017/2011). | FAH genotyping Genomic DNA samples were obtained from dried blood spots using standard methods. Direct and bidirectional automated DNA sequencing was applied to the 14 coding exons of the FAH and their exon-intron boundaries (NG_012833.1 RefSeqGene, NM_000137.2). Details regarding the primers and PCR conditions used are available upon request. All missense FAH variants were assessed with respect to dbSNP (http://www.ncbi.nlm.nih.gov/snp), the Genome Aggregation Database (gnomAD, http://gnomad.broadinstitute.org/), the NHLBI Exome Sequencing Project at the Exome Variant Server (EVS, http://evs.gs.washington.edu/EVS/), and the ClinVar Database (https://www.ncbi.nlm.nih.gov/clinvar/), as well as with reference to the literature.
The novel FAH missense variants identified were assessed according to the pathogenicity/benignity scoring system recommended by the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG-AMP) (Kleinberger, Maloney, Pollin, & Jeng, 2016). | Protein in silico modeling In order to predict the possible deleterious effects of the novel pathogenic variant we identified, we performed in silico modeling based on the crystallographic structure of mouse FAH, which has 89% sequence identity with human FAH at the amino acid level (PDB Code: 1QQJ). In silico mutagenesis was performed for p.(Phe12Leu) using Pymol (PDB code 1QQJ, http://www.rcsb.org/pdb/home/home.do). | Clinical and biochemical phenotypes The main phenotypic characteristics of the eight studied patients are shown in Table 1; only one of these patients was diagnosed through newborn screening (15 days old). This latter patient was treated at an early stage with nitisinone, and has been asymptomatic for 1 year, with normalization of the main biochemical parameters and normal development according to age. The remaining seven patients were diagnosed at ages ranging from 3 to 36 months, with a delay of 2 to 24 months between the initial appearance of symptoms and the definitive diagnosis. Chronic, acute, and subacute HT1 presentation forms were observed in four, two, and one patient(s), respectively. FAH genotypes in five of the patients were homozygous and in two of these cases, parental consanguinity was documented. A compound heterozygous genotype was identified in two patients, whereas in the remaining patient, we identified a monoallelic genotype. This latter female patient had a classical biochemical phenotype characterized by high blood levels of SA (6.57 µM, reference value: <1 µM), Tyr (48 µM, reference value: 22-108 µM), Met (335 µM, reference value: 9-42 µM), and AFP (444,500 ng/ml, reference value: 0.5-5.0 ng/ml) with a subacute presentation.
This patient had previously presented with hepatocellular carcinoma that required a liver transplantation (Table 1). The novel missense variant was present in the homozygous state in patient number 4, with an acute HT1 form and fatal course (Table 1). This variant was predicted to be likely pathogenic (II) according to ACMG-AMP scoring (Kleinberger et al., 2016), as it met the following criteria: (a) its prevalence in affected individuals is significantly increased compared with the prevalence in controls (PS4-strong evidence), in accordance with its absence in the gnomAD and EVS databases (PM2-moderate evidence), (b) multiple lines of computational evidence support a deleterious effect on the gene product (PP3-supporting criteria; Table 2, Figure 2a,b), and (c) the patient's biochemical phenotype and his family history (i.e., parental consanguinity related to an autosomal recessive disorder) are highly specific for HT1 (PP4-supporting criteria). The homozygous p.(Phe12Leu) patient was a 5-month-old boy, for whom clinical data had been recorded from 18 days of age, and who showed progressive abdominal distension, pallor, fever, irritability, and hypotonia. At the age of 2 months, he was hospitalized, during which time hepatosplenomegaly, hyperbilirubinemia, anemia, prolonged clotting times, thrombocytopenia, and hypoglycemia were detected. A metabolic disease was suspected, and the patient was sent to our medical unit with aminoaciduria, glycosuria, and hypophosphatemic rickets (Fanconi syndrome). The clinical and biochemical HT1 diagnosis was established based on high blood Tyr levels (226 µM, reference value: 22-108) and elevated blood SA (2.36 µM, reference value: <1 µM) [Table 1]. Consequently, a low Tyr-Phe diet was started immediately, whereas nitisinone treatment commenced later (at 4 months of age), owing to difficulties in obtaining this orphan drug in our country (Ibarra-González et al., 2017).
The patient also began a liver transplantation protocol; however, there were no compatible donors. The clinical outcome was poor, with rapid progression to cirrhosis and liver failure, and he died in the palliative care unit of our institution at 1 year and 8 months of age. No autopsy was performed. | Protein in silico modeling of the p.(Phe12Leu) allele Phe-12 is located at a distance of 28 Å from the active site of the FAH enzyme (Figure 1) and is in close contact with residues from β-sheet number 15 (Timm et al., 1999). This residue shows high phylogenetic conservation (from human to Caenorhabditis elegans). When the Phe-12 residue is substituted by Leu, repulsion is produced in all possible leucine rotamers (Figure 2). | DISCUSSION To the best of our knowledge, this is the first genetic study performed in Mexican patients with HT1, each of whom was from a different geographic region of the country. In all cases, the biochemical phenotype was similar to that previously reported worldwide, characterized by high blood levels of SA and AFP, prolonged coagulation times, and variable elevation of blood Tyr. However, the clinical presentations were heterogeneous, as has been reported previously in other series (Mayorandan et al., 2014). Although we observed a predominance of chronic presentation of the disease (4/8, 50%), we should not discount the possibility of acute forms, as these are not readily detected clinically, and patients may die without diagnosis. The high proportion of hepatocellular carcinoma observed in our patients (5/8, 62.5%) is within the range of incidences previously reported for this complication in nontreated or late-treated patients (14%-75%) (Khanna & Verma, 2018).
Although a conclusive explanation for hepatocellular carcinoma pathogenesis in HT1 has not been established, it is known that fumarylacetoacetate, maleylacetoacetate, and SA form glutathione adducts that can promote free radical damage of hepatocytes and susceptibility to genotoxicity (Chinsky et al., 2017). Furthermore, fumarylacetoacetate inhibits DNA glycosylases, which play a role in the repair of mutagenic oxidative base lesions in DNA (Bliksrud, Ellingsen, & Bjørås, 2013), which may explain the high incidence of hepatocellular carcinoma seen in HT1 patients. Early nitisinone treatment has been found to reduce the incidence of hepatocellular carcinoma (Khanna & Verma, 2018). In the present study, the patient in whom HT1 was detected at an early stage was promptly treated with nitisinone and showed a positive 1-year outcome, which is consistent with the successful experiences reported worldwide (Alvarez & Mitchell, 2017). Nevertheless, the development of hepatocellular carcinoma remains a risk in all HT1 patients, thereby indicating the need for promising novel or complementary therapeutic strategies (Aktuglu-Zeybek, Kiykim, & Cansever, 2017; VanLith et al., 2018). [Table 2 footnote: The novel variant is in bold type; it was predicted as "disease causing" (MutationTaster; http://www.mutationtaster.org/), "damaging" (SIFT score: 0.00; http://sift.bii.a-star.edu.sg/), "probably damaging" (PolyPhen-2 score: 1.00, sensitivity: 0.000, specificity: 1.00; http://genetics.bwh.harvard.edu/pph2/), or "deleterious" (PROVEAN; http://provean.jcvi.org/index.php) by the respective tools.]
The delay in establishing a diagnosis of HT1 reflects the fact that, at least in Mexico, general physicians and pediatricians are poorly skilled at identifying the possibility of HT1, and thus it is necessary to increase the requisite training and to enhance the likelihood of early detection of the disease through newborn screening (Couce, Dalmau, Del Toro, Pintos-Morell, & Aldámiz-Echevarría, 2011; Ibarra-González et al., 2017; Mayorandan et al., 2014). Here, we documented seven different genotypes, with a predominance of homozygosity (5/8, 62.5%, Table 1), which is similar to that reported in Spain (66%) (Moreno-Estrada et al., 2014) and Turkey (60%) (Couce et al., 2011). The highly heterogeneous mutational spectrum identified in this study is consistent with that reported worldwide (Angileri et al., 2015; Couce et al., 2011). We found that one-third (3/9) of the pathogenic FAH alleles were in exon 1, which differs from that reported by other authors, who describe exons 9 and 12 as harboring the largest clusters of disease-causing FAH variants. The current Mexican population is characterized by considerable ethnic diversity (Arranz et al., 2002), and therefore it is expected that HT1 alleles previously reported in Asian and European populations were detected in the present study (Table 2). In our study, the c.1062 + 5G>A variant, one of the most frequently identified worldwide (ranging from 5.4%-32% in Barcelona to 90% in Quebec) (Bergman et al., 1998), was detected only in a monoallelic genotype in patient 8 (Table 1). This 8-month-old female had a classical biochemical phenotype, characterized by high blood levels of SA, Tyr, Met, and AFP with subacute presentation. Despite the late diagnosis, nitisinone treatment was started; however, she developed hepatocellular carcinoma that required liver transplant.
Using Sanger sequencing methodology, the mutation diagnostic rate for HT1 is close to 91%-100% (Bliksrud, Brodtkorb, Backe, Woldseth, & Rootwelt, 2012; Imtiaz et al., 2011; Park et al., 2009), and thus failure to identify another pathogenic allele is a possibility, as noted in patient 8. To date, however, no variants other than point mutations or small deletions have been reported, which makes us suspect that deep intron sequences, large deletions, duplications, gene inversions, or promoter defects could go undetected by Sanger sequencing, thereby highlighting the necessity to apply alternative techniques, such as multiplex ligation-dependent probe amplification or massive parallel sequencing of regions other than coding regions (Georgouli et al., 2010). The c.3G > A variant was the most frequently detected variant in this study, and was identified in two patients with acute and chronic forms, respectively (Table 1). This is a very rare allele that has only twice been reported in a heterozygous state in individuals of Latin American and European descent, according to the gnomAD database (http://gnomad.broadinstitute.org/variant/15-80445399-G-A). In this case, although a founder effect might be suspected, it has been difficult to demonstrate because the number of studied families has been insufficient, and thus further population genetic studies are needed. Start-loss variants have been described in approximately 2% of all known pathogenic FAH alleles, and these were observed in three of our patients who carried one of two homozygous genotypes (c.1A > G, n = 1 and c.3G > A, n = 2). The former was found in one patient with a chronic presentation. Other authors have reported this variant in the homozygous state, although associated with subacute and acute forms (Bliksrud et al., 2012; Mohamed et al., 2013; Pomerantz et al., 2018). We were, however, unable to establish a phenotype-genotype correlation in our patients with start-loss variants (Table 2, patients 1-3).
Similar types of start-loss variants have been identified in several human disorders, and have been suggested to lead to aberrant mRNA processing from the next downstream Met codon, and consequently to the generation of a shorter and hence partially functional protein product (Touriol et al., 2003), or to the usage of an alternative Val start codon, which has been reported in COS cells (Sniderman King, Trahms, & Scott, 2017). Thus, in order to establish a genotype-phenotype correlation, the precise pathogenic effect of these variants in our patients would require functional in vitro or in vivo expression studies. All the bioinformatic tools used predicted the novel c.36C > A or p.(Phe12Leu) variant to be disease-causing (Table 2), which is consistent with the severe phenotype observed in our homozygous patient, who was affected by an acute and rapidly progressive form with fatal outcome. [FIGURE 1: Ribbon scheme of the FAH dimer. The N-terminal end is colored red and orange; the C-terminal end is colored green and cyan. Phe-12 is located at a distance of 28 Å from the active site residues (His 133, Glu 199, Glu 364) (PDB: 1QQJ; the figure was prepared using Pymol).] The structural in silico modeling revealed that when Phe-12 is substituted by a leucine, a generalized repulsion is produced in all possible leucine rotamers (Figure 2). This in turn would promote a destabilization of the protein structure, which could eventually lead to a pronounced loss of FAH activity. Although we were unable to establish a genotype-phenotype correlation, knowledge of a patient's genotype enables the provision of better genetic counseling to his/her family, including prenatal diagnosis, detection of carriers, and informed reproductive decisions (Mayorandan et al., 2014).
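The two start-loss alleles discussed above, c.1A > G and c.3G > A, both fall within the ATG initiation codon, i.e. coding positions 1-3. A minimal sketch that flags simple coding substitutions hitting the start codon; the regular expression deliberately handles only plain substitution notation (whitespace-free), so intronic spellings such as c.1062+5G>A do not match:

```python
import re

def is_start_codon_substitution(hgvs_c):
    """True if a simple coding substitution (e.g. 'c.3G>A') falls in the
    ATG initiation codon (coding positions 1-3)."""
    m = re.fullmatch(r"c\.(\d+)[ACGT]>[ACGT]", hgvs_c)
    return bool(m) and 1 <= int(m.group(1)) <= 3

# Variants discussed in the text, in whitespace-free HGVS spellings
variants = ["c.1A>G", "c.3G>A", "c.36C>A", "c.1062+5G>A"]
start_loss = [v for v in variants if is_start_codon_substitution(v)]
```

Applied to the four alleles above, only c.1A>G and c.3G>A are flagged; c.36C>A is a downstream missense change and c.1062+5G>A is intronic.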
In conclusion, we found that the clinical presentation of HT1 was heterogeneous; thus, a clear genotype-phenotype correlation could not be established. Sanger automated sequencing enabled us to identify 93.8% (n = 15/16) of pathogenic FAH alleles in a sample of Mexican HT1 patients, among whom we detected a heterogeneous mutational spectrum and, in one case, identified a novel missense variant, c.36C > A or p.(Phe12Leu). This latter variant was found to be associated with the fatal acute form of the disease, and on the basis of protein modeling, we predicted that this mutation would cause a destabilization of FAH structure. However, further studies are required to establish the pathogenic effect of this mutation and to investigate its effect on the functional activity of the resulting FAH mutant enzyme.
2019-10-02T13:04:14.856Z
2019-09-30T00:00:00.000
{ "year": 2019, "sha1": "93b7b48e71a93a1f8897328e953183504822c754", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mgg3.937", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d2df5d1c3f9c948a7f0462d61fb7ef6bf9d77270", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
244443880
pes2o/s2orc
v3-fos-license
Medium-term Influence of the Coronavirus Disease 2019 Pandemic on Patients with Diabetes: A Single-center Cross-sectional Study Objective This study evaluated the lifestyle changes in patients with diabetes and their independent associations with glycemic and body weight control. In addition, the correlation between changes in mental health and lifestyles was evaluated. Methods This single-center cross-sectional study included 340 patients with diabetes who periodically visited our department. Changes in dietary habits, activities of daily living, and mental health before and during approximately six months after the onset of the coronavirus disease 2019 (COVID-19) pandemic were evaluated using a questionnaire, including the International Physical Activity Questionnaire-Short Form. Results Approximately 20%, 30%, and over 50% of patients had worsened dietary habits, decreased activities of daily living, and deteriorated mental health, respectively. A multiple regression analysis showed that irregular meal timing was significantly associated with change in HbA1c (β=0.328, p=0.001), and decreased walking time was significantly associated with changes in body weight (β=-0.245, p=0.025). The change in fear and anxiety was positively associated with changes in meal timing regularity (r=0.129, p=0.019) and carbohydrate consumption (r=0.127, p=0.021). Subsequently, the change in depressed mood was positively associated with changes in carbohydrate (r=0.142, p=0.010) and alcohol (r=0.161, p=0.037) consumption, and the change in psychological stress was positively associated with changes in carbohydrates (r=0.183, p=0.001) and snack (r=0.151, p=0.008) consumption as well as sedentary time (r=0.158, p=0.004). Conclusion The COVID-19 pandemic has had a considerable medium-term impact on the lifestyle and mental health of patients with diabetes. 
Lifestyle changes were associated with glycemic and body weight control, and mental health changes were associated with lifestyle changes. These findings may provide important information on diabetes care during the pandemic. Introduction The novel coronavirus disease 2019 (COVID-19) has caused a global health emergency; governments of pandemic-hit countries have adopted different measures to prevent the spread of the disease and the collapse of the healthcare system (Intern Med 61: 303-311, 2022; DOI: 10.2169/internalmedicine.8010-21). Depending on the rate of transmission and the robustness of the medical system, these measures varied between countries; in Japan, a state of emergency was declared on April 7, 2020, and began to be lifted on May 14, 2020 (2). This situation temporarily affected Japanese people's lifestyle and mental health, including their dietary habits and activities of daily living (3,4). However, the second and third waves of infection have since begun, and as of December 31, 2020, Japan had recorded 233,785 confirmed COVID-19 cases and 3,459 deaths (2). For patients with diabetes, dietary habits, activities of daily living, and mental health are particularly important in disease management, and any disruptive change is likely to adversely affect patients' glycemic and body weight control (5,6). Indeed, recent studies have evaluated the impact of the COVID-19 pandemic on changes in lifestyles and/or mental health and their impact on glycemic and body weight control (7-14). However, these reports evaluated the early phase, approximately two months after the beginning of the COVID-19 pandemic. The medium-term impact of the COVID-19 pandemic on changes in lifestyles and mental health and their independent impact on glycemic and body weight control in patients with diabetes thus remain unclear. Furthermore, the correlation between changes in mental health and lifestyle also remains unclear.
This single-center, cross-sectional study evaluated the lifestyle and mental health changes and their independent association with glycemic and body weight control in patients with diabetes during the medium-term period after the beginning of the COVID-19 pandemic. In addition, the correlation between mental health and lifestyle changes was also evaluated. Study design and participants This single-center, cross-sectional study was conducted at the Department of Diabetes, Metabolism, and Endocrinology of the Osaka Police Hospital in Japan. Patients with diabetes over 20 years old who periodically visited our department and provided their written consent to participate in the study were included. Individuals with malignant tumors, mental illnesses, or dementia and those with serious illnesses or conditions that significantly affected their daily lives were excluded or deemed ineligible for participation by an attending physician. Demographic data and anthropometric measurements were obtained from medical records at the time the questionnaire survey was conducted, from September 1 to October 30, 2020. Questionnaire The questionnaire was developed by our department and consisted of three sections. The first section contained questions on changes in dietary habits (regularity of meal timings, amount of total diet, and consumption of carbohydrates, snacks, fruits, and alcohol) during the pandemic (from April 2020 to the time the questionnaire was completed) compared to before the pandemic. The responses to the question on meal timing regularity included "became regular," "no change," or "became irregular." The responses to the question on dietary habits included "decreased," "no change," or "increased." The second section contained questions regarding activities of daily living, including physical activity and sedentary time before and during the pandemic, which were measured using the International Physical Activity Questionnaire-Short Form (IPAQ-SF) (15). 
This questionnaire comprises seven questions assessing the total time (in minutes) spent on vigorous-/moderate-intensity activities and walking per week and being sedentary per day. The responses regarding the vigorous-/moderate-intensity activities and walking were converted to metabolic equivalent task minutes per week (MET.min/week), using the IPAQ scoring protocol. The total weekly physical activity level was estimated by adding the scores for each activity. The last section contained questions regarding changes in mental health, including fear and anxiety, depressed mood, and psychological stress during the pandemic compared to before the pandemic. The three-item responses included "decreased," "no change," or "increased." The respondents completed the questionnaire at regular visits during the pandemic (September 1 to October 30, 2020). Changes in HbA1c, body weight, and hypoglycemic agents The change in HbA1c was calculated as the difference in this indicator at the time of questionnaire completion during the pandemic from that at the regular visits within nearly two months before the state of emergency over the COVID-19 outbreak (April 7, 2020). The same calculation applied to the change in body weight. The change in hypoglycemic agents was evaluated as "strengthened," "unchanged/modified," or "attenuated" during the pandemic (from April 7, 2020, to the time of questionnaire completion). Ethics statement The protocol was approved by the Osaka Police Hospital Clinical Research Review Committee in compliance with the Declaration of Helsinki and the current legal regulations in Japan. Written informed consent was obtained from all participants before participation in the study. Statistical analyses Statistical analyses were performed using the IBM SPSS Statistics for Windows, version 21.0, software program (IBM, Armonk, USA). Continuous variables are presented as means and standard deviations, while categorical variables are presented as valid percentages. 
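The IPAQ-SF scoring step described above can be sketched in a few lines. This is a minimal illustration assuming the commonly published MET weights from the IPAQ scoring guidance (3.3 for walking, 4.0 for moderate, 8.0 for vigorous activity); it is not the code used by the authors, and the example numbers are invented.

```python
# Sketch of the IPAQ-SF scoring protocol: convert minutes/week of each
# activity into MET-minutes/week and sum them for a total activity score.
# MET weights below are assumptions taken from the published IPAQ scoring
# guidance, not values reported in this study.
MET_WEIGHTS = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def ipaq_total_met_min(minutes_per_week):
    """minutes_per_week: dict mapping activity name -> total minutes per week."""
    return sum(MET_WEIGHTS[act] * mins for act, mins in minutes_per_week.items())

# Hypothetical respondent: 150 min walking, 60 min moderate, 30 min vigorous
example = {"walking": 150, "moderate": 60, "vigorous": 30}
total = ipaq_total_met_min(example)  # 3.3*150 + 4.0*60 + 8.0*30, i.e. ~975 MET.min/week
```

Per-activity scores computed this way are then summed, exactly as the text describes, to give the total weekly physical activity level.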
The changes in dietary habits, mental health, and hypoglycemic agents were coded as follows: regularity of meal timing was -1= became regular, 0= no change, and 1= became irregular. Other dietary habits were coded as -1= decreased, 0= no change, and 1= increased. Mental health was coded as -1= decreased, 0= no change, and 1= increased. Hypoglycemic agents were coded as -1= strengthened, 0= unchanged/modified, and 1= attenuated. Differences in physical activity and sedentary time before and during the pandemic were analyzed using Wilcoxon's signed-rank test. Pearson's correlation analysis and multiple regression analysis were used to evaluate whether and how lifestyle changes were associated with those in HbA1c and body weight. Pearson's correlation analysis was also used to examine the correlation between mental health and lifestyle changes. Statistical significance was set at p<0.05. Demographic characteristics and anthropometric measurements A total of 360 patients completed the questionnaire, with HbA1c and body weight changes evaluated in 340. Overall, 226 (66.5%) were men, and the majority (94.1%) had type 2 diabetes. Their mean age was 67.2±11.2 years; most patients (63.8%) were between 60 and 79 years old, followed by those between 40 and 59 years old (22.6%). Patients' mean body mass index (BMI) was 25.5±4.3 kg/m2. The mean duration of diabetes and HbA1c level were 16.3±11.6 years and 7.0%±0.8%, respectively. Established cardiovascular diseases were present in 32.9% of patients. The summarized demographic data and anthropometric measurements of all patients are presented in Table 1. Changes in dietary habits during the pandemic The changes in dietary habits before and during the pandemic are shown in Fig. 1. Changes in the regularity of meal timing showed that 15.2% of patients reported more regular habits, whereas 7.5% reported more irregular meal timings. Total diet decreased in 11.8% of patients but increased in 13.0% during the pandemic.
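The ordinal coding described above translates directly into lookup tables. The sketch below is illustrative only (the variable names and example responses are invented, not taken from the study's analysis code):

```python
# Map questionnaire responses to the -1/0/1 codes used in the analyses,
# following the coding scheme stated in the Methods.
MEAL_TIMING = {"became regular": -1, "no change": 0, "became irregular": 1}
DIET_AND_MENTAL = {"decreased": -1, "no change": 0, "increased": 1}
HYPOGLYCEMIC = {"strengthened": -1, "unchanged/modified": 0, "attenuated": 1}

def code_responses(responses):
    """responses: list of (codebook, answer) pairs -> list of integer codes."""
    return [codebook[answer] for codebook, answer in responses]

# One hypothetical patient's answers
codes = code_responses([
    (MEAL_TIMING, "became irregular"),
    (DIET_AND_MENTAL, "increased"),
    (HYPOGLYCEMIC, "strengthened"),
])  # -> [1, 1, -1]
```

Coding ordinal answers this way lets the correlation and regression analyses treat the questionnaire items as numeric explanatory variables.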
The proportions of patients who reported reduced carbohydrate, snack, and fruit consumption were 6.0%, 9.1%, and 12.9%, while 22.2%, 15.3%, and 21.8% reported increased consumption, respectively. Alcohol consumption decreased and increased in 24.9% and 10.1% of patients, respectively (Table 2). Changes in mental health during the pandemic Regarding changes in mental health, increased fear and anxiety, depressed mood, and psychological stress were reported in 50.0%, 27.7%, and 37.2% of patients, respectively, during the pandemic (Fig. 2). Hypoglycemic agent changes during the pandemic During the pandemic, the usage of hypoglycemic agents was strengthened, unchanged/modified, and attenuated in 10.0%, 82.4%, and 7.6% of patients, respectively. Association of lifestyle changes and mental health with changes in HbA1c and body weight The mean HbA1c value slightly but significantly decreased during the pandemic (7.09%±0.73% to 6.94%±0.73%, p<0.001). In contrast, body weight did not significantly vary during the pandemic compared with before the pandemic (69.0±14.4 kg vs. 68.8±14.5 kg, p=0.267). Tables 3 and 4 describe how lifestyle changes were related to HbA1c and body weight changes. Irregular meal timing was positively associated, but change in walking time was negatively associated, with change in HbA1c (β=0.202, p<0.001, and r=-0.116, p=0.048, respectively). A multiple regression analysis included all items from the questionnaire and patient attributes (age, sex, BMI, duration of diabetes, and changes in hypoglycemic agents) as explanatory variables and the change in HbA1c as an objective variable (HbA1c: glycated hemoglobin A1c; BMI: body mass index; change in hypoglycemic agents was coded as "strengthened" (-1 point), "unchanged/modified" (0 point), and "attenuated" (1 point)).
Irregular meal timing was identified as a factor significantly related to the change in HbA1c (β=0.328, p=0.001) (Table 3). With all items of the questionnaire and patient attributes (age, sex, BMI, duration of diabetes, and changes in hypoglycemic agents) included as explanatory variables and the change in body weight as an objective variable, we performed a multiple regression analysis. This analysis revealed that only the change in walking time significantly contributed to body weight change (β=-0.245, p=0.025) (Table 4). Correlation between mental health and lifestyle changes The correlations between mental health and lifestyle changes are shown in Table 5. The change in fear and anxiety was positively associated with changes in meal timing regularity (r=0.129, p=0.019) and carbohydrate consumption (r=0.127, p=0.021). The change in depressed mood was positively associated with changes in carbohydrate (r=0.142, p=0.010) and alcohol (r=0.161, p=0.037) consumption. In addition, changes in psychological stress were positively associated with changes in consumption of carbohydrates (r=0.183, p=0.001) and snacks (r=0.151, p=0.008), as well as sedentary time (r=0.158, p=0.004). Discussion In this study, we demonstrated considerable lifestyle and mental health changes and the independent impact of the lifestyle changes on glycemic and body weight control during the medium-term period of the COVID-19 pandemic in Japan. In addition, we also demonstrated an association between mental health and lifestyle changes. Based on the study's findings, we confirmed that dietary habits worsened and activities of daily living decreased in approximately 20% and 30% of patients, respectively. In addition, the study demonstrated that irregular meal timing was positively associated with changes in HbA1c levels during the COVID-19 pandemic. Furthermore, decreased walking time was associated with changes in body weight.
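The Pearson correlations reported above follow the textbook product-moment formula. The sketch below uses only the standard library and invented ordinal codes (not the study's patient records):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative -1/0/1 codes: change in psychological stress vs. change in
# snack consumption for five hypothetical patients.
stress = [1, 0, 1, -1, 0]
snacks = [1, 0, 0, -1, 1]
r = pearson_r(stress, snacks)  # a moderate positive correlation on this toy data
```

With real data, each reported r would be computed this way over all 340 patients' coded responses, with the p-value obtained from the t-distribution of the correlation coefficient.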
As is well known, dietary habits and activities of daily living are quite important factors for glycemic and body weight control in patients with diabetes (5,6). Recent studies have revealed that lifestyle changes during the early phase of the COVID-19 pandemic influenced glycemic and body weight control in patients with diabetes in core pandemic areas (7-9). Kishimoto et al. investigated lifestyle changes, including dietary habits and physical activity, body weight, and HbA1c levels in 168 patients with diabetes (7). They revealed that physical activity levels (coded as increased, no change, and decreased) and dietary habits (coded as improved, no change, and deteriorated) were significant determinants of group categorization (patients with elevated HbA1c levels >0.2% and decreased HbA1c levels >0.2%) in the multiple logistic regression analyses. However, their analysis did not include individual components of dietary habits and physical activities, unlike our report. In addition, changes in body weight were not analyzed. Munekawa et al. evaluated the association of changes in dietary habits quantitatively, including total diet, snacks, prepared food, and exercise, with changes in HbA1c and body weight in 203 patients with diabetes, using a visual analogue scale (8). They revealed that total diet intake was positively associated with changes in HbA1c and that total diet intake and snack consumption were positively associated, while exercise was negatively associated, with changes in body weight, findings that are partially consistent with those of our study. However, these associations were unadjusted, and whether or not these factors are independently related to HbA1c and body weight changes is unclear. In contrast, Takahara et al. evaluated the association of detailed lifestyle changes with HbA1c and body weight changes in 1,402 patients with diabetes using a linear regression model (9).
They reported that the change in leisure time for physical activities was inversely associated with HbA1c and weight changes. In contrast, the quantitative change in meals, with a decline in meals eaten out and snacks, was positively associated with HbA1c and weight changes. However, in their reports, physical activity was not analyzed using the quantified indicators used in our study. These previous studies were performed during the early stage of the COVID-19 pandemic. However, the present study was performed during the medium-term period of about half a year after the COVID-19 pandemic onset, a major difference from previous studies. We observed worsening of dietary habits and a decrease in activities of daily living in a considerable number of patients. Such changes may have led to a deterioration in glycemic control and weight gain, which are generally consistent with previous studies conducted during the early stage of the pandemic. Conversely, the mean HbA1c during the pandemic was significantly lower than before the pandemic. This improvement may have been due in part to patients' medical guidance from attending physicians and/or their self-care behavior in glycemic control during the pandemic. Thus, considering these findings, including our own, physicians should pay close attention to lifestyle changes leading to worsening glycemic control and body weight gain and provide careful dietary and exercise guidance to diabetic patients while the COVID-19 pandemic persists. Regarding mental health changes, we confirmed that over 50% of patients had deteriorated mental health. A recent study reported that as many as 87% of people with diabetes were affected by psychological stress during the initial stage of the COVID-19 pandemic (11). It is well known that patients with diabetes tend to have more psychological problems, including anxiety, depression, and stress, than the general population (16). 
In addition, diabetes has been associated with higher severity and mortality rates due to COVID-19 (17,18). Furthermore, because such warnings were widely disseminated through the mass media in Japan, fear of transmission may have deteriorated the mental health of patients with diabetes. Indeed, a recent meta-analysis revealed that patients with noninfectious chronic diseases, including diabetes, had a higher risk of depression and anxiety than others during the COVID-19 pandemic (19). Although there was no significant association between changes in mental health and glycemic control in this study, mental health deterioration was associated with worsening of dietary habits, such as increased carbohydrate and snack consumption. Recent studies have reported an association between changes in mental health and dietary habits in patients with diabetes during the COVID-19 pandemic (8,12). Sanker et al. reported that 15.5% of patients had increased mental stress during the lockdown. The majority had an unhealthy dietary pattern, including higher consumption of snacks, which was consistent with our study findings (12). Munekawa et al. also reported that 41.9% of patients experienced increased stress during the early phase of the COVID-19 pandemic, and this was positively associated with changes in prepared food intake (8). The causal relationship between mental health deterioration and increased consumption of carbohydrates and snacks may be attributed to the finding that negative emotion is likely to cause emotional hunger and eating. In addition, carbohydrate ingestion stimulates serotonin production, which enhances mood and alleviates stress (20,21).
Regarding the association between mental health and activities of daily living, we found an association between increased psychological stress and sedentary time. Stanton et al. reported that higher depression, anxiety, and stress symptoms were associated with decreased physical activity during the COVID-19 pandemic, based on a study of 1,491 Australian adults (22). Although the association between mental health and activities of daily living is bidirectional (23), deterioration of mental health may be more likely to lead to a decrease in activities of daily living during the pandemic. Given these findings, the deterioration of mental health may lead indirectly to worsened glycemic control through the worsening of self-care, including dietary habits and activities of daily living during the pandemic. Unfortunately, we found no significant association between changes in mental health and glycemic control in this study; however, negative psychological factors are directly associated with the worsening of glycemic control through increased generation of stress hormones, including cortisol (24,25). However, a systematic review and meta-analysis showed that psychosocial interventions modestly but significantly improved glycemic control and mental health outcomes (26). Therefore, we believe that physicians should pay close attention to changes in mental health and lifestyle changes while the COVID-19 pandemic persists. Several limitations associated with the present study warrant mention. First, the responses to the questions were dependent on memory and the patients' subjective opinions, which may have affected the outcome validity. Second, this study was performed in a single diabetes center in Japan, with a relatively small sample size and no control. Third, we used the IPAQ-SF, a self-report questionnaire that assesses physical activity and has been accurately validated in people 15-69 years old. 
However, our study population included elderly people over 70 years old. This data collection tool has only recently been used in clinical practice to evaluate this age group (27,28). Fourth, the COVID-19 infection rate was lower and measures employed in the study area less stringent than those in Western countries and other regions with government-enforced lockdowns and quarantines. Therefore, these results may not be representative of other populations. Furthermore, this study did not include patients whose visits to our department had been suspended due to the pandemic. Those patients may have been experiencing more lifestyle changes related to dietary habits, activities of daily living, and mental health. Therefore, the present results may have been underestimated. Finally, although we detected an association between lifestyle changes and glycemic and body weight control, and changes in mental health and dietary habits, we were unable to clarify any causal relationship because of the cross-sectional design of this study. Therefore, further prospective, large-scale clinical studies are needed to clarify the causality. However, we believe that this study's findings will provide important information on diabetes care during the COVID-19 pandemic. In conclusion, the persistence of the COVID-19 pandemic has had a considerable impact on the lifestyles and mental health of patients with diabetes. In addition, lifestyle changes were shown to be associated with glycemic and body weight control, and mental health changes were associated with lifestyle changes. Thus, physicians need to provide more careful diet and exercise guidance and mental healthcare to patients with diabetes during the COVID-19 pandemic.
2021-11-21T16:11:23.634Z
2021-11-20T00:00:00.000
{ "year": 2021, "sha1": "2de7518037354811974b6f1007e7cb2bc6d78520", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/61/3/61_8010-21/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a93a1bc3977331f52cb0cdb40527fb5070d07c09", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253005559
pes2o/s2orc
v3-fos-license
Cleaning the Flue in Wood-Burning Stoves Is a Key Factor in Reducing Household Air Pollution In experimental settings, replacing old wood stoves with new wood stoves results in reduced personal exposure to household air pollution. We tested this assumption by measuring PM2.5 and levoglucosan concentrations inside homes and correlated them with wood stove age. Methods: Thirty homes in the Albuquerque, NM area were monitored over a seven-day period using in-home particulate monitors placed in a common living area during the winter months. Real-time aerosol monitoring was performed, and filter samples were analyzed gravimetrically to calculate PM2.5 concentrations and chemically to determine concentrations of levoglucosan. A linear regression model with backward stepwise elimination was performed to determine the factors that would predict household air pollution measures. Results: In this sample, 73.3% of the households used wood as their primary source of heating, and 60% burned daily or almost daily. The mean burn time over the test week was 50 ± 38 h, and only one household burned wood 24 h/day (168 h). The average PM2.5 concentration (standard deviation) for the 30 homes during the seven-day period was 34.6 µg/m3 (41.3 µg/m3), and median (min, max) values were 15.5 µg/m3 (7.3 µg/m3, 193 µg/m3). Average PM2.5 concentrations across the 30 homes ranged from 0-15 μg/m3 to >100 μg/m3. Maximum PM2.5 concentrations ranged from 100-200 μg/m3 to >3000 μg/m3. The levoglucosan levels showed a linear correlation with the total PM2.5 collected by the filters (R2 = 0.92). However, neither mean nor peak PM2.5 nor levoglucosan levels were correlated with the age (10.85 ± 8.54 years) of the wood stove (R2 ≤ 0.07, p > 0.23). The final adjusted linear regression model showed that average PM2.5 was associated with reports of cleaning the flue with a beta estimate of 35.56 (3.47-67.65) and R2 = 0.16 (p = 0.04).
Discussion: Cleaning the flue, and not the wood stove age, was associated with household air pollution indices. Education on wood stove maintenance and safe burning practices may be more important in reducing household air pollution than the purchase of new stoves. Introduction Exposure to wood smoke (WS) is increasing not only in low-income countries but also in America, Canada, Europe, and the Taigas in Canada, Alaska, and Siberia [1]. Climate change and other factors contribute to the rise in prevalence of wildfire events [2], causing populations in large areas to be exposed to outdoor air pollution and toxic particulate matter (PM). In addition, household air pollution is a major concern [3,4] because approximately one-third of the world's population, comprising over 2.4 billion people, still uses solid fuels, such as wood, coal, or biomass (vegetable remains and dung), for cooking and heating their homes [5,6]. During the winter months, 30% of ambient fine particle (PM 2.5 ) mass stems from wood burning used for heating and cooking in some areas of the United States (US) [7]. More recently, exposure to household air pollution has increased during the COVID-19 pandemic as people were confined to their homes for longer periods [8]. According to the guidelines published by the World Health Organization (WHO) in September 2021, the guideline values for 24 h mean particulate matter (PM) concentrations are 45 µg/m 3 for PM 10 and 15 µg/m 3 for PM 2.5 [9]. However, indoor PM 2.5 concentrations in the US often exceed health-based air quality standards, especially in homes that use stoves for cooking or heating. In Montana, mean indoor PM 2.5 concentrations of 45 µg/m 3 and 51 µg/m 3 were reported in homes with wood stoves [10,11], exceeding the WHO 24 h standard for PM 2.5 of 15 µg/m 3 . However, in addition to PM, levoglucosan content is a large fraction of the emitted fine particles from wood burning [12].
Levoglucosan is a product of pyrolysis generated during the combustion of wood and a major constituent of PM 2.5 , and, therefore, it has been proposed as a tracer of WS [12]. An estimated 3.8 million premature deaths are caused each year by illnesses attributable to household air pollution due to heating or cooking with inefficient stoves using either solid fuel or kerosene [5]. Exposure to household pollutants is particularly high among women and young children, and this contributes to many deaths in children under 5 years of age. These deaths are primarily due to acute lower respiratory infections, such as pneumonia, in children and to chronic obstructive pulmonary disease (COPD) in adult women [13,14]. However, even exposure to low levels of WS PM can cause oxidative stress in lung cells and elicit airway inflammation. Thus, exposure to WS can be a major cause of respiratory illness in all susceptible individuals [15] and has been implicated in respiratory illness, including COPD exacerbations [16], lower respiratory infections [17], and cough and wheezing [18,19]. Epidemiological studies suggest that WS exposure may cause an increased risk of infection and reduced lung function [20][21][22]. Several controlled exposure studies also demonstrated a clear association between exposure to WS particulates and respiratory dysfunction [23,24]. A 25 µg/m 3 increase in 6-day mean indoor PM 2.5 concentrations was associated with the presence of lower respiratory tract infection in children [25]. Exposure to WS affects not only the respiratory system but also increases the risk for cancer (lung, head and neck, cervical), interstitial lung disease, cardiovascular diseases, hypertension, and low birth weight, and reduces the growth rate of children [26,27]. Over 90% of the total PM from biomass burning is smaller than 2.5 µm, which can enter the alveolar region and pass into circulation [28,29].
Because wood stoves are major sources of household air pollution, several intervention strategies have been implemented to reduce indoor PM 2.5 . Open fires generate high concentrations of WS particulate matter of 2000-30,000 µg/m 3 , and the use of improved wood stoves reduces exposures to the 1000-5000 µg/m 3 range [11]. Modern technologies in biomass combustion, such as automatic small-scale wood pellet appliances and larger domestic heating plants, are commonly more efficient and emit much lower levels of PM [30]. Changeout programs of older wood stove models with new EPA-certified wood stoves in Libby, Montana, showed a >70% reduction in indoor PM 2.5 concentrations [10,31]. These studies determined a drop of basal mean values from 51.2 to 15 µg/m 3 PM 2.5 after the changeout of stoves [10]. The follow-up study showed that the mean PM 2.5 level of 45.0 µg/m 3 before changeout was reduced to 21.0 µg/m 3 over the following three winters. However, over subsequent winters, the average concentrations across homes varied, and several homes actually showed increased concentrations [11]. Despite the overall reduction in indoor pollutants, the study suggested that not only the introduction of a new wood stove but other factors contribute to the level of pollutants. Recently, the same research group proposed that a lack of cleaning of chimneys may also contribute to indoor pollution [32]. Therefore, simply replacing old woodstoves with newer improved ones may not have a long-term benefit of reducing exposure to WS PM 2.5 . Additional studies are necessary to determine whether newer wood stoves reduce household air pollution [33]. Confirmatory studies should not only have accurate measurements of PM emission from wood stoves but also consider including chemical markers that define the source of pollutants. 
Therefore, the main objective of the present study was to document the age of wood stoves, in addition to important factors such as stove maintenance and frequency of cleaning the flue, as these variables were not included in earlier studies. Further, earlier investigations did not consider the possibility that other indoor pollutants, such as PM 2.5 generated from cigarette smoking, can affect indoor air quality. The present study measured the wood pyrolysis product levoglucosan to confirm that the PM 2.5 stems from wood burning specifically. Thus, by including both PM 2.5 and levoglucosan, the present study was designed to elucidate whether the age of wood stoves is a key determinant of reducing household air pollution in "real-life settings" or whether other factors, such as flue cleaning, affect indoor air pollutant levels.
Study Population
The sample for this investigation was drawn from those currently enrolled in the Lovelace Smokers Cohort (LSC) who reported yes to the question, "have you been exposed to WS over the last year". Further details of the LSC have been described previously [34][35][36]. Most LSC participants were recruited through newspaper or television advertisements, and ongoing recruitment continues using these methods in Albuquerque, an urban, diverse, high-altitude Southwestern community.
Study Design
This analysis was part of a larger study that tested the development of a self-report questionnaire concerning exposure to household WS. Exposure to WS was self-reported in response to a question administered at study entry as part of the general health survey. The question "Have you ever been exposed to WS for 12 months or longer" provided no additional details about the type, intensity, and duration of WS exposure. The original research design was a cross-sectional sample, monitoring the particulate matter and levoglucosan concentrations over seven days in 30 homes.
The homes (n = 30) were selected from those originally contacted by a study coordinator and those contacted by word of mouth. All individuals were enrolled during the heating season, when wood stoves were active in the home, to obtain real-world experience. This analysis reports on the internal environment exposures of these 30 homes. Demographic information such as age, gender, ethnicity, race, smoking history, and history of respiratory disease was obtained using the American Thoracic Society (ATS)-DLD-78 questionnaire, with some questions added about in-home exposures to smoking. Information concerning wood stove maintenance and burning practices was also collected, including the type of fuel burned and cleaning of the flue and stove. The thirty homes were monitored over a seven-day period using in-home particulate monitors placed in a common living area during the winter months of 2013-2014. The Teflon filter was conditioned for a minimum of 24 h prior to and after sample collection at 25 °C and 40% relative humidity. Filter samples were collected with a Personal Environmental Monitor that had a 2.5-micron size-selective inlet (PEM, Model 200, PEM-10-2.5, MSP Corporation, Shoreview, MN, USA) (Figure S1). Real-time aerosol monitoring was performed using a DustTrak Aerosol Monitor (Model 8520, TSI, Inc., Shoreview, MN, USA). The aerosol sample enters through a multi-nozzle, single-stage impactor that removes particles with an aerodynamic diameter (AD) larger than 2.5 µm. Particles smaller than the impactor cut-point were collected on a 37 mm diameter filter. Two different types of filters were used during this study. For the first 12 deployments, PTFE Zefluor filters, pore size 3.0 µm (Part No. 60230, Pall Life Sciences, Ann Arbor, MI, USA), were used.
However, it was observed that, due to heavy particulate loading in some residences and the long sampling time (1 week), the pressure drop across the filters increased, reducing the sampling flow rate or causing failure of the pump. As a result, Zefluor filters were replaced with PallFlex Membrane Filters (Type: Fiberfilm T60A20, Pall Life Sciences, Ann Arbor, MI, USA) for the remaining household deployments. With this change, no drops in the sampling flow rate were observed despite the high loading of the filters (Figure S2). The flow rate through the sampler was maintained at 10 ± 1 L/min to maintain a 2.5 µm cut-point. The Leland Legacy Air Sampling Pump (SKC, Inc., Eighty-Four, PA, USA) was used to provide the required sampling flow rate through the PEM. The PEM sampler was installed on a vertical rod about 4-5 ft from the ground (typical breathing zone while sitting), with the inlet holes aligned parallel with the floor to avoid gravitational settling. Additional detail on the sampling system can be found in the Supplementary Materials.
In-Home Particulate Samples and Analysis
Sampling Deployment Procedure: Pre-deployment began with the Personal Environmental Monitor (PEM) being prepared by cleaning the impaction surface and applying a thin film of grease to it to minimize particle bounce and re-entrainment. The filter was weighed using a microbalance (Model MX5, Mettler Toledo, Columbus, OH, USA) and installed in the PEM. Then, the PEM was connected to the sampling pump, and the sampling flow rate was adjusted to achieve 10 ± 1 L/min. The flow rate was measured by installing a TSI flow meter (Model: 4100, TSI, Inc., Shoreview, MN, USA) between the sampling pump and the PEM. At pickup and post-deployment, both samplers (DustTrak and PEM) were stopped, and the final flow rate and the pressure drop across the PEM sampler were measured and recorded as in pre-deployment.
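The gravimetric PM 2.5 value follows from the pre/post filter weights and the sampled air volume (10 L/min over the 7-day deployment samples about 100.8 m 3 of air). A minimal sketch of that arithmetic, with a hypothetical function name and example weights:

```python
def pm25_concentration_ug_m3(mass_pre_mg, mass_post_mg, flow_lpm=10.0, hours=168.0):
    """Gravimetric PM2.5: filter mass gain divided by the sampled air volume.
    At 10 L/min, a 7-day (168 h) deployment samples 100.8 m^3 of air."""
    mass_gain_ug = (mass_post_mg - mass_pre_mg) * 1000.0  # mg -> ug
    volume_m3 = flow_lpm * 60.0 * hours / 1000.0          # L -> m^3
    return mass_gain_ug / volume_m3
```

On this accounting, a mass gain of roughly 1 mg on the filter over the week corresponds to about 10 µg/m 3 .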
The filter was weighed and stored in a −80 °C freezer until levoglucosan analysis could be performed. Additional detail on the sampling deployment procedure is described in Supplementary Figure S2.
Chemical Analysis: Levoglucosan (1,6-anhydro-β-D-glucopyranose), a cellulose combustion product, is a tracer species for WS, mainly because of its high resistance to degradation. Levoglucosan levels were determined in the collected PM to establish whether the PM was primarily from wood burning, as 23% of the study participants were also current cigarette smokers. The amount of levoglucosan in the filter extraction solution was determined by GC-MS. The analysis was performed by the Desert Research Institute, Reno, Nevada, USA; details of the chemical analyses are described in the Supplementary Materials. Due to storage and chemical analysis failures, levoglucosan values were obtained in only 23 of 30 homes. Data were reported in ng/m 3 units.
Statistical Analysis: Summary demographic statistics for continuous variables consisted of means and standard deviations (S.D.), and categorical variables are presented as proportions. We conducted Pearson's correlations to examine associations of the wood stove age with the average PM 2.5 , peak PM 2.5 , or levoglucosan levels of in-home particulate measures (n = 30) and with whether or not the stove was maintained regularly (yes/no) or the flue cleaned (yes/no). Based on our hypothesis of associations with the subject's proximity to the stove (based on three questions: (a) Over the past week, when wood was burning in the stove/fireplace, was there some smoke in the room? (b) When wood is burning, how close to the stove/fireplace are you?
(c) Usually, when wood was burning in the stove/fireplace, I was in the same room?), stove age, number of cigarettes smoked per day, cleaning the flue, cleaning the stove, income, and education, we performed linear regression with backward stepwise elimination to determine the factors that would predict household air pollution measures. All analyses were conducted using SAS version 9.4 (Cary, NC, USA).
Demographic Characteristics of Subjects and the Homes
The mean age of the study participants was almost 60 years, 43% were male, and about half were Hispanic (Table 1). Approximately a quarter (23%) were current smokers, and two-thirds (66.7%) of the individuals reported some chronic conditions (Table 1). The majority of the study participants reported living in a home with 5 rooms and 1-2 individuals residing in the home. The stove was reported to have been serviced in the last year in 65.5% of the households, with the flue having been cleaned within the past year in over half of the sample. Only 30% reported using a humidifier, and only one household used an air filter in their home. Among the 30 households in this study, 73.3% used wood as their primary source of heating, and 60% burned wood daily or almost daily. The mean burn time over the test week was 50 ± 38 h, with only one home burning 24 h a day (Figure 1A). Across homes, the average measured PM 2.5 ranged from 0-15 µg/m 3 to >100 µg/m 3 (Figure 1B). Over the 6 days, the average PM 2.5 was >100 µg/m 3 in only 1 home; 15 homes (50% of the sample) measured 0-15 µg/m 3 , 6 homes (20%) measured 15-35 µg/m 3 , and in 8 homes, PM 2.5 ranged between 35 and 100 µg/m 3 (Figure 1B). Interestingly, the peak measurements ranged from 100-200 µg/m 3 up to >3000 µg/m 3 in four homes (Figure 2).
WS Was the Cause of the Measured PM
There was a linear relationship (R 2 = 0.83) between the filter-based PM 2.5 concentration and the DustTrak average concentration reading.
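The Pearson correlations and backward stepwise elimination above were run in SAS 9.4; an equivalent sketch in Python, using synthetic data and hypothetical variable names, could look like this:

```python
import numpy as np
from scipy import stats

def pearson_with_p(x, y):
    """Pearson correlation and two-sided p-value (e.g., stove age vs. PM2.5)."""
    return stats.pearsonr(x, y)

def backward_eliminate(X, y, names, alpha=0.10):
    """Backward stepwise elimination on an OLS model: repeatedly drop the
    predictor with the largest p-value until all remaining p-values are
    below alpha. A simplified stand-in for the SAS procedure."""
    keep = list(range(X.shape[1]))
    while keep:
        A = np.column_stack([np.ones(len(y)), X[:, keep]])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        dof = len(y) - A.shape[1]
        s2 = resid @ resid / dof                     # residual variance
        se = np.sqrt(np.diag(s2 * np.linalg.inv(A.T @ A)))
        pvals = 2 * stats.t.sf(np.abs(beta / se), dof)
        worst = int(np.argmax(pvals[1:]))            # ignore the intercept
        if pvals[1:][worst] <= alpha:
            break                                    # all predictors significant
        keep.pop(worst)
    return [names[i] for i in keep]
```

With a strong simulated "flue cleaning" effect and a noise "stove age" covariate, the procedure retains the former, mirroring the pattern reported in Tables 2 and 3.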
Levoglucosan collected by filters showed a positive linear relationship (R 2 = 0.92, p < 0.01) with the particulate mass collected on the filter (Figure 3).
The Age of Wood Stoves Did Not Play a Role in PM 2.5 or Levoglucosan Levels
The age of wood stoves was not correlated with either PM 2.5 or levoglucosan (Figure 4). Over the seven-day measurement period, neither peak PM 2.5 emission (R 2 = 0.01, p = 0.555) (Figure 4A) nor mean PM 2.5 values (R 2 = 0.00, p = 0.893) (Figure 4B) were associated with the age of the stoves. Similarly, the mean concentration of levoglucosan (R 2 = 0.07, p = 0.230) was not associated with the age of the wood stoves (Figure 4C).
Cleaning of Wood Stoves Was Associated with PM 2.5 Emission
Using a univariate linear regression model, we identified that only the variable "flue cleaning" was associated with PM emission (Table 2). When using the full model that includes all variables in multivariate linear regression, we did not find an association of any of the covariates with PM (Table 3).
Discussion
The current study identified that the age of wood stoves is not correlated with PM emission; rather, it is the maintenance and cleaning of the flue that is correlated with household air pollution due to wood stoves. Furthermore, the PM 2.5 emission level was positively associated with the level of levoglucosan. In the current study, the level of indoor PM 2.5 was significantly higher than the WHO's recommended levels, suggesting that people who use wood stoves are exposed to high levels of WS, even in high-income countries. Our findings are similar to the levels of indoor PM 2.5 in homes with wood stoves that were measured in earlier studies [10,11,31,37,38]. Over 24 h, PM 2.5 concentrations ranged from 24 to 60 µg/m 3 , whereas sampling over a 2 h cooking period exceeded 1000 µg/m 3 [38].
The peak levels of PM emission are expected to be high during the cooking or heating periods. In another study, the measured PM 2.5 levels were higher (1910 to 6030 µg/m 3 ) than in our study when the measurements were taken during the cooking period [39]. However, these high concentrations in PM emissions may be the result of the types of stoves used. Mean PM 2.5 levels of 5310 µg/m 3 and maximum PM 2.5 levels of 13,800 µg/m 3 were measured in the homes that used open fire and in some homes with Plancha or Lorena wood stoves. In contrast, all the homes in our study used stoves with chimneys, and the PM 2.5 was measured over a 7-day period. The peak value of PM 2.5 in our study exceeded 3000 µg/m 3 during the cooking and heating periods in four homes (>10% of homes studied). These levels of PM 2.5 concentrations are usually thought to be present only in low-income countries [40]. Replacing older wood stoves with newer EPA-certified wood stoves was widely encouraged to help reduce household air pollution [10,31,37,41]. The wood stove changeout program in Libby reduced indoor PM 2.5 concentrations by >70%, but the first study did not provide any information on the sustainability of the effect. The follow-up study conducted multiple samplings over subsequent winters and found large variability in the average PM 2.5 levels across the homes: several homes had higher concentrations than pre-changeout [11]. In the original study [10], only one measurement was taken following the changeout. Although multiple samplings from 21 homes in the follow-up study suggested, on average, a 53% reduction in PM 2.5 in 16 homes, 7 homes demonstrated no reduction post-changeout. Interestingly, samplings from seven of the homes exhibited even higher levels of PM 2.5 than pre-changeout [10]. Another study [41] revealed that the wood stove changeout program reduced phenolics and PAH compounds on average by 64%, while the mass of PM 2.5 was reduced by only 20%.
However, in that study, EPA-certified stoves were installed and operated with efficiently burning wood. The efficient combustion of wood in modern, certified stoves may also contribute to lowering PM emissions. Wood with higher moisture content produces a higher quantity of PM compared to dry wood [42,43]. High moisture content causes incomplete wood combustion, resulting in high emission of PM. Therefore, indoor WS levels are likely determined by the type of wood used, its moisture content, the combustion appliance, and the combustion phase, all of which affect PM generation [44][45][46]. These studies determined that the type of wood and airflow play a role in the amount and types of PM, including the generation of pyrolysis products. Studies with more controlled burning conditions are needed to determine the contribution of these factors to PM generation by wood burning. Next, we measured levoglucosan, which forms from the pyrolysis of the starch and cellulose of wood. Neither hydrolysis, biodegradation, nor even the combustion of fossil fuel produces levoglucosan. Cellulose combustion generates levoglucosan, which is considered a tracer for biomass burning [47][48][49][50]. Depending on the air supply, the ratio of levoglucosan to total particle emission from wood burning was reported to be in the range of 3-17%. Nevertheless, as a fine particle, levoglucosan constitutes a large fraction of the total particles emitted from wood burning. Studies with levoglucosan measurements as a marker of wood smoke from wood stoves are sparse. The measurements of levoglucosan in our study were carried out over 7 days, and levoglucosan is stable over 10 days [51]. The levoglucosan level detected in the homes of the current study was similar to the levels previously reported (mean 300 ng/m 3 ) in a different study [52].
The PM 2.5 level was associated with levoglucosan, but no association was found between levoglucosan and the age of the stoves, which is in line with the finding on the association between PM 2.5 and the age of the stoves. However, reports of cleaning the flue were able to explain 16% of the variance in the PM 2.5 level [32]. In addition to best burning practices, operation and maintenance, including flue cleaning, may have a significant influence on the emission of PM. A previous study [32] showed higher PM 2.5 in the homes that reported cleaning their chimney more than 12 months before the sampling period compared to those that cleaned chimneys within 6 months of the sampling period. Regular cleaning of the flue is crucial to reduce indoor PM emissions, as ashes can accumulate and clog the passage of the WS outdoors. Further, air intake vents can improve wood-burning efficiency [47]. Cleaning of the flue may thus increase wood-burning efficiency through better air intake, generating less PM and reducing PM emissions. To reduce indoor household pollution, education on wood stove maintenance and safe burning practices is more important than replacing old stoves with new stoves. The limitations of this study include the relatively small number of homes studied; more extensive characterization of the WS in all 30 homes could have provided detailed information on the exposures experienced by all study participants. Future studies should inquire about the experience and skill of the individual who operates and maintains the wood stove and include homes without wood stoves for comparison. Furthermore, determining the effect of different types of wood on smoke generation and identifying the wood types that may be less likely to clog the flue would be the most efficient path to reducing indoor PM emissions. Although all the homes were in the same area, the effect of ambient air pollution on the indoor air should be considered.
Further, 23% of the household participants currently smoked, and although smoking did not seem to be a factor associated with PM, it is possible that it could have affected the measurements. A strength of our study is the positive association of PM 2.5 with levoglucosan levels, confirming that the PM indeed represents WS. In line with earlier studies [45-58], our study further determined that levoglucosan is a useful wood-burning marker for studying indoor air pollution in connection with WS. Furthermore, the homes included in this study were selected at random, but all the homes had similar architecture and ventilation, and the types of wood stoves are representative of the stoves used in the area. Indoor air pollution varies in different geographical regions or areas. We are not aware of an earlier study conducted in New Mexico that investigated the age of wood stoves and household air pollution.
Conclusions and Future Perspectives
Although changeout with EPA-certified wood stoves was suggested as a strategy to minimize exposure to PM 2.5 , our study suggests that the level of indoor PM is not associated with the age of wood stoves but rather with flue cleaning. In addition, better characterization of PM 2.5 is recommended to confirm its origin. Additionally, further studies with larger sample sizes are needed to elucidate whether the moisture content of the wood and cleaning the flue improve indoor air quality. Installing proper stoves, operating them following best burning practices with proper wood selection and maintenance, and regularly cleaning the flue may reduce the level of PM emissions from wood stoves. Whether smoke from different types of wood or moisture content may result in the rapid clogging of the flue should also be investigated. Finally, whether flue cleaning will reduce household air pollution to an extent that translates into health benefits will need further investigation.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/toxics10100615/s1, Figure S1: PEM-10-2.5 in disassembled form. Multiple jets of the single-stage impactor can be seen on the red inlet part. The impaction surface on the left shows collected particles greater than 2.5 µm. The center part shows the filter (mostly black with a white boundary); Figure S2: PEM installed on a rod attached to an electric cooler enclosure.
Funding: This work was funded by NIH R15 HL115544 and RO1 HL140839 and RO1 HL068111.
Institutional Review Board Statement: All procedures performed for the study were in accordance with the approval of the Ethical Review Authority.
Informed Consent Statement: All the subjects gave informed consent for the study.
Data Availability Statement: Not applicable.
Optical counterparts of two ultraluminous X-ray sources NGC4559 X-10 and NGC4395 ULX-1
We study the optical counterparts of ultraluminous X-ray sources NGC4559 X-10 and NGC4395 ULX-1. Their absolute magnitudes, after taking the reddening into account, are $M_V \approx -5.3$ and $M_V \approx -6.2$, respectively. The spectral energy distribution of the NGC4559 X-10 counterpart is well fitted by a spectrum of an F-type star, whereas NGC4395 ULX-1 has a blue power-law spectrum. Optical spectroscopy of NGC4395 ULX-1 has shown a broad and variable HeII~$\lambda$4686 emission, which puts this object in line with all the other spectrally-studied ULXs. Using the Swift archival X-ray data for NGC4395 ULX-1, we have found a period of $62.8\pm 2.3$ days. The X-ray phase curve of the source is very similar to the precession curve of SS433. The optical variation of the counterpart (between two accurate measurements) amounts to 0.10 mag. Analyzing the absolute magnitudes of 16 well-studied ULX counterparts, one may suggest that as the original accretion rate decreases (but nevertheless remains supercritical), the optical luminosity of the wind becomes dimmer and the donor star dominates. However, an observational bias may also influence the distribution.
X-ray studies of ULXs have shown that the behavior and shape of the spectra of these objects differ strongly from what is observed in galactic black holes. The often observed high-energy curvatures (Stobbart, Roberts, & Wilms 2006; Gladstone, Roberts, & Done 2009; Caballero-García & Fabian 2010), with a downturn between ∼ 4 and ∼ 7 keV in the X-ray spectra of ULXs, suggest that the ULX accretion discs are not standard. The inner parts of the accretion discs may be obscured by a hot outflow or optically thick corona (Gladstone, Roberts, & Done 2009), which comptonizes radiation from the inner disc.
Observations of ULXs and their environment in the optical range provide additional information about the objects themselves, e.g. about the masses of their progenitor stars, which turn out to be greater than 50 solar masses (Poutanen et al. 2013). Observations of nebulae surrounding ULXs (Pakull & Mirioni 2002; Lehmann et al. 2005; Abolmasov et al. 2007; Kaaret et al. 2010) testify that they form due to jets or powerful winds. All ULXs identified in the optical range (about 20 objects are identified reliably) are faint sources with mV = 21−24 (Tao et al. 2011), the brightest counterpart being ULX P13 in NGC 7793, mV ≈ 20.5 (Motch et al. 2014). At present, the spectra of fewer than 10 optical counterparts of ULXs have been studied. Fabrika et al. (2015) have shown that optical spectra of ULXs contain broad emission lines of HeII λ4686 and of hydrogen Hα and Hβ with FWHM ∼ 1000 km/s. All the optical spectra turned out to be similar to each other (see also Roberts et al. 2014), and to the spectra of WNLh-type stars (Sholukhova et al. 2011) and SS 433 (Fabrika 1997, 2004). In the mentioned paper it was also assumed that the studied ULXs represent a uniform class of objects that are probably supercritical accretion discs. Recent spectroscopy of M81 ULS-1 (Liu et al. 2015) has shown, in addition to the broad HeII, Hβ, and Hα lines, the presence of blueshifted and redshifted Hα lines forming in baryonic relativistic jets. Previously, the relativistic lines were observed only in SS 433 (Fabrika 2004). Here we present the identification of the optical counterpart of NGC 4559 X-10 and accurate astrometry for NGC 4395 ULX-1. The first source, with an X-ray luminosity of LX ∼ 7 × 10 39 erg/s, is located in a star-forming region of a late-type spiral galaxy at a distance of 7.3 Mpc (Tully et al. 2013). X-10 was studied in the optical range by Cropper et al. (2004) and Ptak et al. (2006), but they were unable to find an unambiguous identification for this object.
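The absolute magnitudes quoted for ULX counterparts follow from the distance modulus, and the X-ray luminosities from the inverse-square law; a minimal sketch (function names are hypothetical, values from the text):

```python
import math

def absolute_mag(m_app, d_mpc, a_v=0.0):
    """Distance modulus: M = m - 5*log10(d / 10 pc) - A_V, with d given in Mpc."""
    return m_app - 5.0 * math.log10(d_mpc * 1e6 / 10.0) - a_v

def xray_luminosity(flux_cgs, d_mpc):
    """L = 4*pi*d^2*F for an observed flux in erg/s/cm^2 (1 Mpc = 3.086e24 cm)."""
    d_cm = d_mpc * 3.086e24
    return 4.0 * math.pi * d_cm**2 * flux_cgs
```

For example, the dereddened mF555W ≈ 24.04 of the X-10 counterpart at 7.3 Mpc gives MV ≈ −5.3, matching the value in the abstract.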
The luminosity of NGC 4395 ULX-1 is about 4 × 10 39 erg/s in its bright state. The source is located in a nearby Seyfert galaxy at a distance of 4.76 Mpc (Tully et al. 2013). The optical counterpart of NGC 4395 ULX-1 was first identified in Gladstone et al. (2013). We also report the presence of a long-term X-ray variability in NGC 4395 ULX-1 and present its optical spectroscopy. We analyze the spectral energy distributions of both ULXs using the Hubble Space Telescope (HST) data and discuss the optical luminosities of these two sources in comparison with other well-known ULXs.
Astrometry and optical counterparts
The archive images from the Chandra X-ray Observatory and HST were used to identify the optical counterparts of NGC 4559 X-10 and NGC 4395 ULX-1; reference sources were used to improve the relative astrometry. In the case of NGC 4559 X-10, the reference object was the well-known ULX NGC 4559 X-7 (e.g., Soria et al. (2005); Tao et al. (2011)). Both sources are located on chip S3 of ACIS with a moderate offset from the optical axis (less than 1.9 ′ ) in the Chandra observation (ID 2026). The best angular resolution HST observation of the X-10 region was taken on March 9, 2005, with ACS/HRC in the F555W filter. Since X-10 and X-7 are located in different images in all HST observations, for X-7 we chose an ACS/WFC/F550M image taken on the same date. In order to correct for the offset in coordinates between the HST observations of these two sources we used a g-band SDSS (Sloan Digital Sky Survey; Alam et al. 2015) image. For the offset between the HST images of X-10 and SDSS we used two additional reference stars and a bright pointlike optical source in the nuclear region of the galaxy. To determine the shift between the HST image of X-7 and the SDSS image we used four bright isolated stars near X-7. Finally, using the corrected position of the X-7 counterpart in the optical and Chandra images we derived a position for X-10 on ACS/HRC/F555W of R.A.
= 12 h 35 m 58 s .512, Dec = + 27 • 57 ′ 42 ′′ .87 (J2000.0) with a 1σ accuracy of 0.15 ′′ . To make astrometric measurements for NGC 4395 ULX-1, we chose the HST observation taken on March 31, 2014, with WFC3/UVIS in the F438W filter. There is only one Chandra observation of NGC 4395 ULX-1 (ID 402), with a large offset from the optical axis of 4.5 ′ . Due to the considerable offset from the optical axis, the PSF shape of the object is strongly distorted, which leads to a relatively low accuracy of its coordinate measurements. Three X-ray sources from Chandra which we identified in the SDSS image were used as reference sources. The corrected position of ULX-1 relative to HST is R.A. = 12 h 26 m 01 s .437, Dec = + 33 • 31 ′ 31 ′′ .18 with an accuracy of about 0.3 ′′ . In Fig. 1 we present the positions of our ULXs. There is a single relatively bright object in the HST image within the corrected Chandra X-ray error box of NGC 4559 X-10; a much fainter feature lies near the edge of the error circle. The bright source has a diffuse morphology with a sharp brightening toward the center, its size being ≃ 0.09 ′′ × 0.08 ′′ , whereas the surrounding stars have a full width at half maximum of FWHM ≃ 0.05 ′′ . Apparently, this source is a stellar-like object surrounded by faint unresolvable stars. The only optical counterpart of NGC 4395 ULX-1 within the error box of the X-ray coordinates is a stellar-like object.
Photometry and spectral energy distributions of the optical counterparts
To study the spectral energy distributions (SEDs) in the optical range we used ACS/HRC images in F435W, F555W, and F814W for NGC 4559 X-10, and WFC3/UVIS F275W, F336W, and F438W images for NGC 4395 ULX-1, from the same HST datasets as for astrometry. Photometry was performed on drizzled images using the apphot package in iraf. All magnitudes are given in the Vegamag system. The background was estimated from a concentric annulus around the objects.
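Aperture photometry with annulus background subtraction of this kind can be sketched in pure NumPy (the pixel values, aperture radii, and zeropoint below are hypothetical; the actual measurements used iraf/apphot with SYNPHOT corrections):

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap=3.0, r_in=6.0, r_out=9.0):
    """Sum counts in a circular aperture and subtract the median background
    estimated in a concentric annulus around the source."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    ap = r <= r_ap                      # source aperture
    ann = (r >= r_in) & (r <= r_out)    # background annulus
    bkg = np.median(img[ann])
    return img[ap].sum() - bkg * ap.sum()

def to_mag(counts, zeropoint=25.0, a_lambda=0.0):
    """Instrumental magnitude with a Vegamag-style zeropoint and an
    extinction correction A_lambda (both hypothetical here)."""
    return zeropoint - 2.5 * np.log10(counts) - a_lambda
```

On a synthetic frame with a flat background and a single bright pixel, the background-subtracted counts recover the injected source flux.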
We have performed photometry of the bright optical counterpart of NGC 4559 X-10 and of the faint source near the error circle boundary (Fig. 1). To reduce the contribution of faint stars surrounding the X-10 counterpart, we chose a small aperture with a 2-pixel radius (0.05 ′′ ). Aperture corrections were calculated using the CALCPHOT procedure of the SYNPHOT package. We were unable to measure the aperture corrections directly because of the small field of view of ACS/HRC and the high density of stars in this region. The reddening correction was carried out in the SYNPHOT package using the extinction measured from our spectra (see below). The dereddened magnitudes are mF 435W = 24.38 ± 0.07, mF 555W = 24.04 ± 0.04, and mF 814W = 23.68 ± 0.04. The fainter source at the X-10 error box boundary is identified as a relatively isolated source only in the images in filters F435W and F555W. Its dereddened magnitudes are mF 435W = 25.05 ± 0.12 and mF 555W = 24.91 ± 0.09. Photometry of the NGC 4395 ULX-1 counterpart was performed in an aperture with a 3-pixel radius. Aperture corrections for stellar magnitudes in each filter were determined by 3−5 bright stars. Extinction in the optical range was determined from spectroscopy of a nebula around the object. The extinction-corrected stellar magnitudes of the ULX-1 counterpart are mF 275W = 19.971 ± 0.015, mF 336W = 20.497 ± 0.016, and mF 438W = 22.075 ± 0.016. The stellar magnitude errors for all objects do not include the uncertainty related to reddening. The filter wavelengths were corrected for the spectral slope; they were calculated using the CALCPHOT task with the avglam parameter in the SYNPHOT package. The fluxes of the ULX-1 counterpart are in good agreement with earlier measurements. During our spectroscopic observations of ULX-1 with the Russian 6 m BTA telescope we also obtained V-band images.
The source magnitudes for our three best-seeing observations are 22.14 ± 0.10 (January 2014), 22.18 ± 0.12 (January 2015), and 22.15 ± 0.12 (February 2015). All magnitudes are close to those of HST.
Spectroscopy
Long-slit spectroscopy of both ULXs was obtained using the BTA telescope with the SCORPIO spectrograph (Afanasiev & Moiseev 2005). All data reduction and calibrations were performed with MIDAS procedures. For NGC 4559 X-10, we have data obtained on December 17, 2015, with a 13 Å resolution in the 3500-7200 Å spectral range. The seeing was 1.5 ′′ . The optical counterpart of NGC 4559 X-10 is an extremely faint source, and we managed to obtain only a spectrum of its environment. From the spectra of its nearest nebulae we determined the reddening value. Using the ratios of the nebula lines Hα, Hβ, Hγ, and Hδ we found consistent estimates, E(B − V ) = 0.26 ± 0.06. NGC 4395 ULX-1 was observed on January 1, 2014 and January 17, 2015 in a spectral range of 4000-5700 Å and on February 21, 2015 in a spectral range of 3600-5400 Å. The resolution was 5 Å, and the seeing was ≈ 1 ′′ . We also observed ULX-1 and its environment on March 11, 2016 in the same mode as for X-10. Measuring the extinction value for NGC 4395 ULX-1 is complicated due to the low brightness of the nebula lines surrounding the object and a patchy background. For NGC 4395 ULX-1 we obtained a reddening of E(B − V ) = 0.23 ± 0.13. The normalized optical spectra of NGC 4395 ULX-1 obtained in 2014 and 2015 are shown in Fig. 2. The narrow hydrogen spectral lines Hβ and Hγ and the [OIII] λλ4959,5007 lines belong to a compact nebula surrounding the source. The profile of the HeII λ4686 line appears to be variable. In the spectra obtained in January 2014 and 2015 we found a broad component of HeII with an average width of FWHM ≈ 700 km/s.
In February 2015 the broad line component was not detected: the line is well fitted by a Gaussian profile with FWHM = 310 ± 40 km/s, which agrees well (within the errors) with the spectral resolution.
X-ray variability of NGC 4395 ULX-1
To test the X-ray variability of the source we used Swift/XRT observations. The Swift archive contains a total of 226 data sets obtained between December 2005 and April 2015; however, most of them were obtained in 2008 and 2011. Using all the available observations we extracted light curves and spectra from a circular region with a 25 ′′ radius. The background region was taken in a nearby area free of other sources. We found that the X-ray flux of NGC 4395 ULX-1 varies from 0.008 to 0.05 cnt/s, which corresponds to X-ray luminosities from 5.6 × 10 38 to 3.5 × 10 39 erg/s in the 0.3-10 keV range. The upper point in the light curve in Fig. 3 corresponds to a luminosity of 4.0 × 10 39 erg/s. The background was about 3.2 × 10 −4 cnt/s during all observations. The X-ray spectra are well fitted by the two-component model tbabs*(diskbb+powerlaw), yielding a disc temperature of T d = 0.22 ± 0.03 keV and Γ = 2.9 ± 0.5 in the bright state of the object and T d = 0.19 ± 0.01 keV and Γ = 3.5 ± 0.3 in the faint state. We have adopted NH = 0.25 × 10 22 cm −2 , which corresponds to the optical value E(B − V ) ≈ 0.23. We conclude that despite the notable variations in the object's luminosity (by a factor of ∼ 6) the spectrum is unchanged. To search for periodicity in the X-ray light curve we computed the Lomb-Scargle periodogram (Lomb 1976; Scargle 1982). The most prominent peak, greater than the 4σ level (the false alarm probability is 3.5 × 10 −5 ; Horne & Baliunas (1986)), corresponds to a period of 62.8 ± 2.3 days. The light curve folded on this period is shown in Fig. 3. The reference epoch is MJD 53735.
DISCUSSION AND CONCLUSIONS
The spectra of all optical counterparts, including NGC 4395 ULX-1, are similar to one another.
They are NGC 1313 X-2 (Roberts et al. 2011), NGC 5408 X-1 (Cseh et al. 2011), NGC 7793 P-13 (Motch et al. 2014), Holmberg II X-1, Holmberg IX X-1, NGC 4559 X-7 and NGC 5204 X-1 (Fabrika et al. 2015), and M81 ULS-1 (Liu et al. 2015). The main feature of the spectra is the broad HeII line with FWHM ≈ 500-1600 km/s. The detection of a broad HeII line in the spectrum of the NGC 4395 ULX-1 counterpart puts this object in line with the other ULXs and may indicate their identical nature. One may conclude that ULXs represent a roughly homogeneous class of objects, because broad HeII emission is rare in the optical spectra of other types of objects. Fabrika et al. (2015) presented arguments as to why this type of spectrum cannot belong to a WR-type star. Recently, an ultraluminous X-ray pulsar (Bachetti et al. 2014) has been detected, reaching an X-ray luminosity of 1.8 × 10^40 erg/s. Obtaining optical spectra of this object is an important task. The HeII line in NGC 4395 ULX-1 has a two-component profile; the broad component was detected in only two spectra. The narrow component of the line is present in all three spectra (Fig. 2). It is possible that the broad component of the HeII line correlates with the X-ray light curve (Fig. 3): in the fainter state the broad component disappears. The X-ray phase curve of NGC 4395 ULX-1 may be connected to the precession of the supercritical disc. This is supported by the similarity of the phase curve to the precession curve of SS 433 (Cherepashchuk et al. 2009). On the other hand, we did not detect a correlation between the object's optical brightness and the width of the HeII line. However, we have only three spectral observations of ULX-1; therefore, the behavior of the broad component with the X-ray period phase should be confirmed by further observations. Of all the objects in the diagram, three ULX counterparts, NGC 4559 X-10, NGC 5474 X-1, and M66 X-1 (Avdan et al.
2016) have the cool spectra of F-G type supergiants, their absolute magnitudes being M_V > −5.3. The other objects have power-law-like spectra. The abrupt decrease in the number of objects with decreasing M_V can be related both to effects of observational selection and to the physics of the objects themselves. The first possibility is explained by the faintness of the objects, which makes it difficult to detect them in galaxies farther than 10 Mpc. The second possibility can be related to the decrease in the luminosity of the supercritical disc wind as the original accretion rate Ṁ decreases. As was shown by Fabrika et al. (2015), the optical luminosity of supercritical discs is roughly L_V ∝ Ṁ^(9/4), because stronger winds reprocess more of the X-ray radiation emerging from the disc funnel. In the case of ULXs with the lowest optical luminosity, a considerable contribution to their luminosity can be made by the donor stars. This is suggested by the cooler (on average) spectra of faint objects with M_V > −5.3. Accordingly, as the luminosity of the supercritical disc wind decreases, the donor star becomes dominant.

Acknowledgements

Our results are based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. This research has made use of data obtained from the Chandra Data Archive and software provided by the Chandra X-ray Center (CXC) in the application package CIAO. This work has made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. The research was supported by the Russian RFBR grants 16-32-00210 and 16-02-00567, and the Russian Scientific Foundation grant N 14-50-00043 for observations and data reduction. The authors are grateful to A.F. Valeev for his help with the observations.
“Mesmerizing and Terrifying”: Senusret III’s Unique Macrotia

The ancient Egyptian pharaoh Senusret III was a legend to both his contemporaries and his descendants: an ideal of kingly power whose legacy of control and intimidation was remembered for centuries. Of particular note is the unique macrotia that the king's statues display. In this paper, we discuss possible etiologies of Senusret's unique presentation and ultimately conclude that Senusret's immortalized features were likely rooted in propaganda rather than a medical cause.

Introduction and Background

In his writings on the history of Egypt, the ancient Greek historian Herodotus wrote of a legendary pharaoh who had lived centuries earlier. Well over a millennium after his passing, the ruler whom Herodotus described as "Sesostris" was remembered by the Egyptians as an unmatched ideal of kingly power [1], a great conqueror who had spread his domain to an unprecedented scale through an unstoppable campaign of expansionist imperialism [2]. Some legends depicted him as a conqueror who had crushed all of Asia, even greater than Alexander the Great [1]. Although it often exaggerated his exploits into hyperbole, the mythmaking of later generations had a strong basis in truth. Senusret III, also known as Senwosret III, was one of the most powerful pharaohs of ancient Egypt's Twelfth Dynasty. In a lineage that saw a massive centralization of authority and the expansion of the state's power, Senusret III was notable among his peers for the heights of his ambition and the ruthlessness with which he pursued his goals [3].
Review

Senusret III

Born to Senusret II and his sister-wife, Khenmetneferhedjet-waret [4], Senusret III rose to the Egyptian throne in approximately 1836 BCE [3]. He quickly perfected the "despotic model of monarchy" that his predecessors of the Twelfth Dynasty had developed [3], and almost immediately set his scribes and poets on a massive propaganda campaign that produced some of ancient Egypt's most famous works of literature. Pieces such as The Cycle of Hymns, The Complaints of Khakheperraseneb, and The Admonitions of Ipuwer, all compositions that featured significant propaganda praising the monarchy's endless virtues, were specifically targeted at the literate upper class. Simultaneously, Senusret tightened his grip on the powerful provincial governors who ruled over the banks of the Nile. In the past, these governors, also known as nomarchs, had torn the kingdom apart when a series of weak kings allowed them to gather too much power. Ceding power was not a mistake that Senusret would ever make. Throughout his reign, he successfully beat and bullied both his immediate circle and the provincial elites into utter submission to the throne [3]. In a single generation, the previous model that had ruled Egypt for centuries, nomarchs controlling individual provinces while paying lip service to a weak and ineffective pharaoh, had completely vanished [3]. Even the private tombs that the nomarchs had once constructed in their provinces vanished, as the once-powerful elites frantically jostled one another to be buried as close to the royal court as possible [5]. Only Senusret's hands could be allowed to steer the ship of state [3].
While literature and political maneuvering served to keep the kingdom's elites in line, control over the kingdom's numerous and widely illiterate population required different strategies. Like his forebears, Senusret set out on a series of ambitious building projects that showcased the "obsession with rigid planning" that characterized much of the Twelfth Dynasty [3]. He soon had a pyramid constructed at Abdju, and to house his workers, he had a mathematically laid-out town built near it called Wah-sut-Khakaura-maa-kheru-em-Abdju, meaning "enduring are the places of Khakaura (Senusret's throne name), the justified, in Abdju" [3]. The state enacted a program of surveillance, particularly on the southern borders, where, "in an atmosphere of nervousness approaching paranoia", patrols constantly roamed the countryside, stopping and searching locals at will [3]. The dispatches that his commanders sent to their monarch frequently ended with the phrase: "All the affairs of the King's Domain (life, prosperity, health!) are safe and sound" [3]. But it was not enough to simply secure the South. In the 8th, 10th, 16th, and 19th years of his reign, Senusret launched a series of bloody campaigns into Nubia, expanding the Egyptian kingdom over an unprecedented amount of territory [5]. A series of mighty fortresses with names like "Destroying the Nubians", "Subduing the foreign lands", and "Suppressing the Nubians" secured Egyptian control over its broken southern neighbor [3]. At the time, the forts were both military and logistical marvels, providing an integrated and unprecedented system to support the Egyptian occupation [3]. The legacy of Senusret's campaigns would last for centuries; eventually, he would be venerated as a god in Wawat, the ever-shifting border between the rival kingdoms [3]. "His Majesty's tongue restrains Nubia," inscriptions praising the king boasted, "His utterances make the Asiatics flee" [2].
Although his accomplishments in propaganda, surveillance, conquest, and control already outpaced the other members of the Twelfth Dynasty, Senusret III also developed a new tool for projecting his power across Egypt: portrait sculpture. As Wilkinson notes, "Never before in the history of ancient Egypt had a king used sculpture so effectively to project so terrifying an image of royal power" [3]. Where previous pharaohs had official statues that portrayed them with eternally youthful, idealized portraits, Senusret was the first to adopt a new, realistic sculptural style. This "frank portrayal" was shocking to his contemporaries, depicting "protruding ears, rounded, projecting eyes with prominent lids, pouches beneath the eyes… and a generally downturned mouth with mounds of flesh at the sides" [6]. His face also featured worry lines and bulging, hooded eyes. The king is often depicted with deep nasolabial folds, giving a distinctive "disconsolate look" [7]. While the king always retained a "taut, muscular, and virile" torso reminiscent of a young warrior in his prime, his face, radically and unnaturally realistic by contemporary standards, was at once "mesmerizing and terrifying" to those who viewed it [3]. Adding to the effect, each of the statues had to be treated with the same awe and reverence that one might reserve for the pharaoh himself, as an "individual divinity worthy of its own offerings" [5]. Just as terrifying was the sheer quantity of statues made, many of which remain to this day [3], and in many temples, Senusret "tripled or quadrupled his presence, adding multiple statues, each with the same stature, yet none exactly the same....he seemed everywhere at once, his statues terrifying in their mastery of stone" [5]. The upper class of Egypt followed their pharaoh's example, mimicking his unique features in their statuary. However, none of them dared to depict themselves with the same muscular body as Senusret III, for all power, symbolic and otherwise, belonged in the
hands of no one but the pharaoh. The best that they could hope for was to be represented as extensions of the king's body, "watching on his behalf" [5]. The purpose of such terrifying imagery was quite blatant. In one of his fortresses in Nubia, Senusret established a statue of himself in a special shrine for his soldiers. The inscription reads: "My Majesty has had an image of My Majesty made upon this frontier… so that you will be steadfast for it, so that you will fight for it" [3]. Even on the utmost borders of his domain, the king's intimidating visage elicited a powerful mix of reverence and fear [3]. Senusret's features are notable for one trait in particular: his enlarged ears, prominently displayed across a majority of his imagery [4], as shown in Figures 1-3. Senusret's statuary evolved over his reign, from very youthful statues to ones that depict a man in his sixth or seventh decade of life [8]. Even in the earliest years of his rule, the monarch was depicted with enlarged, high-set ears [4]. Given the new school of realistic sculpture that Senusret brought to the forefront of Egyptian society [3], as well as his desire, both cultural and religious, to preserve his individual identity for all time [9], it is possible that these depictions were consistent with the king's actual features. Given this, we propose that there may be a medical explanation for Senusret's unique appearance in the archaeological record.
Macrotia

Congenital anomalies of the external ear are rare and exhibit a broad spectrum in terms of type and severity. Prominent ears, sometimes known as macrotia or protruding ears, are one of the most common deformities and may affect up to 5 percent of the population [13,14]. In a normal human ear, the superior aspect of the ear usually approximates the height of the brow, and the width of the ear corresponds to approximately 50-60 percent of its length [14,15]. In macrotia, the upper third of the ear is most often elongated, rather than the lobe [16,17]. Ear protrusion is related to the concho-scaphal angle (the angle of the ear in relation to the head) and can occur when the conchal bowl is overdeveloped, the antihelical fold is poorly developed, or a combination of these features [18]. Senusret's imagery shares many of these characteristics. Statues of the king recovered at Deir el-Bahari show a distinct overgrowth of the upper concha [4]. Many other statues, particularly those produced in the kingdom's productive Theban workshops, depict the king with an exaggerated angle of inclination between the ears and the cheeks [4], consistent with modern definitions of macrotia [18].
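As a toy illustration of the proportion criterion quoted above, the following hypothetical helper flags a width-to-length ratio outside the 50-60 percent range; the function name and the example measurements are ours, not from the cited studies:

```python
# Minimal sketch (hypothetical helper): check whether an ear's
# width-to-length ratio falls within the normal range of roughly
# 50-60 percent quoted in the text.
def ear_width_ratio_is_typical(width_mm: float, length_mm: float,
                               low: float = 0.50, high: float = 0.60) -> bool:
    """Return True if width/length falls within the quoted normal range."""
    return low <= width_mm / length_mm <= high

# A 35 mm wide, 63 mm long ear (ratio ~0.56) is within the norm;
# the same width on an 80 mm long ear (ratio ~0.44) is not.
print(ear_width_ratio_is_typical(35, 63))  # True
print(ear_width_ratio_is_typical(35, 80))  # False
```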
Syndromes

While prominent ears by themselves do not cause hearing impairment, approximately 30 percent of external ear deformities are associated with syndromes, including additional malformations [19]. Examples of syndromes with related ear deformities include Treacher-Collins syndrome, Crouzon syndrome, Apert syndrome, Wildervanck syndrome, and chromosomal abnormalities such as trisomy 13, trisomy 18, and trisomy 21, among others [19]. While syndromic ear malformations demonstrate autosomal recessive inheritance in 90% of cases, non-syndromic ear deformities such as microtia show a slightly different distribution, with most cases being attributed to autosomal dominant inheritance [19,20]. Various studies of inner ear development have demonstrated the role of molecules such as transcription factors, genes, growth factors, and cell adhesion proteins in ear malformations [19]. Neither Senusret III's father, Senusret II, nor his mother, Khenmetneferhedjet-waret, seems to share his appearance of enlarged ears, which could suggest a spontaneous genetic mutation rather than an inherited or syndromic cause. However, this possibility also remains unlikely. The majority of syndromic ear malformations and genetic abnormalities are associated with microtia or anotia rather than macrotia [19], and hearing changes are very rarely associated with macrotia or protruding ears.
While genetic factors may play a role, a significant number of acquired ear malformations arise from exogenous factors during pregnancy, including viral infections such as cytomegalovirus, herpes simplex virus, rubella, toxoplasmosis, and poliomyelitis [19]. Other factors are as varied as malnutrition, hypoxia, vitamin A deficiency, alcohol, and even noise exposure [19]. In cases such as Pendred syndrome, thyroid hormone deficiency in a pregnant mother may lead to ear malformations in the fetus [19]. Many of these injuries occur during embryogenesis, when the pinna of the ear, along with the tragus, crus helicis, and upper helix, develops from the first branchial groove at approximately 40-45 days following conception. This process is completed approximately four months after conception [19]. It is possible that Senusret III may have experienced an exogenous exposure that affected the development of his outer ears during this period. Unfortunately, over 3000 years after the pharaoh's reign [3], it is impossible to determine what teratogens he may have been exposed to. Despite this, teratogenic factors may be reasonably ruled out: approximately 70-90% of malformations of the outer and middle ear are unilateral [19], and the vast majority present with microtia or anotia rather than macrotia [19]. Given this, it is likely that Senusret's depicted macrotia was due to another process entirely.
Trauma

Senusret III prided himself as a warrior-king and often bragged of his conquests. His monuments frequently featured declarations of his ferocity and ruthlessness: "I have carried off their women and brought away their dependents, burst forth to poison their wells, driven off their bulls, ripped up their barley, and set fire to it," he proudly announced upon the conclusion of a campaign against Nubia [3]. Even for pharaohs, life in ancient Egypt was not always an easy one. It was not unknown for the highest rulers of Egyptian society to fight in the front lines; later figures such as Taa fought and died in combat, their mummies still bearing the blows and wounds that felled them in life [3]. Physical education was often stressed just as much as intellectual pursuits for the future rulers of Egypt, and elites were often expected to participate in sports such as wrestling, rowing, running, and swimming [3]. Given his proclivity for warfare and his likely upbringing, it is possible that Senusret III may have experienced trauma to his ears as a young man. A significant cause of ear deformities is exogenous force and trauma; contemporary examples include bites (both animal and human), traffic accidents, and burn injuries [21]. Similar traumatic accidents would have been present in the violent world of ancient warfare. However, the bilateral nature of Senusret III's macrotia detracts from this theory, as exogenous force would be more likely to cause asymmetrical defects [21]. Interestingly, a number of Senusret's statues depict him wearing headgear that would have offered his ears a degree of protection during combat [4]. While members of the Egyptian royal family often participated in warfare at young ages [3], Senusret's bilateral macrotia suggests that trauma is an unlikely explanation.
Classic "cauliflower ear", as often experienced by boxers and wrestlers, is a rare exception and may present with deformities of both ears. However, these deformities, secondary to repeated trauma leading to scarring and improper regeneration of the auricular cartilage, are characterized by "thickening, irregular projection of the anterior ear and distortion of the auricular outline" [22]. Senusret's ears, while depicted as enlarged in his statuary, show no signs of the visible deformity or distortion associated with this phenomenon [4].

Arterio-venous malformations

Although rare, macrotia has been identified in relation to congenital arterio-venous malformations (AVMs) [23,24]. The most common sites of AVMs in the face include the cheek, ear, lips, nose, and forehead, and they usually involve blushing of the skin or the presence of a birthmark [25]. While this is an interesting and possible cause of macrotia, it is unlikely to have affected Senusret III, as his ears were bilaterally enlarged [4], and there are no reports of him being affected by any similar conditions.

Aging

The role of natural aging on the ear should be considered. Earlobes naturally elongate with age due to the breakdown of collagen and elastin in the skin. This can become even more obvious with the use of earrings and other ear accessories, jewelry that Senusret III and other pharaohs of the time would have used often [26]. However, this theory can also be discounted. While the rest of his features display a spectrum that appears to subtly change over the progression of his reign, possibly consistent with the king's actual physical aging [8], his ears appear consistently enlarged [3].
Propaganda

Of all the theories that we propose, propaganda is the most likely. From the earliest days of his reign, Senusret III displayed a remarkable talent for mythologizing [3] and skillfully developed a cult of personality that lasted for centuries [2]. Given this, the depiction of his enlarged ears was likely symbolic, a message that the king was all-hearing, a warning consistent with the powerful police-state apparatus that he strengthened and enlarged [3]. While this depiction was a new innovation for Egyptian rulers, it had already been done centuries earlier by rulers such as Gudea of Lagash in nearby Mesopotamia. Notably, Gudea's depiction of enlarged ears was "meant to show him as a wise and attentive leader" [4], a depiction with less intimidating connotations than Senusret's visage. It is likely that Senusret's imagery did have some basis in reality; there does appear to be a correspondence between his features and the king's aging throughout his reign [8]. However, only a tiny fraction of the Egyptian population would have ever seen Senusret in person. Given this, it would have been in the king's interest to alter his features as needed for propaganda purposes [4]. Even the uniquely haggard appearance of the pharaoh could have served as a metaphor for "intellectual strength in maturity", in a culture that associated age with knowledge, while his slightly bulging eyes symbolized eternal vigilance [6]. Senusret's actual features remain unknown to the present day, as his mummy has never been found [4].

Conclusions

The realistic depictions of Senusret III, unprecedented in the records of ancient Egypt, are notable for their depiction of macrotia. While a number of medical etiologies are possible, including trauma, congenital syndromes, arterio-venous malformations, and aging, we ultimately conclude that Senusret's macrotia is a product of propaganda designed to reinforce the image of the pharaoh as an all-hearing monarch.
FIGURE 1: An image of Senusret III as a sphinx. This work was obtained from the Metropolitan Museum of Art and is part of the Public Domain [10].

FIGURE 2: Granodiorite statue of Senusret III. Obtained from the British Museum under the CC BY-NC-SA 4.0 license [11].

FIGURE 3: A series of statues featuring Senusret III. Obtained from the British Museum under the CC BY-NC-SA 4.0 license [12].
The Untold Story of the Caudal Skeleton in the Electric Eel (Ostariophysi: Gymnotiformes: Electrophorus)

Alternative hypotheses have been advanced as to the components forming the elongate fin coursing along the ventral margin of much of the body and tail, from behind the abdominal region to the posterior margin of the tail, in the Electric Eel, Electrophorus electricus. Although the original species description indicated that this fin was a composite of the caudal fin plus the elongate anal fin characteristic of other genera of the Gymnotiformes, subsequent researchers proposed that the posterior region of the fin was formed by the extension of the anal fin posteriorly to the tip of the tail, thereby forming a "false caudal fin." Examination of ontogenetic series of the genus reveals that Electrophorus possesses a true caudal fin formed of a terminal centrum, hypural plate, and a low number of caudal-fin rays. The confluence of the two fins is proposed as an additional autapomorphy for the genus. Under all alternative proposed hypotheses of relationships within the order Gymnotiformes, the presence of a caudal fin in Electrophorus optimizes as being independent of the occurrence of the morphologically equivalent structure in the Apteronotidae. Possible functional advantages of the presence of a caudal fin in the genus are discussed.

Introduction

The order Gymnotiformes includes 33 genera and more than 200 extant species of Neotropical electric fishes, plus one fossil form from the Late Miocene of Bolivia [1,2]. Gymnotiforms inhabit freshwaters across the expanse from northern Argentina to southern Mexico, in settings ranging from shallow, slow-flowing streams to deep rivers, with subsets of several families specialized for life in rapids and other high-energy settings [3][4][5][6].
Species of gymnotiforms range in body size from miniatures of 50 mm total length, such as Hypopygus minissimus [7], to the over 2.5 m total length of Electrophorus electricus [8]; a 50-fold range notable in itself, but particularly striking in a lineage of only circa 200 species. Arguably one of the most noteworthy characteristics of all gymnotiforms is their ability to produce electric organ discharges (EODs), which serve dual purposes: communication and exploration of the surrounding environment. Two alternative forms of such discharges occur among these electric fishes: pulse EODs (via myogenic organs) and wave EODs (via myogenic or neurogenic organs). Electrophorus is unique within the Gymnotiformes in having a third form of discharge, of up to 600 volts, used for hunting and self-defense [9,10]. Such powerful discharges are dramatically apparent to anyone in contact with, or in close proximity to, these fishes in the water during a discharge. These shocks were reported by naturalists commencing early in the European exploration of the Neotropics, have been the subject of study by physiologists, and are well known in popular lore [11]. Electrophorus was erected by Gill [12] to include the Electric Eel, Gymnotus electricus Linnaeus [13]. The description by Linnaeus [13] was based on what was for the period a very detailed account, and accompanying illustration, by Gronovius [14] of a specimen probably originating in Suriname. Ichthyofaunal sampling over the following two and one-half centuries documented that Electrophorus has a broad distribution in low- and mid-elevation settings across the vast expanse encompassed by the Amazon and Orinoco basins, and additionally through the river systems of northern Brazil and the Guianas between the mouths of those two major drainages [15,16].
Various autapomorphies unique within the Ostariophysi distinguish Electrophorus [17], with one of the most prominent being the presence of three hypaxial electric organs (the Main, Hunter's, and Sachs' organs) versus a single hypaxial organ in adults of other gymnotiforms [1]. Electrophorus also has a highly vascularized oral respiratory organ with multiple folds that greatly increase its surface area [1,9]; an elaboration unique to the genus among Neotropical electric fishes and critical for respiration in this obligatory air breather. The Electric Eel, moreover, differs from all other gymnotiforms in the elongate fin extending along the ventral surface of the body and tail from posterior of the abdominal cavity to the end of the tail (Fig. 1; [18]). Other gymnotiforms conversely have the lengthy anal fin terminating farther anteriorly along the tail. Alternative hypotheses have been advanced concerning the components of the elongate fin coursing along the ventral surface of the body and tail of Electrophorus. Linnaeus [13] originally postulated that the anal fin of Gymnotus electricus (the Electrophorus electricus of this paper) was posteriorly continuous with the rays of the caudal fin, i.e., that the caudal fin is present. Subsequent authors subscribed to the alternative concept of the absence of a caudal fin in the genus. Intriguingly, the details of the unusual tail along the ventral and posterior margins of the body in Electrophorus have not been the subject of analysis to evaluate the two alternative hypotheses: that the fin at the posterior of the tail is a true caudal fin, versus that the terminal portion of the elongate fin in the genus is a posterior extension of the anal fin forming a false caudal fin. We herein address that question and evaluate the results within the context of the divergent hypotheses of intraordinal phylogenetic relationships in the Gymnotiformes.
Materials and Methods

Specimens were examined at, or borrowed from, the following institutions: AMNH, American Museum of Natural History, New York; AUM, Auburn University Museum, Auburn; ANSP, Academy of Natural [19] (see List S1). Specimens with a damaged posterior region of the tail were excluded from the analysis. Due to the ontogenetically late onset of chondrification and calcification of the posterior portions of the body in the genus, it was not feasible to provide informative photos of the caudal region in early life stages.

The Caudal Skeleton in Electrophorus

Well over two centuries ago, Linnaeus [13]: 427 commented that Gymnotus electricus had "Pinna caudali obtusissima anali annexa" (= the caudal fin very obtuse and joined to the anal). Information in that account indicated that his statement was most likely derived from a detailed description and illustration of a specimen of the species by Gronovius [14], rather than based on the examination of material of G. electricus. This concept of conjoined anal and caudal fins in what was later termed Electrophorus electricus (hereafter Electrophorus) then vanished without comment from the scientific literature for more than 200 years. The alternative accepted scenario was that the anal fin extended posteriorly to the end of the tail in Electrophorus and formed what has been termed a false caudal fin [1,17,[20][21][22][23][24][25]. An assumption that the terminal portion of the elongate fin in Electrophorus was a false, rather than true, caudal fin may have been based in part on the absence of the caudal fin in Gymnotus, a genus sharing a number of derived characters with Electrophorus, with those two genera now forming the Gymnotidae. Comments as to a possible contrary arrangement were limited to remarks by Meunier & Kirschbaum [26,27].
Meunier & Kirschbaum [26]: 216 briefly mentioned the possible presence of a caudal fin in Electrophorus as an alternative to the prevailing concept of an elongate anal fin extending posteriorly to the terminus of the tail. Soon thereafter, Meunier & Kirschbaum [27]: 149 speculated again on the presence of a caudal skeleton in the genus, stating that "…the last vertebra is terminated by a small cartilage, which serves to support some lepidotrichia." That observation notwithstanding, those authors did not explicitly interpret the cartilaginous element in question as a caudal skeleton, perhaps due to the absence of an ontogenetic series of the species. Insofar as they commented on the presence of a small cartilage rather than an ossification at the rear of the vertebral column, it appears, based on our examination of a broad size range of specimens, that their observations were likely made on a late larva or early juvenile. No subsequent analysis delved into the question of the presence versus absence of a true caudal fin (= hypural complex plus caudal-fin rays) in Electrophorus. Examination of a broad ontogenetic series of specimens of Electrophorus proved informative as to this question. The presence of a ventral embryological fin fold in individuals of Electrophorus shorter than approximately 85 mm TL gives a false first impression of a continuous anal-caudal fin during the early stages of development in the genus. In actuality, the anal-fin rays terminate well anterior to the posterior limit of the fin fold in specimens of less than this length. Larvae of Electrophorus of approximately 19 mm TL have anal-fin rays, as evidenced by Alcian blue staining plus non-staining rays apparent in transmitted light, limited to the anterior one-half of the fin fold that extends the length of the tail. Specimens at that size possess a cartilage body at the posterior end of the tail, as evidenced in transmitted light, without, however, any obvious associated caudal-fin rays.
Conversely, fin rays are apparent at the posterior end of the tail in a circa 26 mm TL whole specimen, but with the retention of a distinct gap along the ventral margin of the tail between the posterior most apparent anal-fin ray and the ventral most caudal-fin ray. This condition is comparable to that found in adults of all species of the Apteronotidae (see Fig. 2). By approximately 60 mm TL, anal-fin rays are apparent along circa 95% of the length of the tail, but the posterior most anal-fin ray remains distinctly separate from the horizontally elongate plate-like cartilaginous mass and associated caudal-fin rays at the terminus of the vertebral column. At 295 mm TL, the anal and caudal fins are now confluent, with the posterior most anal-fin ray as evidenced by its association with a proximal pterygiophore now situated immediately proximate to the ventral most caudal-fin ray that attaches to the hypural complex. Internally the caudal fin at this size is supported by a horizontally-elongate cartilage running ventral to the terminal portion of the notochord. The caudal fin is continued dorsally beyond the arrangement in smaller specimens by a variable number of dorsal procurrent rays within the fin fold in that region. This overall arrangement in Electrophorus is reminiscent of that shown for larvae of Apteronotus leptorhynchus by Meunier and Kirschbaum (Fig. 6 in [27]). The major difference is that in A. leptorhynchus all caudal-fin rays articulate solely with the posterior cartilage whereas in Electrophorus the dorsal caudal-fin rays attach to the ossifying notochord. Examination of the posterior portion of the tail in multiple samples of larger juveniles through adults of Electrophorus up to 1500 mm TL revealed a prominent, well ossified complex at the posterior terminus of the vertebral column (Figs. 3, 4). Two distinct components contribute to this ossification.
Anteriorly, a forward facing terminal centrum contacts the posterior most independent centrum of the vertebral column via a broad articular surface comparable to those at the interfaces of the other posterior vertebrae of the vertebral column. This terminal centrum in Electrophorus seamlessly conjoins posteriorly with a plate-like, posteriorly vertically expanding ossification. The posterior margin of the plate-like ossification serves as the area of attachment for five to 10 caudal-fin rays; the ventral most of which adjoins the posterior most ray of the elongate anal fin. In addition to the caudal-fin rays, some specimens of Electrophorus possess one to three dorsal procurrent rays. Overall morphology of the complex formed by the terminal centrum and the posterior plate of Electrophorus is comparable to the hypural complex at the rear of the vertebral column in most species of the Apteronotidae (Fig. 5); the one clade within the Gymnotiformes long considered to uniquely bear a true caudal fin (Figs. 2, 5; Figs. 4-5A in [27], Fig. 471 in [28], Fig. 23E in [29], Fig. 5A in [30], Fig. 1B in [31], Fig. 17A in [32]). One notable difference between the hypural complexes of Electrophorus and the Apteronotidae is the greater degree of ossification of that complex in midsized through adult individuals of Electrophorus relative to the hypural complex in most of the members of the Apteronotidae. The varying levels of ossification between these taxa may reflect the different body sizes in these taxa. Electrophorus is larger, sometimes significantly so (20 times), than all members of the Apteronotidae and the hypural complex of Electrophorus remains incompletely ossified to at least circa 300 mm TL. 
Uncertainty remains about what contributes to the terminal centrum and hypural plate in Electrophorus and the Apteronotidae due to the reduced nature of the elements in these taxa versus the condition in other lineages in the Otophysi; for example basal members of the Siluriformes, the sister group to the Gymnotiformes. A parhypural plus six separate hypurals [33,34] are present in Olivaichthys viedmensis ( = Diplomystes papilosus in Lundberg and Baskin [33]), a proposed basal member of the Diplomystidae [35], that, in turn, has been hypothesized in morphological analyses to be the sister group to the remainder of the Siluriformes. Siluriforms, however, demonstrate a tendency towards the fusion and/or reduction and loss of elements in the caudal skeleton in more derived taxa [34] resulting in a single bony caudal complex in some catfishes (e.g., Chaca, Fig. 3 in [33]; Plotosus, Fig. 53 in [36]) that is reminiscent of the caudal complex in Electrophorus. Fink & Fink [29] proposed that the caudal plate in the apteronotid genus Platyurosternarchus (cited therein as Sternarchorhamphus) was composed of the compound centrum (the terminal centrum of Hilton et al. [32]), hypurals, parhypural and accessory hemal spine. Our examination of a broader series of specimens of the genus failed to reveal these elements as discrete ossifications during ontogeny in either species of Platyurosternarchus. These surveys revealed a notable degree of intraspecific variation within P. crypticus and P. macrostoma in the elaborations of the ventral portion of the terminal vertebral centrum-hypural plate complex. The accessory hemal spines of Fink & Fink [29] are most often absent (Fig. 2) and when present vary in number, form, and position and are, thus, questionably homologous with haemal spines. 
Analysis of examined specimens of a broad size range of Electrophorus proved similarly uninformative as to which elements of the typically more complex hypural system elsewhere in the Ostariophysi contribute to the posterior hypural ossification in the genus. Elements of the reduced caudal skeleton in the Apteronotidae have been identified by several alternative terminologies. Monod [28] termed the structure an ''urophore complexe''. Meunier & Kirschbaum [27], in turn, applied the name ''hypuro-opisthural'' to the complex. In the most recent analysis, Hilton et al. [32] found that Orthosternarchus has what they identified as a terminal vertebral centrum (the ''tv'' of that study) followed posteriorly by a hypural plate (the ''hp'' of that study); a form of the caudal-fin skeleton comparable with that present in adults of Electrophorus other than for two features. The hypural plate in Orthosternarchus is cartilaginous and disjunct from the terminal centrum whereas in adult specimens of Electrophorus the hypural plate and terminal centrum are both ossified and broadly conjoined (Figs. 3, 4). Nonetheless, the basic pattern of these two caudal elements is common to, and indicative of the equivalence of, the components in Electrophorus and Orthosternarchus and we apply the Hilton et al. [32] terminology for the apteronotid caudal skeleton to Electrophorus.

The Presence of a Caudal Skeleton in Electrophorus and its Evolutionary Implications

Elongate bodies with associated lengthening of the anal fin characterize various taxa in the Ostariophysi; however, conjoined anal and caudal fins are restricted within the superorder to a few genera of the Gymnotiformes and Siluriformes.
Analysis reveals that continuous anal and caudal fins in the Ostariophysi derive from two alternative elaborations of the separate anal and caudal fins general across the superorder: 1) a joining of the two fins at least, in part, as a result of the increase in the number of ventral procurrent rays with a consequent anterior extension of the caudal fin towards the anal fin; versus 2) the posterior extension of the anal fin to contact an unelaborated caudal fin (i.e., without an increase in the number of ventral procurrent rays). The anteroventral most ray of the caudal fin serves as an appropriate landmark for the anterior limit of that fin versus the conjoined anal fin. This ray is readily distinguished from the terminal anal-fin ray via the lack of the associated proximal pterygiophore characteristic of anal-fin rays. Additionally, the anteroventral ray of the caudal fin is most often associated with the hypural plate (the tv+hp complex of the Apteronotidae [e.g., Platyurosternarchus and Apteronotus, Figs. 2, 5]; Electrophorus [Figs. 3,4]), whether the hp is ossified or partially cartilaginous. The first of the two forms of anal-caudal fin continuity is the consequence of the caudal fin extending anteriorly to varying degrees along the ventral margin of the body to meet a posteriorly extended anal fin. This state can be recognized by the presence of multiple ventral procurrent caudal-fin rays lacking associated proximal pterygiophores posterior of the terminal anal-fin ray as demarked by the posterior most proximal anal-fin pterygiophore. Within the Siluriformes, this morphology was observed in the Neotropical genus Phreatobius which has 11 to 26 ventral procurrent rays [37][38][39] and the African genus Gymnallabes in which there are at least five ventral procurrent rays extending forward to meet a posteriorly extended anal fin (Fig. 5 in [40]).
The second, and non-homologous, mode of continuity between the anal and caudal fins is achieved via the posterior extension of the anal fin to contact a non-anteriorly lengthened caudal fin (i.e., without multiple ventral procurrent caudal-fin rays). This condition is characterized by the immediate proximity of the posterior most anal-fin ray as evidenced by an associated proximal pterygiophore with the ventral most caudal-fin ray; the condition found in Electrophorus. As detailed above, the anal fin in Electrophorus progressively expands posteriorly during ontogeny until the posterior margin of that fin reaches and conjoins the anteroventral margin of the caudal fin thereby yielding a continuous anal-caudal fin complex (Fig. 3). Elsewhere in the Ostariophysi, an anal fin confluent with the caudal fin as a consequence of the posterior extension of the anal fin to conjoin a non-anteriorly lengthened caudal fin is also known to occur in the Plotosidae, a family of marine and freshwater catfishes of the Indo-Pacific region (Fig. 53 in [36]). The Plotosidae is well embedded within the Siluriformes based on both morphological [41] and molecular data [42], and the monophyly of the Gymnotiformes is, in turn, supported by multiple synapomorphies [1]. Thus, the conjunction of the anal and caudal fins via the posterior elongation of the anal fin in Electrophorus is clearly homoplastic relative to the similar condition in the Plotosidae. Given that continuity between the anal and caudal fins as a consequence of the posterior expansion of the anal fin to contact the ventral-most caudal-fin ray is unique to Electrophorus in the Gymnotiformes, that condition serves as an additional autapomorphy for the genus. In so far as it had been assumed that the caudal fin was absent in Electrophorus, information on the number of caudal-fin rays for that genus was not included in prior phylogenetic analyses.
Within the Apteronotidae, the only other group in the order with a caudal fin, the number of rays ranges from five to 30 with the basal clades, such as that formed by Orthosternarchus plus Sternarchorhamphus, possessing five to nine rays and the other genera in the family 10 to 30 rays (e.g., Platyurosternarchus, Fig. 2; Apteronotus, Fig. 5). The four to 10 caudal-fin rays in Electrophorus (Fig. 3), thus, parallel the count for hypothesized basal apteronotids. According to prior literature, a true caudal fin formed by a terminal vertebral centrum (tv) and hypural plate (hp) is restricted in the Gymnotiformes to members of the Apteronotidae, the most speciose family in the order [1,17,18,[22][23][24][25]27,29,43,44]. The presence of an apteronotid form of tv+hp complex and caudal fin in Electrophorus contra the previous assumption of the lack of those systems in that genus may impact previous hypotheses of phylogenetic relationships within the Gymnotiformes. Indeed in isolation, this discovery raises the question of whether the absence of a caudal fin in all taxa of the Gymnotiformes other than the Apteronotidae and Electrophorus is a potential synapomorphy for a clade composed of all members of the order lacking the fin. Examination of the impact of the discovery of a caudal fin in Electrophorus on prior hypotheses would necessitate not only the inclusion of information concerning the presence of a caudal fin and caudal-fin rays in the genus, but also the incorporation of the extensive data from recently published phylogenetic analyses of various genera within the order, e.g., [5][6][7]. That undertaking lies beyond the purpose of this study. Nonetheless, there are two primary hypotheses of phylogenetic relationships among Gymnotiformes reiterated in the last two decades that serve as a framework for an interpretation of the presence of the caudal skeleton in Electrophorus.
The most significant divergence between these involves the taxa judged to be the sister group to all other members of the order. The first of these hypotheses, that the Apteronotidae (with a caudal fin and caudal-fin rays) is the sister group to all other families in the Gymnotiformes, was advanced based on morphological [23,24] and molecular [45] data (Fig. 6). Under that scenario the presence of a caudal fin in the Apteronotidae is most parsimoniously hypothesized to represent the retention of the plesiomorphic condition common to all members of the Siluriformes, the sister group to the Gymnotiformes. Both of the morphological analyses [23,24] have Electrophorus separated from the Apteronotidae within the Gymnotiformes by three nodes. Given the phylogenetic distance between the Apteronotidae and Electrophorus, the most parsimonious explanation for the distribution of a caudal fin in the two lineages involves retention of the caudal fin in the basal Apteronotidae, the loss of the fin in the ancestor of the remainder of the order, and a reacquisition of the fin in Electrophorus. This involves fewer evolutionary steps than the perhaps intuitively more appealing hypothesis of multiple losses of the fin in the Sternopygidae, the ancestor of the Hypopomidae plus Rhamphichthyidae, plus Gymnotus in the Gymnotidae (Fig. 6). The molecular study [45] includes fewer taxa, but again the hypothesis of an independent caudal fin acquisition in Electrophorus is the most parsimonious within the context of the phylogeny. The second major phylogenetic hypothesis of relationships for the Gymnotiformes, this based on morphological data, alternatively has the Gymnotidae (Electrophorus plus Gymnotus) as the sister clade to the remainder of the order [1,17,25] (Fig. 7). Under that scenario the Apteronotidae is a crown group within the Gymnotiformes separated by four nodes from Electrophorus.
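The step counting behind these parsimony arguments can be made concrete with a small Fitch parsimony sketch. The topology and code below are purely illustrative assumptions, a simplified rendering of the first hypothesis rather than the published matrices or software used in the cited analyses:

```python
def fitch_changes(tree, states):
    """Minimum number of state changes (Fitch parsimony) needed to explain
    a presence/absence character on a rooted binary tree given leaf states."""
    changes = 0

    def visit(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: return its state set
            return states[node]
        left, right = (visit(child) for child in node)
        if left & right:                     # children agree: keep intersection
            return left & right
        changes += 1                         # children disagree: one change
        return left | right

    visit(tree)
    return changes


# Toy topology loosely following the first hypothesis (Apteronotidae sister
# to the remaining families); 1 = caudal fin present, 0 = absent.
tree = ("Apteronotidae",
        (("Sternopygidae", ("Hypopomidae", "Rhamphichthyidae")),
         ("Gymnotus", "Electrophorus")))
states = {"Apteronotidae": {1}, "Sternopygidae": {0}, "Hypopomidae": {0},
          "Rhamphichthyidae": {0}, "Gymnotus": {0}, "Electrophorus": {1}}

steps = fitch_changes(tree, states)   # 2: one loss plus one reacquisition
```

Under this toy topology the minimum is two changes (a loss in the ancestor of the non-apteronotid families plus a regain in Electrophorus), consistent with the argument that a reacquisition requires fewer steps than three independent losses.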
Within this phylogenetic scheme, the presence of a caudal fin in those taxa again optimizes as separate events, with two alternative equally parsimonious explanations. Under one, the presence of the caudal fin in the Apteronotidae and Electrophorus represents separate acquisitions post the presumed loss of the complex in the ancestor of the Gymnotiformes. The second scheme involves the loss of the fin in Gymnotus (the sister group to Electrophorus) in the Gymnotidae and in the ancestor of the Rhamphichthyidae, Hypopomidae, Sternopygidae and Apteronotidae and the reacquisition of the fin in the Apteronotidae (Fig. 7). Under all of these phylogenetic hypotheses, the distribution of a caudal complex and fin within the Gymnotiformes would potentially involve the retention of a plesiomorphic condition (presence of the tv+hp and caudal fin) or acquisition of the fin in a clade sister to the remainder of the order and a secondary presence of the caudal complex in another lineage. The alternatives mirror each other with the presence of a true caudal fin in Electrophorus being the secondary presence under the phylogenetic hypothesis of the Apteronotidae as sister to a clade formed by other families (Fig. 6) and the occurrence of the fin in the Apteronotidae being a secondary presence under the assumption of the Gymnotidae (including Electrophorus) being the sister of the remainder of the Gymnotiformes (Fig. 7).

Functional Considerations

Absence of the caudal fin is common to many components of the Gymnotiformes, but overall is limited to relatively few groups within the Teleostei; a not unexpected situation in so far as the caudal fin provides the majority, or a significant portion, of the propulsive force to the fish along with contributing to steering functions. A universal lack of the pelvic fin across Neotropical electric fishes in addition to the general absence of the caudal fin is also noteworthy.
Although the pelvic fins are not a major factor in propulsion across fishes, they contribute to fine movement control. Offsetting the loss of these two fins across the Gymnotiformes is a dramatic lengthening of the anal fin and increased fine motor control of propulsive movements within the fin. Depending on the taxon, the gymnotiform anal fin commences anteriorly within the region between the vertical through the orbit and the posterior limit of the abdominal cavity and continues caudally to varying positions along, or at the end of, the tail (see figures in [25]; Figs. 1-3). Reflecting the pronounced elongation of the anal fin are anal-fin ray counts of circa 100-400 across the order; numbers that are dramatically higher than in most other taxa in the Ostariophysi [1,25]. Sinusoidal movements along this elongate anal fin among species of the Gymnotiformes provide the primary propulsive mechanism for the distinctive anterior and posterior movements of these fishes and in conjunction with the pectoral fin, critical fine scale control of such movements [46]. Fine control of posterior motion is a necessity for effective foraging behavior among gymnotiforms, with movement of the rigid body anteriorly and posteriorly prerequisite for scanning potential prey items via the electroreceptive array on their skin [47,48]. Dependence on the anal fin for propulsion in conjunction with the necessity of a straight alignment of the body for electroreceptive functioning diminished the propulsive importance of the caudal fin. This reduced or obviated the need for a substantial caudal fin; a system which is absent across the Gymnotiformes with the exception of the Apteronotidae and Electrophorus. Strikingly similar absences of the caudal and pelvic fins occur in the African electrogenic genus Gymnarchus (Osteoglossiformes).
Gymnarchus also swims with a largely rigid body and propels itself via sinusoidal movements along an elongate median fin; the propulsive fin in that genus being, however, the dorsal rather than anal fin yielding an amiiform swimming mode [46]. The Gymnotidae is unique within the Gymnotiformes in demonstrating intrafamilial variation in the presence versus absence of the caudal fin, with the fin present in Electrophorus versus absent in its sister group, Gymnotus. A potential functional difference underlying this variation may be the rigid body posture in life of species of Gymnotus with sinusoidal movements along the anal fin generating the primary propulsive force [46,48]. Conversely, Electrophorus demonstrates two alternative swimming modes. The first of these is the straight alignment of the body during obligate gulping of air and in the detection, location and shocking of prey items. This is the body orientation general across the Gymnotiformes, e.g., [46,48]. Electrophorus is additionally able to use sinusoidal or anguilliform movements along the length of the entire body to supplement the waves of movements along the anal fin during capture of prey and rapid forward motion. During this swimming mode, the posterior portion of the body undergoes pronounced side-to-side movements; a situation in which a caudal fin would increase the anterior propulsive force and thereby be functionally advantageous as is the case with other groups of fishes using anguilliform swimming modes. Taxa of the Apteronotidae which also have caudal fins lack, however, anal-caudal fin conjunction, and there is no indication of alternative swimming modes in the family.

Supporting Information

List S1 List of specimens of Electrophorus and outgroups examined in this study. (DOC)
The U.S. EPA wildland fire sensor challenge: Performance and evaluation of solver submitted multi-pollutant sensor systems Wildland fires can emit substantial amounts of air pollution that may pose a risk to those in proximity (e.g., first responders, nearby residents) as well as downwind populations. Quickly deploying air pollution measurement capabilities in response to incidents has been limited to date by the cost, complexity of implementation, and measurement accuracy. Emerging technologies including miniaturized direct-reading sensors, compact microprocessors, and wireless data communications provide new opportunities to detect air pollution in real time. The U.S. Environmental Protection Agency (EPA) partnered with other U.S. federal agencies (CDC, NASA, NPS, NOAA, USFS) to sponsor the Wildland Fire Sensor Challenge. EPA and partnering organizations share the desire to advance wildland fire air measurement technology to be easier to deploy, suitable to use for high concentration events, and durable to withstand difficult field conditions, with the ability to report high time resolution data continuously and wirelessly. The Wildland Fire Sensor Challenge encouraged innovation worldwide to develop sensor prototypes capable of measuring fine particulate matter (PM2.5), carbon monoxide (CO), carbon dioxide (CO2), and ozone (O3) during wildfire episodes. The importance of using federal reference method (FRM) versus federal equivalent method (FEM) instruments to evaluate performance in biomass smoke is discussed. Ten solvers from three countries submitted sensor systems for evaluation as part of the challenge. The sensor evaluation results including sensor accuracy, precision, linearity, and operability are presented and discussed, and three challenge winners are announced. 
Raw solver submitted PM2.5 sensor accuracies of the winners ranged from ~22 to 32%, while smoke-specific EPA regression calibrations improved the accuracies to ~75–83%, demonstrating the potential of these systems in providing reasonable accuracies over conditions that are typical during wildland fire events.

Introduction

Wildland fires can produce significant air pollution emissions which pose health risks to those working and living in close proximity such as first responders and nearby residents, as well as downwind populations (Bell et al., 2004;Vedal and Dutton, 2006;Rappold et al., 2011;Johnston et al., 2012;Reisen et al., 2015;Reid et al., 2016;Cascio, 2018;Weitekamp et al., 2020). Land management practices affecting forest fuel loading (undergrowth and tree density), drought, higher global temperatures, longer fire seasons, and increasing acres burned and fire intensity have resulted in increasing smoke emissions over a longer temporal duration (Kitzberger et al., 2007;Littell et al., 2009;Johnston et al., 2012;United States Department of Agriculture, 2014;United States Department of Agriculture, 2016;Westerling, 2016;Landis et al., 2018). The emission and downwind transport of smoke from wildland fires needs to be quantified and managed for first responder force protection and public health messaging by incident response teams, burn teams, and public health professionals. The important primary constituents of emitted smoke that negatively impact air quality are particulate matter less than 2.5 μm in mass median aerodynamic diameter (PM 2.5 ), carbon monoxide (CO), nitrogen oxides (NOx), and volatile organic compounds (VOCs) (Urbanski et al., 2009). Even in developed countries with relatively advanced regulatory air monitoring networks, remote wildland firefighter camps and population centers impacted by smoke in many instances lack adequate observational air quality data.
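The kind of regression calibration against a collocated reference monitor quoted in the abstract can be illustrated with a minimal ordinary-least-squares sketch. All numbers here are hypothetical, and this is not the EPA's actual calibration procedure or accuracy metric:

```python
import numpy as np

# Hypothetical collocated 1-h PM2.5 readings (ug/m3): raw sensor vs. reference.
sensor = np.array([120.0, 480.0, 950.0, 1900.0, 3800.0])
reference = np.array([100.0, 400.0, 800.0, 1600.0, 3200.0])

# Fit reference ~= a * sensor + b by ordinary least squares.
a, b = np.polyfit(sensor, reference, 1)
calibrated = a * sensor + b

def accuracy_pct(est, ref):
    """Accuracy expressed as 100% minus the mean absolute percent error."""
    return 100.0 * (1.0 - np.mean(np.abs(est - ref) / ref))

raw_acc = accuracy_pct(sensor, reference)      # ~81% for these made-up data
cal_acc = accuracy_pct(calibrated, reference)  # close to 100% after the fit
```

The point of the sketch is simply that a sensor with a consistent multiplicative bias can score poorly on raw accuracy yet recover most of that accuracy after a single linear calibration, which mirrors the raw-versus-calibrated improvement reported for the challenge winners.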
The meteorological dispersion of wildland fire smoke is influenced by wind, atmospheric stability, and terrain, and therefore regulatory monitoring sites (if any) within the region may not adequately characterize the spatial and temporal variability of smoke impacts. During some large wildfire incidents, the U.S. Interagency Wildland Fire Air Quality Response Program (IWFAQRP) augments long-term regulatory monitoring networks with temporary air quality monitors dispatched with Air Resource Advisors (IWFAQRP, 2021). The cost, technical expertise required, and need for electrical power infrastructure generally limits the number of temporary monitors that are deployed. In many cases no additional monitors are deployed to provide actionable information on ambient air quality resulting from smoke impacts on affected population areas. Meanwhile, there has been rapid development of miniaturized, user-friendly air quality sensor systems (Baron and Saffell, 2017;Karagulian et al., 2019;Malings et al., 2019). Significant advancements in internal gas and particulate matter sensor components, compact microprocessors, power supply/management, wireless data telemetry, advanced statistical data fusion/analysis, real-time sensor calibration, and graphical data interfaces hint at the future potential of accurate small form factor integrated sensor systems. This technology is being developed for a variety of potential applications, including exposure assessment (Morawska et al., 2018), industrial applications (Thoma et al., 2016), local source impact estimation (Feinberg et al., 2019), and to increase the spatial density of outdoor monitoring networks (Mead et al., 2013;Bart et al., 2014).
Routine performance testing of sensors, to date, has been mostly limited to typical ambient conditions (Jiao et al., 2016;Feinberg et al., 2018;Zamora et al., 2019;Collier-Oxandale et al., 2020), with more limited assessment of certain technologies at higher concentrations (Johnson et al., 2020;Zheng et al., 2018). These previously published findings have indicated, in some cases, high correlation between collocated sensors and reference monitors; however, there are also many sensor test results that exhibit measurement artifact (Mead et al., 2013;Lin et al., 2015;Spinelle et al., 2015;Hossain et al., 2016), inconsistency among identical sensors (Castell et al., 2017;Sayahi et al., 2019), drift over time (Artursson et al., 2000;Feinberg et al., 2018;Sayahi et al., 2019), sensitivity to environmental conditions (e.g., temperature, relative humidity; Cross et al., 2017;Wei et al., 2018), and limitations to upper limit measurement capabilities (Schweizer et al., 2016;Zou et al., 2020). New approaches to assess smoke impacts from wildfire are of significant interest for U.S. federal agencies coordinating wildfire response and public health officials to communicate appropriate public health messages to impacted populations. Sensor technology may be at a point to improve upon or complement the limited cache of temporary monitors utilized by technical experts during wildfire episodes and, in that use scenario, enable more granular information on air quality for the public to reduce their exposure. Some researchers have integrated satellite and low-cost sensor data with regulatory monitoring network results to better understand and model human exposure and health effects of wildland fire smoke (Liu et al., 2009;Gupta et al., 2018), but the performance of most commercially available sensors under smoke conditions is unknown.
An important consideration in the use of air quality sensors for wildfire smoke is the selection of key pollutants of interest for public health and understanding target measurement ranges. PM 2.5 monitors near wildfires have reported hourly concentrations exceeding 3-5 mg m −3 , resulting in daily averages well above the level of the 24-hr average PM 2.5 regulatory standards. Peak hourly levels of CO near wildfires have been recorded between 2 and 3 ppm (Vedal and Dutton, 2006). Ozone (O 3 ) concentrations are typically low in the near field of wildland fires due to combustion nitrogen oxide (NO) emissions titrating ambient O 3 (NO + O 3 → NO 2 + O 2 ) faster than it can be produced. However, many studies suggest an O 3 enhancement further downwind (Jaffe and Wigder, 2012;Brey and Fischer, 2016;Lindaas et al., 2017;Jaffe et al., 2018;Liu et al., 2018). Several U.S. federal government agencies including the Environmental Protection Agency (EPA), National Aeronautics and Space Administration (NASA), National Oceanographic and Atmospheric Administration (NOAA), National Park Service (NPS), Forest Service (USFS), and Centers for Disease Control (CDC) sponsored a Challenge in 2017 (U.S. EPA, 2017) to spur development of prototype small form factor measurement systems that could be deployed rapidly, operated with minimal expertise, and provide continuous ambient monitoring of key air pollutant concentrations during fire events. The challenge specified a system that would include measurements of pollutants with known negative health effects in humans (PM 2.5 , CO, and O 3 ) and provide carbon dioxide (CO 2 ) levels to quantify fire combustion efficiencies.
In addition, each of the measured pollutants needed to be accurate over a large dynamic range of concentrations expected during periods of wildfire smoke impact, the system needed to be designed to transmit data to a central receiving unit, be durable to operate unattended in harsh conditions, and be within a production-scale cost limit ($40,000 for a 6-node sensor system with central data receiver). The purpose of this publication is to present and discuss the performance of the solver sensor systems submitted as part of the Wildland Fire Sensor Challenge under different temperature, relative humidity, and exposure conditions and announce award recipients. Phase I testing evaluated the solver submitted sensors under controlled temperature and relative humidity conditions in EPA's research facility in Chapel Hill, North Carolina (Ghio et al., 2012) where known concentrations of pure standards were blended with zero air and introduced into an exposure chamber. Phase II testing evaluated the solver submitted sensors under simulated wildland fire exposure conditions at the USFS Rocky Mountain Research Station Fire Sciences Laboratory (FSL) combustion research facility in Missoula, Montana where varying concentrations of smoke from burning biomass fuel typical of the western U.S. under different combustion conditions (e.g., smoldering, flaming) were produced in the chamber. The accuracy, collocated precision, and linearity of each solver submitted sensor system was determined by comparison of the sensors' measurement results with EPA designated federal reference method (FRM) or federal equivalent method (FEM) measurements (Hall et al., 2012) during the extended periods of Phase I and Phase II testing.

Challenge details

Eleven prototype sensor systems from ten different private solvers were received by EPA prior to the January 18, 2018 submission deadline that were responsive to the Sensor Challenge (Appendix Table A.1).
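The accuracy, collocated-precision, and linearity comparisons described above can be sketched in a few lines. All readings below are hypothetical, and the exact metric definitions used in the challenge evaluation may differ from these simple choices:

```python
import numpy as np

# Hypothetical 1-h averages from three collocated duplicate sensors (rows)
# over five test periods (columns), plus the reference concentrations.
sensors = np.array([
    [55.0, 210.0, 420.0, 830.0, 1650.0],
    [50.0, 200.0, 400.0, 800.0, 1600.0],
    [47.0, 195.0, 385.0, 790.0, 1560.0],
])
reference = np.array([52.0, 205.0, 410.0, 815.0, 1620.0])

mean_sensor = sensors.mean(axis=0)

# Linearity: slope, intercept, and correlation of sensor mean vs. reference.
slope, intercept = np.polyfit(reference, mean_sensor, 1)
r = np.corrcoef(reference, mean_sensor)[0, 1]

# Collocated precision: coefficient of variation (%) across the duplicate
# sensors, averaged over the test periods.
cv = 100.0 * (sensors.std(axis=0, ddof=1) / mean_sensor).mean()
```

A slope near one with a high r² indicates good linearity against the FRM/FEM instrument, while a low averaged coefficient of variation indicates that identical duplicate units agree with each other, which is the sense of "collocated precision" used here.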
Each solver submitted sensor system was first evaluated for condition and operability by the EPA testing team. The solvers were contacted and provided the opportunity to address any observed operation and/or data telemetry deficiencies prior to the Phase I (March 28 -April 2, 2018) and Phase II testing (April 16-24, 2018). Seven of the eleven sensor systems were cleared for Phase I testing, two sensor systems were returned to solver E for repair of physical damage that occurred during shipping, solver H sensors were returned due to inoperability prior to Phase I testing, and one system was disqualified when solver J declined to address identified data telemetry/logging issue(s). Sensor systems were returned to solvers I and F after Phase I and Phase II testing, respectively, to provide them an opportunity to recover internally logged monitoring data after telemetry and EPA attempts to recover data failed. All solver submitted sensor systems were tested as received; no calibrations or modifications of any kind were made. All solver sensor systems were either mounted onto custom made steel mesh support stands or a tripod with a uniform height of ~1.2 m based on their designed mounting configuration (Appendix Figure B.1).

Phase I testing

Phase I testing was carried out at the EPA Office of Research and Development research facility in Chapel Hill, North Carolina (Ghio et al., 2012) over a predetermined range of target analyte concentrations (Appendix Table A.2) repeated at two temperature and two relative humidity conditions (Appendix Table A.3). Challenge CO 2 and CO concentrations were produced by diluting certified reference gas standard cylinders, O 3 concentrations were generated using a custom chamber integrated corona discharge system, and ammonium sulfate aerosols were generated using a TSI Incorporated (Shoreview, MN, USA) Model 9306 six-jet seed aerosol generator.
Sensor systems were tested in a 4.8 × 5.8 × 3.2 m (width × depth × height) stainless-steel chamber with a single-pass laminar flow (ceiling to floor) air system providing approximately 40 air changes per hour (113 m 3 min −1 ). The outside air was purified by passing through a bed of Purafil (potassium permanganate on an alumina substrate) and a bed of Purafil Corporation (Doraville, GA, USA) Puracol® activated charcoal, dehumidified, passed through a bed of Hopcalite, and sent to the Clean Air Plenum. The chamber air was taken from the Clean Air Plenum and brought to the proper test protocol conditions (Appendix Table A.3) by being heated/cooled and humidified with deionized water. Target pollutants were injected into the air stream before passing through a mixing baffle and entering the top of the chamber. The chamber pollutant monitoring system consisted of an 8 cm diameter glass sample manifold that started with an inverted glass funnel in the middle of the chamber at a height of ~3 m. This sample manifold ran along the back of the instrument racks containing the reference gas pollutant analyzers. Each gas analyzer had a 0.64 cm diameter perfluoroalkoxy alkane (PFA) Teflon® sample inlet line connecting the analyzer to the sample manifold. A fan pulled ~1.5 m 3 min −1 from the chamber past the reference gas analyzers. FRM filter-based PM 2.5 and continuous FEM PM 2.5 instruments sampled from an isokinetic probe located at the inlet of the chamber. The performance characteristics of the solver-submitted sensor systems were evaluated by comparing their reported measurement concentrations with the EPA reference instrument reported concentrations.

Phase II testing

Phase II testing was carried out at the FSL. The main combustion chamber is a square room with internal dimensions of 12.4 × 12.4 × 19.6 m and a total volume of 3000 m 3 and has been described previously (Christian et al., 2004). The chamber was ventilated with outdoor ambient air prior to each burn.
All sensor testing was conducted using "static chamber" burns to simulate sensor exposure under in-situ sampling conditions. During the static chamber burns the combustion chamber was sealed after being flushed with outdoor air. Fuel beds were prepared and placed in the center of the chamber. Two large circulation fans mounted on the chamber walls and destratification fans on the chamber ceiling facilitated mixing and maintained homogeneous smoke conditions during the tests. Continuous-sampling gas reference instruments were placed in the observation room adjacent to the combustion chamber and connected to PFA sampling manifolds that brought chamber air from Savillex (Eden Prairie, MN, USA) 47 mm inlet PFA filter packs (401-21-47-10-21-2) loaded with 5 μm pressure-drop-equivalent Millipore (Burlington, MA, USA) Omnipore® polytetrafluoroethylene (PTFE) Teflon membrane filters mounted on a tripod in the sensor testing zone; PM 2.5 reference instruments were distributed around the solver-submitted sensor systems on the chamber floor (Appendix Figure B.1). The fuels utilized for Phase II burns were ponderosa pine (Pinus ponderosa) needles (PPN) and fine dead wood (PPW), alone or mixed. Combustion efficiency of the burns was managed by fuel moisture content as summarized in Appendix Table A.4. The Phase II burn plan targeted six different concentration ranges for the three primary air pollutants (PM 2.5 , CO, CO 2 ) as summarized in Appendix Table A.5 under different burn conditions, for a total of thirty-three 1-h burns.
EPA reference measurements

EPA gaseous reference instruments utilized in Phase I and Phase II testing included (i) a Licor (Lincoln, NE, USA) Model LI-850 non-dispersive infrared absorbance (NDIR) CO 2 instrument, (ii) a ThermoScientific (Franklin, MA, USA) Model 48C gas filter correlation (GFC) FRM CO analyzer, (iii) a Teledyne API (San Diego, CA, USA) Model T265 NO-chemiluminescence O 3 FRM analyzer and a 2B Technologies (Boulder, CO, USA) Model 211 UV photometric O 3 FEM analyzer, (iv) a Teledyne API Model T200 chemiluminescence FEM NO instrument, and (v) a Teledyne API Model T500U cavity attenuated phase shift (CAPS) FEM "true" NO 2 instrument. All continuous gas analyzers were zeroed and span-calibrated at the beginning and end of each chamber test day using Teledyne API Model T700U dynamic dilution calibration systems with certified O 3 photometers. EPA protocol certified gas standard cylinders diluted in ultra-scientific grade zero air were used for the CO, CO 2 , and NO 2 instruments. Multi-point span calibrations were conducted at the beginning and end of each phase of testing to ensure linearity. A USFS Picarro Inc. (Santa Clara, CA, USA) Model G2401-m cavity ring-down spectroscopy (CRDS) CO/CO 2 gas analyzer was utilized to measure high time resolution (2 s) concentrations in the Phase II chamber to calculate burn-integrated modified combustion efficiency (MCE) as previously described by Urbanski (2013). A three-point calibration using gas mixtures of CO and CO 2 in scientific grade zero air was run daily to maintain accuracy of the CRDS measurements. Continuous optical black carbon (BC) measurements were made during Phase II testing with collocated Magee Scientific (Berkeley, CA, USA) Model AE-33 seven-wavelength Aethalometers equipped with Model 5610 sample stream dryers and BGI (Waltham, MA, USA) Model SCC-1.829 sharp cut PM 2.5 cyclones. All reported BC concentrations represent the self-adsorption compensated BC6 (880 nm) channel.
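The burn-integrated MCE referenced above is conventionally computed as the molar ratio of excess (background-subtracted) CO 2 to excess CO 2 plus excess CO. A minimal Python sketch under that assumption follows; the instrument readings and background mixing ratios are illustrative values, not Challenge data:

```python
def modified_combustion_efficiency(co2_ppm, co_ppm, co2_bg_ppm, co_bg_ppm):
    """Burn-integrated MCE = dCO2 / (dCO2 + dCO), where dCO2 and dCO are
    background-subtracted excess mixing ratios summed over the burn
    (ppm, molar basis). MCE near 1 indicates predominantly flaming combustion;
    lower values indicate more smoldering."""
    d_co2 = sum(c - co2_bg_ppm for c in co2_ppm)
    d_co = sum(c - co_bg_ppm for c in co_ppm)
    return d_co2 / (d_co2 + d_co)

# Illustrative 2-s CRDS readings over a (very short) burn segment:
mce = modified_combustion_efficiency([900.0, 950.0], [10.0, 12.0], 420.0, 0.2)
```

For the Phase II burns reported here, this quantity fell between 0.91 and 0.97, consistent with predominantly flaming conditions.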
Performance of the AE-33 light emitting diode (LED) sources was verified prior to and after Phase II testing using the Magee Scientific Model 7662 neutral density optical filter validation kit. High time resolution (1 min) EPA PM 2.5 mass reference measurements were made using Teledyne API Model T640 continuous FEM instruments that were normalized to Tisch Environmental (Cleves, OH, USA) Model TE-WILBUR filter-based (1 h) FRM measurements on a test (Phase I) or burn (Phase II) specific basis. The API T640 FEM indicated PM 2.5 concentration was suspected of being sensitive to chamber aerosol size distribution (Phase I and Phase II testing) and BC concentration (Phase II testing), necessitating normalization to the hourly FRM concentration. The Phase I testing FEM correction factor averaged 1.30 ± 0.19 (mean ± standard deviation) and ranged from 1.08 to 1.80 for T640 instrument 296 (serial number). The Phase II testing FEM correction factors averaged 0.99 ± 0.38 and 1.01 ± 0.36, and ranged from 0.58 to 2.11 and 0.50 to 1.86, for T640 instruments 294 and 296, respectively. The T640 PM 2.5 instruments were zeroed before each chamber test day. Leak checks and multi-point flow calibrations were conducted on the PM 2.5 FRM samplers on a weekly basis.

Statistical analysis

Data processing and all statistical analyses were performed using SAS v.9.4 (SAS Institute, Cary, NC, USA). Accuracy of the solver-submitted sensor pods was calculated using Equation (1), and precision was calculated as the coefficient of variation (relative standard deviation) using Equation (2) for solvers that submitted two sensor pods as requested in the challenge:

Accuracy (%) = [1 − |X − R| / R] × 100 (1)

where X is the reported sensor concentration and R is the reference concentration.

CV (%) = [√(Σ(x i − x̄)² / (n − 1)) / x̄] × 100 (2)

where x̄ = the mean of collocated sensor concentrations, Σ(x i − x̄)² = the sum of squared differences between the individual collocated sensor concentrations and the mean, and n = the number of collocated sensor observations.
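The two statistics can be sketched in plain Python. Note that the exact published forms of Equations (1) and (2) were lost in extraction and are reconstructed here from the symbol definitions; in particular, averaging a per-timestep coefficient of variation over each pair of collocated 5-min readings (n = 2 per pair) is one plausible reading of the precision calculation, not a confirmed detail:

```python
def accuracy_pct(sensor, reference):
    """Reconstructed Eq. (1): accuracy (%) = [1 - |X - R| / R] * 100,
    averaged over paired observations. Large sensor overestimates relative
    to the reference produce large negative accuracies, consistent with the
    O3 results reported for Phase II."""
    vals = [100.0 * (1.0 - abs(x - r) / r) for x, r in zip(sensor, reference)]
    return sum(vals) / len(vals)

def collocated_precision_cv(pod1, pod2):
    """Reconstructed Eq. (2): coefficient of variation (%) of each pair of
    collocated readings, sqrt(sum((x_i - mean)^2) / (n - 1)) / mean * 100,
    averaged over the measurement record."""
    cvs = []
    for a, b in zip(pod1, pod2):
        mean = (a + b) / 2.0
        ss = (a - mean) ** 2 + (b - mean) ** 2
        cvs.append(100.0 * (ss / (2 - 1)) ** 0.5 / mean)
    return sum(cvs) / len(cvs)
```

Under this convention, two pods reading 100 and 110 μg m −3 at the same timestep yield a CV of roughly 6.7%, within the ±15-20% collocated precision envelope reported for the better-performing sensors.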
Solvers A-D all provided two sensor pods and therefore have reported precision values. All sensor systems reported data every 5 min, and the reporting of some collocated units was temporally offset. In those cases, the temporally offset data were first synchronized using the SAS lag function prior to calculating precision. Solver E provided one repaired sensor pod during Phase II, precluding the calculation of collocated precision values. Parametric statistics used in this analysis include simple linear regression and multivariate analysis of variance (MANOVA). The assumptions of the parametric procedures were examined using residual plots, skewness and kurtosis coefficients, the Shapiro-Wilk test, and the Brown-Forsythe test. Phase I and Phase II Δ sensor values (EPA reference instrument concentration − reported sensor concentration) were log-transformed, and a constant value was added when necessary to avoid negative values prior to MANOVA analysis, to improve normality and stabilize variance. A level of significance of α = 0.05 was used for all statistical procedures unless otherwise stated. The SAS REG and GLM procedures were used for least-squares general linear model regressions and MANOVA analysis, respectively.

Challenge award committee

An independent multi-agency award panel reviewed the EPA testing team report for each solver, calculated a score based on sensor accuracy and ancillary capabilities, and awarded prizes to three of the solvers. Two-thirds of the score was based on weighted sensor accuracy, with the highest weighting on PM 2.5 followed by CO, O 3 , and CO 2 . The ancillary capabilities score was based upon form factor (size, weight), design durability, battery life, data transmission range, data visualization, software calibration functionality, and data completeness.

Phase I clean air testing results

Phase I involved a full day of testing for each temperature and relative humidity set point (Appendix Table A.3).
Each test day started with establishing the chamber environmental (temperature/relative humidity) conditions, followed by a 1-h chamber zero and then 1 h of testing at each concentration set point 1-6 (Appendix Table A.2). Stable transitions between concentration set points were typically achieved in 15-20 min. Results of Phase I testing (Appendix Table A.6) demonstrated that the automated chamber environmental controls accurately maintained the target testing conditions, while the target analyte concentrations were typically achieved within a reasonable tolerance to the test plan, ensuring testing over a wide dynamic range. An example testing day from March 28, 2018 is presented in Fig. 1, showing the progression of target gas concentrations from the chamber zero (SP0) through target set point 6 (SP6). Official testing periods are shown, and transition periods are highlighted. PM 2.5 target points were more variable and experienced larger deviations from the test plan, as indicated by the reference values (Appendix Figure B.2). This was a product of deteriorated aerosol generator jet performance over time, as visible ammonium sulfate salt deposition built up at the jet orifice over the course of testing. The Phase I testing accuracy and collocated precision results are summarized in Table 1. We observed mean accuracies ranging from 14 to 70% (PM 2.5 ), 58 to 91% (CO), 35 to 88% (CO 2 ), and 19 to 66% (O 3 ), and collocated precision was generally within ±15% for those sensors reporting reasonable results (e.g., responding to changing concentrations and no large negative values). Time series and scatter plots are presented for each EPA reference instrument measurement versus solver sensor pod concentrations in Appendix Figures B.2-B.31. Solver C's sensor pods performed the best in Phase I testing for PM 2.5 , CO, and O 3 , but did not incorporate a CO 2 sensor. Lower maximum thresholds and sensor response lag impacted Solver A and Solver B accuracies, respectively, for CO and CO 2 (Fig. 2 and Appendix Figures B.4-B.7 and B.12-B.15). Solver B's sensor pod #1 reported significantly lower PM 2.5 , CO 2 , and O 3 concentrations, which also degraded their precision results. Solver D's sensors experienced data transmission and logging failures that negatively impacted data completeness (~18%). Solver A's reported maximum concentration thresholds of 35 ppm for CO and 2000 ppm for CO 2 were exceeded at set point 5 (CO) and set point 6 (CO and CO 2 ). These limitations effectively lowered their overall accuracy scores and artificially improved collocated precision due to invariant maximum concentration values being reported during the impacted testing set points (Fig. 2). Accuracy and precision results were recalculated using only those CO (n = 187) and CO 2 (n = 235) values under the sensor maximum reporting limits. Solver A's CO sensor accuracy improved from 70 to 82% (sensor #1) and from 58 to 67% (sensor #2), and precision degraded from 15 to 17%. Solver A's CO 2 sensor accuracy improved from 88 to 92% (sensor #1) and from 88 to 91% (sensor #2), and precision degraded from 3 to 4%. The impact of chamber temperature (CT) and relative humidity (RH) on all the Solvers' sensor pod measurements was investigated using a multivariate statistical analysis. The analysis was conducted by first calculating a Δ sensor value (EPA reference instrument concentration − reported sensor concentration) for each Solver's sensor measurement and then testing the significance of the listed environmental factors on the Δ sensor value using a MANOVA type III sum of squares. The results of the MANOVA analysis are summarized in Table 2; CT and RH effects were found to significantly impact the Δ sensor values from each of the Solvers but were not uniformly observed. Differences in Δ sensor values could be a result of individual sensor type characteristics, solver implementation of sensor design, and potential implementation of compensation algorithms.
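The Δ sensor preprocessing described for the MANOVA (reference-minus-sensor difference, a constant positive shift when needed, then a log transform) can be sketched as follows. The shift convention shown (smallest value mapped to 1 before logging) is an illustrative assumption, not the authors' SAS code:

```python
import math

def delta_sensor_log(reference, sensor):
    """Compute delta = reference - sensor for paired observations, add a
    constant so all values are strictly positive (only when needed), then
    natural-log transform to improve normality and stabilize variance
    prior to MANOVA."""
    delta = [r - s for r, s in zip(reference, sensor)]
    # Shift so the minimum delta maps to 1.0 (an illustrative choice).
    shift = 1.0 - min(delta) if min(delta) <= 0 else 0.0
    return [math.log(d + shift) for d in delta]
```

The transformed values would then be the response variables in a type III sum-of-squares MANOVA against CT and RH.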
For example, PM 2.5 results from Solver A and Solver B were significantly impacted by CT but not RH, while Solver C (Sensor #1 and Sensor #2) and Solver D (Sensor #1) PM 2.5 measurements were significantly impacted by RH but not CT.

Phase II smoke testing results

Phase II involved eight days of active testing for a total of thirty-three static chamber burns. An example burn day from April 18, 2018 is presented in Fig. 3, showing PM 2.5 concentrations through the progression of burns 10-13 and highlighting fuel bed ignitions (red), start and stop times of the official 1-h challenge burn periods (drop lines), and chamber ventilation (blue). A statistical summary of chamber conditions, fuels combusted, and burn-integrated MCEs is provided in Appendix Table A.7. The achieved target pollutant concentrations, as well as the BC, NO, and NO 2 concentrations for each integrated burn, are summarized in Appendix Table A.8. The results demonstrate smoke testing over a wide dynamic range of target analytes with a more limited range of integrated burn conditions. The burn-integrated MCEs ranged from 0.91 to 0.97, representing a range of predominantly flaming combustion conditions (Akagi et al., 2011, 2013); chamber temperature ranged from 23 to 27 °C, and chamber relative humidity ranged from 16 to 28%. The burn-integrated target EPA sensor challenge reference concentrations ranged from 29 to 1815 μg m −3 (PM 2.5 ), 0.5 to 15.2 ppm (CO), and 465 to 951 ppm (CO 2 ). Homogeneity of the smoke within the chamber was verified by evaluating the precision of the filter-based PM 2.5 FRM samplers and continuous PM 2.5 FEM instruments within the sensor testing zone (Appendix Figure B.1). The burn-integrated absolute percent difference between the PM 2.5 FRMs was 5.2% (mean) and 2.4% (median), and between the continuous PM 2.5 FEMs was 3.5% (mean) and 1.9% (median). The Phase II testing accuracy and collocated precision results are summarized in Table 3.
We observed mean accuracies ranging from 26 to 52% (PM 2.5 ), 54 to 73% (CO), and 14 to 93% (CO 2 ), and collocated precision was generally within ±20% for those sensors reporting reasonable results (e.g., responding to changing concentrations and no large negative values). O 3 present in the chamber prior to fuel bed ignition from ventilated outside ambient air was rapidly titrated by NO to NO 2 , resulting in a range of burn-integrated concentrations of only 0.1-1.1 ppb (Appendix Table A.8). The combination of virtually no O 3 present in the chamber and an observed positive sensor artifact from NO 2 (even for those sensor pods measuring NO 2 and correcting reported O 3 values) resulted in very large negative accuracies ranging from −1964 to −42,598% and poor collocated precision. Improved Phase II testing PM 2.5 accuracies from Solvers A and B relative to Phase I testing suggest that smoke-specific calibrations were implemented by the solvers, while Solver B's and Solver D's lower Phase II testing accuracies for gas-phase target species indicate that smoke meaningfully degraded their gas sensor performance. Time series and scatter plots are presented for each EPA reference instrument measurement versus solver sensor pod concentrations in Appendix Figures B.32-B.55. Solver C's sensor pods required 2G cellular connectivity to transmit data to their cloud server, and the absence of 2G cellular coverage in Missoula resulted in no data acquisition during Phase II testing. Solver D's data receiving unit again experienced data transmission and logging failures that negatively impacted data completeness (~31%). Solver E's repaired passive sensor pod was received during Phase II testing, resulting in lower data completeness (~32%). The overall Phase II testing PM 2.5 sensor calculated accuracy distributions for Solvers A and B, with high data completeness (>99%) covering the full range of test burn combustion conditions, were relatively similar, ranging from 26 to 48% (Table 3).
However, when the aggregate results are plotted versus the EPA reference values, as presented in Fig. 4a-d, some differences in PM 2.5 sensor performance are observed. Solver A's PM 2.5 sensor (Plantower Model PMS5003) provides a non-linear response that was well fit by a quadratic model (r² = 0.94; Fig. 4a and b), consistent with other reported results with Plantower sensors under ambient conditions (Zheng et al., 2018), and does not appear to be very sensitive to changes in burn conditions. Solver B's PM 2.5 sensor (Nova Model SDS011) provides a moderately well fit linear response (r² = 0.80; Fig. 4c and d) that appears to be more sensitive to changes in burn conditions. When evaluating Solver B's PM 2.5 sensor performance on an individual burn basis (e.g., Appendix Figure B.56), we observed very good linear responses relative to the EPA reference concentrations but varying sensor sensitivities (slopes ranged from 1.42 to 23.77 for the nine burns presented) due to changing burn conditions. Interestingly, the largest slopes (>10) occurred when the fuels burned were >90% fine dead wood by mass. The impact of CT, RH, BC, and NO 2 on the Solvers' sensor pod measurements was investigated using a multivariate statistical analysis in the same manner as previously described for the Phase I testing. The results of the MANOVA analysis are summarized in Table 4; CT, RH, BC, and NO 2 effects were found to significantly impact the Δ sensor values from each of the Solvers but were not uniformly observed, even between collocated sensors from the same Solver. However, a few general trends were observed between modeled chamber conditions and the reported sensor concentrations. The impact of combustion conditions, as modeled by BC and NO 2 concentrations, significantly affected PM 2.5 and O 3 sensor performance for most Solvers.
Fuel type, arrangement, and moisture content affect the combustion process and hence the size distribution, optical properties, and emission intensity of the aerosol produced. For example, flaming combustion of PPN produces higher number concentrations of smaller, more light-absorbing aerosols compared with smoldering (Carrico et al., 2016). Additionally, smoke aging within the FSL chamber reduces the aerosol number concentration and increases aerosol size (Carrico et al., 2016). Burn-to-burn variability in the amount and mix of fuel and in combustion conditions likely exerted a variable impact on this aerosol aging process during the 1-h sampling periods of our testing. The accuracies of laser and LED photometer-based sensors are affected by changes in aerosol size distributions, aerosol density assumptions, and the optical properties of the aerosols (Kelly et al., 2017), as reflected in the range of burn conditions achieved during this evaluation. The cross-sensitivity of many electrochemical O 3 sensors to NO 2 , as well as sensitivities to environmental conditions such as CT and RH, were also reflected in the MANOVA results.

Challenge awards

The EPA/USFS testing team produced a report for each Solver-submitted sensor system summarizing (i) Phase I and Phase II testing accuracy results as presented and discussed herein, (ii) functionality testing of data telemetry, and (iii) a qualitative review of sensor pod form factor (e.g., size, weight), ease of deployment, battery life, durability, and safety features. These reports were provided to each Solver and to an independent interagency judging panel made up of representatives from CDC, EPA, NASA, NPS, NOAA, and USFS. The judging panel reviewed the testing reports and scored each submitted sensor system, with sensor performance counting for 65% of the total score, weighted most heavily toward PM 2.5 followed by CO, O 3 , and CO 2 .
The remaining 35% of the total score was based on a qualitative review of usability, durability, data telemetry, and cost. The winners of the Wildland Fire Sensor Challenge were announced, and awards presented, at the Air Sensors International Conference in Oakland, California on September 12, 2018 (U.S. EPA, 2018). The First Place Award and $35,000 USD were presented to SenSevere/Sensit Technologies (Pittsburgh, PA, USA; Solver A), the Second Place Award and $25,000 USD were presented to Thingy LLC (Bellevue, WA, USA; Solver B), and an Honorable Mention Award was presented to Kunak Technologies (Pamplona, Spain; Solver C). The Wildland Fire Sensor Challenge testing team provided important feedback to all the Solvers on the quantitative performance of their sensor systems under a wide range of target pollutant concentrations and environmental conditions, data telemetry, data user interface, and form factor. Winners of the challenge have continued to develop their wildland fire sensor technologies, and their products are currently commercially available as the Sensit RAMP (Zimmerman et al., 2018; Malings et al., 2019; Sensit, 2020), the Thingy AQ (Thingy, 2020), and the Kunak Air A10 (Kunak, 2020; Reche et al., 2020).

Post challenge calibration of sensors

As detailed in the rules of the Wildland Fire Sensor Challenge (U.S. EPA, 2017), all Solver-submitted sensors were tested and awards distributed based on the performance of the systems "as received" (i.e., no calibrations were conducted by EPA). After the completion of the Challenge, post-testing Phase I and Phase II regression calibration equations were developed and applied to all the Solver-submitted sensors to evaluate the potential for improving their performance. This was a unique opportunity to re-evaluate the EPA reference "calibrated" systems and their underlying technologies under both ideal clean chamber and controlled smoke chamber conditions over very large concentration ranges.
All the Phase I and Phase II testing calibration equations are presented in Table 5, and time series and scatter plots of the EPA reference versus raw and regression-calibrated sensor measurement data are presented in Appendix Figures B.2 and following. The updated regression-calibrated sensor accuracy results are summarized in Table 6 and demonstrate that the performance of some sensor systems was significantly improved versus their raw challenge accuracies (Table 1, Table 3). As expected, sensors with well fit calibration regression equations produced the most significant performance improvements. For example, the Solver A calibrated PM 2.5 measurements improved Phase I mean accuracies from ~22% to ~75% and Phase II mean accuracies from ~32% to ~83%. Sensors that demonstrated responses to Phase II burn conditions that were poorly modeled by best-fit regression equations produced no significant improvements. For wildland fire applications, the best performing sensors were not overly impacted by burn conditions (BC concentrations and implied changes in aerosol size distribution) and produced well fit calibration models over very large concentration ranges. The accuracy improvements demonstrated for most of the solvers' sensors when regression-calibrated indicate the potential of these systems to provide reasonable accuracies over conditions that are typical during wildland fire events. However, as the differences in the calibration equations shown in Table 5 indicate, the variability in specific sensor type response can be quite large, and more evaluations are needed before generic smoke correction equations can be generally applied.
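A post-hoc regression calibration of the kind described above can be sketched with NumPy: fit a linear (or, as for the quadratic-responding Plantower-based PM 2.5 sensor, second-order) polynomial mapping raw sensor readings to reference concentrations on the test data, then apply it to new readings. The coefficients and data below are illustrative, not the published Table 5 equations:

```python
import numpy as np

def fit_calibration(raw, reference, degree=1):
    """Least-squares polynomial calibration mapping raw sensor readings to
    reference concentrations (degree=1 for a linear fit, degree=2 for a
    quadratic fit). Returns coefficients, highest order first."""
    return np.polyfit(raw, reference, degree)

def apply_calibration(coeffs, raw):
    """Apply a fitted calibration to raw sensor readings."""
    return np.polyval(coeffs, raw)

# Illustrative example: a sensor that reads ~2x high is corrected back
# toward the reference scale by a linear calibration.
raw = np.array([10.0, 20.0, 40.0, 80.0])
ref = np.array([5.0, 10.0, 20.0, 40.0])
coeffs = fit_calibration(raw, ref)            # slope ~0.5, intercept ~0
corrected = apply_calibration(coeffs, np.array([60.0]))
```

Because the fitted equation is specific to one sensor type and one smoke regime, applying it outside the conditions it was trained on risks exactly the generalization problem the Table 5 variability illustrates.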
Conclusions

The Wildland Fire Sensor Challenge succeeded in bringing the sensor manufacturer community's attention to the unique air pollution measurement needs of federal, state, local, and tribal agencies managing wildland fire response and public health messaging during large events, and several Solvers succeeded in designing and building fit-for-purpose sensor systems. The small form factor, ruggedness, and easy deployment of the award-winning Solver systems reflected careful design and implementation. However, data telemetry and data presentation/visualization solutions were identified as a general shortcoming. The 1st place award-winning Solver-submitted sensor system provided reasonable mean accuracies (>80%) for PM 2.5 , CO, and CO 2 over conditions that are typical during wildland fire events when smoke-specific EPA/USFS calibrations were applied. The O 3 results in Phase I testing were reasonable for most Solvers, but the Phase II results were poor for all Solver systems due to very low chamber concentrations (NO titration) and positive NO 2 measurement artifacts. This study also highlighted the need for using FRMs for evaluating/calibrating sensor systems in biomass smoke. Regulatory FEMs providing continuous measurements for PM 2.5 are not well characterized under smoke conditions, and our results demonstrate that the 1-h FRM correction factors ranged from 0.50 to 2.11 and were a function of burn condition (BC concentrations and implied changes in aerosol size distribution). Similarly, ultraviolet (UV) photometric FEM O 3 instruments suffer from large positive measurement artifacts in the FSL smoke chamber and in near-field prescribed burning smoke plumes (Long et al., 2021).

Supplementary Material

Refer to Web version on PubMed Central for supplementary material.

• Wildland Fire Sensor Challenge aimed to advance technology for smoke applications.
• Fine particulate matter, carbon monoxide, carbon dioxide, and ozone were targeted.
• Submitted systems were tested in research chambers over large dynamic ranges.
• Sensor accuracy, precision, linearity, and operability are presented and discussed.
• Sensor performance was dramatically improved by smoke specific post calibration.

Table 6. Post challenge regression calibrated chamber testing accuracy (%) results (mean ± standard deviation) for PM
Anticancer Activities of Hesperidin via Suppression of Up-Regulated Programmed Death-Ligand 1 Expression in Oral Cancer Cells

Up-regulated expression of programmed death-ligand 1 (PD-L1) by interferon-gamma (IFN-γ) has been associated with promotion of cancer cell survival and tumor cell escape from anti-tumor immunity. Therefore, a blockade of PD-L1 expression can potentially be used as a molecular target for cancer therapy. The aim of this study was to investigate whether suppression of IFN-γ-induced PD-L1 expression in two oral cancer cell lines, HN6 and HN15, by hesperidin effectively decreased cell proliferation and migration. Further, our objective was to elucidate the involvement of signal transducer and activator of transcription 1 (STAT1) and STAT3 in the inhibition of induced PD-L1 expression by hesperidin. Our findings indicate that IFN-γ induced expression of PD-L1 protein in HN6 and HN15 via phosphorylation of STAT1 and STAT3, and that hesperidin significantly reduced that induction through suppression of phosphorylated STAT1 and STAT3 in both cell lines. Moreover, hesperidin also significantly decreased the viability, proliferation, migration, and invasion of both cell lines. In conclusion, hesperidin exerted anticancer effects against oral cancer cells through the suppression of PD-L1 expression via inactivation of the STAT1 and STAT3 signaling molecules. The findings of this study support the use of hesperidin as a potential adjunctive treatment for oral cancer.

Introduction

Oral cancer, ranked the sixth most commonly diagnosed form of cancer, continues to emerge as a major concern in various regions of the world [1]. Oral squamous cell carcinoma (OSCC) is recognized as the most common type of oral cancer [2]. The main risk factors for OSCC are associated with tobacco use and alcohol consumption [3].
However, several other factors may also favor its development and progression, such as infection with human papillomavirus and poor oral hygiene [4]. In OSCC, there is a relatively high incidence of invasion into the underlying tissue and/or potential metastasis to distant organs via the lymphatic vasculature, which is known to lead to tumor recurrence and patient morbidity and mortality [5]. As a result, the five-year survival rate associated with OSCC is poor when the cancer is detected at a later stage [6]. Consequently, it would be beneficial to identify a novel adjunctive agent that can be used in combination with standard chemotherapeutic drugs to enhance the treatment efficacy of OSCC. Interferon-gamma (IFN-γ) is a main cytokine that is produced and secreted by several types of immune cells, including T lymphocytes and natural killer cells, present in the tumor microenvironment [7]. In melanoma, IFN-γ induces the expression of programmed cell death-ligand 1 (PD-L1) via activation of Janus kinases JAK1 and JAK2, signal transducer and activator of transcription 1 (STAT1), and interferon regulatory factor 1 [8]. Moreover, STAT3 can be induced by IFN-γ as an atypical signal transducer, and a cross-talk relationship between STAT1 and STAT3 has been demonstrated [9]. IFN-γ also induces the expression of PD-L1 in other cancer cell types, including those of oral cancer [10], breast cancer [11], and lung cancer [12], supporting a mechanism known as the immune evasion of cancer cells [13]. A comprehensive understanding of up-regulated PD-L1 expression by IFN-γ, and of its associated functions in the aggressiveness of oral cancer cells, may thus contribute to the development of strategies that could maximize the anticancer activities of newly identified agents. PD-L1 is a transmembrane protein that is expressed in several somatic cell types, such as antigen-presenting cells.
It is a primary ligand for programmed cell death protein-1 (PD-1), a membrane-bound receptor of activated T cells. The binding between PD-L1 and PD-1 negatively regulates T-cell activation by inducing T cell apoptosis, thereby acting as a negative immune checkpoint [14,15]. Many cancer cell types exploit this mechanism to evade the host immune system and promote their survival by overexpressing PD-L1 protein on the cell surface. High levels of PD-L1 expression also benefit cancer cells by enhancing their resistance to chemotherapy and supporting their metastatic ability. Previous studies [16] have shown that PD-L1 and PD-1 are potential targets for the development of new drugs for cancer treatment. Therefore, studies that identify natural compounds able to inhibit PD-L1 signaling in oral cancer cell lines are of significant interest in the search for viable treatment options that could address the growing problem of oral cancer worldwide. The present study focused on hesperidin, an active compound in orange peel and other citrus species that is widely used in Chinese herbal medicine [17]. Hesperidin has displayed potent anticancer effects against various cancer cell lines, including those of prostate cancer, bladder cancer, hepatocellular carcinoma, and breast cancer [18-20]. It has been reported that hesperidin exerts its anticancer activity by promoting apoptosis [21], while inhibiting the invasion and metastasis of lung cancer and hepatocellular carcinoma [22]. Notably, hesperidin has been shown not to be toxic to normal cells [21]. A previous study has suggested that hesperidin may act as an immune checkpoint inhibitor by suppressing the PI3K/AKT pathway in the breast cancer cell line MDA-MB231 [23].
Consequently, it has been hypothesized that hesperidin may also act as an immune checkpoint inhibitor in oral cancer cells by targeting PD-L1 expression that has been induced by treatment with IFN-γ via suppression of the STAT1/STAT3 signaling pathway, which ultimately can result in the down-regulation of PD-L1 expression and a decrease in the degree of aggressiveness of cancer cells. PD-L1 Expression in Oral Cancer Cells By immunoblotting, a faint immunoreactive band of PD-L1 expression at the predicted size (50 kDa) was found in HN15 but not in HN6, whereas an intense band at the same molecular weight was detected in the breast cancer cell line, MDA-MB231, that was used as a positive control [23] ( Figure 1A). Note that additional bands or a weak band detected at around 70-95 kDa in HN6 and HN15, or in MDA-MB231, respectively, were consistent with the ubiquitinated PD-L1 proteins that had been previously reported at a molecular weight greater than 64 kDa [24]. Expression of β-actin was equal among the whole cell lysates of these three different cell lines. By immunofluorescence, cytoplasmic localization of PD-L1 protein was not detected in either HN6 or HN15, whereas an intense signal (red) of PD-L1 protein was found in the cytoplasm of the MDA-MB231 cell line ( Figure 1B). Note that nuclear staining in HN6 and HN15 ( Figure 1B) is considered an artifact as has been previously reported [25]. Up-Regulation of PD-L1 Expression by IFN-γ in Oral Cancer Cells The viability of HN6 and HN15 upon treatment with IFN-γ at indicated concentrations for 24 h was first checked using MTT assay. There was no difference in the mean percentage values of cell viability in HN6 or HN15 upon treatment with doses of IFN-γ up to 400 IU/mL when compared to that of the control untreated cells. This indicated that IFN-γ treatment was not toxic to both cancer cell lines ( Figure 2A). 
Next, the effect of treatment with IFN-γ at different doses (0-400 IU/mL) for 24 h on PD-L1 protein expression in both cell lines was examined. Treatment with IFN-γ induced PD-L1 expression in a dose-dependent manner in both cell lines ( Figure 2B,C). Notably, expression of β-actin was equivalent among the different samples. By densitometry, treatment with IFN-γ at 200 or 400 IU/mL significantly enhanced the mean percentage values of PD-L1 expression in both HN6 and HN15 (p < 0.01; Figure 2D and Figure 2E, respectively). Moreover, treatment with IFN-γ at 200 IU/mL for 24 h enhanced PD-L1 expression (red) in the cytoplasm of both HN6 and HN15 when compared to nuclear staining in the control untreated HN6 and HN15 cell lines ( Figure 2F and Figure 2G, respectively). The cytoplasmic presence of PD-L1 protein by IFN-γ treatment in HN6 and HN15 corresponded with that found in the MDA-MB231 cell line ( Figure 1B). Thus, the concentration of IFN-γ at 200 IU/mL was selected for oral cancer cell stimulation in subsequent experiments. assay. (B,C) Representative images of up-regulated PD-L1 expression clearly shown by treatment with IFN-γ at 200 or 400 IU/mL for 24 h in HN6 and HN15, respectively. (D,E) Bar graphs demonstrate the relative intensities of PD-L1 to those of β-actin in IFN-γ-treated HN6 and HN15, respectively. These were compared to those of the untreated cells whose ratio was set to 100%. Data in (A,D,E) are presented as mean ± SD values (error bars) obtained from three separate experiments. ** p < 0.01. (F,G) Representative images from three experiments in HN6 and HN15 that were treated with IFN-γ at 200 IU/mL for 24 h. Note the cytoplasmic localization of PD-L1 (red) upon treatment with IFN-γ, while an artifact of nuclear staining was still found in the control untreated HN6 and HN15 cell lines as indicated by staining with DAPI (blue). 
Enhancement of Phosphorylated STAT1 and STAT3 Levels by IFN-γ To determine whether treatment with IFN-γ phosphorylates STAT1 and STAT3, two major signaling molecules that mediate the effects of IFN-γ in other cell types [9], HN6 and HN15 were treated with IFN-γ at 200 IU/mL for the indicated times. The results revealed that the levels of phosphorylated STAT1 (p-STAT1) and p-STAT3 were transiently increased in HN6 and HN15 upon treatment with IFN-γ, with a salient increase observed at 30 min in both cell lines ( Figure 3A and Figure 3B, respectively). Therefore, the inhibitory effect of hesperidin on the phosphorylation of STAT1 and STAT3 was later studied after treatment with IFN-γ for 30 min. Suppression of the Viability of Oral Cancer Cells by Hesperidin To investigate the anticancer effects of hesperidin on HN6 and HN15, cell viability upon treatment with hesperidin at various doses from 0 to 200 µM for 24, 48, or 72 h was first studied using the MTT assay. Treatment with hesperidin decreased cell viability in both dose- and time-dependent manners, with significant reductions in the mean percentage values of cell viability detected at 50 µM for a 24 h incubation period in HN6 or at 25 µM for 48 and 72 h incubation periods in HN6 and HN15 (p < 0.05; Figure 4A and Figure 4B, respectively). Inhibitory concentrations of hesperidin at 50% (IC 50 ) for HN6 at 48 and 72 h were 169.53 and 184.62 µM, respectively, which were lower than those for HN15 (199.51 and 195.98 µM at 48 and 72 h, respectively), indicating that HN6 is more sensitive to hesperidin than HN15. Likewise, the IC 20 at 24 h for HN6 was lower than that for HN15 (55.20 versus 72.22 µM, respectively). Concentrations of hesperidin lower than the IC 20 , i.e., ≤50 µM, were then selected for subsequent experiments. In addition, the morphology of HN6 and HN15 was monitored after exposure to hesperidin at 50 or 200 µM for 24 h.
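The study does not state how the IC 50 and IC 20 values were derived from the MTT dose-response data; a common, minimal approach is linear interpolation between the two bracketing doses. The sketch below illustrates that approach only — the dose-response values are hypothetical placeholders, not data from this study, and the function name is ours.

```python
def inhibitory_concentration(doses, viabilities, level=50.0):
    """Estimate the dose at which viability falls to (100 - level)%
    by linear interpolation between the two bracketing doses.

    doses must be ascending; viabilities are % of untreated control.
    """
    target = 100.0 - level  # e.g. IC50 -> 50% viability remaining
    points = list(zip(doses, viabilities))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if v0 >= target >= v1:
            # linear interpolation on the viability axis
            return d0 + (v0 - target) * (d1 - d0) / (v0 - v1)
    raise ValueError("target viability not bracketed by the data")

# Hypothetical 48 h dose-response (µM vs. % viability of control)
doses = [0, 6.25, 12.5, 25, 50, 100, 200]
viab = [100, 97, 93, 86, 74, 61, 44]

ic50 = inhibitory_concentration(doses, viab, level=50.0)
ic20 = inhibitory_concentration(doses, viab, level=20.0)
```

In practice a sigmoidal (four-parameter logistic) fit is usually preferred over linear interpolation, but the bracketing idea is the same.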
Some HN6 and HN15 cells rounded up and detached from the culture vessel; this was readily seen upon treatment with hesperidin at 200 µM (black arrows in Figure 4C and Figure 4D, respectively). Inhibition of Colony-Forming Capacity and Migration of Oral Cancer Cells by Hesperidin Next, the antiproliferative effect of hesperidin was determined using a colony formation assay. The number of colonies was reduced by treatment with hesperidin in a dose-dependent manner, with significant reductions in the mean percentage values of the colony numbers found upon treatment with 50 µM of hesperidin in the HN6 and HN15 cell lines (p < 0.01; Figure 5A, Figure 5C, Figure 5B and Figure 5D, respectively). This implied that treatment with hesperidin could suppress the ability of both oral cancer cell lines to proliferate. Moreover, a wound healing assay revealed that cell migration was decreased by treatment with hesperidin in a dose-dependent manner, with significant decreases in the mean percentage values of the closing area identified upon treatment with 50 µM of hesperidin in HN6 (p < 0.01) and upon treatment with 25 µM (p < 0.05) or 50 µM (p < 0.01) of hesperidin in HN15 ( Figure 5E, Figure 5G, Figure 5F and Figure 5H, respectively). Furthermore, a cell invasion assay revealed that treatment with hesperidin inhibited the significant increases in the mean percentage values of cell invasion upon IFN-γ treatment in both HN6 and HN15 in a dose-dependent manner ( Figure 5I and Figure 5J, respectively). A significant degree of inhibition was observed upon treatment with hesperidin at 50 µM only in HN6 (p < 0.05; Figure 5I). Inhibitory Effect of Hesperidin on IFN-γ-Induced PD-L1 Protein Expression in Oral Cancer Cells To determine the effect of hesperidin on up-regulated PD-L1 expression by treatment with IFN-γ, HN6 and HN15 were treated with IFN-γ at 200 IU/mL in the presence or absence of hesperidin at various doses (6.25-50 µM) for 24 h.
Through the process of immunoblotting, treatment with hesperidin decreased the level of up-regulated PD-L1 expression by IFN-γ in a dose-dependent manner in both HN6 and HN15 ( Figure 6A and Figure 6B, respectively) with significant reductions in the mean percentage values of PD-L1 expression that were found after treatment with 50 µM of hesperidin in HN6 (p < 0.01; Figure 6C) and with 25 or 50 µM of hesperidin in HN15 (p < 0.01; Figure 6D). In addition, treatment with hesperidin at 25 or 50 µM abrogated the cytoplasmic fluorescence signal (red) of PD-L1 protein induced by treatment with IFN-γ at 200 IU/mL in HN6 and HN15 ( Figure 6E and Figure 6F, respectively). These results indicate that treatment with hesperidin at doses lower than IC 20 could effectively down-regulate PD-L1 expression that was induced by IFN-γ in both oral cancer cell lines. This suggests that the anticancer activities of hesperidin observed in Figures 4 and 5 may be mediated by a decreased level of PD-L1 expression. Inhibition of IFN-γ-Induced Phosphorylated STAT1 and STAT3 Levels by Hesperidin To gain further insights into the inhibitory mechanisms of hesperidin on PD-L1 expression via the IFN-γ signaling pathway, HN6 and HN15 were pretreated with hesperidin at indicated doses (0-50 µM) for 4 h. This was followed by treatment with IFN-γ at 200 IU/mL for 30 min. The levels of p-STAT1 and p-STAT3 were detected by Western blot hybridization. Pretreatment with hesperidin reduced the levels of p-STAT1 and p-STAT3 that were induced by IFN-γ at 200 IU/mL in a dose-dependent manner in HN6 and HN15 ( Figure 7A and Figure 7B, respectively). By densitometry, the increased mean ratio of p-STAT1/total STAT1, and that of p-STAT3/total STAT3 by IFN-γ treatment, were significantly inhibited by pretreatment with hesperidin at 25 or 50 µM in HN6 (p < 0.01; Figure 7C and Figure 7E, respectively) and in HN15 (p < 0.01; Figure 7D and Figure 7F, respectively). 
These findings suggest that treatment with hesperidin could potentially inhibit up-regulated PD-L1 protein expression by IFN-γ via the STAT1 and the STAT3 signaling molecules. Discussion Unlike PD-L1 expression, detected as an immunoreactive band at 50 kDa or signal localized within the cytoplasm of the breast cancer cell line, MDA-MB231, the present in vitro study has demonstrated very low to no PD-L1 expression in the two oral cancer cell lines, namely HN6 and HN15. The discrepancy in PD-L1 expression between the oral and the breast cancer cell lines implies a unique feature for the molecular pathogenesis of oral cancer. However, treatment with exogenously added IFN-γ, a pro-inflammatory cytokine found within the tumor microenvironment, could significantly induce the expression of PD-L1 in a dose-dependent fashion in both oral cancer cell lines. This could possibly have occurred via the transient phosphorylation of STAT1 and STAT3, two major signaling molecules mediating the signal transduction of IFN-γ. Furthermore, although the cytoplasmic localization of PD-L1 protein was not detected in the untreated HN6 or HN15 cell lines, as assayed by immunofluorescence, the PD-L1 protein was found to be localized in the cytoplasm of IFN-γ-treated HN6 and HN15 cell lines. It is likely that the immunoreactive signal found in the nuclei of HN6 and HN15 is an artifact that results from inappropriate cell fixation and permeabilization during an immunocytochemical study as has been previously suggested [25]. Regardless of whether treatment with IFN-γ was administered, the PD-L1 protein had been already found to be expressed in several types of untreated cancer cell lines [26,27]. The absence or the presence of a faint immunoreactive band at the predicted size in untreated HN6 or HN15 cells, respectively, was initially rather surprising to us. Nevertheless, the unexpected immunoreactive bands at around 70-95 kDa were instead detected by immunoblotting in HN6 and HN15. 
It is possible that these bands are the ubiquitinated PD-L1 proteins, as has been previously demonstrated, at a molecular weight greater than 64 kDa [24]. In addition, it has been lately demonstrated that the PD-L1 protein can indeed undergo ubiquitination in oral squamous cell carcinoma [28]. The issue of posttranslational modification of the PD-L1 protein by ubiquitination in HN6 and HN15 has not yet been addressed in this study. Nevertheless, another previous study has suggested that inflammation could increase PD-L1 expression via the COP9 signalosome complex subunit 5 (CSN5), an essential regulator of the ubiquitin conjugation pathway, by decreasing the ubiquitination of the PD-L1 protein. This could lead to stabilization of the PD-L1 protein [29]. Therefore, it is probable that treatment with IFN-γ may decrease PD-L1 ubiquitination in HN6 and HN15 resulting in more intact PD-L1 protein being detectable at its predicted size. This subject remains to be further explored. Several previous studies have demonstrated that treatment with IFN-γ induces the expression of PD-L1 in various types of cancer cells such as breast cancer, lung cancer, and oral cancer cells [10,23,30]. Moreover, it has been well-established that a higher degree of PD-L1 protein expression is associated with immune escape and metastasis in different types of cancer cells [31,32]. The findings from this in vitro study, which demonstrated significant increases in PD-L1 expression as well as in cell invasion upon treatment with IFN-γ in the oral cancer cell lines, HN6 and HN15 ( Figure 5I,J), are thus in line with those of a previous study [33] that emphasized the significant role of PD-L1 in cancer aggressiveness. In this study, treatment with hesperidin at doses lower than IC 20 , i.e., ≤50 µM, could significantly diminish oral cancer cell survival, proliferation, migration, and invasion. 
This probably would have occurred as a result of reduced STAT1/STAT3 activation followed by decreased PD-L1 expression upon hesperidin treatment ( Figure 8). Nonetheless, the direct roles of PD-L1 in the four aspects of oral cancer aggressiveness mentioned above, as well as in the promotion of an evasive mechanism that oral cancer cells, especially HN6 and HN15, exploit in order to escape immune surveillance, have not yet been determined (Figure 8). Consequently, these roles should be the subject of further investigations. Otherwise, an additional investigation into enhanced tumor immunity, which leads to the inhibition of immune evasion and cancer aggressiveness, by suppressing the up-regulated expression of PD-L1 via the reduced activation of STAT1/STAT3 signaling, is worth pursuing in oral cancer cell lines. The development of natural compounds as an adjunctive treatment for cancer is of great interest. Any natural compound that possesses an anti-inflammatory property and can block or regulate the inducible expression of PD-L1 protein is thus suitable as a therapeutic candidate for an immune checkpoint inhibitor [34]. In this regard, it has recently been reported that hesperidin acts as an essential regulator of PD-L1 expression in the breast cancer cell line [23]. Consequently, it is of considerable interest to determine the anticancer effects of hesperidin and evaluate the regulation of PD-L1 expression in human oral cancer cell lines in vitro. Firstly, the findings from the cytotoxic screening showed that treatment with hesperidin inhibited the viability of the two oral cancer cell lines in dose- and time-dependent fashions. In addition, previous studies have found that hesperidin at these concentrations (0-200 µM) does not affect normal cells such as the normal human dermal fibroblast (NHDF) cell line [35] and a normal liver cell line [36]. However, it is reported that hesperidin suppresses cell proliferation in several cancer cell types.
Secondly, our findings revealed that treatment with hesperidin was able to partially decrease the cell proliferation, migration, and invasion of these oral cancer cell lines. Finally, treatment with hesperidin partially diminished IFN-γ-induced PD-L1 expression and the phosphorylation of STAT1 and STAT3. Since STAT1/STAT3 signaling is regarded as one of the critical pathways for cell proliferation, migration, and invasion [36,37], it is likely that the anticancer effects of hesperidin are mediated by STAT1/STAT3 signaling activated by treatment with IFN-γ in the oral cancer cell lines. Nevertheless, due to the partial reduction of STAT1/STAT3 phosphorylation upon treatment with hesperidin, it is necessary to elucidate the involvement of other signaling pathways. Hesperidin has been shown to have anti-cancer effects in different malignancies, emphasizing its molecular mechanism of action. Hesperidin acts as an anti-cancer agent by promoting apoptosis in malignant cells such as liver cancer and bladder cancer cells via the NF-κB, MAPK, and PI3K/AKT pathways. Moreover, hesperidin inhibits the expression of MMP and epithelial-mesenchymal transition (EMT)-related proteins, suppressing cell migration and invasion, as well as exerting anti-inflammatory activity [34]. This study discovered that hesperidin prevents IFN-γ-induced PD-L1 protein expression, which otherwise contributes to tumor immune evasion, by inactivating STAT1/STAT3 signaling in OSCC cells. Given the only partial blockade of STAT1/STAT3 activation by treatment with hesperidin, possible off-target effects remain an open question. To address the potential off-target problem of STAT1 and STAT3 in IFN-γ-induced PD-L1 expression in the oral cancer cell lines, knockdown of STAT1/STAT3 expression with specific siRNAs would be required. In addition, the effect of hesperidin on signaling pathways other than STAT-1 and STAT-3 in OSCC cell types has not been explored.
Other reports have indicated that signaling pathways other than STAT1/STAT3, including the NF-κB, MAPK, and PI3K/AKT pathways, are often involved in PD-L1 up-regulation. In future research, these other signaling mechanisms for the effect of hesperidin should be explored. Moreover, the efficacy and the safety of standard chemotherapy in combination with hesperidin treatment in a responsive syngeneic tumor model are essential issues of concern. Before hesperidin can be applied in clinical trials, which may offer a novel treatment option for patients with oral cancer, additional in vivo studies are required to confirm the inhibitory effect of hesperidin on the inducible PD-L1 expression by IFN-γ. The effect of hesperidin on T cell activity should also be investigated both in vitro and in vivo. Lastly, the insight gained from the present study could enable researchers to propose the potential clinical application of hesperidin as an immune checkpoint inhibitor in the future. Chemical Reagents and Antibodies Serum-free keratinocyte growth medium (KGM) was obtained from Lonza. Oral Cancer Cell Lines and Cell Cultures The two human OSCC cell lines used in this study were HN6 and HN15. HN6 was originally isolated from the tongue of a male patient with OSCC with T 2 N 0 M 0 staging who had received a histopathological diagnosis of moderately differentiated OSCC [35]. HN15 was isolated from a metastatic lymph node, with the primary OSCC site established on the floor of the mouth [36]. These cells were cultured in serum-free KGM supplemented with 1% penicillin/streptomycin at 37 °C in a humidified atmosphere containing 5% CO2. When cells reached 80% confluence, they were trypsinized and plated in appropriate culture vessels for either expansion of their cell numbers or further experimentation. Cytotoxic Assay HN6 and HN15 cells (5000 cells/well) were seeded in a 96-well culture plate overnight and then exposed to various doses (0, 6.25, 12.5, 25, 50, 100, and 200 µM) of hesperidin for 24, 48, or 72 h.
Cell viability was determined by MTT assay as previously described [23]. The optical density (OD) of the dissolved formazan dye was measured using a spectrophotometric plate reader (Thermo Fisher Scientific, Inc.) at 540 nm with a reference wavelength of 630 nm. The percentage of cell viability (% of cell viability) was calculated as OD of sample/OD of control × 100. Colony Formation Assay The antiproliferative effect of hesperidin on HN6 and HN15 was examined using a colony formation assay. Briefly, HN6 and HN15 at 8 × 10 2 cells/well were seeded in a 6-well culture plate overnight. The cells were treated with IFN-γ at 200 IU/mL in the presence or absence of hesperidin at 6.25, 12.5, 25, or 50 µM for 24 h. Next, the KGM was removed and replaced with fresh KGM every other day to allow for colony formation over the course of 2 weeks. The colonies were washed with cold PBS, fixed with 95% ethanol for 15 min, and stained with 0.5% crystal violet for 1 h at room temperature. Images of the stained colonies were captured with a digital camera attached to a microscope, and 10% (v/v) acetic acid was then added to each well. This was followed by measurement of the absorbance of the dissolved dye at 595 nm using the spectrophotometric plate reader (Figure 1). Immunofluorescence staining of PD-L1 was performed the following day. In brief, cells were washed with 1X PBS, fixed with 4% paraformaldehyde for 40 min, and permeabilized with 0.1% (v/v) Triton X-100 in 3% (w/v) bovine serum albumin-PBS for 2 min. They were then reacted with the PD-L1 antibody (1:200) in PBS without any detergent at 4 °C overnight. After being washed, they were incubated with anti-rabbit NorthernLights™557-conjugated IgG (1:500), Alexa Fluor™488-conjugated Phalloidin (1:500), and DAPI (1:1,000) in PBS at room temperature for 1 h. The slides were then mounted with DAKO ® Fluorescent Mounting Medium (DAKO Corporation, Carpinteria, CA, USA).
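The percent-viability calculation described above (OD of sample divided by OD of control, times 100) can be sketched as follows; the OD readings shown are hypothetical placeholders, not measurements from the study.

```python
def percent_viability(od_sample, od_control, od_blank=0.0):
    """Percent cell viability from MTT optical densities.

    An optional background (blank) absorbance can be subtracted
    from both readings before taking the ratio.
    """
    return (od_sample - od_blank) / (od_control - od_blank) * 100.0

# Hypothetical OD readings (540 nm, 630 nm reference already applied)
control_od = 0.80  # untreated cells
treated_od = 0.52  # hesperidin-treated cells

viability = percent_viability(treated_od, control_od)
print(f"{viability:.1f}% viability")  # prints "65.0% viability"
```

Untreated control wells give 100% by construction, so dose-response points computed this way are directly comparable across plates.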
The fluorescence signals were observed and captured under a fluorescence microscope (Axio with ApoTome.2, Carl Zeiss Microscopy GmbH, Göttingen, Germany). As can be seen in Figure 6, HN6 and HN15 treated with 0.5% (v/v) DMSO were used as a vehicle control for hesperidin. Subsequently, cells treated with IFN-γ at 200 IU/mL for 24 h in the absence of immunoreaction with the PD-L1 antibody were used as a conjugate control. Immunoblotting To determine PD-L1 expression and p-STAT1 and p-STAT3 levels, HN6 and HN15 seeded at 2 × 10 5 cells/well in a 6-well plate were treated with the indicated doses (0-400 IU/mL) of IFN-γ for 24 h (Figure 2) or with IFN-γ at 200 IU/mL for various periods of time (10-60 min and 24 h; Figure 3). To elucidate the involvement of PD-L1 expression and the p-STAT1 and p-STAT3 levels, these cell lines were treated with IFN-γ at 200 IU/mL in the presence or absence of hesperidin at 6.25, 12.5, 25, or 50 µM for 24 h (Figure 6), or were pretreated with hesperidin at 6.25, 12.5, 25, or 50 µM for 4 h, followed by treatment with IFN-γ at 200 IU/mL for 30 min (Figure 7). The cells were then lysed in Mammalian Protein Extraction buffer containing both protease and phosphatase inhibitors. Total protein content was determined using the Bradford assay. A 30 µg quantity of total protein from each sample was resolved on 12% SDS-PAGE, according to the method previously described [37], and then transferred to nitrocellulose membranes (GE Healthcare Europe GmbH, Freiburg, Germany). The membranes were blocked with 5% (w/v) skim milk in Tris-buffered saline with Tween-20 (TBST) for 1 h at room temperature. The membranes were then washed three times with TBST and probed overnight with specific primary antibodies against p-STAT1, total STAT1, p-STAT3, total STAT3, PD-L1, or β-actin at a dilution of 1:1,000 at 4 °C.
After being washed three times, the membranes were exposed to an appropriate secondary antibody for 1 h at room temperature. After three additional washings with TBST, the membranes were allowed to react with an enhanced chemiluminescence substrate (Super Signal West Femto) in order to develop protein bands that were captured using the ChemiDoc XRS system (Bio-Rad Laboratories, Hercules, CA, USA). The band intensity was then analyzed using ImageJ software. After one target protein was detected, the primary and secondary antibody complex was removed using the stripping buffer (Thermo Fisher Scientific, Inc.) for 15 min. Subsequently, the membrane was re-probed with the antibody against the other protein of interest and the intensities of the protein bands were again determined. The intensity of β-actin band in each sample was used as an internal control for PD-L1 expression, while that of total STAT1 or of total STAT3 was used as an internal control for p-STAT1 or p-STAT3, respectively. Wound Healing Assay HN6 and HN15 were cultured in a 6-well plate. At a level of 80% confluence, cells were scratched using a 200 µL pipette tip and washed with PBS. Thereafter, the cells were cultured with serum-free KGM and treated with IFN-γ at 200 IU/mL in the presence or absence of hesperidin at amounts of 6.25, 12.5, 25, or 50 µM for 24 h. Images of the closing area were captured with a camera attached to a microscope (Carl Zeiss Microscopy GmbH) both at the beginning (0 h) and at the 24 h timepoint. The percentage of closing area within the space between the wound edges was calculated using ImageJ software. Invasion Assay HN6 and HN15 at 2 × 10 5 cells/mL were seeded in a 24-well culture plate for 48 h. At 80% confluence, cells were treated with IFN-γ at 200 IU/mL in the presence or absence of hesperidin at 25 or 50 µM for 24 h. 
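The band-intensity normalization described above (target band over its loading control, expressed relative to the untreated lane set to 100%) can be sketched as follows; the intensity values are hypothetical ImageJ readouts, not data from the study, and the function name is ours.

```python
def relative_expression(target, control, target_ref, control_ref):
    """Target-band intensity normalized to its loading control,
    expressed as a percentage of the untreated reference lane
    (whose normalized ratio is set to 100%)."""
    return (target / control) / (target_ref / control_ref) * 100.0

# Hypothetical ImageJ band intensities (arbitrary units)
pdl1_ifn, actin_ifn = 2400.0, 1200.0    # IFN-γ-treated lane
pdl1_ctrl, actin_ctrl = 800.0, 1000.0   # untreated lane

rel = relative_expression(pdl1_ifn, actin_ifn, pdl1_ctrl, actin_ctrl)
```

The same formula applies to the phospho-protein ratios, with total STAT1 or total STAT3 taking the place of β-actin as the internal control.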
The Cell Detachment Solution prepared from the fluorometric QCM™ 24-Well Cell Invasion Assay kit (Merck, Darmstadt, Germany) was then added to each well to detach the cells, which were counted and adjusted to 5 × 10 5 cells/mL. The invasion assay was performed using the fluorometric QCM™ 24-Well Cell Invasion Assay. In brief, cell inserts were rehydrated with 300 µL of prewarmed KGM for 30 min at room temperature. After removal of the medium, the insert was filled with a 250 µL volume of KGM containing HN6 or HN15, and the lower chamber was filled with 500 µL of KGM only. The invasion assay was performed in a CO2 incubator for 14 h. The cells and medium collected from the top side of the invasion chamber insert were removed through careful pipetting, and the insert was placed into a clean well containing 225 µL of the prewarmed Cell Detachment Solution. The resulting cell mixture was incubated for 30 min at 37 °C. The invasive cells were dislodged from the underside of the insert by gently tilting the invasion chamber plate back and forth several times. After removal of the insert, 75 µL of the Lysis Buffer/Dye Solution from the kit was added, and the cell mixture was then incubated for 15 min at room temperature. The cell mixture, at a volume of 200 µL, was transferred to a 96-well black plate with a clear bottom. The fluorescence intensity was measured using a multi-mode microplate reader (Spark, Tecan Austria GmbH, Grödig, Austria) at 480 nm excitation and 520 nm emission wavelengths. The fluorescence signal of each condition was compared to that of the untreated control cells, which was set to 100% invasion. Statistical Analysis Each experiment was repeated independently three times. Data were expressed as mean ± SD values of the three experiments. Statistical analysis was performed using SPSS 12.0 software and one-way ANOVA followed by Tukey's HSD post hoc test.
Additionally, p values of < 0.05 were considered statistically significant.
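The study ran its one-way ANOVA in SPSS; for illustration, the F statistic underlying that test can be computed from scratch as below. The triplicate viability values are hypothetical, and p-values (and the Tukey HSD step) would normally come from statistical software such as SPSS or SciPy, which this dependency-free sketch omits.

```python
def one_way_anova(*groups):
    """One-way ANOVA F statistic with its degrees of freedom.

    Returns (F, df_between, df_within). The p-value would be the
    upper tail of the F(df_between, df_within) distribution.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = 0.0
    for g in groups:
        m = sum(g) / len(g)
        ss_within += sum((x - m) ** 2 for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical % viability triplicates: control, 25 µM, 50 µM hesperidin
control = [100.0, 98.5, 101.2]
dose25 = [88.0, 85.5, 90.1]
dose50 = [72.3, 70.0, 74.8]

f_stat, df_b, df_w = one_way_anova(control, dose25, dose50)
```

A large F relative to the F(2, 6) distribution would justify the Tukey HSD post hoc comparisons between individual dose groups.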
Finite Element Assessment of the Screw and Cement Technique in Total Knee Arthroplasty Background The screw and cement technique is a convenient method used to rebuild medial tibial plateau defects in primary total knee arthroplasty (TKA). The objective of this study was to perform a finite element assessment to determine the effect of different numbers of screws on the stability of TKA and to determine whether differences exist between two different insertion angles. Method Six tibial finite element models with defects filled with screws and cement and one model with defects filled only with cement were generated. Contact stresses on the surface of cancellous bone in different areas were calculated. Results Compared to the cement-only technique, the stress on the border of cancellous bone and bone cement decreased by 10% using the screw and cement technique. For bone defects with a 12% defect area and a 12-mm defect depth, the use of 1 screw achieved the greatest stability; for those with a 20% defect area and a 15-mm defect depth, 2 screws achieved the greatest stability. Conclusions (1) The screw and cement technique is superior to the bone cement-only technique. For tibial defects in which the defect area comprises a large percentage but the depth is less than 5 mm, the screw and cement technique is recommended. (2) Vertical screws can achieve better stability than oblique screws. (3) Screws should be used in moderation for different defects; more is not always better. Introduction Medial tibial plateau defects can often be found in primary total knee arthroplasty (TKA) and require additional management to ensure implant stabilization, support, and durability. Many techniques have been used, including cement, metal augmentation [1][2][3][4][5], bone grafts (autografts or allografts) [1,2,[6][7][8][9][10], and the screw and cement technique [11][12][13].
Compared to other techniques, the screw and cement technique has many advantages, namely, it is less expensive, easier to perform, and less time-consuming. Ritter [11][12][13] reported successful results at early, intermediate, and long-term follow-up after use of the screw and cement technique to correct large tibial defects (5-30 mm). However, in previous studies the number and insertion angle of screws were chosen based on personal experience; to fill a given medial tibial defect, previous authors have used as many screws as possible to ensure the stability of the tibial prosthesis [14][15][16][17][18], and no published study has determined the optimal number of screws or whether differences exist between the two different frequently used screw insertion angles. The purpose of this study was to perform a finite element (FE) assessment to determine the effect of different numbers of screws on the stability of TKA for two types of moderate uncontained type-2 defects and to determine whether differences exist between two different screw insertion angles. Materials and Methods A knee of a healthy volunteer (height 1.73 m, weight 60 kg, male) was scanned by computed tomography (CT), and a geometric knee model was built using the Mimics 11 software. The composition of each model is shown in Table 1. The performance of the component materials in each model is shown in Table 2. Then, based on the geometric knee model and the area percentage and depth statistics of tibial defects summarized for patients who underwent TKA using screws and cement due to medial plateau defects, we simulated two types of common defects with a 12% defect area (12-mm depth) and a 20% defect area (15-mm depth) after performing a horizontal resection of 11 mm (according to clinical experience) above the tibial plateau. The defect depths were two common depths that were measured during the operations of 40 patients whose tibial defects were treated using screws and cement.
Based on the two tibial defect models above, 7 three-dimensional, static, proximal tibial FE models implanted with a tibial prosthesis (PFC Sigma, DePuy) and a plastic insert were built using the Mimics 11 software (Figures 1(a) and 1(b)). The selected prosthesis is one frequently used by our senior surgeon. The diameter of the screw was 6.5 mm; the distance between the upper surface of the screw head and the lower surface of the tibial component was 0 mm. The cortical bone modulus was taken from Frehill et al. [14] and is within the modulus range cited and used by other authors. The value of the cancellous modulus used is within the range of values (389-1132 MPa) cited and obtained experimentally by Au et al. for cancellous bone [15]. The contact between the bearing and the tibial tray was modelled using a surface-to-surface contact algorithm, and a constant coefficient of friction (0.1) was used in all models [16]. The load application area is shown in Figure 2. Traditionally, the loads applied to the knee to represent a level gait in FE modelling have been 2.5-3 times the body weight [15,18,19]. These data are based on models of knee biomechanics developed by Morrison [17]. Recently, somewhat lower levels of loading (2.2 times the body weight) have been measured in vivo [20], and it seemed more appropriate to use these loads in the present study. Thus, a total load of 1294 N (representing a 60-kg person) was used in this study. In all models, the distal end of the tibia was assumed to be constrained in all directions. Contact stresses on the surface of cancellous bone were measured using the Abaqus 6.12 software to determine whether the use of screw(s) and cement to fill proximal medial defects would result in an increased likelihood of bone failure due to increased stresses. The critical level for this stress was considered to be 2.8 MPa (equivalent to approximately 4000 με) [21]. This very conservative value is one of the lowest values in the literature.
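As a quick arithmetic check, the 1294 N load stated above is consistent with 2.2 times the body weight of a 60-kg subject when g ≈ 9.80 m/s² is used (the paper does not state its g value, so that figure is our back-calculated assumption):

```python
G = 9.80  # m/s^2; back-calculated assumption, since 1294 / (2.2 * 60) ≈ 9.80

def gait_load_newtons(body_mass_kg, factor=2.2):
    """Axial knee load for level gait as a multiple of body weight."""
    return factor * body_mass_kg * G

load = gait_load_newtons(60)  # ≈ 1294 N, matching the study
```

The same helper reproduces the traditional 2.5-3 × body-weight range simply by changing `factor`.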
Cancellous bone stresses were also examined to ensure that reduced stresses did not lead to severe bone resorption, using an adopted resorption threshold of 0.1 MPa (equivalent to approximately 150 με) [21]. The stresses measured on the surface of cancellous bone were then compared across models to assess whether differences existed among them.

Results

The stresses at 12 points on the surface of cancellous bone in the medullary cavity of each model are shown in Figure 5. No significant difference was found, and all stresses measured were within the normal range (0.1-2.8 MPa). The stresses at the 4 trisection points are shown in Figure 6. The stresses at the anteromedial trisection points in models with a 12% defect area (0.22-0.26 MPa) were lower than those in models with a 20% defect area (0.33-0.38 MPa). However, no other statistically significant difference was found. Table 3 shows the stresses at the focus points in the cancellous bone around the screws. All stresses were within the safety range (0.1-2.8 MPa). In models with a 12% defect area and a 12-mm depth, the use of 1 vertical screw to rebuild the defect resulted in a lower focused stress (1.05 MPa) than the use of 1 oblique screw (1.23 MPa). In models with a 20% defect area and a 15-mm depth, the use of 3 screws resulted in a higher focused stress (1.77 MPa) than the use of 1 or 2 screws (1.66 MPa and 1.71 MPa, respectively). As shown in Figure 7, each defect was divided into 4 sections (medial, lateral, anterior, and posterior), and the stresses at 6 points were measured in each section.
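The two thresholds (0.1 MPa resorption, 2.8 MPa failure) define a simple three-way check that the Results apply to every measured stress. A hedged sketch (the function and labels are ours, not the paper's):

```python
# Thresholds quoted in the text: resorption risk below 0.1 MPa,
# failure risk above 2.8 MPa, "safe" in between.
RESORPTION_MPA = 0.1
FAILURE_MPA = 2.8


def classify_stress(stress_mpa: float) -> str:
    """Classify a cancellous-bone surface stress against the study's thresholds."""
    if stress_mpa < RESORPTION_MPA:
        return "resorption risk"
    if stress_mpa > FAILURE_MPA:
        return "failure risk"
    return "safe"


# The focused stresses around the screws (Table 3) all fall in the safe band,
# e.g. 1.05 MPa and 1.77 MPa.
print(classify_stress(1.05), classify_stress(1.77))  # safe safe
```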
The results show that, compared to the cement-only technique, the use of 1 vertical screw combined with bone cement to repair the defect (area of 12%, depth of 12 mm) resulted in a 32% reduction in the stress on the surface of the defect (anterior 22%, posterior 22%, medial 52%, lateral 21%), while the use of 1 oblique screw combined with bone cement to repair the same defect resulted in a 15% reduction (anterior 0%, posterior 0%, medial 30%, lateral 3%). Compared to the use of 1 oblique screw combined with bone cement to repair the defect (area of 12%, depth of 12 mm), the use of 1 vertical screw reduced the stresses on the surface of the defect by 20% (anterior 26%, posterior 26%, medial 31%, lateral 19%). In models with a 12% defect area and a 12-mm depth, the use of 2 screws compared to 1 vertical screw resulted in lower stresses on the surface of the defects; however, the stresses on the medial side were less than 0.1 MPa, which can lead to severe bone resorption. Finally, when comparing models with a 20% defect area and a 15-mm depth with models with a 12% defect area and a 12-mm depth, a greater defect range resulted in greater stress on the surface of the defect when the same number of screws was used.

Discussion

Medial tibial plateau defects are common in complex primary TKA, and for defects less than 10 mm, resection of the tibial plateau allows for complete removal of the defect without further procedures [22]. However, in deeper and larger lesions, tibial resection of more than 12 mm may damage ligamentous structures. Increased resection has also been observed to increase the stress on the proximal tibia [4,23], causing many other problems, such as the need for a thicker tibial insert and patellar joint complications [23].
Berend found that using a thicker tibial insert would not directly cause surgical failure, but increased tibial resection and ligament imbalance may result in an increased failure rate [24]. Thus, for defects deeper than 10 mm, other reconstruction methods need to be used. In this study, after making a horizontal resection of 11 mm, the defect depths were 12 and 15 mm; the authors chose these two defects based on data from 40 patients measured during TKA, which yielded good clinical outcomes over up to 10 years of follow-up. There are 5 basic reconstruction methods: tibial component downsizing and resection of uncapped proximal medial bone, the cement-only technique, the screw and cement technique, metal augmentation, and autologous bone grafting [5-7, 13, 25, 26]. Compared to the other methods, the screw and bone cement technique has several advantages: (1) compared to the bone cement-only method, the strength of the bone cement is greatly enhanced; (2) compared to bone grafting and metal augmentation, the screw and cement technique simplifies the operation, shortens the operative time, decreases the risk of infection, and reduces the use of additional implants; and (3) it is less expensive, and the effect is reliable. Although Brooks' in vitro biomechanical experiments reported that the use of the screw and bone cement technique in repairing defects greater than 5 mm was associated with potential problems [27], Ritter first applied the screw and cement technique in clinical practice and obtained satisfactory short-term results [11]. This team further proceeded with medium- and long-term follow-up and obtained satisfactory results [12,13]. In Brooks' [27] in vitro biomechanical experiments, tibial defects were rebuilt using bone cement only, bone cement combined with 2 screws, a stainless steel wedge, a Plexiglas wedge, and an integral metal custom-made component.
The best results were found for the integral metal custom-made component, followed by the metal wedge and Plexiglas wedge, and the worst results were observed for bone cement only. Bone cement combined with 2 screws showed only a small improvement over bone cement alone. However, no FE analysis has been performed to demonstrate these results. In this study, based on clinical experience, we generated two types of defects, which are often observed clinically, in the FE model of the tibial plateau. We then used different strategies to repair the defects, analysed the questions of screw number and insertion direction, and obtained valuable conclusions. The load application area (Figure 2) used was based on the conditions that occur in the late stance phase of gait, where the maximum joint reaction occurs [17], and was determined from the work of Villa, who evaluated contact locations using Fuji Prescale pressure-sensitive films and in vitro TKA models [16]. This phase of gait produces the highest stresses in the proximal bone and ligament. Loading was applied as a uniform pressure load to the selected surfaces of the bearing where the medial and lateral femoral condyles would make contact. The gastrocnemius muscle is the only active muscle in this late stance gait phase. As the gastrocnemius does not attach to any region of the proximal tibia, it was not necessary to include any ligaments or muscles in the models; its effect is represented by the applied joint reaction force. In this FE study, the load on the plateau was 2.2 times the body weight, and the patient's body weight was 60 kg. Based on the mechanical result of each model, we found that the use of more screws achieved lower stresses on the surface of the defect. However, more screws may cause stress shielding.
In the model with a 12% defect area and a 12-mm defect depth, the use of 2 or more screws caused stress shielding, and in the model with a 20% defect area and a 15-mm defect depth, the use of 3 or more screws caused stress shielding. However, in patients with a higher body weight, the load on the plateau will increase, and the same number of screws may not cause stress shielding. To test this conjecture, we increased the load to 4000 N. The results showed that the stresses at the same points, which were below the safety range under the 1294-N load, increased into the safety range. Furthermore, the body weight corresponding to 4000 N on the plateau is approximately 185 kg, and this weight is rare in Chinese patients. Therefore, it can be concluded that, in patients over 60 kg, the optimal number of screws will increase, but by no more than 1. In clinical practice, different patients have different body weights, tibial plateau sizes, and defect characteristics. This study does not cover all situations, and patient-specific FE models could be built to further elucidate the optimal screw number. In this study, we found that the vertical screw direction was superior to the oblique direction in terms of mechanical stability; therefore, the vertical screw direction is recommended in clinical practice. In this study, the diameter of the screws was 6.5 mm, and the upper surface of the screw head was level with the tibial plateau, touching the lower surface of the tibial component. These choices match our senior surgeon's clinical practice, which has achieved good long-term clinical outcomes. Different screw diameters, materials, and distances from the implant may also affect the stresses; because our finite element models were limited, we will study these questions in the future. We believe our study may provide surgical guidance to surgeons performing TKA for patients with tibial bone defects.
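The ~185 kg figure follows from inverting the same load relation, m = F / (2.2 g). A quick check (our sketch; g ≈ 9.8 m/s² is an assumption):

```python
# Invert load = multiplier * m * g to recover the body mass in kg.
# Assumption: g = 9.8 m/s^2, multiplier = 2.2 as in the study.
G = 9.8  # m/s^2


def body_mass_for_load(load_n: float, multiplier: float = 2.2) -> float:
    """Body mass implied by a given knee joint load."""
    return load_n / (multiplier * G)


mass = body_mass_for_load(4000.0)  # ~185.5 kg, matching the ~185 kg quoted
```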
This study did not consider all prosthesis types and every screw angle, but it can serve as a good reference in clinical practice. Further study will be conducted in the future.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

All of the authors declare no conflicts of interest.
Identity in Information Behaviour Research: A Conceptual Analysis

Using a conceptual analysis approach, this study explores how identity has been conceptualized in the information behaviour literature. Findings indicate that researchers have employed three main approaches when conceptualizing identity: identity as personal project, identity and social groups, and identity as self-presentation. The findings contribute to the identification of the key factors of information behaviour and suggest areas for future research.

Introduction

This study explores how identity has been conceptualized in the information behaviour literature. The study of identity in relation to information behaviour provides a way to explore individuals' and groups' understanding, assumptions, and implicit theories of information and how these relate to their information practices. Understanding how information behaviour researchers have conceptualized identity in past research will not only illuminate what is currently known about how people use information to shape their identities and, in turn, how their identities shape their interactions with information, but it will also suggest areas for future research. Therefore, the present study addresses the following research questions: 1) How have information behaviour researchers conceptualized identity? 2) In what ways have information behaviour researchers characterized the effect of identity on people's information practices?

Literature Review

Identity can be conceptualized in myriad ways. At its most basic level, identity refers to personal identity, traditionally conceived of as a "subjective individual achievement" (Wetherell 2010, 3). In essence, this understanding conceptualizes identity as that which provides people with their sense of self and a sense of continuity through their lives.
This understanding of personal identity, however, has shifted over the past 60 years from a stable and coherent achievement that teenagers undertake as they become adults to a mobile, flexible, and negotiated social accomplishment that occurs at all stages of one's life and in all contexts. In addition to understanding identity as a personal experience, identity is conceived of as a social occurrence linked to a person's group memberships, i.e., social identity. A person's social identity is linked to social categories, roles, and locations, such as gender, ethnicity, class, or nationality, and the stakes that people have in these categories. Other understandings of identity seek to complicate the apparent binary between personal and social identity. A common approach is to understand identity as a discursive construction, which frames identity as a subject position either provided to people through dominant social or institutional discourses or produced in localized social interactions. Using a conceptual analysis approach, this study explores how identity has been conceptualized in the information behaviour literature and suggests lines of inquiry for future research. There have been no comprehensive studies of identity in information behaviour research to date. Bates (2010), however, in a history of information behaviour research, did highlight the connection between the emergence of social identity as a focus of societal interest in the 1960s and 1970s and a shift in information behaviour research towards studying specific population groups. Although the influence of social identity on information behaviour was not the focus of these studies, social identity acted as a way to identify populations to study. Or, as Bates described it: "many members of the general public had been studied by their social identities - the poor, the elderly, etc. - there was a tendency to study information-related behaviour by looking at groups of these sorts" (2383).
Other scholars have argued in favour of particular approaches to information research and their ability to illuminate the connections between identity and information practices. Notably, Cox (2012) explored the research possibilities of a practice-based approach to information science. He argued that practice approaches to information could use social identity as a way to explore the role of information in larger social practices. This approach, he argued, would shift the focus of information research away from an individualist focus on information needs and expand the way the discipline understands the relational aspects of information and social practices.

Method

This study used a conceptual analysis approach to examine how identity has been conceptualized by information behaviour researchers. The goal of conceptual analyses is to "improve our understanding of the ways in which particular concepts are (or could be) used for communicating ideas" and to suggest productive lines of work for future research (Furner 2004, 233-234). Following Savolainen (2016), data for this study were collected by conducting a keyword search of core LIS databases: Library and Information Science Source (LISS), Library Literature & Information Science, and Library, Information Science, and Technology Abstracts (LISTA). The keywords used were "identity," "selfhood," "information behavior/behaviour," and "information practices." The initial search returned 176 articles. The abstract of each article was reviewed to determine whether it was appropriate for inclusion in the study. The inclusion criterion was whether the study explicitly examined identity or selfhood as it related to information behaviour or practices. After this initial review, 99 articles were removed from the study. A second review of the full text of each article was then conducted, after which an additional 42 articles were excluded. The final data set consisted of 18 articles.
The articles that remained in the study included theoretical explorations of new approaches to LIS research (Cox 2012) and empirical studies of information practices (see, for example, Sundin 2002), and consisted of conference papers and peer-reviewed articles. Data from each article were first recorded using a data abstraction sheet. For each article, the full citation, study aim, methodological approach, study design, definition or characterization of identity, and results (as they related to identity) were recorded. Whenever possible, direct quotes from each article were used when abstracting the data. This ensured that relevant portions of the text, at the paragraph and sentence level, were highlighted for additional analysis and close reading "in order to identify individual characterizations or definitions of [identity]" (Savolainen 2016, 53). Initially, Wetherell's (2010, 5) description of how identity has been "understood, defined, framed, and debated" in various scholarly fields and conversations was used to code the conceptualizations of identity in the data set. She identified four approaches to identity: identity as personal project; identity and social groups; fragmented discursive subjects; and intersectional, hybrid, and global identities. As the analysis progressed, an additional coding category was added: identity as self-presentation.

Results

Identity as personal project, identity and social groups, and identity as self-presentation were the three most common conceptualizations of identity. Studies that conceptualized identity as personal project focused on how individuals' information behaviours (including search and information avoidance) supported and reflected their self-perception and expression (Buchanan & Tuckerman 2016; Meyers 2009).
Conceptually, identity as personal project shared much in common with identity as self-presentation; however, while the information behaviours associated with personal identity were largely focused on learning more about, or seeking a reflection of, the self, the information behaviours associated with self-presentation were largely focused on using information as a way to display one's identity to others (AL Omar & Cox 2016; Bronstein 2013). When identity was connected to social groups, the focus was on how information behaviours supported one's self-definition in relation to others (Cox 2012; Sundin 2002) or how information behaviours acted as a tool for community building and boundary setting (Lingel & boyd 2013; Rothbauer 2004). The remaining two conceptualizations of identity (discursive and intersectional) were often acknowledged by the authors as potentially influencing information behaviours; however, they were rarely the focus of study. That said, Cox (2013) did offer a discourse-inspired approach to conceptualizing identity in his exploration of practice theory as an approach LIS scholars should adopt to understand the role information plays in social practices.

Conclusion

This study contributes to information behaviour research by illuminating how a key concept for the field of information studies is conceptualized. Scholars have noted that a lack of clarity about foundational concepts is a significant concern for information science as a discipline (Fleming-May 2014; Pilerot 2012; Savolainen 2016; Yu 2012). Core concepts, such as information need, information sharing, and information use, are vague, have "multiple meanings" (Savolainen 2016, 52), and are used in a "cursory manner with ad hoc connotations" (Yu 2012, 2). Clarifying the meaning of a concept can shed light on the theoretical foundations of a discipline and contribute to the field's empirical knowledge base (Fleming-May 2014).
Identity is a core concept for information behaviour research, especially as it relates to affective information activities and practices. Discursive and intersectional conceptualizations of identity offer exciting opportunities for information researchers to go beyond treating identity as an analytical lens for findings and include the information behaviours of people whose identities are not currently well captured in the information behaviour literature.
Experience in the Treatment of Comorbid Pathologies in Calves in the Neonatal Period

The article describes an integrated approach to the treatment of hypotrophic calves with trans-syndromal comorbid anemia. The clinical status of the calves was established, and the therapeutic effect of the combined use of carnitine chloride, Actovegin, and taurine on the metabolic status of calves with comorbid diseases in the neonatal period, the restoration of hematopoiesis and energy status, and the reduction of imbalance in immunological parameters was studied. As a result, a decrease in the sensitivity of the young animals to adverse environmental factors was found.

Introduction

The use of the production potential of dairy cattle breeding largely determines the economic efficiency of farms. The competitiveness of livestock breeding is established during the period of receiving and raising calves and is determined by their viability, health, growth, development, bioconversion of feed, maintenance, and treatment. The rearing of young animals should be organized so as to ensure normal growth and development and to lay the foundation for the manifestation of the genetically based productive capabilities of the animals at low labour costs and optimal feed consumption [1-4]. One of the most critical periods of ontogenesis is the neonatal period, the first month of life. This period is characterized by the greatest tension of metabolic processes and the greatest danger of failure of adaptive mechanisms in the face of a sharp change in the environment (the transition from intrauterine to extrauterine life). Therefore, the concept of the health of a newborn includes the concept of its optimal adaptation to changing environmental conditions and the evaluation of the results of adaptation mechanisms in the near (first month of life) and distant (subsequent life) perspective.
The intensification of animal husbandry, oriented mainly at increasing productivity without taking into account the changing needs of the body, first leads to an increase in the sensitivity of animals to adverse environmental factors and then to immunometabolic disorders and the development of disease. For modern veterinary medicine, forecasting the state of calves' health is an urgent task, since it allows risk groups for the development of certain diseases to be identified and, most importantly, preventive and therapeutic measures to be carried out in a timely manner. The birth of a calf is accompanied by the cessation of the single functional mother-placenta-fetus system and the launch of the system of adaptive mechanisms formed during intrauterine development. The preservation of the vital activity of the newborn and the subsequent establishment of the functions of its organs and systems depend on the completeness of the metabolic adaptation that occurs during the first minutes and hours of life. After the birth of a calf, its own mechanisms of metabolic regulation are switched on, and, first of all, the problem arises of supplying the energy for homeostasis from the newborn's own (endogenous) energy sources. The organism of newborns is highly malleable, and it is most advisable to form its resistance and adaptive abilities in the early stages of ontogenesis. If the conditions of keeping, feeding, and care do not meet the requirements of the organism, animals are forced to adapt to these conditions, primarily at the cost of increased energy expenditure. Metabolic processes are impaired, the calves' health deteriorates, and resistance decreases, which ultimately leads to the development of gastrointestinal diseases. This is especially true for newborn calves, which are poorly adapted to protect themselves from adverse environmental factors.
In addition, the development of the animal in the early stages of life largely determines the continued successful rearing of replacement and feeder young stock. Therefore, stimulating and strengthening the body's natural defences and maintaining them at a high level over the long term is the most important task of livestock breeders. Among the diseases characterized by metabolic disorders, a special place is occupied by malnutrition (hypotrophy) and anemia of young animals. Hypotrophy is a pathology of the fetus, manifested by a retardation or inhibition of its development and arising as a pathophysiological reaction to an insufficient supply of oxygen, nutrients, and biologically active substances to the fetus, or to their impaired digestibility. Hypotrophy reflects the concept of the "physiological immaturity" of newborns. This pathology causes significant economic damage to farms, characterized by a shortened useful life, loss of body weight, death and forced culling of animals, loss of breeding qualities, deterioration in the quality of animal meat, and a decrease in the return on feed. In newborn hypotrophic calves, redox processes are disturbed and oxygen starvation of the tissues develops. Under-oxidized products of intermediate metabolism enter the bloodstream, causing trophic disorders of various organs and systems, peripheral vascular spasms, and tachycardia. Congenital malnutrition in calves is accompanied by the development of secondary immune deficiency, which exacerbates age-related immune deficiency. A decrease in immune reactivity, in turn, inhibits erythropoiesis, exacerbating the course of malnutrition [2,3,5-21]. Hypochromic microcytic anemia is a disease characterized by impaired hemoglobin synthesis due to iron deficiency.
The disease is caused by a lack of iron in the body and is accompanied by impaired function of the blood-forming organs, decreased formation of red blood cells, a low hemoglobin content, and a metabolic disorder leading to growth retardation and a decrease in body resistance. Latent iron deficiency is one of the most common nutritionally dependent conditions in young farm animals. This is due to the increased need of a newborn organism for iron during periods of intensive growth. In addition, iron deficiency states develop under the influence of such unfavourable factors as low body weight at birth, as well as nutritional causes: an unbalanced diet, early feeding with non-adapted milk replacers, and the early introduction of roughage into the diet. It has been established that a significant role in the occurrence of iron deficiency in young calves is played by increased iron loss as a result of diapedetic micro blood loss through the intestine. The significance of each of the listed causes of iron deficiency anemia varies depending on the age period [1,3,22,23]. A comorbid disease profile with multisystem multiple organ failure is one of the most dangerous risks of disturbance in the harmonious development of the organism of young farm animals, and its likelihood increases as the adaptability of the technology decreases [4,24,25]. The aim of our research was to develop a new comprehensive treatment regimen for comorbid hypotrophy and anemia in newborn calves, based on the use of modern medicines.

Materials and Methods

Scientific production experiments were carried out in the Voronezh region at dairy cattle breeding complexes with a total livestock of 2,500 head. The material for the study was calves of the Holstein-Friesian breed aged from birth to 14 days. For the experiment, 3 groups of calves were formed.
Calves with signs of moderate prenatal malnutrition were divided into 2 groups, a control (intact) group and an experimental group of 6 animals each; all calves were of similar age and body weight and were kept, fed, and cared for under the same conditions. A group of clinically healthy calves was also formed to establish reference values. Newborn calves with acute infectious inflammatory diseases were excluded from the study. After calving, all calves were placed in an individual box with an infrared irradiator and on the second day of life were transferred to an individual pen. To restore the metabolic status of the animals in the experimental group, from the first day of life a 10% carnitine chloride solution, mixed with 200 ml of Ringer-Locke solution, was administered intravenously once daily at a dose of 100 mg/kg for 7 days; Actovegin at a dose of 5 mg/kg was used to stimulate the erythropoietic function of the red bone marrow and increase the reactivity of the body; and taurine at a dose of 100 mg/animal was administered orally together with colostrum and milk once a day for 10 days during the first 14 days of life to prevent stress maladaptation. Carnitine is a vitamin-like compound derived from an amino acid that enables the transfer of fatty acids across mitochondrial membranes, thereby improving their availability for beta-oxidation and trapping potentially toxic organic compounds; it is a transmembrane carrier of fatty acids [26]. Actovegin® is an antihypoxant. It is a hemoderivative obtained through dialysis and ultrafiltration. It has a positive effect on the transport and utilization of glucose and stimulates oxygen consumption (which stabilizes the plasma membranes of cells during ischemia and decreases the formation of lactate), thus having an antihypoxic effect.
It increases the concentrations of adenosine triphosphate, adenosine diphosphate, and phosphocreatine, as well as of the amino acids glutamate, aspartate, and gamma-aminobutyric acid [27]. Taurine (2-aminoethanesulfonic acid) is the end product of the metabolism of sulfur-containing amino acids (methionine, cysteine, homocysteine, cystine). The key role in the synthesis of taurine in animals is played by the enzyme cysteine sulfinate decarboxylase. In most cases, taurine is described as the main osmoregulator of the cell, a membrane protector, and an intracellular calcium regulator; it has the properties of an antioxidant and a detoxifier, is involved in the metabolism of fats and fat-soluble vitamins, and affects inflammatory processes [28,29]. The results were recorded on days 7-9 and 12-14 of the experiment. The calves of the control group were not treated. The first portion of colostrum was force-fed with the help of a drencher. Given the small volume and underdevelopment of the gastrointestinal tract, colostrum was fed in a reduced volume of 3 liters (the normal volume is 4 liters). To achieve the optimal amount of immunoglobulin and the formation of passive immunity, first-milking colostrum with a relative density of 1.067-1.068 g/cm³, determined using a colostrometer, was fed from cows in their second or third lactation. Frozen colostrum was stored in a colostrum bank. Before blood sampling, a clinical examination of the calves was carried out according to the generally accepted method. Blood for morphological and biochemical analysis was taken from the jugular vein (vena jugularis) of the studied newborn calves in the morning before the first colostrum feeding and, on the following days of the study, in the morning before feeding.
Laboratory analyses were performed at the Department of Therapy and Pharmacology of the FSBEI HE Voronezh State Agrarian University and at the All-Russian Research Veterinary Institute for Pathology, Pharmacology and Therapy. Clinical studies of the newborn calves were performed according to the plan generally accepted in veterinary medicine. In the blood, the numbers of red blood cells and white blood cells and the hemoglobin and hematocrit values were determined using an ABX Micros 60 hematology analyzer. TIBC, glucose, alkaline phosphatase, cholesterol, and triglycerides were determined by the chemical method using diagnostic kits on a PE-5300V spectrophotometer. The content of inorganic phosphorus, iron, and copper was determined on a Perkin Elmer 703 atomic absorption spectrophotometer. The serum bactericidal activity (SBA), serum lysozyme activity (SLA), and T- and B-lymphocyte counts were determined in accordance with the "Methodological recommendations for the assessment and correction of non-specific resistance of animals" [25].

Results and Discussion

According to the results, the newborn hypotrophic calves first attempted to stand independently after 4-6 hours; the sucking reflex appeared after 3-4 hours, with 77.0±3.0 sucking movements per minute. The response to a pinch showed decreased pain and tactile sensitivity, and lability of the nervous system was noted (sometimes apathetic, sometimes excited). The milk teeth were in some cases underdeveloped. The mucous membranes were mostly anemic. The eyeballs were often sunken, and the auricles and tail drooped noticeably. The body weight of the hypotrophic calves was 30.8±0.4 kg, the height at the withers 67.9±0.7 cm, the chest circumference behind the shoulder blades 74.0±1.3 cm, and the oblique body length 63.5±0.9 cm. The body temperature of newborn calves with antenatal hypotrophy was 38.1±0.4 °C.
The number of heart contractions per minute was 129.5±2.6, and the number of respiratory movements per minute was 61.5±1.8. Hypotrophic calves had reduced skin turgor; the hairline was disheveled and dull, with areas of alopecia, although dense and strong. The subcutaneous fat layer was thinned, first on the abdomen and then in other parts of the body. Meconium was unformed, yellow with a greenish tint. The presence of bilirubin in feces was established, which was also confirmed by a test for bile pigments. Microscopic examination of the feces of newborns revealed amylorrhea and steatorrhea; neutral fats (++++) were detected. According to the results of laboratory studies, by the seventh day the calves of the experimental group demonstrated an increase in the number of red blood cells by 14.7 % and in hemoglobin by 33.6 %. An increase in the studied indicators of micromineral metabolism was also recorded: serum iron increased by 29.9 % and the copper level by 22.1 %. However, the studied indicators of the microelement composition of the blood remained at the lower limit of the norm. In calves of the control group, an increase in these indicators was also noted, but it was insignificant and did not reach physiological parameters. The hematocrit level corresponded to values characteristic of anemia in the studied animals. By the fourteenth day of the study, the studied parameters in calves of the experimental group had returned to optimal values: the number of red blood cells and hemoglobin increased by 28.3 % and 61.1 %, respectively, the level of serum iron became higher by 57.4 %, and copper by 38.6 %. Comparison of the data before and after the experiment showed a decrease in the total iron-binding capacity of blood serum by 10.2 % by the seventh day; by the fourteenth day from the start of the study its level had decreased by 13.8 % and corresponded to the norm.
Hematocrit in the experimental calves recovered to physiological values. In calves of the control group, by the fourteenth day of the experiment the studied hematomorphological parameters had not reached reference values. With regard to cellular immunity in calves with congenital malnutrition, the treatment we used in the experimental group contributed to an increase in the number of leukocytes by 6.6 %, which corresponded to the physiological trend; the content of T- and B-lymphocytes increased by 72.7 % and 80.0 %, respectively. Among the indicators of the humoral link, SBA reached the norm by the end of the studies owing to an increase of 46.4 %, while SLA decreased by 38.5 %. On the fourteenth day of the experiment, SBA in the animals of the control group did not change significantly, and SLA decreased 2.7 times, which did not correspond to the reference values. By the fourteenth day, the amount of glucose in the blood of newborn calves of the experimental group had increased by 43.8 % (P0.05), without exceeding physiological values, whereas in calves of the control group this indicator increased by 7.3 % (P0.05) on the seventh day of life and by 9.9 % on the fourteenth day. The content of inorganic phosphorus in animals of the experimental group increased by 15.1 % (P<0.01) by the seventh day of the experiment and by a further 25.9 % (P0.05) by the fourteenth day, reaching physiological limits. In calves of the control group, the studied indicator became 2.7 % higher by the seventh day (P<0.05) and increased by 2.2 % (P0.05) by the fourteenth day, but did not reach reference values. Alkaline phosphatase in newborn calves of the experimental group had decreased by 50.5 % (P0.05) by the seventh day of the experiment, while on the fourteenth day there was a further decrease, by 58.6 % (P0.05), to standard values.
In the animals of the control group, a significant decrease in alkaline phosphatase was also noted at the height of the experiment and, by its completion, by 64.5 % (P0.05); however, the level remained higher than the background values. In the study of cholesterol content, the experimental calves demonstrated an increase of 45.8 % by the seventh day (P0.05) and of 47.8 % by the fourteenth day (P0.05), reaching physiological limits. In hypotrophic calves of the control group, the studied indicator had increased by 69.4 % (P0.05) by the fourteenth day of the study, but did not reach the values of physiologically mature calves. The content of triglycerides (TG) in calves of the experimental group increased by 56.0 % (P0.05) by the seventh day and by 40.5 % (P0.05) by the fifteenth day of the study, reaching reference values. In the control animals, this indicator increased by 31.6 % (P0.05) by the seventh day and by 29.6 % (P0.05) by the fifteenth day, but without reaching physiological parameters (see Table 1).

Conclusion

According to the results of our research, we suggest treating anemia as a syndrome comorbid with hypotrophy, pathogenetically related and mutually aggravating. As a result of testing the complex treatment regimen for calves with trans-syndromic diseases in the neonatal period, the functioning of the electron transport chain of mitochondria and the oxygen-transport function of the blood were restored, the imbalance in the activity of the immune system was corrected, the links of cellular and humoral status were optimized, and the main sources of energy used by the body for the diverse processes of energy metabolism were restored. Thus, the resistance of newborn calves to adverse environmental conditions increased, and linear growth and average daily weight gain normalized in accordance with breed standards.
Colonoscopy surveillance for high risk polyps does not always prevent colorectal cancer AIM To determine the frequency and risk factors for colorectal cancer (CRC) development among individuals with resected advanced adenoma (AA)/traditional serrated adenoma (TSA)/advanced sessile serrated adenoma (ASSA). METHODS Data were collected from medical records of 14663 subjects found to have AA, TSA, or ASSA at screening or surveillance colonoscopy. Patients with inflammatory bowel disease or a known genetic predisposition for CRC were excluded from the study. Factors associated with CRC developing after endoscopic management of high risk polyps were assessed in the 4610 such patients who had at least one surveillance colonoscopy within 10 years following the original polypectomy of the incident advanced polyp. RESULTS 84/4610 (1.8%) patients developed CRC at the polypectomy site within a median of 4.2 years (mean 4.89 years), and 1.2% (54/4610) developed CRC in a region distinct from the AA/TSA/ASSA resection site within a median of 5.1 years (mean 6.67 years). Approximately 30% (25/84) of patients who developed CRC at the AA/TSA/ASSA site and 27.8% (15/54) of patients who developed CRC at another site had colonoscopy at recommended surveillance intervals. Increasing age; polyp size; male sex; right-sided location; high degree of dysplasia; higher number of polyps resected; and piecemeal removal were associated with an increased risk for CRC development at the same site as the index polyp. Increasing age; right-sided location; higher number of polyps resected; and sessile endoscopic appearance of the index AA/TSA/ASSA were significantly associated with an increased risk for CRC development at a different site. CONCLUSION Recognition that CRC may develop following AA/TSA/ASSA removal is one step toward improving our practice efficiency and preventing a portion of CRC related morbidity and mortality.
Colonoscopy with removal of premalignant lesions has contributed to a recent decline in CRC incidence and the number of deaths from this disease; nevertheless, 5%-9% of patients diagnosed with CRC have undergone screening colonoscopy within the 3 years prior to detection of cancer [5]. Than et al [3] reported that colonoscopy has a 3.5% false negative rate for detection of CRC, since 17% of patients with newly diagnosed CRC had been investigated with bowel-specific investigations within the previous 3 years. Winawer et al [6] reported that 6% of patients with advanced adenomas (AA) are missed by colonoscopy. The development of CRC despite colonoscopy may reflect missed superficial depressed lesions (cancer or high risk adenoma), incompletely resected adenomas [7], de novo cancer [8], or delayed diagnosis because of failed biopsy detection [9,10]. Adenomatous polyps are the most common neoplastic finding at colonoscopy [11]. These neoplastic polyps have malignant potential and are classified histologically as villous, tubulovillous, or tubular adenomas [12]. Their malignant potential correlates with the type, size, and degree of dysplasia of the polyp. Advanced adenomas are those which are larger than 10 mm, have tubulovillous or villous architecture, or have high grade dysplasia [13]. The term "serrated adenoma" was introduced by Longacre et al [14] to describe polyps with dysplastic (adenomatous) cytology and serrated crypt architecture. Later, Torlakovic et al [15] coined the term sessile serrated adenoma to describe a different lesion, one with serrated crypts and characteristic architectural changes but usually no cytologic dysplasia. In order to avoid (or at least minimize) confusion, the Longacre lesion was renamed "traditional serrated adenoma." Despite the shared terminology, SSA and TSA are not necessarily related lesions [16].
After a few more terminology modifications, the current World Health Organization classification for serrated polyps is: hyperplastic polyp; sessile serrated polyp (SSP) without dysplasia; sessile serrated adenoma (SSA) with cytological dysplasia; and traditional serrated adenoma [17]. The risk of developing CRC from a serrated lesion correlates with larger size (> 10 mm), presence of dysplasia, and higher number of synchronous polyps. Surveillance is recommended by the United States Preventive Services Task Force (USPSTF) 3 years after removal of AA, TSA, or advanced SSA [11], while the European guidelines recommend surveillance at 1 year for high risk polyps (≥ 20 mm) but three years for intermediate risk polyps (10 mm to < 20 mm) [18]. Despite frequent colonoscopy, CRC has been shown to develop at an incidence rate of 1.2/1000 [19]. Though several large studies have shown the rates of post-colonoscopy CRC to be low [20], we were particularly interested in how often CRC develops in the highest risk patients, namely those who have AA, TSA, or advanced SSA.

Study population

In this IRB-approved nested case cohort study (IRB 622-00), we reviewed the colonoscopy database and pathology reports for patients who were seen at Mayo Clinic, Rochester, Minnesota for colonoscopy for any indication and found to have high-risk AA (villous architecture, high grade dysplasia, and/or size > 10 mm), TSA, or advanced SSA (any dysplasia and/or size > 10 mm), and then identified 4610 patients who had at least one surveillance exam following the index polypectomy for their AA/TSA/ASSA. Surveillance exams were performed only for follow up and were not done in response to clinical symptoms. Colonoscopy reports prior to the incident advanced polyp lesion were not available in the electronic medical record for most patients and thus were not included in this study.
We included all patients ≥ 18 years of age diagnosed with either AA between January 1990 and December 2010 or ASSA/TSA between January 2000 and December 2010. Patients were followed through August 2016. Patients with a diagnosis of a polyposis syndrome, inflammatory bowel disease, or a known genetic predisposition for CRC were excluded from the study. We identified all patients from this cohort who had developed CRC (n = 84) and then randomly selected 252 patients who had an AA, TSA, or ASSA at index colonoscopy but who had not developed CRC. Clinical and pathological features of the high-risk polyps (i.e., size, histology, site, degree of dysplasia, and time of index polypectomy), number and timing of surveillance colonoscopies, and post-polypectomy CRC characteristics (i.e., size, site, grade, and stage) were collected via chart abstraction for this cohort of patients. Subjects who had not developed post-polypectomy CRC were randomly selected from a pool of 10 patients matched to the post-polypectomy CRC group based on polyp histology and size (< or ≥ 20 mm), degree of dysplasia, and decade in which the index polyp was removed. ASSA was classified as being at higher risk for malignant transformation if the polyp was > 10 mm, had dysplasia, or was accompanied by a higher number of synchronous polyps (≥ 3 small polyps measuring < 10 mm or ≥ 2 large polyps measuring > 10 mm) [17]. Post-polypectomy CRC was classified as same-site cancer if the cancer arose in the region of the colon in which the high risk polyp had been removed. Since our surveillance intervals and the time from index AA/TSA/ASSA to cancer development extended beyond three years in some cases, we did not use the term interval cancer [21], but rather post-polypectomy cancer. We acknowledge that it is impossible to know whether a cancer that developed in the same region as the resected high risk polyp truly arose from it; however, since this was the polyp that had prompted surveillance, we would anticipate that it is the most likely source for the cancer.
Though these cases spanned from 1990 to 2010 for the AA and from 2000 to 2010 for the TSA and ASSA, we applied the most current USPSTF guidelines to all of these cases in order to assess the suitability of these recommendations for polyp management in these high risk patients [11]. We similarly assessed the cases using the European surveillance guidelines, which distinguish intermediate versus high risk AA/TSA/ASSA based on polyp size. A polyp was classified as persistent if polyp clearance was not achieved on any of the surveillance procedures, and as recurrent if the polyp had been successfully treated and not detected on at least one subsequent colonoscopy but recurred at the tattooed site of the original AA/TSA/ASSA.

Statistical analysis

The data are reported as mean (± SD), median (interquartile range, IQR), ranges, and categorical variables by counts and percentages as appropriate. We included only cancers occurring at least one year after polypectomy to minimize the risk of detection bias and misclassification. Patients with a past history of CRC were included in our study. Estimates of the rate of cancer for the entire cohort were determined using the Kaplan-Meier survival curve with log-rank test. To identify risk factors associated with development of cancer, we performed univariate time-to-event analysis with Cox proportional regression models that accounted for the case-cohort design by using case weights to account for the sampling frame and robust estimates of variance [22][23][24]. Variables with p < 0.05 on univariate analysis were included in a multivariate Cox proportional hazard analysis to identify independent risk factors associated with malignancy. Finally, penalized regression models were run using Lasso regression with 10-fold cross validation to provide robust estimates of the model coefficients, which should provide better predictions when used with external data [25]. All statistical analyses were conducted using JMP version 10 for Windows (SAS Institute Inc., Cary, NC, United States), SAS (version 9), or R (version 3.2.3).

RESULTS

AA/TSA/ASSA were detected in 14633 patients at incident colonoscopy. Of those, 1261 were excluded since they were found to have incident CRC at the time of AA/TSA/ASSA detection. After excluding patients who did not undergo a surveillance colonoscopy after this index polypectomy, 4610 patients were evaluated. Thirty-one of the 1390 (1.67%) TSA and ASSA patients were found to have subsequent CRC, and 107/3406 (3.14%) of the AA patients developed subsequent CRC (p = 0.11) (Figure 1).

Post-polypectomy CRC at the AA/TSA/ASSA resection site

Sixty-three patients with a history of AA (41 villous, 22 tubular), two with TSA, and 19 with ASSA (15 without dysplasia and 4 with dysplasia) who developed CRC at the same site as the index polyp were identified. These 84 patients were compared to a randomly selected cohort of 252 of the AA/TSA/ASSA patients who did not develop post-polypectomy CRC. Patients who developed CRC at the index polypectomy site were significantly older (47.6% vs 33.7%, p = 0.02); had larger index polyps (15.5% vs 7.1%, p = 0.02); had an increased number of synchronous polyps at the time of polypectomy (16.7% vs 8.3%, p = 0.03); and were more likely to have AA/TSA/ASSA in the right colon (75% vs 43%; p < 0.01) than the patients who did not develop post-polypectomy CRC. Patients with smaller polyps (> 10 mm and < 20 mm), which would be categorized as intermediate risk by the EU guidelines, were less likely to develop post-polypectomy CRC (p = 0.03). Other findings are shown in Table 1. The most common causes associated with post-polypectomy CRC development were non-adherence to the recommended surveillance interval (27.4%), incomplete resection of the high risk polyp (25.0%), and unknown causes (30%) (Supplementary Table 1).
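As an illustration only (the study's analyses were run in JMP, SAS, and R), the Kaplan-Meier product-limit estimate underlying the reported cancer-rate curves can be sketched in pure Python. The follow-up times and event indicators below are synthetic, not study data:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of survival S(t).

    times  -- follow-up in years from index polypectomy (synthetic here)
    events -- 1 if post-polypectomy CRC was observed, 0 if censored
    Returns a list of (event_time, S(t)) pairs at each distinct event time.
    """
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    survival, s, i = [], 1.0, 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = leaving = 0
        # group all subjects whose follow-up ends at time t
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            leaving += 1
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk  # multiply by conditional survival
            survival.append((t, s))
        at_risk -= leaving
    return survival
```

The log-rank comparison and case-cohort weighting described above are not reproduced in this sketch; it shows only how the cumulative cancer rate (1 - S(t)) for the cohort is built from censored follow-up data.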
Notably, the median time from the index polypectomy to post-polypectomy cancer development ranged from 0.7 years for patients with persistent or recurrent polyps at the index polypectomy site to 3.5 years for patients who developed CRC but had at least one negative surveillance colonoscopy after the index polypectomy. Patients who had their surveillance colonoscopy later than recommended, or who were advised by their healthcare providers to have follow up of their index AA/TSA/ASSA later than guideline recommendations, developed CRC at a median of 6 years after treatment for the index AA/TSA/ASSA (Supplementary Table 1, Table 2).

CRC at a site distinct from the index AA/TSA/ASSA

Forty-four patients with a history of AA (27 villous, 17 tubular), three with TSA, and seven with ASSA (four with dysplasia) later developed CRC at a site distinct from that of the incident AA/TSA/ASSA. One hundred and sixty-two patients who underwent polypectomy for AA/TSA/ASSA but did not later develop CRC (Table 3) were randomly selected to be the comparison group, matched to the cases based on polyp histology and degree of dysplasia.

DISCUSSION

This study showed that there is a persistent risk for post-polypectomy CRC despite surveillance colonoscopy for those polyps known to have the highest risk for malignant transformation. Even under watchful, directed colonoscopic surveillance and management of those polyps with the highest risk, 1.8% of patients developed post-polypectomy CRC at the index polyp site and 1.2% developed CRC at a site distinct from the index AA/TSA/ASSA. Villous and tubular adenomas were the most commonly observed histologies. ASSA/TSA were less common, possibly due to limited recognition of the serrated-cancer pathway during the time frame of this study, which has improved within the last decade. Nearly one-third of the patients who developed CRC at the polypectomy site did so despite following appropriate surveillance intervals.
This could be secondary to a high endoscopic miss rate or to rapidly progressing cancer development. We found that increasing age at the time of polypectomy, number of polyps, polyp size, location, degree of dysplasia, and piecemeal resection were associated with increased CRC risk. CRC developed at the index AA/TSA/ASSA polypectomy site in 1.8% (84/4610) of patients despite apparent initial complete resection of the high risk polyp. In 25% of the cases in which CRC developed at the index polypectomy site, the polyp had been found on surveillance colonoscopy either to have persisted or recurred, and it subsequently progressed to cancer. It is possible that some polyps were missed, since colonoscopy has a failure rate of 6%-12% in detecting adenomas > 10 mm [26,27]. Alternatively, this could be explained by rapid progression from adenoma to CRC or by de novo CRC formation [28]. In spite of at least one surveillance colonoscopy showing no endoscopic evidence of recurrence of the index polyp, CRC was identified at the index polypectomy site on a subsequent colonoscopy in nearly one third of those who developed same-site post-polypectomy CRC. Polypectomy techniques have been implicated as one potential risk factor for post-polypectomy CRC. Endoscopic piecemeal mucosal resection, which is known to be suboptimal for full resection of flat polyps in part because of the difficulty of completely identifying, and thus including in the resection, the tissue bordering the polyp, has been reported to be associated with a 12.2%-55% rate of recurrence at the polypectomy site [29][30][31][32][33]. Walsh et al studied 65 patients with large flat polyps treated with piecemeal resection with electrocautery snare. Nearly 14% of the polyps recurred after at least one negative intervening examination, and CRC developed in 17% of the patients after complete resection of the large polyp [30].
In another study, recurrence after endoscopic mucosal resection (EMR) with mucosal lift was observed in 7% of patients with flat polyps [34]. In our study, CRC occurred in 21 (27.6%) and 13 (17.1%) of the patients who received piecemeal resection and en bloc EMR with mucosal lift, respectively. Piecemeal snare excision, but not EMR with mucosal lift, was an independent risk factor for post-polypectomy CRC in this study; further prospective studies are needed to examine the prognostic utility of EMR with respect to CRC development. In our study, poor adherence to current surveillance guidelines appeared to contribute to 8.3% of the cases of post-colonoscopy CRC. A previous study showed that a delayed surveillance interval was associated with the development of CRC in almost 2% of patients post polypectomy for advanced adenoma [35]. Risk factors for delinquent surveillance included a colonoscopist having finished colonoscopy training prior to 1990, presently being in training, practicing in a non-academic setting, and performing a low lifetime number of colonoscopies [36]. In our study, we confirmed the finding by Robertson et al [37] that patients who are older at the time of polypectomy for AA are more likely to develop post-polypectomy CRC. Toll et al reported that CRC developed in 7% of patients with large polyps with high grade dysplasia over an average of 7 mo [38]. We also found that patients with right-sided AA/TSA/ASSA are more likely than those with left-sided high risk polyps to develop same-site post-polypectomy CRC, possibly because flat polyps are more likely to arise in the right side of the colon and are more easily missed [39]. Our study, like others, has implicated large (≥ 20 mm) polyps as particularly high risk and in need of a follow-up colonoscopy relatively soon after initial resection, because residual polyp can persist or recur and subsequently progress to CRC after polypectomy [40].
In this group of patients with persistent or recurrent high risk polyps, markers that predict whether a polyp needs to be removed with a colon resection to prevent CRC have yet to be identified. The clinical or molecular clues that distinguish the three quarters of patients with recurrent AA/TSA/ASSA who can be successfully treated with colonoscopic therapy from the one quarter of patients in whom the recurrent polyps will progress to cancer need to be expanded beyond the current features that declare a polyp "high risk". Our study highlights the risk of missing additional adenomas or cancers at a surveillance colonoscopy performed for follow up of an index AA/TSA/ASSA. Recognition that post-polypectomy CRC can happen at a site distinct from the index polypectomy, even in individuals undergoing more intensive surveillance, may be leveraged to improve the success rates of surveillance colonoscopy. Expanding the proceduralist's attention beyond evaluation of the target lesion, and utilizing each surveillance colonoscopy as an opportunity to perform a thorough examination of the entire colon, may decrease the unanticipated and undesired outcome of CRC developing in spite of repeated surveillance. Adenoma miss rates during colonoscopic surveillance have been reported to range from 6% to 27% [41]. Bressler et al [2] reported that the rates of new/missed colon cancer developing within 6-36 mo after colonoscopy were approximately 6.0% in the right colon, 5.5% in the transverse colon, 2.0% in the descending colon, and 2.3% in the rectosigmoid colon. Positive screening tests such as Cologuard™ could improve colonoscopy performance. Johnson et al found that endoscopists who were aware of the Cologuard™ results spent more time and found more hemorrhagic and precancerous polyps than blinded endoscopists [41].
Features other than colonoscopy adenoma detection and polypectomy skills may contribute to these post-polypectomy CRCs at either the index site or in other areas of the colon. Our study has several limitations. In addition to the retrospective nature of our study, we were not able to obtain data on all patients who did not develop CRC due to the large size of this cohort. Obtaining reliable data would have necessitated manual review of over 4000 patient medical records, not available in the electronic medical record, to confirm the colonoscopy and pathology data for surveillance colonoscopies done both at Mayo and at other healthcare centers. Therefore, the relatively small number of post-polypectomy CRC cases was compared to a randomly selected portion of patients who did not develop post-polypectomy CRC, rather than to the entire cancer-free cohort. Another limitation is the relatively low number of TSA or ASSA patients compared to those with AA. We did not account for other confounding factors associated with higher lifetime risks and mortality from CRC, such as the patient's BMI, smoking exposure, exercise, use of aspirin or NSAIDs, prior colonoscopy exams, or the adenoma detection rate of the performing colonoscopist [42]. This study shows that the applicability of current evidence-based surveillance guidelines to some patients with AA/TSA/ASSA is limited. There are insufficient data to provide explicit guidance for the follow up of polyps removed using specific treatments such as piecemeal endoscopic resection [11]. Current surveillance guidelines do not incorporate the impact of multiple high-risk features, such as the age of the patient or the risk that a large AA/TSA/ASSA is more recalcitrant, or at higher risk of progressing to cancer, if present in the right versus the left side of the colon.
Current USPSTF guidelines recommend a 3-year surveillance interval following polypectomy of an adenoma with high-grade dysplasia but do not account for other features [11,43]. European guidelines stratify intermediate risk polyps as having a lower risk than high risk polyps ≥ 20 mm, and recommend surveillance at 1 year for high risk polyps. Our finding that post-polypectomy CRC was significantly associated with high, but not intermediate, risk polyps as classified by the EU guidelines supports the need for a one-year surveillance colonoscopy for these larger polyps, currently not addressed in the USPSTF recommendations. Developing a risk score to optimize risk stratification of patients with AA/TSA/ASSA might result in better discrimination between low- and high-risk patients. A recent study developed a scoring system based on older age, male sex, adenoma number, size ≥ 10 mm, villous histology, and proximal location at index colonoscopy, which were found to be independent predictors for detecting AA/TSA/ASSA, but not cancer, at surveillance endoscopy [44]. Additional tools to risk stratify polyps would assist with making recommendations for surveillance, could identify tissue or molecular features that might be used to improve visualization of polyps, and could stratify the risks that a polyp might recur or progress to cancer. To our knowledge, this is the first study to determine risk factors for incident CRC at the same site or at another site in the colon following polypectomy of advanced lesions. Current guidelines are still limited in identifying such patients. Our study supports that of Atkin et al [45], who recently reported that the incidence of CRC was higher in patients with suboptimal quality colonoscopy, proximal polyps, or large or high-grade polyps at baseline.
Patients with increasing age and a history of large, multiple, highly dysplastic, right-sided, and difficult to remove adenomas requiring piecemeal resection are a high-risk population for the development of CRC at the same site. Increasing age and the presence of flat and/or right-sided adenomas increased the risk of CRC at another site. A diagnosis of CRC soon after complete colonoscopy may imply the need for shortened surveillance intervals. Understanding risk factors for subsequent CRC development and developing molecular markers predictive of progression to cancer are important for individualizing surveillance recommendations following adenoma removal, since colonoscopy is not a 100% sensitive tool for the identification or prevention of CRC in this population. Better stratification of a polyp's risk of recurrence and subsequent CRC will require further research to identify molecular or other features to guide more individualized polyp management.

ARTICLE HIGHLIGHTS

Research background

Screening colonoscopy has a 3.5% false negative rate for detection of colorectal cancer (CRC): 17% of patients diagnosed with CRC had undergone bowel-specific investigations within the previous 3 years. However, no large studies have assessed the frequency and risk factors for CRC development among individuals following advanced adenoma (AA)/traditional serrated adenoma (TSA)/advanced sessile serrated adenoma (ASSA) removal. Recognition of this group at high risk for interval CRC is one step toward preventing the morbidity and mortality associated with CRC development.

Research motivation

Recognition that CRC could develop following AA/TSA/ASSA removal despite adherence to guidelines is one step toward improving our practice efficiency and preventing a portion of CRC related morbidity and mortality.
Understanding risk factors and developing molecular markers that predict progression may become important in order to individualize surveillance recommendations and recognize those AA/TSA/ASSA patients at high risk for interval CRC.

Research objectives

To report the frequency of interval CRC development following high-risk polypectomy, at the polypectomy site and at a site distinct from it, and to identify risk factors associated with the development of cancer. Realizing these objectives is critical for future research, since current evidence-based surveillance guidelines are limited in predicting CRC risk in these patients.

Research methods

We reviewed the medical records of all adult patients (≥ 18 years of age) who underwent colonoscopy and were found to have high-risk polyps (either AA between January 1990 and December 2010 or ASSA/TSA between January 2000 and December 2010) and identified 4610 patients who had at least one follow-up surveillance colonoscopy following polypectomy. We excluded patients with IBD, polyposis syndromes, or other genetic syndromes predisposing to CRC. Patients with a past history of CRC were not excluded from our study. From this cohort, we identified 84 patients who had developed CRC and matched them to 252 patients who had not developed CRC based on polyp histology and size (< or ≥ 20 mm), degree of dysplasia, and decade in which the index polyp was removed. Data abstracted included clinical and pathological features of the high-risk polyps, number and timing of surveillance colonoscopies, and post-polypectomy CRC. The data are reported as mean (± SD), median (interquartile range, IQR), ranges, and categorical variables by counts and percentages as appropriate. Estimates of the rate of cancer for the entire cohort were determined using the Kaplan-Meier survival curve with log-rank test.
We performed univariate time-to-event analysis with Cox proportional regression models to identify risk factors associated with development of cancer. Variables with p < 0.05 on univariate analysis were included in a multivariate Cox proportional hazard analysis to identify independent risk factors associated with malignancy. Finally, penalized regression models were run using Lasso regression, with 10-fold cross validation, to provide robust estimates of the model coefficients, which should provide better predictions when used with external data. All statistical analyses were conducted using JMP version 10 for Windows (SAS Institute Inc., Cary, NC, United States), SAS (version 9) or R (version 3.2.3).

Research results
Despite colonoscopic surveillance and management of high-risk polyps, 1.8% of patients developed post-polypectomy CRC at the index polyp site and 1.2% developed CRC at a site distinct from the index AA/TSA/ASSA. About one-third of patients developed CRC at the polypectomy site despite following appropriate surveillance intervals. Increasing age at the time of polypectomy, number of polyps, polyp size, location, degree of dysplasia, and piecemeal resection were associated with increased CRC risk. Current surveillance guidelines are not sufficient since they do not take into account the impact of multiple high-risk features of high-risk polyps for CRC development. This study also highlights the risk of missing additional adenomas or cancers at a surveillance colonoscopy for follow-up of an index AA/TSA/ASSA. Resection technique (piecemeal snare excision) was an independent risk factor for post-polypectomy CRC in this study, but further prospective studies are needed to examine the prognostic utility of EMR with CRC development.

Research conclusions
1.8% of patients developed post-polypectomy CRC at the index polyp site and 1.2% developed CRC at a site distinct from the index AA/TSA/ASSA despite surveillance colonoscopy.
Surveillance colonoscopy for high-risk polyps does not always prevent CRC development. Current surveillance guidelines are not sufficient in predicting CRC risk in some patients; surveillance guidelines should incorporate the impact of multiple high-risk features of resected polyps. Interval CRC develops after high-risk polyp resection despite enrolment in a surveillance program. We compared patients who had developed interval CRC after high-risk polyp resection at the same site or at a different site, matched to patients who had not developed CRC, to identify risk factors associated with CRC development. Patients with increasing age and a history of large, multiple, highly dysplastic, right-sided, and difficult-to-remove adenomas requiring piecemeal resection are a high-risk population for the development of CRC at the same site. Increasing age and the presence of flat and/or right-sided adenomas increased the risk of CRC at another site. Colonoscopy is not a 100% sensitive tool for the identification or prevention of CRC. Shortened surveillance intervals may be needed post-polypectomy in some patients with multiple high-risk features.

Research perspectives
The interval CRC rate after high-risk polyp resection is low, yet CRC does develop in spite of post-polypectomy surveillance. We require further research to identify molecular or other features to guide more individualized polyp management, and to study the molecular features of patients who developed CRC at the polypectomy site despite following appropriate surveillance intervals.
Refractory Pit1 plurihormonal tumours and thyrotroph adenomas

Pit-1 tumours are derived from neoplastic cells of either somatotroph, lactotroph or thyrotroph cell lineages, but there are also distinct mixed tumours and plurihormonal tumours within this category, as described in the 2022 edition of the WHO classification of pituitary tumours. Plurihormonal tumours and thyrotroph adenomas are transcriptionally similar and are grouped together for discussion in this review, although it is clear that an immature type of plurihormonal tumour exists which is more commonly associated with refractory disease. Management of residual or recurrent disease should follow that of other aggressive pituitary tumours, although a trial of somatostatin analogue therapy is certainly warranted before considering temozolomide therapy.

Introduction
The Pit-1 lineage of pituitary tumours has evolved in the 2022 WHO classification to encompass tumours derived not only from mature somatotroph, lactotroph and thyrotroph cells, but recognising distinct plurihormonal types and tumours originating from precursor cells (Table 1) [1]. From a prognostic standpoint, refining the classification of Pit-1 lineage tumours is important as behaviour varies by type.

Pit-1 plurihormonal tumours
The nomenclature "Pit-1 positive plurihormonal tumour" was coined in the WHO 2017 classification as an alternative to the previously known "Silent subtype 3 adenoma". However, the WHO 2022 edition refined this into 2 separate types: the "Immature Pit-1 lineage" (IPL) and "Mature Pit-1 lineage" (MPL) tumours. Low MGMT expression was reported in 1 small study among 18/23 (78%) of "silent subtype 3" tumours, which suggests potential efficacy of treatment with the alkylating agent temozolomide [5].
Thyrotroph adenomas
Thyroid stimulating hormone (TSH)-secreting adenomas (thyrotroph adenomas (TA) or "TSHomas") account for 2-3% of pituitary adenomas, although higher prevalence rates in the past 2 decades are attributed to a rise in detection of microadenomas [6-8]. TAs may present with isolated TSH elevation and consequent hyperthyroidism, but in around 40% of cases there is co-secretion of other hormones, predominantly growth hormone (GH) and prolactin (PRL) (Fig. 1c, d) [8]. Plurihormonal expression, as assessed by IHC, may be even more common, up to 83% in one study [7]. In fact, transcriptomic analysis demonstrates that thyrotroph and plurihormonal Pit-1 positive adenomas cluster with sparsely granulated somatotroph adenomas, sharing a distinct gene expression profile different from lactotroph and somatotroph adenomas [9]. This suggests TA and Pit-1 plurihormonal tumours are more closely related than other tumours derived from the Pit-1 lineage. However, this study did not differentiate between mature and immature forms of Pit-1 plurihormonal tumours. In a recent French series of 20 TAs, there was just 1 tumour with Ki67 > 3% and none were classified as Grade 2b, which are known to recur at a significantly higher rate, noting specifically that the plurihormonal tumours included in their cohort were not of the "poorly differentiated" Pit-1 subtype [21,22]. There appears to be a male predilection among aggressive TAs, with 2/3 PCs and all 5 across ESE surveys being male, compared with a 1.07 F:M ratio among all published TA cases [10]. This sex difference was also described in a large Chinese cohort of 111 TAs, in which 10/12 co-secreting tumours occurred in males compared with 58% women in the pure TA group, with co-secretors demonstrating significantly larger tumours and higher rates of cavernous sinus invasion [6].
Among the described aggressive TAs, including the 3 PC cases, there is a high proportion of "silent" TAs, which frequently become clinically functioning, heralding more aggressive behaviour, and these may well represent IPL [11,13-15]. This highlights the importance of detailed IHC analysis, including Pit-1, ERα, GATA3 and low molecular weight cytokeratin, to accurately distinguish mature TA from MPL and IPL. Whether truly "silent" pure TAs have a worse outcome remains unclear [8]. It has also been suggested that TAs may become more aggressive following thyroid ablation (similar to the Nelson's syndrome phenomenon for corticotroph tumours), either from surgery or radioiodine, which often results from incorrect diagnosis of TA as primary thyroid disease [23,24]. In one PC case radioiodine thyroid ablation was administered because of poor compliance with antithyroid medication, with development of metastases 10 months later [14]. However, while invasive macroadenomas have been described in this setting, so have microadenomas, and in an NIH cohort there was no difference in tumour size between patients with a treated thyroid and those without [24-26]. Furthermore, there are often years between thyroid ablation and diagnosis of TA, suggesting the natural history of tumour development may not have been perturbed. Somatic mutation of TRβ and aberrant iodothyronine deiodinase enzyme expression have been linked with the resistance to thyroid hormone feedback of TSH regulation within TA, but have not been associated with aggressive behaviour [27,28]. In fact, little is known about the molecular mechanisms driving TA development: in a whole exome sequencing study of 8 TAs, no recurrent mutations were found [29]. Transsphenoidal pituitary surgery remains first-line treatment for TAs, as described in European Thyroid Association guidelines published in 2013 [30].
Overall, among 535 reported cases, surgical remission rates are 69.7%, higher among microadenomas (87%) than macroadenomas (49%) [8]. Cavernous sinus invasion is the strongest predictor of surgical outcome, with 75% of Knosp Grade 3 versus 0% of Knosp Grade 4 tumours achieving remission in 1 modern study [7]. Preoperative use of SSA therapy does not appear to improve remission rates, although it may be used to prevent peri-operative thyroid storm [8,31]. Recurrence following gross total resection is uncommon in the first 3 years, particularly if there is a low TSH in the 1st week postoperatively [7,21,32]. Further surgery for recurrent disease is associated with lower gross total resection rates (28.57% versus 71.42% for primary surgery in 1 study) [33]. In cases not achieving remission or with recurrence, SSA treatment is effective and should be considered first-line medical therapy following incomplete surgery. In a meta-analysis of 536 TAs, biochemical remission was seen in 76% of cases under SSA therapy, with other cohorts demonstrating significant tumour shrinkage in up to 50% but just an isolated case of complete remission [8,23]. SSTR5 expression may predict long-term response to SSA therapy, with one case of aggressive behaviour developing in the context of LOH involving the SSTR5 gene [34-36]. Radiotherapy (RT) may be used as second-line therapy, but now more frequently in the setting of SSA resistance or concern about long-term SSA use, with total thyroidectomy only indicated for life-threatening hyperthyroidism when pituitary surgery is not curative [10]. In a study of 19 macroTAs, biochemical remission was seen in 21% up to 2 years after RT, with 37% still on medical therapy at last follow-up [31]. All patients in whom tumour shrinkage was evident received radiosurgery (rather than fractionated RT), with complete remission in 1 patient.
In those resistant to SSA, there may be utility in trialling dopamine agonist therapy, with a few cases demonstrating response, but occasional paradoxical increases in TSH have also been seen [25,37]. Efficacy of temozolomide therapy in the setting of progressive disease despite SSA and RT in TA remains unclear based on limited cases. Of 6 published cases (5 APT, 1 PC), noting 5 of these were "silent", there was just 1 case of partial remission (41% tumour reduction), with 3 demonstrating stable disease and the PC progressing [11,15,38]. In the 3 cases where MGMT IHC was performed, low expression was seen in one case of stable disease, and intermediate expression in a case with partial response and another with stable disease.

Conclusion
Pit-1 plurihormonal tumours comprise both mature (MPL) and immature (IPL) types, now recognised in the WHO 2022 classification. These tumours have gene expression profiles that are closely aligned with TA, although IPL more closely resembles the previously known silent subtype 3 adenoma and may account for the poorer prognosis often attributed to these Pit-1 lineage tumours. In cases with residual or recurrent disease following surgery, a trial of SSA is warranted and RT may be effective. Low MGMT expression may be seen more frequently in IPL; data on temozolomide efficacy are limited, but it should be first-line chemotherapy in the absence of other known effective therapies.

Authors' contributions Both authors contributed to the writing and editing of this manuscript.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions.
Data Availability Not applicable.
Declarations
Ethical approval Not applicable.
Competing Interest The authors declare no competing interests.
Coordination in networks for improved mental health service

Introduction
Well-organised clinical cooperation between health and social services has been difficult to achieve in Sweden, as in other countries. This paper presents an empirical study of a mental health coordination network in one area of Stockholm. The aim was to describe the development and nature of coordination within a mental health and social care consortium and to assess the impact on care processes and client outcomes.

Method
Data was gathered through interviews with 'joint coordinators' (n=6) from three rehabilitation units. The interviews focused on coordination activities aimed at supporting the clients' needs and investigated how the joint coordinators acted according to the consortium's holistic approach. Data from The Camberwell Assessment of Need (CAN-S) showing clients' satisfaction was used to assess one set of outcomes (n=1262).

Results
The findings revealed different coordination activities and factors both helping and hindering the network coordination activities. One helpful factor was the evident history of local and personal informal cooperation and shared responsibilities. Unclear roles and routines hindered cooperation.

Conclusions
This contribution is an empirical example of, and a model for, organisations establishing structures for network coordination. One lesson for current policy about integrated health care is to adapt and implement joint coordinators where full structural integration is not possible. Another lesson, based on the idea of patient quality through coordinated care, is specifically to adapt the work of the local addiction treatment and preventive team (ATPT), an independent special team in the psychiatric outpatient care that provides consultation and support to the units and serves psychotic clients with addictive problems.
Introduction and problem statement
Poorly linked health and social care services for mental health clients have been reported in many countries, and different approaches for better coordination are being pursued [1]. 'Integrated care' has become a component of health and social care reform across Europe, defined as "bringing together inputs, delivery, management and organisation of services related to diagnosis, treatment and care, rehabilitation and health promotion" [2, p. 7]. However, evidence indicates that there is a gap between policy intent and practical application [3]. Putting models of integrated care into practice is challenging, and progress toward integrated care has been limited. 'Under-coordination' has been shown to increase risks, adverse events and costs [4]. Some of the failings are related to unclear responsibilities for the patient and their problems, which result in information loss as the patient navigates the system [3]. Other failings are related to poor communication with the patient and between health and social care providers, treating patients for one condition without recognising other needs or conditions, thereby undermining the overall effectiveness of treatment [3]. Efforts to describe the fragmentation problem and formulate solutions seem complex, partly due to a lack of shared definitions of terms like coordination and continuity of care. A multidisciplinary review demonstrates that concepts like coordination of care, continuum of care, discharge planning, case management, integration of services, and seamless care are frequently used synonymously [5]. More recently, integration has been described as an elastic term [6,7], a circumstance that has implications for both patient safety and continuity of care, complicating evaluation efforts and constructive communication.
To further illustrate the complexity, integration is often pictured along a continuum of inter-organisational relations, extending from complete autonomy of organisations through intermediate forms of consultation and consolidation to a merger of organisations [7,8]. In parallel, distinctions are also made among linkage, coordination and full integration, where linkage allows individuals with mild to moderate or new disabilities to be cared for appropriately in systems that serve the whole population, without having to rely on outside systems for special relationships [9]. At the second level, coordination refers to explicit structures and individual managers installed to coordinate benefits and care across acute and other systems. In comparison, coordination is a more structured form of integration than linkage, but it still operates largely through the separate structures of current systems. Finally, full integration creates new programs or units where resources from multiple systems are pooled. Well-organised cooperation between health and social services has been difficult to achieve in Sweden, as in other countries. Within mental health care, where flexible, personalised, and seamless care is needed, clients are regularly seen by several professionals in a wide variety of organisations and sites, which often causes fragmentation of care and gaps in the continuity of care. Case management is often described as a method for coordination, integration and allocation of resources for individualised care for mental health clients [10,11]. Case management is well established as a major component of psychiatric treatment in most Western countries and has been for up to 20 years in some areas [10].
Coordination in networks is described as a structured type of integration operating largely through existing organisational units, aimed at coordinating various health services, sharing clinical information, and managing the transition of clients between care units [12]. Network structures include, but also reach beyond, linkages, coordination, or task force action. Unlike networks, in which people are only loosely linked to each other, in a network structure people must actively work together to accomplish what they recognize as a problem or issue of mutual concern [13]. As a basic assumption, organisational network structures alone are not sufficient to produce integrated practice, but still, well-organised coordination of care may help to improve care quality, patient safety, health system efficiency, and patient satisfaction [1]. Today, the relationships in mental health care are typically established with a team rather than a single provider, and coordination often extends to social services such as housing and daytime activities, where care coordinators are appointed to facilitate both health and social services [5]. Identified as a unique feature, still topical in mental health care, is the continuity of contacts, where the care team maintains contact with clients, monitors their progress, and facilitates access to services [14]. The aim of this study was to document and describe a well-established coordination structure within a mental health and social care consortium, but also to explore this structure's impact on care organisation and client outcomes.

Research questions
A review of the research identifies a need to further increase our knowledge about how to economically and resourcefully organise coordination networks for improved mental health services, and to identify which factors are helping and hindering. To meet that objective, an action research approach was applied [15,16]. Action research is well suited to help solve real-life problems at hand.
In order to meet the problem-solving intention, action research should encompass a conjunction of research, action and democratic participation [17]. One of the 12 case studies covered the Södertälje mental health and social care consortium.

The setting for the study: The Södertälje mental health and social care consortium
The Södertälje mental health and social care consortium is a cooperative model involving a county psychiatry clinic and the municipal social services and sheltered housing and rehabilitation units. Since 1996 the consortium has made major changes to provide better care across unit boundaries to chronic mental health clients. One of the key changes made to develop the cooperative model was the formation, in 1996, of a joint steering group with representatives from both the county psychiatry clinic and the municipal social services. Another change was the implementation of standardised assessments and follow-up of individual needs and service outcomes using The Camberwell Assessment of Need (CAN) scale. The assessment scale, introduced in 1997, is a 22-item measure for assessment of health and social needs of people with mental health problems [18,19]. A third key change was the introduction of 'joint coordinators' from both the county psychiatry clinic and the local municipal social services. The joint coordinators, based in the same office, aim at shared coordination for each client. The main actions to bring about the innovation content changes described above were:
• actions to formulate a shared vision for the service,
• actions to prepare a plan and present this to different local and county committees,
• actions to apply for and use national capital finance available for mental health developments,
• actions to build and start services at three shared rehabilitation units.
Today, the majority of the clients within the consortium are diagnosed with schizophrenia and a few are diagnosed with schizoaffective psychosis or passing psychosis.
A small number of clients have bipolar disorders and functional disorders. The core of the consortium consists of three daytime rehabilitation units. Both the county psychiatry clinic and the municipal social services share a holistic approach to clients' needs. The initial phase of the case study investigated the origin of the consortium, explicitly the basic ideas and actions that guided the local change agents' first steps in the development work. Structured interviews with key persons (n=10) at various organisational levels helped to reconstruct the program theory and show important changes and factors both helping and hindering the continuous development work.

Coordination in networks for improved mental health service
In Södertälje, each client within the consortium has one coordinator from each service. The joint coordinators have central tasks in helping chronic mental health clients to recover, for example through assessments of needs, which are an essential basis for the establishment of rehabilitation plans. Nurses, occupational therapists and rehabilitation assistants primarily hold the role of coordinator. Central to this case, the mental health coordination is strongly characterised by activities where joint coordinators are appointed to facilitate both mental health and social services. Since medical and social rehabilitation often overlap within the mental health consortium, staff activities are organised in networks rather than conventional client pathways [20,21]. This specific form of integration model includes both seamless care arrangements and health care networks, and shares some elements both with assertive community treatment [22] and case management [23,24].

Phase B: Details of clinical coordination
Method
The case study design
A multiple-case study approach [25] was applied to investigate the joint coordinators' prerequisites to take action and provide care according to the consortium's holistic approach to client needs.
The benefit of multiple cases was considered and replication logic was followed [25]. A series of structured interviews with joint coordinators from each of the three rehabilitation units was performed in 2009. The aim of the interviews was to explore the joint coordinators' views on current conditions and their prerequisites to take action according to the consortium's idea of prioritising the clients' needs. CAN data reflecting the clients' satisfaction with help received from both professional services and relatives was applied as an outcome indicator for the joint coordinators' work on integrated mental health care.

Selection of coordinators
The first-line managers at each rehabilitation unit administered the selection of coordinators, who were selected on the basis of their practical ability to participate in the interviews. A total of six joint coordinators across the three rehabilitation units were sampled. Four of these were senior coordinators (>5 years experience) and two had shorter experience in the role (1-5 years). The variation was considered a strength, given that the purpose of the study was to provide a broad description of the coordinators' views on their current conditions.

Interview protocol
The design of the interview protocol was based on five fundamental areas of need, defined as: daytime activities, psychotic symptoms, contact with authorities and financial issues, interaction with family and relatives, and drug and alcohol. These themes were identified as fundamental to the clients' well-being and were also included as separate items in the CAN-scale. The interviews aimed at exploring current network-based interactions but also at identifying factors both helping and hindering the current work. Minutes of meetings and documents describing the development work, gathered throughout the case study, helped to structure the interview protocol.
The protocol and the thematic list of central areas of need were assessed and approved by a reference group established to support the researchers' work.

Interview procedure
Altogether, three pair interviews with six joint coordinators were completed at the rehabilitation units. Each interview lasted between 60 and 90 minutes. All coordinators authorised the researcher (JH) to record the interview session with a digital recorder. In addition, the researcher made written notes during the session. All coordinators were informed about the purpose of the task and the researcher's assurance of integrity and confidentiality (informed consent).

Data analysis
All interviews were transcribed verbatim and analysed by procedures following basic content analysis [26-28]. Interview data were then structured into categories following the five areas of need. Descriptive networks illustrating direct and indirect planning activities were produced to represent the coordinated care process. Factors described as helping and hindering the coordination were identified. Based on common themes and emergent patterns in the interview data, quotations were selected and translated from Swedish to English. Conclusions were then formulated and reported back to the coordinators via the study reference group. Based on the main themes in the interview protocol, five corresponding items in the CAN-scale were identified and selected as an outcome measure reflecting the clients' experience of the Södertälje mental health and social care consortium. The items were: daytime activities, medical supervision, money, interaction with family and relatives, and drug and alcohol. Group-level data on clients' self-assessments covering the years 1997/98, 2002, 2004, 2006 and 2008 were applied as outcome indicators for integrated mental health care. The analysis included all available self-assessments made by clients in the mental health consortium during the period (n=1262).
Variables like gender, age, previous admissions and length of contact with services were deliberately omitted from the analysis. The analysis focused on the clients' self-assessments regarding their satisfaction with help received from the joint coordinators. All documents were archived in a case study database together with transcribed interviews, minutes of meetings and case study notes.

Ethical considerations
This study was ethically approved by the regional ethical review board in Stockholm at Karolinska Institute, Sweden. In addition, all program activities described were approved by the participating organisation and the data gathering followed The American Psychological Association's ethical principles and code of conduct.

Results
The results section begins with a description, built from the interviews with the joint coordinators, of planning activities performed within the consortium. The interview findings are organised in five themes (A-E). Then findings are presented about the clients' self-assessments regarding satisfaction with help received from the coordinators.

Interviews with joint coordinators
Theme A: Daytime activities. The coordinators primarily mentioned contacts with practice planners, the preparatory group, employers and the Swedish Social Insurance Agency as central for coordination of daytime activities. Contacts with physicians and relatives were only mentioned on a few occasions. Factors helping the current daytime activities included the practice planner, which was described as a central function by the coordinators at both unit A and C: "Well yes, I do visits to workplaces. Sometimes on my own, sometimes together with the practice planner, who is the one identifying and customising the training place. We often ask the client what they find interesting and motivating. For example, one of my clients is interested in animals, so recently we arranged a training place in a nearby pet shop.
Our practice planner is really good at finding good matches." Continuous assessment of needs using the CAN-scale was mentioned at unit B. Cooperation with a nearby municipality was described as problematic by unit A. A lack of daytime activities for elderly clients (>65 years) was addressed as a barrier to the coordination work at unit B. Unclear communication with the Swedish Social Insurance Agency was mentioned by unit C.

Theme B: Psychotic symptoms. Regarding both direct and indirect planning activities related to psychotic symptoms, the coordinators primarily mentioned interactions with physicians, social workers and staff at the addiction treatment and preventive team (ATPT), an independent special team in the psychiatric outpatient care that provides consultation and support to the units and serves psychotic clients with addictive problems. Contacts with relatives were mentioned on a few occasions. Flexible planning routines were mentioned as a strength at unit A. Continuous contact with clients was described as a factor helping the coordination work at unit B. Shared responsibilities and joint coordinators were mentioned as strengths at unit C. Regarding barriers to the coordination work, unclear roles were mentioned at unit A. Unit B did not describe any current condition hindering the coordination work, but unit C did comment on the limited access to medical records: "That is a problem. I work in the council next to staff from the municipality and I do have access to our physicians' record notes. But my colleague doesn't have access to the same medical record system. I do think we should have a shared system because the risk of errors and mistakes will then be much smaller. Today, I think it is important for me to inform my colleague about our clients' medical status to avoid contacts with aggressive clients."

Theme C: Authorities and financial issues.
The coordinators primarily mentioned contacts with lawyers and the Swedish Social Insurance Agency as centrally related to contact with authorities and help with financial issues. Contacts with physicians and relatives were only mentioned on a few occasions. Factors helping current coordination activities mainly covered broad contacts within the society, here exemplified by the coordinators at unit A: "We keep in touch and communicate with various authorities like lawyers for debt collection, the county administrative court and even the district court. All contacts start from our clients' needs. Sometimes this is problematic due to unclear roles and boundaries. It is not always clear what to do because we have our tentacles in so many places. There is no clear-cut boundary between Stockholm council and the municipality and sometimes one has to stop and ask if this really is within my area of competence". Preventive measures and personal skills to identify early signals indicating financial problems for clients were described as strengths at unit C. Inflexible authorities and unclear roles were addressed as a barrier to the coordination at unit A. Work overload on legal representatives was described as a hindrance at unit B. No barriers were mentioned at unit C. Theme D: Interaction with family and relatives. The coordinators primarily described the network meetings and interactions with associations for relatives, including education. The local unit for recently hospitalised clients was also frequently mentioned as a central instance. Concerning factors helping the coordination work on interaction with family and relatives, no helping circumstances were made explicit at unit A. The coordinators at unit B and C mentioned the network meetings and support from the local unit for recently hospitalised clients as helpful. 
Regarding barriers, low communication with relatives was mentioned at unit A and B: "I wish there were more communication with relatives and that there was a stronger network surrounding our clients. Something bigger than us as coordinators, that is one thing I would like to improve but I don't think we have any methods for that so it is hard for me to say how to do it in practice. One objective this year and the next is to invite relatives more often to our CAN assessment sessions". Theme E: Drug and alcohol. Regarding both direct and indirect planning activities related to drug and alcohol abuse, the coordinators primarily mentioned interactions with the addiction treatment and preventive team (ATPT). Interactions with relatives were only mentioned in exceptional cases. With reference to factors helping the coordination work, some emergent patterns became evident: all units described the ATPT as a central instance: "We have had a close and well-built cooperation standing for many years with the addiction treatment and preventive team. Today, we do have some clients registered here at our rehabilitation unit, which we, for some period of time, transfer to the local psychiatric addictive team for careful drug or alcohol treatment. After that, they return to our rehabilitation unit". Neither unit A nor B expressed any factors hindering the coordination work related to drug and alcohol. The coordinators at unit C mentioned unclear routines, which in turn indicate role ambiguity. Figure 1 summarises the main interview findings. In Figure 1, central planning activities and resources are organised along the five areas of need, summarising the identified main factors helping and hindering the coordination. The endpoints of each axis summarise identified helping aspects and barriers. Among helping aspects, a common denominator related to joint efforts and shared responsibilities was manifest. Regarding barriers, a common denominator related to unclear roles and routines became clear. 
The ambiguity was described both in relation to internal contacts with colleagues and in relation to external contacts with authorities. The Camberwell Assessment of Need Data on the Camberwell Assessment of Need scale, reflecting the clients' satisfaction, were used to assess the set of outcomes. Figure 2 summarises the clients' self-assessments regarding their satisfaction with the help received from the coordinators. Figure 2 shows the results of 1262 clients' self-ratings regarding perceived satisfaction with help received. Clients without any self-reported needs have been omitted from the summary above. Comparing the clients' self-ratings in 1997/98 and 2008, improvement in all five areas becomes clear: daytime activities (+27%), psychotic symptoms (+6%), money (+11%), interaction with family and relatives (+8%), drug and alcohol (+15%). Overall, the results show that the number of clients satisfied with the help received has consistently increased during the given case period. Discussion The addiction treatment and preventive team (ATPT) within the psychiatric outpatient care was described as a central element by all coordinators. As indicated by others [29,30], such specialised teams can play a central role in mental health care. Based on the general characteristics of the ATPT, the element might be important to include in other mental health services trying to achieve proper integration [31]. On the negative side, context factors such as financial savings, role ambiguity and unclear guidelines were described as hindering circumstances in the current situation. The factors contributing to and explaining this are worth closer examination and more research. For instance, inflexible authorities and unclear roles were addressed as a barrier to the coordination and, as identified elsewhere, this type of relations and interactions between governmental levels in a multi-level governance system affects public organizations, their tasks, functioning and autonomy [32]. 
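The satisfaction changes reported above are simple differences, in percentage points, between the share of satisfied clients in the two survey waves. A minimal sketch of that comparison follows; the counts used are hypothetical (the article reports only the resulting differences, not the raw numbers), chosen so the daytime-activities example reproduces the reported +27.

```python
def pp_change(satisfied_before, n_before, satisfied_after, n_after):
    """Change in the share of satisfied clients, in percentage points,
    between two survey waves."""
    share_before = 100.0 * satisfied_before / n_before
    share_after = 100.0 * satisfied_after / n_after
    return round(share_after - share_before)

# Hypothetical counts for illustration only.
print(pp_change(satisfied_before=40, n_before=100,
                satisfied_after=67, n_after=100))  # 27
```

Note that this is a difference in proportions, not a relative increase; a change from 40% to 67% is +27 percentage points but a 67.5% relative increase.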
The findings also indicate that the current system for medical records, unavailable to municipal staff, might result in redundant administrative work. Implementation of shared medical records may help to strengthen the consortium's holistic approach and also contribute to the important aspect of building trust in interorganisational collaboration and care coordination [33]. Regarding data validity, the selection of coordinators via the first-line managers entails the risk of positive sampling, but the observed results indicate no or little such bias since both strengths and weaknesses were identified at all units. Another finding was that the number of clients satisfied with the help received has consistently increased during the given case period. The observed CAN results on client satisfaction with help received during the examined period lend strong support for progression on integrated staff activities. It is reasonable to assume that the CAN results reflect the introduction and development of joint coordinators in 1997/98, but more research on the mechanisms explaining the outcome is still needed. The applied study design was limited in identifying and separating other changes likely to have had an influence on the observed CAN outcome, but the study was able to identify structural and process changes which make the observed client outcomes likely. As regards aspects of internal data validity, the observed CAN results converge with the interview findings and the embedded idea of successive advances within the mental health consortium. Conclusions Well-organised cooperation between health and social services has been difficult to achieve in Sweden and elsewhere. Given the study aim, to document and describe a well-established coordination structure within a mental health and social care consortium, and to explore these structures' impact on care organisation and client outcomes, this study has gone some way towards describing how to develop network structures for coordination. 
This paper described areas where there was some evidence of effective care coordination. Factors that help and hinder care coordination were identified, suggesting elements to be included in further research. The research also identified issues for further development. One lesson for current policy on integrated health care is that joint coordinators for each client may be suited to some situations where full structural integration is not possible. Another lesson, based on the idea of improved patient quality through coordinated care, is to adapt the core work of the local addiction treatment and preventive team for psychiatric outpatient care.
V-NOTES hysterectomy under spinal anaesthesia: A pilot study Background Spinal anaesthesia has not been widely adopted for laparoscopic surgeries until now. There are a few studies that have shown that spinal anaesthesia is at least as safe as general anaesthesia. The need for additional analgesics can be reduced by utilising the early postoperative analgesic effects of spinal anaesthesia, and maximum benefit can be obtained from minimally invasive approaches when V-NOTES surgery is performed under spinal anaesthesia. Objective Combining V-NOTES with spinal anaesthesia to improve minimally invasive surgical techniques and provide maximum benefit to patients. Materials and Methods Patients who were found to have benign pelvic organ pathologies, required a hysterectomy and were considered suitable for V-NOTES hysterectomy under spinal anaesthesia were included in this study. Spinal anaesthesia was achieved with 12.5 mg 0.5% hyperbaric bupivacaine in the sitting position. Perioperative events and complications related to spinal anaesthesia were noted. Postoperatively, pain was evaluated using a visual analogue scale at the 6th, 12th, and 24th hours. Main outcome measures To evaluate the feasibility and safety of spinal anaesthesia in V-NOTES hysterectomy and to increase the advantages of minimally invasive surgical procedures. Results No conversion to conventional laparoscopy or laparotomy was required in any of the six operated patients. Conversion from spinal anaesthesia to general anaesthesia was unnecessary, and no major perioperative incident occurred in any of the cases. Conclusion In the current study, we demonstrated that V-NOTES hysterectomy can be performed safely under spinal anaesthesia in well-selected patients. 
The need for additional analgesics can be reduced by utilising the early postoperative analgesic effects of spinal anaesthesia, and maximum benefit can be obtained from minimally invasive approaches when V-NOTES surgery is performed under spinal anaesthesia. What is new? V-NOTES hysterectomy can be performed safely under spinal anaesthesia in well-selected patients. Introduction Hysterectomy is one of the most frequently performed major gynaecological operations worldwide. Abdominal hysterectomy, laparoscopic hysterectomy, and vaginal hysterectomy may be the preferred surgical approach when surgery is indicated (Whiteman et al., 2008). Recent studies have demonstrated the increasing popularity of laparoscopic hysterectomy over the past 20 years (Wright et al., 2013), and transvaginal natural orifice transluminal endoscopic surgery (V-NOTES) has been introduced as a combination of conventional vaginal and laparoscopic surgery (Kaouk et al., 2009). It has been an approach adopted by gynaecologists alongside developing technological innovations and has become the shining star of laparoscopic gynaecological surgeries over the years. Its advantages include less postoperative pain, no abdominal wall infection, and no scar or incisional hernia (Su et al., 2012;Lee et al., 2014;Baekelandt, 2015;Kale et al., 2017). While developments in minimally invasive surgeries have continued in recent years, several studies have confirmed many advantages of spinal anaesthesia, including less postoperative pain, a lower incidence of nausea and vomiting, and an earlier ability to ambulate (Liu et al., 2005;Capdevila and Dadure, 2004). However, laparoscopic hysterectomy is routinely performed under general anaesthesia regardless of the transabdominal or transvaginal route. 
This is generally explained by the possibility of impaired respiratory function due to the pneumoperitoneum, the Trendelenburg position during laparoscopic gynaecological surgery, or the patient's inability to tolerate the surgery. Although not widely accepted, there are reports of laparoscopic hysterectomy performed successfully under regional anaesthesia, while there are no reports of the use of spinal anaesthesia in V-NOTES hysterectomy (Sinha et al., 2008;Moawad et al., 2018). We aimed to evaluate the feasibility and safety of spinal anaesthesia in V-NOTES hysterectomy and to increase the advantages of minimally invasive surgical procedures. Materials and methods The study was conducted in accordance with the Declaration of Helsinki guidelines in a tertiary referral hospital in Istanbul, Turkey, between January 2019 and June 2020. The local hospital's ethics committee gave ethics approval (Reference Number: 2020/514/178/20, Approved 27 May 2020). The study design was explained to all patients before they were included. A detailed medical history was obtained from all patients. Abdominal examination, bimanual examination, transvaginal ultrasonography and, if necessary, abdominal ultrasonography were performed on all patients. The inclusion criteria were as follows: patients aged 30 to 70 years, with American Society of Anesthesiologists (ASA) physical status I-II, with pelvic organ pathologies associated with the uterus, cervix and/or ovaries that would require a hysterectomy, and eligible for V-NOTES hysterectomy. Patients with contraindications for pneumoperitoneum or spinal anaesthesia, a history of tubo-ovarian abscess, a history of deep endometriosis, suspected severe pelvic adhesions, a nodule in the Pouch of Douglas, a fixed uterus, and sexually inactive patients were excluded from the study. Patients with uterine prolapse who could undergo vaginal hysterectomy were also excluded from the study. 
Two anaesthesiologists evaluated all patients regarding their suitability for spinal anaesthesia, and detailed information was given to the patients about the anaesthesia procedure. The same teams performed both surgery and spinal anaesthesia. In our clinic, the recommendations of the Enhanced Recovery After Surgery Society guideline are applied to ensure optimal perioperative care (Altman et al., 2020). Preoperative bowel preparation was not used. ECG, arterial blood pressure, and pulse oximetry were monitored. After obtaining vital signs, 10 mL/kg of Ringer's lactate solution was given to all patients over 30 minutes. Premedication was not used in any of the patients. Patients were placed in a sitting position. The subarachnoid space was entered through the L4-L5 space with a sharp-tipped 25G spinal needle. Spinal anaesthesia was achieved with 12.5 mg 0.5% hyperbaric bupivacaine in the sitting position. The patient was then placed in the supine position. Motor block was evaluated using the Bromage scale. Sensory block level was evaluated with a pinprick test. All patients received an intravenous dose of 0.05 mg/kg of midazolam. The patients' blood pressure was measured at 5-minute intervals during the first 20 minutes and at 10-minute intervals thereafter. As in other types of surgery, oxygen supplementation was given to all patients by oral and nasal mask. Pain was monitored using Visual Analogue Scale (VAS) scores. When the VAS score was > 3, it was planned to administer 1-1.5 mg/kg IV tramadol with 1 g paracetamol and repeat it every 12 hours if necessary. After the anaesthesia procedure, the patients were placed in the dorsal lithotomy position. To reduce the incidence of postoperative infection, 2 g of cefazolin was administered intravenously 15 minutes before the incision. Povidone iodine solution was used as a topical antiseptic in the surgical field. The surgical field was covered with a sterile drape. An 18-French Foley catheter was placed in the urethra. 
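The weight-based regimens above (10 mL/kg Ringer's lactate, 0.05 mg/kg midazolam, 1-1.5 mg/kg tramadol) all reduce to a single per-kilogram multiplication. A minimal sketch follows, purely for illustration and not clinical guidance; the 70 kg body weight is an assumption, not a patient from the study.

```python
def weight_based_dose(dose_per_kg, weight_kg):
    """Total amount for a weight-based regimen: dose per kg x body weight.
    Illustrative arithmetic only, not clinical guidance."""
    return dose_per_kg * weight_kg

weight_kg = 70  # hypothetical patient weight
print(weight_based_dose(10, weight_kg))    # 700 (mL Ringer's lactate preload)
print(weight_based_dose(0.05, weight_kg))  # 3.5 (mg midazolam)
print(weight_based_dose(1.5, weight_kg))   # 105.0 (mg tramadol, upper end of 1-1.5 mg/kg)
```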
A transvaginal port system (Alexis; Applied Medical Resources Corp., Rancho Santa Margarita) was used to perform the V-NOTES hysterectomy. The Alexis wound retractor was placed in the vagina, and four self-retaining sleeves were placed in the GelSeal cap. Then the GelSeal cap was placed over the Alexis wound retractor. CO2 was insufflated from this port to a pressure of 15 mmHg at a flow rate of 0.4 L/min. After the pneumovagina was created, the vagina and cervix were visualised with a 10 mm 30-degree endoscope. Before the colpotomy incision, the operating table was tilted to a 10-degree Trendelenburg position to reduce the risk of bowel injury. A circumferential incision was made around the cervix using an ultrasonic scalpel system (Harmonic HD 1000i shears, 5-mm diameter; Ethicon). A bladder flap was created by cutting the visceral peritoneum to isolate the bladder from the lower uterine segment. The posterior peritoneal fold was found and dissected using the ultrasonic scalpel system, which allowed access to the Pouch of Douglas. The anterior and posterior incisions were extended transversely across the cervix. The pneumoperitoneum was created at a pressure of 8 to 12 mmHg CO2, and the abdominal cavity was visualised. After entering the abdominal cavity, the Trendelenburg position was increased to 20 degrees as needed. Uterosacral ligament complexes, uterine vessels, leaflets of the broad ligaments, utero-ovarian ligaments and round ligaments were identified and dissected using an electrothermal bipolar vessel sealing device (LigaSure, 5 mm diameter, blunt tip; Covidien). The uterus was freed from all attachments. When a salpingectomy was performed, the Fallopian tubes were identified and cut using the same electrothermal bipolar vessel sealing device. The Fallopian tubes were taken out of the abdomen and left in the outer part of the wound retractor. 
When an adnexectomy was performed, the infundibulopelvic ligament was identified and cut, similar to the other ligaments, using an electrothermal bipolar vessel sealing device (LigaSure, 5 mm diameter, blunt tip; Covidien). The adnexae were taken out of the abdomen and left in the outer part of the wound retractor. After adequate haemostasis was achieved, the uterus was removed through the vagina. The vaginal cuff was closed vaginally using a single coated Vicryl suture (90 cm, polyglactin 910; Ethicon EndoSurgery). The patients' age, primary complaint, menopausal status, systemic diseases, indication for hysterectomy, and pre- and postoperative biochemical and haematological parameters were recorded. The operation time was recorded from the beginning of the colpotomy incision to the vaginal closure. Perioperative events and complications related to spinal anaesthesia, like nausea, vomiting, headache, and shoulder pain, were noted. Postoperatively, pain was evaluated using a visual analogue scale at the 6th, 12th, and 24th hours. When the patients came to the gynaecological inpatient service, they were given liquid food, and solid food was recommended 2 hours after the operation. Patients were discharged 24 hours after the surgery. Results V-NOTES hysterectomy was performed under spinal anaesthesia in six patients who met the previously mentioned criteria and gave written informed consent. The mean patient age was 49 years (min = 43, max = 55, standard deviation [SD] = 4.04 years), and the mean BMI was 25.9 kg/m2 (min = 24.7, max = 27.3, SD = 0.93 kg/m2). All patients were multiparous (median parity = 3.1; min = 2, max = 5). Two patients had hypertension, and one had diabetes mellitus (Table I). One patient had a history of cholecystectomy and umbilical hernia repair, and another had a history of breast cancer. The indication for hysterectomy was abnormal uterine bleeding in all patients. There was no significant change in the patients' mean arterial pressure, mean SpO2 levels, mean systolic blood pressure, or mean diastolic blood pressure during the surgery (Figure 1). The patients were informed about the possible advantages and disadvantages of salpingectomy and salpingo-oophorectomy. Bilateral salpingo-oophorectomy was performed in 4 patients, and bilateral salpingectomy was performed in 3 patients. The median uterine weight was 135 g (range, 90-200). Three patients had uterine fibroids; the largest fibroid size was 3 cm. No conversion to conventional laparoscopy or laparotomy was required. Conversion from spinal anaesthesia to general anaesthesia was not needed, and no major perioperative incident occurred in any of the cases. Nausea was observed in one patient, and shoulder pain was observed in another towards the end of the operation; both resolved spontaneously without needing medical treatment. The average operating time was 58 min (SD = 10.5 min). All patients were mobilised at the 4th postoperative hour. The mean postoperative VAS pain scores at the 6th, 12th, and 24th hours were 1.3 (range, 0-3), 1.5 (range, 0-3), and 0.1 (range, 0-1), respectively (Figure 2). Two patients experienced postoperative nausea that responded to granisetron administration. Vomiting, severe shoulder pain, or headache was not recorded. Blood loss was minimal, and none of the patients required blood transfusion. The mean haemoglobin level change was 0.18 g/dL (SD = 0.14 g/dL) on postoperative day 1. All patients were discharged at the postoperative 24th hour. All patients were followed up one week and one month after the operation. None of the patients had complaints such as headaches or back pain related to spinal anaesthesia. No patient was diagnosed with postoperative cuff cellulitis, cuff separation, or bleeding. Discussion To date, this is the first study to evaluate the feasibility and safety of V-NOTES hysterectomy under spinal anaesthesia. Interestingly, despite the increasing interest in minimally invasive approaches, regional anaesthesia still plays a minor role, and general anaesthesia has become the dominant or even the only approach in laparoscopic gynaecological surgeries. Although upper abdominal laparoscopic surgery under regional anaesthesia has been reported in many studies, only a few have been published on laparoscopic pelvic surgery under spinal anaesthesia (Donmez et al., 2017;Tzovaras et al., 2008). Many factors, such as performing the surgery in the upper or lower abdomen, intra-abdominal pressure during the operation, and the operation time, may affect respiratory and cardiac function during surgery. It has been shown that measurable changes in haemodynamic parameters due to insufflation and patient position during laparoscopy are not reflected in clinical parameters when a pressure of 15 mmHg is not exceeded (Grabowski and Talamini, 2009). In cases where the pneumoperitoneum is created with a pressure of 15 mmHg, there is a 27% decrease in respiratory system compliance, and a prolonged duration of pneumoperitoneum may result in a long time to reverse changes in pulmonary compliance (Rauh et al., 2001). The Trendelenburg position can be used in most laparoscopic pelvic surgeries to facilitate visualisation of the pelvic region and perform a successful operation. If the Trendelenburg position is preferred, respiratory function may be adversely affected, and it may be difficult to complete the surgery under spinal anaesthesia. However, adverse effects on respiratory function may be milder when V-NOTES is performed, as intra-abdominal pressure is maintained at lower levels compared to conventional laparoscopy. Based on these points, we think keeping the intra-abdominal pressure between 8 and 12 mmHg and the short operation time in patients who underwent V-NOTES hysterectomy under spinal anaesthesia are the most important factors preventing adverse respiratory effects. 
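The summary statistics quoted in the results (e.g. mean operating time 58 min, SD 10.5 min) are ordinary mean and sample standard deviation. A minimal sketch of that computation follows; the per-patient operating times below are hypothetical, since the article reports only the summary values, not the raw data.

```python
from statistics import mean, stdev

# Hypothetical per-patient operating times (min) for six patients,
# chosen so the mean matches the reported 58 min.
operating_times = [45, 50, 55, 60, 68, 70]

print(mean(operating_times))             # 58 (mean operating time, min)
print(round(stdev(operating_times), 1))  # 9.9 (sample SD for these invented values)
```

Note that `statistics.stdev` is the sample standard deviation (n - 1 denominator); `statistics.pstdev` would give the population version.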
It may also be important to keep the Trendelenburg position below 20 degrees during the operation. In addition to respiratory effects, numerous studies have shown that laparoscopic pelvic surgery also has some cardiac effects. These cardiac effects are thought to be related to the decrease in venous return to the inferior vena cava due to the increased intra-abdominal pressure during laparoscopic surgery, resulting in decreased cardiac output and, ultimately, hypotension (Grabowski and Talamini, 2009). Another factor affecting the changes caused by laparoscopy in the cardiovascular system is the patient's position. Studies show that the decrease in cardiac output in patients with pneumoperitoneum is more pronounced in the head-up position and tends to increase in the Trendelenburg position (Williams and Murr, 1993;Junghans et al., 1997). In our study, no significant change in mean arterial pressure, hypotension, or respiratory function impairment was observed in any patients. In our study, we consider that the intra-abdominal pressure not exceeding 12 mmHg during the operation and the short duration of the operation are important reasons for the absence of significant adverse cardiac effects. In addition, as we mentioned before, the Trendelenburg position may have played a positive role in cardiac function with its tendency to increase cardiac output. One of the reasons for the increasing popularity of minimally invasive approaches is the concerns and difficulties experienced in the management of acute postoperative pain after abdominal surgery (Andreae and Andreae, 2012). It has been shown that adequate analgesia cannot be achieved in half of the patients using routine pain control methods (Gandhi et al., 2011). Moreover, the lack of pain control after surgery may delay the patient's recovery. 
It has been shown in many studies that pain scores are lower and the need for analgesics is less in the early postoperative period after spinal anaesthesia compared to general anaesthesia (Wang et al., 1996;Massicotte et al., 2009;Kessous et al., 2012). In patients who have undergone surgery under spinal anaesthesia, it can be expected that lower abdominal pain will be felt less in the first few hours with the ongoing effect of spinal anaesthesia after the operation. In V-NOTES surgery, there is no abdominal incision that may cause pain. However, patients describe some pain in the pelvic region in the early hours after surgery, and spinal anaesthesia can reduce the sensation of pain caused by the vaginal incision in the first hours after the operation. In our study, none of the patients who underwent V-NOTES hysterectomy under spinal anaesthesia had a VAS score above 3, and none needed analgesics. One of the conditions that can be encountered in surgeries performed under spinal anaesthesia is shoulder pain, which may limit the use of spinal anaesthesia in laparoscopic surgeries. Shoulder pain occurs as a result of the stretching of the diaphragm with CO2 insufflation and is transmitted through the cervical roots, which are not affected by spinal anaesthesia. Once shoulder pain occurs, it may resolve on its own or be treated with medication, or it may be severe enough to require a conversion from spinal anaesthesia to general anaesthesia. The incidence of shoulder pain specific to laparoscopic gynaecological surgery is unknown due to the paucity of studies. However, it is known that the probability of shoulder pain increases as the duration of pneumoperitoneum or the degree of Trendelenburg tilt increases. To reduce the possibility of shoulder pain, we aimed to keep the pneumoperitoneum time and Trendelenburg degree to a minimum without compromising surgical safety. 
In our study, only one patient had mild shoulder pain towards the end of the operation that did not require medication. There are only a few studies examining laparoscopic gynaecological surgery under spinal anaesthesia. One of these few studies is a case of total laparoscopic hysterectomy successfully performed under regional anaesthesia (Moawad et al., 2018). In another study, Singh et al. (2015) evaluated and reported the use of combined spinal and epidural anaesthesia for conventional laparoscopic surgeries in 50 patients. In that study, eight out of 50 patients underwent laparoscopy-assisted vaginal hysterectomy or total laparoscopic hysterectomy under combined anaesthesia. Conversion to general anaesthesia was required in only two patients due to severe shoulder pain, and no complications were reported (Singh et al., 2015). Conversion from spinal anaesthesia to general anaesthesia was not required in our study. When it comes to V-NOTES hysterectomy, there is a need to consider its position against vaginal hysterectomy. Vaginal hysterectomy is considered the first-line approach when a hysterectomy is needed for a benign indication (National Institute for Health and Care Excellence, 2018; American Association of Gynecologic Laparoscopists, 2011). However, the choice of hysterectomy route may be influenced by many factors, such as the size and shape of the vagina and uterus, accessibility of the uterus, the need for concurrent procedures, and surgeon training and experience (American College of Obstetricians and Gynecologists' Committee on Practice Bulletins, 2017). Vaginal hysterectomy may not always be possible in cases of an undescended and immobile uterus or a narrow vaginal apex (Kovac, 2004). An endoscopic approach may be preferred in cases where vaginal hysterectomy cannot be performed. 
V-NOTES is a very comfortable and safe minimally invasive surgical method performed through natural orifices with laparoscopic surgical equipment. Most studies to date have focused on the comparison of V-NOTES hysterectomy with other laparoscopic hysterectomies (Michener et al., 2021;Kaya et al., 2021). In a study comparing V-NOTES hysterectomy and vaginal hysterectomy, there was no difference in surgical outcomes between the two groups, except for the rate of salpingectomy or adnexectomy (V-NOTES group 100%, vaginal group 60%) (Merlier et al., 2022). Therefore, V-NOTES hysterectomy may offer an advantage over vaginal hysterectomy when adnexal removal is required, but there is a need for further research for comparison. The vaginal route is considered the most cost-effective surgical approach to hysterectomy since no disposable instruments are used and the hospital stay is relatively short. In a study comparing laparoscopy-assisted vaginal hysterectomy and V-NOTES hysterectomy, the V-NOTES hysterectomy group was found to be more costly due to the wound retractor and bipolar vessel sealing device. Although our study did not focus on cost analysis, in our experience, V-NOTES hysterectomy does not appear to be more costly than any other laparoscopic surgery. In addition, the use of spinal anaesthesia may reduce the cost further by reducing the use of analgesics and shortening the hospital stay (Turkistani et al., 2019). Studies will be needed to assess the cost of V-NOTES hysterectomy under spinal anaesthesia in comparison to general anaesthesia and vaginal hysterectomy. Patient satisfaction may be one of the subjective indicators in the evaluation of V-NOTES hysterectomy under spinal anaesthesia. When the patients were asked about their opinions of the surgery at the postoperative follow-up, all patients stated that they were very satisfied with the surgery. 
We believe that establishing a relationship of trust between the patient and the team and informing the patients in detail play a key role. We attribute patient satisfaction to the maintenance of communication with the patient during the operation and the close follow-up of the patient during and after the operation. Conclusion Spinal anaesthesia has not been widely adopted for laparoscopic surgeries until now. It has many advantages, such as a reduction in postoperative pain, faster ambulation, and faster recovery. A few studies have shown that spinal anaesthesia is at least as safe and feasible as general anaesthesia. In the current pilot study, we demonstrated that V-NOTES hysterectomy can be performed safely under spinal anaesthesia in well-selected patients. The need for additional analgesics can be reduced due to the early postoperative analgesic effects of spinal anaesthesia, maximising the benefit of a minimally invasive approach. There is, however, a need for further research to study the feasibility, safety and cost of V-NOTES in comparison to vaginal hysterectomy, as well as to compare V-NOTES under spinal and general anaesthesia. We are currently conducting a prospective randomised controlled trial to compare the outcomes of performing a V-NOTES hysterectomy under spinal and general anaesthesia.
The effect of post-fire stand age on the boreal forest energy balance

Fire in the boreal forest renews forest stands and changes the ecosystem properties. The successional stage of the vegetation determines the radiative budget, energy balance partitioning, evapotranspiration and carbon dioxide flux. Here, we synthesize energy balance measurements from across the western boreal zone of North America as a function of stand age following fire. The data are from 22 sites in Alaska, Saskatchewan and Manitoba collected between 1998 and 2004 for a 150-year forest chronosequence. The summertime albedo immediately after a fire is about 0.05, increasing to about 0.12 for a period of about 30 years and then averaging about 0.08 for mature coniferous forests. A mature deciduous (aspen) forest has a higher summer albedo of about 0.16. Wintertime albedo decreases from a high of 0.7 for 5- to 30-year-old forests to about 0.2 for mature forests (deciduous and coniferous). Summer net radiation normalized to incoming solar radiation is lower in successional forests than in more mature forests by about 10%, except for the first 1–3 years after fire. This reduction in net radiative forcing is about 12–24 W m⁻² as a daily average in summer (July). The summertime daily Bowen ratio exceeds 2 immediately after the fire, decreasing to about 0.5 for 15-year-old forests, with a wide range of 0.3–2 for mature forests depending on the forest type and soil water status. The magnitude of these changes is relatively large and may affect local, regional and perhaps global climates. Although fire has always determined stand renewal in these forests, increased future area burned could further alter the radiation balance and energy partitioning, causing a cooling feedback to counteract possible warming from carbon dioxide released by boreal fires.
Introduction

The boreal forest is recognized as having a global influence on climate by reducing winter albedo (Bonan et al., 1992; Thomas and Rowntree, 1992; Viterbo and Betts, 1999) and acting as a carbon sink (Ciais et al., 1995) and water source to the atmosphere. The North American boreal forest includes sparse lichen woodlands in the northern taiga, and coniferous and deciduous forests that can grow rapidly in the southern regions. The surface characteristics of these forests are also highly variable. This is partly due to the species mix, but is also affected by the growth stage of a forest stand. Global climate models (GCMs) and regional climate models (RCMs) use land surface characteristics to drive the atmosphere-surface feedback, an essential boundary condition to model atmospheric motions. Many of these models use fundamental properties of the ecosystem, such as albedo, leaf-area index (LAI), surface roughness, and vegetation type (e.g., Verseghy et al., 1993; Bonan et al., 1995). They then calculate energy balance exchange from these inputs. One of the difficulties in projecting future climates is understanding the nature of future surface characteristics. Many of the current characteristics can be assessed using remote sensing on scales that are useful for GCM and RCM inputs. However, we need validation that the modeled energy balances agree with observations. This can be done most directly through measurements of the energy exchange properties of the forest. Throughout the North American boreal region, forest stands are being continuously renewed by disturbances. These are dominated by fire and insects, with harvesting being important in the southern parts. Other disturbances, such as disease and windthrow, are also important but often not readily quantified. Forest fire is common throughout Canada and Alaska, averaging about 3 million ha burned annually in recent decades (Stocks et al., 2002; Murphy et al., 2000).
This creates a mosaic of stand ages with different surface characteristics on the landscape. Some of these patches can be quite large; for example, one single Canadian fire in 1995 covered an area of 1 million ha. Hence, we need to characterize the changing state of the surface characteristics and ecosystem properties throughout the life cycle of the forest. We recognize that fire severity is also an important factor in the landscape mosaic, because it dictates the successional trajectory by setting the initial post-fire environment. This affects vegetation establishment as well as dynamics among species. We approximate the succession of forest stands using the chronosequence approach, where forest stands of different ages following fire are studied. Here we focus on energy balance characteristics that drive local, regional and global climates. Many of these concepts have been described in detail by Chapin et al. (2000), especially with regard to vegetation controls in northern ecosystems. Studies of the post-fire energy balance in the boreal forest have included subarctic sites (Rouse and Kershaw, 1971; Rouse, 1976) and daytime summer conditions from aircraft (Amiro et al., 1999). Tower-based measurements of a 1-year-old jackpine site showed a decrease in net radiation (Rn), sensible heat flux density (H), and latent heat flux density (LE) compared to a mature site, whereas a 10-year-old site showed little difference compared to a nearby mature site in summer (Amiro, 2001). Similarly, Alaskan forests in the first decade following fire show reduced summer net radiation, enhanced ground heat flux and lower Bowen ratios compared to older forests (Chambers and Chapin, 2002). Continuous observations over an annual cycle provide evidence that absolute differences in the surface energy budget between early and late successional forests are greatest during spring - because of differences in albedo - and substantial during summer (Liu et al., 2005).
Also, the net radiation of boreal forest and tundra ecosystems responds differently following fire, with the tundra showing increased net radiation (Chambers et al., 2005). These individual studies provide some insights into the situation at specific sites. Over the past few years, there has been a larger effort to explore the effects of fire on the exchange of carbon dioxide between the boreal forest and the atmosphere (e.g., Amiro et al., 2003; Litvak et al., 2003). These studies have also measured energy balance components, which are often overlooked in the presentation of carbon flux results. In the present paper, we have analyzed data sets from all recent studies of post-fire forest environments in the North American boreal forest to provide an integrated view of the changes in the energy balance and surface characteristics with time since fire.

The research sites

The data sets come from several research groups who have been measuring the energy balance flux components and the radiative surface characteristics of 22 boreal forest sites. These represent a 150-year chronosequence initiated by natural wildfire over a variety of vegetation types. Table 1 outlines the location, the year of origin (year of the most recent stand-replacing fire), the dominant vegetation type and the years of data collection for each site. A larger deciduous component of vegetation is present in the younger sites. We have organized these data sets into contributions from Alaska (AK), Manitoba (MB), and Saskatchewan (SK). The AK data consists of two sets of measurements. Summer observations of surface energy and CO2 exchange were made at nine sites during 1998 and 1999 (Chambers and Chapin, 2002). Continuous observations were made at two of these same sites (the 1999 and 1987 burn sites) and one additional site during 2002 as a part of a separate study (Liu et al., 2005).
The MB sequence includes data from six sites including a control mature site (the BOREAS Northern Old Black Spruce site). However, the data reported in this paper were collected on a separate tower from the long-term measurements at the Northern Old Black Spruce site (e.g., Goulden et al., 1997). The SK sequence includes data from three sites burned within the past 27 years, two evergreen mature sites (the BOREAS Southern Old Black Spruce site (Jarvis et al., 1997) and the Southern Old Jack Pine site (Baldocchi and Vogel, 1997)) and a deciduous mature site (the BOREAS Southern Old Aspen site, SOA (Blanken et al., 1997)). The SOA site is of particular interest because it is a mature pure aspen forest, which is not common throughout much of the boreal zone. Most older aspen forests have coniferous components with eventual succession to a coniferous-dominated forest. This particular site is at the southern edge of the boreal forest, and provides a deciduous comparison to the coniferous sites of similar age. The four former BOREAS sites are permanent installations, whereas the other sites were operated for periods of several weeks to several years. Each of these sites is relatively flat with ample fetch, although the 1998 SK site has limited fetch in one sector and data were only used when the footprint originated from the appropriate area. The references given describe the sites more thoroughly.

Measurements and processing

Micrometeorological measurements were made of net radiation (Rn), incoming solar radiation (S), albedo, sensible heat flux density (H), and latent heat flux density (LE) above the vegetation canopy from towers, as well as ground heat flux density (G). The eddy covariance technique was used to measure H and LE, and the half-hourly values were integrated to obtain 24 h (daily) totals of all quantities. The energy storage terms in the air, very small on a daily basis, were not included in the H and LE daily fluxes.
The surface soil layer storage term was included in G using measurements of shallow soil temperatures. The specific instrumentation is listed in Table 2. Turbulent flux data were acquired either through dataloggers or computer acquisition systems, at rates varying from 4 to 20 samples s⁻¹, depending on the site. We did not adjust the eddy covariance measurements for energy balance closure; this has a minor effect on our comparisons because closure is of a similar magnitude among sites (typically 0.85-0.9 on a daily basis). We did not exclude data during low friction velocities at night because this has negligible impact on daily LE and only a small potential effect on daily H. Full 24 h data were available without gap filling. The sites were compared along the chronosequence through normalization of Rn with S (Rn/S). All other components of the energy balance were normalized to Rn as LE/Rn, H/Rn and G/Rn, with the normalization done as the ratio of the daily totals of each component. The normalized daily values during the summertime period, defined from the last week of June to the third week of July (DOY 177 to DOY 205), were used to compute the normalized summertime means and standard errors (i.e., based on the variability among days). This period was selected to ensure the deciduous vegetation had completed the process of full leaf development at all sites, and corresponded to the period where all sites had data. We excluded days where the daily Rn was less than half the maximum value during the period to compare clear-sky conditions (i.e., differential cloud among sites was factored out). This left a data set with a mean number of days per site of 23 that included 40 site-years. The albedo data are based on daily totals (i.e., the ratio of total reflected to total incoming solar radiation).
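The normalization and clear-sky filtering just described can be sketched in a few lines. The daily totals below are synthetic, illustrative values (not the measured data), but the steps - ratios of daily totals, the half-of-maximum Rn filter, and means with standard errors over the retained days - follow the procedure in the text:

```python
import numpy as np

# Hypothetical daily totals (MJ m^-2 day^-1) for one site over the
# summertime window (DOY 177-205); values are illustrative only.
rng = np.random.default_rng(0)
n_days = 29
S = rng.uniform(15.0, 25.0, n_days)           # incoming solar radiation
Rn = 0.6 * S + rng.normal(0.0, 0.5, n_days)   # net radiation
LE = 0.5 * Rn                                 # latent heat flux
H = 0.35 * Rn                                 # sensible heat flux

# Exclude days with Rn below half the window maximum (the clear-sky
# filter described in the text).
clear = Rn >= 0.5 * Rn.max()

def summarize(num, den, mask):
    """Mean and standard error of the ratio of daily totals."""
    ratio = num[mask] / den[mask]
    return ratio.mean(), ratio.std(ddof=1) / np.sqrt(mask.sum())

rn_over_s, _ = summarize(Rn, S, clear)
le_over_rn, _ = summarize(LE, Rn, clear)
h_over_rn, _ = summarize(H, Rn, clear)
print(rn_over_s, le_over_rn, h_over_rn)
```

Because every component is normalized by a daily total measured at the same site on the same day, differences in latitude and cloudiness largely cancel when sites are compared along the chronosequence.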
The wintertime albedo averages and associated standard errors were calculated for each site using all the available daily data during the months of January and February (average number of days = 51). These subsets, one for summer and one for winter, were selected to allow meaningful comparisons among the far-reaching sites. Continuous data were not available at all sites to allow for a full annual comparison.

Albedo

The summertime albedo is about 0.05 immediately after fire, and increases to 0.13 within the first 10 years (Fig. 1). This slowly decreases at most older sites to a value of about 0.07-0.08. The exception is the SOA site, which maintains a summertime albedo of about 0.16 in each of the 3 years. The slight decrease in 2003 may be caused by drought and less leaf area. The development of the forest canopy is largely a function of the stand age, as shown by the leaf-area index (LAI) and height data in Table 2. In fact, regression of height or LAI with stand age is approximately linear and positive, with regression coefficients (r²) of about 0.5. It is important to note that most of the younger (less than 25 years) sites also have a substantial deciduous component in their canopy and also have a high albedo. Hence, the changes in albedo with time since fire are largely related to both the quantity of vegetation (and canopy structure) and the species in the successional trajectory, with deciduous broad-leafed species often being dominant in the early post-fire years. At most of our sites, coniferous pine and spruce dominate in later years, except at the SOA site. We have plotted a linear regression line in Fig. 1 that could be used for modeling purposes for sites older than 12 years, and excluding SOA. We believe that this captures the more typical developments in boreal succession following vegetation establishment, although we do not imply a physical basis for the regression.
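The regression line suggested for modeling (sites older than 12 years, excluding SOA) amounts to an ordinary least-squares fit of summer albedo against stand age. The (age, albedo) pairs below are hypothetical values shaped like the reported trend (about 0.12-0.13 at young ages falling toward 0.07-0.08 at maturity), not the measured site data:

```python
import numpy as np

# Hypothetical (stand age in years, summer albedo) pairs for sites older
# than 12 years; values are illustrative, not the Fig. 1 data.
age = np.array([15, 25, 40, 60, 80, 110, 150], dtype=float)
albedo = np.array([0.125, 0.115, 0.105, 0.095, 0.085, 0.080, 0.075])

# Ordinary least-squares line, as plotted in Fig. 1 for modeling use.
slope, intercept = np.polyfit(age, albedo, 1)
predicted_mature = slope * 150 + intercept
print(slope, intercept, predicted_mature)
```

The fitted slope is small and negative, so the line reproduces the gradual decline toward the mature-forest albedo without implying any physical mechanism.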
The wintertime albedo at the older sites is slightly greater than for summer, but much greater at sites less than 25 years of age (Fig. 2). We have fitted an exponential decay regression curve to the data, which may be useful for ecosystem modelers. Winter albedo can be as high as 0.7 and is caused by a high reflectance from the snow that is seen through the sparse canopy. The mature deciduous canopy in Saskatchewan has about the same albedo as similarly aged conifer sites. This is not expected, but may be caused by similar amounts of snow seen on the ground through the deciduous canopy and on branches of the coniferous canopy at these low winter sun angles. Our albedo measurements are based on daily totals. Some minor latitudinal differences in sun angle do not appear to have a major effect on the trends, since we see similar patterns at any given latitude in Figs. 1 and 2.

Net radiation

We normalized summertime Rn by S to allow comparisons based on different solar radiation conditions caused by latitude and weather conditions. Fig. 3 shows relatively high values at the very young AK sites, with a lower Rn/S at ages between 10 and 25 years. Rn/S is greater for the mature sites, although the SOA site is slightly lower. Again, the SOA site is more similar to the 10- to 25-year-old sites because of the deciduous components. The higher Rn/S at the very young AK sites is caused largely by the low summertime albedo (Fig. 1), so that a greater portion of the shortwave radiation is absorbed. We do not have measurements of the longwave radiation balance at all of these sites, so its contribution to the variation in Rn is not known. We have not attempted to construct regression curves for these data or for the components described in the following sections. The data trends are more complex and do not easily allow for predictive equations without a physical basis for the relationships.
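The exponential decay fit to the wintertime albedo (Fig. 2) can be reproduced with a standard nonlinear least-squares routine. The observations below are illustrative (0.7 for young stands falling to about 0.2 when mature, as in the text), and the model form a·exp(−b·t) + c - decaying toward a mature-forest baseline c - is one plausible choice, since the fitted coefficients are not given here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical winter-albedo observations versus stand age, shaped like
# the trend described in the text; not the measured Fig. 2 data.
age = np.array([5, 10, 15, 25, 40, 70, 120, 150], dtype=float)
albedo = np.array([0.70, 0.60, 0.50, 0.38, 0.28, 0.22, 0.20, 0.20])

def decay(t, a, b, c):
    # Exponential decay toward a mature-forest baseline albedo c.
    return a * np.exp(-b * t) + c

params, _ = curve_fit(decay, age, albedo, p0=(0.6, 0.05, 0.2))
a, b, c = params
print(a, b, c)
```

With these illustrative data the asymptote c comes out near 0.2, matching the mature-forest winter albedo quoted in the abstract.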
Latent heat

We normalized LE by Rn to allow comparisons with different amounts of total energy among the sites and to investigate energy partitioning (Fig. 4). LE is about 20% of Rn at the youngest AK sites and increases to about 60% of Rn at sites from 10 to 25 years of age. The rapid increase with age at the start of the chronosequence corresponds to vegetation development, with the very youngest sites having much less vegetation. At sites between about 10 and 40 years old, the MB sites tend to have less relative LE than the SK sites. LE at the more mature sites (50-150 years) ranges from about 30 to 60% of Rn. These sites differ not only in vegetation, but also in soil type, hydrology and water status during the measurements. This wide range is partly caused by interannual variability at any given site. For example, the SOA site experienced some very dry years, and 1999 was drier than 1998 at the AK sites. Despite this wide range, it is clear that these mature boreal sites partition close to half of Rn into LE during July on average.

Sensible heat and ground heat fluxes

The normalized sensible heat flux, H/Rn, is largely complementary to LE/Rn. Although there is more scatter, the 10- to 25-year-old sites have a lower portion of energy partitioned into H than at most older sites (Fig. 5). The large variability at about 80 years contrasts the greater H at some AK sites with the lower H at the SOA site. As in the case of LE, the SOA site varies depending on the year because of water availability. The variability in G among sites is about the same as that for LE and H in absolute terms, but this appears greater because of the relative magnitude of the dynamic range (Fig. 6). There is a general decrease with forest age. Data were not available for the MB sites. Although G might be expected to be greater at the young sites with sparser canopies, this is not necessarily the case.
The severity of the fire has a major effect on the status of the remaining soil surface organic matter and the successional trajectory. At most of these sites, a ground cover and successional shrubs and seedlings establish quickly after fire, such that soil heating may not be much different from older sites. We have no data on heat storage in standing biomass, but it is usually small on a daily basis (Saxton and McCaughey, 1988) and likely has minimal impact on the differences among the sites.

The Bowen ratio

Figs. 4 and 5 show the relative partitioning of net radiation into LE and H. However, we have explicitly plotted the daily Bowen ratio (H/LE) as a site-derived quantity (Fig. 7). This shows high values at very young sites, with a minimum for sites in the 10-25-year range. There is large variability among sites at about 80 years, contrasting the coniferous AK sites with SOA, whereas the MB and SK sites are mid-range at about 1.5. This difference between coniferous and deciduous canopies follows the differences highlighted by Baldocchi and Vogel (1996) between the SK old jackpine site and a deciduous forest in Tennessee. Their differences were partly attributed to boreal versus temperate forest differences. However, this difference can also occur within the boreal regions depending on forest type and water availability.

Implications

Our measurements provide an integrated estimate of the effect of the post-fire environment on the surface energy balance components of the western North American boreal forest. The three main study regions represent both the northern and southern parts of the boreal forest, and include coniferous- and deciduous-dominated ecosystems. The data have been normalized so that they can be compared directly. The chronosequence data show clear patterns with time-since-fire for most parameters.
However, the strength and duration of the deciduous phase of the post-fire successional trajectory appears to play a key role in shaping the surface energy balance. For example, the SOA site is clearly different from the coniferous sites of similar age in summer albedo and the Bowen ratio. Table 1 shows that the sites less than 20 years old also have a substantial deciduous tree component. Hence, we believe that the humps in the relationships at ages less than 20 years are mostly because of a deciduous component. This illustrates that the successional species trajectory largely defines the energy balance characteristics, with greater amounts of deciduous trees increasing the summer albedo and LE. This then decreases the Bowen ratio. There is some variability in this generalization, with part of this caused by local site and climate effects. For example, 2002 and 2003 were very dry years at the SK sites and some of the variability in LE among years is caused by local moisture limitations. There is likely a further complication caused by variability in the magnitude of fire severity among sites. We do not have direct fire severity data, but all fires had replaced the former forest stand, so were at least lethal to trees. There is really no way to experimentally control for differences caused by fire severity, or for edaphic conditions, among the large number of sites compared in the current study. Instead, it is the generalized trends that appear despite local site differences that suggest that successional development is a key factor. This is based on our use of time-since-fire as the hypothetical independent variable in our figures. We do not imply that the disturbance impact at all post-fire boreal sites follows a conversion from a pre-fire coniferous forest to post-fire deciduous-dominated forest. In fact, some forests, such as jack-pine stands, often perpetuate themselves. 
This will still have implications for surface characteristics and energy balance changes because of a low leaf area index in the young stands, and shallow rooting systems that cannot access deep soil moisture reserves. However, we believe that the nature of the successional stand will determine the surface-atmosphere interaction. Some of this responds to feedbacks, such as high LE depleting soil moisture, which in turn controls growth and species composition. But some of the processes are independent, such as distance from a seed source for regeneration of some species following fire (e.g., Green et al., 1999). If much of the energy balance is controlled by the species composition of the stand, this may be additional support for the use of Plant Functional Types (PFTs) in global vegetation models (e.g., Box, 1996; Bugmann, 1996). For example, deciduous broad-leaf and evergreen needle-leaf PFTs form the two main types for much of the North American boreal forest (Nemani and Running, 1996). However, there is often a successional change between these two types that depends on when the most recent fire has occurred. Forest inventory data for Canada indicate that the current deciduous percentage of forest varies from a low of less than 1% in the Taiga Shield ecozone to close to 40% in the Boreal Plains ecozone. The western part of the Boreal Shield ecozone has about 5% deciduous, whereas the eastern Boreal Shield ecozone has about 16%. The vegetation variation among ecozones is caused by both environmental differences and disturbance. In addition, Siberian forests behave differently and include more extensive areas of deciduous needle-leafed trees (Larix spp.). Irrespective of current vegetation type, fire has the potential to alter the landscape mosaic and affect local, regional and global climates. Much of the focus on boreal forest change related to climate control has concentrated on albedo effects (Bonan et al., 1992; Betts, 2000).
The wintertime albedo difference is more dramatic, but the summertime change with forest age is also important. For the surface energy balance, the overall effect on net radiation is of the order of 10% in summer on a daily basis (see differences in Fig. 3). The period of increased Rn is very short (less than 5 years), so the net fire impact is a decrease in Rn of 1-2 MJ m⁻² day⁻¹, or a 12-24 W m⁻² difference on a daily basis, in fire-renewed stands for a few decades. As a comparison, global radiative forcing by enhanced greenhouse gases is of the order of 1-2 W m⁻² to date (IPCC, 2001). However, the age changes caused by fire are over a limited area and can create either an increase or decrease in Rn, depending on the successional stage and trajectory. To estimate the full effect, we would need to integrate global fire and forest successional impacts on an annual basis. We expect that these changes would be even larger than those during summer because of the effects of snow cover on albedo during spring (e.g., Liu et al., 2005), and these changes in the surface energy balance may be a more important driver of climate change than the carbon dynamics associated with a changing fire regime. For example, changes in the land system have been highlighted as a contribution to air temperature increases in northern North America (Skinner and Majorowicz, 1999), and modeling experiments clearly show that albedo changes need to be compared to forest carbon sinks to determine net climate effects (Betts, 2000). The tower data collated in the current study demonstrate that natural stand renewal by fire determines energy partitioning at the local scale, thereby determining microclimates. Larger climate scales can also be affected substantially because fire patch sizes often exceed 10,000 ha in size (Stocks et al., 2002). However, the quantification of the impact on climate is more difficult.
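As a quick check of the unit conversion quoted above, a daily net-radiation deficit of 1-2 MJ m⁻² day⁻¹ spread over the 86 400 s in a day works out to roughly 11.6-23.1 W m⁻², consistent with the rounded 12-24 W m⁻² figure:

```python
# Convert a daily energy total (MJ m^-2 day^-1) to a mean flux (W m^-2).
SECONDS_PER_DAY = 24 * 3600  # 86 400 s

def mj_per_day_to_watts(mj_per_day):
    # 1 MJ m^-2 day^-1 = 1e6 J m^-2 / 86 400 s, about 11.6 W m^-2
    return mj_per_day * 1.0e6 / SECONDS_PER_DAY

low = mj_per_day_to_watts(1.0)
high = mj_per_day_to_watts(2.0)
print(round(low, 1), round(high, 1))
```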
Avissar and Schmidt (1998) explored the scale of patches that affect mesoscale circulations and concluded that patches of 5-10 km in extent can be significant. This depends on windspeed, humidity and the magnitude of the surface flux difference. Many of our fires are of sufficient scale to affect these circulations. For our larger boreal fires, it is possible that a whole GCM grid (of the magnitude of 4° latitude) could be affected. GCM projections of future climates in boreal areas suggest that area burned could double in a 3 × CO2 environment in Canada compared to the recent past (Flannigan et al., 2005). However, there is also evidence that historical area burned in Canada before European influence was greater than in the recent past (Bergeron et al., 2004), and this poses additional uncertainty in the likely future fire regime. If fire increases in the future, we need to consider feedbacks of the fire effects, which include enhanced emissions of combustion carbon to the atmosphere, post-fire vegetation changes, and changes in fire severity. A positive feedback scenario is possible, with the warming causing more fire, releasing more CO2, and increasing the warming through elevated atmospheric CO2 concentrations. However, potential negative feedbacks include more smoke, higher winter albedo at recently burned sites, lower net radiation over successional forests, and higher evapotranspiration that could change cloud cover. Also, there may be less fire growth in younger forests because deciduous-dominated stands tend to have higher moisture contents and slower rates of fire spread. Investigation of these interactions is the next phase in linking dynamic vegetation models to the GCMs (e.g., Thonicke et al., 2001), and we need to be able to incorporate the appropriate energy balance changes. The modeled results will need to capture the measured changes to the physical environment following fire.
Analyzing outcomes after proximal humerus fractures in patients <65 years: a systematic review and meta-analysis

Background

There has been an increasing amount of interest and research examining best practices for the treatment of proximal humerus fractures (PHF). Recent, high-level randomized control trials and many retrospective cohort studies have failed to demonstrate clear benefit of surgical management for these injuries, especially in the elderly (generally defined as ≥65 years old). There is a paucity of research available on outcomes after surgical and nonsurgical treatment of proximal humerus fractures in adults younger than 65 years, and comparative data are almost nonexistent. The purpose of our study was to perform a systematic review and meta-analysis on the available data to determine if the literature supports surgical management over conservative treatment for PHFs in adults younger than 65 years.

Materials and methods

Adhering to PRISMA guidelines, a systematic review of proximal humerus fractures was performed using MEDLINE and Google Scholar databases. Studies were included if they reported useable data such as outcome measures for adult patients younger than 65 years. Quality of nonrandomized studies was assessed utilizing the MINORS criteria. Extracted data were analyzed using statistical software with the P-value for significance set at 0.05.

Results

Six studies were included in the study for data extraction and statistical analysis. When comparing Constant Scores (CS) and Oxford Shoulder Scores (OSS) of operatively and nonoperatively treated adult patients aged less than 65 years, no statistical differences were found. Furthermore, no statistical differences in CS or OSS were found comparing elderly patients (defined as ≥65 years) and adult patients (defined as 18 to <65 years). Analysis of DASH outcome data did show statistical differences among the three cohorts (nonoperative <65, operative <65, and operative ≥65).
Thus, only the limb-specific (not joint-specific) outcome score (DASH) was found to be significantly different upon data analysis. Differences in shoulder-specific outcome scores (OSS and CS) failed to meet significance.

Conclusion

The available literature does not demonstrate a clear clinical benefit of operative treatment over nonoperative management of proximal humeral fractures in adult patients younger than 65 years. These results challenge the widely accepted practice of choosing surgical treatment in adult patients younger than 65 years with PHFs.

Proximal humerus fractures (PHF) continue to be a significant burden on the healthcare system. The vast majority of PHFs occur later in life, with an exponential increase after the fifth to sixth decades. 12 They are the third most common osteoporotic fracture and have been proven to be independent risk factors for subsequent hip fracture. 5,6 Despite the morbidity and societal burden associated with these fractures, research on the treatment and outcomes for these injuries has been inconclusive. Treatment for proximal humerus fractures ranges from nonoperative management to arthroplasty. There is an expanding body of literature analyzing outcomes of operative procedures, yet there is a disparity in reported data for conservative treatments. Due to the improvements seen in the literature with surgery, operative management is widely pursued. Studies from around the world report up to 40% of PHFs being treated surgically and 100-400% increases over time in the use of operative management for proximal humerus fractures. 8,22,24 Despite this drastic trend, the highest-level evidence available reports no benefit of surgery over conservative therapy across all age groups. [17][18][19] Outcome data for PHFs is lacking in a few critical areas. The available outcome data mostly focus on the elderly (defined as ≥65); there are limited studies analyzing outcomes of PHFs in adult patients (defined as 18 to <65 years).
Citing patient characteristics such as better bone quality and increased physical demand, many surgeons advocate for operative management for adult patients. 10 Although logical, the use of surgery over conservative therapy in the adult population is unproven in the literature with a paucity of comparative and even observational studies. Thus, a systematic review and meta-analysis was performed to compare operative treatment versus nonoperative management for proximal humerus fractures in adult patients (18 to <65 years). We hypothesized there would be a statistically significant difference between operative treatment and nonoperative management of proximal humerus fractures in adult patients younger than 65 years. Search strategy In accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, a systematic review of the literature was completed using a search performed on MEDLINE and Google Scholar on July 5, 2020. For each of the searches, the titles and abstracts were screened and the full text versions of articles that met criteria were downloaded. Full texts were reviewed and any relevant referenced articles that were not already obtained were ordered and obtained. "Related citations" were also reviewed during the searches, and the "cited by" function on Google Scholar was also used to identify any additional studies. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines were downloaded and followed during this review. In addition to following PRISMA guidelines, identified non-randomized studies were scored using the methodological index for nonrandomized studies (MINORS) criteria to identify risk of bias. 
15,16

Study selection

Criteria for inclusion were peer-reviewed studies (published articles or abstracts) evaluating operative treatment and nonoperative management of proximal humeral fractures in adult patients (18 to <65 years of age) with clear extractable data and mean follow-up greater than one year. Only studies with author-provided translation of the article text to English were included. Throughout the duration of the search, the contents of each article, as well as the reference list, were screened for overlap of patients from other studies.

Data abstraction

Authors G.L. and I.H. independently performed a search of the literature, screened titles and abstracts, and downloaded the articles for inclusion. The decision to include articles was made by consensus, and, if necessary, the final decision was made by the senior author K.M. Data collected included patient age, surgical treatment, type of fracture, complications, and patient-reported outcomes (Disabilities of the Arm, Shoulder and Hand; Oxford Shoulder Survey; Neer; Constant).

Statistical analysis

Data were initially collated and analyzed with Microsoft Excel (Microsoft Corp., Redmond, WA, USA). When available, raw data including mean, standard deviation, and number of patients were collected and used to calculate the sum of terms. Studies with individual raw patient data without means and standard deviations were manually input into Microsoft Excel for inclusion into the final data calculations; or, if individual scatter plots of the data were available, the estimated data point values were used to calculate the sum of terms, with the means and standard deviations identified using computerized software (WebPlotDigitizer by Ankit Rohatgi). The null hypothesis for this study was that there is no difference in outcome data between adult patients (18 to <65 years of age) and elderly patients (≥65 years of age). A two-tailed, unpaired t-test was performed for continuous outcomes.
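The two-tailed, unpaired t-test on summary statistics described above can be sketched in a few lines. This is only an illustration: the OSS means and standard deviations are taken from the comparison reported later in the Results, while the group sizes (n = 50 per arm) are hypothetical placeholders, not values extracted from the included studies.

```python
import math

def welch_t_from_stats(mean1, sd1, n1, mean2, sd2, n2):
    """Two-tailed unpaired (Welch's) t statistic and approximate degrees
    of freedom, computed from group means, standard deviations, and sizes."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Adult OSS (40.9 +/- 8.4) vs. elderly OSS (37.6 +/- 10.6); n = 50 per
# group is a hypothetical placeholder, not a value from the review.
t, df = welch_t_from_stats(40.9, 8.4, 50, 37.6, 10.6, 50)
print(round(t, 2))  # t ~ 1.73 with these illustrative group sizes
```

With the t statistic and degrees of freedom in hand, the two-tailed P value follows from the t distribution (e.g., `2 * scipy.stats.t.sf(abs(t), df)` if SciPy is available); RevMan performs the equivalent pooling internally.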
The P value for statistical significance was set at .05. Review Manager (RevMan) version 5.3 (Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2014) was used for meta-analysis. When pooling the data in studies, the means and standard deviations were calculated by RevMan.

Results

A total of 637 (MEDLINE: 537; Google Scholar: 95) studies were screened for relevance. After identification of 23 potentially relevant studies, they were downloaded, and review of the reference lists yielded an additional 4 studies, for a total of 27 studies. Twenty-two articles were excluded: 3 were review articles with no new data, 2 were preliminary reports that were contained in another study by the same author, 2 were in a foreign language without author-approved translations available, 5 articles had no patient outcome information, 8 articles had insufficient patient age information, and 2 articles did not have patients under 65 years. Six studies met criteria and were included in this review, with a mean follow-up of 42.3 months encompassing Neer I-IV fracture types. Figure 1 summarizes the PRISMA flow diagram of study selection. The 6 studies included in the data extraction and analysis were made up of 3 level I randomized controlled trials and 3 comparative cohorts (level IV evidence) reporting outcomes after surgical procedures (mostly proximal humerus plating/shoulder arthroplasty) and nonoperative management (most commonly a sling plus or minus a swathe). 11,[17][18][19][20]23 Two out of the 3 nonrandomized studies had lower MINORS criteria grades, indicating a potentially high level of bias (Table I). Two studies reported outcome-level data utilizing the Oxford Shoulder Scale (OSS), 2 studies reported Disabilities of the Arm, Shoulder, and Hand scores (DASH), and 2 studies reported Constant Scores (CS). The extracted data means and standard deviations for all cohorts are reported in Table II.
In studies reporting post-surgical OSS outcomes, no statistically significant difference in OSS was calculated when comparing the adult (OSS = 40.9 ± 8.4) versus the elderly (OSS = 37.6 ± 10.6; P = .106) cohorts. No difference was found when comparing operative (OSS = 39.5 ± 10.2) versus nonoperative management in adult patients (40.9 ± 8.4; P = .859) (Tables II-IV). Analysis of CS in adult patients also failed to show a statistically significant difference between operative (65.6 ± 15.6) and nonoperative cohorts (64.7 ± 13.07; P = .859). Furthermore, a statistical comparison of CS between operatively treated elderly patients versus adult patients did not yield a significant difference between the two differently aged cohorts (Table III). Only when examining DASH scores was there a statistical difference. Comparison of DASH scores of adults (18.2) versus the elderly (27.8) yielded statistical significance (P = .0017). Finally, data analysis revealed a statistical difference in DASH scores when comparing nonoperative management versus operative treatment in adult patients. Complications were underreported in most studies but tended to be greater in the surgical cohort, with 106 reoperations being reported in the study by Robinson et al (mostly for persistent stiffness or symptomatic hardware). 20

Discussion

Management of proximal humerus fractures remains contentious. Studies examining outcomes following surgical management in adults <65 years are scarce, and data for nonoperative management in this age group are almost nonexistent. Despite the lack of research and unknown outcomes, there has been a global trend towards operative management for these injuries, especially in adults younger than 65 years. 22 Ideally, surgical management should yield superior outcomes when compared to nonoperative management in any cohort; however, clear benefit of surgery over nonoperative management for any age group remains elusive and thus controversial.
A recent study by Caliskan and Dogan found no benefit of surgery across Neer Type II-IV fracture types in a cohort with a mean age less than 60 years. 3 There was some increased grip strength with surgical intervention in Neer II fractures at the cost of increased pain, and there was a trend toward improved strength of the forearm for type III fractures. Type IV fractures had no benefit. Multiple meta-analyses and high-level studies have analyzed nonoperative versus operative outcomes, with the consensus that operative treatment provides no clear benefit over nonoperative management in the elderly patient. 2 It is now generally accepted that operative management of PHFs is not advantageous in the elderly. Despite the focus on the elderly, no study has compared outcomes in adult patients (defined in our study as 18 to <65 years). 9 Thus, we present the first systematic review and meta-analysis analyzing outcomes of operative treatment versus nonoperative management in adult patients with PHFs. We found that operative treatment of PHFs provides no significant improvements in OSS and CS when compared to nonoperative management, regardless of age. This finding is in agreement with the Proximal Fracture of the Humerus Evaluation by Randomization (PROFHER) study by Rangan et al, which concluded surgery did not improve outcomes when compared to conservative management in patients across all age groups. Our analysis further corroborates this, as we found no significant differences in surgical outcome when comparing the elderly to the adult cohort. In addition to not finding surgery advantageous when comparing age groups, we also found surgery did not offer any clear benefits over nonoperative methods when only analyzing patients younger than 65 years.
These findings challenge the common practice of operating on PHFs in patients younger than 65 years, as it may expose patients to intraoperative and postsurgical complications with no added clinical benefit. Furthermore, it has been reported that patients see the greatest improvements in upper extremity function after PHF about a year from injury. The large improvement seen in observational surgical studies may simply be normal physiological healing that would have occurred without surgery. On the other hand, when utilizing the data available for DASH scores, we found a statistically significant difference between the elderly and adult cohorts in favor of operative treatment for adult patients. The mean difference in DASH score between the two treatment cohorts for adults was 23.5, and the difference in DASH scores between adult and elderly patients was 9.6. When evaluating DASH scores, it is important to note the difference between statistical and clinical benefit. The minimal clinically important difference of the DASH score has been reported to be 10.83-13. 7,25 As a result, the difference of 9.6 between the operatively managed cohorts is statistically significant, but not clinically significant. This finding corroborates recent studies reporting elderly patients faring about the same as adults after surgery for PHFs, challenging the rationale of choosing nonoperative management in the elderly due to a perceived lack of benefit. 26 Therefore, surgery seems to offer clear benefit to adults (18 to <65 years) only in regard to DASH scores when comparing nonoperative to operative cohorts. The discrepancies between outcome measures (CS and OSS vs. DASH) may be attributed to differences in construct validity. Although these three scores are reported to be reliable measures of shoulder function for various pathologies, only the CS and OSS are shoulder specific.
Previous research has identified that a high correlation exists between the CS and OSS, while a low correlation exists between the CS and DASH, and our data seem to reflect these reported relationships. 1 Regarding the CS, it is considered the gold standard in Europe, and criticisms include its time-consuming nature and lack of proper standardization. 21 Criticisms of the DASH score include it being limb specific rather than joint specific and its susceptibility to patient bias due to its subjective nature. 4,14 There are several limitations in our study. Due to the scarcity of the studies and difficulty identifying usable data within the bodies of the papers, it is likely some available data were missed. Scarcity of data resulted in a small sample for the adult (18 to <65 years) nonoperative arm and the DASH outcome group, which greatly diminished the power of this study. In addition, curated data for nonoperative management utilizing DASH scores could not be included because of missing statistical parameters. It is likely that inclusion of these data would have a significant impact on the DASH scores for the younger than 65 years nonoperative cohort. 13 Furthermore, lack of high-level data created a need to include lower-level observational studies for the meta-analysis, lowering the level of evidence of the meta-analysis and opening the study to the limitations and biases associated with retrospective cohort designs. Finally, we hoped to structure the study in a way to be inclusive of all-comers; however, selection bias is likely given that the inclusion criteria for the various studies were variable.

Conclusions

Outcome data for patients younger than 65 years with proximal humeral fractures are scarce and difficult to find. There is a need for long-term outcome data in patients younger than 65 years with proximal humeral fractures.
A subanalysis performed in the largest randomized controlled trial to date, by Rangan et al, indicated no significant difference in primary outcomes between operative and nonoperative management in patients younger than 65 years. Furthermore, it found no statistical difference in outcomes between adult and elderly patients in regard to OSS. This systematic review and meta-analysis demonstrated no significant clinical difference between operative and nonoperative treatment of proximal humeral fractures in adults younger than 65 years. Currently, the literature does not support surgical treatment over conservative management for proximal humerus fractures, regardless of age.

Disclaimers: Funding: No funding was disclosed by the author(s). Conflicts of interest: The authors, their immediate family, and any research foundation with which they are affiliated have not received any financial payments or other benefits from any commercial entity related to the subject of this article.
β-Globin cis-elements determine differential nuclear targeting through epigenetic modifications

Multiple cis-elements surrounding the β-globin gene locus combine to target this locus to the nuclear periphery through at least two different epigenetic marks.

Introduction

Spatial compartmentalization of chromatin may contribute to regulation of genome function (Zhao et al., 2009; Cope et al., 2010; Geyer et al., 2011; Meldi and Brickner, 2011). In many higher metazoans, transcriptionally silent genes are preferentially located toward the nuclear periphery, with more active genes preferentially located in the nuclear interior (Peric-Hupkes and van Steensel, 2010; Shevelyov and Nurminsky, 2012). Recent genome-wide studies using the DNA adenine methyltransferase identification (DamID) method have mapped preferred genome-lamin interactions in Drosophila melanogaster and cultured mammalian cells, suggesting increased interaction of transcriptionally inactive regions with the nuclear lamina (Peric-Hupkes and van Steensel, 2010). In human fibroblasts, >1,300 sharply defined domains with sizes of 0.1-10 Mb were shown to preferentially interact with the nuclear lamina (Guelen et al., 2008). These lamina-associated domains (LADs) are enriched in repressive chromatin marks and genes with low expression levels. Similarly, some inactive genes localize to pericentromeric heterochromatin (PCH). In cycling primary B lymphocytes or developing T cells, PCH association correlated with heritable gene silencing (Brown et al., 1997; Hahm et al., 1998). Many developmentally regulated genes locate at the nuclear periphery in their silent state but reposition to the nuclear interior upon gene activation, suggesting that peripheral gene localization may help establish and/or maintain developmental gene repression (Kosak et al., 2002; Ragoczy et al., 2006; Williams et al., 2006; Yao et al., 2011; Kohwi et al., 2013).
In yeast, tethering to the nuclear periphery restored gene repression to a defective silencer (Andrulis et al., 1998). In mammalian cells, similar tethering experiments (Finlan et al., 2008; Kumaran and Spector,

Increasing evidence points to nuclear compartmentalization as a contributing mechanism for gene regulation, yet mechanisms for compartmentalization remain unclear. In this paper, we use autonomous targeting of bacterial artificial chromosome (BAC) transgenes to reveal cis requirements for peripheral targeting. Three peripheral targeting regions (PTRs) within an HBB BAC bias a competition between pericentric versus peripheral heterochromatin targeting toward the nuclear periphery, which correlates with increased H3K9me3 across the β-globin gene cluster and locus control region. Targeting to both heterochromatin compartments is dependent on Suv39H-mediated H3K9me3 methylation. In different chromosomal contexts, PTRs confer no targeting, targeting to pericentric heterochromatin, or targeting to the periphery. A combination of fluorescent in situ hybridization, BAC transgenesis, and knockdown experiments reveals that peripheral tethering of the endogenous HBB locus depends both on Suv39H-mediated H3K9me3 methylation over hundreds of kilobases surrounding HBB and on G9a-mediated H3K9me2 methylation over flanking sequences in an adjacent lamin-associated domain. Our results demonstrate that multiple cis-elements regulate the overall balance of specific epigenetic marks and peripheral gene targeting.

in mouse embryonic stem cells largely independent of their chromosomal insertion sites. This autonomous targeting mirrors the peripheral targeting of the endogenous mouse β-globin locus in mouse embryonic stem cells (Hepperger et al., 2008). We applied this autonomous targeting assay to dissect cis-elements conferring targeting to the nuclear periphery (Fig. S1).
Inserting a lac operator (LacO) 256-mer into the BAC and expressing GFP-lac repressor (GFP-LacI) provided direct visualization of integrated BAC transgene location. To further facilitate this assay, we used mouse NIH 3T3 cells, an immortalized fibroblast cell line with high transfection and subcloning efficiency. FISH on human BJ-human telomerase reverse transcriptase (hTERT) cells revealed >50% of endogenous β-globin loci within 0.5 µm from the nuclear lamina (Fig. 1, a and g). Measurements were made from single optical sections from the nuclear midsection. Peripheral localization of the endogenous β-globin locus also was observed in mouse NIH 3T3 fibroblasts (Fig. 1, b and g). No significant peripheral localization was observed for the endogenous human α-globin locus, flanked by multiple housekeeping genes (Fig. 1, c and g). The peripheral localization of human β-globin (HBB) BACs in mouse NIH 3T3 fibroblasts similarly mirrored the peripheral localization of endogenous mouse and human HBB loci in fibroblasts. In 4/5 randomly selected, stable clones, BAC transgenes were located within 0.5 µm from the nuclear periphery in 30-64% of the cells (Fig. 1, d-f), with the transgene radial positioning distribution recapitulating the endogenous β-globin locus distribution (Fig. S2). Peripheral localization in all clones was significantly higher than the 14.8% predicted geometrically for an average-sized, 2D elliptical nucleus (16- and 11-µm diameters; Fig. 1 g). The LacO repeat does not contribute to this peripheral localization; we observed by FISH a 50% peripheral localization of HBB BAC transgenes lacking an inserted LacO repeat in a mixed population of stable clones (Fig. 1 g). In contrast, dihydrofolate reductase (DHFR) BAC transgene arrays showed a 5-16% peripheral localization (Fig. 1 g).
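The 14.8% chance-expectation figure quoted above can be reproduced with simple geometry. The sketch below approximates the 0.5-µm peripheral band by shrinking both semi-axes of the ellipse by 0.5 µm; this is only an approximation of a true constant-width offset, but it matches the quoted value:

```python
# Semi-axes for an average-sized 2D elliptical nucleus (16- and 11-µm diameters)
a, b = 16 / 2, 11 / 2
band = 0.5  # width (µm) of the zone scored as "peripheral"

# Ellipse area is pi*a*b; pi cancels in the ratio, so the fraction of the
# nuclear cross-section lying within the peripheral band is:
frac = 1 - ((a - band) * (b - band)) / (a * b)
print(round(100 * frac, 1))  # -> 14.8
```

Any peripheral-localization frequency well above this ~15% baseline therefore reflects targeting rather than chance placement within the nuclear cross-section.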
Previously, we observed interior nuclear localization for LacO-tagged BAC transgene arrays carrying metallothionein, Hsp70, DHFR, or α-globin gene loci (Hu et al., 2009; Sinclair et al., 2010).

β-Globin genes and regulatory regions are not required for peripheral targeting

Movement of the β-globin genes from the nuclear periphery to interior accompanies their transcriptional activation during erythrocyte maturation. The human β-globin gene cluster consists of five globin genes (HBE1, HBG1, HBG2, HBD, and HBB). The 39.5-kb "Hispanic" deletion upstream of the β-globin gene cluster causes profound alteration of β-globin expression (Driscoll et al., 1989; Bender et al., 2006). The most important regulatory region overlapping with the Hispanic deletion is the locus control region (LCR), containing six DNase I hypersensitive sites (HSs), located 6-22 kb upstream of the HBE1 gene and required for high expression levels of all β-globin genes. The 27-kb region upstream of the LCR (upstream Hispanic region [UHR]) may also contain functional elements for β-globin gene regulation.

2008; Reddy et al., 2008) have suggested that gene repression associated with tethering was promoter specific and quantitative, modulating transcription rather than turning it from on to off. Little is known about how endogenous gene loci are targeted to the nuclear periphery. Different models could explain targeting of single copy gene loci to the nuclear periphery. Peripheral targeting of transcriptionally inactive genomic regions could be the default, with transcriptionally active genome regions actively targeted to the nuclear interior (model 1).
Alternatively, specific DNA sequences, and/or proteins binding to these sequences, might target chromatin to the nuclear periphery either through direct molecular interactions with nuclear envelope proteins (model 2) or through establishment of a distinct, epigenetically marked chromatin domain, with peripheral targeting downstream of this chromatin domain establishment (model 3). Two very recent studies have begun to address these possible molecular mechanisms. Supporting model 2, an autonomous bacterial artificial chromosome (BAC)-targeting approach identified lamin-associated sequences (LAS) conferring peripheral targeting from the IgH and Cyp3a multigene loci (Zullo et al., 2012). These LAS contained GA motifs binding the cKrox GAGA transcription factor, which was proposed to peripherally tether these sites through interactions with the inner nuclear membrane protein Lap2β and HDAC3. Supporting model 3, tethering of repetitive gene arrays to the nuclear periphery in Caenorhabditis elegans was dependent on H3K9 methyltransferases, whereas chromosome arm regions with high levels of H3K9 methylation showed reduced interactions with the nuclear lamina after H3K9 methylation knockdown (Towbin et al., 2012). The high compaction of large-scale chromatin folding complicates identification of cis- and trans-elements that target chromosome regions to particular nuclear compartments. Active targeting via a single sequence region may lead to apparent targeting of 100s to 1,000s of kilobases of the adjacent chromosomal sequence, depending on the resolution of the imaging modality used to map intranuclear localization. Conversely, redundant targeting mechanisms operating over adjacent chromosome regions may make identification of individual targeting mechanisms difficult. Here, we use the mammalian β-globin gene (HBB) locus to identify cis requirements for chromosome targeting to the nuclear periphery.
HBB targets either to the nuclear periphery (Hepperger et al., 2008) or to PCH (Brown et al., 2001) in several cell types in which it is inactive. By analyzing the autonomous nuclear periphery targeting of BAC transgenes, we identify a Suv39H, H3K9me3-dependent pathway involved in tethering the HBB locus to the nuclear periphery, in competition with targeting to PCH, which acts separately from an independent G9a, H3K9me2-dependent pathway, which tethers adjacent LAD sequences to the periphery.

Results

A BAC containing the human β-globin locus autonomously targets to the nuclear periphery in mouse fibroblasts

Multiple, cointegrated copies of a 207-kb BAC (CTD-2643I7) containing the human β-globin locus target to the nuclear periphery. Interestingly, our deletion analysis revealed competition between HBB BAC targeting to the nuclear periphery versus PCH. In mouse cells, PCH clusters into DNA-dense bodies called chromocenters. BAC deletions that removed all three PTRs (D4 and D8D14) lost peripheral targeting activity while showing a commensurate increase in chromocenter association (Fig. 3, a and b). These observations prompted reexamination of HBB BAC localization. Cell clones with different chromosome integration sites and different HBB peripheral targeting frequencies showed a near constant sum of chromocenter or peripheral targeting (Fig. 3 c). A similar, near constant sum for peripheral and chromocenter targeting was observed for HBB BAC deletions preferentially targeting to the periphery (D5) or to the chromocenter (D4; Fig. 3 c). RT-quantitative PCR (qPCR) measurements of β-globin and olfactory gene expression, normalized by copy number, revealed similar expression for BAC transgenes localizing either to the periphery or the PCH (Fig. 3 d).
HBB transgene expression levels were very close to copy number-normalized expression.

Two erythroid-specific DNase I HSs, the 3′ HS1 located between the HBB and olfactory receptor OR51V1 genes and HS-110 located 110 kb upstream of HBE1 between olfactory receptor genes OR51B6 and OR51M1, have also been implicated in β-globin regulation (Palstra et al., 2003; Fang et al., 2007). We used λ Red BAC recombineering to remove specific sequences from the HBB BAC (Fig. 2 and Fig. S3; Warming et al., 2005). Cycles of galK insertion and removal using positive and negative selection allowed repeated rounds of deletions. Deleting the LCR (D1) and the β-globin gene cluster (D2), as well as the entire β-globin locus including the UHR and 3′ HS1 sequences (D5), all failed to eliminate peripheral targeting (Fig. 2, a and b). HS-110 also is not required (D9 and double deletion D8D11; Fig. 2, f and g). Instead, deleting the entire 80-kb region (D7) upstream of the β-globin gene cluster and LCR eliminated peripheral targeting (Fig. 2, a, c, and f). This peripheral targeting capability was further narrowed to a 56-kb region by the D4 deletion (Fig. 3, a, c, and f). As a negative control, a double deletion (HBBD5D7; Fig. 2 a) localized to the nuclear interior (Fig. 2, c and f). This BAC double deletion removes nearly the entire human DNA insert but retains the LacO repeat, selectable marker, and vector backbone. Previous studies in human lung fibroblasts mapped a 32-kb LAD region at the 3′ end of the HBB BAC (Fig. 2 a; Guelen et al., 2008). Neither the LAD (D8) nor the LAD plus boundary region containing HS-110 and a CTCF binding site cluster (D9) was required for peripheral targeting. This means either the LAD region does not contain a peripheral targeting sequence or functionally redundant targeting sequences exist outside the LAD region. Further dissection revealed three functionally redundant peripheral targeting regions (PTRs; Fig. 2 e).
The 6.3-kb PTR1 was identified using nested deletions D11 through D14 combined with the D8 deletion (Fig. 2, e-g). The triple deletion D8D13D22 confirmed that additional sequences flanking PTR1 were not required for targeting (Fig. 2, e and f). The 23-kb PTR2 was revealed by double deletion D8D17, which removed PTR1 but preserved peripheral targeting (Fig. 2, e and f). PTR3 was mapped to the 32 kb corresponding to the intersection of the D8 and D4 deletions, based on the peripheral targeting of the D10 deletion, the loss of peripheral targeting with the D4 deletion, and the peripheral targeting of the combined D5 and D10 deletions (Fig. 2, c and d). Only PTR3 is contained within the LAD (Fig. 2 e).

Competition between PCH and peripheral targeting

Association of inactive genes with PCH occurs in several cell types (Brown et al., 1997, 1999; Francastel et al., 2001). This includes association of the inactive β-globin locus with PCH in cycling human lymphocytes (Brown et al., 2001) and localization of the inactive β-globin locus to both centromeres and the nuclear periphery in mouse erythroleukemia cells (Francastel et al., 2001).

(d-f) Peripheral localization of HBB BAC identified by EGFP-LacI binding (green) in HBB C3 NIH 3T3 cell clone using DAPI staining (blue; d), lamin A immunostaining (red; e), or nuclear pore staining (red; f) to define the nuclear periphery. Arrowheads show the HBB transgenes. (g) Fraction of cells with peripheral localization in each NIH 3T3 subclone for HBB (black) or DHFR (green) BAC transgenes as compared with endogenous β-globin loci (HBB) in human BJ-hTERT (dark gray) or mouse NIH 3T3 (red) cells or α-globin loci (HBA) in human BJ-hTERT cells. HBB BAC (no LacO; yellow) refers to FISH measurements from a mixed population of stable NIH 3T3 clones with HBB BAC with just a selectable marker but no LacO repeat inserted. Random shows the fraction of DAPI staining within 0.5 µm from the periphery.
At least 50 cells from each BAC transgene NIH 3T3 cell clone and ≥45 cells for each endogenous gene FISH experiment were analyzed. Bars, 2 µm.

The 6.3-kb PTR1 had no effect on plasmid transgene positioning (Fig. 4 a), suggesting a position effect for PTR-targeting activity. Multicopy plasmid arrays are known to show strong transgene silencing. HBB transgene expression was very close to copy number-normalized expression levels for the endogenous mouse Hbb-b1 gene (mouse HBB; Fig. 3 e), homologous to the human HBB gene. An approximately threefold expression increase was observed for the selectable marker in the BAC targeted to the PCH versus the nuclear periphery (Fig. 3 d, kanamycin).

locus are required for targeting. (d) At least two functionally redundant regions within the D4 region are sufficient for peripheral targeting because D8, D9, D10, and double deletions D5D8 and D5D10 BACs display similar peripheral targeting. (e-g) Further sequence dissection reveals at least three PTRs sufficient for peripheral targeting. (e) Additional deletions relative to PTR locations (red bars) as shown for 80 kb of the 3′ end of the BAC. kan/neo, kanamycin/neomycin. (f) Nested set of deletions reveals 6.3-kb PTR1 by loss of peripheral targeting in D8D14 deletion and persistence of peripheral targeting in D8D13D22 triple deletion, PTR2 by persistence of peripheral targeting in D8D17 double deletion, and PTR3 by D8 deletion. (g) Summary of sequence dissection showing median peripheral targeting levels for five cell clones analyzed for each BAC shown. Distances between BAC transgenes and nuclear periphery were measured in ≥50 cells for each cell clone.

Figure 3. Competition between nuclear peripheral versus chromocenter targeting. (a) D4 and D8D14 BAC transgenes exhibit significantly higher percentages of chromocenter association compared with intact (HBB) or D5 deletion (HBBD5) β-globin BAC transgenes. (b) Representative cells from three independent cell clones showing association of HBBD4 transgenes (green) with chromocenters. Blue, DNA DAPI staining. Bars, 2 µm.
The presented images were collected in several different experiments and using different exposure times, as appropriate to each particular cell. (c) Stacked bar plots show a near constant sum of peripheral or chromocenter transgene targeting for multiple, independent cell clones carrying HBB, HBBD4, or HBBD5 transgenes or a mixed population of stably selected cell clones for HBB BAC transgenes not containing LacO repeats, suggesting competition between targeting to the nuclear periphery versus chromocenter. (a and c) Localization of BAC transgenes to either the nuclear periphery or chromocenter was measured in ≥50 cells for each cell clone. (d and e) Quantitative real-time PCR analysis of relative mRNA expression levels, normalized by copy number, for β-globin (HBB), olfactory receptor (OR), and selectable marker (kanamycin/neomycin [KN]) genes in cell clones containing either HBB BAC or HBBD4 BAC. Expression per gene copy was normalized relative to the expression in the cell clone containing HBB BAC (d) or the expression of the endogenous mouse HBB gene (e). Data show means ± SEM from three independent experiments.

H3K9me3 immunostaining corresponded closely to the shape and size of the GFP signal, suggesting an increased H3K9me3 modification over the BAC transgene array itself (1 Mbp in size for the approximately five BAC copies in the HBB-C3 clone). Approximately 80% of HBB transgenes showed "strong" H3K9me3 immunostaining independent of peripheral versus interior localization. All full-length HBB BAC and HBB BAC deletions targeting to the periphery showed elevated H3K9me3 immunostaining, whereas all HBB BAC deletions that did not target to the periphery showed significantly lower levels of H3K9me3 (Fig. 5). Inserting PTR1 at a new location within the HBBD4 BAC (D4-PTR1) restored peripheral localization and elevated H3K9me3 staining (Fig. 5 g).
Cointegration of HBB BAC transgenes with the active housekeeping DHFR BAC transgenes led to reduced H3K9me3 over the transgenes (Fig. 5 g).

H3K9me3 immunofluorescence and nuclear localization

To provide an independent, biochemical measure of the H3K9me3 modification, we used qPCR to quantitate H3K9me3 ChIP. Primer pairs were spaced every 5 kb over the HBB BAC insert except for PTR1, in which we used six primer pairs spaced over 6.3 kb (Fig. 6 a). Three biological replicates of ChIP over the HBB BAC transgene in cell clone HBB-C3 (approximately five BAC copies estimated by qPCR) showed consistently elevated levels over PTR1 at both 5′ and 3′ PTR1 ends, with two primer pairs (28 and 29) showing peak values over the entire BAC (Fig. 6 b). To normalize ChIP data between different experiments, we linearly mapped measurements onto a 0-1 scale: 0 corresponded to the percent input values for the GAPDH promoter negative control, whereas 1 corresponded to percent input values measured for the intracisternal A-particle (IAP) transposon positive control. The GAPDH promoter H3K9me3 ChIP modification level is among the lowest in the genome, whereas the IAP transposon positive control H3K9me3 ChIP modification level, higher than that observed over major and minor satellite and LINE-1 repeats (unpublished data), represents the high end for H3K9me3 genomic modification. Normalized ChIP values showed improved reproducibility across the HBB BAC transgene. Because PTR1 was oversampled relative to other HBB BAC sequences, we may have missed localized H3K9me3 peaks in PTR2 and PTR3. Elevated H3K9me3 levels were also seen over PTR1 inserted into the HBBD4 BAC with all PTRs removed (HBBD4 + PTR1; clone A3 containing approximately three BAC copies; Fig. 6 c). An observable increase in H3K9me3 immunostaining over BAC transgenes would require increased H3K9me3 levels over large regions of the HBB BAC.
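The 0-1 ChIP normalization described above is a linear (min-max style) mapping between the two controls. A minimal sketch, with purely illustrative percent-input numbers (the actual control values are not given in the text):

```python
def normalize_chip(percent_input, neg_ctrl, pos_ctrl):
    """Linearly map a percent-input ChIP value onto a 0-1 scale, where the
    negative control (GAPDH promoter) maps to 0 and the positive control
    (IAP transposon) maps to 1."""
    return (percent_input - neg_ctrl) / (pos_ctrl - neg_ctrl)

# Illustrative percent-input values; not measurements from the study.
gapdh, iap = 1.0, 5.0
print(normalize_chip(3.0, gapdh, iap))  # -> 0.5
```

Note that the mapping does not clip: a probe with more enrichment than the IAP control would map above 1, and one below the GAPDH control would map below 0, which is acceptable for comparing replicates on a common scale.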
To compare ChIP with immunofluorescence data, we estimated mean H3K9me3 levels (weighted by probe separation distances) over the HBB BAC. Mean ChIP values were nearly twofold higher over the intact HBB BAC than over the HBBD4 BAC with all PTRs deleted. Transgene silencing likely is related to the abnormally condensed chromatin conformation of these arrays (Bian and Belmont, 2010). Position effect modulation of PTR activity is also implied by the clonal variation in peripheral targeting of HBB BACs inserted at different chromosome sites. This position effect could arise either from a dominant targeting activity of endogenous sequences flanking the BAC transgenes or from a long-distance, antagonistic activity of these flanking sequences on the cis-targeting elements within the PTR. To produce an experimentally reproducible position effect, we cotransfected DHFR and HBB BAC transgenes and isolated clones in which these BACs cointegrated. DHFR BAC transgenes reconstitute an open large-scale chromatin conformation and confer position-independent, copy number-dependent expression of a reporter gene independent of the chromosome insertion site. Surprisingly, clones carrying cointegrated DHFR and HBB BAC transgenes showed preferential targeting not to the periphery, where HBB BACs localized, and not to the nuclear interior, where DHFR BACs localized, but instead to the PCH (Fig. 4, b and c). PCH association frequencies were similar to those observed for the HBBD4 BAC lacking all PTRs, suggesting that flanking DHFR BAC sequences inhibit PTR peripheral targeting activity. These results suggested an epigenetic component to PTR targeting. We next tested PTR1 within the context of BACs containing large, gene-free DNA regions. First, as a positive control, we showed that random insertion of PTR1 into HBBD4 by Tn5 transposition restored peripheral targeting (Fig. 4 d). Therefore, positioning of the β-globin locus by PTR1 does not require a fixed position of PTR1 within the HBB BAC.
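The "mean H3K9me3 levels (weighted by probe separation distances)" mentioned above can be computed by weighting each probe by the genomic interval it represents. This is a sketch of one reasonable weighting scheme (half the distance to each neighboring probe); the paper's exact scheme may differ:

```python
def weighted_mean(positions_kb, values):
    """Mean signal weighted by the genomic interval each probe covers:
    half the distance to each neighboring probe (edge probes get only
    half of their single flanking interval)."""
    n = len(positions_kb)
    weights = []
    for i in range(n):
        left = positions_kb[i] - positions_kb[i - 1] if i > 0 else 0.0
        right = positions_kb[i + 1] - positions_kb[i] if i < n - 1 else 0.0
        weights.append((left + right) / 2.0)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

With equally spaced probes this reduces to the ordinary mean, while densely sampled regions (such as the six primer pairs over the 6.3-kb PTR1) are down-weighted so that they do not dominate the BAC-wide average.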
We then inserted PTR1 via Tn5 transposition into BACs containing two different, 200-kb human sequences. CTD-2207K13 contains an insert from a large gene desert region, whereas RP11-2I1 contains an intergenic sequence from a gene-rich RIDGE (Goetze et al., 2007) region. Both "neutral" BACs with control transposons containing only the LacO repeat and selectable marker showed interior nuclear localization. Adding PTR1 to the CTD-2207K13 BAC produced no change in interior localization (Fig. 4 e). However, inserting PTR1 into the RP11-2I1 BAC retargeted these BAC transgenes from the nuclear interior to the PCH (Fig. 4 f). Thus, PTR1 confers peripheral targeting on the HBBD4 BAC, which would otherwise target to the PCH, but PCH targeting on the RP11-2I1 BAC, which would otherwise target to the nuclear interior; this strongly suggests an epigenetic mechanism for PTR targeting.
Targeting to the nuclear periphery correlates with BAC colocalization with H3K9me3 foci
Repressed genes are known to associate with both the nuclear periphery and the PCH. We therefore used immunofluorescence to examine colocalization of BAC transgenes with specific heterochromatin marks. HBB BACs did not obviously associate with H3K27me3 foci but did show elevated H3K9me3 immunostaining (Fig. 5, a-c). (e and f) Whereas insertion of PTR1 into BAC CTD-2207K13 changes neither peripheral nor chromocenter targeting (e), PTR1 insertion into BAC RP11-2I1 significantly increases chromocenter, but not peripheral, targeting. Localization of BAC transgenes to either the nuclear periphery or chromocenter was measured in ≥50 cells for each cell clone. We therefore made measurements over BAC transgenes not associated with chromocenters. Normalized immunostaining H3K9me3 levels were similar to normalized ChIP H3K9me3 levels (Fig. 6 f).
ChIP and immunofluorescence H3K9me3 values were significantly higher for the full-length HBB BAC compared with the HBBD4 BAC deleted of PTRs. No significant differences in mean ChIP or immunofluorescence values were observed between the full-length HBB BAC and the HBB BAC deleted of PTRs but with PTR1 added back (D4 + PTR1); adding PTR1 back to the HBBD4 BAC increased mean ChIP levels close to that of the full-length HBB BAC (Fig. 6, c and e). PTR1-containing transgenes that target to the nuclear interior show significantly lower PTR1 mean H3K9me3 levels compared with PTR1 levels within HBB or HBBD4 + PTR1 transgenes. To compare ChIP and immunofluorescence results, we normalized immunostaining intensity values using a similar two-point linear interpolation procedure. H3K9me3 immunostaining intensities over DAPI-stained regions devoid of obvious H3K9me3 foci were mapped to 0, whereas immunostaining values over chromocenters were mapped to 1. H3K9me3 immunostaining intensities directly over the GFP-LacI peak intensity were renormalized to this 0-1 scale. Given the resolution of light microscopy, we expected that spreading of the high H3K9me3 signal from the intensely stained chromocenters would artificially elevate intensity values over transgenes at the chromocenter versus BACs that target to the nuclear periphery (Fig. 6 g). PTR1 H3K9me3 levels in cointegrated HBB and DHFR BACs were similar to levels in full-length HBB BAC transgenes despite their different nuclear targeting (Fig. 6, d and g). However, over the non-PTR regions (primer pairs 2-18), mean H3K9me3 ChIP levels were significantly lower in the cointegrated DHFR/HBB BAC arrays than in peripherally targeted HBB or HBBD4 + PTR1 BAC arrays but similar to levels in the HBBD4 BAC that also targeted to the PCH (Fig. 6 h).
H3K9 methylation is required for peripheral and PCH HBB BAC targeting
Cells were infected with pooled lentiviruses expressing shRNA directed against both Suv39H1 and Suv39H2 and selected using puromycin. A significant fraction of cells surviving drug selection showed weaker and more diffuse H3K9me3 staining than control cells. In cell clone C3 containing the HBB BAC, infected cells were classified into three categories based on H3K9me3 staining levels (Fig. 7, a-c). A dose-dependent reduction of peripheral targeting was observed, with a drop from 61 to 23% peripheral association between the "normal" and "none" categories (Fig. 7 d). A similar dose-dependent reduction in PCH targeting was observed in cells (clone C40.10) carrying the HBBD4 BAC with all PTRs deleted (Fig. 7, e-h). We observed no change in peripheral localization of the endogenous β-globin locus in mouse NIH 3T3 cells (not depicted) or human Tig3ET fibroblasts (Fig. 8, f and g) after Suv39H1/H2 RNAi knockdown. Previously, we estimated compaction levels of 1-3 Mbp/µm for large-scale chromatin fibers (Tumbar et al., 1999; Hu et al., 2009), whereas our criterion for peripheral localization is a signal <0.5 µm from the nuclear periphery. We therefore hypothesized additional non-H3K9me3-dependent tethering mechanisms acting on DNA flanking the HBB locus, preventing displacement of HBB >0.5 µm from the periphery.
We identified two GA motif clusters within ±1 Mbp of the HBB gene (Fig. 8 a), using the same motif finder software as used previously (Zullo et al., 2012). The first, in an inter-LAD region 800 kb 5′ to HBB (within CTD-2547L21), shows interior localization (unpublished data). The second, 200 kb 3′ to HBB and in an adjacent 1-Mbp LAD (within RP11-715G8 BAC; Fig. 8 a), is peripherally located (not depicted). Neither cKrox siRNA single knockdown nor cKrox, Suv39H1, and Suv39H2 triple knockdown disrupted the peripheral localization of HBB or RP11-715G8 BAC probes (Fig. S4). Western blots showed knockdown of 95% for cKrox, whereas H3K9me3 knockdown was verified by immunostaining (Fig. S4).
H3K9me2 has been proposed to anchor LADs to the nuclear lamina and is enriched at the nuclear periphery (Kind et al., 2013); however, HBB BAC transgenes (C3 clone) colocalized with H3K9me3 but not H3K9me2 staining (unpublished data). H3K9me3 is also enriched at the nuclear periphery at comparable or even higher levels than H3K9me2 in human Tig3 and WI-38 and mouse NIH 3T3 fibroblasts (Fig. S5). Foci of strong, peripheral H3K9me3 or H3K9me2 immunostaining frequently appear anticorrelated, suggesting independent targeting. Based on knockout experiments and mass spectroscopy, G9a appears responsible for 50% of the total H3K9me2 signal (Peters et al., 2003). This H3K9me2 reduction is comparable to what we observed after G9a drug inhibition (Fig. S4) or shRNA knockdown (not depicted) and similar to that observed by others (Wu et al., 2005; Kind et al., 2013). H3K9me2 knockdown using G9a inhibitors BIX01294 or UNC0638 (not depicted) did not disrupt the peripheral localization of HBB or RP11-715G8 BAC probes (Fig. S4). However, simultaneous G9a inhibition (or shRNA knockdown; not depicted) and H3K9me3 knockdown by Suv39H1/H2 siRNA significantly reduced HBB peripheral localization (Fig. 8 g). Painting the 1-Mbp LAD region with four BAC probes, we simultaneously visualized both LAD and HBB regions (Fig. 8, a-e). In control and single knockdown cells, both the LAD and HBB regions in a given cell typically were located either at the periphery or the interior (Fig. 8, b, c, f, and g). After double knockdown of H3K9me3 and H3K9me2, the HBB region separated >0.5 µm from the periphery in 70% of cells.
(H3K9me2 [me2] knockdown) inhibition did not change the localization of HBB; however, double knockdown of H3K9me3 and H3K9me2 significantly reduces the preferential peripheral localization of HBB.
(e and i) The polarized orientation, with the flanking LAD attached peripherally at its distal end, suggests the existence of a third tethering mechanism, independent of Suv39H and G9a (Fig. 10 a). n > 100; data shown are pooled from at least three independent experiments. Bars, 2 µm. Insets are at 2×. Statistical significance: *, P < 0.05.
We next showed that this peripheral targeting was largely eliminated by Suv39H1/H2 knockdown. Because Suv39H1/H2 knockdown had no effect on the peripheral localization of the endogenous HBB locus, we hypothesized a second targeting mechanism acting on flanking sequences. Double knockdown of G9a and Suv39H1/H2 led to a significant displacement of the endogenous HBB locus away from the periphery. Subsequent FISH analysis suggested a model in which the several-hundred-kilobase HBB region is tethered to the periphery through a Suv39H1/H2, H3K9me3-dependent mechanism, most of an adjacent 1-Mbp LAD is tethered through a G9a, H3K9me2-dependent mechanism, whereas a third, unknown mechanism tethers the distal LAD region to the periphery (Fig. 10 a). These three tethering mechanisms prevent significant displacement of either the LAD or the HBB regions after single knockdown of either H3K9me2 or H3K9me3. Support for this model comes from the G9a-dependent peripheral targeting of a second BAC transgene containing sequence from this adjacent LAD.
Identification of PTRs and an epigenetic basis for peripheral targeting
Focusing on the HBB BAC, an unbiased deletion analysis identified three PTRs, each of which was sufficient to target the remaining 100-kb HBB region to the nuclear periphery. Nested deletions narrowed one of these, PTR1, to 6.3 kb. Through this deletion analysis, a tight correlation was demonstrated between peripheral targeting and increased H3K9me3 immunostaining over the BAC transgenes.
ChIP against H3K9me3 demonstrated elevated levels over PTR1 itself, plus a PTR-mediated general increase of H3K9me3 over the entire BAC transgene, consistent with our immunofluorescence results. Peripheral targeting activity was subject to position effects, and inhibition of peripheral targeting was again correlated with loss of H3K9me3, either as an elevated peak over PTR1 in plasmid transgenes or as reduced spreading of H3K9me3 over the HBB BAC transgene cointegrated with the DHFR BAC. In the Introduction, we outlined several models for peripheral targeting. Our results showing PCH targeting of the HBB BAC with all PTRs deleted and interior localization of two BACs containing intergenic regions contradict model 1, the default model (Fig. 8, f and i). A polarized orientation (peripheral tethering of the distal LAD region, separation of the remainder of the LAD from the periphery, and the HBB locus >0.5 µm from the periphery and more interior than the LAD) was observed in 15% of cells (Fig. 8 i). These results suggest three independent mechanisms for peripheral targeting near the HBB locus: a Suv39H1/H2-dependent mechanism operating over the HBB locus, a G9a-dependent mechanism operating over the proximal region of the adjacent LAD, and, likely, a third, uncharacterized mechanism anchoring the distal LAD region (see Fig. 10 a). To more clearly establish anchoring of LAD regions to the periphery through a G9a-dependent, Suv39H1/H2-independent mechanism, we visualized stably integrated, LacO-tagged RP11-715G8 BAC transgenes in a mixed clonal cell population. RP11-715G8 BAC transgenes were peripherally located in 50% of cells. Suv39H1/H2 shRNA had no effect on RP11-715G8 BAC peripheral localization, but BIX01294 G9a inhibition eliminated peripheral targeting to background levels (Fig. 9 c). In contrast, G9a inhibition had no effect on peripheral localization of HBB transgenes (C3 cell clone), but Suv39H1/H2 knockdown eliminated peripheral targeting to background levels (Fig. 9 b).
Two independent peripheral targeting mechanisms active in different but adjacent sequences
Large-scale chromatin folding complicates identification of cis- and trans-elements targeting chromosome regions to particular nuclear compartments. Active targeting via a single sequence will cause apparent targeting of 100s-1,000s of kb of adjacent chromosomal sequence, as visualized by conventional light microscopy. Conversely, additional targeting mechanisms distributed across this same 100s-1,000s of kb of adjacent chromosomal sequence will mask the contributions of individual targeting mechanisms. Here, we used autonomous targeting of randomly integrated BAC transgenes to overcome these problems. Using deletion analysis of a 207-kb HBB BAC, we correlated targeting to the nuclear periphery with elevated levels of H3K9me3 over these transgenes.
Figure 9. Independent peripheral targeting mechanisms for HBB versus LAD BAC transgenes. (a) Schematic of HBB (CTD-2643I7) and LAD (RP11-715G8) BACs aligned relative to flanking LAD sequence (labeling as in Fig. 8).
Although the molecular mechanism for this competition remains unproven, we propose a working model (Fig. 10, b-g) in which the quantitative levels of specific epigenetic marks, averaged over a sufficiently large chromatin domain, determine targeting of genomic loci to the nuclear periphery versus the PCH or nuclear interior. H3K9me3-marked chromatin is enriched near the nuclear periphery but also present throughout the nuclear interior. Therefore, we suggest that H3K9me3 is one of these marks and is necessary but not sufficient for targeting; however, we cannot rule out Suv39H1/H2-mediated methylation of a different substrate as the cause of targeting. PTR nucleation and/or the global spreading of these heterochromatin marks is modulated by the epigenetic state of flanking DNA sequences, with the ultimate targeting decision determined by the level of epigenetic modifications over the chromatin domain.
At the endogenous locus, it is likely that changes in the Suv39H1/H2-independent targeting activity of flanking regions would also have to occur to redirect the HBB locus away from the periphery to the PCH. Model 1 posits default peripheral targeting of transcriptionally inactive chromosome regions. Instead, our results strongly support model 3: binding of particular epigenetic marks to components of the nuclear periphery. In particular, our demonstration that the same PTR1 element, which retargets the HBBD4 BAC from the PCH to the periphery, instead targets the RP11-2I1 BAC from the interior to the PCH strongly supports this epigenetic model, while contradicting model 2, in which targeting occurs through the binding of specific DNA sequences, and/or the proteins binding to these sequences, to proteins at the nuclear periphery.
A competition between targeting to two different heterochromatin compartments
Our HBB BAC deletion analysis revealed an apparent competition between targeting to the PCH versus the nuclear periphery; this competition is intriguing in light of the known differential targeting of the endogenous locus to either the nuclear periphery or the PCH in different cell types. In particular, the sum of peripheral or PCH targeting for a specific cell clone was near constant, despite consistent differences in the ratio of peripheral versus PCH targeting between the full-length HBB BAC and different HBB BAC derivatives. The PTRs bias this competition.
Figure 10. Working model for HBB locus nuclear targeting.
(a) At least two independent peripheral targeting mechanisms act on adjacent sequences to anchor the HBB locus and surrounding sequences to the periphery: a Suv39H/H3K9me3 (me3)-dependent mechanism mediated by PTR sequences near the HBB locus (red), a G9a/H3K9me2 (me2)-dependent mechanism mediated by sequences in the left flanking LAD region (green; possibly also in the right flanking LAD, dotted green), and a likely third, uncharacterized mechanism acting to tether the distal LAD after combined Suv39H and G9a knockdown/inhibition (dotted blue). KD, knockdown. (b) Peripheral targeting regions (PTRs) induce epigenetic modifications, leading to inhibition of gene expression and, in a fraction of cells, association with the nuclear periphery. Peripheral association may in turn reinforce gene repression. (c) Nucleation of H3K9me3 by PTR and its propagation, presumably with other epigenetic marks, to flanking genomic regions. (d) Epigenetic modification continuum depicted as a white-to-black gradient, with targeting to the nuclear interior (I; white), chromocenter (C; gray), or periphery (P; black) dependent on position within this continuum. (e) Cis-elements establish epigenetic states characteristic for each BAC transgene, resulting in differential nuclear targeting. (f) Addition of PTR1 shifts this continuum toward the black, altering nuclear targeting. (g) Long-range influence of cis-elements within the DHFR BAC transgene shifts the epigenetic state of cointegrated HBB BAC transgenes from black (peripheral) to gray (chromocenter). (h) Reducing H3K9me3 by Suv39H KD shifts the epigenetic state toward white.
PTR action may therefore reveal how specific sequences establish distinct epigenetic states over large chromatin domains. Such dissection should allow us to distinguish the functional consequences of differential nuclear targeting per se from the functional consequences of establishing distinct epigenetic states.
p ] was used to generate a Tn5 transposon carrying the 256-mer LacO direct repeat, the 6.3-kb PTR1, and a kanamycin/neomycin-selectable marker, which was transposed into BACs CTD-2207K13 and RP11-2I1 to generate 2I1 HBB6kb ] was used to generate a Tn5 transposon carrying the 256-mer LacO repeat and a kanamycin/neomycin-selectable marker, which was transposed into BACs CTD-2207K13 and RP11-2I1 to generate 2I1-LacO-C1 and 2207-LacO-C2, respectively. p[Zeo-HBB6kb] was used to generate a Tn5 transposon carrying the 6.3-kb PTR1 and a Zeocin-selectable marker and transposed into the HBBD4 BAC to generate HBBD4-6kb-C5, which has the transposon inserted 28 kb downstream of HBE1.
BAC deletions using BAC recombineering
λ red-mediated BAC recombineering using a galK-based dual-selection scheme was used to delete specific regions from β-globin BAC CTD-2643I7-K/NPSI8.32-C4. Sequential rounds of recombination-mediated deletion with galK insertion followed by galK removal using standard recombineering protocols (Warming et al., 2005) allowed generation of BAC derivatives with more than one deleted region. CTD-2643I7-K/NPSI8.32-C4 was transformed into Escherichia coli strain SW102, in which the λ red recombination machinery is induced by shifting temperature from 32 to 42°C (Warming et al., 2005). Recombination DNA fragments with homology ends were prepared by PCR using primers (Table S1) with 43-bp homology sequences plus 17-bp sequences (forward, 5′-CGACGGCCAGTGAATTG-3′; reverse, 5′-TGCTTCCGGCTCGTATG-3′) for amplifying the galK-selectable marker from plasmid pGalK. After galK insertion, recombinants were selected at 32°C on minimal medium in which galactose was supplied as the only carbon source. Recombinants were screened by PCR using 20-bp primers outside of the target regions. Subsequent removal of galK used DNA fragments generated by PCR using partially overlapping 60-bp primers (GkRm forward and reverse; Table S1).
Each of the 60-bp primers consisted of a 52-bp homology region flanking the galK marker and an 8-bp sequence complementary to the last 8 bp of the opposite homology region. Negative selection used minimal medium containing 2-deoxy-galactose.
Relationship to other studies
Using a similar autonomous BAC-targeting approach, peripheral targeting of the transcriptionally inactive IgH and Cyp3a multigene loci recently was shown to involve binding of the GAGA factor cKrox to GA motif clusters (Zullo et al., 2012). It is unclear how prevalent these GA motif clusters are within the genome. None are present in the HBB BAC, and only two are located within the 2 Mbp surrounding the HBB endogenous locus. Neither significantly contributes to peripheral targeting of this chromosome region. In contrast, recent findings in C. elegans identified two H3K9 histone methyltransferases (HMTs) as required for anchoring multicopy gene arrays. RNAi knockdown of both HMTs resulted in reduced lamin interactions of chromosome arm regions and of selected gene loci enriched in H3K9me3 in wild-type embryos (Towbin et al., 2012). The functional redundancy of these two HMTs, only one of which is able to trimethylate H3K9, suggested a single peripheral targeting mechanism in C. elegans embryos. G9a is a mammalian HMT responsible for roughly 50% of total nuclear H3K9me2 (Peters et al., 2003) and most of the H3K9me2 enriched at the nuclear periphery. G9a knockdown reduces LAD targeting to the periphery approximately twofold based on biochemical assays (Kind et al., 2013). However, G9a knockdown failed to change the peripheral localization of late-replicating, G9a-regulated genes based on cytological assays (Wu et al., 2005). Our results blend aspects of several of these studies: we demonstrate dependence of peripheral targeting on G9a-induced H3K9me2 and Suv39H1/H2-induced H3K9me3, but these pathways act on different genomic regions separated by hundreds of kilobases.
Apparent functional redundancy is suggested by cytological assays as a result of the tethering activity of neighboring chromosome regions, but BAC transgenes allow a clear separation of the H3K9me2- and H3K9me3-related pathways.
Future directions
Our results reveal two independent pathways for peripheral targeting, each of which can be recapitulated by BAC transgenes that autonomously target to the periphery through a dependence on a single pathway. Focusing on the HBB BAC, we demonstrate the feasibility of using BAC transgenes to identify cis-elements that confer peripheral targeting and ultimately dissect the molecular mechanisms involved. A current paradox in considering the functional consequences of targeting gene loci to "repressive" nuclear compartments is that only a fraction of alleles, for instance 50% for the β-globin locus, show this targeting. Importantly, the fraction of cells with elevated H3K9me3 over the HBB BAC transgenes was significantly higher than the percentage of peripherally located transgenes. These PTRs from the HBB BAC may better be described as elements that confer a particular heterochromatin state that includes spreading of H3K9me3 over a large domain in most cells, resulting in targeting to the PCH and peripheral nuclear compartments in a fraction of these cells. Future work should dissect the molecular mechanisms underlying PTR action.
Cells were immunostained using a dilution of anti-lamin A or a 1:500 dilution of anti-H3K9me3 primary antibody. After secondary antibody staining, cells were postfixed in 3% paraformaldehyde in CMF-PBS for 10 min at RT and washed in 0.1 M HCl/0.7% Triton X-100 (Thermo Fisher Scientific) in 2× SSC for 10 min on ice. DNA FISH was performed as described previously (Hu et al., 2009). In brief, paraformaldehyde-fixed cells were permeabilized in 0.5% Triton X-100. Cells were subjected to four freeze-thaw cycles before storing in 50% formamide/2× SSC.
FISH probes were prepared using nick translation of BAC DNA (BioNick Labeling System; Invitrogen) with biotin- and/or digoxigenin-labeled nucleotides. Probes and cells were codenatured on a 75°C heat block for 2 min, followed by overnight hybridization at 37°C and washes in 0.4× SSC at 70°C for 2 min and in 2× SSC. For detection, we used Streptavidin-Alexa Fluor 594 (Invitrogen) and/or anti-digoxigenin-fluorescein antibodies (Roche). Depletion of human cKrox, Suv39H1, and Suv39H2 was performed using siGENOME siRNA SMARTpools (Thermo Fisher Scientific; Table S3). Tig3ET cells were transfected with siRNA using Lipofectamine (Invitrogen) following the manufacturer's protocol and were cultured for 48 h before being used for analysis. For G9a inhibition, Tig3ET or NIH 3T3 cells were treated with BIX01294 (Kubicek et al., 2007) or UNC0638 (Vedadi et al., 2011; both obtained from Sigma-Aldrich) for 2 d, at a final concentration of 1 µM or 500 nM, respectively. Knockdown experiments were performed in at least two biological replicates, and a pooled result is displayed.
Microscopy, image analysis, and statistical analysis
A personal deconvolution microscope system (DeltaVision; Applied Precision) equipped with a charge-coupled device camera (CoolSNAP HQ2; Photometrics) was used with a 60×, 1.4 NA lens for data collection. Deconvolution used an enhanced ratio, iterative constrained algorithm (Agard et al., 1989) in the softWoRx software (Applied Precision). 2D distances between GFP-LacI spots and the nuclear edge defined by DAPI staining were measured from the optical section in which the GFP-LacI spot was in focus using ImageJ (National Institutes of Health) software. Image segmentation of the DAPI-stained nucleus was then performed using the Otsu thresholding (16-bit) and k-means clustering plugins with the following parameters: number = 2, cluster = 0.00010000, and randomization = 48.
For BAC transgenes consisting of multiple GFP-LacI spots, measurements were taken from the spot closest to the nuclear edge. P-values were calculated from two-tailed, two-sample unequal variance t tests. Photoshop and Illustrator programs (Adobe) were used to assemble microscopy images, with bicubic interpolation used to rotate or enlarge images.
RT-qPCR
Total RNA was extracted from NIH 3T3 cells using the RNeasy Mini kit (QIAGEN), with on-column DNase I digestion (New England Biolabs, Inc.) according to the manufacturer's instructions. cDNA was synthesized from 1 µg total RNA with the qScript Flex cDNA kit (Quanta BioSciences). Quantitative real-time PCR was performed on a StepOnePlus instrument (Applied Biosystems) using a 2× SYBR green mix. Real-time PCR reactions were performed in triplicate. β-Actin was used as a reference to obtain the relative fold change for target samples using the comparative cycle threshold (2^-ΔΔCt) method.
ChIP
ChIP was performed on NIH 3T3 cell clones containing specific HBB BAC or plasmid transgenes. 10 million cells were cross-linked.
Deletion of galK in recombinants was verified using CHECK forward and reverse primers (Table S1). Integrity of BAC constructs and the LacO repeat length were verified by restriction fingerprinting using an AvaI and HindIII double digest.
Sequence analysis
Searching for GA motifs, as previously described for LASs (Zullo et al., 2012), in PTRs, the HBB locus, and DNA flanking the HBB locus was performed using the MEME (Multiple EM for Motif Elicitation) software package (Bailey and Elkan, 1994).
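The comparative cycle threshold calculation above (with β-actin as the reference gene) can be written out explicitly. A minimal sketch; variable names are illustrative:

```python
def relative_fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative cycle threshold (2**-ddCt) method:
    dCt = Ct(target gene) - Ct(reference gene, e.g., beta-actin),
    computed for the sample and for the calibrator; the relative fold
    change is 2 raised to minus the difference of the two dCt values."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# A target crossing threshold two cycles earlier (relative to the
# calibrator) corresponds to fourfold higher expression
print(relative_fold_change(24.0, 18.0, 26.0, 18.0))
```

Because each PCR cycle ideally doubles the product, differences in Ct translate exponentially into fold changes, which is why the method assumes comparable amplification efficiencies for target and reference.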
“GGE BIPLOT” ASSESSMENT OF PHENOTYPIC STABILITY OF SPRING BARLEY VARIETIES
The article presents GGE biplot analysis of the results of environmental trials of 17 spring barley varieties bred at the Plant Production Institute named after V.Ya. Yuriev of NAAS. The study results discriminate genotypes with stable realization of their genetic potential across a number of environments as well as genotypes combining a high trait level with its stable expression. The varieties Kozvan, Perl, Agrariy, and Kosar were chosen as valuable source material for spring barley breeding. We think that GGE biplot can be a comprehensive alternative to the most common conventional methods for assessing adaptive features in genotypes.
Introduction. Barley (Hordeum vulgare L.) is a strategic, export-oriented agricultural crop in Ukraine. An increase in the gross output of barley grain is impossible without the introduction of high-yielding barley varieties that are resistant to biotic and abiotic factors.
Analysis of publications and problem statement. Environmental variety trials are an important tool for selecting genotypes with specific (narrow) or wide adaptation to a certain environment or to a range of environments, which enables prediction of genotype yield capacity under these conditions and ultimately increases farmers' efficiency [1,2]. Nevertheless, the capabilities of environmental trials are not always used to the full: usually only the yield capacity of genotypes is analyzed, while information on other traits remains unstudied [3].
The observed phenotypic variance (P) of traits consists of environment variance (E), genotype variance (G), and genotype × environment interaction (GE): P = G + GE + E, or P - E = G + GE [4]. W. Yan [5] points out that the E effect forms the major part of total phenotypic variability, whereas the contributions of G and GE are generally small. However, G and GE effects must be taken into account in the process of selecting high-yielding genotypes.
The term "GGE" emphasizes the understanding that G and GE are the two sources of variation that are pertinent to genotype assessment and must be considered simultaneously when genotype × environment interactions are investigated. Over time, GGE biplot analysis has developed into a comprehensive analysis system in which the majority of environmental trial data patterns can be displayed graphically [6][7][8][9].
The aim and tasks of the study. The purpose of the study was to evaluate the adaptive features of spring barley varieties in terms of performance and its elements using GGE biplot and to discriminate valuable source material for breeding of this crop.
Material and methods. The source material was 17 varieties of spring barley bred at the Plant Production Institute named after V. Ya. Yuriev of NAAS. To determine their adaptive potential, environmental trials were conducted in 2013 in three locations with different soil-climatic conditions: the Plant Production Institute named after V. Ya. Yuriev of NAAS (Eastern Forest-Steppe; environment E1), the Donetsk Experiment Station of NAAS (Southern Steppe; environment E2), and the Research Station of Bast Crops of the Institute of Agriculture of the North East of NAAS (North-Eastern Forest-Steppe; environment E3). In addition to yield capacity, the variability of performance elements was evaluated: grain weight per plant, productive tillering, grain number per spike, and 1000-grain weight. The environmental trial data were analyzed by GGE biplot.
GGE biplot graphs were constructed using the first two principal components, PC1 and PC2, derived from subjecting the data to singular-value decomposition. Only two principal components (PC1 and PC2) are retained in the model because such a model tends to be the best for visualizing the interaction between each genotype and the test environments.
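The construction just described (centering the genotype × environment table by environment, taking the singular-value decomposition, and retaining PC1 and PC2) can be sketched as follows. The symmetric split of the singular values between genotype and environment scores is one common convention, not necessarily the exact one used here:

```python
import numpy as np

def gge_scores(y):
    """GGE biplot scores from a genotype x environment matrix `y`:
    centering each environment (column) removes the environment main
    effect E, so the SVD of the centered table decomposes G + GE.
    The first two principal components give the biplot coordinates."""
    centered = y - y.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    k = 2  # retain PC1 and PC2
    genotype_scores = u[:, :k] * np.sqrt(s[:k])  # one marker per genotype
    env_scores = vt[:k].T * np.sqrt(s[:k])       # one marker per environment
    return genotype_scores, env_scores
```

Plotting `genotype_scores` and `env_scores` in the same PC1-PC2 plane yields the biplot; the inner product of a genotype marker and an environment marker approximates that genotype's centered value in that environment.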
Results and discussion. The results of the environmental trials showed a significant differentiation of the studied varieties in terms of plant performance and its elements (Table 1). Analysis of variance demonstrated strong significant differences between the genotypes, environments and their interactions for all the estimated traits, as well as differences in the influence of these factors on the formation of trait levels (Table 2). Environment (E) was the dominant factor in the variances of productive tillering and grain weight per plant (50.% and 49.7 %, respectively), but this factor is considered of no importance in genotype assessment, which allows focusing on the investigation of the genotype (G) and genotype × environment interaction (GE) effects [8,10]. Environmental variety trial results are always a large conglomeration of data, which is difficult to analyze without visualization. GGE biplot is an ideal tool to solve this problem, enabling the identification of genotypes realizing their potential in specific soil and climatic conditions or genotypes with wide adaptation to a variety of test environments. In Fig.
1 the polygon vertices are the genotype markers that are maximally remote from the biplot center, so all the genotype markers lie inside the polygon. The lines dividing the biplot into sectors represent a set of hypothetical environments. The genotype forming the polygon angle of a given sector has the highest yield capacity in the environments falling within that sector. Thus, the Kozvan variety (G12) had the maximum productive tillering in all three environments, suggesting its wide adaptation by this trait. The Modern variety (G14) was the winner by grain number per spike in environment E3, and Vitrazh (G6) in environments E1 and E2. Vektor (G3) in environments E1 and E2 and Perl (G16) in environment E3 were noticeable for 1000-grain weight. In environment E3 the Agrariy variety (G1) had the highest performance, while in environments E1 and E2 the Kozvan and Vitrazh varieties, which were similar in their parameters, showed the highest performance. GGE biplot ranks genotypes by their performance and stability across a number of environments. In Fig. 2 the average tester coordinate (ATC) X-axis, or performance line, passes through the biplot origin with an arrow indicating the positive end of the axis. The ATC Y-axis (stability axis) passes through the biplot origin and is perpendicular to the ATC X-axis. Thus, the mean value of a trait for a genotype is estimated by the projection of its marker onto the ATC X-axis, and its stability by the projection onto the ATC Y-axis. The Vektor (G3) and Perl (G16) varieties were distinguished for 1000-grain weight; Vektor (G3) and Parnas (G15) were the most stable. Genotypes selected by trait level and stability are valuable as source material for breeding.
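The ATC projections can be computed directly from the biplot scores: the X-axis points from the origin toward the average environment, and each genotype's mean performance and stability are its projections onto that axis and its perpendicular. A minimal sketch, with invented scores standing in for the real PC1/PC2 values:

```python
import numpy as np

# Hypothetical 2-D biplot scores (PC1, PC2); illustrative values only.
genotype_scores = np.array([[1.2, 0.3], [-0.4, 0.8], [0.9, -0.6]])
environment_scores = np.array([[0.8, 0.2], [0.7, -0.1], [0.9, 0.3]])

# The ATC X-axis points from the origin toward the "average environment".
avg_env = environment_scores.mean(axis=0)
atc_x = avg_env / np.linalg.norm(avg_env)
atc_y = np.array([-atc_x[1], atc_x[0]])         # perpendicular stability axis

mean_performance = genotype_scores @ atc_x      # projection on the ATC X-axis
instability = np.abs(genotype_scores @ atc_y)   # distance from the ATC X-axis

# Rank genotypes: higher mean_performance and lower instability are better.
ranking = np.argsort(-mean_performance)
```

A genotype projecting far along `atc_x` with a small `instability` value combines a high trait level with stable expression, which is exactly the selection criterion used above.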
In the methods of A.V. Kilchevskyy, L.V. Khotyleva and V.V. Khangildin there is a very important integral parameter, "breeding value of genotype", which provides a comprehensive assessment of genotypes in terms of yield capacity and its stability. GGE biplot also ranks genotypes by "breeding value". The center of the concentric circles (Fig. 3) represents the position of a genotype with maximum "breeding value", the so-called "ideal" genotype. The closer a genotype is to the ideal one, the more valuable it is. In our studies, the Kozvan variety (G12) was of the greatest breeding value in terms of productive tillering; the Kosar variety (G13), in terms of grain number; the awnless Vektor variety (G3), in terms of 1000-grain weight; and the Perl variety (G16), in terms of performance, because it was much more stable than the Kozvan variety, which exceeded Perl in performance (see Fig. 3). The results of the GGE biplot analysis of the adaptive features of spring barley varieties correlate very closely with the results that we obtained by the method of A.V. Kilchevskyy and L.V. Khotyleva [11,12], but GGE biplot has a number of advantages over the latter; in particular, it does not require heavy calculations. Conclusions. The use of GGE biplot made it possible to analyze the environmental trial data and identify the most valuable genotypes. Among the studied varieties of spring barley, the Kozvan variety was the most valuable by productive tillering, the Kosar variety by grain number, the Vektor variety by 1000-grain weight, and the Perl and Kozvan varieties by performance. Thus, GGE biplot can be a comprehensive alternative to the most conventional methods of assessing adaptive features in genotypes. Fig. 3. GGE biplot based on genotype-centered scaling for comparison of genotypes with the "ideal" genotype by productive tillering (A), grain number per spike (B), 1000-grain weight (C), plant performance (D).
Diagnostic Challenge of Small Bowel Neuroendocrine Tumor in a Young Female Patient Neuroendocrine tumors (NETs) are rare cancers arising from neuroendocrine cells and are characterized by their ability to secrete functional hormones causing distinctive hormonal syndromes. The incidence of NET has increased over the years, and small bowel neuroendocrine tumor (SBNET) is one of the most challenging to detect due to its varied presentation and poor accessibility with traditional endoscopic methods. Patients with SBNET present with variable hormonal symptoms, such as diarrhea, flushing, and nonspecific abdominal pain, which often delay the diagnosis. We present the case of a young patient who underwent multidisciplinary workups leading to a successful diagnosis of SBNET promptly. The patient was a 31-year-old female who presented to the emergency department with complaints of nausea, vomiting, and sudden-onset, severe, sharp abdominal pain. CT scan of her abdomen showed an area of irregular intraluminal soft tissue density suspicious for a mass in the mid-small bowel. The patient’s initial enteroscopy was normal. A video capsule endoscopy showed a small bowel mass, which was consistent with SBNET confirmed by pathology later. This case emphasizes the importance of considering SBNET as a differential diagnosis in young patients with nonspecific symptoms of abdominal pain and highlights the role of multidisciplinary approaches in achieving prompt diagnosis and treatment. Introduction Neuroendocrine tumors (NETs) arise from neuroendocrine cells and are characterized by their ability to secrete functional hormones throughout the body causing distinctive hormonal syndromes or nonspecific abdominal pain which delays the diagnosis. Small bowel neuroendocrine tumor (SBNET) is challenging to detect due to its extremely low incidence, various clinical presentation, and poor accessibility of the distal small bowel with the traditional endoscopic method [1]. 
Here, we share our diagnostic challenge in a case of a young patient who underwent multidisciplinary workups leading to a successful diagnosis of SBNET promptly. Case Presentation A 31-year-old female presented to the emergency department with complaints of nausea, vomiting, and abdominal pain for one day. The abdominal pain was sudden in onset, sharp in character, severe in intensity, localized to the periumbilical region, radiating to the back, not related to eating, and without any specific relieving factors. It was associated with one episode of nonbilious nonbloody vomiting. She denied diarrhea, constipation, weight loss, hematochezia, and melena. Her past medical history was significant for left internal jugular vein thrombus, left transverse and sigmoid sinus thrombosis, and subdural hematoma. She had a history of abdominoplasty 12 years ago. She denied any family history of gastrointestinal malignancies. She denied smoking cigarettes or using any recreational drugs. She drank alcohol socially. She had been taking apixaban for left internal jugular vein thrombosis. At the time of presentation, she was found to have a blood pressure of 133/75 mmHg, a heart rate of 85 beats per minute, a temperature of 36.7°C, a respiratory rate of 14 breaths per minute, and oxygen saturation of 99% at room air. She had bilateral vesicular breathing and normal heart sounds. She had scars on the abdomen from her previous surgery, mild abdominal tenderness in the periumbilical region, and normoactive bowel sounds. The patient's initial laboratory findings are summarized in Table 1. She underwent CT of the abdomen and pelvis with contrast which showed an area of irregular intraluminal soft tissue density suspicious for a mass in the mid-small bowel with adjacent multiple enlarged lymph nodes and a smaller 7 mm stellate-like lesion within the mesentery. 
The patient's carcinoembryonic antigen (CEA) was 0.6 ng/mL (reference range = ≤5.0 ng/mL), cancer antigen 19-9 (CA 19-9) was 2.7 U/mL (reference range = 0.0-37.0 U/mL), and cancer antigen 125 (CA-125) was 7.5 U/mL (reference range = ≤35.0 U/mL). She underwent an enteroscopy which showed no gross lesion in the esophagus, stomach, duodenum, or proximal and mid-jejunum (Figure 1). In light of the abnormal CT findings but normal enteroscopy, she was planned for a video capsule endoscopy. Video capsule endoscopy showed several polyps throughout the small bowel. It also showed a medium-sized mass without bleeding in the distal ileum of the small bowel, seen three hours and 17 minutes after capsule ingestion (Figure 2). FIGURE 2: Video capsule endoscopy showing (A) a polyp in the distal duodenum, (B) a polyp at the mid-jejunum, (C) a mass lesion in the distal ileum, and (D) a mass lesion in the distal ileum. She underwent diagnostic laparoscopy. About 160 cm proximal to the terminal ileum, a puckered firm small bowel mass was seen with thickened adjacent mesentery. A mini-laparotomy was done, which showed a 2 cm mass in the small intestine and a 3 cm mass in the associated mesentery. The mass was resected with 10 cm proximal and distal margins to include the associated mesentery. A side-to-side isoperistaltic primary small bowel anastomosis was done. The small bowel mass with the associated mesentery was sent to pathology. Pathology showed an invasive, well-differentiated, Grade 1 neuroendocrine neoplasm invading through the muscularis propria into the subserosal tissue without penetration of the overlying serosa. All surgical margins were free of the neoplasm. Metastatic neoplasm was present in four out of eight mesenteric lymph nodes. The neoplasm was positive for chromogranin, synaptophysin, CD56, and CDX2, with a Ki-67 index of approximately 1%. It was negative for CK7, CK20, PAX8, and TTF-1.
Her serum chromogranin A was 213 (normal <311), serum serotonin was 257 ng/mL (reference range = 56-244 ng/mL), and 24-hour urine 5-hydroxyindoleacetic acid (5-HIAA) was 2.6 mg/24 hours (reference range = ≤6.0 mg/24 hours). The patient was discharged with outpatient follow-up with oncology, surgery, and gastroenterology. Discussion In 1907, Oberndorfer first described these tumors as carcinoids [1]. The primary site of NET can be the lung, appendix, cecum, colon, liver, pancreas, rectum, small intestine, or stomach. According to the National Surveillance, Epidemiology, and End Results (SEER) program registry, the incidence of NET increased 6.4-fold from 1973 to 2012, from 1.09 per 100,000 to 6.98 per 100,000. The annual incidence of SBNET in the United States was 1.05 per 100,000 persons in 2012 [2]. NETs account for less than 2% of gastrointestinal (GI) cancers [3]. It is uncertain but assumed that the increasing incidence of NET is probably related to the growing use of imaging and endoscopy and improved recognition by physicians. SBNET refers to NET arising anatomically in the small bowel, from the ligament of Treitz to the ileocecal valve [4]. NET usually occurs in the sixth decade of life, with a median age at diagnosis of 63 years. Of a total of 35,618 patients with NET in the SEER database, 52% were women and 48% were men. Males are more likely to have NETs in the jejunum and ileum. In addition, jejunal and ileal SBNETs are more frequently reported in white patients (17%) and African American patients (15%), significantly higher than the occurrence in Asian/Pacific Islander and American Indian/Alaskan Native patients (p < 0.001) [1]. In contrast, rectal NETs are more prevalent in Asian/Pacific Islander patients (41%), American Indian/Alaskan Native patients (32%), and African American patients (26%) compared to white patients (12%) (p < 0.001) [1].
The carcinoid syndrome, described as flushing, diarrhea, valvular heart disease, and bronchospasm, is caused by the excess secretion of neuroendocrine hormones. However, without hepatic metastasis of NET, most confined locoregional SBNETs do not present with this typical carcinoid syndrome because the excess hormones are metabolized and inactivated by the liver [4,5]. Most patients are asymptomatic for a long period of time or present with nonspecific symptoms of abdominal pain attributed to irritable bowel, allergy, stress, or food [6]. These variable and nonspecific symptoms of SBNET delay the diagnosis. The median time from symptom onset to diagnosis can vary from 4.3 months to 9.2 years [3,7]. SBNETs are often advanced, with local nodal metastasis at the time of diagnosis. The local invasion often causes fibrosis of the mesentery, resulting in bowel obstruction, ischemia, perforation, or bleeding requiring emergent surgery [4]. Diagnostic modality The diagnostic process for SBNET should be individualized by the patient's presentation. For example, patients presenting with carcinoid symptoms of flushing and diarrhea may benefit from prompt testing of the biochemical markers for SBNET [5]. Neuroendocrine cells produce hormones or amines that can be used as screening serum biochemical markers, such as chromogranin A (CgA), CgB, bradykinin, substance P, neurotensin, human chorionic gonadotropin, neuropeptide K, and neuropeptide PP [6]. Among these, CgA is the best overall screening biomarker because it is secreted by a wide variety of NETs, regardless of primary site, including nonfunctional tumors. Moreover, CgA is a sensitive and specific marker for NETs and correlates with both tumor volume and prognosis [4,6]. According to a recent meta-analysis of 13 studies, CgA has a sensitivity of 73% and a specificity of 95% for the diagnosis of NET [8]. Serotonin-secreting NETs can be diagnosed using the biomarker 5-HIAA, a serotonin breakdown product.
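Sensitivity and specificity figures like those quoted for CgA translate into predictive values only once a pretest probability is assumed. A short sketch using Bayes' rule; the 1% pretest prevalence below is an illustrative assumption, not a figure from the cited sources:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive/negative predictive value from test characteristics."""
    tp = sensitivity * prevalence            # true positives per person tested
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence      # false negatives
    tn = specificity * (1 - prevalence)      # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# CgA figures from the cited meta-analysis (73% sensitivity, 95% specificity)
# with an assumed 1% pretest probability of NET.
ppv, npv = predictive_values(0.73, 0.95, 0.01)
```

With a rare disease, even a 95%-specific marker yields a modest positive predictive value, which is why CgA is framed here as a screening biomarker that still requires imaging and pathologic confirmation.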
Compared to the serum serotonin level, which varies throughout the day with activity and stress levels, urinary 5-HIAA is more useful and reliable, with an 88% specificity [3]. Imaging such as CT may be used as the first step in the investigation of patients with obstructive symptoms, or the tumor may be discovered after emergent surgery [5]. Both anatomical and functional imaging can be used for the diagnosis of SBNET. Anatomical imaging includes CT and MRI, whereas functional imaging includes the octreotide scan and positron emission tomography (PET). Anatomic imaging is useful for localizing tumors, measuring tumor burden, and evaluating the option of surgical resection, whereas functional imaging has a higher sensitivity and helps find occult metastases and monitor for recurrence [5]. CT is the most common and widely available imaging modality. It provides a wide anatomical view of the chest, abdomen, and pelvis, including the vasculature and lymph nodes, but the extent of liver metastasis is frequently underestimated compared to MRI [3,5]. CT has a high sensitivity of 83% and a specificity of 76% in diagnosing NET [3]. MRI has several benefits compared to CT. MRI has 93% sensitivity and 86% specificity for detecting NET. Additionally, MRI avoids ionizing radiation exposure and improves the detection of liver metastases [3]. According to a study comparing CT and MRI, the sensitivity for detecting liver metastases was found to be 78.5% and 95.2%, respectively [5]. In patients who may need liver resection, this can make a significant difference. In MRI, tumors can be made more visible and assessable using a specific contrast agent (gadoxetic acid or gadopentetic acid-based gadolinium) [3]. Functional imaging techniques exploit the fact that most NETs express somatostatin receptors (SSTRs). The somatostatin analogs, mostly pentetreotide or lanreotide, are attached to indium-111, a photon-emitting isotope imaged with single-photon emission tomography (111In-octreotide, OctreoScan).
This indium-attached somatostatin analog, a radiolabeled SSA, binds to SSTR2, 3, and 5 expressed on NETs and thereby localizes the tumor [3,5]. It can provide functional information about the tumor. The overall sensitivity of 111In-octreotide is 52-78% and the specificity is 98% for the detection of NET [3,6]. Octreotide scanning can also be used as a follow-up modality to assess response to octreotide treatment. However, it is of limited value if the tumor does not express SSTR and has no affinity for the tracer [3]. In the last decades, gallium-68-based imaging has been favored over 111In-SRS as the functional imaging of choice for SBNETs, as the sensitivity of 111In-octreotide is lower for primary SBNET [5]. Several 68Ga-labeled ligands are available, namely, 68Ga-DOTATATE, 68Ga-DOTATOC, and 68Ga-DOTANOC, each with different affinities for the SSTR subtypes. By several criteria, 68Ga-PET is superior to 111In-SRS imaging, including reduced radiation exposure, faster acquisition time, higher spatial resolution, and accuracy. According to meta-analyses, 68Ga-PET has a mean sensitivity of 88-93% for the detection of NET, and 68Ga-DOTATOC and 68Ga-DOTANOC PET report 92-93% specificity [3-5]. Endoscopic ultrasonography (EUS) and standard axial endoscopy are essential for the diagnosis and treatment of gastro-entero-pancreatic NETs. Upper and lower GI endoscopies (standard axial endoscopy) are crucial for the detection, biopsy, and therapeutic resection of GI NETs (stomach, duodenum, rectum, colon). EUS is the diagnostic gold standard for pancreatic NET [9]. However, due to poor accessibility of the distal small intestine, standard axial endoscopy and EUS often fail to detect small intestine lesions and delay the diagnosis of SBNET. Capsule endoscopy and double balloon enteroscopy enable direct visualization of the entire small bowel, improving the diagnostic yield for SBNET [10]. There is scant data on the efficacy and safety of these approaches.
Due to the rarity of SBNET, the data are usually based on small retrospective studies, and limited data on sensitivity and specificity are currently available; instead, diagnostic yield is often reported [9,10]. Capsule endoscopy is a patient-friendly and minimally invasive method for visualizing the entire small bowel. It has a reported diagnostic yield of 45-72% in SBNET [10]. A study compared capsule endoscopy with other diagnostic imaging, CT and MRI. Capsule endoscopy showed a higher diagnostic yield in detecting small lesions (p < 0.001), concluding that capsule endoscopy is a superior diagnostic technique to CT and MRI [11]. The main limitation of capsule endoscopy is its inability to collect biopsy samples or perform a therapeutic procedure. Capsule endoscopy can also miss submucosal lesions in the small bowel. Capsule retention is considered the major complication, occurring in 1.5-2.6% of cases [10]. Double balloon enteroscopy shows a variable diagnostic yield ranging from 30% to 80%, but a false-positive rate of 17% has been reported [9]. Due to the lower rate of false positives with capsule endoscopy, it should be the first choice for endoscopic diagnosis of SBNET. Double balloon enteroscopy should be performed in patients with abnormal capsule endoscopy to take a biopsy for definitive diagnosis and to tattoo the lesion before surgery, or in case of a contraindication to capsule endoscopy, for example, known intestinal stenosis [9,10]. Although biochemical markers, imaging, and endoscopy may suggest NET, pathologic confirmation on surgical specimens or biopsies is required [4]. The histologic picture of a well-differentiated NET, with cells arranged in nested patterns, salt-and-pepper chromatin, and amphophilic cytoplasm, is characteristic of NET histology. CDX2, PAX6, ISL1, and TTF-1 positivity can suggest the primary origin of a NET; CDX2 positivity indicates the small bowel as the primary site [4,12]. Treatment and prognosis Surgical resection is the preferred first-line treatment of SBNET.
The goals of surgical resection are curative resection of the primary and regional lesions, resection of distant metastatic disease with cytoreductive intent, and palliative resection for symptom relief by removing tumor tissue that releases bioactive agents [6,13]. Surgical management improves survival by improving disease clearance and reducing the risk of developing metastasis. A median overall survival of 9.5 years in the elective prophylactic surgery group versus 5.3 years in the delayed or nonsurgical group (surgery more than six months after diagnosis) has been reported [14]. Although this study had heterogeneity bias (the delayed nonsurgical group was older and more likely to have metastatic hepatic and extrahepatic lesions) [13,14], 58% (53/91) of the delayed nonsurgical group eventually received surgery at some point, either primary resection or emergent surgery for developing obstructive symptoms. The necessity of locoregional resection and cytoreduction of tumors by surgery cannot be overlooked [13,14]. The SEER database reports that removal of at least one lymph node is linked with better survival than no lymph node removal (hazard ratio = 0.64, p = 0.0027) [13]. The current North American Neuroendocrine Tumor Society guidelines recommend routine lymph node clearance with primary tumor resection. The gold standard surgical management of SBNETs is open laparotomy. The surgeon should manually palpate and inspect the entire length of the small bowel to identify small, often subcentimeter and multifocal SBNETs, resecting the primary tumors, regional lymph nodes, mesenteric masses, and peritoneal metastases [13]. Several systemic treatment options are available for the treatment of NET, including SSAs, everolimus, and peptide receptor radionuclide therapy (PRRT) [4]. SSAs have long been used as a systemic treatment of NET. SSAs are the first-line treatment for functional and nonfunctional metastatic SBNETs due to their antiproliferative effect and control of carcinoid symptoms.
SSAs alleviate carcinoid syndrome and have an antiproliferative activity that improves progression-free survival (PFS) compared with placebo (14.3 months vs. 6 months) [4,5]. Everolimus is a mammalian target of rapamycin (mTOR) inhibitor that is approved only in progressive nonfunctional NET. It has shown improved median PFS when used as monotherapy compared with placebo (11.0 months vs. 3.9 months) [4]. PRRT delivers therapeutics by using radiolabeled SSAs to target SSTR-expressing cells, selectively targeting NET cells [6]. PRRT is the preferred second-line treatment for patients who experience disease progression while receiving SSA [4]. Due to its extreme rarity, with an annual incidence of 1.05 per 100,000 persons [2], its variable presentation, and the poor accessibility of the distal small bowel with standard endoscopic procedures, SBNET is difficult to diagnose [4,5]. These delays result in advanced-stage SBNET at the time of diagnosis. Multicentric lesions and metastatic spread to the regional lymph nodes are often found with SBNET. In population-based studies, small intestinal NETs are metastatic at presentation in about 30% of patients [15]. Small intestine NETs tend to have high morbidity and mortality because of metastatic burden, mesenteric fibrosis leading to ischemia, and surgical emergencies [12]. Compared to other GI carcinoids, SBNET carcinoids have a low five-year survival rate (60.5%). With hepatic metastasis, the five-year survival rate decreases to 18-32% [6,15]. The overall five-year survival rate is reported to be 67.2% for patients with gastrointestinal carcinoids. The overall five-year survival rates for the individual GI carcinoids are: stomach 81%, appendix 98%, colon 62%, and rectum 87% [6]. This diagnostic challenge of SBNET increases the risk of patients presenting as a surgical emergency and may relate to the overall poor prognosis compared to other GI NETs [8].
Malignant features of SBNETs are assessed by multiple factors, including the size of the tumor, local spread, depth of invasion, the extent of metastases at the time of diagnosis, mitotic rate, multiplicity, and the presence of carcinoid syndrome. The Ki-67 proliferation index and mitotic rate can be used as criteria for the classification and grading of NET [4,12]. Although complete resection of tumors seems to be curative, long-term recurrence rates of about 50% have been reported [5]. Close surveillance for 10 years after the resection of NET should be maintained at intervals of six to 12 months with imaging, biochemical markers, or endoscopy. Currently, surveillance strategies vary in clinical practice, and it is recommended that they be tailored to individual presentations [1,5,6]. Conclusions NETs arise from neuroendocrine differentiation and are characterized by their ability to secrete functional hormones throughout the body, causing distinctive hormonal syndromes. SBNET is challenging to detect due to its extremely low incidence, varied presentation, and poor accessibility of the distal small bowel with traditional endoscopic methods. Patients suspected of having SBNETs should have an individualized diagnostic process involving testing of biochemical markers or functional or anatomical imaging based on the clinical presentation. Upper and lower GI endoscopies (standard axial endoscopy) are crucial for the detection, biopsy, and therapeutic resection of GI NETs (stomach, duodenum, rectum, colon). Surgical resection is the preferred first-line treatment of SBNET. The goal of surgical resection should be sufficient curative resection of the primary and regional lesions and of distant metastatic disease with cytoreductive intent, along with palliative resection for symptom relief by removing tumor tissue that releases bioactive agents. Several systemic treatment options are available for the treatment of NET, including SSAs, everolimus, and PRRT.
Although complete resection of tumors seems to be curative, it has been noted that long-term recurrence rates are about 50%. Close surveillance for 10 years after the resection of NET should be followed at intervals of six to 12 months with imaging, biochemical markers, or endoscopy. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The Health Promotion Model in HIV Care Effective medical treatment with uninterrupted engagement in care is critical to improving the survival and quality of life of patients infected with the human immunodeficiency virus (HIV). Objectives: Multiple behavioral interventions have been conducted to promote adherence behaviors. However, adherence to HIV medications and medical appointments is still an issue of global concern. Method: The Health Promotion Model (HPM) is a nursing adaptation of the health belief model. The HPM focuses on individual characteristics and experiences, as well as behavior-specific cognitions and outcomes. Integrating the HPM in addressing adherence behaviors could be one of the building blocks of success in changing health behavior. Results: A search of the literature turned up no studies that applied the HPM in adherence-behavior studies conducted among HIV-infected populations. Conclusion: This paper presents the reader with currently available adherence-behavior interventions and strategies that align with the HPM model components. It further proposes that medical treatment team members adopt the HPM in current clinical practice settings so as to effectively address adherence behavior issues. Introduction The advancement of antiretroviral therapy (ART) has provided multiple positive health benefits to the HIV-infected population. The life expectancy of HIV-infected patients who choose to follow ART treatment has been dramatically prolonged. HIV is no longer considered a terminal illness by many medical professionals, but rather a chronic disease. However, many infected persons do not fully benefit from the best-managed treatment plans, because they do not consistently adhere to routine clinical care. Ensuring treatment adherence has proven to be a significant challenge in health care globally and in the United States.
Adherence is defined as the extent to which a patient follows instructions for prescribed treatment(s) (1). Individuals who do not fully adhere to prescribed treatment regimens may face higher mortality and morbidity rates due to the untreated advancement of their disease (2). Research has shown that effective medical treatment and correct diagnosis are critical to improving the quality of life of HIV patients and assuring their long-term survival (3). Failure to follow the precise recommendations and instructions of health care providers is a barrier to effective medical treatment. In addition, non-compliance or non-adherence to HIV care and treatment poses a significant economic burden to society (4). Globally, non-adherence to antiretroviral therapy in the adult HIV-infected population ranges from 33% to 88% (5). In the United States alone, non-adherence has led to tremendous yearly medical expenditures (6). In 2015, the Global Health Observatory data repository indicated that approximately 1.1 million people died of HIV-related illnesses (7). The economic burden and death rate among HIV patients can be addressed and improved only through effective communication and commitment to the development and implementation of effective, evidence-based treatment regimens and adherence to them. Effective HIV treatment often requires a multidisciplinary team approach, and the coordination of care is critical to achieving success with respect to improving rates of adherence. Researchers have been struggling with the issue of adherence behavior for more than 50 years (8), and the problem of non-adherence is still listed as a priority research topic by the World Health Organization (9). An extensive review of the literature uncovered no studies that developed and implemented the HPM as a methodology to address non-adherence behaviors.
This paper discusses the clinical adherence intervention strategies currently available and explores other theoretical models that could successfully align with the HPM. In most clinical settings, nurses are the healthcare professionals who spend the most time with patients (10). Therefore, this is an opportunity for nurses to embrace the Health Promotion Model (HPM) to further encourage adherence-behavior strategies among their patients. The Health Promotion Model The Health Promotion Model (HPM) was designed as a health protection model that defined health as a state of positive metabolic and functional efficiency of the body and not merely the absence of disease (11). The HPM is a nursing adaptation of the health belief model, which is directed toward increasing a patient's level of well-being and self-efficacy as the patient interacts with the surrounding environment. Thus, the HPM is described as the art and science of assisting patients as they adapt to changes and improve their lifestyle while progressing toward a state of optimum health. Figure 1 provides an overview of the HPM, which focuses on three major components: 1) individual characteristics and experiences, 2) behavior-specific cognitions and affect, and 3) behavioral outcomes. The model maintains that each individual has personal experiences and characteristics that are unique to that person and can affect subsequent actions and outcomes (11). The behavior-specific knowledge variables have motivational characteristics that help improve individual engagement in health. These behavior variables can be applied in nursing. Thus, health promotion behaviors shape the desired behavioral outcome. Behavior should positively enhance quality of life, as well as improve functional abilities and health across all developmental stages. The HPM embodies the following assumptions (11). 1. Patients must actively regulate their behavior. 2.
Patients, in their biopsychosocial complexity, transform the environment as they interact progressively and transform behavior over time. 3. Physicians and nurses play an integral role in development of the interpersonal environment that exerts influence on the lives of patients. HIV treatment effectiveness is based on the level of patient adherence to the therapeutic regimen. Health care systems, HIV treatment team members, and patients all have a responsibility in improving adherence to medication. A single technique is not able to improve the adherence of HIV patients to a therapeutic regimen. Instead, the development and implementation of diverse methods focused on improving adherence is needed. Table 1 presents the different theories and models that have been applied to explore adherence behaviors. These theories and models are grouped under the three major components of the HPM: Individual Characteristics and Experiences, Behavior-specific Cognitions and Affect, and Behavioral Outcomes. These models could be implemented within the HPM framework by team members, with the overarching goal of improving adherence to scheduled medical appointments and daily medications.

Individual Characteristics and Experiences

Individual characteristics and experiences involve personal biological factors, psychological factors, and socio-cultural factors (12). These factors are shaped by the nature of the behavior in question and are considered to be predictive of a particular behavior (12). The following strategies could be implemented by treatment team members utilizing the HPM:

The Educational Adherence Strategy

Most research supports the effectiveness of patient education in the areas of adherence, knowledge, and patient outcomes (6,13,14). Education positively impacts adherence among HIV patients, especially with regard to daily medication routines.
This may be due to the many educational programs that are available to assist with improving adherence to treatment regimens. One example involves providing precise instructions and recommendations on self-care activities during scheduled medical appointments. Through collaborative care, adherence interventions can be implemented by utilizing the expertise of other treatment team members (physicians, nurses, dietitians, and social workers) (15). Nurses play a major role, not only in utilizing this strategy in daily practice, but also as team leaders who are pivotal in promoting collaborative care.

The Self-motivation Adherence Strategy

Koenig, Bernard (16) reported that adherence to a treatment plan improves as physicians use the patient's current laboratory results as a launching point in talking with the patient during clinical visits. As patients are made aware of their current viral load, this information may motivate them to take control of managing their health (16). Other interventions such as providing groceries, meal tickets, and other tangible bonuses as part of attendance rewards have been shown to improve adherence in routine HIV medical care (17,18). Directly observed therapy (DOT) is another strategy that also could improve medication adherence. Berg and colleagues conducted a randomized trial using DOT to explore adherence behavior. The study indicated that DOT enabled patients to maintain low viral loads and found they were more accepting of future medical evaluations and more likely to maintain ART adherence (17).

Theoretical Models to Explain Educational and Self-Motivation Adherence

Educational and self-motivation adherence strategies are referred to as responses that are intended to improve an individual's ability to manage his/her disease (19). Behavioral principles such as feedback and reinforcement are often integrated into these strategies (19).
To be effective, educational and self-motivational interventions should be tailored to address specific patient needs. Another important component that requires exploration is the quality of the relationship between the patient and the healthcare provider (20). The concept of educating patients and addressing their needs appears to be daunting and complex, and does not refer to didactic or cognitive theoretical models. Multiple theories (self-regulation perspectives, cognitive perspectives, and communication perspectives) can be applied to the sociocultural and psychological factors in educating patients as they apply this knowledge and become self-motivated and more likely to adhere to their scheduled medical appointments and daily medication routines.

The Communication Theory

Communication models are essential to applying the educational and self-motivation strategies. These models focus on the transfer of knowledge and information related to a disease and its effective management (21). Communication models incorporate patient-specific information that can have a positive impact on motivation, as well as attitudes toward adherence. Recent studies found that information presented to patients should be tailored to a sixth-grade reading level if it is to be effective and understood (22). Limiting prescribed messages to no more than three specific points, with supporting statements for each topic, was found to be most useful (23). Other factors that can contribute to message acceptance are a physician's concern, interest, friendliness or empathy toward the patient and the alliance that may form once trust is established (24). Thus, through the use of communication theory, a provider can influence and create positive behavioral change that might significantly improve patient adherence to daily medications and medical appointments (25).
Before initiating ART, providers should take the time to discuss with the patient the advantages and disadvantages, risks and benefits of each medication protocol (26). An example would be when a patient might express concern about the possible side effects of a medication or treatment regimen. By presenting reliable treatment options and individualized treatment plans to patients and discussing options with them, they can be involved in the decision-making process. This is key and a cornerstone to the collaborative development of a successful treatment strategy.

The Cognitive Theory

The cognitive model emphasizes the beliefs and perceptions of patients as motivating behavioral factors. It also assumes that health-related behavior is determined by an understanding of health benefits and the threats perceived in health behavior choices made by the patient (19). The primary model dimension is the perceived severity and probability of the threat and the perceived benefits and barriers of such behavior. Actions are based on the individual's subjective perception of the advantages and disadvantages, and are not necessarily based on rational objective computations (19). There are different cognitive concepts applied in adherence behavior studies. However, applying single cognition concepts in adherence behavior studies might not provide sufficient data without considering other preexisting behavioral factors such as alcohol use disorders (27).

The Self-Regulation Model

The self-regulation theory is directed toward patient self-management, using educational interventions. It is determined by perceived social norms and/or group or social consequences (28). The theory maintains that patients engage in strategies that allow them to assume the role of active problem solvers. Thus, patient behavior is influenced by subjective emotions and experiences.
These are based on perceptions of the goal and current status, the patient's plans to change his/her present state in order to achieve a goal, and the patient's appraisal to reach the target. When goals are altered or not achieved, a patient can change his/her perception and coping strategies (28). Coping among patients is based on cognitive considerations. The emotional and cognitive signals to cope are triggered by either external or internal stimuli (29). Media messages and symptoms are examples of external and internal stimuli.

Behavior-Specific Cognitions and Affect

This concept involves patient perception about the anticipated personal benefits of pursuing positive health outcomes that might result from a given health behavior (11). Thus, it entails situational influences, interpersonal influences, activity-related affect, perceived self-efficacy, perceived barriers to action, and perceived benefits of action. Historically, most medication adherence studies describe patient forgetfulness as the greatest barrier to adherence (30)(31)(32)(33). This is considered a non-intentional factor. Forgetfulness can be addressed and even overcome as providers implement various reminder strategies such as cellular phone messages, alarms, emails, telephone reminders, and direct mail letters (17). Engaging the assistance of HIV patient caregivers establishes another route to removing non-adherence as a barrier to a medical regimen (34). Family-based or couples' interventions provide motivation and support for patients to adhere to their medication (35). Soliciting help from the HIV patient's family members and establishing trust via communication between the patient and the provider are vital components in assisting a patient toward self-efficacy and treatment adherence (36). Providers monitor and evaluate patient adherence by using support, rewards, calendars, and diaries, as well as by providing concise and consistent feedback (37).
Financial incentives were reported to improve adherence behavior to HIV management in the short term and while the incentives were in place (38,39). However, only a few limited controlled studies have been reported to date (38). Most research confirms that telephone prompts and mail reminders are beneficial in reducing patient non-adherence to scheduled medical appointments (40)(41)(42). One of the suggested intervention models that can be implemented easily by healthcare providers is a personal telephone call or a short reminder message sent a few days before a scheduled medical appointment. This kind of direct, personal communication reminds the patient of the importance of the pending medical appointment (17). Computerized reminders also are highly cost-effective and can motivate higher levels of adherence among HIV patients. As noted, adherence behavioral interventions are essential to improving adherence to scheduled medical appointments and daily medication, as medical providers seek to improve the health status of this highly vulnerable population.

The Behavioral Perspective Theory to Explain Adherence

Interventions based on incentives are essential to improving the HIV patient's adherence levels. Behavioral-adherent interventions are explained by using the behavioral theory. The model states that human behavior is largely based on cues or stimuli; these prompt specific responses that are essential in reinforcing behavior (19). Incentives can act as cues, reminders, and rewards for adherent behavior. The major principle of the behavior model is that behavior is learned by forming and/or gradually shaping behavior patterns. For a desired behavior to remain consistent, it must be reinforced through automation and frequent repetition. Reminders are, therefore, essential in improving adherence to scheduled medical appointments and daily medication.
Due to advances in technology, reminders are the most inexpensive and direct intervention option available to HIV patients. By using electronic technology, these messages may be sent frequently by the provider, at little or no cost (43).

Behavioral Outcomes

The third HPM concept involves identification and intention of a planned strategy to implement a health behavior. It also can involve alternative behaviors that patients are not able to control because of environmental contingencies (12). Behavioral outcome interventions can be implemented by providers using technical strategies such as unit-dose and/or blister-packaging for medication, as compared to bottles and envelopes (13). With this model, the adherence strategies are aimed at reducing the number of drug types in the regimen or doses per day through the use of fixed-dose combination pills or extended release formulations. Fixed-dose combination pills combine two or more drugs in set proportions, and/or medicinal products are blister-packaged in fixed dosage combinations. Additional adherence improvement aids include printed medication schedules and calendars that specify the time of day for daily medications and specific packaging such as pill boxes that indicate dosage frequencies (44). The average rate of adherence is higher for a single daily dose compared to patients taking multiple daily doses (14). Therefore, treatment team members should consider dosing frequency when developing medication regimens and attempt to limit the number of daily doses required. Another strategy is the use of electronic medication container caps for elderly HIV patients who may have difficulty opening regular bottles (44). These electronic vial caps serve as a reminder system by beeping whenever a dose should be administered. The patient is rewarded with an accurate record of when medication(s) was (were) last dispensed.
Therefore, the development and implementation of reliable medication-product modifications should be a priority in improving patient adherence. Another important component of adherence involves follow-up appointments with the provider and the healthcare team. Wait times are an important factor and influence patient adherence to medical appointments. The longer a patient has to wait to schedule a visit to be seen by the provider, the lower the rate of adherence among the HIV-patient population (45). There is robust and consistent evidence that indicates simplifying scheduled medical appointments and dosage regimens improves adherence by reducing the frequency of daily dosages (44). This can result in decreased health care costs and better health outcomes for HIV patients. As stated previously, a strong level of respect and professionalism must be developed between the healthcare provider and the patient in order to create this environment. Different therapeutic options and the manner in which they can be adhered to must be considered and implemented jointly by patients and providers. Acknowledgement of the crucial role each plays in the plan is critical to the development of trusting and respectful relationships between healthcare providers and patients.

The Biomedical Theory to Explain Adherence

The biomedical perspective theory assumes patients to be passive recipients of instructions and recommendations provided by treatment team members. It also discusses alternative behaviors patients are not able to control, based on environmental and technical contingencies. Thus, it envisions the identification of a planned best-managed strategy to implement health behavior(s). A disease such as HIV involves biomedical causes, and the predominant focus of treatment is restoration of health. Adopting the current advancement of a newer drug regimen is preferred (19).
Technical adherence strategies simplify the regimen by simplifying packaging and improving dosage adherence among HIV patients. Such interventions are part of a biomedical model in which providers seek solutions for HIV patients. The biomedical model, therefore, assists in motivating the development of technological advances in enhancing scheduled medical appointments and daily medication routines (19,20). In summation, simplification of treatment options is interpreted as a logical and practical solution.

Summary

In summary, no single adherence approach can resolve the problem of patient non-adherence. Factors impacting adherence levels, such as the therapeutic relationship between treatment team members and the patient, should be addressed in an ongoing and/or proactive way (13). Adherence success is based on tailoring interventions to a patient's unique characteristics, readiness to engage in care, and the outcome expected from the treatment. Collaboration between treatment team members and patients is still the foundational core of success in improving adherence behaviors. Negotiation, collaboration, engagement, and participation all enhance opportunities for an ideal therapeutic approach that will assist HIV patients as they develop the skills needed to maintain their adherence. Such partnerships influence patient adherence at all levels, foster patient satisfaction, and create positive healthcare outcomes. Thus, all of these critical adherence elements can be linked to retention and improved health outcomes in the care of HIV-infected patients. Conventional interventions for scheduled medical appointments and daily medication reminders, based on technical solutions alone, do not adequately account for human situational thought processes. There is a need to present and utilize the components provided by the HPM before implementing such intervention(s).
Adherence interventions such as reminders and incentives that stem from behavioral models are essential for non-adherent HIV patients who do not abide by their scheduled medical appointments and daily medications on a routine basis. It is unclear if educational, behavioral, biomedical and self-regulation models are more or less reliable in improving the level of adherence among HIV patients. Concentrated efforts to improve adherence can lead to a win-win solution in which healthcare providers, patients, and the community at large can all benefit. In order to effectively contribute to and validate the HPM, there is a need to promote and pursue further multidisciplinary, collaborative research that delves into the underlying issues of adherence as a cohesive team effort focused on improving healthcare for all HIV patients.

Conclusion

The paper contributes to advancement in the field by introducing the Health Promotion Model and describing its applicability to patient adherence behavior. The issue of adherence behavior in the HIV-infected population not only aligns with current research trends in the United States (46), it also is the main research priority listed by the World Health Organization (9). Regardless of the types of HIV behavioral-related research topics, the HPM could still be appropriate as a foundational framework for interventions to address adherence. The challenge facing HIV patients has no simple solution in the area of behavioral change or adherence. The HPM can shed light on the processes underlying behavioral change. The theoretical model is essential for developing and implementing successful adherence interventions. More analysis is required to explore theory-based interventions in healthcare practices that are operative and developed based on a clear and relevant theoretical foundation. Finally, documentation shows that cohesive and dedicated teams of nurses spend the greatest amount of time with patients (10).
To overcome adherence barriers and issues facing HIV patients with respect to daily medications and medical appointments, medical healthcare providers should be poised to assume a proactive role in promoting positive health habits over the long term through the Health Promotion Model.
A human haploid gene trap collection to study lncRNAs with unusual RNA biology

ABSTRACT

Many thousands of long non-coding (lnc) RNAs are mapped in the human genome. Time-consuming studies using reverse genetic approaches by post-transcriptional knock-down or genetic modification of the locus demonstrated diverse biological functions for a few of these transcripts. The Human Gene Trap Mutant Collection in haploid KBM7 cells is a ready-to-use tool for studying protein-coding gene function. As lncRNAs show remarkable differences in RNA biology compared to protein-coding genes, it is unclear if this gene trap collection is useful for functional analysis of lncRNAs. Here we use the uncharacterized LOC100288798 lncRNA as a model to answer this question. Using public RNA-seq data we show that LOC100288798 is ubiquitously expressed, but inefficiently spliced. The minor spliced LOC100288798 isoforms are exported to the cytoplasm, whereas the major unspliced isoform is nuclear localized. This shows that LOC100288798 RNA biology differs markedly from typical mRNAs. De novo assembly from RNA-seq data suggests that LOC100288798 extends 289kb beyond its annotated 3' end and overlaps the downstream SLC38A4 gene. Three cell lines with independent gene trap insertions in LOC100288798 were available from the KBM7 gene trap collection. RT-qPCR and RNA-seq confirmed successful lncRNA truncation and its extended length. Expression analysis from RNA-seq data shows significant deregulation of 41 protein-coding genes upon LOC100288798 truncation. Our data shows that gene trap collections in human haploid cell lines are useful tools to study lncRNAs, and identifies the previously uncharacterized LOC100288798 as a potential gene regulator.

Introduction

Long non-coding (lnc) RNAs can regulate gene expression and are abundant in the genomes of various organisms.
1 The human genome has been reported to contain about 60,000 lncRNA genes 2 and an increasing number is suggested to play important roles in cancer and other diseases. 3,4 Moreover, several lncRNAs were reported to serve as disease biomarkers 5,6 and potential drug targets. [7][8][9] LncRNAs display a wide range of functions from nuclear scaffolding 10 to post-transcriptional mRNA regulation by "sponging" regulatory miRNAs, 11 transcriptional gene activation or repression by binding and guiding histone modifiers to target genes 12,13 and silencing by transcription interference 14 (reviewed in 15 ). Apart from the basic difference between the functions of lncRNAs and mRNAs, lncRNAs also display a number of RNA biology features that make their identification and functional studies more challenging than that of protein-coding genes. 16 These features include: low, tissue-specific expression, 17 nuclear localization 18 and inefficient co-transcriptional splicing, 19,20 transcription initiation from repeat rich regions 21 and unusually high isoform heterogeneity. 22 To date, the majority of functional lncRNA studies have depleted the lncRNA of interest via post-transcriptional knock-down approaches using shRNAs, 23 morpholinos 24 or modified DNA antisense oligos that target nuclear localized transcripts. 25 Based on the atypical RNA biology features described above, these approaches might not be generally suited to study a wide range of lncRNAs. For example, shRNAs are unlikely to target lncRNAs in the nucleus, 26 while morpholinos or antisense oligos might be difficult to design for targeting complex lncRNA loci expressing multiple lncRNA isoforms. Importantly, lncRNAs that act solely by their transcription will not be affected by post-transcriptional knockdowns. 14 Genetic manipulations might be a more universal approach to interfere with lncRNA function independent of RNA-biology features.
These manipulations have become more feasible due to the emergence of fast and simple genome editing technologies such as CRISPR/Cas9. 27 One strategy is the genetic deletion of the whole gene body or the promoter of the lncRNA of interest. [28][29][30][31] While this approach is appealing due to its relative simplicity, there is a risk of simultaneous deletion of potential genomic regulatory elements that could be located in the gene body of the targeted lncRNA, which can make the interpretation of the resulting phenotype problematic. 16,32 Therefore genetic insertion of transcriptional terminator sequences, or "gene traps" may be preferable to gene deletions as they are less likely to disrupt regulatory elements. Gene trap technology is based on the insertion of "truncation cassettes," typically containing polyA signals, shortly after the transcriptional start site (TSS) of the lncRNA to stop RNA Polymerase II transcription and create functional lncRNA "knock-outs". Gene trap mutagenesis has been used extensively in the mouse to identify and study protein-coding genes. 33 Classical gene trap cassettes carry a strong splice acceptor and a reporter protein terminated by a strong polyA signal. This cassette is introduced into the cell line using retroviral vectors that cause random integration into the genome. If the cassette integrates into the gene body of a transcribed gene in the correct transcriptional orientation, transcription will be stopped. 34 An analysis of mouse lines carrying gene trap insertions that had the goal to identify key genes expressed during embryonic development, led to the isolation of the lncRNA called gene trap locus 2 (Gtl2) gene. 35 It is also known as maternally expressed 3 (Meg3), since it is exclusively expressed from the maternally inherited allele, a phenomenon known as genomic imprinting. 36 Gtl2/Meg3 was shown to be functional in mouse development 37,38 and human disease. 
39 Subsequently, a targeted approach was used to introduce polyA signals from rabbit β-globin or simian virus 40 to truncate the imprinted Airn, Kcnq1ot1 and Ube3a-as lncRNAs in mice, as occurs in gene trap truncations. These approaches successfully stopped lncRNA transcription and identified these lncRNAs as transcriptional regulators of developmentally important protein-coding genes. [40][41][42][43] The advent of genome editing tools such as zinc finger nucleases opened the possibility to use similar approaches also for human cells. In this way polyA-containing truncation cassettes were targeted at the abundantly expressed MALAT1 lncRNA causing efficient truncation in a number of human cell lines. 44 Insertion of a truncation cassette may interrupt cis-acting genetic elements, and although this is notably less likely than with gene body deletions, it should be controlled for. Such controls include insertion of the truncation cassette at different sites, creating lncRNA truncations of different lengths, or the use of non-functional truncation cassette insertions. 32 An important advantage of the gene trap approach is the possibility to restore lncRNA transcription by removing the stop cassette. 45 However, restoration of lncRNA function will only be possible if continuous expression is required for function. 32,46 Taken together, this indicates that the truncation of lncRNAs is a useful tool to study their function in both mouse and human, and in particular gene trap insertion is a well-controlled high-throughput method to achieve this. While tools to perform genetic manipulations in mouse and human systems are becoming faster and simpler, the creation of a human cell line carrying a lncRNA truncation may still require optimization and thus is time-consuming and resource intensive. Therefore it would be beneficial to use existing lncRNA knockout resources to rapidly investigate a lncRNA of interest.
Such a resource was reported for protein-coding genes as the "Human Gene Trap Mutant Collection". 45 This library is comprised of a collection of monoclonal cell lines that carry an insertion of a gene trap cassette in the gene body of a large number of genes. 45 The cell line used to establish this resource is a nearly haploid (except for chromosome 8) malignant myeloid lineage cell line called KBM7. 47 As most chromosomes are present in only one copy, the integration of a gene trap cassette results in a full knock-out in KBM7 cells. Since the creation of this gene trap collection did not select for a particular type of genomic locus, it contains cell lines with gene trap cassettes inserted into protein-coding genes, as well as into transcribed non-coding regions, including various annotated lncRNAs (visit https://opendata.cemm.at/barlowlab/ for the location of all cassettes). Thus, the KBM7 "Human Gene Trap Mutant Collection" could represent a massive ready-to-use collection of lncRNA knockouts that may be useful for rapidly assessing human lncRNA function. Importantly, efficiency of a gene trap depends on splicing from a neighboring exon of the "trapped" gene to the gene trap cassette. 34 In the above described case of Gtl2/Meg3 efficient splicing was expected as this lncRNA produces a number of spliced isoforms. 48 While the "Human Gene Trap Mutant Collection" has been proven to efficiently stop transcription of protein-coding genes, the usefulness of this approach to study lncRNAs is unclear, since it was shown that many of them are inefficiently spliced or completely unspliced. 19 In this study we aimed to close this knowledge gap and test if the "Human Gene Trap Mutant Collection" can be successfully used for studying lncRNAs, even the inefficiently spliced ones. For this purpose we focused on a lncRNA that was identified in a tiling array based study to be close to the SLC38A4 protein-coding gene and named "SLC38A4-down".
49 It is noteworthy that mouse Slc38a4 shows imprinted expression in extra-embryonic, embryonic and adult tissues 50 as well as in cell culture cells. 51 No lncRNA has been reported to be involved in regulating Slc38a4 imprinted expression which is, to date, considered a solo imprinted gene (http://igc.otago.ac.nz). Although SLC38A4 was not reported to show imprinted expression in human, the identification of SLC38A4-down lncRNA close to the SLC38A4 gene allowed the possibility that this lncRNA might be involved in transcriptional regulation of SLC38A4. SLC38A4-down lncRNA was predicted from its expression profile, which lacked exon peaks, to be mainly unspliced and was also shown to be nuclear-localized. 49 These features make it an unsuitable target for a post-transcriptional knock-down approach. Importantly, we identified a number of gene trap insertions in the gene body of this lncRNA in the "Human Gene Trap Mutant Collection" in the correct transcriptional orientation, which allowed us to use this lncRNA as a model in our study. We first identified that SLC38A4-down corresponds to the LOC100288798 lncRNA annotated by NCBI RNA reference sequences collection (RefSeq 52 ). Using publicly available RNA-seq data from various tissues and cellular fractions we found the LOC100288798 lncRNA to be ubiquitously expressed, inefficiently spliced and polyadenylated. Unspliced isoforms are retained in the nucleus, while minor spliced isoforms are exported to the cytoplasm. We also extended the annotation of this lncRNA by showing that it is twice as long as the annotated version, as it is transcribed over 500 kilobases (kb) and overlaps the SLC38A4 protein-coding gene in multiple tissues. Thus we suggest renaming it SLC38A4-AS lncRNA in accordance with recent lncRNA nomenclature guidelines.
53 We then obtained three independent KBM7 clones harboring gene trap cassettes in the body of SLC38A4-AS predicted to stop transcription 3kb and 100kb downstream of its transcription start. RNA sequencing (RNA-seq) of control and SLC38A4-AS truncated cell lines showed that SLC38A4-AS was efficiently truncated, which resulted in genome-wide gene expression changes. We applied further stringent filtering to identify a small list of the most plausible SLC38A4-AS targets. Based on this data we conclude that lncRNA truncations available in the "Human Gene Trap Mutant Collection" are useful to study lncRNAs, making this resource a valuable tool for studying lncRNA function in a human system. In order to maximize the usefulness of this data for the scientific community we provide a UCSC genome browser hub to display all the RNA-Seq data as well as the information on gene trap insertion sites presented in this paper (https://opendata.cemm.at/barlowlab/).

Results

LOC100288798 is a ubiquitously expressed, inefficiently processed lncRNA

LOC100288798 lncRNA is annotated by several reference gene databases including RefSeq 52 and GENCODE v19 (http://www.gencodegenes.org/releases/19.html, 54 ) as a 269kb lncRNA on human chromosome 12 (Fig. 1A). LOC100288798 lncRNA was also identified by RNA-seq based human lncRNA annotation studies such as Cabili et al 17 and MiTranscriptome 2 (Fig. 1A). It is an intergenic lncRNA that initiates from its own CpG island (CpG: 106) and is located between the SLC38A2 and SLC38A4 protein-coding genes (Fig. 1A). Despite the 35 spliced expressed sequence tags (ESTs) mapped to this locus (Human ESTs That Have Been Spliced public track at UCSC Genome Browser), LOC100288798 remains an uncharacterized lncRNA. We characterized this lncRNA using publicly available human RNA-seq data.
We first asked which tissues and cell types express LOC100288798 lncRNA, using polyA+ enriched and total (rRNA-depleted) RNA-seq data from 34 healthy primary tissues and cell types as well as 4 normal and 3 malignant cell lines originating from different studies (a total of 41 different cell types, 5 of which were replicated twice, giving a total of 46 samples, Table S1A, Methods). We downloaded the raw RNA-seq data, aligned it with STAR 55 and obtained an average of 186 million uniquely mapped reads per sample (ranging from 16 to 371 million reads, Table S1A). We next calculated expression levels of LOC100288798 lncRNA and its neighboring SLC38A2 and SLC38A4 genes by calculating average RPKMs of RefSeq annotated spliced isoforms (Methods). Fig. 1B shows the obtained expression profile in the 46 analyzed samples. SLC38A2 is highly expressed (RPKM > 9) in every analyzed sample, consistent with its known ubiquitous expression (http://www.proteinatlas.org/ENSG00000134294-SLC38A2/tissue). In contrast, SLC38A4 is expressed (RPKM > 0.5) in just 18/46 samples (corresponding to 15/41 different cell/tissue types), with highest expression in liver and skeletal muscle, consistent with previous observations (The Human Protein Atlas: http://www.proteinatlas.org/ENSG00000139209-SLC38A4/tissue, Expression Atlas: http://www.ebi.ac.uk/gxa/genes/ENSG00000139209). Similar to SLC38A2, the LOC100288798 lncRNA is expressed (RPKM > 0.5) in all analyzed samples. Notably, the highest LOC100288798 lncRNA expression level, reached in CD34 cells, is 48-fold lower than the highest expression level of SLC38A2 and 16-fold lower than that of SLC38A4, consistent with previous observations that lncRNAs are generally expressed at lower levels than protein-coding genes. 17 We next asked if LOC100288798 lncRNA expression showed any correlation with the 2 nearby genes, since it is known that some lncRNAs can regulate their nearby protein-coding genes.
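The RPKM-based expression metric used above can be sketched as follows. This is a minimal illustration, not the study's pipeline: the isoform read counts and spliced lengths are hypothetical, and the gene-level value is taken as the mean over RefSeq spliced isoforms, as described in the text.

```python
def rpkm(reads, length_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    return reads * 1e9 / (length_bp * total_mapped_reads)

# Hypothetical per-isoform read counts and spliced lengths for one sample
# with 186 million uniquely mapped reads (the study's per-sample average).
isoforms = {
    "NR_125377": (1200, 2500),  # (reads over exons, spliced length in bp)
    "NR_125378": (900, 2100),
}
total_reads = 186_000_000

# Gene-level expression: average RPKM over the annotated spliced isoforms.
gene_rpkm = sum(rpkm(r, l, total_reads) for r, l in isoforms.values()) / len(isoforms)
```

With these hypothetical counts, `gene_rpkm` lands in the low single digits, i.e. in the lncRNA expression range discussed above rather than the RPKM > 9 range of SLC38A2.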
13,40 Although LOC100288798 lncRNA and its closest gene SLC38A2 were both ubiquitously expressed, they did not show correlation in expression level (Pearson correlation = 0.17, 46 samples). This, together with the fact that their transcription start sites are separated by 11kb and located in 2 separate CpG islands, indicates that these 2 genes initiate from independent promoters, and while they seem to belong to the same transcription network, the regulation of their expression level may be independent. LOC100288798 lncRNA and SLC38A4 showed a striking difference in cell type expression profile and no correlation in expression among the tested tissues and cell types (Pearson correlation = 0.07, 46 samples), which indicates independent transcriptional regulation. When we analyzed correlation only in tissues that express both LOC100288798 lncRNA and SLC38A4, the correlation between these 2 genes was still negligible (Pearson correlation = 0.11, 18 samples), although the small number of samples may impede the correlation analysis. In summary, we found that LOC100288798 is a ubiquitously but lowly expressed lncRNA displaying no striking correlation with the expression of its neighboring protein-coding genes.

Figure 1 (caption, continued; see Table S1A for details of abbreviations). Expression levels of SLC38A4 and LOC100288798 were calculated as average RPKMs of RefSeq isoforms (SLC38A2, 1 isoform: NM_018976; SLC38A4, 2 isoforms: NM_018018 and NM_001143824; LOC100288798, 5 isoforms: NR_125377, NR_125378, NR_125379, NR_125380 and NR_125381); values are displayed inside each cell. Heat map color legend is displayed on the left. (C) LOC100288798 lncRNA is variably spliced in different tissues. Heat map shows splicing efficiency (Methods) of LOC100288798 and 2 protein-coding genes, TBP and SLC38A2 (well-spliced, ubiquitously expressed protein-coding gene controls), in publicly available total RNA-seq data (Table S1A). Calculated splicing efficiency is displayed inside each cell. Heat map color legend is displayed on the left. (D) Visual inspection of ENCODE HeLa RNA-seq of various cell and RNA fractions suggests that LOC100288798 is an inefficiently processed lncRNA. From top to bottom: chromosome position; RefSeq annotation; ENCODE HeLa RNA-seq data, displayed using the public ENCODE RNA-seq (CSHL) hub in the UCSC browser (only Replicate 2 of the 2 replicates available at the ENCODE RNA-seq (CSHL) hub is displayed). PolyA+ RNA-seq of the whole cell (Reverse and Forward strands) shows absence of SLC38A4 expression from the reverse strand and visible expression from the forward strand corresponding to LOC100288798. Dashed orange lines indicate chromosome positions of RefSeq annotated exons of LOC100288798. Comparison of signal intensities between polyA+ and polyA- indicates LOC100288798 is inefficiently spliced, as it appears more abundant in the polyA- fraction. Cytoplasm RNA-seq indicates that only spliced and polyadenylated LOC100288798 transcripts can be exported to the cytoplasm (compare peaks in polyA+ and no peaks in polyA-). Nuclear RNA-seq indicates nuclear enrichment of the LOC100288798 unspliced form (compare nucleus polyA- to cytoplasm polyA-). RNA-seq tracks are displayed with the default ENCODE RNA-seq (CSHL) hub scale (range from 0 to 100). (E) PolyA+ enrichment. Bar plot shows polyA+ enrichment (calculated as the ratio between RPKM in the polyA+ and polyA- RNA fractions) of the 4 indicated genes in HeLa cells (ENCODE RNA-seq data). RPKMs, and consequently polyA+ enrichment, were calculated for spliced isoforms (RPKM over exons, blue bars) and unspliced isoforms (RPKM over the whole gene body, purple bars) of the 4 genes. PolyA+ enrichment is a relative value; we therefore indicate the absolute RPKM values of spliced and unspliced isoforms in the polyA- fraction below each respective bar. (F) Nuclear enrichment. Bar plot shows nuclear enrichment (calculated as the ratio between RPKM in the nuclear and cytoplasmic fractions) of the 4 indicated genes in HeLa cells (ENCODE RNA-seq data). RPKMs, and consequently nuclear enrichment, were calculated for spliced isoforms (RPKM over exons, blue bars) and unspliced isoforms (RPKM over the whole gene body, purple bars) of the 4 genes in the polyA+ (darker bars) and polyA- (lighter bars) fractions. Nuclear enrichment is a relative value; we therefore indicate the absolute RPKM values in the cytoplasmic fraction below each respective bar.

We next characterized the efficiency of LOC100288798 lncRNA splicing, as it was previously reported that lncRNAs show reduced co-transcriptional splicing compared to mRNAs. 19 We used publicly available total RNA-seq data (Table S1A) from 18/41 of the different cell types described above and estimated splicing efficiency for LOC100288798 lncRNA and 2 protein-coding genes, TBP and SLC38A2, that were expressed in the same cell types. We calculated the average splicing efficiency over all unique splice sites from all isoforms of each analyzed gene (Fig. 1C) by calculating RPKMs of the exonic and intronic 45bp regions surrounding each splice site (Methods). As expected, both protein-coding genes showed high splicing efficiency, with averages of 93.0% (TBP) and 96.5% (SLC38A2) among the analyzed cell types. Importantly, only 2 (for TBP) and one (for SLC38A2) cell types showed splicing efficiencies of less than 90%. The result was different for the LOC100288798 lncRNA: here the average splicing efficiency was 76.0%, with 14/18 cell types showing splicing efficiency of less than 90%, and 7 lower than 70%. It is noteworthy that low splicing efficiencies are not restricted to low expression levels.
For example, undifferentiated chondrocytes (59% splicing efficiency) and IMR90 cells (68% splicing efficiency) are in the top 25% and top 50% of the highest expressing tissues for the LOC100288798 lncRNA (Fig. 1B). This indicates that LOC100288798 lncRNA is less well spliced than protein-coding genes, and that its splicing is variable in different cell types. It has been reported that lncRNAs tend to be nuclear localized, 18,56 and that nuclear export depends on the addition of a 3' polyA tail, which is connected to splicing. 57 To investigate the processing of LOC100288798 lncRNA we used publicly available ENCODE RNA-seq data from nuclear, cytoplasmic, as well as whole cell fractions (Table S1B). Importantly, the RNA from each cell fraction was further divided into polyA-enriched (polyA+) and polyA-depleted (polyA-) fractions, thus providing a source of information about the polyadenylation and cellular localization of LOC100288798 lncRNA spliced/polyadenylated as well as unspliced isoforms. We first visually inspected the RNA-seq signal obtained from HeLa cells in the LOC100288798/SLC38A4 region using the ENCODE (CSHL) RNA-seq hub in the UCSC browser (Fig. 1D). The SLC38A4 protein-coding gene is not expressed in whole cell polyA+ RNA-seq, as indicated by the absence of RNA-seq signal over exons on the reverse strand (Fig. 1D, whole cell, top box, arrow marked 'Rev'), consistent with our expression calculation (Fig. 1B, RPKM of SLC38A4 = 0.00). In contrast, the forward strand showed abundant RNA-seq signals over LOC100288798 lncRNA exons in polyA+ and over the whole gene body in polyA- RNA-seq data. Interestingly, the signal intensities in the polyA+ and polyA- data were comparable, confirming inefficient splicing of LOC100288798 lncRNA (Fig. 1D, whole cell, middle and bottom box, arrow marked 'Forw').
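The splice-site-level splicing-efficiency estimate used for Fig. 1C can be sketched as below. The exact formula is defined in the paper's Methods; for illustration we assume that efficiency at each site is the exonic signal over the total signal in the two 45bp flanking windows, and all window values are hypothetical.

```python
def site_efficiency(exon_rpkm, intron_rpkm):
    """Fraction of spliced signal at one splice site, from RPKMs of the
    45bp exonic and intronic windows flanking the site (assumed formula)."""
    total = exon_rpkm + intron_rpkm
    return exon_rpkm / total if total else float("nan")

def gene_splicing_efficiency(sites):
    """Percent splicing efficiency, averaged over all unique splice sites
    from all isoforms of a gene."""
    vals = [site_efficiency(e, i) for e, i in sites]
    return 100.0 * sum(vals) / len(vals)

# Hypothetical (exonic, intronic) window RPKMs: a well-spliced mRNA
# such as SLC38A2 versus a poorly spliced lncRNA.
mrna_sites = [(40.0, 1.5), (38.0, 2.0), (41.0, 1.0)]
lnc_sites = [(3.0, 1.2), (2.5, 1.5), (2.0, 1.8)]
```

Under this formulation the hypothetical mRNA scores above 90%, and the hypothetical lncRNA well below it, reproducing the qualitative contrast reported for TBP/SLC38A2 versus LOC100288798.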
In the cytoplasmic fraction, only spliced and polyadenylated isoforms of LOC100288798 lncRNA were detectable, as RNA-seq signal over exons was present in the polyA+, but not in the polyA-, fraction (Fig. 1D, cytoplasm). In the nuclear fraction, stronger RNA-seq signals were detectable over the LOC100288798 lncRNA gene body in the polyA- than in the polyA+ fraction, and no clear enrichment of exonic signals was visible. This indicated that spliced isoforms of LOC100288798 lncRNA were exported to the cytoplasm, whereas mainly unspliced isoforms were retained in the nucleus. To quantify this visual analysis, we calculated RPKM values for LOC100288798 lncRNA and 2 control protein-coding genes, SLC38A2 and TBP, as well as for the XIST lncRNA, which is known to be polyadenylated, nuclear localized and well spliced. 58 We first estimated the efficiency of polyadenylation by calculating the ratio of RNA-seq signal in the polyA+ fraction over the polyA- fraction (RPKM_polyA+/RPKM_polyA-, Fig. 1E). We observed that all 3 control genes, which are known to be polyadenylated, show ratios of ~2-4 for both unspliced (whole gene body, purple bars) and spliced (blue bars) isoforms, indicating efficient polyadenylation of these transcripts. Spliced and unspliced isoforms of LOC100288798 lncRNA showed ratios smaller than 1, indicating inefficient polyadenylation of LOC100288798 lncRNA (Fig. 1E, lncRNA). We next assessed the efficiency of cytoplasmic export by calculating the ratio of RNA-seq signals in the nuclear over the cytoplasmic cell fraction for both the polyA+ and polyA- RNA-seq datasets (Fig. 1F). As expected, the polyA- fraction showed high ratios for both spliced and unspliced isoforms of the 4 tested genes, indicating nuclear enrichment of unprocessed isoforms (Fig. 1F, light blue and light purple bars). In contrast, the pattern of nuclear enrichment of polyadenylated spliced and unspliced isoforms differed notably between the analyzed genes (Fig. 1F, blue and purple bars).
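Both ratios plotted in Fig. 1E-F reduce to simple RPKM quotients. A minimal sketch with hypothetical values that echo the observed pattern (a polyadenylated control mRNA with a polyA+/polyA- ratio in the ~2-4 range, the lncRNA below 1):

```python
def enrichment(numerator_rpkm, denominator_rpkm):
    """Generic RPKM ratio, used both for polyA+ enrichment
    (polyA+ / polyA-) and for nuclear enrichment (nucleus / cytoplasm)."""
    return numerator_rpkm / denominator_rpkm

# Hypothetical RPKMs of spliced isoforms in the two polyA fractions.
polya_plus = {"SLC38A2": 30.0, "lncRNA": 2.0}
polya_minus = {"SLC38A2": 10.0, "lncRNA": 4.0}

ratios = {gene: enrichment(polya_plus[gene], polya_minus[gene])
          for gene in polya_plus}
```

Here the control gene's ratio is well above 1 (efficient polyadenylation) while the lncRNA's ratio falls below 1, matching the qualitative Fig. 1E result; the nuclear-enrichment calculation of Fig. 1F uses the same quotient with nuclear and cytoplasmic RPKMs.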
While spliced and polyadenylated XIST isoforms were almost exclusively present in the nucleus (ratio: ~500), similarly processed isoforms of the protein-coding genes SLC38A2 and TBP showed low ratios, indicating no nuclear enrichment (Fig. 1F). Consistent with our conclusions from visual inspection, spliced isoforms of LOC100288798 lncRNA were exported to the cytoplasm and showed low ratios similar to the analyzed protein-coding genes (RPKM of spliced isoforms in the polyadenylated cytoplasmic fraction = 3.4, while RPKM of spliced isoforms in the polyadenylated whole cell fraction = 2.3, Fig. 1B). Interestingly, unspliced isoforms of LOC100288798 lncRNA showed high ratios, indicating nuclear enrichment. Similar profiles were observed for LOC100288798 lncRNA in 4 other analyzed cell lines (Fig. S1, Table S1B). In summary, this analysis showed that LOC100288798 lncRNA is inefficiently polyadenylated in comparison to SLC38A2, TBP and XIST. Whereas the small fraction of polyadenylated LOC100288798 lncRNA isoforms is exported to the cytoplasm, the major fraction, consisting of unspliced isoforms, is highly enriched in the nucleus. Therefore we show that LOC100288798 lncRNA polyadenylation and nuclear enrichment profiles are distinct from both XIST lncRNA and protein-coding genes.

De novo assembly of LOC100288798 exon structure identifies overlap with SLC38A4

Visual inspection of the RNA-seq data indicated that LOC100288798 transcription extends over the downstream SLC38A4 gene (see continuous RNA-seq signal in Fig. 1D), in spite of RefSeq annotating the 3' end of LOC100288798 112kb upstream from SLC38A4 (Fig. 2 top). Interestingly, human spliced ESTs annotated continuous spliced transcripts overlapping SLC38A4 (Fig. 2). We next aimed to fully annotate LOC100288798 using publicly available RNA-seq data from multiple cell types. We limited this analysis to reads aligned to a 1 megabase pair (Mb) region (chr12:46,500,000-47,500,000) around LOC100288798. We extracted reads from each of the 46 aligned RNA-seq samples used in Fig. 1B (polyA+ as well as ribosomal-depleted total RNA-seq) and performed de novo assembly using the Cufflinks software. 59 Thus, we obtained 46 assemblies, which we merged using the Cuffmerge software 59 to create an integrative de novo annotation of the investigated region (see Fig. 2 for selected isoforms and Table S1C for all the isoforms annotated in the region). Importantly, we identified exon models that share exons with LOC100288798 lncRNA and overlap the SLC38A4 protein-coding gene, indicating that LOC100288798 is a 558kb long lncRNA (chr12:46777455-47335067, see CUFF.281.86 in Fig. 2 and Table S1C).

Figure 2. LOC100288798 exon structure assembly from various tissues extends its annotation to over 500kb, overlapping SLC38A4. UCSC Genome Browser screen shot of the studied locus (chr12:46,772,500-47,422,500). From top to bottom: chromosome position and scale; RefSeq gene annotation (all annotated isoforms are displayed); spliced human ESTs (12/35 ESTs displayed); transcriptome assembly of the locus obtained in this study (Results, Methods). Note that only selected transcripts are shown (11/167 de novo isoforms of LOC100288798 and 4/43 de novo isoforms of SLC38A4), and that both EST and transcriptome assembly data reveal extension of LOC100288798 to over 500kb in length. RNA-seq tracks from the ENCODE/CSHL UCSC hub, with titles containing cell type name, RNA-seq type and transcriptional orientation, are displayed below. Only total whole-cell RNA-seq is displayed. Bottom: normalized RNA-seq signal from wild type human haploid KBM7 cell lines (merged data from 2 wild type clones sequenced in this study, Methods). For all RNA-seq tracks, only the forward strand (Plus Signal) is displayed.
Visual inspection of the LOC100288798 RNA-seq signal in cell types ranging from the highest expressing (CD34 cells, RPKM = 6.68) to the lowest expressing (MNC peripheral blood, RPKM = 0.56) showed that the extended transcription persists independently of expression level (Fig. 2). Therefore LOC100288798 lncRNA consistently overlaps the SLC38A4 protein-coding gene and should be renamed SLC38A4-AS according to the recently suggested nomenclature. 53 As this nomenclature also appears more intuitive, we have used it for the remainder of this study.

Gene trap insertion in the haploid human KBM7 cell line efficiently truncates SLC38A4-AS lncRNA

Although visual inspection of RNA-seq data and exon model assembly suggested that SLC38A4-AS lncRNA is a single lncRNA gene, it is possible that this was an artifact resulting from multiple short overlapping lncRNAs. To address this issue we used the haploid KBM7 cell line, for which a collection of gene trap insertion clones was readily available. 45 We first confirmed that SLC38A4-AS was expressed in wildtype KBM7 cells and found it well expressed over the predicted length by visual inspection of the RNA-seq data performed in this study (Fig. 2 bottom). Next, we identified 3 cell lines from the publicly available KBM7 gene trap collection where independent insertion events had placed gene trap cassettes in the correct orientation into the gene body of SLC38A4-AS (Table 1). Two of these cell lines were predicted to stop SLC38A4-AS transcription at 2,904bp (3kb1 and 3kb2, Fig. 3A), and one cell line at 103,958bp (100kb) downstream of the RefSeq annotated transcription start. To create biological replicates of the single 100kb insertion cell line, we recovered 2 batches of this cell line from frozen stocks and cultured them in parallel (100kb1, 100kb2, Methods, Fig. 3A).
The production of KBM7 gene trap insertion cell lines is a multi-step procedure including infection of cells with the gene trap cassette, fluorescence-activated cell sorting (FACS) and clonal expansion to obtain monoclonal cultures. In addition, different people may have handled different cell lines. These factors are possible sources of gene expression differences, so we controlled for them using multiple control cell lines. First, we obtained 3 different KBM7 cell lines that had not undergone the gene trap insertion procedure but were handled by different people and had different passage numbers (wild type: WT1, WT2, WT3, Fig. 3A). Second, to control for potential effects of the gene trap insertion procedure, we obtained 2 cell lines with gene trap insertions not in SLC38A4-AS, but in the HOTTIP lncRNA gene body, of which one was predicted to stop HOTTIP lncRNA transcription and one was not, based on the mapped cassette insertion orientation (C1 and C2, Table 1, Fig. 3A). To eliminate further batch effects from handling cells and preparing RNA and RNA-seq libraries, all cell lines were obtained as frozen stocks and recovered, cultured and harvested at the same time by one person. Similarly, one person performed RNA extraction and library preparation. After recovery we cultured the cell lines for 8 days and 2 passages. We measured the cell size prior to splitting and harvesting (Methods) and noticed that the C1 and 3kb2 cell lines showed increased peak cell size (Fig. 3B). It has been reported previously that cell size increases with ploidy, 60 and this result therefore indicated that these KBM7 cell lines were not haploid. We then harvested the cells, using 20 million cells for DNA isolation and 100 million cells for RNA isolation. As a further test for ploidy we measured the DNA amount obtained from the 20 million cells. Consistent with the cell size measurements, we found that C1 and 3kb2 cells displayed 2- and 1.5-fold increases in DNA amount compared to wild type controls.
Additionally, we found that 3kb1 and C2 also showed 2- and 1.5-fold increases in DNA amount (Fig. 3C). As both cell size and DNA content are indirect measures of ploidy, we performed karyotyping of selected cell lines (3kb2, 100kb, C1, WT2, Supplemental Figs. 2-5). This confirmed the haploid state of the 100kb and WT2 cell lines and the diploid state of the 3kb2 and C1 cell lines. We also did not detect large-scale chromosomal aberrations in addition to the known t(9;22) translocation. 45 This indicated that most cell lines that underwent the gene trap insertion and clonal expansion procedure either became diploid or were a mixture of haploid and diploid cells. Note that KBM7 cell ploidy does not interfere with any downstream analyses, as RNA-seq expression analyses are performed on normalized values that correct for the increased RNA amount in diploid versus haploid cells. To confirm that both alleles carry the gene trap insertion and to validate the integrity of the genomic locus after the gene trap insertion, we performed 2 DNA blotting assays for the 2 3kb truncation cell lines (see Supplemental Fig. 6A-B for maps of restriction enzymes and probes). First, we identified the expected 2.8kb (the size of the gene trap cassette) increase in size of a genomic EcoRV fragment including the gene trap insertion site in the 3kb1 and 3kb2 cell lines compared to wildtype (Fig. S6C-E). Second, we identified the expected size reduction of a genomic EcoRI/BamHI fragment due to the insertion of a BamHI site with the gene trap cassette (Fig. S6D-F). Importantly, we did not detect any wildtype fragment in the 3kb1 and 3kb2 cell lines, indicating that gene trap insertion occurred in sorted haploid cells and that diploidy arose after cassette insertion. Therefore it can be concluded that both chromosomes in the diploid cells carry the gene trap.
We next tested if gene trap cassette insertions 3kb and 100kb downstream of the SLC38A4-AS transcription start indeed stopped transcription elongation. We designed 5 RT-qPCR probes inside the body of the SLC38A4-AS gene (Table 2, Fig. 3D). We placed 2 probes (start1 and start2) upstream of the 3kb stop cassette insertion site, one probe (middle1) downstream of the 3kb, but upstream of the 100kb, stop cassette, and 2 probes (middle2 and end) downstream of the 100kb stop cassette insertion site. Note that the "end" RT-qPCR probe lies outside of the gene body of RefSeq-annotated LOC100288798. We used all these probes to define the profile of SLC38A4-AS transcription in 3 wild type (blue, WT1-3), 2 control (green, C1, C2), 2 3kb (yellow, 3kb1, 3kb2) and 2 100kb (purple, 100kb1, 100kb2) SLC38A4-AS truncation cell lines (Fig. 3D bar plot). Since SLC38A4-AS RNA-seq signals decreased from the 5' to the 3' end (see Fig. 2), we normalized expression levels to WT1 for each RT-qPCR probe. All cell lines displayed transcription of SLC38A4-AS upstream of the 3kb gene trap insertion site, with increased expression in the 2 3kb truncation cell lines (Fig. 3D, start1 and start2). Consistent with expectations, the 2 3kb truncation cell lines displayed dramatic reduction of SLC38A4-AS transcription 28kb downstream of the transcription start (25kb downstream of the truncation site, middle1), while the 100kb truncation cell lines displayed continuous SLC38A4-AS transcription, since these cell lines carried the stop cassette downstream of this RT-qPCR probe (Fig. 3D, middle1). Expression levels downstream from the 100kb stop cassette were dramatically reduced in both the 3kb and 100kb truncation cells, but largely unchanged in the wild type and the control cells (Fig. 3D, middle2 and end). Thus, RT-qPCR confirmed that the SLC38A4-AS lncRNA was successfully truncated in KBM7 cells at the gene trap cassette insertion sites. Importantly, the lack of transcription at multiple positions downstream of the gene trap cassette insertion sites in all tested cell lines further indicates that the SLC38A4-AS gene generates a single 558kb long transcript.

Figure 3 (caption, continued; see Table 1). Two monoclonal cell lines with independent insertion events that integrated a gene trap cassette 3kb downstream of the SLC38A4-AS transcription start site (TSS) were available (3kb1 and 3kb2). Only one monoclonal cell line had a gene trap insertion 100kb downstream of the SLC38A4-AS TSS; we therefore prepared biological replicates by performing independent thawing and culturing procedures (100kb1 and 100kb2). Left column: we obtained 3 wild type KBM7 control cell lines, which did not undergo any gene trap insertion procedure, were not monoclonal, and were cultured by different people at different times prior to culturing for this analysis (WT1, WT2 and WT3). Middle column: to control for changes during the gene trap insertion and selection procedure, we obtained 2 KBM7 cell lines that did undergo gene trap insertion, within the body of the HOTTIP lncRNA, and were monoclonally expanded (C1 and C2) (see Table 1). (B) Ploidy of KBM7 cell lines assessed by cell size. Bar plot shows peak cell size measured for 9 cultured KBM7 cell lines (Methods). All the cell lines were thawed and processed in one batch by the same person. Cell size was measured at the first splitting (3 days post-thawing, dark gray bars), second splitting (6 days post-thawing, medium gray bars), and prior to harvesting (8 days post-thawing, light gray bars). Gene trap insertion sites (Table 1) and RT-qPCR probes are displayed (Table 2). Bottom: expression profiling of SLC38A4-AS in the KBM7 cell lines (described in A). Error bars represent the standard deviation from 3 RT-qPCR technical replicates. Bars are ordered from left to right as listed (top to bottom) in the legend on the right. For each RT-qPCR probe the expression level in WT1 is set to 100%.
RNA-seq of KBM7 cell lines with truncated SLC38A4-AS lncRNA confirms a single transcription unit overlapping SLC38A4

As RT-qPCR only detects transcripts in a very narrow window at the chosen primer position, we performed RNA-seq to obtain a global picture of the SLC38A4-AS truncation. We chose 2 cell line replicates per group: wild type (WT2 and WT3), control (C1 and C2), 3kb (3kb1 and 3kb2) and 100kb (100kb1 and 100kb2). 50bp single-end RNA-seq and alignment using STAR 55 produced an average of 35 million uniquely mapped reads per sample (standard deviation: 1.0 million reads) (Table S1D). Visual inspection showed similar SLC38A4-AS RNA-seq profiles in wild type and control cells, with a similar decrease in signal from the 5' to the 3' end as seen before (compare Fig. 2 and Fig. 4A wild type). While the 3kb2 cell line showed a clear reduction of RNA-seq signal downstream of the 3kb stop cassette insertion site, 3kb1 seemed to have residual transcription, and thus truncation might be less efficient. Both the 100kb1 and 100kb2 replicates displayed a similar SLC38A4-AS expression profile, with a clear reduction in RNA-seq signal after the gene trap cassette insertion point. We next quantified the RNA-seq signal strength to confirm the conclusions made from visual inspection. To obtain a transcription profile of SLC38A4-AS in each cell line, we calculated the RPKM of 5 regions (relative to the transcription start): 0-3kb, 3kb-50kb, 50kb-100kb, 100kb-300kb and 300kb-600kb (Fig. 4B). The WT, C and 100kb cell lines showed a 3-fold RPKM drop from the 0-3kb to the 3kb-50kb region, with detectable expression in the 3kb-50kb window (RPKM > 0.2), which is consistent with the reported RNA-seq signal decrease from the 5' to the 3' end for lncRNAs. 61 In the 3kb cell lines the gene trap cassette stopped SLC38A4-AS and removed this pattern, and therefore all windows downstream of the gene trap cassette insertion site showed very low expression (RPKM ≤ 0.05).
The WT and C cell lines showed a further 1.8- and 1.7-fold signal drop between the 50-100kb and 100kb-200kb regions, confirming the visual impression that the RNA-seq signal decreases from the 5' to the 3' end in WT and C cell lines. The 100kb cell lines follow the expression pattern of the WT and C cell lines, but the signal drops to very low expression levels (RPKM ≤ 0.02) after the gene trap insertion site. To allow a direct comparison between cell lines, we plotted the expression of each window relative to WT (set to 100%, Fig. 4C). The first window (0-3kb) showed similar expression in the WT, C and 100kb cell lines but was ~3-fold lower in the 3kb cell lines. The following window (3-50kb) showed a further ~3-fold reduction in expression for the 3kb cell lines, whereas all other cell lines showed similar expression of SLC38A4-AS. At the 50-100kb window the expression of the 100kb truncation cell lines started to drop ~2-fold but was still ~2-fold higher than in the 3kb truncation cell lines. In the last 2 windows (100-300kb, 300kb-600kb) the 100kb truncation cell lines showed a low residual expression level (~10-fold less than WT, 6-8-fold less than C), whereas the 3kb truncation cell lines showed a 2-3-fold higher residual expression, likely due to the inefficient truncation of the 3kb1 cell line identified by visual inspection. We observed that while the difference between the 100kb replicates was low for every analyzed SLC38A4-AS region (the maximal difference between 100kb1 and 100kb2 constituted 37% of the mean, at 100-300kb, Fig. 4C), the difference between 3kb1 and 3kb2, which resulted from different integration events, was more notable (maximal difference between 3kb1 and 3kb2: 126% of the mean, at 100-300kb, Fig. 4C). 3kb1 showed 2.5- to 4.4-fold higher expression compared to 3kb2 in the 4 windows downstream of the 3kb gene trap insertion (Fig. 4B).
Table 2. RT-qPCR probes for analyzing the expression profile of SLC38A4-AS lncRNA.

RT-qPCR probe | forward primer, 5'-3' | reverse primer, 5'-3' | distance from TSS, bp
start1 | CCCCGAGCAAATGGTGAATC | GGCATTATGTCATCGTCCTTTCA | 1,560
start2 | CATTCCAAGGCAGTGTTACATTTT | TCGGGGCTAAAGGTGTATGA | 1,452
middle1 | TGGGGCTGAAACATTTAGGC | TCAGGCTCCATGTTCCTACC | 28,415
middle2 | GGAACTAACAACGTCACAGGTAAT | ACCACATTCAACAGGAGAGAATAG | 136,322
end | GTCCCTTCAAAGGAGGGTTT | GAAGGTGCCAAGTTTGAGGT | 338,946

In spite of increased RNA-seq signal compared to the 3kb2 and 100kb truncations, the 3kb1 cell line did not reach the wild type and control levels of SLC38A4-AS transcription (Fig. 4C). It was possible that the difference in truncation efficiency between the 3kb1 and the 3kb2 cell lines was due to sequence aberrations in the splice acceptor sequence in the gene trap cassette. Therefore we amplified and sequenced this region of the gene trap cassette and found it to be identical in the 3kb1, 3kb2 and C1 cell lines (Supplemental Fig. 7A-B). In order to discriminate inefficient truncation of SLC38A4-AS from a contamination of the 3kb1 cell line with wildtype cells, we performed a PCR assay with primers directly flanking the cassette insertion site. We identified the correct wildtype PCR fragment in all tested cell lines, except for the 3kb1 and 3kb2 cell lines, where the cassette insertion separates the primers by 2.8kb, which is not amplified in our settings (Supplemental Fig. 7C). Importantly, this indicates that the 3kb1 cell line is not contaminated with wildtype cells to a detectable level. In summary, RNA-seq confirms efficient truncation of SLC38A4-AS in both 100kb truncation cell lines and the 3kb2 cell line. Interestingly, the global transcriptional analysis of the 3kb1 truncation revealed reduced truncation efficiency in this cell line.
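The windowed truncation profile of Fig. 4B-C can be sketched as follows. The window boundaries are those given in the text; the read positions used here are hypothetical.

```python
# Gene-body windows relative to the SLC38A4-AS transcription start (bp).
WINDOWS = [(0, 3_000), (3_000, 50_000), (50_000, 100_000),
           (100_000, 300_000), (300_000, 600_000)]

def window_rpkm(read_starts, window, total_mapped_reads):
    """RPKM of one gene-body window, counting reads starting inside it."""
    start, end = window
    n = sum(start <= pos < end for pos in read_starts)
    return n * 1e9 / ((end - start) * total_mapped_reads)

def relative_to_wt(sample_rpkms, wt_rpkms):
    """Express each window's RPKM as percent of wild type (Fig. 4C style)."""
    return [100.0 * s / w if w else float("nan")
            for s, w in zip(sample_rpkms, wt_rpkms)]
```

A truncation cell line would then show values near 100% upstream of its cassette and a sharp drop in all windows downstream of it, as reported for 3kb2 and the 100kb replicates.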
SLC38A4-AS truncation causes deregulation of several genes in trans

To investigate if SLC38A4-AS truncation had an effect on gene expression in cis or in trans, we calculated expression levels of RefSeq annotated protein-coding genes and performed differential gene expression analysis using the Cuffdiff software. 62 We compared WT2, WT3, C1 and C2 (4 control replicates) with 3kb1, 3kb2, 100kb1 and 100kb2 (4 targeted cell line replicates). This analysis produced a list of 120 significantly differentially expressed genes (excluding chromosomes X and Y, Table S1E) that we further filtered by requiring a 3-fold expression change between the 2 conditions, which resulted in a list of 41 protein-coding genes (Table S1E, lines in bold). This number of genes was 5-fold higher than the average number of genes differentially expressed (3-fold expression change) in 11 mock comparisons (Table S1F). Interestingly, the 41 genes were distributed across almost all chromosomes (Table S1E, lines in bold). One gene (CD163L1) was down-regulated and 3 (CD9, EMP1 and CRY1) were upregulated on chromosome 12, the same chromosome that contains SLC38A4-AS. However, these genes are located 33-61 million bp away from SLC38A4-AS, and therefore their regulation is more likely to arise from trans effects. We then calculated expression levels (FPKM, Methods) of the 41 significantly deregulated genes reported by Cuffdiff for each of the 8 samples separately, to allow unsupervised clustering to be performed (Methods). This analysis correctly grouped the 2 biological replicates of the 3kb truncation, the 100kb truncation replicates and the wild type replicates (Fig. 5A). Interestingly, C1 and C2, although in the same branch, did not group together, which may relate to the fact that C1 carries a truncated HOTTIP lncRNA (gene trap insertion in sense orientation to HOTTIP, Table 1), while C2 has an antisense insertion in the HOTTIP gene body, which should therefore not truncate it (Table 1).
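The replicate-grouping outcome of the unsupervised clustering (Fig. 5A) can be illustrated without a clustering library by checking that each sample's nearest neighbor under Pearson correlation is its own replicate; the FPKM vectors below are hypothetical stand-ins for the deregulated-gene profiles.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical FPKM vectors over a few deregulated genes:
# replicates resemble each other more than the other condition.
samples = {
    "WT2":    [5.0, 1.0, 8.0],
    "WT3":    [5.5, 1.2, 7.5],
    "100kb1": [1.0, 6.0, 2.0],
    "100kb2": [1.2, 5.5, 2.5],
}

def nearest(name):
    """Most-correlated other sample; hierarchical clustering on
    1 - correlation distance would merge these pairs first."""
    others = {k: v for k, v in samples.items() if k != name}
    return max(others, key=lambda k: pearson(samples[name], others[k]))
```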
We then performed further filtering to create a small stringent list of the deregulated genes. To increase the stringency of the list of differentially expressed genes we performed 3 filtering steps. First, we filtered out genes that showed significant differential expression between wild type (WT2, WT3) and control (C1, C2) samples and thus might be differentially expressed due to the effect of the gene trap cassette insertion procedure (3/41 genes). Second, we removed the genes that showed differential expression between the 3kb and 100kb truncations, thus restricting our list to the genes that are regulated by the part of SLC38A4-AS lncRNA downstream of the 100kb cassette insertion site (18/41 genes). Third, we only retained the genes that were differentially expressed in both pairwise comparisons of control to 3kb (3kb1, 3kb2 vs C1, C2; 12 genes) and control to 100kb samples (100kb1, 100kb2 vs C1, C2; 24 genes). These filtering steps resulted in a stringent list of 6 protein-coding genes (Table 3). Three of these genes, including CD9 (Fig. 5B), were upregulated upon SLC38A4-AS truncation, and 3, including RORB (Fig. 5C), were downregulated. In summary, these data show that genetic truncation of SLC38A4-AS lncRNA results in genome-wide gene expression changes and provides a stringent list of 6 potential SLC38A4-AS target genes. As these results provide clear evidence for the use of the "Human Gene Trap Mutant Collection" to study lncRNAs, we investigated how many lncRNAs can potentially be studied using this collection in its current form. First, we calculated expression for all GENCODE v19 lncRNAs in the 2 wild type cell lines investigated in this study (WT1, WT2) and found 2,307 non-overlapping lncRNA loci to be expressed (i.e. to express at least one lncRNA isoform with RPKM>0.2).
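The three filtering steps described above can be sketched as set operations on the gene lists from the individual comparisons. The gene names below are toy placeholders, not the genes of Table S1:

```python
def stringent_targets(de_all, de_wt_vs_ctrl, de_3kb_vs_100kb,
                      de_ctrl_vs_3kb, de_ctrl_vs_100kb):
    """Step 1: drop genes already different between wild type and control
    (possible cassette-procedure artifacts). Step 2: drop genes differing
    between the 3kb and 100kb truncations. Step 3: keep only genes
    deregulated in both pairwise control-vs-truncation comparisons."""
    kept = set(de_all) - set(de_wt_vs_ctrl)
    kept -= set(de_3kb_vs_100kb)
    kept &= set(de_ctrl_vs_3kb) & set(de_ctrl_vs_100kb)
    return sorted(kept)
```

Each step only shrinks the candidate set, which is why the final stringent list (6 genes in the text) is a subset of the initial 41.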
Next, we investigated how many GENCODE v19 lncRNAs contained a gene trap insertion on the same strand and found that 938 lncRNAs are likely to be truncated in one of the available cell lines (Fig. 6A, left bar). Overlapping these 2 data sets revealed 409 expressed lncRNAs carrying a gene trap insertion in the current collection (Fig. 6A, middle bar). If we set a higher expression cutoff of RPKM>0.5, we find 266 lncRNAs carrying a gene trap (Fig. 6A, right bar). We investigated the position of gene trap insertions relative to the transcriptional start site of lncRNAs and found enrichment at the 5' end (Fig. 6B). Finally, we examined the well-studied lncRNA MALAT1 and identified 5 gene trap insertions close to the 5' end corresponding to potential knock-out cell lines (Fig. 6C).

Discussion

Here we report the first use of the "Human Gene Trap Mutant Collection" 45 to study the function of a human lncRNA. To demonstrate the utility of this collection we analyzed cell clones that successfully truncated the SLC38A4-AS lncRNA (renamed from LOC10028879), which displays RNA biology features distinct from protein-coding genes, including low expression and inefficient splicing. We also investigated this gene trap collection as a whole for its suitability for the study of lncRNAs, and identified hundreds of lncRNAs with gene trap insertions including the well-studied MALAT1 lncRNA. We therefore demonstrate here the utility of the "Human Gene Trap Mutant Collection" for studying lncRNAs and also identify SLC38A4-AS as a very long and novel functional regulatory lncRNA. Prior to analyzing gene trap efficiency we examined the RNA biology of the SLC38A4-AS lncRNA, which had not previously been characterized. We showed that SLC38A4-AS, unlike many lncRNAs, does not show tissue-specific expression. While tissue-specificity is often considered an indication of functionality, 63 several ubiquitously expressed lncRNAs have been proven to play important gene regulatory roles.
40,64 We used a set of public RNA-seq data to show that SLC38A4-AS lncRNA is inefficiently spliced and that the major unspliced isoform is nuclear localized. Importantly, by comparing SLC38A4-AS to 2 control protein-coding genes, we show that the unspliced isoforms we detect for SLC38A4-AS are not just an intronic signal. We conclude this from the finding that the polyadenylation and localization profiles for unspliced isoforms of the protein-coding genes, which are notably highly expressed, differ dramatically from those of SLC38A4-AS. Minor spliced isoforms of SLC38A4-AS lncRNA are well detectable in the cytoplasm and thus are exported and likely stable. SLC38A4-AS lncRNA is thus a transcript with unusual RNA biology features different from protein-coding genes. We performed de novo transcriptome assembly in the region and were able to show that transcription of SLC38A4-AS extends 289kb downstream of the RefSeq annotated 3' end and overlaps the downstream SLC38A4 gene. We then obtained KBM7 cells from the "Human Gene Trap Mutant Collection" with gene trap insertions at 2 different locations (3kb and 100kb downstream of the transcription start) in the gene body of SLC38A4-AS lncRNA to test whether the unusual RNA biology features interfered with efficient truncation by the gene trap cassette. By using qRT-PCR as well as RNA-seq we identified one cell line with efficient truncation at both insertion sites. These data not only verify that gene trap insertions in KBM7 cells efficiently truncate SLC38A4-AS lncRNA, but also confirm our prediction of the extended SLC38A4-AS lncRNA length. Detailed RNA-seq analysis showed that the 3kb1 cell line truncates less efficiently than the 3kb2 cell line despite these cell lines sharing the same gene trap insertion site. Differences in the efficiency of truncation between different insertion sites have been documented for one truncation of the Airn lncRNA.
In this case a truncation cassette insertion at 3 different genomic loci caused successful truncation of the lncRNA whereas the same cassette was highly inefficient when inserted into a CpG island. 14 Differences in the gene trap efficiency of protein-coding genes were also noted for different cassette integration sites. 45 However, a difference between similar insertion sites, as shown for 3kb1 and 3kb2, was surprising. DNA gel blotting experiments did not detect a large scale rearrangement of the chromosomal locus with the gene trap insertion, nor did they identify a contamination of the 3kb1 cell line with wildtype cells. As DNA blotting might not be sensitive enough to detect a low level of wildtype cell contamination, we validated these results by a PCR assay. We also validated that the splice acceptor sequence was unchanged in the 3kb1 cell line. Taken together, an aberration of the genetic sequence in 3kb1 is unlikely to be the cause of the reduced efficiency of transcription termination in this cell line. A connection between chromatin structure and transcription termination has been made in yeast 65 and it has been suggested that local chromatin changes influence splicing. 66 It is therefore possible that cell line specific local chromatin changes result in differences in truncation efficiency at identical cassette integration points. As global gene expression analysis showed high similarity between both 3kb truncation cell lines, it is highly likely that the residual level of SLC38A4-AS expression seen in the 3kb1 cell line is not sufficient to maintain a wildtype gene expression pattern. We therefore conclude that the gene trap approach used for the "Human Gene Trap Mutant Collection" is a useful tool to truncate inefficiently spliced lncRNAs. We noted that 2 qRT-PCR primers that are close to the 3kb truncation cassette insertion site showed elevated qRT-PCR signals specifically in 3kb truncation cell lines.
Interestingly, RNA-seq did not support this elevated transcription on the forward strand, which corresponds to SLC38A4-AS lncRNA, but identified strong transcription from the reverse strand directly at the gene trap insertion site that was absent in the control cell lines. Similar transcription on the reverse strand at the gene trap insertion point was visible, albeit at lower levels, for the 100kb truncation cell lines (Fig. S8). Thus, we provide evidence that the gene trap cassette used for the "Human Gene Trap Mutant Collection" can drive transcriptional activity, as was suggested earlier. 45 Additionally, we show that this activity can be strong (2-fold higher than SLC38A4-AS) and therefore has to be carefully considered when the expression of genes in close proximity is affected, as transactivation of protein-coding genes by transcriptionally active viral LTRs was reported in gene therapy patients. 67 Interestingly, SLC38A4-AS lncRNA shares several unusual RNA biology features with the imprinted mouse lncRNA Airn, which also overlaps in antisense orientation and silences the protein-coding Igf2r gene. Although Airn lncRNA is inefficiently spliced, 5% of its nascent transcripts are spliced and give rise to stable lncRNAs that are exported to the cytoplasm. 20 These spliced Airn lncRNA isoforms are, however, not connected to the silencing mechanism. 14 Interestingly, truncation experiments identified that Airn silences Igf2r due to its transcriptional overlap, a phenomenon called transcriptional interference. 14,40 The Airn lncRNA also silences 2 protein-coding genes that it does not overlap in a tissue-specific manner, likely by targeting repressive chromatin to the promoters of these genes. 68,69 We tested if the SLC38A4-AS lncRNA silences in a similar manner the SLC38A4 protein-coding gene that it overlaps and/or SLC38A2, which is located 10kb away.
We were surprised to find that neither SLC38A4 nor SLC38A2 protein-coding genes were affected by the truncation of SLC38A4-AS lncRNA. In addition, expression analysis of multiple tissues did not show anti-correlating expression patterns of the 2 protein-coding genes with the lncRNA. In the case of imprinted expression involving a repressor lncRNA, such a pattern would not be expected, as one allele expresses the protein-coding gene whereas the other allele expresses the lncRNA. Therefore we conclude that SLC38A4-AS lncRNA most likely does not share functional similarities with the imprinted Airn lncRNA and does not control SLC38A4 or SLC38A2 protein-coding gene expression. These data support the hypothesis that imprinted expression of Slc38a4 in the mouse is rodent-specific, as it is also absent from the pig and cow. 70,71 In order to test the functional importance of SLC38A4-AS lncRNA as a gene regulator in trans, we tested whether the truncation of the lncRNA resulted in gene expression changes in KBM7 cells. In accordance with recent guidelines established for the correct analysis of lncRNA knockout experiments, we included a number of controls in this analysis. 32 First, we excluded batch effects from the handling of cells by having all cell lines cultured in parallel by one person. Second, it is possible that the gene trap insertion disrupts an important genetic element, causing gene expression changes of protein-coding genes that are not dependent on the lncRNA. Therefore we analyzed 3 independently derived SLC38A4-AS lncRNA truncation cell lines: 3kb1 and 3kb2, with an identical insertion site, and 100kb. As controls we used 2 batches of wild type KBM7 cell lines.
In order to identify genes that are specifically deregulated upon truncation, we performed differential gene expression analysis between SLC38A4-AS lncRNA truncation cell lines (3kb1, 3kb2, 100kb1, 100kb2) and all control cell lines (C1 and C2, which carried gene traps at unrelated loci, and WT1 and WT2, which lacked gene traps). This analysis resulted in 120 differentially expressed genes, 41 of which were more than 3-fold up/downregulated in the truncation cell lines. Importantly, none of the differentially expressed genes were located in close proximity to the SLC38A4-AS lncRNA, as reported for well-known cis-regulating lncRNAs such as Airn or KCNQ1OT1. 36 Whereas clustering based on the 41 differentially expressed genes allowed correct grouping of the replicates, performing a similar analysis using the expression of genes in the 10Mbp region around SLC38A4-AS resulted in sporadic clusters. This indicates a lack of consistent changes of these genes between control and truncation cell lines and thus further supports a lack of cis-acting regulatory function of SLC38A4-AS lncRNA (Supplemental Fig. 9). We plotted expression values of the 41 significantly deregulated genes in all 8 cell lines as a heat map and found that a number of genes seemed to be specifically expressed in one control cell type (C1/C2 or WT1/WT2) or in one of the truncation cell types (3kb1, 3kb2 or 100kb1, 100kb2) rather than in all control vs. all truncation cell types. Therefore, we also performed pairwise comparisons to remove these genes. We do note that this approach limits the part of the lncRNA examined for function to regions downstream of the 100kb truncation cassette (i.e., spanning ≈400kb of the SLC38A4-AS gene body). Additionally, we note that the function of the first 3kb of SLC38A4-AS lncRNA (upstream of the 3kb gene trap cassette position) was not assessed in our study, while it is possible that this region may possess a function.
Of the 6 genes that pass the most stringent filters for deregulation in SLC38A4-AS lncRNA truncation cell lines, 2 are of special interest. The first is cluster of differentiation 9 (CD9), which belongs to the superfamily of tetraspanins, integral membrane proteins that play a role in multiple biological processes by interacting with membrane proteins like other tetraspanins, growth factors and cytokine receptors. Clinical data suggest that CD9 is a suppressor of metastasis and modulates tyrosine kinase receptor signaling in cancer. 72 CD9 is also a marker for haematopoietic stem cells 73 and was found to be up-regulated upon induction of pluripotent stem cells (iPS) from KBM7 cells, 74 although it is not necessary for pluripotency in mice. 75 The second gene is RAR-related orphan receptor B (RORB or RORb), which encodes the nuclear receptor subfamily 1, group F, member 2 (NR1F2) protein that binds to DNA and inhibits transcription. 76 RORB has not been implicated in cancer, 77 but was associated with the mammalian circadian clock 76 and was found to be a member of a gene hub that discriminates human iPS cells from stem cells. 78 Little is known about the importance of RORB in KBM7 cells; however, it is unlikely to be essential for this cell line, as an unbiased mapping of gene trap insertions in this cell line identified 7 gene trap insertion events in this gene, with 4 predicted to stop RORB transcription. 79 As mentioned above, gene trap cassette removal could provide a valuable rescue control. The Human Haploid Gene Trap Collection contains cell lines with gene trap cassettes flanked by loxP sites that can thus be removed by Cre recombinase expression, and the expression of the targeted genes might be restored. Among the analyzed SLC38A4-AS truncation cell lines, 3kb1 and 3kb2 did have loxP sites flanking the gene trap cassette, while the 100kb truncation cell lines did not.
However, while removal of the truncation cassette by expressing the Cre recombinase and subsequent re-expression of full-length SLC38A4-AS lncRNA could restore its wildtype gene expression pattern, it is possible that the gene expression changes initiated by SLC38A4-AS lncRNA are accompanied by changes in secondary gene expression or in the epigenetic landscape that may not be immediately reversible. Such an example was reported for the Airn lncRNA that silences the Igf2r protein-coding gene in early development. After silencing by Airn transcription, Igf2r acquires repressive epigenetic marks on its promoter and silencing is stably maintained in the absence of Airn lncRNA expression. 46 Therefore we conclude that the use of multiple control cell lines may prove a more efficient way to study lncRNA function in comparison to multiple targeted cell lines. In summary, this report shows that the "Human Gene Trap Mutant Collection" is a useful tool to study lncRNA function. Importantly, we identified 857 GENCODE v19 lncRNAs (http://www.gencodegenes.org/releases/19.html) for which KBM7 gene trap insertion cell lines are available (Methods and https://opendata.cemm.at/barlowlab/). Similar to protein-coding genes, the gene trap cassette preferentially inserts close to the 5' end of lncRNAs, which is useful for functional studies as the bulk of the lncRNA will not be produced. 45 We found that 409 lncRNA loci with a gene trap insertion show an RPKM>0.2 (RPKM of at least one isoform in the locus) and 266 have an RPKM>0.5, which constitute 44% and 28%, respectively, of all GENCODE v19 lncRNA gene trap insertion clones. It is to date unclear which expression cutoff can be used to indicate functional importance, and it is therefore possible that lncRNAs expressed at a lower level also have functional importance. The "Human Gene Trap Mutant Collection" could be a useful tool to study this question.
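The cutoff-dependent counts above amount to intersecting the set of gene-trap-carrying lncRNA loci with the set expressed above a chosen RPKM threshold. A sketch with made-up loci (only MALAT1 is a real name from the text):

```python
def trappable_lncrnas(rpkm_by_locus, trapped_loci, cutoff=0.2):
    """lncRNA loci that carry a same-strand gene trap insertion and
    express at least one isoform above the RPKM cutoff."""
    expressed = {locus for locus, rpkm in rpkm_by_locus.items()
                 if rpkm > cutoff}
    return expressed & set(trapped_loci)
```

Raising the cutoff from 0.2 to 0.5 can only shrink the intersection, which is why the counts drop from 409 to 266 in the text.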
KBM7 cells can also be converted to iPS cells and have the potential to be differentiated into different lineages. 74 Therefore it is possible that lncRNAs that are lowly expressed in wild-type KBM7 cells are highly expressed in a different lineage, which can also be studied using KBM7 iPS cells. Gene trap KBM7 cells from the "Human Gene Trap Mutant Collection" are simple to obtain and culture and therefore offer a rich resource that allows analysis of lncRNA function in a human system. This is illustrated by the example of the MALAT1 lncRNA. This lncRNA was previously studied using a truncation cassette, 44 an experiment that includes (1) cloning of the truncation cassette for homologous recombination, (2) optimizing an endonuclease to cleave genomic DNA at the desired position, and (3) selection, screening, expansion and testing of correctly targeted clones. 44 This effort increases linearly for the production of cell lines with different truncation cassette insertion sites. In contrast to this time-consuming approach, 5 KBM7 gene trap clones that truncate the MALAT1 lncRNA at different positions close to the 5' end are readily available and ready to be analyzed. According to our results, the unusual RNA biology inherent to many lncRNAs does not influence the ability of the gene trap cassette to stop lncRNA transcription, and gene trap truncations are therefore a universal tool for studying a wide range of lncRNAs. The availability of multiple control cell lines is an additional advantage and allows thorough artifact control. Using SLC38A4-AS lncRNA as an example, we also show that the gene trap resource together with the already available RNA-seq resources from the ENCODE consortium allows fast characterization of a lncRNA of interest. We anticipate that similar integrated approaches that make efficient use of these publicly available resources will allow the fast functional characterization of the many lncRNAs found in the human genome.
Splicing efficiency calculation

Splicing efficiency was calculated using public total (ribosomal depleted) RNA-seq datasets of high depth (135-371 million reads, Table S1A). Splicing efficiency of each RefSeq annotated splice site was estimated by calculating the RPKM of exonic and intronic 45bp regions surrounding the splice site, starting 5bp away from the precise splice site position to allow for potentially imprecise annotation of the splice site. For each splice site that passed the coverage cutoff (exonic RPKM > 0.2), the "splicing efficiency" S = 100 × (1 − RPKM_intronic/RPKM_exonic) was calculated. Splicing efficiency was within the range from 0 for fully unprocessed splice sites (RPKM_intronic ≥ RPKM_exonic; S was set to 0 when it was calculated to be <0) to 100 for perfectly processed splice sites (RPKM_intronic = 0). We then calculated the average splicing efficiency of all the unique splice sites for each gene and assigned this value as the splicing efficiency of the gene.

Assembly of SLC38A4-AS exon structure using publicly available RNA-seq data from multiple cell types

Exon structure assembly was performed for each of the 46 public RNA-seq data sets only in the region of interest: samtools view -b [position sorted STAR alignment] chr12:46,500,000-47,500,000 > tissue.1Mb.bam. De novo transcriptome assembly was performed for each of the 1Mb regions in all the samples separately using Cufflinks version 2.2.1 with the following command: cufflinks --multi-read-correct --output-dir [output] -F 0.01 -p 7 --library-type fr-firststrand (if RNA-seq is stranded) --mask-file pseudogenes.gtf tissue.1Mb.bam. Pseudogene annotation was obtained from GENCODE v19. The resulting transcript assemblies were then merged using Cuffmerge with the following command: cuffmerge -s hg19_fasta --keep-tmp -p 8 --min-isoform-fraction 0 [list of all gtf files from 46 cufflinks assemblies]. Single exon transcripts were discarded.
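The splicing-efficiency definition above translates directly into code: a per-site value clamped to [0, 100], skipped below the coverage cutoff, then averaged over a gene's unique splice sites.

```python
def splicing_efficiency(rpkm_intronic, rpkm_exonic, min_exonic=0.2):
    """Per-splice-site efficiency S = 100 * (1 - RPKM_intronic/RPKM_exonic),
    clamped to [0, 100]; None for sites below the coverage cutoff."""
    if rpkm_exonic <= min_exonic:
        return None
    s = 100.0 * (1.0 - rpkm_intronic / rpkm_exonic)
    return max(0.0, min(100.0, s))

def gene_splicing_efficiency(sites):
    """Average efficiency over a gene's unique splice sites, given a list
    of (RPKM_intronic, RPKM_exonic) pairs."""
    vals = [splicing_efficiency(i, e) for i, e in sites]
    vals = [v for v in vals if v is not None]
    return sum(vals) / len(vals) if vals else None
```

The clamp at 0 mirrors the text's rule that sites with RPKM_intronic ≥ RPKM_exonic are scored as fully unprocessed rather than negative.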
KBM7 cell culture

All gene trap KBM7 cell lines were obtained frozen from Horizon Genomics GmbH (http://www.horizon-genomics.com/). WT KBM7 cell lines were from Horizon Genomics GmbH or from the Sebastian Nijman lab. All cell lines were cultured in filter cap flasks in IMDM (Sigma) medium (with L-Glutamine, supplemented with Penicillin/Streptomycin and 10% Fetal Bovine Serum (PAA Laboratories (GE Healthcare))) at 37°C with 5% CO2. KBM7 are suspension cells. Cell concentration and cell size were measured using a Casy cell counter (Schärfe System GmbH).

RNA preparation

RNA was isolated from pelleted KBM7 cells using TRIreagent (Sigma), dissolved in RNA Storage Solution (RSS, Ambion) and stored at −20°C. RNA was DNase I treated (DNAfree kit, Ambion). Quality control was performed by assessing RNA integrity using the Agilent RNA 6000 Nano Kit.

RT-qPCR

RNA was converted to cDNA using the RevertAid First Strand cDNA Kit (Fermentas) with a -RT (no reverse transcriptase) control reaction for each RNA sample according to the manufacturer's protocol. RT-qPCR was performed using MESA GREEN qPCR MasterMix Plus for SYBR Assay I dTTP (Eurogentec). Primers (Table 2) were designed using Primer3. RT-qPCR was performed using standard curves in 3 technical replicates for each sample, and the standard deviation between the replicates was used to define the error and plot the error bars.

DNA-blot

DNA extraction, restriction enzyme digestion and DNA gel blots were performed using standard methods. The hybridization probe was amplified by PCR, cloned and gel purified. Membranes were exposed to an imaging plate (FujiFilm) that was scanned (Typhoon TRIO, GE Healthcare). Levels were adjusted on the whole image to increase the visibility of all bands on the image.

Chromosome analysis

Metaphase preparation and FISH were carried out by standard methods. Dividing cells were locked in metaphase by adding colcemid (0.1mg/ml final concentration) (Gibco, ThermoFisher) for 60 minutes.
After fixation cells were dropped onto slides, dried at 42°C for 30 minutes and then incubated at 60°C overnight. One slide was used for Giemsa-trypsin banding of chromosomes. For FISH analyses a Cy3 labeled probe mix (Kreatech) was used, which detects the centromeric regions of chromosomes 1, 5 and 19.

Strand-specific RNA-seq library preparation and RNA sequencing

4 μg of DNase I treated RNA underwent ribosomal depletion using the RiboZero rRNA removal kit Human/Mouse/Rat (Epicentre) following the manufacturer's protocol. The RNA-seq library was prepared with ribosomal depleted RNA using the TruSeq RNA Sample Prep Kit v2 (Illumina) with modifications to preserve strand information as described. 80 Quality and size distribution of the prepared libraries were assessed with Experion DNA 1K Analysis Chips and used for molarity calculation. 8 RNA-seq libraries were barcoded using TruSeq RNA Sample Prep Kit v2 provided barcodes and pooled in equal molarities. 50bp single-end RNA sequencing was performed at the Biomedical Sequencing Facility (http://biomedical-sequencing.at/BSF/) using an Illumina HiSeq 2000.

KBM7 cell lines clustering based on differential gene expression

Expression level (FPKM) of RefSeq protein-coding genes was calculated in each of the 8 samples separately using Cuffdiff (same command as above, no replicates). Expression of the 41 significantly differentially expressed genes (Fig. 5A) was used to perform unsupervised clustering of the samples. The heat map was built in R using the pheatmap function with options clustering_distance_cols = "canberra", clustering_distance_rows = "euclidean".

Expression calculation and gene trap insertion analysis

GENCODE v19 lncRNA expression was calculated as RPKM (described above) separately for WT2 and WT3 cell lines. The average RPKM from both calculations was used in the figure. To determine the number of lncRNAs with gene trap insertion sites we downloaded cassette insertion sites from http://kbm7.genomebrowser.cemm.at/ in July 2015.
Insertion sites can be updated, and the gene trap insertion sites used in this publication are available from http://opendata.cemm.at/barlowlab. Overlaps on the same strand with lncRNA annotations from GENCODE v19 were identified and overlapping annotations were merged with the bedtools software. GENCODE v19 lncRNA annotation was obtained at ftp://ftp.sanger.ac.uk/pub/gencode/Gencode_human/release_19/gencode.v19.long_noncoding_RNAs.gtf.gz. To calculate the position of gene trap insertions within the gene body we divided each GENCODE v19 lncRNA into 10 equally sized regions (numbered 1-10 starting at the 5' end). We then calculated the overlap of mapped gene trap insertion sites with these regions (bedtools) and created a sum of all insertions mapped to similarly numbered regions.

Author contributions

A.E.K., D.P.B. and F.M.P. conceived the study and wrote the manuscript. I.V. discovered the SLC38A4-AS lncRNA and performed preliminary experiments characterizing this lncRNA. J.N. performed karyotype analysis and FISH. A.E.K. and F.M.P. performed DNA blots and PCR analyses. A.E.K. performed bioinformatic analysis, cell culture and RNA-seq.

Data access

Raw RNA-seq data from 8 KBM7 cell lines and the differential expression analysis output of Cuffdiff (Results, Fig. 5A) were deposited in NCBI's Gene Expression Omnibus 81 and are accessible through GEO Series accession number GSE71284 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE71284). The full de novo assembly in the 1Mb region around SLC38A4-AS lncRNA, the RNA-seq signal in the 8 sequenced KBM7 cell lines, as well as the locations of the gene trap insertion cassettes used in the study can be viewed in the related UCSC genome browser hub via https://opendata.cemm.at/barlowlab/.

Disclosure of Potential Conflicts of Interest

No potential conflicts of interest were disclosed.
Practical and accurate calculations of Askaryan radiation

An in-depth characterization of coherent radio Cherenkov pulses from particle showers in dense dielectric media, referred to as the Askaryan effect, is presented. The time-domain calculation developed in this article is based on a form factor to account for the lateral dimensions of the shower. It is computationally efficient and able to reproduce the results of detailed particle shower simulations with high fidelity in most regions of practical interest, including Fresnel effects due to the longitudinal development of the shower. In addition, an intuitive interpretation of the characteristics of the Askaryan pulse is provided. We expect our approach to benefit the analysis of radio pulses in experiments exploiting the radio technique.

I. INTRODUCTION

In 1962 Askaryan [1] proposed to detect Ultra-High Energy (UHE) cosmic rays and neutrinos by observing the coherent radio pulse from the excess of electrons in a shower developing in a dense, dielectric and nonabsorptive medium. The scaling of the emitted power with the square of the particle energy, which has been experimentally confirmed in accelerators [2][3][4][5], makes the technique very promising for the detection of UHE particles and has motivated a variety of past and present experiments [6][7][8][9][10][11][12][13] along with some in the planning stages [14,15]. Key to the success of these initiatives is an accurate and computationally efficient calculation of the radio emission properties due to the Askaryan effect in UHE showers. The problem of computing the coherent radiation from particle showers can be approached in a variety of ways. Purely Monte Carlo methods have been developed to simulate the induced showers in dense media. One can obtain the contribution to the radiation from every particle track in the shower from first principles (Maxwell's equations) and add the contributions, which automatically takes coherent effects into account.
This approach has been applied to calculate the Fourier components of the radiation (i.e. in the frequency-domain) [16][17][18][19][20][21][22][23][24][25][26][27][28], and only recently to the calculation of the radiation as a function of time (i.e. in the time-domain) [29]. Similar methods have also been applied to the calculation of radio emission in atmospheric showers [30][31][32][33][34][35], in which the Askaryan effect is not the dominant mechanism and which will not be addressed in this paper. Monte Carlo methods have the advantage that the full complexity of shower phenomena is accounted for, the influence of shower-to-shower fluctuations can be addressed, and the dependence on the type of primary particle, the hadronic model, along with any other assumptions can be studied with high accuracy. However, purely Monte Carlo methods are typically very time-consuming, especially at ultra-high energies, and approximations are required [27]. Another numerical approach currently being developed is the application of finite difference in the time-domain (FDTD) techniques [36]. The idea is to discretize space-time and propagate the electric and magnetic fields by approximation of Maxwell's differential equations into difference equations [37]. FDTD techniques have the advantage that they can be easily adapted to computing the effects of dielectric boundaries and index of refraction gradients, and can be linked to an accurate Monte Carlo simulation of showers in dense media. The FDTD technique is, however, rather computationally intensive [36]. Analytical approaches have also been developed. In these methods the charge development in the shower is approximated as a current density vector [38]. Typically, parameterizations of the longitudinal and lateral profiles of the showers are used to describe the main features of the space-time evolution of the charge distribution.
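The coherent addition of per-track contributions mentioned above can be illustrated with a toy phasor sum at a single frequency: N identical in-phase tracks give summed power scaling as N², the coherence behind the Askaryan power scaling with energy squared. This is a sketch, not the ZHS formalism.

```python
import cmath

def summed_power(amplitudes_and_phases):
    """|sum over tracks of A_k * exp(i*phi_k)|^2 at one frequency:
    the coherent power from adding complex track contributions."""
    total = sum(a * cmath.exp(1j * phi) for a, phi in amplitudes_and_phases)
    return abs(total) ** 2

# 10 in-phase unit-amplitude tracks -> power 10^2 = 100 (fully coherent),
# versus 10 for an incoherent (power-wise) sum of the same tracks.
coherent_power = summed_power([(1.0, 0.0)] * 10)
```

Two tracks in exact antiphase cancel, which is the same mechanism that suppresses emission when phases decohere away from the Cherenkov angle.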
In this approach one calculates the vector potential by integrating the Green's function to obtain the electric field. These integrals are in general difficult both numerically and analytically, but they can be greatly simplified by the use of approximations [38]. These methods are usually less time-consuming but also less accurate than purely Monte Carlo simulations. One limitation is that showers elongated due to the Landau-Pomeranchuk-Migdal (LPM) effect [39,40] cannot be parameterized easily due to large shower-to-shower fluctuations, and the emitted radiation is known to depend strongly on the particular longitudinal profile of the shower [18][19][20]. Analytical techniques have also been applied with different levels of sophistication to the calculation of radio emission in atmospheric showers [41][42][43][44]. It is clearly important to be able to calculate coherent radiation due to the Askaryan as well as other effects with a variety of techniques, as each has its own set of advantages and disadvantages. Our goal in this work is to provide a calculation method that is both fast and able to reproduce all the essential characteristics resulting from detailed Monte Carlo shower simulations. Semi-analytical techniques are a very good option. The general idea behind this method is to obtain the charge distribution from detailed Monte Carlo simulations as the input for an analytical calculation of the radio pulse. Other semi-analytical methods in dense media [18][19][20][29] and in the atmosphere [45] have been attempted in the past. In dense media, the frequency spectrum of the radio emission due to the Askaryan effect has been shown to be easily obtained from the Fourier transform of the longitudinal profile of the shower [20]. This technique reproduced the frequency spectrum as predicted in Monte Carlo simulations with a high degree of accuracy, but only for angles away from the Cherenkov angle in the Fraunhofer approximation.
Complementarily, in the time-domain, it was shown that the electric field away from the Cherenkov angle and in the far-field regime can be very accurately calculated from the time-derivative of the simulated longitudinal development of the excess charge [29] (see also [42]). This paper presents a semi-analytical calculation that is able to reproduce the electric field in the time-domain at all angles with respect to the shower axis in both the far-field (Fraunhofer) and "near-field" 1 regions of the shower when compared to a full Monte Carlo simulation such as the well-known Zas-Halzen-Stanev (ZHS) code [17,29,46]. The technique is computationally efficient since it only requires the convolution of the longitudinal charge excess profile with a parameterized form factor to fully reproduce the coherent radiation effects from particle showers in a homogeneous dielectric medium. Once the longitudinal shower profile is obtained, the electric field can be calculated with a simple numerical integral. It is worth remarking here that the longitudinal development of extremely energetic showers can be obtained quickly and with high precision using hybrid simulation techniques, which consist of following only the highest energy particles in the shower while accounting for the lowest energy particles with parameterizations. In particular, the complexity of the longitudinal profile of showers affected by the LPM effect can be very well reproduced with hybrid techniques [18][19][20] (for an example see Fig. 5 in [27]).

1 By "near-field" we mean a region in which the Fraunhofer approximation is not valid because of the longitudinal dimensions of the shower, and not the region where the Coulomb field associated with the charge excess cannot be considered negligible.
The semi-analytical method described in this work is well suited to obtain the time-domain radio emission due to electromagnetic showers, for all observation angles both in the far-field and the near-field regions of the shower. The approach can be used in practically all experimental situations of interest since it only begins to show significant discrepancies when the observer is at distances comparable to the lateral dimensions of the shower (∼ 1 m in ice). Since the typical distance between antennas in experiments such as the Askaryan Radio Array (ARA) [14] is ∼ 10 − 100 m, we expect the results to be accurate enough in most practical situations. We expect our results to benefit experiments exploiting the radio technique. They can be used in detector simulations to test the efficiency for pulses observed from various directions. In particular, with our approach one can test the ability to detect the craggy pulses resulting from the LPM effect. The calculation can also be implemented in the data analysis of experiments by using likelihood functions aimed at the reconstruction of the longitudinal charge excess profile from a detected pulse. If the longitudinal distribution is consistent with the elongation and multiple-peaked structure due to the LPM effect, this can be used for neutrino flavor identification since it is only expected in UHE showers due to electron neutrinos.

II. MODELING ASKARYAN RADIATION

The case of interest in this paper is the radiation due to the charge excess of a shower in a linear dielectric medium such as ice, salt, or silica sand. We use SI units all throughout this work. The Green's function solutions to Maxwell's equations provide the potentials Φ and A given a charge distribution ρ with current density vector J = ρv.
Assuming a dielectric constant ǫ and magnetic constant µ, the solutions in the Coulomb gauge (∇·A = 0) can be written as

A(x, t) = (µ/4π) ∫ dt′ ∫ d³x′ [J⊥(x′, t′)/|x − x′|] δ(t′ − (t − n|x − x′|/c))   (2)

where the delta function gives the observer's time t delayed with respect to the source time t′ by the time it takes light to reach the observation point x from the source position at x′. The transverse current is given by J⊥ = −û × (û × J), where û = (x − x′)/|x − x′| is the unit vector pointing from the source to the observer. The non-trivial proof that the J⊥ above is the only component relevant to the radiation part of the field is given in [47]. Radiation calculations are typically performed in the Lorentz gauge (∇·A + n²c⁻² ∂Φ/∂t = 0) [24,38,42,48], with n the refractive index of the medium. However, our primary interest is to derive an approach that can be easily applied to numerical radiation calculations. In the Coulomb gauge, the scalar potential only describes near-field terms which can be ignored for our purposes. This simplifies the computation of the radiative electric field E = −∇Φ − ∂A/∂t to a simple time derivative E = −∂A/∂t. Thus, all that is needed is a computation of the vector potential as a function of time at the position of interest. The radiation of a particle shower in a dense medium is obtained by treating it as a current density J with its main features depicted in Fig. 1. The evolution in spacetime of the excess charge in a shower can be modeled as a pancake δ(z′ − vt′) traveling with velocity v along the z-axis. The net charge profile of the shower Q(z′) rises and falls along the shower direction z′ and spreads laterally in x′ and y′. The velocity vector v is primarily directed in the shower axis direction ẑ but may have a small lateral component and a lateral dependence due to scattering of particles in the shower. This may seem like an unnecessary complication but the small scatter will lead to an observable asymmetry of the Askaryan pulse in the time-domain.
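Since in the Coulomb gauge the field follows from a simple time derivative of the vector potential, the final step of any numerical implementation is straightforward. A minimal sketch (the Gaussian pulse here is an illustrative placeholder, not an Askaryan waveform):

```python
import numpy as np

# In the Coulomb gauge the radiative field is E = -dA/dt, so once the
# vector potential A(t) is sampled on a uniform grid, the field follows
# from a numerical derivative. The Gaussian A(t) below is a toy stand-in.
dt = 1e-2                          # time step, ns
t = np.arange(-5.0, 5.0, dt)       # observer time, ns
A = np.exp(-t**2)                  # toy vector potential (arbitrary units)

E = -np.gradient(A, dt)            # E = -dA/dt (central differences)
```

The bipolar shape of E (negative lobe while A rises, positive lobe while it falls) is the generic signature of a field obtained this way.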
The speed v is assumed to be close to the speed of light for particle showers of interest. The associated current density vector can be modeled as a cylindrically symmetric function, where r′ = √(x′² + y′²) is the cylindrical radius. The function f(r′, z′) represents the lateral charge distribution in a plane transverse to z′ depicted in the bottom of Fig. 1. This model of the current density is similar to the ones used in [24,38] except that v is allowed to have first order radial components. The vector potential is then given by Eq. (4), where φ′ is the azimuthal angle in cylindrical coordinates and v⊥(r′, φ′, z′) generally depends on r′, φ′, and z′. In its full glory Eq. (4) seems rather intractable. The lateral distribution f(r′, z′) is the most difficult part to model. It is for this reason that the semi-analytical models developed in [20] and [29] ignored the lateral development of the shower, and hence were not able to accurately describe Askaryan radiation near the Cherenkov angle, where the lateral distribution is known to determine the degree of coherence of the emitted radiation [17].

FIG. 1. Geometry of a high energy particle shower. The top figure shows a side view (r − z plane) of the charge excess profile. The shower front propagates at velocity v and is modeled as a thin pancake δ(z′ − vt′). The charge evolution of the shower front traces out the longitudinal profile Q(z′) shown here as an asymmetric distribution. The lateral spread of the shower due to Coulomb scattering, shown in (blue) arrows, is modeled by a velocity vector v = v(r′, φ′) that is mostly directed along the shower axis with small radial components. The scatter results in observation angles that differ from the nominal angle θz relative to the shower axis and will lead to an asymmetric pulse. The radiative portion of the electric field lies along the vector p, which is the orthogonal projection of z along the direction û from the source to the observer. The bottom panel shows a frontal view (r − φ plane) of the shower. The lateral charge excess distribution f(r′) is represented in (blue) shading.

Modeling the radiation due to the lateral distribution of the shower can be attempted using the standard NKG function [49]. However, this parameterization has singularities that make the integrals particularly difficult to solve and interpret, and it assumes constant particle velocities parallel to the shower axis, which is not sufficiently accurate as will be shown further below. In the following we will show that the radiation due to the lateral distribution of the shower can be parameterized, and this parameterization can be used to predict the emitted radiation matching that obtained from the full ZHS simulation [17,29] with great accuracy. Moreover, the lateral distribution of the shower is mainly due to low energy processes such as Coulomb scattering, and as a result the shape of the parameterized radiation is independent of shower energy in the energy range of interest, as will be shown below. On the other hand, the longitudinal distribution Q(z′) can change dramatically depending on the energy of the shower due to the LPM effect, and needs to be obtained in Monte Carlo simulations such as ZHS. We will also show that our calculation of Askaryan radiation works in both the near and far-field approximations, as long as the lateral coordinates are treated in the far-field, i.e. the observation of the shower occurs at a distance larger than its lateral dimensions (∼ 1 m in ice).

A. The vector potential at the Cherenkov angle

Let us first consider the Fraunhofer approximation for radiation emitted by a shower. This implies expanding |x − x′| ≈ R − û · x′, where R = |x|.
For an observer looking in the direction û = x/|x| = (sin θ cos φ, sin θ sin φ, cos θ) in spherical coordinates, and assuming without loss of generality that φ = 0, the above expansion can be written as |x − x′| ≈ R − z′ cos θ − r′ sin θ cos φ′ (6). Approximating the denominator of the vector potential in Eq. (4) by |x − x′| ≈ R, but keeping the approximation in Eq. (6) in the argument of the δ-function, we obtain Eq. (7). Integrating over the source time t′ results in Eq. (8), where v̂⊥ = −û × (û × v̂) is the transverse projection of the unit velocity vector. In our model we make the assumption that the shape of the lateral density and the particle velocity depend only very weakly on the shower depth z′. With these approximations Eq. (8) can be written as Eq. (9), where we have defined the function F as in Eq. (10).

FIG. 2. The vector potential from electromagnetic showers in homogeneous ice (density ρ = 0.924 g cm⁻³ and refractive index n = 1.78, θC ∼ 55.8°) observed at the Cherenkov angle for various energies from the ZHS simulation. The functional behavior is identical except for an overall scaling factor that is directly proportional to the shower energy. The width is determined by the lateral distribution of the shower while the asymmetry is mainly due to the radial spread of particle tracks; both are the result of the Coulomb scattering in the medium.

Note that we have explicitly included a factor sin θ in Eq. (9) for convenience, anticipating that the radiation is mainly polarized in the direction transverse to the observer's direction. The vector function F contains the radial r′ and azimuthal φ′ integrals and can be considered as an effective form factor that accounts for the lateral distribution of the charged current density, quite analogous to that obtained in the frequency domain in [38]. At the Cherenkov angle we have 1/v − n cos θC/c = 0 and Eq. (9) results in Eq. (11), where F at the Cherenkov angle is obtained from Eq.
(10). Using symmetry arguments we can now project the form factor F onto only two orthogonal directions using unit vectors û along the observation direction and p̂ in the direction of p = −û × (û × ẑ), i.e. F = Fu û + Fp p̂. The direction of p̂ has been chosen perpendicular to the observation direction and lying on the plane defined by ẑ and û as shown in Fig. 1. This has been done anticipating the expected polarization of the radiation, mainly in the direction of p̂, in order to make the orthogonal component along the direction of observation Fu negligible. The form factor F is a medium dependent function that accounts in an effective way for the radial and azimuthal interference effects due to the lateral structure of the shower, including possible directional variations in the velocity vector v of the shower particles. In principle F can be obtained from analytical solutions of the cascade equations, but this is rather involved. Alternatively one could use standard parameterizations of the lateral distribution function of high energy showers such as the NKG [49]. However, as stated before, this does not account for the radial components of the velocity and gives results symmetric in time, which are qualitatively different from results obtained in detailed simulations such as that shown in Fig. 2. The trick is to extract F from simulations that effectively account for the radial component of particle velocities and the lateral distribution of the excess charge, which are responsible for the time asymmetry characteristic of simulations. The basic idea behind this article is that the form factor is obtained from the vector potential at the Cherenkov angle and, as we will show later, the emission at other angles is easily related to that at the Cherenkov angle. The vector potential at the Cherenkov angle in the time domain is calculated with a detailed shower simulation and parameterized for practical purposes. The form factor F can be obtained directly equating Eq.
(11) to the vector potential as obtained in the simulation. As anticipated, the Fu component is typically below 1% of Fp and it can be neglected, and Fp can be simply obtained from Eq. (14), where LQtot = ∫ dz′ Q(z′) has been referred to as the excess projected track-length [17]. Taking the absolute value of Eq. (14), the functional form of Fp is given by Eq. (15), where A(θC, t) = |A(θC, t)|. Fp represents the average vector potential at the Cherenkov angle per unit excess track length (given by LQtot) scaled with the factor 4πR/µ. Detailed simulations of electromagnetic showers performed with the ZHS Monte Carlo code in ice produce a consistent time-dependent vector potential at all energies of interest, as shown in Fig. 2. The results for homogeneous ice can be parameterized by Eq. (16), where E is the energy of the shower in TeV and t is the observer time in ns. The result is accurate to within 5%. The shape of A(θC, t) depends very weakly on shower energy, while the normalization is proportional to the energy as becomes evident in Fig. 2. Note also that at higher energies the fluctuations are reduced because the number of particle tracks increases almost linearly with shower energy.

B. The radiation in the far-field

Given the time domain parametrization of the radiation at the Cherenkov angle in Eq. (16), we will first obtain the pulse as seen by an observer in the far-field at any observation angle. The integral in Eq. (9) can then be written as Eq. (17), a straightforward numerical integration with Fp given by Eqs. (15) and (16).

FIG. 3. Top panel: Vector potential in the time-domain with 10 ps time sampling (corresponding to a sampling frequency of 100 GHz) as obtained in ZHS simulations. This is compared to the calculation presented in this work. The longitudinal charge profile is shown as the 1D model presented in [29], where the depth and charge are linearly rescaled to give the observer time and vector potential.
Middle panel: Electric field in the time-domain comparing our method to ZHS results. Bottom panel: The electric field amplitude spectrum obtained from the ZHS simulation and the Fourier transform amplitudes of the electric field obtained from the calculation presented in this work. Note that the discrepancy between the time-domain electric field of the ZHS simulations and our results is due to the incoherent radiation at high frequencies.

Eq. (17) expresses the vector potential as a convolution of the form factor, which parameterizes the emission from the lateral distribution of the shower, with the longitudinal profile of the excess charge. The form factor Fp is a function that has to be evaluated at the time t at which the observer in the far field sees the portion of the shower corresponding to the depth z′. That time is given by t = nR/c + z′/v − z′n cos θ/c. We have made the only assumption that the shape of Fp depends weakly on the stage of longitudinal evolution of the shower. At the Cherenkov angle, the far-field observer sees the whole longitudinal development of the shower at once, i.e. z′/v = z′n cos θC/c, in which case Eq. (17) reduces to the vector potential at the Cherenkov angle given by Eq. (14). In Fig. 3 we show an example of the vector potential and electric field in the time-domain due to an electromagnetic shower with energy E = 100 EeV from the ZHS simulation. The fields are observed in the Fraunhofer region at an angle θ = θC − 0.3° and they are compared to our results obtained with Eq. (17) using the longitudinal distribution Q(z′) from the same simulation. The agreement between the vector potential obtained directly in the Monte Carlo simulation and the prediction of Eq. (17) lies within a few percent in the region relevant to the pulse (top panel of Fig. 3). The difference between this calculation and the ZHS electric field in the time domain is greater (middle panel of Fig. 3), but as shown in the bottom of Fig. 3 this is due mostly to the incoherent emission of the shower at high frequencies.
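The far-field evaluation just described can be sketched numerically: for each observer time, the form factor is evaluated at its retarded argument and weighted by Q(z′). The Gaussian form factor and longitudinal profile below are illustrative placeholders for the fitted parameterizations and a simulated Q(z′), and overall constants (µ/4π) are dropped.

```python
import numpy as np

C = 0.299792458                     # speed of light, m/ns
N = 1.78                            # refractive index of ice
THETA_C = np.arccos(1.0 / N)        # Cherenkov angle for beta = 1

def far_field_A(t_obs, theta, R, z, Q, Fp, v=C):
    """Sketch of the far-field convolution:
    A ~ sin(theta)/R * integral dz' Q(z') Fp(t - nR/c - z'/v + n z' cos(theta)/c)."""
    dz = z[1] - z[0]
    arg = t_obs - N * R / C - z / v + N * z * np.cos(theta) / C
    return np.sin(theta) / R * np.sum(Q * Fp(arg)) * dz

# Toy placeholders: Gaussian longitudinal profile and form factor.
z = np.linspace(0.0, 10.0, 2000)                # shower depth, m
Q = np.exp(-(z - 5.0) ** 2)                     # toy Q(z'), arbitrary units
Fp = lambda tt: np.exp(-(tt / 0.1) ** 2)        # toy form factor, tt in ns

R = 1000.0                                      # observer distance, m
t0 = N * R / C                                  # light travel time, ns
t = t0 + np.linspace(-1.0, 6.0, 1400)           # observer times, ns

A_cher = np.array([far_field_A(ti, THETA_C, R, z, Q, Fp) for ti in t])
A_off = np.array([far_field_A(ti, THETA_C + np.deg2rad(5.0), R, z, Q, Fp) for ti in t])
```

At θ = θC the retarded argument is independent of z′, so the whole profile is seen at once and the pulse is just a rescaled form factor; away from θC the pulse broadens and its amplitude drops.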
The Fourier-transformed amplitudes of the time-domain electric field obtained in our approach are based on smooth parameterizations, while the frequency spectrum obtained directly in the ZHS simulation includes incoherence effects coming from the fine structure of the shower at the individual particle level. Note that when f(r′, z′) = δ(r′)/r′, i.e. if we neglect the lateral distribution of the shower, Eq. (17) reduces to the 1-dimensional model in [29], which fails at describing the features of the radio emission for angles close to the Cherenkov angle. To illustrate this, we show in the top panel of Fig. 3 the vector potential obtained in the one dimensional (1D) model in [29], which is a linear rescaling of the longitudinal charge excess profile, showing a clear disagreement with the results of the full ZHS simulation, as expected.

C. Askaryan pulses in the "near-field"

We can now generalize Eq. (17) for an observer in the "near-field" region of the shower. In this case it is more natural to work in cylindrical coordinates and place the observer at (r cos φ, r sin φ, z). Without loss of generality we can again assume the observer is at φ = 0, giving Eq. (18) for the distance |x − x′| between source and observer. In dense media, the lateral distribution is in the scale of centimeters, which means that for all practical purposes the observer is at any given instant in the far-field region with respect to the lateral distribution. The idea is to solve the vector potential in Eq. (4) using the Fraunhofer approximation to account for the lateral distribution at any given time t′. We expand Eq. (18) to first order in r′, giving |x − x′| ≈ √(r² + (z − z′)²) − r′ sin θ(z′) cos φ′ (19), where sin θ(z′) = r/√(r² + (z − z′)²), but we take into account that the distance in the denominator of the vector potential depends on the time t′, or equivalently on the position z′ in the shower, as √(r² + (z − z′)²). This is in contrast to the case of the far field calculation in which the distance in the denominator of the vector potential is constant and equal to R.
We proceed as in the case of the far-field calculation. After integrating over t′, the vector potential in Eq. (4) can be written as Eq. (20), where the transverse projection of the velocity vector now introduces a new dependence on the longitudinal source coordinate z′ due to the fact that in the near-field û(z′) = (r r̂ + (z − z′) ẑ)/√(r² + (z − z′)²) depends on z′. If we define the form factor containing the radial r′ and azimuthal φ′ integrals as in subsection II A, we obtain a similar expression, Eq. (21). Note that the form factor defined in Eq. (21) has the same functional form as that defined in Eq. (15), and they only differ in the argument of the delta function. The form factor F as obtained in the far field can be applied to the near field (in relation to the longitudinal development of the shower) simply by modifying its argument. Neglecting again the component of F parallel to û, the vector potential in the near-field can be written as Eq. (22). Note that a new z′ dependence is introduced through the polarization vector p(z′) = −û(z′) × (û(z′) × ẑ). In Fig. 4 an example is shown of the z′ dependence of p(z′). Also note that the sin θ term in Eq. (17) has been absorbed in Eq. (22) through p = sin θ p̂. In the near field the explicit z′ dependence is necessary because the longitudinal profile of the shower is observed at different angles. This means that parts of the shower observed at different depths will have differing polarization vectors. This modification accounts exactly for the interference between different z′ points along the shower development. This result has also been determined from a one dimensional current density model in [50]. Eq. (22) tells us that the vector potential in the near-field region of the shower can be obtained as a convolution of the form factor Fp, that parameterizes the interference effects due to the lateral distribution of the shower, and the longitudinal profile of the excess charge.
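A scalar sketch of this near-field convolution (magnitude only, ignoring the z′-dependent direction of the polarization vector; the profile and form factor are toy placeholders and overall constants are dropped): each depth z′ is seen at its own angle θ(z′) and its own retarded time t = z′/v + n√(r² + (z − z′)²)/c, with a 1/distance falloff.

```python
import numpy as np

C = 0.299792458                     # speed of light, m/ns
N = 1.78                            # refractive index of ice

def near_field_A(t_obs, r, z, zp, Q, Fp, v=C):
    """Scalar sketch of the near-field convolution: each depth z' is seen at
    its own angle theta(z') and retarded time t = z'/v + n*sqrt(r^2+(z-z')^2)/c."""
    dzp = zp[1] - zp[0]
    d = np.sqrt(r**2 + (z - zp) ** 2)            # distance from each depth z'
    sin_theta = r / d                             # sin of viewing angle of z'
    arg = t_obs - zp / v - N * d / C
    return np.sum(Q * sin_theta / d * Fp(arg)) * dzp

zp = np.linspace(0.0, 25.0, 5000)                # shower depths, m
Q = np.exp(-(zp - 12.5) ** 2 / 8.0)              # toy longitudinal profile
Fp = lambda tt: np.exp(-(tt / 0.1) ** 2)         # toy form factor, tt in ns

r, zobs = 10.0, 20.0                             # observer at (r, 0, z), m
t = np.linspace(110.0, 155.0, 3000)              # observer times, ns
A = np.array([near_field_A(ti, r, zobs, zp, Q, Fp) for ti in t])
```

For this geometry the pulse switches on sharply at the minimum observer time, reached when the depth z′ is viewed at the Cherenkov angle, and then decays into a tail, reproducing the spike-plus-tail shape seen in the simulations.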
The form factor function Fp has to be evaluated at the time t at which an observer in the near field sees the portion of the shower corresponding to the depth z′. That time is clearly given by t = z′/v + n√(r² + (z − z′)²)/c. The difference between this expression and that obtained in the far field is that Fp is always evaluated at the time t an observer sees the position z′ in the shower, which is different for the far- and near-field regions. In Fig. 5 we show an example of the vector potential in the time domain for various observers in the near field region of the shower as obtained in full ZHS simulations. The shower has an energy E = 100 PeV and a longitudinal dimension of ∼ 25 m, and the observer is placed at different positions z along the shower axis and at a fixed radial distance r = 10 m. The ZHS simulation also gives the longitudinal profile of the excess charge Q(z′) which we have introduced into Eq. (22) to obtain the vector potential and compare it to that obtained directly by the Monte Carlo, also shown in Fig. 5. The agreement between the vector potential obtained directly in the Monte Carlo simulation and the calculation in our approach is remarkable. For distances to the shower axis larger than ∼ 1 m the difference between our approach and the Monte Carlo is typically ∼ 1 % or better for values down to ∼ 3 orders of magnitude below the peak of the vector potential. This difference starts to increase gradually as the distance to the shower axis decreases and becomes comparable to the lateral dimensions of the shower, where the parameterization of the vector potential at the Cherenkov angle given in Eq. (16) is not expected to be valid. Since the typical distance between antennas in experiments such as the Askaryan Radio Array (ARA) [14] is ∼ 10 − 100 m, we expect our results to be accurate enough in most practical situations.

III. THE APPARENT MOTION OF A CHARGE DISTRIBUTION
The temporal behavior of the vector potential traces the motion of a charged particle according to the retarded time. This is an old idea discussed by Feynman in [51], applied to elucidate the properties of synchrotron radiation. More recently, this approach has also been used in the Cherenkov radiation calculation due to linear tracks in the near field [52] and one dimensional current densities in [50]. In this section we analyze the characteristics of our results in terms of the apparent motion of the charge density distribution to gain an intuitive understanding of the radiation due to a particle shower developing in a homogeneous dielectric medium.

FIG. 6. [...] portion, corresponding to the minimum of the source and observer time relation, contributes significantly to the radiation near the Cherenkov peak. The resulting vector potential is shown in the right hand side. The Cherenkov radiation spike corresponds to the compressed mapping of the charge excess distribution to the observer time.

A. Time delay effects

The apparent motion of particles is encoded in the Green's function solution of the vector potential, Eq. (4), by the argument of the delta function taking the source time t′ to the observer time t, i.e. t = t′ + n|x − x′(t′)|/c (23). The term |x − x′| traces the motion of the current density vector J at position x′(t′) to determine the observer time. We can gain much insight into the properties of the vector potential resulting from the ZHS particle shower simulation, shown in Fig. 5, by momentarily ignoring the lateral distribution of the shower. In this case the observer time t is given in terms of the source time t′ by t = z′/v + (n/c)√(r² + (z − z′)²) (24), where we have substituted z′ = vt′ and r is the cylindrical radial position of the observer. When v < c/n, Eq. (24) has a unique observer time corresponding to each source time. However, in the case of v > c/n there always exists a range of observer positions such that Eq.
(24) has two source times corresponding to every observer time (see an example in the top left panel of Fig. 6). In addition, a minimum value of t (different from the trivial minimum corresponding to the beginning of the shower at t′0) may exist when nβ > 1. In other words, the observer first sees the radiation corresponding to a depth in the shower z′min ≠ z′0 and then sees contributions from shower depths before and after arriving simultaneously. This minimum can be characterized by looking at the derivative of the retarded time relation, ∂t/∂t′ = 1 − nβ (z − vt′)/√(r² + (z − vt′)²) (25). The extrema in the relation between the source time and the observer time are given by requiring the above equation to be equal to zero. A solution exists only when v > c/n and indeed corresponds to a minimum value in the observer time tmin. The corresponding shower coordinate z′min, given by the source time t′min = z′min/v, is z′min = z − r/√(n²β² − 1) (26). The angle of observation θ with respect to the shower axis corresponding to the shower position z′ is given by tan θ = r/(z − z′) (27); when z′ = z′min it is straightforward to show that the angle corresponds to the Cherenkov angle cos θC = 1/(nβ), i.e. the minimum time tmin at which the observer first sees the shower corresponds to the shower coordinate lying at the Cherenkov angle. Its value is given by tmin = z′min/v + (n/c) r/sin θC (28). A peculiar consequence of these relations is that if an observer is placed at a position such that the shower is always seen with θ < θC then z′min corresponds to the end of the shower, which is an apparent violation of causality. In the case where θ > θC then z′min corresponds to the beginning of the shower as expected. This relation can be seen in Fig. 7 and is discussed in depth later in this section. In the analytical solution (Fig.
6), for that particular observer seeing the region around shower maximum with angles close to the Cherenkov angle, it is evident that a given observation time t corresponds to two different shower coordinates z′±, one corresponding to an early development of the shower observed at angle θ < θC and the other to a late one at angle θ > θC. When viewing particle showers around the Cherenkov angle in the near-field, the radiation due to the early parts of the shower interferes with radiation due to the late parts of the shower. The apparent violation of causality is a relativistic effect due to the index of refraction of the medium being n > 1. Note also that for shower positions observed below the Cherenkov angle the derivative ∂t/∂t′ < 0, meaning that time appears to run backwards. The depths z′± are easily obtained by expressing the source time t′ in terms of the observer time t by inverting Eq. (24), which is quadratic in t′, with cn = c/n. Real solutions exist if the argument of the square root is non-negative, which is equivalent to t > tmin with tmin given in Eq. (28). The features of the vector potential due to the apparent motion of a charge distribution along an axis with v > c/n are illustrated in Fig. 6. In the bottom panel we show the Greisen parametrized longitudinal profile of the excess charge as a function of z′ or equivalently t′. In the center panel we show the relation between t and t′ given in Eq. (24) for ice (n = 1.78) with the shower front traveling at the speed of light. This relation has a minimum at t′min = z′min/v given in Eq. (26), which corresponds to an observer time tmin given in Eq. (28).
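The geometry of this minimum can be verified numerically: scanning the retarded-time relation of Eq. (24) and comparing against the closed forms quoted above. This is a verification sketch (β = 1, ice), not part of the pulse calculation.

```python
import numpy as np

C = 0.299792458          # speed of light, m/ns
N = 1.78                 # refractive index of ice
beta = 1.0
v = beta * C
r, z = 10.0, 20.0        # observer at (r, 0, z), m

# Observer time vs. source depth, t(z') = z'/v + n*sqrt(r^2 + (z - z')^2)/c  (Eq. (24))
zp = np.linspace(0.0, 25.0, 200001)
t = zp / v + N * np.sqrt(r**2 + (z - zp) ** 2) / C

i = int(np.argmin(t))                            # depth seen first by the observer
theta_at_min = np.arctan2(r, z - zp[i])          # viewing angle of that depth
theta_C = np.arccos(1.0 / (N * beta))            # Cherenkov angle

# Closed forms: z'_min = z - r/sqrt(n^2 beta^2 - 1) and
# t_min = z'_min/v + (n/c) r/sin(theta_C)
zp_min = z - r / np.sqrt((N * beta) ** 2 - 1.0)
t_min = zp_min / v + N * r / (np.sin(theta_C) * C)
```

The scan confirms that the first light an observer receives comes from the depth viewed at exactly the Cherenkov angle.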
Around t′min the derivative ∂t/∂t′ is very small and as a consequence the charge density of the source that corresponds to a relatively wide interval of source times t′min ± ∆t′ = t′min ± ∆z′/v is projected and seen by the observer during a small interval tmin ± ∆t. This causes a time compression which enhances the radiation, especially when the geometry is such that at tmin the observer located at (r, 0, z) sees the region around the shower maximum. This is illustrated in the right panel of Fig. 6, where the vector potential corresponding to the charge distribution shown in the bottom panel is depicted. The sharp initial peak of the vector potential is due to the compression of radiation into a short interval of time as seen by the observer. For late enough observer times, the relation between the apparent and the observer time approaches linearity and the relation is causal. The observer sees later parts of the shower, shown in the bottom panel of Fig. 6, at later times. This corresponds to the long tail in the vector potential depicted in the right panel of Fig. 6. It is worth remarking that the relativistic effects described here arise in any situation where v > c/n and are not limited to the description of Askaryan radiation [35].

B. Interpretation of simulation results

The corresponding time delay analysis of the results of the ZHS particle simulation shown in Fig. 5 is displayed in Fig. 7, using the same profile and source positions as depicted in Fig. 5. The observer located at (x, y, z) = (10, 0, 20) m in Figs. 5 and 7 sees a sharp and strong spike in the vector potential (and electric field) that is not matched by that seen by observers located at other positions in Fig. 5. In the middle panel of Fig. 7 we show the corresponding observer time t vs. the source position z′ relation.
For the observer at (10, 0, 20) m one can clearly see a region with a small derivative ∂t/∂t′ responsible for the compression effect which leads to an enhancement in the vector potential. As shown with the aid of Eq. (27), the time at which an observer sees the shower first corresponds to observation at the Cherenkov angle. This can also be seen in the top panel of Fig. 7, where we have plotted the angle between the position z′ along the shower axis and the location of the observer. An observer at (10, 0, 10) m also sees a fraction of the shower with angles around the Cherenkov angle. The Cherenkov pulse is not as pronounced because the net charge in that region is significantly smaller than the charge at shower maximum, and does not last as long as what the observer at (10, 0, 20) m sees. In this view Cherenkov radiation is a geometrical phenomenon due to the minimum in the relation between the observer time and the source position 3 . The apparent causality violations are manifested in the shape of the vector potential as viewed by different observers. An observer at an angle smaller than the Cherenkov angle will see the evolution of the shower with an inverted causal order at all times. This is the case of the vector potential labeled (x, y, z) = (10, 0, 40) m in Fig. 5, where the observer sees a ∼ 25 m long 100 PeV energy shower (shown in the inset of Fig. 5 and in the bottom of Fig. 7) with the radiation corresponding to larger depths at earlier times. The longitudinal charge excess distribution shown in Fig. 7 has a primary peak followed by a smaller secondary peak. The vector potential traces this feature but in the reversed time sequence. In the case of an observation at angles larger than θC, the shower is seen in the normal causal order at all times. This case is illustrated by the vector potential labeled (x, y, z) = (10, 0, 10) m in Fig. 5, where the radiation corresponding to the charge excess distribution at larger depths is observed at later times.
The peaks of the vector potential match the order of the peaks of the charge excess distribution in the bottom of Fig. 7. It is also worth noting that even though single antenna measurements in the near-field make it difficult to reconstruct the longitudinal profile of the charge excess, observations from multiple stations located at tens of meters from the shower axis do provide the necessary information.

3 One cannot forget that the geometrical effect manifested as an index of refraction n > 1 is in fact due to an interaction of the excess charge with the atoms in the medium and which is, in general, frequency dependent. The discussion in this section is only relevant to frequency bands where the index of refraction is reasonably approximated by a constant.

IV. SUMMARY AND OUTLOOK

We have derived a highly detailed and computationally efficient approach for the calculation of Askaryan pulses. The electrodynamic calculations leading to the relation between the pulse features and the shower characteristics can be intuitively understood via the apparent motion of charges. Viewed from this perspective, one can easily retrace the time-domain behavior of the pulse to the shape of the electromagnetic shower and the observer versus source time relation as shown in Fig. 6. There are many interesting features of Askaryan radiation due to the fact that the speed of the shower front exceeds the speed of light in the medium. In this treatment, the radiation due to the charge excess of an electromagnetic shower is understood as a "dense" or compressed mapping of the charge excess profile to the vector potential via the observer vs. source time relation. The mapping is densest at the minimum of the observer vs. source time relation, which corresponds to observations at the Cherenkov angle. In addition, for observation at angles smaller than the Cherenkov angle, time appears to run backwards.
This manifests itself in the time-reversed mapping of the longitudinal profile of the charge excess to the time-domain vector potential.

The primary motivation for developing this calculation in the time domain is to understand the temporal behavior of the Askaryan pulse. In the frequency domain, this is equivalent to understanding the phase versus frequency relation. Although it is possible to do this in a completely frequency-domain approach, the time-domain relations can be intuitively understood and are easier to compute. The computational algorithm presented here is summarized as follows:

1. Compute the vector potential of the Askaryan pulse at the Cherenkov angle, A(θ_C, t), and use it together with the total charged track length LQ_tot = ∫ dz′ Q(z′) to extract the functional form of the form factor F_p. (This has been done in this article for electromagnetic showers in ice, in which case it is possible to use directly the parameterization provided in Eq. (16). For other situations it needs to be re-evaluated with a detailed Monte Carlo simulation.)

2. Obtain the charge excess longitudinal profile of an electromagnetic shower, Q(z′). This can be provided either as the output of a particle shower simulation or by using a parameterization.

3. Convolve F_p with Q(z′) according to Eq. (17) in the far-field or Eq. (22) in the near-field to obtain the time-domain vector potential.

4. Obtain the electric field from a trivial numerical derivative of the vector potential with respect to time: E = −∂A/∂t.

The formalism developed here can also be applied to the reconstruction of longitudinal shower profiles. In the far-field this can only be done if the pulse was detected away from the Cherenkov angle; otherwise the pulse shape is approximately the same for any given longitudinal charge excess profile. Away from the Cherenkov angle, the pulse traces the shape of the longitudinal profile convolved with the lateral profile response.
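The four-step recipe above can be sketched end-to-end. In the sketch below, the profile Q(z′), the form-factor shape, and the viewing angle are toy stand-ins (the real F_p comes from Eq. (16) and Q(z′) from a shower simulation); only the far-field mapping of Eq. (17) is illustrated:

```python
import numpy as np

c = 0.299792458   # speed of light, m/ns
n = 1.78          # index of refraction of ice

# Step 2: toy Gaussian charge excess profile Q(z') (stand-in for a simulation)
dz = 0.1
z = np.arange(0.0, 10.0, dz)
Q = np.exp(-0.5 * ((z - 5.0) / 1.5) ** 2)

# Step 1 (assumed): toy Gaussian form factor instead of the Eq. (16) parameterization
dt = 0.01
t = np.arange(-2.0, 2.0, dt)
Fp = np.exp(-0.5 * (t / 0.2) ** 2)

# Step 3: far-field convolution. Each slice z' radiates at observer time
# t - z'(1 - n cos(theta))/c, so A is Q mapped onto the time axis through the
# compression factor (1 - n cos(theta))/c and smeared by F_p.
theta = np.deg2rad(57.0)                  # viewing angle, just off theta_C ~ 55.8 deg
stretch = (1.0 - n * np.cos(theta)) / c   # observer-time shift per metre of track
A = np.zeros_like(t)
for shift, q in zip(z * stretch, Q):
    A += q * np.interp(t - shift, t, Fp, left=0.0, right=0.0) * dz

# Step 4: electric field from the numerical time derivative, E = -dA/dt
E = -np.gradient(A, dt)
```

As the viewing angle approaches θ_C the stretch factor goes to zero and the whole profile piles up into one short pulse (the densest mapping); inside the Cherenkov cone the factor changes sign, i.e. the profile is traced in reverse, as described above.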
For very extended longitudinal profiles, such as those resulting from UHE showers affected by the LPM effect, the tracing of the profile can be seen for angles as small as 0.3° away from the Cherenkov angle (see Fig. 3). The overall quality of the reconstruction can be assessed with simulations that are specific to the experiment in question. The reconstruction of longitudinal profiles has interesting experimental applications, such as the identification of the primary particle or ν flavor inducing the shower. This is particularly relevant for the electromagnetic component of a ν_e-induced shower, with its multiply-peaked structure due to the LPM effect.

In the near-field the reconstruction of shower longitudinal profiles is also possible. If the radiation is detected by a single station, the reconstruction is complicated by the fact that the portions of the shower above and below the Cherenkov angle may interfere with each other, depending on the position of the antenna. However, if multiple antennas observe the radiation due to a single shower, it is possible to obtain a highly constrained reconstruction of the longitudinal profile. The example depicted in Fig. 5 shows a case where this could be done for antennas in ice spaced tens of meters apart. The formalism provided in this paper will allow the experimentalist to simulate these measurements, find the optimal antenna placement, and assess the quality of reconstruction. This would be of particular interest to arrays such as the planned ARA [14] and ARIANNA [15] experiments, where the shower could potentially be observed by multiple stations in Antarctic ice.

In a future publication we plan to produce time-domain parameterizations of the vector potential at the Cherenkov angle for electromagnetic showers in various media such as salt and the lunar regolith.
In addition, time-domain parameterizations of hadronic showers will be included, which will allow the experimentalist to produce full simulations of neutrino interactions with flavor-dependent parameters. This will be useful for experiment simulations and candidate event reconstructions.
Concurrent once-daily versus twice-daily chemoradiotherapy in patients with limited-stage small-cell lung cancer (CONVERT): an open-label, phase 3, randomised, superiority trial

Summary

Background Concurrent chemoradiotherapy is the standard of care in limited-stage small-cell lung cancer, but the optimal radiotherapy schedule and dose remain controversial. The aim of this study was to establish a standard chemoradiotherapy treatment regimen in limited-stage small-cell lung cancer.

Methods The CONVERT trial was an open-label, phase 3, randomised superiority trial. We enrolled adult patients (aged ≥18 years) who had cytologically or histologically confirmed limited-stage small-cell lung cancer, an Eastern Cooperative Oncology Group performance status of 0–2, and adequate pulmonary function. Patients were recruited from 73 centres in eight countries. Patients were randomly assigned to receive either 45 Gy radiotherapy in 30 twice-daily fractions of 1·5 Gy over 19 days, or 66 Gy in 33 once-daily fractions of 2 Gy over 45 days, starting on day 22 after commencing cisplatin–etoposide chemotherapy (given as four to six cycles every 3 weeks in both groups). The allocation method used was minimisation with a random element, stratified by institution, planned number of chemotherapy cycles, and performance status. Treatment group assignments were not masked. The primary endpoint was overall survival, defined as time from randomisation until death from any cause, analysed by modified intention-to-treat. A 12% higher overall survival at 2 years in the once-daily group versus the twice-daily group was considered to be clinically significant to show superiority of the once-daily regimen. The study is registered with ClinicalTrials.gov (NCT00433563) and is currently in follow-up.
Findings Between April 7, 2008, and Nov 29, 2013, 547 patients were enrolled and randomly assigned to receive twice-daily concurrent chemoradiotherapy (274 patients) or once-daily concurrent chemoradiotherapy (273 patients). Four patients (one in the twice-daily group and three in the once-daily group) did not return their case report forms and were lost to follow-up; these patients were not included in our analyses. At a median follow-up of 45 months (IQR 35–58), median overall survival was 30 months (95% CI 24–34) in the twice-daily group versus 25 months (21–31) in the once-daily group (hazard ratio for death in the once-daily group 1·18 [95% CI 0·95–1·45]; p=0·14). 2-year overall survival was 56% (95% CI 50–62) in the twice-daily group and 51% (45–57) in the once-daily group (absolute difference between the treatment groups 5·3% [95% CI −3·2% to 13·7%]). The most common grade 3–4 adverse event in patients evaluated for chemotherapy toxicity was neutropenia (197 [74%] of 266 patients in the twice-daily group vs 170 [65%] of 263 in the once-daily group). Most toxicities were similar between the groups, except that there was significantly more grade 4 neutropenia with twice-daily radiotherapy (129 [49%] vs 101 [38%]; p=0·05). In patients assessed for radiotherapy toxicity, there was no difference between the groups in grade 3–4 oesophagitis (47 [19%] of 254 patients in the twice-daily group vs 47 [19%] of 246 in the once-daily group; p=0·85) or in grade 3–4 radiation pneumonitis (4 [3%] of 254 vs 4 [2%] of 246; p=0·70). 11 patients died from treatment-related causes (three in the twice-daily group and eight in the once-daily group).

Interpretation Survival outcomes did not differ between twice-daily and once-daily concurrent chemoradiotherapy in patients with limited-stage small-cell lung cancer, and toxicity was similar and lower than expected with both regimens.
Since the trial was designed to show superiority of once-daily radiotherapy and was not powered to show equivalence, the implication is that twice-daily radiotherapy should continue to be considered the standard of care in this setting.

Funding Cancer Research UK (Clinical Trials Awards and Advisory Committee), French Ministry of Health, Canadian Cancer Society Research Institute, European Organisation for Research and Treatment of Cancer (Cancer Research Fund, Lung Cancer, and Radiation Oncology Groups).

Introduction

Small-cell lung cancer is characterised by its rapid tumour doubling time, early dissemination, and high response rate to both chemotherapy and radiotherapy. Of the 42 000 patients in the UK and 225 000 in the USA diagnosed with lung cancer every year, 15% have small-cell lung cancer and 30% of those have limited-stage disease that can be encompassed within a tolerable radiotherapy field. 1 Even in this early-stage disease, outcomes are poor, with median survival of 16–24 months after curative-intent treatment and 2-year survival of less than 50%. [2][3][4]

Combined chemotherapy and thoracic radiotherapy is the standard treatment for limited-stage small-cell lung cancer. Results from two meta-analyses 5,6 showed that the addition of radiotherapy to chemotherapy improves median survival, 3-year survival, and local control. Subsequently, meta-analyses of clinical trials investigating the optimal timing and sequencing of chemoradiotherapy have shown an advantage for early concurrent thoracic radiotherapy. [7][8][9][10][11] Furthermore, twice-daily radiotherapy was superior to once-daily radiotherapy in the landmark Intergroup 0096 study. 4 In that study, patients were randomly assigned to receive either 45 Gy once daily (1·8 Gy per fraction) for 5 weeks or 45 Gy twice daily (1·5 Gy per fraction) for 3 weeks. In both groups, radiotherapy was given concurrently, starting with the first cycle of chemotherapy.
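One standard way to make the "biologically effective dose versus overall treatment time" comparison between such schedules concrete is the linear-quadratic biologically effective dose (BED) model. The sketch below is illustrative only, not a calculation from the trial; α/β = 10 Gy, a repopulation dose loss of 0·6 Gy per day, and a repopulation kick-off at day 21 are conventional assumed values:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta=10.0, t_overall=0.0,
        t_kickoff=21.0, k=0.6):
    """Linear-quadratic biologically effective dose with a simple time
    (repopulation) correction; all parameter values here are assumptions."""
    d = dose_per_fraction
    base = n_fractions * d * (1 + d / alpha_beta)
    repopulation_loss = k * max(0.0, t_overall - t_kickoff)
    return base - repopulation_loss

# Intergroup 0096 arms: same 45 Gy physical dose, different fractionation and time
bed_od_0096 = bed(25, 1.8, t_overall=35)   # 45 Gy once daily over ~5 weeks
bed_bd_0096 = bed(30, 1.5, t_overall=19)   # 45 Gy twice daily over 19 days
```

With these assumed parameters the twice-daily arm comes out ahead (≈51·8 Gy vs ≈44·7 Gy) purely because of its shorter duration, which illustrates the ambiguity discussed in the text: the two schedules differ in both fractionation and overall time, so dose effect and time effect cannot be separated.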
Twice-daily radiotherapy significantly improved 5-year overall survival compared with once-daily radiotherapy (26% vs 16%) and reduced the risk of thoracic relapse (36% vs 52%), but at the cost of increased severe radiation oesophagitis (32% vs 16%). Consequently, twice-daily radiotherapy given concurrently with chemotherapy was adopted as a standard of care for limited-stage small-cell lung cancer. 12 However, it is unclear whether twice-daily radiotherapy resulted in better outcomes because of the increase in the biologically effective dose of radiation or because of the shorter overall treatment time, which is important in this rapidly proliferating disease.

Radiotherapy techniques have evolved since the Intergroup 0096 study was designed in the late 1980s; specifically, the use of CT-planned conformal treatment and the omission of elective nodal irradiation to reduce normal tissue exposure and toxicity, particularly oesophagitis. Although twice-daily radiotherapy given concurrently with chemotherapy has produced the best outcomes so far, concerns about its toxicity, logistical issues in its delivery, and the low radiation dose in the control group of the Intergroup 0096 study, resulting in a very high (52%) local failure rate, have resulted in poor adoption of this regimen and no consensus on the standard treatment to use in the routine setting. 13 The authors of one study 14 suggested that local control could be improved with a higher dose of once-daily radiotherapy. The CONVERT trial was therefore designed as a superiority trial to improve on the standard of care for limited-stage small-cell lung cancer by comparing twice-daily radiotherapy with a higher dose of radiotherapy delivered once daily, given concurrently with chemotherapy.

Methods

Study design and participants

The CONVERT trial was an international, multicentre, open-label, randomised phase 3 superiority trial. Details of the trial design have been published previously.
15 Patients were recruited at 73 centres in eight countries.

Research in context

Evidence before this study The role of thoracic radiotherapy is well established in the management of limited-stage small-cell lung cancer, and the standard of care in patients with good performance status is concurrent chemoradiotherapy. However, the optimal radiotherapy dose and fractionation remain controversial. One standard of care is twice-daily radiotherapy, which was shown to be superior to once-daily radiotherapy in the landmark Intergroup 0096 study in 1999. We searched PubMed and the abstracts of major conferences (such as the American Society of Clinical Oncology) with the terms "small cell lung cancer", "limited-stage", "radiotherapy (or irradiation)", and "chemotherapy", with no constraints imposed on the timeframe for the search, for randomised evidence to support this practice. We found only one relevant randomised clinical trial comparing once-daily and twice-daily radiotherapy.

Added value of this study Although twice-daily radiotherapy has produced the best outcomes in these patients so far, concerns about its toxicity, logistical issues in the delivery of twice-daily radiotherapy, and the low radiation dose used in the control group of the Intergroup 0096 study have resulted in the poor adoption of this regimen and no consensus on the standard treatment to use in the routine setting. The CONVERT trial provides further evidence supporting the use of twice-daily radiotherapy in the routine setting and will help to standardise patient care. Furthermore, the results of this study show that, in the era of modern radiotherapy techniques, the frequency and severity of acute and late radiation toxicities are lower than previously reported.

Implications of all the available evidence Results from this study showed that twice-daily radiotherapy should be considered standard of care in patients with limited-stage small-cell lung cancer.
The implication for future research is that the overall treatment duration of radiotherapy should be kept short when combined with chemotherapy. This Article provides updated information on expected treatment toxicity that clinicians can relay to their patients.

Eligible patients were aged 18 years or older; had histologically or cytologically confirmed small-cell lung cancer with limited disease (as defined by the Veterans Administration Lung Cancer Study Group, ie, patients whose disease can be encompassed within a radical radiation portal); 16 had an Eastern Cooperative Oncology Group performance status of 0–1 17 or a performance status of 2 due to disease-related symptoms and not comorbidities (since small-cell lung cancer is characterised by rapid doubling time and central disease location, which can be associated with a sudden change in performance status); had no malignant pleural or pericardial effusions; and had an acceptable radiotherapy target volume (according to the local radiotherapist). Eligible patients had a maximum of one adverse biochemical factor (concentrations of serum alkaline phosphatase >1·5-times the upper limit of normal, serum sodium <lower limit of normal, and serum lactate dehydrogenase >the upper limit of normal), forced expiratory volume in 1 s greater than 1 L or 40% of the predicted value, and transfer factor for carbon monoxide greater than 40% of the predicted value. Patients with a history of malignancy in the past 5 years (except for non-melanomatous skin or in-situ cervix carcinoma) and those with previous or concomitant illness or treatment that, in the opinion of the investigator, would interfere with the trial treatments or comparisons were excluded. Participants gave written informed consent and the study was done according to the Declaration of Helsinki and Good Clinical Practice Guidelines.
The trial was reviewed in the UK by the National Research Ethics Service Committee North West–Greater Manchester Central, which granted ethics approval for the study on Dec 21, 2007 (REC reference: 07/H1008/229). The protocol was also approved by the institutional review board or research ethics committee in each country and at each study centre.

Randomisation and masking

Patients were randomly assigned (1:1) to one of the two treatment groups (twice-daily vs once-daily radiotherapy). Allocation to treatment group was done by phone call or fax from the recruiting centre to the Manchester Academic Health Science Centre Trials Coordination Unit. The allocation method used was minimisation with a random element, using a bespoke computer application. The factors controlled for in the allocation were institution, planned number of chemotherapy cycles (four vs six), and performance status (0–1 vs 2). Patients and investigators were not masked to treatment allocation.

Procedures

At baseline, all patients underwent baseline investigations, which included physical examination, chest radiograph, CT scan of the thorax and upper abdomen, CT or MRI of the brain, full blood count, biochemical profile, and lung function tests. PET/CT scans were allowed but not mandatory. Staging was done using the Union for International Cancer Control/American Joint Committee on Cancer classification system. 18

Patients were randomly assigned to receive radiotherapy either twice daily (45 Gy in 30 twice-daily fractions of 1·5 Gy, with a minimum of 6 h between fractions, over 19 days, on 5 consecutive days a week) or once daily (66 Gy in 33 daily fractions of 2 Gy over 45 days, on 5 consecutive days a week), concurrently with chemotherapy.
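The allocation method described above, minimisation with a random element, can be sketched as a simplified Pocock–Simon procedure. This is not the trial's bespoke application; the probability of following the minimising arm (`p_follow`) and the toy stratification data are assumptions for illustration:

```python
import random
from collections import defaultdict

def minimise(patients, factors, p_follow=0.8, seed=0):
    """Assign each patient to arm 0 or 1, preferring the arm that minimises the
    summed marginal imbalance over the stratification factors; ties and a
    (1 - p_follow) fraction of assignments are decided at random."""
    rng = random.Random(seed)
    counts = [defaultdict(int), defaultdict(int)]   # per-arm factor-level counts
    allocation = []
    for patient in patients:
        scores = []
        for arm in (0, 1):
            score = 0
            for f in factors:
                level = (f, patient[f])
                trial = [counts[0][level], counts[1][level]]
                trial[arm] += 1                      # imbalance if we chose this arm
                score += abs(trial[0] - trial[1])
            scores.append(score)
        if scores[0] == scores[1]:
            arm = rng.randrange(2)                   # tie: pure randomisation
        else:
            best = 0 if scores[0] < scores[1] else 1
            arm = best if rng.random() < p_follow else 1 - best
        for f in factors:
            counts[arm][(f, patient[f])] += 1
        allocation.append(arm)
    return allocation

# Toy cohort stratified like the trial: centre, planned cycles, performance status
patients = [{"centre": i % 5, "cycles": 4 if i % 2 else 6, "ps": "0-1" if i % 7 else "2"}
            for i in range(100)]
arms = minimise(patients, ["centre", "cycles", "ps"], seed=1)
```

The random element keeps assignments unpredictable while the marginal counts stay close to balance within each stratification level.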
Chemotherapy was started within 4 weeks of randomisation and consisted of four to six cycles of cisplatin and etoposide every 3 weeks in both groups (etoposide 100 mg/m² intravenously on days 1–3 and cisplatin 75 mg/m² intravenously on day 1, or etoposide 100 mg/m² intravenously on days 1–3 and cisplatin 25 mg/m² intravenously on days 1–3). Each centre had to elect to prescribe four or six cycles for all eligible trial patients. The first cycle of chemotherapy was given before radiotherapy and the second was given concurrently with radiotherapy if no delay with chemotherapy occurred. No later than 6 weeks after the last cycle of chemotherapy, patients without evidence of progressive disease on the CT scan and with no clinical evidence of brain metastases were offered prophylactic cranial irradiation.

Radiotherapy commenced on day 22 of cycle one of chemotherapy, coinciding with cycle two of chemotherapy in patients not experiencing chemotherapy delay due to toxicity. 3D conformal radiotherapy was mandatory and elective nodal irradiation was not permitted. The total dose was prescribed at the International Commission on Radiation Units and Measurements reference point. Intensity-modulated radiotherapy and PET/CT planning were permitted but not mandated. The protocol specified that if dose constraints to the organs at risk could not be met, the dose delivered could be decreased accordingly.

The policy for chemotherapy was to delay and give at full dose later, rather than give at a reduced dose. However, we recommended a chemotherapy treatment delay of more than 7 days for grade 4 febrile neutropenia, grade 4 thrombocytopenia requiring medical intervention, or grade 2 or worse bleeding with thrombocytopenia; for the first episode of such an event, we recommended full-dose chemotherapy and granulocyte colony-stimulating factor support, or a 20% dose reduction. In case of a second episode, we recommended a 30% dose reduction.
If a third episode occurred, the patient was removed from the trial.

A radiotherapy quality assurance programme was set up to ensure the robustness of the radiotherapy procedures, and the details of the programme have been reported previously. 15 The programme was managed by the UK National Cancer Research Institute Radiotherapy Trials Quality Assurance Team.

On completion of study treatment, patients were followed up weekly until the resolution of acute side-effects, then every 3 months until 1 year, and every 6 months for 5 years. A CT scan of the thorax and abdomen was done 4 weeks after cycle four (even if six cycles were given). Subsequently, during follow-up at 6 and 12 months after randomisation, investigations included physical evaluation, reporting of adverse events, and a CT scan of the thorax and abdomen. Follow-up investigations were done according to local policy.

Outcomes

The primary outcome of the study was overall survival, defined as time from randomisation until death from any cause. Secondary outcomes included compliance with chemotherapy and radiotherapy (defined as dose intensity delivered), acute toxicity (defined as toxicity occurring between the start of treatment and up to 3 months after completion of treatment, and assessed according to the Common Terminology Criteria for Adverse Events [version 3.0]), late toxicity (also according to the Common Terminology Criteria for Adverse Events [version 3.0]), 19 and local and metastatic progression-free survival (calculated from date of randomisation to date of first clinical or radiological evidence of progressive disease at the primary site or distant sites). With regard to toxicity, the frequencies of the worst recorded grade of toxicity in the respective time periods were recorded. Response rate was another secondary outcome, but it was not analysed because interpretation of CT imaging would have been too complex after concurrent chemoradiotherapy.
The study also had post-hoc exploratory translational objectives, which will be reported at a later date. All serious adverse events were reported to the trial coordinating centre and were assessed for causality and expectedness, both locally by the Principal Investigator and centrally by the Chief Investigator.

Statistical analysis

Our hypothesis was that overall survival in the once-daily chemoradiotherapy group would be superior to that of the twice-daily group. A 12% higher overall survival at 2 years in the once-daily group versus the twice-daily group was considered to be clinically significant to show superiority of the once-daily regimen. Overall and progression-free survival were estimated with the Kaplan-Meier method, and between-group comparisons were evaluated by the log-rank test with stratification for institution, planned number of chemotherapy cycles (four vs six), and performance status (0–1 vs 2). The number of events required to detect a hazard ratio (HR) for death of 0·7 with an α level (two-sided) of 0·05 and 80% power (ie, an increase in 2-year survival from 44% in the twice-daily radiotherapy group to 56% in the once-daily radiotherapy group) was 247.

Figure 1: Trial profile (547 patients randomly assigned; 270 included in survival analysis; 246 included in radiotherapy toxicity analysis [concurrent and sequential chemoradiotherapy]; 263 included in chemotherapy toxicity analysis¶). *One patient withdrew consent for twice-daily radiotherapy. †Dose constraints to organs at risk not met in four patients, and twice-daily radiotherapy given in error to two patients. ‡Six patients did not receive any chemotherapy and two patients died during cycle one before toxicity assessment. ¶Seven patients did not receive any chemotherapy and three patients died during cycle one before toxicity assessment. Numbers assessed and ineligible are unavailable because screening logs were not completed by all centres.
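The 247-events figure can be reproduced with Schoenfeld's formula for a two-arm log-rank comparison. This is a minimal sketch (not the trial statisticians' code); the proportional-hazards conversion of the 2-year survival rates into an HR is the standard one:

```python
import math
from statistics import NormalDist

# Under proportional hazards, S_experimental(t) = S_control(t) ** HR, so the
# 44% -> 56% 2-year survival improvement corresponds to HR ~ 0.7.
s_control, s_experimental = 0.44, 0.56
hr = math.log(s_experimental) / math.log(s_control)   # ~0.706

alpha, power = 0.05, 0.80
z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for two-sided alpha = 0.05
z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power

# Schoenfeld: required number of deaths for 1:1 allocation (p1 = p2 = 0.5)
events = (z_a + z_b) ** 2 / (0.5 * 0.5 * math.log(0.7) ** 2)   # ~246.8, round up
```

Rounding up gives the 247 events quoted above; the recruitment target then inflates the patient number to cover censoring and ineligibility.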
An additional 5% was added to the sample size of 506 patients to allow for ineligible patients, giving a total recruitment target of 532 patients. The primary survival outcome was analysed using a modified intention-to-treat principle, because four cases provided no follow-up data and hence were censored at time zero. Further details about the statistical analysis are available in the protocol. 15

All randomly assigned patients who were treated with at least one study dose of chemotherapy and who were alive at the time of the first toxicity assessment were included in the safety analysis. Data were collected at each study site and monitored by the independent data monitoring committee. We submitted reports to the independent data monitoring committee on an annual basis, commencing 12 months after the first patient was randomly assigned. The statistical package used for the analyses was Stata (version 13.1). This trial is registered with ISRCTN, number 91927162, and ClinicalTrials.gov, number NCT00433563.

Role of the funding source

Cancer Research UK reviewed and approved the study design. None of the funders had a role in data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Results

Between April 7, 2008, and Nov 29, 2013, we recruited 547 patients from 73 centres in eight countries. We randomly assigned 274 patients to receive twice-daily chemoradiotherapy and 273 to receive once-daily chemoradiotherapy. The modified intention-to-treat survival analysis included 543 patients (273 in the twice-daily chemoradiotherapy group and 270 in the once-daily chemoradiotherapy group) because four patients were lost to follow-up (centres did not return their case report forms; figure 1). Table 1 shows the baseline characteristics of the participants.
The median age at randomisation was 62 years (IQR 29–84) in the twice-daily group and 63 years (34–81) in the once-daily group, with 83 (15%) of 547 patients being older than 70 years (32 [12%] in the twice-daily group and 51 [19%] in the once-daily group). More than 95% of patients overall had a performance status of 0–1. Less than 2% of patients were never smokers, almost two-thirds were former smokers, and just over a third were current smokers (table 1).

Table 1 footnotes: Data are median (IQR) or n (%). UICC/AJCC=Union for International Cancer Control/American Joint Committee on Cancer. *Eastern Cooperative Oncology Group performance status was not recorded on the source documentation and case report form in three cases at baseline; in all three cases, the performance score was recorded as 0–1 on the randomisation form. †Never smokers defined as adults who have never smoked a cigarette or who smoked fewer than 100 cigarettes in their entire lifetime; former smokers defined as adults who have smoked at least 100 cigarettes in their lifetime but say they currently do not smoke; current smokers defined as adults who have smoked 100 cigarettes in their lifetime and currently smoke cigarettes every day (daily) or on some days (non-daily).

In our survival analysis (which included 273 patients in the twice-daily group and 270 in the once-daily group), median overall survival was 30 months (95% CI 24–34) in the twice-daily group and 25 months (95% CI 21–31) in the once-daily group (hazard ratio 1·18 [95% CI 0·95–1·45]; p=0·14; figure 2A). 2-year overall survival was 56% (95% CI 50–62) in the twice-daily group and 51% (45–57) in the once-daily group (absolute difference between the treatment groups 5·3% [95% CI −3·2% to 13·7%]). 5-year overall survival was 34% (95% CI 27–41) in the twice-daily group and 31% (25–37) in the once-daily group (absolute difference 2·8% [95% CI −6·4% to 12·0%]).
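Survival figures like the medians and 2-year rates above come from the Kaplan-Meier product-limit estimator named in the Statistical analysis section. A minimal sketch on toy data (not trial data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns a list of (t, S(t)) pairs at each distinct event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = leaving = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]   # deaths at this time
            leaving += 1                 # deaths + censorings leave the risk set
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk    # product-limit update
            curve.append((t, s))
        at_risk -= leaving
    return curve

# Toy example: five patients, follow-up in months, two censored
curve = kaplan_meier([6, 12, 18, 24, 30], [1, 0, 1, 1, 0])
```

Censored patients contribute to the risk set until they drop out, which is why the step sizes grow as the risk set shrinks.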
In the twice-daily group versus the once-daily group, causes of death were lung cancer (152 vs 146), intercurrent deaths (six vs 14), treatment-related (three vs eight), and cardiovascular (three vs eight); causes of the 12 treatment-related deaths were radiation pneumonitis (one vs two), dementia possibly related to prophylactic cranial irradiation (none vs one), neutropenic sepsis (one vs three), septic shock (one vs none), bronchial pneumonia (none vs two), and peripheral vascular ischaemia (one vs none).

25 (9%) of 273 patients in the twice-daily radiotherapy group and 33 (12%) of 270 in the once-daily radiotherapy group did not receive concurrent chemoradiotherapy (figure 1), giving compliance rates of 91% in the twice-daily group and 88% in the once-daily group. Less than 10% of patients did not receive any radiotherapy (20 [7%] in the twice-daily group and 26 [10%] in the once-daily group; figure 1, table 2). Of the patients who received radiotherapy, intensity-modulated radiotherapy was delivered to 40 (16%) of 254 participants in the twice-daily group versus 43 (17%) of 247 participants in the once-daily group. Prophylactic cranial irradiation was delivered to 229 (84%) of 274 versus 220 (81%) of 273 participants (table 2).

More patients received the full dose of radiotherapy in the twice-daily group than in the once-daily group (p<0·0001; table 3). The optimal number of fractions, as defined in the protocol 15 (30 fractions in the twice-daily group and 33 in the once-daily group), was delivered in 213 (86%) of 249 patients in the twice-daily group and 192 (80%) of 240 patients in the once-daily group (p=0·10). Radiotherapy was delivered over the planned overall treatment time of 19 days in 158 (63%) of 249 patients in the twice-daily group and over the planned overall treatment time of 45 days in 114 (48%) of 240 patients in the once-daily group (p=0·0004). Protocol deviations and violations were mainly due to logistical reasons, such as public holidays.
Chemotherapy toxicity was assessed in 266 (97%) of 273 patients in the twice-daily group and 263 (97%) of 270 patients in the once-daily group who had received at least one cycle of chemotherapy and who were alive at the time of the first toxicity assessment (figure 1, table 4). Radiotherapy toxicity was assessed in 254 (93%) of 273 patients in the twice-daily group and 246 (91%) of 270 patients in the once-daily group who had received either concurrent or sequential chemoradiotherapy (figure 1, table 4).

Table footnotes: Data are n (%) or n/N (%). *All p values were calculated with χ² tests (except for number of cycles, which is a Wilcoxon rank sum test). †The denominator in each group is the number of patients who received concurrent chemoradiotherapy. ‡The denominator in each group is the number of patients who received radiotherapy.

The most common grade 3–4 adverse event was neutropenia (affecting 197 [74%] of 266 patients in the twice-daily group vs 170 [65%] of 263 in the once-daily group). The frequencies of most adverse events recorded were similar in both groups, with the exception that significantly more grade 4 neutropenia was recorded in the twice-daily group than in the once-daily group (129 [49%] vs 101 [38%]; p=0·05). However, grade 3–5 febrile neutropenia did not differ significantly between the two groups (table 4). Acute radiotherapy toxicity was similar in both groups: grade 3–4 oesophagitis was reported in 47 (18%) of 254 patients in the twice-daily group and 47 (19%) of 246 patients in the once-daily group. 11 patients developed grade 3–5 radiation pneumonitis (five in the twice-daily group and six in the once-daily group), of whom three patients died within 3 months of radiotherapy (two in the once-daily group and one in the twice-daily group, one of whom received sequential rather than concurrent radiotherapy; table 4, appendix p 3).
Regarding late toxicity, four patients in the once-daily group developed grade 3 oesophagitis, one of whom had an oesophageal stricture. Six patients in each group developed grade 3–4 pneumonitis, and five patients (three in the twice-daily group and two in the once-daily group) developed grade 3 pulmonary fibrosis (table 5).

Discussion

Our results show that once-daily radiotherapy did not improve overall survival in patients with limited-stage small-cell lung cancer and good performance status, compared with twice-daily radiotherapy, when given concurrently with chemotherapy. Radiotherapy treatment delivery was superior in the twice-daily group. Furthermore, both acute and late toxicities were similar and lower than expected with both regimens. However, although the results are unable to show superiority of the once-daily radiotherapy regimen, the CONVERT trial should have a major effect on the standardisation of chemoradiotherapy in this disease group, a treatment that has been the subject of controversy since the publication of the Intergroup 0096 study. 4,13

Overall survival with both regimens was higher than the survival results reported in the Intergroup 0096 study. In CONVERT, 2-year survival for twice-daily and once-daily radiotherapy was 56% and 51%, versus 47% and 41% in the Intergroup 0096 study. 4 CONVERT was not an equivalence study (and was not powered for equivalence), so it cannot be concluded that the two regimens have the same efficacy. Furthermore, the 2-year survival of 56% achieved in the control group with twice-daily radiotherapy is the same survival that was projected for the experimental group.
[Figure (panels B and C): survival plots with number-at-risk (number censored) tables for the once-daily and twice-daily groups; HR 1·13 (95% CI 0·92-1·39); p=0·24]

The better-than-expected performance of both groups might be explained by several changes in the management of small-cell lung cancer since the publication of the Intergroup study, including PET/CT staging in more than half of patients, the use of modern and precise radiotherapy techniques, and improvements in supportive care. These results, together with several meta-analyses and systematic overviews, support the use of a short overall radiotherapy treatment time to avoid early cancer cell repopulation. [7][8][9][10][11] One of the systematic overviews also identified the time from the start of any treatment to completion of radiotherapy as a key variable in predicting outcome. 20 Although not significant, 2-year overall survival was slightly higher in the twice-daily group than in the once-daily group, which could possibly be a result of improved delivery of treatment in the twice-daily group, with more patients receiving full-dose radiotherapy, the optimal planned number of fractions, and treatment delivered over the optimal treatment time.

Table 5 footnotes: Data are n (%). The radiotherapy toxicity population was used to analyse the prevalence of these adverse events because it would not be possible to report radiotherapy-related toxicity in patients who did not receive radiotherapy. NA=not applicable. *Other grade 3 reported toxicities included diarrhoea (n=7), hyponatremia (n=1), urinary retention (n=5), dysphagia (n=5), and lymphopenia (n=6) in the once-daily group; and diarrhoea (n=3), constipation (n=7), hyponatremia (n=1), dysphagia (n=8), lymphopenia (n=8), dyspnoea (n=8), and leucopenia (n=4) in the twice-daily group. Other grade 4 reported toxicities included pulmonary embolism (n=4), hyponatremia (n=2), dyspnoea (n=1), and myocardial infarction (n=1) in the once-daily group; and pulmonary embolism (n=2), hyponatremia (n=3), lymphopenia (n=3), and fast atrial fibrillation (n=1) in the twice-daily group.

Another reason why treatment delivery was superior in the twice-daily group is the lower overall dose of radiotherapy in this group, which made it possible to achieve the protocol dose constraints for organs at risk, such as lung and spinal cord, in a greater proportion of patients than in the once-daily group. A further advantage of the twice-daily regimen is that it halves the radiotherapy treatment time (from 45 days to 19 days) and reduces the number of fractions (from 33 to 30) compared with the once-daily regimen. Although no formal health economic analysis has been done as part of this study, the delivery of twice-daily radiotherapy could lead to cost savings, especially if patients require hospital transport to attend radiotherapy appointments. Overall, the frequency and severity of acute and late radiation toxicities were lower than expected, probably because of the use of modern radiotherapy techniques, including 3D radiotherapy or intensity-modulated radiotherapy, and treatment of involved fields with regard to nodal disease. In the Intergroup 0096 trial, 4 patients were treated with outdated radiotherapy techniques including elective nodal irradiation, which would have resulted in higher radiation exposure of normal tissues than in this trial. Indeed, the high rate of severe acute oesophagitis (32% with twice-daily radiotherapy) in the Intergroup study has been cited as the main reason for poor adoption of twice-daily radiotherapy.
13 By contrast, less than 20% of patients had severe oesophagitis in the CONVERT study and only one patient developed an oesophageal stricture requiring intervention. Radiation pneumonitis was not specifically reported in the Intergroup 0096 study, but in this trial very few (<3%) patients had severe radiation pneumonitis or severe pulmonary fibrosis. The lower than anticipated toxicity rates and rates of local failure reported in this study suggest that radiotherapy delivered concurrently with chemotherapy could be intensified further, for example by means of dose escalation or hypofractionation. A limitation of this study is that, although we did not mandate an upper age limit (with the aim of gathering much-needed evidence about the outcomes of elderly patients treated with concurrent chemoradiotherapy), only 15% of the patients included were older than 70 years. Data for patients older than 70 years participating in CONVERT were presented at the International Association for the Study of Lung Cancer 17th World Conference on Lung Cancer in Vienna, Austria, in 2016, and the results of this analysis will be presented in a separate report. Elderly patients have been reported to be less likely to receive concurrent chemoradiotherapy than their younger counterparts, mainly because of insufficient high-quality evidence to support the use of this potentially toxic treatment. 21 Another limitation is that the majority of patients enrolled in both groups were white, and therefore the results of the study might not be applicable to other ethnicities. To our knowledge, CONVERT is the largest completed study investigating thoracic radiotherapy in limited-stage small-cell lung cancer, and the first clinical trial in this group of patients to report on the outcomes of patients treated with modern radiotherapy techniques incorporating a quality assurance programme.
It was possible to complete this study because of the interest, enthusiasm, and collaborative efforts of a large number of investigators from many different countries. The key to completing accrual was to include a large number of recruiting sites. Furthermore, by contrast with US practice, concurrent chemoradiotherapy is not always adopted as the standard of care for limited-stage small-cell lung cancer in Europe, and the study provided an incentive for centres to adopt and set up concurrent treatment protocols. Given the importance of keeping overall treatment time as short as possible, future studies could investigate dose-escalated twice-daily or hypofractionated radiotherapy given concurrently with chemotherapy. Further data for the outcome of patients treated with high-dose 2 Gy per fraction treatment will be provided by the ongoing CALGB 30610/RTOG 0538 study (NCT00632853). The upcoming analysis of the CONVERT translational studies, including the prognostic role of baseline circulating tumour cells, could provide data for relevant biological stratification factors that can be used prospectively in future studies. In conclusion, the results of CONVERT show that there were no significant differences in survival and no major differences in toxicity between twice-daily and once-daily radiotherapy. However, since the trial was designed to show superiority of once-daily radiotherapy and was not powered to show equivalence, twice-daily radiotherapy should continue to be considered the standard of care. Furthermore, twice-daily radiotherapy given concurrently with chemotherapy is well tolerated, with better compliance and a shorter treatment time than once-daily treatment. From a pragmatic perspective, once-daily radiotherapy could be considered when delivery of twice-daily radiotherapy is impossible because of departmental logistics or patient choice.

Contributors

CF-F, LA, PL, MS, FBl, RM, NM, and PJW conceived the study and initiated the study design.
WA, FBa, ABh, ABe, FC, PF, SH, CLP, MO'B, JP, VS, and JPVM helped with implementation. CF-F is the grant holder. LA provided statistical expertise in clinical trial design. The authors designed the trial, analysed the data, wrote the manuscript (with the first draft written by the first author), made the decision to submit the manuscript for publication, and assured the completeness and accuracy of the data and analysis and the fidelity of this report to the trial protocol. All authors approved the final manuscript.

Declaration of interests

CF-F, LA, PL, and FBl report grants from Cancer Research UK during the conduct of this study. The other authors declare no competing interests.
Mechanobiological analysis of porcine spines instrumented with intra-vertebral staples.

OBJECTIVE To characterize the growth plate histology of porcine spines instrumented with a new intra-vertebral staple.
METHODS Spinal segments (T7-T9) previously instrumented with an intra-vertebral staple (experimental group, n=7) or non-instrumented (control group, n=4) underwent average growth rate (AGR) and histomorphometric measurements: heights of the proliferative (PZH) and hypertrophic (HZH) growth plate zones, hypertrophic cell height (CH), and the number of proliferative chondrocytes per column (CC). These measurements were done over three regions: (1) left side; (2) middle; (3) right side (instrumented side). The two groups were analyzed by comparing the difference between results for regions 1 and 3 (Dif-R1R3).
RESULTS A significantly higher Dif-R1R3 was found for AGR and HZH in the experimental group compared with controls. Dif-R1R3 was also significantly higher for CC at the T8 level, CH at the T7 level, and PZH at both levels. No significant changes in Dif-R1R3 were observed in the adjacent vertebrae (T11-T12).
CONCLUSIONS This study confirmed the local growth modulation capacity of the intra-vertebral staple, translated at the histomorphometric level into a significant reduction in all parameters, though not at all spinal levels. Further analyses are needed to confirm the regional effect, especially for the intervertebral disc and other connective tissues.

Introduction

Adolescent Idiopathic Scoliosis (AIS) is a 3D spinal deformity, presenting curvature(s) in the coronal plane but also vertebral rotation in the transverse plane and an altered sagittal profile. Curve progression might be related to factors such as the remaining bone growth, the degree of the initial curvature 1,2 , and some of its morphological parameters, such as sagittal intervertebral rotation, 3D wedging of the apical disks, and vertebral axial rotation, among others 3 .
Based on the estimation of curve progression, a proper treatment should be chosen. Generally, a patient with a curvature of more than 45° and limited remaining growth will undergo spinal instrumentation and fusion, a highly invasive surgery associated with important risks and mobility impacts in these young patients [4][5][6] . For pediatric scoliotic patients presenting a moderate spinal deformity with significant growth remaining, compressive fusionless approaches can be considered (an off-label procedure) to correct the curvatures, hence avoiding spinal arthrodesis. Compressive fusionless techniques aim at progressively correcting the deformation using the remaining bone growth potential while preserving patients' mobility 7 . They are based on the Hueter-Volkmann principle, reducing or increasing vertebral growth by means of an increased pressure on the convex side of the curve, or a decreased pressure on the concave side of the curve, respectively. Longitudinal spine growth takes place in the vertebral body epiphyseal growth plates through the synthesis of cartilaginous tissue, which is further transformed into bone by endochondral ossification 8 . The growth plate is a connective tissue divided into three zones. The reserve zone includes chondrocytes in a relatively quiescent state. It supplies the proliferative zone, where chondrocytes undergo division. Chondrocytes are finally pushed towards the hypertrophic zone, where they increase in volume and undergo apoptosis at the osteochondral junction. The growth process is based on the progression of chondrocytes through these three zones, as well as on changes in the chondrocytes and the composition of their surrounding extracellular matrix. Fusionless compressive devices spanning a scoliotic curve segment, such as anterior vertebral body tethering and vertebral staples, have been shown to progressively correct scoliotic curvatures with vertebral growth [9][10][11][12][13] .
At the histological level, vertebral growth modulation was associated with decreased hypertrophic zone height and hypertrophic cell height on the implant's side, as reported by experimental studies in pig models [14][15][16] . These changes occur in response to compressive stresses transmitted to the growth plates by the implant. To avoid spanning the intervertebral disc (IVD), which is thought to lead to IVD degeneration 17 , a new implant for the treatment of pediatric scoliosis was developed 18 . This implant, consisting of a staple, was designed to be fixed on the lateral side of the vertebral body, with its very thin prong affixed onto the superior growth plate of a given vertebra, below the contiguous annulus fibrosus, therefore without spanning the IVD 18 . It was tested first in a rat tail model 19 and then in a porcine model 20 . The implant demonstrated its capability to reduce vertebral growth on the implant side and, when correctly placed, to preserve IVD health. A revised version of this implant allows the simultaneous action of two thin prongs on the superior and inferior vertebral growth plates of the same vertebra (Figure 1). This new intra-vertebral staple was tested in a pig model 21 . Its regional effects on the spine, as well as on vertebral and intervertebral disc wedging, were experimentally characterized 18 . However, its local effects have not yet been evaluated. Thus, the aim of this study was to comprehensively characterize the histology of the growth plates of porcine spinal segments instrumented with this double-sided intra-vertebral staple implant.

Tissue collection

Spinal segments from eleven immature female pigs (Landrace/Yorkshire, 25-35 kg; eleven weeks old) were used in this study, in continuity with the previous study 21 .
In summary, seven animals (experimental group) underwent scoliosis induction using the new dual-prong intra-vertebral staple presented above, and four uninstrumented animals served as controls (control group). In that previous study, the experimental animals were preoperatively sedated. The surgery was performed in the left decubitus position in a sterile environment. Access to the tested segment (T7-T8-T9) was achieved via a right-side thoracotomy between the seventh and eighth ribs. Both stainless steel (316L, UNS S31603) prongs of this new intra-vertebral staple were 0.5 mm thick and were inserted between the IVD and the thin secondary ossification site just above the growth plate, with an approximate penetration of 5 mm. After insertion, the prongs were fixed using 25 × 2.8 mm bone screws (Figure 1). Site closure was performed after device insertion, and animals had a postoperative follow-up of 82 ± 2 days. A bicarbonate solution of Calcein (Sigma-Aldrich, Oakville, ON, Canada) was injected seven days and one day prior to sacrifice, at a dose of 15 mg/kg body weight: over this interval the action of the intra-vertebral staple on growth rate is most apparent, growth is approximately linear throughout the follow-up, and the Calcein label is not yet resorbed through osteoclast activity before microscopic observation. Animals were then euthanized by induction of deep anaesthesia followed by a lethal injection of saturated potassium chloride. Immediately following euthanasia, thoracic spinal segments were collected for the present study. Three vertebra-disc-vertebra blocks (Figure 2) were dissected from each control or experimental animal and further fixed in 10% buffered formalin, dehydrated in increasing graded ethanol solutions, and clarified in xylene before embedding in methyl methacrylate (MMA; Fisher Scientific, Ottawa, ON, Canada).
Each block was first trimmed using a saw equipped with a diamond knife (Buehler IsoMet 1000), and then cut along the longitudinal axis into ten series of six slices (6 µm each) using a microtome (Leica SM2500).

Histological analyses

The evaluated growth plates are shown in Figure 2. Four spatially separated slices (30 µm) from each of the three blocks were used for growth rate measurements, and four other spatially separated slices were used for histomorphometry measurements. All growth plate slices were virtually divided into three regions to better understand the local effects of the implant: region 1 corresponded to the left side (opposite the instrumentation in the experimental group), region 2 to the center, and region 3 to the right side (instrumented in the experimental group) (Figure 2).

Histological staining and mounting

All slices were first deplasticized in two 30 min serial washes of EGMA (ethylene glycol methacrylate, Fisher Scientific), dried for 30 min, and then underwent one of the following protocols. For growth plate histomorphometry, slices were rehydrated in distilled water, stained with 1% Toluidine blue (Fisher Scientific) for 5 min, and washed with a citrate buffer solution. Following staining, slices were dehydrated in graded alcohols, followed by xylene, and mounted with Permount mounting medium (Fisher Scientific). For growth rate measurements, slices were only deplasticized, transferred to graded ethanol solutions and xylene, and mounted with Permount.

Growth plate analyses

Three images were obtained, one per region of the analyzed growth plates, using an optical microscope (Leica DMR equipped with a Qimaging Retiga camera). Growth rate and histomorphometric parameters were then evaluated on a total of 12 images (i.e. three images for each of the four evaluated slices) for each analyzed growth plate. All measurements were done using a custom-made Matlab application (R2014a, MathWorks, Natick, MA, USA) 22 .
The average growth rate was evaluated on 10X-magnified images using Calcein labeling at the growth plate epiphyseal junction. The growth rate was calculated as the distance between the two Calcein labels divided by the number of days (6) between the two injections (Figure 3a). The average growth rate was estimated as the mean of 35 to 45 height measurements taken parallel to the longitudinal growth direction and evenly spaced in each of the 12 images 23 . Four histomorphometric parameters were analyzed on the vertebral growth plate stained with Toluidine blue and observed under the microscope at 20X magnification. The hypertrophic and proliferative zone heights were evaluated similarly to the average growth rate (Figure 3b): a total of 50 to 70 heights parallel to the longitudinal growth direction were measured and averaged on each of the 12 images for the hypertrophic and proliferative zones. Hypertrophic cell height was measured as the mean distance between the upper and lower limits of a total of 120 hypertrophied chondrocytes, that is, 10 randomly chosen cells per image (Figure 3c). Finally, for the number of chondrocytes per column, the analysis consisted of counting the number of cells per 100 µm in a total of 60 randomly chosen columns of proliferative chondrocytes (Figure 3d).

Statistical analyses

Statistical analyses were performed using the STATISTICA 13.3 software package (Statistica, StatSoft Inc., Tulsa, Oklahoma, USA). First, all data were screened with a Shapiro-Wilk W test to verify normality. Then, a one-way ANOVA for repeated measures was used to detect differences between the means obtained in each group (control and experimental) when subtracting results obtained for region 3 from those obtained for region 1. This difference, named Dif-R1R3, allowed evaluating a relative variation for each parameter. The level of significance was fixed at p<0.05. Results are presented as mean values ± standard error of the mean (SEM).
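The two computations described above (the average growth rate derived from Calcein inter-label distances, and the Dif-R1R3 contrast between regions 1 and 3) can be sketched in a few lines of Python. This is a minimal illustration, not the study's code: the function names and sample distances are our own hypothetical choices.

```python
import statistics

DAYS_BETWEEN_INJECTIONS = 6  # Calcein injected 7 days and 1 day before sacrifice

def average_growth_rate(inter_label_distances_um):
    """Mean growth rate (um/day): each Calcein inter-label distance
    divided by the 6 days separating the two injections."""
    return statistics.mean(d / DAYS_BETWEEN_INJECTIONS
                           for d in inter_label_distances_um)

def dif_r1r3(region1_values, region3_values):
    """Dif-R1R3: mean over region 1 (side opposite the implant) minus
    mean over region 3 (instrumented side)."""
    return statistics.mean(region1_values) - statistics.mean(region3_values)

# Hypothetical inter-label distances (um) measured on images of one growth plate
region1_agr = average_growth_rate([54.0, 55.2, 56.4])  # 9.2 um/day
region3_agr = average_growth_rate([46.8, 48.0, 47.4])  # 7.9 um/day
relative_variation = dif_r1r3([region1_agr], [region3_agr])
```

A positive Dif-R1R3, as in this sketch, indicates slower growth on the instrumented side than on the contralateral side, which is the direction of effect reported for the experimental group.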
Results

Average values of all growth plate parameters (three regions combined) are presented for tested and adjacent segments for both control and experimental groups in Table 1. There is a general reduction of all parameters of the tested segments in the experimental group compared to controls, except for the number of chondrocytes per column (CC) and the proliferative zone height (PZH). In the adjacent segments, reductions of less than 8.5% were found for the average growth rate (AGR) and the proliferative zone height. Figure 4 presents results for the Dif-R1R3 of average growth rate in the tested segments. There were significant increases of 121%, 422%, and 117% in Dif-R1R3 when comparing the experimental group to the control group, for the T7, T8, and T9 levels, respectively. No difference was found between the experimental and control groups when comparing Dif-R1R3 for the adjacent segments (T11-T12, data not shown). The Dif-R1R3 results presented in Figure 4 were normalized, for the experimental vertebrae, to the region 1 values at the T7, T8, and T9 levels (AGR of 8.9 µm/day, 9.3 µm/day, and 9.4 µm/day, respectively, for this region). Thus, reductions of 15%, 16%, and 12%, respectively, were found for the average growth rate within these vertebrae between these two regions. Figure 5 shows results of Dif-R1R3 for hypertrophic and proliferative zone heights of the tested segments. Dif-R1R3 was significantly greater in the experimental group compared to the controls for the hypertrophic zone height (HZH), with increases of 321%, 200%, and 413% observed for vertebrae T7, T8, and T9, respectively (Figure 5a). Additionally, this difference was also significantly higher at the T7 and T8 levels (Dif-R1R3 increased by 276% and 274%, respectively) for the proliferative zone height when comparing experimental and control groups (Figure 5b).
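The normalization just described (expressing Dif-R1R3 as a percentage of the region-1 growth rate) is simple arithmetic. A minimal sketch: the region-1 AGR values are those quoted in the text, while the Dif-R1R3 values here are illustrative numbers chosen to reproduce the quoted 15%, 16%, and 12% reductions.

```python
def percent_reduction(dif_r1r3_um_per_day, region1_agr_um_per_day):
    """Dif-R1R3 expressed as a percent of the region-1 (contralateral) AGR."""
    return 100.0 * dif_r1r3_um_per_day / region1_agr_um_per_day

# Region-1 AGR values (8.9, 9.3, 9.4 um/day) are from the text;
# Dif-R1R3 values are illustrative.
for level, region1_agr, dif in [("T7", 8.9, 1.335),
                                ("T8", 9.3, 1.488),
                                ("T9", 9.4, 1.128)]:
    print(level, round(percent_reduction(dif, region1_agr)))  # 15, 16, 12
```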
Results for Dif-R1R3 of hypertrophic cell height (CH) and the number of chondrocytes per column are presented in Figure 6 for the tested segments. For hypertrophic cell height, Dif-R1R3 was significantly higher at the T7 level only, with an increase of 185% in the experimental group compared to controls (Figure 6a). Furthermore, an increase of 394% in Dif-R1R3 was found for the number of chondrocytes per column at T8 when comparing experimental and control groups (Figure 6b). Concerning the adjacent segments (T11-T12), no significant difference was found in Dif-R1R3 for hypertrophic and proliferative zone heights when comparing experimental and control groups. Similarly, Dif-R1R3 showed no significant changes for the number of chondrocytes per column and hypertrophic cell height (data not shown).

Discussion

A significant growth modulation after three months of instrumentation was successfully obtained with the new intra-vertebral staple device. This modulation was reflected by a vertebral growth rate reduced by the pressure applied by the implant. This result agrees with studies reported in other animal models undergoing growth plate compression, and follows the well-established Hueter-Volkmann principle. In fact, significant 15 to 30% growth rate reductions have been observed in caudal rat vertebrae under dynamic or static compression [23][24][25] . Furthermore, our findings showed that this growth modulation was achieved via a significant reduction of the average growth rate near the implant's region (region 3), as observed with other tested vertebral staples. Wakula Y. et al. (2012) 26 found a 43% reduction of growth rates on the implant's side between control and experimental animals while evaluating a shape-memory-alloy intervertebral staple in a pig model.
In our study, the achieved reduction within the tested vertebrae between regions 1 and 3 was around 12 to 16%, approximately 3 times less than the reduction reported for devices that span the intervertebral disc 27 . This smaller growth rate reduction between those two regions in the tested vertebrae is likely caused by a non-significant growth rate reduction on the side opposite the implant observed in this study (around 11% in region 1) in the experimental group, which is unexpected for this type of device. We believe that this reduction could be a consequence of screw length and positioning, since the screws were long enough to reach this region and, at times, deviated from the plane parallel to the growth plate, hence applying pressure over region 1 (Figure 1, T8 level, bottom screw). This event was observed while evaluating the average growth rate results in 7 of 21 stapled vertebrae (33%) and was confirmed by means of postero-anterior and lateral radiographs. Therefore, the potential of the implant could be improved with better positioning of the screws, or with shortened screws, while still ensuring sufficient bone fixation. The effective growth rate modification of a given vertebra, useful for the correction of scoliotic deformities, is in fact the relative combination of the growth rate modulation of the instrumented vs non-instrumented sides, generating important changes in the hypertrophic and proliferative zone histomorphometric parameters. Indeed, significant reductions were found in the heights of the hypertrophic cells as well as in the proliferative and hypertrophic zones, and in the number of chondrocytes per column, when comparing the left and right sides of growth plates between experimental and control groups. These histomorphometric changes are not an unheard-of phenomenon in animal models following compression. Ménard A-L. et al.
(2014) 24 reported, for caudal rat vertebrae dynamically loaded in compression, a significant 17% reduction in overall growth plate height, a 14% decrease in hypertrophic cell height, and a 13% decrease in the number of chondrocytes per column between the experimental and control groups. These findings are also consistent with those from Valteau B. et al. (2011) 23 in similar experimental conditions, who found significant reductions of 14% in both hypertrophic and proliferative zone heights, a 19% reduction in the number of proliferative cells per column, and a 15% reduction in hypertrophic cell height. Furthermore, changes in hypertrophic zone histomorphometric parameters have also been reported for other vertebral staples in a pig model 15 . Another study 14 found a significant reduction in the hypertrophic zone height of the instrumented vertebrae compared with the non-instrumented ones, as well as a reduction in hypertrophic cell height for instrumented vertebrae compared with non-instrumented ones. However, these studies did not evaluate the effects of compression on the proliferative zone. These histomorphometric changes could be associated with the stiffness of the hypertrophic and proliferative zones: since these zones have been evaluated as the least rigid growth plate zones, especially in pigs (half as stiff as the reserve zone), they experience the greatest deformation under compression 8,28,29 . The new intra-vertebral staple mainly generated local effects in the epiphyseal growth plates. Indeed, no significant changes in growth rate measurements and histomorphometric parameters were found between control and experimental growth plates at the adjacent levels (T11 and T12). Therefore, we confirm that no curvature was detected at these levels as a compensation mechanism of the pig to maintain the forward-looking gaze, as reported during the vertebral wedging analysis of this device 21 .
In spite of the limitations of this study, which used a porcine model to test a new device intended for humans, with the attendant vertebral anatomy differences between the two species, the use of young pigs allowed a local and detailed evaluation of the action of the present device. These results could be extrapolated to the human spine considering that quadrupeds' spines are mainly axially loaded, owing to the action of muscles and ligaments, as is the case in bipeds. In this study, the histological analysis of growth plates was performed on relative values, such as the difference between regions 1 and 3, rather than by comparing absolute values. This approach was chosen to normalize the parameter changes and to take into account possible inter-animal variability in the mechanobiological responses. Results of the present study show significant alterations of chondrocytes in the proliferative zone in only one of the three analyzed vertebrae. The 2D technique used in this study does not allow visualization of the 3D nature of the proliferative cell columns, preventing the correct visualization of all cells composing a single column. Improvements could be made on this matter by implementing three-dimensional stereological methods. In addition, further analyses are needed to confirm the regional effects of the device, such as intervertebral disc health and the preservation of other connective tissues. In conclusion, the new two-sided intra-vertebral staple implant achieved a significant growth modulation after three months of instrumentation, associated with a significant reduction in the average growth rate within the instrumented region. At the histomorphometric level, this study showed that both hypertrophic and proliferative zone parameters are major contributors to the average growth rate reduction induced by the implant, the proliferative and hypertrophic zones being the most mechanically sensitive zones of growth plates.
Moreover, the local action of the implant was also highlighted, since no mechanobiological effects were found in the adjacent segments.
Skeletal muscle as a privileged site for orthotopic skin allografts.

A semi-privileged status for rat skin allografts may be achieved by placing them on extensive open beds formed by panniculus carnosus muscle, which prevents contact of the transplant with host skin. Such allografts enjoy approximately a twofold increase in their life expectancy, even if transplanted across a strong histocompatibility barrier. Experiments are described which rule out stress or a "central" weakening of response, such as enhancement, as explanations of this phenomenon. Intact skin "islands" separated from surrounding host skin on all sides by a broad border of bared panniculus were also found to serve as privileged sites. Dye injected into these islands failed to reach the regional nodes until about the 15th day after their preparation. These studies indicate that a lymphatic deficit is responsible for the observed privileged status of the allografts.

Large ear skin grafts comprised the skin removed from both sides of an excised pinna, their aggregate area being 3-4 cm². Graft Beds.--These were cut in the close-clipped skin of the lateral thoracic wall under chloral hydrate anesthesia, supplemented with ether as required. "Fitted grafts" were those placed in beds which extended down to the level of the panniculus carnosus and which were just large enough to receive them (Fig. 1 a). "Open fit grafts" were placed on standard, very extensive beds prepared by sharply incising 3 × 5 cm rectangular outlines in the skin of the lateral thoracic wall, grasping one corner with a hemostat, and then tearing the skin as cleanly as possible from the underlying panniculus carnosus muscle. This procedure resulted in the stripping away of the epimysium as well as the overlying fascial connective tissue in which course both the blood and lymphatic vessels serving the skin. Skin grafts were placed at the centers of these beds so that their margins were widely separated from host skin (Fig. 1 b).
"Skin islands" were 1.5 × 1.5 cm squares of skin left intact at the centers of large open-fit beds. Shallow, circular beds were cut in these islands to receive fitted ear skin allografts (Fig. 1 e). Dressings.--They included plaster of Paris impregnated bandage, and were applied around the entire thorax according to our standard procedure (see 12). Primary inspection was usually carried out on the 8th postoperative day and subsequent inspections at 2-3 day intervals. Dressings were reapplied so long as unepithelialized granulation tissue was present. Visualization of Afferent Lymphatic Vessels and their Draining Lymph Nodes.--This was accomplished by intradermal injection, via a no. 30 gauge needle, of a mixture of equal volumes of 2% aqueous solutions of Berlin Blue and Patent Blue V (5). EXPERIMENTS AND OBSERVATIONS Base-Line Data: Survival Times of Fitted Allografts.--To provide the necessary controls for the experiments to be described, standard fitted 1 cm diameter ear skin grafts from Lewis and DA donors were transplanted to different panels of Fischer hosts (see Table I, experiments 1 and 2). The median survival time (MST)* of Lewis → Fischer grafts was 10.5 ± 1.1 days and that of DA → Fischer grafts was 8.4 ± 0.33 days. Fate of Open-Fit Allografts on Extensive Beds.--The 1 cm diameter skin allografts transplanted in open style were extremely healthy and well-united to their beds by the 8th postoperative day and, with both donor/host strain combinations, the majority significantly outlived their "fitted" controls (Table I, experiments 3 and 4). Grafts that lived long enough became surrounded by annuli of outgrowing hyperplastic epithelium which frequently made contact and fused with ingrowing native epithelium from the wound margins.
Although maintenance of the dressings on the wounds retarded contracture (13), eventually this process progressed, apparently eliminating the resurfaced granulation tissue which developed in the wound, to the point where graft dermis and host skin dermis became juxtaposed, usually after about the 20th day. Regeneration of fur took place on many of the long-lived allografts. Rejection of all grafts, when it finally occurred, was an acute rather than a chronic process; total sloughing of the epithelium being complete within 1-2 days of the first indication of its weakness. (* Abbreviation used in this paper: MST, median survival time.) Influence of Dosage on Open-Fit Allograft Survival.--To find out whether the amount of allogeneic skin transplanted exerted any influence on its longevity on the panniculus, large composite grafts comprising all the skin that could be obtained from both sides of the median ear cartilage of a rat's pinna (3-4 cm²) were transplanted to standard extensive wounds. The size of these grafts necessarily resulted in their perimeters approximating host skin sooner than was the case with the small grafts. Again, this mode of grafting conferred considerable protection upon both Lewis and DA allografts, with the results of the former being as good as those obtained when the smaller grafts were used. However, the performance of the large DA grafts was inferior to that of their smaller counterparts (see Table I, experiments 5 and 6).
When 1 cm diameter grafts of trunk skin, which is much thicker than ear skin, were transplanted to extensive beds, those from Lewis donors fared just as well as ear skin, but those from the DA donors only outlived their controls by a few days (Table I). Analysis of the Basis of the Prolonged Survival of Skin Allografts on Extensive Pannicular Beds.--There are three obvious factors which, either singly or in combination, might contribute to the observed impairment of allograft rejection: (a) operative stress leading to increased corticosteroid production, which includes hormones with immunosuppressant properties, (b) inadequacy of the lymphatic drainage in the graft bed, resulting in attenuation of the afferent pathway of the immunologic reflex, and (c) exposure of the host to antigenic material via the venous rather than the lymphatic route, favoring the development of humoral rather than cellular immunity, and so leading to the phenomenon of immunologic enhancement, i.e., the grafts may be capable of "self-enhancement," as in the case of renal and cardiac allografts in rats (14,15). The experiments now to be described were designed to discriminate between these possibilities. Influence of stress: Standard large wounds were prepared on the right thoracic walls of Fischer hosts but instead of placing 1 cm diameter Lewis test grafts at their centers, these grafts were "fitted" into beds on the contralateral sides of the hosts' thoraxes (Fig. 1 c). This resulted in a small but significant prolongation of graft survival, the MST being extended to 12.2 ± 0.37 days (Table I, experiment 9). A second, independent appraisal of the influence of stress was made by placing 1 cm DA ear skin allografts in eccentric locations on extensive beds so that one point on their perimeter was in direct contact with host skin (Fig. 1 d).
Again, these grafts displayed only a trivial prolongation of survival (Table I). However, of a separate panel of 12 DA skin grafts which were placed eccentrically so that a distance of 1 cm of wound bed separated the graft and wound margins at their nearest point, six displayed very significant prolongation of survival (to 14-25 days: Table I, experiment 11). These observations suggested that intact skin at the wound margins is a more consistent source of "something" essential for graft rejection than the panniculus bed. The most obvious candidate here, of course, is lymphatic drainage (though there are other more remote possibilities, e.g., skin might be richly endowed with antigen-sensitive cells that can recognize transplantation antigens). To explore this possibility standard 1 cm allografts were fitted into beds cut in the centers of small islands of intact skin left at the centers of extensive muscle beds (Fig. 1 e). It was reasoned that if the longevity of the open-fit grafts was due to paucity of their lymphatic drainage, this situation should apply to intraisland beds, but if contact with host skin having an intact blood supply is the essential feature for procuring rejection, then intraisland grafts should display normal susceptibility. Lewis allografts fared extremely well in this site, having an MST of 19.8 ± 4.12 days, and 3/15 grafts lived longer than 50 days. However the results with DA grafts were disappointing, the MST being 10.8 ± 4.17 days, and only 2/7 grafts surviving beyond 20 days (Table I, experiments 13 and 14). Dye Injection Studies.--Attempts to study the lymphatic drainage status of the panniculus muscle at various stages after grafting gave equivocal results because of the tendency of the dye to leak and spread rapidly over the surface to the wound edges where it was rapidly taken up by the lymphatics.
Injection of the healed-in thin ear skin grafts also proved unsatisfactory since, with surviving grafts, it was difficult to confine the inoculation to graft connective tissue and with rejecting grafts the vessels had lost their patency. However the "full-thickness" skin islands proved ideal sites for dye injection, especially when they were 4 days post preparation. At and beyond this time their initially raw dermal margins had become reepithelialized, preventing lateral leakage of dye from transected lymphatics. It seemed reasonable to assume that the lymphatic drainage status of these islands would be representative of that of the healed-in, open-fit skin grafts. Dye injected into skin islands of less than 11 days' standing gave no evidence of entering regional lymphatics or of reaching the draining axillary and brachial nodes (Table II). However dye did escape into lymphatics and enter the nodes from 8/11 islands injected 18 or more days after their preparation. When islands of more than 22 days' standing were injected, dye was transmitted to the regional nodes in 5/5 tests. This result was not unexpected since contracture had resulted in the close approximation of graft skin with host skin. On the basis of these findings, it was predicted that if lymphatics are important in graft rejection, allografts inlaid into skin islands prepared 7-9 days previously should also enjoy some deferment of rejection. This prediction was fulfilled by the finding that 3/6 delayed Lewis intraisland grafts survived for longer than 25 days (Table I, experiment 14). (Table II footnote: each score relates to an independent test conducted on a single rat; in one animal the regional node was not stained, but a dye-stained lymphatic-like vessel was observed passing directly through the chest wall from beneath the skin island.)
Influence of Immunologic Enhancement.--To determine the extent to which the phenomenon of immunologic enhancement might have participated in weakening the host's reactivity to skin allografts, Fischer rats which had previously manifested attenuated reactivity to open-fit allografts were rechallenged on the opposite side of their trunks with "fitted" allografts from the original donor strain. In all instances these animals behaved as if they had been sensitized. In another series of tests primary open-fit and fitted Lewis grafts were excised from two groups of five Fischer rats after they had been in residence for 4 days. The animals were then challenged with secondary fitted Lewis grafts on the opposite side of the thorax. The results (Table III, experiment 3) indicate that residence of the primary graft for as long as 20 days was insufficient to sensitize all hosts, though in the majority of cases 9-10 days' exposure to the primary graft was sufficient to initiate weak sensitization. Finally, a panel of 14 Fischer rats which had received open-fit DA grafts on the right thoracic wall 7 days previously received fitted grafts on the left side. None of these second-set grafts showed prolonged survival (see Table III, experiment 4). The observation that none of the primary grafts lived longer than 15 days, and most of them were destroyed concomitantly with their accompanying second-set grafts, suggested that the latter compromised the survival of their predecessors. This, again, indicates that central inhibition of response was not involved in the prolongation of graft survival observed.
DISCUSSION The present findings confirm and extend to the rat an observation previously made in the guinea pig (11), that skin allografts transplanted in open style to extensive beds afforded by bare panniculus carnosus muscle enjoy a highly significant prolongation of survival--by a factor of about 2--provided that the graft margins do not make contact with host skin. That this abrogation of allograft reactivity was not due to operative stress, and an associated release of corticosteroid hormones, or to some kind of induced "central" weakening of response, such as immunological tolerance or enhancement, was established on the basis of several discriminating experiments. For example: (a) only feeble prolongations of survival resulted when the grafts were fitted into small beds on one side of the hosts' trunks and extensive panniculus wound beds were prepared on the contralateral sides or when the grafts were placed marginally in contact with host skin on the extensive beds, and (b) no prolongation of survival was enjoyed by secondary fitted skin allografts transplanted to hosts bearing healthy, open-fit grafts of 7 days' standing. Interference with the afferent limb of the immunologic reflex is a much more likely explanation, especially since: (a) allograft survival was also prolonged by fitting them into small beds prepared in intact residual islands of skin at the centers of extensive areas of raw panniculus and (b) dye injection studies on such skin "islands" at various times after their preparation indicated a transient deficiency of lymphatic drainage from the wound beds to the regional nodes persisting until about the 15th postoperative day. It is necessary to emphasize that the present findings are easily reconciled with the familiar observation that skin allografts placed centrally upon large beds afforded by the panniculus carnosus in the rabbit are promptly rejected (16).
In this species there is a natural cleavage plane between the dermis and the panniculus so that the epimysium and its rich vascular and lymphatic network are normally left intact when such extensive beds are prepared. In rats and guinea pigs, by contrast, this cleavage plane is absent and the mode of preparation of these beds results in either the removal in toto of, or at least heavy damage to, the epimysium and its associated vasculature. The present observations are consonant with the general thesis that the various known immunologically privileged sites which sustain vascularized skin allografts do so by virtue of the absence or considerable impairment of lymphatic drainage, i.e. a deficiency in the afferent pathway of the immunologic reflex. Precisely what function lymphatic channels fulfill in mediating the rejection of free tissue allografts remains to be determined. The conventional view is that they transmit antigenic material to the lymph node. Alternatives are that they transmit host lymphocytes which have been "primed," i.e. have fulfilled the act of antigenic "recognition," peripherally (17)(18)(19). Another possibility, which is not mutually exclusive, is that they transmit leukocytic "passenger" cells from the graft which function as antigen when they have percolated into the draining node (20,21). Apart from their possible usefulness in sustaining allografts in nonimmunosuppressed hosts--whether for experimental or for clinical purposes--and the light they have shed on the pathophysiology of allograft rejection, the existence of immunologically privileged sites has some important theoretical and clinical implications: (a) It challenges the thesis that normal, unsensitized animals already possess a significant proportion of lymphocytes endowed with the capacity to react against major locus-incompatible allogeneic cellular antigens, i.e.
the only difference between a normal animal and a sensitized animal is that the latter has a higher incidence of such cells (22). (b) If the postulated and much discussed cell-mediated immunological tumor surveillance mechanism (23) really exists, one would have anticipated that natural privileged sites in which rapidly proliferating populations of epithelial or other cells are present--as in the cornea or the hamster's cheek pouch--would be common sites for the development of malignancies. There is no evidence that this is so. However, of possible relevance is the high incidence of reticulum sarcomas in the brains of immunosuppressed renal transplant patients (24). It has been suggested that the brain's alymphatic status, in conjunction with the immunosuppression, results in a blunting of immunologic recognition and of the initiation of cellular immunity directed against the tumor-specific antigens at an early stage. It is also tempting to relate the relatively high incidence of tumors which develop in cutaneous burn scars in man, noted by Celsus in the first century (25), to the immunologically privileged status which healed burn lesions have been shown to possess (26). SUMMARY A semi-privileged status for rat skin allografts may be achieved by placing them on extensive open beds formed by panniculus carnosus muscle which prevents contact of the transplant with host skin. Such allografts enjoy approximately a twofold increase in their life expectancy, even if transplanted across a strong histocompatibility barrier. Experiments are described which rule out stress or a "central" weakening of response, such as enhancement, as explanations of this phenomenon. Intact skin "islands" separated from surrounding host skin on all sides by a broad border of bared panniculus were also found to serve as privileged sites. Dye injected into these islands failed to reach the regional nodes until about the 15th day after their preparation.
These studies indicate that a lymphatic deficit is responsible for the observed privileged status of the allografts.
Related factors and regional differences in energy consumption in China This article used cluster analysis to separate China's 30 provinces and municipalities into three categories, high, moderate and low energy consumption areas, according to their energy consumption levels and characteristics from 1985 to 2007; the three categories differed significantly in energy consumption. Based on this classification, the authors analyzed the influencing factors of energy consumption in the three areas by means of a panel data econometric model. The results showed that the influencing factors differed markedly across areas. To support the national goal of energy conservation and emission reduction, energy measures and policies should therefore be differentiated by region. characteristics, while that relation was apparently linear from 1977 to 2005. PENG, WU and WANG (2007) pointed out that the industrial sector was the major energy-consuming sector, with a high proportion of, and impact on, total energy consumption. WANG and LIU (2007) studied the relationship between China's energy consumption and economic growth by cointegration analysis and Granger causality tests. They found that China's energy consumption and economy grew with volatility in the short run while maintaining a stable equilibrium relation in the long term, and that there was one-way causality from energy consumption to economic growth. ZHOU (2007) used Granger causality, dynamic correlation coefficients, small-sample tests and other econometric methods to analyze the mechanism linking China's energy consumption and economic growth since the reform and opening up, and maintained that economic growth was the reason energy consumption increased.
WANG, TIAN and JIN (2006) studied China's energy consumption and economic growth with a variable-parameter model and found that they maintained a long-term equilibrium relationship (a variable-parameter cointegration relationship) which did not change over time. LIN (2001) used a cointegration and error-correction model to conclude that total energy consumption, GDP, energy price and energy structure had a long-term equilibrium relation; not only price and income, but also the heavy-industry share in GDP, which reflected structural change, were critical determinants of energy demand. The above investigations mainly used time-series data and discussed China's energy issues from a national perspective. In fact, treating all areas as homogeneous in a one-size-fits-all framework, neglecting each region's differences in resources and economic level, would hamper the achievement of the energy conservation and emission reduction goal and of harmonious development. In this article, the authors examined energy consumption across China's different regions using a panel data model. The results are more reliable and comprehensive because a panel data model has the three dimensions of individual, index and time, which allows regional differences and time trends to be taken into account. In the investigation, the authors used cluster analysis to group China's 30 provinces and municipalities (Tibet was not included because of the lack of energy data) according to their energy consumption levels. Classifying regions with similar energy consumption levels into one category is more reliable because it overcomes the defects of the traditional regional classification (eastern, western and middle regions).
Index selection and data explanation The authors selected the following indices for analysis according to data availability, existing studies' results and the plausible influencing factors of energy consumption. Total energy consumption quantity (eng) Total energy consumption refers to the consumption of primary energy, i.e., the sum of coal, oil, natural gas and hydropower consumption. All figures were converted to "ten thousand tons of standard coal equivalent" as the measurement unit. Economic growth (GDP) Economic growth relies heavily on energy consumption. Most studies showed that GDP was a main determinant of energy demand, so the authors expected a positive relation between economic growth and energy consumption. Total fixed assets investment (INV) According to the definition in the Statistical Yearbook of China, total fixed assets investment comprises investment by the state-owned economy, by urban and rural collective ownership units and by urban and rural individual ownership units. This article used the ratio of regional fixed assets investment to GDP to represent investment status, and the authors expected a positive relation between fixed assets investment and energy demand. Population growth (POP) With economic development and rising incomes, the living standard of urban and suburban citizens has improved gradually. The denser the population and the higher the income level, the more energy is demanded; meanwhile, the speedup of local industrialization and urbanization raises the average rate of energy consumption. Therefore, high population growth would certainly increase energy demand. This article used the total population of each region to represent population growth, and the authors expected it to be positively related to energy demand.
Industrial structure (STR) Industrial structure and its change are important factors influencing energy consumption. Among the three industry sectors, the secondary industry is the main energy consumer, while the tertiary industry has high added value and low energy consumption; therefore, energy consumption could be mitigated by increasing the tertiary industry's proportion. This article used the ratios of the output value of the secondary and tertiary industries to GDP to represent industrial structure change. The authors expected the effect on energy consumption to be positive for the secondary industry and negative for the tertiary industry. Energy price (EP) Energy price directly influences the quantity of energy consumed. In China, the purchasing price indices of raw materials, fuels and power comprehensively reflect the relative price of production inputs. The authors expected energy price to be negatively related to energy consumption. However, Chinese energy prices were very low because the energy pricing mechanism had not been marketized. Price distortion made it impossible for the supply and demand sides to receive correct market signals, which in turn distorted consumption. In addition, this kind of price mechanism caused low efficiency and excessive use of energy because it could not regulate energy production and consumption behavior; eventually, the law of demand was violated. Our statistical data came from the Statistical Yearbook of China, the China Energy Statistical Yearbook, Fifty-Five Years of New China's Statistics Compile and each province, municipality and autonomous region's statistical yearbook. The base period was 1990 and all data were deflated to 1990 prices. Comparison of energy consumption In this article, the authors first calculated the average energy consumption of the 30 provinces and municipalities from 1985 to 2007 (see Table 1).
From the calculation results, we can see that the top 3 energy-consuming provinces in the eastern areas were Shandong (10,016), Hebei (9,643) and Liaoning (9,174), with average consumption close to or above 10,000 ten thousand tons of standard coal equivalent. However, in Hainan province, also an eastern area, the figure was only 454 ten thousand tons of standard coal equivalent, the least: about 1/22 of Shandong, 1/21 of Hebei and 1/20 of Liaoning. For the middle areas of China, the figures were: Henan (7,570), Shanxi (6,729) (the most), Jiangxi (2,306) (the least). For the 11 western provinces, the extremes were Sichuan (6,470, the most) and Qinghai (792, the least), about 1/8 of Sichuan. We can clearly conclude from the above analysis that discrepancies in China's energy consumption existed not only among different areas but also within them. Therefore, the traditional regional classification was no longer adequate for analyzing China's provincial energy consumption. Likewise, the eight-economic-region classification was too finely divided to present a clear-cut comparison and still left great differences in energy consumption within the same region. Cluster analysis of provincial energy consumption Given the drawbacks of the traditional regional classification for studying energy consumption, the authors used cluster analysis. The authors applied hierarchical (systematic) clustering to the energy consumption levels of the 30 provinces and municipalities. The data were standardized to [-1, 1] (see Fig. 1) and the 30 provinces and municipalities were separated into three categories according to Euclidean distance.
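The clustering step can be sketched as follows. This is a minimal illustration, not the paper's computation: it uses only the nine provincial averages quoted above (the paper clusters the full 1985-2007 series for all 30 regions), standardizes to [-1, 1], and cuts a hierarchical tree built on Euclidean distance into three groups.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Average consumption figures quoted in the text (10^4 tce)
consumption = {
    "Shandong": 10016, "Hebei": 9643, "Liaoning": 9174,
    "Henan": 7570, "Shanxi": 6729, "Sichuan": 6470,
    "Jiangxi": 2306, "Qinghai": 792, "Hainan": 454,
}
names = list(consumption)
x = np.array([consumption[n] for n in names], dtype=float).reshape(-1, 1)

# Standardize to [-1, 1] as described in the paper
x_std = 2 * (x - x.min()) / (x.max() - x.min()) - 1

# Agglomerative ("systematic") clustering on Euclidean distance,
# then cut the dendrogram into three categories
Z = linkage(x_std, method="ward", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")
for name, lab in zip(names, labels):
    print(name, lab)
```

With these inputs the three groups recover the high, moderate and low bands described in the text.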
The first category was the high energy consumption areas, incorporating Hebei, Liaoning, Jiangsu, Shandong and Henan provinces, with average energy consumption from 7,570 to 10,016 ten thousand tons of standard coal equivalent. The cluster analysis results help to understand the discrepancies and characteristics within and between the categories. The results also provide grounds for drawing up policies in support of the goal of energy conservation and emission reduction, and new references for later investigations of energy consumption issues. Fig. 2 shows a great difference in energy consumption among the three categories of regions. The overall energy consumption amount was trending upward; before 2001 the spread was small, but after that energy consumption increased drastically. In Fig. 2, the first (more than 20,000 ten thousand tons of standard coal equivalent after 2006) and second (more than 12,000) categories of regions showed the largest increases, while the low energy regions remained below 5,000 after 2006; although their consumption gradually increased, the spread was quite small. The gap between the three categories of regions was widening. It is important to recognize this disparity when taking targeted measures to enhance the efficiency of energy utilization and control energy waste. Econometric model designing and selecting According to the discussion of the influencing factors of energy consumption in part 2, the authors designed the following panel data model: ln eng_it = α + β1 ln GDP_it + β2 INV_it + β3 ln POP_it + β4 ln STR2_it + β5 ln STR3_it + β6 ln EP_it + ε_it (1), where i indexes the region, t the year, and STR2 and STR3 denote the secondary- and tertiary-industry shares in GDP. Allowing the intercept to vary across regions gives the entity fixed effect form: ln eng_it = α_i + β1 ln GDP_it + β2 INV_it + β3 ln POP_it + β4 ln STR2_it + β5 ln STR3_it + β6 ln EP_it + ε_it (2). The authors estimated model (2); the F statistic for the common-intercept restriction exceeded its critical value F_α, which rejected H0, so we chose the entity fixed effect model (the test process is omitted to save space because it is long and complex).
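The mechanics of an entity fixed effect estimator can be illustrated on synthetic data. This is a sketch only: the region count, year span and the 0.9 elasticity below are made-up values, not the paper's data or estimates. The within (entity-demeaning) transformation removes the region intercepts α_i, after which ordinary least squares recovers the slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_years = 5, 23            # e.g. 5 regions observed 1985-2007

# Synthetic log-linear panel: ln(eng) = a_i + 0.9 * ln(GDP) + noise
alpha = rng.normal(0.0, 1.0, n_regions)                 # region fixed effects
ln_gdp = rng.normal(8.0, 1.0, (n_regions, n_years))
ln_eng = alpha[:, None] + 0.9 * ln_gdp + rng.normal(0.0, 0.05, (n_regions, n_years))

# Within transformation: subtract each region's time mean,
# which eliminates the fixed effects a_i
y = ln_eng - ln_eng.mean(axis=1, keepdims=True)
x = ln_gdp - ln_gdp.mean(axis=1, keepdims=True)

# OLS on the demeaned data recovers the common slope
beta_hat = (x * y).sum() / (x * x).sum()
print(round(beta_hat, 3))
```

The same logic extends to the multi-regressor model (2) by stacking the demeaned regressors into a matrix and solving the normal equations.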
Panel model estimating results We estimated equation (2) by Generalized Least Squares (Cross-section SUR (PCSE)); see the results in Table 2. Notes: the authors used Eviews 5.0; in brackets, the first number is the t statistic and the second the probability. Empirical analyses From Table 2, we can see that all the regression coefficients passed the significance test at the 1%, 5% or 10% level; R² was 98.03%, 98.89% and 99.24% respectively; the F statistics were large; and the D.W. test indicated no autocorrelation. We can therefore draw the following conclusions: (1) Economic growth had significantly different impacts on energy consumption in the three categories. A GDP increase of 1 percent would bring a 0.93% and a 0.26% increase in the high and moderate energy consumption areas respectively, while it had little influence on the low energy consumption areas. Except for Henan province, the high energy consumption areas were all developed eastern regions. They accounted for 50% of the whole nation's energy consumption, despite relatively high efficiency of energy utilization, so economic growth was the main driver of rising energy consumption there. The moderate energy consumption area contained 3 backward middle and western regions and 2 relatively developed provinces (Shandong, Guangdong). The backward areas had low efficiency of energy utilization, but they were usually important energy production bases, which indicates that their economic growth relied on excessive energy consumption. The low energy consumption area was mainly composed of backward minority areas and developed areas with high efficiency of energy utilization, such as Beijing, Tianjin, Shanghai, Fujian and Hainan.
Because the minority areas were in the early stage of industrialization, with a low level of economic development, a high proportion of the primary industry in GDP and a low level of urbanization, their energy demand was objectively small. However, for minority areas possessing abundant energy resources, the resource advantage could be turned into an economic advantage. Meanwhile, because the developed areas with high efficiency of energy utilization were in the advanced stage of industrialization, with fast technological progress and an economic growth mode transforming from "extensive" to "intensive", their dependence on energy consumption was relatively low. (2) Fixed asset investment growth had a positive impact on energy consumption. The influence coefficients of fixed asset investment were close for the low and moderate energy consumption areas, at 0.0023 and 0.0024 respectively, but for the high areas the coefficient was 0.0265. The analysis is as follows: since the 1990s, the low and moderate energy consumption areas, which were the major energy production bases, had seen fast growth of fixed asset investment, mostly directed to energy, raw materials and other high-energy-consumption industries; their investment mode was mainly "extensive", which did little to enhance the efficiency of energy utilization. The high energy consumption areas, by contrast, enhanced the efficiency of energy utilization through technological improvement thanks to their developed economies, and their investment went mainly to higher-technology industries such as electronic information, electrical apparatus, machinery manufacture and pharmaceuticals, which have low energy consumption and high added value. Consequently, these areas' energy demand declined.
(3) Population growth had a significant impact on regional energy consumption. A population increase of 1 percent would bring a 0.44% and a 0.20% increase in the low and moderate energy consumption areas respectively, and a 0.51% decline in the high energy consumption areas. Our explanation is that population is a traditional influencing factor of energy consumption, and the relation is positive. Recent years have witnessed fast growth of energy consumption in daily life, driven by rising incomes and quality of life, the popularization of household appliances and greater use of private cars, which changed people's lifestyles and, correspondingly, average energy consumption. Although China's average energy consumption was only about 25% of that of developed countries, population's impact on regional energy consumption will be long-term and stable as energy demand increases. (4) Industrial structure had significantly different impacts on energy consumption. The authors found that the secondary industry's structural change affected the moderate energy consumption regions most, with an elastic coefficient of 0.6018; the coefficients for the high and low energy consumption regions were 0.4789 and 0.3657 respectively. The analysis is as follows: the moderate energy consumption regions experienced fast growth of the secondary industry, especially heavy industry, which led to tremendous energy demand. For the high energy consumption regions, although the industrial structure was at a relatively high level, industry was still the leading sector of the secondary industry, making it a main factor in increasing energy consumption; its elastic coefficient was nevertheless smaller than that of the moderate regions thanks to more rational industrial distribution and a higher structural level. Finally, in the low energy consumption regions, the industrial structure level was very low. Since the late 20th century, China has experienced a period of fast economic growth.
Although still below that of the high and moderate regions, energy demand in the low regions changed positively as their secondary industry grew rapidly. Changes in the structure of the tertiary industry affected the moderate energy consumption regions most, with an elasticity coefficient of 0.6645. The coefficient for the high energy consumption regions was 0.1556, which contrasted with our expectation, while that of the low regions was negative, which accorded with our expectation. The leading part of the tertiary industry is the service sector, which has high added value and low energy consumption; a higher share of the tertiary industry should therefore, in principle, lower energy consumption. All in all, optimization of the industrial structure would offer greater energy conservation potential and mitigate energy demand in all three categories of regions.
(5) Energy price influenced energy demand in the same direction in all three categories of regions, but to significantly different degrees, which was opposite to our expectation
An energy price rise of one percent would bring a 0.56%, 0.67% and 1.1% increase in energy demand to the high, moderate and low energy consumption areas respectively. This estimated result showed that a short-run rise in energy prices would have little inhibiting effect on energy demand, and even less impact on the low energy consumption areas with rich energy resources. This in turn proved that China's regional economic growth had a strong driving effect on energy demand. Chinese energy prices were very low because the pricing mechanism had not been marketized, and such a price mechanism leads to low efficiency and excessive use of energy. Therefore, advancing the reform of energy price formation and giving full play to market mechanisms is a problem that urgently needs to be resolved now and in the future.
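The elasticity figures quoted above can be read off a constant-elasticity (log-log) demand model, in which the coefficient on a driver gives the percentage change in demand per one-percent change in that driver. The sketch below is a minimal illustration of that arithmetic; the log-log functional form is an assumption, and only the population coefficients are taken from the text.

```python
# Hypothetical illustration of how the paper's elasticity coefficients
# translate into demand changes under a constant-elasticity model.

def demand_change(elasticity, pct_change_in_driver):
    """Approximate % change in energy demand for a small % change
    in a driver, given a log-log (constant-elasticity) model."""
    return elasticity * pct_change_in_driver

# Population elasticities reported in the text for the three region types
low, moderate, high = 0.44, 0.20, -0.51

# A 1% population increase implies:
print(demand_change(low, 1.0))       # +0.44% (low-consumption regions)
print(demand_change(moderate, 1.0))  # +0.20% (moderate regions)
print(demand_change(high, 1.0))      # -0.51% (high regions: a decline)
```

The same multiplication applies to the secondary-industry and price elasticities reported elsewhere in the section.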
Conclusions and economic implications
According to the above analysis, the authors came to the following conclusions:
(1) This article used cluster analysis to separate 30 provinces and municipalities of China into three categories according to their energy consumption levels and characteristics from 1985 to 2007: high, moderate and low energy consumption areas. From a vertical (temporal) point of view, the energy consumption level, resource endowment and socio-economic development level within the same category showed certain commonalities. From a horizontal point of view, there were significant differences among the three categories of regions in energy consumption level, with a consumption ratio of 3.02:2.23:1. We also have to point out that the discrepancies among them will become more pronounced over time.
(2) From the estimation results of the panel model of energy demand, it can be concluded that the main influencing factors in the high energy consumption areas were economic growth, the share of secondary-industry output in GDP, and energy price. For the low energy consumption areas, the main factors were fixed asset investment, population growth, the share of secondary-industry output in GDP, and energy price. Meanwhile, the influencing factors in the moderate energy consumption areas were comprehensive and various. One point worth mentioning is the obvious effect of population and energy price on regional energy consumption. Because China has a large population base, and because lifestyles have changed and incomes have risen, average energy demand will trend upward. Regulators should therefore promote the concept of energy conservation and advocate choosing products with high energy efficiency to mitigate household energy consumption.
Chinese energy prices could not reflect resource availability or the status of supply and demand because the pricing mechanism had not been marketized. Consequently, energy prices could not allocate resources effectively in the short run, and the problem was more serious in areas with high resource endowments. Therefore, China should first promote the reform of its energy pricing mechanism so that prices reflect the status of supply and demand and allocate resources effectively.
(3) Given the differing influencing factors and degrees of energy consumption across the three categories of regions, differentiated energy conservation policies should be implemented. The high energy consumption areas, building on their sound economic foundations, should construct a resource-saving society and develop a recycling economy to reduce the dependence of economic growth on energy consumption. This objective can be accomplished through science and technology innovation, the popularization and application of new technologies, equipment and products, and the development of the energy conservation industry. The moderate energy consumption areas, being in a period of fast development and transformation, have great potential for energy conservation; they should vigorously rearrange and optimize their industrial structure and develop the tertiary industry, with its low energy consumption and high added value. In addition, fixed asset investment should be controlled and the investment structure adjusted. The low energy consumption areas should emphasize improving the efficiency of energy utilization, speed up the transformation of their economic growth mode, and accelerate the adjustment, optimization and upgrading of their regional economic and industrial structures. Moreover, the spread of heavy industry should be controlled to lay a foundation for energy conservation and emission reduction.
All in all, the three categories of regions should break regional boundaries, cooperate actively with one another, and take a sustainable development approach to building a resource-saving society.
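Conclusion (1) above rests on a cluster analysis that groups 30 provinces by energy consumption level. As a rough illustration of how such a grouping works, the sketch below runs a tiny one-dimensional k-means over invented consumption indicators; the values, province count, and initial centroids are placeholders, not the paper's data.

```python
# Simplified stand-in for the paper's cluster analysis: a one-dimensional
# k-means that groups provinces by an energy-consumption indicator.
# All numbers here are invented placeholders, not the paper's data.

def kmeans_1d(values, centroids, iters=20):
    """Tiny 1-D k-means: returns (labels, centroids)."""
    for _ in range(iters):
        # Assign each value to its nearest centroid
        labels = [min(range(len(centroids)),
                      key=lambda k: abs(v - centroids[k])) for v in values]
        # Move each centroid to the mean of its assigned values
        for k in range(len(centroids)):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                centroids[k] = sum(members) / len(members)
    return labels, centroids

# Hypothetical consumption indicators for six provinces
consumption = [9.1, 8.7, 5.2, 5.0, 2.1, 1.8]
labels, centers = kmeans_1d(consumption, centroids=[9.0, 5.0, 2.0])
print(labels)  # -> [0, 0, 1, 1, 2, 2]: high / moderate / low groups
```

Provinces sharing a label fall into the same consumption category; the paper's actual analysis would operate on multi-year, multi-indicator data rather than a single scalar.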
Second Language Acquisition of Constraints on WH-Movement by L2 English Speakers: Evidence for Full-Access to Syntactic Features
This paper presents results from two experiments on the L2 acquisition of wh-features and relevant constraints (Superiority and Subjacency) by L1 Sinhala–L2 English speakers. Our results from a Truth Value Judgment Task and a Grammaticality Judgment Task with 31 English native controls and 38 Sinhala/English bilinguals show that the advanced adult L2 speakers of English we tested have successfully acquired the uninterpretable wh-Q feature and relevant movement constraints in English, despite the lack of overt wh-movement in L1-Sinhala. These results raise questions for Representational Deficit Accounts of second language acquisition and offer evidence that (i) uninterpretable syntactic features are not necessarily subject to an early critical period and (ii) uninterpretable features not instantiated in learners' L1 can be available for L2 syntactic computation. We take our results as evidence for full access by L2 learners to syntactic properties that are not instantiated in their L1, but that remain accessible due to a cognitive capacity for language (i.e., knowledge of Universal Grammar) independently of the L1.
Introduction
This paper presents an experimental study to re-evaluate the Feature Interpretability Hypothesis (Hawkins and Hattori 2006; Prentza and Tsimpli 2013; Tsimpli and Dimitrakopoulou 2007), a representational deficit theory of learnability in adult Second Language Acquisition (SLA) that has received a substantial amount of attention in SLA research.
The proponents of the Feature Interpretability Hypothesis (henceforth the Interpretability Hypothesis) adopt a feature distinction proposed in syntactic theory within the Minimalist Program (MP) (Chomsky 1995, 2001 and thereafter) and postulate that the L2 acquisition of uninterpretable syntactic features (as opposed to interpretable features), which are purely syntactic and interface-independent in nature, is subject to an early critical period, a time when one has complete access to the inventory of features made available by Universal Grammar (UG): "in this theory, the domain of the functional lexicon in the Language Faculty (FL) ceases to be accessible once first language acquisition is complete" (Tsimpli and Dimitrakopoulou 2007, p. 217). Therefore, after the so-called critical period, during which a child constructs a mental grammar by acquiring feature values/specifications for his/her native language, an L2 learner has access to only those uninterpretable syntactic features that are directly instantiated in their L1. Given this, acquiring native-equivalent competence in any new uninterpretable syntactic features (those features absent in the learner's L1) is argued to be 'impossible' after the critical period, and apparent native-like performance by L2ers (Second Language Learners) may not imply that they have developed native-equivalent underlying grammatical knowledge and representations of those features: "by hypothesis, there is a permanent 'loss of capacity to acquire' in this domain" (Hawkins and Hattori 2006, p. 272). In contrast, for interpretable features, L2ers are expected to be able to reset parameters or acquire new feature specifications; their ability to acquire the corresponding native-equivalent underlying representations cannot be permanently lost due to the time of onset of L2 acquisition.
This paper re-evaluates the predictive and explanatory power of the Interpretability Hypothesis regarding the acquisition of wh-questions and corresponding locality constraints (Subjacency and Superiority) by L1 Sinhala-L2 English speakers. To re-evaluate the Interpretability Hypothesis, we conducted two experiments-a Truth Value Judgment Task (TVJ) and a Grammaticality Judgment Task (GT)-with 38 L2 speakers of English (Sinhala L1 speakers) and a control group of 31 English native speakers. The Truth Value Judgment Task (TVJ) is a partial replication of a similar task carried out by Hawkins and Hattori (2006). However, our study is different from Hawkins and Hattori's study at least in two respects. First, this study provides an independent measurement of L2 participants' proficiency in English, which is absent in H&H. Second, unlike H&H's study, which uses only 19 L2ers, we use a larger sample of 38 L2 participants. The results from these two experiments clearly reveal, contra predictions of RD accounts, that advanced L2 speakers are sensitive to both Superiority and Subjacency constraints that govern the syntax and the interpretation of English wh-questions with overt wh-movement. This indicates that the L2 English learners have acquired the relevant uninterpretable uwh* syntactic feature that triggers the application of Superiority and Subjacency, despite their non-instantiation, at least in the same syntactic domain, in the L1 syntax. Based on these results, we will argue that the reconfiguration of uninterpretable syntactic features is possible in adult L2 acquisition, and the acquisition of those features is not intrinsically restricted by an early critical period for language acquisition, challenging RD accounts. 
In addition to its theoretical contribution, this paper represents the first generative SLA investigation of the L2 acquisition of English syntax by L1 speakers of Sinhala (or Sinhalese), an insular Indic Indo-Aryan language spoken by approximately 17 million people in Sri Lanka. Before we present the experimental study, we discuss the relevant syntactic properties of English and Sinhala.
The Syntax of Wh-Questions: The Case of English and Sinhala
English and Sinhala are distinct regarding the way wh-interrogatives are formed. As extensively discussed in the generative literature, English is an overt wh-movement language. A single wh-phrase initially merged inside the vP domain (in non-echo questions) subsequently undergoes overt syntactic movement to its surface position, the specifier of the Complementizer Phrase (CP), where it is pronounced. This is illustrated in (1). Sinhala, in contrast, is a wh-in situ language (Gair 1983; Hettiarachchi 2015b; Kishimoto 2005): in the unmarked case, a wh-phrase always stays in its first merged position in the syntactic structure. The examples in (3) and (4) show at least two properties associated with Sinhala wh-questions: (i) in the unmarked case, a Q-particle d@ occurs adjacent to the wh-phrase, and (ii) the verb of a wh-question is obligatorily marked by an -e suffix (Kariyakarawana 1998). This is different from declaratives and yes/no questions in Sinhala, which carry an -a suffix on the verb, as illustrated in (5) below. The e-marking on the verb has been treated as a licensing requirement for the wh-element in Sinhala (see Kariyakarawana 1998). Like the mono-clausal, local wh-interrogatives illustrated in (1) and (3), complex questions involving Long Distance (LD) wh-movement show the same distinction between the two languages.
As extensively argued in syntactic theory, in a long-distance (inter-clausal) wh-question in English, such as the one illustrated in (6), the wh-phrase undergoes overt movement from the position where it is first merged inside the embedded clause to the matrix clause-initial position. The lack of overt wh-movement yields ungrammaticality in non-echo questions, implying the obligatory nature of this operation (7). Moreover, as extensively argued in generative syntax, Long Distance (LD) wh-movement happens in a successive-cyclic manner via the embedded Complementizer Phrase (CP), as shown in (6). According to Ross (1967), Long Distance (LD) wh-movement in English is subject to various island constraints, syntactic domains from which an element cannot be extracted. This is illustrated below with a Complex NP Island (9), a Wh-island (10) and an Adjunct Island (11). Chomsky (1973 and thereafter) proposed a more general constraint to account for the ungrammaticality associated with Ross' (1967) island violations. This has been known as the 'Principle of Subjacency', assumed in Principles and Parameters syntax to be a property of UG. Subjacency requires that movement be a local operation that takes place in short cycles via intermediate CPs. If a wh-phrase crosses more than one bounding node (TP and DP in English) at a time, as illustrated in (12), it violates Subjacency. Subjacency successfully accounts for the ungrammaticality of (12), in which the movement of the wh-phrase 'which car' from the embedded clause to the matrix CP crosses two bounding nodes: TP and DP.
In addition to Subjacency, Chomsky (1973, p.
246) observed that in English multiple wh-interrogatives, the movement of one wh-phrase over the other results in ungrammaticality. He proposed the condition in (15) to account for the type of ungrammaticality in (14). The Superiority condition in (15) imposes a restriction on which wh-phrase(s) can undergo movement into the Spec-CP position when a clause contains multiple wh-phrases. According to (15), who (Z) in (13) and (14) is superior to what (Y), given that every major category (i.e., at least every maximal projection) dominating who also dominates what (i.e., CP and TP) but not conversely (i.e., VP dominates what, or its trace position, but not who). However, as discussed in Lasnik and Saito (1992), Bošković (1997), and other work, even in English the Superiority Condition, as formulated in (15), is not without exceptions. In (16b) below, the Superiority condition predicts that the movement of what over where would yield ungrammaticality, as where is merged at a higher position than what in the relevant structure. Nevertheless, contra this prediction, either of the two wh-phrases can undergo movement into Spec-CP without yielding ungrammaticality. Given this, Chomsky's Superiority condition has received much discussion in both GB and Minimalist syntax (e.g., Aoun et al. 1981; Epstein and Seely 2006; Hornstein 1995; Lasnik and Saito 1992), and the phenomena that instantiate its application do not involve all wh-movement questions, raising additional questions regarding L2 acquisition. We will return to this issue in our Results and Discussion section.
Constraints on Sinhala Questions
We observed above that Sinhala is a wh-in situ language: in the unmarked case, a wh-phrase stays in situ in overt syntax, maintaining, e.g., an SOV word order (17) for an object wh-question. However, wh-questions in Sinhala, similar to non-wh clauses, also allow the OSV word order, as illustrated in (18). Even though (18) is superficially similar to an overt wh-question in English, its non-canonical word order in Sinhala is derived through a syntactic operation called scrambling, driven by a different syntactic feature than uwh* in C. Following Miyagawa's (2009) proposal for Japanese scrambling, Hettiarachchi (2015a) argues that clause-initial scrambling in Sinhala (OSV) is triggered by either a topic or a focus feature, as further discussed below. Consequently, scrambling can apply even to non-wh elements in Sinhala, unlike wh-movement in English, as shown in (19b). Similar properties regarding scrambling are also found in Japanese. It has been argued that the wh-displacement in (18), which is optional in Sinhala, is also an instance of scrambling (wh-scrambling); this is supported by the fact that it does not exhibit Superiority effects, as shown in (20), unlike wh-movement in English (see Kariyakarawana 1998, p. 145). If scrambling were driven by the same uninterpretable feature that drives wh-movement in English, the displacement of mokak 'what' in (20) would be expected to show sensitivity to Superiority. Evidence for the absence of overt wh-movement in Sinhala also comes from the status of Subjacency violations.
Similar to many other wh-in situ languages, wh-phrases are allowed inside a variety of syntactic islands in Sinhala (Gair 1983; Kariyakarawana 1998; Kishimoto 2005). In addition, Sinhala scrambling, unlike English A'-movement, is allowed from a variety of syntactic islands, as illustrated in the following examples of long-distance (inter-clausal) scrambling from a Complex DP Island (24) and an Adjunct Island (25). In each case, the island violation yields an ungrammatical sentence with topicalization in English, but not in Sinhala. This contrast indicates that scrambling in Sinhala is not driven by the same features as overt wh-movement in English. The absence of Subjacency violations in Sinhala wh-in situ questions is compatible with Huang's (1982) generalization that wh-in situ questions involve LF (wh-)movement, which was earlier argued to be sensitive only to the Empty Category Principle (ECP), but not to Subjacency, unlike overt wh-movement (Chomsky 1982). In terms of the properties outlined in this section, Sinhala is structurally very similar to Japanese: (i) both languages are wh-in situ languages (they lack overt wh-movement), (ii) they have wh-scrambling, which only superficially resembles overt wh-movement in English, (iii) scrambling does not exhibit Superiority and Subjacency effects, and (iv) wh-phrases can occur inside islands.
Learning Tasks
Given our discussion in the previous section, Sinhala native speakers acquiring L2 English must acquire a new uninterpretable syntactic feature (the uwh* feature) that is not instantiated in wh-questions in their L1 syntax. If the Interpretability Hypothesis is on the right track, the acquisition of English wh-questions should pose a learnability problem for at least those Sinhala L1/English L2 speakers who undertake the L2 learning task after the complete acquisition of functional feature specifications in their L1 syntax.
For them, L2 acquisition involves an instance of parameter re-setting, or a reconfiguration of feature specifications in the domain of the L2 functional lexicon, as elaborated below. In this section, we briefly outline specific learning tasks for the native Sinhala speakers acquiring L2 English wh-questions, along with predictions from different hypotheses and research questions to be investigated in this study. As outlined in Section 2, the competence of an English native speaker in the domain of wh-interrogatives is characterized by at least three properties:
I. A wh-phrase is first merged inside the vP and it subsequently undergoes overt wh-movement to Spec-CP.
II. The uninterpretable syntactic feature [uwh*] in C triggers obligatory syntactic movement of the wh-phrase, which needs valuation/deletion in narrow syntax.
III. The movement of the wh-phrase in any derivation must adhere to principles of locality such as Superiority and Subjacency (or, in Minimalist terms, to principles such as the Phase Impenetrability Condition/PIC and the Minimal Link Condition/MLC).
Moreover, for an English native speaker, a long-distance wh-question in English (involving adjuncts) can be ambiguous between a matrix and an embedded reading for the fronted wh-phrase. For example, the following wh-question could be either a question about when Siri said something or a question about when Mary bought a new car.
(26) [CP When did [TP Siri say [CP [TP Mary bought a new car?]]]]
It is also part of native speaker competence that the embedded reading of a long-distance wh-question can be blocked by an intervening wh-phrase at the intermediate Spec-CP, as the result of a Subjacency violation, as shown in (27). Sinhala native speakers exposed to L2 English must acquire all three properties outlined above, for which they do not have overt evidence in L1 Sinhala (as discussed in detail in the section on wh-questions in English and Sinhala).
As far as the first two properties are concerned, recall that a wh-phrase first merged inside a vP does not undergo overt wh-movement in Sinhala. Given this, the first task of these L2ers is to learn that, in the case of English overt wh-movement, wh-phrases are pronounced at a different structural position from where they are interpreted at LF. This also means that in incremental processing, these L2ers must learn to form an unbounded dependency between an antecedent (the wh-phrase at Spec-CP) and its trace/copy inside the vP where it is initially merged. Second, they need to learn that a long-distance wh-question can be ambiguous in English, as in (26), though the same ambiguity is absent in Sinhala, in which distinct sentences are necessary to yield the two meanings. Notice that the Sinhala counterparts for the English question in (27) are not ambiguous: in Sinhala, each interpretation is associated with a different word order in overt syntax (matrix question in (28)).
Predictions
Let us first assume that the Sinhala native speakers in our study have had sufficient exposure to construct a mental grammar for L2 English. Considering different hypotheses or theories on the role of UG in adult L2 acquisition, several predictions are possible concerning their interlanguage development in the domain of English wh-questions. Full Access approaches (e.g., Epstein et al. 1996; Schwartz and Sprouse 1996; White 2003) would in general predict that these L2ers can successfully acquire the relevant uninterpretable feature [uwh*] that triggers overt wh-movement in English and the application of related constraints, given that they have direct access to the complete inventory of both interpretable and uninterpretable syntactic features made available by UG.
As a result, successful L2 acquisition in this context is predicted to yield native-like sensitivity to the locality constraints associated with wh-movement, such that English L2 learners (Sinhala L1) also distinguish English wh-movement from scrambling, a syntactic operation driven by a different syntactic feature in their L1, as discussed in the previous section. Meanwhile, a prediction in line with Representational Deficit (RD) accounts such as the Interpretability Hypothesis is that Sinhala L1/English L2 speakers would continue to apply overt wh-scrambling to form wh-dependencies in the target grammar, provided that they began the L2 acquisition process after having acquired these properties in their L1. Under RD accounts, the complete acquisition of the uwh* feature in L2 syntax must not be possible for late L2 learners, as they do not have access to the UG inventory of uninterpretable syntactic features after parameter setting in their L1 (Tsimpli 2003). If the Feature Interpretability Hypothesis were on the right track, this could be evident in the absence of native-like sensitivity to the locality constraints (Superiority and Subjacency) that are associated with the uninterpretable uwh* feature triggering overt movement in English wh-interrogatives. Finally, in terms of the properties outlined in the previous section, recall that Sinhala is structurally very similar to Japanese. Thus, if RD accounts were on the right track, the acquisition of English overt wh-movement and the corresponding constraints would be expected to pose a learnability problem for Sinhala L1-English L2 speakers, in the same way that they have been argued in Hawkins and Hattori (2006) to be problematic for Japanese L1 speakers acquiring L2 English. This study will re-evaluate these predictions in view of the new experimental results presented in the following section.
Research Questions
This experimental study investigates the following research questions:
1.
To what extent are Sinhala L1-English L2 speakers sensitive to the locality constraints, Subjacency and Superiority, in English wh-interrogatives, implying that they have successfully acquired the [uwh*] feature in wh-interrogatives?
2. Does the knowledge of the L2 speakers in this study significantly differ from that of the English native speakers with respect to overt wh-movement in English?
Experiment 1: Truth Value Judgment Task
Experiment 1 involves a Truth Value Judgment task (TVJ) (Crain and Thornton 1998), a slightly modified version of the one used in Hawkins and Hattori (2006). The goal of this task is to test the sensitivity of Sinhala L1-English L2 speakers to violations of Superiority and Subjacency in English long-distance wh-extractions, which would constitute evidence that they have or have not acquired the uwh* feature that triggers wh-movement in English. It is assumed that the TVJ task allows us to test participants' sensitivity to the two locality constraints on wh-questions in a natural way, including possible ambiguities in different structures.
Participants
Thirty-nine L2 speakers of English (L1 Sinhala) in Sri Lanka and a control group of 31 English native speakers in the US participated in the two experiments reported below. The mean age of the L2 speakers was 28.3 (SD = 8.6). The mean age of the English monolinguals was 22.2 (SD = 7.5). At the time of testing, all L2 participants were either studying or teaching English at a university in Sri Lanka. Native English controls were recruited from a pool of undergraduates at a large research university in the US. In addition to the two experiments and a language background survey, all participants completed an English language proficiency test based on the ECPE (Examination for the Certificate of Proficiency in English), which is aimed at the C2 level of the Common European Framework of Reference. This test (a cloze test) consisted of 40 test items and was worth 40 points in total.
Based on the results of the proficiency test, L2 speakers were assigned to two proficiency groups: participants who scored between 34 and 40 were included in the Advanced Proficiency Group (n = 14), while those with lower scores (15-33) were included in the Intermediate Proficiency Group (n = 23).
Materials and Procedure
The TVJ experiment consisted of a series of test items, each of which included a short background story, followed by a multiple wh-question about the content of the story and two possible answers. Both answers were pragmatically plausible, given the context created by the story. However, some answers were grammatically impossible, because the interpretation they corresponded to would involve violations of Superiority, Subjacency or both. This is illustrated in the sample test item below:
(30) (a) Story: James is making plans to go hike the Great Wall of China during the summer. Last Tuesday, James promised to call Lois the following day with the details of the trip so that Lois can join him too.
(b) Test Question: Who did James promise he would call when?
(c) Answers: a: James promised that on Wednesday he would call Lois. b: James promised Lois that he would call on Wednesday.
In this task, the participants were asked to choose the most acceptable answer (they had the option to choose either one or both answers) to the question being asked. Since both answers were always pragmatically possible given the context created by the story, the acceptance or non-acceptance of either answer to the test question depended on whether the subjects allowed the fronted wh-phrase in the test question (who in (30b)) to be interpreted in either the matrix or the embedded clause, which was the only difference between the two possible answers. For answer (a) to be accepted in the above example, both who and when must have scope in the embedded clause.
At least according to the standard view in generative syntax (following Chomsky 1973), this violates the Superiority condition: who would have to be generated lower than when in the syntactic structure corresponding to the embedded clause interpretation of who in (a). For the answer (b) to be accepted, who must have scope in the matrix clause while when is expected to have scope in the embedded clause; this reading arguably does not yield any syntactic violations. The two readings are illustrated in (31) below.
Following Hawkins and Hattori (2006), we predict that the participants who have successfully acquired the [uwh*] will show sensitivity to Superiority and Subjacency violations, choosing answers that do not involve such violations in the interpretation of the test question. Each condition included four test items. Items in Condition 1 involved no violation of Subjacency or Superiority in the interpretation of the fronted wh-phrase either in the matrix clause or the embedded clause. Items in Condition 2 involved a Superiority violation in the embedded clause interpretation of that wh-phrase (31). In Condition 3, the embedded clause reading was blocked by a Subjacency violation, and in Condition 4, the same reading was blocked by both Subjacency and Superiority violations. Items in C1 were used as a baseline to evaluate whether the L2 participants are sensitive to the scopal ambiguity in English wh-questions. Recall that such ambiguity is something for which these L2ers do not have overt evidence in their L1: in Sinhala wh-questions, each scopal interpretation is associated with a different word order, as we discussed before.

Results

Figure 1 summarizes participants' mean choices of matrix/embedded readings for the fronted wh-word in each condition.
The reader is invited to pay close attention to how the three groups of participants (NS (Native), AP (L2 Advanced Proficiency), and IP (L2 Intermediate Proficiency)) performed in the control condition (C1) and each of the three experimental conditions. As stated earlier, Condition 1 included complex wh-questions in which either the matrix or embedded reading was predicted to be possible for the fronted wh-phrase without any violations of Superiority or Subjacency. These items allowed us to determine whether L2 participants, similar to native speaker controls, are sensitive to the scopal ambiguity in long-distance English wh-interrogatives. Our results show that English monolinguals in these cases had a preference, though a marginal one, for the embedded scope reading (Mean = 0.85, SD = 0.27) over the matrix one (Mean = 0.73, SD = 0.34). Advanced L2ers, in contrast, showed almost no difference in their choices between matrix (Mean = 0.64, SD = 0.34) and embedded readings (Mean = 0.66, SD = 0.32), while the intermediate L2 group displayed a strong preference for the matrix interpretation (Mean = 0.73, SD = 0.30) over the embedded one (Mean = 0.47, SD = 0.36). Despite these differences, all three groups showed that (i) they were sensitive to the scopal ambiguity in long-distance wh-movement, and (ii) they can assign both matrix and embedded readings to the fronted wh-word when there is no movement violation involved. Thus, their performance in this condition provided us with a baseline to evaluate participants' scopal assignment in the other three experimental conditions.
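The per-condition means summarized in Figure 1 amount to simple proportions of accepted readings, averaged first within and then across participants. A minimal illustration in Python (the responses and variable names below are invented, not the study's data):

```python
from statistics import mean

# Hypothetical TVJ responses: 1 = reading accepted, 0 = rejected.
# Each inner list holds one participant's four C1 items (all values invented).
embedded_c1 = [[1, 1, 1, 0], [1, 1, 1, 1], [1, 0, 1, 1]]
matrix_c1 = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 1]]

def condition_mean(responses):
    """Average each participant's item scores, then average across participants."""
    return round(mean(mean(items) for items in responses), 2)

print(condition_mean(embedded_c1), condition_mean(matrix_c1))  # → 0.83 0.67
```

Averaging within participants first (rather than pooling all items) is what makes by-participant comparisons like those below possible.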
We submitted participants' mean choices of embedded/matrix readings to a repeated measures ANOVA, with proficiency as a between-subject factor and condition (C1 to C4) and interpretation site (matrix vs. embedded clause) as within-subject factors. Both by-participant and by-item analyses showed a significant three-way interaction of interpretation site, condition, and proficiency (F1 (6, 12) = 3.91, p < 0.001; F2 (6, 24) = 3.43, p < 0.01), and significant effects of interpretation site (F1 (1, 65) = 14.37, p < 0.001; F2 (1, 12) = 33.82, p < 0.001), condition (F1 (3, 63) = 15.90, p < 0.001; F2 (3, 12) = 8.08, p < 0.001), and proficiency (F1 (2, 65) = 8.78, p < 0.001; F2 (2, 11) = 10.80, p < 0.003). Given that proficiency interacted with the other two factors in question, we conducted separate repeated measures ANOVAs for each participant group, including several post hoc tests (paired t-tests) where necessary. Recall that items in the Superiority condition (C2), unlike those in the baseline condition, offered a different possibility in terms of their scopal interpretation for the fronted wh-word: the embedded reading for the matrix wh-word was predicted to be blocked by a Superiority violation, given standard theoretical accounts. When compared to the baseline condition (Mean = 0.85, SD = 0.27), native speakers' embedded reading in this instance (Mean = 0.54, SD = 0.33) was found to be significantly different, t (30) = 4.42, p < 0.001.
However, this difference was not found for the advanced L2 group, as their mean embedded interpretation in this Condition (Mean = 0.71, SD = 0.31) was not significantly different from their own performance in condition 1 (Mean = 0.66, SD = 0.32, t (13) = −0.50, p > 0.62). The latter was also true for the intermediate group: there was no significant difference between their own embedded interpretation in the baseline condition and the Superiority condition, t (22) = −0.85, p > 0.40. According to these comparisons, only native speakers appeared to be sensitive to a contrast between C1 and C2 in the TVJ experiment. However, this was also affected by the fact that the L2 speakers showed lower preference for the embedded reading in C1, an issue we return to later. Crucially, however, advanced L2ers were not significantly different (p > 0.34) from the native speaker controls in terms of the number of times that they assigned an embedded reading in C2. Test items in C3 were like those in C2 except that the embedded reading for the matrix wh-word in these items was predicted to be blocked by a Subjacency violation, instead of a Superiority violation. In this condition, both L2ers and English monolinguals showed a clear preference for the matrix reading of the first (higher) wh-word. Native controls behaved as predicted, as their performance in this condition significantly differed from their own embedded readings in the baseline condition, t (30) = 7.65, p < 0.001. The same was true for advanced L2ers (t (13) = 5.95, p < 0.001) and intermediate L2ers (t (22) = 4.11, p < 0.001). Furthermore, as far as the performance in this condition is concerned, there was no significant difference between native controls and advanced L2ers (p > 0.49), even though intermediate L2ers were slightly different from native speakers (p < 0.05), in that Intermediate L2 subjects more strongly rejected an embedded reading. 
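The post hoc comparisons reported above are ordinary paired t-tests over per-participant condition means. A minimal pure-Python sketch (the sample values are illustrative only, not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired t statistic and degrees of freedom for two matched samples."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical per-participant mean embedded readings (invented numbers):
baseline = [0.90, 0.80, 1.00, 0.75, 0.85, 0.90, 0.80, 0.95]
subjacency = [0.30, 0.40, 0.25, 0.35, 0.20, 0.40, 0.30, 0.25]
t, df = paired_t(baseline, subjacency)
print(round(t, 2), df)  # a large t with df = n - 1 mirrors the shape of the reported tests
```

With real data one would normally use `scipy.stats.ttest_rel`, which also returns the p-value; the hand-rolled version above only shows what the statistic computes.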
Condition 4, meanwhile, involved items in which the embedded reading for the higher wh-word was predicted to be excluded by both Superiority and Subjacency violations. As we predicted, the embedded reading in this instance for the native control group was also significantly different from their own performance in the baseline condition (t (30) = 4.81, p < 0.001). The same pattern was again observed for advanced L2ers (t (13) = 4.17, p < 0.001), but not for the intermediate group (t (22) = 0.45, p > 0.65). Therefore, only advanced L2ers and native speakers showed strong sensitivity to violations that blocked the embedded reading in this condition.

Interim Discussion

Our results on the TVJ experiment show that these L2 English speakers have successfully acquired the principle of Subjacency, as is evident in their low mean preference for embedded scope readings in C3. Recall that for both advanced and intermediate L2ers, the embedded reading in C3 significantly differed from their own assignment of embedded readings in the no-violation condition. This is also consistent with what was found for the native controls in the comparison between these two conditions. Thus, regarding the Subjacency constraint, both L2 groups showed strong evidence of the acquisition of the uninterpretable feature (uwh*) that drives movement in English wh-questions. Still, if Sinhala/English L2ers have acquired the uwh* in the target L2, one would expect them to show equal sensitivity to Superiority violations, too. However, neither the advanced L2 group nor the intermediate L2 group showed a strong level of sensitivity to Superiority violations in C2: unlike native controls, neither of the L2 groups showed a significant difference in their mean choice of embedded scope between this condition and the baseline condition.
Therefore, at least regarding the Superiority condition, our results are consistent with what Hawkins and Hattori (2006) found for Japanese Speakers of English (JSE) acquiring L2 English: JSE did not block embedded readings that violated Superiority as much as native controls did. However, notice that even native controls in our study showed weaker sensitivity to Superiority violations (Mean = 0.54, SD = 0.33) than to Subjacency violations (Mean = 0.31, SD = 0.23). If native controls were equally sensitive to the Subjacency and Superiority constraints in wh-questions, we would not expect to see a difference in their embedded readings between C2 and C3. However, in our results, this difference proved to be statistically significant, too: t (30) = 3.96, p < 0.001. We argue that this disparity between the sensitivity to the two constraints resulted from the fact that the test items on Superiority that H&H used in their study (and that we replicated in this experiment) only involved Argument-over-Adjunct extractions, which are acceptable to many native speakers of English (Bošković 1997; Lasnik and Saito 1992; Obata 2008). For instance, for different speakers, either the argument or the adjunct could be extracted in a sentence like (32) without yielding an ungrammatical reading. Even though the Superiority Condition, as formulated in Chomsky (1973), would predict only (a) in (32) to be grammatical, Obata (2008) argues that the extraction of the argument (what) over the adjunct (where) in (32b) can be grammatical in English. In Obata's analysis, the argument matches the C head better than the adjunct in terms of the number of features that they share: what carries both case and wh-features while where only carries a wh-feature (Obata also assumes that C is involved in Case feature match/agreement, following Pesetsky and Torrego (2001)). If so, in the following test item on Superiority (Hawkins and Hattori 2006, p.
287), which was modeled in our experiment, too, either answer should in fact be acceptable for an English native speaker:

(33) Who did Sophie's brother warn <who1> [Sophie would telephone <*who2> when]?
Answer 1: He warned Norman that Sophie would phone on Friday.
Answer 2: He warned that Sophie would phone Mrs. Smith on Friday.

In contrast, a clearer Superiority violation is observed when an argument in a lower position in the structure is extracted over an argument occupying a higher position. Due to this difference in grammaticality, a more extensive test of Superiority should include a sample of both kinds of violations. If L2ers, like native speakers, show a difference in their judgments between these two kinds of Superiority violations, that can provide further evidence for their sensitivity to overt wh-movement violations in L2 syntax. We took this into consideration in designing the stimuli for our Experiment 2.

Experiment 2: Grammaticality Judgment Task

This experiment consisted of a scalar Grammaticality Judgment task in which participants used a five-point Likert scale (1: Strongly Agree, 2: Agree, 3: Neither Agree nor Disagree, 4: Disagree, 5: Strongly Disagree) to evaluate the (un)grammaticality of forty-six English sentences presented to them in a random order. Similar to the TVJ task in Experiment 1, the main goal of this experiment was to test the sensitivity of L2ers to Superiority and Subjacency violations associated with wh-interrogatives in English. In addition, this task also tested whether our participants were sensitive to the grammaticality distinction between the two kinds of Superiority violations in English wh-questions, as in (32).

Participants

All the participants who took part in the first experiment participated in this experiment, too.
Materials and Procedure

This experiment included eight test items each on Superiority (Condition 1) and Subjacency violations (Condition 2) and five items on combined Superiority and Subjacency violations (Condition 3) in English wh-questions. The test also included five grammatical counterparts (control items) to each test condition, as in (37), (40) and (42), and 10 fillers (n = 46). The Superiority condition (C1) included violations resulting from Argument-over-Adjunct (AoAJ) extractions (35) (5 items) and Argument-over-Argument (AoA) extractions (36) (3 items). Examples from each condition are listed below, with their predicted grammaticality judgments:

Condition 3: Combined Superiority and Subjacency
(41) *Who did you say when Frank visited?
(42) Who did Jane visit when she went to London?

The test items in all three conditions were created using long-distance wh-extraction that was blocked either by a Superiority violation (C1), a Subjacency violation (C2), or both Superiority and Subjacency violations (C3). Participants, in a paper-and-pencil test, were instructed to read each sentence carefully and indicate to what extent they thought the sentence was grammatically acceptable in English.

Results

In preparation for the statistical analysis, we computed mean scores for each participant as he/she judged the grammaticality of wh-questions in the three conditions. In order to do this, participants' judgments on the five-point scale (strongly agree = 1 to strongly disagree = 5) were averaged. Figure 2 shows the mean choices of the answers for the three participant groups. In the test of Subjacency (C2), both English monolinguals and L2ers performed very similarly in both the test and control conditions, rejecting the test sentences with Subjacency violations.
All three groups are at the higher end of the five-point acceptability scale: English monolinguals (Mean = 4.5, SD = 0.49), advanced L2ers (Mean = 4.5, SD = 0.46), and intermediate L2ers (Mean = 4.1, SD = 0.72) all strongly and consistently rejected question sentences with Subjacency violations in this experiment. A very similar pattern is observed in C3 (Subjacency + Superiority violations) for both L2ers and native controls. Finally, notice that for all three groups, the mean difference between test vs. control items in C1 is not as substantial as what is observed in the other two conditions, when the two types of Superiority conditions are considered together. In C3 (combined Superiority and Subjacency), we found a similar interaction between proficiency and grammaticality, F1 (2, 65) = 13.95, p < 0.001; F2 (2, 3) = 28.88, p < 0.01 (due to how Inter/L2 subjects performed, as further discussed in the next paragraph). Despite this, for each of the three groups, the difference between grammatical and ungrammatical items was significant: Native: t (30) = 21.29, p < 0.001; Adv/L2: t (13) = 9.74, p < 0.001; Inter/L2: t (21) = 12.61, p < 0.001. That is, each group clearly rejected the test items involving Subjacency and Superiority violations, as opposed to the grammatical control items. Further analyses considering both C2 and C3 showed that the interaction between proficiency and grammaticality was significant only in C2 and C3, because the intermediate group performed slightly differently from the other two groups in their judgment of both test and control items. Despite this difference, the intermediate group was also sensitive to the grammaticality distinction in both C2 and C3. Summarizing our results on the three conditions so far, native controls, as predicted, showed sensitivity to the grammaticality distinction in all three conditions. Meanwhile, L2ers were sensitive to this distinction only in C2 (Subjacency) and C3 (combined Subjacency and Superiority).
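The scoring step described above (averaging each participant's five-point judgments and comparing test against control items) can be sketched as follows; all ratings here are invented:

```python
from statistics import mean

# Hypothetical ratings on the paper's scale: 1 = strongly agree ... 5 = strongly disagree.
test_items = [5, 4, 5, 5, 4, 5, 5, 5]     # e.g., items with Subjacency violations
control_items = [1, 2, 1, 1, 2, 1, 1, 1]  # their grammatical counterparts

test_mean, control_mean = mean(test_items), mean(control_items)
difference = test_mean - control_mean  # a large gap indicates sensitivity to the violation
print(test_mean, control_mean, difference)  # → 4.75 1.25 3.5
```

It is this test-vs-control gap, computed per participant and per condition, that the grammaticality analyses above compare across groups.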
However, we return below to the C1 results and show that L2ers were in fact sensitive to a more fine-grained distinction among the test items in that condition.

Discussion and Conclusions

This study aimed at re-evaluating a prediction made by Representational Deficit (RD) accounts, in particular the Interpretability Hypothesis (Hawkins and Hattori 2006; Tsimpli 2003; Prentza 2014; Prentza and Tsimpli 2013; Tsimpli and Dimitrakopoulou 2007), concerning the role of uninterpretable syntactic features in adult L2 grammars. According to this hypothesis, after a critical period (the acquisition of feature specifications for one's L1), L2ers do not have access to the complete inventory of uninterpretable syntactic features made available by UG. Partially following aspects of the experimental design from Hawkins and Hattori's (2006) study with Japanese Speakers of English (JSE), this study investigated the acquisition of the uwh* feature and relevant constraints in English overt wh-movement questions by Sinhala native speakers acquiring L2 English in Sri Lanka. If the predictions made by the RD account in Hawkins and Hattori (2006) were satisfied, the acquisition of the uninterpretable feature (uwh*) that drives movement in English wh-questions would be expected to be substantially difficult or inaccessible for Sinhala native speakers acquiring L2 English, in the same way it was argued to be problematic for Japanese L1-English L2ers in Hawkins and Hattori (2006). This is due to the typological distinction regarding wh-questions between Sinhala and English, on the one hand, and the corresponding similarity between Sinhala and Japanese, on the other, as discussed in the section on the syntax of wh-questions in English and Sinhala.
However, contrary to the predictions made by the RD/Interpretability Hypothesis, converging evidence from the two experiments in this study clearly shows that at least our advanced L2ers have successfully acquired overt wh-movement in English, implying that they have acquired the uninterpretable feature (uwh*) that is argued to trigger this overt movement in English. This is supported by the strong sensitivity of the L2 learners to locality constraints (Subjacency and Superiority) associated with overt wh-movement in the target L2 English grammar. Let us consider evidence from Subjacency violations. According to our discussion of Sinhala, Subjacency is a constraint that does not apply to wh-questions in Sinhala, i.e., in Sinhala, wh-phrases are allowed in a variety of syntactic islands (e.g., Gair 1983). Hence, similar to the Indonesian L1-English L2 group studied by Martohardjono (1993), one can argue that in acquiring the Subjacency constraint in L2 English, Sinhala native speakers are faced with a genuine poverty-of-the-stimulus problem: they would not have access to the uninterpretable uwh* feature in their L1 Sinhala, and the English L2 input does not provide (negative) evidence about the application of the Subjacency and Superiority constraints. The results of our two experiments show that our Sinhala L1-English L2ers have been able to successfully overcome this problem in acquiring overt wh-movement in English that is sensitive to Subjacency violations, implying that they have acquired the new uninterpretable feature specification (uwh*) that drives overt wh-movement. For example, in Experiment 1, the Subjacency constraint (Condition 3) clearly blocked an embedded reading for the displaced wh-phrase in the judgments by both L2 groups and English native speakers. In addition, the L2 speakers' performance in this condition matched what was observed for the native controls.
Further, in Experiment 2, both L2 groups, like English native speakers, showed a significant difference between ungrammatical (test) items and grammatical (control) items in the Subjacency condition. To the extent that parameter settings are rooted in the feature specification of different syntactic categories, further evidence for such parameter resetting, involving the acquisition of the uwh* feature in the L2 grammar, comes from the L2ers' sensitivity to Superiority violations. As supporting evidence, results from our Grammaticality Judgment task (Experiment 2) revealed that advanced L2ers more strongly rejected argument-over-argument than argument-over-adjunct extractions in Superiority violations in English, showing a contrast equivalent to that of English native speakers. Notice that even intermediate L2ers in this study show some evidence of successful acquisition of the relevant uninterpretable syntactic feature in English wh-questions, although they show weaker sensitivity to Superiority violations, unlike advanced L2ers and native controls. In both experiments, Inter/L2 subjects at least show a strong level of sensitivity to Subjacency violations. This could be evidence that their interlanguage grammar is still under development (see Ellis 1985; Long 1992; Selinker 1972, for discussion of this phenomenon). However, as rightly pointed out by Epstein et al. (1996), this does not necessarily imply that their grammar fails to be UG-constrained: "Although L2 learners may lag behind native speakers with regard to accuracy rates, their judgments of wh-structures may still derive from their knowledge of UG principles and conform to a pattern predicted by UG" (p. 688). Lardiere (2008) also argues that 'variability' or 'divergence' from the target norm is not necessarily a reliable indication that L2ers have failed to reset a parameter in their interlanguage grammars.
Hence, more converging evidence is needed before further conclusions can be made regarding the intermediate L2er's knowledge state in their target L2 English syntax. One argument that has commonly been made in favor of Representational Deficit (RD) accounts is that L2ers, even those who seem to match native speakers in performance, do not truly have native-like underlying mental representations, i.e., L2ers' mental grammar for the target language would be impaired in the functional domain due to their restricted access to UG. Hence, in accounting for the target input, they would use alternative strategies borrowed from their L1 grammatical system. For instance, Hawkins and Chan (1997), in their study of Chinese L1/English L2 speakers in Hong Kong, argue that even the advanced L2ers in their study analyzed English relative clauses as non-movement structures derived through a 'resumptive strategy' borrowed from their L1. Tsimpli and Dimitrakopoulou (2007) made a similar argument to account for the non-target-like performance of the Greek L1/English L2 learners that they studied in Greece. Meanwhile, Hawkins and Hattori (2006), following Miyamoto and Iijima (2003), argued that their Japanese L1/English L2 speakers have replaced English wh-movement with scrambling, an operation found only in their L1 grammar. Borrowing Bley-Vroman's (2009) term, let us call these 'patching strategies'. Given these common findings with L2ers in different contexts (e.g., Hawkins and Chan 1997;Hawkins and Hattori 2006;Kong 2017;Prentza 2014;Tsimpli and Dimitrakopoulou 2007), one could consider whether Sinhala/English L2ers in this study are also employing a 'patching strategy' to analyze wh-dependencies in English. For the sake of argument, let us assume that these L2ers would not have reconfigured the relevant feature specification in their interlanguage grammars. 
One possibility, as suggested by H&H for Japanese natives, is that they are analyzing English wh-movement as scrambling, an operation available in their L1. Given the superficial similarity between the two kinds of operations in Sinhala, this would be a possibility. However, if the L1 Sinhala/L2 English learners studied here had transferred scrambling from their L1 syntax to analyze the L2 input (at least as predicted by Schwartz and Sprouse 1996 for early stages of L2 development), we would not expect them to be sensitive to Superiority violations in English. The reason, as we discussed in the section on Sinhala wh-questions, is that Sinhala wh-scrambling, unlike wh-movement in English, is not subject to Superiority violations. The insensitivity to Superiority violations is a main argument used by H&H to support the proposal that Japanese native speakers have not acquired the relevant feature in English. However, contra this prediction, we found in the current study that our advanced L2ers could even distinguish between the two kinds of Superiority violations in English, in the additional analysis carried out as part of Experiment 2. The assumption that these L2ers analyze wh-movement as a scrambling operation is even more problematic regarding the Subjacency constraint. As we discussed in the section on Sinhala wh-questions, scrambling, unlike wh-movement, does not show island effects in Sinhala. However, even intermediate English L2ers show strong sensitivity to island constraints. Given this, there is evidence from this study against the view that L2ers, especially at the advanced level, would be using a 'patching strategy' in their acquisition of the uninterpretable feature specification of English wh-questions that is different from their L1 Sinhala. Furthermore, the Subjacency and Superiority constraints are also very unlikely to have been explicitly taught in ESL classrooms. 
In addition, they cannot be inferred only from the input, which would require exposure to negative data (ungrammatical structures). Hence, they must be part of a learner's acquired unconscious knowledge of the L2 syntax (Campos-Dintrans et al. 2014;Epstein et al. 1996). In summary, these results indicate native-like underlying mental representations are indeed possible in the domain of uninterpretable syntactic features in L2 syntax, a challenge to RD accounts. Our results with Sinhala/English L2ers are also consistent with other recent studies that report the successful acquisition of new functional features in various L2 contexts (e.g., Campos-Dintrans et al. 2014). Finally, these results can reasonably be interpreted as additional evidence for Full Access to UG principles and constraints in adult L2 syntax (e.g., Epstein et al. 1996;Schwartz and Sprouse 1996;White 2003). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data are available on request from the corresponding author.
Precursor of Pro-apoptotic Cytokine Modulates Aminoacylation Activity of tRNA Synthetase*

Endothelial monocyte activating polypeptide II (EMAPII) is a cytokine that is specifically induced by apoptosis. Its precursor (pro-EMAPII) has been suggested to be identical to p43, which is associated with the multi-tRNA synthetase complex. Herein, we have demonstrated that the N-terminal domain of pro-EMAPII interacts with the N-terminal extension of human cytoplasmic arginyl-tRNA synthetase (RRS) using genetic and immunoprecipitation analyses. Aminoacylation activity of RRS was enhanced about 2.5-fold by the interaction with pro-EMAPII but not with its N- or C-terminal domains alone. The N-terminal extension of RRS was not required for enzyme activity but did mediate activity stimulation by pro-EMAPII. Pro-EMAPII reduced the apparent Km of RRS for tRNA, whereas the kcat value remained unchanged. Therefore, the precursor of EMAPII is a multi-functional protein that assists aminoacylation in normal cells and releases the functional cytokine upon apoptosis.

Aminoacyl-tRNA synthetases (ARSs) catalyze ligation of their cognate amino acids to specific tRNAs. Although the basic architecture of the core domain is well conserved among ARSs, unique peptide extensions have been found at the N- or C-terminal ends of metazoan enzymes (1-3). Although these extensions have been thought to be involved in heterologous molecular interactions, their functional significance is not yet understood. A macromolecular protein complex consisting of at least nine different ARSs has been found in higher eukaryotes (1-3). This multi-ARS complex also contains three nonsynthetase components, p18, p38, and p43, whose functions are not clear (4-7). Among these nonsynthetase components, p43 has been proposed to be a precursor of a tumor-specific cytokine, endothelial monocyte-activating polypeptide II (EMAPII), based on over 80% sequence identity between the two proteins (6).
EMAPII was originally identified in the culture medium of murine fibrosarcoma cells induced by methylcholanthrene A (8). It triggers an acute inflammatory response (9,10) and is involved in development-related apoptosis (11). The precursor for EMAPII (pro-EMAPII) is processed at the Asp residue of the ASTD/S sequence to release the C-terminal cytokine domain of 23 kDa (11). Its C-terminal domain shares homology with the C-terminal parts of methionyl-tRNA synthetases of prokaryotes, archaea, and nematode, and also a yeast protein, Arc1p/G4p, which interacts with methionyl- and glutamyl-tRNA synthetases. The N-terminal domain of pro-EMAPII does not show homology to any known proteins, and its function has not been understood. EMAPII is expressed in a wide range of cell lines and normal tissues (12), and its mRNA level is unchanged during apoptosis (11) although its production is induced by apoptosis. The present work was designed to address whether pro-EMAPII is identical to p43 and to understand its function in the normal cell. The results showed that pro-EMAPII is associated with the N-terminal extension of human arginyl-tRNA synthetase (RRS), facilitating the aminoacylation reaction.

Expression and Purification of Recombinant tRNA Synthetases and Pro-EMAPII

Human pro-EMAPII is genetically separated into the N- and C-terminal domains by proteolytic cleavage at Asp147. The cDNA encoding the full-length pro-EMAPII was isolated from pM338 by NdeI and XhoI digestion and then used as a template to separately amplify the DNA encoding its N- and C-terminal domains by PCR using the primer pairs R1EF/S1ENB and R1ECF/S1EB (Table I). The PCR products were digested and cloned into pET28a using EcoRI and SalI. The DNA encoding the 72-amino acid N-terminal extension of human RRS was also amplified by PCR using the primers R1RNF and S1RNB (Table I) and cloned into the EcoRI and SalI sites of pET28a.
The resulting clones were transformed into Escherichia coli strain BL21-DE3, and the inserted genes were induced with 0.1 mM IPTG. The cells expressing the recombinant proteins were harvested, resuspended in 20 mM KH2PO4, 500 mM NaCl (pH 7.8), and 2 mM 2-mercaptoethanol, and then lysed by ultrasonication. After centrifugation of the lysate at 25,000 × g, the supernatants were recovered and the recombinant proteins containing a 6-histidine tag were isolated by nickel affinity chromatography according to the instructions of the manufacturer (Invitrogen). The cDNAs encoding the full-length and N-terminal 72-amino acid truncated (ΔN72) human RRS proteins were also amplified by PCR with the primer pairs R1RNF/S1RB and R1RTN/S1RB, respectively (Table I). The resulting PCR products were cloned into pGEX4T-1 using the EcoRI and SalI sites to express them as glutathione S-transferase (GST) fusion proteins. Protein extracts were prepared as described above, and the GST fusion proteins were purified by glutathione affinity chromatography. The GST tag was then removed by thrombin cleavage, and the RRS proteins were further purified according to the protocol of the manufacturer (Amersham Pharmacia Biotech). The plasmid pM109 containing the full-length human lysyl-tRNA synthetase (KRS) fused to a 6-histidine tag (13) was used to express the protein. The His-KRS fusion protein was purified using nickel affinity chromatography (CLONTECH).

* This work was supported by a grant from National Creative Research Initiatives and Biotech 2000, 97-N1-06-01-A06 of the Ministry of Science and Technology, Korea (to S. K.) and by a grant from the Ministry of Education, Science and Culture of Japan (to K. S.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Preparation of Polyclonal Rabbit Antibody Specific to Human Pro-EMAPII

The purified recombinant human pro-EMAPII (500 μg) was mixed with Freund's complete adjuvant at a 1:1 volume ratio and then injected into two New Zealand White rabbits. Booster injections were conducted three times at 1-week intervals using the same amount of the protein mixed with the incomplete adjuvant at a 1:1 ratio. The rabbits were sacrificed by cardiac puncture, and the antiserum was obtained. The antibody was purified by protein A column chromatography. Specificity and titer were determined by Western blotting.

Immunoprecipitation

The purified N-terminal extension of human RRS (10 μg) was mixed with each of the full-length, N- or C-terminal domains of pro-EMAPII (10 μg each) at 4°C overnight. The polyclonal rabbit antibody (20 μg) raised against human pro-EMAPII was then added to each of the mixtures and incubated on ice for 4 h. The protein A-agarose suspension in 20 μl of 50 mM Tris-HCl (pH 7.5) and 25 mM NaCl was also added, and incubation was continued at 4°C for 5 h. The mixture was centrifuged, and the agarose pellet was washed three times with 400 μl of 50 mM Tris-HCl (pH 7.5) containing 25 mM NaCl and 0.01% Triton X-100. The agarose was treated with 50 mM Tris-HCl (pH 6.8) containing 100 mM dithiothreitol, 2% sodium dodecyl sulfate, 0.2% bromphenol blue, and 10% glycerol, and the solution was then boiled for 5 min to elute the bound proteins. After centrifugation, the supernatant was loaded onto a 12% SDS-polyacrylamide gel. The proteins were separated by electrophoresis and detected by Coomassie Blue staining.

Two-hybrid Assay

Human proteins interacting with human pro-EMAPII were screened by a yeast two-hybrid system (14). The cDNA encoding the full-length pro-EMAPII was isolated by PCR using the primers R1EF and S1EB (Table I) and ligated next to the gene for LexA using the EcoRI and SalI sites.
The plasmid was transformed into yeast strain EGY48 (MAT, his3, trp1, ura3-52, leu2::pLeu2-LexAop6/pSH18-34 (LexAop-lacZ)). A human fetal brain cDNA library, in which the proteins are expressed as fusion proteins with the B42 transcriptional activator (CLONTECH), was used to screen for proteins interacting with LexA-pro-EMAPII. The plasmids containing human cDNAs were transformed into EGY48 expressing LexA-pro-EMAPII. Interactions were detected by the induction of the reporter genes, LEU2 and LacZ, which resulted in cell growth on leucine-depleted yeast synthetic media containing 2% galactose and also formation of blue colonies on the yeast synthetic media containing 0.2 mM X-gal, 2% galactose, and 2% raffinose. The cDNAs encoding the N- and C-terminal domains of pro-EMAPII were cleaved from the histidine tag construct using EcoRI and SalI and religated into the pLexA vector using the same sites.

Aminoacylation Assay

Aminoacylation activity of the purified human RRS was determined as described previously (15). The reaction mixture contained 125 mM Tris acetate (pH 7.4), 0.2 mg/ml bovine serum albumin, 5 mM ATP, 4 mM EDTA, 50 mM MgCl2, and 0.1 μCi/μl [3H]arginine. Aminoacylation of human KRS was carried out in a reaction mixture containing 50 mM HEPES (pH 7.5), 0.1 mg/ml BSA, 20 mM 2-mercaptoethanol, 4 mM ATP, and 0.12 μCi/μl [3H]lysine. Human RRS and KRS were pre-incubated on ice with the full-length, N- or C-terminal domain of pro-EMAPII for 5 min and then added to their respective reaction mixtures at a concentration of 0.14 nM. The reaction was initiated by adding bovine liver total tRNA (0.34 μM). Reaction samples were taken at 1-min intervals and spotted on filter discs presoaked with 5% trichloroacetic acid. After 1 min, the filter discs were added to ice-cold 5% trichloroacetic acid and washed three times with fresh 5% trichloroacetic acid at 4°C. The radioactivity adsorbed to the filters was quantitated by liquid scintillation counting.
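A time-course readout like the one described above (counts on filters taken at 1-min intervals) is typically converted to an initial velocity by a linear fit of incorporated counts against time. A minimal sketch; the cpm values and the resulting ratio below are invented for illustration, not the paper's data:

```python
# Hypothetical illustration of turning an aminoacylation time course into an
# initial rate: ordinary least-squares slope of incorporated counts (cpm)
# versus time. The numbers are made up, not measurements from this study.

def initial_rate(times_min, cpm):
    """Least-squares slope (cpm per minute) of cpm against time."""
    n = len(times_min)
    mt = sum(times_min) / n
    mc = sum(cpm) / n
    num = sum((t - mt) * (c - mc) for t, c in zip(times_min, cpm))
    den = sum((t - mt) ** 2 for t in times_min)
    return num / den

times = [1, 2, 3, 4, 5]                   # samples taken at 1-min intervals
cpm_rrs = [210, 405, 610, 790, 1005]      # hypothetical: RRS alone
cpm_stim = [505, 1010, 1490, 2020, 2500]  # hypothetical: RRS + pro-EMAPII

fold = initial_rate(times, cpm_stim) / initial_rate(times, cpm_rrs)
```

With these invented numbers the fold-stimulation comes out near the 2.5-fold enhancement reported in the text.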
Reactions were also carried out at different concentrations of pro-EMAPII for kinetic analysis.

Screening of Proteins Interacting with Human Pro-EMAPII

To investigate the function of pro-EMAPII and its relationship to p43, we screened for protein(s) interacting with human pro-EMAPII using a yeast two-hybrid system (16,17). The 312-amino acid polypeptide of human pro-EMAPII was fused to LexA (DNA-binding domain), and this fusion protein was used as a bait. Human proteins fused to B42 (transcriptional activator) were screened, and interaction between the two fusion proteins was detected by the induction of the reporter genes, LEU2 and LacZ, in a yeast host strain (14). Approximately 300,000 cDNA clones of human fetal brain were screened to identify proteins interacting with pro-EMAPII. The N-terminal 58-amino acid region of human RRS was selected as one of the six positive clones interacting with pro-EMAPII (data not shown). In the present work, we focused on the interaction between pro-EMAPII and RRS. The N-terminal 72-amino acid peptide region is only found in human (18) and hamster RRS proteins (19). We conducted deletion analysis to determine the peptide regions of pro-EMAPII and RRS responsible for the interaction. The peptides from Gln15 to Tyr53 and from Ser38 to Asn72 were able to interact with pro-EMAPII, suggesting that the residues from Ser38 to Tyr53, the region shared by both fragments, are responsible for the interaction (Fig. 1). The N-terminal domain of pro-EMAPII showed the interaction with RRS, but its C-terminal cytokine domain did not (Fig. 1). Interaction between the N-terminal extension of RRS and pro-EMAPII was also tested by co-immunoprecipitation. The full-length, N- and C-terminal domains of pro-EMAPII and the 72-amino acid N-terminal extension of RRS were all expressed as His-tag fusion proteins and were purified by nickel affinity chromatography (Fig. 2).
The purified N-terminal peptide of RRS was mixed with each of the isolated full-length, N- and C-terminal pro-EMAPII in separate reactions. Polyclonal rabbit antibody raised against pro-EMAPII was then added to the mixture and precipitated with protein A-agarose. The proteins in the precipitate were dissolved and separated on an SDS-polyacrylamide gel. The N-terminal peptide of RRS was coprecipitated with the full-length or N-terminal domains of pro-EMAPII but not with its C-terminal domain (Fig. 2). These results further confirmed that the N-terminal domain of pro-EMAPII interacts with the N-terminal extension of RRS, as initially identified by the two-hybrid analysis (Fig. 1).

Pro-EMAPII Stimulates the Catalytic Activity of RRS

The functional significance of the interaction between RRS and pro-EMAPII was further investigated. We tested whether the aminoacylation activity of RRS was affected by interaction with pro-EMAPII. The full-length and N-terminal 72-amino acid truncated (ΔN72) RRS were expressed as GST-fusion proteins. The fused GST was removed by proteolytic cleavage, and the purified full-length and N-terminal truncated RRS proteins were used for the enzyme assay (Fig. 3). The reaction catalyzed by tRNA synthetases proceeds in two steps. The first step is activation of the amino acid by reaction with ATP, and the second step involves transfer of the activated amino acid to the cognate tRNAs. Aminoacylation activity of the full-length RRS was enhanced approximately 2.5-fold in the presence of pro-EMAPII (Fig. 4, left bars). Since an arginine-dependent [32P]pyrophosphate-ATP exchange assay showed that the adenylation step of RRS was not affected by addition of pro-EMAPII (data not shown), the activity enhancement probably results from the second step of the reaction. Activity stimulation was not detected when the separated N- or C-terminal domain of pro-EMAPII was added, indicating that the full-length pro-EMAPII is necessary for the effect (Fig. 4, left bars).
The truncated RRS retained aminoacylation activity comparable with the wild-type enzyme, suggesting that the N-terminal extension is not essential for the enzyme activity (Fig. 4, middle bars). However, the activity of this mutant was not increased by pro-EMAPII, indicating that interaction of pro-EMAPII with the N-terminal extension of RRS is essential for the stimulatory effect (Fig. 4, middle bars). To investigate whether the stimulatory effect of pro-EMAPII is specific for RRS, we employed human lysyl-tRNA synthetase (KRS), which does not appear to interact with p43 (7). The aminoacylation activities of KRS were measured in the absence and presence of pro-EMAPII. KRS activity was not affected by the addition of pro-EMAPII, suggesting that activity stimulation is specific to RRS (Fig. 4, right bars). Kinetic analyses on the aminoacylation of RRS were carried out at different concentrations of pro-EMAPII to understand how pro-EMAPII enhances the RRS activity. The activity enhancement reached a maximum at a 2-fold molar excess of pro-EMAPII to RRS, and further addition of pro-EMAPII resulted in a gradual decrease in the reaction rate (Fig. 5, left panel). A Lineweaver-Burk plot of the reaction showed that the apparent K_m of RRS with respect to tRNA was reduced by the addition of pro-EMAPII, whereas its k_cat value was not changed (Fig. 5, right panel). Excess pro-EMAPII probably binds to the tRNA substrate and lowers its effective concentration.

[Fig. 4 legend] The activity of the full-length KRS was also determined in the absence and presence of the full-length pro-EMAPII. The activities of the full-length RRS without pro-EMAPII were normalized to 100%, and other activities were compared accordingly. The KRS activities with and without pro-EMAPII were also compared. The experiments were repeated three times. F, N, and C represent the full-length, N- and C-terminal domains of pro-EMAPII, respectively.

[Fig. 2 legend] Immunoprecipitation of pro-EMAPII and RRS. The 72-amino acid N-terminal extension of RRS and the full-length, N- and C-terminal domains of pro-EMAPII were expressed as His-tag fusion proteins and purified by nickel affinity chromatography. Each of the pro-EMAPII derivatives was mixed with the RRS peptide. Subsequently, anti-pro-EMAPII antibody was added to each mixture, and protein complexes were precipitated with protein A-agarose. The precipitated proteins were separated by SDS-polyacrylamide gel electrophoresis and detected by Coomassie Blue staining. IgG (heavy chain) is shown as marked, and protein sizes are indicated in kDa.

DISCUSSION

Pro-EMAPII (8) and p43 (6) have been independently isolated from different organisms. In this work, we found that pro-EMAPII interacts with RRS (Figs. 1 and 2). Previous cross-linking and genetic experiments showed the linkage of p43 and RRS (7,20). Thus, all of these results support that p43 and pro-EMAPII are responsible for similar functions within the cell. The full-length pro-EMAPII was required for the activity enhancement of RRS, although the N-terminal domain of pro-EMAPII was sufficient for the direct interaction with RRS (Fig. 4). It was previously shown that the C-terminal domain of pro-EMAPII contains tRNA binding activity (6). The kinetic analyses showed that pro-EMAPII affected only the apparent K_m value to tRNA and not the k_cat of the enzyme (Fig. 5). Probably, tRNA recruited to the C-terminal domain of pro-EMAPII is delivered to the active site of RRS. Although the activity of RRS was enhanced about 2.5-fold by pro-EMAPII under our experimental conditions, its effect may be more significant in vivo because RRS present in the multi-protein complex would have limited accessibility to tRNA. Mammalian RRS exists in two forms differing by the N-terminal extension (15). The larger RRS containing the N-terminal extension is found in the multi-synthetase complex, whereas the smaller RRS exists in a free form (18,19).
The complex-associated larger RRS showed a 7-fold higher K_m for the tRNA substrate than the complex-free RRS, whereas other kinetic properties were similar (15). Perhaps the higher K_m value of the complex-associated RRS for the tRNA substrate requires compensation by an active delivery of the tRNA substrate. In the case of RRS, the delivery of tRNA appears to be mediated by a trans-acting factor, pro-EMAPII. This mechanism is also reminiscent of yeast Arc1p, which forms a complex with methionyl-tRNA synthetase and stimulates its aminoacylation activity (21). ARSs have developed different ways to modulate their catalytic activities and the efficiency of protein synthesis. For example, the N-terminal extension of rat aspartyl-tRNA synthetase facilitates the release of aminoacylated tRNA to elongation factor (22,23), and the aminoacylation reaction of rabbit valyl-tRNA synthetase is enhanced by interaction with elongation factor EF-1H (24). The N-terminal extension of yeast glutaminyl-tRNA synthetase promotes specific recognition of its cognate tRNA (25), and the C-terminal appendix of E. coli methionyl-tRNA synthetase helps to dock its cognate tRNA to the active site (26). Whereas all of these functions are exerted by the peptide extensions connected in cis to the catalytic domains of ARSs, yeast Arc1p and mammalian pro-EMAPII are trans-acting factors. These factors may have more functional flexibility than the cis-acting peptide extensions because they can easily dissociate from the ARS and interact with cellular molecules for other physiological roles. Human tyrosyl-tRNA synthetase was recently shown to be secreted from apoptotic tumor cells and is cleaved to release the two distinct cytokine domains (27). Interestingly, the released C-terminal domain is homologous to EMAPII. These results along with our data suggest that protein synthesis and apoptosis are functionally coordinated via novel domains covalently or noncovalently linked to ARSs.
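The kinetic conclusion drawn above (pro-EMAPII lowers the apparent K_m of RRS for tRNA while k_cat is unchanged) can be illustrated with the Michaelis-Menten rate law v = Vmax·[S]/(K_m + [S]): lowering K_m raises velocity at sub-saturating tRNA but not at saturation. All parameter values below are hypothetical, chosen only to show the shape of the effect, not the paper's measurements:

```python
import math  # not strictly needed; kept for clarity if exp/log variants are added

# Illustrative Michaelis-Menten kinetics: a factor that lowers the apparent
# Km while leaving kcat (hence Vmax at fixed [E]) unchanged raises velocity
# most at low [tRNA]. All numbers are hypothetical.

def mm_velocity(s, vmax, km):
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

vmax = 1.0      # kcat * [E], arbitrary units; unchanged by pro-EMAPII
km_free = 2.0   # hypothetical apparent Km of RRS alone
km_stim = 0.8   # hypothetical lower apparent Km with pro-EMAPII

# At sub-saturating tRNA the stimulation is large...
s_low = 0.5
fold_low = mm_velocity(s_low, vmax, km_stim) / mm_velocity(s_low, vmax, km_free)

# ...but at saturating tRNA both rates converge to Vmax (same kcat).
s_high = 100.0
fold_high = mm_velocity(s_high, vmax, km_stim) / mm_velocity(s_high, vmax, km_free)
```

On a Lineweaver-Burk (double-reciprocal) plot, this change shows up as an unchanged 1/Vmax intercept with a shallower slope, which is the pattern reported in Fig. 5.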
Mixing times of critical 2D Potts models

We study dynamical aspects of the $q$-state Potts model on an $n\times n$ box at its critical $\beta_c(q)$. Heat-bath Glauber dynamics and cluster dynamics such as Swendsen--Wang (that circumvent low-temperature bottlenecks) are all expected to undergo "critical slowdowns" in the presence of periodic boundary conditions: the inverse spectral gap, which in the subcritical regime is $O(1)$, should at criticality be polynomial in $n$ for $1<q\leq 4$ and exponential in $n$ for $q>4$, in accordance with the predicted discontinuous phase transition. This was confirmed for $q=2$ (the Ising model) by the second author and Sly, and for sufficiently large $q$ by Borgs et al. Here we show that the following holds for the critical Potts model on the torus: for $q=3$, the inverse gap of Glauber dynamics is $n^{O(1)}$; for $q=4$, it is at most $n^{O(\log n)}$; and for every $q>4$ in the phase-coexistence regime, the inverse gaps of both Glauber dynamics and Swendsen--Wang dynamics are exponential in $n$. For free or monochromatic boundary conditions and large $q$, we show that the dynamics at criticality is faster than on the torus (unlike the Ising model where free/periodic boundary conditions induce similar dynamical behavior at all temperatures): the inverse gap of Swendsen--Wang dynamics is $\exp(n^{o(1)})$.

Introduction

The q-state Potts model on a graph G at inverse-temperature β > 0 is the distribution µ_{G,β,q} over colorings of the vertices of G with q colors, in which the probability of a configuration σ is proportional to exp[βH(σ)], with H(σ) counting the number of pairs of adjacent vertices that have the same color (see §2.1). Generalizing the Ising model (the case q = 2), it is one of the most studied models in Mathematical Physics (cf. [50]), with particular interest in its phase transition on Z^d (d ≥ 2) at the critical β = β_c.
The random cluster (FK) model on a graph G with parameters 0 < p < 1 and q > 0 is the distribution π_{G,p,q} over sets of edges of G, where the probability of a configuration ω with m edges and k connected components is proportional to [p/(1−p)]^m q^k (see §2.2). It generalizes percolation (q = 1) and electrical networks/uniform-spanning-trees (q ↓ 0), and corresponds at integer q ≥ 2 to the Potts model via the Edwards-Sokal coupling; e.g., one may produce σ ∼ µ_{G,β,q} by first sampling ω ∼ π_{G,p,q} for p = 1 − e^{−β}, then assigning an i.i.d. color to the vertices of each connected component of ω. As such, extensively studied in its own right, the random cluster representation has been an important tool in the analysis of Ising and Potts models (see [25] for further details). On Z^2 with q ≥ 1, significant progress has been made in the study of these models and their rich behavior at the phase transition point p_c = √q/(1+√q) (and β_c = log(1+√q)). It is widely believed (see [25, Conj. 6.32 and (6.33)]) that the phase transition would be continuous (second-order) if 1 ≤ q ≤ 4 and discontinuous (first-order) for q > 4: the latter has been proved [28-30] for q > 24.78 (see also [25, Thm. 6.35]) and supported by exact calculations [2] for all q > 4; the former was very recently proved [17] through an analysis of crossing probabilities in rectangles under various boundary conditions. Here we build on this recent work to study the dynamical behavior of the critical planar Potts and FK models in the three regimes: 1 < q < 4, the extremal q = 4, and q > 4. Heat-bath Glauber dynamics is a local Markov chain, introduced in [23], that models the evolution of a spin system as well as provides a natural way of sampling from it. For the Potts model, the dynamics updates each vertex via an i.i.d.
rate-1 Poisson process, where its new value is sampled according to µ_{G,β,q} conditioned on the values of all other vertices (this dynamics for the FK model is similarly defined via single-bond updates). Swendsen-Wang dynamics is a Markov chain on Potts configurations, introduced in [47], aimed at overcoming bottlenecks in the energy landscape (thus providing a potentially faster sampler compared to Glauber dynamics) via global cluster flips: the dynamics moves from a Potts configuration σ to a compatible FK configuration ω via the Edwards-Sokal coupling, then to a new Potts configuration σ' compatible with ω. Chayes-Machta dynamics [10] is a closely related Markov chain on FK configurations, analogous to Swendsen-Wang for integer q, yet defined for any real q ≥ 1 (see §2.4). The spectral gap of a discrete-time Markov chain, denoted gap, is 1 − λ, where λ is the largest nontrivial eigenvalue of the transition kernel, and for a continuous-time chain it is the gap in the spectrum of its generator. It serves as an important gauge for the rate of convergence of the chain to equilibrium, as it governs its L^2-mixing time. For the above mentioned dynamics on the Potts/FK models, the inverse spectral gap is expected to feature a well-documented phenomenon known as critical slowdown [26,31]; in what follows we restrict our attention to Z^2, though an analogous picture is expected in higher dimensions as well as on other geometries (see, e.g., [35] for further details). Glauber dynamics for the Potts model on an n × n torus should have gap^{-1} transition from O(1) at high temperature (β < β_c) to exp(cn) at low temperatures (β > β_c), through either a critical power-law when 1 < q ≤ 4 or an order of exp(cn) when q > 4 (in accordance with the first-order phase transition believed to occur at q > 4). Swendsen-Wang/Chayes-Machta dynamics should, by design, have gap^{-1} = O(1) both at high and low temperatures, yet should also exhibit a critical slowdown at β = β_c.
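For intuition on the definition of the spectral gap, it can be computed exactly for a toy instance of heat-bath Glauber dynamics by enumerating the state space. This is a minimal illustration (a single edge with q = 3 at β_c(3) = log(1+√3)), nothing like the n × n systems studied here, since the construction is exponential in the number of vertices:

```python
import itertools
import math
import numpy as np

def heat_bath_matrix(n, edges, q, beta):
    """Transition matrix of single-site heat-bath Glauber dynamics for the
    q-state Potts model on a small graph, built over the full state space
    (feasible only for tiny n; purely illustrative)."""
    states = list(itertools.product(range(q), repeat=n))
    index = {s: i for i, s in enumerate(states)}
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    P = np.zeros((len(states), len(states)))
    for s in states:
        for v in range(n):  # pick a uniform vertex to update
            # heat-bath weights: exp(beta * #{agreeing neighbors}) per color
            w = [math.exp(beta * sum(s[u] == c for u in nbrs[v])) for c in range(q)]
            z = sum(w)
            for c in range(q):
                t = s[:v] + (c,) + s[v + 1:]
                P[index[s], index[t]] += w[c] / (n * z)
    return P

# Toy example: q = 3 Potts on a single edge at beta_c(3) = log(1 + sqrt(3)).
P = heat_bath_matrix(2, [(0, 1)], q=3, beta=math.log(1 + math.sqrt(3)))
eigs = np.sort(np.linalg.eigvals(P).real)  # reversible chain: real spectrum
gap = 1.0 - eigs[-2]                       # 1 minus largest nontrivial eigenvalue
```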
While this picture for the Potts model has been essentially verified for Glauber dynamics for all β < β_c and Swendsen-Wang for all β ≠ β_c (see §1.1), the case β = β_c has largely evaded rigorous analysis, with two exceptions: for q = 2, a polynomial upper bound on gap^{-1} of Glauber dynamics for the Ising model was given in [35]; and for sufficiently large q, Borgs et al. [6] showed in 1999 that the Swendsen-Wang dynamics has gap^{-1} = exp[n^{1−o(1)}] (thereafter improved to log gap^{-1} ≍ n in [7]). Crucial to the analysis of the dynamics for q = 2 were Russo-Seymour-Welsh (RSW) estimates for the corresponding FK model (which state that on n × m rectangles with uniformly bounded aspect ratios and free boundary conditions, crossing probabilities are uniformly bounded away from 0), obtained by [16] using the discrete holomorphic observable framework of Smirnov [46]. The framework of [46] is further applicable to the critical Potts model for q = 3 (where the model is expected to have a conformally invariant scaling limit), and the above RSW-type estimates for the FK-Ising model have been recently extended by Duminil-Copin, Sidoravicius and Tassion [17] to this case; this allows one to similarly extend the dynamical analysis of [35] to q = 3. However, at q = 4, these RSW estimates are no longer expected to hold, and instead crossing probabilities are believed to be highly sensitive to boundary conditions, thus resulting in a quasi-polynomial (rather than a polynomial) upper bound on mixing. The following theorems demonstrate the change in the critical slowdown of the Potts and random cluster models on (Z/nZ)^2 between these different regimes of q.

Theorem 2. Let q > 4 be such that the critical FK model on Z^2 has two distinct Gibbs measures π^1_{Z^2,q} ≠ π^0_{Z^2,q}. There exists c = c(q) > 0 such that Swendsen-Wang dynamics and Glauber dynamics for the critical q-state Potts model on (Z/nZ)^2 satisfy

gap^{-1} ≥ exp(cn). (1.3)

The same holds for Glauber and Chayes-Machta dynamics for the critical FK model.

Remark 1.1. Since the initial posting of this paper, Duminil-Copin et al. [15] proved the discontinuity of the FK phase transition for all q > 4 on Z^2; thus, the bound (1.3) from Theorem 2 holds for the critical Potts and FK models on (Z/nZ)^2 for all q > 4. Furthermore, the Glauber dynamics upper bounds in Theorem 1 also hold for boxes with arbitrary (as opposed to periodic) Potts boundary conditions (see Corollary 3.2).

In a companion paper [22], for a wider class of boundary conditions, a matching upper bound to (1.2) is established for Glauber dynamics for the FK model at every q ∈ (1,4]. On the other hand, for q > 4, one does not expect Swendsen-Wang and Glauber dynamics to be slow under every boundary condition; e.g., monochromatic boundary conditions should destabilize all Gibbs states but one, inducing faster mixing, as in the case of the low temperature Ising model with plus boundary conditions (cf. [37, §6]). If we naively followed the intuition from the low temperature Ising model, free boundary conditions (where plus and minus phases are both metastable, so gap^{-1} ≥ exp(cn)) might be expected to induce the same (slow) critical mixing behavior as in the torus. However, this is not the case (see Fig. 2), as the following theorem demonstrates.

Theorem 3. For q sufficiently large, Swendsen-Wang dynamics for the critical q-state Potts model on an n × n box with free boundary conditions satisfies

gap^{-1} ≤ exp(n^{o(1)}). (1.4)

The same holds for Glauber and Chayes-Machta dynamics for the critical FK model. The estimate (1.4) holds also for monochromatic Potts boundary conditions (which correspond to wired FK boundary conditions), since it is a consequence of the analogous bound for the FK Glauber dynamics, where free boundary conditions at p_c(q) are self-dual to wired boundary conditions. In fact, we establish (1.4) for all FK boundary conditions sampled from the free or wired Gibbs measures (see Proposition 5.2), as well as ones that are free on three sides and wired on the fourth (Corollary 5.15).
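The exponential lower bound in Theorem 2 rests on identifying a bottleneck between phases. The general mechanism can be illustrated on a toy reversible chain: for any cut S, Cheeger's inequality gives gap ≤ 2Φ(S), so a low-probability bottleneck state between two "phases" forces a small spectral gap. This sketch is only an analogy (a Metropolis chain on five states), not the paper's construction:

```python
import numpy as np

# Two "phases" {0,1} and {3,4} separated by a bottleneck state 2 of tiny
# stationary mass. Conductance of the cut is small, and by Cheeger's
# inequality gap <= 2*Phi(S) for any cut S, so the gap is small too.
pi = np.array([0.45, 0.45, 0.001, 0.045, 0.054])
pi = pi / pi.sum()

# Metropolis chain on the path 0-1-2-3-4, reversible w.r.t. pi.
n = len(pi)
P = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            P[i, j] = 0.5 * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()

# Conductance of the cut S = {0,1}: Phi(S) = Q(S, S^c) / min(pi(S), pi(S^c)).
S = [0, 1]
Q = sum(pi[i] * P[i, j] for i in S for j in range(n) if j not in S)
phi = Q / min(pi[S].sum(), 1.0 - pi[S].sum())

# Exact spectral gap: 1 minus the second-largest eigenvalue.
eigs = np.sort(np.linalg.eigvals(P).real)
gap = 1.0 - eigs[-2]
```

Shrinking the bottleneck mass pi[2] shrinks phi, and with it the gap, which is the analogue of the exp(cn) slowdown coming from the ordered/disordered bottleneck on the torus.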
Theorems 2 and 3 show the similarities between the dynamical behavior of the Potts model at its critical point β_c in the presence of a discontinuous phase transition, and the 2D Ising model in the low temperature regime β > β_c. The proof of Theorem 2 is based on identifying a bottleneck, involving the geometry of the torus, between the ordered and disordered phases in the critical FK model, using only the multiplicity of Gibbs measures; this is akin to the energy barrier between the plus and minus phases in the low temperature Ising model. Moreover, through this similarity, our analysis of the Potts model at β_c extends to its entire low temperature regime β > β_c, where the slow mixing behavior of Glauber dynamics was shown for q large enough in [6,7]. An adaptation of the proof of Theorem 2 establishes this result for all q > 1. The proof of Theorem 3 follows the approach used in [41] to establish sub-exponential upper bounds on t_mix for Swendsen-Wang in the presence of all-plus boundary conditions, and involves adaptations of cluster-expansion techniques and the Wulff construction framework of [14] to the FK model. The absence of monotonicity in the Potts model frequently leads us to work directly with the FK representation. However, unlike in the Ising model, where central to the upper bounds on mixing in many related works is the coupling of configurations beyond an interface between clusters (e.g., the interface between the plus and minus phases, used to establish the inductive step in the multi-scale argument of [36]), the boundary conditions of the FK model may feature long-range connections between vertices. Using these as a "bridge" over the interface (see Figure 3), different FK configurations below the interface may induce different distributions above it, thus preventing the coupling. Working around obstacles of this type comprises a significant part of the proof of Theorem 3.

1.1. Related work.
The critical slowdown picture of Glauber dynamics for the 2D Ising model is by now fairly well understood. For β < β_c, the dynamics on an n × n torus has gap^{-1} = O(1) via the work of Martinelli and Olivieri [38,39] and Martinelli, Olivieri and Schonmann [40], showing that, in this regime, there is a uniform bound on the inverse gap (in fact under arbitrary boundary conditions; see [37, §3.7]). That this dynamics has gap^{-1} ≥ exp(c_β n) at any β > β_c for some c_β > 0 was shown by Chayes, Chayes and Schonmann [9], and thereafter with the sharp c_β by Cesi et al. [8]. Finally, a polynomial upper bound on gap^{-1} at β = β_c was given in the aforementioned paper [35]; establishing the correct dynamical critical exponent (believed to be universal and approximately 2.17; cf. [35] and its references) remains a challenging open problem. As for Swendsen-Wang, comparison estimates due to Ullrich [48,49] imply that its inverse gap is at most that of Glauber dynamics on any graph and at any temperature (see Theorem 2.7); thus for q = 2 on Z^2 it also has gap^{-1} = O(1) for all β < β_c and for all β > β_c thanks to duality, and similarly at β = β_c it has gap^{-1} = n^{O(1)}. For all other q > 1, Glauber dynamics for the Potts model on (Z/nZ)^2 is again known to have gap^{-1} = O(1) for all β < β_c by combining the following results: Alexander [1] related exponential decay of connection probabilities in the FK model on Z^2 to an analogous spatial mixing property in the Potts model on a finite box; Beffara and Duminil-Copin [3] proved the exponential decay of correlations in the FK model for all β < β_c; and the works of Martinelli et al. [38-40] translate the aforementioned spatial mixing property to an O(1) bound on the inverse gap. In contrast, Potts Glauber dynamics on (Z/nZ)^2 is always expected to be exponentially slow for β > β_c: as mentioned before, this is known for q = 2, and was proved for large enough q in [6,7].
Using the above mentioned estimates for high temperatures, comparison estimates, and duality, Swendsen-Wang dynamics for the Potts model for any q > 1 also has gap^{-1} = O(1) for all β ≠ β_c. Blanca and Sinclair [5] recently showed that for any q > 1 both Chayes-Machta dynamics and (heat-bath) Glauber dynamics for the FK model have t_mix = O(log n) for all p ≠ p_c (enjoying duality, the latter mixes rapidly at p > p_c, unlike for the Potts model). That t_mix should at the critical p = p_c be polynomial in n for 1 < q ≤ 4 and exponential in it for every q > 4 was left in [5] as an open question. (See also Li and Sokal [33]; there, a polynomial lower bound on the mixing of Swendsen-Wang and Glauber dynamics was given in terms of the specific heat, a physical quantity which itself is not rigorously known. In §3 (Theorem 3.6) we give a rigorous polynomial lower bound for gap^{-1} of the Potts Glauber dynamics.) In the only two cases so far where the dynamical critical behavior on (Z/nZ)^2 has been addressed (the case q = 2 in [35] and the case of integer q large enough in [6,7]), through the comparison inequalities of Ullrich, the results apply to all Markov chains discussed above (each has t_mix ≲ n^c at q = 2 and t_mix ≳ exp(cn) at q large enough). Note that the results of [6,7] are applicable to every dimension d ≥ 2, while requiring that q be sufficiently large as a function of d.
Finally, detailed results are known on the dynamical behavior of Potts/FK models on the complete graph (mean-field); see, e.g., [4,11,20,24] and the references therein.

2. Preliminaries

In what follows we review the model definitions and properties, as well as the tools that will be used in our analysis. For a more detailed survey of the random cluster model, see [25]. For more details on Markov chain mixing times and Glauber dynamics see [32] and [37], respectively. Throughout this paper, we use the notation f ≲ g for two sequences f(n), g(n) to denote f = O(g), and let f ≍ g denote f ≲ g ≲ f.

2.1. Potts model. The (ferromagnetic) q-state Potts model on a graph G = (V, E) is the probability distribution over configurations σ ∈ Ω_p = [q]^V (viewed as assignments of colors out of [q] = {1, ..., q} to the vertices of G) in which the probability of σ w.r.t. the inverse-temperature β > 0 and the boundary conditions ζ (an assignment of colors in [q] to the vertices of some subgraph H ⊂ G) is given by

    µ^ζ_{G,β,q}(σ) = (1/Z_p) exp( β Σ_{u∼v} 1{σ(u) = σ(v)} )   for σ agreeing with ζ on H,

where the sum is over unordered pairs of adjacent vertices {u, v} in V(G), and the normalizing constant Z_p is the partition function. Throughout the paper, we consider graphs that are rectangular subsets of Z² with nearest neighbor edges and vertex set

    Λ = Λ_{n,n'} := ⟦0, n⟧ × ⟦0, n'⟧,

where n' = αn for some fixed aspect ratio 0 < α ≤ 1, and the notation ⟦a, b⟧ stands for {k ∈ Z : a ≤ k ≤ b}. We use the abbreviated form Λ when n and α are clear from the context. For general subsets S ⊂ Z², the boundary ∂S will be the set of vertices in S with a neighbor in Z² − S, and its edge set will be all edges in Z² between vertices in ∂S; we set the interior S° = S − ∂S. When considering rectangles Λ, denote the southern (bottom) boundary of Λ by ∂_s Λ := ⟦0, n⟧ × {0}, define ∂_n, ∂_w and ∂_e analogously, and let multiple subscripts denote their union, i.e., ∂_{e,w} Λ = ∂_e Λ ∪ ∂_w Λ.

2.2. Random cluster (FK) models.
For a graph G = (V, E), a random cluster (FK) configuration ω ∈ Ω_rc = {0, 1}^E assigns binary values to the edges of G, either open (1) or closed (0). (In the context of boundary conditions, these are often referred to instead as wired and free, respectively.) A cluster is a maximal connected subset of vertices that are connected by open bonds, where singletons count as individual clusters. For a subset H ⊂ V(G), we define FK boundary conditions ξ as follows: first augment the graph by adding edges between any two vertices in H not already connected by an edge; then, if the resulting boundary subgraph has vertex set H and edge set E(H) consisting of all edges between vertices in H, ξ is an FK configuration in {0, 1}^{E(H)}. A boundary condition ξ can be identified with a partition of H given by the clusters of ξ. The FK model is the probability distribution over FK configurations on the remaining edge set E(G) − E(H), where the probability of ω under the boundary conditions ξ and parameters p ∈ [0, 1], q > 0 is

    π^ξ_{G,p,q}(ω) = (1/Z_rc) p^{o(ω)} (1 − p)^{c(ω)} q^{k(ω)},

where o(ω), c(ω), and k(ω) are the number of open bonds, closed bonds and clusters in ω, respectively, with the number of clusters being computed using connections from ξ as well as ω. The partition function Z_rc is again the proper normalizing constant. Infinite volume Gibbs measures may be found by taking limits of increasing rectangles Λ_n under a specified sequence ξ = ξ(n) of boundary conditions on ∂Λ_n, where the important cases of all-wired and all-free boundary conditions are denoted by 1 and 0 respectively; let π^ξ_{Z²} denote the weak limit (if it exists) of π^ξ_{Λ_n} as n → ∞.

Edwards-Sokal coupling. The Edwards-Sokal coupling [18] provides a way to move back and forth between the Potts model and the random cluster model on a given graph G for q ∈ {2, 3, . . .}.
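Before describing the coupling in detail, note that the random cluster weights p^{o(ω)}(1 − p)^{c(ω)}q^{k(ω)} can be enumerated by brute force on a tiny graph. The Python sketch below (an illustration, not part of the paper; the graph and parameters are made up) does so on a triangle, using a small union-find to count clusters; at q = 1 the cluster term drops out, so the measure must reduce to independent Bernoulli(p) percolation, which gives a useful sanity check.

```python
import itertools

def fk_distribution(n_vertices, edges, p, q):
    """Exact random-cluster measure on a tiny graph (free boundary)."""
    def n_clusters(open_edges):
        # union-find over vertices joined by open bonds; singletons count too
        parent = list(range(n_vertices))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (u, v), o in zip(edges, open_edges):
            if o:
                parent[find(u)] = find(v)
        return len({find(x) for x in range(n_vertices)})

    weights = {}
    for omega in itertools.product((0, 1), repeat=len(edges)):
        o = sum(omega)
        weights[omega] = p**o * (1 - p)**(len(edges) - o) * q**n_clusters(omega)
    Z = sum(weights.values())          # partition function Z_rc
    return {w: v / Z for w, v in weights.items()}

# Triangle; at q = 1 this must be i.i.d. Bernoulli(p) on the three edges
pi = fk_distribution(3, [(0, 1), (1, 2), (0, 2)], p=0.4, q=1)
print(abs(pi[(1, 1, 0)] - 0.4 * 0.4 * 0.6) < 1e-12)
```

For q > 1 the factor q^{k(ω)} rewards configurations with many clusters, which is the source of the positive association exploited throughout the paper.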
The joint probability assigned by this coupling to (σ, ω), where σ ∈ Ω_p is a q-state Potts configuration at inverse-temperature β > 0 and ω ∈ Ω_rc is an FK configuration with parameters (p = 1 − e^{−β}, q), is proportional to

    Π_{e=xy∈E} [ (1 − p) 1{ω(e) = 0} + p 1{ω(e) = 1} 1{σ(x) = σ(y)} ].

It follows that, starting from a Potts configuration σ ∼ µ_{G,β,q}, one can sample an FK configuration ω ∼ π_{G,p,q} by letting ω(e) = 1 (e ∈ ω) with probability p = 1 − e^{−β} if the endpoints x, y of the edge e have σ(x) = σ(y), and ω(e) = 0 (e ∉ ω) otherwise. Conversely, from ω ∼ π_{G,p,q}, one obtains σ ∼ µ_{G,β,q} by assigning an i.i.d. color in [q] to each cluster of ω (i.e., σ(x) assumes that color for every vertex x of that cluster). In the presence of boundary conditions ζ for the Potts model, it is possible to sample σ ∼ µ^ζ_{G,β,q} using the random cluster model as follows. Associate to ζ the FK boundary conditions ξ that wire two boundary sites x, y to each other if and only if ζ(x) = ζ(y). Further denote by E_ζ the random cluster event that no two boundary sites x, y with ζ(x) ≠ ζ(y) are connected via ω in G. Then one can sample a configuration of µ^ζ_{Λ,β,q} by first sampling ω ∼ π^ξ_{Λ,p,q}(· | E_ζ) for p = 1 − e^{−β}, then coloring the boundary clusters as specified by ζ, and coloring every other cluster by an i.i.d. color uniformly over [q]. For further details, see [35], where E_ζ was introduced in the context of the Ising model.

Planar duality. On Z², a configuration ω is uniquely identified with a configuration ω* on the dual graph (Z²)* = Z² + (1/2, 1/2) as follows: for every primal edge e and its dual edge e* (intersecting at their center points), ω*(e*) = 1 if and only if ω(e) = 0. For every q ≥ 1, the involution p ↦ p* given by

    p p* / ((1 − p)(1 − p*)) = q

maps the FK model with parameters (p, q) to the FK model with parameters (p*, q) on the dual graph: if ω ∼ π^ξ_{G,p,q}, then ω* ∼ π^{ξ*}_{G*,p*,q}, where the boundary conditions ξ* are determined on a case by case basis, but it is important to note that free and wired boundary conditions are dual to one another. The fixed point of this involution is the self-dual point p_sd(q) = √q/(1 + √q). It is known [3] that on Z², for all q ≥ 1, one has p_c(q) = p_sd(q).
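The Edwards-Sokal coupling can be checked exactly on the smallest possible example. The following Python sketch (an illustration with made-up parameters, not from the paper) enumerates the joint weights (1 − p)1{ω(e) = 0} + p 1{ω(e) = 1}1{σ(x) = σ(y)} on a single-edge graph; summing over ω must recover the Potts marginal, which for q = 2 and one edge gives P(σ(x) = σ(y)) = 1/(1 + e^{−β}).

```python
import itertools, math

def es_joint(n_vertices, edges, beta, q):
    """Edwards-Sokal joint distribution on (sigma, omega) for a tiny graph."""
    p = 1 - math.exp(-beta)
    joint = {}
    for sigma in itertools.product(range(q), repeat=n_vertices):
        for omega in itertools.product((0, 1), repeat=len(edges)):
            w = 1.0
            for (u, v), o in zip(edges, omega):
                if o:
                    # an open edge is only compatible with equal endpoint colors
                    w *= p if sigma[u] == sigma[v] else 0.0
                else:
                    w *= 1 - p
            joint[(sigma, omega)] = w
    Z = sum(joint.values())
    return {k: v / Z for k, v in joint.items()}

joint = es_joint(2, [(0, 1)], beta=1.0, q=2)
# sigma-marginal: P(endpoints agree) = 1/(1 + e^{-beta}) = e/(e + 1) here
p_same = sum(v for (s, _), v in joint.items() if s[0] == s[1])
print(abs(p_same - math.e / (math.e + 1)) < 1e-12)
```

Conditioning the same joint table on σ (resp. ω) reproduces the two sampling recipes described above: percolation on monochromatic edges, and uniform recoloring of clusters.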
Throughout the paper, unless otherwise specified, let p = p_c(q) and β = β_c(q) (so p_c = 1 − e^{−β_c}), omitting these from the notation, as well as q wherever it is clear from the context. For two vertices x, y ∈ V, denote by x ←→ y the event that x and y belong to the same cluster of ω. In the context of a subgraph S ⊂ G, write x ←→^S y to denote that x and y belong to the same cluster of ω restricted to E(S − ∂S). Refer to the event that the top and bottom sides of a rectangle R are connected by an open path within R as a vertical crossing of R, denoted C_v(R), and denote the analogously defined horizontal crossing of the rectangle R by C_h(R). For an annulus A ⊂ Z² (a rectangle with a concentric rectangular hole removed), denote by C_o(A) the event that A contains an open circuit, i.e., an open path in A winding around the hole of A. Finally, we add the *-symbol to the above crossing events to refer to the analogous dual-crossings (occurring in the configuration ω* and the appropriate dual subgraphs).

FKG inequality, monotonicity and the Domain Markov property. An event in the FK model is increasing if it is closed under addition of (open) edges, and decreasing if it is closed under removal of edges. For q ≥ 1, the model enjoys the FKG inequality [19]: for every two increasing events A, B,

    π^ξ_G(A ∩ B) ≥ π^ξ_G(A) π^ξ_G(B).

Consequently, the model for q ≥ 1 is monotone in boundary conditions: for every boundary conditions η ≥ ξ (w.r.t. the partial ordering of configurations), π^η_G ⪰ π^ξ_G, that is, π^η_G(A) ≥ π^ξ_G(A) holds for every increasing event A. The Domain Markov property of the FK model states that, on any graph G with boundary conditions ξ, for every subgraph G' ⊂ G with boundary conditions η that are compatible with ξ,

    π^ξ_G( ω_{G'} ∈ · | ω_{G−G'} = η ) = π^η_{G'}.

FK phase transition and Russo-Seymour-Welsh (RSW) estimates. The FK model at fixed q ≥ 1 undergoes a phase transition at p_c(q) = sup{p : θ(p, q) = 0}, where θ(p, q) is the probability that the origin lies in an infinite cluster under π_{Z²,p,q}. Our proofs hinge on recent results of [17] on this phase transition, summarized as follows.

(1) Discontinuous phase transition: π^0_{Z²} ≠ π^1_{Z²}.
(2) Exponential decay of correlations under π^0: there exists some c > 0 such that

    π^0_{Z²}( 0 ←→ ∂⟦−n, n⟧² ) ≤ e^{−cn}.

Discontinuity of the phase transition, conjectured for all q > 4, was first proved by Kotecký and Shlosman [28] for sufficiently large q; the proof in [30] applies whenever q^{1/4} > (κ + √(κ² − 4))/2, where κ is the connective constant of Z². Plugging in the rigorous bound κ < 2.6792 due to [44] affirms the phase coexistence for all q > 24.78. For 1 < q ≤ 4, the continuity of the phase transition was established in [17] via the following RSW estimates¹ (note the difference between 1 < q < 4 and the extremal case q = 4, where full RSW-type bounds are believed to fail).

¹ The proofs in [17] of Theorems 2.3 and 2.4 were for the special case of ε = ε' but readily extend to the more general setting presented here.

Theorem 2.2 ([17, Theorem 7]). Consider the critical FK model for 1 ≤ q < 4 on Λ = Λ_{n,n'} with n' = αn for fixed 0 < α ≤ 1 and arbitrary boundary conditions ξ. Then there exists some p_0 = p_0(q, α) > 0 such that π^ξ_Λ(C_v(Λ)) ≥ p_0.

Theorem 2.3 ([17, Theorem 3]). Let q = 4 and consider the critical FK model on Λ = Λ_{n,n'} with n' = αn for fixed 0 < α ≤ 1. Then for every ε, ε' > 0 there exists some p_0 = p_0(α, ε, ε') > 0 bounding the corresponding crossing probability from below, uniformly over the boundary conditions ξ.

Theorem 2.4 ([17]). Fix ε, ε' > 0 and 0 < α ≤ 1, and consider the critical FK model at 1 ≤ q ≤ 4 on the annulus A = Λ_{n,n'} − ⟦εn, (1 − ε)n⟧ × ⟦ε'n', (1 − ε')n'⟧ for n' = αn. There exists p_0 = p_0(q, α, ε, ε') so that, for every boundary condition ξ, π^ξ_A(C_o(A)) ≥ p_0.

A consequence of the above RSW-type bounds is polynomial decay of correlations for the critical FK model at 1 ≤ q ≤ 4 (Theorem 2.5; see, e.g., the proof of [17, Lemma 1]).

2.3. Markov chain mixing times. Consider a Markov chain (X_t) with finite state space Ω, transition kernel P and stationary distribution π. In the continuous-time setting, instead of P^t one considers the heat kernel H_t := e^{t(P−I)} = e^{tL}, where L = P − I is the generator of the chain.

Spectral gap.
The mixing time of the Markov chain is intimately related to the gap in its spectrum: in discrete time, gap := 1 − λ_2, where λ_2 is the second largest eigenvalue of P, and in continuous time it is the gap in the spectrum of the generator L. An important variational characterization of the spectral gap is given by the Dirichlet form:

    gap = inf { E(f)/Var_π(f) : f nonconstant },   where   E(f) = (1/2) Σ_{x,y∈Ω} π(x) P(x, y) (f(x) − f(y))².

Mixing times. Denote the (worst-case) total variation distance between X_t and π by

    d_tv(t) = max_{x∈Ω} ‖P^t(x, ·) − π‖_tv,

where the total variation distance between two probability measures ν, π on Ω is

    ‖ν − π‖_tv = max_{A⊂Ω} |ν(A) − π(A)| = (1/2) Σ_{x∈Ω} |ν(x) − π(x)|.

Further define the coupling distance

    d̄_tv(t) = max_{x,y∈Ω} ‖P^t(x, ·) − P^t(y, ·)‖_tv,

noting that d̄_tv is submultiplicative and d_tv(t) ≤ d̄_tv(t) ≤ 2 d_tv(t). The total variation mixing time of the Markov chain w.r.t. the precision parameter 0 < δ < 1 is

    t_mix(δ) = inf { t ≥ 0 : d_tv(t) ≤ δ }.

For any choice of δ < 1/2, the quantity t_mix(δ) enjoys submultiplicativity thanks to the aforementioned connection with d̄_tv; we write t_mix, omitting the precision parameter δ, to refer to the standard choice of δ = 1/(2e). The total variation mixing time is bounded from below and from above via the gap: one has t_mix ≥ gap^{-1} − 1, and if gap is the absolute spectral gap of the chain then t_mix ≤ log(2e/π_min) gap^{-1}, where π_min = min_x π(x) (see, e.g., [32, §12.2]). For the FK and Potts models on a box with O(n²) edges and sites, respectively, and fixed 0 < p < 1 and q ≥ 1, there exists some c > 0 such that π_min ≥ e^{−cn²}; thus t_mix and gap^{-1} are equivalent up to n^{O(1)} factors.

2.4. Dynamics for spin systems.

Heat-bath Glauber dynamics. Continuous-time heat-bath Glauber dynamics for the Potts model on Λ is the following reversible Markov chain w.r.t. µ_Λ. Assign i.i.d. rate-1 Poisson clocks to all interior vertices of Λ. When the clock at a site x rings, the chain resamples σ(x) according to µ_Λ conditioned on the colors of all the sites other than x to agree with their current values in the configuration σ: the probability that the new color assigned to x is k ∈ [q] is proportional to exp(β Σ_{y∼x} 1{σ(y) = k}).
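This single-site update rule is short to code. The sketch below (an illustration with an ad-hoc neighbor map, not the paper's implementation) resamples one site exactly from the conditional law proportional to exp(β · #{neighbors of color k}):

```python
import math, random

def glauber_update(sigma, neighbors, x, q, beta, rng=random):
    """One heat-bath update at site x for the q-state Potts model:
    P(new color = k) is proportional to exp(beta * #{y ~ x : sigma[y] = k})."""
    weights = [math.exp(beta * sum(1 for y in neighbors[x] if sigma[y] == k))
               for k in range(q)]
    sigma = list(sigma)
    sigma[x] = rng.choices(range(q), weights=weights)[0]
    return sigma

# 2x2 grid (sites 0..3), q = 3; resample site 0, whose neighbors are 1 and 2
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
sigma = glauber_update([0, 1, 1, 0], neighbors, 0, q=3, beta=1.0,
                       rng=random.Random(1))
print(sigma[1:] == [1, 1, 0])        # only site 0 was resampled
```

At large β the update is overwhelmingly likely to align x with the majority color among its neighbors, which is the local mechanism behind the low-temperature bottlenecks studied in §4.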
The heat-bath Glauber dynamics for the FK model on Λ is the following reversible Markov chain w.r.t. π_Λ. Each interior edge of Λ is assigned an i.i.d. rate-1 Poisson clock; when the clock at an edge e = xy rings, the chain resamples ω(e) according to Bernoulli(p) if x ←→ y in Λ − {e}, and according to Bernoulli(p/(p + q(1 − p))) otherwise. The random mapping representation of this dynamics views the updates as a sequence (J_i, U_i, T_i)_{i≥1}, where J_i is the location of the i-th update, U_i is an i.i.d. uniform random variable on [0, 1] used to perform the i-th resampling, and T_i is the time of the i-th clock ring.

Monotonicity and censoring inequalities. The heat-bath Glauber dynamics for the FK model at q ≥ 1 is monotone: for every two FK configurations ω_1 ≥ ω_2 and every t ≥ 0, the law at time t of the chain started from ω_1 stochastically dominates the law at time t of the chain started from ω_2. The grand coupling for Glauber dynamics is a coupling of the chains from all initial configurations on Λ: one appeals to the random mapping representation of Glauber dynamics described above, using the same update sequence (J_i, U_i, T_i)_{i≥1} for each one of these chains. For q ≥ 1, the monotonicity of the dynamics guarantees that this coupling preserves the partial ordering of the configurations at all times t ≥ 0. In particular, under the grand coupling, the value of an edge e in Glauber dynamics at time t from an arbitrary initial state ω_0 is sandwiched between the corresponding values from the free and wired initial states; thus, by a union bound over all edges, the coupling distance d̄_tv(t) is at most |E(Λ)| times the maximal probability that the chains started from the wired and free initial states disagree on a given edge at time t (see this well-known inequality, e.g., in [41, Eq. (2.10)]); consequently, bounds on such single-edge disagreement probabilities translate into mixing time estimates. The Peres-Winkler censoring inequalities [43] for monotone spin systems allow one to "guide" the dynamics to equilibrium by restricting the updates to prescribed parts of the underlying graph, thus supporting an appropriate multi-scale analysis, the key being that censoring all other updates can only slow down mixing (this next flavor of the inequality follows from the same proof of [43]).
Let µ_T be the law at time T of continuous-time Glauber dynamics of a monotone spin system on Λ with stationary distribution π, whose initial distribution µ_0 is such that µ_0/π is increasing. Set 0 = t_0 < t_1 < · · · < t_k = T, let Λ_1, . . . , Λ_k be subsets of the sites of Λ, and let µ̃_T be the law at time T of the censored dynamics, started at µ_0, where only updates within Λ_i are kept in the time interval [t_{i−1}, t_i). Then ‖µ_T − π‖_tv ≤ ‖µ̃_T − π‖_tv and µ_T ⪯ µ̃_T; moreover, µ_T/π and µ̃_T/π are both increasing.

Cluster dynamics. Swendsen-Wang dynamics for the q-state Potts model on G = (V, E) at inverse-temperature β is the following discrete-time reversible Markov chain. From a spin configuration σ ∈ Ω_p on G, generate a new state σ' ∈ Ω_p as follows.

(1) Introduce an auxiliary FK configuration ω by keeping each edge e = xy with σ(x) = σ(y) open independently with probability p = 1 − e^{−β} (all other edges are closed).
(2) Reassign to each cluster of ω an i.i.d. color chosen uniformly over [q], yielding the new configuration σ'.

Chayes-Machta dynamics for the FK model on G = (V, E) with parameters (p, q), for q ≥ 1 and 0 < p < 1, is the following analogous discrete-time reversible Markov chain. From an FK configuration ω ∈ Ω_rc on G, generate a new state ω' ∈ Ω_rc as follows.

(1) Declare each cluster c of ω active independently with probability 1/q, setting X_c = 1 if c is active and X_c = 0 otherwise.
(2) Resample every e = xy such that x and y belong to clusters with X_c = 1 via i.i.d. random variables X_e ∼ Bernoulli(p), to obtain the new configuration ω'.

In the presence of boundary conditions, Step (2) of the Swendsen-Wang dynamics does not reassign the color of any cluster that is incident to a vertex whose color is dictated by the boundary conditions, and analogously, Step (2) of the Chayes-Machta dynamics does not resample an edge whose value is dictated by the boundary conditions. Variants of Chayes-Machta dynamics with 1 ≤ k ≤ q "active colors" have also been studied, with numerical evidence for k = q being the most efficient choice; see [21].

Spectral gap comparisons. The following comparison inequalities between the above Markov chains are due to Ullrich (see [48,49]).

Theorem 2.7. Let q ≥ 2 be an integer. Let gap_p and gap_rc be the spectral gaps of Glauber dynamics for the Potts and FK models, respectively, on a graph G = (V, E) with maximum degree ∆ and no boundary conditions, and let gap_sw be the spectral gap of Swendsen-Wang.
Then gap_p, gap_rc and gap_sw agree up to multiplicative factors that are polynomial in |V| (with constants depending on q, β and ∆); in particular, the inverse gap of Swendsen-Wang is at most that of Glauber dynamics, up to such factors. The proof of (2.5) further extends to all real q > 1, whence the corresponding comparison also holds for the FK Glauber dynamics at every real q > 1.

Canonical paths. The following well-known geometric approach (see [12,13,27,45] as well as [32, Corollary 13.24]) serves as an effective method for obtaining an upper bound on the inverse gap of a Markov chain, and will be used in our proof of Theorem 3.

Theorem 2.10. Let P be the transition kernel of a discrete-time Markov chain with stationary distribution π, and write Q(x, y) = π(x)P(x, y) for every x, y ∈ Ω. For each (a, b) ∈ Ω², assign a path γ(a, b) = (x_0 = a, x_1, . . . , x_{|γ(a,b)|} = b) such that P(x_i, x_{i+1}) > 0 for every i, where |γ(a, b)| denotes the length of the path. Then

    gap^{-1} ≤ max_{(x,y): P(x,y)>0} (1/Q(x, y)) Σ_{(a,b): (x,y)∈γ(a,b)} π(a) π(b) |γ(a, b)|.    (2.7)

A very standard application of Theorem 2.10 (see, e.g., [36] in the setting of the Ising model) proves upper bounds on mixing times of spin systems in terms of the cut-width of the underlying graph. We omit the proof and note that it follows for the Potts and FK models by making the natural modifications and observing that in the FK setting, the probability of any single edge-flip is at least some c(p, q) > 0.

Lemma 2.11. Consider the Glauber dynamics for the q-state Potts model at inverse-temperature β on a rectangle Q = ⟦0, n⟧ × ⟦0, ℓ⟧ for 0 ≤ ℓ ≤ n, with arbitrary boundary conditions. There exists a constant c(β, q) > 0 such that gap^{-1} ≤ exp(c(ℓ + log n)), and an analogous bound holds for the heat-bath dynamics on the FK model.

3. Mixing at a continuous phase transition

This section contains the proof of Theorem 1 (as well as its analogs for boxes with non-periodic boundary conditions); recall from Theorem 2.7 that it suffices to prove the desired bounds for Glauber dynamics for the Potts model in order to obtain them for FK Glauber as well as Swendsen-Wang and Chayes-Machta dynamics. Consider Λ = Λ_{n,n'} = ⟦0, n⟧ × ⟦0, n'⟧ for n' = αn, where α ∈ [ᾱ, 1] for some fixed 0 < ᾱ ≤ 1/2.

3.1. Mixing under arbitrary boundary conditions. We first establish analogues of Eqs. (1.1)-(1.2) for Glauber dynamics for the Potts model with arbitrary boundary conditions, modulo an equilibrium estimate on crossing probabilities at q = 4 which we establish in §3.2.
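Since all the bounds in this section are routed through the spectral gap and t_mix of §2.3, a toy computation may help fix those definitions (illustrative numbers only, not from the paper). For a two-state chain, gap and t_mix can be computed exactly, and the general relation t_mix ≥ gap^{-1} − 1 quoted in §2.3 can be verified directly:

```python
import numpy as np

# Toy two-state chain: P = [[1-a, a], [b, 1-b]]; gap = a + b, pi = (b, a)/(a+b)
a, b = 0.2, 0.1
P = np.array([[1 - a, a], [b, 1 - b]])
pi = np.array([b, a]) / (a + b)                       # stationary distribution
gap = 1 - sorted(np.linalg.eigvals(P).real)[-2]       # 1 - lambda_2

def d_tv(t):
    """Worst-case total variation distance to stationarity after t steps."""
    Pt = np.linalg.matrix_power(P, t)
    return max(0.5 * np.abs(Pt[x] - pi).sum() for x in range(2))

t_mix = next(t for t in range(1, 10**4) if d_tv(t) <= 1 / (2 * np.e))
print(gap, t_mix)                  # gap = 0.3 for these numbers
print(t_mix >= 1 / gap - 1)        # lower bound on t_mix via the gap
```

For spin systems the state space is exponentially large, which is why the paper never diagonalizes P directly and instead bounds the gap through couplings, canonical paths, and bottlenecks.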
Whenever we refer to arbitrary or fixed boundary conditions, we mean ones that assign a color, or free, to each of the vertices of ∂Λ (in contrast to periodic ones). The following is a general form of the approach of [35] to proving upper bounds on mixing times in the presence of RSW bounds; we stress that, while this proof does extend from the Ising model to the Potts model, it in fact fails to produce a polynomial upper bound for the critical FK model at noninteger 1 < q < 4, despite the availability of the necessary (uniform) RSW estimates (cf. [22]).

Theorem 3.1. Suppose q ≥ 1 and there exists a nonincreasing sequence (a_n) such that, uniformly over the boundary conditions, the probability of a dual vertical crossing of the middle third ⟦n/3, 2n/3⟧ × ⟦0, n'⟧ of Λ is at least a_n (3.1). Then there exists some absolute constant c > 0 such that Glauber dynamics for the Potts model on Λ = Λ_{n,n'} with arbitrary boundary conditions ζ satisfies gap^{-1} ≤ (c/a_n)^{c log n}.

Combining the RSW bound of Theorem 2.2 with Theorem 3.1 establishes the analog of Eq. (1.1) for a rectangle with arbitrary (non-periodic) boundary conditions. At q = 4, we will later prove a polynomially decaying bound on crossing probabilities uniform in boundary conditions (see Theorem 3.4), through which Theorem 3.1 will yield the matching quasi-polynomial upper bound on mixing.

Proof of Theorem 3.1. We use the block dynamics technique of Theorem 2.9 used in [35]. Define two sub-blocks of Λ as follows:

    B_w := ⟦0, 2n/3⟧ × ⟦0, n'⟧,   B_e := ⟦n/3, n⟧ × ⟦0, n'⟧.

Then let B denote the block dynamics on Λ with sub-blocks B_w, B_e as defined in §2.4. We bound gap^ζ_B and gap^φ_{B_i} of Theorem 2.9 uniformly in ζ, φ.

Lemma 3.3. For any two initial configurations σ, σ' on Λ with corresponding block dynamics chains X_t and Y_t, there exists an absolute constant c > 0 such that, if (a_n) is a sequence satisfying (3.1), there is a grand coupling such that P(X_1 ≠ Y_1) ≤ 1 − c a_n. Moreover, there exists some c' > 0 such that gap^ζ_B ≥ c' a_n uniformly in ζ.

Proof. We construct explicitly a grand coupling that allows us to couple the two configurations with the above probability.
First recall that the Potts boundary condition ζ on Λ corresponds to an FK boundary condition ξ in which two boundary vertices are in the same cluster if and only if they have the same color, along with the decreasing event E_ζ. Via the Edwards-Sokal coupling, we move from the Potts model with boundary ζ to the corresponding FK model with boundary ξ conditional on the event E_ζ. Suppose the clock at block B_w rings first. The two initial configurations σ, σ' induce two Potts boundaries η, η' corresponding to FK boundaries ψ, ψ' on ∂_e B_w, along with the events E_{η,ζ} and E_{η',ζ}; here (η, ζ) is the boundary condition on B_w with η on ∂_e B_w and the restriction of ζ to ∂B_w on the rest of ∂B_w. Here and throughout the rest of the paper, when discussing boundary conditions, we use the restriction to a line to denote the boundary condition induced on that line by the configuration we have revealed. We seek to couple the two initial configurations on all of Λ by first coupling them on Λ − B°_e. For each initial configuration, the block dynamics samples a Potts configuration on B_w by sampling an FK configuration from π^{ψ,ξ}_{B_w}(· | E_{η,ζ}) and π^{ψ',ξ}_{B_w}(· | E_{η',ζ}), respectively. Via the grand coupling defined in §2.4 of all boundary conditions on ∂_e B_w, we reveal the open component of ∂_e B_w in order to condition on the right-most dual vertical crossing of Λ − B°_e. Note that all FK measures we consider are stochastically dominated by π^{1,ξ}_{B_w} (by monotonicity in boundary conditions and since E_{η,ζ} is a decreasing event). Then if a sample from π^{1,ξ}_{B_w} has a dual vertical crossing in B_w ∩ B_e, under the grand coupling, so will all the samples of π^{ψ,ξ}_{B_w}(· | E_{η,ζ}). Under the event E_{η,ζ}, by construction it is impossible to add boundary connections by modifying the interior of B_w (either such connections would be between monochromatic sites, in which case they are already in the same cluster, or otherwise such connections are impossible under E_{η,ζ}).
Thus, if there is such a dual vertical crossing under π^{1,ξ}_{B_w}, the event E_{η,ζ} ensures that to the west of that crossing, all realizations of π^{ψ,ξ}_{B_w} see the same boundary conditions. By the Domain Markov property, the grand coupling then couples all such realizations west of the right-most dual-crossing of π^{1,ξ}_{B_w}, and therefore on all of Λ − B°_e (for the explicit revealing procedure, see [35, §3.2]). We then use the same randomness to color coupled clusters the same way, and couple all corresponding Potts configurations on Λ − B°_e. The colorings of the boundary clusters are predetermined, but because the ∂_e B_w boundary clusters cannot extend past the dual vertical crossing, the two Potts configurations can be coupled west of the dual vertical crossing. Suppose the clock at block B_e rings next. If we have successfully coupled X_t and Y_t in Λ − B°_e, then the identity coupling couples the configurations on all of Λ. Note that, by the assumption of Theorem 3.1 and the fact that n' ≤ n, the above dual vertical crossing has probability at least a_n. Moreover, by time t = 1, there is a probability c > 0 that the dynamics rang the clock of B_w and then the clock of B_e, in which case we have coupled the two configurations with probability of order a_n at time t = 1. By the submultiplicativity of d̄(t), d̄(t) ≤ (1 − c a_n)^{⌊t⌋} for all t > 0, which implies that there exists a new constant c' > 0 such that t_mix ≤ c'/a_n; in particular, there exists c'' > 0 such that (gap^ζ_B)^{-1} ≤ c''/a_n. By Theorem 2.9, we get the following relation between the gap of Glauber dynamics on Λ and Glauber dynamics on the blocks B_i, i ∈ {e, w}: there exists c > 0 such that

    (gap^ζ_Λ)^{-1} ≤ (c/a_n) max_i max_σ (gap^σ_{B_i})^{-1}.

However, each B_i is a rectangle Λ_{2n/3,n'} with arbitrary boundary conditions, and one can check by hand that for α ∈ [ᾱ, 1], it also has, up to rotation, aspect ratio α_i ∈ [ᾱ, 1]. It follows that max_i max_σ (gap^σ_{B_i})^{-1} satisfies the same relation as (gap^ζ_Λ)^{-1}. Recursing 2 log_{3/2} n times yields the desired bound on (gap^ζ_Λ)^{-1} for any ζ.

3.2. Crossing probabilities at q = 4.
Recall that, for 1 ≤ q < 4, the probability of a horizontal crossing of a rectangle with arbitrary boundary conditions is uniformly bounded away from 0 (Theorem 2.2), whereas at q = 4, under free boundary conditions, the probability of such a crossing of Λ is expected to in fact decay to 0 as n → ∞. We lower bound this crossing probability under general boundary conditions.

Theorem 3.4. Consider the critical FK model at q = 4 on Λ = Λ_{n,n'} with arbitrary boundary conditions ξ. Then there exist some c(α), γ(α) > 0 (independent of ξ) such that π^ξ_Λ(C_h(Λ)) ≥ c n^{−γ}.

By monotonicity in boundary conditions, it suffices to prove the above for free boundary conditions (the case ξ = 0). Fix δ > 0 and let R = ⟦0, n⟧ × ⟦(1/2 − δ)n', (1/2 + δ)n'⟧. We will show the stronger result that there exist some γ(α), γ'(α) > 0 such that

    n^{−γ} ≲ π^0_Λ( (0, n'/2) ←→^R (n, n'/2) ) ≲ n^{−γ'}.    (3.2)

The upper bound in (3.2) is a consequence of the polynomial decay of correlations in Theorem 2.5, and it remains to establish the lower bound. Observe that, for every edge e, the conditional probability that e is open, given the values of all other edges, is at least p/(p + q(1 − p)) > 0; thus, we can force all the edges of R_0 = ⟦0, 2 log n⟧ × {n'/2} to be open with probability at least n^{−c} for some c(q) > 0. We boost this to a horizontal crossing of length δn/2 from the boundary by stitching together horizontal and vertical crossings and applying the FKG inequality (see Fig. 4). Fix ε > 0 sufficiently small (e.g., a choice of ε = δ/10 would suffice), and consider a sequence of rectangles R_1, . . . , R_{2K}, alternately horizontal and vertical, of geometrically increasing scales interpolating between scale log n and scale εn. Moreover, take R̃_{2k−1} and R̃_{2k} to be the concentric 3/2-dilations of R_{2k−1} and R_{2k}, respectively. By construction, each R_{2k−1} and R_{2k} has width at most 2εn and height at most 2εn', hence their respective dilations R̃_{2k−1} and R̃_{2k} are both contained in R. As a consequence of R̃_i ⊂ Λ, the free boundary conditions on R̃_i are dominated by the measure over boundary conditions induced by π^0_Λ. Thus, there exist some p_1(α), p_2(α) > 0, given by Theorem 2.3, bounding from below the probabilities of the crossings of the R_i within their dilations R̃_i. (Notice the aspect ratios of the R̃_{2k−1} are the same for all k, and similarly for the R̃_{2k}.) Further, for every k, these crossing events are increasing; thus, by the FKG inequality, they occur simultaneously with probability at least (p_1 p_2)^K. At the final scale K, the width of R_{2K} is (2 − o(1))εn and its height is (2 − o(1))εn', so the stitched crossings reach distance δn/2 from the left boundary for any sufficiently large n.
By repeated application of the FKG inequality, we conclude that, for some γ > 0, the event that (0, n'/2) is connected within R to horizontal distance δn/2 from the left boundary has probability at least n^{−γ} (3.3). By symmetry, the exact same argument yields the analogous bound (3.4) for a connection from (n, n'/2). In order to complete the desired horizontal crossing, we require an open path connecting the left and right crossings, via an open circuit in a suitable annulus A_1 ⊂ R. By Theorem 2.4, there is an absolute constant p_3(α) > 0 such that π^0_{A_1}(C_o(A_1)) > p_3. Since the boundary conditions induced on ∂A_1 by π^0_Λ stochastically dominate free boundary conditions on ∂A_1, it follows that π^0_Λ(C_o(A_1)) > p_3. Finally, the event C_o(A_1) is increasing, and its intersection with the two horizontal crossing events from (3.3) and (3.4) is a subset of the event {(0, n'/2) ←→^R (n, n'/2)}. Thus, by FKG, the latter has probability at least p_3 n^{−2γ}, establishing (3.2), as desired.

3.3. Periodic boundary conditions. We now complete the proof of Theorem 1.

Proof of Theorem 1. The passage from arbitrary boundary conditions to the torus is the same as in the proof of Theorem 4.4 of [35], which used block dynamics twice: first to reduce mixing on the torus (Z/nZ)² to a cylinder (Z/nZ) × ⟦0, n⟧, and then to reduce that cylinder to a rectangle with fixed boundary conditions, on which Corollary 3.2 gives the desired polynomial (respectively, quasi-polynomial) mixing time bound. We only observe that the proof goes through after replacing the RSW bounds there by the estimate in Theorem 3.4, and conditioning on the event E_ζ as before.

3.4. Polynomial lower bounds. In order to provide as complete a picture as possible, we also extend the polynomial lower bound of [35] to the Glauber dynamics for the q = 3, 4 Potts models, showing that they indeed undergo a critical slowdown. We do not have access to precise arm exponents as exist for q = 2; instead, we adapt a standard argument for obtaining the Bernoulli percolation two-arm exponent to lower bound the Potts one-arm exponent and prove a polynomial lower bound on gap^{-1}.
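The lower-bound strategy below rests on the variational characterization of the gap from §2.3: any test function f certifies gap ≤ E(f)/Var_π(f), so exhibiting a function with small Dirichlet energy and order-one variance forces slow mixing. A minimal numerical illustration of this bound (a toy chain with made-up numbers, not from the paper):

```python
import numpy as np

# Lazy walk on {0,...,4} with a step-function test function f = 1{x >= 3}
m = 5
P = np.zeros((m, m))
for x in range(m):
    for y in (x - 1, x + 1):
        if 0 <= y < m:
            P[x, y] = 0.25
    P[x, x] = 1 - P[x].sum()
pi = np.full(m, 1 / m)                      # P is symmetric => uniform stationary

gap = 1 - np.sort(np.linalg.eigvalsh(P))[-2]
f = (np.arange(m) >= 3).astype(float)
E = 0.5 * sum(pi[x] * P[x, y] * (f[x] - f[y]) ** 2
              for x in range(m) for y in range(m))   # Dirichlet form E(f)
Var = pi @ f**2 - (pi @ f) ** 2
print(gap <= E / Var + 1e-12)               # every test function upper-bounds gap
```

In the paper's setting f is (a function of) the colors in a central sub-box, whose variance is lower-bounded via the two-point estimates of Lemma 3.5.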
Lemma 3.5. Consider the critical FK model for q ∈ {3, 4} with free boundary conditions. There exists c(q) > 0 such that

    π^0_R( 0 ←→ ∂⟦−n/2, n/2⟧² ) ≥ c n^{−1/2},    (3.5)

and thus there exists c'(ε, q) > 0 such that, for every x, y ∈ ⟦−(1 − ε)n, (1 − ε)n⟧²,

    π^0_R( x ←→ y ) ≥ c' ‖x − y‖^{−1}.    (3.6)

Proof. Consider the event Γ that there exists a site x ∈ L such that Γ_x holds. We begin by proving that π^1_B(Γ) ≥ c for some c > 0 independent of n. By using Theorems 2.2-2.3 and stochastic domination twice, we see that there exists c(q) > 0 such that the π^1_B-probability of a vertical open crossing of R^+_0 is at least c. But one can observe that this event implies that the right-most point on L that is part of the cluster of the vertical open crossing in R^+_0 satisfies Γ_x, so π^1_B(Γ) ≥ c. At the same time, we have by a union bound that max_{x∈L} π^1_B(Γ_x) ≥ π^1_B(Γ)/|L| ≥ c/n. The maximum on the left-hand side is attained by some deterministic x ∈ L, which we set to be j; for j we have, by the FKG inequality and self-duality, an upper bound (3.7) on π^1_B(Γ_j) in terms of the square of π^0_B(j ←→ ∂R_j) and the probability of an open circuit in B − ∪_{j∈L} R_j. By the RSW estimate, Theorem 2.4, we see that π^0_B(C_o(B − ∪_{j∈L} R_j)) ≥ ε_0 for some ε_0(q) > 0, and therefore the same holds by monotonicity in boundary conditions. Plugging this into (3.7) implies that π^0_B(j ←→ ∂R_j) ≥ √(2cε_0/n). In order to complete the proof of (3.5), we translate by −j to see that π^0_{B−j}( 0 ←→ ∂⟦−n/2, n/2⟧² ) ≥ c'' n^{−1/2} for some c''(q) > 0. Since j ∈ L, B − j ⊂ R, and by monotonicity, we deduce (3.5). Going from (3.5) to (3.6) is a standard exercise in using RSW estimates (Theorems 2.2-2.3) and the stitching arguments used in the proof of Theorem 3.4; since both x, y are macroscopically far from ∂R, we can use (3.5) to connect each of them to some distance O(‖x − y‖) away, and stitch open crossings to connect these two together via the FKG inequality and Theorems 2.2-2.3, yielding the desired.

Proof. Now that we have a bound on connection probabilities macroscopically away from boundaries, we modify the lower bound of [35] to our setting. Fix any boundary condition η (if we are considering the torus, reveal σ_∂Λ and fix that to be the boundary condition η). Let Λ_1 = ⟦n/4, 3n/4⟧ × ⟦n'/4, 3n'/4⟧, and let Λ_2 be a concentric sub-box of Λ_1; our test function will measure the colors in Λ_2. To bound its variance, we move to the FK representation of the Potts model on Λ via the Edwards-Sokal coupling using the event E_η.
For the remainder of this proof only, let E and Var be with respect to the joint distribution over FK and Potts configurations given by the Edwards-Sokal coupling. By Theorem 2.4 together with the FKG inequality, the probability of the event {∂Λ_1 ←→ ∂Λ} is bounded away from 0 and 1, uniformly in n. By the law of total variance and the above, the variance of our test function is at least a constant times its conditional variance given {∂Λ_1 ←→ ∂Λ}. But given that ∂Λ_1 is not connected to ∂Λ, the probability that σ(x) = q for x ∈ Λ_2 is 1/q; in particular, by the Edwards-Sokal coupling and FKG, we can expand the above as a sum of two-point connection probabilities over pairs of sites in Λ_2, which is bounded below, using Eq. (3.6) of Lemma 3.5, by some c_2(q) > 0.

4. Slow mixing at a discontinuous phase transition

At a discontinuous phase transition, the dynamical behavior of the Potts model is expected to exhibit an exponential critical slowdown on the torus, but otherwise to depend on the choice of boundary conditions. We demonstrate this in the following sections.

Proof idea. For q > 4, to obtain an exponential lower bound when there is a discontinuous phase transition, we establish a bottleneck in the state space consisting of vertical and horizontal crossings, each forming a loop around the torus. The basic idea is that a combination of a horizontal loop and a vertical loop in the torus can be translated to form a macroscopic wired circuit. Escaping such configurations via a pivotal edge would require a macroscopic dual-crossing inside the circuit, an event with an exponentially small probability (see Theorem 2.1). Unfortunately, conditioning on the locations of these two loops includes negative information about the interior of the circuit and prevents us from appealing to the decay of correlations estimates. If we instead consider two pairs of horizontal and vertical loops, one can expose the required wired circuit with no information on its interior.
However, a subtler problem then arises: after exposing one pair of loops (say the vertical ones), revealing the second (horizontal) pair might leave the potentially pivotal edge outside of the formed wired circuit, preventing us from estimating the probability of it being pivotal. It turns out that using three pairs of horizontal and vertical loops supports a suitable way of exposing a wired circuit such that the potential edge is pivotal only if it supports a macroscopic dual-crossing within that circuit, thus leading to the desired lower bound.

Proof of Theorem 2. A standard technique for proving lower bounds on mixing times is constructing a set S ⊂ Ω that is a bottleneck for the Markov chain dynamics. For a chain with transition kernel P(x, y) and stationary distribution π, let the edge measure between A, B ⊂ Ω be Q(A, B) = Σ_{x∈A, y∈B} π(x)P(x, y), and define the conductance

    Φ := min_{S⊂Ω: π(S)≤1/2} Q(S, S^c)/π(S);    (4.1)

the following relation between Φ and the spectral gap of the chain [32] holds:

    gap ≤ 2Φ.    (4.2)

By the dual version of Theorem 2.1, there exists some c(q) > 0 such that

    π^1_{Z²}( 0 ←→* ∂⟦−n, n⟧² ) ≤ e^{−cn}.    (4.3)

Using (4.3), we will establish a bottleneck set S for the random cluster model on Λ with periodic boundary conditions. Define the bottleneck event

    S := ∩_{i=1,2,3} ( S^i_v ∩ S^i_h ),

where the constituent events are defined as follows for i = 1, 2, 3: S^i_v is the event that the i-th vertical third of the torus contains an open vertical loop, and S^i_h is the event that the i-th horizontal third contains an open horizontal loop. The crossings in the above events are all loops on the torus of homology class (1, 0) and (0, 1). We aim to get an exponentially decaying upper bound on π^p_Λ(∂S | S) (the superscript p denotes periodic boundary conditions on Λ), where the boundary subset ∂S = {ω ∈ S : P(ω, S^c) > 0} is the event that there exists an edge e that is pivotal to S. Specifically, ∂S = {ω ∈ S : ω − {e} ∉ S for some e ∈ ω}, in which configurations ω are identified with their edge-sets. The bound P(S, S^c) ≤ 1 implies Q(S, S^c)/π^p_Λ(S) ≤ π^p_Λ(∂S | S). We now control π^p_Λ(S) to express the conductance in terms of π^p_Λ(∂S | S). Via the symmetry of periodic FK boundary conditions, RSW estimates on the torus (Z/nZ) × (Z/αnZ) were proved in [3] for all q ≥ 1 at p_c(q).
By [3, Theorem 5], therefore, there exists some ρ(α, q) > 0 such that ρ ≤ π^p_Λ(S) ≤ 1 − ρ. Combining this with Eq. (4.2), it suffices to prove that there exist constants c₁ = c₁(α, q) > 0 and c₂ = c₂(α, q) > 0 such that

π^p_Λ(∂S | S) ≤ c₁ e^{−c₂n}, (4.4)

to obtain the desired bound on the inverse spectral gap. A union bound implies

π^p_Λ(∂S | S) ≤ Σ_e π^p_Λ(e is pivotal to S | S),

where the sum is over all edges. We also union bound over whether e is pivotal to S^i_v or S^i_h for i = 1, 2, 3 (see Fig. 5 for an illustration of a configuration in S). Without loss of generality, examine the probability that e is pivotal to S¹_v; the other cases can be treated analogously. Fix an edge e in ⟦0, n/3⟧ × ⟦0, n⟧ and consider its horizontal coordinate (in Fig. 5, e is the edge of intersection of the purple and red paths): the edge e is either closer to the left side or closer to the right side of the vertical strip, and is either in the top, middle, or bottom third of the vertical strip (if there is ambiguity in these choices, choose arbitrarily). Now move to a translate of Λ on the universal cover of the torus, which we call Λ' = Λ_{n,n}. Choose a translate such that: (1) the horizontal third of Λ that contained e is now the middle horizontal third of Λ'; (2) if e was closer to the left of its vertical strip than the right, that strip is the left vertical third of Λ'; otherwise, that strip is the right vertical third of Λ'. Because we work with periodic boundary conditions, π^p_Λ = π^p_{Λ'}. Begin by exposing all the edges of ∂Λ' so as to fix a boundary condition and move from a torus to a rectangle with some fixed boundary condition. Then expose the outermost circuit C in Λ' as follows. First expose the left-most vertical crossing of Λ' by revealing the dual component of ∂_w Λ': by construction, its adjacent primal edges will form the left-most vertical crossing of Λ' and we will not have revealed any edges to its right. Repeating this procedure on the north, east, and south sides reveals the outermost circuit C without exposing any of its interior edges (see Fig.
5, where the shaded region consists of the edges we reveal). If e is not an open edge in one of the vertical crossings we have exposed, then certainly e is not pivotal to the event S. So suppose e is an open edge in a vertical loop and, by construction, also an open edge in C. Denote by C° the set of edges that have not yet been exposed, i.e., all edges interior to C. It is a necessary condition for e to be pivotal to S that there exists a dual path from an interior dual-neighbor of e to the inner (with respect to C) vertical boundary of the vertical third containing e in Λ'. If there is no such dual path, then there is a different primal vertical crossing of that strip which does not contain e, and that crossing, because of the exposed horizontal loops, is itself a loop of homology class (0, 1). But observe that the inner boundary of the vertical strip is at graph distance at least n/6 from e, because we chose Λ' such that e is farther from the inner boundary of the strip than from the outer one. Also note that such a dual-crossing event is a decreasing event. By the Domain Markov property and monotonicity, the conditional probability of this dual-crossing is at most its probability under π¹_{Z²}. By (4.3), the probability of e being dual-connected to the inner vertical boundary of the vertical third it is in, under π¹_{Z²}, is less than e^{−cn/6}, where c(q) > 0 is from (4.3). Thus, there exists an absolute c(q) > 0 such that, for any fixed e,

π^p_Λ(e is pivotal to S¹_v | S) ≤ e^{−cn/6}.

If e were pivotal to some S^i_h, the analogous claim would hold with probability less than e^{−cn/6}. Summing over the six crossing events and over all edges e, we conclude that π^p_Λ(∂S | S) ≤ 12αn² e^{−cn/6}, implying Eq. (4.4) as desired.

4.2. Exponential lower bound at β > β_c(q). Modifying the proof of Theorem 2 to the setting of Potts Glauber dynamics at β > β_c(q) allows us to prove slow mixing of the Potts Glauber dynamics at all low temperatures and all choices of q > 1.
The key difference is that we can no longer work directly with the FK dynamics, because, by duality to the high-temperature regime, it is fast for all p > p_c(q) (see, e.g., [5, 49]).

Proof of Theorem 4. In the context of this proof, denote by x ⟷ y the existence of a sequence of sites {x_i}^k_{i=1} such that x_i is adjacent to x_{i+1}, with x₁ = x and x_k = y, and with σ(x_i) = σ(x_{i+1}) for all i. We call each of these Potts connections of nontrivial homology on the torus Potts loops. We obtain an exponentially decaying upper bound on µ^p_Λ(∂S | S) (where p denotes periodic boundary conditions) by examining the pivotality of vertices to S. As before, we union bound over all vertices in Λ and the six different crossing events: fix a vertex v whose pivotality to, without loss of generality, S¹_{v,q} we examine. We choose a translate of Λ on the universal cover, Λ', according to the same rules as for the FK model, with e replaced by v, where by periodicity of the boundary conditions, µ^p_{Λ'} = µ^p_Λ. In order to bound the probability of σ(v) being pivotal to S¹_{v,q}, we reveal the outermost Potts loops of color q in Λ' as follows. First reveal the spin values on ∂Λ' to reduce the torus to a rectangle with fixed boundary conditions. Then reveal, starting from ∂Λ', all spins ∗-adjacent (either adjacent or diagonal) to vertices whose spin value is not q. By construction, we will have revealed the outermost q-colored paths, and therefore a Potts circuit C_q of spin value q, and nothing interior to it. If v is pivotal to S¹_{v,q}, it must be the case that v ∈ C_q, so we suppose that v ∈ C_q. In order to obtain bounds on its pivotality, we now move to the FK representation of the Potts model inside C_q. By our definition of C_q and the Edwards-Sokal coupling on bounded domains, the FK representation of the Potts region we have not yet revealed has fully wired boundary conditions, and therefore the event E^σ_{C_q} is trivially satisfied.
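For reference, the translations between Potts connections and FK connections used here rest on the Edwards-Sokal coupling; the following display is its standard textbook formulation on a finite graph G = (V, E), not a verbatim quote from the paper.

```latex
% Joint law of a spin configuration sigma in {1,...,q}^V and an edge
% configuration omega in {0,1}^E under the Edwards-Sokal coupling:
\[
\mathbb{P}(\sigma,\omega)\;\propto\;\prod_{e=\{x,y\}\in E}
\Big[(1-p)\,\mathbf{1}\{\omega_e=0\}
\;+\;p\,\mathbf{1}\{\omega_e=1\}\,\mathbf{1}\{\sigma_x=\sigma_y\}\Big].
\]
```

Its marginals are the q-state Potts measure and the FK measure with parameters (p, q), and, conditionally on ω, the spins are constant on each open cluster and uniform over the q colors on clusters not touching a colored boundary. This is what makes the FK boundary conditions inside a revealed monochromatic circuit exactly wired, and why an unconditioned cluster takes any given color with probability 1/q.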
We claim that in order for v ∈ C_q to be pivotal to S¹_{v,q}, there must be a dual-crossing from one of the interior dual-neighbors of v (an edge whose center is at distance √5/2 from v) to one of {n/3} × ⟦0, n⟧ or {2n/3} × ⟦0, n⟧ in Λ' (the choice depends on which translate Λ' is chosen), and it must be contained completely within C_q. Suppose there were no such dual-crossing. Then there would have to be a primal FK connection in the same vertical third connecting the two neighbors of v in C_q. By the Edwards-Sokal coupling and the definition of Potts connections, such an FK connection would translate to a new Potts connection of color q that does not use the vertex v. As for the FK model, the new Potts crossing is still a vertical loop because of the exposed horizontal crossings. Since the event E^σ_{C_q} is trivially satisfied, monotonicity in boundary conditions, combined with the exponential decay of dual-connections under π_{Z²} whenever p > p_c, implies that the probability of such a macroscopic dual-crossing is bounded above by e^{−cn} for some c = c(β, q) > 0. By spin-flip symmetry of the torus and our definition of Potts loops, µ^p_Λ(S) ≤ 1/2. Using S as the bottleneck in (4.2), we conclude that for every β > β_c(q), there exists c = c(β, q) > 0 such that gap⁻¹ ≥ exp(cn).

5. Upper bounds under free boundary conditions

We now prove Theorem 3, showing that the dynamics for the critical Potts model in the phase coexistence regime should be sensitive to the boundary conditions. We prove the desired upper bound for Swendsen-Wang dynamics on Λ with free or monochromatic boundary conditions using censoring inequalities (Theorem 2.6). The monotonicity requirement of the censoring prevents us from carrying this out in the setting of the Potts Glauber dynamics. We thus restrict our analysis to the FK Glauber dynamics, a bound on which would imply the analogous bounds for Chayes-Machta and Swendsen-Wang via Theorem 2.7.
As in [41], we will work with distributions over boundary conditions induced by infinite-volume measures; this is more delicate in the setting of the FK model, where boundary interactions are no longer nearest-neighbor. We now formally define these distributions.

Definition 5.1 ("free"/"wired" boundary conditions). In order to sample a boundary condition on ∆ ⊂ ∂Λ from π⁰_{Z²} or π¹_{Z²}, we sample the infinite-volume configuration on Z² − Λ and then identify the induced boundary condition with the partition of ∆ induced by that configuration. A measure P over boundary conditions is called "wired" if P ≽ π¹_{Z²} (resp. "free" if P ≼ π⁰_{Z²}), i.e., if it dominates (resp. is dominated by) the measure over boundary conditions induced by π¹_{Z²} (resp. π⁰_{Z²}). When sampling boundary conditions on A, B ⊂ ∂Λ according to different measures, we do so sequentially, clockwise from the origin.

We prove the following proposition, from which Eq. (1.4) of Theorem 3 follows easily.

Proposition 5.2. Let q be sufficiently large and consider Glauber dynamics for the critical FK model on Λ = Λ_{n,n}. Let P be a distribution over boundary conditions on ∂Λ such that P ≼ π⁰_{Z²} or P ≽ π¹_{Z²}, and let E be its corresponding expectation. Then for every ε > 0, there exists c(ε, q) > 0 such that, for t = exp(cn^{3ε}),

E[‖P_t(ω₀, ·) − π^ξ_Λ‖_tv] ≤ exp(−cn^{2ε}) for ω₀ ∈ {0, 1},

where ξ ∼ P. In particular, P(t_mix ≥ t) ≤ exp(−cn^{2ε}).

Proof ideas. In order to prove Proposition 5.2, we appeal to the Peres-Winkler censoring inequalities [43] for monotone spin systems, a crucial part of the analysis in [41] (and later in [34]) of the low-temperature Ising model under "plus" boundary conditions, a class of boundary conditions that dominate the plus phase. A major issue when attempting to adapt this approach to the critical FK model at sufficiently large q with "free" boundary conditions is that the typical "free" boundary conditions still have many boundary connections, inducing problematic long-range interactions along the boundary (see Fig. 3), which prevent coupling beyond interfaces.
To remedy this, at every step of the analysis we modify the boundary conditions to all-free on appropriate segments of length n^{o(1)} (this modification can only affect the mixing time by an affordable factor of exp(n^{o(1)})); with high probability, no boundary connections circumvent the interface past the modified boundary, by the exponential decay of correlations in the "free" phase. Refined large deviation estimates on fluctuations of FK interfaces then allow us to control the influence of other long-range boundary interactions (see Figure 8) and to couple different configurations beyond distance n^{1/2+o(1)} (the length scale that captures the normal fluctuations of the interface), yielding mixing estimates on n × n^{1/2+o(1)} boxes, the basic building block of the proof.

5.1. FK boundary modifications. Let d^ξ_{ω₀}(t) = ‖P_t(ω₀, ·) − π^ξ‖_tv, and for a pair of FK boundary conditions ξ, ξ', with mixing times t_mix, t'_mix, define

M_{ξ,ξ'} := max{ sup_ω π^ξ(ω)/π^{ξ'}(ω), sup_ω π^{ξ'}(ω)/π^ξ(ω) }.

It is easy to verify (see [41, Lemma 2.8]) that for some c independent of n, ξ, ξ',

t_mix ≤ cM³_{ξ,ξ'}|E|t'_mix (5.1)

(indeed, in the variational characterization of the spectral gap, the Dirichlet form, expressed in terms of local variances, produces a factor of M²_{ξ,ξ'}, and the variance produces another factor of M_{ξ,ξ'}).

Definition 5.3 (boundary modification). If P is a distribution over boundary conditions on ∂Λ and ∆ ⊂ ∂Λ, we let P_∆ be the distribution which samples boundary conditions ξ ∼ P and modifies them as follows: if ξ corresponds to the partition P₁, ..., P_k of ∂Λ, then ξ' = ξ_∆ is given by the partition P₁ − V(∆), ..., P_k − V(∆), together with the singletons {v}, v ∈ V(∆); this induces a coupling of (ξ, ξ') ∼ (P, P_∆). E.g., if ∆ = ∂Λ then ξ' = 0, and if ∆ consists of a single vertex v and ξ is induced by a configuration where every boundary vertex is connected to v and to no other boundary vertex, then ξ' would be wired on ∂Λ − {v}.

Lemma 5.4. Let ξ_∆ be the modification of ξ as in Definition 5.3. Then

M_{ξ,ξ_∆} ≤ q^{|V(∆)|}, (5.2)

and consequently the mixing times under ξ and ξ_∆ compare, via (5.1), up to a factor cq^{3|V(∆)|}|E|. (5.3)

Proof.
By inspection, one sees that the addition of at most |V(∆)| boundary clusters can increase the total number of clusters by at most |V(∆)|. By definition of the FK model, every additional cluster receives a weight of q, which proves (5.2). In order to prove (5.3), begin with the observation that, by (5.1), the mixing times under ξ and ξ_∆ are comparable up to the factor cM³_{ξ,ξ_∆}|E| ≤ cq^{3|V(∆)|}|E|. As a consequence of Theorem 2.6, it is also well-known (see [41, Corollary 2.7]) that censoring updates can only slow down the approach to equilibrium of the dynamics started from an extremal configuration.

5.2. Necessary equilibrium estimates. We now include some equilibrium estimates from cluster expansions at sufficiently large q, which are adaptations of the necessary low-temperature Ising equilibrium estimates of [14] to the setting of the critical FK model (cf. Proposition 5.6). For any φ ∈ (−π/2, π/2), the strip S_n = ⟦0, n⟧ × (−∞, ∞) has (1, 0, φ) boundary conditions denoting wired on ∂S_n ∩ {(x, y) : y ≤ x tan φ} and free elsewhere. Then Γ is the set of all order-disorder interfaces (bottom-most dual crossings from (0, 0) to (n, n tan φ)). We define the cigar-shaped region U_{κ,d,φ} for d, κ > 0, and call Γ^r_{κ,d,φ} ⊂ Γ the set of interfaces that are contained in the cigar-shaped region.

Proposition 5.5. Consider the critical FK model on S_n and fix δ > 0. There exists some q₀ such that for all φ ∈ [−π/2 + δ, π/2 − δ], there exists c(d, φ) > 0 such that for every q ≥ q₀ and every κ > 0,

π^{1,0,φ}_{S_n}(Γ ∖ Γ^r_{κ,d,φ}) ≤ n² exp(−cn^{2κ}).

Proof. We use an extension of [14, §4] to the framework of the FK/Potts models in the phase coexistence regime by [42]. The specific case of [42, §5] states the corresponding containment estimate for the critical FK model on the strip S_n = ⟦0, n⟧ × (−∞, ∞) with (1, 0, φ) boundary conditions, for every q ≥ q₀, every d, κ > 0, and φ ∈ (−π/2, π/2). A straightforward adaptation of the proof of [42, Proposition 5] to the form of [14, Proposition 4.15] in fact yields the very large deviation bound we desire.
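Anticipating the φ = 0 case recorded next, a fluctuation bound of the Gaussian form n²exp(−c̃a²/n) pins down the vertical scale n^{1/2+o(1)} that recurs throughout the upper-bound argument. The following arithmetic is ours, written only to make the choice of scale explicit:

```latex
% Plugging a = n^{1/2+eps} into a bound of the form n^2 exp(-c a^2 / n):
\[
\pi^{1,0}_{S_n}\big(|H| \ge a\big) \;\le\; n^2 e^{-\tilde c\, a^2/n}
\quad\Longrightarrow\quad
\pi^{1,0}_{S_n}\big(|H| \ge n^{1/2+\varepsilon}\big)
\;\le\; n^2 e^{-\tilde c\, n^{2\varepsilon}} \;\longrightarrow\; 0 ,
\]
% whereas for a = O(\sqrt{n}) the right-hand side is not small:
% fluctuations of order sqrt(n) are genuinely present.
```

This is why the interface can be confined, with probability 1 − exp(−c n^{2ε}), to a band of height n^{1/2+ε}, but no narrower band will do.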
A specific case is when φ = 0: there exist some q₀, c̃ > 0 such that for all q ≥ q₀ and every a ≥ 0,

π^{1,0}_{S_n}(|H| ≥ a) ≤ n² exp(−c̃a²/n), (5.4)

where |H| denotes the maximum vertical distance of an edge in the interface to the x-axis. Though the result of [42] is written with the Potts model in mind, and thus with integer q, the cluster expansion and all the results hold with noninteger q as well. We now prove the following estimate on FK interfaces near a repulsive boundary.

Proposition 5.6. Fix c > 0 and consider the critical FK model on R = ⟦0, n⟧ × ⟦0, ℓ⟧ for c√n log n ≤ ℓ ≤ n, with boundary conditions (1, 0) denoting 1 on ∂_{n,e,w}R and 0 on ∂_s R. Let H be the maximum vertical height of the bottom-most horizontal open crossing. There exist constants q₀, c' > 0 such that, for all q ≥ q₀ and every 0 ≤ a ≤ ℓ,

π^{1,0}_R(H ≥ a) ≲ n² exp(−c'a²/n).

Proof. Fix the a ≤ ℓ from the statement of Proposition 5.6. Denote by (1, 0, ∗) boundary conditions that are still wired (resp. free) on the intersection of ∂S_n and the upper (resp. lower) half plane, but now also free on all of S_n ∩ {(x, y) : y ≤ −a/2} (this induces a free bottom boundary on the semi-infinite strip starting from y = −a/2). Clearly π^{1,0}_{S_n} ≽ π^{1,0,∗}_{S_n}. Then by Eq. (5.4), there exists A(q) > 0 such that with probability bigger than 1 − An² exp(−c̃a²/4n), the interface (the bottom-most open crossing connecting (0, 0) to (n, 0)) under π^{1,0}_{S_n} does not touch the line y = −a/2, and therefore, with that probability, there is a horizontal dual-crossing of S_n contained entirely above y = −a/2. Via the grand coupling, since this is a decreasing event, the same horizontal dual-crossing would be present under π^{1,0,∗}_{S_n}, and we expose the bottom-most horizontal dual-crossing above the line y = −a/2 and couple the configurations above it. At the same time, using (5.4), we have that under π^{1,0}_{S_n}, with the same probability, the maximum y-coordinate of the interface does not exceed a/2.
If we have coupled the two configurations above a bottom-most dual-crossing above the line y = −a/2, the same is true of the interface under π^{1,0,∗}_{S_n}. Thus, taking a union bound,

π^{1,0,∗}_{S_n}(H ≥ a) ≤ 2An² exp(−c̃a²/4n).

Using the monotonicity of the FK model, and denoting by R' the vertical translate of R by −a/2, we obtain π^{1,0}_{R'} ≽ π^{1,0,∗}_{S_n}(ω_{R'} ∈ ·); together with the fact that {H ≥ a} is a decreasing event, the proposition follows for c' = c̃/4, where now the interface is again between (0, 0) and (n, 0).

Now for any fixed ε > 0, consider the rectangle V = ⟦0, n⟧ × ⟦0, ℓ⟧ with ℓ ≥ 4n^{1/2+ε}, and (1, 0, ∆) boundary conditions denoting free on ∂V ∩ {(x, y) : y ≥ 2n^{1/2+ε}} and on ∆ = ⟦n/2 − n^{3ε}, n/2 + n^{3ε}⟧ × {0}, and wired elsewhere. Denote the four points at which the boundary conditions change by (w₁, w₂) ∈ ∂_w V × ∂_e V and z₁, z₂ ∈ ∂_s V, with z₁ to the left of z₂. Let C₁ and C₂ be the blocks ⟦0, n/2⟧ × ⟦0, ℓ⟧ and ⟦n/2, n⟧ × ⟦0, ℓ⟧, respectively. The main equilibrium estimate we use in the sequel reads as follows.

Proposition 5.7. Let Γ = {ω : w_i ⟷* z_i in C_i, i = 1, 2}. There exists q₀ > 0 so that the following holds: for every q ≥ q₀ there exists c = c(q) > 0 such that the corresponding critical FK model on V satisfies

π^{1,0,∆}_V(Γ^c) ≤ exp(−cn^{3ε}).

Proof. This corresponds to Claim 3.10, proven in the appendix of [41] for ℓ = n^{1/2+ε} in the setting of the low-temperature Ising model, using the cluster expansion of [14] and the analogues of Propositions 5.5 and 5.9 (Proposition 4.15 and Theorem 4.16 of [14], respectively); the extension to larger ℓ is immediate. We sketch the proof of [41] before justifying its extension to the current setting. (a) Via a surface tension estimate analogous to (5.6), with probability 1 − exp(−cn^{3ε}), the interfaces connect w_i ⟷ z_i instead of w₁ ⟷ w₂.
In order to then claim that the two interfaces are confined to the right and left halves of V, it is shown in the appendix of [41] that with high probability the two interfaces do not interact, via the exponential decay of the Ising cluster weights in the cluster lengths. (b) The next step in [41] was to show that the interface does not deviate farther than n^ε to the east of z₁. The complication in the Ising setup was that the plus boundary on ∂_s V produced a repulsive force on the interface, so neither Proposition 4.15 nor Theorem 4.16 of [14] was directly applicable. (c) To circumvent the problem that ∂_s V is not sufficiently far from z₁ to contain the cigar-shaped region, the region V is extended in [41] to V ∪ ⟦0, n/2 − n^{3ε}⟧ × ⟦−n, 0⟧ with appropriate boundary conditions. Thereafter, the proof concludes by lower bounding the weight of all interfaces between w₁ and z₁ that do not interact with the extension of V, repeatedly using Theorem 4.16 of [14]: the key to this lower bound consists of stitching cigar-shaped regions of increasing length, all sufficiently far from the extension of V and lying above the straight line connecting w₁ to z₁. (d) If the extension is accounted for, an estimate of the form of Theorem 4.16 of [14] with the appropriate angle φ implies that with high probability the interface does not deviate far to the east of z₁, and thus stays bounded away from ∂_e C₁. For more details on these arguments, see [41, Appendix A]. In the setting of the FK model, the central cluster expansion estimate we require is the following (Proposition 5.9), the FK analogue of Theorem 4.16, whose proof is a direct adaptation of the proof of Theorem 4.16 in [14] using the FK cluster expansion techniques of [42]. Definition 5.8.
For an angle φ ∈ [−π/2 + δ, π/2 − δ], define an edge cluster weight function Φ(C, I) as a function whose first argument is a connected set of bonds in S_n and whose second argument is an FK interface connecting (0, 0) to (n, n tan φ), satisfying (1) Φ(C, I) = 0 when C ∩ I = ∅. The FK order-disorder weight function is a specific choice of Φ that gives rise to the FK distribution on wired-free interfaces, i.e.,

π^{1,0,φ}_{S_n}(I) = λ^{|I| + Σ_{C : C∩I ≠ ∅} Φ(C, I)},

and is given explicitly by Proposition 5 of [42].

Proposition 5.9. Consider the critical FK model on a domain V̄_n ⊃ U_{κ,d,φ} and let Φ be the FK order-disorder weight function. Let Φ̃ be any function satisfying the same conditions (e.g., Φ̃ = Φ·1{C ⊂ U_{κ,d,φ}}). Let Z̃(V̄_n, φ) be the partition function with weights Φ̃ on V̄_n (see [14, 42]). Then there exist q₀ > 0 and f(κ) ≲ κ⁻¹ such that for all q ≥ q₀, the partition functions Z̃(V̄_n, φ) and Z(V̄_n, φ) agree up to a multiplicative error governed by f(κ); moreover, the corresponding asymptotics hold for τ_{f,w}(φ), the order-disorder surface tension in the direction of φ (see [42]).

5.3. The recursive scheme. Throughout this subsection, let P be a distribution over FK boundary conditions on Λ_{n,m} = ⟦0, n⟧ × ⟦0, m⟧ and E the corresponding expectation. For ξ ∼ P, we say that A^P_{n,m}(t, δ) holds if

E[‖P_t(ω₀, ·) − π^ξ_{Λ_{n,m}}‖_tv] ≤ δ for ω₀ ∈ {0, 1}.

Using this notation, the following corollary is a consequence of Lemma 5.4.

Corollary 5.10. Consider Λ_{n,m} with boundary conditions ξ ∼ P and ∆ ⊂ ∂Λ_{n,m} such that |V(∆)| ≲ n^{3ε}, with P_∆ defined as in Definition 5.3. If A^{P_∆}_{n,m}(t, δ) holds for some t, δ, then A^P_{n,m}(t', δ') holds with δ' = 8δ + exp(−e^{cn^{3ε}}) and t' = exp(cn^{3ε})t for some c(q) > 0 independent of ∆ and ξ. Similarly, A^P_{n,m}(t, δ) implies the analogous statement for A^{P_∆}_{n,m}.

Before proving the main theorem, we fix an ε > 0 and prove a recursive scheme that yields a mixing time bound on rectangles with side lengths n × n^{1/2+ε} for n of the form n ∈ {2^k}_{k∈N}. We remark that, as in [41], this is a technical assumption that is not requisite to the upper bound (see Remark 3.12 of [41]).
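Schematically, here is how the one-step estimates of the recursion will combine; this is only our back-of-the-envelope bookkeeping, with all constants suppressed, of the induction carried out below. Starting from the base scale n^{1/2+ε} and multiplying the height by α ∈ (1, 2) at each step:

```latex
% Number of recursion steps from height n^{1/2+eps} up to height n:
\[
k \;=\; \Big\lceil \log_{\alpha}\!\big(n / n^{1/2+\varepsilon}\big) \Big\rceil
\;\lesssim\; \log n ,
\]
% and if each application of (5.7) multiplies the time bound by at most
% c_1 e^{c_2 n^{3 eps}}, then after k steps
\[
t_k \;\le\; t_0\,\big(c_1 e^{c_2 n^{3\varepsilon}}\big)^{k}
\;\le\; \exp\!\big(C n^{3\varepsilon}\log n\big)
\;=\; \exp\!\big(n^{3\varepsilon + o(1)}\big),
\]
% which is absorbed into exp(c n^{3 eps'}) for any eps' > eps.
```

The bookkeeping is harmless; the content of the proof is that each application of (5.7) degrades the error δ in a controlled way, so that the total variation bound survives all ≲ log n steps.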
For the base scale of the recursion, we use a consequence of the canonical paths estimate of Theorem 2.10, specifically Lemma 2.11, and the submultiplicativity of d̄_tv.

Proposition 5.11. There exists c = c(q) > 0 such that for every n, m and every q, for the FK Glauber dynamics, A^P_{n,m}(t, exp[−te^{−c(n∧m)}]) holds for all t, independently of P.

An intermediate step toward proving Proposition 5.2 is proving analogous bounds for rectangles with "free" boundary conditions on three sides and "wired" on the fourth.

Definition 5.12. A distribution P over boundary conditions on Λ_{n,m} is in D(Λ_{n,m}) if it is dominated by π⁰_{Z²} on ∂_{n,e,w}Λ_{n,m} and dominates π¹_{Z²} on ∂_s Λ_{n,m}. We say that A_{n,m}(t, δ) holds if A^P_{n,m}(t, δ) holds for every P ∈ D(Λ_{n,m}).

The main estimate for our recursion on increasing rectangles is the following.

Proposition 5.13. For the critical FK Glauber dynamics with q large enough on Λ_{n,m}, the following holds: for any m ∈ ⟦n^{1/2+ε}, n⟧ and α ∈ (1, 2), there exist c₁, c₂ > 0 such that for every t, δ, A_{n,m}(t, δ) implies A_{n,αm}(t', δ') with t' = c₁e^{c₂n^{3ε}}t and a correspondingly enlarged δ', (5.7) and for every m ≲ n^{1/2+ε} there exist c₁, c₂ > 0 such that for every t, δ, the analogous implication from Λ_{n,m} to Λ_{2n,m} holds. (5.8)

Before proving the implications in Proposition 5.13, we first prove two easy but important consequences. We now use the bound on n × n^{1/2+ε} rectangles to obtain mixing time bounds on the n × m rectangle with boundary conditions that are disordered on three sides and ordered on the fourth.

Figure 6. Setup for the proof of Eq. (5.7) starting from wired initial conditions.

We prove the above by induction on j ∈ ⟦0, k⟧ for n × h_j rectangles, showing that A_{n,h_j}(t_j, δ_j) holds, (5.9) where c₁, c₂ are the constants of (5.7) for m = h_j and h_j = α^j n^{1/2+ε}. The base case j = 0 is given by Corollary 5.14, and if (5.9) holds for some fixed j ∈ ⟦0, k − 1⟧, then an application of (5.7) immediately implies it for j + 1. The observations that j ≤ log n and h_j ≤ n allow us to choose slightly different constants to obtain the first inequality of Corollary 5.15. The triangle inequality and Eq.
(2.3) can then be used to boost the bound on d₁(t) ∨ d₀(t) to a bound on d̄(t), so that Markov's inequality implies the second inequality. We now prove Proposition 5.13, from which the above corollaries follow. The proof of Proposition 5.2 then follows from Corollary 5.15 using similar techniques (see §5.4).

Proof of Eq. (5.7). Fix any P ∈ D(Λ_{n,αm}) and observe that the proof is independent of this choice of P. Consider the quantity E[‖P_t(ω₀, ·) − π^ξ_{Λ_{n,αm}}‖_tv] for ω₀ = 0, 1.

(i) Wired initial conditions. Begin with the case ω₀ = 1. Let A, B be two copies of Λ_{n,m}, with A translated upwards by (α − 1)m, such that Q := A ∪ B = Λ_{n,αm} and A ∩ B is the middle rectangle in Q of thickness (2 − α)m. In order to compensate for the long-range interactions of the FK model, which are not present in the setting of [41], we force a set of boundary edges to be free (in a manner similar to part 2 of the proof of Theorem 3.2 of [41]) to "disconnect" B from A. Consider the boundary condition ξ', a modification of ξ ∼ P on ∆ = ∆_s ∪ ∆_n. Denote by P̂ the transition kernel of the censored dynamics (X_s)_{s≥0} started from the all-wired configuration, only accepting updates in block A up to time t, resetting all edge values in B to 1 at time t, then only accepting updates in block B from time t to time 2t (observe that, as in Lemma 3.4 of [41], by Theorem 2.6, resetting all edge values to 1 only slows mixing). Let ν₁ denote the distribution after time t on A and let ν₂^η denote the distribution after time 2t of configurations on B given that at time t the configuration on B was set to 1 and the boundary condition on B^c was η (see Fig. 6). The monotonicity of the FK model along with Theorem 2.6 reduces the problem to the censored dynamics. Now we aim to show that E_∆[‖P̂_{2t}(1, ·) − π^{ξ'}_Q‖_tv] ≤ δ', where we let δ' be the second argument in the right-hand side of (5.7). For R = A, B, we denote by π^{ξ',η}_R the modified stationary distribution with η boundary conditions on Q − R.
To simplify the notation, throughout the rest of this section we let ‖µ − ν‖_R denote ‖µ_R − ν_R‖_tv. Also, for any R, ξ and any random variable X, let π^ξ_R(X) denote the expectation of X under π^ξ_R. By the Markov property and the triangle inequality, the distance E_∆[‖P̂_{2t}(1, ·) − π^{ξ'}_Q‖_tv] splits into four terms. (5.11) We begin by bounding the first and fourth terms, which are easier, then use the equilibrium estimates of the cluster expansion to bound the third term in Lemma 5.16; the second term can be bounded analogously. First observe that with probability 1 − exp(−cn^{3ε}) for some c > 0, the boundary conditions on A are sampled from a distribution in D(A). The concern is that the wired initial configuration may add connections to ∂_{n,e,w}A via the long-range FK interactions. Such an effect on the boundary conditions on A is impossible if there are no boundary connections from ∂_{e,w}A^c to ∂_{n,e,w}A. Because of the modification on ∆_s, such a connection would require a connection of length at least n^{3ε} under P, and therefore also in the free phase, which has probability less than exp(−cn^{3ε}) (see Eq. (2.1)). If no such connection exists along the boundary, the boundary conditions on ∂_{n,e,w}A are sampled from a measure dominated by π⁰_{Z²}, because the modification of Definition 5.3 only removes connections. From now on, paying a cost of exp(−cn^{3ε}), we assume the decreasing event that this is the case. Then, by the assumption that A_{n,m}(t, δ) holds, the first term in (5.11) is smaller than δ. The observation that P_∆ ≼ P on ∂_{e,w}Q implies the corresponding comparison for any decreasing f depending only on ∂_{n,e,w}B, so that the π^{ξ',0}_Q-averaged distribution on boundary conditions on B is in D(B), the statement A_{n,m}(t, δ) applies, and the fourth term in (5.11) is also bounded above by δ. We now turn to the second and third terms of (5.11), which can be bounded similarly; we go through the details of the third term only.
Figure 7. The wired boundary conditions on ∂_s(E_n(Q)) also dominate the "wired" boundary conditions on ∂_s Q, allowing us to dominate the interface (blue) in Q by that in E_n(Q).

Lemma 5.16. There exists c(q) > 0 such that the third term of (5.11) is at most e^{−cn^{2ε}}.

Proof. We bound the total variation distance by proving that, under the grand coupling of the two distributions on B^c, they agree with probability 1 − e^{−cn^{2ε}}. Let ∂_±(Q) denote the two connected components of ∂Q − ∆_n, above and below ∆_n respectively. We break up the expectation into an average over Γ₁, the set of ξ' in which there does not exist a pair (x, y) ∈ ∂_+Q × ∂_−Q such that x ⟷ y in ξ' (i.e., they are in the same boundary component), and Γ₁^c. By Eq. (2.1) of Theorem 2.1 and a union bound over pairs of boundary vertices, there exists a constant c'(q) > 0 such that P_∆(Γ₁^c) ≤ exp(−c'n^{3ε}). For all such ξ', we use the worst bound of 1 on the total variation distance. Suppose now that Γ₁ holds and observe that this is a decreasing event, so P_∆(· | Γ₁) ≼ P_∆. Let Γ₂ denote the decreasing event that the interface (bottom-most horizontal dual crossing) of Q is contained entirely below ∂_s(B^c). Let Γ₃ be the decreasing event that there does not exist any vertex x ∈ ∂_e^−Q such that x ⟷ ∂_n B, and there does not exist any y ∈ ∂_w^−Q such that y ⟷ ∂_n B. By monotonicity and the domain Markov property, for ξ' ∈ Γ₁, if Γ₂ ∩ Γ₃ holds, the boundary conditions on ∂_{n,e,w}B^c will not have been affected by the updates on A ∩ B, as the interface and all its long-range interactions with ∂Q would be confined to A ∩ B. Then one could reveal all boundary components of ∂_{e,s,w}B so that they are all confined to B, and by monotonicity, under the grand coupling the two distributions would be coupled on B^c. As a result, it remains to control Γ₂^c and Γ₃^c; we bound the two probabilities separately and take a union bound.
To bound the probability of Γ₂^c, consider the enlarged rectangle

E_n(Q) = ⟦−n, 2n⟧ × ⟦0, n + αm⟧ ⊃ Q, (5.15)

with (0, 1) boundary conditions denoting wired on ∂_s E_n(Q) and free elsewhere. By Definition 5.1, we sample ∂_{n,e,w}Q separately and then ∂_s Q. First observe that by Eq. (2.1), with π⁰_{Z²}-probability 1 − e^{−cn}, there is a dual circuit between Q and its enlargement by n on all four sides, E_n(Q). In that case, the boundary conditions on Q are dominated by those with free on ∂E_n(Q) (see Figure 7). We can subsequently dominate the boundary conditions on ∂_s Q by making them all wired and extending them all the way across E_n(Q), to obtain that there exists c > 0 such that, up to an error of e^{−cn}, the boundary conditions on Q are dominated by the (0, 1) boundary conditions on E_n(Q). By Proposition 5.6, we deduce that π^{0,1}_{E_n(Q)}(Γ₂^c) ≤ exp(−cn^{2ε}) for some c > 0. We now bound the probability of Γ₃^c.

Claim 5.17. There exists c = c(q) > 0 such that for every ξ' ∈ Γ₁,

E_∆[π^{ξ',1}_Q(Γ₃^c | Γ₂) | Γ₁] ≤ e^{−cn^{2ε}}.

Proof. Under Γ₂, by monotonicity, we can only worsen our bound on the probability of Γ₃^c by replacing the boundary conditions on Q by wired on ∂_−Q − ∂_s Q, free on ∂_s Q, and ξ' elsewhere. Let Q̃ = ⟦0, n⟧ × ⟦0, 2m⟧ ⊃ Q with boundary conditions free on ∂_{s,n}Q̃ ∪ ∆_n, and wired elsewhere. By the exponential decay of correlations in the free phase, with π_Q̃-probability 1 − e^{−cm}, the measure this induces on Q dominates the boundary conditions under Γ₂ on Q. Controlling the probability of Γ₃^c can now be expressed in a manner similar to the equilibrium bound, Proposition 5.7. As is standard in such problems (see, e.g., the appendix of [41]), we can, up to an error of e^{−cn}, separate the left and right interfaces (see Proposition 5.9, whence the probability that they interact is a large deviation of order n), and just consider Q̃ with free boundary conditions now on all of ∂_e Q̃ as well. Then extend the northern boundary of Q̃ to make Q̃ symmetric about ∆_n, and call the new domain Q'.
We can, using monotonicity, let its boundary conditions (1, 0, ∆_n) be free on ∂Q' ∩ ({x ≥ m^{1/2+ε}} ∪ ∆_n) and wired elsewhere. At this point, we apply Proposition 5.7 with ℓ = n (up to a π/2-rotation and a rescaling of m to n/2) to obtain the desired bound: if in the new domain the boundary points are denoted by w₁, w₂ ∈ ∂_n Q' × ∂_s Q' and z₁, z₂ ∈ ∂_e Q', and {C_i}_{i∈{n,s}} are the north and south halves of Q' respectively, Proposition 5.7 implies that there exists c > 0 such that, for large enough n, each interface connects w_i to z_i within its half of Q' with probability 1 − e^{−cn^{3ε}}. Putting everything together, we conclude that if Γ is the event that under π^{1,0,∆_n}_{Q'} the two interfaces are contained in the bottom and top halves of Q' respectively, then there exists c = c(q) > 0 such that, for large enough n,

π^{1,0,∆_n}_{Q'}(Γ^c) ≤ 2e^{−cn^{3ε}} + 2e^{−cm} + 2e^{−cn}.

Monotonicity and n^{1/2+ε} ≤ m ≤ n imply the bound on E_∆[π^{ξ',1}_Q(Γ₃^c | Γ₂) | Γ₁]. By union bounding over the errors that arise from each of the Γ_i not occurring, and otherwise conditioning on their occurrence, we obtain the bound of Lemma 5.16. The corresponding bound on the second term of (5.10) is the same up to changes of scale corresponding to working with distributions on configurations on A, not Q. Because of the modification on ∆_s, as remarked earlier, with probability 1 − exp(−cn^{3ε}) the boundary conditions on A are in D(A). This event is a decreasing event, so it only increases the probability of Γ_i for i = 1, 2, 3. Because α > 1, the middle rectangle is at least of order n^{1/2+ε}, so the bound on the interface touching ∂_s(B^c) also still holds. Combined with (5.10) and the bounds on the other terms in (5.11), we conclude that the desired bound holds for some c₁, c₂ > 0 and every large enough n.

(ii) Free initial configuration. Consider the dynamics started from ω₀ = 0. Let P̂ denote the transition kernel of the censored dynamics that only updates edges in B until time t, at which point all edges in A are reset to 0, and the dynamics subsequently only updates edges in A until time 2t.
Let $\nu^\eta_2$ denote the distribution obtained between times $t$ and $2t$ given boundary conditions $\eta$ on $A^c$ and initial configuration $0$ on $A$. Again by Theorem 2.6 and Corollary 5.10, it suffices to prove the desired implication under $\mathbb{E}_\Delta$ for the $\bar P$ dynamics; the Markov property and the triangle inequality together give the three-term decomposition (5.16). We can bound the first term by $\delta$, by assumption and the observation that the free initial configuration does not change the boundary conditions on $B$. The second term can be bounded using the same approach as in the proof of Lemma 5.16, where now $A^c$ is shorter than the rectangle in Lemma 5.16. In that case, the measure on $\partial A$ is dominated by $\pi^0_{\mathbb{Z}^2}$, and therefore by (2.1), up to an error of $e^{-cm}$, the entirety of $B^c$ is disconnected from $A^c$. By monotonicity and the fact that $m \geq n^{1/2+\varepsilon}$, we obtain the corresponding bound for some $c > 0$. It remains to bound the third term in (5.16), following the approach of [41].

Lemma 5.18. There exist constants $c, c' > 0$ such that the following holds.

Proof. We use the bound on total variation by the probability of disagreement under a maximal coupling, together with monotonicity and a union bound. For any $e \in E(A)$, consider $K = (e + [-\ell, \ell]^2) \cap A$. Denote by $\nu^\eta_{2,\ell}$ the distribution obtained by the dynamics in $K$ with boundary conditions given by $(\xi, \eta)$ on $\partial K \cap \partial A$ and $0$ elsewhere. Then, via a very rough mixing time estimate akin to Theorem 2.10 or Lemma 2.11, there exists $c > 0$ such that the corresponding comparison holds. Absorbing a factor $2n^2$ for the maximum number of edges in $A$, it suffices to prove that there exist constants $c, c' > 0$ such that the per-edge bound holds for every $e \in E(A)$. For any fixed $e \in E(A)$, let $\Gamma^c$ denote the corresponding connection event for $e$. By the FKG inequality, it suffices to check the analogous equilibrium estimate. Proving this is very similar to proving the bound on the probability of $\Gamma_2^c$ in the proof of Lemma 5.16, as shown in Figure 7: for some $c > 0$, up to an error of $e^{-cn}$, we replace the measure at hand by $\pi^0_{E_n(Q)}$, where $E_n(Q)$ is the enlarged rectangle defined in (5.15).
Then, by Proposition 5.6, for some other $c > 0$, with probability $1 - \exp(-cn^{2\varepsilon})$ the bottom-most horizontal crossing stays below $\partial_s A$, at which point it suffices to consider $\pi^0_{E_n(Q)}(\Gamma^c)$, because in that case $\partial_s(E_n(Q))$ would be completely disconnected from $K$. But Eq. (2.1) and monotonicity imply that there exists $c > 0$ such that $\pi^0_{E_n(Q)}(\Gamma^c) \leq e^{-c\ell}$, at which point a union bound over the two errors concludes the proof.

Choosing $\ell = c^{-1}\log t$ in Lemma 5.18 and union bounding over all $e \in E(A)$ yields that there exists a new $c > 0$ such that the corresponding bound holds for sufficiently large $n$. Combined with the bounds on the first and second terms of (5.16) and Theorem 2.6, we see that there exist $c_1, c_2 > 0$ such that the claimed estimate holds for large enough $n$, which, combined with part (i) of the proof, allows us to conclude the proof of (5.7).

We now prove the second implication to complete the proof of Proposition 5.13.

Proof of Eq. (5.8). Fix any $P \in \mathcal{D}(\Lambda_{2n,m})$ and observe that the proof is independent of this choice of $P$. Consider the quantity $\mathbb{E}[\|P_t(\omega_0, \cdot) - \pi^\xi_{\Lambda_{2n,m}}\|_{tv}]$ for $\omega_0 = 0, 1$. Let $B_e, B_w$ be the two connected components of $B$, so that the dynamics on $B$ does not update any of the edges of $C \subset \partial B$. We first observe that, as before, by Corollary 5.10 and the size of $|V(\Delta)|$, it suffices, up to a new choice of constants in (5.8), to prove the implication under $\mathbb{P}_\Delta$ as given by Definition 5.3. By monotonicity, Theorem 2.6, and Corollary 5.10, up to another change of constants $c_2, c_3$, it suffices to prove the analogous statement with $t' = \exp(cn^{3\varepsilon})\,t$ and $\delta' = 8\delta + \exp(-e^{c'} n^{3\varepsilon})$, for $c, c' > 0$ given by Corollary 5.10 and $\bar P$ a censored dynamics. We begin with the situation in which $\omega_0 = 1$, and let $\bar P$ be the transition kernel of the following censored dynamics: for the first time interval $[0, t')$, only accept updates in $A$; then at time $t'$ set all edges interior to $B$ to $1$, and only update edges interior to $B$ until time $2t'$.
As before, let $\nu_1$ denote the distribution after time $t'$ on $A$, and let $\nu^\eta_2$ denote the distribution after time $2t'$ of configurations on $B$, given that at time $t'$ all edges in $B$ are reset to $1$ and the configuration on $C$ was $\eta$. The triangle inequality and the Markov property together give the decomposition (5.17). Here, boundary conditions of the form $(\xi, \eta)$ denote a boundary condition that agrees with $\xi$ on the intersection of the boundary of the domain and $\partial Q$, and takes on boundary conditions $\eta$ elsewhere. As in part (i) of the proof of (5.7), we observe that, because of the modification on $\Delta_e \cup \Delta_w$, with $\mathbb{P}$-probability $1 - \exp(-cn^{3\varepsilon})$ the boundary conditions on $\partial_n A$ are dominated by $\pi^0_{\mathbb{Z}^2}$ in spite of the wired initial configuration on $Q - A$. In what follows we assume, paying an error of $\exp(-cn^{3\varepsilon})$, that this decreasing event holds. Observe also that the wired initial configuration can only make $\partial_s A$ more wired, and thus those boundary conditions will continue to dominate the marginal of $\pi^1_{\mathbb{Z}^2}$. Along with self-duality and Corollary 5.10, this implies that the first term in (5.17) is bounded above by $\delta'$. The fourth term can be bounded as follows: observe that the distribution over boundary conditions $(\xi, \eta)$ on $B$ under $\mathbb{P}_\Delta(\xi)\,\pi^{\xi,0}_Q(\eta)$ coincides with the $\Delta_s$ modification of $\mathbb{P}_{\Delta_{n,e,w}}(\xi)\,\pi^{\xi,0}_Q(\eta)$. Because the boundary conditions on $\partial_n A$ are dominated by $\pi^0_{\mathbb{Z}^2}$, an argument like (5.12) implies $\mathbb{P}_{\Delta_{n,e,w}}(\xi)\,\pi^{\xi,0}_Q(\eta) \preceq \pi^0_{\mathbb{Z}^2}(\eta)$ on $C$. Thus, by Corollary 5.10, the fourth term is bounded above by $2\delta'$, where the factor of $2$ comes from the fact that $B$ consists of two independent copies of $R_n$. Here we used the fact that the configuration on $E(C)$ is dominated by the partition of $C$ induced by the FK configuration under $\pi^{\xi,0}_Q$. It remains to bound the second and third terms of (5.17), which can be treated similarly, so we only go through the bound of the former.

Figure 9. The modification analogous to Figure 5 for the second step of the recursion (Eq. (5.8)).

Lemma 5.19. There exists $c = c(q) > 0$ such that the following holds.

Proof. We bound the total variation distance on $C$ by bounding the probability that samples from the two distributions disagree under the grand coupling. Following the proof of Lemma 5.16, we first define the event $\Gamma_1$ as the set of $\xi$ for which there exists no pair $(x, y) \in (\partial_n B_e - \Delta_n) \times (\partial_n B_w - \Delta_n)$ such that $x \stackrel{\xi}{\longleftrightarrow} y$, where $\stackrel{\xi}{\longleftrightarrow}$ denotes that $x, y$ are in the same boundary component. We split $\mathbb{E}_\Delta[\|\pi^{\xi,1}_A - \pi^\xi_Q\|_C]$ into an average over those $\xi \in \Gamma_1$ and those in $\Gamma_1^c$. As in Lemma 5.16, we obtain the bound $\mathbb{E}_\Delta[\Gamma_1^c] \leq \exp(-cn^{3\varepsilon})$ for some $c > 0$, using the fact that the boundary conditions on $\partial_n A$ are obtained from a measure that is dominated by $\pi^0_{\mathbb{Z}^2}$, together with the exponential decay of connections implied by Eq. (2.1). For all such $\xi$, we bound the distance between the two measures by $1$. Now consider, for any $e \in E(C)$, the quantity $\mathbb{E}_\Delta[\|\pi^{\xi,1}_A - \pi^\xi_Q\|_{tv} \mid \Gamma_1]$. Define, in analogy with the proof of Lemma 5.16, the decreasing events $\Gamma_2$ and $\Gamma_3$, where $\Gamma_3$ is the event that there does not exist any $x \in \partial_n A \cap B_e - \Delta_n$ such that $x \longleftrightarrow \partial_w B_e$ and, likewise, there does not exist any $x \in \partial_n A \cap B_w - \Delta_n$ such that $x \longleftrightarrow \partial_e B_w$. Under $\Gamma_2 \cap \Gamma_3$, we could expose all the wired components of $\partial A - \Delta_n - \Delta_s$ to reveal an outermost dual circuit around $C$. By monotonicity and the domain Markov property, the two distributions would be coupled under the grand coupling past that dual circuit, so that for all $\xi \in \Gamma_1$,
$$\|\pi^{\xi,1}_A - \pi^\xi_Q\|_C \leq \pi^{\xi,1}_A\big((\Gamma_2 \cap \Gamma_3)^c\big)\,.$$
Let $\bar A = [0, n] \times [0, 2m]$, viewed as two copies of $A$ stacked above one another. Let $(0, 1, \Delta_s)$ boundary conditions on $\bar A$ denote those that are free on $\partial_n \bar A$ and $\Delta_s$, and wired on the rest of $\partial \bar A$. By monotonicity and the exponential decay of correlations in the free phase given by (2.1), we see that there exists $c > 0$ such that
$$\mathbb{E}_\Delta\big[\pi^{\xi,1}_A(\Gamma_2^c) \mid \Gamma_1\big] \leq \mathbb{E}_{\Delta_s}\big[\pi^{\xi,1}_A(\Gamma_2^c) \mid \Gamma_1\big] \leq e^{-cm} + \pi^{0,1,\Delta_s}_{\bar A}(\Gamma_2^c)\,.$$
Consider $\pi^{0,1,\Delta_s}_{\bar A}(\Gamma_2^c)$ and let $V = [0, n] \times [0, 4m]$, as in Proposition 5.7 with $\ell = 4m$. Let $(0, 1, \Delta_s)$ boundary conditions on $V$ denote those that are free above $y = 2m$ and on $\Delta_s$, and wired elsewhere. Then it is clear, by monotonicity and the fact that $\Gamma_2^c$ is an increasing event, that
$$\pi^{0,1,\Delta_s}_{\bar A}(\Gamma_2^c) \leq \pi^{0,1,\Delta_s}_V(\Gamma_2^c)\,.$$
Because $m \geq n^{1/2+\varepsilon}$, we can apply Proposition 5.7 to see that there exists a new $c(q) > 0$ such that $\pi^{0,1,\Delta_s}_{\bar A}(\Gamma_2^c) \leq \exp(-cn^{3\varepsilon})$. We now turn to bounding $\pi^{\xi,1}_A(\Gamma_3^c \mid \Gamma_2)$, using the same approach as in the proof of Claim 5.17. Under $\Gamma_2$, there is a pair of vertical dual crossings in $A$ which allows us to replace, by monotonicity, the boundary conditions $(\xi, 1)$ by ones that we denote $(0, 1, \Delta_n)$, which are free on $\partial_{e,w,s} A$ and on $\Delta_n$, and wired elsewhere. To make the comparison to the setting of Proposition 5.7, perturb the boundary conditions further by extending the wired segments down along $\partial_{e,w} A$ a length $n^{1/2+\varepsilon}$. Up to a $\pi$-rotation, Proposition 5.7 with the choice $\ell = m$ implies that the two interfaces are confined to the left and right halves of $A$ with probability $1 - \exp(-cn^{3\varepsilon})$, for some new $c > 0$ and sufficiently large $n$. Monotonicity implies that for any $\xi \in \Gamma_1$, $\pi^{\xi,1}_A(\Gamma_3^c \mid \Gamma_2) \leq e^{-cn^{3\varepsilon}}$, and, together with a union bound, there exists $c' > 0$ such that for large enough $n$,
$$\mathbb{E}_\Delta\big[\|\pi^{\xi,1}_A - \pi^\xi_Q\|_C\big] \leq e^{-c' n^{1/2+\varepsilon}} + 2e^{-cn^{3\varepsilon}}\,.$$
Combined with the prior bounds on the first and third terms of the right-hand side of (5.17) and Theorem 2.6, for some $c > 0$,
$$\mathbb{E}_\Delta\big[\|\bar P_{2t'}(1, \cdot) - \pi^\xi_Q\|_{tv}\big] \leq \delta' + e^{-cn^{3\varepsilon}}\,.$$

(ii) Free initial configuration. The bound for the free initial configuration,
$$\mathbb{E}_\Delta\big[\|\bar P_{2t'}(0, \cdot) - \pi^\xi_Q\|_{tv}\big] \leq \delta' + e^{-cn^{3\varepsilon}}\,,$$
is nearly identical to the above, with the following change: the censored dynamics $\bar P$ only allows updates in the block $A$ until time $t'$, then resets all edge values in $B$ to $0$, and then only allows updates in $B$ between times $t'$ and $2t'$.
In this case, we can again write, using the same notation as before,
$$\mathbb{E}_\Delta\big[\|\bar P_{2t'}(0, \cdot) - \pi^\xi_Q\|_{tv}\big] \leq \mathbb{E}_\Delta\big[\|\nu_1 - \pi^{\xi,0}_A\|_C\big] + \mathbb{E}_\Delta\big[\|\pi^\xi_Q - \pi^{\xi,0}_Q\|_C\big] + \mathbb{E}_\Delta\big[\|\pi^{\xi,0}_A - \pi^\xi_Q\|_C\big] + \mathbb{E}_\Delta\big[\pi^{\xi,0}_Q\big(\|\nu^\eta_2 - \pi^{\xi,\eta}_B\|_{tv}\big)\big]\,. \qquad (5.18)$$
The free initial configuration precludes the long-range interactions of the FK model from modifying the boundary conditions on $A$, and thus they are still in $\mathcal{D}(A)$. The bound on the first term then follows from the assumption, without appealing to self-duality, and the other three bounds hold as for $\omega_0 = 1$. Combining the above with part (i), we obtain (5.8), which together with (5.7) concludes the proof of Proposition 5.13.

5.4. Proof of Proposition 5.2. It now remains to extend Corollary 5.15 to boundary conditions that are dominated by the free phase, or dominate the wired phase, on all four sides, to obtain the desired bounds for free and monochromatic boundary conditions. Suppose without loss of generality that $P \preceq \pi^0_{\mathbb{Z}^2}$ on $\Lambda$; the case $P \succeq \pi^1_{\mathbb{Z}^2}$ follows from self-duality. First consider the dynamics started from $\omega_0 = 1$. We wish to prove that the corresponding bound holds for some $c > 0$ and $t' = \exp(cn^{3\varepsilon})$. By an application of Corollary 5.10, up to errors that can be absorbed by adjusting the constant $c$ appropriately, we can modify, as in Definition 5.3, the boundary conditions on $\Delta = \Delta_n \cup \Delta_s$, where $\Delta_n = \Delta_n^1 \cup \Delta_n^2$,
$$\Delta_n^1 = \{0, n\} \times \big[\tfrac{3n}{4} - n^{3\varepsilon},\, \tfrac{3n}{4}\big]\,, \qquad \Delta_n^2 = \{0, n\} \times \big[\tfrac{n}{2},\, \tfrac{n}{2} + n^{3\varepsilon}\big]\,,$$
and $\Delta_s$ is the reflection of $\Delta_n$ across the line $y = n/2$, and consider the dynamics on $\Lambda$ with the new measure $\mathbb{P}_\Delta$ on boundary conditions. Let $\Lambda^\pm$ denote the top and bottom halves of $\Lambda$, respectively. We deal only with the first term, since the second can be bounded analogously. Let $\bar P$ be the dynamics that censors all updates not in $\Lambda_{n,3n/4}$.
Then, by the Markov property and the triangle inequality, the first term can be bounded above by
$$\mathbb{E}_\Delta\big[\|\bar P_{t'}(1, \cdot) - \tilde\pi^{\xi,1}\|_{\Lambda^-}\big] + \mathbb{E}_\Delta\big[\|\pi^\xi_\Lambda - \tilde\pi^{\xi,1}\|_{\Lambda^-}\big]\,,$$
where $\tilde\pi^{\xi,1}$ denotes the stationary distribution on $\Lambda_{n,3n/4}$ with boundary conditions that are wired on $\partial_n \Lambda_{n,3n/4}$ and $\xi$ elsewhere. First observe that, because of $\Delta_n^1$, with probability $1 - \exp(-cn^{3\varepsilon})$ the wired initial configuration on $\Lambda - \Lambda_{n,3n/4}$ does not affect the boundary conditions on $\Lambda_{n,3n/4}$, and therefore the boundary conditions on it are, up to a $\pi$-rotation, in $\mathcal{D}(\Lambda_{n,3n/4})$. At this cost, we assume this decreasing event holds. The second term can then be bounded as in Lemma 5.16 by $\exp(-cn^{2\varepsilon})$ for some $c > 0$. We use $\Delta_n^2$ to disconnect the wired boundary condition on $\partial_n \Lambda_{n,3n/4}$ from $\Lambda^-$; the new choice of $\Delta$ and the modifications in the sizes of the boxes do not affect the proofs. The first term can be bounded via Corollary 5.15 for $m = 3n/4$ since, with probability $1 - \exp(-cn^{3\varepsilon})$, the boundary condition on $\partial_{e,s,w} \Lambda_{n,3n/4}$ is dominated by the marginal of $\pi^0_{\mathbb{Z}^2}$, while on $\partial_n \Lambda_{n,3n/4}$ it is all wired. Combining the two bounds, and doing the same for $\Lambda^+$, implies there is a $c > 0$ such that the claimed bound holds for $t' = \exp(cn^{3\varepsilon})$.

For the dynamics started from the free initial configuration, for every $e \in E(\Lambda)$ and $\ell \in \mathbb{N}$, define $K = \Lambda \cap (e + [-\ell, \ell]^2)$. Let $P_K$ denote the transition kernel of the dynamics restricted to $K$, with $(\xi, 0)$ boundary conditions denoting $\xi$ on $\partial K \cap \partial \Lambda$ and free elsewhere. We claim that the corresponding chain of inequalities holds for some $c, c' > 0$. The first inequality is an immediate consequence of monotonicity. The second follows from an argument similar to that in the proof of Lemma 5.18, where now we take a new enlargement $E_n(\Lambda)$ that enlarges $\Lambda$ by $n$ in the southern direction also. We can replace, by Eq. (2.1), $\pi^\xi_\Lambda$ by $\pi^0_{E_n(\Lambda)}$ up to an error of $e^{-cn}$, as before.
Again using (2.1) in the free phase, up to an error of $e^{-c\ell}$, we can replace $\pi^0_{E_n(\Lambda)}$ with $\pi^{\xi,0}_K$, noting that the distributions at $e$ match if $e$ is disconnected from $K$ by a dual circuit. We can then bound the sum on the right-hand side by Proposition 5.11: uniformly in $\xi$ and $e$, $t_{mix}$ for $K$ is bounded above by $e^{c\ell}$ for some $c > 0$. Choosing $\ell = c^{-1}\log t' \leq n^{3\varepsilon}$ and union bounding over all the errors yields, for some $c > 0$,
$$\mathbb{E}\big[\|P_{t'}(0, \cdot) - \pi^\xi_\Lambda\|_{tv}\big] \leq e^{-cn^{2\varepsilon}}\,,$$
as desired. An application of Markov's inequality, the triangle inequality, and (2.3) to
$$\max_{\omega_0 \in \{0,1\}} \mathbb{E}\big[\|P_{t'}(\omega_0, \cdot) - \pi^\xi_\Lambda\|_{tv}\big] \leq e^{-cn^{2\varepsilon}}$$
implies that there exists some $c > 0$ such that for $t' = e^{cn^{3\varepsilon}}$ and $n$ sufficiently large, $P(t_{mix} \geq t') \leq \exp(-cn^{2\varepsilon})$, as required.
Pervasive adaptation in Plasmodium-interacting proteins in mammals

The protozoan genus Plasmodium causes malaria in dozens of mammal species, including humans, non-human primates, rodents, and bats. In humans, Plasmodium infections have caused hundreds of millions of documented deaths, imposing strong selection on certain populations and driving the emergence of several resistance alleles. Over the deep timescale of mammalian evolution, however, little is known about host adaptation to Plasmodium. In this work, we expand the collection of known Plasmodium-interacting proteins (PIPs) in mammalian hosts from ~10 to 410, by manually curating thousands of scientific abstracts. We use comparative tests of adaptation to show that PIPs have experienced >3 times more positive selection than similar mammalian proteins, consistent with Plasmodium as a major and long-standing selective pressure. PIP adaptation is strongly linked to gene expression in the blood, liver, and lung, all of which are clinically relevant tissues in Plasmodium infection. Interestingly, we find that PIPs with immune functions are especially enriched for additional interactions with viruses or bacteria, which together drive a 3.7-fold excess of adaptation. These pleiotropic interactions with unrelated pathogens, along with pressure from other Plasmodium-like Apicomplexan parasites, may help explain the PIP adaptation we observe in all clades of the mammalian tree. As a case study, we also show that alpha-spectrin, the major membrane component of mammalian red blood cells, has experienced accelerated adaptation in domains known to interact specifically with Plasmodium proteins. Similar interactions with Plasmodium-like parasites appear to have driven substantial adaptation in hundreds of host proteins throughout mammalian evolution.
Expression changes were the most common form of evidence (72% of PIPs), but 28% of PIPs were supported by multiple sources of evidence, and 41% by multiple studies. Virtually all of the studies were conducted on five Plasmodium species infecting humans or mice (Fig 1A).

PIP expression and function support a role in malaria

If PIPs are truly a set of malaria-relevant genes, we would expect the pathophysiology of malaria to be reflected in their tissue expression profiles. We tested this hypothesis by examining human gene expression in each of the 53 tissues collected by the GTEx Consortium (2015; Methods III). We first found that, on average, PIPs have 9.5% higher total expression than other genes (p<0.0001; Fig 2A). To fairly evaluate PIP overexpression in each tissue, we designed a matched permutation test that compares PIPs to many similarly-sized sets of control genes with similar total expression (Methods III). After controlling for total expression in this way, we find seven tissues in which PIPs are significantly overexpressed.

Similarly, we expected PIPs to be enriched for GO functions that reflect malaria pathology. We tested 17,696 GO functional categories (Methods V) for PIP enrichment using Fisher's Exact Test. After correcting for multiple testing, over 1,000 categories contained significantly more PIPs than expected (S2 Table). These categories are dominated by immune functions, especially at the highest levels of enrichment (Fig 1C). Other functions, including apoptosis, cell-cell signaling, and coagulation, are also highly enriched for PIPs (S2 Table). These results confirm the biological connections between PIPs and malaria, and suggest that immune pathways present a major functional interface between host and parasite.

[Fig 2 caption, fragment] Sets with higher average values will traverse more x-axis space (and appear as 'lower' lines) before reaching the maximum density of 100%.
[Fig 2 caption, continued] Descriptions of data sources are available in Methods V. *** = p<0.0001; ** = p<0.001; ns = p>0.05.

Pleiotropy has many interesting implications when testing the link between Plasmodium, as a single causal pathogen, and protein adaptation, including the need to carefully isolate any single selective pressure.

PIPs are not like other proteins

We have already shown that PIPs have two unusual properties (high mRNA expression, and excess overlap with other pathogens) that may influence their rate of evolution. We assessed several additional metrics for differences between PIPs and other proteins, in order to fairly evaluate PIP adaptation.

First, we tested three more broad measures of gene function in humans: the density of DNaseI hypersensitive elements; protein expression, as measured by mass spectrometry; and the number of protein-protein interactions (see Methods V). For each of these metrics, PIPs have significantly higher mean values than sets of random controls, indicating that PIPs are more broadly functional in humans (Fig 2B-D; all p<0.01). We next tested four measures of genomic context, which have been linked to the rate of protein evolution: aligned protein length; the regional density of protein-coding bases; the density of highly conserved vertebrate elements; and GC content (Methods V). Most of these metrics do not differ between PIPs and other genes (Fig 2E-H), with the exception of conserved element density, which is slightly but significantly lower in PIPs (mean=8.0% vs. 8.8%; p=0.0004; Fig 2G).

Based on these results, we expanded our permutation test to find matched controls for each PIP. Control genes were considered acceptable matches if their values for each of the five significantly different metrics (Fig 2) were sufficiently similar; otherwise, no match could be generated. About 9% of PIPs were too dissimilar from other proteins to be matched, and were excluded from subsequent analysis.
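The matched permutation test described above (Methods III) can be sketched as follows. The data layout, tolerance, and function names here are illustrative assumptions, not the authors' code; per PIP, controls are drawn from non-PIP genes with similar total expression, and an empirical one-sided p-value is computed for overexpression in one tissue:

```python
import random

def matched_permutation_test(expr, pips, tissue, n_perm=1000, tol=0.25):
    """Empirical test for PIP overexpression in one tissue (sketch).

    expr: dict gene -> dict tissue -> RPKM (hypothetical layout).
    pips: set of PIP gene names.
    Controls are drawn, per PIP, from non-PIP genes whose total
    expression is within `tol` (fractional) of that PIP's total.
    """
    total = {g: sum(t.values()) for g, t in expr.items()}
    non_pips = [g for g in expr if g not in pips]
    # Per-PIP pool of expression-matched control genes.
    pools = {p: [g for g in non_pips
                 if abs(total[g] - total[p]) <= tol * total[p]]
             for p in pips}
    observed = sum(expr[p][tissue] for p in pips) / len(pips)
    hits = 0
    for _ in range(n_perm):
        ctrl = [random.choice(pools[p]) for p in pips if pools[p]]
        if not ctrl:
            continue  # no matched controls available
        if sum(expr[g][tissue] for g in ctrl) / len(ctrl) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # one-sided empirical p-value
```

A control set is resampled on every permutation, so the p-value reflects how often random, expression-matched gene sets reach the PIPs' mean tissue expression.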
Finally, one of the largest differences between PIPs and other proteins is the frequency with which they are discussed in the scientific literature (Fig 2I). The average PIP has 6.5 times more PubMed citations, and 9.1 times more scientist-contributed References Into Function (GeneRIFs), than the average mammalian protein (Methods V). This difference was too large to control for in the matched permutation test without excluding the majority of PIPs.

In contrast, we find that PIPs do have a significantly higher ratio of non-synonymous to synonymous substitutions across 24 mammal species. Both models find evidence of excess adaptation in PIPs. Over 37% of PIPs have BUSTED evidence (at p≤0.05) of recurrent adaptation in mammals, versus 23% of matched controls (p<10^-5; Fig 3C). Similarly, PIPs have BS-REL evidence for adaptation on more branches of the mammalian tree (p=1.87×10^-4; Fig 3D), and for more codons per protein (p<10^-5; Fig 3E). This excess is robust to the BUSTED p-value threshold used to define adaptation, and increases as the threshold becomes more stringent (Fig 3F, p=0.001). Overall, these matched tests show that PIPs have indeed experienced an accelerated rate of adaptive substitutions, consistent with malaria as an important selective pressure.

High rate of adaptation in PIPs known to interact only with Plasmodium

We have shown that a large set of host proteins with strong connections to Plasmodium (S1 Table; Fig 1A-C) have, over deep time scales, evolved under exceptionally strong positive selection (Fig 3). Given that nearly half of PIPs are known to also interact with viruses and/or bacteria (Fig 1D), one critical question is whether Plasmodium is truly the source of this selection.
We attempted to isolate Plasmodium as a selective pressure by dividing PIPs into 'Plasmodium-only' and 'multi-pathogen' categories, based on the available information regarding viruses and bacteria (Fig 1D; Methods IV). We find that Plasmodium-only PIPs have a 2.2-fold excess of adaptation compared to matched controls (p=0.008; Fig 4A, far left), when adaptation is measured as the proportion of adaptive codons per gene (Fig 3E). This suggests that Plasmodium may have specifically driven adaptation in a large number of mammalian proteins, apart from any pleiotropic interactions they may have with other pathogens.

Nonetheless, multi-pathogen PIPs have 3.7 times more adaptation than matched controls, significantly higher than the excess in Plasmodium-only PIPs (p=0.005; Fig 4A, left). This suggests that an increased number and diversity of pathogen interactions may drive a cumulative increase in host adaptation. Importantly, however, these multi-pathogen interactions are concentrated in immune PIPs (Fig 1D; Fig 4A). Since immune genes are well known to evolve at elevated rates, immune enrichment could confound adaptation estimates in multi-pathogen PIPs. Before disentangling this issue, we first verified the correlation between immune function and adaptation (Methods VI). We find that while PIPs overall have adapted at a 3.1-fold higher rate than matched controls, non-immune PIPs have adapted at a 1.7-fold higher rate than matched, non-immune controls (Fig 4A, middle). This difference, which is highly significant (p<0.001), reinforces that immune enrichment could confound adaptation in multi-pathogen PIPs. To isolate these two effects, we then considered only non-immune PIPs, divided into groups by their total number of pathogen interactions (Fig 4A, right; S7 Fig). In these non-immune PIPs, in contrast to all PIPs, we find that additional interactions beyond Plasmodium have no additional effects on adaptation.
Together, these results suggest that adaptation in immune genes is difficult to attribute to any single selective pressure. The immune system appears to be the most efficient avenue for hosts to simultaneously adapt to multiple pathogens. In contrast, host adaptation to Plasmodium is apparent through both immune and non-immune pathways (Fig 1D; Fig 4A). We have shown that non-immune genes evolve more slowly and have less pathogen pleiotropy (Fig 4A; Fig 1D). Thus, though Plasmodium has likely played a major role in immune evolution, we can be more confident that selection imposed by Plasmodium has specifically driven adaptation in non-immune PIPs.

PIP adaptation is related to expression in blood, liver, and lung

Malaria infections are biologically complex, and host adaptation to Plasmodium could occur in genes expressed in several malaria-relevant tissues (Fig 1B). We used multiple linear regression to test whether the rate of adaptation in a gene, as measured by BS-REL and BUSTED, was related to its tissue-specific expression, as measured by GTEx. For PIPs, rates of adaptation are significantly and positively related to relative expression in blood, liver, and lung, but not in other malaria-related tissues (Table 1, column 2). Overall, in a multiple linear model, PIP expression in these tissues explains 17.4% of the variance in the proportion of adaptive codons. In contrast, the tissue-specific expression of matched control genes (Methods III) explains only 4.6% of this variance in adaptation, or 3.8 times less (p<0.001).
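The multiple linear model described above regresses a per-gene adaptation measure on tissue-relative expression and reports the variance explained. A minimal sketch of that computation, with hypothetical array layouts (not the authors' pipeline), using ordinary least squares:

```python
import numpy as np

def variance_explained(rel_expr, adaptation):
    """R^2 of an OLS fit of per-gene adaptation on tissue expression.

    rel_expr: (n_genes, n_tissues) array of relative expression
              (e.g. columns for blood, liver, lung).
    adaptation: (n_genes,) proportion of adaptive codons per gene.
    """
    # Design matrix with an intercept column.
    X = np.column_stack([np.ones(len(adaptation)), rel_expr])
    beta, *_ = np.linalg.lstsq(X, adaptation, rcond=None)
    resid = adaptation - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((adaptation - adaptation.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot
```

Comparing this R^2 between PIPs and matched control gene sets (as in Table 1) then asks whether tissue expression predicts adaptation more strongly in PIPs than expected by chance.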
When compared to samples of control genes matched for total expression, as well as for expression in blood, liver, and lung, PIP relationships between adaptation and tissue expression are significantly stronger than expected.

PIP adaptation is not limited to Plasmodium-infected lineages

A number of Plasmodium species infect mammalian hosts in the orders Primates and Rodentia (Carlton, Perkins, and Deitsch, 2013). In contrast, Artiodactyla and Carnivora are parasitized by other genera of Apicomplexan parasites, which also reproduce in the blood and are transmitted by insects (Clark and Jacobson, 1998). To further test the specificity of PIP adaptation, we applied the BUSTED and BS-REL models to separate protein alignments for each mammalian order (Methods VIII).

When all PIPs are considered, we find significant excesses of adaptation in rodents (p<0.001), primates (p=0.005), and carnivores (p=0.02; Fig 4B). The signal is positive, but not significant, in artiodactyls (p=0.28; Fig 4B). Artiodactyls are the most poorly represented group in our mammalian tree (S1 Fig), and adaptation in clades not parasitized by Plasmodium itself may reflect pressure from related Apicomplexan pathogens. Other ubiquitous pathogens that interact with PIPs, namely viruses and bacteria (Fig 1D), may further contribute to these mammal-wide patterns.

Understanding a single case of adaptation to Plasmodium

We have shown that Plasmodium has driven, at least in part, an accelerated rate of adaptation in a set of 410 mammalian PIPs. In order to understand this adaptation at a more mechanistic level, we selected a single PIP for more detailed investigation.

Of the top ten PIPs with the strongest BUSTED evidence of adaptation, only one candidate, alpha-spectrin (SPTA1), has been extensively characterized for molecular interactions with Plasmodium proteins. Alpha-spectrin is a textbook example of a major structural component of the red blood cell (RBC) membrane.
In humans, several polymorphisms in this gene are known to cause deformations of the RBC, which may either be symptomless or cause deleterious anemia (reviewed in, e.g., Gallagher, 2004). The SPTA1 protein has a well-defined domain structure, and specific interactions with Plasmodium proteins are known for three domains.

We wished to test whether sites of mammalian adaptation in SPTA1 mapped to any of these Plasmodium-relevant domains. To identify adaptive codons with higher precision and power, we aligned SPTA1 coding sequences from 61 additional mammal species (S5 Table) for analysis in MEME (Murrell et al., 2012; Methods IX). Of the 2,419 codons in this large mammalian alignment, we found that 63 show strong evidence of adaptation (p<0.01), and that these are distributed non-randomly throughout the protein.

Remarkably, three domains (Repeat 1, Repeat 4, and EF-hand 2) are significantly enriched for adaptive codons, after controlling for domain length and conservation (Fig 5; Methods IX). That is, all three SPTA1 domains with strong evidence of adaptation in mammals are known to either interact specifically with P. falciparum proteins, or harbor human mutations that provide resistance to P. falciparum. This overlap is unlikely to occur by chance (p=0.015), and is robust to the p-value thresholds chosen (S6 Table). Thus, evidence from SPTA1 suggests a meaningful and specific connection between host adaptation and the mechanics of Plasmodium infection.

Discussion

In this work, we have examined decades of malaria literature to expand the collection of mammalian Plasmodium-interacting proteins by over an order of magnitude (Fig 1). We show that, compared to control proteins matched for various properties (Fig 2), these 410 PIPs have adapted at exceptionally high rates in mammals (Fig 3).
The highest rates of adaptation are evident in immune PIPs, especially those that share interactions with viruses and bacteria (Fig 4A). However, we show that Plasmodium itself (or related Apicomplexans) has likely been an important driver of this adaptation, especially for non-immune proteins (Fig 4A). We used collections of available data on other pathogens to isolate a set of PIPs that, to the best of our knowledge, lack any 'multi-pathogen' interactions. These 'Plasmodium-only' PIPs, whether immune or not, have adapted at over twice the expected rate in mammals (Fig 4A). This suggests that Plasmodium has had an appreciable effect on PIP evolution, beyond the effect of unrelated pathogens. Still, many interactions with other pathogens likely remain unknown, making it difficult, based on this evidence alone, to dismiss their importance.

However, two other pieces of evidence support Plasmodium as a key selective pressure. First, mammal-wide adaptation in PIPs is strongly linked to PIP expression in human blood, liver, and lung (Table 1). Plasmodium parasites are well known to replicate within red blood cells (RBCs) and hepatocytes, and infected RBCs tend to sequester in the lungs, with serious consequences (e.g., Aursudkij et al., 1998). Thus, the pathophysiology of malaria is reflected in the tissues where PIPs show the strongest evidence of adaptation.

Second, in the well-studied case of alpha-spectrin, we show that domain-level interactions with Plasmodium perfectly explain the observed patterns of adaptive substitution (Fig 5). Besides validating the ability of codon evolution models to detect adaptation at particular residues (Methods VII), this result affirms a specific role for Plasmodium in mammalian evolution, beyond the immune-focused role played by pathogens in general (Fig 4A).
Thus, despite the inevitability of at least some pleiotropy (Wagner and Zhang, 2011), we show that phenotypic information can be leveraged to link genetic adaptation to specific sources of selection.

Throughout this work, we showcase the utility of phenotypic information for studying evolution. We demonstrate that recent, well-funded projects like GTEx and ENCODE can provide, among many other uses, the raw information required for meaningful evolutionary comparisons (Fig 2; Methods). Smaller-scale projects, including most of the scientific papers contained in PubMed, also contain an impressive quantity of valuable data (Fig 1A). However, we find that heavy manual curation is still required to remove false positives from literature searches (Methods I). In the future, unique and automatic indexing of existing data will be key to understanding the evolution of complex phenotypes, and should be a major research focus, alongside the accessibility of new data.

Finally, this work provides an interesting contrast with previous studies, which have associated only a few dozen human genes with malaria resistance (Verra et al., 2009). Only a handful of these genes are backed by convincing evidence of positive selection in humans, and nearly all of these are RBC proteins (Hedrick, 2011; MalariaGEN, 2015). In contrast, our work provides a repository of hundreds of diverse human genes with phenotypic links to malaria (Fig 1; S1 Table). Why, then, do we know of so few examples of recent human adaptation to Plasmodium? This disconnect may depend simply on the timescale of human evolution, which is only a fraction of the 105 million years of mammalian evolution (Murphy et al., 2007). Or, perhaps the difficulty of detecting balancing selection (Charlesworth, 2006) has obscured additional, important human variants.
Future work will utilize the large set of PIPs to better understand the evolution of malaria resistance in humans. In conclusion, we have found evidence of substantially accelerated adaptation in mammalian proteins that interact with Plasmodium. In the case of rapidly evolving immune proteins, Plasmodium appears to share responsibility with other groups of pathogens, including viruses and bacteria. We show that it can be difficult to attribute evolutionary changes to a single selective agent, given the surprising pleiotropy of host genes with regard to very different pathogenic agents. But in many cases, as in the case of alpha-spectrin, our approach allows us to infer that Plasmodium-like parasites have imposed a substantial selective pressure on mammals. We hope that our collection of 410 mammalian PIPs will continue to prove a powerful resource for exploring host interactions with Plasmodium.

… addressing non-genetic aspects of malaria. For papers discussing genes, we examined the abstracts for the presence and type of evidence connecting genes to malaria phenotypes. In cases where the abstract was ambiguous, we examined the full text of the paper. To limit the number of false positives, we did not include results from RNAseq or other high-throughput experiments.

II. Generation of mammalian ortholog alignments

We used BLAT to identify homologs of 22,074 human coding sequences in 24 high-depth mammal genomes (S1 Fig). We retained orthologs which (1) had best reciprocal hits in all 24 mammal species, (2) lacked any in-frame stop codons, (3) were at least 30% of the length of the human sequence, and (4) had clearly conserved synteny in at least 18 non-human species. Coding sequences for the resulting 9,338 proteins were aligned with PRANK, and any codon present in fewer than eight species was excluded from analysis.
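The four ortholog-retention criteria above can be expressed as a simple filter. The record fields and example values below are hypothetical illustrations, not the study's pipeline code; only the thresholds follow the text:

```python
# Sketch of the ortholog-retention filter described in Methods II:
# (1) best reciprocal hits in all 24 species, (2) no in-frame stop
# codons, (3) >= 30% of the human sequence length, (4) conserved
# synteny in >= 18 non-human species. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Ortholog:
    reciprocal_hits: int      # species with best reciprocal hits
    has_stop_codon: bool      # any in-frame stop codon present
    length_fraction: float    # length / human sequence length
    syntenic_species: int     # non-human species with conserved synteny

def retain(o: Ortholog, n_species: int = 24) -> bool:
    return (o.reciprocal_hits == n_species
            and not o.has_stop_codon
            and o.length_fraction >= 0.30
            and o.syntenic_species >= 18)

keep = retain(Ortholog(24, False, 0.85, 20))   # passes all four criteria
drop = retain(Ortholog(24, False, 0.25, 20))   # too short relative to human
```

Applying such a conjunction of independent filters is what reduces the initial 22,074 candidates to the 9,338 aligned proteins described above.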
Additional details are available in …

… values for each tissue. RA is simply the proportion of each gene's total RPKM found in each tissue. For matching controls, we summed RPKM values over all tissues to yield total expression. Because PIPs have substantially higher total expression than other proteins (Fig 2A) …

Alpha-spectrin homologs were initially identified in 88 mammal species using NCBI Gene (http://www.ncbi.nlm.nih.gov/gene/?Term=ortholog_gene_6708). The sequence of the longest mRNA transcript for each species was downloaded using E-Utilities, and each transcript was trimmed to the longest ORF using … (Table). The alignment was manually inspected and corrected using … (Table). Then, for each domain, we calculated an 'adaptation score' as:

a/v

where a measures adaptation (the proportion of codons within the domain with MEME p ≤ 0.01*) and v measures variability (the proportion of codons within the domain that vary among species, i.e., are not 100% conserved). This score also controls for domain length, as it uses the proportion of codons within the domain. To calculate the significance of each domain's adaptation score (i.e., to ask, is it higher than expected?), we randomly permuted codons among domains 10,000 times.

*We also tested MEME p-value cutoffs of 0.1, 0.5, 0.005, and 0.001 for defining a; these results are available in S6 Table. The results for p ≤ 0.01, which are reported in the main text, are representative across these cutoffs.

Data Access
All data used in this work are publicly available (Methods I-V). The collection of PIPs is available in S1 Table.

Acknowledgements
We wish to thank Kerry Geiler-Samerotte for her thoughtful comments on the manuscript, along with the rest of the Petrov lab. ERE thanks Jane Carlton for abbreviation advice; Jamie Blundell and Anisa Noorassa for figure advice; and Daniel Friedman, for conceding that 'protein' can mean 'gene.'
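The domain-level adaptation score and its permutation test described in the Methods above can be sketched as follows. The per-codon flags here are synthetic illustrations, not the study's data, and the exact permutation bookkeeping is an assumption:

```python
# Sketch of the domain 'adaptation score' (a/v) and its significance
# test: permute codons among domains many times and ask how often a
# permuted domain scores at least as high as the observed one.
# a = fraction of the domain's codons with MEME p <= 0.01 (flagged
# adaptive); v = fraction of codons that vary among species.
import random

def score(codons):
    # codons: list of (is_adaptive, is_variable) flags for one domain
    a = sum(ad for ad, _ in codons) / len(codons)
    v = sum(va for _, va in codons) / len(codons)
    return a / v if v else 0.0

def perm_pvalue(domains, target, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    pooled = [c for d in domains for c in d]
    sizes = [len(d) for d in domains]
    start = sum(sizes[:target])
    obs = score(domains[target])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if score(pooled[start:start + sizes[target]]) >= obs:
            hits += 1
    return hits / n_perm

# Synthetic example: domain 0 is enriched for adaptive codons.
domains = [[(1, 1)] * 5, [(0, 1)] * 5, [(0, 1)] * 5]
p = perm_pvalue(domains, target=0, n_perm=2000)  # small p expected
```

Because the score is a ratio of within-domain proportions, permuting codons among domains tests enrichment while holding domain length fixed, as the text notes.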
Author Contributions
E.R.E. curated the PIPs. D.E., E.R.E., and N.T. collected other data. E.R.E. and D.E. performed the analyses, with design input from D.A.P. and S.V. E.R.E. and D.A.P. wrote the paper, with contributions from all other authors.
This work was supported by NIH grants R01GM089926 and R01GM097415 and NSF grant R35GM118165-01 to DAP, and an NSF Graduate Research Fellowship to
Urine Trefoil Factors as Prognostic Biomarkers in Chronic Kidney Disease

Introduction. Trefoil factor family (TFF) peptides are increased in serum and urine in patients with chronic kidney disease (CKD). However, whether the levels of TFF predict the progression of CKD remains to be elucidated. Methods. We determined the TFF levels using peptide-specific ELISA in spot urine samples and performed a prospective cohort study. The association between the levels of urine TFFs and other urine biomarkers, as well as the renal prognosis, was analyzed in 216 CKD patients (mean age: 53.7 years, 47.7% female, 56.9% with chronic glomerulonephritis, and mean eGFR: 58.5 mL/min/1.73 m2). Results. The urine TFF1 and TFF3 levels significantly increased with the progression of CKD stages, but the urine TFF2 levels did not. The TFF1 and TFF3 peptide levels predicted the progression of CKD ≥ stage 3b by ROC analysis (AUC 0.750 and 0.879, resp.); however, TFF3 alone predicted CKD progression in a multivariate logistic regression analysis (odds ratio 3.854, 95% confidence interval 1.316-11.55). The Kaplan-Meier survival curves demonstrated that patients with higher TFF1 and TFF3 levels, alone or in combination with macroalbuminuria, had a significantly worse renal prognosis. Conclusion. The data suggested that urine TFF peptides are associated with renal progression and outcomes in patients with CKD.

Introduction
Chronic kidney disease (CKD) is defined as having either a glomerular filtration rate (GFR) < 60 mL/min/1.73 m2 or markers of kidney damage for at least 3 months, or both [1,2]. CKD, with its multifactorial etiology leading to end-stage renal disease (ESRD), is a significant concern, given the increasing numbers of such patients worldwide [3]. CKD is associated not only with an elevated risk of ESRD but also with cardiovascular disease and mortality, even with a slight decline in the GFR [4,5].
A lower estimated GFR (eGFR) and severe albuminuria independently predict ESRD and mortality in patients with CKD [6]. Several reports have identified and validated novel biomarkers in CKD patients in order to better identify those at high risk of a rapid loss of the renal function [7]. The mammalian trefoil factor family (TFF) peptides consist of a three-looped structure of cysteine residues, known as the trefoil domain, and the family comprises three members in mammals: TFF1, TFF2, and TFF3 [8,9]. TFF1 and TFF3 contain one trefoil domain, while TFF2 contains two. TFF1 and TFF3 can dimerize to homodimers through a seventh cysteine residue located near the C-terminus [10]. These small peptides, with a molecular weight of approximately 7 kDa, are secreted by mucus-producing cells in the gastrointestinal tract and are involved in mucosal surface maintenance and repair [11,12].
BioMed Research International
They are also secreted by epithelial cells of multiple tissues, including tubular epithelial cells of the kidney [13]. In the human urinary tract, TFF3 is detected as the most abundant form, followed by TFF1 [14]. In rodent models, urine TFF3 was markedly reduced after acute renal toxicity [15], and it has already been proposed as a urine biomarker for kidney toxicity in preclinical stages [16]. Higher urine levels of TFF3 were shown to be associated with incident CKD in community-based populations [17]; however, they were not associated with incident CKD or albuminuria in another prospective cohort of Framingham Heart Study participants [18]. In recent studies in patients with CKD, increased levels of urine TFF1 [19] and urine TFF2 [20] have been reported in early CKD stages, whereas urine TFF3 levels are increased in later CKD stages [19,21]. Given the above conflicting findings, whether or not urine TFF levels can be used to predict the renal outcome is still uncertain in patients with CKD.
We therefore examined the urine levels of TFF and investigated the relationship between urine TFF and the renal progression and outcomes in patients with CKD. Subjects. The subjects in this study were outpatients who had visited the Renal Unit of Okayama University Hospital between February 2009 and January 2011. All patients were diagnosed with CKD according to their eGFR and the presence of kidney injury, as defined by the National Kidney Foundation K/DOQI Guideline [22]. Hypertension was defined as systolic blood pressure (SBP) ≥ 140 mmHg or diastolic blood pressure (DBP) ≥ 90 mmHg or the use of antihypertensive drugs. The eGFR was calculated according to the simplified version of the Modification of Diet in Renal Disease formula [eGFR = 194 × sCr^(-1.094) × age^(-0.287) (× 0.739 if female)] [23]. All procedures in the present study were carried out in accordance with institutional and national ethical guidelines for human studies and guidelines proposed in the Declaration of Helsinki. The ethics committee of Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences approved the study (number 522 and revision number 2063). Written informed consent was obtained from each subject. This study was registered with the Clinical Trial Registry of the University Hospital Medical Information Network (registration number UMIN000010140). According to the protocol, we excluded any patients with established atherosclerotic complications (coronary artery disease, congestive heart failure, or peripheral vascular disease). Patients with nephrotic syndrome, acute kidney injury, acute infection, malignancy including gastric cancer [24,25], active gastrointestinal diseases including gastroenteritis and peptic ulcers, or liver cirrhosis [26] at entry were excluded (Supplementary Figure S1). Study Samples. All urine samples were obtained from spot urine collected in the morning [27].
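The eGFR formula above can be computed directly. A minimal sketch, with an illustrative (not study) patient; sCr is serum creatinine in mg/dL:

```python
# Japanese MDRD-type eGFR equation used in the study [23]:
#   eGFR = 194 * sCr^(-1.094) * age^(-0.287)  (* 0.739 if female)
# sCr in mg/dL; eGFR in mL/min/1.73 m^2.
def egfr(scr_mg_dl: float, age: float, female: bool) -> float:
    value = 194.0 * scr_mg_dl ** -1.094 * age ** -0.287
    return value * 0.739 if female else value

# Illustrative example: 54-year-old male with sCr 1.0 mg/dL -> ~62.
example = egfr(1.0, 54, female=False)
```

Note that both exponents are negative, so eGFR falls as creatinine rises and as age increases, and the 0.739 factor uniformly lowers the estimate for women.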
Samples were spun at 2,000 for 5 minutes in a refrigerated centrifuge, and the supernatants were immediately transferred to new screw-top cryovial tubes and frozen at -80 °C. All urine aliquots used in this biomarker study had undergone no previous freeze-thaw cycle. Samples for this study were obtained from 216 participants who were free of ESRD at the time of urine collection. Biomarker Measurements and Other Clinical Parameters. The storage duration between collection and measurement was a median of 29 months (interquartile range, 28-31 months). The TFF peptide (TFF1, TFF2, and TFF3) concentrations were measured using an ELISA system, as described previously [24,25]. Antisera were prepared from rabbits immunized with human TFFs. Purified polyclonal antibodies (TFF1: OP-22203, TFF2: OP-20602, and TFF3: OPP-22303) were coated onto a 96-well microtiter plate, and the plates were blocked with 0.1% bovine serum albumin/phosphate-buffered saline (PBS). After the blocking solution was removed, 100 μL of assay buffer (1 mol/L NaCl/0.1% bovine serum albumin/PBS) and 50 μL of sample or human TFF standard were added to the wells. After incubation overnight at room temperature, the plate was washed, and diluted biotin-labeled anti-TFF polyclonal antibodies (TFF1: biotin-OPP22205, TFF2: biotin-OPP20601, and TFF3: biotin-OPP22305) were added to each well. After incubation for 2 h, the plate was washed, and diluted horseradish peroxidase-conjugated streptavidin (Vector Laboratories, Burlingame, CA, USA) was added to each well, followed by a further 2 h incubation at room temperature, after which the plate was washed. Tetramethylbenzidine (TMB) solution (Scytek Laboratories, Inc., West Logan, UT, USA) was then added, stop solution (Scytek Laboratories, Inc.) was added 10 min later, and the absorbance at 450 nm was measured. Concentrations of human TFFs in the samples were calculated from a standard curve constructed from recombinant human TFFs.
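Reading concentrations off a standard curve is the last computational step of the ELISA above. The curve-fitting method is not specified in the text, so this sketch assumes simple linear interpolation between adjacent standards; the absorbance-concentration pairs are illustrative, not the study's calibration data:

```python
# Sketch: convert an A450 absorbance reading to a concentration using
# a standard curve, assuming linear interpolation between adjacent
# standards (the study's actual fitting method is not stated).
from bisect import bisect_left

def concentration(a450, standards):
    # standards: list of (absorbance, pg_per_ml), sorted by absorbance
    xs = [a for a, _ in standards]
    i = bisect_left(xs, a450)
    if i == 0 or i == len(xs):
        raise ValueError("absorbance outside the standard curve")
    (x0, y0), (x1, y1) = standards[i - 1], standards[i]
    return y0 + (y1 - y0) * (a450 - x0) / (x1 - x0)

# Illustrative curve from recombinant-protein standards.
curve = [(0.05, 0.0), (0.20, 30.0), (0.60, 120.0), (1.20, 480.0)]
conc = concentration(0.40, curve)  # ~75 pg/mL, halfway between standards
```

Rejecting readings outside the standard range mirrors normal assay practice: values beyond the highest standard require dilution and re-assay rather than extrapolation.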
The assay sensitivities for TFF1, TFF2, and TFF3 were 7, 30, and 30 pg/mL, respectively. Each TFF antibody reacted specifically and showed no cross-reactivity for the other TFFs [24,25]. The performance characteristics of the ELISA are shown in Supplementary Table S1 and Supplementary Figure S2. The concentrations of clinical parameters were measured using routine laboratory methods (SRL, Inc., Okayama, Japan). The urinary levels of albumin, α1-microglobulin (α1-MG), β2-microglobulin (β2-MG), and N-acetyl-β-D-glucosaminidase (NAG) were also determined (SRL, Inc.). The serum and urinary creatinine levels were measured according to the enzymatic colorimetric method. Each subject's arterial blood pressure was measured by a physician after a 10-minute resting period to obtain the systolic and diastolic blood pressures (SBP and DBP, resp.). The mean blood pressure (MBP) was calculated as DBP + (SBP - DBP)/3 [28]. Outcomes and Follow-Up. The primary outcome was CKD progression, defined as a composite endpoint of incident ESRD (receipt of maintenance dialysis or kidney transplant) or doubling of serum creatinine [29]. Patients were prospectively followed up for a median period of 1097 days (interquartile range, 794-1244 days). Patients were followed by review of the medical record or telephone interview at least twice a year until March 31, 2013. Death and loss to follow-up were considered censoring events. Statistical Analyses. Statistical analyses were performed using the JMP software package (release 11; SAS Institute, Cary, NC, USA). Data are expressed as the mean ± standard deviation for continuous parametric data, median and interquartile range for continuous nonparametric data, and frequencies for categorical data. A linear regression analysis of the data at baseline was performed using the least-squares method. Variables showing a positively skewed distribution were transformed using the natural logarithm (ln).
Differences between groups were analyzed using Student's t-test and the Mann-Whitney test as appropriate. Receiver operating characteristic (ROC) curves were constructed to determine the optimum sensitivity and specificity, and the area under the curve (AUC) was calculated [30]. A multivariable logistic regression analysis was performed to determine the predictors [28]. The p values, odds ratios, and corresponding two-sided 95% confidence intervals for the predictors are presented [31]. A Kaplan-Meier analysis and the log-rank statistic were used to explore the effect of urine biomarker levels on the renal endpoint-free survival [29,31]. Renal survival times were censored only when patients died, underwent maintenance dialysis or kidney transplantation, were lost to follow-up monitoring, or completed the study. The renal survival was calculated from the date of urine sample collection. A p value of < 0.05 was considered to be statistically significant. Urine TFF Levels in Early, Middle, and Later CKD Stages. A total of 216 CKD patients with a mean age of 53.7 years were included in the study (Table 1). Glomerulonephritis accounted for more than half of the underlying causes of CKD (56.9%). The median urine TFF1, TFF2, and TFF3 levels were 16.6, 199.7, and 65.3 μg/gCr, respectively. The baseline characteristics are shown according to early (stages 1 and 2), middle (stages 3a and 3b), and later (stages 4 and 5) CKD stages (Table 1). Of note, the concentrations of both urine TFF1 and TFF3 significantly increased with progression of CKD stages; however, those of urine TFF2 did not (Figure 1). Concentrations of other urine markers of tubular injury, including α1-MG, β2-MG, and NAG, also increased with CKD progression (Table 1).
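The ROC analysis used above reduces to a ranking statistic: the AUC equals the probability that a randomly chosen progressor shows a higher biomarker value than a randomly chosen non-progressor (the Mann-Whitney relationship), with ties counted as one half. A minimal sketch with illustrative values, not the study's data:

```python
# Minimal ROC AUC via the Mann-Whitney relationship: the fraction of
# (case, control) pairs in which the case outranks the control, with
# ties scored 1/2. Values below are illustrative biomarker levels.
def auc(cases, controls):
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

progressors = [120.0, 95.0, 80.0]       # e.g. urine TFF3, ug/gCr
non_progressors = [40.0, 55.0, 30.0]
print(auc(progressors, non_progressors))  # → 1.0 (perfect separation)
```

An AUC of 0.5 means the biomarker ranks no better than chance; values such as the 0.879 reported for TFF3 mean that a progressor outranks a non-progressor in roughly 88% of pairs.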
Regarding the relationships among urine TFF peptides and other tubular injury markers, TFF3 correlated well with α1-MG, β2-MG, and NAG, and TFF1 correlated well with α1-MG and β2-MG, but TFF2 did not exhibit significant correlations with any of these markers (Table 2). The associations among mutual TFF peptides in urine were significant except for those between TFF2 and TFF3. The correlations between urine TFF peptides and age were significant (Supplementary Table S2). The data on urine TFF levels in healthy subjects are also shown in Supplementary Table S3. … (Table 3). Regarding the other urine biomarkers, the AUCs of β2-MG, TFF1, albumin, and NAG were also significant. Serum factors such as hemoglobin and uric acid were significant as well for predicting the CKD progression, as expected in a typical CKD cohort (Supplementary Table S4). In an analysis with the ratio of urine TFF3 to other parameters, the AUC of the ratio of urine TFF3 to urine TFF2 was the largest (0.859) (Supplementary Table S5). In a multivariate logistic regression analysis, higher levels of urine TFF3 (more than the median value, 65.3 μg/gCr) and urine α1-MG (more than the median value, 3.81 mg/gCr) at the start of the study were significantly associated with the CKD progression (Table 4). Prediction of the Renal Survival by Urine TFF. To investigate whether or not the baseline urine TFF levels predict the subsequent renal survival in CKD patients, we categorized the patients into groups by the level of each TFF (median value, μg/gCr) or by their combination with albuminuria (<300 or ≥300 mg/gCr) in Kaplan-Meier survival analyses (Figure 2). We observed a significant difference in the three-year renal endpoint-free survival when patients were divided into groups according to the median value of urine TFF1 or TFF3 (Figure 2(a) or 2(c)).
In contrast, we observed no significant difference in the three-year renal survival when patients were divided into groups by the median value of urine TFF2 (Figure 2(b)). Combining urine TFFs, which are suspected tubular injury markers, with albuminuria, which mainly reflects glomerular injury, the three-year renal endpoint-free survival probabilities were 100.0% (d), 87.3% (e), and 100.0% (f) for lower TFF1, TFF2, and TFF3 levels (less than the median value) and lower albuminuria (<300 mg/gCr), and 78.7% (d), 67.2% (e), and 81.7% (f) for higher albuminuria. … Compared with the renal survival group, the renal endpoint group had significantly higher levels of urine TFF1 and TFF3 but significantly lower levels of urine TFF2 (Figure 3). [Table 4 note: adjusted for age, gender, mean blood pressure, uric acid, and renin-angiotensin system blockade treatment; CI, confidence interval; CKD, chronic kidney disease; UAE, urinary albumin excretion; uα1-MG, urinary α1-microglobulin; uβ2-MG, urinary β2-microglobulin; uNAG, urinary N-acetyl-β-D-glucosaminidase; uTFF, urinary trefoil factor; *p < 0.05.] The analyses without creatinine correction of the levels of urine TFF according to the CKD stages, for the renal survival, and for the renal endpoint group or the renal survival group are shown in Supplementary Figures S3, S4, and S5, respectively.

Discussion
In this study, we measured the urine TFF levels in early, middle, and later CKD patients and determined the relationships between the urine TFF level and the CKD progression and outcomes.
Based on analyses of urine samples from CKD patients, we found that (1) the TFF1 and TFF3 levels significantly increased with progression of CKD stages, while TFF2 did not; (2) TFF3 to a greater degree and TFF1 to a lesser degree correlated with the decline in the eGFR and other urine markers of tubular injury, including α1-MG and β2-MG, as well as with the other family peptides TFF1 and TFF3, respectively; (3) TFF1 and TFF3 were significant predictors of the progression of CKD ≥ 3b in an ROC analysis, and TFF3 alone, but not TFF1 or TFF2, was a significant predictor in a multiple logistic regression analysis; and (4) in a survival analysis, TFF1 and TFF3, either alone or in combination with the level of albuminuria, were significant predictors of the renal outcome in patients with CKD. We showed that the urine levels of both TFF1 and TFF3 significantly increased with the progression of CKD, while the urine levels of TFF2 did not (Figure 1). Regarding urine TFF3, Du et al. reported findings consistent with our own in CKD patients [21]. However, as for urine TFF1, Lebherz-Eichinger et al. reported that urine TFF1 levels increased in the early stages of CKD and declined with disease progression, without significant changes in the fractional excretion of TFF1 [19], which is inconsistent with our data on urine TFF1. TFF2 was the first TFF to be identified and characterized [8,9]. In the human urinary tract, TFF3 is detected as the most abundant form, followed by TFF1 [14], while urine TFF2 and TFF3 are increased in patients with nephrolithiasis [14]. A recent study evaluated the urine TFF2 levels in patients with CKD [20]. Urine TFF2 concentrations were significantly higher in early or middle CKD stages than in later CKD stages and predicted early CKD stages in an ROC analysis, but without significant changes in the fractional excretion of TFF2 among CKD stages [20], which is also inconsistent with our data on urine TFF2.
Further studies will be required to clarify these inconsistencies in data on urine TFF1 and TFF2 levels at different CKD stages. The origin of urine TFF peptides has yet to be fully elucidated. TFF3 mRNA is expressed in the cortex of the human kidney [14], in contrast to genes that encode other TFF members. Elevated levels of TFF3 were also found in urine from patients with incident chronic kidney disease as part of a nested case-control study [17], as well as in serum of patients with CKD stages 1-5 [19,21]. The cultured human proximal tubular epithelial cell line HK-2 can synthesize and excrete TFF3 after exposure to immunoglobulin light chain, but not after exposure to fatty acid-free human serum albumin [32]. The promoter region of human TFF3 has the STAT3 binding site critical for the self-induction of TFF3 [33], as well as the NF-κB binding site [34]. Possible triggers for the increase in TFF3 may include, at least in part, inflammation via the transcription factors STAT3 and NF-κB, both of which are proposed as central regulators of CKD progression [35,36]. The exact role of TFF in the kidney is still uncertain. TFF3, also known as intestinal trefoil factor (ITF), a peptide expressed in goblet cells of the intestines, colon, and kidney [37], plays essential functions in both mucosal surface maintenance and repair [12]. By inhibiting apoptosis and promoting the survival and migration of epithelial cells into lesions, TFF3 facilitates the restoration of the intestinal epithelium as a protective barrier against injury [38,39]. TFF3 also plays a role in inducing airway epithelial ciliated cell differentiation [40]. Systemic TFF3 KO mice developed normally and were grossly indistinguishable from their wild-type littermates, without apparent renal abnormalities, but exhibited poor epithelial regeneration of mucosa after intestinal injury [41]. TFF3 might play a role in the repair of the tubular epithelium in the kidney, similar to its role in the gastrointestinal tract.
Examining conditional knockout mice of TFF3 specific to renal tubular epithelial cells may help clarify the precise function of TFF3 in the kidney. The kidney tubules of the outer stripe of the outer medulla are a major site of tff3 mRNA expression in rodents [15]. Histochemical localization using a labeled TFF3 fusion protein detected binding sites in the collecting ducts of the kidney [42], and aging was correlated with a decreased renal expression of the tff3 transcript in rodents [43]. In the normal human kidney, TFF3 has been found by immunohistochemistry in proximal and single distal tubular cells, as well as in collecting duct cells, from which a small amount of TFF1 is also secreted, while only TFF3 is detectable by a Western blot analysis in the medulla [14]. In the collecting ducts of the medulla, TFF1 and TFF3 are constituents of the mucus layer [14]. These reports suggest that the increases in TFF3 and TFF1 in urine reflect their excretion from the urinary tract of CKD patients, not merely their leakage from serum. Data on the renal distribution of TFF3 protein in CKD patients are very scant. Immunohistochemistry of renal biopsy specimens showed aberrant expression of TFF3, which was localized to the tubular epithelial cells in the renal cortex but not to the glomeruli, arterioles, or interstitium [21]. A recent genome-wide association study in the Framingham Heart Study revealed an association between TFF3 and LRP2, with multiple variants independently associated with urinary TFF3 levels [44]. Since LRP2 encodes megalin, a multiligand endocytic receptor localized in the renal proximal tubule, TFF3 might be a megalin ligand, much like α1-MG or β2-MG [45], leading to altered tubular handling of TFF3 in the presence of the variants.
In acute kidney injury (AKI) in animal models, a decrease in both urine TFF3 levels and renal TFF3 staining was observed in nephrotoxin-treated rodents [15], suggesting a gene regulatory response of TFF3 to tubular toxicity in this setting. In AKI in patients with acute decompensation of liver cirrhosis, urine TFF3 levels are significantly increased, particularly in acute tubular necrosis, compared to patients without AKI [26]. In the survival analysis of this study, TFF1 and TFF3, either alone or in combination with the level of albuminuria, were found to be significant predictors of the disease progression and renal outcome in patients with CKD (Figure 2). In an analysis of a panel of 14 urine biomarkers for incident kidney disease and the clinical outcome in the Framingham Heart Study participants, urine TFF3 levels predicted the all-cause mortality and death with coexistent kidney disease but not incident CKD or albuminuria, although that study did not investigate the renal outcome of doubling of the serum creatinine level or incident ESRD [18]. The ROC analysis of this study showed that urine TFF3 was a useful biomarker for predicting the progression of CKD ≥ 3b. Although other biomarkers, such as urine α1-MG, urine β2-MG, and hemoglobin, were also shown to be good predictors (Table 3 and Table S4), the AUC of urine TFF3 was the largest among these biomarkers, and the invasiveness of its measurement is lower than that of serum biomarkers. These findings underscore the usefulness of measuring the urine TFF3 levels. Our study has several limitations and strengths that should be kept in mind when interpreting the results. First, we lacked sufficient data on patients with diabetic nephropathy, which is the most frequent cause of ESRD in developed countries. However, including diabetic patients in the CKD cohort might have influenced the TFF levels, as other biomarkers such as serum Klotho are lower in diabetic patients than in nondiabetic patients [46].
Second, several methods for measuring the TFF levels have been established using in-house ELISA assays, such as in this study and others [21], or are commercially available as ELISA kits [19] or bead immunoassay platforms [17,18]. Previously published data have reported TFF3 concentrations in the urine of normal and diseased individuals to span the range between 0.03 and 7.0 μg/mL [47,48]. The validation of the TFF assay will be of great importance in the near future, for example with a paper-based assay that can be performed quickly and inexpensively [48]. Third, relatively few patients reached the outcome, which might have influenced the results of this study to some extent. Fourth, the precise expression of TFF in the kidney tissue of patients with CKD was not investigated in this study, although a previous report showed localization of TFF3 in the renal tubular epithelial cells, but not in the glomeruli, arterioles, or interstitium, in renal biopsy specimens of 23 patients with CKD [21].

Conclusions
Our data showed that urine TFF peptides are associated with other urine tubular injury markers and the renal outcomes in patients with CKD. Further studies are required to elucidate the precise localization and function of TFF in the human kidney and its role in the progression in CKD patients. Interventions that can modulate the level of urine TFF in such patients may be useful, since improving the outcome is the ultimate goal of biomarker studies.

Abbreviations
CIs: Confidence intervals
CKD: Chronic kidney disease
DBP: Diastolic blood pressure
eGFR: Estimated GFR
ELISA: Enzyme-linked immunosorbent assay
ESRD: End-stage renal disease
GFR: Glomerular filtration rate
MBP: Mean blood pressure
NAG: N-Acetyl-β-D-glucosaminidase
ORs: Odds ratios
ROC: Receiver operating characteristic
SBP: Systolic blood pressure
TFF: Trefoil factor family

Data Availability
The cohort data used in this article contain anonymized but individual data. Therefore, we would prefer not to share this database.
Ethical Approval
This study was approved by the Medical Ethics Committee and was conducted in accordance with the Declaration of Helsinki.

Consent
Written informed consent was obtained from each subject.

Disclosure
The funders had no role in the study design, data collection and analyses, decision to publish, or preparation of the manuscript.

… other medical staff in their department for their important contributions to the study. They also thank Brian Quinn for the editorial support for preparation of the manuscript. A portion of this study was supported by JSPS KAKENHI Grant no. JP16K09616 to Hitoshi Sugiyama.

Supplementary Materials
Supplementary Table S5: AUC of ratios among parameters for predicting the progression of CKD ≥ 3b. Supplementary Figure S1: the flowchart of inclusion and exclusion criteria in the study. Supplementary Figure S2: the standard curves and the dilution tests of the ELISA system for TFFs. Supplementary Figure S3: box and line plots showing the levels of urine TFF without creatinine correction according to the CKD stages. Supplementary Figure S4: the renal survival categorized by urine TFF alone without creatinine correction (A-C) or by their combination with albuminuria (D-F). Supplementary Figure S5: the levels of urine TFF without creatinine correction for the renal endpoint group and the renal survival group. (Supplementary Materials)
Enhancing the Functional Properties of Tea Tree Oil: In Vitro Antimicrobial Activity and Microencapsulation Strategy

In the context of addressing antimicrobial drug resistance in periocular infections, Tea Tree Oil (TTO) has emerged as a promising therapeutic option. This study aimed to assess the efficacy of TTO against bacterial strains isolated from ocular infections, with a particular focus on its ability to inhibit biofilm formation. Additionally, we designed and analyzed microcapsules containing TTO to overcome certain unfavorable physicochemical properties and enhance its inherent biological attributes. The quality of TTO was confirmed through rigorous analysis using GC-MS and UV-Vis techniques. Our agar diffusion assay demonstrated the effectiveness of TTO against ocular bacterial strains, including Corynebacterium spp., coagulase-negative Staphylococcus spp., and Staphylococcus aureus, as well as a reference strain of Staphylococcus aureus (ATCC 25923). Notably, the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) for all tested microorganisms were found to be 0.2% and 0.4%, respectively, with the exception of Corynebacterium spp., which exhibited resistance to TTO. Furthermore, TTO produced a substantial reduction in biofilm biomass, ranging from 30% to 70%, as determined by the MTT method. Through the spray-drying technique, we successfully prepared two TTO-containing formulations with high encapsulation yields (80-85%), microencapsulation efficiency (90-95%), and embedding rates (approximately 40%). These formulations yielded microcapsules with diameters of 6-12 μm, as determined by laser scattering particle size distribution analysis, which exhibited regular, spherical morphologies under scanning electron microscopy. Importantly, UV-Vis analysis post-encapsulation confirmed the presence of TTO within the capsules, with preserved antioxidant and antimicrobial activities.
In summary, our findings underscore the substantial therapeutic potential of TTO and its microcapsules for treating ocular infections.

Introduction

The pharmaceutical industry continually explores novel therapeutic alternatives for preventing and treating various diseases. Among those derived from natural sources, essential oils (EOs) biosynthesized by plants are an attractive option. These natural products are mainly obtained by distillation, applying different conventional and non-conventional extraction techniques [1]. This is feasible because EOs are mixtures of volatile substances of diverse chemical composition, mainly terpenes, phenylpropanoids, and aromatic derivatives, which vary according to the species and to environmental and anthropogenic factors.

EOs have been used for thousands of years in primary health care, demonstrating a broad spectrum of pharmacological activities [2,3]. Today, it is known that these medicinal properties arise because the components of EOs are capable of modulating multiple signal transduction pathways, individually or synergistically. Among the diverse pharmacological activities, the antimicrobial properties of EOs have gained importance, particularly in light of the escalating challenge of microbial resistance [4]. In this scenario, EOs are emerging as potential alternatives to antibiotics or as a complementary therapy alongside them.

One often overlooked healthcare issue pertains to periocular infections, which affect the area around the eyes, including the eyelids and the surrounding region. These infections can be caused by bacteria, viruses, or fungi. When left untreated or inadequately managed, they can progress and directly impact the eyes, leading to conditions like conjunctivitis, orbital cellulitis, chronic blepharitis, chorioretinitis, and endophthalmitis [5].
Both periocular and ocular infections are commonly treated with broad-spectrum antimicrobial drugs, often without proper pathogen identification through culture and susceptibility testing. This misuse can potentially promote antimicrobial resistance in ocular bacteria. In particular, bacterial biofilms are key contributors to resistance mechanisms, protecting the bacterial community [6].

Limited studies have been conducted on the use of essential oils (EOs) against bacteria from ocular infections, and even fewer have reported encapsulation systems that include EOs for this kind of pathology. However, using EOs in pharmaceutical applications presents challenges: their unfavorable physicochemical properties, including hydrophobicity, low solubility in aqueous media, high volatility, oxygen-mediated decomposition, and an undesired biological profile, like significant irritant action, restrict their applicability as therapeutic agents. Cosmetic and pharmaceutical formulations incorporating EOs have been developed to address these issues, yet stability problems persist, since exposure to environmental factors like air, heat, light, and moisture substantially alters their composition during storage. Encapsulation technologies, particularly microencapsulation via spray drying, offer a promising solution [7][8][9][10].

Microencapsulation is an increasingly favored technique in the pharmaceutical industry due to its flexibility, cost-effectiveness, and suitability for heat-sensitive compounds [7][8][9][10]. It enables the production of ultrafine solid structures with high stability and encapsulation efficiency.
Incorporating antibacterial EOs into pharmaceutical formulations for periocular infections has the potential to enhance treatment efficacy, reduce the risk of antimicrobial resistance development, prevent its spread to ocular structures, and minimize the risk of serious complications that could endanger vision. Moreover, the complementary pharmacological properties of EOs, such as their antioxidant and anti-inflammatory effects, have the potential to enhance overall ocular health. These properties fortify the immune system and shield ocular tissues from oxidative damage caused by inflammatory processes.

Tea Tree Oil (TTO) presents a promising profile for antimicrobial therapy. This oil is obtained mainly by steam distillation of the leaves of Melaleuca alternifolia (Cheel) Myrtaceae, a tree native to Australia [11,12]. ISO 4730:2017 standards establish that the main component of TTO is terpinen-4-ol, in a proportion not less than 40% [13]. Different studies on the subject have demonstrated the broad-spectrum antimicrobial activity of TTO, including antibacterial, antiprotozoal, antifungal, and antiviral activity [14][15][16][17].

With the passage of time, the gradual oxidation of components within TTO during storage can lead to a decrease in its antimicrobial effectiveness and potentially initiate undesired chemical reactions. Consequently, there is a growing demand for formulations that not only preserve the integrity of TTO but also enhance its inherent biological properties.
In view of these considerations, this study pursued a dual objective. Firstly, it assessed the antibacterial activity of a natural extract, TTO, against bacterial strains isolated from ocular infections. Secondly, it developed an encapsulation methodology using the spray-drying technique to microencapsulate the selected EO. The resulting microcapsules underwent various analyses, encompassing the evaluation of their physical and morphological characteristics, in vitro drug release profiles, and investigations through scanning electron microscopy. Therefore, further research and exploration of TTO as a promising therapeutic option for eye infections is essential.

Gas Chromatography-Mass Spectrometry Analysis

TTO was verified against the specifications given by the ISO standards for its components by gas chromatography coupled to mass spectrometry (GC-MS). Qualitative and quantitative analyses of TTO were performed using Clarus SQ8 equipment (Perkin Elmer, Waltham, MA, USA) with an Agilent DB-5 column, 30 m in length, 0.25 mm in diameter, and 0.25 µm film thickness. A temperature program was adapted from the method reported by Tranchida et al. [18]. The initial temperature was set to 50 °C, increasing at 3 °C/min until 150 °C, and was held until a total run time of 35 min was completed. Injector and detector temperatures were 280 °C. The solvent delay was 4 min, helium was employed as the carrier gas at 1 mL/min, and split injection mode was selected. Spectra were acquired in a single quadrupole mass spectrometer, under vacuum, with an ionization energy of 70 eV. The mass range was set to 51-400 Da. TurboMass 6.1.0 software was used to acquire and process data. Fragmentation patterns of the obtained signals were compared with those included in the NIST mass spectral library and, therefore, identified as regular components of TTO [13]. The percentage of terpinen-4-ol was established by calculating the ratio of its individual area to the total area.
Characterization and Validation of UV/Vis Method

The TTO was analyzed by UV/Vis spectroscopy using a spectrophotometer (Analytik Jena, Specord S600, Jena, Germany) to determine its complete absorption spectrum in absolute ethanol. Scanning the entire UV and visible wavelength range provides detailed information about the sample's absorption. The highest absorbance was obtained at 265 nm, coinciding with the value reported in the literature [19]. Therefore, the total content of TTO components was established by UV/Vis. A calibration curve was built using six dilutions of TTO in absolute ethanol (three replicates each) over a range of 3.58 µg/mL to 89.5 µg/mL. The analytical procedure was validated according to the following criteria: linearity was established from a calibration curve applying least-squares linear regression analysis and the correlation coefficient (r); accuracy and precision were evaluated by processing replicates of samples (n = 6), expressing the results as relative standard deviation (RSD). The absorbance of TTO solutions at λmax = 265 nm was measured. In addition, a standard equation y = 0.0106x − 0.0008 (R² = 0.9995) was obtained (where "y" represents the absorbance, and "x" represents the oil concentration (µg/mL)).

Biological Activity of Free TTO

2.3.1. Antimicrobial Screening

Bacterial Strains

The present study was conducted using three clinical isolates obtained from eye infections, in addition to the reference ATCC strain: Staphylococcus aureus (ATCC 25923). The clinical strains were isolated from the conjunctiva (Corynebacterium spp. and coagulase-negative Staphylococcus spp.) and the cornea (Staphylococcus aureus). Cultures were stored in 10% glycerol (v/v) at −80 °C. Bacterial strains were grown aerobically in MHA at 37 °C for 18 h; subsequently, the bacterial culture was prepared by inoculating one single isolated colony from a pure culture in MHA.
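The standard equation reported above can be inverted to estimate TTO concentration from a measured absorbance. A minimal sketch in Python (the function name is ours; the slope and intercept are the values from the calibration curve):

```python
# Standard curve at λmax = 265 nm: y = 0.0106·x − 0.0008,
# where y is absorbance and x is TTO concentration in µg/mL.
SLOPE = 0.0106
INTERCEPT = -0.0008

def tto_concentration(absorbance: float) -> float:
    """Estimate TTO concentration (µg/mL) from absorbance at 265 nm."""
    return (absorbance - INTERCEPT) / SLOPE

# Example: an absorbance of 0.52 corresponds to roughly 49 µg/mL,
# inside the validated range (3.58–89.5 µg/mL).
print(round(tto_concentration(0.52), 1))
```

Estimates are only meaningful within the validated linearity range of the curve.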
Diffusion Agar Assay

The agar diffusion method was employed to assess the antimicrobial effectiveness of TTO. A microbial suspension of 0.5 McFarland was prepared for each microorganism and then spread onto MHA plates using a sterile cotton swab. In total, 25 µL of free TTO at different concentrations (895 mg/mL to 56 mg/mL) were tested in triplicate. The inhibition zone produced for each microorganism was observed after 24 h and measured using a Vernier caliper.

Determination of the Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC)

The antimicrobial activity of TTO was initially assessed using the Clinical and Laboratory Standards Institute (CLSI) protocol for antimicrobial susceptibility by agar diffusion [20]. The MIC was defined as the lowest TTO concentration without visible growth, and the MBC was defined as the lowest concentration reducing the initial inoculum by ≥99.9%.

In the subsequent phase, MIC and MBC were determined using the microdilution method. The isolated microorganisms were cultured on MHA at 37 °C until they reached the exponential growth phase. Serial dilutions of TTO were prepared in a 96-well plate: flat-bottom 96-well microplates were filled with 100 µL of MHB per well, except for the first row, which contained 200 µL of TTO. To overcome the insolubility of TTO in the medium, it was supplemented with Tween 80 detergent at a final concentration of 0.05%. A diluted bacterial suspension was added to each well to achieve a concentration of ~10⁶ colony-forming units (CFU)/mL. Controls were run in parallel.
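Broth microdilution proceeds by twofold dilution down the plate, so the concentration tested in each well can be tabulated as a quick sanity check. A hypothetical sketch (the starting concentration and well count are chosen for illustration, not taken from the protocol):

```python
def twofold_series(start_pct: float, n_wells: int) -> list[float]:
    """Concentrations (% v/v) in successive wells of a twofold serial dilution."""
    return [start_pct / 2 ** i for i in range(n_wells)]

# e.g. starting at 1.6% v/v across 8 wells:
series = twofold_series(1.6, 8)
print([round(c, 4) for c in series])
# → [1.6, 0.8, 0.4, 0.2, 0.1, 0.05, 0.025, 0.0125]
```

The MIC is then read as the lowest concentration in this series showing no visible growth.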
Bacterial Kill Curves

Time-kill curves illustrate bacterial elimination over time as a function of the concentration of TTO. The CLSI protocol was followed. The determination is made by counting the number of viable cells at different times after subjecting the bacterial inoculum to TTO. A bactericidal profile was established when there was a decrease of 3 logarithmic units within a given time. The bacterial death curves were made taking the results obtained in the determination of the MIC as a starting point.

Bacterial cultures in the exponential growth phase were suspended in MHB until reaching a concentration of approximately ~10⁸ CFU/mL. Next, the inoculum was adjusted to ~10⁶ CFU/mL to be inoculated against the different TTO dilutions. Bacterial viability counts were performed at 0, 1, 2, 3, and 4 h of incubation at 37 °C, and again at 24 h. Different aliquots of each sample were collected, and serial dilutions were plated [21][22][23]. The lethality curves were represented graphically by expressing log10 CFU/mL as a function of time.
Evaluation of Antibiofilm Activity

Crystal Violet Assay

Starting from fresh cultures, a dilution was made in TSB, adjusting the inoculum concentration to ~10⁶ CFU/mL. To induce biofilm formation, 200 µL of this suspension was added to each well of a sterile polystyrene plate and incubated for 24 h at 35 °C with continuous agitation (130 rpm). Subsequently, once the biofilm was formed, the supernatant was discarded, and the wells were washed three times with 200 µL of PBS buffer to remove planktonic cells. TTO solutions were prepared in TSB at different % v/v concentrations (0.8, 1.5, 12.5, and 25). After the final wash, 200 µL of these solutions was added, and the plate was incubated for 24 h at 35 °C in a shaker with continuous agitation at 130 rpm. Finally, the biofilm biomass was quantified in triplicate by the Crystal Violet (CV) staining assay, reading absorbance on a microplate reader (Thermo Scientific-Multiskan FC, Munich, Germany). The percentage reduction in biofilm biomass was calculated with respect to the average OD570 obtained from wells that were not incubated with TTO, by applying the following equation: reduction % = [(Abs0 − Absx)/Abs0] × 100, where Abs0 is the absorbance at 570 nm of controls to which no TTO treatment was added, and Absx is the absorbance at 570 nm of the sample after the TTO treatment [22,23].
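The biomass-reduction equation above is straightforward to apply to OD570 readings. A minimal sketch (the function name and example readings are ours):

```python
def biofilm_reduction(abs_control: float, abs_treated: float) -> float:
    """Percent reduction in biofilm biomass from OD570 readings:
    reduction % = [(Abs0 − Absx) / Abs0] × 100."""
    return (abs_control - abs_treated) / abs_control * 100

# Example: a control OD570 of 0.80 falling to 0.32 after TTO treatment
# is a 60% reduction in biomass.
print(round(biofilm_reduction(0.80, 0.32), 1))
```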
MTT in Assembled Biofilms

Bacterial viability was evaluated using tetrazolium salts (MTT), which are reduced to formazan in the presence of live cells; the formazan absorbance can be measured at 570 nm. The exposure of biofilms to TTO treatment and controls was performed as described above. Subsequently, the samples were washed, 200 µL of MTT reagent (200 µg/mL in PBS) was added to each well, and the plates were incubated in darkness for 3 h at 35 °C. Then, the supernatant was discarded, 150 µL of DMSO was added to solubilize the formazan crystals, and the absorbance was recorded in a microplate reader (Thermo Scientific-Multiskan FC). Viability (%) was calculated as described above [23].

Antioxidant Activity of TTO

The antioxidant potential of both the free and microencapsulated TTO was measured using the DPPH method. The DPPH free radical scavenging potential was determined according to the method of Brand-Williams et al. [24], with equal quantities of TTO and microcapsules. The antioxidant activity was determined by preparing 5 dilutions (1, 10, 100, 500, and 1000 µg/mL). As a control, various solutions of ascorbic acid were employed due to their well-known antioxidant capacity. The analysis was performed in a microplate by adding 2 mL of the solution and 2 mL of DPPH methanolic solution. After 30 min of incubation at room temperature in the dark, the absorbance was measured at 517 nm using a spectrophotometer (Analytik Jena, Specord S600, Jena, Germany) with a microplate reader.
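DPPH scavenging is conventionally reported as percent inhibition relative to the blank radical solution; the text does not state the formula explicitly, so the expression below is the standard convention (an assumption), with a function name of ours:

```python
def dpph_inhibition(abs_blank: float, abs_sample: float) -> float:
    """Percent DPPH radical scavenging from absorbances at 517 nm:
    inhibition % = (A_blank − A_sample) / A_blank × 100
    (standard convention, not stated explicitly in the text)."""
    return (abs_blank - abs_sample) / abs_blank * 100

# Example: a blank absorbance of 1.00 dropping to 0.20 → 80% scavenging.
print(round(dpph_inhibition(1.00, 0.20), 1))
```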
Preparation of Emulsions

The emulsions (1 and 2) were prepared using MDX and AG as carriers (wall material). MDX and AG were previously dissolved in distilled water at 50 °C for 1 h and left to stand for 24 h at room temperature. The next day, SD was added as a lubricant. SD, MDX, and AG were used in a proportion of 2:1:1, respectively. For the emulsion preparations, TTO was incorporated into the wall-material emulsion using a high-power homogenizer (Proscientific PRO 250, Oxford, UK) at 24,000 rpm for 5 min. Immediately after the emulsification process, the TTO emulsion was dried by spray drying. The schematic representation of the microencapsulation process is depicted in Figure 1.
Spray Drying

Spray drying was performed using a laboratory-scale Mini Spray Dryer (Büchi B-290, Büchi Labortechnik AG, Flawil, Switzerland). The samples were atomized with a hot air stream in the drying chamber. A two-fluid nozzle with a 0.5 mm cap orifice diameter was used. This type of nozzle operates on the basic principle of using high-speed air to break up the liquid, resulting in smaller liquid particles and higher flow rates. The following parameters were fixed. For emulsion 1: pump, 10; aspirator, 100; Q-flow, 600 L/h; inlet temperature, 130 °C; and outlet temperature, 100 °C. The same parameters were used for emulsion 2, except for the inlet temperature, which was 120 °C, and the pump, which was 7.

Determination of Microencapsulation Yield (EY), Microencapsulation Efficiency (ME) and Oil Embedding Rate (ER)

The EY (%) was calculated as the ratio between the recovered solids (g) after the spray-drying process and the initial solids of the formulation (g) using Equation (1). The theoretical oil content was determined using Equation (2), where Moil, MMX, MAG, and MSD are the masses (g) of TTO, MX, AG, and SD added to the system, respectively. The ME (%) was calculated as the ratio of the total oil content obtained inside the microcapsules to the surface oil content (Equation (3)). To measure the total oil content, a sample of microcapsules (100 mg) was put into 15 mL anhydrous ethanol. After sonication for 1 h, the microcapsules were filtered and washed with 10 mL and then another 5 mL of ethanol, and all the filtrate was pooled. To measure the surface oil content, 100 mg of microcapsules was put in a funnel and washed with 5 mL of ethanol 3 times; both filtrates were pooled to measure the oil content. The absorbance was measured with a UV/Vis spectrophotometer, and the total oil content in the sample was calculated according to the standard equation.

The ER (oil embedding rate) was defined according to Equation (4), where M2 was the total oil mass obtained in 100 mg of microcapsules, and M1 was the oil mass obtained on the surface of 100 mg of microcapsules.

The span is the most common way to express distribution width. It was calculated by applying Equation (5): span = (d90 − d10)/d50, where d10, d50, and d90 correspond to the diameters at 10, 50, and 90% of the cumulative particle size distribution.
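The word descriptions of Equations (1), (4), and (5) translate directly into quick calculations. A minimal sketch in Python; the function names and example numbers are illustrative, not measured values, and the ER expression follows the later description of it as the ratio of total to theoretical oil:

```python
def encapsulation_yield(recovered_g: float, initial_g: float) -> float:
    """EY (%): recovered solids over initial formulation solids (Equation (1))."""
    return recovered_g / initial_g * 100

def embedding_rate(total_oil_mg: float, theoretical_oil_mg: float) -> float:
    """ER (%): total oil over theoretical oil, per the Results description
    (Equation (4) itself is not reproduced in the text)."""
    return total_oil_mg / theoretical_oil_mg * 100

def span(d10: float, d50: float, d90: float) -> float:
    """Distribution width (Equation (5)): span = (d90 − d10) / d50."""
    return (d90 - d10) / d50

# Illustrative numbers: 8.3 g of powder recovered from 10 g of solids,
# and diameters within the 6–12 µm range reported for these microcapsules.
print(round(encapsulation_yield(8.3, 10.0), 1))  # → 83.0
print(round(span(6.0, 9.0, 12.0), 2))            # → 0.67
```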
Scanning Electron Microscopy (SEM)

The morphology and surface features of spray-dried microcapsules were evaluated using scanning electron microscopy (SEM) (ZEISS Sigma, Oberkochen, Germany) operated at 5 kV with magnifications of 5000× at Lamarx Laboratories (Universidad Nacional de Córdoba, Argentina). Beforehand, the samples were attached to double-sided adhesive tape mounted on SEM stubs and metallized with gold/palladium under vacuum.

Fourier-Transform Infrared Spectroscopy (FTIR)

Fourier-transform infrared spectroscopy (FTIR) analysis of TTO in its pure form, as well as of the mixture of wall material and the structures obtained after spray drying, was performed on a droplet of each sample. The equipment used was a CARY 630 FTIR (Agilent Technologies, Santa Clara, CA, USA), covering a range of 500 to 4000 cm⁻¹ with a resolution of 3 cm⁻¹. Sixteen scans were performed for each sample analyzed.

Antimicrobial Screening of Microencapsulated TTO

The antibacterial effect of the microparticles was evaluated by completely releasing the encapsulated TTO against the previously mentioned S. aureus strains (clinical isolate and ATCC). For the assay, Formulation 1 was selected, and tubes were prepared with samples corresponding to 50, 100, 200, and 300 mg of microcapsules with 0.5 mL of DMSO and 0.5 mL of MHB. Subsequently, each sample was sonicated for 1 h and then centrifuged at 10,000 rpm at 5 °C for 10 min (Thermo ST16R). These samples were inoculated with 1 mL of bacterial suspension to achieve a concentration of approximately ~10⁶ CFU/mL for the purpose of evaluating the MIC and MBC. Parallel controls were performed in MSA.
GC-MS Analysis of TTO and Encapsulated TTO

The composition of the commercial sample of TTO used was compared with that reported in the literature. The percentage of the chemical marker stated by ISO 4730:2017 [13], terpinen-4-ol, was determined as 44.8%, as well as those of other highlighted components (Figure 2A). It is important to point out that o-cymene, a decomposition indicator, was detected at a low percentage in this analysis.

On the other hand, after microencapsulation, the composition of TTO inside the capsule (Formulation 1) was also analyzed, and a similar fingerprint was observed, where terpinen-4-ol was the main component (52.1%) (Figure 2B).

Biological and Antioxidant Activity of Free TTO

3.2.1.
Antimicrobial Activity of TTO

The antimicrobial activity of TTO was evaluated at different concentrations. The inhibition zone produced for each microorganism (Table 1) was measured using a Vernier caliper. The agar diffusion method was applied to test the antimicrobial properties of TTO, and it showed activity against the studied microorganisms: TTO inhibited cellular growth, and as the concentration of TTO increased, the inhibition zone (halo) of microbial growth also widened. The agar diffusion method is commonly used as a quick test to determine potential susceptibility or resistance to an antibiotic. The results can be compared with those of Mumu (2018) [25], where inhibition zones were produced by TTO against clinical bacterial isolates after 24 h of incubation. Although that study showed that TTO is effective against both Gram-positive and Gram-negative strains, our study found inhibition values for S. aureus isolates higher than those reported there. However, for new molecules there are no standard reference measurements for comparison, thus the agar diffusion method becomes a qualitative tool for checking the inhibition produced by these active molecules, and it should only be used as a screening test [26].

MIC/MBC

After using the agar diffusion method as a screening, the antimicrobial activity was evaluated more accurately through determination of MIC and MBC using a broth dilution technique. The values are shown in Table 2. The lowest MIC among the tested microorganisms was 0.2% v/v (1.8 mg/mL) of TTO, whereas the lowest MBC was 0.4% v/v (3.5 mg/mL). Both the clinical and collection strains of S. aureus exhibited the same MIC and MBC values, whereas higher concentrations were required for coagulase-negative Staphylococcus spp. to achieve inhibitory and bactericidal effects. However, these values align with reports from other authors [27,28]. Moreover, it has been described that S.
aureus requires concentrations between 0.5–1.25% v/v (4.5–11.2 mg/mL) for MIC and 1–2% v/v (9.0–18.0 mg/mL) for MBC, whereas, in this study, the same effect was found with concentrations of 0.2% v/v (1.8 mg/mL) and 0.4% v/v (3.5 mg/mL), respectively. As for the Corynebacterium spp. isolate, no MBC was found, whereas the MIC was found at a concentration of 0.4% (3.5 mg/mL).

According to the literature, bacteria are generally susceptible to TTO at concentrations of 1.0% (9.0 mg/mL) or less. However, higher MIC values have been disclosed for other Gram-positive isolates, such as Staphylococcus and Micrococcus, and Gram-negative isolates, such as Pseudomonas aeruginosa [29,30]. TTO is predominantly bactericidal in nature, although it can be bacteriostatic at lower concentrations [12].

Studies on essential oils as antimicrobial agents have focused on their ability to kill different types of microorganisms, including bacteria, fungi, and viruses [31]. Results show that many essential oils have potent antimicrobial activities and can be effective against a variety of pathogens, including some bacterial strains resistant to conventional antibiotics. Moreover, for an EO to be considered an active compound, the extract must have an approximate MIC of less than 1 mg/mL [32].

Bacterial Kill Curves of Free TTO

Bacterial death curves are a method employed to determine the in vitro activity of different concentrations of a compound against a microorganism over a specific period. The American Society for Microbiology established that any new approach claiming to be antimicrobial or antibacterial must achieve a reduction of at least 3 logarithms in CFU [33]. This reduction indicates that the antimicrobial agent has been effective in eliminating approximately 99.9% or more of the bacterial culture, demonstrating its bactericidal activity.
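The 3-log10 bactericidal criterion (a ≥99.9% kill) can be checked directly from viable counts. A minimal sketch with hypothetical CFU values (not measurements from this study):

```python
import math

def log_reduction(cfu_initial: float, cfu_final: float) -> float:
    """log10 reduction in viable counts (CFU/mL)."""
    return math.log10(cfu_initial / cfu_final)

def is_bactericidal(cfu_initial: float, cfu_final: float) -> bool:
    """ASM criterion: at least a 3-log10 (≥99.9%) reduction."""
    return log_reduction(cfu_initial, cfu_final) >= 3.0

# Hypothetical counts: an inoculum of ~1e6 CFU/mL falling to 5e2 CFU/mL
# is a ~3.3-log reduction, meeting the bactericidal threshold.
print(is_bactericidal(1e6, 5e2))  # → True
```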
Various concentrations of TTO were utilized to assess its efficacy over time as an antimicrobial agent. The selected strains were isolated from ocular infections, as previously mentioned (Section 2.3.1), and a reference strain was also included. Figure 3 presents the outcomes of these experiments along with their respective controls. In the case of coagulase-negative Staphylococcus, bacterial death was observed with 0.2% TTO after 1 h. Conversely, for S. aureus ATCC and S. aureus, the same effect was observed with the same concentration of TTO after 2 h. However, no bacterial death was observed for Corynebacterium spp.

Li et al. (2016) noted a trend in which, with increasing concentrations of TTO, the rate of cell death and the duration of the growth lag phase correspondingly increased. These findings indicated that TTO exhibited time- and concentration-dependent antibacterial effects [12,34].

The mechanism of action of TTO is primarily attributed to its monoterpenoid components, which are the major bioactive constituents responsible for its antimicrobial properties. Monoterpenoids are a class of naturally occurring organic compounds with a characteristic molecular structure. They have demonstrated potent antimicrobial activities against a wide range of microorganisms, including bacteria, fungi, and viruses.
Antibiofilm Activity of Free TTO

Biofilm is considered a virulence factor that promotes the survival, resistance, and pathogenic capacity of microorganisms, complicating treatments for infections and significantly increasing the severity of diseases. With the purpose of exploring the effectiveness of TTO, we decided to test its antibiofilm effect. Biofilm formation was induced in all ocular bacterial strains. Biofilm generation after 24 h was quantified by CV assay, indicating that all strains yielded positive biofilm formation (OD570 > 0.24) [35]. The performance of TTO against mature biofilms was evaluated using CV and MTT assays, the former yielding information regarding overall biomass quantification, whereas the latter evaluates bacterial viability, proliferation, and cellular cytotoxicity. Figure 4 displays the results of these experiments.

In all strains, a reduction in biofilm biomass between 30 and 70% was observed when treated with TTO. No differences in biomass reduction were observed between the different concentrations when CV assays were performed. However, when cell viability was evaluated using MTT, greater effectiveness was observed when the applied treatments were at lower concentrations of TTO. A reduction of over 80% was achieved in treatments with 1.6% v/v TTO in 3 (A, B, and D) out of the four strains used. On the other hand, C required a more diluted concentration (0.8% v/v) to achieve greater effectiveness compared to the rest of the treatments. There are some precedents, considering that the minimum eradication concentration of mature biofilms formed by S. aureus bacteria was two times higher than the MIC but never higher than 1% of TTO [36]. Furthermore, the use of essential oils at low concentrations has often been effective in inhibiting biofilm formation in pathogenic strains [37].

Antioxidant Activity

Before encapsulation, TTO showed an antiradical activity above 80%. After the encapsulation process, the percentage was 60%.
The antioxidant properties of natural compounds can help protect cells from damage caused by free radicals, unstable molecules that can harm cells and contribute to aging and disease development. In this study, different concentrations of TTO were tested, with ascorbic acid, known for its antioxidant properties, used as a control. A 40% activity was observed at a TTO concentration of 1 µg/mL, whereas 90% activity was achieved at 1 mg/mL. A similar study conducted by Kim et al. (2004) in a methanolic solution yielded comparable results, with approximately 80% free radical scavenging activity [38]. Regarding the antioxidant capacity of microencapsulated TTO, the tested activity was maintained in the same proportion.

Microencapsulation Yield (EY), Microencapsulation Efficiency (ME), and Oil Embedding Rate (ER)

The percentage of microparticle recovery (microencapsulation yield, EY) is the amount of powder obtained in the spray dryer collector relative to the total solid content prior to the process. The EY after the spray-drying procedure was estimated at 80-85%. Increasing the initial solid quantity could also increase the powder yield. However, a study involving the microencapsulation of lavender essential oil showed that a high concentration of solids can decrease the emulsion's water content, reducing the time needed to form a membrane on the particle surface during spray drying. This resulted in efficient powder recovery [39].
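The yield definition above reduces to a single ratio; a small sketch with hypothetical masses chosen to fall in the reported 80-85% range:

```python
def encapsulation_yield(powder_collected_g: float, total_solids_g: float) -> float:
    """Microencapsulation yield (EY, %): powder recovered from the
    spray-dryer collector relative to the solid content fed in."""
    return 100.0 * powder_collected_g / total_solids_g

# Hypothetical masses (not from the paper): 41.5 g recovered from 50 g of solids.
print(encapsulation_yield(41.5, 50.0))  # 83.0
```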
The ME was about 90-95% for both microcapsule preparations. The ME of TTO is the ratio between the total TTO within the microcapsules and the surface TTO. The ER, calculated as the ratio between the total TTO and the theoretical TTO, was about 40%. This value indicates the loading capacity and the degree to which the coating prevents the negative effects of essential oils, such as volatilization. For medicinal applications, it is not practical to use a formulation with a low quantity of active pharmaceutical ingredient, even if it provides high encapsulation efficiency, as a significant amount of polymer would be required to achieve the necessary therapeutic dose [39].

Particle Size and Morphological Characterization of the Microcapsules of TTO

The samples were atomized with a hot air stream in a drying chamber, making it possible to obtain solid microparticles in which the EO was trapped within a film of encapsulating material. Particle size is important, as it can affect the microencapsulation efficiency as well as the interaction with other fluids. The particle size of a microcapsule ranges from 1 to 1000 µm depending on the manufacturing method and its specific purpose. Particle size is considered a critical aspect of encapsulating substances and is an important factor in the controlled release of bioactive agents [40,41].
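As a sketch of these ratios (the ME formula below uses the common entrapped-over-total definition, which is an assumption on our part since the text's wording is ambiguous; all masses are hypothetical):

```python
def microencapsulation_efficiency(total_oil_g: float, surface_oil_g: float) -> float:
    """ME (%), assuming the common definition: oil actually entrapped
    (total minus surface oil) over total oil recovered."""
    return 100.0 * (total_oil_g - surface_oil_g) / total_oil_g

def embedding_rate(total_oil_g: float, theoretical_oil_g: float) -> float:
    """ER (%): total oil recovered from the capsules over the oil fed in."""
    return 100.0 * total_oil_g / theoretical_oil_g

# Hypothetical masses consistent with the reported ranges (~90-95% ME, ~40% ER).
print(microencapsulation_efficiency(4.0, 0.25))  # 93.75
print(embedding_rate(4.0, 10.0))                 # 40.0
```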
All the parameters described in the methods were analyzed (Table 3), resulting in an average diameter (d50) of 6 µm for both formulations. The most relevant value was d90, which characterizes the majority of the particle population. The d90 values obtained were 12.90 ± 0.30 µm and 12.20 ± 0.30 µm for Formulations 1 and 2, respectively, reflecting a uniform particle size, as expected for them to be considered microparticles. Furthermore, the calculated span value demonstrates a narrow particle size distribution for both formulations across all replicates. Slightly broader particle distributions were recorded for the microencapsulated EO of Lippia sidoides Cham. (Verbenaceae) (8-15 µm) [42] and for Origanum vulgare L. essential oil (7-18 µm) microencapsulated using a combination of MX, AG, and modified starch [43].

In our study, the smallest particle size was recorded with Formulation 2 (12.20 µm). However, no significant differences in particle size were found compared to Formulation 1 when equipment parameters were varied prior to spray drying. For these reasons, both formulations could be used as potential strategies for microencapsulating TTO. To ensure proper administration of a microencapsulated active ingredient, it is advisable for the microparticle size to be sufficiently small [44].

The SEM images obtained play a crucial role not only in verifying the acquired particle size data but also in assessing the morphological features that elucidate the functional attributes of the particles, such as their capacity to retain and safeguard the bioactive ingredient [45]. The captured images revealed distinct individual spherical structures with smooth surfaces, devoid of any cracks or fissures that might allow the release of TTO. This ensures the sustained preservation of its functionality over time.
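The span mentioned above is conventionally computed as (d90 − d10)/d50; a small sketch using the reported d50 and d90 for Formulation 2, with a hypothetical d10 (the text does not report one):

```python
def span(d10: float, d50: float, d90: float) -> float:
    """Span of a particle size distribution: (d90 - d10) / d50.
    Values around 1-2 indicate a fairly narrow distribution."""
    return (d90 - d10) / d50

# d50 = 6.0 um and d90 = 12.2 um from the text; d10 = 2.5 um is hypothetical.
print(round(span(2.5, 6.0, 12.2), 2))  # 1.62
```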
The SEM images are consistent with the obtained span value. While a range of sizes is evident, a characteristic feature of particles produced by spray drying due to factors such as the material used, the formulation, and the atomization process [41], the observed size range remains relatively narrow. In a study involving the microencapsulation of lavender essential oil, smaller particles were observed to fuse with larger ones, as is evident in images B and D [39,46]. The particle sizes extracted from the SEM images were manually measured by identifying the largest corresponding dimension. The results indicated particle diameters ranging from 3 to 19 µm (Figure 5C-F), with the majority falling within the intermediate range, reaffirming the narrow particle size distribution. Considering that the hydrodynamic diameter determined via DLS is derived from analyzing the sample in a liquid state (without affecting the sample's aggregation state), whereas the diameter measured through SEM requires preparing the sample on a support, such as a thin film, followed by drying and additional carbon deposition, the consistency between the results is highly commendable. This further underscores that the particles are accurately represented by the solid sphere model. The FTIR spectra of TTO and the microcapsules with and without TTO are presented in Figure 6.
The spectrum of TTO revealed a stretching vibration peak corresponding to the C-H bond at 2960 cm−1. Additionally, it exhibited numerous peaks within the range of 1550 cm−1 to 600 cm−1. The peak at 1126 cm−1 was attributed to the stretching vibration absorption of the C-O bond in the tertiary alcohols of terpenes and terpineol, respectively. The peak at 924 cm−1 corresponded to the bending vibration absorption of an unsaturated double bond.
For the microcapsules without TTO, characteristic hydroxyl peaks (O-H stretching) were observed around 3332 cm−1, peaks corresponding to C-H stretching from the carboxylic group around 2929 cm−1, and peaks corresponding to amine or carbonyl groups at 1608 cm−1. For the microcapsules with TTO, the corresponding peaks appeared at 3298 cm−1 (O-H stretching), 2930 cm−1 (C-H stretching), and 1603 cm−1 (amine or carbonyl groups). All of these peaks were characteristic of the wall material used in the formulation design. These bands representing the coating materials were evident in the microcapsule spectra, implying that these carbohydrates maintained their structures during the drying process. The spectra of the microcapsules with and without TTO confirmed successful microencapsulation through spray drying, with no TTO present on the surfaces of the microcapsules [47,48].

Antimicrobial Screening of Microencapsulated TTO

A clinical strain and a reference strain of S. aureus were employed to evaluate the antimicrobial effectiveness of the TTO microcapsules. Bacterial samples were exposed to different amounts of microcapsules (50, 100, 200, 300, and 400 mg) for 24 h. Afterward, cell suspensions were 10-fold serially diluted and plated on MHA; in parallel, as a control, bacteria were plated on a selective and differential medium (MSA) to encourage the proper growth of S. aureus. Growth inhibition was observed starting at 100 mg of microcapsules for S. aureus ATCC and at 200 mg for the clinical S. aureus isolate. Various concentrations of Formulation 1 were employed to assess its antimicrobial effectiveness over a period of 24 h against the reference strain of S. aureus ATCC and the clinical isolate of S. aureus. Figure 7 illustrates the effect of Formulation 1 on cell viability over the course of the incubation period. The data show a clear bactericidal profile for both strains, with a reduction in bacterial viability exceeding 99.99% relative to the control group. These results indicate the successful release of the active compound, TTO, from Formulation 1, supporting its potential as an antimicrobial agent.

Conclusions

TTO demonstrated efficacy against bacterial strains isolated from ocular infections, including Corynebacterium spp., coagulase-negative Staphylococcus spp., and Staphylococcus aureus, as well as a reference strain of Staphylococcus aureus (ATCC 25923). TTO achieved a substantial reduction in biofilm biomass, ranging from 30% to 70%.

The microencapsulation technique used in this study successfully produced TTO-containing formulations with high encapsulation yields, microencapsulation efficiency, and embedding rates, as well as preserved antioxidant and antimicrobial activities. FTIR analysis and quantification results demonstrate the absence of TTO on the microcapsule surfaces in all formulations, reaffirming that spray-drying microencapsulation using natural biopolymers is a promising approach to overcome the limitations of TTO, such as high volatility and susceptibility to oxidation, and to improve stability and shelf life.

Our formulations showed uniform particle sizes. SEM images provided visual confirmation of the particle size data, revealing well-defined spherical structures with smooth surfaces, ensuring sustained preservation of EO functionality. The observed particle size distribution remained relatively narrow, despite some variation, which is characteristic of spray-dried particles.

The study's results underscore the significant therapeutic potential of TTO and its microparticles for the treatment of ocular infections.

Figure 1. Schematic representation of the emulsion preparation and microencapsulation of Tea Tree Oil (TTO).

Figure 4.
Activity of free TTO on biofilm formed by (A) S. aureus; (B) S. aureus ATCC; (C) coagulase-negative Staphylococcus spp.; and (D) Corynebacterium spp. Each bar represents the percentage of biomass reduction evaluated with CV and the cell viability evaluated with MTT. Data represent the mean ± SD of six replicates from three independent experiments.

Figure 6. FTIR spectra of samples of Tea Tree Oil (TTO) and microcapsules with and without TTO.

Figure 7. Viability assay depicting growth inhibition of S. aureus (red) and S. aureus ATCC (green) when exposed to different concentrations of Formulation 1 at 37 °C for 24 h. Data represent the mean ± standard deviation (SD) from three independent experiments (n = 3).

Table 2. MIC and MBC (% v/v) values of tea tree oil against bacterial strains. * Could not be measured.
Use of UAV Images in 3D Modelling of Waste Material Stock-Piles in an Abandoned Mixed Sulphide Mine in Mathiatis, Cyprus

The island of Cyprus is famous for its rich deposits of volcanic mineralisation that yielded large quantities of copper, gold, and silver. The abandonment of the waste material in several dump sites during exploitation severely impacted the environment. A significant environmental issue is the acid mine drainage from the hydration of the large barren piles that cover these old open pit mines. However, the abandoned piles are still enriched in precious metals and perhaps even rare earth metals. These dump sites may form a new possible "deposit", which may attract companies' economic interest. Removing the stockpiles can be cost-effective, since the secondary extraction process is profitable, in addition to the benefits from the restoration of the natural environment. The case study considered here pertains to the North Mine of Mathiatis, where unmanned aerial vehicle (UAV) images were used not only to create a 3D topographic map but also to locate these dump sites and, finally, to create a 3D model of one of these waste stockpiles. The methodology proposed here for locating dump sites using point cloud data (x, y, z, RGB) and high-resolution images provided by UAVs will assist in the secondary mining of old open-pit mines by defining the bottom and top stockpile surfaces. The reconstructed 3D waste piles can also be used to calculate the volume they occupy and other parameters, such as the gradient of slopes, that are essential for estimating the cost of possible restoration. The proposed methodology was applied to the stockpile STK1, which has the most available drillhole data, and its volume was estimated at approximately 56,000 m3.
Introduction

According to the Mines Department of the Republic of Cyprus, in an area of 142 square kilometres there are currently 61 active exploration licenses (and another 15 under consideration) for the discovery of copper, gold, silver, and other precious metals and, generally, mixed sulfide ores. Most of these exploration licenses relate to already exploited areas that were abandoned several years ago due to various problems that had arisen. The possibility of exploiting these areas, apart from the economic benefits, also directly impacts the environment, as the main problem in the areas of the old mines is related to acid drainage. Much research has been carried out to study and restore these areas [1-4], where restoration by phytoremediation using low-pH-resistant plants is suggested in [2]. However, rehabilitation, apart from being a long-term process that helps to visually restore the landscape to its natural form, does not solve the problem of heavy metal contamination of water [3]. The old Mathiatis mines are one such case: they started to operate in the 1930s (1936-1938) and were abandoned in the 1980s, leaving behind open-pit mining (Figure 1) and piles of disposed waste material with or without mineralisation [2,3]. One of the major problems of these piles is the accurate estimation of their volume, since usually there are no data on the area topography before mining began.

In this paper, based on high-resolution photographs from unmanned aerial vehicles (UAVs) [5-9], also known as drones, an attempt has been made to estimate the area occupied by the abandoned stockpiles, measure their volumes, and spot areas that need further exploration.
UAVs have grown in recent years, and their use in the mining sector has provided a quick, effective, and cheap solution compared to other methods, such as using satellites or aerial photographs [10,11]. It is difficult to obtain direct results from these technologies due to factors such as dense cloud cover, the position of the satellite, and dust in the atmosphere [12]. Thus, in many mining cases, UAVs have been used (Figure 2) in the exploration, exploitation, or rehabilitation phases [6,7,12-15].
Figure 2. Percentage presentation of UAV application in each mining process phase [14].

The main objective of this work was to create a 3D model of the abandoned stockpiles and estimate the potentially exploitable reserves. For this purpose, the GEOVIA Surpac software [16] combined data from the UAV and the available drillholes into resource calculations by applying the Kriging technique [17-19]. This is an interpolation technique commonly used in mining applications [20-23], with the advantage of using the actual spatial distribution of the data in the estimations while simultaneously minimising the square error based on the spatial data positions. Treating waste from open-pit mines as a source of mineralisation increases the mining industry's sustainability and simultaneously reduces the environmental footprint of old mines in the area. The proposed methodology was applied in the case of the Mathiatis mine, where there are three small and five large waste stockpiles (with a focus on the one with the most data available, stockpile STK1) with significant mineralisation and a total volume of about 56,000 m3. Based on the present investigation, the area is considered suitable for further investigation to be exploited in the future.

Geology of Cyprus (of the Study Area)

Cyprus is an island in the easternmost part of the Mediterranean region, considered one of the most geodynamically active regions globally.
The wider area constitutes the junction of the Eurasian, Nubian, and Arabian plates [24-30]. In particular, the northward motion of the Arabian plate results in the westward tectonic escape of the Anatolian plate, leading to the southwestern motion of the Aegean micro-plate [31,32]. Eventually, the subduction of the Nubian plate under the Eurasian one occurs along the subduction zone extending from the southern parts of the Ionian Sea and Crete, respectively, to Cyprus [33-37]. This geodynamic regime has resulted in complicated geological processes imprinted in various lithological and tectonic features [38]. This paper focuses on describing the properties of the geological-lithological formations structuring the island of Cyprus. These formations are documented in the following distinct geological zones: (i) the Pentadaktylos-Kyrenia zone, (ii) the Mamonia zone (or complex), (iii) the autochthonous sedimentary rock zone, (iv) the Troodos zone (or ophiolite), and (v) the volcanogenic massive sulfide ore deposits. The volcanogenic massive sulfide ore deposits are the main formation associated with this research. In particular, the Mt. Troodos ophiolite mass is related to significant massive sulfide ore deposits [39] resulting from marine volcanic processes [40]. These ore deposits are of great importance for the island of Cyprus, as they include more than 40% sulfide minerals, such as pyrite (FeS2), chalcopyrite (CuFeS2), bornite (Cu5FeS4), sphalerite (ZnS), chalcocite (Cu2S), galena (PbS), etc., while they are characterised by significant concentrations of copper (Cu), zinc (Zn), lead (Pb), gold (Au), and silver (Ag) [40]. The sulfide mineral formations are directly associated with the hydrothermal fluid circulation occurring in the oceanic crust rocks (Figure 3). The metal elements (iron, copper, zinc, sulfur, etc.) included in these fluids have been leached from the underlying rocks.
In particular, the leaching process has predominantly taken place in a multiple-dyke system due to high-temperature water circulation through cracks and joints; the upward motion of magma bodies along the extension axes of the sea floor causes this water overheating.

Proposed Methodology Plan

Data were collected from UAV flights at an elevation of 70 m with a DJI Phantom RTK Pro drone (accuracy of 5 cm). Thirty control points combined with the 5100 photographs taken (with 80% overlap) generated a point cloud of 21 million points in total. This point cloud (courtesy of Harold Andrew Daniels) was used first to create the study area's topographic map and then to define the bottom surface of the stockpile, for which no topographic data were available. For the reconstruction of the bottom surface, the following approach was used:
• The stockpile's boundary was digitised by visual inspection on the point cloud.
• The points of the stockpile's boundary, combined with drillhole data, were imported into the GEOVIA Surpac database to build a pseudo-3D block model (BM). The estimation parameter is the elevation Z, since the z coordinate of the imported data and block centroids is dummy (zero).
• After statistical and geostatistical analysis, estimates of elevation (here treated as a BM attribute) were calculated at each block centroid using the Kriging technique.
• These elevation estimates were validated using Kriging residual statistics, described analytically in Section 3.6, as proposed by [17,18]. If the Kriging validation failed, the variables of the Block Kriging model were corrected and the elevation estimations were repeated.
• After successful validation of the Kriging, the X and Y coordinates of the BM centroids, along with the estimated elevation as Z, were used as input points for the bottom stockpile's DTM. The bottom and top DTM surfaces were combined to create the 3D solid model of the stockpile and to estimate its volume.
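The final step, combining the bottom and top DTM surfaces into a volume, can be sketched as a prism sum over a common grid (a simplification; Surpac's solid-model volume computation is more elaborate):

```python
# Minimal sketch: stockpile volume as the sum of (top - bottom) elevation
# differences over a regular grid of square cells, i.e. a stack of prisms.
def volume_between_surfaces(top, bottom, cell_size):
    """top, bottom: 2D lists of elevations (m) sampled on the same grid;
    cell_size: grid spacing (m). Returns volume in cubic metres."""
    cell_area = cell_size * cell_size
    vol = 0.0
    for row_t, row_b in zip(top, bottom):
        for zt, zb in zip(row_t, row_b):
            vol += max(zt - zb, 0.0) * cell_area  # ignore cells where surfaces cross
    return vol

# Toy 2x2 grid with 10 m cells and a uniform 5 m thickness: 4 * 100 * 5 = 2000 m^3.
top = [[105.0, 105.0], [105.0, 105.0]]
bottom = [[100.0, 100.0], [100.0, 100.0]]
print(volume_between_surfaces(top, bottom, 10.0))  # 2000.0
```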
This methodology is best described in the flow diagram of Figure 4.

Topographic Map Creation

UAVs were used to scan the Mathiatis mine area, and the resulting point cloud dataset was employed to create the topographic map.
A total of nine .las files were combined in the open-access program CloudCompare (www.danielgm.net/cc/) (accessed on 25 November 2022), and after noise removal and point density reduction (one point every 2 m, for memory handling), the final map of the area's current state was created. This map and the corresponding digital terrain model (DTM) are displayed in Figure 5a,b, respectively.

Concerning the digitisation of the stockpile boundary, a different procedure from the previous one was used. The boundary line of the stockpile was created on the point cloud itself (Figure 6). The resulting boundary consists of about 280 points, which are the contact points of the stockpile with the ground, and is common to both the upper and lower stockpile surfaces.
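The "one point every 2 m" density reduction can be sketched as grid-based subsampling (CloudCompare's actual subsampling options differ in detail; this only illustrates the idea):

```python
# Keep one representative point per 2 m x 2 m grid cell of the point cloud.
from math import floor

def grid_subsample(points, cell=2.0):
    """points: iterable of (x, y, z); returns one point per occupied cell,
    keeping the first point that falls into each cell."""
    kept = {}
    for x, y, z in points:
        key = (floor(x / cell), floor(y / cell))
        kept.setdefault(key, (x, y, z))
    return list(kept.values())

pts = [(0.1, 0.1, 10.0), (0.5, 0.9, 11.0),  # same 2 m cell -> one survives
       (2.3, 0.2, 12.0)]                    # different cell
print(len(grid_subsample(pts)))  # 2
```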
Since the point cloud, apart from the coordinate values, also contains RGB (red-green-blue) values, this procedure was more accurate and flexible (erratic points could be easily identified and discarded manually).

Drillhole Data Database

In the Mathiatis mine area, a new drilling program was developed with 39 drillholes (a total drilled length of 829 m), most of them on the tops of the stockpiles observed during the geological study. Since, in this study, the bottom surface of the tailings layer is needed, only geological data were used. The locations of the 39 drillholes are presented in Figure 7.
Three major formations were identified, with minor differences in colouration and mineralogical composition. These formations are the wastes of the previous exploitation in the area (tailing formation-TF), the Perapedi formation (umber layer-U) and the upper pillow-lava formation (UPL). The point elevation at the end of the tailings formation was not only used as a new attribute of the BM, along with the point elevation of the stockpile boundary, but also as a control point for the error of the BM's estimates.

Statistical-Geostatistical Analysis

Histograms of 278 point elevations were constructed to check the data distributions and proceed to experimental semivariogram plots. The first attempt, on raw data values, presented a Gaussian distribution histogram plot (Figure 8a). However, the relative semivariogram plot (Figure 8b) presented a trend towards greater values for the range, and a Gaussian mathematical model was calibrated on it. This trend can be observed in the experimental semivariogram shape (large sill values versus the variance, presented with a green line in Figure 8b). Several attempts in GEOVIA Surpac were made to check for anisotropy in the data [42,43].
Due to the small number of data, and since the software uses search ellipsoids that are cones with angle values less than 30°, the number of data pairs became so small that it was impossible to build directional variograms.

As evident from the semivariogram in Figure 8b, the data showed a trend due to the slope of the waste disposal, so the best-suited technique was universal Kriging. As this option is not available in the training version of Surpac, our approach was to remove the trend from the data.
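The trend removal just described can be sketched as an ordinary least-squares fit of a second-degree polynomial surface, subtracted from the elevations. The coefficients below are synthetic for illustration, not those of Table 1.

```python
import numpy as np

def detrend_quadratic(x, y, z):
    """Fit z ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2 by least squares
    and return the de-trended values z - z_hat plus the coefficients."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return z - A @ coef, coef

# synthetic elevations with a known quadratic drift plus noise
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, 278)
y = rng.uniform(0.0, 100.0, 278)
z = 350.0 + 0.05 * x - 0.02 * y + 1e-4 * x**2 + rng.normal(0.0, 0.5, 278)
resid, coef = detrend_quadratic(x, y, z)
```

The residuals are then what is handed to the Kriging step in place of the raw elevations.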
The possible polynomial tendency depends on the old hillside shape and, based on the point cloud data and borehole information, is best described by a second-degree polynomial in x and y (coefficients in Table 1), where x and y are the coordinates with respect to the position of drillhole P1. The new transformed value z = Z − ẑ is used in the Surpac algorithm. The transformed elevation data spatial statistics are presented in Figure 9.

The mathematical semivariogram parameters calibrated on both data cases are presented in Table 2. These parameters were used to estimate the BM's variables, which are explained in the following subparagraph.

Block Model of the Study Area

BMs for each stockpile were constructed, covering a distance of 10-20 m around every boundary (Figure 10).
The procedure was implemented exclusively on the stockpile STK1, based on the drillhole data availability. The "pseudo"-3D BM for STK1 (all centroids are on the same elevation, 0.5) consisted of 5400 blocks with extents of 120 m and 180 m in easting and northing, respectively (Table 3).

The location of the measurements plays an essential role in determining a random field, as spatial continuity requires that similar values are observed in neighbouring locations. The spatial continuity can be described by the variation of the dispersion in space, which is given by the following theoretical relationship [44,45]:

γ(r) = ½ E[(z(x + r) − z(x))²]

where E[·] denotes the mean value of the expression in the brackets. For the calculation of the experimental semivariogram, all pairs belonging to distance r and direction θ are used:

γ̂(r, θ) = 1/(2N(r, θ)) Σ_i [z(x_i + r·n(θ)) − z(x_i)]²

where x_i is the position vector of the measurements of parameter z; r, θ are the distance and direction of calculation; N(r, θ) is the number of pairs; and n(θ) = [cos θ, sin θ]ᵀ is the unitary vector in direction θ.

The equations of ordinary Kriging are based on the semivariogram, assuming a mean value that is unknown but constant in the field of investigation, and are described by Equations (4)-(7) [45]. The estimator is a weighted sum of the measurements:

ẑ(x₀) = Σ_i λ_i z(x_i)  (4)

The unbiased condition is ensured by the following Equation (5):

Σ_i λ_i = 1  (5)

The weights λ_i and the Lagrange multiplier μ are obtained from the Kriging system:

Σ_j λ_j γ(x_i − x_j) + μ = γ(x_i − x₀), i = 1, …, n  (6)

and the estimate of the Kriging variance is given by:

σ²(x₀) = Σ_i λ_i γ(x_i − x₀) + μ  (7)

Validation of Block Model Estimates

At this point, the BM's attribute estimates needed to be validated before the final construction of the bottom tailings formation surface. Since all these estimates depended significantly on the variogram parameters, selecting the best mathematical model was essential. In this way, the validation of the Kriging model was evaluated through the validation of the semivariogram mathematical model. The selected validation method depended on the estimation errors measured at the control points. In this case, the elevation values of each drillhole point were normalised with the Kriging variance as estimated at each BM centroid.
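The ordinary-Kriging equations above can be sketched directly: solve the linear system for the weights and the Lagrange multiplier, then form the estimate and the Kriging variance. The exponential variogram parameters below are placeholders (the calibrated values are in Table 2), and the helper names are hypothetical.

```python
import numpy as np

def exp_variogram(h, nugget=0.06, sill=6.8, a=41.0):
    """Exponential semivariogram; gamma(0) = 0 for exact interpolation."""
    g = nugget + sill * (1.0 - np.exp(-h / a))
    return np.where(h == 0.0, 0.0, g)

def ordinary_kriging(xy, z, x0, vario=exp_variogram):
    """Solve the system of Eqs. (4)-(7); weights sum to 1.
    Returns (estimate, Kriging variance) at location x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = vario(d)
    A[:n, n] = A[n, :n] = 1.0          # unbiasedness constraint row/column
    b = np.append(vario(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]
    return lam @ z, lam @ b[:n] + mu

# four points symmetric about the target: all weights must equal 0.25
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([100.0, 101.0, 99.0, 100.5])
z_hat, k_var = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
```

By symmetry the estimate equals the arithmetic mean of the four values, and the Kriging variance is positive; this variance is the quantity mapped per block centroid in the validation step.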
This methodology is called the residuals (Q)-test [17], and the normalised residual is estimated through the equation:

ε_i = (Z(s_i) − Ẑ_i) / σ_i

where Ẑ_i is the Kriging elevation estimate, σ²_i is the Kriging variance estimate and Z(s_i) is the actual elevation of the stockpile's bottom from drillhole data at position s_i.

Q1 value. The statistical value Q1 is the mean of the normalised errors:

Q1 = 1/(n − 1) Σ ε_i

where the ε_i are random regionalised variables; under the model, Q1 follows the normal distribution with μ = 0 and σ² = 1/(n − 1) [17].

Q2 value. The statistical value Q2 is the mean of the squared normalised errors:

Q2 = 1/(n − 1) Σ ε_i²

Again, assuming that all errors follow the normal distribution, the squared errors follow the chi-square distribution, so Q2 has a mean equal to 1. The chi-square is a non-symmetric distribution which is used in the case of a small number of data, n < 40, which holds for the present case (21 data). The acceptance region at a 95% confidence level is determined from charts in the international literature [17]. In choosing the best semivariogram parameters, Q1 needs to be close to zero and Q2 close to 1, and, as mentioned before, the errors ε_i should follow a normal distribution.

Block Model Elevation and Variance Estimates

The Kriging algorithm was first executed using only the data from the points of the boundary line (no drillhole data were used), and elevation estimates were carried out for all block centroids (Figure 10a). The estimates for each block in the vicinity of every drillhole were compared to the closest drillhole's real value to calculate the relative errors. Next, data from one drillhole were inserted into the calculations, and the Kriging algorithm was executed again. This procedure continued until all drillholes were inserted into the calculations and all errors were estimated. The BM of the first scenario (raw elevation values) was validated, as will be described in Section 3.5. This first scenario was rejected since the Q2 residual statistic constraint was not met.
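The residuals Q-test above can be sketched as follows; the divisor (n − 1) follows the convention quoted in the text, and the synthetic residuals merely illustrate the acceptance logic.

```python
import numpy as np

def q_statistics(z_true, z_hat, k_var):
    """Normalised cross-validation residuals and the Q1/Q2 statistics:
    Q1 = mean of eps (target 0), Q2 = mean of eps^2 (target 1)."""
    eps = (np.asarray(z_true) - np.asarray(z_hat)) / np.sqrt(k_var)
    n = len(eps)
    q1 = eps.sum() / (n - 1)
    q2 = (eps**2).sum() / (n - 1)
    return q1, q2

# synthetic example with n = 21 control points, as in the study
rng = np.random.default_rng(2)
z_true = rng.normal(350.0, 1.0, 21)
z_hat = z_true + rng.normal(0.0, 0.5, 21)   # estimation errors
k_var = np.full(21, 0.25)                   # Kriging variance sigma^2 = 0.5^2
q1, q2 = q_statistics(z_true, z_hat, k_var)
```

With well-calibrated variances, Q1 stays near 0 and Q2 near 1; a scenario is rejected when Q2 falls outside the chi-square acceptance band.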
A second scenario, Scenario II, was implemented to remove a polynomial trend (Equation (1)), as described in Section 3.4. In Scenario II, after the first estimation of the Z-transformed data in every block of the BM, data from drillhole NMMT_BH19 were inserted into the calculations, and the Kriging algorithm was executed again (Figure 10b). New estimates for the elevation were carried out, and new errors were calculated. This procedure was continued until all drillhole data were inserted into the calculations, and all errors were estimated for every step (Figure 10c-h). The drillhole data import sequence was NMMT_BH18, NMMT_BH19, NMMT_BH21, NMMT_BH22, NMMT_BH20, NMMT_BH17, as displayed in Figure 10b-g.

Figure 10. BM plans presenting the elevation (transformed) distribution as data from drillholes are imported into the Kriging algorithm: (a) no drillholes are used; (b) NMMT_BH18 is added to (a); (c) NMMT_BH19 is added to (b); (d) NMMT_BH21 is added to (c); (e) NMMT_BH22 is added to (d); (f) NMMT_BH20 is added to (e); (g) NMMT_BH17 is added to (f); (h) colormap legend.

An area of high variance values can be identified in the west part of the STK1 stockpile. This increased uncertainty could be reduced by adding two extra drillholes near the red dot locations (see Figure 11).

Estimations, errors, and Kriging variances from each step were used to assess the block model's validity again. Since both Q1 and Q2 constraints were met (Figure 12), the construction of the lower TF surface and the estimation of the stockpile's volume were carried out. The Kriging variance of the final Z-transformed values (Figure 10g) is presented in Figure 11.

After the validation of the Z predictions, the X, Y, and Z-predicted values of each BM centroid were used as points for the creation of the bottom stockpile surface. This means that the number of points used for the bottom stockpile surface is the same as the number of blocks shown in Figures 10 and 11 (inside the red line).
Figure 12. Q1 and Q2 distribution probability density functions.

From the above checks, the Kriging algorithm based on the parameters of Table 2 underestimates the elevation (Q1 < 0) and overestimates the error (Q2 < 1), which could be improved by further reducing the semivariogram sill value. It was noted that the residual statistics, although far from the mean values of the respective statistics Q1 and Q2, were within the confidence level (CL) of 5%, so the selected semivariogram parameters cannot be rejected.

Stockpile Volume Estimation

After the validation of the final model, the estimated values were back-transformed into "real" elevations at each block's centroid. Northing and easting, along with this elevation, were used to construct the bottom STK1 surface, as shown in Figure 13a. This surface also contains elevation values estimated from extrapolation, which are high-uncertainty values. These values were removed by cutting the surface with the boundary that was digitised from the point cloud UAV data (Figure 13b).
This surface, combined with the top surface of STK1 (Figure 14) constructed as described in Section 3.2, was used to estimate the stockpile's volume, which is equal to 56,000 m³. The two blue lines show the traces of two sections of STK1. These two sections describe the present and past topography of the area, which is essential for accurate volume estimation.
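Once the two co-registered surfaces exist, the volume estimate reduces to summing the per-cell thickness times the cell footprint. A minimal sketch with toy 3 x 3 grids (the function name and values are illustrative):

```python
import numpy as np

def stockpile_volume(z_top, z_bottom, cell_area):
    """Volume between two co-registered elevation grids:
    sum of the (non-negative) thickness times the cell footprint."""
    thickness = np.clip(z_top - z_bottom, 0.0, None)
    return float(thickness.sum() * cell_area)

# toy grids on 2 m x 2 m cells (cell_area = 4 m^2)
z_top = np.array([[5.0, 6.0, 5.0],
                  [6.0, 8.0, 6.0],
                  [5.0, 6.0, 5.0]])
z_bottom = np.full((3, 3), 4.0)
volume = stockpile_volume(z_top, z_bottom, cell_area=4.0)  # 64.0 m^3
```

Clipping the thickness at zero guards against crossing surfaces at the stockpile edge, where the interpolated bottom may locally sit above the top.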
It is important to mention again that exploitation of the Mathiatis area started in the 1930s, along with the waste disposal, and that no topographical data were present (Figure 15).

Discussion

Since the construction of the top surface's DTM is a common procedure using UAV data (photogrammetry), with many available publications, the contribution of the current work is the combination of a UAV point cloud and drillhole data to estimate the underlying surface (not optically reached by UAV flights). Since, in the first stages of investigation, only a small number of drillholes is available, the proposed methodology can produce fast and relatively accurate predictions of the stockpile's volume. For this purpose, the linear interpolation Kriging technique was applied as a tool to predict areas that are not "visible". The approach used is a pseudo-3D BM (actually, the BM has only one block in the Z direction, i.e., a 2D BM). In each BM centroid, the z value of the stockpile-bedrock interface is estimated. After the validation of the Z predictions, the X, Y, and Z-predicted values of each BM centroid were then used as points for the creation of the bottom stockpile surface. Another technique used in cases such as these is indicator Kriging [18,46], but due to the lack of available data (few drillholes), the proposed technique was preferred. In this work, a commercial software package commonly used in mining projects for 3D block estimations was used, instead of an existing Kriging algorithm [47,48] or any other open-access code. An adaptation was made in the Surpac software for a 2D Kriging application for the elevation of the bottom stockpile, combining point cloud and drillhole data.
Conclusions

This paper proposed a new methodology for waste stockpile volume calculation with a small number of available drillholes, by combining aerial photographs and drillhole data. This methodology can be applied to any old waste deposit by combining a UAV point cloud, which includes coordinates (X, Y, Z) and colour data RGB (red, green, blue), with the available drillhole data.
The drillholes were used to upgrade the first model, based initially on the stockpile's boundary defined from the point cloud dataset from aerial photogrammetry, by finding the optimal parameters of the Kriging algorithm. In the elevation data, there exists a second-order drift due to the old hillside shape. This drift was subtracted from the data, and Kriging was applied to the new elevation data. Since Kriging predictions depend on the semivariogram parameter selection, a validation technique based on the residual statistics Q1 and Q2 was used, leading to the selection of an exponential semivariogram with a 41 m scale parameter, a sill of 6.8, and a small nugget effect of 0.06. The volume of the deposits of the stockpile with the most available data was estimated at approximately 56,000 m³. The advantage of the Kriging method is that the uncertainty is also estimated. Based on the above methodology, new drillhole locations in areas of high Kriging variance can be proposed to minimise the uncertainty. It was noted that the most secure method would be to use topographic maps from before the placement of the waste deposits, but this was not feasible due to the time elapsed since the old open pit mine was abandoned.

The recovery of these piles is essential for the remediation of the Mathiatis area of Cyprus, where the acid rain phenomenon is intense, causing contamination problems in the aquifer. At the same time, the remaining mineralisation, combined with the existence of the umber deposits beneath the waste stockpiles, has attracted the interest of many companies. It is only a matter of time before open pit mining in the broader area proceeds. Hence, the proposed model could be a valuable tool for the first stages of planning the exploitation. To reduce the model prediction risk, more targeted drilling would be necessary to reduce the "dark" spots within the reservoir. For example, in the examined stockpile, the available drillholes were spaced at regular intervals in the centre of the pile.
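The suggestion above, placing new drillholes where the Kriging variance is highest, can be sketched as a greedy selection with a minimum spacing; the function name and the 20 m separation threshold are assumptions for illustration only.

```python
import numpy as np

def propose_drillholes(centroids, k_var, k=2, min_sep=20.0):
    """Greedy selection of k block centroids with the highest Kriging
    variance, keeping candidates at least min_sep metres apart."""
    order = np.argsort(k_var)[::-1]          # highest variance first
    picked = []
    for i in order:
        if all(np.linalg.norm(centroids[i] - centroids[j]) >= min_sep
               for j in picked):
            picked.append(i)
        if len(picked) == k:
            break
    return centroids[np.array(picked, dtype=int)]

# toy example: the two highest-variance centroids that are 20 m apart
centroids = np.array([[0.0, 0.0], [10.0, 0.0], [40.0, 0.0]])
k_var = np.array([1.0, 5.0, 4.0])
sites = propose_drillholes(centroids, k_var, k=2, min_sep=20.0)
```

The spacing constraint prevents both proposed holes from landing inside the same high-variance patch.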
The same number of drillholes would provide more information if they were scattered throughout the volume of the pile, reducing the unmeasured distances and, thus, the uncertainty. The environmental restoration and the stockpile's residual value, plus the existence of confirmed umber deposits beneath the stockpiles, will add a secondary profit for future exploitation. The proposed methodology can be used for more accurate volume estimation of all the stockpiles (by suggesting possible positions for new drillholes), optimising the mining design and the selection of suitable mechanical equipment.
New 1,2-Dihydropyridine-Based Fluorophores and Their Applications as Fluorescent Probes

New 1,2-dihydropyridine (1,2-DHP)-based fluorophores 1a–1h were designed and synthesized by a one-pot four-component condensation reaction using dienaminodioate, aldehydes, and an in situ-generated hydrazone mediated by trifluoroacetic acid. The photophysical properties of the 1,2-DHPs were studied in detail, and a few of them exhibited selective mitochondrial staining ability in HeLa cell lines (cervical cancer cells). A detailed photophysical investigation led to the design of 1,2-DHP 1h as an optimal fluorophore suitable for its potential application as a small-molecule probe in aqueous medium. Also, 1,2-DHP 1h exhibited sixfold enhanced emission intensity compared with its phosphorylated analogue 1h′ in the long-wavelength region (λem ≈ 600 nm), which makes 1,2-DHP 1h′ meet the requirement of a bioprobe for protein tyrosine phosphatases, as shown in L6 muscle cell lysate.

■ INTRODUCTION

Small molecule-based organic fluorophores are essential for sensing and imaging of biological specimens with high sensitivity and fast response. 1 Even though a large variety of fluorophores are known, only a few have optimal performance, because a majority of them often suffer from photobleaching, autofluorescence, and cytotoxic behavior that limit their further applications in biology. 2 A number of heterocyclic fluorophores were reported for fluorescent labeling of biomolecules, sensing, and bioimaging applications; however, for most of these molecules, the emission maxima were observed in the green window of less than 500 nm. 3 Consequently, the discovery of new heterocyclic fluorophore scaffolds with improved photophysical properties is highly warranted. Fluorescent properties exhibited by 1,4-dihydropyridines (1,4-DHPs) 4 and our recent interest in 1,2-dihydropyridines (1,2-DHPs) 5 have inspired us to develop new 1,2-DHP-based fluorophores with improved photophysical features.
1,4-DHPs are known to exhibit blue fluorescence with appropriately substituted electron-donating groups at the 1-position and electron-withdrawing groups at the 3- and 5-positions (Figure 1). 6 Furthermore, a higher Stokes shift was observed in the presence of an electron-donating aryl system at the 4-position of 1,4-DHP, which is attributed to an internal charge transfer in the excited state between the two π-systems. 7 The 4-aryl-substituted 1,4-DHP, comprising two different chromophores separated by an sp3 carbon, served as a tunable photoactivated dyad involving energy and electron transfer processes between them (Figure 1). 8 The fluorophore ability of 1,4-DHP was further extended as a chemosensor, where a water-soluble glucopyranosyl 1,4-DHP is used in the detection of 2,4,6-trinitrophenol. 9 1,2-DHPs, however, were not explored in detail for their photophysical properties to the extent of 1,4-DHPs, but 2-pyridones, which are structural analogues of 1,2-DHPs, were recently reported as fluorescent probes. 10 Recently, ylidenemalononitrile enamines were reported as fluorescent "turn-on" indicators for their ability to undergo cyclization with 1° amines to produce fluorescent 1,2-DHP products. 11 In the quest for developing new fluorophores with improved photophysical properties, herein we have explored 1,2-DHPs with an extended π-conjugation as novel fluorophores. As N-phenyl-1,2-DHPs absorb in the near-UV region (Table S1), the corresponding derivatives with absorption in the visible region would be preferred for biological applications. Hence, the present 1,2-DHP design (Figure 1) involves a push−pull system with different electron-rich N-benzylideneamine substitutions that offer tuning of their photophysical behavior. 12 This new N-benzylideneamine-appended 1,2-DHP offered a remarkable bathochromic shift in the absorption and emission profiles with large Stokes shifts (Table 1).
The application of these fluorophores was demonstrated as selective mitochondrial staining agents in HeLa cells. Furthermore, the design offers different sites for appendage of bioactives or functionalities required for conjugation, and such applicability has been demonstrated here with a probe for protein tyrosine phosphatase (PTP) enzymes in L6 muscle cell lysate.

■ RESULTS AND DISCUSSIONS

Synthesis. We have recently reported a one-pot multicomponent synthesis of 1,2-DHPs from dienaminodioate and imines, generated from aromatic aldehydes and amines, mediated by trifluoroacetic acid at room temperature. 5 As an extension of this methodology, the aromatic amine is replaced with an in situ-generated hydrazone, and by condensation with the other components, the expected N-benzylideneamine-appended 1,2-DHP was observed under mild conditions, thus serving as a facile one-pot four-component reaction (Scheme 1). A series of 1,2-DHPs 1a−1g were synthesized in moderate to good yields (20−60%) by utilizing hydrazones of differing electronic properties to decipher their photophysical properties (Scheme 1). 1,2-DHPs 1a−1b were prepared to assess the role of phenyl substitution at the 6-position. The remaining 1,2-DHPs 1c−1g were synthesized with acetaldehyde to evaluate the effect of the phenyl group as a contributing factor behind the 1,2-DHPs' fluorophore ability. This methodology offers a choice of appending any aliphatic or aromatic group at the 6-position, thus providing a suitable place for conjugation with bioactives or biomolecules. In addition, these 1,2-DHPs can undergo regioselective hydrolysis of the 5-CO2Me group, which was supported by its single-crystal X-ray structure (Figure S1). This selectivity can be rationalized by the difference in nitrogen lone-pair delocalization with the 3- and 5-CO2Me groups, thus offering another site for conjugation via an amide linkage.
Furthermore, we have designed and synthesized a water-soluble fluorophore 1,2-DHP 1h by utilizing an aldehyde generated from triethylene glycol monomethyl ether and N,N-diethyl salicylaldehyde, which further offers an appropriate hydroxyl group substituent for appending any cleavable targeting group such as phosphate for in vitro phosphatase-sensing applications (Scheme 1). Photophysical Properties. The photophysical properties of 1,2-DHPs 1a−1h, viz., absorption, emission, quantum yields, and emission lifetime measurements, are provided in Table 1 and Supporting Information (Figures S2 and S3). The present design involves a D−π−A or push−pull type system; thus, the nature and position of the substituents on the 1,2-DHP moiety are crucial to tune their intramolecular charge transfer (ICT) properties, which leads to different photophysical properties. 1,2-DHPs 1a−1h exhibit maximum absorption wavelengths (λ max ) between 396 and 448 nm in methanol with strong molar extinction coefficients (5388−27 300 M −1 cm −1 ) and emit in the long wavelength region of 500−600 nm. 1,2-DHPs 1a and 1b exhibited similar photophysical properties; however, replacement of the phenyl group at the sixth position with a methyl group did not offer any change in the properties of 1,2-DHPs 1c, 1d, and 1f when compared to the former. These results indicate that the tuning of fluorophoric properties of these 1,2-DHPs can be made by variations in the Nbenzylideneamine moiety. Thus, the sixth position of 1,2-DHP is an ideal position for conjugation with other biomolecules for fluorophore tagging. To assess the role of N-benzylideneamine in 1,2-DHP, the N-ethanimine-appended 1,2-DHP 1e was also synthesized, and indeed, it was found poorly emissive when compared to all other 1,2-DHPs because of the reduced ICT character with the lowest molar extinction coefficients (ε = 5388 M −1 cm −1 ). As expected, 1,2-DHP 1g with a strong donating group led to a significant bathochromic shift of λ max (ca. 
20 nm) and λem (ca. 50 nm) with a higher molar extinction coefficient (ε = 23 292 M⁻¹ cm⁻¹). The fluorescence quantum yields of 1,2-DHPs 1a−1g were determined by a relative comparison method using coumarin 153 (ref 13) as a standard and were found to be in the range of 0.032−0.125, with 1,2-DHP 1g being the highest. These compounds exhibited remarkable Stokes shift values, which can help in obtaining better fluorescence imaging with minimal self-absorption of the fluorophore. It is well established that, for better cellular imaging, compounds should absorb in the visible region and have high fluorescence quantum yields. In this regard, on the basis of the observed photophysical properties, the present design of 1,2-DHPs possesses the potential for application as bioprobes.

Applications. The mitochondrial membrane has a negative potential of −180 mV; therefore, cationic dyes are typically used for imaging these organelles.14 The push−pull system in 1,2-DHPs (Figure 1) causes the ring nitrogen of the 1,2-DHP to attain a sufficient positive charge; thus, 1,2-DHPs may serve as mitochondrial staining agents. To assess the potential of 1,2-DHPs for specific mitochondrial staining, 1,2-DHPs 1a−1g were studied in HeLa cells. Initially, the cytotoxicity of the 1,2-DHPs was evaluated using the MTT assay, and the 1,2-DHPs were found to exhibit greater than 80% cell viability at 30 μM (Figure S4). HeLa cells were incubated with 30 μM of the 1,2-DHPs for 10 min, and the excess compound was washed away with Hanks' balanced salt solution (HBSS) buffer. As shown in Figures 2 and S5, the 1,2-DHPs were localized mostly in the cytoplasm, specifically staining the mitochondria in HeLa cells, and no nuclear uptake was observed.
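The remarkable Stokes shifts noted in the photophysical discussion above follow directly from the absorption and emission maxima. A minimal sketch of the conversion to wavenumbers, using illustrative wavelengths within the reported ranges (λmax 396−448 nm, λem 500−600 nm), not entries from Table 1:

```python
def stokes_shift_cm1(lambda_abs_nm: float, lambda_em_nm: float) -> float:
    """Stokes shift in wavenumbers (cm^-1) from absorption and emission maxima in nm."""
    # 1 nm^-1 = 1e7 cm^-1
    return 1e7 * (1.0 / lambda_abs_nm - 1.0 / lambda_em_nm)

# Illustrative values (not from Table 1): λabs = 420 nm, λem = 600 nm
shift = stokes_shift_cm1(420.0, 600.0)  # ≈ 7143 cm^-1, a large Stokes shift
```

A shift of several thousand cm⁻¹, as here, is what keeps self-absorption low during imaging.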
Additionally, a costaining experiment with MitoTracker Red chloromethyl-X-rosamine (CMXRos), a commercially available mitochondria-imaging dye, confirmed the localization of the 1,2-DHPs in the mitochondria, supported by Pearson's correlation coefficients in the range of 0.75−0.89. Among all the 1,2-DHPs under study, 1,2-DHPs 1b, 1d, and 1g exhibited high fluorescence intensity compared with the others. As a proof of concept, to demonstrate the utility of the new 1,2-DHP as a fluorescent probe, we synthesized a phosphorylated analogue, 1h′, from 1,2-DHP 1h (Scheme 1). It is well-known that direct and rapid analysis of crude lysate for endogenous phosphatase enzymes such as PTPs is of prime interest owing to their significant role in insulin-signaling pathways15 and a variety of disease states,16 including hepatocellular carcinoma17 as well as metabolic disorders.18 PTPs are significant targets in many diseases, and there is a growing need for direct determination of endogenous protein phosphatase activity.19 The UV−vis absorption spectrum of 1,2-DHP 1h′ in methanol exhibited an absorption maximum at 448 nm, and the corresponding emission spectrum showed a peak at 594 nm, whereas in aqueous buffer medium (25 mM Hepes buffer, pH 7.4), a small bathochromic shift was observed in both the absorption and emission spectra (Figure 3). The quantum yield of 1,2-DHP 1h′ in Hepes buffer is reduced to 0.007, which can be rationalized by the difference in the electron density involved in conjugation between the phosphate and phenoxide groups. This difference in electronic distribution is also reflected in the fluorescence lifetime profiles. 1,2-DHP 1h in Hepes buffer exhibited a fluorescence lifetime of 1.94 ns, which is sufficient for imaging experiments,20 whereas its phosphorylated analogue 1,2-DHP 1h′ did not show any decay profile because of its weak fluorescence (Table 1).
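The Pearson's correlation coefficients quoted for the costaining experiment above were obtained with ImageJ's JACoP plugin. The same metric can be sketched in a few lines of NumPy on two same-sized channel images; the arrays below are synthetic stand-ins, not the paper's image data:

```python
import numpy as np

def pearson_colocalization(ch1: np.ndarray, ch2: np.ndarray) -> float:
    """Pearson's r between two same-shaped pixel-intensity images."""
    a = ch1.ravel().astype(float)
    b = ch2.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Synthetic example: a noisy copy of the green channel plays the red channel
rng = np.random.default_rng(0)
green = rng.uniform(0, 255, size=(64, 64))
red = 0.9 * green + rng.normal(0, 10, size=(64, 64))
r = pearson_colocalization(green, red)  # close to 1 for strongly colocalized signals
```

Values near 1 indicate strong colocalization; the 0.75−0.89 range reported above is consistent with predominantly mitochondrial localization.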
To obtain structural details of 1,2-DHPs 1h and 1h′, both structures were optimized in the ground state using density functional theory (DFT) with the B3LYP21 exchange−correlation functional and the 6-31G** basis set22 in the Gaussian 09 package,23 and the corresponding structures are given in Figure 4a,b. The highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) of 1,2-DHP 1h are shown in Figure 4c,d, respectively; the HOMO of 1,2-DHP 1h is largely localized on the diethylaniline group, whereas the LUMO is predominantly confined to the 1,2-DHP core, thus supporting our concept of the push−pull system. In aqueous medium at physiological pH (Hepes buffer, pH 7.4), the fluorescence emission properties of 1,2-DHPs 1h and 1h′ showed a distinct change. 1,2-DHP 1h with a free hydroxyl group exhibited sixfold higher orange fluorescence than 1,2-DHP 1h′ appended with a phosphate group (Figure 4e). The corresponding fluorescence changes were also reflected in the visual appearance of the two solutions (Figure 4e, inset). This significant difference in emission intensity inspired us to explore 1,2-DHP 1h′ as a phosphatase sensor. As blinking and photobleaching of fluorophores may cause problems in imaging experiments,24 the photostability of 1,2-DHP 1h′ was first tested by monitoring the fluorescence intensity as a function of time upon continuous irradiation (λ = 445 nm) in Hepes buffer solution (25 mM, pH 7.4, 0.3% DMSO) over a period of 20 min under aerobic conditions, and it was found to be quite stable (Figure S6). Encouraged by the fluorescence features of 1,2-DHPs 1h and 1h′, we further investigated their suitability as probes for biological systems. The cytotoxicity of 1,2-DHPs 1h and 1h′ was determined by MTT assay in L6 cell lines.
Cells were treated with different concentrations of 1,2-DHPs 1h and 1h′ ranging from 1 to 30 μM, and after 2 h of treatment, both 1,2-DHPs 1h and 1h′ were found to be less than 20% toxic up to 30 μM (Figure 5a). Next, we investigated the applicability of 1,2-DHP 1h′ as a chemosensor in the presence of PTPs from L6 muscle cell lysate as a preliminary study. This enzymatic reaction was performed in a 96-well microplate by the addition of cell lysate (5 μL, 0.8 μg/μL) to a 100 μL aqueous solution of 1,2-DHP 1h′ (30 μM) in Hepes buffer (25 mM, pH 7.4, 0.3% DMSO). After incubation at room temperature for 15 min, the fluorescence intensities were measured at an excitation wavelength of 450 nm and an emission wavelength of 590 nm. The increase in fluorescence intensity with time clearly indicated cleavage of the phosphate group, resulting from the conversion of 1,2-DHP 1h′ to 1,2-DHP 1h (Figure 5b), thus demonstrating the suitability of 1,2-DHP 1h′ as a fluorescent bioprobe for monitoring the activity of PTPs. Further, to assess the interference of other biologically relevant analytes with 1,2-DHP 1h′, we measured the change in its fluorescence intensity in the presence of various metal ions and reactive oxygen species and under different pH conditions. Interestingly, none of these analytes influenced the fluorescence intensity of 1,2-DHP 1h′ (Figures S7 and S8).

■ CONCLUSIONS

In summary, we have designed and synthesized a new class of 1,2-DHP-based fluorophores by a facile one-step multicomponent protocol, and their photophysical properties were studied in detail. The results indicate that 1,2-DHPs with an extended N-benzylideneamine appendage have absorption and emission maxima around 420 and 600 nm, respectively, with a prominent Stokes shift. In particular, 1,2-DHPs 1g and 1h showed remarkable photophysical properties with high fluorescence.
Furthermore, 1,2-DHPs 1b, 1d, and 1g were identified as well-suited mitochondrial staining agents in HeLa cells. The potential of fluorophore 1,2-DHP 1h′ as a probe of tyrosine phosphatase activity in cell lysate was also explored. Synthetic accessibility and the scope for conjugation warrant the utility of 1,2-DHPs as potential fluorescent probes for biological applications.

■ EXPERIMENTAL SECTION

General Experimental Methods. All reactions were conducted using undistilled solvents, except that CH2Cl2 was distilled over CaH2 for use in the demethylation of the phosphate ester of 1,2-DHP 1h. Silica gel 60 F254 aluminum thin-layer chromatography (TLC) plates were used to monitor the reactions, with short- and long-wavelength UV and visible light used to visualize the spots. Column chromatography was performed on silica gel (100−200 and 230−400 mesh). A Shimadzu HPLC instrument with a C18 Phenomenex reversed-phase column (250 × 21.2 mm, 5 μm) was used for the purification of 1,2-DHP 1h′ using methanol and water. 1H, 13C, and 31P NMR spectra were recorded on a Bruker AVANCE II spectrometer at 500, 125, and 202 MHz, respectively. Chemical shifts are given in ppm using the solvent residual peaks of chloroform (δ 7.26 ppm) and methanol (δ 3.31 ppm) as references, and coupling constants are given in hertz. High-resolution electrospray ionization mass spectrometry analyses were recorded on a Thermo Scientific Exactive LC−MS instrument, with ions given as m/z ratios. Absorption spectra were recorded on a Shimadzu UV-2450 UV−visible spectrophotometer using a quartz cuvette with a 1 cm path length. The fluorescence spectra of the 1,2-DHPs were recorded on a FluoroLog-322 (Horiba) instrument equipped with a 450 W Xe arc lamp as the excitation source. The fluorescence quantum yields were determined by the relative method, employing an optically matched solution of coumarin 153 in MeOH as the reference (ΦR = 0.46).
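The relative quantum-yield determination mentioned here amounts to a simple ratio calculation against the coumarin 153 reference (ΦR = 0.46 in MeOH, as stated above). A minimal sketch; all sample numbers below are illustrative placeholders, not measured values from the paper:

```python
def relative_quantum_yield(phi_ref: float,
                           area_sample: float, area_ref: float,
                           abs_sample: float, abs_ref: float,
                           n_sample: float, n_ref: float) -> float:
    """Relative method: Phi_S = Phi_R * (Area_S/Area_R) * (Abs_R/Abs_S) * (n_S/n_R)^2."""
    return (phi_ref
            * (area_sample / area_ref)
            * (abs_ref / abs_sample)
            * (n_sample / n_ref) ** 2)

# Illustrative: sample and reference both in MeOH (refractive indices cancel),
# optically matched absorbances, sample emission area one quarter of the reference's
phi = relative_quantum_yield(0.46,
                             area_sample=1.0e5, area_ref=4.0e5,
                             abs_sample=0.05, abs_ref=0.05,
                             n_sample=1.328, n_ref=1.328)
# phi = 0.46 * 0.25 = 0.115, i.e., within the 0.032-0.125 range reported above
```

Using optically matched solutions, as described in the text, makes the Abs ratio close to unity and minimizes inner-filter errors.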
The following equation was used for calculating the quantum yield:

ΦS = ΦR × (AreaS/AreaR) × (AbsR/AbsS) × (nS/nR)²

where the subscripts R and S refer to the reference and sample, respectively, and Abs, Area, and n are the absorbance at the excitation wavelength, the area under the fluorescence spectrum, and the refractive index of the solvent, respectively. Fluorescence lifetimes were measured using an IBH (FluoroCube) TCSPC system. L6 myoblast and HeLa cells were obtained from the National Centre for Cell Sciences, Pune, India. Tris buffer (25 mM, pH 7.4, 0.3% DMSO), Hepes buffer (25 mM, pH 7.4, 0.3% DMSO), and HBSS (pH 7.4) buffers were used for the cell culture studies. The cells were visualized using a fluorescence microscope (Pathway 855, BD Biosciences, USA). Pearson's correlation coefficients were calculated using ImageJ software with the JACoP plugin.

General Procedure for the Synthesis of Hydrazones. To a solution of hydrazine hydrate (10 equiv) in ethanol (10 mL) was added the pertinent aldehyde (1 equiv), and the resulting mixture was stirred under reflux overnight. After complete consumption of the aldehyde, as indicated by 1H NMR, the reaction mixture was diluted with water and extracted with CH2Cl2. The organic layer was dried over anhydrous Na2SO4 and concentrated, and the resulting residue was used directly for the next step without further purification.

General Procedure for the Synthesis of 1,2-DHPs. After complete conversion was observed on TLC, the reaction mixture was quenched with saturated aqueous NaHCO3 and extracted with EtOAc. The organic layer was dried over anhydrous Na2SO4 and concentrated, and the resulting residue was purified by column chromatography to afford the desired 1,2-DHP.

Cellular Studies. Cell Culture and Treatment. Rat skeletal muscle cells (L6 myoblasts) and cervical cancer cells (HeLa) were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum and 1% antibiotic−antimycotic mix at 37 °C under a 5% CO2 atmosphere.

Cell Viability Study of 1,2-DHPs 1h and 1h′ on L6 Myoblasts.
The MTT assay was performed to check the cytotoxicity of the compounds. The viability of L6 myoblasts was measured by means of the MTT assay, and the cytotoxicities of 1,2-DHPs 1h and 1h′ (1, 5, 10, 20, and 30 μM) were standardized on the basis of concentration. Briefly, cells after incubation with the compound were washed, and MTT (0.5 g/L) dissolved in DMEM was added to each well for the estimation of mitochondrial dehydrogenase activity, as described previously by Mosmann.25 After an additional 2 h of incubation at 37 °C in a CO2 incubator, 10% SDS in DMSO was added to each well, and the absorbance of the solubilized MTT formazan products at 570 nm was measured after 45 min using a microplate reader (BioTek, USA). Results were expressed as percentage of cytotoxicity:

Percentage of toxicity = [(absorbance of control − absorbance of sample)/absorbance of control] × 100

Preparation of Cell Lysate. Cells were grown in T25 flasks, and after attaining 60% confluency, the cells were differentiated in DMEM containing 2% horse serum for 5 days. The differentiated cells were then washed three times with Hepes buffer (25 mM, pH 7.4). Cells were scraped off the plates using a cell scraper and centrifuged, and the proteins were extracted from the cell pellet using 0.15 M KCl (4 °C for 30 min). The protein content of the lysate was then measured using a BCA protein assay kit.

Colocalization Study of 1,2-DHPs with MitoTracker CMXRos. Cells were grown in 96-well black clear-bottom plates (BD Biosciences, Franklin Lakes, NJ), and after attaining 90% confluency, the cells were taken for the experiments. HeLa cells were incubated with MitoTracker CMXRos (50 nM) for 20 min at 37 °C, followed by the addition of the corresponding 1,2-DHP (30 μM) and incubation for 10 min. The cells were then washed twice with HBSS to remove the unbound dye and visualized under a fluorescence microscope (Pathway 855, BD Biosciences, USA).
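The cytotoxicity percentage defined for the MTT assay above can be computed directly from plate-reader absorbances. A minimal sketch; the A570 readings used are illustrative, not data from Figure 5a:

```python
def percent_toxicity(abs_control: float, abs_sample: float) -> float:
    """MTT readout: percentage of toxicity = (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Illustrative A570 readings: a treated well at 0.68 vs an untreated control at 0.80
tox = percent_toxicity(abs_control=0.80, abs_sample=0.68)
# ≈ 15% toxicity, i.e., ≈ 85% viability, consistent with the <20% toxicity reported
```

Viability is simply 100 minus this value, which is how the >80%-viability threshold in the text maps onto the same readings.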
Supporting Information. The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsomega.7b01835.