Dataset columns: text (string, 1.23k-293k characters), tokens (float64, 290-66.5k), created (date, 1-01-01 to 2024-12-01), fields (list, 1-6 items).
Measuring naphthenic acid corrosion potential with the Fe powder test

Results are presented of experiments performed using a new method to measure the naphthenic acid corrosion potential. The method consists of adding pure iron powder into a small autoclave containing the crude or oil sample. The test is then performed at a given temperature for one hour, after which the oil sample is filtered and the remaining liquid is sent for iron content determination (ppm). The tests are run at 7 different temperature levels, and 3 more are run as repeated tests. A best-fit curve is drawn through these 10 experimental points and its maximum point is thus determined. This maximum becomes the main outcome of the test and is used to give a measure of the naphthenic acid corrosion potential. The same general trends as observed in the past using the neutralization number or TAN (Total Acid Number) are obtained. However, this new test seems capable of detecting anomalous cases where oil samples having larger values of TAN exhibit less corrosivity than others having much lower values of TAN, or where samples show completely different corrosivity despite having similar or the same TAN.

INTRODUCTION

In 1956, Derungs stated that naphthenic acid corrosion was first observed in the 1920s. If so, corrosion engineers in the petroleum refining industry have been dealing with this phenomenon for 80 years or more. The naphthenic acid content is commonly determined by titration with potassium hydroxide (KOH), as described in ASTM D 974 (color-indicator titration) and D 664 (potentiometric titration). The value obtained is referred to as the neutralization number or, more commonly, the Total Acid Number (TAN), which is expressed in milligrams of KOH required to neutralize the acid constituents present in one gram of sample. During this very long time span, many technical papers have been written on the subject suggesting that naphthenic acid corrosion is predictable from TAN. A rule of thumb based on TAN is commonly used, which considers crudes with TAN higher than 0.5 as potentially corrosive to distillation equipment made of carbon steel or alloyed only to deal with sulfidation attack. Based on actual field experience, however, others have adopted the criterion that crude slates with TAN less than 0.3 can be processed in distillation units made primarily of carbon steel. Continuous processing of feed with TAN in the range of 0.3-0.5 is said to require the use of alloy and even austenitic stainless steels in some critical areas of the distillation units. Feed with TAN higher than 0.5 is said to be processable only in fully protected units. By full protection it is meant having austenitic stainless steel type AISI 316 (17Cr-12Ni-2.5Mo) or even 317 (19Cr-13Ni-3.5Mo) for all parts or equipment in the unit operating in the temperature range of 230-400 °C. This is a rather expensive approach.
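The rule-of-thumb TAN thresholds above can be summarized as a simple lookup; the sketch below encodes them for illustration only. The thresholds and material classes are those quoted in the text, while the function name and the wording of the returned guidance are hypothetical.

```python
def material_guidance_from_tan(tan: float) -> str:
    """Illustrative mapping of the TAN rule of thumb described in the text.

    Thresholds of 0.3 and 0.5 mg KOH/g follow the field criteria quoted above;
    this is a sketch, not a materials-selection rule.
    """
    if tan < 0.3:
        return "distillation units made primarily of carbon steel are generally acceptable"
    if tan <= 0.5:
        return "alloy steels, and austenitic stainless steel in some critical areas"
    return "fully protected unit: AISI 316 or 317 for equipment operating at 230-400 C"


for tan in (0.1, 0.4, 2.3):
    print(f"TAN {tan}: {material_guidance_from_tan(tan)}")
```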
Since some crudes with low TAN can be more corrosive than others with higher TAN, the use of TAN to predict corrosivity has been questioned. When the TAN is adjusted by adding commercially available low-sulfur naphthenic acids to sulfur-free white oil or mineral oil, the correlation between TAN and corrosivity seems evident: the higher the TAN, the greater the corrosivity. However, actual crude oil samples do not always show the same trends and sometimes produce quite unexpected results. Alternative methods have been proposed based on autoclave tests and weight loss measurement. Lee Craig proposed the use of a corrosive acid number (CAN), determined by calculating the equivalent weight of iron lost by corrosion, and of a naphthenic acid corrosion index (NACI). The NACI is the ratio of the corrosion rate to the weight of corrosion product. The principle is that there are two competing corrosion mechanisms acting together, sulfidation and naphthenic acid corrosion. While sulfidation produces an insoluble corrosion product, naphthenic acid corrosion is said to produce an oil-soluble corrosion product. Thus, a value of NACI between 10 and 100 is said to correspond to moderate naphthenic acid corrosion, while values higher than 100 correspond to severe naphthenic acid corrosion. Low values of NACI, less than 10, mean that sulfidation is the predominant corrosion mechanism.

The use of autoclave tests and of a closed-loop circuit for hot oil to provide dynamic testing conditions has been proposed as a means to assess crude corrosivity. This has been the main tool used in a multi-client sponsored research program on crude corrosivity prediction initiated in 1993. Although the final report with the complete results of this research program has not yet been made public, as a sponsor it is known that the program has produced clear laboratory evidence showing a very complex interaction and an inhibiting effect of sulfidation on the naphthenic acid corrosion phenomenon. When the interacting effect of sulfur is present, the use of TAN to indicate corrosivity may be misleading. Notice that measuring corrosion rate by weight loss does not make the necessary discrimination between sulfidation and naphthenic acid corrosion. This is important because using Cr-Mo steels or stainless steel type 410 (12 Cr) usually solves high-corrosivity problems due to sulfidation. In contrast, austenitic stainless steel type 316 or even 317 must be used to solve high-corrosivity problems due to naphthenic acid corrosion.

The task group T-8-22, formed during the 1996 Fall Committee Week of the NACE International T-8 Refining Industry Corrosion Group, recently published the results of its literature review on naphthenic acid corrosion. They concluded that the concerns associated with naphthenic acid corrosion continue to exist today, that there are still difficulties in predicting this type of corrosion, and that a more accurate prediction tool or method will have to be developed. Some approaches have differed from the traditional ones. For instance, a recent paper presented a methodology for better crude oil corrosivity prediction based on origin, evolution, and maturity (crude history). The iron powder test is another different approach to this very old problem. This paper describes the principles and the potential use of the method to predict corrosivity. It also updates the information regarding the latest stages that have been completed in the development process of this new method.
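The NACI definition and the interpretation bands just described lend themselves to a one-line calculation; the sketch below is a minimal illustration that simply assumes consistent units for the corrosion rate and corrosion-product weight (the cited work defines the exact quantities).

```python
def naci(corrosion_rate: float, corrosion_product_weight: float) -> float:
    """Naphthenic acid corrosion index as described in the text: the ratio of
    the corrosion rate to the weight of (insoluble) corrosion product.
    Units are assumed consistent with the original definition."""
    return corrosion_rate / corrosion_product_weight


def interpret_naci(value: float) -> str:
    """Interpretation bands quoted in the text."""
    if value < 10:
        return "sulfidation is the predominant corrosion mechanism"
    if value <= 100:
        return "moderate naphthenic acid corrosion"
    return "severe naphthenic acid corrosion"


# Hypothetical numbers, for illustration only.
print(interpret_naci(naci(corrosion_rate=25.0, corrosion_product_weight=0.8)))
```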
THE Fe POWDER TEST

The original idea behind the Fe powder test (patent pending) was to expose a much larger surface area than is available in any steel corrosion coupon used when applying the weight loss method. This was followed by another idea: using a fundamental chemical kinetics approach to study the rate of the chemical reactions of the naphthenic acids with iron in the corrosion process. In the iron powder test, all compounds that can react with iron at the testing temperature should do so during the test. However, only those producing oil-soluble corrosion products contribute to the measured result, which is the amount in ppm (parts per million by weight) of dissolved iron. If any other chemical reactions or corrosion processes occur and produce solid products, like sulfidation producing FeS, they are effectively ignored because only the dissolved iron is measured. Since the naphthenic acids react with iron to produce oil-soluble iron naphthenates, it is believed that the method is capable of giving an indication of the naphthenic acid corrosion potential. As mentioned earlier, the weight loss methods give a total corrosivity indication. Based on this perception, an effort was made to use the iron powder test to assess corrosivity. Most recently, it was used for classifying crude oils according to their corrosivity potential, as measured by the iron powder test method.

EXPERIMENTAL PROCEDURE

The tests are currently performed with 25 g of oil sample and 2.5 g of iron powder placed in 50 ml autoclaves with an axial-flow impeller rotating at 100 rpm to induce fluid motion and bring about uniformity. The iron powder is added in excess so that the naphthenic acids are the limiting reactant. To eliminate oxygen entrapment in the reactor, N2 is bubbled through for 5 minutes. Then the valves are closed and the temperature is increased. On reaching 90% of the set temperature, the recording of the reaction time is started. At the end of the reaction time, the reactor is cooled. After opening the autoclave, the mixture is filtered to separate the remaining iron powder. If the oil sample is too heavy, xylene is used to dilute it before filtering. The Fe concentration in the filtered liquid is determined by inductively coupled plasma (ICP) emission spectroscopy (ASTM D 5708-95) and the result is expressed in ppm relative to the oil sample weight.

The testing procedure consists of conducting several tests, as just described, at a constant time of one hour and at different temperatures. A curve of the amount of dissolved Fe against temperature is then plotted. The final response of the test is the maximum amount of dissolved Fe for the one-hour test and the corresponding testing temperature. The testing temperature is selected in steps of 40 °C from 140 to 380 °C, giving a total of 7 tests. At least 3 tests are replicated for accuracy. An initial point is added corresponding to 30 °C and the initial amount of Fe contained in the virgin oil sample. Since the type, size and particular shape of the iron particles in the powder influence the result, all tests need to be performed with the same iron powder brand and type.
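The data-reduction step of the procedure (fit a curve through the dissolved-Fe versus temperature points and take its maximum) can be sketched as below. The measurements are hypothetical, and the cubic polynomial is only one possible choice of best-fit curve; the paper does not prescribe a functional form.

```python
# Minimal sketch of the data reduction described above: fit a smooth curve
# through (temperature, dissolved Fe) points and report its maximum. The
# measurements below are hypothetical; the procedure uses seven temperature
# levels (140-380 C in 40 C steps), at least three replicates, and an initial
# 30 C point for the virgin oil.
import numpy as np

temperature_c = np.array([30, 140, 180, 220, 260, 300, 340, 380, 220, 260, 300])
dissolved_fe_ppm = np.array([2, 5, 14, 39, 70, 62, 41, 20, 42, 66, 44])

# A low-order polynomial (cubic here) is one simple choice of best-fit curve.
coeffs = np.polyfit(temperature_c, dissolved_fe_ppm, deg=3)
grid = np.linspace(temperature_c.min(), temperature_c.max(), 500)
fitted = np.polyval(coeffs, grid)

i_max = fitted.argmax()
print(f"maximum dissolved Fe ~ {fitted[i_max]:.0f} ppm at ~ {grid[i_max]:.0f} C")
```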
Even though it is possible to convert the result into a corrosion rate expressed as thickness loss per year, this might be misleading and it is therefore not recommended in the procedure. Steels used in the oil refining industry have alloying elements that may change the material performance compared with pure iron. Also, actual field conditions involve velocity/turbulence effects in piping and acid condensation in vacuum towers. These two factors, plus the condition of a continuous fresh hydrocarbon stream contacting the metal, are rather difficult to simulate in the laboratory. This test is performed in an autoclave using a very limited amount of oil sample continuously in contact with the same iron powder for as long as the test lasts at the testing temperature.

RESULTS

The results summarized in figure 1 were produced by performing the Fe powder test with low-sulfur paraffinic white oil (less than 10 ppm sulfur and with a TAN less than 0.05) with additions of low-sulfur commercially available naphthenic acid. This acid has a TAN of 230 mg KOH/g. The samples were produced by adding weighed amounts of this commercially available naphthenic acid to make up 250 ml of paraffinic oil. An equivalent TAN was calculated as a reference. The maximum amount of iron dissolved found after the test is shown on the vertical axis on a logarithmic scale in figure 1a and on a linear scale in figure 1b. Notice that the addition of 0.2% naphthenic acid resulted in a maximum amount of iron dissolved of 8 ppm. Doubling the addition of naphthenic acid to 0.4% resulted in a maximum amount of iron dissolved of 20 ppm. Increasing the addition of naphthenic acid to 1.0% resulted in a maximum amount of iron dissolved of 85 ppm. These amounts of naphthenic acid added correspond to values of TAN of about 0.5, 0.9 and 2.3 mg KOH/g, respectively. Up to this point, the relationship between dissolved iron and TAN fits a second-order polynomial. The additions of 2, 4 and 8% naphthenic acid correspond to values of TAN of 4.6, 9.2 and 18.4 mg KOH/g. The temperature at which the maximum iron dissolution occurred varied. It was 180, 200, 220, 250, 260 and 270 °C for the oil samples with naphthenic acid additions of 0.2, 0.4, 1.0, 2.0, 4.0 and 8.0%. The maximum amount of iron dissolved was 8, 20, 85, 620, 2700 and 7300 ppm, respectively.

The point at which the addition was 2% naphthenic acid (TAN = 4.6) represents a transition above which the relationship between the maximum amount of dissolved iron and TAN fits a straight line. Most common crude oils have TAN below 5.0, but heavy vacuum gas oil (HVGO) and the first liquid collected in the bottom bed of a vacuum distillation tower may have values of TAN up to 8.0 when processing long residues derived from these crude oils. So the experiments in figure 1 do represent conditions similar to those actually found in oil refineries when processing naphthenic-acid-containing crude oils, at least in regard to naphthenic acid content and TAN.

Table I shows real crude oil samples tested with this method. All except three are Venezuelan crude oils, from heavy ones such as Boscan and Tía Juana Pesado to lighter ones such as Mesa and Lagomar. All the oil samples shown in table I have total sulfur contents exceeding 1.0%. The TAN varies from the lowest measured value of 0.13 for Mesa to the highest measured value of 4.6 for Tía Juana Pesado.
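Using the (equivalent TAN, maximum dissolved Fe) pairs reported above for the synthetic mixtures, the two regimes described in the text (second-order below the 2% addition point, linear above it) can be reproduced with a simple piecewise fit. The data values are taken from the text; the fitting itself is only an illustrative reconstruction, not the authors' analysis.

```python
# Illustrative reconstruction of the two regimes described above, using the
# (equivalent TAN, maximum dissolved Fe) pairs reported for the synthetic
# white-oil mixtures.
import numpy as np

tan = np.array([0.5, 0.9, 2.3, 4.6, 9.2, 18.4])      # mg KOH/g, from the text
fe_ppm = np.array([8, 20, 85, 620, 2700, 7300])        # ppm Fe, from the text

low = tan <= 4.6    # up to and including the 2 % naphthenic acid point
high = tan >= 4.6   # the transition point is shared by both regimes

quad = np.polyfit(tan[low], fe_ppm[low], deg=2)   # second-order regime
line = np.polyfit(tan[high], fe_ppm[high], deg=1) # linear regime above the transition

print("quadratic (low-TAN) coefficients:", np.round(quad, 1))
print("linear (high-TAN) slope and intercept:", np.round(line, 1))
```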
Assuming that the amount of dissolved iron gives an indication of the naphthenic acid corrosion potential, a comparison between the results in figure 1 and table I shows that having the same TAN does not necessarily correspond to the same corrosivity level. For instance, BCF 17 (Bolivar Coastal Field 17) has a TAN of about 2.1 and dissolved 42 ppm of iron. Notice that the addition of 1.0% naphthenic acid was equivalent to a TAN of 2.3, which dissolved 85 ppm of iron. That is, although they have similar TAN values, the latter dissolved twice the amount of iron. The case of TJP is even more revealing. Both TJP and the oil sample produced by adding 2% naphthenic acid have a TAN of 4.6, and yet this sample dissolved 620 ppm, as compared with TJP, which dissolved 217 ppm of iron. Some scatter was obtained in the test with TJP, giving an error of about ±64 ppm. The synthetic oil sample produced by adding 2% naphthenic acid dissolved more than twice as much iron as TJP in spite of the fact that they both have the same TAN.

Figure 2 shows the comparison of the results obtained by testing real crude oil samples and the synthetic mixture of paraffinic oil with addition of commercially available low-sulfur naphthenic acid. A regression line was fitted to the points produced by real crude oil sample testing. The experimental points corresponding to the synthetic mixture were joined by a hand-fitted curve. There are differences when comparing the straight line with this curve below a TAN of about 2.6. For instance, for a TAN equal to 2.1, the regression line predicts 97 ppm and the hand-fitted curve predicts 70 ppm, as compared with the actual value of 42 ppm obtained for BCF17. Despite these differences, in the graph in figure 2 they appear closer because of the large scale used on the ordinate, 0-650 ppm. Above a TAN of 2.6, the results from the synthetic mixture depart farther from the real crude oil sample ones. None of the heavy crude oils Cerro Negro, Zuata, Bachaquero, and TJP exhibits the same corrosion potential as the values predicted by the hand-fitted curve for the synthetic mixture. With 2% naphthenic acid the equivalent TAN is 4.6, as compared with TAN values of 3.21, 3.77, 3.99, and 4.60 for Cerro Negro, Zuata, Bachaquero, and TJP, respectively.

Figure 3 shows again the straight-line correlation found between values of TAN and the maximum amount of iron dissolved in the Fe powder test applied to real crude oil samples. However, this time a logarithmic scale was used for the ordinate to better reveal the differences.
Notice that the general trend suggests what is already known: the higher the TAN, the greater the corrosivity. However, some points do not follow the correlation very well, particularly below a TAN of 2.6. Thus, crude oils such as foreign crude 3, Paeon Mara, and Boscan, which have similar or even much lower values of TAN, appear more corrosive than Merey, BCF22, and BCF17. This matches field experience: Merey, BCF22, and BCF17 are found in actual field experience not to be as corrosive as foreign crude 3 and Boscan. Foreign crude 2 (30 ppm [Fe]) appears to have similar corrosivity to Merey (33 ppm [Fe]) despite the fact that the former has a TAN of only 0.18 compared with 1.51 for Merey. Crude oils such as Lago Treco (12 ppm [Fe]) and Menemota (19 ppm [Fe]) are known not to be very corrosive, and this is also reflected in the results shown in figure 3.

Figure 4a shows the correlation between the maximum amount of iron dissolved, as determined in the Fe powder test, and the API gravity of real crude oil samples. Figure 4b shows the correlation between the TAN and the API gravity for the same crude oil samples. There are several points that do not fit the regression curve. However, although the correlation is far from perfect, the general trend is evident. The heavier the crude oil, the higher the TAN and the higher the corrosivity, which is also known from experience.

Although the test has been under development since early 1998 and has already gone through several stages, it still needs further development. The effort to produce a simple standard procedure continues, in order to reduce data scattering, which has sometimes been a problem. Also, the main challenge is to find how to translate these results into actual field applications. Several questions remain to be answered. One relates to the level of ppm [Fe] below which carbon steel is acceptable or above which the need arises to upgrade to higher-alloy steels. Another relates to the level of ppm [Fe] above which stainless steel type 316 becomes necessary, or above which even this steel does not provide the necessary protection.

The translation of the Fe powder test results into reality needs to be done so as to know which piping, vessels, heat exchanger shells and tube bundles, and distillation tower walls and internals require upgrading to higher-alloy steels. This would imply testing not only the raw crude but also the long residue and all the distillates to determine the levels of ppm [Fe] below which 5Cr-0.5Mo or 9Cr-1Mo steels suffice, or above which upgrading to stainless steel type 316 becomes necessary. So there is still a long way to go, but the work is in progress.

The tendency has been to use the test in comparison with the known corrosivity of what has been processed for many years. That is, say a new crude slate is tested and dissolves two or three times more iron than the current crude slate. If the latter is producing so many mils per year (mpy) of corrosion rate in a particular area of the distillation unit, it is predicted that the new crude slate will produce a two or three times higher corrosion rate. Although the prediction could sometimes be successful, the approach has not yet been proven and it is not recommended.
CONCLUSIONS

This new method of measuring the naphthenic acid corrosion potential produces results that agree with existing knowledge about the phenomenon. That higher acid content corresponds to greater corrosivity has been observed in the field and in laboratory test results. However, by the very nature of the test procedure, the Fe powder test reveals results that have caused much confusion in traditional laboratory test results and in failure analysis interpretation. This refers to cases where oil samples that have the same or similar values of TAN do not show the same corrosivity. It also refers to cases where oil samples having larger values of TAN exhibit less corrosivity than others having much lower values of TAN. The Fe powder test seems capable of detecting the above-mentioned cases. This is because, unlike KOH, which reacts not only with naphthenic acids but also with other compounds such as hydrolyzable salts, iron naphthenates, inhibitors and detergents, the iron powder is more likely to react with those species that are also capable of producing corrosion on actual steels. If the naphthenic acids present are stronger, a larger amount of dissolved iron is expected than in another oil sample containing weaker organic acids. In this way, the test is somewhat capable of distinguishing between different acids.

Unlike conventional corrosivity tests based on weight loss and steel coupons, if some corrosion reactions occur that produce insoluble corrosion products (as sulfidation does), the Fe powder test should not record this contribution. In this way, it should give a much better indication of the naphthenic acid corrosion potential than conventional corrosivity tests, since these produce a total corrosion rate that contains the contributions of both sulfidation and naphthenic acid corrosion.

Whether the effect is due to an inhibiting effect of sulfur species or not is outside the scope of this work. The fact is that sulfur-free paraffinic oil having values of TAN higher than 2.6 appears much more corrosive than real crude oil samples that have a total sulfur content greater than 1.0% and similar or even higher values of TAN.

The authors also wish to express recognition to R. Callarotti, R. Lorenzo, and M. L. Specht for previous contributions to the development of this new test method, and to M. Ledezma, who has administered the project under which this method has been developed.

Figure 1. Maximum amount of iron dissolved as a function of the naphthenic acid added to white oil, after performing the Fe powder test: (a) semi-logarithmic scale and (b) linear scale.
Figure 2. Comparison between the results obtained with the Fe powder test for real crude oil samples and for the mixture of paraffinic oil with addition of commercially available low-sulfur naphthenic acid.
Figure 3. Correlation between the maximum amount of iron dissolved, as determined in the Fe powder test, and the TAN for real crude oil samples.
Figure 4. Correlation between the maximum amount of iron dissolved, as determined in the Fe powder test, TAN, and the API gravity of real crude oil samples.
Table I. Classification of crude oils according to their measured corrosivity.
tokens: 5,112.8 | created: 2003-12-17 | fields: Materials Science
Unlocking the Potential of Electronic Health Records for Health Research

Electronic health records (EHRs), originally designed to facilitate health care delivery, are becoming a valuable data source for health research. EHR systems have two main components, each with its own points of data entry, management, and analysis. The "front end" refers to where the data are entered, primarily by healthcare workers (e.g. physicians and nurses). The second component of EHR systems is the electronic data warehouse, or "back end," where the data are stored in a relational database. EHR data elements can be of many types, which can be categorized as structured, unstructured free-text, and imaging data. The Sunrise Clinical Manager (SCM) EHR is one example of an inpatient EHR system, which covers the city of Calgary (Alberta, Canada). This system, under the management of Alberta Health Services, is now being explored for research use. The purpose of the present paper is to describe the SCM EHR for research purposes and to show how this generalizes to EHR systems more broadly. We further discuss advantages, challenges (e.g. potential bias and data quality issues), analytical capacities, and requirements associated with using EHRs in a health research context.

Introduction

Electronic Health Records (EHRs) are systemized collections of patient health information and documentation, collected in real time and stored in a digital format [1]. EHRs were originally designed to facilitate clinical decision-making regarding health care delivery for individual patients, and to improve the quality of care. EHRs have seen rapid deployment in health care worldwide over the past decade. Both Canada and the U.S. saw increases in EHR adoption, but the rate differed by province in Canada [2,3] and between health systems within states in the U.S. [4,5]. EHRs have historically been used mainly within acute-care settings, but primary-care settings are increasingly adopting them as well. Despite the increase in EHR adoption for healthcare delivery, researchers have used these systems in a limited capacity. Presently in Canada, research facilities using EHRs are localized to primary care and specific institutional sites. In Calgary, a city-wide inpatient EHR system called AllScripts Sunrise Clinical Manager TM (SCM) has been in operation since 2006. SCM covers four acute-care facilities (Foothills Medical Centre, Rockyview General Hospital, Peter Lougheed Centre, and South Health Campus) and one pediatric facility (Alberta Children's Hospital). These five facilities provide health care coverage to 1.4 million people living in the Calgary Health Region, and additionally capture those accessing care in Calgary from surrounding rural regions. Since its inception in 2006, SCM has collected longitudinal inpatient health data on 5,469,761 individuals. This number represents any contact with Calgary hospitals (including emergency department (ED) visits), so it potentially includes out-of-Calgary and out-of-province visits as well as hospitalizations. Therefore, the SCM EHR system is a comprehensive source of population-level inpatient information. On April 1, 2009, all regional health authorities and boards across Alberta were amalgamated into Alberta Health Services (AHS). SCM governance was transferred to this single provincial health authority. Therefore, SCM is managed by relevant AHS departments (e.g.
business intelligence, privacy office, information systems) for clinical operations and IT system management. AHS has developed and instituted protocols (e.g. research ethics, data disclosure agreements and research administration agreements) to allow health research activities using AHS data, but this process had not included the SCM EHR until recently. Recently, AHS announced implementation of ConnectCare (using EPIC software), which offers a province-wide EHR system. There is a growing need to understand EHRs, ultimately allowing researchers to leverage the data to optimize patient care through precision medicine and precision public health. Toward that end, AHS has partnered with the Centre for Health Informatics (CHI) at the University of Calgary to work together to apply our knowledge to ConnectCare when it comes into operation. To date, population-level EHR research is lacking, and there is a need to advance on this frontier. We are using SCM as an initial base to explore and understand EHR systems for health research. Further, we will discuss the system architecture (back-end and front-end data and their users). The current review will explore analytical and administrative challenges with using EHR data for research, and includes an example application of risk adjustment analysis in the context of precision medicine and precision public health. This work provides a roadmap for research using clinical information systems and discusses concepts that are generalizable to most EHR systems.

EHR Back End: System Architecture

There is an intricate relationship between the front-end users and the back end of EHR systems. Information is entered at the front line by health care providers and workers, including physicians and nurses. The front-end users are asked to enter their data in various ways, such as entering structured field information (e.g. drop-down menus, numerical fields, checkboxes, radio buttons) or writing free-text documentation. This can include discharge summaries and multidisciplinary progress notes that document patient history, clinical examination, and patient progress throughout the hospitalization. The EHR client/server structure records timestamps for all patient transactions, enabling the system to track outcomes and patient care processes (e.g. recording physician orders, vitals, patient consent or refusal). Hospital protocols that are relevant to patient care, such as patient isolation protocols, are implemented in the EHR system using triggers and warnings. To ensure interoperability between EHR systems built by different vendors, international technical standards (e.g. International Standards Organization 18308: Health informatics - Requirements for an Electronic Health Record Architecture) keep basic technical documentation broadly consistent across systems [6]. SCM is configured as a standard client/server application. Data entered from the front end are fed directly into a Microsoft SQL Server database within Alberta Health Services' data warehouse. In addition to the main production database, several additional SCM database copies are used for various purposes (Figure 1): 1. Live Copy: an almost real-time replication of SCM that holds data just a few seconds or minutes behind the production database. This replication database is used for in-system reports for active patient care. Access can be granted to parts of the replication database (or even the production database) for reporting outside the system.
For example, it is necessary to report on real-time data for the Emergency Department Wait Times app/web site, and access to the Live Copy of SCM would be essential for this. 2. Daily Copy: a copy of the SCM production database is made once a day, between 4am and 6am. This database is primarily used for non-critical reporting and troubleshooting within the IT department at AHS. Some analysts outside of IT also use it for analytic reporting. This is a complete and exact copy of the SCM production database, and contains all free-text records. Access is generally restricted. 3. Analyst Copy: AHS Analytics loads a select amount of data to the Oracle Alberta Health Services Data Repository for Reporting (AHSDRRX) data warehouse. Alternatively described as 'SCM LOAD,' it is the warehouse that includes many other data sets, including administrative data (e.g. Discharge Abstract Database, National Ambulatory Care Reporting System, Pharmaceutical Information System, etc.). This version is most familiar to analysts outside of IT. It contains only a subset of SCM tables, and is copied from the daily copy of SCM (i.e., # 2 above). In addition to the above, SCM data flows into various other schemas within the data warehouse where it may be more analyst-friendly, or goes through additional validation. For example, there is an ED Visits table, which includes ED visit data from both SCM and other provincial systems. This table is in a format that is much easier to work with than the raw transactional tables.

Front-End EHR: Health Care Workers

In Canada, clinicians input structured or unstructured information based on the patient visit into the EHR for care documentation purposes. EHR records are then coded into a universal health language called the International Classification of Diseases, 10th revision, with Canadian enhancements (ICD-10-CA).

Structured EHR Components

Structured data refers to data whose format is predetermined by an existing schema. These data are captured via structured data entry systems (SDES) on the front end [7]. Often, structured data are embedded within unstructured fields. Healthcare providers and workers often convert unstructured patient information into a structured format for easier information flow. Typical EHR systems, including SCM, contain many structured data fields (Table 1) that use controlled fields such as problem lists, diagnoses, procedures, vital signs, medications, lab results, billing codes, demographic and other administrative data. These data are typically recorded in a long-form table within a relational database. There are built-in variables within the EHR to indicate clinical processes and control mechanisms, such as restricted access for specific patient records, flags for procedure receipts, and isolation status. Consider inpatient medication as an example. In the context of inpatient medication, front-line clinical and healthcare workers typically see timestamps corresponding to when a medication was ordered by a physician. Timestamps are also recorded when that particular order is fulfilled by the pharmacy and when the medication is administered to the patient at the bedside. To date, structured data within EHR systems have been used in a limited capacity in research to power a wide array of data tools for end-users [8,9,10]. For example, these data have been used to populate case reports for disease surveillance [11,12].
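Returning to the inpatient medication example above, the sketch below builds a toy long-form, timestamped event table and derives the order-to-administration interval. The table layout and column names are hypothetical and do not reflect the actual SCM schema.

```python
# Toy illustration of working with long-form, timestamped structured EHR data.
# All identifiers, column names, and values are invented.
import pandas as pd

med_events = pd.DataFrame(
    {
        "patient_id": [101, 101, 101, 102, 102, 102],
        "order_id": [1, 1, 1, 2, 2, 2],
        "event": ["ordered", "dispensed", "administered"] * 2,
        "timestamp": pd.to_datetime(
            [
                "2020-01-05 08:12", "2020-01-05 09:40", "2020-01-05 10:05",
                "2020-01-06 14:02", "2020-01-06 14:50", "2020-01-06 15:20",
            ]
        ),
    }
)

# Pivot the long-form events into one row per order, then compute the
# order-to-administration delay in minutes.
wide = med_events.pivot(index=["patient_id", "order_id"], columns="event", values="timestamp")
wide["order_to_admin_minutes"] = (wide["administered"] - wide["ordered"]).dt.total_seconds() / 60
print(wide[["order_to_admin_minutes"]])
```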
Health system administrators can use structured information from procedure and diagnosis codes, as well as structured outcomes data, to evaluate and improve patient safety [13,14]. The volume and variety of data within EHRs have led to the use of machine learning techniques [15,16]. To our knowledge, most statistical methods and machine learning algorithms either require structured input, or include some mechanism for converting unstructured data into structured input as part of the analytical pipeline. While from a research perspective it would be ideal for most or all EHR data to be captured via structured fields, there are practical barriers to this, including physician resistance to SDES use [7] and the inability to capture contextual information [17,18]. Hence, EHR systems such as SCM generally have the ability to capture unstructured data as well.

Unstructured Components

Unstructured data refers to data elements that do not have a predefined or predetermined form. Unstructured free-text fields in EHRs contain essential clinical detail [17]. These allow medical staff to record highly variable information that may be medically relevant but does not lend itself easily to structured fields. It is difficult to predict ahead of time all the fields that may be required, and it can be too demanding for practitioners to fill in numerous individual structured fields. We offer an example to demonstrate where both structured and unstructured elements are necessary. A discharge summary is a document describing a patient's course during a single hospital admission. These summaries are often written as detailed narratives, but can also be filled out as templates with parts being auto-populated from other components of the EHR. These summaries can contain features such as diagnoses, allergies, procedures performed, current and prescribed medications, and other relevant information. Unstructured components are found throughout the EHR in other formats as well. This includes nursing notes, which contain nurse assessments and treatments; progress reports, which record relevant events while the patient is under care as well as communication between physicians and other medical staff; consultant reports, which document specialty consulting details; transfer-of-care reports, anesthesia records, surgery reports, and pathology reports (see Table 2).

Understanding the Relationship between Front End and Back End for Research

Previous research on data quality demonstrates that there are potential biases and other issues that need to be accounted for [19][20][21][22][23], and EHR data are no exception. Thus, a researcher must consider the following factors when attempting to design a study using EHRs: 1. How were the data entered? The researcher must understand the context of how the data were entered into the system, such as clinical practice variations between units or physician documentation practices; and 2. How was care provided? The researcher must understand the flow and context of the provided clinical care. Documentation in EHRs should be thorough and complete, as missing or incorrect information at this stage impacts the quality of downstream data. Data entered by health care workers at the front line are the data that will flow to the back end of the system. Therefore, much of what is entered will depend on the clinical context and the clinical practice culture. There can be significant workflow variation between facilities and programs.
Both data entry and coding processes can compromise the quality of data obtained downstream. Clinicians entering patient data into an EHR may not document every condition presented, particularly those conditions that are not a primary reason for the visit [19]. For example, depression is often under-coded [20] due to poor documentation if the depression is less severe [21], or if patients feel stigmatized [22]. Similarly, hypertension is often a comorbidity presented by the patient, but the patient may have been admitted due to symptoms of another condition, resulting in under-coding [23]. Following entry of data into the EHR, clinical coding specialists in health information management departments code patient conditions found in the EHR using ICD-10-CA. The process of coding health information can also introduce issues of data quality, as some information in the EHR is not required to be coded (secondary conditions that use little to no resources or are not the primary reason for admission), and high demands for productivity sacrifice quality of coding to meet urgent timelines [24]. Within the back end of SCM specifically, the data are stored in raw transactional form and are left untouched relative to what was entered. The entered data are stored within thousands of tables. Since SCM is a highly normalized database, one cannot always effectively determine whether an entire table is trustworthy or not.

Data Access and Linkage Considerations

Accessing and linking EHR data presents both technical and privacy-related challenges.

Technical Considerations

Studies based on relational databases such as EHRs [25] generally require tables to be linked (this includes internal linkage between EHR tables, and external linkage with tables from other databases). Linking these tables requires knowledge of Structured Query Language (SQL). Internal linkage within EHRs is not straightforward, due to the size and complexity of the database. For example, SCM contains over 1,000 tables. Multiple tables and multiple key columns can be attached to a single patient. The hierarchical structure (e.g. visitation) and longitudinal information further complicate the linkage process. It is important that the study team incorporate members with expertise in the EHR data structure and in SQL, as well as experts with a thorough understanding of the research question, who can work in close collaboration to extract and link the data. Another associated challenge is the process of converting 5.4 million individuals into population cohorts for research studies. This could be achieved by using location-relevant variables within SCM, or by applying data linkage to other province-wide administrative databases containing resident status information, and then eliminating or subsetting non-resident records.

Privacy Considerations

A second significant challenge with using EHR data revolves around security, and may require dialogue between health systems, universities, and appropriate stakeholders to move forward. The sensitive nature of EHR data places legal responsibilities on custodians (e.g. AHS in Alberta) for data security. Researchers may have difficulty accessing the data due to these privacy requirements. Linking patients' EHR data between multiple internal and external data tables can present an unusual level of privacy risk for both patients and health care providers. EHR free-text data are difficult to anonymize, and may contain identifying information for patients, doctors, nurses, and other health system workers.
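To make the internal-linkage step described under Technical Considerations concrete, the toy sketch below joins a patient table, a visit table, and a lab table on their key columns. All table and column names are invented for illustration; the real SCM schema contains over 1,000 tables with different identifiers, and production work would typically be done in SQL against the warehouse.

```python
# Toy illustration of internal linkage across relational EHR-style tables.
# Identifiers, tables, and values are invented for illustration.
import pandas as pd

patients = pd.DataFrame({"patient_id": [1, 2], "birth_year": [1950, 1984]})
visits = pd.DataFrame(
    {"visit_id": [10, 11, 12], "patient_id": [1, 1, 2],
     "admit_date": pd.to_datetime(["2019-03-01", "2019-07-15", "2019-05-20"])}
)
labs = pd.DataFrame(
    {"visit_id": [10, 10, 12], "test": ["creatinine", "hemoglobin", "creatinine"],
     "value": [88.0, 132.0, 95.0]}
)

# Visit-level linkage first, then attach patient attributes.
linked = labs.merge(visits, on="visit_id", how="left").merge(patients, on="patient_id", how="left")
print(linked)
```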
Privacy risks are compounded by the fact that population-level inpatient EHRs such as SCM represent a comprehensive view of the entirety of a patient's interaction with the health care system. If a large number of tables are linked, this can pose a risk of indirect identification of patients within the data set. Having a specific research question assists in identifying the minimal data elements required from the EHR, which in turn can help data custodians de-identify the data to whatever extent is possible.

Analytical Approaches, Challenges and Considerations

Analyzing EHR data, and in particular unstructured data, requires non-traditional approaches and technical skills. We will focus on natural language processing and machine learning.

Analyzing Structured Data

Structured EHR data can be analyzed in multiple ways, including traditional statistical techniques and through machine learning (ML). This section will focus on ML. ML focuses on giving computers the ability to identify patterns in data without being explicitly programmed, inspired by the ability of humans to learn from experience without being explicitly taught. ML classification algorithms can generally be divided into supervised learning and unsupervised learning. Supervised learning consists of predicting the value of a particular dependent variable (e.g. disease status, length of stay), often called the 'target'. This is based on the given values of a number of independent variables or 'features' (e.g. age, sex, diagnosis codes), together with a number of training examples in which the correct value of the target is manually assigned by a person. These manually assigned values are called 'training labels'. Unsupervised learning refers to situations in which no training labels are available (not commonly done in analysis of EHR data). Machine learning also extends into deep learning, a state-of-the-art approach that is now being explored for EHRs [16]. Deep learning methods do not require expert knowledge or pre-defined rules, as the hidden manifold can be learned from big data.

Analyzing Unstructured Data

Natural language processing (NLP) allows machines to identify the structure (syntax) and extract the meaning (semantics) of human language. NLP is primarily useful in the EHR context when processing free-text unstructured data elements. Important parts of NLP include part-of-speech tagging (determining whether a word is a noun, verb, adjective, etc.), negation detection, and sentence boundary detection. This facilitates searches for clinical concepts in unstructured EHR components. The Unified Medical Language System (UMLS) is one example of a knowledge resource widely used in clinical NLP [26]. The clinical Text Analysis and Knowledge Extraction System (cTAKES) is an example of an open-source NLP system [27]. cTAKES includes pre-trained machine learning algorithms specifically designed for clinical texts. Hybrid systems, which combine cTAKES and expert-knowledge decision rules, were state of the art until deep learning approaches emerged. Deep learning and word embedding have become two cornerstones of modern NLP.

Challenges of Analyzing EHR Data in SCM

Traditional methods are unable to handle large numbers of features and unstructured data; machine learning, however, can handle both. There are several major analytical challenges associated with these techniques. First, trained experts in ML and data science are needed. Second, a large number of records is required, and computational requirements must be met.
Third, it is challenging to interpret the models, which requires specific expertise. Finally, the quality of data entered from the front end (as discussed previously) can cause issues in the data downstream. As previously discussed, EHR data are very heterogeneous, and this heterogeneity must be accounted for when determining appropriate techniques. Therefore, one must have sufficient understanding of the data, as well as the technical skills to conduct ML and NLP. There are many open online courses available for technical training, and many universities are now establishing graduate training programs. ML requires large amounts of data and is often challenging to interpret. Deep learning, a subfield within ML, can offer better performance than conventional ML, but requires even more data and can be more challenging to interpret. The sample size of the study must be large enough to partition the data into training, validation, and test sets. Generally, the training set should be given the largest portion of the sample, a decision that is also influenced by the size of the total dataset. If supervised learning is used, ML algorithms require gold-standard labels to train on. Chart review is the usual gold standard for validating data in health research, but it can be expensive and time-consuming. In addition to having sufficiently large data and the required skill sets, hardware computational requirements (e.g. a Graphics Processing Unit cluster for deep learning) must be met to conduct such analyses. Researchers should note that EHR-related privacy requirements might hamper data transfer to such hardware. A major criticism of ML and deep learning is that the models can be difficult to interpret. Achieving interpretability is currently an active area of research within computer science, and some ML techniques are easier to interpret than others. Furthermore, the context of the problem also determines whether certain processes need to be interpretable or not. For example, if a researcher is interested in whether someone has a disease (i.e. a case definition) using a huge data volume, then achieving high predictive accuracy may be more important than precisely understanding the causal chain. EHRs contain huge volumes of data for each patient, sometimes beyond what traditional techniques can process. ML and deep learning are therefore sound methodologies for EHR research, as long as research objectives align with the purposes of the techniques.

Example Applications of the EHR: Developing learning algorithms for risk adjustment analysis to achieve precision medicine and precision public health

The potential of EHRs for clinical research applications has been described previously [28]. Researchers have used EHR data to provide real-time adverse surgical event reporting [29], recruit participants for clinical trials [30], build systems to automatically infer medical problems [31], and for pharmacoepidemiology and public health surveillance [32,33]. A population-wide inpatient EHR (such as SCM) can be used to facilitate local and regional healthcare system planning in addition to clinical research. Alberta's health care is structured as a single-payer system under AHS. This structure allows the creation of a system-wide data repository for provincial planning. The crux of health system planning is accurate and timely risk adjustment analysis.
Risk adjustment aims to identify patient health risks and build models that compare, adjust and predict/forecast associated health expenditure or outcomes of interest [34]. Risk adjustment analysis using EHR data is therefore a critical component of precision medicine, as it would lead to better patient outcomes and improved health system planning and management. It should be noted that inpatient EHR systems, such as SCM, provide granular clinical detail but may lack non-clinical information, such as school achievements, patient complaints, and so forth. Therefore, data linkage between multiple population-level data sources is required to achieve precision medicine and precision public health. Identifying appropriate data sources for population data linkage is then dependent on the context of the research question. We aim to explore data linkage with non-inpatient clinical settings, such as primary care data, and with non-clinical population databases within Alberta.

Conclusion and Next Steps

EHR data are a potentially optimal data source for research. Clinical details that are not readily available in administrative data can be obtained from data extracted from the EHR. Utilizing EHRs will lead to improved case definitions and identification of conditions, leading to the development of robust risk adjustment methodologies. This will allow the creation of personalized outcome predictions and comparisons, which constitute the core principles of precision medicine. There are administrative and analytical challenges associated with EHR data. However, these challenges are surmountable and worth overcoming. EHR data have led to the use of sophisticated analytical techniques such as machine learning and natural language processing. The Centre for Health Informatics (CHI) at the University of Calgary was established to work with EHR and other data types in pursuit of health data science. The CHI brings together Albertan stakeholders (e.g. UofC, AHS, the Ministry of Health (Alberta Health), and the Alberta Strategy for Patient Oriented Research (SPOR)) to allow EHR access for research use in a controlled environment. Our team at the CHI has completed chart review for 3,000 randomly selected inpatients admitted to Calgary hospitals. We are utilizing the SCM EHR and other data (e.g. administrative data, clinical registry and chart review data) to develop and validate case definition algorithms, ultimately improving research methods such as risk adjustment. Ultimately, harnessing the full potential of EHR data can lead to better patient outcomes and system improvements.
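As a toy illustration of the supervised-learning setup described in the analytics section, applied to the kind of outcome prediction that underpins risk adjustment, the sketch below fits a simple model on synthetic structured features. Everything here (feature names, coefficients, data) is fabricated for illustration and does not reflect SCM, AHS, or chart-review data.

```python
# Toy supervised learning on structured EHR-style features (age, sex,
# comorbidity count) to predict a binary outcome such as prolonged length of
# stay. All data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 95, n)
sex = rng.integers(0, 2, n)            # 0/1 encoded
comorbidities = rng.poisson(1.5, n)

# Synthetic "training labels": risk grows with age and comorbidity count.
logit = -6 + 0.05 * age + 0.6 * comorbidities
prolonged_stay = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, sex, comorbidities])
X_train, X_test, y_train, y_test = train_test_split(
    X, prolonged_stay, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```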
tokens: 5,587.4 | created: 2020-01-30 | fields: Medicine, Computer Science
Novel chemical route for CeO2/MWCNTs composite towards highly bendable solid-state supercapacitor device

Electrode materials having high capacitance with outstanding stability are critical issues for the development of flexible supercapacitors (SCs), which have recently received increasing attention. To meet these demands, coating of CeO2 nanoparticles onto MWCNTs has been performed using a facile chemical bath deposition (CBD) method. The formed CeO2/MWCNTs nanocomposite exhibits an excellent electrochemical specific capacitance of 1215.7 F/g with remarkable cyclic stability of 92.3% over 10000 cycles. A light-weight flexible symmetric solid-state supercapacitor (FSSC) device has been engineered by sandwiching a PVA-LiClO4 gel between two CeO2/MWCNTs electrodes; it exhibits excellent supercapacitive performance owing to the integration of pseudocapacitive CeO2 nanoparticles onto the complex web-like structure of the MWCNTs, which show electrochemical double-layer capacitance (EDLC) behavior. A remarkable specific capacitance of 486.5 F/g with a much higher energy density of 85.7 Wh/kg shows the inherent potential of the fabricated device. Moreover, the low internal resistance adds exceptional stability along with unperturbed behavior even under high mechanical stress, which supports its applicability towards high-performance flexible supercapacitors for advanced portable electronic devices.

Significant research on supercapacitors (SCs) is targeted at increasing power and energy density; however, attention has recently also focused on lowering manufacturing costs and using environmentally friendly materials. To fulfill these requirements, the electrode materials should have high surface area and narrow pore size distribution while providing easy access for electrolyte ions, so as to exhibit high energy storage capacity and good stability. To fabricate such electrodes for electrochemical SCs, the electroactive materials should offer efficient mass transport, highly accessible electrochemical surface area and good electron conductivity, which cannot be supplied solely by an individual electroactive material 1. To address these requirements, carbonaceous materials (such as activated carbon, carbon nanotubes, graphite oxide, and reduced graphite oxide) have drawn much attention for their good electrode conductivity, but they are limited in specific capacitance. In fact, hierarchically structured carbon-based composites with a faradaic pseudocapacitive system could be better candidates for SCs, offering larger energy density and higher power density. Such composites combine an electric double-layer capacitive (EDLC) system and a faradaic pseudocapacitive system through nanoarchitecturing. This approach is advantageous because it utilizes both the fast and reversible faradaic capacitance coming from the electroactive pseudocapacitive species and the indefinitely reversible double-layer capacitance at the electrode-electrolyte interfaces.
Carbon nanotubes (CNTs) have a unique set of properties, including high mechanical strength (tensile strength and elastic modulus) combined with remarkable flexibility, excellent thermal and electrical conductivities (10^2-10^5 S/cm) 2, a low percolation threshold (the loading at which a sharp drop in resistivity occurs) and a high aspect ratio (length-to-diameter, L/D). Besides being responsible for high conductivity, the delocalized π-electrons of carbon nanotubes can be utilized to promote adsorption of various moieties on the CNT surface via π-π stacking interactions 3. Although highly desirable, it is a great challenge to design and synthesize unique carbon-based composites with hierarchical nanostructures in a controllable and much simpler manner, which can tailor the physical and chemical properties of the electroactive materials to meet the basic requirements for supercapacitor applications.

In this feature article, we have designed and fabricated a flexible symmetric solid-state supercapacitor (FSSC) device using CeO2/MWCNTs electrodes sandwiching a PVA-LiClO4 gel. Initially, CeO2 nanoparticles were deposited on the MWCNTs surface to form a composite nanostructure using a simple, low-cost and environment-friendly chemical bath deposition (CBD) method (Fig. 1). A few reports are available regarding the supercapacitive performance of CeO2-MWCNTs composite electrodes. Kalubarme et al. prepared a carbon nanotube (CNT)/cerium oxide composite and obtained a specific capacitance of 289 F/g 4. Deng et al. reported a specific capacitance of 455.6 F/g at a specific current of 1 A/g for a CeO2/MWCNTs nanocomposite 5. Luo et al. synthesized a CeO2/CNTs hybrid electrode through a hydrothermal method and achieved a maximum specific capacitance of 818 F/g at a scan rate of 1 mV/s 6.

Nanostructured CeO2 can improve the transport and redox properties, along with an improved surface-to-volume ratio, compared to other bulk and nanostructured materials. In nanocrystalline electroactive metal oxides, the energetics of defect formation may be substantially reduced, which increases the nonstoichiometry levels and electronic carrier generation 7. Fluorite-structured cerium oxide forms a close-packed array of atoms with four-coordinate O2- and eight-coordinate Ce4+. As an inexpensive and environmentally abundant rare earth element, cerium draws attention with a ground-state valence configuration of 4f1 5d1 6s2; hence it forms stable Ce3+ by losing one 5d and two 6s electrons, and stable Ce4+ by losing one additional 4f electron 8. The oxidation state can be easily changed between Ce3+ and Ce4+, making CeO2 an outstanding redox material, as required for supercapacitor applications. Favorable characteristics, such as structural defects that depend on the partial pressure of oxygen and the high mobility of oxygen vacancies, show its potential in energy-related applications 9. Hence, the assembly combining MWCNTs and nanostructured CeO2 opens new perspectives for the development of electrode materials towards high-performance SCs, including the following options: (i) it exhibits sufficient electrical conductivity; (ii) it increases the contact surface area between the electrode and the electrolyte; (iii) it protects the electrolyte from decomposition by catalytic reactions with the electrodes; and (iv) it decreases the transport path length for both electrons and ions by using functionalized MWCNTs as a structuring agent while maintaining its conductive nature 10.
Figure 2b shows the Raman spectra of the bare MWCNTs, CeO2, and CeO2/MWCNTs samples. The strongest peak, appearing at 457 cm-1 in the spectra of CeO2 and CeO2/MWCNTs, corresponds to the Raman F2g mode related to the stretching vibrations of oxygen 5. The small hump around 600 cm-1 is assigned to oxygen-defective CeO2 layers 11. Moreover, the typical internal vibrations of nitrates are associated with the peaks at about 740 and 1049 cm-1, respectively 12. The Raman spectrum of CeO2/MWCNTs also shows the characteristic D band (1346 cm-1) and G band (1578 cm-1), arising respectively from defects and from the sp2-hybridized carbon atoms in the MWCNT walls 13. The significant G' (2D) peak (2691 cm-1) is observed due to phonon scattering in MWCNTs, an indication of a dense, uniform distribution of MWCNTs in the sample 14. The peak intensities of the D and G bands for CeO2/MWCNTs are enhanced compared to bare MWCNTs due to the surface-enhanced Raman scattering (SERS) effect associated with the CeO2 nanoparticles, which affect the local electromagnetic field 15. The relative intensity ratio carries information about graphitic and disordered carbon; an ID/IG ratio of 0.88 for the composite is consistent with the enhanced electronic conductivity of the electrode.

Results and Discussion

The oxidation states of the constituent elements were analyzed by XPS (Fig. 3a). To determine the peak positions, the survey and core spectra were fitted by a nonlinear least-squares fitting (NLLSF) method using a Gaussian-Lorentzian distribution function after Shirley integrated background subtraction. The survey spectrum is similar to previously reported spectra 16,17. KLL and MNN are the Auger groups for O and Ce, respectively 18. The main Ce4+ 3d3/2 and Ce4+ 3d5/2 peaks of the Ce 3d core spectrum appear clearly at binding energies of 916.77 and 898.38 eV, respectively (Fig. 3b). Moreover, the Ce3+ 3d3/2 and Ce3+ 3d5/2 peaks are assigned at 900.97 and 882.50 eV, respectively. Two additional 'shake-up' satellite lines, denoted SU1 and SU2, are revealed at 905.63 and 886.49 eV, respectively. This Ce 3d spectrum is well consistent with previously reported literature 19. The existence of Ce3+ is a consequence of oxygen vacancies, which are more prevalent in nanoparticles: as the particle size decreases, a greater fraction of the atoms lie on the surface with reduced coordination 5. The oxygen vacancies lead to the transformation between Ce4+ and Ce3+, which is the main factor for enhanced electrochemical reactions at the electrode. It is clear that the CeO2/MWCNTs composite has a good oxygen release and storage capacity through the reversible redox reactions between Ce3+ and Ce4+ under reducing and oxidizing conditions, respectively. As shown in Fig. 3c, the peak of the C 1s spectrum at 284.14 eV corresponds to non-oxygenated C-C bonds, whereas the other peaks are due to the oxygenated functional groups present in MWCNTs. The component peak at 284.72 eV is associated with C atoms directly bonded to a hydroxyl group (C-OH). The peak related to the carbonyl group (C=O) is located at 285.98 eV and the peak at 288.57 eV is due to the carboxyl group (O=C-OH) 20. Three Gaussian components in the O 1s spectrum (Fig. 3d) at 529.46, 531.80 and 532.78 eV correspond to the phenolic, carboxylate C-OH and C=O groups, respectively 21,22.
It is noticeable that the peak assigned at 531.80 eV makes a greater contribution than the C-OH and C=O components. The analysis of the O 1s spectrum thus clearly supports the assignments drawn from the C 1s spectrum.

Morphological analysis
As shown in Fig. 4a, the surface of the stainless steel (SS) substrate is well covered by a complex MWCNT network, while the hierarchical nanoparticle morphology of CeO2 deposited directly onto the SS substrate (as a reference) offers plenty of electroactive surface to the electrolyte ions (see Fig. 4b). FESEM images of the CeO2/MWCNTs composite (Fig. 4c,d) reveal that the CeO2 nanoparticles are well encapsulated on the outer surface of the MWCNTs. The CeO2/MWCNTs nanostructure provides a high surface area (Supplementary Information S1) with enough open space: it not only establishes a well-connected conductive network for electrolyte diffusion and electron transport but also offers sufficient electroactive sites for electrochemical redox reactions. The cross-linked and intertwisted network of the porous MWCNT film greatly supports the conductive pathway between the oxide layer and the metal substrate, and hence significantly reduces the charge transfer resistance, increases the utilization rate of the active material and, consequently, results in an outstanding capacitance value. The detailed structure is revealed by the HRTEM images in Fig. 5a,b. The images depict the anchoring of CeO2 onto the MWCNTs to form the nanocomposite, in good agreement with the FESEM images and confirming the successful synthesis of the CeO2/MWCNTs composite with a hierarchical structure.

Comparative CV curves of the MWCNTs, CeO2 and CeO2/MWCNTs electrodes are shown in Fig. 7a. The CV curve of MWCNTs displays a distinctly capacitive behavior, consistent with previous reports 24. Both the CeO2 and CeO2/MWCNTs composite electrodes exhibit well-defined oxidation-reduction peaks. Additionally, the CeO2/MWCNTs electrode exhibits the highest current response, with approximately equal areas in the anodic and cathodic regions, resulting in a superior capacitance (inset, Fig. 7a). The scan-rate-dependent CV curves are shown in Fig. 7b. The CeO2/MWCNTs electrode exhibits a much superior capacitance of 1215.7 F/g at a scan rate of 2 mV/s (Fig. 7c). The origin of the peaks is the reversible Ce3+/Ce4+ redox reaction 25. The CV curve of the CeO2/MWCNTs composite electrode largely retains its faradaic peaks even at scan rates up to 100 mV/s, which is symptomatic of fast charge transport, i.e., pseudocapacitance, in the electrode. Both the magnitude of the current and the potential peak separation increase with the scan rate, and the oxidation and reduction peaks shift towards more positive and more negative values, respectively, mainly due to polarization and ohmic resistance appearing during the faradaic processes 26. The cyclic stability of the electrode was evaluated at a 100 mV/s scan rate within the same potential window. The essentially unperturbed CV curves even after 10000 cycles are depicted in the inset of Fig. 7d. Notably, the composite electrode exhibits very high capacitance retention, maintaining 92.3% of its initial capacitance after 10000 cycles (Fig. 7d). These remarkable results prove that CeO2/MWCNTs is chemically stable in the NaOH electrolyte with very low degradation, constantly delivering power upon long-term cycling with good reversibility in spite of its pseudocapacitive behavior.
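The paper does not state the exact formula behind the 1215.7 F/g figure; a common convention for extracting specific capacitance from a CV cycle is C = (∫|i| dt)/(2 m ΔV), equivalent at constant scan rate ν to dividing the enclosed voltammogram area by 2 m ν ΔV. A minimal sketch under that assumption:

```python
import numpy as np

def specific_capacitance_cv(t_s, i_A, delta_V, mass_g):
    """Specific capacitance (F/g) from one full CV cycle.

    C = (integral of |i| dt over the cycle) / (2 * m * dV); the factor 2
    accounts for the anodic and cathodic sweeps both being integrated.
    This is one common convention, not necessarily the authors' method.
    """
    q_total = np.trapz(np.abs(i_A), t_s)   # total charge passed in the cycle (C)
    return q_total / (2.0 * mass_g * delta_V)
```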
In fact, the strong synergy with the MWCNTs helps prevent degradation of the electroactive material into the electrolyte solution during the electrochemical reaction process. The effect of current density on the specific capacitance of the MWCNTs, CeO2 and CeO2/MWCNTs electrodes was investigated by galvanostatic charge/discharge (CD) studies. Figure 8a shows the CD curves of the electrodes at a current density of 1.5 mA/cm2. The MWCNTs electrode shows electric double layer capacitance (EDLC), exhibiting the typical triangular CD plot, whereas both the CeO2 and the CeO2/MWCNTs electrodes show pseudocapacitive characteristics 27. Moreover, the CeO2/MWCNTs composite electrode shows the longest discharge time at the same current density of 1.5 mA/cm2, i.e., the greatest capacitance, as shown in the inset of Fig. 8a. The discharge curve comprises two stages, in the voltage ranges −0.5 to −0.8 V and −0.8 to −1.1 V. The first stage, of comparatively short duration, is attributed to EDLC, whereas the second stage, with a much longer discharge duration, involves the combination of EDLC and faradaic capacitance originating from the MWCNTs and the CeO2, respectively 28. Furthermore, CD curves of the composite were recorded at current densities ranging from 1.5 to 3 mA/cm2 (Fig. 8b). Utilizing both charge storage mechanisms, the CeO2/MWCNTs electrode yields a maximum specific capacitance of 1044.2 F/g at a current density of 1.5 mA/cm2, decreasing to 573.5 F/g at 3 mA/cm2 (Fig. 8c). A small iR drop at the start of each discharge curve is observed, due to the internal resistance 29. Non-intrusive and highly sensitive EIS measurements were performed in the 0.1 Hz to 100 kHz frequency range. As shown in the Nyquist plots in Fig. 8d, a semicircle is observed in the high-frequency region, resulting from the charge-transfer resistance (R_CT) associated with the faradaic reactions. The equivalent series resistance (R_S), constituted by the electrolyte resistance, the contact resistance between the active material and the current collector, and the intrinsic resistance of the electroactive material, is obtained from the high-frequency intercept of the impedance curve 30. The electrode shows low values of R_S (1.87 Ω/cm2) and R_CT (1.06 Ω/cm2); these low values are of great importance, as they not only benefit the power and energy performance but also reduce undesired heat dissipation throughout the charge-discharge processes 31. The reason behind this optimal behavior is that the MWCNTs can effectively transport current to and from the active material (Supplementary Information S3). Additionally, they prevent agglomeration of the CeO2 nanoparticles, facilitating contact between the active material (CeO2) and the electrolyte, which in turn leads to improved utilization of the active material (i.e., to high capacitance). Moreover, the fitted equivalent circuit of the Nyquist plot is presented in the inset of Fig. 8d. The constant phase element (CPE) represents the double-layer capacitance at the interface between the electroactive material and the electrolyte 32. The frequency-dependent impedance due to the diffusion of OH− ions within the pores of the CeO2 electrode is represented by the Warburg element, while R_L represents the leakage resistance during the electrochemical activities 33.
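The galvanostatic capacitances quoted above follow from the standard relation C = I Δt/(m ΔV). The numbers in the example below are illustrative stand-ins (the paper reports current densities per cm2 but not the individual discharge times or active mass), chosen only so the arithmetic lands near the reported 1044.2 F/g:

```python
def specific_capacitance_cd(i_A, t_discharge_s, delta_V, mass_g):
    """Standard galvanostatic relation C = I * dt / (m * dV), in F/g."""
    return i_A * t_discharge_s / (mass_g * delta_V)

# Hypothetical inputs: a 1.5 mA discharge (1.5 mA/cm^2 over 1 cm^2) lasting
# 418 s across the 0.6 V discharge window, with 1 mg of active material.
print(specific_capacitance_cd(1.5e-3, 418.0, 0.6, 1.0e-3))  # ~1045 F/g
```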
Figure 9a shows the Bode plots of the real (C′) and imaginary (C″) components of the capacitance against frequency. The variation of C″ with frequency shows a maximum at the characteristic frequency (f0 = 20.6 Hz), defining the relaxation time τ0 = 1/f0. The time τ0 is a measure of the rate capability of the supercapacitor, as it is normally associated with the swiftness of the capacitive discharge: the relaxation time is the minimum time required to deliver the stored energy with an efficiency greater than 50% of its maximum value, and it marks the crossover between the resistive and capacitive behavior of the supercapacitor 34. The small relaxation time constant of 48.5 ms indicates the excellent charge-discharge rate performance of the electrode material. The experimental impedance data were converted to the imaginary capacitance C″ using the standard relation C″(ω) = Z′(ω)/(ω|Z(ω)|2), with ω = 2πf. The phase angle is −77° (close to −90°), as exhibited in the Bode plot (Fig. 9b), suggesting an excellent capacitive response 35. As the capacitive and resistive impedances are identical at a phase angle of −45°, the relaxation time constant can also be estimated as τ0 = 1/f0 at that particular phase; it is found to be 43.6 ms, consistent with the previous estimate.

Electrochemical performance of the flexible symmetric solid-state supercapacitor (FSSC)
Recent literature suggests that measurements performed on two-electrode cells are more effective than those made on three-electrode cells for ascertaining the performance of electrode materials (including the synthesis route) and electrolytes for commercial purposes. Considering the fast ionic transport and high capacitance of the CeO2/MWCNTs composite, a symmetric solid-state supercapacitor device of dimensions 3.5 × 3.5 cm2 was assembled using PVA-LiClO4 gel as the solid-state electrolyte and separator. As shown in Fig. 10, the resulting device can be bent and twisted easily, an additional advantage over other SCs. The fabricated symmetric SC showed the characteristic pseudocapacitive behavior, exhibiting oxidation and reduction peaks in the potential window up to 1.2 V (Fig. 11a). The scan-rate-dependent CV curves of the FSSC consist of well-defined and almost symmetric peaks resulting from the reversible redox reactions of CeO2 involving Li+ insertion/release. To date, many reports are available on high-performance solid-state symmetric devices based on excellent electroactive materials. Chen et al. constructed a symmetric device using hybrid SWCNTs/RuO2 electrodes with a specific capacitance of 138 F/g 36. A specific capacitance of 159.6 F/g was reported by Hou et al. for a symmetric device using a layered ZnS/CNTs composite 37. Zhao et al. improved the electrochemical properties with a porous, highly flexible and conductive cellulose-mediated PEDOT:PSS/MWCNT composite assembly that allowed an enhanced specific capacitance of 380 F/g 2. In the present case, the specific capacitance of the device calculated from the CV curves is 486.5 F/g at a scan rate of 2 mV/s, despite a total mass loading of 8.33 mg, as depicted in Fig. 11b. The CV curves maintain their original shape over the large potential window of 1.2 V even at a high scan rate (100 mV/s).
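The conversion from impedance to imaginary capacitance and the extraction of τ0 can be scripted directly from the standard decomposition C″(ω) = Z′(ω)/(ω|Z(ω)|2); a minimal sketch, assuming the measured frequencies and complex impedances are already loaded as arrays:

```python
import numpy as np

def relaxation_time(freq_Hz, Z):
    """tau0 = 1/f0, with f0 the frequency at the maximum of C''(f).

    C''(w) = Z'(w) / (w * |Z(w)|^2), w = 2*pi*f; Z is the complex
    impedance array measured over freq_Hz.
    """
    freq_Hz = np.asarray(freq_Hz)
    Z = np.asarray(Z)
    w = 2.0 * np.pi * freq_Hz
    C_imag = Z.real / (w * np.abs(Z) ** 2)
    f0 = freq_Hz[np.argmax(C_imag)]
    return 1.0 / f0

# The reported peak at f0 = 20.6 Hz corresponds to tau0 = 1/20.6 ~ 48.5 ms.
```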
An asymmetric device takes advantage of both the positive and the negative electrode, and hence generally exhibits better electrochemical behavior than a symmetric device; even so, our fabricated symmetric device shows greater capacitance than many asymmetric devices, such as those reported by Liu et al. 42. The charge-discharge (CD) behavior was also analyzed at current densities varying from 3 to 6 mA/cm2 (Fig. 11c). The total specific capacitance of the device, 428.3 F/g at a current density of 3 mA/cm2, is considerably higher than that of other recently reported solid-state devices (inset, Fig. 11c). Although various resistive factors come into play during device fabrication, the discharge curves show only a small iR drop, indicating a low internal resistance. The Ragone plot relating the energy and power densities is shown in Fig. 11d. The device delivers a maximum specific energy of 85.7 Wh/kg at a power density of 2.6 kW/kg; moreover, it can still deliver an energy density of 41.9 Wh/kg when the power density is raised to 5.3 kW/kg. The energy density of the present device is substantially higher than those of recently reported all-solid-state symmetric SCs with electrode materials such as waste paper fibers-RGO-MnO2 (19.6 Wh/kg) 51 and porous carbon (7.22 Wh/kg) 52. Owing to its current-collector-free design, the present ultrathin device shows a remarkable energy density, considerably higher even than solid-state asymmetric devices such as CNT/polyaniline//CNT/MnO2/GR (24.8 Wh/kg) 53, carbon aerogel//Co3O4 (17.9 Wh/kg) 38,58, CuS/3D graphene//3D graphene (5 Wh/kg) 59, MnO2@PANI//3D graphene foam (GF) (37 Wh/kg) 60, NiCo2S4/polyaniline//AC (54.06 Wh/kg) 61, NiCo-LDH//carbon nanorods (59.2 Wh/kg) 39, Ni(OH)2/RGO/Ni//RGO aerogel/Ni (24.5 Wh/kg) 62, and rGO/CoAl-LDH//rGO (22.6 Wh/kg) 63. As a stability check, the CV of the FSSC device was repeated for 10000 cycles; the results are shown in Fig. 12a. The specific capacitance dropped quickly over the first 1000 cycles and stabilized after about 2000 cycles. At 10000 cycles the device shows an excellent retention of 92.1%, which is higher than recently reported solid-state devices (Supplementary Information S4) and strongly favors the commercial use of the FSSC device. The durability of the FSSC device was further tested under harsh mechanical conditions by bending the assembled symmetric device at various angles from 0 to 175°, where almost no loss in capacitance (98.2% retention at 175°) is observed (Fig. 12b), demonstrating its superior mechanical stability under stress. The overlap of the CV curves at all bending angles (Fig. 12c) confirms that the electrode performance is essentially unaffected by bending. As a practical demonstration, two SC devices connected in series were assembled and charged to 2.4 V.
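The Ragone figures above are mutually consistent with the standard relations E = CV^2/2 (converted from J/g to Wh/kg by dividing by 3.6) and P = E/t. A short check, using only values reported in the text:

```python
def energy_density_Wh_kg(C_F_per_g, V_window):
    """E = C * V^2 / 2 in J/g, converted to Wh/kg by dividing by 3.6."""
    return C_F_per_g * V_window ** 2 / 2.0 / 3.6

def power_density_kW_kg(E_Wh_kg, t_discharge_s):
    """P = E / t with t converted from seconds to hours, in kW/kg."""
    return E_Wh_kg * 3.6 / t_discharge_s

# 428.3 F/g over the 1.2 V window reproduces the reported ~85.7 Wh/kg:
print(energy_density_Wh_kg(428.3, 1.2))
```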
The system can easily light up a 'VNIT' panel consisting of 21 red LEDs for a duration of 120 s (Supplementary video), as depicted in Fig. 12d-g, demonstrating the practical potential of the device. This successful attempt to drive commercial LEDs shows that our device is a candidate for application in energy storage and portable/flexible electronics. Compared with recently reported solid-state devices, the present symmetric device shows superior supercapacitive characteristics, including excellent energy and power densities, which can be attributed to the following aspects: (i) the unique morphology obtained by hybridizing CeO2 nanoparticles with MWCNT nanostructures, with rich specific surface area and porosity, enables short electron transport paths and a high rate of charge propagation, improving the overall electrochemical performance; (ii) the MWCNTs not only encapsulate the CeO2 nanostructure with strong synergy but also enhance the conductivity and stability of the prepared composite; and (iii) combining two symmetric CeO2/MWCNTs electrodes using the PVA-LiClO4 gel extends the voltage window up to 1.2 V, producing a remarkable enhancement in the energy density of the device and bringing it one step closer to hands-on application.

Conclusions
A hierarchical CeO2 nanostructure was successfully anchored onto the outer surface of MWCNTs using a facile chemical method. A supercapacitor based on the hybrid CeO2/MWCNTs nanostructured electrode displays enhanced capacitive performance in terms of a specific capacitance of 1215.7 F/g, cyclic stability of 92.3% over 10000 cycles and low values of the resistive factors (R_S = 1.87 Ω/cm2 and R_CT = 1.06 Ω/cm2). The assembled device impressively shows excellent supercapacitive performance with superb electrochemical and mechanical stability. Durable operation was achieved with a wide cell voltage of 1.2 V, giving rise to high energy and power densities. The well-integrated interface between the electrodes and the electrolyte enables fast charge storage/release at high rates and good cycling performance over 10000 cycles, even under harsh mechanical (bent) conditions. These rationally designed symmetric SCs represent a promising pathway for building high-performance flexible energy storage devices to drive wearable and stretchable electronics in advanced applications.

Experimental
Fabrication of electrodes and devices. A previously reported procedure was adopted to coat MWCNTs onto the SS substrate 64,65. Briefly, 95% pure MWCNTs (length = 5-15 μm, outer diameter = 20-40 nm), procured from Monad Nanotech Pvt. Ltd. (Maharashtra, India), were refluxed in H2O2 at 90 °C for 48 h in order to anchor oxygenated functional groups while removing amorphous carbon derivatives. The residue was rinsed repeatedly in double distilled water (DDW) and dried at 60 °C for 12 h. To obtain a well-dispersed MWCNT solution, the product was ultra-sonicated in 1 wt% Triton X-100 and DDW with a Tx-100:DDW ratio of 0.01. A two-step mechanical process, involving immersion of the SS substrate into the dispersed MWCNT solution followed by dehydration under an IR lamp, yields a uniform and well-adherent coating of MWCNTs on the SS substrate. The simple chemical bath deposition (CBD) method was then employed to deposit cerium oxide over the pre-coated MWCNTs (Supplementary Information S5).
In brief, cerium(III) nitrate (Ce(NO3)3·6H2O) was used as the cationic precursor, while hydrogen peroxide (H2O2, 30%) served as the anionic precursor. The precursor solution was prepared by dissolving 0.04 M Ce(NO3)3 in 50 ml DDW under constant stirring to obtain a uniform distribution, after which 2.5 mL of H2O2 was added under vigorous stirring. The MWCNT-coated substrate was then immersed in the bath, kept at a constant temperature of 60 °C. After 1 h, the yellowish cerium-oxide-coated MWCNT substrate was taken out of the bath, rinsed several times in DDW and dried under infrared (IR) radiation. The as-prepared film was air-annealed at 200 °C, which eliminates the excess non-oxidized hydroxide formed during the reaction by converting it to the oxide. Further, the PVA-LiClO4 gel electrolyte was obtained by adding 6 g of LiClO4 and 6 g of polyvinyl alcohol (PVA) powder to 60 ml of DDW 66. The mixture was heated at 90 °C under stirring until the solution became clear and viscous. The composite electrode on the flexible SS substrate was then coated with a thin layer of the prepared PVA-LiClO4 gel electrolyte, followed by evaporation of the excess water. Once the gel electrolyte had solidified, the two electrodes were sandwiched and packaged to assemble the flexible symmetric supercapacitor (FSSC) device.

Characterizations. XRD was performed with a Bruker AXS D8 Advance diffractometer using a Cu Kα X-ray source. Raman studies were performed with a LabRAM HR under 532 nm laser excitation. Energy-dispersive X-ray spectroscopy (EDX) was acquired using a JSM-7610F analyzer attached to the scanning electron microscope (FESEM, JEOL JSM-7610F). The detailed morphological study was performed by high-resolution transmission electron microscopy (HRTEM) using a JEOL 2100 with a LaB6 source. XPS measurements were carried out in a PHI 5000 VersaProbe II (ULVAC-PHI Inc., Japan) photoelectron spectrometer under ultrahigh vacuum (UHV) below 5 × 10−10 Torr. The XPS system is equipped with an Al Kα X-ray monochromator operated at an anode power of 350 W, with the sample surface normal oriented at 45° to both the X-ray source and the photoelectron spectrometer. The electrochemical characteristics of the as-obtained films were studied on a PARSTAT 4000 electrochemical workstation (Princeton Applied Research, USA) using cyclic voltammetry (CV), charge-discharge (CD) and electrochemical impedance tests in a three-electrode cell, in which the composite electrode acts as the working electrode, a Pt wire as the counter electrode and a saturated Ag/AgCl electrode as the reference electrode.
Spontaneous Chiral Symmetry Breaking and Entropy Production in a Closed System

In this short article, we present a study of a theoretical model of a photochemically driven, closed chemical system in which spontaneous chiral symmetry breaking occurs. By making all the steps in the reaction elementary reaction steps, we obtained the rate of entropy production in the system and studied its behavior below and above the transition point. Our results show that the transition is similar to a second-order phase transition, with the rate of entropy production taking the place of entropy and the radiation intensity taking the place of the critical parameter: the steady-state entropy production, when plotted against the incident radiation intensity, has a change in its slope at the critical point. Above the critical intensity, the slope decreases, showing that the asymmetric states have lower entropy production than the symmetric state.

Introduction
Modern thermodynamics, formulated in the 20th century by Onsager [1], De Donder [2], Prigogine [3,4], and others, introduced a critical concept lacking in its classical formulation: the rate of entropy change and its relationship to irreversible processes. Classical thermodynamics was concerned with functions of state, such as energy and entropy, and their change from one equilibrium state to another. Absent from this theory of states is any consideration of the rates of processes. Changes in entropy for infinitely slow reversible processes are calculated using the relation dS = dQ/T (in which dS is the change in entropy, T is the temperature in kelvin, and dQ is the heat exchanged between a system and its exterior). However, for changes that take place in a finite time due to irreversible processes, the theory does not specify a way of calculating the entropy change; it is only stated that dS > dQ/T. Modern thermodynamics is a theory of processes in which thermodynamic forces and the flows they drive are identified, and the rate of entropy production is expressed in terms of these forces and flows [5,6]. More specifically, the rate of entropy production per unit volume, σ, is expressed as

σ = ds/dt = Σ_k F_k J_k    (1)

in which s is the entropy density, F_k are the thermodynamic forces and J_k the thermodynamic flows. A temperature gradient, for example, is a thermodynamic force F_k that drives a thermodynamic flow J_k of heat current. The force that drives chemical reactions has been identified as the affinity [2,5,6], and the corresponding flow is the rate of conversion from reactants to products. This flow is expressed as the time derivative dξ/dt (mol/s) of the extent of reaction ξ [5,6]. For an elementary chemical reaction step, the affinity can be written in terms of the forward and reverse reaction rates, R_f and R_r, so that the corresponding entropy production takes the form σ = R(R_f − R_r) ln(R_f/R_r) ≥ 0, in which R is the gas constant. In the model studied here, autocatalysis and reaction R6 result in spontaneous breaking of chiral symmetry when the intensity of radiation, II, is above a critical intensity, II_C. The model is a variation of the models in our earlier studies [11,12,15], which are modifications of the original model of Frank [8]. The modifications allow us to analyze non-equilibrium symmetry breaking and the rate of entropy production. Models such as this are used to extract general properties that are not model dependent; examples of such properties are the qualitative behavior of the steady-state rate of entropy production as a function of a parameter that drives the system away from equilibrium (such as the incident radiation intensity II in the above model).
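For reference, the relations invoked above can be written compactly in LaTeX; the elementary-step expression in the second block is the standard result of modern thermodynamics that this paper applies to each step of the model:

```latex
\sigma = \frac{ds}{dt} = \sum_k F_k J_k \;\ge\; 0,
\qquad F_k = \frac{A_k}{T},\quad J_k = \frac{1}{V}\frac{d\xi_k}{dt},
```

```latex
\frac{A}{T} = R\,\ln\frac{R_f}{R_r}
\quad\Longrightarrow\quad
\sigma = R\,(R_f - R_r)\,\ln\frac{R_f}{R_r} \;\ge\; 0 ,
```

where R_f and R_r are the forward and reverse rates of the elementary step and R is the gas constant; σ vanishes only at detailed balance, R_f = R_r.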
The difference in concentration between the enantiomers of a chiral species as a function of a parameter such as the intensity II is generally parabolic, as predicted by bifurcation theory based on the symmetry group of the system (mirror symmetry in this case), regardless of the details of the chemical reactions that break chiral symmetry. Though there is currently no known reaction that has all the properties of the above model, none of its steps is implausible. For example, reactions (R1), (R1a), (R2) and (R3) comprise a photoaddition reaction that produces a chiral compound. An example is the reaction series (R9)-(R11) [19,20], in which Ph is the phenyl group, R1 = CH3 or C2H5, and R2 = CH3, C2H5 or C3H7. In reaction (R9), a photon is absorbed by the electrons in the C=C double bond and the molecule transitions to a reactive excited state [(Ph)2C=CH(R1)]*. In the addition reaction shown in (R10) and (R11), the excited molecule reacts with an alcohol, (R2)OH, and produces a chiral compound, (Ph)2HC−CH(R1)(OR2), in the L and D enantiomeric forms. In this compound, the carbon shown in boldface is a chiral carbon (its tetrahedral bonds to four different groups make it so). Other examples of photoaddition reactions producing chiral products from achiral reactants can be found in [19]. We note that TE need not be an excited state; it could be a different, more reactive isomer of T [19]. Reactions (R4a)-(R5b) are steps leading to chiral autocatalysis. This involves the formation of a chiral complex of S and X in their enantiomeric forms. Examples of chiral complexes resulting in reactions with a high degree of chiral selectivity have been known for a long time [21,22]. In an article published in 1984, we noted some mechanisms based on chiral ligands in rhodium phosphine catalysts that could lead to chiral autocatalysis [12,22]. To date, several chirally autocatalytic reactions have been experimentally studied. Chiral symmetry breaking was first noticed and systematically studied in NaClO3 crystallization in 1990 [23]; in 1995, chiral autocatalysis and amplification of a small initial enantiomeric excess were reported in inorganic reactions involving cobalt complexes [24] and in organic reactions involving the alkylation of aldehydes [25]. Since then, these and closely related systems have been extensively studied experimentally and the mechanisms of chiral autocatalysis have been investigated [26-31]. A variant of chiral symmetry breaking in stirred crystallization was reported in 2005, and it too has been studied extensively [32,33]. The mechanisms of chiral autocatalysis vary among these systems: in crystallization it is secondary nucleation, while in the organic and inorganic reactions cluster/complex formation seems to be involved. Reactions (R4a) and (R5a) may be thought of as a simple form of chiral complex formation. As was noted in a review [27], the exact details of the chiral catalysis are not of significance for the general symmetry-breaking bifurcation and thermodynamic properties of such systems; in particular, properties such as the phase-transition-like behavior we present here are quite independent of the details of the chemical kinetics. Examples of reaction (R6), the dimer formation of enantiomers, are also known; in fact, such dimerization of certain chiral catalysts leads to asymmetric amplification [34].
For the above theoretical model (R1)-(R8), the corresponding forward and reverse rates of each reaction are written with the concentrations shown explicitly as functions of time; the rate constants are written as k1f, k1r, etc., and the concentrations as T[t], S[t], etc. In terms of these forward and reverse rates, the rate equations for the concentrations can be written as:

d TE[t]/dt = −R1f + R1r − R2f + R2r − R3f + R3r − R4bf + R4br − R5bf + R5br    (15)
d S[t]/dt = −R2f + R2r − R3f + R3r − R4af + R4ar − R5af + R5ar + 2R7f − 2R7r    (16)
d SL[t]/dt = R4af − R4ar − R4bf + R4br    (17)
d SD[t]/dt = R5af − R5ar − R5bf + R5br    (18)
d XL[t]/dt = R2f − R2r − R4af + R4ar + 2R4bf − 2R4br − R6f + R6r    (19)
d XD[t]/dt = R3f − R3r − R5af + R5ar + 2R5bf − 2R5br − R6f + R6r    (20)
d P[t]/dt = R6f − R6r − R7f + R7r    (21)

This set of coupled non-linear equations was solved numerically using Mathematica's NDSolve command, which has the structure NDSolve[{Equations}, {y_i}, {t, t_min, t_max}], in which "Equations" are the differential equations for the set of functions {y_i} with t as the independent variable; numerical solutions are obtained in the range t_min to t_max. More details can be found in the online documentation that comes with Mathematica. The rate constants used for the numerical solutions are summarized in Figure 1. In assigning values to the rate constants, there are consistency conditions that must be met. For example, since reactions (R4a) and (R4b) together are equivalent to (R2), the product of the equilibrium constants of R4a and R4b must equal the equilibrium constant of R2. This gives the following condition on the rate constants:

(k4af/k4ar)(k4bf/k4br) = k2f/k2r    (23)

Units of the variable parameter II may be thought of as W/m2. If II is thought of as radiation from the sun, its blackbody temperature is very high compared to the temperature of the system. All rate constants are assumed to have the appropriate units, though these are not written explicitly. Numerical values were assigned to the rate constants so as to fulfill these requirements; the units were chosen so that all concentrations are in mol/L, and the assigned values are such that the concentrations of the reactants take realistic values. Symmetric (racemic) and asymmetric states are parametrized by α = (X_L − X_D): in the symmetric state α = 0 and in the asymmetric state α ≠ 0.

Results
The rate equations were first solved for an equilibrium state, in which the incident radiation intensity II was set to 0.
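For readers without Mathematica, the same workflow can be reproduced with SciPy. The sketch below is not the full (R1)-(R8) scheme: it integrates a minimal Frank-type toy model in which the reactant pool s is held constant (a pool-chemical stand-in for the photochemical replenishment controlled by II), with purely illustrative rate constants. It only demonstrates the NDSolve-equivalent setup: a stiff integrator, a long time horizon, and a small enantiomeric seed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants: autocatalysis, its reverse, mutual decay, fixed pool.
# The racemic fixed point is unstable whenever kd > kr, so the seed grows.
ka, kr, kd, s = 1.0, 0.1, 1.0, 0.01

def rhs(t, y):
    xl, xd = y
    dxl = ka * s * xl - kr * xl**2 - kd * xl * xd
    dxd = ka * s * xd - kr * xd**2 - kd * xl * xd
    return [dxl, dxd]

# Start near the racemic state with a 0.1% excess of X_L as the "fluctuation".
y0 = [3.593e-5 * 1.001, 3.593e-5]
sol = solve_ivp(rhs, (0.0, 1.0e4), y0, method="LSODA", rtol=1e-10, atol=1e-14)
alpha = sol.y[0, -1] - sol.y[1, -1]
print(f"alpha(t = 1e4) = {alpha:.3e}")   # nonzero: the excess has been amplified
```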
The initial concentrations of species S and T were set to 0.01 M and those of all other species to 0.0 M. Under these conditions, the system evolves to its racemic equilibrium state, in which α = 0. The simulation was run for sufficient time (about 10^4 s) to ensure that the concentrations of all species had reached a steady state, which is the equilibrium state. At t = 10^4 s, the equilibrium concentrations were: S = T = 8.478 × 10−3 M, TE = 8.477 × 10−6 M, S_L = S_D = 3.047 × 10−6 M, X_L = X_D = 3.593 × 10−5 M, and P = W = 7.188 × 10−4 M. The conversion of the initial species S and T to the other species was rather small for the rate-constant values shown in Figure 1; by choosing a different set of rate constants, the conversion could be increased. These numerical values confirm that the complete symmetry of the system is maintained when no incident radiation is present. Figure 2 shows the time evolution of the chiral species S_L, S_D, X_L, X_D from t = 0 s to t = 1000 s. As the system evolved to its equilibrium state, the entropy production σ was monitored; initially it took a nonzero value but, as expected, it decreased to zero at the equilibrium state.

We then used these equilibrium values as initial values for a system subject to a radiation input, i.e., II > 0. This radiation input serves to push the system away from thermodynamic equilibrium. A very small excess, about 0.1% (3 × 10−8 M), of X_L was introduced into the system as a "random fluctuation". If the system has the mechanism to break chiral symmetry, it will have a critical value II_C: for II < II_C, this 0.1% excess of X_L will decay and the system will again evolve to a steady state in which X_L = X_D; for II > II_C, the excess will grow and lead to a steady state in which X_L > X_D. In a real system, this slight perturbation may be due to a random fluctuation, such as a local excess of one enantiomer, that is then amplified, resulting in a state of broken symmetry.
The overall behavior of the system is summarized in Figure 3.

Figure 3. Schematic of the reaction system. Radiation (shown as hν and denoted II in the reaction scheme) is incident on a closed chemical system. The radiation drives a generation-decomposition cycle of the enantiomeric species X and other compounds, as shown in A. When II > II_C, the system evolves to one of two asymmetric states, B or C. In state B, the amount of X_L is much greater than that of X_D and α > 0, by definition; in state C, the amount of X_D is much greater than that of X_L and α < 0. The asymmetry is parametrized by α = (X_L − X_D).

It was found that this system indeed has the mechanism needed for breaking chiral symmetry. At values of II < 0.004, the system evolved to a symmetric state corresponding to X_L = X_D and α = 0. Figure 4 shows the time evolution of the chiral species when II = 0.0030; a steady state is reached in approximately 600 s. For values of II > 0.004, the small 0.1% excess of X_L in the initial concentration increased, driving all chiral species to an asymmetric state. The time evolution of the chiral species in an asymmetric state when II = 0.008 is shown in Figure 5.
Figure 5. Time evolution of the chiral species in an asymmetric state. Here II = 0.008 and an asymmetric steady state is reached in about 7000 s. The blue solid and dashed lines represent X_D and S_D, respectively, and the red solid and dashed lines represent X_L and S_L, respectively. The black line represents α, which takes on a nonzero value once symmetry is broken.

As shown, the steady state is reached in about 7000 s. The time taken to reach the steady state, the relaxation time, depends on the value of II and on the amount of the initial excess (0.1% of X_L). As is well known in the study of the stability of steady states [6,7,9], the initial exponential growth of the small enantiomeric excess depends on the eigenvalue of the unstable mode of the equations (14)-(22) linearized around the initial state. These linearized equations are obtained by assuming a small perturbation of the concentrations, δC_k, from the initial steady state, which leads to a set of linear equations of the type dδC_k/dt = Σ_l L_kl δC_l [6,7,9]. The eigenvalues of L_kl with positive real parts are the exponents that determine the initial rate of growth of the enantiomeric excess; the later growth and the leveling off at the steady state depend on the kinetics and rate constants. In general, near the critical point II_C the relaxation time is long, the so-called "critical slowing", but as the value of II increases, the growth rate becomes faster and the relaxation time decreases.

To determine the dependence of the steady-state value of α on II, the reaction was run for various values of II, from 0.0035 to 0.0045, and the corresponding steady-state values of α were obtained. As noted above, the time it takes the system to reach an asymmetric steady state depends on the value of II: close to the critical value II_C (0.004 in this case) the relaxation to the asymmetric steady state is slower, and it becomes faster as II increases. The exact quantitative relationship between the relaxation time and II depends on the kinetics and rate constants and is not of significance to the current study. In these runs, to obtain both the positive and negative branches of α, the initial condition with a 0.1% excess of X_D was also included.
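The linear stability analysis described here is easy to set up numerically: build the matrix L_kl by finite differences around the steady state and inspect the signs of the real parts of its eigenvalues. A minimal sketch, illustrated on the Frank-type toy model of the earlier sketch rather than on the full (R1)-(R8) scheme:

```python
import numpy as np

def jacobian(f, c0, eps=1e-8):
    """Finite-difference Jacobian L[k, l] = d f_k / d c_l at the state c0."""
    c0 = np.asarray(c0, dtype=float)
    f0 = np.asarray(f(c0))
    J = np.zeros((f0.size, c0.size))
    for l in range(c0.size):
        dc = c0.copy()
        dc[l] += eps
        J[:, l] = (np.asarray(f(dc)) - f0) / eps
    return J

# Toy rate equations (ka*s = 0.01, kr = 0.1, kd = 1.0), at the racemic fixed point:
def rhs(c):
    xl, xd = c
    return [0.01 * xl - 0.1 * xl**2 - xl * xd,
            0.01 * xd - 0.1 * xd**2 - xl * xd]

x_star = 0.01 / (0.1 + 1.0)   # symmetric steady state x* = ka*s/(kr + kd)
lam = np.linalg.eigvals(jacobian(rhs, [x_star, x_star]))
print(lam)  # one positive eigenvalue: the unstable, symmetry-breaking mode
```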
The dependence of α on II is shown in Figure 6, demonstrating the typical bifurcation to asymmetric states above the critical value II_C = 0.004. As expected for a chiral symmetry breaking transition, the values of α above the critical point follow a parabolic curve.

Figure 6. Dependence of α on II. Units of α are M and of II are W m−2. Steady-state values of α are plotted as a function of II. When II > II_C, α takes a positive (X_L > X_D, shown as green X) or negative (X_L < X_D, shown as red X) value, depending on the random perturbation that drives the system away from the unstable racemic state α = 0. The blue Xs show the symmetric branch, which is unstable above the critical point. In the region II > II_C, α increases in a characteristically parabolic manner.

With these results, we now turn to the rate of entropy production σ. As stated above, the system is initially in the state of equilibrium, with the intensity of radiation II = 0 and σ = 0. The intensity II is then increased to a non-zero value and the rate of entropy production σ is monitored. Initially σ sharply increases and, as the system reaches the steady state corresponding to the set value of II, the entropy production also reaches its steady-state value. Figure 7 shows the evolution of σ from its value at t = 0 to its final steady-state value when II = 0.008. The final steady-state value of σ is small compared to its initial value, but it is nonzero.

Figure 7. Rate of entropy production in an asymmetric state when II = 0.008. Units of σ are J K−1 L−1 s−1. As in Figure 5, the steady state is reached at about 7000 s. Although it may appear that σ is approaching 0, it is not so; σ maintains a nonzero value at the steady state after symmetry is broken.

We would like to note that the results shown above do not depend crucially on the particular values assigned to the rate constants. Whether or not symmetry breaking occurs depends mostly on the mechanism of the reactions in the model, not on a narrow range of rate-constant values. In general, qualitative properties change drastically under small changes in rate constants only in singular cases, and the presented model is not singular.
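The σ traces discussed here follow directly from the elementary-step formula quoted in the Introduction, summed over all steps of the scheme. A direct transcription, with all rates in mol L−1 s−1 so that σ comes out in J K−1 L−1 s−1:

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def entropy_production(forward_rates, reverse_rates):
    """sigma = R * sum_k (R_kf - R_kr) * ln(R_kf / R_kr), elementary steps only.

    Each term is nonnegative and vanishes only at detailed balance
    (R_kf = R_kr), so sigma = 0 characterizes the equilibrium state.
    """
    Rf = np.asarray(forward_rates, dtype=float)
    Rr = np.asarray(reverse_rates, dtype=float)
    return R_GAS * np.sum((Rf - Rr) * np.log(Rf / Rr))
```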
In our study, we have tried a range of rate constants and observed symmetry breaking; a typical set is presented in this article. Our objective is to study the behavior of σ as the intensity of radiation II moves from a value below to a value above the critical value II_C. In a previous study [18] of an open system, σ behaved as entropy does in a second-order phase transition: its slope is discontinuous at the transition point. In addition, we noted that its slope increased above the critical point. This behavior was consistent with the Maximum Entropy Production (MEP) hypothesis [35-40], which states that non-equilibrium steady states maximize the rate of entropy production. In other words, the breaking of symmetry was a reflection of a general tendency of non-equilibrium systems and, from this point of view, it was only to be expected. This would imply that biochemical asymmetry is a consequence of MEP. To date, there is no general proof of MEP; indeed, in our own investigation we found that MEP was valid for some systems but not all. MEP is, in general, a controversial hypothesis and its application to various complex systems has been questioned [41]. Hence, we investigated MEP in the context of chiral symmetry breaking.

Figure 8 shows the behavior of the entropy production in the closed system presented here. It shows a change in slope at the transition point II_C, as in our previous study. From the point of view of the symmetry breaking transition, we see that the entropy production behaves in a way similar to that of entropy in a second-order phase transition. However, unlike the previous result, the slope decreases above the critical point. As indicated in Figure 8, when II > II_C, if we initially set the system in a non-equilibrium symmetric state, which is an extrapolation of the symmetric state below II_C, the system evolves to an asymmetric state in which σ is lower. The extrapolation of the symmetric state beyond the critical value II_C is possible on a computer because, without a small perturbation in X_L or X_D, or in other chiral species, the system stays in the unstable symmetric steady state; with a small perturbation, it evolves to the stable asymmetric state. The time evolution of the system from this initial state to its final asymmetric steady state results in a decrease in σ, indicating that the stable asymmetric state is associated with a lower value of σ than the symmetric state. Thus, we see that the entropy production in this closed system is not consistent with MEP, because MEP would predict higher values of σ in the asymmetric state.
Figure 8. The dashed arrow indicates a transition from an unstable state, where α = 0, to a stable asymmetric state where α is nonzero. In the region II > II_C, the rate of entropy production of the asymmetric state is lower than that of the symmetric state. The solid arrow shows that the transition from an unstable symmetric state to a stable asymmetric state results in the lowering of σ, the rate of entropy production.

Concluding Remarks
Our results show several aspects of chiral symmetry breaking. First, they show that Frank's original model can be modified with several additional steps, all of which are possible elementary chemical reaction steps, to demonstrate that spontaneous chiral symmetry breaking can occur in a photochemically driven closed system. Our model is motivated by the fact that the earth is essentially a closed system (except for a very small influx of interstellar matter such as meteorites) and that the evolution of life was driven by incident solar radiation. The incident radiation drives a cycle of generation and decomposition of chiral compounds that, at a sufficiently high intensity of radiation, makes a transition to a state of broken chiral symmetry. This example demonstrates that life on earth could have evolved under such conditions of prebiotic molecular chiral asymmetry. From a thermodynamic viewpoint, this study also confirms that chiral symmetry breaking transitions are similar to second-order phase transitions, with entropy production taking the place of entropy. We note that the general qualitative features of the bifurcation to asymmetric states and of the change in the behavior of the entropy production at the critical point, shown in Figures 6 and 8, are a consequence of the two-fold symmetry of the system, not of the particularities of the chemical reactions in the model. In fact, using the system's symmetry group, it is possible to derive the following generic equation for the evolution of α near the critical point: dα/dt = −Aα^3 + Bα + C, in which A, B and C are functions of the kinetic constants of the chiral-symmetry-breaking chemical reactions [11-13,15].
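The steady states of this generic equation can be inspected numerically; the tiny sketch below (with illustrative A, B, C, where B plays the role of II − II_C) reproduces the pitchfork structure of Figure 6, the ±√(B/A) branches appearing once B turns positive:

```python
import numpy as np

A, C = 1.0, 0.0   # ideal mirror symmetry corresponds to C = 0
for B in np.linspace(-0.002, 0.002, 5):
    # steady states solve -A*alpha^3 + B*alpha + C = 0
    roots = np.roots([-A, 0.0, B, C])
    real = sorted(r.real for r in roots if abs(r.imag) < 1e-12)
    print(f"B = {B:+.4f}: steady alpha = {real}")
# For B <= 0 only alpha = 0 is real; for B > 0 the pair +-sqrt(B/A) appears.
```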
Finally, we show that the behavior of the entropy production in this chiral-symmetry-breaking system is not consistent with the MEP hypothesis: MEP implies that the state of broken symmetry should have a higher rate of entropy production than the symmetric state, but we find the opposite to be the case in this model. In a system with an inflow of reactants and an outflow of products, however, the entropy production was higher in the asymmetric state. This indicates that MEP is valid for a certain class of systems, but what that class is has not yet been clearly identified.
Emergence of a super-synchronized mobbing state in a large population of coupled chemical oscillators

Oscillatory phenomena are ubiquitous in Nature. The ability of a large population of coupled oscillators to synchronize constitutes an important mechanism for expressing information and establishing communication among members. To understand such phenomena, models and experimental realizations of globally coupled oscillators have proven invaluable in settings as varied as chemical, biological and physical systems. A variety of rich dynamical behavior has been uncovered, although usually in the context of a single state of synchronization or its absence. Through the experimental and numerical study of a large population of discrete chemical oscillators, here we report the unexpected discovery of a new phenomenon revealing the existence of dynamically distinct synchronized states reflecting different degrees of communication. Specifically, we discover a novel large-amplitude super-synchronized state, separated from the conventionally reported synchronized and quiescent states by an unusual sharp jump transition when sampling the strong coupling limit. Our results assume significance for further elucidating globally coherent phenomena, such as in neuropathologies, bacterial cell colonies, social systems and semiconductor lasers.

The phenomenon of synchronization among coupled oscillators is fairly ubiquitous in natural and man-made systems. Examples in biology include the synchronized flashing of fireflies 1, the chirping of crickets 2, cardiac pacemakers 3, yeast cells 4 and the firing of neurons 5. In social systems, coherence occurs in cooperative crowd effects 6,7, while in non-living physical systems synchronization is seen, for example, in arrays of Josephson junctions 8 and semiconductor lasers 9. Last, but not least, systems of coupled chemical reactions 10 provide representative examples in chemistry. Precise characterizations of synchronization have been made through analytical considerations (phase models based on the Kuramoto family of models 11), while coupled electrochemical oscillators 12, coupled reactors 13 and well-mixed populations of catalyst-loaded oscillatory beads 14 have proven to be excellent experimental templates. The last of these is particularly interesting, being scalable to a large population and known to display a panoply of dynamical behaviors, ranging from phase synchronization 15 to amplitude entrainment through external driving 16 to quorum sensing effects 17. Because of their rich phenomenology, systems of globally coupled beads provide an ideal setting in which to investigate potentially new dynamical behavior. In particular, there are few experimental instances demonstrating non-trivial dynamics beyond the synchronization transition, the notable exception being oscillator death 18, where an initially synchronized population abruptly ceases to oscillate with increased coupling strength. Furthermore, while a large amount of effort has been dedicated to examining transitions to synchronization, relatively little is known about the potential existence of multiple states of synchronization. Examples abound in nature, such as the observation that the frequency of synchronized crickets adjusts according to the ambient temperature 2. In neurology, pathologies are known to occur due to abnormal synchronization in pyramidal neuronal cells, the so-called Interictal Epileptogenic Discharges 19.
These, however, are states of synchronization distinct from epileptic seizures, which are global high-amplitude patterns found in EEG recordings 20, occurring when globally connected groups of neurons communicate (through the spatial transfer of neurotransmitters) on a faster time scale than that of the neural oscillations 21. Here we report on our results in the search for multiple synchronized states in a large population of beads loaded with ferroin (Fe(phen)3^2+) as a catalyst and immersed in a catalyst-free Belousov-Zhabotinsky (BZ) solution 22. Our experimental setup, described in 16 and SOM Sec. S1, consists of a continuously stirred tank reactor (CSTR), in which the beads are immersed in the reaction mixture, which is then stirred at different rates to adjust their interconnectivity via the transport facilitation of signaling species. A RedOx cycle occurs through the oxidization of the ferroin by reagents in the solution, resulting in the production of the autocatalytic activator HBrO2 and the inhibitor Br−. The oxidized ferroin reacts with the solution, regenerating the reduced form of the catalyst and the inhibitor, with the cycle repeating itself when the latter falls below a particular threshold. A combination of the stirring rate and the bead density represents the coupling strength; if one fixes the latter, the former plays the role of the control parameter in our system.

Results
The resulting collective state was measured through the RedOx potential. In Fig. 1 we plot our experimental results. Panel a shows the temporal evolution of the signal with increasing stirring rate in a single experiment at fixed bead density. Two distinct regimes are present: for lower stirring rates (K = 900 rpm), one observes low-amplitude, high-frequency oscillations (green curve) corresponding to the well-known collective synchronization of the bead oscillations 15. As one samples the strong coupling regime by increasing the rate (K = 1400 rpm), there is a sudden and abrupt emergence of large-amplitude, lower-frequency oscillations (blue curve). This new regime is in sharp contrast to the expected dynamics, since this part of parameter space was thought to support a quiescent state (so-called oscillator death) 18. In panels b and c we plot the period and amplitude of the oscillations as a function of stirring rate for the same bead density. Each data point is an average over multiple realizations of the experiment at a given stirring rate, up to a maximum of 1500 rpm (the limit of our experimental apparatus). Both figures confirm the existence of two distinct synchronization regimes separated by a sharp jump transition in both period (50% increase) and amplitude (25% increase), prompting us to term the blue region a super-synchronized or mobbing state, by analogy with an equivalent phenomenon in sociology 7. In order to check the robustness of this effect, we conducted several experiments for multiple combinations of stirring rate and bead density. The results are compiled in the "phase" diagram shown in Fig. 1d, where each point corresponds to multiple realizations of experiments conducted at a fixed pair of density and stirring values. The characteristic time evolution of the RedOx potential in each region, shown in Fig. 1e, allowed us to demarcate three distinct dynamical behaviors of the system. In addition to the green and blue states, one also observes a globally quiescent state (red curve and points) at low bead density and high stirring.
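The period and amplitude statistics of panels b and c can be extracted from a recorded RedOx trace with elementary peak detection; the sketch below is one such recipe, not the authors' analysis pipeline, and the prominence threshold is an assumed noise level to be tuned to the actual signal.

```python
import numpy as np
from scipy.signal import find_peaks

def period_and_amplitude(t, v, prominence=0.01):
    """Mean oscillation period (s) and mean peak-to-trough amplitude
    from a RedOx-potential time series v(t)."""
    peaks, _ = find_peaks(v, prominence=prominence)
    troughs, _ = find_peaks(-v, prominence=prominence)
    period = float(np.mean(np.diff(t[peaks])))
    amplitude = float(np.mean(v[peaks]) - np.mean(v[troughs]))
    return period, amplitude
```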
Intriguingly, the phase diagram of Fig. 1d suggests that the large-amplitude blue state can be accessed directly from the quiescent red state, either through an increase in density or, for a relatively narrow region of parameter space, through an increase in the stirring rate. That is to say, while the strong coupling regime leads to oscillator death (as previously established), still stronger coupling leads to the emergence of the reported mobbing state. A representative example is shown in Fig. 1f, where we plot the time evolution of the RedOx potential for three different stirring rates in the same experiment. The three regimes are clearly visible, with the green and blue states now separated by an intermediate red state. Indeed, the boundary separating the quiescent and high-frequency oscillatory regimes is reminiscent of a quorum sensing transition, as previously reported in 17. The transition from the red state to the blue state is even more dramatic (the system passes directly from a steady state to large-amplitude oscillations) and may be considered a hyper quorum sensing effect. Insight into the unusual dynamical behavior of the system can be gleaned via the simulation of an idealized numerical model 17, based on the three-variable Oregonator model 23, that best approximates our experimental setup. Here, for each bead i, the concentration of HBrO2 is denoted X_i, that of Br− by Y_i, and that of the oxidized catalyst by Z_i. The X_i evolve according to the Oregonator rate equations, supplemented by a linear exchange term K_ex(X_s − X_i) that couples each bead to the autocatalyst concentration X_s of the surrounding medium. The degree of synchronization is quantified through the order parameter r, defined by r e^(iΦ) = (1/N) Σ_j e^(iφ_j), where φ_j is the phase of the j-th oscillator and Φ that of the synchronized fraction 11. In Fig. 2 we plot our results. Panel a shows the order parameter r as a function of density ρ and exchange coupling K_ex. The figure distinguishes between two regimes, a quiescent state (r = 0) and a synchronized state (r ≠ 0), but provides no information about any difference in dynamical behavior within the latter regime. On the other hand, consistent with the experimental observations in Fig. 1b,c, the amplitude A and period T of the oscillations of X_i, shown in Fig. 2b,c, clearly demarcate the synchronized region into two different dynamical states separated by an abrupt jump transition in both quantities at the same (ρ, K_ex) boundary. The combined information in panels a through c can be displayed as a single phase diagram (Fig. 2d) that shows the different dynamical regions. The experimentally relevant ones are color coded as in Fig. 1: a low-amplitude, high-frequency synchronized state (green) exists at moderate densities; a quiescent state (red) exists at high exchange rate but low densities; and finally a large-amplitude, low-frequency state (blue) appears at high densities and a wide range of exchange rates. While the existence of the red and green states has been previously mapped out 17, the unusual appearance of the blue state can be understood in the context of the flow of autocatalyst between the medium and the beads. In Fig. 2e we plot F = X̄_i − X_s as a function of ρ and K_ex. For a wide swathe of parameter space the flow is negative, indicating a greater concentration of HBrO2 in the medium than in the beads. This difference decreases asymptotically as one traverses phase space from the red to the green state but vanishes abruptly, once again through a jump transition, as the blue state is accessed. The time evolutions of X_s and X_i in the red, green and blue regions, plotted in Fig. 2f through h, make this effect more apparent. Both the red and green states are characterized on average by lower concentrations in the beads than in the medium, with the main difference being the appearance of oscillations in Fig. 2g.
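Since the exact rate laws are not reproduced here, the following Python sketch should be read as a minimal stand-in rather than the paper's model: it couples N identical two-variable Oregonator oscillators (a common reduction of the three-variable model) to a shared stirred medium through linear exchange of the activator. Every parameter name and value (eps, q, f, k_ex, rho) is an illustrative assumption, not a fitted quantity from this study.

```python
import numpy as np

# Minimal sketch (not the paper's fitted model): N identical two-variable
# Oregonator oscillators coupled through a common stirred medium by linear
# exchange of the activator u (standing in for HBrO2).
N = 50
eps, q, f = 0.04, 0.002, 1.0           # scaled Oregonator constants (assumed)
k_ex, rho = 0.5, 0.8                   # exchange rate and bead density (assumed)
dt, steps = 1e-4, 400000

rng = np.random.default_rng(0)
u = rng.uniform(0.05, 0.5, N)          # activator on each bead
v = rng.uniform(0.05, 0.5, N)          # oxidized-catalyst variable
us = 0.1                               # activator in the medium

for _ in range(steps):
    du = (u - u**2 - f * v * (u - q) / (u + q)) / eps + k_ex * (us - u)
    dv = u - v
    dus = k_ex * rho * (u.mean() - us)  # medium gains what beads lose on average
    u += dt * du
    v += dt * dv
    us += dt * dus

# Flow indicator F = <u_i> - u_s: it approaches zero when the beads and the
# medium lock together, as in the super-synchronized state described above.
print("F = %.4f" % (u.mean() - us))
```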
In Fig. 2h, however, the signals for the medium and the bead are practically indistinguishable, suggesting an identical phase and period and nearly identical amplitudes. Thus the primary difference between the green and blue states is the following: in the former, the beads synchronize with each other and, as the coupling increases, eventually reach a state of full or complete synchronization, sharing a common phase, frequency and amplitude. In this regime, as suggested by Fig. 2g, the dynamics of the medium is distinct from that of the beads immersed in it. As the coupling is increased further, however, there is a second dynamical transition in which the already fully synchronized beads also synchronize with the background medium, sharing a common dynamical signature. To distinguish between the well-known complete synchronization state and the newly observed second dynamical transition reported here, we term the blue state a super-synchronized or mobbing state, reflecting the strong harmony between the fully synchronized beads and the medium they are immersed in, whose active role is crucial for reaching this state.
Discussion
To summarize, we document the existence of a novel dynamical state in a population of coupled discrete chemical oscillators. This super-synchronized or mobbing state, characterized by large-amplitude, low-frequency oscillations, resides in the strong coupling limit (as measured by information exchange and oscillator density), a particularly surprising result given the conventional wisdom of a complete cessation of oscillations in this regime. The detailed study of an idealized numerical model suggests that this new mobbing state originates in the interplay between the dynamics of the beads and the medium in which they are immersed. Specifically, the super-synchronized state is accessible only when the flow of signaling species (autocatalyst) between the beads and the medium is minimized, a condition one can achieve by increasing the density of oscillators, the exchange rate, or both. The dynamical interplay between the oscillators, the medium and the rate of information exchange is reminiscent of global coherence phenomena in neurology, such as epilepsy, or of mobbing crowds in social networks 7. Indeed, the abruptness of the super-synchronization transition may have implications in biology, where synchronization plays an important role in many contexts, including the cell cycle 24 and cooperative behavior in bacterial cell colonies 25. In addition, the system reaches a high level of self-organization as a consequence of the participation of the medium in the process of oscillator synchronization, brought about by the strong coupling between oscillators. This behavior is analogous to the mobbing behavior found in hyper-communicated social systems, where individuals not only have access to their immediate neighbors but also communicate with distant ones via social media. We also note that the degree of information exchange between the oscillators (the coupling strength) serves as a "switch" to access the different states and their regimes, suggesting intriguing engineering applications in similar physical systems. Finally, our description may serve as a template to explain natural phenomena such as the observation that the frequency of synchronized chirps in grasshoppers increases with temperature 2.
3,050.6
2016-01-12T00:00:00.000
[ "Physics" ]
Analysis of the response characteristics of a roadway wall under the impact of gas explosion
A mine gas explosion causes serious failure (damage) to the roadway, but research on the response characteristics of the roadway wall under the impact load of a gas explosion is still very limited. In view of the shortcomings of the existing research, the LS-DYNA software is used to establish the physical and mathematical models of a gas explosion in a roadway, and the validation results show that the model is effective and reliable. Based on the model, the changes of pressure, velocity and displacement of the roadway walls and of the equivalent stress under the impact of gas explosions are measured. The response characteristics of the end and wall of the roadway under the thermal impact of the gas explosion are analyzed. The results show that the pressure on the closed end wall is relatively large at the center and edge of the roadway, and the wall failure there is also relatively more serious. With the propagation of the gas explosion, the overpressure on the closed end wall gradually attenuates, and the maximum overpressure region retracts to the center. At the closed end, the explosion pressure is first loaded on the inner wall and gradually transferred to the outer wall; during the transfer the pressure decays step by step, and the velocity of the measuring points at the closed end decays continuously. With the propagation of the gas explosion, the displacement of each measuring point in the Z direction increases continuously. In the axial direction, the displacement near the center of the roadway is large, and the displacement then decreases in the form of regular rings. The impact load produced by the gas explosion is first loaded onto the closed end wall, resulting in wall deformation and equivalent stress. The study results can provide a theoretical basis and data support for roadway wall design and for the reduction of wall failure.
| INTRODUCTION
Gas explosion accidents in coal mines cause many casualties, huge property losses and serious failure (damage) to roadway facilities and equipment. [1][2][3] The failure effect of a gas explosion is mainly reflected in its propagation stage. To master the failure and response characteristics of gas explosions, many scholars have done a great deal of research, [4][5][6][7][8][9] and achieved fruitful results. For example, Yuan summarized the research methods commonly used to study the dynamic response characteristics of the surrounding rock of underground caverns, such as on-site blasting vibration tests, theoretical calculation and numerical simulation. 10 Gao et al. studied the distribution law of the vibration velocity, stress and bending moment of a mountain tunnel lining under bottom dynamic load, finding that under a blasting dynamic load at the bottom, the peak vibration velocity at the bottom of the tunnel lining arch is the largest, followed by the two sides, while the peak vibration velocity at the vault is the smallest. 11 Tianyuan et al. used FLAC3D software to establish an underground roadway model and analyzed the variation characteristics of the velocity and displacement of the underground roadway under the action of blasting vibration. 12 Liu Bowen et al. studied the propagation and influence of vibration signals during the blasting excavation of deep tunnels, as well as the difference in blasting vibration response characteristics between the internal surrounding rock and the surface rock.
The results showed that the distance between the rock mass particles and the excavation surface is an important factor, and that the spatial relationship between them also significantly affects the maximum vibration velocity of the particles. 13 Zhang simulated the construction process of tunnel blasting; the numerical results showed that an exponential load waveform can represent the spatial variability characteristics of the surrounding rock more completely than a smooth-curve or triangular load waveform, and the computed deformation is consistent with the actual working condition. 14 He et al. studied the variation law of the explosion load and the transient unloading stress field of in-situ stress caused by multi-stage blasting on the excavation surface, finding that as the local stress continues to increase, the contribution of the transient unloading of geostress to the damaged region becomes more and more obvious. 15 Yang et al. studied rock fragmentation under the coupled action of static stress and a spherical charge explosion, 16 and found that uniaxial static stress loading can change the rock failure surface from circular to elliptical and enlarge the failure region, whereas under biaxial equal stress the failure surface remains circular. Guo et al. established a numerical simulation model of rock blasting mechanics based on the discontinuous deformation analysis (DDA) method, 17 and carried out numerical simulations of single-hole blasting of homogeneous rock under two-way equal and unequal geostress. The results showed that under bidirectional equal stress the blasting crack region is approximately circular and shrinks as the initial geostress increases, while under two-way unequal stress the blasting crack extends deeper in the direction of the larger geostress. At present, research on the structural response and structural failure caused by explosion impact loads mostly concerns explosions of solid explosives; research on the response characteristics of underground gas explosion impact loads on roadway walls is still very limited. In most studies, laboratory pipes are used to simulate underground roadways for analyzing the wall response, but the limitations of explosion experiments make it impossible to obtain detailed information about the explosion process; numerical simulation, in contrast, can reproduce the whole explosion process well. Research shows that numerical simulations can capture explosion impact problems reliably. 1,7 In view of the shortcomings of the existing research, the LS-DYNA software is used to establish the physical and mathematical models of gas explosions in roadways. The response characteristics of roadways under the thermal impact of gas explosions are analyzed by measuring the changes in pressure, velocity, displacement and equivalent stress of the roadway walls under the explosion impact. It is expected that the study results can help reduce roadway wall damage. To simplify the calculation, some basic assumptions are made for the model as follows: (1) There is only one thermal source of gas explosion in the roadway. (2) The roadway wall is smooth; the turbulence caused by the wall is not considered. (3) The boundary is set as a non-reflective boundary condition.
(4) Under normal temperature and pressure, the initial state of the gas is evenly distributed; the initial temperature T0 is 25°C and the initial pressure P0 is 0.1 MPa. (5) There are no obstacles inside the roadway. (6) The thermal effect of the roadway wall is not considered. (7) Only a one-step reaction of the gas explosion is considered, namely CH4 + 2O2 = CO2 + 2H2O; the intermediate processes of the chemical reaction are ignored, and intermediates and instantaneous products are not considered.
| Basic governing equation
ANSYS/LS-DYNA software mainly adopts the incremental Lagrangian description. The particle position at the initial time is taken as X_i (i = 1, 2, 3). At any time t, the particle position follows the motion equation x_i = x_i(X_j, t). 18,19 When t = 0, the initial conditions are x_i(X_j, 0) = X_i and ẋ_i(X_j, 0) = V_i(X_j), where V_i is the initial velocity.
Momentum conservation equation: σ_ij,j + ρf_i = ρẍ_i, where σ_ij is the Cauchy stress, f_i is the body force per unit mass, and ẍ_i is the acceleration.
Mass conservation equation: ρJ = ρ0, where ρ is the current mass density, ρ0 is the initial mass density, and J is the Jacobian of the deformation.
Energy conservation equation: Ė = V·S_ij·ε̇_ij − (p + q)·V̇, where ε̇_ij is the strain rate, q is the bulk viscous resistance, S_ij is the deviatoric stress, p is the pressure, and δ_ij is the Kronecker symbol entering the decomposition σ_ij = S_ij − (p + q)δ_ij.
The boundary and contact conditions are: 1. Surface force boundary condition: σ_ij·n_j = t_i(t), where n_j (j = 1, 2, 3) is the direction cosine of the outer normal of the boundary and t_i (i = 1, 2, 3) is the surface force load. 2. Displacement boundary condition: x_i(X_j, t) = K_i(t), where K_i(t) is the prescribed displacement function at a given boundary location i. 3. Displacement condition at a geometric discontinuity of the sliding contact surface.
In the solution cycle, the new time step is the minimum of the time steps of all elements, namely Δt^(n+1) = min{Δt_1, Δt_2, ..., Δt_N}, where N is the number of elements. The limiting time step of an element is governed by its characteristic length L_e and the material sound speed c (for solid elements, Δt_e = L_e/(Q + (Q² + c²)^(1/2)), with Q a term accounting for the bulk viscosity). The gas-filled section of the roadway model contains premixed gas with a methane concentration of 9.5%. The premixed gas is separated from the rest of the roadway interior by a film, and the air-filled length is 5 m. The ignition position is (0, 0, 2).
| Grid division
The unified system of units (kg/m/s) is adopted for this model and its material parameters. The roadway model is shown in Figure 2. Mapped meshing is selected for this model: hexahedral element meshing is used for the gas in the roadway, an Euler grid for the air, and a Lagrange grid for the roadway wall. The element size is 0.05 m. The finite element model after grid division is shown in Figure 3. The physical model of the gas explosion in the roadway is divided into 171700 elements. Preliminary simulation confirmed that this mesh meets the needs of the study.
| Unit type and material model
Air constitutive model: the initial state parameters of air in the standard state are P = 0.1 MPa, ρ = 1.29 kg/m³ and T = 298 K; the thermal exchange during propagation and the friction between the shock wave and the roadway wall are ignored. Assuming that the expansion of the gas explosion shock wave is an adiabatic process, the gamma-law criterion gives P_a = (γ − 1)(ρ/ρ_a)E_a, where P_a is the gas pressure, γ is the specific heat ratio, ρ is the current density of the air, ρ_a is the air density at the initial time, and E_a is the internal energy per unit volume of gas.
Air material model and state equation: the *MAT_NULL model is adopted for the air material and is generally described by the *EOS_LINEAR_POLYNOMIAL state equation. 20 The linear-polynomial state equation is P = C0 + C1μ + C2μ² + C3μ³ + (C4 + C5μ + C6μ²)E, where P is the explosion pressure, E is the internal energy per unit volume, μ = ρ/ρ0 − 1 is the relative density change (ρ being the current density and ρ0 the initial density), and C0-C6 are constants whose values for the air material are given in Table 1.
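To make the two pressure relations concrete, the following Python sketch evaluates the linear-polynomial air EOS just given, together with the JWL form introduced next for the explosion products; all coefficient values in the sketch are placeholders, not the Table 1/Table 2 inputs.

```python
import numpy as np

# Minimal sketch of the two equations of state; coefficients are placeholders.
def eos_linear_polynomial(rho, e, rho0=1.29, C=(0, 0, 0, 0, 0.4, 0.4, 0)):
    """LS-DYNA-style *EOS_LINEAR_POLYNOMIAL:
    p = C0 + C1*mu + C2*mu^2 + C3*mu^3 + (C4 + C5*mu + C6*mu^2)*e,
    with mu = rho/rho0 - 1 and e the internal energy per unit volume.
    Setting C4 = C5 = gamma - 1 (here gamma = 1.4) recovers the ideal-gas
    gamma law for air."""
    mu = rho / rho0 - 1.0
    C0, C1, C2, C3, C4, C5, C6 = C
    return C0 + C1*mu + C2*mu**2 + C3*mu**3 + (C4 + C5*mu + C6*mu**2) * e

def eos_jwl(V, e0, A, B, R1, R2, omega):
    """JWL equation of state:
    p = A*(1 - omega/(R1*V))*exp(-R1*V) + B*(1 - omega/(R2*V))*exp(-R2*V)
        + omega*e0/V, with V the relative volume."""
    return (A * (1.0 - omega / (R1 * V)) * np.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * np.exp(-R2 * V)
            + omega * e0 / V)

# Air at standard conditions: p = (gamma - 1)*e ~ 0.1 MPa for e ~ 2.5e5 J/m^3
print(eos_linear_polynomial(rho=1.29, e=2.5e5))            # ~1.0e5 Pa
# JWL evaluated with generic placeholder constants, purely for illustration
print(eos_jwl(V=2.0, e0=1.0e6, A=5.0e9, B=1.0e8, R1=4.0, R2=1.0, omega=0.3))
```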
The JWL equation is used to describe the relationship between explosion pressure and volume. 21 The JWL state equation is p = A(1 − ω/(R1·V))e^(−R1·V) + B(1 − ω/(R2·V))e^(−R2·V) + ωE0/V, where p is the pressure, V is the relative volume, E0 is the initial internal energy density, A and B are material constants, R1 and R2 are dimensionless constants, and ω is the Grüneisen constant, namely the rate of change of pressure with respect to internal energy at constant volume. The parameters of the JWL state equation are shown in Table 2. 22
Material parameters of the roadway wall: because elements disappear automatically once the roadway wall is damaged and destroyed, which is not conducive to monitoring the relevant wall parameters, a rigid material model is adopted for the wall surface of the basic roadway model, 23 with the LS-DYNA keyword *MAT_RIGID. The selected material parameters are shown in Table 3.
Hourglass control: if the full integration algorithm is used, it consumes a lot of CPU time. CPU time can be saved effectively by using single-point integration in the model; however, this easily excites zero-energy hourglass modes, so hourglass control is necessary. For hourglass control of the SOLID164 solid element, the K-file settings are shown in Table 4. 22
Ignition position: in the K file, the ignition position of the model is set as in Table 5.
Time control: the solution termination time is set to 0.05 s and the time step scale factor is set to 0.67, as shown in Table 6.
| ANALYSIS ON NUMERICAL SIMULATION RESULTS
Gas explosions produce a thermal impact on the roadway wall. After the roadway wall bears the thermal impact load, the temperature field and thermal stress field inside the roadway also change, and the wall responds accordingly. The results obtained are compared with the literature 1,7 and with experimental results, and the validation shows that the model is effective and reliable (due to space limitations, the simulation and measured comparison diagrams are not elaborated here). In many studies on wall failure, researchers often focus on the wall surrounding the explosive body and ignore the failure of the end face under the gas explosion; as a result, the overall failure state of the exploded object cannot be obtained. To comprehensively study the dynamic response of the inner wall of the roadway under a gas explosion, measuring points are set at the closed end and on the axial wall in this paper to conduct a relatively complete response analysis of the roadway failure.
| Analysis on wall pressure
The measuring points set on the closed end wall of the roadway are shown in Figure 4A. In the X direction, measuring points (A-E) are set at an interval of 0.2 m from the center to the side wall of the closed end wall; among them, measuring point A is unit 41605, measuring point B is unit 41557, measuring point C is unit 41497, measuring point D is unit 430, and measuring point E is unit 425. The measuring points in the Y direction are as follows: measuring point F is unit 41609, measuring point G is unit 41613, measuring point H is unit 20893, and measuring point I is unit 20968.
The measuring points set on the axial wall of the roadway are shown in Figure 4B; measuring points 1-4 are set at an interval of 2 m from the closed end to the open end along the axial wall. Among them, measuring point 1 is unit 4168, measuring point 2 is unit 8344, measuring point 3 is unit 12376, and measuring point 4 is unit 16552. Figure 5 shows the pressure time history curves of the gas explosion at the closed end wall. To facilitate the analysis of the corresponding measuring points and conditions, some measuring points are selected, as shown in Figure 6. It can be found from Figure 6 that the measured values and development trends of measuring point 425 and measuring point 20965 are almost the same, and those of measuring point 41497 and measuring point 41613 are also basically the same. Because the roadway is circular and symmetric, the shock wave front loads the closed end wall as a spherical wave at the initial stage of the gas explosion, so the measured values at corresponding measuring points on the closed end wall are basically consistent in development trend. Therefore, in the follow-up study, only the measuring points in the X direction (named measuring points A-E) are taken at the closed end for the related response research. The general pressure history at these points is a rapid, nearly linear rise to an instantaneous peak, followed by nonlinear attenuation; with the propagation of the gas explosion the pressure eventually tends to a stable value.
| Pressure analysis of closed end wall
In terms of peak pressure, the peak values at measuring points A-D decrease successively, with the maximum pressure at measuring point A reaching 6.92 MPa. As measuring point E is close to the junction between the two walls, the pressure there is constrained by the wall and cannot be sufficiently relieved, resulting in a large pressure of 5.98 MPa. The results show that the pressure at the center and edge of the roadway is relatively large, and the wall failure there will be relatively more serious.
| Pressure nephogram at closed end
In LS-PREPOST, the S-plane control function is used to slice the end face of the closed end of the roadway; the slice position is shown in Figure 8. Figure 9 shows the pressure nephograms of the end face at different times. From Figure 9 it can be seen that the positions with high end-face pressure are the center and the inner edge. When t is 0.0005 s, the maximum pressure at the wall junction reaches 6.11 MPa, and the maximum pressure at the center of the wall reaches 3.95 MPa. Subsequently, the pressure region on the wall continues to expand, as shown for the closed end at t = 0.01 s. As the gas explosion continues, the pressure on the closed end wall gradually decreases, and the maximum pressure region retracts to the center. At 0.04 s the field has become basically uniform, after which the pressure region expands back and forth due to shock wave reflection; the response shown in the nephograms of the closed end wall is consistent with the pressure conclusions drawn at the measuring points. Figure 10A shows the pressure time histories of measuring points 1-4. To facilitate the analysis, the pressure time history curves of each measuring point are extracted respectively, as shown in Figure 10B-E.
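Before examining the individual curves, a hedged sketch of how such a pressure-time history can be reduced to a peak overpressure and an arrival time may be useful; the trace below is synthetic (a fast, nearly linear rise followed by nonlinear decay, as described above), and the 5% arrival threshold is an assumption of the example.

```python
import numpy as np

# Minimal sketch: reduce a pressure-time history (as at measuring points A-E)
# to a peak overpressure and an arrival time. The trace is synthetic.
t = np.linspace(0.0, 0.05, 5001)                          # time (s)
p = 6.9e6 * np.minimum(t / 0.001, 1.0) * np.exp(-np.maximum(t - 0.001, 0.0) / 0.008)

i_pk = int(np.argmax(p))
print("peak overpressure: %.2f MPa at t = %.4f s" % (p[i_pk] / 1e6, t[i_pk]))
# Arrival time: first sample exceeding 5% of the peak (assumed threshold)
i_arr = int(np.argmax(p > 0.05 * p[i_pk]))
print("arrival time: %.4f s" % t[i_arr])
```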
It can be seen from Figure 10B-E that after the gas explosion in the roadway, the shock wave propagates in all directions simultaneously. Due to the rigid wall constraint of the roadway and the existence of the closed end, a positively reflected shock wave in the Z direction is generated at the closed end face. The measured pressures at measuring points 1 and 2 show an overall attenuation trend and gradually tend to 0 as the gas is depleted. It can also be seen from Figure 10B-E that the peak pressure at measuring point 1 is 5.9 MPa and that at measuring point 2 is 5.0 MPa. The measured pressures at measuring points 3 and 4 first increase and then decrease, and are lower than those at measuring points 1 and 2. The reason is that measuring points 3 and 4 are located in the air region; the instantaneous shock wave of the gas explosion does not reach these points directly but disturbs them to a certain extent. Figure 11 is the schematic diagram of the axial wall slice of the roadway. Figure 12 shows the dynamic development of pressure from the inner wall to the outer wall of the roadway during the gas explosion. It can be seen from Figure 12 that the explosion pressure at the closed end is first loaded on the inner wall; due to the continuity of the medium, it is gradually transmitted to the outer wall, and the pressure decays step by step during transmission. In the axial direction, at the moment of the gas explosion the wall is in a high-pressure state; at this time the pressure load on the wall of the air region is relatively small, and the pressure decays step by step from the inner wall to the outer wall. Figure 13 shows the velocity time histories of the measuring points on the closed end wall. From Figure 13A, the velocity changes of the measuring points follow a consistent law: as the gas explosion proceeds, the velocity of each measuring point at the closed end decreases continuously and basically tends to 0 at 0.05 s. It can be clearly seen from Figure 13B that, although the maximum measured velocity differs among the measuring points, the differences are not large. Figure 14 shows the velocity time histories of the measuring points on the axial wall. It can be seen from Figure 14A that the velocity changes of these points are relatively disordered, with no obvious law: because the impact load is irregularly reflected on the roadway wall many times, the velocity distribution of the measuring points on the axial wall is irregular. According to Figure 14B-E, the measured velocity at each measuring point on the axial wall is about 0.1 m/s. On the whole, the measured velocities at measuring points 1 and 2 show an attenuation trend, while those at measuring points 3 and 4 oscillate significantly, although at a lower frequency than at points 1 and 2. Figure 15 shows the vector nephograms of the three-dimensional roadway wall velocity under the gas explosion impact load: Figure 15A shows the velocity vector nephogram of the roadway wall in the Z direction at different times, and Figure 15B shows that in the XY directions at different times. It can be seen from Figure 15A that the velocity distribution at the closed end of the roadway at the initial stage of the gas explosion is convex.
When t is 0.00049 s, the maximum Z-direction velocity, at node 4489, is 0.29 m/s. As the gas explosion proceeds, the velocity distribution of the roadway wall changes constantly; when t is 0.05 s, the maximum velocity, at node 3073, is 0.014 m/s. The wall velocity gradually decreases from the closed end to the open end. Near the closed end of the roadway, the impact load converges and overlaps at the corner, so the junction is loaded in multiple directions simultaneously, resulting in serious deformation; the velocity there is therefore large and the failure more serious. As can be seen from Figure 15B, when t is 0.00049 s the maximum XY-direction velocity, at node 17427, is 0.0687 m/s. As the gas explosion continues, the velocity distribution on the roadway wall changes irregularly; when t is 0.05 s, the maximum velocity, at node 49888, is 0.0734 m/s. Figure 16A shows the time history of the wall displacement at each measuring point on the closed end wall. According to Figure 16A, the Z-direction displacement of each measuring point increases continuously as the gas explosion proceeds. In 0-0.01 s the displacement change reaches about 0.6E−03 m, the increment in the following 0.02 s is about 0.4E−03 m, and the increment in 0.03-0.05 s is only about 0.2E−03 m. This shows that at the initial stage of the gas explosion the displacements of the measuring points increase rapidly, reflected in the large slope of the curve. From the displacement time history curves it can be concluded that pressure relief at the initial stage of a gas explosion is very important: if the explosion energy is fully diffused, the displacement of the exploded object will not continue to increase. As the gas is exhausted, the explosion energy gradually decreases and the displacement of the measuring points levels off. Figure 16 also shows that the displacements of the measuring points on the wall are relatively similar: the shock wave of the gas explosion propagates mainly in the axial direction, the intensity difference across the wave front is small, and the region of the closed end wall is limited. Therefore, the displacement of each measuring point is similar, finally reaching about 0.0012 m. Figure 16B shows the time history of the wall displacement at each measuring point on the circumferential wall. According to Figure 16B, the displacement of measuring points 1 and 2 in the gas region is large within 0.01 s, reaching 0.0445E−03 m. After 0.01 s the circumferential wall displacement in the air region increases, but remains far less than the maximum displacement of the wall measuring points in the gas region. Figure 17 shows the vector displacement nephograms of the closed end of the roadway at the initial time t = 0.00049 s and the end time t = 0.05 s. Figure 17 includes not only the displacement time history nephogram of the axial roadway but also the displacement vector nephogram of the roadway in the XY directions; the vector arrows represent the magnitude and direction of displacement. It can be seen from Figure 17 that in the axial direction the displacement near the center of the roadway is large, and the displacement then decreases in the form of regular rings.
Since the roadway is circular, the gas explosion propagates toward the closed end wall as a spherical (convex) shock wave, and the shock wave at its front is intense. Therefore, the displacement on each circular belt from the center of the roadway to the roadway wall is relatively uniform, which can also be seen from the neat circular arrangement and distribution of the displacement vectors. On the circumferential (XY) wall, the longer the arrow, the larger the displacement and hence the deformation; the denser the arrows, the more concentrated the displacement and the more serious the failure. Figure 18 is the Z-direction displacement vector nephogram of the three-dimensional roadway wall. From Figure 18, the displacement law of the roadway wall under the gas explosion impact load can be observed more intuitively. At the moment of the gas explosion, the displacement change of the closed end wall is obvious, and the wall protrudes severely in the Z direction. The displacement at the center of the closed end wall is the largest, with the maximum at node 4490 reaching 8.489E−05 m. The axial wall displacement in the Z direction is smaller as a whole. When t is 0.05 s, the Z-direction displacement of the closed end reaches 1.216E−03 m; that is, within 0.05 s the impact load continues to act on the closed end wall and the displacement keeps increasing. At this time the axial wall also has a certain Z-direction displacement, decreasing from the closed end to the open end. Figure 19 shows the circumferential displacement vector nephogram of the three-dimensional roadway wall, from which the circumferential wall displacement law under the gas explosion impact load can be obtained. At the initial stage of the gas explosion, the circumferential displacement of the wall in the gas region is significantly larger than that in the air region; the circumferential displacement at node 21826 in the gas region reaches 4.454E−05 m. As the explosion propagates toward the open end, the circumferential displacement in the air region reaches only about a fourth of the maximum displacement in the gas region.
| Displacement nephogram
In general, the overall displacement of the closed end wall is large, and the axial displacement in the gas region is larger than that in the air region. Figure 20 shows the time history curves of the equivalent stress at each measuring point on the axial wall. In Figure 20A the equivalent stress generally shows an attenuation trend. Within 0-0.02 s, the peak value and frequency of the equivalent stress at each measuring point are relatively high, indicating that the wall stress is loaded frequently and the roadway fatigue damage is serious. From Figure 20B-E it can be seen that the equivalent stresses at measuring points 1 and 2 are similar, with maximum stress peaks exceeding 10E6 Pa. The peak stresses at measuring points 3 and 4 in the air region are lower than those at points 1 and 2 in the gas region. Although the stress at points 1 and 2 is large, it attenuates relatively quickly, while the stress at points 3 and 4 fluctuates with relatively uniform amplitude. According to the equivalent stress on the closed end wall in Figure 21A, the equivalent stress generally shows an attenuation trend. At the moment of the gas explosion, the peak stresses satisfy E > A > B > C > D.
The reason is that the explosion energy converges at the junction of the closed end of the roadway, where the wall is subjected to axial and circumferential loads simultaneously. Serious deformation occurs at the junction, and the maximum equivalent stress appears there locally; therefore, the peak stress at measuring point E is larger than at the other measuring points. After 0.01 s, the stress at each measuring point at the closed end decreases significantly.
| Analysis on wall equivalent stress
From Figure 21B-F it can be seen that the maximum stress peaks at measuring points A, B, C, D and E are 9.2, 8.6, 6.4, 4.9 and 9.9 MPa, respectively. All maxima occur at the moment of the gas explosion, indicating that the most serious failure of the closed end wall is caused by the initial stress loading of the wall. Figure 22 shows the distribution of the equivalent stress nephograms of the roadway walls over time. It can be found from Figure 22 that the impact load generated by the gas explosion is first applied to the closed end wall, producing wall deformation and equivalent stress. At the closed end, the equivalent stress at the center of the roadway and at the wall junction is relatively large. As the explosion continues, the load produced by the gas explosion keeps acting on the roadway wall toward the open end, but the loading stress keeps decreasing. Figure 22 also shows that the stress loading is not completely uniform: the stress is relatively concentrated in some wall regions, and these stress concentration regions are often the regions with serious failure.
| CONCLUSIONS
To reflect the response characteristics of the roadway wall under gas explosion completely, LS-DYNA is used to establish simulation models of gas explosion in a roadway; the pressure, velocity, displacement and other stress changes of the roadway wall under the gas explosion impact are measured, the gas explosion impact and the response characteristics of the roadway wall are analyzed, and the following conclusions are obtained. 1. At the closed end of the roadway, the pressure at the center and edge of the roadway is relatively large and the wall failure there is relatively more serious. 2. With the propagation of the gas explosion, the pressure on the closed end wall gradually attenuates and the maximum pressure region retracts to the center, becoming basically uniform at 0.04 s; the pressure region then expands back and forth due to the reflection of the shock wave. 3. After the gas explosion in the roadway, the shock wave propagates in all directions simultaneously; due to the rigid wall constraint of the roadway and the existence of the closed end, total reflections also appear on the closed end face and the axial inner wall. 4. At the closed end, the explosion pressure is first loaded on the inner wall and gradually transferred to the outer wall, with the pressure decaying step by step during the transfer. At the moment of the gas explosion, the pressure load on the wall of the air region is relatively small, and the pressure also decays step by step from the inner wall to the outer wall. 5. With the propagation of the gas explosion, the velocity of the measuring points at the closed end decreases continuously. Because the impact load is reflected irregularly on the roadway wall many times, the distribution of the velocity values of the measuring points on the axial wall is irregular.
6. With the continuous progress of the gas explosion, the velocity distribution of the roadway wall changes constantly, and the wall velocity gradually decreases from the closed end to the open end. 7. With the propagation of the gas explosion, the displacement of each measuring point in the Z direction increases continuously. In the axial direction, the displacement near the roadway center is large, and the displacement then decreases in the form of regular rings; the displacement on each circular belt from the center of the roadway to the roadway wall is relatively uniform. 8. The impact load produced by the gas explosion is first loaded on the closed end wall, resulting in wall deformation and equivalent stress. As the explosion continues, the load produced by the gas explosion continues to act on the roadway wall toward the open end, but the loading stress continues to decrease. In this study, many assumptions have been adopted. To improve the research results, future work can reduce these assumptions further, making the simulation conditions closer to the real situation. At the same time, further research can address the explosive gas action, temperature field, stress field, material properties and their mutual coupling effects, as well as in-depth analysis of the energy transfer and transformation during the impact and the associated energy loss process.
7,739.2
2023-05-22T00:00:00.000
[ "Engineering", "Environmental Science" ]
Investigation of transport behavior in borospherene-based molecular wire for rectification applications
The transport properties of a molecular wire composed of B40 fullerenes are investigated by employing density functional theory (DFT) and the non-equilibrium Green's function (NEGF) methodology. The quantum transport is evaluated by calculating the density of states, transmission spectra at various bias voltages, molecular energy spectra, HOMO-LUMO gap, current-voltage curve, and transmission pathways. The results show that, by increasing the length of the molecular wire, the device develops a rectification ratio and prominent NDR behavior. The I-V curve shows that as the length of the wire is increased the response becomes non-linear. This non-linear behavior is most prominent when the length of the wire is increased to six fullerene cages, at which point a significant rectification ratio (R.R) and negative differential resistance (NDR) come into the picture. The excellent negative differential resistance ensures that a wire with at least six fullerene cages can be used as a tunnel diode.
Introduction
The last two decades have brought a revolution in the field of nanomaterials; researchers and engineers have been actively engaged in finding applications of nanomaterials in medicine, the environment, and electronics. Ever since the famous lecture "There is plenty of room at the bottom" by Professor R. Feynman, the field of nanotechnology has gained great impetus [1]. Due to the availability of organic and inorganic materials, the field of molecular electronics has become the center of attention for the research community. Researchers began studying carbon fullerenes and their transport behavior under equilibrium and non-equilibrium conditions. In 1985 Kroto discovered the carbon fullerene C60, which marked a breakthrough in the field of molecular electronics [2]. Prinzbach synthesized the C20 fullerene molecule in the year 2000, which began the research in this area [3]. Researchers have altered carbon fullerene properties by decorating the fullerene cage with different atoms or by endohedral or exohedral placement. Later on, various properties of C20 were scrutinized utilizing the local density approximation approach [4]. Fullerenes have attracted wide interest due to their functional properties and various applications in molecular electronics [5]. The synthesis of B40 in 2014 brought a revolutionizing change to the field of molecular electronics. Borospherene (B40) is a highly stable fullerene that comprises two hexagons and four heptagons [5] and has a D2d-symmetric arrangement resembling a Chinese red lantern [6]. Borospherene as a material has shown its ability in various device applications. He and Zeng investigated the spectral properties of B40 and deduced that it has more appropriate spectral properties than C60 [5]. Later on, Yang et al. investigated a B40 molecular junction, which exhibits excellent optoelectronic properties with a large rectification ratio and NDR characteristics [7]. Tang et al. showed that Sc@B40 is capable of absorbing hydrogen gas molecules [8]. Furthermore, research discloses that borospherene has superatomic characteristics comprising 2s, 2p, 2d, and 2f orbitals along with 1s, 1p, 1d, and 1f orbitals [9]. The work of Keyhanian et al. probed the potential of B40 for aniline adsorption [10].
Mn and Fe atoms have been doped and encapsulated to enhance the fullerene properties. B40 has emerged as an excellent candidate in molecular electronics whose properties can be modified to a large extent by adding or removing elements. Thus, Kaur et al. designed B40 molecular junctions doped with Al, Si, and S from the periodic table [11]. Shakerzadeh et al. scrutinized the electronic and non-linear optical (NLO) properties of M@B40 (M = first-row transition metals of the periodic table), and the results showed that boron-based devices can be used to design electro-optical materials; the molecular junction comprising B40 exhibits a high rectification ratio and NDR [12]. Due to its large surface area and the presence of both acidic and basic sites, B40 has been extensively utilized as a sensor for the detection of various toxic gases such as CO2, ammonia, and phosgene [13,14]. Borospherene (B40) has also shown application as a nanocatalyst for hydrogen splitting [15]. Hence, the transport properties of B40 can be altered either by decorating borospherene with transition metals or by doping or encapsulation of different atoms from the periodic table. Maniei et al. doped lithium into borospherene and formed a device that can detect nitrogen dioxide, thus giving a sensing application [16]. Cheng et al., utilizing the DFT formalism, scrutinized the guanine nucleobase of DNA with B40; from the results it was concluded that B40 can be used for sensing the guanine nucleobase due to its electrical conductivity [17]. Zhang et al. probed yttrium-doped boron fullerene for hydrogen storage capacity [18]; it was inferred that one Y atom can capture five hydrogen molecules. Kosar et al. studied B40 fullerene as an energy storage material, probing the interaction of B40 fullerene with Na and Na+ [19]. In July 2020, Kaur et al. probed the potential of B40 to recognize radium and radon in water [20]. Wang et al. in 2017 designed a molecular wire with B40 to study the interaction between B40 fullerenes [21]; it was inferred that the bandgap decreases from 3.18 to 1.10 eV as the number of B40 fullerenes increases. Although they studied the interaction properties of B40 as a molecular wire, a junction using gold electrodes had not been studied yet. In the present work, a B40-based molecular wire (Fig. 1) was designed with the intention of studying the change in electronic transport properties, both at equilibrium and non-equilibrium, as the length of the wire is increased. Linkers are not used in the junction design, since a pure B40 molecular junction has shown excellent electronic properties without linkers, and when linkers are used the conductance of the device decreases considerably [22]. A comparative study of six different junctions has been carried out. The DFT-NEGF methodology is utilized to calculate the transmission spectra, molecular energy levels, I-V curve, HLG, and transmission pathways of all the molecular junctions under consideration.
Quantum transport at equilibrium
The quantum transport was first investigated under equilibrium conditions, i.e., 0 V. The density of states (DOS) was determined for all the B40 devices to understand the transport properties thoroughly. The DOS gives knowledge about the quantum states that are actively involved in the transmission. Figure 2 shows a comparative graph for the six B40 devices, with peaks appearing on both sides of the Fermi level (E_F).
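As a hedged aside on how such a DOS plot is typically built, the sketch below broadens a handful of discrete molecular levels into D(E) with Lorentzians and reports whether the dominant peak sits below E_F (HOMO side) or above it (LUMO side); the level positions and broadening are invented for illustration, whereas in the study they come from the self-consistent DFT-NEGF calculation.

```python
import numpy as np

# Minimal sketch: a density of states from Lorentzian-broadened levels,
# D(E) = sum_n (eta/pi) / ((E - e_n)^2 + eta^2). Levels are placeholders.
levels = np.array([-0.6, -0.3, -0.05, 0.25, 0.5])   # eV, relative to E_F = 0
eta = 0.02                                           # broadening (eV), assumed

def dos(E, levels=levels, eta=eta):
    E = np.atleast_1d(E)[:, None]
    return ((eta / np.pi) / ((E - levels) ** 2 + eta ** 2)).sum(axis=1)

E = np.linspace(-1.0, 1.0, 2001)
D = dos(E)
# Compare the strongest peak just below E_F (HOMO side) with the one above
below, above = D[E < 0].max(), D[E > 0].max()
print("dominant side:", "HOMO" if below > above else "LUMO")
```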
From Fig. 2, it is evident that for the 5th and 6th devices the peaks are more prominent below the Fermi level (E_F); hence the highest occupied molecular orbital (HOMO) dominates the transmission for both devices, whereas for the other four devices the peaks are more prominent above E_F, so the lowest unoccupied molecular orbital (LUMO) dominates the transmission. The higher the value of D(E) at a peak, the greater the involvement of that particular state in transmission and hence the greater the conduction [23]. For all the devices at 0 V, the molecular energy spectrum was studied. The HOMO, LUMO, and HLG of all the molecular junctions are tabulated in Table 1. From the table it is deduced that the HOMO and LUMO orbitals shift closer to the Fermi level as the number of B40 cages between the electrodes is increased. This results in a decreasing HOMO-LUMO gap (HLG) as the number of fullerene stages increases, in contrast to the single B40 molecular junction, which has the highest energy gap. The HOMO-LUMO gap (HLG) energies of the molecular chains are calculated to be D1 (0.563 eV) > D2 (0.513 eV) > D3 (0.436 eV) > D4 (0.324 eV) > D5 (0.055 eV) > D6 (0.054 eV), where D1, D2, D3, D4, D5, and D6 represent the different molecular junctions. The transmission spectra exploit the molecular orbital energies at an applied bias to find out at which energies the electron transfer would be strongest [24]. Figure 3 shows the transmission spectra of all the devices considered at zero bias. The more peaks in the transmission spectrum, the higher the transmission. Wider transmission peaks signify stronger coupling between the electrodes and the central molecule, resulting in more transmission, while sharper peaks signify poor coupling and hence reduced transmission. In I-V characteristics, the position of the Fermi level is considered an important factor, as it helps in identifying the conductive characteristics of the elements [25]. In the transmission spectra at zero bias, the Fermi level is taken to be at zero, E_F = 0. The T(E) curves of all devices were determined in the energy range of −1 to 1 eV. Comparing all the devices, it is seen that as the length of the molecular chain increases, a higher number of transmission peaks appear near the Fermi level [21].
Quantum transport under non-equilibrium conditions
After performing the simulations and calculations, we first plotted the I-V curve. The current was determined at various bias voltages from −1 to +1 V with a step size of 0.2 V. From Fig. 4 it is seen that the current is symmetric for all the devices. The curve is completely linear in the case of the single B40 molecular junction, implying that current flows across the junction without any barrier. As the length of the wire is increased, the curve becomes non-linear. The non-linear behavior increases with length, such that for the device having six B40 cages significant NDR comes into the picture [21]. Negative differential resistance can be defined as a drop in electron mobility with an increase in electric field, which means that the device exhibits a decrease in current with an increase in applied voltage. For the I-V curves of the molecular chain devices, as shown in the figure, prominent negative differential resistance (NDR) behavior is observed. With increasing molecular chain length, the NDR behavior becomes more pronounced.
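The way a current point at a given bias follows from a transmission function, in the Landauer treatment detailed in the Methodology section, can be sketched as follows; the toy Lorentzian transmission, level position, broadening, thermal smearing, and the symmetric bias split (η = 0.5) are all assumptions of the example, not the simulated B40 transmission.

```python
import numpy as np

# Minimal sketch of the Landauer current integral with a toy single-level
# Lorentzian transmission T(E). All parameter values are placeholders.
e_charge, h = 1.602176634e-19, 6.62607015e-34     # SI constants

def transmission(E, e0=0.2, gamma=0.05):
    """Toy single-level transmission T(E) (energies in eV)."""
    return gamma**2 / ((E - e0)**2 + gamma**2)

def current(V, EF=0.0, eta=0.5, kT=0.025):
    """I(V) = (2e/h) * integral of T(E) [f(E - mu_L) - f(E - mu_R)] dE,
    with mu_L = EF + eta*V and mu_R = EF - (1 - eta)*V (energies in eV)."""
    muL, muR = EF + eta * V, EF - (1.0 - eta) * V
    E = np.linspace(-2.0, 2.0, 4001)
    f = lambda x: 1.0 / (1.0 + np.exp(x / kT))     # Fermi function
    integrand = transmission(E) * (f(E - muL) - f(E - muR))
    return (2.0 * e_charge / h) * np.trapz(integrand, E) * e_charge  # amperes

for V in (0.2, 0.4, 1.0):
    print("I(%.1f V) = %.3e A" % (V, current(V)))
```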
From Fig. 4 it can be seen that for the molecular wire comprising four B40 stages NDR is observed in the short range of −0.62 V to 0.64 V, for five stages NDR is observed in the voltage range of −0.67 V to 0.69 V, and for six stages NDR is observed in the voltage range of −0.75 V to 0.76 V. Thus, a B40 molecular wire comprising four, five, or six B40 stages can be effectively utilized in several electronic applications such as oscillators, amplifiers, and digital circuits [21]. The rectification ratio R can be defined as R = I(V)/I(−V). The calculated rectification ratio is shown in the figure. From Fig. 5 it is evident that the molecular chain device with six B40 fullerenes exhibits an enhanced rectification ratio and can be used as a tunnel diode. It is inferred that when a 0.4 V bias is applied to a molecular chain consisting of one, two, or five B40 stages, the rectification ratio is unity, whereas for the molecular wire consisting of three or six B40 stages the rectification ratio is 0.93, and in the case of four B40 stages the rectification ratio is observed to be 0.99. From Table 1 it is quite evident that D6, with six B40 cages, has a smaller HLG than the single-cage device, even though the single-cage device carries the larger current. The reason the HLG is smaller for the longer devices despite their lower currents is the weak bonding of the electrons: although the accumulation of charge carriers on all sides is more prominent, the resulting current flow is small. The free electrons are also responsible for the reduction in current; as the length of the molecular chain is increased, the number of free electrons decreases, which further reduces the current. To understand the quantum transport properties, the zero-bias transmission spectrum is not sufficient; therefore, the change of the transport properties under various bias voltages from −1 to 1 V is investigated. Figures 6 and 7 present graphical representations at the various bias voltages. From the transmission spectra at −1 V and +1 V, it is seen that for positive bias the magnitude of the peaks is increased in contrast to negative bias; this variation in magnitude is seen for the molecular wires comprising three, four, five, and six borospherene cages, and there is also a shift in the position of the peaks for these devices. For the molecular wire comprising six cages at +1 V and −1 V, the magnitude increases for positive bias in contrast to negative bias: for negative bias the first peak near the Fermi level is at −0.04 eV with a magnitude of 0.965, whereas for positive bias the first peak lies at the Fermi level with a magnitude of 1.91. In the transmission spectra at −0.8 V and +0.8 V, the positive-bias magnitude is again increased in contrast to negative bias: for the molecular wire comprising six cages, the first peak for negative bias lies close to the Fermi level at −0.08 eV with a magnitude of 1.55, whereas for positive bias the peak closest to the Fermi level has a magnitude of 2.85. Furthermore, in the transmission spectra at +0.6 V and −0.6 V, the closest peak for negative bias lies at 0.08 eV with a magnitude of 1.92, whereas for positive bias the closest peak lies at −0.04 eV with a magnitude of 2.31, which shows that the states participating in conduction change with the bias.
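The two figures of merit used here can be computed directly from sampled I-V data, as in the hedged sketch below; the current values are synthetic placeholders, not the simulated B40 results.

```python
import numpy as np

# Minimal sketch: rectification ratio R(V) = I(V)/I(-V) and NDR detection
# (dI/dV < 0) from a sampled I-V curve. The data below are placeholders.
V = np.linspace(-1.0, 1.0, 11)                    # bias points, 0.2 V step
I = np.array([-2.0, -1.6, -1.4, -0.9, -0.5, 0.0,  # toy currents (arb. units)
               0.5, 1.0, 1.5, 1.3, 1.1])          # note the drop after 0.6 V

def rectification_ratio(V, I, v):
    """R = |I(+v)/I(-v)| at a chosen bias magnitude v (linear interpolation)."""
    ip = np.interp(v, V, I)
    im = np.interp(-v, V, I)
    return abs(ip / im)

dIdV = np.gradient(I, V)                          # numerical diff. conductance
ndr = V[dIdV < 0]                                 # bias points showing NDR

print("R(0.4 V) = %.2f" % rectification_ratio(V, I, 0.4))
print("NDR observed near biases:", ndr)
```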
For the transmission spectra at +0.4 V and −0.4 V, in the case of the molecular wire comprising six fullerene cages, the first peak for positive bias lies at −0.04 eV with a magnitude of 3.219, whereas for negative bias the first peak lies at 0.04 eV with a magnitude of 2.709; thus the peak magnitude is increased for positive bias. Next, for the transmission spectra at +0.2 V and −0.2 V for the six-cage wire, the first peak for positive bias lies at 0.04 eV with a magnitude of 3.46, whereas for negative bias the first peak lies at 0.04 eV with a magnitude of 3.004. Hence, for positive bias the peak values are enhanced in contrast to negative bias. From the transmission spectra graphs it is concluded that the shape of the curve remains the same while the amplitude changes with the applied voltage. Moreover, with varying bias voltage the peaks do not move much from their positions, and as the bias is increased the states engaged in conduction do not change [25]. To get a clear picture of how the length of the wire affects the conduction of the B40 fullerene cages, we further analyzed the transmission pathways. The transmission pathway is a quantitative means of understanding the current flow between the molecule and the electrodes in terms of local contributions. In transmission pathways, the local transmission components are represented by arrows superimposed on the molecular geometry [32]. An arrow is drawn only if the local transmission between a pair of atoms is roughly 10% of the maximum local transmission. These local transmission components can follow either a single pathway or multiple pathways. The total current I(V) equals the sum of the local currents between pairs of atoms p and q, where p is on one side of a dividing surface and q on the other. Figure 8 represents the local flow of current in the form of three different lines (dark, light, and dotted). The light lines give components of the transmission that are in the direction of the net current flow; the dark lines give components in the opposite direction, thereby reducing the net current; and the dotted lines represent the scattering of electrons. The transmission pathways of the six different B40 devices give information about the current flow through the bridging molecules bound to the metallic electrodes. The current flows along the spherical surface of the cage, and as the length of the device is increased there is a reduction in the current flow. In the four-, five-, and six-cage devices, the opposite-direction current flows directly through the inside of the cage. From Fig. 8 it can be seen that as the length of the molecular chain is increased, the current is reduced.
Conclusion
In this study, B40 fullerene is used to design a single molecular junction and molecular wires comprising several fullerene cages. To enumerate the density of states, transmission spectra, molecular energy spectra, HLG, I-V curve, and transmission pathways, DFT together with the NEGF formalism has been employed. The single molecular junction has the highest value of current, and the current keeps decreasing as the length of the wire is increased. This reduction in current is seen in the transmission pathways, where the device with six fullerenes has increased backscattered components in comparison with the single B40 molecular wire, which has more forward components.
Conclusion In this study, B 40 fullerene is used to design a single molecular junction and molecular wires comprising multiple fullerene cages. To enumerate the density of states, transmission spectra, molecular energy spectra, HLG, I-V curves, and transmission pathways, DFT together with the NEGF formalism has been employed. The single molecular junction has the highest current, and the current keeps decreasing as the length of the wire is increased. This reduction in current is also seen in the transmission pathways, where the device with six fullerenes has more backscattered components than the single B 40 junction, which has more forward components. When six B 40 fullerenes are placed between the junctions, the HLG reduces to 0.054 eV. It is deduced that for the first four devices the LUMO orbitals play a dominant role in the transmission, whereas for the other two devices the HOMO orbitals dominate. The I-V curves become increasingly non-linear as the length of the wire is increased; in particular, NDR behavior appears in the I-V curve as the molecular wire is lengthened. This means that molecular junctions comprising at least six B 40 fullerene cages can be used as molecular wires, and the excellent negative differential resistance indicates that a device with at least six B 40 cages can be used as a tunnel diode.

Methodology In this paper, the necessary calculations are performed using the Virtual Nano Lab software included in the Atomistix Toolkit [26]. Virtual Nano Lab (VNL) offers various calculation methods; Density Functional Theory (DFT) was chosen for its better accuracy compared with extended Hückel theory (EHT) [27]. The Generalized Gradient Approximation (GGA) of Perdew-Burke-Ernzerhof is used to calculate the exchange-correlation energy [27]. For all the devices, a double-zeta polarized basis set was used along with a density mesh cut-off energy of 150 Ry. The fullerenes (B 40 ) were sandwiched between gold electrodes to form six different devices with (1 1 1) electrode orientation. A single covalent bond is formed between the fullerenes and the metallic leads. The minimum length of the device was 45 Å; as the length of the wire is increased, the device length grows to a maximum of 75 Å. To probe the use of B 40 as a molecular wire, a molecular junction was designed: B 40 fullerene cages are sandwiched between the gold electrodes to form a single molecular junction, and longer junctions are then formed by increasing the length of the wire up to six fullerene cages. The device encompasses three parts, namely the left electrode, the central region, and the right electrode. To calculate the current characteristics, the Landauer formalism is utilized at various applied voltages [28,29]. The Landauer-Büttiker formalism exploits the transmission probability T(E, V); in Eq. 1, T(E, V) signifies the transmission function and µ L /µ R define the bias window. In this study, the transmission was determined as in Eq. 2 [27], where Γ 1 and Γ 2 are the coupling functions of the left and right electrodes, respectively, which provide information on the contact between the molecule and the metallic leads, and G M (E, V) represents the Green's function. The electrochemical potentials of the left and right electrodes are written as Eqs. 3 and 4, where E F denotes the equilibrium Fermi level, e represents the charge of the electron, and η describes how the potential difference (V) is divided between the two metallic leads. When the molecule in the molecular junction has a minimal response on the conductance curve, η describes the potential profile of the molecule [29,30].
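The four displayed equations referenced in this section did not survive text extraction. A minimal reconstruction, assuming the standard Landauer-Büttiker/NEGF forms that match the symbol definitions given in the text (T(E,V), Γ1/Γ2, G_M, E_F, η), is sketched below; these are textbook forms, not verbatim quotations of the paper's Eqs. 1-4.

```latex
% Assumed standard forms (zero-temperature bias window), reconstructed
% from the symbol definitions in the text, not the paper's verbatim equations:
\begin{align}
  I(V)   &= \frac{2e}{h} \int_{\mu_R}^{\mu_L} T(E,V)\, \mathrm{d}E, \tag{1}\\
  T(E,V) &= \mathrm{Tr}\!\left[ \Gamma_1(E)\, G_M(E,V)\, \Gamma_2(E)\,
            G_M^{\dagger}(E,V) \right], \tag{2}\\
  \mu_L  &= E_F + \eta\, eV, \tag{3}\\
  \mu_R  &= E_F - (1 - \eta)\, eV. \tag{4}
\end{align}
```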
To explore the transfer of electrons in the molecular junctions, the transmission pathways were calculated. The Green's function generally gives the value of the current at the metal-molecule surface or within the metal [31], whereas the transmission pathways describe the flow of current through the electrode-molecule-electrode system. The total current equals the sum of the local currents over all pairs of atoms p and q, where p lies on one side of the chosen surface and q on the other. In transmission pathways, electronic bonding is termed "through-bond" or "through-space". The "through-bond" term is used where atoms are directly connected, or connected in a multiple-atom sequence [32]. The "through-space" term, on the other hand, is used where electronic coupling takes place between non-linked atoms; these are weak interactions such as hydrogen bonds. Thus, in the case of a small molecule, the representation of the local current is dominated by through-bond terms. The local transmission through a molecule offers a way to understand the chemical and geometric character of the molecular assembly and its electronic bonding. The flow of current is given by the local transmission components along either a single pathway or multiple pathways. By superimposing the device geometry on the transmission-pathway result, one can follow the flow of current. The direction of the current is shown by arrows colored red, blue, and purple: the red arrows show transmission in the direction of the net current, the blue arrows demonstrate components that reduce the current, and the purple arrows indicate backscattered electrons.
5,131
2021-10-12T00:00:00.000
[ "Materials Science", "Physics" ]
The Effects of Aerodynamic Interference on the Aerodynamic Characteristics of a Twin-Box Girder To investigate the aerodynamic characteristics of a twin-box girder in turbulent incoming flow, we carried out wind tunnel tests with two kinds of aerodynamic interference: a leading body-height grid and a leading circular cylinder. In this study, the pressure distribution and the mean and fluctuating aerodynamic forces under the two interferences are compared in detail with those of the bare deck, to investigate the relationship between the aerodynamic characteristics and the incoming flow characteristics (including the Reynolds number and the turbulence intensity). The experimental results reveal that, owing to the flow around the deck being disturbed by the body-height grid, the aerodynamic characteristics of the twin-box girder differ considerably from those of the bare girder. At the upstream girder, the vortices emerging from the body-height grid break the separation bubble, so the pressure plateaus on the upper and lower surfaces are eliminated. In addition, the turbulence generated by the body-height grid reduces the Reynolds number sensitivity of the twin-box girder. At relatively high Reynolds numbers, the fluctuating forces are mainly dominated by the turbulence intensity, and the time-averaged forces show almost no change under high turbulence intensity. At low Reynolds numbers, the time-averaged forces change significantly with the turbulence intensity. Moreover, at low Reynolds numbers, the wake of the leading cylinder effectively forces the boundary layer to transition to turbulence, which reduces the Reynolds number sensitivity of the mean aerodynamic forces and breaks the separation bubbles. Additionally, the fluctuating drag force and the fluctuating lift force are insensitive to the cylinder diameter and the spacing ratio.

Introduction In recent decades, super-long-span bridges have largely been designed with the sharp-edged twin-box girder owing to its superior aerodynamic stability; examples include the Xihoumen Bridge (main span, 1650 m), the Shanghai Yangtze River Bridge (main span, 730 m), and the Stonecutters Cable-Stayed Bridge (main span, 1018 m). It is generally acknowledged that the stability of super-long-span bridges is an important indicator of the safety of the structures. Super-long-span bridges are often built at sea, where gales frequently occur, and the aerodynamic forces generated by the wind-induced response cannot be neglected. Moreover, the vortices shed around the box girder induce vibration behavior, e.g., vortex-induced vibration (VIV). Therefore, investigation of the aerodynamic performance of bridge structures by wind tunnel experiments [1][2][3][4] or numerical simulation [5][6][7][8] is necessary in the pre-research stage of bridge construction. However, wind tunnel experiments usually examine the bridge section model under uniform inflow conditions, and the aerodynamic characteristics obtained under these conditions are used to represent those of the bridge, which cannot completely accommodate the dynamic complexity of the structures in the real engineering environment. The bridge structure is not simply affected by uniform inflow in a natural environment, and turbulent components usually exist in the incoming flow.
For instance, the incoming flow may pass through other structures before reaching the bridge deck, so the flow on the windward side of the bridge structure is in the wake of other structures and is usually unstable, with a large fluctuating velocity component. Therefore, it is necessary to study the flow characteristics of the bridge structure under different incoming flow conditions. Recently, the effect of the turbulent components of the incoming flow on the aerodynamic performance of bluff bridge sections has attracted many researchers and triggered many experimental investigations into the effects of the incoming flow characteristics on the bridge. Zhou et al. [9] investigated the effects of the vertical turbulence intensity of the incoming flow on the aerodynamic performance of a bridge. The authors claimed that an increase in the vertical turbulence intensity increases the torsional frequency, and that the critical flutter wind speed decreases when the vertical turbulence intensity is 2.84%. Hunt et al. [10] and Sarwar et al. [11] pointed out that the response of long-span bridges to incoming flow with fluctuating components is nonlinear in the structural motion, and that the turbulence components of the incoming flow (such as turbulence intensity and turbulence scale) play a vital role in controlling the aerodynamic characteristics of bridges. Scanlan and Liu [12] experimentally investigated the flutter derivatives of a bridge deck under turbulent incoming flow, and a new theory was developed that takes the turbulent components of the incoming flow into account. Haan and Kareem [13] investigated the effects of turbulence on the aero-elastic and self-excited forces of a rectangular prism via experiments. Since turbulence is highly heterogeneous and anisotropic, the self-excited pressure fluctuation, self-excited force, and flutter derivatives of a stationary prism are strongly affected by turbulence. The authors also pointed out that the streamwise position would shift with increasing turbulence intensity, whereas the pressure amplitudes would decrease with a larger turbulence scale. Meanwhile, Matsumoto et al. [14] showed experimentally that turbulence can adversely affect the flutter performance of bridges. In recent years, many theoretical models have been developed to evaluate the effects of turbulence on the aerodynamic characteristics of bridges. Chen et al. [15,16] proposed a time-domain approach to predict the aerodynamic response of bridges, and a nonlinear theoretical model was established to analyze the effects of turbulence on the self-excited forces and the flutter performance. Wu and Kareem [17] summarized the latest developments in the aerodynamics and aero-elasticity of bluff bodies in turbulent winds. The development of theoretical models for the effects of turbulence on the aerodynamic characteristics of bluff bodies is beneficial in efforts to solve the problem of the aerodynamic nonlinear response induced by turbulence. However, to the best of our knowledge, there is still no effective theoretical model to explain the relationship between the characteristics of the incoming flow and the aerodynamic performance of the twin-box girder, because the aerodynamic characteristics of the twin-box girder are highly nonlinear in turbulent flow.
The main objective of this study is to experimentally investigate the influence of different leading (upstream) aerodynamic interference methods on the aerodynamic characteristics of a twin-box girder. In the present work, the Xihoumen Suspension Bridge is adopted as the prototype twin-box girder. The turbulence intensity I is adopted to indicate the strength of the incoming flow fluctuation, and the aerodynamic characteristics of long-span bridges are generally sensitive to the Reynolds number [18]. Therefore, the combined influence of the turbulence intensity and the Reynolds number on the aerodynamic characteristics of the twin-box girder is investigated. The remainder of this paper is organized as follows: In Section 2, the experimental method and geometric details of the twin-box girder are presented. In Section 3, the experimental results for the different aerodynamic interferences are described and discussed in detail, including the surface pressure distributions and the time-averaged and fluctuating aerodynamic forces. Finally, conclusions are presented in Section 4.

Experimental Setup The experiments were conducted in a closed-loop wind tunnel (SMC-WT1, Harbin Institute of Technology, Harbin, China). With screens and honeycomb installed before the inlet of the test section, the turbulence intensity was less than 0.4% over the speed range of 4-25 m/s. In the test section, the size of the cross-section was 505 mm × 505 mm. In this study, the Reynolds number (Re), the ratio of inertial forces to viscous forces and an important dimensionless quantity in fluid mechanics, is defined as Re = ρUH/µ, where ρ is the density of the fluid, U is the incoming flow velocity, H is the central height of the twin-box girder (adopted as the characteristic length), and µ is the dynamic viscosity of the fluid. Since the present work was conducted in a conventional atmospheric boundary layer wind tunnel, the variation in the Reynolds number was achieved by adjusting the wind speed. The turbulence intensity is an effective indicator associated with the turbulent kinetic energy (TKE), and it can be written as I = u′/U, with u′ = [(u′x² + u′y² + u′z²)/3]^(1/2) and U = (Ux² + Uy² + Uz²)^(1/2), where u′x², u′y², and u′z² are the mean squares of the turbulence velocity fluctuations in the x, y, and z directions, respectively, Ux, Uy, and Uz are the mean velocities in the x, y, and z directions, respectively, u′ is the root mean square of the turbulence velocity fluctuations, and U is the mean velocity.

Section Model Geometrical Information and Surface Pressure Measurements The detailed geometry of the prototype bridge deck is shown in Figure 1. The twin-box girder has two parallel box girders with a gap of length L = 6 m, a width B = 36 m, and a deck center height H = 3.51 m. The spanwise length of the section model is L s = 480 mm, and the geometric scale ratio of the section model is 1:120. Figure 2a shows a 3D sketch of the twin-box girder used in the present study. To obtain the surface pressure distributions, 46 pressure taps with a 0.5 mm radius were installed in a slice located 230 mm from the right end of the bridge deck, as shown in Figure 2b. It should be noted that the twin-box girder was stationary in all experiments in this paper.
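As a minimal illustration of the two definitions above, the sketch below evaluates Re and I from a sampled velocity record; the fluid properties and the velocity samples are assumed placeholder values, not the measured data.

```python
import numpy as np

rho = 1.225        # air density, kg/m^3 (standard conditions, assumed)
mu = 1.81e-5       # dynamic viscosity of air, Pa*s (assumed)
H = 3.51 / 120     # model center height, m (1:120 scale of the 3.51 m deck)

# Illustrative velocity samples along x, y, z (m/s), e.g. from a probe record
rng = np.random.default_rng(0)
ux = 10.0 + 0.5 * rng.standard_normal(4000)
uy = 0.05 * rng.standard_normal(4000)
uz = 0.05 * rng.standard_normal(4000)

# Mean speed U and rms fluctuation u' as defined in the text
U = np.sqrt(np.mean(ux)**2 + np.mean(uy)**2 + np.mean(uz)**2)
u_rms = np.sqrt((np.var(ux) + np.var(uy) + np.var(uz)) / 3.0)

print(f"Re = {rho * U * H / mu:.3e}")
print(f"I  = {100 * u_rms / U:.2f} %")
```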
The distribution of these taps around the section circumference is illustrated in Figure 3. The pressure taps are connected to three pressure scanners (DSA3217, 16 channels per scanner) with a measuring range of ±2.5 kPa through connecting tubes (independent polyvinyl chloride (PVC) tubes, internal diameter of 1 mm) with a full length of 500 mm. This system is adopted to measure and record the instantaneous pressure distributions. The Scanivalve system monitors the surface pressure data at a sampling rate of 312.5 Hz, and the length of one sampling period is 32 s. Based on the correction algorithm [19], the distortion effects introduced by the connecting tubes, such as the amplification factor and phase shift [20], are quite small and negligible in the surface pressure measurements of this study.
The time-averaged pressure coefficient C p is obtained by non-dimensionalizing the time-averaged pressure p m : C p = (p m − p ∞ )/(0.5 ρ air U ∞ ²), where p m is the average of the instantaneous pressure p i over the sample period T, p ∞ is the pressure of the free stream, ρ air is the density of air, and U ∞ is the flow velocity of the free stream. In the present study, the Reynolds number is high (Re > 5 × 10 3 ); hence, the boundary layer around the solid structure becomes turbulent and the pressure drag force dominates the skin drag force [21]. Thus, the pressure drag force is the dominant component of the total drag, and the skin friction drag can be neglected. Once the instantaneous pressure distributions are obtained, the corresponding aerodynamic forces on the bridge girder are calculated by standard integration over the taps: F D = Σ P xi , F L = Σ P zi , and F m = Σ P i ζ i , where P i = p i ds i is the pressure force at tap i, P xi and P zi are its components along the x and z directions, respectively, ds i denotes the element area, and ζ i is the moment arm. It should be noted that, when the gap exists, the pressure forces at pressure taps 9-11 and 32-34 should be considered. The drag force coefficient C D , the lift force coefficient C L , and the moment force coefficient C m are defined as C D = F D /(0.5 ρ air U ∞ ² B), C L = F L /(0.5 ρ air U ∞ ² B), and C m = F m /(0.5 ρ air U ∞ ² B²), where F D , F L , and F m are the drag, lift, and moment forces, respectively, and B is the width of the twin-box girder. Then, the six effective indicators representing the aerodynamic characteristics of the bridge girder can be obtained, as illustrated in the sketch below: the mean drag force coefficient (C D mean ), the fluctuating drag force coefficient (C D rms ), the mean lift force coefficient (C L mean ), the fluctuating lift force coefficient (C L rms ), the mean moment force coefficient (C m mean ), and the fluctuating moment force coefficient (C m rms ). It should be noted that the root mean square of each quantity is adopted to represent its fluctuation.
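As a concrete illustration of the tap-integration procedure just described, the sketch below assembles C D, C L, and C m time series from instantaneous tap pressures and splits them into mean and rms (fluctuating) parts. All geometry and pressure values are hypothetical placeholders (the actual 46-tap layout, normals, and moment arms come from Figure 3), and the B-based normalization follows the reconstruction above.

```python
import numpy as np

n_taps, n_samples = 46, 10000
rng = np.random.default_rng(1)
p = 20.0 + 5.0 * rng.standard_normal((n_samples, n_taps))  # Pa, placeholder

# Per-tap panel geometry (placeholders, not the real layout):
nx = rng.uniform(-1, 1, n_taps)            # outward normal, x component
nz = rng.uniform(-1, 1, n_taps)            # outward normal, z component
ds = np.full(n_taps, 1e-4)                 # element areas, m^2
zeta = rng.uniform(-0.15, 0.15, n_taps)    # moment arms about deck center, m

q_inf = 0.5 * 1.225 * 10.0**2              # dynamic pressure, assumed U = 10 m/s
B = 36.0 / 120                             # model width, m (1:120 scale)

FD = (p * (-nx) * ds).sum(axis=1)          # streamwise pressure force
FL = (p * (-nz) * ds).sum(axis=1)          # vertical pressure force
FM = (p * ds * zeta).sum(axis=1)           # moment about the deck center

CD, CL, Cm = FD / (q_inf * B), FL / (q_inf * B), FM / (q_inf * B**2)
for name, c in (("CD", CD), ("CL", CL), ("Cm", Cm)):
    print(f"{name}: mean = {c.mean():+.3f}, rms = {c.std():.3f}")
```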
Leading Body-Height Grid Aerodynamic Interference For practical bridge structures, other structures may exist upstream, for example when two bridges are built close together. It is therefore necessary to investigate the effects of the incompletely developed turbulence shed from an upstream body on the aerodynamic characteristics of the bridge. Based on this consideration, and for generality, the turbulence generated by a leading body-height grid is adopted to simulate the wake of the upstream body. With different body-height grids installed at the same height as the bridge deck section at the flow inlet, the turbulence intensity of the incoming flow over the model height range is controlled by the geometry of the body-height grid. The flow on the windward side of the bridge deck is thus changed from laminar to turbulent, with vortices of different sizes generated by the interference of the body-height grid. Figure 4 shows a sketch of the locations of the section model in the test section and of the body-height grid at the flow inlet, while Figure 5 presents the detailed geometry of the fourteen body-height grids (body-height grids I-XIV).

Leading Circular Cylinder Aerodynamic Interference The handrail is an important accessory of the bridge structure, and it is usually composed of cylinders, which disturb the incoming flow and change the boundary layer of the box girder. Therefore, with smooth cylinders of different diameters (3 mm, 4 mm, 5 mm, and 6 mm) installed in front of the essential flow areas of the model (upper edge, middle upper region, leading edge, middle lower region, and lower edge), the effects of the resulting flow characteristics in the essential flow regions on the surface pressure distribution and aerodynamic response of the bridge deck were investigated. Figure 6 shows the locations of the cylinders in front of the bridge deck at different spacing ratios. The spacing ratio ε is defined as the ratio of the distance S between the trailing edge of the cylinder and the frontal surface of the twin-box girder to the diameter of the cylinder D, ε = S/D. The spacing ratios used in this study were 2, 3, 4, and 5.

Turbulence Intensity Measurement The turbulence intensity of the incoming flow was measured with a Cobra Probe system (comprising a Series 100 Cobra Probe, cabling, and TFI Device Control software). The Cobra Probe is a multi-hole pressure probe that can reconstruct the velocity components along the x, y, and z directions from pressure data. In the present work, the sampling frequency was set to 2 kHz. The Cobra Probe was placed at positions 1, 2, and 3 (see Figure 7) to measure the spread of the turbulent flow. In the present study, the turbulent properties at position 3 were adopted to characterize the incoming flow.
Undisturbed Surface Pressure Distribution and Aerodynamic Forces To analyze the effects of the aerodynamic interference methods described above, the aerodynamic characteristics of the bare twin-box girder were first investigated experimentally, and the important aerodynamic parameters (including the surface pressure distribution and the aerodynamic forces) were obtained. Figure 8 gives the distribution of the mean pressure coefficient on the surface of the twin-box girder at different Reynolds numbers. At Re = 6.13 × 10 3 , the surface pressure profile indicates that the pressure on the upstream girder peaks near the windward corner A, because the incoming flow passes over the windward slope on the upper surface and is subject to a favorable pressure gradient, which increases the flow speed and promotes a continuous increase in suction. After the windward corner, the flow velocity gradually decreases, and the negative pressure also decreases. It should be noted that a short pressure plateau forms after the windward corner on the upper surface of the upstream girder at low Reynolds numbers (e.g., Re = 6.13 × 10 3 ), but this plateau disappears at high Reynolds numbers. The pressure plateau is formed by the separation bubble: the fluid inside and outside the separation bubble is relatively stable, so the pressure in the bubble is almost uniform. Moreover, the endpoint of the pressure plateau can be regarded as the reattachment point. Figure 8 also shows that the amplitude of the surface pressure increases with the Reynolds number, and that the pressure on the lower surface of the downstream girder tends to decrease first and then increase at all Reynolds numbers. Figure 9 shows the mean and fluctuating aerodynamic force coefficients of the twin-box girder at various Reynolds numbers. The force coefficients change significantly over the Reynolds number range adopted in this study. With increasing Reynolds number, the time-averaged drag force coefficient and the fluctuating drag force coefficient both decrease, with a rate of decrease that first increases and then decreases; the maximum rate of decrease occurs at Re = 9.21 × 10 3 . The time-averaged lift force coefficient first increases significantly and then flattens with increasing Reynolds number. As Re rises, the time-averaged moment force coefficient first increases rapidly and then decreases. The fluctuating lift force coefficient and the fluctuating moment force coefficient show a tendency similar to that of the fluctuating drag coefficient with increasing Re.
Modulation of Surface Pressure Distribution by Leading Body-Height Grids First, the surface pressure distributions under the aerodynamic interference of the leading body-height grid were compared with those of the bare twin-box girder, as shown in Figure 10, and the effects of the turbulence generated by the different body-height grids on the surface pressure distribution were evaluated and analyzed. Figure 10 clearly shows that the turbulence intensity, an important characteristic of the incoming flow, has a significant influence on the time-averaged surface pressure distribution.
Compared with the surface pressure distribution of the bare deck, the pressure distribution with body-height grid interference changed considerably at low Reynolds numbers (Re < 1.0 × 10 4 ), because the boundary layer transitioned to a turbulent boundary layer. At low turbulence intensity (I = 3.57-5.15%), the amplitude of the pressure peak was enhanced. It is also noteworthy that, at the upstream girder, the pressure plateaus appearing in the undisturbed surface pressure distribution on the upper and lower surfaces were eliminated, because the vortices generated by the body-height grid broke the separation bubbles, and the laminar boundary layer was transformed into a turbulent one, which cannot maintain the stable pressure needed to form separation bubbles. At moderate turbulence intensity (I = 16.9-18.3%), the negative pressures are enhanced for x/0.5B > −0.7, indicating that the higher turbulence intensity carries more energy and strengthens the flow velocity around the box girder. When the turbulence intensity is increased further (I = 27.9-31.4%), the negative pressures are also increased. Figure 11 shows the time-averaged surface pressure distributions at different Reynolds numbers for closely matched turbulence intensities.
It is clearly shown that the turbulence intensity can effectively eliminate the Reynolds number sensitivity of the pressure distribution, because the properties of the turbulent boundary layer are dominated by the turbulence intensity of the incoming flow.

Modulation of Surface Pressure Distribution by Leading Circular Cylinders Second, the time-averaged pressure distributions with cylinder interference were compared with the undisturbed pressure distribution. Figure 12 shows the mean surface pressure distributions with circular cylinder interference (D = 3 mm) at various spacing ratios. As shown in Figure 12, the turbulence generated by the wake of the circular cylinder significantly influences the time-averaged surface pressure distribution at the upstream girder, and the pressure plateaus formed by the separation bubbles are broken by the cylinder wake. Moreover, Figure 12 also shows that the surface pressures on the upper and lower surfaces of the downstream girder exhibit opposite distribution characteristics.
The upper-surface pressure distribution shows that the negative pressure on the upper surface is at first lower than the undisturbed pressure and then becomes higher than the original pressure at the same Reynolds number level. This may be because the flow velocity on the upper surface decreases after passing the cylinder, while continuous entrainment of the surrounding unaffected fluid subsequently enhances the flow velocity. The absolute value of the minimum pressure on the lower surface is always smaller than in the undisturbed case, which demonstrates that the cylinder located in front of the lower surface limits the velocity over the whole lower surface. The time-averaged surface pressure distributions at different Reynolds numbers are shown in Figure 13. As shown in Figures 12 and 13, the pressure distribution on the twin-box girder is insensitive to the spacing ratio and the diameter of the cylinder. Meanwhile, it should be noted that the surface pressure distribution with circular cylinder interference on the downstream girder presents a slight Reynolds number sensitivity, indicating that the vortices generated by the flow passing the leading circular cylinders fully enforce turbulence in the boundary layer around the twin-box girder.

Modulation of Aerodynamic Forces by Leading Body-Height Grids Figure 14 shows the distribution of the mean and fluctuating aerodynamic force coefficients in the phase plane of Reynolds number and turbulence intensity; the aerodynamic forces show strong sensitivity to both parameters. As shown in Figure 14a, the distribution of the time-averaged drag force coefficient shows different characteristics at different turbulence intensities. At low turbulence intensity (I ≤ 5%), C D mean is sensitive to the Reynolds number when Re ≤ 1.0 × 10 4 . Moreover, the hypotenuse of the contour at the corner indicates that the turbulence intensity can reduce the Reynolds number effects on C D mean . When the Reynolds number is increased further, C D mean shows only slight Reynolds number sensitivity, consistent with the bare deck (see Figure 9a). At moderate turbulence intensity (5% ≤ I ≤ 20%), C D mean is enhanced and presents only a slight Reynolds number dependence; it should be noted that a lock-up region of C D mean exists. At high turbulence intensity (I > 20%), C D mean is further increased and is associated only with the turbulence intensity. The time-averaged lift force coefficient shows strong sensitivity to the turbulence intensity when Re ≤ 1.0 × 10 4 (see Figure 14b), and the lift force is enhanced with increasing turbulence intensity, which implies that the strength of the turbulent flow can effectively promote the lift force.
Meanwhile, at Re > 1.0 × 10 4 , the lift force is insensitive to both the Reynolds number and the turbulence intensity, because the boundary layer transitions to full turbulence. As for the fluctuating forces, C D rms and C L rms show different distribution characteristics. At low turbulence intensity (I < 2.5%), C D rms shows a Reynolds number dependence at low Re, i.e., Re < 1.0 × 10 4 . With increasing turbulence intensity, C D rms is enhanced by the strong fluctuating component of the incoming flow, and its Reynolds number sensitivity is eliminated (see Figure 14d). However, C L rms depends on both the Reynolds number and the turbulence intensity, because an increase in either can enhance the vertical fluctuating force. As shown in Figure 14c,f, when Re ≤ 1.0 × 10 4 and the turbulence intensity I ≤ 12.5%, the mean moment force coefficient increases significantly with the turbulence intensity, which may be due to the increase in the vertical and horizontal fluctuating velocities of the flow field caused by the turbulence. In the other regions, however, the mean moment force coefficient does not show a strong turbulence intensity sensitivity, because the turbulence mainly affects and increases the horizontal and vertical velocities but has little impact on the moment force. The fluctuating moment force coefficient and the fluctuating lift force coefficient exhibit similar trends, indicating that the fluctuating lift force is the dominant component of the fluctuating moment force.
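The following sketch illustrates, on synthetic data, how a Figure-14-style phase-plane map over (Re, I) can be assembled from a test matrix; the coefficient surface is a placeholder trend (drag rising with I and losing Re sensitivity at high I), not the measured data.

```python
import numpy as np
import matplotlib.pyplot as plt

re_grid = np.linspace(5e3, 2.5e4, 40)     # Reynolds number range (assumed)
ti_grid = np.linspace(0.5, 30.0, 40)      # turbulence intensity I, %
RE, TI = np.meshgrid(re_grid, ti_grid)

# Placeholder surface: CD_mean grows with I, and its Re dependence
# (the exponential factor) dies out as I increases.
cd_mean = 1.0 + 0.02 * TI - 0.15 * np.exp(-TI / 5.0) * (RE / 1e4 - 1.0)

fig, ax = plt.subplots()
cf = ax.contourf(RE, TI, cd_mean, levels=20)
fig.colorbar(cf, ax=ax, label="CD_mean (placeholder)")
ax.set_xlabel("Reynolds number Re")
ax.set_ylabel("turbulence intensity I (%)")
fig.savefig("cd_mean_phase_plane.png", dpi=150)
```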
Modulation of Aerodynamic Forces by Leading Circular Cylinders Figures 15-20 show the mean and fluctuating aerodynamic force coefficients of the twin-box girder under the aerodynamic interference of leading circular cylinders at different spacing ratios. Figure 15 shows that the time-averaged drag force coefficient of the twin-box girder with cylinder interference decreased significantly compared with the original force coefficient without interference, because the shedding vortices in the wake of the circular cylinder destroy the original laminar boundary layers; hence, the laminar boundary layers transition to turbulence at the windward corner, and the turbulent boundary layers are maintained over the whole girder section. The boundary layer is thus composed of many small vortices, which effectively reduce the contact area between the fluid boundary layer and the twin-box girder, leading to a decrease in the friction between the turbulent boundary layer and the model. Meanwhile, when the vortices contact the girder, their velocity direction is generally opposite to the incoming flow. In this way, the vortices generate a force opposite to the streamwise direction, and the force generated by the interaction between the vortices and the girder reduces the total drag force on the bridge deck. Furthermore, when the diameter of the cylinder is 3 mm, the time-averaged drag force coefficient clearly exhibits a periodic trend with the location of the cylinder relative to the model, as shown in Figure 15, which indicates that the vortex scale is close to the diameter. In addition, the time-averaged drag force with cylinder interference at low Re exhibits characteristics similar to the undisturbed drag force at high Re.
Figure 16 shows that the fluctuating drag force coefficient of the twin-box girder with cylinder interference is much smaller than that of the girder without interference at Re ≤ 9.0 × 10 3 . Moreover, the fluctuating drag force is insensitive to the Reynolds number, and the disturbed fluctuating drag force at low Re is close to the undisturbed fluctuating force at high Re. Furthermore, the fluctuating drag force is independent of the spacing ratio ε. Figure 17 shows that the time-averaged lift force coefficient of the girder with cylinder interference also decreased sharply compared with that of the bare twin-box girder. It should be noted that, with increasing distance between the cylinder and the twin-box girder, C L mean shows less sensitivity to the Reynolds number, because the turbulence in the wake of the cylinder is then fully developed, which causes the boundary layer on the body surface to transition to turbulence. When the cylinder is close to the twin-box girder, the wake of the cylinder cannot fully develop into turbulence able to change the boundary layer, which explains why the Reynolds number effect still exists. In conclusion, when the spacing ratio ε > 2, the turbulence generated by the wake of the cylinder effectively eliminates the Reynolds number effects on the time-averaged lift force. Figure 18 shows that the fluctuating lift force coefficient of the twin-box girder with cylinder interference is independent of the Reynolds number and the spacing ratio ε. Moreover, the fluctuating lift force is much smaller than the undisturbed force at Re ≤ 9.0 × 10 3 . This may be because the vertical fluctuation component in the cylinder wake is very small, and the turbulence generated by the cylinder suppresses the vortex shedding of the twin-box girder, reducing the fluctuation of the lift force. Compared with the undisturbed time-averaged moment force, Figure 19 shows that the mean moment force coefficient with cylinder interference is large, and depends on the spacing ratio and the diameter. With decreasing diameter, the time-averaged moment force coefficient C m mean is slightly enhanced. Meanwhile, with increasing spacing ratio, the absolute value of C m mean decreases gradually. When the spacing ratio ε = 5, C m mean is smaller than the undisturbed mean moment force. This indicates that the fully developed turbulent wake generated at a large spacing ratio can effectively suppress the time-averaged moment force.
Figure 20 shows the fluctuating moment force coefficient C m rms of the twin-box girder with cylinder interference. At Re ≤ 9.0 × 10 3 , the disturbed C m rms is greater than the undisturbed C m rms . When the spacing ratio ε ≤ 3, C m rms is insensitive to the diameter, and C m rms decreases slightly with increasing Reynolds number.

Conclusions In the present work, the effects of two leading (upstream) aerodynamic interference measures on the pressure distribution and aerodynamic forces of a twin-box girder were investigated. We used a leading body-height grid and a leading circular cylinder to increase the turbulence intensity of the incoming flow. The flow gains more energy before reaching the separation point, and the entrainment rate of the flow is enhanced, which not only affects the separation bubbles and reattachment points but also changes the characteristics of the boundary layer. The conclusions are summarized as follows: (1) The leading body-height grid generates turbulent incoming flow, which effectively breaks the separation bubbles and the flow reattachment, and the laminar boundary layer of the undisturbed case at low Re is forced to transition to turbulent flow. Moreover, the characteristics of the surface pressure distribution with body-height grid interference are similar to those of the bare deck at high Re; (2) The Reynolds number sensitivity of the time-averaged drag force decreases with increasing turbulence intensity, and C D mean is dominated by the turbulence intensity. While C L mean and C m mean depend on both Re and the turbulence intensity at low Re, at high Re they are insensitive to both. The fluctuating drag force C D rms depends on the turbulence intensity and is insensitive to the Reynolds number, while C L rms and C m rms are related to both the turbulence intensity and the Reynolds number. In addition, the characteristics of C L rms and C m rms are similar, indicating that the fluctuating lift force is the dominant component of the fluctuating moment force; (3) The coherent turbulence generated by the leading circular cylinders effectively changes the boundary layer of the twin-box girder. The Reynolds number sensitivity of the surface pressure distribution is reduced by the interference of the cylinders, and the distribution is insensitive to the diameter and the spacing ratio. Moreover, the separation bubbles are also broken by the wake of the cylinder; (4) The time-averaged drag force C D mean is significantly reduced by the interference of the leading cylinder, and its Reynolds number sensitivity is diminished. Moreover, the time-averaged lift force C L mean with cylinder interference is drastically decreased and is also insensitive to the Reynolds number. With increasing spacing ratio, the time-averaged moment force C m mean is weakened, and its Reynolds number sensitivity is reduced. In addition, the fluctuating drag force C D rms and the fluctuating lift force C L rms are both insensitive to Re, the spacing ratio, and the diameter, whereas the fluctuating moment force is closely related to these three parameters.
11,965.8
2021-10-13T00:00:00.000
[ "Physics", "Engineering" ]
Diffractive control of 3D multifilamentation in fused silica with micrometric resolution We show that a simple diffractive phase element (DPE) can be used to manipulate at will the positions and energy of multiple filaments generated in fused silica under femtosecond pulsed illumination. The method allows obtaining three-dimensional distributions of controlled filaments whose separations can be on the order of a few micrometers. With such small distances we are able to study the mutual coherence among filaments from the resulting interference pattern, without needing a two-arm interferometer. The encoding of the DPE into a phase-only spatial light modulator (SLM) provides an extra degree of freedom to the optical set-up, giving more versatility for implementing different DPEs in real time. Our proposal might be particularly suited for applications in which an accurate manipulation of multiple filaments is required. 2016 Optical Society of America OCIS codes: (000.0000) General; (000.2700) General science. References and links 1. W. R. Zipfel, R. M. Williams, and W. W. Webb, “Nonlinear magic: multiphoton microscopy in the biosciences,” Nat. Biotechnol. 21(11), 1369-1377 (2003). 2. R. Meesat, H. Belmouaddine, J. F. Allard, C. Tanguay-Renaud, R. Lemay, T. Brastaviceanu, L. Tremblay, B. Paquette, J. R. Wagner, J. P. Jay-Gerin, M. Lepage, M. A. Huels, and D. Houde, “Cancer radiotherapy based on femtosecond IR laser-beam filamentation yielding ultra-high dose rates and zero entrance dose,” Proc. Nat. Acad. Sci. 109, E2508-E2513 (2012). 3. R. R. Gattass and E. Mazur, “Femtosecond laser micromachining in transparent materials,” Nat. Photonics 2(4), 219-225 (2008). 4. V. I. Klimov and D. W. McBranch, “Femtosecond high-sensitivity, chirp-free transient absorption spectroscopy using kilohertz lasers,” Opt. Lett. 23(4), 277-279 (1998). 5. G. Cerullo and S. De Silvestri, “Ultrafast optical parametric amplifiers,” Rev. Sci. Instrum. 74(1), 1-18 (2003). 6. J. J. Macklin, J. D. Kmetec, and C. L. Gordon III, “High-order harmonic generation using intense femtosecond pulses,” Phys. Rev. Lett. 70(6), 766-769 (1993). 7. A. M. Weiner, “Femtosecond pulse shaping using spatial light modulators,” Rev. Sci. Instrum. 71(5), 1929-1960 (2000). 8. S. Hasegawa, Y. Hayasaki, and N. Nishida, “Holographic femtosecond laser processing with multiplexed phase Fresnel lenses,” Opt. Lett. 31(11), 1705-1707 (2006). 9. L. Martínez-León, P. Clemente, E. Tajahuerce, G. Mínguez-Vega, O. Mendoza-Yero, M. Fernández-Alonso, J. Lancis, V. Climent, and P. Andrés, “Spatial-chirp compensation in dynamical holograms reconstructed with ultrafast lasers,” Appl. Phys. Lett. 94(1), 011104 (2009). 10. S. H. Shim, D. B. Strasfeld, E. C. Fulmer, and M. T. Zanni, “Femtosecond pulse shaping directly in the mid-IR using acousto-optic modulation,” Opt. Lett. 31(6), 838-840 (2006). 11. F. Verluise, V. Laude, J. P. Huignard, P. Tournois, and A. Migus, “Arbitrary dispersion control of ultrashort optical pulses with acoustic waves,” JOSA B 17(1), 138-145 (2000). 12. C. Y. Chang, L. C. Cheng, H. W. Su, Y. Y. Hu, K. C. Cho, W. C. Yen, C. Xu, C. Y. Dong, and S. J. Chen, “Wavefront sensorless adaptive optics temporal focusing-based multiphoton microscopy,” Biomed. Opt. Express 5(6), 1768-1777 (2014). 13. B. Mills, M. Feinaeugle, C. L. Sones, N. Rizvi, and R. W. Eason, “Sub-micron-scale femtosecond laser ablation using a digital micromirror device,” J. Micromech. Microeng. 23(3), 035005 (2013). 14. J. N. Yih, Y. Y. Hu, Y. D. Sie, L. C. Cheng, C. H.
Lien, and S. J. Chen, “Temporal focusing-based multiphoton excitation microscopy via digital micromirror device,” Opt. Lett. 39(11), 3134-3137 (2014). 15. O. Mendoza-Yero, V. Loriot, J. Pérez-Vizcaíno, G. Mínguez-Vega, J. Lancis, R. De Nalda, and L. Bañares, “Programmable quasi-direct space-to-time pulse shaper with active wavefront correction,” Opt. Lett. 37(24), 5067-5069 (2012). 16. J. P. Vizcaíno, O. Mendoza-Yero, R. Borrego-Varillas, G. Mínguez-Vega, J. R. Vázquez de Aldana, and J. Lancis, “On-axis non-linear effects with programmable Dammann lenses under femtosecond illumination,” Opt. Lett. 38(10), 1621-1623 (2013). 17. G. Mínguez-Vega, C. Romero, O. Mendoza-Yero, J. R. Vázquez de Aldana, R. Borrego-Varillas, C. Méndez, J. Lancis, P. Andrés, V. Climent, and L. Roso, “Wavelength tuning of femtosecond pulses generated in nonlinear crystals by using diffractive lenses,” Opt. Lett. 35(21), 3694-3696 (2010). 18. R. Borrego-Varillas, J. Perez-Vizcaino, O. Mendoza-Yero, G. Minguez-Vega, J. R. Vázquez de Aldana, and J. Lancis, “Controlled multibeam supercontinuum generation with a spatial light modulator,” IEEE Photon. Technol. Lett. 26(16), 1661-1664 (2014). 19. A. Couairon and A. Mysyrowicz, “Femtosecond filamentation in transparent media,” Phys. Rep. 441(2), 47-189 (2007). 20. J. M. Dudley and S. Coen, “Coherence properties of supercontinuum spectra generated in photonic crystal and tapered optical fibers,” Opt. Lett. 27(13), 1180-1182 (2002). 21. I. Zeylikovich and R. R. Alfano, “Coherence properties of the supercontinuum source,” Appl. Phys. B 77, 265-268 (2003). 22. C. Corsi, A. Tortora, and M. Bellini, “Mutual coherence of supercontinuum pulses collinearly generated in bulk media,” Appl. Phys. B 77, 285-290 (2003). 23. C. Corsi, A. Tortora, and M. Bellini, “Generation of a variable linear array of phase-coherent supercontinuum sources,” Appl. Phys. B 78, 299-304 (2004). 24. R. Borrego-Varillas, J. Pérez-Vizcaíno, O. Mendoza-Yero, J. R. Vázquez de Aldana, G. Mínguez-Vega, and J. Lancis, “Dynamic control of interference effects between optical filaments through programmable optical phase modulation,” J. Display Technol., DOI 10.1109/JDT.2015.2511305 (to be published). 25. C. Romero, R. Borrego-Varillas, A. Camino, G. Mínguez-Vega, O. Mendoza-Yero, J. Hernández-Toro, and J. R. Vázquez de Aldana, “Diffractive optics for spectral control of the supercontinuum generated in sapphire with femtosecond pulses,” Opt. Express 19(6), 4977-4984 (2011). 26. R. Borrego-Varillas, C. Romero, O. Mendoza-Yero, G. Mínguez-Vega, I. Gallardo, and J. R. Vázquez de Aldana, “Femtosecond filamentation in sapphire with diffractive lenses,” J. Opt. Soc. Am. B 30(8), 2059-2065 (2013). 27. A. Camino, Z. Hao, X. Liu, and J. Lin, “Control of laser filamentation in fused silica by a periodic microlens array,” Opt. Express 21, 7908 (2013). 28. O. Mendoza-Yero, G. Mínguez-Vega, and J. Lancis, “Encoding complex fields by using a phase-only optical element,” Opt. Lett. 39(7), 1740-1743 (2014). 29. J. A. Davis and D. M. Cottrell, “Random mask encoding of multiplexed phase-only and binary phase-only filters,” Opt. Lett. 19(7), 496-498 (1994). 30. C. Maurer, S. Khan, S. Fassl, S. Bernet, and M. Ritsch-Marte, “Depth of field multiplexing in microscopy,” Opt. Express 18(3), 3023-3034 (2010). 31. C. Iemmi, J. Campos, J. C. Escalera, O. Lopez-Coronado, R. Gimeno, and M. J. Yzuel, “Depth of focus increase by multiplexing programmable diffractive lenses,” Opt. Express 14(22), 10207-10219 (2006). 32. R. D. Leonardo, F.
Ianni, and G. Ruocco, “Computer generation of optimal holograms for optical trap arrays,” Opt. Express 15(4), 1913-1922 (2007). 33. D. Milam, “Review and assessment of measured values of the nonlinear refractive-index coefficient of fused silica,” Appl. Opt. 37, 546-550 (1998). 34. N. T. Nguyen, A. Saliminia, W. Liu, S. L. Chin, and R. Vallée, “Optical breakdown versus filamentation in fused silica by use of femtosecond infrared laser pulses,” Opt. Lett. 28, 1591-1593 (2003).

Introduction Extremely short temporal light events can be regarded as excellent tools for accessing nonlinear optical effects, e.g., self-phase modulation, self-focusing, or plasma generation, due to the combination of spatially focused and femtosecond-time-scale pulsed light. It is well-known that suitable control over the spatial and temporal properties of ultrashort pulses allows manipulating non-linear phenomena for multiple tasks, including two-photon microscopy [1], cancer therapy [2], micro-processing of materials [3], or non-linear spectroscopy [4]. Other established applications, e.g., providing the initial seed in ultrafast optical parametric amplifiers [5] or synthesizing high harmonics [6], have also been reported. Here, it is apparent that setting specific parameters of ultrashort pulses can be hard to achieve due to several unwanted effects such as temporal dispersion or spatial phase aberrations. In the temporal and/or spatial domains, pulse parameters can be changed in real time by using optical devices like liquid crystal SLMs [7][8][9], acousto-optic crystals [10,11], deformable mirrors [12], or digital micromirror devices (DMDs) [13,14]. On the other hand, the use of diffractive optics has demonstrated its capability not only to provide compact temporal pulse shapers [15], but also to manipulate nonlinear optical phenomena in the axial [16,17] as well as in the transversal direction of the pulse propagation [18]. In this context, we will focus on filamentation [19], which is basically a non-linear propagation phenomenon that extends over a distance longer than the Rayleigh range associated with the pulse. It originates from the balance of two main processes: pulse focusing by the Kerr effect and defocusing caused by the plasma. Regarding this topic, the coherent nature of the filaments [20] has been investigated by means of several optical setups/devices such as a diffraction-grating-based interferometer [21], collinear geometries with time-delayed pulses [22], variable linear arrays of supercontinuum sources [23], or programmable liquid crystal SLMs [24]. In addition, the filamentation process in fused silica generated under femtosecond illumination has been studied by means of diffractive lenses [24][25][26]. In particular, conventional arrays of diffractive lenses have been implemented as a tool to generate multiple and predefined filaments in fused silica [24,27]. At this point, we want to note that the utilization of a conventional array of diffractive lenses for multifilamentation has some drawbacks. The first one is related to the impossibility of bringing filaments closer than twice the physical radius of a lens without using additional optics (assuming arrays of equal lenses). The second drawback comes from the apparent reduction of the numerical aperture of the lenses with respect to the numerical aperture of the optical system because of the array implementation itself.
Furthermore, the different spatial locations of lenses within an array cause the corresponding focal energies to strongly depend on the initial irradiance distribution of the light source onto the plane of the lens array. In this contribution we experimentally demonstrate a diffractive-based method to generate arbitrary three-dimensional distributions of filaments in fused silica with accurate control over the spatial locations of the filaments. The filaments are originated after focusing femtosecond laser pulses into a fused silica sample by using a single DPE. The encoding of the DPE into a phase-only liquid crystal SLM gives additional degrees of freedom to the proposed method, allowing for a dynamic and more versatile operation. In addition, by modulating the phase functions of the lenses we can vary their diffraction efficiency, thus modifying the amount of energy employed to produce each filament. Furthermore, due to the characteristics of the encoding method, it is easy to generate filaments with lateral separations on the order of a few micrometers. Hence, it is relatively simple to study the mutual coherence among filaments without using two-arm interferometers or any additional optical components. The content of the manuscript is organized as follows. In section 2, details of the encoding method are given. In section 3, with the help of a femtosecond laser source and a commercially available SLM, several multifilamentation processes by means of DPEs are experimentally demonstrated. In section 4, we show the usefulness of DPEs to study the mutual coherence among filaments contained within two different spatial distributions. Finally, in section 5 the main conclusions of our work are presented.

Basics of the encoding method In this manuscript a spatially multiplexed procedure is used to encode a set of diffractive lenses into a single diffractive phase element DPE(x, y). For this purpose, the phase information of each lens is spatially sampled over the transverse coordinates (x, y) of the SLM plane by means of binary masks M_n(x, y). Each lens n carries the quadratic phase of an off-axis kinoform lens of focal length f_n centered at (x_n, y_n), φ_n(x, y) = −[π/(λ f_n)][(x − x_n)² + (y − y_n)²] (mod 2π). From the above expression, each individual filament located at arbitrary transverse coordinates (x_0, y_0) within the fused silica crystal will be originated by the focused light associated with an off-axis kinoform lens also centered at (x_0, y_0). In mathematical terms, the resulting DPE(x, y) can be expressed as DPE(x, y) = Σ_n M_n(x, y) exp[iφ_n(x, y)]. In order to get further insight into the implementation of this encoding method, we include here a dummy example, see Fig. 1. In the first column of this figure the two-dimensional binary masks M_n(x, y) employed for the sampling process are given. The square gray or black zones within these masks have values one or zero, respectively. In practice, each zone will correspond to one pixel of the SLM. We found that the applicability of this encoding method depends on the accuracy of the sampling process, which is directly linked to the pixel width. Note that a good sampling process should allow reconstructing the original function by simple interpolation. So, the number of lenses that can be encoded with this method varies with the available pixel width. For a given pupil extension, the lower the pixel width, the greater the energy at the focal point. In this context, it can be demonstrated that the diffraction efficiency decreases quadratically with an increasing number of superposed Fresnel lenses. In the literature, similar strategies for spatially multiplexed Fresnel lenses have been reported [28][29][30][31][32].
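A rough numerical sketch of this uniform-sampling multiplexing may help fix ideas (Python; the SLM size, pixel pitch, wavelength, lens positions, and the simple interleaved mask rule are illustrative assumptions rather than the experimental parameters):

import numpy as np

N = 1080                       # assumed pixels per side of the SLM display
pitch = 8e-6                   # assumed pixel pitch [m]
lam = 800e-9                   # central wavelength [m]
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)

def kinoform(f, x0, y0):
    # Quadratic phase of an off-axis kinoform lens of focal length f
    # centered at (x0, y0), wrapped to [0, 2*pi).
    return np.mod(-np.pi / (lam * f) * ((X - x0)**2 + (Y - y0)**2), 2.0 * np.pi)

lenses = [(0.245, -64e-6, 0.0), (0.245, +64e-6, 0.0)]   # (f [m], x0, y0)

# Uniform interleaved binary masks M_n: pixel (i, j) belongs to lens
# (i + j) mod n, so every lens samples the pupil evenly regardless of
# the beam profile at the SLM plane.
idx = np.add.outer(np.arange(N), np.arange(N)) % len(lenses)
dpe = np.zeros((N, N))
for n, (f, x0, y0) in enumerate(lenses):
    dpe[idx == n] = kinoform(f, x0, y0)[idx == n]   # DPE = sum_n M_n * phi_n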
Among the strategies reported in the literature, one can mention, for instance, a method based on random sampling of the phase, called random mask encoding [29], or another characterized by the application of the weighted Gerchberg-Saxton algorithm [32]. In this manuscript we select a strategy that allows us to carry out a uniform sampling of the phase onto the SLM display. The main reason for doing so is to obtain similar amounts of energy at the focal points of the Fresnel lenses even when the laser irradiance at the SLM plane does not have a uniform distribution. In addition, as we will show later, such a spatial multiplexing of lenses is very useful for achieving independent and precise control of the multifilamentation process in bulk.

Multifilamentation with DPEs To experimentally show the ability of DPEs to manipulate at will some parameters of multiple filaments developed in fused silica, we have constructed the optical setup shown in Fig. 2. The light emitted by a Ti:Sapphire femtosecond laser is used as the pulsed illumination source. The output pulses have an intensity full width at half maximum of about 30 fs and a 1 kHz repetition rate, and are centered at 800 nm with an approximate energy per pulse of 800 mJ. Before impinging on the liquid crystal SLM (Reflective PLUTO Phase Only SLM from HOLOEYE), the light is spatially magnified with the help of a 4X reflective optical beam expander (BE04R from Thorlabs). This magnification allows the light to fill the whole area of the liquid crystal display. After that, the light is sent to the SLM via a pellicle beamsplitter (BP145B2 from Thorlabs). In order to get access to regions very close to the DPE, we form an image of the liquid crystal display with the help of a 4f optical system. This optical system is composed of a pair of lenses whose combination decreases by a factor of two the transversal extension of the DPE at the output plane of our imaging system. Accordingly, the magnification of the imaging system in the axial direction is 1/4. After the output plane of the imaging system, the pulse focuses towards the entrance face of the fused silica crystal (denoted as FS-crystal in Fig. 2), originating a spatial distribution of filaments inside the crystal. To one side, a microscope objective images the weak plasma emission of the filaments onto a CCD camera (from uEye). Here, we conveniently designed the DPEs to generate spatial distributions of filaments, all of them contained in planes parallel to the plane of this CCD camera. This ensures that, for a given set of experimental parameters, e.g., fixed distances among filaments or specific focal lengths for the diffractive lenses, all filaments can be recorded at once with the CCD camera. In addition to the above-mentioned camera, another CCD camera (model A102fc from Basler, with a resolution of 1388 × 1038 and a pixel width of 6.45 µm) is placed after the rear face of the fused silica crystal in a plane perpendicular to the propagation direction of the filaments, see Fig. 2. This second camera is used to record images originated by the interference of multiple filaments, as we will later explain in section 4. In the present experiment, the focal length of the diffractive lenses for the central wavelength of the pulse, λ_0, is 245 mm. However, owing to the 1/4 axial magnification of the 4f optical system, all foci arise in a transversal plane located about 61 mm after the output plane of the imaging system. Taking the nonlinear refractive index of fused silica from [33], the energy per pulse needed for filamentation can be estimated.
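For orientation, a back-of-the-envelope version of this estimate can be written out, assuming the standard critical power for self-focusing and a representative nonlinear index for fused silica of n_2 ≈ 2.5 × 10⁻¹⁶ cm²/W (a typical literature value in the range reviewed in [33]; not necessarily the exact figure used by the authors):

P_cr ≈ 3.77 λ_0² / (8π n_0 n_2) ≈ 3.77 × (0.8 µm)² / (8π × 1.45 × 2.5 × 10⁻¹⁶ cm²/W) ≈ 2.6 MW,

so that a ~30 fs pulse needs roughly E_th ≈ P_cr × 30 fs ≈ 80 nJ per lens to reach the filamentation threshold.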
This means that if all the available focal energy E_focal is employed for non-linear processes, up to 10 additional filaments per encoded lens could be generated. In this context, it has also been shown [34] that for large focal lengths the threshold energy for filamentation decreases as the corresponding one for optical breakdown increases. For the parameters used in our experiments, e.g., f = 245 mm, the filamentation process takes place far from optical breakdown, avoiding in this manner modification of the optical parameters or damage of the fused silica sample. In Fig. 3, recorded images corresponding to four different spatial distributions of filaments within the fused silica crystal are shown. As expected, each distribution of filaments is generated by focusing the light with a specific DPE. To construct the DPEs, we follow the encoding method described in section 2. In all cases, these DPEs are included at the right part of the filament images in Fig. 3. Specifically, a set of nine filaments with relative lateral separations of 128 µm is shown in Fig. 3(a). By replacing the even convergent lenses of the previous DPE with highly divergent ones, e.g., with a focal length of −500 mm, another symmetric spatial distribution, this time composed of five filaments with separations of 256 µm, is given in Fig. 3(b). Note that, as the light is no longer focused when divergent lenses are used, we can remove the corresponding filaments within the fused silica crystal. Furthermore, for all distances among filaments such that interaction effects can be neglected, e.g., the cases shown in Fig. 3, the encoding method ensures independent control over the behavior of each filament. With the help of another DPE, a set of five filaments with non-equal separations is achieved in Fig. 3(c). Finally, the potential of the encoding method to generate arbitrary spatial distributions of filaments is experimentally demonstrated in Fig. 3(d). In this last case, a set of five filaments with non-linearly increasing/decreasing distances among them has been obtained. At this point, it should be noted that both the energy and the axial positions of the filaments shown in Fig. 3 are almost the same. Small discrepancies in the positions are less than 10 µm, which is on the order of the pixel size of our SLM, whereas energy variances are only about 8%. To achieve this, when necessary, the focal lengths of the kinoform diffractive lenses were slightly modified to correct for unwanted effects due to real experimental conditions. We found that small misalignments of the beam onto the SLM plane led to visible changes of the axial positions of filaments within the crystal. The effect of misalignments can be seen more clearly in Fig. 4, where sets of nine and five filaments achieved without, Fig. 4(a) or Fig. 4(c), and with, Fig. 4(b) or Fig. 4(d), modification of the focal lengths are shown. For instance, when the focal length of all lenses was fixed to 245 mm we recorded the image given in Fig. 4(c), whereas after modifying the focal lengths f_i we got the filament distribution shown in Fig. 4(d). The variable f_i refers to the focal length of the lens i within the DPE. On the other hand, if required, the focal energy coupled into the filaments can be conveniently decreased to finally obtain a similar amount of energy per filament. To do that, one can change the diffraction efficiency of each lens, modulating its quadratic phase φ_n(x, y) with a multiplicative parameter α that ranges from zero to one, i.e., replacing φ_n(x, y) by αφ_n(x, y).
In the above expression, the variables x, y represent again transversal coordinates on the DPE plane. This kind of shaping of the energy distribution per filament may be very useful in situations where the amplitude of the laser beam at the DPE plane shows clear inhomogeneities. In this case, if a conventional array of lenses is used to focus the pulse, as in [24], the amount of energy coupled into the filaments could be quite different. In contrast, the distributions of filaments shown in Figs. 3 and 5 have almost the same intensity, so they do not seem to be affected by this problem. For this reason, in this experiment corrections were only made to the focal lengths, but not to the diffraction efficiency of the lenses. There are other well-known factors, e.g., laser beam aberrations [21] or the non-uniform spatial phase response [22][23][24] of the SLM, that can also change the energies and positions of foci. However, in our experiment the high temporal stability of the obtained filaments suggests that possible effects introduced by time-dependent factors might be included in the scrambled distribution of filaments already shown in Figs. 4(a) and (c). Here, it should be mentioned that under broadband spectral illumination each wavelength of the ultrashort pulse is focused by diffractive lenses at a different axial position. Specifically, these positions follow an inverse dependence on the wavelength of light. So, one might expect that filaments generated by focusing ultrashort pulses with diffractive instead of refractive lenses show a somewhat different behavior, for several reasons. For instance, for the same experimental conditions, i.e., equal numerical aperture, pulse energies and focal lengths, the Rayleigh range due to diffractive lenses is longer than the one obtained by focusing the pulse with a bulk refractive lens. In the temporal domain, basically owing to the propagation time difference among pulses coming from the center and the edges of the diffractive lenses, the temporal duration of the pulse at the focus can significantly increase with respect to the temporal pulse width achieved with corresponding refractive lenses. The comparison of the filamentation process obtained with refractive and diffractive optics is beyond the scope of this manuscript, but a detailed analysis of this topic can be found elsewhere [25].

Coherence properties of filaments In this section we experimentally demonstrate the usefulness of the encoding method to investigate the mutual coherence among filaments developed in fused silica. Note that the possibility of bringing filaments as close as the pixel width of the SLM can lead to interference effects among them. In this experiment the ultrashort pulse is focused via DPEs into a fused silica crystal to form an array of filaments that are able to interfere. The conical emission due to the nonlinear propagation of the pulse within the crystal is superimposed on the interference patterns. The spectral broadening within the visible region of the electromagnetic spectrum originates colored interference patterns whose shapes depend on the spatial distribution of filaments. In contrast to a classical two-arm interferometer, which is usually highly dependent on environmental fluctuations and relatively difficult to align, the use of DPEs allows implementing a compact and robust optical system to measure the visibility of the interference fringes. It is basically composed of a couple of CCD cameras and a SLM.
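The fringe-visibility evaluation used in the experiments below can also be sketched numerically. A minimal version (Python; the recorded RGB image array and the choice of a single row through the pattern center are assumptions for illustration — in the experiment the profile is restricted to a couple of central fringes):

import numpy as np

def fringe_visibility(img_rgb, row=None):
    # Visibility V = (Imax - Imin) / (Imax + Imin), evaluated per RGB channel
    # from one irradiance profile taken across the interference pattern.
    img = np.asarray(img_rgb, dtype=float)
    if row is None:
        row = img.shape[0] // 2            # profile through the pattern center
    vis = []
    for c in range(img.shape[2]):          # red, green, blue channels
        profile = img[row, :, c]
        i_max, i_min = profile.max(), profile.min()
        vis.append((i_max - i_min) / (i_max + i_min))
    return vis                             # [V_red, V_green, V_blue]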
As explained in section 3, the CCD camera placed perpendicular to the propagation direction of the filaments is used to record the forward interference patterns, whereas the second camera placed to the side of the fused silica crystal allows recording the weak plasma emission of the filaments. A complete schematic representation of the optical layout can be seen in Fig. 2. In this optical setup, a suitable filter (model KG5, 25 mm diameter, from Edmund Optics) is utilized to remove non-converted infrared light. For this application, each DPE can be regarded as a programmable single-arm interferometer that depends only weakly on certain unwanted phenomena, e.g., mechanical vibrations. Although DPEs cannot be used to significantly vary the delay among the pulses that develop into different filaments, the positions of the filaments inside the fused silica crystal can be precisely controlled (as demonstrated in section 3). In addition, the energy coupled into the filaments could also be modified by decreasing the above-mentioned phase parameter α associated with the diffraction efficiency of the lenses. However, owing to the homogeneous spatial sampling of the lenses throughout the DPE, all generated filaments have almost the same energy, so the need for diffraction-efficiency compensation claimed, for instance, in [24] is not an issue in the present experiment. In Fig. 5, the shapes of the interference patterns for different lateral separations among filaments distributed according to two spatial configurations are shown. The selected distances l among filaments appear as an inset in the right-top part of the corresponding images. In the left-bottom part of the images, indications of their longitudinal scales are included. Typical longitudinal fringe patterns arise when two filaments interfere, whereas for 3D interference of filaments spot-like patterns come out. In particular, in the left part of Fig. 5, two similar filaments located approximately at the same axial position, but having increasing/decreasing lateral separations between them, are shown. The corresponding interference pattern is shown alongside. Similarly, in the right part of Fig. 5, with the same lateral separations as before and following a 3D spatial distribution, images of three filaments and their interference patterns are shown. These three constituent filaments are located at the vertices of an equilateral triangle. In this case, note that the visible variations in the intensity of the filaments are mainly due to the locations of the filaments at different planes within the triangular distribution. As might be expected, the longer the distance among filaments, the more magnified the interference pattern will be. In addition, one can see that the interference patterns are color-dependent. We found that the distribution of colors within the interference patterns changes with the penetration depth of the filaments within the fused silica crystal. This effect seems to be related to two main factors: the change of chromatic aberrations at the focal regions achieved for different focal lengths, and the variation in the amount of material dispersion introduced by the crystal for different penetration depths. On the other hand, it is apparent that changes in the lateral distance among filaments should not alter the visibility of the interference pattern. In fact, from Fig. 5 one can roughly see that the contrast of the interference patterns is more or less the same in all cases. This indicates a high coherence among the different filaments.
However, this is no longer true when the variations of distance take place in the axial rather than the lateral direction with respect to the propagation of the filaments. In the next experiment we want to show that, using DPEs, it is possible to investigate the effect of the axial separation of two filaments on the visibility of the interference pattern. Our experimental results are shown in Fig. 6. The axial distances d, shown as insets in the right-top part of Fig. 6(a-e), are taken with respect to the upper filament, which is kept fixed. To change the distance between the filaments, the focal length of the diffractive lens associated with the lower filament was varied from 241.8 mm to 248.0 mm in variable increments. As the interference patterns are color-dependent, different visibility values were calculated for each image corresponding to the red, green and blue (RGB) channels of the camera. These visibility values were determined for a couple of central fringes, using an irradiance profile taken at the middle of each interference pattern. These irradiance profiles (also plotted in red, green and blue) are shown in the right column of Fig. 6, together with the corresponding three values of visibility added as insets. For comparison, all visibility values were normalized with respect to the value achieved with the red channel in Fig. 6(c). This maximum value corresponds to the situation where there is no separation between the upper and lower filaments. In this case, the visibility assessed from the remaining channels is also the greatest. Therefore, the visibility worsens with increasing axial separation between the filaments. In these cases, the optical path difference and the delay between both filaments are no longer zero because the original infrared pulses pass through different amounts of material dispersion. The experimental results shown in Fig. 6 also reveal that, apart from the apparent changes of visibility with the axial separation of the filaments, there is an additional factor that could be taken into account. This factor is the different behavior of the visibility parameter for each RGB image of the interference pattern. For instance, for the experimental parameters used to obtain Fig. 6, i.e., penetration depths of the filaments within the crystal of about 4 mm and focal lengths around 245.0 mm, the values of visibility achieved with the red channel are less affected by variations of the axial position of the filaments than the corresponding values from the remaining channels. However, after changing the penetration depth of the filaments within the crystal, the behavior of the visibility parameter for each RGB image also varies. We found that, among other reasons, this phenomenon is linked to the visible change in the azimuthal distribution of colors within the interference pattern due to the modification of the penetration depth. In order to show this effect, a set of interference patterns obtained for three different penetration depths (indicated with the variable p in their top-right parts) is shown in Fig. 7. In this experiment, the interference patterns were originated by the interaction of a couple of filaments with a fixed lateral separation. To introduce approximately the same amount of chromatic aberration, the focal length (f = 280 mm) of the two diffractive lenses employed to generate the filaments remained the same during the whole experiment. Instead, with the help of a motorized stage, the fused silica crystal was moved with respect to the filaments.
After a visual inspection of Fig. 7, one can conclude that the predominant color within the interference patterns is shifted when modifying the penetration depth. In the time domain, increasing/decreasing the material dispersion implies variations in the temporal width of the pulse, which influence the development of the filaments within the crystal, as well as the characteristics of the corresponding interference patterns, see Fig. 8.

Conclusions In this manuscript, we experimentally showed that DPEs encoded into a phase-only SLM can be successfully utilized to generate arbitrary 3D spatial distributions of filaments in bulk optics with micrometric spatial resolution. The spatial sampling procedure employed to construct each DPE allows high-accuracy and independent control over some physical parameters of the filaments, such as the energy coupled into the filaments or their positions within the fused silica crystal. We found that the coupled energy and the relative positions of the filaments can be conveniently tuned by changing the diffraction efficiency and the center of the corresponding off-axis kinoform diffractive lenses, respectively. The usefulness and robustness of DPEs for practical applications were tested with a couple of experiments aimed at studying the mutual coherence of filaments while they develop inside the fused silica crystal. In particular, we showed that the visibility of interference patterns due to the interaction of filaments within a predefined spatial distribution changes when modifying the relative axial distances among the filaments. Its maximum value is achieved when there is no axial separation between the filaments. In addition, for a selected pair of fringes, we found that the visibility also depends on the RGB image associated with the interference pattern. In comparison with previous studies aimed at the control of multifilamentation processes in fused silica by using arrays of diffractive lenses, our proposal shows some advances. The most significant one is the possibility of bringing filaments as close as the pixel width of the SLM without using additional optical components. Another advance is the tested ability of DPEs to generate filaments with relatively similar energies, regardless of the homogeneity of the beam's irradiance onto the DPE. These advances are made possible mainly by the homogeneous spatial sampling of the diffractive lenses, which also guarantees a high numerical aperture for the multiplexed lenses. However, with the encoding method used in this manuscript one can encode only a limited number of lenses, basically because the sampling of the phase associated with each lens gets worse when increasing the number of lenses in the DPE. This drawback might be softened if the technology behind the next generation of SLMs allows for devices with better resolution and lower pixel width.
7,464.8
2016-07-11T00:00:00.000
[ "Physics" ]
Dual-functional cellulase-mediated gold nanoclusters for ascorbic acid detection and fluorescence bacterial imaging Protein-protected metal nanomaterials are becoming the most promising fluorescent nanomaterials for biosensing, bioimaging, and therapeutic applications due to their obvious fluorescent molecular properties, favorable biocompatibility and excellent physicochemical properties. Herein, we prepared, for the first time, cellulase-protected fluorescent gold nanoclusters (Cel-Au NCs) exhibiting red fluorescence under an excitation wavelength of 560 nm via a facile and green one-step method. Based on the fluorescence turn-off mechanism, the Cel-Au NCs were used as a biosensor for the specific determination of ascorbic acid (AA) at the emission of 680 nm, which exhibited satisfactory linearity over the range of 10–400 µM and a detection limit of 2.5 µM. Further, the actual sample application of the Au NCs was successfully established by evaluating AA in serum with good recoveries of 98.76%–104.83%. Additionally, bacteria, including gram-positive bacteria (Bacillus subtilis and Staphylococcus aureus) and gram-negative bacteria (Escherichia coli), were clearly stained by Cel-Au NCs with strong red emission. Thereby, as dual-functional nanoclusters, the prepared Cel-Au NCs have been proven to be an excellent fluorescent bioprobe for the detection of AA and bacterial labeling in medical diagnosis and human health maintenance.

Introduction Ascorbic acid (AA, vitamin C), as one of the most vital micronutrients and antioxidants in the human body, plays an imperative role in numerous biochemical reactions involving oxidative stress reduction, disease prevention, immune response and other physiological activities (Abulizi et al., 2014; Liu et al., 2017). Furthermore, AA is also a medicine for the treatment of many diseases, including scurvy, immunodeficiency, allergic reactions and liver disease, and it contributes to the absorption of iron and calcium, healthy cell development, and normal tissue growth (Zhuang and Chen, 2020). Thus, AA detection is very important in medical diagnosis and human health maintenance. At present, various analytical methods have been developed and utilized for the quantitative determination of AA, such as electrochemistry (Ma et al., 2021), high-performance liquid chromatography (Burini, 2007), and liquid chromatography–tandem mass spectrometry (Diep et al., 2020). Although these technologies have been successfully implemented in AA detection, most of them still have disadvantages such as complicated instrument requirements, long detection times, and low sensitivity. Nowadays, the fluorescence method has gradually become an ideal alternative for detecting AA because of its simplicity, high sensitivity and excellent reproducibility (Gan et al., 2020). Therefore, it is urgent to develop an innovative material with exceptional fluorescent properties for biosensing.
Metal nanoclusters (NCs) consisting of several to dozens of atoms are typically ~3 nm in size, which is equivalent to the Fermi wavelength of electrons (Jin et al., 2016), resulting in tunable metal-core compositions with discrete electronic states, obvious fluorescence molecule-like characteristics and excellent physicochemical properties (Zhang and Wang, 2014). Due to their inherent properties, metal NCs including Au, Ag, Cu, Pd and Pt NCs are being widely explored in the fields of biological imaging, biological sensing and advanced therapeutics (Guo et al., 2021; Tan et al., 2021). Notably, Au NCs have become the most promising fluorescent nanomaterial owing to their excellent characteristics, such as strong photoluminescence, extraordinary photostability, and explicit composition and combination properties (Guo et al., 2021). In light of this, various methods including microwave-assisted synthesis (Yue et al., 2012), sonochemistry (Xu and Suslick, 2010), photoreduction (Zhou et al., 2017), ligand-induced etching (Duan and Nie, 2007), and template-assisted synthesis (Qiao et al., 2021; Chen et al., 2022) have been developed to form Au NCs. Up to now, many templates, including DNA, proteins, viruses, microorganisms and plants, have been used for the preparation of Au NCs (Huang et al., 2015; Chen et al., 2018; Wang et al., 2019). Among them, due to their specific amino acid sequence composition, unique spatial conformation and chemical functional groups, proteins as effective biological templates show tremendous potential for the synthesis of Au NCs with tunable size, fluorescent properties and favourable biocompatibility (Yu et al., 2014; Guo et al., 2021). For example, Bhamore et al. prepared amylase Au NCs with red fluorescent emission and an average size of 1.75 nm for the detection of deltamethrin and glutathione (Bhamore et al., 2019). In another case, human serum albumin (HSA) directed red-emitting gold nanoclusters (HSA-AuNCs) were used as a bioprobe for Staphylococcus aureus (Chan and Chen, 2012). Moreover, in our recent study, using flavourzyme as a template, the first prepared Fla-Au NCs with blue fluorescence were successfully utilized for the determination of carbaryl (Chen et al., 2022). Papain-encapsulated platinum nanoclusters with green fluorescence can be used not only for sensing lysozyme in biofluids but also for gram-positive bacterial identification (Chang et al., 2021). Therefore, it is urgent to develop innovative protein-coated metal nanoclusters and explore their applications in bioprobes, bioimaging and therapy. Cellulase (Cel), as a pivotal industrial enzyme, catalyzes the decomposition of renewable lignocellulosic biomass into oligosaccharides or monosaccharides, and has been exploited in numerous industries, such as textile, pulp and paper, detergent, food, and biofuel production (Ejaz et al., 2021; Areeshi, 2022). However, there are very limited reports on the synthesis and application of cellulase-mediated nanostructures. Up to now, only Cel-protected copper nanoclusters (Cu NCs) with exceptional photostability, luminescence quantum yield, and colloidal stability have been investigated (Singh et al., 2016). Additionally, owing to the oxidation resistance, conductivity, non-toxicity and stability of Au, the performance of Au NCs in biosensing and biomedicine is highly anticipated.
Hereby, we innovatively fabricated one type of red-emitting Au NCs using cellulase as the template via a one-step biomineralization method. A series of characterization techniques were used to explore the optical properties, morphology, composition, and valence state of Cel-Au NCs, including UV-vis absorption spectrometry, fluorescence spectroscopy, transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR) and X-ray photoelectron spectroscopy (XPS). As shown in Scheme 1, this turn-off and label-free biosensor provides an alternative choice for AA detection in biofluids. Meanwhile, owing to their ultra-small size, bright red fluorescence and good biocompatibility, the dual-functional Cel-Au NCs could also serve as a bio-imaging probe for bacterial imaging.

Materials HAuCl4·4H2O was purchased from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China). Cellulase, pepsin, trypsin and AA were obtained from Yuanye Biotechnology Co., Ltd. (Shanghai, China). Histidine, threonine, lysine, glycine, glutathione (GSH), maltose, sucrose, glucose and metal ions were acquired from Sangon Biotechnology Co., Ltd. (Shanghai, China). All reagents were of analytical purity and used directly. Milli-Q purified water prepared by a PR03200 ultra-pure water meter (Zhongshan Keningte Cleaning Supplies Co., Ltd.) was used in all experiments. Instruments All glass containers in the laboratory were thoroughly washed with aqua regia, rinsed with ultrapure water and dried before use. A UV-1800 spectrophotometer (Shimadzu, Japan), a PF-5301PC fluorescence spectrophotometer (Shimadzu, Japan) and a Spark multimode microplate reader (Tecan, Switzerland) were applied to measure the UV-vis absorption spectra, the fluorescence spectra, and the bacterial density, respectively. Transmission electron microscopy (TEM) images were collected on a JEOL 2010 LaB6 TEM (TECNAI G2, the Netherlands) at an acceleration voltage of 200 kV. Fourier transform infrared (FT-IR) spectra and X-ray photoelectron spectra (XPS) were detected by a BW17-FTIR-650 spectrometer (Beijing, China) and an X-ray photoelectron spectrometer (Shimadzu, Japan), respectively. The fluorescence lifetime and quantum yield (QY) of the samples were recorded on an FLS920 fluorescence spectrometer (Edinburgh, UK). Zeta potential values were measured using a Malvern Zetasizer NanoZS ZEM-3600 instrument (Malvern, UK). Furthermore, bacterial images were collected using a fluorescence microscope (Zeiss, Germany). Synthesis of Cel-Au NCs Typically, 0.16 mL of the HAuCl4 solution (25 mM) and 9.84 mL of the cellulase solution (1 mM) were mixed thoroughly with a vortexer for 5 min. After adjusting the pH to 12 with the addition of 1 M NaOH solution, the mixture was reacted at 37 °C for 12 h in the dark. Then the supernatant of the above mixture was collected by centrifugation at 8,000 rpm for 10 min, dialyzed to remove unreacted metal ions with a dialysis membrane (1,000 MWCO) for 24 h, and stored at 4 °C for future use.
The detection of AA For AA detection, the Cel-Au NCs (40 mg/mL, 50 μL), different concentrations of AA solution (100 μL) and deionized water (850 μL) were mixed and incubated at 25 °C for 5 min in a water bath. The fluorescence signal of the mixture was then measured using an F-4500 fluorescence spectrophotometer with excitation at 560 nm. To evaluate the selectivity and specificity of Cel-Au NCs for AA, the fluorescence variations of Cel-Au NCs were investigated toward 16 kinds of compounds (histidine, threonine, lysine, glycine, GSH, maltose, sucrose, glucose, AA, KCl, NaCl, LiCl, ZnCl2, CaCl2, MgCl2, MnCl2). The as-prepared Cel-Au NCs were mixed with the different compound solutions and measured under the same experimental conditions as above. All experiments were performed three times in parallel. Analysis of AA in real samples To evaluate the applicability of the method, human serum samples provided by the Hospital of Traditional Chinese Medicine (Wuhu, China) were directly diluted 40 times with Milli-Q purified water before the experiment. Then, 50 μL of 40 mg/mL as-prepared Cel-Au NCs, 850 μL of diluted serum sample and 100 μL of AA solution at different concentrations were mixed and analyzed in accordance with the procedure mentioned above. Bacterial culture and viability assay Bacillus subtilis (B. subtilis, gram-positive bacteria), Staphylococcus aureus (S. aureus, gram-positive bacteria) and Escherichia coli (E. coli, gram-negative bacteria) were separately cultured on Luria-Bertani (LB) agar plates at 37 °C overnight. Subsequently, a single colony of each bacterium was picked and incubated in LB liquid culture medium with continuous shaking at 180 rpm at 37 °C for another 16-24 h. To estimate the biocompatibility of Cel-Au NCs, bacterial viabilities were measured by determining the bacterial cell density at OD600 on a Spark multimode microplate reader. When OD600 reached 0.6, the bacteria (B. subtilis, S. aureus, and E. coli) were seeded into a 96-well microplate at 1% inoculum. Then various concentrations of Cel-Au NCs (0, 10, 25, 50, 100 and 200 μg/mL) were separately added to the bacteria and cultured at 37 °C and 180 rpm. The growth of the organisms was observed by measuring OD600 up to 24 h, and all of the experiments were executed three times in parallel. The bacterial density without added Au NCs was taken as 100%. Fluorescent imaging of bacteria After centrifuging at 8,000 rpm for 5 min, the cultured bacterial cells were collected, washed with PBS, and incubated in a mixture of the prepared Cel-Au NCs (0.1 mL) and PBS (0.4 mL) in a shaker at 37 °C for 15 min. The bacterial cultures were examined on a Zeiss upright fluorescence microscope at 605 nm. Synthesis and characterization of Cel-Au NCs The red-emitting Cel-Au NCs were first prepared via a facile and green one-step biomineralization method based on the reducing capacity of cellulase provided by its sulfur-containing cysteines and methionines, which form Au-S bonds between cellulase and the Au atoms (Balu et al., 2019; Wang et al., 2019). To obtain the optimal conditions for the synthesized Cel-Au NCs, the molar ratio (cellulase/HAuCl4) and the reaction pH were optimized in Supplementary Figure S1. The molar ratio of 2.5:1 and the reaction pH of 12 were selected as the optimal conditions for further study.
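As a quick arithmetic check of this optimum against the synthesis recipe given above: n(HAuCl4) = 0.16 mL × 25 mM = 4.0 µmol and n(cellulase) = 9.84 mL × 1 mM ≈ 9.84 µmol, so the cellulase/HAuCl4 molar ratio is 9.84/4.0 ≈ 2.5:1, consistent with the stated optimal ratio.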
Initially, UV-vis absorption spectra and fluorescence spectroscopy were employed to identify the relevant optical properties of Cel-Au NCs. The UV-vis spectrum showed that Cel-Au NCs had a shoulder peak in the 300-400 nm region with a continuous rise and a distinct peak at 350 nm attributed to oxidation between cellulase and Au atoms, whereas the spectrum of cellulase showed no peak in these ranges, signifying that the Cel-Au NCs were fabricated (Figure 1A). As shown in Figure 1B; Supplementary Figure S2, the red-emitting cellulase-protected Au NCs displayed an emission peak maximum at 680 nm upon 560 nm excitation with a marked Stokes shift of 120 nm. Additionally, the QY of Cel-Au NCs in aqueous solution was determined to be 10.19% using Rhodamine 6G as a reference (Supplementary Figure S3). The morphology of the prepared Cel-Au NCs was characterized by TEM, revealing that the Cel-Au NCs had a good dispersion and an average size of 1.68 nm, obtained by counting 146 particles (Figures 1C, D), which was consistent with the diameter of metal NCs prepared in previous studies (Wei et al., 2010; Bhamore et al., 2019). Subsequently, FT-IR was used to characterize the chemical composition of Cel-Au NCs. As shown in Figure 1E, the peaks of pure cellulase and Cel-Au NCs for O-H stretching, C-H stretching, C=O stretching, C-H bending, N-H stretching and C=C bending were separately observed at 3,400 cm⁻¹, 2,940 cm⁻¹, 1,655 cm⁻¹, 1,415 cm⁻¹, 1,250 cm⁻¹ and 1,025 cm⁻¹, whereas a distinct peak in the spectrum of Cel-Au NCs was observed at 1,580 cm⁻¹, ascribed to the formation of a bond between Au and cellulase. XPS was used to measure the oxidation states of gold in the Au NCs, and the XPS spectra showed the peaks of Au, S, C, N and O (Supplementary Figure S4). Two peaks centered at 88.0 and 84.3 eV were ascribed to the Au 4f5/2 and 4f7/2 levels, respectively (Figure 1F). The 4f5/2 peak of the prepared Cel-Au NCs was further deconvoluted into two different components, one at 88.05 eV corresponding to Au(0), and the second one at 88.60 eV attributed to Au(I). Also, the two 4f7/2 peaks at 84.32 and 84.99 eV showed the simultaneous presence of Au(0) and Au(I) in Cel-Au NCs. The Au 4f7/2 spectra showed a binding energy of > 84.0 eV, indicating that both Au(0) and Au(I) existed in Cel-Au NCs and that Au-S complexes formed via charge-transfer bands were present (Bothra et al., 2017). Fluorescence quantification assay of AA When the concentration of AA was increased from 10 μM to 800 μM, a corresponding reduction in the fluorescence signal of Cel-Au NCs was observed (Figure 2A).
Figure 2B depicts the relationship between the fluorescence intensity of the Cel-Au NCs and the different concentrations of AA, and shows a good linear correlation over the range of 10-800 µM with a LOD of 2.5 µM (R² = 0.99134), indicating that the detection system possessed superior sensitivity. Simultaneously, the fluorescence intensity of Cel-Au NCs was correspondingly reduced with increasing concentration of AA under UV light (Figure 2C). Furthermore, the specificity of the Cel-Au NCs for AA was assessed by testing the response of the prepared biosensor against other compounds. Interestingly, the fluorescence intensity of Cel-Au NCs decreased markedly only after adding AA, whereas there were barely any changes in the presence of the other compounds (Figures 2D, E; Supplementary Figure S5). Compared with the published methods for AA detection (Table 1 and Supplementary Table S1), the proposed method displayed a wider detection range and an appreciable detection limit, and is simple, rapid, efficient and economical. Thus, Cel-Au NCs hold potential as an alternative biosensor for AA detection in biological environments. To elucidate the quenching mechanism of AA on the Cel-Au NCs, fluorescence resonance energy transfer (FRET), the inner filter effect (IFE), dynamic and static quenching as well as photoinduced electron transfer were investigated. As depicted in Supplementary Figure S6, AA displayed a strong absorption peak at 245 nm, which did not overlap with the Cel-Au NCs emission spectrum (600-800 nm), demonstrating that the quenching caused by AA involved neither FRET nor IFE (Fan et al., 2022). Notably, the fluorescence lifetimes were 10.27 μs and 9.60 μs for Cel-Au NCs before and after the addition of AA, respectively (Figure 2F). The noticeable change in the fluorescence lifetime of Cel-Au NCs upon the addition of AA indicated that the quenching mechanism might be dynamic quenching rather than static quenching. Similarly, the fluorescence quenching of LDH-GQD caused by Fe³⁺ was determined to be dynamic quenching due to the reduction of the fluorescence lifetime from 6.45 ns to 1.21 ns (Shi et al., 2021). Furthermore, the zeta potential of Cel-Au NCs increased from −15.2 mV to −13.3 mV after adding AA (Supplementary Figure S7). The negative zeta potential of the Cel-Au NCs is attributed to the presence of negatively charged carboxylic groups on the surface of cellulase, while the apparent increase in the zeta potential of Cel-Au NCs after the addition of AA confirms that the positively charged AA was attached to the surface of the negatively charged Cel-Au NCs. Additionally, the reducing power of AA caused an alteration in the oxidation state of Au(I), localized on the surface of the Au(0) core, further leading to the fluorescence quenching of Cel-Au NCs (Li et al., 2015; Li et al., 2017). Hence, the quenching mechanism of Cel-Au NCs might be attributed to photoinduced electron transfer and dynamic quenching.
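A minimal sketch of how such a turn-off calibration can be processed (Python; the readings below are made-up placeholders, and the 3σ/slope rule is a common convention for the LOD rather than a formula stated in this work):

import numpy as np

conc = np.array([10, 50, 100, 200, 400, 600, 800], dtype=float)  # AA [µM]
ratio = np.array([1.02, 1.09, 1.18, 1.36, 1.71, 2.05, 2.41])     # placeholder I0/I

slope, intercept = np.polyfit(conc, ratio, 1)   # linear fit of I0/I vs. [AA]

sigma_blank = 0.0015             # assumed standard deviation of blank readings
lod = 3.0 * sigma_blank / slope  # common 3*sigma/slope convention
print(f"slope = {slope:.5f} per µM, LOD ≈ {lod:.1f} µM")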
Application of AA detection in real samples To assess the practicality of the method in actual samples, the detection of AA in serum samples was carried out. As depicted in Table 2, the recovery rates of AA in actual samples were in the range of 98.76%-104.83%, and the relative standard deviations (RSD) ranged from 1.05% to 5.04%. Furthermore, to demonstrate the practicability and accuracy of this biosensor, diverse concentrations of AA in serum samples were analyzed by the commercial HPLC method (Supplementary Table S2). The recoveries of AA were between 94.24% and 102.24% with RSDs of 0.13%-3.07%. These results illustrate that the developed biosensor is applicable for the detection of AA in biological samples, in agreement with the HPLC method. Biocompatibility assessment of Cel-Au NCs The biocompatibility of Cel-Au NCs was evaluated by measuring the bacterial density at OD600. The assay was conducted on three kinds of bacteria, including B. subtilis (gram-positive bacteria), S. aureus (gram-positive bacteria) and E. coli (gram-negative bacteria). As evidenced by Supplementary Figure S8, Cel-Au NCs exhibited a negligible inhibitory effect on bacterial cell proliferation within the range of 0-100 μg/mL and a slight inhibitory effect on bacterial cell proliferation at 200 μg/mL, indicating the low cytotoxicity of the Cel-Au NCs to bacteria. Bioimaging of types of bacteria To verify the bacterial labeling ability of Cel-Au NCs, bacterial cells incubated with Cel-Au NCs were observed under a fluorescence microscope. B. subtilis (gram-positive bacteria, Figures 3A, D), S. aureus (gram-positive bacteria, Figures 3B, E), and E. coli (gram-negative bacteria, Figures 3C, F) stained by Cel-Au NCs were respectively shown in the bright field and the dark field with strong red emission when excited at 605 nm. In light of this, we hypothesized that Cel-Au NCs, with their ultra-small size, might be absorbed by bacteria and interact with multiple proteins in the bacteria. In our previous study, S. aureus, B. subtilis as well as Microbacterium incubated with papain-Pt NCs could emit distinct green fluorescence (Chang et al., 2021). Besides, in the latest research, Li's group used red-fluorescent cBSA-AuAgNCs with an average diameter of 1.80 nm to label E. coli (Li et al., 2022). Therefore, Cel-Au NCs with satisfactory fluorescence characteristics could be explored as a bioprobe that effectively labels microbial cells. Conclusion In summary, with cellulase serving as the template, a one-step biomineralization strategy was successfully proposed to synthesize fluorescent Au NCs for the first time. The average size of the as-synthesized Au NCs was found to be 1.68 nm, and they displayed an emission peak maximum at 680 nm when excited at 560 nm. Notably, the fluorescent Cel-Au NCs as a "turn-off" biosensor could be used to assay AA with an extraordinary linear correlation over the range of 10-800 µM and a LOD of 2.5 µM. Furthermore, the practical application of the biosensor was successfully developed by evaluating AA in serum samples with appreciable recoveries of 98.76%-104.83%. In addition, Cel-Au NCs displayed a negligible inhibitory effect on bacterial cell proliferation over 0-100 μg/mL, indicating the low cytotoxicity of the pre-made Au NCs to bacteria. Furthermore, due to their ultra-small size, obvious red fluorescence, and water solubility, Cel-Au NCs were also used as a bioprobe for labeling various bacteria, including B. subtilis, S. aureus and E. coli.
This analytical and bioimaging procedure is notable as it can be performed directly in a complicated environment and does not require any organic reagents as pretreatment. Therefore, this study provides new protein-directed and dual-functional Au NCs that open alternative avenues for AA detection and bacterial imaging in biomedical fields. FIGURE 1 (A) UV-vis absorption spectra of Cel-Au NCs (red line) and cellulase (black line). Inset: photographs of cellulase (left) and Cel-Au NCs (right) under UV light (365 nm). (B) Fluorescence excitation spectra of Cel-Au NCs at an emission wavelength of 680 nm (red line) and emission spectra of Cel-Au NCs upon excitation at 560 nm (black line). (C) Transmission electron microscopy image showing the average size of Cel-Au NCs, with a 10 nm scale bar. (D) Size distribution histogram of Cel-Au NCs calculated from the TEM images by counting 146 particles. (E) FT-IR spectra of cellulase-Au NCs (red) and cellulase (black). (F) XPS spectra for the Au 4f of Cel-Au NCs. The original spectrum is black, the fitted spectrum is red, the Au(0) 4f7/2 spectrum is blue, and the Au(I) 4f7/2 spectrum is pink. FIGURE 2 (A) Fluorescence spectra of Cel-Au NCs with varied concentrations of AA (top to bottom: 10-800 μM). (B) The linear relationship between I0/I and different concentrations of AA from 10 to 800 μM (I0/I, where I0 and I are the fluorescence intensities of Cel-Au NCs in the absence and presence of AA, respectively). (C) Image of Cel-Au NCs with different concentrations (10-800 μM) of AA under UV light. (D) Relative fluorescence intensity (I/I0) of Cel-Au NCs when excited at 560 nm with various analytes (I/I0, where I and I0 are the fluorescence intensities of Cel-Au NCs in the presence and absence of various analytes, respectively). (E) Photographic image of Cel-Au NCs solution upon the addition of various analytes under UV light illumination at 365 nm. (F) Time-resolved fluorescence spectra of Cel-Au NCs in the presence or absence of 600 μM AA (λex = 560 nm, λem = 680 nm). FIGURE 3 The fluorescence microscopic images of bacteria, corresponding respectively to the bright fields and dark fields of Bacillus subtilis (A,D), Staphylococcus aureus (B,E), and Escherichia coli (C,F), using Cel-Au NCs as a probe. TABLE 1 Comparison of the determination of AA using Cel-Au NCs and other reported fluorometric methods. TABLE 2 The concentration of AA in 40-fold diluted serum detected using the Cel-Au NCs.
5,086.2
2023-08-28T00:00:00.000
[ "Chemistry", "Materials Science", "Biology" ]
Effects of Microbeam Irradiation on Rodent Esophageal Smooth Muscle Contraction

Background: High-dose-rate radiotherapy has shown promising results with respect to normal tissue preservation. We developed an ex vivo model to study the physiological effects of experimental radiotherapy on rodent esophageal smooth muscle. Methods: We assessed the physiological parameters of esophageal function in ex vivo preparations of the proximal, middle, and distal segments in the organ bath. High-dose-rate synchrotron irradiation was conducted using both the microbeam irradiation (MBI) technique with peak doses greater than 200 Gy and broadbeam irradiation (BBI) with doses ranging between 3.5 and 4 Gy. Results: Neither MBI nor BBI affected the function of the contractile apparatus. While peak latency and maximal force change were not affected in the BBI group, and no changes were seen in the proximal esophageal segments after MBI, a significant increase in peak latency and a decrease in maximal force change were observed in the middle and distal esophageal segments. Conclusion: No severe changes in the physiological parameters of esophageal contraction were determined after high-dose-rate radiotherapy in our model, but our results indicate a delayed esophageal function. From the clinical perspective, the observed increase in peak latency and decreased maximal force change may indicate delayed esophageal transit.

Introduction

Radiation-induced esophagitis is one of the most common [1] and dose-limiting [2] acute toxicities in treating various thoracic tumors. After reaching a cumulative dose of 20 to 30 Gy of conventional fractionated radiation therapy, the affected patients may suffer from dysphagia or odynophagia [3], often necessitating symptomatic therapy [4]. In most cases, these symptoms are associated with esophagitis [5], which is due to radiation-induced mucosal damage [6]. Nevertheless, morphological mucosal changes are not always present, and in some patients, the clinical symptoms do not necessarily correlate with the endoscopic findings [5]. While some patients complain of only low-grade dysphagia despite endoscopically more severe esophagitis [6], other patients with subjectively higher-grade dysphagia had only marginal or even absent endoscopic findings [7]. Alternatively, there is evidence of radiation-induced motility disorder (RIMD). In the esophagus, RIMD is usually described as a late sequela [8-11] due to damaged esophageal muscle layers [8,10] or nerves [8], mainly associated with fibrosis or stenosis [8-10], but it has also been described as an acute toxicity [12-14]. In association with several clinical trials, it was confirmed that esophageal transit is acutely impaired by irradiation [15-17]. However, these findings remain controversial [16,17], as other trials found no effect [7] following irradiation. The QUANTEC database [18] discussed acute esophagitis in detail, whereas abnormal esophageal motility was mentioned only once. The underlying mechanisms are insufficiently characterized and, consequently, difficult to treat. Over the last decade, high-dose-rate irradiation has increasingly come into focus for superior preservation of normal tissue [19]. Microbeam irradiation (MBI) is an experimental irradiation technique characterized by a high dose rate and spatial dose fractionation of synchrotron-generated X-ray beams in the micrometer range [19]. A multislit collimator (MSC) is inserted into the X-ray beam, producing an array of quasi-parallel microbeams.
This results in an inhomogeneous dose distribution in the irradiated target, with a repetitive sequence of high-dose (peak) and low-dose (valley) zones [20]. The width of individual microbeams is typically in the range of 20-100 µm, with a center-to-center spacing of several hundred micrometers. With its high dose rates, MBI takes advantage of the FLASH effect, described as tissue-preserving at dose rates of ≥40 Gy/s [21,22]. It has shown effective tumor control in small animal models and excellent normal tissue tolerance in the brain [23-29]. In a recent study, a spontaneous canine brain tumor was successfully treated with MBI [30]. While the initial focus of MBI development was on the brain, more recent studies have also shown good normal tissue tolerance in the lung [31] and efficiency in treating lung tumors in a small animal model [32]. In an ex vivo study, it was shown that, even with peak doses up to 400 Gy, MBI could be conducted without severe acute effects on cardiac function [33,34]. In addition to the heart, the esophagus is also an organ at risk (OAR) in thoracic irradiation. To our knowledge, no previously published study investigated the effect of high-dose-rate irradiation on the esophagus. Therefore, we designed a pilot study to develop an ex vivo model system suitable for assessing the acute effects of irradiation on esophageal function. Our major finding is a delayed esophageal contraction without loss of contraction strength following MBI.

Preparation of Isolated Rat Esophageal Segments for Isometric Contraction Measurement in the Organ Bath

All experiments were conducted in accordance with the Guide for the Care and Use of Laboratory Animals. Sufficient water and food were available. In the current study, an acute ex vivo model was used. Before any experiment or procedure, Wistar rats aged 8-12 weeks were decapitated under deep anesthesia; deep anesthesia was proven by the absence of the pain reflex. After the post-mortem esophagectomy, the esophagus was submerged in a HEPES-buffered storage solution (in mmol/L: 120 NaCl, 4.5 KCl, 26 NaHCO3, 1.2 NaH2PO4, 1.6 CaCl2, 1.0 MgSO4, 0.025 Na2-EDTA, 5.5 glucose, 5.0 HEPES; pH = 7.4) for the preparation, and the esophagus was cut into three sections representing the proximal, middle, and distal parts of the esophagus. For isometric contraction measurements, as described before [35,36], thin nylon threads (Gütermann Toldi) were sutured to either end of the segments to enable longitudinal fixation in an organ bath (Panlab ML0146/C, ADInstruments, Oxford, UK). The organ bath was filled with a buffer solution (in mmol/L: 120 NaCl, 4.7 KCl, 2.5 CaCl2, 1.2 MgCl2, 30 NaHCO3, 0.5 Na2-EDTA, 5.5 glucose, 2.0 sodium pyruvate; pH = 7.4; osmolarity 295-300 mosmol/L) and continuously gassed with carbogen (95% O2 and 5% CO2, AirLiquide, Lutherstadt-Wittenberg, Germany). The temperature in the organ bath was kept at 37 °C. Isometric contraction was measured with a force transducer (MLT0201, ADInstruments, Oxford, UK), recorded with a bridge amplifier (ML224, ADInstruments) connected to an analog-digital converter (Powerlab 4/30, ADInstruments), and analyzed with the LabChart 7 software (ADInstruments).

Time Course of the Experiments

The time course of the experiments and a representative measurement of the contraction force are shown in Figure 1.
After fixation in the organ bath, the initial mean tension of the segments (4.56 mN ± 0.16 mN, n = 79) was adjusted, and recordings were registered for 30 min to establish stable baseline conditions. Then, this baseline tone was recorded for 15 min (Figure 1A). Carbachol (carbamoylcholine chloride, CCH, Tocris Bioscience, Bristol, UK), a non-hydrolyzable structural analog of the neurotransmitter acetylcholine, was used to induce the isometric contraction. After adding 100 µL of the CCH stock solution (2.5 mM CCH) to yield a final CCH concentration of 10 µM in the organ bath, the isometric contraction was recorded for 15 min (Figure 1B). CCH was washed out, the specimens were allowed to relax, and the baseline tone was reached after an additional duration of 30 ± 10 min. The segments were removed from the organ bath with their force transducers to maintain the tension, and were then positioned on the irradiation table and irradiated. The segments were dipped into the buffer solution just before irradiation to keep them as humid as possible. After irradiation, the force transducers and the segments were returned to the organ bath. The time interval between the removal of the segments and replacing them in the organ bath was 8 ± 3 min. The segments recovered for 15 min. Finally, 100 µL of 2.5 mM CCH was added again, and the isometric contraction was registered for another 15 min. To compare the effects of MBI, with its high peak doses, to the effects caused by a homogeneous valley dose, we also performed a high-dose-rate BBI study with a dose approximately corresponding to the valley dose of the MBI study (first control group, Table 1). For technical reasons, during the irradiation process, the segments were not submerged in the buffer solution. Thus, to control for this situation, we also performed sham irradiation as a second control group (Table 1). These control segments were not irradiated, but the buffer solution was removed for 8 ± 3 min, equivalent to the duration of the irradiation procedure. At the end of the experiments, all segments were fixed in 3.7% PFA solution for immunohistochemistry.

Calculation of Parameters for Characterization of Isometric Contraction

The force-time curves were used to calculate several parameters (Figure 1C). In addition to the segment length, five parameters were calculated. To describe the function of the contractile apparatus, we calculated the baseline tone, the maximal contraction strength, and the peak amplitude. The peak latency and the maximal force change were calculated to evaluate the signal transduction process. The baseline tone was the mean contraction strength during the last 10 s before adding CCH, and the peak amplitude was the difference between the maximal contraction strength and the baseline tone. Peak latency was the interval between adding CCH and the maximal contraction strength. The first derivative (dF/dt) of the recorded force-time curve was calculated, and the maximal force change was determined at the inflection point of the curve.
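As a concrete illustration of these definitions, the sketch below extracts the five contraction parameters from a sampled force trace; the sampling rate, array layout, and synthetic trace are assumptions for demonstration, not the recorded data.

import numpy as np

fs = 10.0                                   # assumed sampling rate (Hz)
t = np.arange(-60.0, 900.0, 1.0 / fs)       # 1 min baseline, then 15 min after CCH (s)
force = np.where(t < 0, 3.0,                # synthetic trace: 3 mN baseline, then contraction
                 3.0 + 8.0 * (1.0 - np.exp(-t / 120.0)))

post = t >= 0                                          # samples after adding CCH at t = 0
baseline_tone = force[(t >= -10) & (t < 0)].mean()     # mean force, last 10 s before CCH
peak = force[post].max()                               # maximal contraction strength (mN)
peak_amplitude = peak - baseline_tone                  # peak amplitude (mN)
peak_latency = t[post][force[post].argmax()]           # time from CCH to maximum (s)
max_force_change = np.gradient(force, 1.0 / fs)[post].max()   # maximal dF/dt (mN/s)
print(baseline_tone, peak_amplitude, peak_latency, max_force_change)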
Irradiation Protocol

The experiments were conducted at the beamline P05 of the PETRA III synchrotron-radiation source operated by HEREON on the DESY campus in Hamburg, Germany [37]. The setup was designed as a full-field imaging beamline with a tunable monochromatic energy spectrum between 5 keV and 50 keV. The irradiation procedure was conducted in the second experimental hutch, dedicated to microtomography. Due to the large distance of 85.9 m from the undulator source, a beam width of up to 7 mm horizontally was obtained. To operate with an optimized photon flux, the experiment was carried out with a double multilayer monochromator at an energy of 30 keV with an energy bandwidth of approximately 1%. This can deliver a photon flux on the order of 10^13 ph/s. Although MBI studies have in the past been conducted exclusively at white-beam beamlines, mostly on wiggler sources, from a physics perspective there are certain advantages to using a monochromatic beam. A beam produced by an undulator offers an intrinsically better horizontal collimation (at P05, 28 µrad rms), improving the homogeneity of off-axis microbeams. Due to the monochromatization, no beam hardening occurs. With the photon energy fluence in general well determined at a monochromatic synchrotron beamline, the absorbed dose rate to a medium can be readily calculated using the mass energy absorption coefficient [38]. The drawback of an undulator beamline is, however, that the dose rate is at least one order of magnitude lower compared to the one available at a white-beam wiggler source. For ex vivo experiments such as the one developed in this study, given that the motility of the esophagus is only minimal, the requirement of an extremely high dose rate is not as strict as it would be for in vivo experiments, where much physiologic movement due to breathing and cardiac activity can be expected. In the case of physiologic movement during irradiation, a lower dose rate and a subsequently longer exposure time can cause a smearing of the microbeam edges, which in turn will result in a decrease of the peak-to-valley dose ratio (PVDR) and thus an impairment of normal tissue tolerance. There is evidence in experimental radiotherapy for a positive correlation between dose rate and normal tissue protection [21]. For this experiment, the beam size was adjusted to 4.85 × 3.8 mm² (horizontal × vertical), optimizing the homogeneity of the intensity distribution. The dose rate of the broadbeam field was determined with a small-field, soft X-ray ion chamber for clinical use (PTW TM34103W) and cross-checked with a Si photodiode (Canberra PIPS detector, calibrated at PTB Berlin, the German national metrology institute). To determine the dose in the microbeams, a method providing a high spatial resolution was necessary. To achieve this, Gafchromic™ film (HD-V2, EBT3, Ashland, Bridgewater, MA, USA) was used after cross-calibration in the broadbeam field. A dose rate of 81 Gy/s in the broadbeam field was measured. All dosimetric values presented here were determined at the sample entrance. The broadbeam field was split by an MSC (UNT, Morbier, France) into an array of vertical quasi-parallel microbeams with an individual width of 50 µm, spaced at a center-to-center distance of 400 µm. Using the available horizontal beam width, up to 12 microbeams could be obtained. Thus, the target zone was covered with a grid of high-dose (peak) zones and low-dose (valley) zones.
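Because the samples are scanned vertically through the beam (the scan speeds are given in the next paragraph), the delivered dose is, to first order, the dose rate multiplied by the dwell time, i.e., dose ≈ dose rate × (beam height / scan speed). The sketch below applies this idealized relation to the reported settings; the geometry is simplified, and the MSC and the film calibration modify the actual peak dose, so this serves only as an order-of-magnitude consistency check.

# Idealized scan-dose estimate: dose ≈ dose_rate * (beam_height / scan_speed).
def scan_dose(dose_rate_gy_s: float, beam_height_mm: float, speed_mm_s: float) -> float:
    return dose_rate_gy_s * beam_height_mm / speed_mm_s

print(scan_dose(81.0, 3.8, 0.77))    # ~400 Gy: order of the MBI peak doses
print(scan_dose(81.0, 0.15, 2.18))   # ~5.6 Gy: order of the measured 3.5-4 Gy BBI dose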
The microbeam array could be visualized with a CMOS camera used for microtomography (Ximea CB500MG, 7920 × 6004 pixels with a pixel size of 0.9 µm at the lowest magnification). The esophagus segments were irradiated either in MBI mode (n = 39, group I in Table 1) or in BBI mode (seamless irradiation, n = 20, group II). Some segments served as non-irradiated controls (n = 20, group III). The irradiation dose was administered in one single fraction in both MBI and BBI modes. During the irradiation, the samples were translated vertically through the beam. By varying the speed or the vertical beam height, the dose deposited in the sample could be altered by two orders of magnitude. For the high peak dose MBI, a speed of 0.77 mm/s was chosen. For the low-dose BBI, the speed was raised to 2.18 mm/s while reducing the beam height to 0.15 mm. For dosimetry and verification of the correct positioning, a Gafchromic™ film was placed behind the irradiated specimens.

Statistical Analysis

SigmaPlot (Systat Software, Version 13) was used for the statistical analysis. The test of univariate normality was the Shapiro-Wilk test. To compare the data before and after irradiation, a parametric paired t-test and a non-parametric Wilcoxon test were performed. For comparison between the three groups, a non-parametric Kruskal-Wallis test followed by a post hoc test (one-way ANOVA on ranks) was used. In some cases, we also used a parametric two-sample Student's t-test, a non-parametric Mann-Whitney U-test, and a parametric one-way ANOVA. The level of significance was set to 0.05. The data are presented as mean ± standard error of the mean (SEM).

Concentration-Response Relationship for Carbachol

The first experiment was to investigate the concentration dependence of the CCH-induced contraction of the esophageal smooth muscle (Supplementary Materials Figure S1). Before adding CCH, the baseline tone among the segments was not statistically different (mean baseline tone of all segments 2.99 ± 0.52 mN, one-way ANOVA, p = 0.252, grey box in Supplementary Materials Figure S1). In the presence of 0.01 µM CCH, no contraction was registered. The recorded force remained within the range of the baseline tone (Student's t-test, p = 0.739). With increasing concentrations of CCH, discernible isometric contractions were obtained with increasing amplitudes until all muscle cells were contracted and a plateau was reached. A flattening of the concentration-response curve was observed in the presence of 10 or 100 µM CCH, indicating saturation (11.32 ± 1.33 mN and 12.20 ± 1.75 mN, respectively). There was no statistical difference between these two contractions (Student's t-test, p = 0.715). However, the relaxation time after washing out CCH was longer for the higher concentration of CCH. Therefore, we conducted the rest of the study using a 10 µM CCH concentration in the organ bath (dashed rectangle, Supplementary Materials Figure S1).

The Three Experimental Groups Were Homogeneously Randomised

To ensure that the segments were evenly distributed throughout the groups, we analyzed the CCH-induced contraction in all segments before irradiation, including the mean segment length, mean baseline tone, mean maximal contraction strength (peak), mean peak amplitude, mean peak latency, and mean maximal force change.
We found no significant differences between the experimental groups (Table 1) in the Kruskal-Wallis ANOVA on ranks (Table 2).

MBI Significantly Increased Peak Latency and Decreased Maximal Force Change

We next analyzed the peak latency and the maximal force change (Figure 2A,B) before and after irradiation. In the BBI group, we found no significant difference (paired t-test or Wilcoxon test, see Figure 2A,B). However, in the MBI group, the peak latency significantly increased (260 ± 31 s to 335 ± 33 s, Wilcoxon test, p < 0.001), and the maximal force change significantly decreased (0.12 ± 0.02 mN/s to 0.09 ± 0.01 mN/s, Wilcoxon test, p < 0.001). This may indicate an impairment of the signal transduction cascade for contraction in muscle cells after MBI. Furthermore, the contraction in the SHAM group was not altered, suggesting that the irradiation procedure itself did not affect the CCH-induced contraction.

Figure 2. Characterization of the signal transduction. (A,B) CCH-induced contraction before and after irradiation for mean peak latency and mean maximal force change. In the BBI and SHAM groups, there was no statistical difference (paired t-test or Wilcoxon test, p > 0.05), but mean peak latency significantly increased and mean maximal force change decreased after MBI. (C,D) In the subgroup analysis, only peak latency for the distal segment (Wilcoxon test, p = 0.027) and maximal force change for the middle (Wilcoxon test, p = 0.005) and distal (Wilcoxon test, p = 0.021) segments remained statistically different. p-values were calculated with a paired t-test or, respectively, with a Wilcoxon test. Outliers are plotted as black dots. * p < 0.05; the median is illustrated by the yellow horizontal line. Abbreviations: MBI: microbeam irradiation (red boxplot); BBI: broadbeam irradiation (gray boxplot); SHAM: sham irradiation (black boxplot); pre-RT and post-RT: before (pre-) and after (post-) irradiation; CCH = carbachol. The number of segments is given in parentheses.
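The pre/post comparisons above follow a standard decision rule: test the paired differences for normality, then use the paired t-test if normality holds and the Wilcoxon signed-rank test otherwise. A minimal SciPy sketch of this rule is shown below, with synthetic placeholder latencies rather than the recorded values.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(260.0, 30.0, 13)          # synthetic pre-irradiation peak latencies (s)
post = pre + rng.normal(75.0, 20.0, 13)    # synthetic post-irradiation latencies (s)

_, p_norm = stats.shapiro(post - pre)      # Shapiro-Wilk test on the paired differences
if p_norm > 0.05:
    _, p = stats.ttest_rel(pre, post)      # parametric paired t-test
else:
    _, p = stats.wilcoxon(pre, post)       # non-parametric Wilcoxon signed-rank test
print(f"normality p = {p_norm:.3f}, paired-test p = {p:.4f}")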
Subgroup Analysis of the Peak Latency and Maximal Force Change

Since the peak latency significantly increased and the maximal force change significantly decreased following MBI, we performed a subgroup analysis in this group (Figure 2C,D). Only the peak latency for the distal segment (Wilcoxon test, p = 0.027) and the maximal force change for the middle (Wilcoxon test, p = 0.005) and distal (Wilcoxon test, p = 0.021) segments remained significantly different. In the proximal segment, the peak latency (Wilcoxon test, p = 0.091) and the maximal force change (paired t-test, p = 0.106) were not altered.

MBI Did Not Affect Baseline Tone, Maximal Contraction Strength, and Peak Amplitude

We also compared the baseline tone, maximal contraction strength, and peak amplitude to characterize the contractile apparatus (Figure 3A-C). We found no significant differences within the MBI and BBI groups (paired t-test, or Wilcoxon test if the normality test failed; for p-values see Figure 3). This indicates that MBI and BBI did not affect the contractile apparatus of the esophageal smooth muscle. Again, there was no statistical difference in the SHAM group.

Figure 3. Characterization of the contractile apparatus. (A-C) CCH-induced contraction before and after irradiation for mean baseline tone, mean maximal contraction strength, and mean peak amplitude. There is no statistical difference before and after irradiation, indicating that MBI and BBI do not affect the contractile apparatus. p-values were calculated with a paired t-test or, respectively, with a Wilcoxon test (p > 0.05). Outliers are plotted as black dots. The median is illustrated by the yellow horizontal line. Abbreviations: MBI: microbeam irradiation (red boxplot); BBI: broadbeam irradiation (gray boxplot); SHAM: sham irradiation (black boxplot); pre-RT and post-RT: before (pre-) and after (post-) irradiation; CCH = carbachol. The number of segments is given in parentheses.

Dosimetric Characteristics

Based on the HD-V2 Gafchromic™ film measurements (Figure 4A), the peak dose was 225 ± 15 Gy. The microbeam profile (Figure 4B) shows that higher doses were delivered in the more central microbeams, while the peak doses were lower in the peripheral microbeams. Since the incident beam width at the sample position was only 5.6 mm, only the centrally located 12 slits of the MSC were used. Based on the values obtained with EBT3 Gafchromic™ film (Figure 4C,D), the valley dose was approximately 2.25-2.5 Gy. The resulting peak-to-valley dose ratio (PVDR) was calculated to be between 93 and 107. This corresponds well to the PVDR of 85-114 determined based on the CCD camera readouts. The broadbeam dose was between 3.5 and 4.0 Gy, as determined by a soft X-ray chamber.
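As a quick arithmetic check on these film values, the PVDR is simply the ratio of peak to valley dose; the two-line sketch below reproduces the quoted range from the reported numbers (taking the lower end of the valley range as the reference, which is what the quoted 93-107 appears to imply).

peak, spread = 225.0, 15.0     # Gy, HD-V2 film (225 ± 15 Gy)
valley = 2.25                  # Gy, EBT3 film (lower end of the 2.25-2.5 Gy range)
print((peak - spread) / valley, (peak + spread) / valley)   # ≈ 93.3 and 106.7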
Visualization of Dose Deposition in the Esophagus after Microbeam Irradiation

Irradiation induces single- or double-strand breaks in the DNA [39]. A suitable marker for DNA damage is the phosphorylation of the histone H2AX on serine 139 after irradiation (γ-H2AX) [40,41]. Previous studies showed that the intensity of the γ-H2AX stain positively correlates with the administered dose and is thus a useful marker for dose deposition in the tissue after MBI [42]. Therefore, γ-H2AX immunostaining was used to detect the radiation-induced DNA damage. Representative fluorescence images are shown in Figure 5A,B. Of note, due to the fixation and preparation for immunostaining, the microbeams do not always appear parallel. To visualize the esophageal smooth muscle, we also performed desmin immunostaining (Figure 5C,D).

Discussion

In conventional thoracic radiotherapy, one of the goals is to minimize the risk of radiogenic damage to the esophagus as an OAR. Depending on the location of the tumor, optimal sparing of the esophagus is not always feasible. In experimental studies, MBI has shown good normal tissue tolerance in the lung concerning thoracic irradiation [31,32]. However, acute toxicity in the esophagus following MBI has been unknown so far. To our knowledge, the present study is the first to investigate the radiation effects of MBI on an isolated rat esophagus in an acute ex vivo model. We irradiated isolated esophageal segments and evaluated CCH-induced isometric contractions before and after irradiation. No significant changes regarding baseline tone and maximal contraction force were determined, but the peak latency was found to be increased and, in addition, the maximal force change decreased after MBI. These results align with the findings of cardiac physiological studies [33,34] during and after MBI. In these studies, rodent hearts in the Langendorff perfusion system were irradiated with MBI peak doses up to 400 Gy and 4000 Gy, respectively.
Up to MBI peak doses of 400 Gy, no acute or subacute severe effects on cardiac function were observed [33,34]. No significant changes in ventricular or aortic pressure were found, and no structural alterations occurred [34]. Even after irradiation with MBI peak doses of 4000 Gy, only temporary arrhythmia occurred, which converted back to sinus rhythm spontaneously [33]. This may indicate that MBI interfered with the signal transduction cascades of the contraction rather than the contractile apparatus itself, which would agree with the observations made in the current study on the esophagus. Similar physiologic studies were conducted with the rat urinary bladder [43-45] or with the human anal sphincter [46] using a conventional linear accelerator. Giglio et al. [43] found that the methacholine- and electrical-field-stimulation (EFS)-induced contractions were reduced after irradiation, whereas the contraction in response to potassium chloride (KCl) was not altered. In the study by McDonnell et al. [44], no effects on agonist-induced contractions (CCH and KCl) were found. In mucosa-free bladder strips, EFS-induced contractions were unchanged, and in normal bladder strips, EFS-induced contractions were reduced in a frequency-dependent manner. Similarly, Lorenzi et al. [46] demonstrated an impaired function of the human anal sphincter following radiation therapy for rectal cancer. They found significant differences in the response to CCH but not to sodium nitroprusside. In all studies [43,44,46], it was concluded that irradiation affects neuronal structures and intracellular signaling rather than the muscle itself. The same could apply to our preparations, since the CCH-induced maximal contraction strength was preserved, but the peak latency and maximal force change were significantly impaired. In contrast, the recent study by Turner et al. [45] reported a decreased KCl-induced contraction of the bladder following irradiation of the rat prostate, and it was speculated that irradiation might also affect the smooth muscle itself. This highlights that the mechanisms of radiogenic injury to muscle remain poorly characterized. Some limitations of the study should be discussed. First, blood perfusion was acutely interrupted after esophagectomy, resulting in a potential risk of hypoxia. Hypoxic cells have been described as less radiosensitive [47]. In our study, the esophagus was oxygenated by diffusion, which requires a high partial pressure of oxygen. A carbogen-gassed (95% O2, 5% CO2) solution serves this purpose. Good cell survival has been shown in brain tissue [48-50], in the Langendorff perfusion system [33,34,51], and in the organ bath [52], especially when using intestinal tissue [53] or esophageal sections [54-56]. One study [57] reported toxic effects of 95% O2 in prolonged esophageal cell cultures, but these results have not been replicated [58]. Second, the risk of autolytic processes during the removal of the esophagus from the organ bath for several minutes should be addressed. We immersed the esophagus before and after irradiation in the carbogen-gassed buffer solution at room temperature to keep it as humid as possible. If a marginal degree of autolysis had occurred, the effects of MBI would have been masked by this process rather than exaggerated. In this case, our results would have been underestimated.
Furthermore, if significant autolytic processes had occurred, this would have become apparent in both the irradiated and the non-irradiated control tissue (in which the buffer was also removed during sham irradiation). Taken together, neither hypoxia nor autolysis should have significantly confounded our data. Finally, the short observation time in this study did not allow us to address vascular damage. Radiation-induced vasculopathy is a common late toxicity that can be seen in patients after conventional radiotherapy [59] as well as in mice following MBI [60], leading to fibrosis or necrosis [61]. In organ bath experiments, irradiation did not affect the function of a rabbit aorta [62]. However, inflammation of the endothelium was found 24 h after irradiation [63]. Inflammatory processes may influence the function of the esophageal contractile apparatus. We plan to conduct an in vivo study to investigate the early and late effects on the vascular system after conventional and microbeam irradiation.

Conclusions

In the present study, the function of the contractile apparatus itself was preserved, but the signal transduction was slightly impaired in the middle and distal esophageal segments. Regarding MBI, the preservation of cardiac and esophageal function is promising for future therapeutic approaches. From the clinical perspective, our results may plausibly explain radiation-induced motility disorder. Whether this causes symptomatic dysphagia remains to be tested in vivo.

Institutional Review Board Statement: All experiments were conducted in accordance with the Guide for the Care and Use of Laboratory Animals. Sufficient water and food were available. In the current study, an acute ex vivo model was used. Before any experiment or procedure, the animals were decapitated under deep anesthesia, proven by the absence of a pain reflex. In accordance with national law, ethical review and approval were waived for this study.

Informed Consent Statement: Not applicable.

Data Availability Statement: Research data are stored in an institutional repository and will be shared upon request to the corresponding author.
7,207.2
2022-12-31T00:00:00.000
[ "Medicine", "Physics", "Environmental Science" ]
Gaming to Learn Astronomy: an innovative approach, two study cases

In this paper I am going to present two innovative approaches for learning Astronomy through gaming. On the one hand, we used the subject of Astronomy as a vehicle, since Astronomy provides a unique environment for educators from Kindergarten to Lyceum, and because its multidisciplinary character is ideal for an introduction to science education. On the other hand, gaming to learn has been around for a decade, but it is only recently that its possibilities in the realm of education have truly been appreciated. If indeed humans think immeasurably better as part of a network than on their own, then games are an obvious terrain in which to set minds free and let them wander around. Thus it's really worthwhile to test and try this new medium for learning a fascinating science.

Introduction

We believe that by developing and promoting the teaching of Astronomy in the broadest possible way we can introduce students to science in a very pleasant manner and easily prepare them for a "life-long learning" journey [3]. Furthermore, the cultural and philosophical role of Astronomy is undisputed. Studying the Universe is a way of searching for our own origin, learning to situate ourselves within cosmic infinity and developing a sense for the beauty and fragility of our planet the Earth. It also allows us to keep a critical approach towards irrational pseudo-sciences [4]. Thus it is a unique opportunity to involve Astronomy in the new case for Educational Learning through Play. Those who believe in using games in education usually start from a common set of assumptions. They observe that game players regularly exhibit persistence, risk-taking, attention to detail and problem-solving skills, all behaviors that ideally would be regularly demonstrated in school. They also understand that game environments enable players to construct understanding actively, and at individual paces, and that well-designed games enable players to advance on different paths at different rates in response to each player's interests and abilities, while also fostering collaboration and just-in-time learning, i.e. «collective intelligence» [5]. In other words, games, which are formalized expressions of play allowing people to go beyond immediate imagination and direct physical activity, help students to develop non-cognitive skills that are as fundamental as cognitive skills in explaining how we learn and whether we succeed. Skills such as patience and discipline, which one should acquire as a child but often does not, correlate with success better than IQ scores do [6]. And those non-cognitive skills, that is, not what you know but how you behave, are far better suited to a game context than to a traditional classroom and textbook context. Gamification is the application of game-design elements and game principles in non-game contexts to improve user engagement, organizational productivity, flow, learning, and more.

History of games

Games are an integral part of all cultures and are one of the oldest forms of human social interaction (e.g. games are found in burials in Egypt from 3100 BC and in prehistoric India; Plato and Homer mention board games called 'petteia', etc.). According to the German philosopher Friedrich Schiller, play is a force of civilization, which helps humans rise above their instincts and become members of enlightened communities [1] ("humans are only fully human when they play").
The Dutch historian Huizinga (a founder of modern cultural history) saw games as a starting point for complex human activities such as language, law, war, philosophy and art. Educational games are games that have been specifically designed to teach people about a certain subject, expand concepts, reinforce development, understand a historical event or culture, or assist them in learning a skill as they play. This includes board, card and video games. Just a few places where you see educational games being used are: K-12, universities, the military, business. Officially, games were incorporated in education early in the 19th century, with the creation of the Kindergarten by Friedrich Fröbel, which was based on learning through simple educational toys such as blocks, sewing kits, clay, and weaving materials.

Description of the game

The game "StarStorm" was created as an entry to the National Hellenic Contest of Astronomy celebrating the 2009 IYA, run by the Hellenic Physical Society, where it won second place. The team that created the board game was a school team of mine that consisted of 8 students. We carefully designed the game based on the following features: align game goals with cognitive work; uncertainty of outcome; agreed-upon rules with hard competition; make it adaptive; build in opportunities for suspense, conflict, and complication; make sure the game is simple in the right way; separate place and time; elements of fiction; elements of chance; and personal enjoyment [6]. The general objective of the created board game is to promote students'/the public's understanding of science and astronomy, that is, by playing. The "StarStorm" board game engages the game players: (I) those 10-18 years old, to activate their personality on the intellectual, emotional, desire, intuitive and imaginative levels, and (II) those older than 19 years, to discover the wonders of the Universe, the myths and basic characteristics of the constellations. Additionally, "StarStorm" gives players the message that it is not only luck (dice) but also scientific knowledge (question and cosmic event cards) which makes a civilization survive. In detail, "StarStorm" is a strategic board game for 4 players. It is played on a board depicting a constellation map, divided into territories-constellations. Players are in command of an army fighting for galaxy domination. They have to organize their forces to crush their enemies and dominate the constellations in this fast-paced game of strategy, negotiation, knowledge, and luck. It's up to them to deploy their troops, attack their enemies, and even betray their allies, in an aggressive effort to find a habitable exoplanet as their new home, since they cannot live on their planet anymore because they have contaminated it. Through this game the players (from ages 10 to 100!!) are encouraged and motivated in a pleasant and entertaining way to gain astronomical knowledge through the cards of the game, and to familiarize themselves with the night-sky constellations and learn about them from the board itself. For the game we created 1) a leaflet with the rules, 2) CONSTELLATION cards, which include a brief description of the respective constellation, remarking on its main features, the myths behind it, and its pattern, 3) MISSION cards, with the mission that is given to each player for the game, and 4) QUESTION cards and COSMIC FACT cards that have been carefully created and relate to cutting-edge astronomy topics.
For the area (constellation) movement, "StarStorm" ignores realistic limitations, using the technology of wormholes!!

Fig. 1: The board game "StarStorm" (for players from 10 years old).

The Evaluation

Research shows that games as a medium can be effective, but not always. Design is really what matters. Nobody assumes that all lectures, labs or books are good simply because of their medium [7,8]. We thus synthesized comparisons of game conditions versus non-game conditions (i.e., media comparisons) correlated with multiple learning objectives, requiring only a basic understanding of the required skills, in order to evaluate our game. For the evaluation, 50 students played the game. We carefully created a questionnaire and administered it to them as well as to 50 other students who did not play the game. The result was that when students played they adopted a mastery mindset that is highly conducive to learning. Moreover, students' interest and enjoyment increased when they played. Additionally: 30 gymnasium students found that the game was more motivating than pencil-and-paper activities when learning astronomy (motivation and conceptual learning increased); 20 lyceum students found that using the board game was significantly better than formal learning (of the constellations etc.). As well, students claimed that by playing the game, they did not need guidance, did not need to be challenged and did not need time to reflect!

Study case II: Learning about the Galaxy by constructing a Galactic Garden at school

In this gamification project we designed a flower replica of our Galaxy! Most bright stars in our Milky Way Galaxy reside in a disk. Since our Sun also resides in this disk, these stars appear to us as a diffuse band that circles the sky. A panorama of a northern band of the Milky Way's disk covering 90 degrees shows many bright stars, dark dust lanes, red emission nebulae, blue reflection nebulae, and clusters of stars. All the above and more can easily be explained and taught to students by creating a galactic garden at school with them. The aim was to engage students in the learning process by gamification. The procedure we followed was to learn about a) galactic coordinates and b) the structure of the galaxy, then set up the scale of the model to "plant", decide about the content to be demonstrated, carefully choose the plants/flowers that would correspond to the content, and make up the budget! Then we worked on the creation and walked around this Galactic Garden asking and answering questions. All the data used were taken from the 2008c-10b map of NASA/JPL-Caltech/R. Hurt. We didn't evaluate this project, but students always enjoy walking, asking and learning about the Galaxy through the galactic garden.

Conclusion

Collective intelligence is a good practice in the service of pooling knowledge and diversity to make the world a better place, and it applies to games as architectures for engagement. Nevertheless, without a strong evidence base to back up games' academic effectiveness, the allure of using games for learning is hard to pass up. As research evolves, ensuring good assessment practices is critical for ensuring effective learning in games. A set of criteria is needed to help sort through all of this, and future research will need to examine design features that optimize learning across curricula. Students who played the "StarStorm" game and entered the gamification procedure of creating a Galactic Garden were thrilled by the experience!
Astronomy offers a unique opportunity for implementing "New Literacies", which expand the conception of literacy beyond books and reading, and through them to introduce science to students.
2,374.4
2019-07-01T00:00:00.000
[ "Education", "Physics", "Computer Science" ]
The Impact of Media on a New Product Innovation Diffusion: A Mathematical Model

In this paper, we propose a three-compartment model consisting of non-adopter, adopter and frustrated classes of population to discuss the influence of media coverage on the spreading and control of adopters of a particular product in a region. The model exhibits two equilibria: (i) an adopter-free equilibrium and (ii) a unique interior equilibrium. Stability analysis of the model shows that the adopter-free equilibrium is always locally asymptotically stable if the influence number of adopters (R0), which depends on the parameters of the system, is less than unity. Otherwise, if R0 > 1, a unique interior equilibrium exists; it is locally asymptotically stable under some set of conditions. Further, it is observed analytically and numerically that the region of backward bifurcation of the adopter population increases with the decrease of the valid contact rate before media alert. Finally, numerical experiments are presented to establish the effect of different media alert rates on the adopter and non-adopter populations.

Introduction

A manager seeking to introduce a new product into the potential market has a limited number of variables under his control. The marketing manager must understand how these decision variables impact the diffusion process if he hopes to use them effectively [1,25,26]. A review of the theory of adoption of new products by a social system has been presented in [2]. These ideas have been expressed mathematically in diffusion models which emerged early in epidemiology and population models [3]-[12]. The diffusion of an innovation has traditionally been defined as the process by which an innovation is communicated through certain channels over time among the members of a particular geographical region [2,26].

2000 Mathematics Subject Classification: 91FXX, 34CXX, 34DXX

The diffusion process has frequently been modeled via a two-stage single differential equation approach, representing the epidemic-like manner in which the penetration and adoption of the innovation are influenced simultaneously by external and internal sources [13,14,15,16]. The price and advertising variables have typically been incorporated in these models to determine the basic parameters of the differential equation [17,18,19]. The theory of innovation diffusion, when viewed as a theory of communication, centers around communication channels, which transmit information to the social system and also within the social system. The communication channels considered in the theory are two: mass media and interpersonal communication. The first, mass media, facilitates the gaining of information about the innovation by individuals and is more effective in imparting knowledge, whereas the second, interpersonal communication, plays a decisive role at the persuasion level in the society, where face-to-face exchange of views is a continuous process [24]. This builds on the basic behavioral theory which stipulates that the innovation is at first adopted by innovators themselves, which encourages its adoption by the society via interpersonal communication. The diffusion process in marketing has been described by the classical Bass (1969) model [13] through the differential equation

dN(t)/dt = [p + (q/m) N(t)] [m - N(t)],     (1)
where N(t) is the cumulative number of adopters at time t, m is the total population of potential adopters, p is the coefficient of innovation and q is the coefficient of imitation. The first term in equation (1) denotes the adoption by innovators and the second term denotes the adoption by imitators. Therefore, in this paper, we propose a non-linear mathematical model to study the effect of media alert on innovation diffusion by using the stability theory of differential equations. In section 2, we develop and analyze a model incorporating the media impact, considering three classes of population, namely, non-adopter, adopter and frustrated. The calculation of the adopter-free and endemic equilibria, the basic influence number, and the proofs of the local stability of the adopter-free equilibrium and of the endemic equilibrium are presented in section 3. In sections 4 and 5, we discuss the existence of backward bifurcation and the numerical simulation of the system, respectively.

Mathematical Model

We propose a non-linear dynamical mathematical model considering three population classes: a non-adopter class, an adopter class, and a frustrated class, with population densities N(t), A(t) and R(t), respectively, at time t. Let r be the recruitment rate of the population joining the non-adopter class, ν the rate of frustration at which adopters join the frustrated class, δ the coefficient of the discontinuance rate of adopters, d1 the natural death rate of the population in all classes, β1 the contact rate before media alert, and β2A/(m+A) the media-induced adjustment of the contact rate after media alert. We choose this function to model the media alert, with the term β2A/(m+A) reflecting the change in the transmission rate when adopter individuals appear and are reported. When A → ∞, the change in the transmission rate approaches its maximum β2, and it equals half of the maximum β2 when the reported adopters reach m (i.e., the half-saturation level). In real life, it is true that almost everybody takes notice of adopters as soon as adopted individuals are reported by media coverage, which will raise the transmission rate more or less; generally speaking, the more the media reports, the more individuals become adopters. The schematic flow diagram of our proposed system is shown in Figure 1. Hence our proposed three-compartment model is governed by the system of equations (2.1)-(2.3), in which all the parameters are positive. In the next section, we will examine the steady state behavior of the system.
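A plausible form of the system (2.1)-(2.3), consistent with the compartment description above and with the equilibria and eigenvalues reported below, is the following; the exact placement of the discontinuance term δ is our assumption rather than a verbatim reproduction of the authors' system:

dN/dt = r - (β1 + β2 A/(m+A)) N A - d1 N + δ A,     (2.1)
dA/dt = (β1 + β2 A/(m+A)) N A - (d1 + δ + ν) A,     (2.2)
dR/dt = ν A - d1 R,     (2.3)

which yields the adopter-free equilibrium E0 = (r/d1, 0, 0) and, under this form, a basic influence number R0 = β1 r / (d1 (d1 + δ + ν)).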
Steady State, Basic Influence Number and Stability

The system (2.1)-(2.3) has one adopter-free equilibrium, E0 = (r/d1, 0, 0), and interior equilibrium(s) E* = (N*, A*, R*), where A* is a positive solution of equation (3.1). The local stability of E0 can be obtained through a straightforward calculation of the eigenvalues. It follows that, for the proposed compartmental model, the local stability of the adopter-free equilibrium is governed by the basic influence number of the model. The basic influence number R0 is defined as the expected number of secondary adoptions caused by an adopter individual upon entering a totally non-adopter population, similar to disease-spreading models [22]. Using the notation in [22,23], we have two matrices F and V representing the new adoption terms and the remaining transfer terms, respectively. The adopter compartment is A; hence a straightforward calculation of the Jacobian matrices gives F and V, where F is non-negative and V is a non-singular M-matrix, and therefore F V^(-1) is non-negative. Hence the influence number is given by the spectral radius ρ(F V^(-1)). Here the basic influence number R0 is defined as, on average, the number of non-adopter individuals who become adopters under the influence of a single adopter over the course of its adoption period. Again, for the interior equilibrium E* = (N*, A*, R*) to exist, the solution of (3.1) must be real and positive; since X > 0, the conditions for existence can be summarized in cases (i)-(iii), with no interior equilibrium otherwise. Now we will show that all the solutions of the system (2.1)-(2.3) are bounded in a region B ⊂ R^3_+. We consider the function ω = N + A + R; substituting the values from (2.1)-(2.3), we obtain a differential inequality which implies that, as τ → ∞, ω → r/d1. Hence, considering the corresponding set, we can state the following lemma.

Lemma 3.2. The system (2.1)-(2.3) is bounded in the region B ⊂ R^3_+.

Now we will state and prove the local stability of all steady states.

Theorem 3.3. For the system (2.1)-(2.3): (i) if R0 < 1, there is a unique adopter-free equilibrium E0 = (r/d1, 0, 0), which is always locally asymptotically stable; (ii) if R0 > 1, there is a unique interior equilibrium E* = (N*, A*, R*), and it is locally asymptotically stable under the stated conditions.

Proof: The Jacobian matrix J = [j_lk] of the system (2.1)-(2.3) evaluated at the equilibrium E0 = (r/d1, 0, 0) yields the characteristic equation about E0. All the eigenvalues of this characteristic equation are real and negative when R0 < 1. It is observed from these eigenvalues that the equilibrium point E0 is always locally asymptotically stable for R0 < 1. Thus, the adopter-free equilibrium is locally asymptotically stable. The Jacobian matrix evaluated at the interior equilibrium E* = (N*, A*, R*) yields the characteristic equation (3.11) about E*. From the Routh-Hurwitz criteria, all the eigenvalues of (3.11) have negative real parts under the stated conditions. Hence, for R0 > 1, the interior steady state E* is locally asymptotically stable if q1 < p1. ✷

As in [27], we will establish that R0 = 1 is a bifurcation point; in fact, across R0 = 1 the adopter-free equilibrium changes its stability properties. In the following, we consider system (2.1)-(2.3) and investigate the nature of the bifurcation involving the adopter-free equilibrium E0 at R0 = 1. More precisely, we look for conditions on the parameter values that cause a forward or a backward bifurcation to occur. In order to do that, we will make use of the result summarized below, which has been obtained in [20] and is based on the use of general center manifold theory [21].
Consider the following general system of ordinary differential equations with a parameter φ: dx/dt = f(x, φ). Without loss of generality, it is assumed that x = 0 is an equilibrium for system (3.12) for all values of the parameter φ (that is, f(0, φ) = 0 for all φ).

Theorem 3.4. [20] Assume (A1): A = D_x f(0, 0) is the linearization matrix of the system (3.12) around the equilibrium x = 0 with φ evaluated at 0; zero is a simple eigenvalue of A and all other eigenvalues of A have negative real parts. (A2): The matrix A has a nonnegative right eigenvector w and a left eigenvector v (each corresponding to the zero eigenvalue). Let f_k be the k-th component of f, and

a = Σ_{k,i,j} v_k w_i w_j ∂²f_k/∂x_i∂x_j (0, 0),   b = Σ_{k,i} v_k w_i ∂²f_k/∂x_i∂φ (0, 0).

Then the local dynamics of the system (3.12) around x = 0 are totally determined by a and b. Moreover, the requirement of nonnegative components of w is not necessary. It clearly appears that at φ = 0 a transcritical bifurcation takes place: more precisely, when a < 0 and b > 0, such a bifurcation is forward; when a > 0 and b > 0, the bifurcation at φ = 0 is backward. Now let φ = β1 be the bifurcation parameter, such that R0 < 1 for φ < 0 and R0 > 1 for φ > 0, and such that x0 is an adopter-free equilibrium for all values of φ. Consider the system dx/dt = f(x, φ), where f is continuously differentiable at least twice in both x and φ. The adopter-free equilibrium is (x0; φ), and the local stability of the adopter-free equilibrium changes at the point (x0; φ) [23]. Now we want to show that there are nontrivial equilibria near the bifurcation point (x0; φ). We will apply the result discussed above and explore the possibility of backward bifurcation in the system at R0 = 1. We consider the adopter-free equilibrium E0 = (r/d1, 0, 0) and observe that the condition R0 = 1 is equivalent to β1 = β1*. The eigenvalues of the matrix J(E0, β1*) are given by λ1 = -d1, λ2 = -(d1 + δ), λ3 = 0. Thus λ3 = 0 is a simple zero eigenvalue of the matrix J(E0, β1*) and the other eigenvalues are real and negative. Therefore, we can use the center manifold theory when β1 = β1* (or, equivalently, when R0 = 1) at the adopter-free equilibrium. Now we denote by W = (w1, w2, w3)^T a right eigenvector associated with the zero eigenvalue λ3 = 0, with w2 = (d1 + δ). Furthermore, the left eigenvector V is obtained from solving V·J = 0 and V·W = 1. Evaluating the partial derivatives at the adopter-free equilibrium, with ∂²f2/∂x2∂φ = r/d1 and all the other relevant second-order partial derivatives equal to zero, we can compute the coefficients a and b. It is observed that the coefficient b is always positive, so that, according to Theorem 3.4, it is the sign of the coefficient a which decides the local dynamics around the adopter-free equilibrium. Hence, we have the following theorem, which is similar to the result established in [27]: the system (2.1)-(2.3) exhibits a backward bifurcation at R0 = 1 when a > 0, and a forward bifurcation at R0 = 1 when a < 0.
Numerical Simulations In this section, we perform numerical simulations of system (2.1)-(2.3) to verify the results obtained in the previous sections. We choose the following set of parametric values for the numerical experimentation. For the two threshold values β1* and β1**, we observe that the adopter-free equilibrium E0 is the only equilibrium of system (2.1)-(2.3) for β1 < β1*, and an interior equilibrium occurs for β1* < β1 < β1**. Again, the interior equilibrium is locally asymptotically stable for β1 > β1**, while the adopter-free equilibrium E0 becomes unstable, i.e. a unique interior equilibrium exists and is LAS. For the parameter values r = 5, β1 = 0.002, δ = 0.01, ν = 0.05, d1 = 0.02, m = 5, the media effect on the adopter and non-adopter populations is shown in Fig. 3(a)-(b): when the media rate β2 is high, the adopter population reaches its maximum within a short period of time, whereas when the media rate is low the adopter population reaches its maximum slowly and, after a certain period of time, finally settles down to its equilibrium level; on the other hand, as expected, the opposite situation arises for the non-adopter population. Conclusion In the present paper, we have analyzed an innovation diffusion model consisting of three non-intersecting classes of population, namely non-adopters, adopters and the frustrated class. Here we presented a new measure, the basic influence number of an individual adopter, which gives, on average, the number of non-adopters who become adopters under the influence of an adopter over the course of its adoption period. We have studied the stability of the adopter-free equilibrium as well as the interior equilibrium, and it is shown that the adopter-free equilibrium is locally asymptotically stable if the basic influence number R0 < 1. Again, when R0 > 1 the interior equilibrium exists and is locally asymptotically stable under some parameter conditions. The existence of a backward bifurcation of the adopter population has been studied with respect to the valid contact rate before media alert (β1), and a numerical threshold is determined for a particular set of parametric values. Figure 2: Population distributions with the existence of (a) adopter-free and (b) interior equilibrium. Figure 3: Population distributions with different media rates: (a) adopter population and (b) non-adopter population.
3,610.6
2014-05-08T00:00:00.000
[ "Mathematics", "Engineering", "Business" ]
Sustainable development of the industry in the context of digitalization and ecolinguistics monitoring. The development of organizational communication monitoring contributes to the sustainability of production development and ensures production efficiency. The article sets out the authors' experience from research carried out in this area. The authors propose to develop a system for optimizing the effort and resources aimed at improving the methodological apparatus of specialists' communication activity in production, taking existing experience into account. This includes the development of a comprehensive approach to acting under uncertainty, in quasi-stable situations of risk and communicative tension; the development of the creative potential of communication participants, their ability to integrate into various management situations and to respond quickly to changing conditions in the workplace; and the systematic development of a pragmatic approach to the communicative component of company management, together with environmentally appropriate measures to maintain continuous managerial communication in the digital space and the comfort and stress resistance of the employee. Introduction One of the priority tasks of the digital economy is the development of remote management of industry and other fields. The organization of a harmonious, anthropocentric and nature-friendly digital working space, of communication and of the ecology of language in this setting is so far poorly studied in both the management sciences and linguistics [1][2][3]. The task of effective communication in remote form currently remains relevant. Its solution requires theoretical developments, experimental research, methodical recommendations and regular practical professional-development training for managerial staff in the methods and techniques of persuasive speech and in health-preserving ways of organizing staff work [4]. Operational management of construction production implies not only the possibility of constant employee participation in the production process, but also an increase in entropy, greater uncertainty in the management of documented procedures, staff fatigue and information pollution of the digital workplace [5][6][7]. The purpose of the study is to consider the possibility of developing a system of ecolinguistics monitoring that takes into account new negative factors in the organization, planning and management of construction production. The scientific novelty of the study lies in the fact that, for the first time, the idea of considering and predicting the factors that negatively affect the ecology and productivity of managerial staff in a construction organization is presented from the standpoint of the organization, planning and management of construction as well as the theory of the ecolinguistics of communication. Materials and methods of research According to data cited by Bell, the information content of management communication decreases by 40% at each link of transmission, and no more than 13% of the original information reaches the final performer [3]. 
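The two figures quoted from Bell are mutually consistent if one assumes a chain of roughly four transmission links, since retaining 60% of the information at each link leaves 0.6^4 ≈ 0.13 of the original; the snippet below only makes that arithmetic explicit, and the four-link chain is our illustrative assumption rather than a figure taken from the source.

```python
retention_per_link = 0.60   # 40% loss at each transmission link
links = 4                   # assumed chain length (illustrative)
print(f"fraction reaching final performer: {retention_per_link ** links:.2%}")
# -> about 13% of the original information
```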
The successful functioning of an organization is determined, first of all, by how effectively the management system is organized.The most important part of management is the work with information, most of which is recorded in documents [2,[8][9][10].The dynamically changing socio-economic situation does not allow a more effective management process through the routine technology of working with paper documents, now it is the task of digitalization of the main industries and stuffing them educational institutions.The modern level of information technology development provides opportunities for radical reorganization of management processes through the transition from the traditional paper to electronic document management and the formation of a digital workplace of the contractor [10,11].Electronic document management is a fundamentally new technology, a qualitatively new phenomenon that imposes new requirements on organizational communication, including the ecological nature of communication and the employee's comfort in the digital space.It should be noted that such problems as the development of the necessary communication functionality of systems, organization of staff training, comfort and the ecology of labour directly depend on its productivity and efficiency [12][13][14]. The relevance of the implementation of communicative and ecological monitoring systems is determined by the fact that it will allow us to define a qualitatively new approach to production management and create an effective science-based tool for the management of organizational communication in the context of digitalization, which in turn will ensure increased labour productivity and employee's comfort, and will allow a more sensible distribution of company resources and maintain the health of the staff employed in digital workplaces. Modern operational management of the company needs a specific and workable functionality to monitor the effectiveness of organizational communication and ecological consistency of the staff work organization [15].For this purpose, it is offered to apply a specific plan of scientific and scientific-practical researches. The action plan includes the following stages: 1.The stage of database formation for analysis and experiments.I. Analysis of the system of interaction between employees of the enterprise and drawing up a scheme of the current organizational and managerial communication indicating the time parameters for the duration of communicative acts, identifying typical acts of communication II.Interview with employees and drawing up matrices of identified communication problems VI.Conducting an experiment on the implementation of organizational communication monitoring software and assessment its effectiveness at the enterprise VII.Development of recommendations for the implementation of the organizational communication monitoring software and assessment of its effectiveness at the enterprise Each experiment developed in the project is aimed at obtaining a specific result, allowing the development of a methodology and the creation of a comprehensive software product that would simplify production management at the stage of communicative interaction. The expected results are: 1. Development of the methodology for determining the zones of communication barrier in the organizational schemes of production. The experience of this kind of work exists in domestic science, but it is focused on the traditional, pre-digital management system. 2. 
Creation of a corpus of units and communicative models, allowing software products to find and determine the zones of barriers in organizational communication.The task set by the authors implies the creation of such a corpus for modern digital organizational communication. 3. Development of a temporal model of organizational communication, taking into account the possibility of identifying zones of barriers and predicting the development of the communicative situation, its impact on the efficiency of production, taking into account time and resource factors.The use of Gantt charts and other types of network planning in production is a common practice, but so far it has not considered ecoinguistic factors and features of digitalization of main production cycles.The authors propose the creation of visualizable and measurable models of the dependencies of communication efficiency and resource intensity of organizational relations. 4. Development and implementation of a software product, which will allow to introduce a monitoring system of ecological and ecolinguistics interaction of managerial staff and working staff of various productions in conditions of digitalization into organizational relations, which will allow to quickly eliminate arising difficulties.The development of systems involves the creation of a self-learning automated complex capable of recognizing emerging communication difficulties, predicting losses and issuing recommendations of a regulatory nature. An example of the genesis of organizational communication is the experience of the American firm Austin Company, which in the 1940s contributed the construction of onestory industrial buildings across the United States.For this purpose, they developed a system of control cells, which made it possible to exchange correspondence and make organizational decisions by telegraph during the day.The special code implemented in the company made it possible to compensate for imperfect technical means and to communicate remotely and as efficiently as possible, taking into account the technological capabilities of that era.Then the construction of multi-story buildings using elevators for bringing raw materials to the upper floors and chutes for bringing materials and finished products down to the lower floors became more common.However, the appearance of more powerful and heavier process equipment in industries such as steel and food has forced many firms to refuse from existing industrial buildings with limited floor loading capacity and rooms cramped inside by columns.Today, most industrial plants are located in one-story buildings with a controlled climate, applying air conditioning with additional heat and vapor insulation of walls and coatings to limit heat loss and its entry from the outside and thus reduce energy consumption.Other innovations were excellent lighting (facilitated by white concrete floors) and sound insulation.The first such facility was the Symonds Saw and Steel Company building (Fitchburg, Massachusetts).All these innovations are the result of the development of a special concept of accounting and processing complaints, the study of production experience and the wishes of a consumer, the effect that the creative component of the communicative stage in project management achieves.As the study shows, the main volume of delays is due to discoherence of communication.The concept of discoherence of communication defines the inconsistent course in space and time of communicative processes.Communication by its nature 
resembles a wave response consisting of interacting phases and reactions of communication.Inconsistency of these phases at any of the stages leads to the disruptive effect of communication and prevents the achievement of the goal of its participants.The presence of discoherence in the communication of the construction company is determined by a sharp increase in the information flow on the same operation, the emergence of duplicate flows, the presence of returns and clarifications of information, which serves as an indicator of the lack of proper conditions for communication compared with the period of goal achievement by the average time/resources for the same communication operation. For example, the order to carry out the next stage of construction and installation works at the site due to artificially created barriers on the part of additional visiting services of the enterprise (legal department, design supervision department) was four days late and was implemented in an unfavorable weather period (precipitation, temperature drop), despite the objections from the contractor.As a result, the work turned out to be of low-quality, which led to specific losses of the enterprise.When analyzing the situation, when the barrier zones in the communication component were identified, the persons responsible for the delay and issuance of instructions that were no longer appropriate under the current conditions, denied their guilt, referring, among other things, to the lack of personal motivation in skipping the information message further along the communication line: "I do not understand what these two days decide"; "If the instruction is issued, agreed by lawyers, it must be executed, no matter what the weather is like.The lawyers approved it" (quotes from the employees). The experiment on developing a temporal model showed the peak nature of the greatest delays in production communication (Figure 2).As can be seen from the chart, the conflict of stereotypes plays the least role in the ratio of time delays, and the discoherence plays the greatest.Delays of three days of total time mean significant losses for the company or lost profits with a better organization of enterprise management. Discussion Surveys of employees and analysis of their data showed the following problems in the organization of digital workplaces of remote access and their ecological compatibility in terms of comfort, communication, psychological state of employees.Questionnaire survey and open conversation with employees showed that high stress, fatigue, decreased quality of work, motivation for work and labour discipline directly depend on the organization of the digital workspace, user-friendly interface and wellthought-out communication system in the company.Most delays associated with positional shortage, lack of time for processing and making decisions and insufficient level of communicative competence are associated with a badly-designed system of relations, incompetence of employees with their duties, lack of understanding of the essence of incoming demands, lack of respect for other participants in the communicative process.Changing the communication system and implementation of optimized communication schemes as well as a number of measures to optimize communication, including through elimination of duplicate units, significantly increases the effectiveness of the enterprise. 
Conclusions Globally, there has been a steady increase in interest in the issues of industry sustainability in the new digital economy, reducing costs and improving production efficiency. The issue of introducing new technologies and equipment in order to improve the economy and environmental safety of production requires well-designed monitoring. The authors intend to develop a system that is based on the fundamental principles of creative development of scientific and organizational staff in production, taking into account existing experience: I. Development of the creative potential of communication participants, their ability to integrate into various managerial situations and respond quickly to the changing situation in the workplace. For this purpose, the concept of teaching risk theory, risk management, anticrisis management should be included in the course of special training. II. Systematic development of a pragmatic approach to the communicative component of company management, development of environmentally appropriate measures to maintain continuous managerial communication in the digital space, comfort and stress resistance of the employee. This methodology involves the development of a culture of digital organizational thinking and affects the system of perception of communicative space, considering the negative factors, and the ability to level them. III. Optimization of efforts and resources aimed at improving the methodological apparatus of communicative activity of specialists. Development of a comprehensive approach to acting in conditions of uncertainty, quasi-stable situations of risk and communicative tension. 
III. Photography of the employee's workday communication, with identification of barriers to effective communication and determination of the psycholinguistic comfort of the employee's work. IV. Observation of communication processes and recording of emerging problems, with fixation of the time and place of the communicative act. 2. The stage of analysis and conducting experiments. Assessment of oral and written communication at the enterprise: I. Data systematization and identification of patterns of communicative barriers and of the typical location of barrier zones in the scheme of current organizational and managerial communication. II. Creation of a data corpus on communicative problems of organizational interaction at the enterprise. III. Determination of enterprise losses due to ineffective communication. IV. Conducting experiments to create an automated system for accounting for delays in communication procedures and calculating the probable losses of the enterprise. V. Conducting an experiment to determine the degree of reduction of psychological comfort as a result of ineffective communication. VI. Development of a temporal model of communication barrier zones. VII. Development of recommendations for their elimination. VIII. Development of a predictive model for the effectiveness of communication and the reduction of losses due to the elimination of barrier zones. 3. The stage of analysis and conducting experiments. Assessment of digital communication at the enterprise: I. Assessment of the existing digital management document workflow according to the communication-effectiveness criteria developed in the second stage of the project. II. Fixing the zones of communication barriers. III. Conducting an experiment to build a temporal model of communication barrier zones, considering the probability of stochastic instability of the digital communication system and a possible crisis of communication due to imperfections in the communication component of the document workflow. IV. Conducting an experiment to create a model of effective document workflow, taking into account organizational communication. V. Development of the monitoring system for effective communication and psycholinguistic comfort at the enterprise. Fig. 1. Chart of communication problems and delays in a construction company, data in % (data obtained by the authors). Fig. 2. Chart of the temporal model of communication problems and delays in a construction company, data in hours per month (data obtained by the authors). Fig. 3. Chart of the distribution of potential for conflict in the digital workplace, data in % (data obtained by the authors).
3,822
2022-01-01T00:00:00.000
[ "Environmental Science", "Business", "Engineering", "Computer Science" ]
The SAFEX-JIBAR Market Models It is possible to construct an arbitrage-free interest rate model in which the LIBOR rates follow a log-normal process leading to Black-type pricing formulae for caps and floors. The key to their approach is to start directly with modeling observed market rates, LIBOR rates in this case, instead of instantaneous spot rates or forward rates. This model is known as the LIBOR Market Model. We formulate the SAFEX-JIBAR market model based on the fact that the forward JIBAR rates follow a log-normal process. Formulae of the Black-type are deduced. Introduction Instantaneous rate models, although theoretically satisfying, are less so in practice.Instantaneous rates are not observable and calibration to market data is complicated.Hence, the need for a market model where one models LIBOR rates seems imperative.In this modeling process, we aim at regaining the Black-76 formula [1] for pricing caps and floors since these are the ones used in the market.To regain the Black-76 formula we have to model the LIBOR rates as log-normal processes.The whole construction method means calibration by using market data for caps, floors and swaptions is straight-forward.Brace, Gatarek and Musiela [2] and, Miltersen, Sandmann and Sondermann [3] showed that it is possible to construct an arbitrage-free interest rate model in which the LIBOR rates follow a log-normal process leading to Black-type pricing formulae for caps and floors.The key to their approach is to start directly with modeling observed market rates, LIBOR rates in this case, instead of instantaneous spot rates or forward rates.Thereafter, the market models, which are consistent and arbitrage-free [2,4,5], can be used to price more exotic instruments.The resulting model is known as the LIBOR Market Model.Some of the advantages of market models as compared to other traditional models are that market models imply pricing formulae for caplets, floorlets or swaptions that correspond to market practice.Consequently, calibration of such models is relatively simple. The plan of this work is as follows.Based on an improved version of the standard risk-neutral valuation approach, the forward risk-adjusted valuation approach, and on an elaborate process of computing forward riskadjusted measures, a proposition is made to apply the technique to the pricing of South African caps and floors. 
The JIBAR Each day at 10:30 am, each of the 14 South African and South African-based foreign banks is asked to provide the midpoint between bid and offer of their 1, 3, 6, 9 and 12 month deposit Negotiable Certificate of Deposit (NCD) rates, quoted as a yield. In each category, e.g. in the 1-month category, the 14 rates are arranged in order. The top two and the bottom two are eliminated and the remaining 10 are averaged and rounded to 3 decimal places. The resulting rate is termed the k-month JIBAR rate, where k = 1, 3, 6, 9 or 12. It is the rate at which banks buy and sell short-term money among themselves and is traditionally a wholesale and not a retail rate. It is reset every quarter and is fixed for the duration of the quarter. Many companies in South Africa are hesitant to enter into interest rate derivative agreements which involve an element of optionality. The main deterrent factor is that many of them do not necessarily have access to sophisticated pricing models to accurately price these derivatives. However, for many corporate treasurers, caps and floors have been the preferred method of achieving disaster insurance against incidents like the 1998 emerging markets crisis. This stems from the fact that caps and floors are highly adaptable to the particular needs and requirements of companies wishing to manage and hedge against interest rate reset risk on interest-sensitive assets and liabilities. On the exercise date of the cap or floor agreement, the prespecified strike rate is compared to the standard reference floating rate, that is, the 3-month SAFEX-JIBAR rate. The interest differential is then applied to the contractually specified notional principal amount (the amount to be borrowed/lent) in order to calculate the amount to be paid by the writer/seller to the holder/buyer (the settlement). The notional principal amount is normally at least R1 million. Settlement of a single-period cap/caplet is done in the following manner. The seller of a cap agrees to pay the buyer the difference between the fixed strike rate and the reference floating rate (JIBAR), based on the notional principal amount, when the JIBAR reset exceeds the fixed strike rate. Settlement occurs on each reset date according to the formula: settlement = L × (J_S − K_c) × d/365, where the settlement amount is in Rands, J_S is the JIBAR rate for that period/quarter, K_c is the cap strike rate, L is the notional principal amount, and d is the exposure period in days (usually 91 or 92). In the majority of cases, settlement takes place in arrears, in which case the settlement amount is then present-valued to the exercise date. In a similar fashion, the settlement amount of a single-period floor/floorlet is given by the formula: settlement = L × (K_f − J_S) × d/365, where the settlement amount is in Rands, J_S is the JIBAR rate for that period, K_f is the floor strike rate, L is the notional principal amount, and d is the exposure period in days. In this case, the seller of a floor agrees to pay the buyer the difference between the fixed strike rate and the SAFEX-JIBAR, based on the notional principal amount, when the SAFEX-JIBAR rate resets below the fixed strike rate. Settlement also takes place on each reset date. To get a better feeling for this, take a company that expects a surplus cash receipt of R1 million in a month's time which it will wish to invest. The company fears rates will be lower in future and therefore decides to buy a T1m-T4m at-the-money floorlet with a maturity of 3 months, to hedge against the risk of losing money. 
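The settlement conventions just described can be expressed compactly. The sketch below assumes the actual/365 day count used above and purely illustrative contract numbers, so it should be read as a restatement of the settlement formulas rather than an official SAFEX calculation.

```python
def caplet_settlement(notional, jibar, strike, days, basis=365.0):
    """Settlement paid by the cap seller when the JIBAR reset exceeds the strike."""
    return notional * max(jibar - strike, 0.0) * days / basis

def floorlet_settlement(notional, jibar, strike, days, basis=365.0):
    """Settlement paid by the floor seller when the JIBAR reset is below the strike."""
    return notional * max(strike - jibar, 0.0) * days / basis

# Illustrative numbers (not from the paper): R1m notional, 91-day period,
# caplet struck at 7.00% with JIBAR resetting at 7.50%.
print(caplet_settlement(1_000_000, 0.075, 0.070, 91))    # ~1246.58 Rand
print(floorlet_settlement(1_000_000, 0.065, 0.070, 91))  # ~1246.58 Rand
```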
Pricing Caps, Floors and Collars Each caplet/floorlet is priced from the implied 3-month forward rate for that period, taken from the yield curve. Hence, the at-the-money strike of a caplet/floorlet is just the forward rate for that period. A strike price lower than that implied by the forward rate will result in an in-the-money caplet with both intrinsic and time value, whereas a strike price above the forward rate will result in an out-of-the-money caplet. As with most option-styled derivative instruments, the more time to expiry, the greater the time value inherent in the option. This means that a T3m-T6m period caplet has a time value of 3 months while a T21m-T24m period caplet has a time value of 21 months. Volatility (annualized) is another factor that affects the value of a cap/floor. There is a positive correlation between volatility and the price of both caps and floors. The more volatile the price or rate of an asset, the more likely it is to reach the option strike price, and so the more valuable the option. In brief, higher volatility implies higher option value. Standard option pricing theory postulates that the spot price or rate of the underlying follows a lognormal random walk. The fact that there are so many factors impacting on the price of a cap/floor makes it practically impossible for market-makers to hedge caps and floors. Cap and floor values also change as the shape of the yield curve changes, something which is not a factor in equity derivatives. Basically, the pricing of caps and floors in the South African market follows an extension of the Black-Scholes option valuation formula and is done in the following manner. Suppose we have an interest rate cap with strike rate K and resets at times t1, t2, ..., tN, with a final payment to be made at the end of the last period, where L is the nominal amount. Similarly for a floor, the price of the k-th floorlet with strike K_f follows in the same way. In both cases, J(t_i) is log-normal under its measure; assuming J(t) is a geometric Brownian motion, we then obtain Black-type expressions. The SAFEX-JIBAR Market Models Consider a fixed set of increasing maturities T0, T1, ..., TN such that T0 < T1 < ... < TN. Thus, for a portfolio of caplets we would have a settlement at each reset date. Since, by definition, J_i is an average, for every i = 1, 2, ..., N the SAFEX-JIBAR process J_i is a martingale under the corresponding forward measure on the interval [T_{i-1}, T_i], and the value of caplet i follows. Making the appropriate substitutions, we have, as required, a Black-type formula. Proposition 5.3. In the SAFEX-JIBAR market, the price of a floorlet whose settlement amount is max(K_f − J_S, 0) applied to the notional over the exposure period is given by a Black-type formula, where K_c and K_f are the cap and floor strike rates respectively and σ_i is the volatility of the interest rate for the period [T_{i-1}, T_i]. 
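Since the SAFEX-JIBAR market model leads to Black-type formulae, a caplet or floorlet can be valued with the standard Black-76 expressions. The sketch below uses generic names (P_disc for the discount factor to the payment date, F for the forward JIBAR rate, tau for the accrual fraction) and illustrative inputs that are not taken from the paper.

```python
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cumulative distribution function

def black76_caplet(L, P_disc, F, K, sigma, T_reset, tau):
    """Black-76 caplet value: L * tau * P_disc * [F N(d1) - K N(d2)]."""
    d1 = (log(F / K) + 0.5 * sigma**2 * T_reset) / (sigma * sqrt(T_reset))
    d2 = d1 - sigma * sqrt(T_reset)
    return L * tau * P_disc * (F * N(d1) - K * N(d2))

def black76_floorlet(L, P_disc, F, K, sigma, T_reset, tau):
    """Black-76 floorlet value: L * tau * P_disc * [K N(-d2) - F N(-d1)]."""
    d1 = (log(F / K) + 0.5 * sigma**2 * T_reset) / (sigma * sqrt(T_reset))
    d2 = d1 - sigma * sqrt(T_reset)
    return L * tau * P_disc * (K * N(-d2) - F * N(-d1))

# Illustrative inputs (not from the paper): R1m notional, 91/365 accrual,
# forward JIBAR 7.2%, strike 7.0%, 15% vol, reset in 0.25y, discount factor 0.98.
print(black76_caplet(1_000_000, 0.98, 0.072, 0.070, 0.15, 0.25, 91 / 365))
print(black76_floorlet(1_000_000, 0.98, 0.072, 0.070, 0.15, 0.25, 91 / 365))
```

A full cap or floor is then simply the sum of such caplet or floorlet values over the reset dates, consistent with the portfolio view described in the text.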
Equations (6) and (9) show that the numeraire for the pricing of caps and floors in the JIBAR market is q_i(t_i). The Greeks In this section, we intend to derive formulae for some hedging measures for our model. Most traders employ sophisticated hedging schemes which involve the calculation of such measures as delta, gamma and vega. The delta of an option measures the rate at which the option price changes with respect to the underlying forward rate. Gamma is the rate of change of the option's delta with respect to the forward rate. Vega is the rate of change of the option price with respect to the volatility of the underlying. If vega is high in absolute terms, then the option value is sensitive to small changes in volatility. In contrast, if vega is small in absolute terms, volatility changes have relatively little impact on the value of the option. We will recall a fact that helps us deduce these measures in the following manner. For a caplet, the corresponding expressions follow; similarly, it can be shown that analogous expressions hold for floorlets. Note that the delta, gamma and vega of a cap/floor are simply the arithmetic sums of the respective delta, gamma and vega of the caplets involved. The cap/floor price is the sum of the prices of the caplets/floorlets. The present work has made some notable contributions to the interest rate modeling arena. A clear understanding of the LIBOR theory enabled an easy extension of the same ideas to the construction of the SAFEX-JIBAR market model, which gives prices consistent both with economic practicality and with other Black-type models. References: [1] F. Black, "The Pricing of Commodity Contracts," Journal of Financial Economics, Vol. 3, No. 1-2, 1976, pp. 167-179. doi:10.1016/0304-405X(76)90024-6
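The caplet hedging measures discussed in the Greeks section above can likewise be sketched by differentiating the Black-76 value with respect to the forward rate and the volatility. The expressions below are the standard Black-76 sensitivities, reusing the illustrative notation and inputs of the previous sketch; they are not quoted from the paper's equations.

```python
from math import log, sqrt, pi, exp
from statistics import NormalDist

N = NormalDist().cdf

def n(x):
    """Standard normal probability density."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def caplet_greeks(L, P_disc, F, K, sigma, T_reset, tau):
    """Delta and gamma with respect to the forward rate, and vega with respect
    to sigma, obtained by differentiating the Black-76 caplet value."""
    d1 = (log(F / K) + 0.5 * sigma**2 * T_reset) / (sigma * sqrt(T_reset))
    delta = L * tau * P_disc * N(d1)
    gamma = L * tau * P_disc * n(d1) / (F * sigma * sqrt(T_reset))
    vega = L * tau * P_disc * F * n(d1) * sqrt(T_reset)
    return delta, gamma, vega

print(caplet_greeks(1_000_000, 0.98, 0.072, 0.070, 0.15, 0.25, 91 / 365))
```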
2,399.6
2012-11-19T00:00:00.000
[ "Business", "Economics", "Mathematics" ]
Does substrate colour affect the visual appearance of gilded medieval sculptures? Part I: colorimetry and interferometric microscopy of gilded models In the history of medieval gilding, a common view has been circulated for centuries that the substrate colour can influence the visual appearance of a gilded surface. In order to fully understand the correlation between the gilding substrate and the colour appearance of the gold leaf laid above, in this paper (Part I) analytical techniques such as colorimetry and interferometric microscopy are implemented on models made from modern gold leaves. This study demonstrates that the substrate colour is not perceptible for gold leaf of at least 100 nm thickness, however the surface burnishing can greatly alter the visual appearance of a gold surface, and the quality of the burnishing is dependent on the substrate materials. Additionally, surface roughness and texture of the substrate can play supplementary roles, which can be visually observed through digital microscopy and quantified through interferometric microscopy. The findings in this paper will form the basis for the study of gold leaf samples taken from medieval European gilded sculptures in Part II. Introduction As the climax epoch for altars and altar sculptures, the late Middle Ages exhibited exquisite art technologies and complex materials in the sculpting, carving and polychromy of these artefacts. Gilding, as the most important component of medieval polychromy, has been extensively used in sculptures and altarpieces, as well as other artworks such as panel paintings, wall paintings, illumination books and textiles, in order to show the wealth of a person, state or country and to exhibit the magnificence and splendour of the "House of God" and the divine nature of saints [1]. Although many medieval sculptures were destroyed in the following Reformation, some have survived and a few have even been well preserved. Based on these artefacts, art historians and researchers have brought forward interesting theories and arguments regarding the technologies of medieval gilding. One widely accepted point of view states that the colour of the gilding substrate plays a crucial role in the visual appearance or perception of medieval gilded artefacts due to the transparency of the gold leaf of that time [2]. This view sounds plausible and has sometimes been used to explain why certain substrate colours such as red, yellow and white have been frequently observed in medieval gilding. For example, some researchers argue that preparatory layers of fine yellow ochre can lend a warmer tone to mordant gilding [3]; a chromatic function of coloured bole is to give a warm red or ochre tonality to the gold surface [4]; a white underlayer is supposed to enhance a cool, pale tonality of metal leaf such as silver [5]. In addition to modern literature, this point of view is also supported by historical documents such as medieval artists' treatises. For example, in The Craftsman's Handbook Cennini suggests using the red Armenian bole for water gilding [6]. Around this view a few researchers have recently demonstrated their understandings through building and analysing models. For example, Dumazet et al. use digital 3D models to simulate artefacts, in order to study correlations between gold leaf imperfections (e.g. holes), light reflection from substrate and optical transmission of gold leaf [7]. Mounier and Daniel have presented colorimetric observations of gold leaf on white, red and black substrates [8]. 
However, in these studies an important technological parameter of historical gold leaf, namely the actual leaf thickness, has not been linked or seriously investigated, leaving their arguments less supported. Furthermore, these studies mainly present the oil gilding (called Mixtion technique in these articles) on stone artefacts or wall paintings. Oil gilding is a common gilding technique, but the gold leaf laid above is unburnishable [9] and hence does not exhibit the typical metal gloss of a burnished gilded surface commonly observed on altars and altar sculptures that are mainly made from wood. In order to fully understand the roles that the gilding substrate plays in the visual appearance of the gold surface of medieval sculptures, we have taken an integrated approach that combines investigations of models made from modern gold leaf (Part I) and samples of medieval gold leaf taken from artefacts (Part II). In the current paper, we focus on understanding the correlation between the appearance of the gold leaf and its substrate through colorimetry measurements on the models, which are made with traditional gilding techniques including water-, oil-and ground gilding on substrates with different materials (where appropriate for the gilding technique) and colours. The roughness of the gold surface is observed through digital microscopy and further quantified through interferometric microscopy. As supportive data, scanning electron microscopy coupled with energy dispersive X-ray analysis (SEM-EDX) is also performed, in order to obtain the thickness and gold content of the modern gold leaves used for the models. This technique will however be discussed in detail in Part II, together with information regarding the technological features of medieval gold leaf samples. Gilding techniques and stratigraphy Simply speaking, gilding is the process to attach metal leaves or foils onto a surface of other materials; water gilding (also called bole gilding) and oil gilding (or mordant gilding 1 ) are the most common types of traditional gilding techniques. In water gilding, metal leaf is laid atop a "bole", which mainly contains clay and is usually bound with proteinaceous media; a high-gloss surface can be realized through a thorough surface burnishing [9,10]. In oil gilding, siccative oil-based binding media are employed, and the metal leaf laid above is unburnishable and the surface hence appears relatively matte [10]. Due to the high metal gloss caused by the surface burnishing, water gilding is also called "glossy gilding" [9]. Before the water and oil gilding techniques became popular in European works of art, a variant of water gilding called ground gilding was the main gilding type prior to the mid thirteenth century [2,5]. In this technique, metal leaf is applied onto a polished ground by means of thin adhesives such as diluted animal glue or egg glair [5,11], and can be slightly burnished [9]. Figure 1 presents schematic illustrations for the basic stratigraphy of these three types of gilding techniques, which contain (from bottom to top) grounding, bole (or mordant; or thin adhesive) and metal leaf. The main ingredients of the grounding are either chalk or gypsum, dependant on the geographical regions (generally chalk grounding for Northern Alps and gypsum for the South) [9]. Both chalk and gypsum groundings typically appear white and are usually bound with proteinaceous media. Bole and mordant are two common types of gilding substrates and usually appear colourful. 
The most common bole colour is red or red brown, which can be realized by adding red pigments (e.g. iron-based red ochre) or based on the bole's own colour, for example, the famously red Armenian bole [2]. Fig. 1 Schematic illustrations of basic stratigraphy for (a) water gilding, (b) oil gilding and (c) ground gilding techniques. Another common substrate colour is yellow, which is the essential colour for mordant and can be made through adding yellow pigments (e.g. ochre, lead-tin yellow) into the drying oil or mordant [12], and has been frequently observed in matte gold areas such as the hair of saint statues [13]. In addition to gold leaf, which was extensively applied on the outer surfaces of saints' gowns and altar backgrounds, silver and part-gold leaves (also called Zwischgold, which refers to a metal leaf containing an upper gold layer and a lower silver layer [14,15]) were also frequently observed in medieval gilded artefacts. Factors influencing the visual effects of gilded surfaces The visual perception of a glossy surface such as metal is composed of the colour appearance and shininess [16], which respectively correspond to the diffuse and specular light reflection [17,18] and can be affected by a few factors, such as the illumination pattern, surface roughness, and the presence of a superficial layer [16,19]. In the case of a gold leaf surface, the influencing factors can be extended to: • Illumination. This mainly refers to the type, orientation and intensity of the light source, which certainly show a close relationship with the exhibition arrangement of the artefacts; • Viewing condition. Human eyes have diverse resolving powers and a human observer can change viewing conditions and angles to make an appearance judgement [18]. • Surface coating. A surface coating layer (e.g. varnish) can change the surface roughness, and thus manipulate the light reflected from the surface. A protective varnish is usually not needed for a gold surface due to the chemical inertness of gold. However, a thin glue coating was sometimes partly applied onto a burnished gold surface, in order to create a matte/glossy aesthetic contrast [9]. • Surface burnishing. It is easy to observe that a burnished gilded surface exhibits a high metal gloss while an unburnished surface appears relatively matte. • Leaf imperfection. A careless surface burnishing can cause cracks or holes in the gold leaf. At a certain magnitude of such leaf imperfections, light reflected from the substrate can pass through and reach the surface [7]. A similar effect would also occur if small areas of the substrate are exposed between adjacent gold leaves. • Materials composition of gold leaf. The alloying elements play an essential role in the colour of a gold leaf. For example, Dumazet et al. [7] state that a gold alloy becomes greenish with the addition of 25% silver, while a copper content of 25% makes the alloy appear red. • Leaf thickness. The thickness of a gold leaf is a critical parameter in how opaque it is and thus determines whether the substrate colour has any possibility of influencing the visual appearance of the gold surface. • Optical properties of gold. Gold is well known to strongly reflect and absorb visible light [20], and so only a very thin layer of gold would allow the transmission of light to illuminate the substrate or to allow light from the substrate to return to an observer. 
Four of these factors, namely the illumination, viewing condition, surface coating and leaf imperfection, are strongly dependant on the individual objects or viewers, and hence are difficult to study in general. However, the materials composition of medieval gold leaf and its leaf thickness are either traceable in historical documents or can be analysed through scientific approaches, which will be discussed in detail in Part II. The thickness of the gold leaf is a critical factor in whether the substrate colour is able to influence the appearance of the gilded surface. For the hypothesis to be true, light from an external source would need to be transmitted through the gold surface (i.e. not reflected) and the leaf thickness (i.e. not absorbed), then be reflected from the substrate to impart its colour and be transmitted through the leaf again to reach an observer. According to Loebich [20], if a pure gold leaf is 50 nm thick, then only about 10% of the incident light (λ = 492 nm; cyan) can transmit through this gold leaf, which would be reduced to 10% on the return passage for about 1% (10% × 10%) of the original light leaving the surface to be observable. This is without factoring in further losses due to reflectivity when entering the gold surface or when reflecting from the substrate. A gold film that is about 100 nm thick (similar to the modern gold leaves used in this study) cannot be expected to show any significant light transparency. A reproduced graph regarding the reflectivity and light transmission of thin gold films in terms of their thicknesses is presented in Additional file 1: Fig. S1. Colour measurements on the self-made models are therefore expected to show that the substrate colour does not affect the appearance of a gilded surface, but that variations in appearance may be caused by other factors. Surface burnishing is an important part of water gilding and well known to strongly affect the appearance of the gilded surface; and it is thus of great significance for historical gilded artefacts. The effect of burnishing is therefore a focus of this paper. Gilding techniques, materials and nomenclature of models A total of twelve models (Fig. 2) were produced in March 2019 (3) and March 2020 (9) with traditional gilding techniques, including one ground gilding, one oil gilding and ten water gilding models. In the models made in 2019, Spezial-Poliergold-Altgold-Dunkel (from Deffner & Johann, Germany) was applied only with the water gilding technique. In the models made in 2020, ground gilding and oil gilding techniques were also added into the model production; and due to a shortage of Poliergold, Dukaten-Doppelgold (from Noris Blattgold, Germany) was used. Although manufacturers claim a gold purity of 22.5 carat for Poliergold 2 and 23 carat for Doppelgold [21], SEM-EDX quantification on samples shows that these two types of gold leaf actually have very close gold contents (23.1-23.2 carat), with an Au:Ag:Cu mass ratio of 96.8:2.1:1.1 for Poliergold and 96.6:1.8:1.6 for Doppelgold; and their average leaf thickness is 96 ± 9 and 116 ± 16 nm respectively. Details of the SEM-EDX measurements are presented in Part II. A chalk ground bound with diluted animal glue was used as the grounding for all models. 
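The double-pass argument above can be made explicit with a small calculation. The sketch below starts from the roughly 10% single-pass transmission through 50 nm of gold quoted from Loebich and extrapolates to thicker leaf with a simple exponential attenuation law; this extrapolation, and the neglect of surface reflectivity and thin-film interference, are our simplifications.

```python
import math

# Single-pass transmission of ~10% through 50 nm of gold at 492 nm (after Loebich,
# as cited in the text); the exponential extrapolation is an assumption.
t_50nm = 0.10
alpha = -math.log(t_50nm) / 50.0           # effective attenuation per nm

for thickness in (50, 100, 116):           # nm; 96-116 nm matches the modern leaves
    t_single = math.exp(-alpha * thickness)
    t_double = t_single ** 2               # light must cross the leaf twice
    print(f"{thickness:>4} nm: single pass {t_single:.3%}, double pass {t_double:.5%}")
```

Under these assumptions the double-pass fraction drops from about 1% at 50 nm to around 0.01% at 100 nm, which is consistent with the expectation that leaves of roughly 100 nm show no significant transparency.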
Three types of gilding substrates including the coloured (red, yellow, bluegrey) bole, oil-based gold size (Mixtion à dorer from Lefranc & Bourgeois, France) (half-transparent, light brown) and white chalk ground were respectively used to apply gold leaf with the water, oil and ground gilding techniques. The applied gold leaf was then either burnished ("b") or kept unburnished ("nb"). Here, it is worthwhile to point out that surface burnishing was performed manually with an agate burnishing tool in one direction of the surface plane in order to produce a high metal gloss; variations in effect are therefore dependent on a few factors such as applied pressure, rubbing speed and substrate moistness/softness, which are not easy to precisely control in a manual process. Technological details regarding the models are presented in Table 1 and more details of production procedures are presented in Additional file 1: Section 3. Note that no adhesive was needed for water gilding models. Instead, a mixture of ethanol and water (1:2 by volume), functioned as the wetting agent and was brushed onto the bole immediately before the application of gold leaf. A diluted water-based rabbit skin glue (1.5% by weight) was applied onto the white ground substrate of the ground gilding model; it was also used as adhesive between the gold leaves in the additional models with two layers of gold leaf, while the first layer of gold leaf in such models was still applied onto the bole with the ethanol/water mixture. In the oil gilding model, the gold size layer was used as both substrate and adhesive. The gold leaf was applied above the gold size when it was almost dry but still tacky. The model names form a shorthand code describing their construction. They are composed of the substrate colour and identification number, followed by an underscore and the gold leaf type, as shown in Table 1. For example, "y4_pg" refers to "No. 4 yellow bole substrate applied with Poliergold", while "w1_dg" means "No. 1 white ground substrate applied with Doppelgold". Here "y" refers "yellow", "r" for "red", "b" for "blue-grey", "w" for "white", "pg" for "Poliergold" and "dg" for "Doppelgold". Note that double layers of gold leaf were applied on some additional water gilding models, indicated with a "x2" suffix. In the oil gilding model, the gold size layer is fully Fig. 2 a Poliergold models with yellow and red bole substrates including (from top to bottom) "y4_pg", "r1_pg" and "r5_pgx2"; b Doppelgold models with blue-grey bole substrates including "b1_dg", "b4_dg" and "b2_dgx2"; c Doppelgold models with yellow and red bole substrates including "y7_ dg", "r7_dg", "r9_dg" and "r10_dgx2"; d Doppelgold models including the ground gilding model "w1_dg" and the oil gilding model "w4_dg_oil"; in the latter a gold size layer is located between the white ground and gold leaf, and thus invisible in the image; e bare substrates including yellow bole, red bole, blue-grey bole and white ground. The dark areas in the gilded sections of some models are the reflection of the camera lens Gold size nb covered by the gold leaf and invisible to the observers, and thus the white ground colour "w" is still used in its name. In order to differentiate the ground and oil gilding models, the latter's name is suffixed with "_oil". Further, in the water gilding and ground gilding models, the gold surface on the left half was unburnished and that on the right half was burnished. 
Therefore, each such model is divided into two sub-models, labelled with "nb" (non-burnished) and "b" (burnished). For example, "y4_pg_nb" refers to "No. 4 yellow bole substrate applied with non-burnished Poliergold". The oil gilding model is unburnishable and therefore there is only one sub-model, labelled as "_oil_nb". Analysis techniques and experimental conditions Colour measurements on models were implemented through a portable spectrophotometer Konica Minolta CM-2600d. Two measurement modes 'Specular Component Included' (SCI) and 'Specular Component Excluded' (SCE) were used for each measurement, with both 'Medium Area View' (MAV, ∅ 8 mm) and 'Small Area View' (SAV, ∅ 3 mm) apertures. A built-in colour system CIE L*a*b* with a standard illuminant D65 was applied for the colour analysis of a surface, through which three colorimetric values are obtained, including L* for the lightness from black (0) to white (100), a* from green (−) to red (+), and b* from blue (−) to yellow (+) [22]. A colour difference (∆E*ab) between a sample and the target spot is calculated based on the difference of colorimetric values, i.e. ∆L* ("+" lighter, "-" darker), ∆a* ("+" redder, "-" greener) and ∆b* ("+" yellower, "-" bluer) [23]. A commonly accepted Just Noticeable Difference (JND) value of ∆E*ab is 2.3 [22,24]. Calculation formula for a colour difference is presented in "Appendix". A digital microscope Keyence VHX-5000 was used for observations on the roughness and texture of the model surfaces at the Paul Scherrer Institute (PSI), with the 200× objective, MIX lighting and HDR image quality. Positions representing general surface conditions for the non-burnished, burnished gold surfaces and bare substrate areas of the models were selected for imaging through a 3D-Stitching mode, in which 3 × 3 images were stitched into a total area of 3798 × 2954 μm. The entire region is displayed in focus by measuring a focal stack with a vertical pitch of 20 um. A Zygo NiewView 5010 white light interferometer was used for surface roughness measurements at the PSI. The 2.5× objective used in the study provides a total magnification of 100, a numerical aperture of 0.075, and a field of view of 1.4 × 1.0 mm. The samples were placed and aligned under the microscope, and subsequently scanned in height. With typical peak-to-valley values of 2-10 μm, a scanning range of 40 μm was used. As supportive data, SEM-EDX was implemented for the gold content and leaf thickness of the gold leaves used for the models. The experimental conditions of SEM are presented in Additional file 1: Section 2. Measurements on substrates In order to ensure the reliability of the colour measurements on the gold surface of the models, it is necessary to first check the colour quality of the substrates. Four substrate groups including yellow, red, blue-grey boles and white grounds, which were made either in 2019 or in 2020 (details presented in Table 1), were investigated through colorimetry with the MAV aperture. Note that the oil-based gold size was not measured, due to its tacky nature and tendency to easily collect dust. Measurement results show that there is no significant colour difference (for both SCE and SCI values of ∆E*ab) within the individual substrates and within the same substrate groups, indicating that the colours of the substrates are even and homogeneous. Colorimetric data of the substrates is presented in Additional file 1: Table S1. 
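For reference, the colour-difference convention used throughout these measurements can be reproduced in a few lines. The sketch below implements the CIE76 form of ∆E*ab (the Euclidean distance in L*a*b* space), which we assume corresponds to the formula given in the paper's Appendix; the numerical readings are invented for illustration only.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

JND = 2.3  # just-noticeable difference cited in the text

# Hypothetical readings (not measured values from the paper):
target = (82.0, 3.5, 32.0)     # e.g. an unburnished gold spot
sample = (43.0, 7.8, 23.5)     # e.g. a burnished gold spot
dE = delta_e_cie76(target, sample)
print(f"dE*ab = {dE:.2f} ({'above' if dE > JND else 'below'} the JND of {JND})")
```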
Measurements on gold surfaces with different apertures During the colour measurements on gold surface of models, it is observed that the SCI values of ∆E*ab for almost all "nb" and "b" sub-models are below JND, which is inconsistent with our visual perception that the burnished gold surface appears darker and more saturated than the unburnished one. Indeed, according to the colorimeter manufacturer Konica Minolta, the SCE mode (i.e. diffuse reflection 3 ) is similar to the visual perceptions by human eyes for a glossy surface [25]. Therefore, further analysis is only focused on the SCE data. Both MAV and SAV apertures were used for the colour measurements on gold surface. For MAV measurements, about 20 sample spots were selected on each "nb" and "b" sub-models. Minor imperfections in the gold leaf (e.g. fine scratches, small worn spots and stains), which are usually generated during the application and burnishing procedures, were included in these spots. In SAV measurements about 25 spots were selected, in which the minor defects were avoided. An example of the selection of MAV and SAV measurement spots on Model "y4_pg" is presented in Additional file 1: Fig. S3. The comparison between the MAV and SAV measurement data is expected to show whether minor leaf defects could affect the visual appearance of the gold surface. Note that during the analysis of SAV data around 1-3 spots in a few sub-models, which show unusual ∆E*ab values compared to the averaged one, were further excluded. Figure 3 presents the data charts of the MAV and SAV colour differences (∆E*ab) measured in Poliergold models (a) and Doppelgold models (b). It is obvious that almost all sub-models have very close ∆E*ab values, except for "r5_pgx2_b" and "r7_dg_b", which show higher values in MAV measurements than SAV by ca. 7 and 9 units respectively. We are not certain about the reason for such colour discrepancy. It is assumed that the MAV measurement spots in these two sub-models might contain relatively large defects, which could lead to certain levels of colour change of the gold surface. The comparison between the MAV and SAV data indicates that minor imperfections in the gold leaf do not significantly influence the colour appearance of the gold surface. Further analysis is focused on the SAV data. Measurements on Poliergold models The Poliergold models include three water gilding models "y4_pg", "r1_pg" and "r5_pgx2". Figure 4 shows that the colour differences (compared to the target) of the three "nb" sub-models are not significant (∆E*ab values < JND); the same also applies for their "b" sub-models. In the latter, the ∆L* values drop dramatically (by 39-43 units compared to the corresponding "nb" sub-models), followed by the ∆b* values (drop by 8-9 units); while the ∆a* values slightly increase (by ca. 4-5 units). This observation indicates that after surface burnishing the gold leaf appears much darker and its colour change is in the direction of blue and red. Compared to "y4_pg_b" and "r1_pg_b", the sub-model "r5_dgx2_b" seems even darker and more red-blueish (indicated with lower ∆L* and ∆b*, and higher ∆a*). We believe that this effect was likely caused by a stronger surface burnishing. Since the gold leaves on differently coloured (i.e. red and yellow) bole substrates do not show significant colour change in both unburnished and burnished states, we have evidence against the hypothesis that the substrate colour plays a role in the colour appearance of the gold leaf laid above. 
However, to be conclusive it is necessary to study more models, in order to better understand the correlation between the colour of the gold surface and its substrate. Corresponding data is available in Additional file 1: Table S2 a b Fig. 4 Chart of colorimetric values of ∆L*, ∆a*, ∆b* and ∆E*ab for a "nb" and b "b" sub-models in Poliergold models. Corresponding data is available in Additional file 1: Table S3 Measurements on Doppelgold models Figure 5a, b presents the colour measurements of nine Doppelgold models, including one ground gilding, one oil gilding and seven water gilding models. Four subsets of models with specific features (Fig. 5c-j) were further compared and analysed. Chart of colorimetric values of ∆L*, ∆a*, ∆b* and ∆E*ab for a "nb" and b "b" sub-models in all Doppelgold models; c "nb" and d "b" sub-models with old bole substrates; e "nb" and f "b" sub-models with new bole substrates; g "nb" and j "b" sub-models with red bole substrates; i "nb" and j "b" sub-models made with different gilding techniques. Corresponding data is available in Additional file 1: Table S4 The first colour comparison was implemented between "y7_dg" and "r7_dg" (Fig. 5c, d), in which the bole substrates "y7" and "r7" were made in 2019. It is obvious that the colorimetric data for both "nb" and "b" sub-models of these two models are very similar, indicating that there is no significant colour difference between the gold surfaces on the differently coloured bole substrates. Such observation further confirms the measurement output from the Poliergold models. • Subset 2: Models with new bole substrates Five models ("b1_dg", "b4_dg", "b2_dgx2", "r9_dg" and "r10_dgx2") were built on new bole substrates made in 2020 (Fig. 5e, f ). Sub-models "b4_dg_nb" and "b2_ dgx2_nb" show higher L* values (76.43 and 72.74) than that of "b1_dg_nb" (68.28), indicating relatively higher diffuse light reflection and also leading to their slightly different ∆E*ab values (8.66, 5.16 and 2.67). Indeed, digital microscopy images of these three "nb" sub-models (Fig. 7a-c) show that the latter two submodels appear rougher than the former. Here, it is worth noting that the blue-grey bole materials seem to contain larger pigment or filler particles than the yellow and red bole materials; and the substrate "b1" was slightly sanded to obtain a relatively smoother surface than "b2" and "b4", for the purpose of comparison. Since "b1", "b2" and "b4" exhibit different levels of surface roughness but very similar colorimetric values (Additional file 1: Table S1), the colour discrepancy in their "nb" sub-models indicates that the surface roughness of the substrate could play a role in the visual appearance of an unburnished gold leaf laid above. • Models "r9_dg" and "r10_dgx2" show almost the same colorimetric values in both their "nb" and "b" sub-models, indicating that there is no significant colour difference between the single-and double-layered gold leaf. This is also an evidence to prove that the slight colour difference between the Poliergold models "r1_pg_b" and "r5_pgx2_b" was likely caused by different magnitudes of surface burnishing. It is interesting to observe that the ∆E*ab values of "r9_dg_nb" (2.67) and "r10_dgx2_nb" (3.00) are very close to that of "b1_dg_nb" (3.10), which could be attributed to the fact that the substrate "b1" was sanded to obtain a smoother surface and its roughness could be similar to the new red boles. 
As for "b" sub-models, except for "b4_dg_b", the other four sub-models show close colorimetric values. The ∆E*ab value of "b4_dg_b" is slightly higher than the others. Again, this could be attributed to a stronger surface burnishing. • Subset 3: Models with red bole substrates The "nb" sub-models of three models with red bole substrates ("r7_dg", "r9_dg" and "r10_dgx2) show very close colorimetric values; while "r7_dg_b" exhibits higher ∆E*ab values than the other two "b" submodels (Fig. 5g, h). Note that the substrates "r7", "r9" and "r10" do not show significant differences in their colour measurements (Additional file 1: Table S1). • Subset 4: Models made with different gilding techniques This comparison is performed between the water gilding model "b1_dg", ground gilding model "w1_dg" and oil gilding model "w4_dg_oil" (Fig. 5i, j). Note that the comparison for "b" sub-models (j) is only performed between "b1_dg_b" and "w1_dg_b", since the gold surface in oil gilding is unburnishable. It is not surprising to observe that "w1_dg_b" shows much lower ∆E*ab value (16.23) compared to "b1_ dg_b" (32.62), which is mainly attributed to the fact that the L* value of "w1_dg_b" (53.10) is much higher than that of "b1_dg_b" (36.76) by ca. 16 units. This observation indicates that a poor-quality surface burnishing has been performed on this ground gilding model. Indeed, historical literature states that ground gilding can be only slightly burnished [9]. Fig. 5i show that the ∆E*ab values of the three "nb" sub-models vary on a small scale, of which "w1_dg_ nb" shows the highest value of 6.08, while the values of the other two are very close (3.10 for "b1_dg_nb"; 4.41 for "w4_dg_oil_nb"). This discrepancy also results from their different L* values: "w1_dg_nb" shows higher lightness (74.09) than the other two "nb" sub-models (68.28 and 67.36), indicating a slightly more diffuse light reflection. Since the chalk ground was polished by fine sandpaper and thus appears very flat and smooth, it is worth to further investigate why more light could be diffusely reflected from the gold leaf laid atop such a highly polished and flat substrate surface. Colour comparison between Poliergold models and Doppelgold models The analysis in the previous section exhibits some examples about the possible correlation between the surface roughness of the substrate and the colour change of the gold leaf. Therefore, the colour comparison between different gold leaves must be performed on the models with substrates of similar roughness, likely the substrates made at the same time. The comparison was implemented between two Poliergold and two Doppelgold models with yellow and red bole substrates. Figure 6 show that these four models exhibit very close colorimetric values in both their "nb" and "b" sub-models, indicating that the small discrepancies in the gold content and leaf thickness of the gold leaf do not cause significant colour change to its surface. From the colour measurements on the models with different types of gilding techniques, gold leaves and substrates, we see strong evidence that the substrate colour does not play an essential role in the visual appearance of the gold leaf laid above. Instead, the surface burnishing can strongly alter the colour appearance of the gold leaf and its quality is dependent on the substrate materials. Within the three common gilding substrates, the coloured bole provides the best surface burnishing due a b Fig. 
However, it is worth pointing out that the quality of surface burnishing depends not only on the substrate materials but also on the implementers and the preparation process. Details are presented in Additional file 1: Section 3.1. The surface roughness of the substrate seems to be another critical factor influencing the colour of the gold leaf. The correlation between the surface roughness of the substrate and the visual appearance of the gold leaf is further studied through digital microscopy and interferometric microscopy in the following sections.

Digital microscopy imaging on models

Figure 7 exhibits the digital microscopy (DM) images of three models with the blue-grey bole substrates ("b1_dg", "b4_dg", "b2_dgx2") acquired in the 3D-stitching mode. Although it is known that the substrate "b1" was sanded to obtain a relatively smoother surface than "b2" and "b4", these three bare substrates cannot be well differentiated in their DM images (Fig. 7g-i). Instead, the substrate roughness is reflected in the DM image of an unburnished gold leaf laid above, especially with the MIX lighting and the HDR image quality. The sub-model "b1_dg_nb" clearly exhibits a less rough gold surface than "b4_dg_nb" and "b2_dgx2_nb" (Fig. 7a-c), and it is not surprising to observe that after burnishing all the "b" sub-models show similarly smoother surfaces (Fig. 7d-f) compared to their "nb" sub-models, indicating a greater specular/diffuse ratio in the light reflection. The DM observations on these three models are consistent with their colorimetry measurements.

Fig. 7 Digital microscopy 3D-stitching images for the "nb", "b" and substrate areas of the models with the blue-grey bole substrates "b1_dg", "b4_dg" and "b2_dgx2"

It is also interesting to compare the DM images of models made with different gilding techniques. Figure 8 shows DM observations on the gold surfaces of "b1_dg", "w1_dg" and "w4_dg_oil". These three models look similarly rough but with different textures in their "nb" and "b" sub-models. For example, "b1_dg" exhibits a relatively even and homogeneous texture (Fig. 8a, d), while "w1_dg" shows many fine horizontal lines in both its "nb" and "b" sub-models (Fig. 8b, e), although the surface burnishing of the latter was performed in the vertical direction. The fine horizontal lines present in "w1_dg" were likely caused by the sanding marks on its ground substrate, since the sanding and polishing of all chalk grounds were performed in this direction. Unlike "w1_dg", the model "w4_dg_oil" shows a few larger-scale vertical lines in its "nb" sub-model (Fig. 8c). Such vertical lines could possibly be attributed to the presence of leaf folds: since the gold leaf in oil gilding is unburnishable, leaf folds created during the manual pressing with cotton balls cannot be flattened by surface burnishing.
Fig. 8 Digital microscopy 3D-stitching images for the "nb", "b" and substrate areas of the models made with different gilding techniques "b1_dg", "w1_dg" and "w4_dg_oil". Note that "w4_dg_oil" does not have a "b" sub-model

Interferometric microscopy measurements on surface roughness of models

The surface roughness was quantified using interferometric microscopy. Measurement spots that represent general surface conditions were selected for each of the three areas in the models: "nb", "b" and bare substrate. Figure 9 shows the radially averaged 2D power spectral density (PSD) of the surface topologies of the three types of water gilding models investigated in this study as a quantitative measure of surface roughness [26], including models with yellow bole (a), red bole (b) and blue-grey bole substrates (c). A common trend in the roughness change between the bare substrate, unburnished ("nb") and burnished ("b") areas can be easily observed in each type of model: the bare substrates show the highest roughness; the "nb" sub-models show a similar roughness at the longer length scales (following the shape of the substrate) and a strong roughness decrease at short length scales; and the "b" sub-models show decreased roughness across all length scales.

Fig. 9 Radially averaged 2D power spectral density (PSD) of surface topologies measured by interferometric microscopy for models with the yellow, red and blue-grey bole substrates. Lines show the average of multiple measurements, while shaded areas represent 90% confidence intervals

Fig. 10 Radially averaged 2D power spectral density (PSD) of surface topologies measured by interferometric microscopy for models with the blue-grey bole substrates "b1_dg", "b4_dg" and "b2_dgx2". Lines show the average of multiple measurements, while shaded areas represent 90% confidence intervals

The water gilding models with the blue-grey bole substrates ("b1_dg", "b4_dg" and "b2_dgx2") are further analysed in Fig. 10. Comparisons with respect to "nb", "b" and bare substrate are presented in Fig. 10a-c. The surface roughness of the "nb" sub-models in Fig. 10a shows that "b1_dg_nb" is significantly smoother than "b4_dg_nb" and "b2_dgx2_nb" at length scales larger than ca. 0.05 mm, while below 0.05 mm the surfaces show very similar roughness. A similar situation is also observed in their bare substrates in Fig. 10c, where "b1" shows a lower roughness than "b4" and "b2" at the larger length scales, while at the smaller scales there is no difference. Comparing these to the "b" sub-models in Fig. 10b, we observe that burnishing causes a strong reduction in the surface roughness, and the three "b" sub-models now show similar roughness at all length scales, indicating that a good-quality surface burnishing was performed on all three models. The output of the roughness measurements is consistent with our observations through digital microscopy and colorimetry, providing strong evidence for a correlation between the surface roughness of the substrate and the colour appearance of the gold leaf. For example, the substrate "b1" was sanded to obtain a smoother surface (to a similar level as the yellow bole), and the gold leaf laid above ("b1_dg_nb") correspondingly shows the lowest roughness and smallest colour difference (compared to the target "y7_dg_nb_sp1") within this set of three models; after surface burnishing, its surface roughness becomes similar to that of the other two models.
Figure 10d-f shows the comparison between "nb", "b" and substrate for the individual models. It is clear that all "b" sub-models exhibit the lowest roughness, followed by the "nb" sub-models, while the bare substrates are the roughest. The roughness change in this set of models is similar to the trends observed in Fig. 9.

Figure 11 presents the radially averaged 2D PSD roughness profiles and the corresponding interferometric microscopy images of the three models produced with different gilding techniques (i.e. "b1_dg", "w1_dg" and "w4_dg_oil"), which were also observed through digital microscopy. Note that the measurements of the substrate roughness of "w4_dg_oil" were performed on the chalk ground rather than on the oil-based gold size, due to the tacky nature of the latter. From the images shown in Fig. 11b-d, we can see that the three "nb" sub-models show different large-scale textures; however, these differences are not reflected in the corresponding PSDs shown in Fig. 11a, due to a lack of statistics in the large length scale region. A similar situation is also observed in the roughness graphs of their bare substrates in Fig. 11h. However, the colour measurements show that the ∆E*ab values of "w1_dg_nb", "w4_dg_oil_nb" and "b1_dg_nb" are 6.08, 4.41 and 3.10, respectively; the relatively higher ∆E*ab value of the former is mainly attributed to its higher L* value compared to those of the other two (74.09 vs. 67.36 and 68.28), indicating a slightly more diffuse light reflection. In this case, the PSDs in Fig. 11a are not sufficient to explain the observations in the colour measurements. We expect that the diffuse light reflection from "w1_dg_nb" is enhanced by light scattering from the many fine horizontal lines on its gold surface, which we observed in the DM image (Fig. 8b) and expect to be the result of sanding marks on the chalk ground. Such fine horizontal lines can also be observed in the topography images of the "nb" and "b" sub-models of "w1_dg" (Fig. 11c, g), as well as in the ground substrates of "w1_dg" and "w4_dg_oil" (Fig. 11j, k), but are not observable in the corresponding PSDs due to poor statistics at length scales above about 0.2 mm.

Fig. 11 Radially averaged 2D power spectral density (PSD) of surface topologies and the corresponding height map images for the "nb", "b" and substrate areas of models with different gilding techniques "b1_dg", "w1_dg" and "w4_dg_oil". Note that "w4_dg_oil" does not have a "b" sub-model and the roughness measurements of its substrate were performed on the ground rather than on the gold size

The vertical lines in "w4_dg_oil_nb" that were observed in the DM image (Fig. 8c) can also be seen in its topography image (Fig. 11d), and the small wavy wrinkles present in the same image appear consistent with the drying process of the oil contained in the gold size. The roughness comparison between "b1_dg_b" and "w1_dg_b" (Fig. 11e) is rather clear, with "b1_dg_b" showing significantly lower roughness than "w1_dg_b" at length scales below 0.1 mm. This lower roughness can be expected to result in much less diffuse light reflection. Indeed, our colour measurements indicate that the L* values of "b1_dg_b" and "w1_dg_b" are 36.76 and 53.10, respectively, which results in a significant colour difference of ca. 16 units (∆E*ab values of 32.62 vs. 16.23). The comparison of these sub-models is a clear demonstration of how a soft substrate (i.e. bole) produces a superior burnished surface compared to a hard substrate (i.e. polished ground).
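As an illustration of how the radially averaged 2D PSD curves above can be produced, the sketch below computes one from a height map with NumPy. It is a minimal sketch assuming a square, uniformly sampled topography array; the windowing, detrending and multi-measurement averaging behind the published curves are not reproduced here.

```python
import numpy as np

def radial_psd(height, pixel_size):
    """Radially averaged 2D power spectral density of a height map.

    height: 2D square array of surface heights (e.g. from interferometry).
    pixel_size: lateral sampling distance (mm per pixel).
    Returns (spatial_frequency, psd) with frequency in cycles/mm.
    """
    n = height.shape[0]
    h = height - height.mean()                       # remove mean height
    spec = np.fft.fftshift(np.fft.fft2(h))
    psd2d = (np.abs(spec) ** 2) * (pixel_size ** 2) / (n * n)

    # Radial binning around the zero-frequency centre
    fy, fx = np.indices((n, n)) - n // 2
    r = np.hypot(fx, fy).astype(int)
    radial_sum = np.bincount(r.ravel(), weights=psd2d.ravel())
    counts = np.bincount(r.ravel())
    psd1d = radial_sum / np.maximum(counts, 1)

    freq = np.arange(psd1d.size) / (n * pixel_size)  # cycles per mm
    return freq[1:], psd1d[1:]                       # drop the DC bin

# Hypothetical example: a 512 x 512 map sampled at 0.001 mm per pixel
rng = np.random.default_rng(0)
z = rng.standard_normal((512, 512))
f, p = radial_psd(z, 0.001)
```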
Conclusions

Our analysis of colorimetry and surface roughness measurements on models made from modern gold leaves provides strong evidence that the substrate colour itself does not play an essential role in the colour appearance of the gold leaf laid above. Instead, we attribute the variations in visual perception to the surface roughness of the applied gold leaves, which can be affected by the properties of the substrate. Generally, for water gilding with bole substrates, the roughness of the substrate directly affects the gilded surface at length scales larger than 0.05 mm in the unburnished state, as the foil conforms to the large-scale topography of the substrate. However, burnishing drastically changes the surface roughness and appearance of the gilding, which is affected by the substrate in a very different way. The effectiveness of the burnishing in achieving a high metal gloss is aided by the cushioning function provided by the substrate. For example, the bole substrate is soft and elastic and exhibits the best burnishing performance, while a hard chalk ground provides an inferior surface burnishing, leading to relatively higher diffuse light reflection from the gold surface. While the greatest colorimetric changes with burnishing were observed along the lightness axis (L*), smaller but significant changes along the red-green axis (a*) and yellow-blue axis (b*) were also observed, which would help to explain the reported increase in warmth and depth of well burnished gold surfaces. The findings of this article build the foundation for further analysis of medieval gold leaf and its historical developments in Part II.
FBA-PRCC. Partial Rank Correlation Coefficient (PRCC) Global Sensitivity Analysis (GSA) in Application to Constraint-Based Models

Background: Whole-genome models (GEMs) have become a versatile tool for systems biology, biotechnology, and medicine. GEMs created by automatic and semi-automatic approaches contain many redundant reactions. At the same time, the nonlinearity of the model makes it difficult to evaluate the significance of a reaction for cell growth or metabolite production. Methods: We propose a new way to apply global sensitivity analysis (GSA) to GEMs in a straightforward, parallelizable fashion. Results: We have shown that the Partial Rank Correlation Coefficient (PRCC) captures key steps in the metabolic network regardless of the network distance from the product synthesis reaction. Conclusions: FBA-PRCC is a fast, interpretable, and reliable metric to identify the sign and magnitude of a reaction's contribution to various cellular functions.

Introduction

Genome-scale metabolic models (GEMs) that combine functional annotation of the genome with available metabolic knowledge are a valuable tool for modern computational and systems biology [1,2]. GEMs have been used in biotechnology for strain engineering [3][4][5] and for a better understanding of the metabolic consequences of various pathological processes [6][7][8], such as cancer [9,10], metabolic syndrome, and obesity [11], to name a few. Over the past decade, GEMs have been created for several hundred unicellular organisms (BiGG [12], MEMOTE [13], AGORA [14]) and dozens of human body tissue types [15]. However, most of these models were created by either semi- or fully-automated processes, which can suffer from incorrect gene annotation, arbitrary reactions added by the gap-filling process [13,14,16], etc. All of these inflate the size of the model, tangle it with unnecessary reactions, and make its analysis more complicated. There is a quote attributed to Einstein: "You should make things as simple as possible, but not simpler". Model reduction methods applicable to GEMs are an active area of research [17,18]. Here we propose another approach to model simplification, based on flux balance analysis (FBA) and global sensitivity analysis (GSA). FBA is a computational approach used to analyze and predict the metabolic behavior of an organism using a GEM [19]. FBA is commonly used in systems biology and metabolic engineering to study cellular metabolism and to predict the growth and behavior of an organism under different conditions. FBA is based on the principle of mass balance, where the input and output of each metabolite in a metabolic network are balanced. The metabolic network is represented as a set of reactions connected by metabolites. Each reaction has an associated flux, which represents the rate at which the reaction occurs. FBA uses linear programming to optimize the flux through the metabolic network of a GEM, subject to constraints such as the availability of nutrients and the capacity of enzymes. The goal of FBA is to find the set of fluxes that maximizes a specific objective function, such as the growth rate of the organism. The FBA approach can be used to predict the effect of genetic and environmental perturbations on cellular metabolism, and to identify metabolic engineering targets for improving the performance of industrial bioprocesses. FBA has been successfully applied to a wide range of organisms, including bacteria, yeast, plants, and humans.
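As a concrete illustration of the FBA workflow just described, the following minimal sketch solves the linear program with Cobrapy, the Python toolbox used later in this paper; the SBML file name is a placeholder.

```python
from cobra.io import read_sbml_model

# Load a genome-scale model from SBML (file name is a placeholder).
model = read_sbml_model("iEC1372_W3110.xml")

# FBA: maximize the flux through the objective reaction subject to
# Nv = 0 and the reaction bounds stored in the model.
solution = model.optimize()
print("objective value:", solution.objective_value)

# Fluxes of individual reactions in the optimal solution
print(solution.fluxes.head())
```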
Kinetic modeling involves creating mathematical models that describe the behavior of complex systems, such as chemical reactions, biological processes, or ecological systems. These models often contain a large number of parameters, each representing a specific aspect of the system, such as reaction rates, enzyme concentrations, or external inputs. However, not all of these parameters are equally important for the behavior of the system. Some parameters may have little influence on the system trajectory, and their values may be difficult or impossible to estimate from experimental data. Such unimportant parameters are called unobservable parameters. Identifying unobservable parameters is important because it can help simplify the model and reduce the number of unknowns that need to be estimated. One approach to identifying unobservable parameters is global sensitivity analysis (GSA), which estimates how variations in each parameter value affect the behavior of the system as a whole [20]. GSA can help identify parameters that have little influence on the system behavior and can therefore be considered unobservable. One approach to identifying unobservable parameters in FBA is Flux Variability Analysis (FVA). FVA estimates the range of possible flux values for each reaction in the network while keeping the objective near its optimal value (a minimal Cobrapy sketch is shown below). This allows researchers to identify reactions that are essential for the network's behavior, which have a narrow flux range, as well as reactions that can be varied without significantly affecting the network's output. However, FVA does not provide information about flux interactions or about how fluxes influence the objective value when it is far from the optimum, which can be important for understanding the behavior of complex networks. To analyze flux interactions, Kelk and colleagues developed a method called CoPE-FBA [21], which uses a decomposition approach to break down alternative flux distributions into three topological features: vertices, rays, and linealities. These features correspond to paths, irreversible cycles, and reversible cycles in a metabolic network, respectively. The authors demonstrated that the optimal solution space is often determined by a few subnetworks or modules consisting of numerous reactions, each with multiple internal routes. By analyzing the solution space with this method, it is possible to characterize the entire space in terms of these subnetworks or modules. As a result, two reactions are placed in the same module if their flux values across all vertices are correlated, regardless of whether they are in the same flux route or in exclusive ones. To analyze how flux perturbation influences other fluxes and objectives, a local version of sensitivity analysis in FBA that combines FVA with Monte Carlo sampling was developed [22]. In this approach, all reactions are divided into three groups: 'stable' reactions have a low FVA range, 'robust' reactions vary little with perturbations of the other reaction fluxes, and 'sensitive' reactions significantly change their fluxes in response to the perturbation. The fraction of each group was then compared between different constraint types and mutations. However, neither the CoPE-FBA nor the Monte Carlo approach shows how variation in a flux influences the objective value. Recently, a global sensitivity analysis (GSA) of constraint-based models was published in the literature [23].
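For reference, the FVA analysis mentioned above takes only a few lines with Cobrapy; this is a minimal sketch, with the model file name as a placeholder and the fraction_of_optimum value chosen arbitrarily.

```python
from cobra.io import read_sbml_model
from cobra.flux_analysis import flux_variability_analysis

model = read_sbml_model("iEC1372_W3110.xml")  # placeholder file name

# FVA: for each reaction, the min and max flux attainable while the
# objective stays within 90% of its optimum.
fva = flux_variability_analysis(model, fraction_of_optimum=0.9)

# Reactions with a narrow flux range are candidates for essential steps;
# wide ranges indicate flexibility in the optimal solution space.
fva["range"] = fva["maximum"] - fva["minimum"]
print(fva.sort_values("range").head())
```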
Such global sensitivity analysis is useful for identifying which model parameters have the greatest impact on the model output, and for understanding the behavior of the model in response to changes in those parameters. However, the authors of the study chose a relatively complicated and computationally expensive method of GSA called Sobol variance-based sensitivity analysis. Sobol variance-based sensitivity analysis is a powerful tool for quantifying the contribution of individual parameters, and of interactions between parameters, to the variability of the model output. It is based on the decomposition of the variance of the model output into contributions from individual parameters, as well as combinations of parameters. This allows the authors to identify which parameters have the greatest impact on the output, and to quantify the degree to which interactions between parameters affect the model behavior. This method of GSA is computationally expensive and requires the development of a complex computational infrastructure. This may limit its applicability in some contexts, particularly for models that are computationally intensive or have a large number of parameters. Moreover, it may require specialized expertise to implement and to analyze the results. Despite these limitations, Sobol variance-based sensitivity analysis remains a powerful tool for GSA and can provide valuable insights into the behavior of complex models. It is important for researchers to carefully consider the trade-offs between computational cost and analytical power when selecting a method for GSA, and to choose a method that is well suited to the specific needs of their study. In response to the limitations of Sobol variance-based sensitivity analysis, a new approach has been proposed for estimating the sensitivity of the objective function to flux boundary values using Partial Rank Correlation Coefficient (PRCC) calculations [20]. The PRCC approach is based on calculating the partial correlation coefficient between the ranks of each parameter and the rank of the objective function value:

$$\gamma_j = \frac{\operatorname{Cov}\left(x_j - \hat{x}_j,\; y - \hat{y}\right)}{\sqrt{\operatorname{Var}\left(x_j - \hat{x}_j\right)\operatorname{Var}\left(y - \hat{y}\right)}},$$

where $\hat{y}$ and $\hat{x}_j$ are obtained from linear regression models over the remaining rank-transformed parameters:

$$\hat{x}_j = c_0 + \sum_{k \neq j} c_k x_k, \qquad \hat{y} = b_0 + \sum_{k \neq j} b_k x_k.$$

This approach provides a measure of the sensitivity of the objective function to each parameter while taking into account the interactions between parameters. Rank-transformed data are used to account for possible nonlinearity in the data. One advantage of the PRCC approach is that it does not require extensive coding and can be implemented using standard flux balance analysis (FBA) tools, such as the Cobrapy toolbox [24]. The calculation time for the PRCC approach depends on the number of available CPUs in the high-performance computing (HPC) cluster, as parallelization is applied at the level of flux boundaries. This approach is therefore computationally efficient and can be applied to large-scale models with many parameters. Another benefit of the PRCC approach is that the sensitivity coefficient is signed, allowing researchers to distinguish between parameters that positively or negatively influence the objective function. This provides additional insight into the behavior of the model and can help guide the selection of interventions or modifications to the system. To calculate the PRCC sensitivity coefficient, a set of random points in the parameter space is sampled using the Sobol low-discrepancy sequence, as described in previous work [25][26][27][28].
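A minimal NumPy/SciPy sketch of the PRCC formulas above follows: samples are rank-transformed, the other parameters are regressed out by least squares, and the residuals are correlated. The second helper uses the standard asymptotic t-test for a partial correlation coefficient; Marino et al. [20] also describe a permutation-based alternative.

```python
import numpy as np
from scipy import stats
from scipy.stats import rankdata

def prcc(X, y):
    """Partial Rank Correlation Coefficients.

    X: (n_samples, n_params) array of sampled parameter values
       (e.g. reaction bounds); y: (n_samples,) array of objective values.
    Returns one PRCC per parameter.
    """
    n, p = X.shape
    Xr = np.column_stack([rankdata(X[:, j]) for j in range(p)])
    yr = rankdata(y)
    gammas = np.empty(p)
    for j in range(p):
        # Regress x_j and y on all the *other* rank-transformed
        # parameters, then correlate the residuals.
        A = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
        ry = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        gammas[j] = np.corrcoef(rx, ry)[0, 1]
    return gammas

def prcc_p_value(gamma, n_samples, n_controlled):
    """Two-sided p-value from the t-statistic of a partial correlation."""
    df = n_samples - 2 - n_controlled
    t = gamma * np.sqrt(df / (1.0 - gamma ** 2))
    return 2.0 * stats.t.sf(abs(t), df)
```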
The PRCC sensitivity coefficient is then calculated as a partial correlation coefficient between each parameter and the objective function value, with the influence of the other parameters controlled for. Marino and co-authors [20] provide methods to estimate both the significance and the saturation of the PRCC sensitivity coefficient. The significance of the coefficient is determined by comparing its magnitude to the distribution of coefficients obtained from randomized permutations of the data. The saturation of the coefficient is a measure of how much of the variation in the objective function can be explained by the variation in the parameter value, with higher saturation indicating a stronger relationship between the parameter and the objective function. In summary, the PRCC approach provides a computationally efficient and flexible alternative to Sobol variance-based sensitivity analysis for conducting GSA in constraint-based models. Its ease of implementation and ability to provide signed sensitivity coefficients make it a valuable tool for studying the behavior of complex systems.

Materials and Methods

The metabolic network with m metabolites and r reactions is described by an m × r stoichiometry matrix, N. The (i, j)-th entry of N, n_ij, is the stoichiometric coefficient of the i-th metabolite in the j-th reaction. Any reaction flux vector v that satisfies Nv = 0 contains reaction fluxes such that the system is in a steady state. In Flux Balance Analysis (FBA) [18], an optimization problem is solved to identify an optimal solution vector $v^o$ such that

$$v^o = \arg\max_{v}\; w^{T} v \quad \text{subject to} \quad Nv = 0,\; v_l \le v \le v_u,$$

where w is the objective coefficient vector and $v_l$ and $v_u$ are the reaction bounds. We are interested in estimating the sensitivity of the objective function to the values of the reaction boundaries. There is a special type of reaction in constraint-based modelling called 'boundary reactions', which usually describe the exchange of metabolites between the internal 'cell' and the external 'environment'. Our approach consists of three steps: 1. Define the parameter space: for non-boundary irreversible reactions only one parameter, $v_u$, is created; for reversible and boundary reactions two parameters, $v_l$ and $v_u$, are created for each reaction. 2. Generate a set of quasi-random low-discrepancy points in the parameter space. Update the parameters (reaction bounds) and find the optimal objective value for each point in the parameter space. 3. Calculate the Partial Rank Correlation Coefficient (PRCC) for each parameter and the objective value. The statistical significance of the PRCC value is estimated as described by Marino et al. [20]. The sufficiency of the sample size for reliable PRCC estimation is controlled by the top-down coefficient of concordance (TDCC): when the TDCC between PRCC vectors calculated at different sample sizes exceeds the threshold of 0.9, the sample size is considered sufficient for analysis.
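Steps 1 and 2 of the procedure above can be sketched as follows with Cobrapy and SciPy's Sobol sampler. This is a simplified single-process sketch: the SBML file name is a placeholder, the bound ranges (0 to 10 in arbitrary flux units) are illustrative, and the chunked HPC parallelization described in the next section is omitted.

```python
import numpy as np
from scipy.stats import qmc
from cobra.io import read_sbml_model

model = read_sbml_model("iEC1372_W3110.xml")  # placeholder file name

# Step 1: one parameter per irreversible reaction (upper bound), two per
# reversible or boundary reaction (lower and upper bound).
params = []
for rxn in model.reactions:
    if rxn.reversibility or rxn.boundary:
        params.append((rxn.id, "lower"))
    params.append((rxn.id, "upper"))

# Step 2: quasi-random low-discrepancy points in the parameter space.
n_points = 8192                      # one chunk, as in the text below
sampler = qmc.Sobol(d=len(params), scramble=True)
unit = sampler.random(n_points)

objectives = np.empty(n_points)
for i, point in enumerate(unit):
    with model:                      # bound changes revert on exit
        for u, (rxn_id, side) in zip(point, params):
            rxn = model.reactions.get_by_id(rxn_id)
            if side == "upper":
                rxn.upper_bound = 10.0 * u    # illustrative 0..10 range
            else:
                rxn.lower_bound = -10.0 * u   # illustrative -10..0 range
        objectives[i] = model.slim_optimize() # NaN if infeasible
```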
The toy model (Figure 1) was created with Cobrapy v.0.25.0 [24] and saved as JSON for drawing the model diagram with the Escher web interface [29], and in SBML format [30] for further simulations. The E. coli str. K-12 substr. W3110 whole-genome metabolic model (WGMM) was taken from the BiGG database [12] in SBML format. For the network distance calculations, all metabolites participating in more than 20 reactions in any compartment, except amino acids, succinate, PEP, and fructose-6-phosphate, were removed. Network distance was calculated as the number of reaction steps between the node of interest and the objective reaction. Calculations were performed with the R package 'igraph' version 1.3.5 [31]. All simulations were performed on the OIST HPC cluster with 8 CPUs and 64 GB of memory per job. Sobol point generation, application to the reaction boundaries, and optimization of objectives were performed in chunks of 8192 per job. Calculations of the PRCC sensitivity coefficients were performed on 262,144 Sobol points in chunks of 10 features per job. Convergence of the calculation was controlled by the TDCC between consecutive datasets differing by 8192 Sobol points. The TDCC value between the 262,144- and 253,952-point datasets was 0.909. The average execution time was 30 min per job for the Sobol point calculations and 7 h per job for the PRCC calculation.

Results

Techniques such as Flux Balance Analysis (FBA) utilize the stoichiometry matrix of the reaction system to estimate steady-state fluxes that are compatible with a viable state of the system. In FBA, an objective is optimized over the steady-state flux vectors, usually by maximizing the flux through certain reactions. The behavior of constraint-based models is controlled by parameters such as the reaction flux boundaries. Our approach estimates the sensitivity of the model's objective function to these boundary values.

FBA-PRCC Can Identify the Backbone of the Flux-Related Network

To construct the parameter space, we consider that reversible reactions have two boundaries, whereas irreversible reactions have only one, with the lower bound usually set to zero.
Boundary reactions, which describe the transport of matter through the model boundary, require special treatment. To evaluate the sensitivity of the objective function to the presence of various nutrients, all boundary reactions are considered reversible, contributing two parameters to the parameter space. As an example, we consider the toy model described in the Kelk paper [21] (Figure 1), consisting of 27 reactions, of which two are boundary reactions and nine are reversible; EX_Y is the objective reaction, and there are 37 parameters in our GSA. The parameter space is sampled with a Sobol low-discrepancy sequence, which is designed for uniform coverage of multidimensional spaces with quasi-random points. With 20 K points, we obtain a stable estimation of the PRCC coefficients (Table 1). As expected, the upper boundaries of reactions R5, R12, and R22 are among the most sensitive parameters, and the upper bounds of reactions R8 and R11 control the reaction module between R5 and R12. The presence of the reversible loop R19-R20-R21-R14 renders reactions R15 and R18 less important for the EX_Y flux. As expected from the model structure, the EX_Y flux appears to be sensitive to none of the lower-bound parameters.

FBA-PRCC Can Identify Controlling Steps in the Flux-Based Network

For a more biologically relevant example, we calculated the sensitivity of the lysine production pathway in E. coli str. K-12 substr. W3110 (BiGG iEC1372_W3110). Using lower bounds for reversible and boundary reactions and upper bounds for all reactions in the model, we obtained a total of 3730 parameters. The objective coefficient was set to one for the lysine exchange reaction EX_lys__L_e_u. We simulated over 262 K points to obtain reliable values for the PRCC coefficients. Out of 3730 parameters, 55 were significant at the 1% threshold, with 19 lower and 36 upper bounds (Supplementary Table S1). The plot of PRCC against the network distance from the objective reaction in Figure 2 shows that the highest sensitivity coefficients correspond to the last three steps of lysine production: the exchange reaction, transport to the extracellular compartment, and transport to the periplasmic compartment.

Figure 2. Network distance vs. PRCC value. For each boundary value, the network distance is calculated as the number of reaction steps between the reaction controlled by the boundary value and the objective reaction EX_lys__L_e_u. To avoid the influence of hub molecules on the network distances, standard currency metabolites, such as water, ATP, etc., were excluded from the network before the distance calculations.
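The network distances in Figure 2 were computed with the R package 'igraph'; for illustration, an equivalent sketch in Python using networkx is given below. The bipartite construction and the explicit currency-metabolite filter are assumptions about the procedure, which the text describes only briefly.

```python
import networkx as nx

def reaction_distances(model, target_rxn_id, currency=frozenset()):
    """Number of reaction steps from every reaction to a target reaction.

    Builds a bipartite metabolite-reaction graph while skipping currency
    metabolites (water, ATP, ...) that would otherwise shortcut the
    distances. Two graph edges (reaction-metabolite-reaction) equal one
    reaction step, hence the division by two.
    """
    g = nx.Graph()
    for rxn in model.reactions:
        for met in rxn.metabolites:
            if met.id not in currency:
                g.add_edge("R_" + rxn.id, "M_" + met.id)
    lengths = nx.single_source_shortest_path_length(g, "R_" + target_rxn_id)
    return {node[2:]: d // 2 for node, d in lengths.items()
            if node.startswith("R_")}

# Hypothetical usage with two currency metabolites excluded:
# dists = reaction_distances(model, "EX_lys__L_e", currency={"h2o_c", "atp_c"})
```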
Figure 3 shows that the majority of reactions with positive PRCC values form the backbone of the lysine biosynthesis pathway. Experimental analysis of lysine biosynthesis in E. coli has shown that overexpression of diaminopimelate decarboxylase (lysA) and aspartate kinase (lysC) increased lysine titers by 78.1% and 123.6%, respectively [36]. In our model, these correspond to the reactions DAPDC and ASPK, respectively. The PRCC value for the upper boundary of the irreversible DAPDC reaction is 0.12 (p-value < 1 × 10⁻¹⁶), and for the reversible ASPK reaction, the PRCC coefficients for its upper and lower bounds are 0.0037 (p-value 5.9%) and −0.004 (p-value 3.9%), respectively. Although FBA-PRCC identifies the reactions important for lysine production and their contributions, the ordering differs from the experimental data. For instance, DAPDC has a higher PRCC value but a lower increase in lysine production. However, it is important to note that lysine biosynthesis is highly regulated in the cell, as mentioned in [36,37], and accurately describing the regulatory relationships in FBA models is challenging. The majority of the 15 parameters that are negatively correlated with lysine production relate to redox balance, either by decreasing H+ production or by shifting the NAD/NADH balance.

FBA-PRCC Is Computationally Efficient

Unlike the recently published Sobol variance-based sensitivity analysis for conducting GSA in constraint-based models [23], the FBA-PRCC approach does not require the development of special low-level software. All its steps were implemented in Python and R using the standard Cobrapy software [24] for FBA calculations and the R 'sensitivity' package [33]. The calculations of the PRCC values for each flux are independent, so we used the OIST HPC cluster to run them in parallel. Sobol point generation, application to the reaction boundaries, and optimization of objectives were performed in chunks of 8192 per job. Calculations of the PRCC sensitivity coefficients were performed on 262,144 Sobol points in chunks of 10 features per job. The average execution time was 30 min per job for the Sobol point calculations and 7 h per job for the PRCC calculation.

Discussion

In this work, we have presented a new, fast, and parallelizable framework for estimating the sensitivity coefficients of reaction boundaries in constraint-based models.
We demonstrated the performance of our framework using a 27-reaction toy model and the whole-genome metabolic reconstruction of E. coli metabolism. This is the first time that sensitivity coefficients have been calculated for all boundary values in a whole-genome model. Previous analyses, such as those published in [23], have focused on only a small subset of exchange reactions. One area where our approach could be particularly useful is in identifying new antibacterial drug targets. We can do this by identifying reaction boundaries that negatively correlate with biomass production and then finding inhibitors for these reactions. Additionally, we can model combination therapy by analyzing the FBA-PRCC of perturbed models in which certain reactions are inhibited or blocked, similar to the analysis in our previous work, Lebedeva et al. [25]. In the future, we plan to expand the application of our approach by exploring its potential in engineering chimeric bacterial cells or microbial communities. To achieve this, we plan to combine our FBA-PRCC method with our metagenomic analysis platform, ASAR [38]. By doing this, we hope to uncover new insights into microbial metabolism and how it can be manipulated for various applications. Overall, we believe that our approach has the potential to be a powerful tool for both fundamental research and practical applications in biotechnology and medicine.
Performance evaluation of the IRIS XL-220 PET/CT system, a new camera dedicated to non-human primates

Background Non-human primates (NHP) are critical in biomedical research to better understand the pathophysiology of diseases and develop new therapies. Based on their translational and longitudinal abilities along with their non-invasiveness, PET/CT systems dedicated to non-human primates can play an important role in future discoveries in medical research. The aim of this study was to evaluate the performance of a new PET/CT system dedicated to NHP imaging, the IRIS XL-220 developed by Inviscan SAS. This was performed based on the National Electrical Manufacturers Association (NEMA) NU 4-2008 standard recommendations, characterizing the spatial resolution, scatter fraction, sensitivity, count rate, and image quality of the system. In addition, the system was evaluated under real conditions with two NHP using 18F-FDG and (-)-[18F]FEOBV, which targets the vesicular acetylcholine transporter, and one rat using 18F-FDG. Results The full width at half maximum obtained with the 3D OSEM algorithm ranged between 0.89 and 2.11 mm in the field of view. Maximum sensitivities in the 400-620 keV and 250-750 keV energy windows were 2.37% (22 cps/kBq) and 2.81% (25 cps/kBq), respectively. The maximum noise equivalent count rate (NEC) for a rat phantom was 82 kcps at 75 MBq and 88 kcps at 75 MBq for energy windows of 250-750 and 400-620 keV, respectively. For the monkey phantom, the maximum NEC was 18 kcps at 126 MBq and 19 kcps at 126 MBq for energy windows of 250-750 and 400-620 keV, respectively. The IRIS XL provided excellent image quality in non-human primates and rats using 18F-FDG. The images acquired using (-)-[18F]FEOBV were consistent with those previously reported in non-human primates. Conclusions Taken together, these results show that the IRIS XL-220 is a high-resolution system well suited for PET/CT imaging in non-human primates.

Background

Positron emission tomography (PET) is undoubtedly an essential tool for understanding the pathophysiological mechanisms related to various disorders and for the development of new therapeutic approaches. Based on its non-invasiveness and translational capabilities [1], it is widely used for clinical and preclinical studies in neuropsychiatry, oncology and cardiology [2][3][4][5][6][7][8][9]. While most preclinical studies are performed in rodents, working with non-human primates (NHP) is crucial for research related to the biology of diseases and the development of new health care technologies. Preclinical research on rodents is well suited to answering basic scientific questions, but is limited by rodents' anatomical and functional differences from humans, especially for the brain, immune system and metabolism [10]. Even though NHP are estimated to account for less than 1% of the animals used in preclinical research, their impact is huge [10]. For example, in brain research, NHP models offer a unique opportunity to explore complex functions based on the strong similarity between the NHP and human brains [11][12][13][14]. As NHP have a definite advantage in a more translational approach to the clinical world, there is a need for preclinical PET systems dedicated to the imaging of NHP. The last two decades have seen the emergence of new high-performance preclinical PET systems, but almost exclusively dedicated to the imaging of small animals. To date, very few PET systems dedicated to NHP imaging have been evaluated, with only one currently on the market [15,16].
PET systems dedicated to NHP offer better performance in terms of spatial resolution and detection efficiency compared to clinical systems, notably thanks to dedicated high-resolution detectors and a more suitable gantry diameter. Here, we evaluated the IRIS XL-220 PET/CT system dedicated to NHP imaging, developed and commercialized by the company Inviscan SAS (Fig. 1). The present paper evaluates the performance parameters of the PET component of the IRIS XL-220 PET/CT system, based on the National Electrical Manufacturers Association (NEMA) NU 4-2008 standard recommendations [17]. The NEMA standard enables assessment of the system's characteristics, such as the spatial resolution, scatter fraction, sensitivity, count rate, and image quality. To illustrate its capabilities for in vivo molecular imaging, we also performed a PET scan on an NHP and a rat using 18F-FDG, which is widely used for PET imaging, along with another PET scan on an NHP using a reference radiotracer specifically targeting the brain vesicular acetylcholine transporter, (-)-[18F]FEOBV.

Methods

This paper presents the characterization of the IRIS XL-220 PET system using a single detector ring, leading to an axial field of view of 45 mm. Based on the same detector technology as the IRIS PET system [18], the IRIS XL-220 PET detector ring consists of 16 modules arranged in a single ring. Each module can acquire coincidences with the nine opposing modules. The PET field of view (FOV) has axial and transaxial coverages of 45 and 170 mm, respectively. Each module consists of a 27 × 27 matrix of 1.6 × 1.6 × 16 mm³ cerium-doped lutetium yttrium orthosilicate crystals (LYSO:Ce) with a pitch of 1.68 mm. Each matrix is directly coupled to a multi-anode photomultiplier tube (H12700A, Hamamatsu Photonics K.K., Japan). Each module is completely independent of the others. The IRIS XL-220 PET data acquisition system is similar to the DAQ of the IRIS PET/CT scanners. The CT system consists of a micro-focus X-ray tube with 80 W and a flat-panel CMOS detector. The CT covers a FOV of 170 mm in the transaxial and 85 mm in the axial direction with a single acquisition. The CT can achieve 90 μm resolution at 10% MTF and ultrafast scanning in less than 8 s. We acquired experimental performance data following the NEMA NU 4-2008 standard. Unless specified otherwise, default data preprocessing settings were used (i.e., energy window = 250-750 keV, coincidence window = 5.2 ns) and data were processed using the Single Slice Rebinning (SSRB) algorithm, as proposed by the NEMA procedure.

Spatial resolution

According to the NEMA standard, a 0.55 MBq 22Na point source (1 cm³ cubic source, Eckert and Ziegler Isotope Products, Germany) was used to perform the spatial resolution measurements. Measurements were acquired with the source located at the axial center of the FOV, and at one-fourth of the axial FOV from the center, at the following radial distances from the center: 0, 5, 10, 15, 25, 50, and 75 mm. Data were acquired for 120 s and reconstructed using the 3-dimensional ordered-subset expectation maximization (3D OSEM) algorithm with 8 iterations and 8 subsets. The IRIS XL-220 PET system uses system matrices to reconstruct the data.
The user can choose either to reconstruct the data acquired in the entire primate FOV (170 mm transverse diameter) with voxels of 0.84 × 0.84 × 0.84 mm³, or to reconstruct the data acquired in a transverse FOV of 80 mm in diameter, corresponding to the rodent FOV, with voxels of 0.42 × 0.42 × 0.84 mm³. Resolution measurements corresponding to radial distances of 0, 5, 10, 15, and 25 mm were reconstructed using the thinner voxels (within the rodent FOV), and those at radial distances of 50 and 75 mm were reconstructed using the larger ones (within the primate FOV). The full width at half maximum (FWHM) and full width at tenth maximum (FWTM) of the point source response function along the radial, tangential, and axial directions were determined following the NEMA procedure. Results were not corrected for the positron range effect or the source size.

Scatter fraction and count rate measurements

The purpose of these measurements was to evaluate the scanner performance in terms of counting rate capability, scatter fraction (SF), and random coincidence rate. According to the system FOV, the measurements were performed with a rat-like phantom (150 mm length, 50 mm diameter) and a monkey-like phantom (400 mm length, 100 mm diameter) made of a high-density polyethylene cylinder. Both phantoms have a cylindrical hole (3.2 mm diameter) drilled parallel to the central axis. The rat-like and monkey-like phantoms were filled with 120 MBq and 270 MBq of 18F-fluoride solution, respectively. Data were acquired for 1 min every 15 min using both phantoms. Data were processed without dead-time, decay, attenuation, or random-count corrections. Two energy windows of 250-750 keV and 400-620 keV were used to process the data from both phantoms.

Sensitivity

Sensitivity measurements were performed using the same source described in the spatial resolution measurements section, according to the NEMA recommendations. The source was moved along the scanner's axial FOV and data were acquired with a 1 mm step for 2 min at each position.

Image quality study

The NU 4-2008 standard prescribes the use of the specific NEMA Image Quality phantom (IQP). The IQP is composed of a main phantom body that contains a fillable cylindrical chamber (diameter = 30 mm × length = 30 mm) and a solid part (20 mm in length) into which five fillable rods with various diameters (1, 2, 3, 4, and 5 mm) have been drilled. A lid attached to the uniform region of the phantom supports two cold-region chambers. These regions are hollow cylinders (15 mm in length and 8 mm in inner diameter with 1 mm wall thickness), to be filled with non-radioactive water and air, respectively. The phantom was filled with 3.7 MBq of 18F-fluoride solution and placed in the central part of the FOV. The duration of the scan was set to 20 min. PET images were reconstructed with the 3D OSEM algorithm (8 iterations, 8 subsets), including random, radioactive decay and dead time corrections. No attenuation correction was performed. We followed the NEMA NU 4-2008 standard for the analysis of the IQP data to compute the recovery coefficient (RC) for each rod, the spillover ratio, and the signal-to-noise ratio.

In vivo experiments

Rat experiment

One male rat (262 g, 10 weeks old; Charles River) was imaged for 15 min, 1 h after an intraperitoneal injection of 18.9 MBq of 2-deoxy-2-(18F)fluoro-D-glucose (18F-FDG). Anesthesia was induced and maintained with 1.5% isoflurane in medical air with a calibrated vaporizer during the 1 h biodistribution.
The rat was then euthanized for the PET acquisition, with the brain centered in the system's FOV. Data were reconstructed into a 205 × 205 × 54 three-dimensional volume using the iterative 3D OSEM algorithm with 8 iterations and 8 subsets. The voxel size was equal to 0.84 mm in all directions. The calibration factor was included in the normalization file and applied during the reconstruction process. PET data were fully corrected for random coincidences, radioactive decay, and dead time. No scatter correction was applied. The PET acquisition was followed by a 576-projection computed tomography (CT) scan at 80 kV, leading to a 98 s CT acquisition. This experiment was performed in accordance with European Institutes of Health Guidelines regarding the care and use of animals for experimental procedures and was approved by the Alsace Regional Ethics Committee for Animal Experimentation (approval identification: APAFIS#1531).

NHP experiments

Two NHP experiments were performed in accordance with EU guidelines (EU Directive 2010/63/EU for animal experiments) and approved by a local ethics committee (Comité d'Ethique en Experimentation Animale-Pays de la Loire-France; approval identification: APAFIS#27750). Two Macaca fascicularis females weighing 2.9 and 4.5 kg were used for this study (Fig. 2). Anesthesia was induced by an intramuscular injection of a mixture of ketamine/medetomidine (3 mg/kg and 125 μg/kg, respectively). Each NHP was placed on a heating blanket and intubated. Anesthesia was then maintained using Vetflurane (1%; Coveto, France), and each NHP was placed on the bed of the system, positioned on its back with the head taped to limit its movements. A catheter was inserted in the saphenous vein for the radiotracer injection. During the whole experiment, the NHP were monitored for body temperature, heart rate, respiratory rate, and oxygenation. In a first experiment, the whole-brain uptake of 18F-FDG (Curium, France) was evaluated using two-bed PET and CT scans performed with the center of the field of view positioned at the center of the brain. The NHP (4.5 kg) was injected with 18F-FDG. The CT scan was performed considering 576 projections over 360 degrees, at 80 kV and 0.9 mA with a 90 ms exposure time, leading to a total acquisition time of 104 s. The three-dimensional image was reconstructed with beam hardening correction and ring artifact pre-correction, resulting in 1190 slices with a 160 × 160 × 160 μm voxel size. In a second experiment, another NHP (2.9 kg) was injected intravenously with 92 MBq of [18F]fluoroethoxybenzovesamicol ((-)-[18F]FEOBV), a reference radiotracer specifically targeting the vesicular acetylcholine transporter [19,20]. (-)-[18F]FEOBV was synthesized using an adapted version of the original process [19]. The corresponding enantiomerically pure precursor of (-)-[18F]FEOBV, (-)-TEBV, was purchased from ABX advanced biochemical compounds GmbH. A TRACERlab FXFN Pro (General Electric Healthcare) was used as the synthesizer. [18F]fluoride was purchased from Curium Pharma. Sep-Pak Accell Light QMA, t-C18 and Light t-C18 cartridges were purchased from Waters. High-performance liquid chromatography quality control (QC) was performed on an Ultimate 3000 system equipped with a UV detector and a radioactivity detector (PET metabolite, Bioscan). The columns used for purification and QC were Luna Phenyl Hexyl (Phenomenex), with the following specifications, respectively: 10 µ, 4.6 × 250 mm and 5 µ, 4.6 × 250 mm.
Under these conditions, we obtained pure (-)-[18F]FEOBV in 79 min, with a radiochemical yield of 62.6% and a molar activity of 307.3 GBq/µmol. Imaging was performed using the same procedure as previously described, with a 98 s CT scan performed to place the brain in the center of the field of view, followed by a PET dynamic list-mode acquisition starting 1 min before the (-)-[18F]FEOBV injection and lasting for 121 min. PET images were reconstructed as previously described with the following frames: 1 × 60 s, 18 × 30 s, 11 × 60 s, 10 × 10 min, corresponding to a total of 40 frames. A one-bed CT scan was also performed using the parameters previously described.

Spatial resolution

The FWHM and FWTM at the axial center and at one-quarter from the axial center obtained with the 3D OSEM algorithm are presented in Tables 1 and 2.

Scatter fraction and count rate measurements

Figure 3 presents the results obtained using both the rat-like and the primate-like phantoms. For the rat phantom, the maximum NEC is 82 kcps at 75 MBq for an energy window of 250-750 keV, with a scatter fraction equal to 20% at low activity. As recommended by the NEMA procedure, the peak true count rate under these conditions is 147.04 kcps at 82 MBq. Reducing the energy window to 400-620 keV increased the NEC to 88 kcps at 75 MBq, decreased the scatter fraction to 10%, and decreased the peak true count rate to 119.49 kcps at 82 MBq. For the monkey phantom, the maximum NEC is 18 kcps at 126 MBq for an energy window of 250-750 keV, with a scatter fraction equal to 40% at low activity. Under these conditions, the peak true count rate is 55.24 kcps at 238 MBq. Reducing the energy window to 400-620 keV increased the NEC to 19 kcps at 126 MBq, decreased the scatter fraction to 25%, and decreased the peak true count rate to 43.12 kcps at 138 MBq. Figure 4 presents the sensitivity profiles along the axial axis. The maximum absolute sensitivities in the 400-620 keV and 250-750 keV energy windows were 2.37% (corresponding to 22 cps/kBq) and 2.81% (corresponding to 25 cps/kBq), respectively.

Table 5 Report on the accuracy of corrections

Region                  SOR    %STD
Water-filled cylinder   0.17   12.74
Air-filled cylinder     0.05   14.87

Fig. 5 Coronal, sagittal, and axial slices of (column A) the CT, (column B) the PET and (column C) the co-registered PET/CT images of the 18F-FDG rat brain study

Image quality study

The RCs are reported in Table 3. In addition, we measured a standard deviation of 9.58% in the uniform-activity region (Table 4), and the spillover ratios for water and air were 0.17 (with 12.7% SD) and 0.05 (with 14.8% SD), respectively (Table 5).

In vivo experiments

The cerebral uptake of 18F-FDG in the rat and NHP is shown in Figs. 5 and 6, along with registrations with their related CT scans. The 18F-FDG binding was highest in the striatum and frontal cortex, compared to the lower SUV observed in the occipital cortex and cerebellum (SUV of 3.0 ± 0.5, 3.0 ± 0.8, 2.7 ± 0.6, and 2.8 ± 0.6 in the striatum, frontal cortex, occipital cortex and cerebellum, respectively). The NHP injected with (-)-[18F]FEOBV showed high binding of the radiotracer in the different parts of the striatum, the thalamus and the geniculate nuclei, compared to the lower binding observed in the cerebellum (Fig. 7).
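For reference, the noise equivalent count rate reported above is conventionally derived from the true (T), scattered (S), and random (R) coincidence rates as NEC = T²/(T + S + R) (variants differ in how the randoms term is weighted), and the scatter fraction as SF = S/(T + S). The sketch below illustrates these standard NEMA-style formulas with hypothetical rates; it is not code from the study.

```python
def nec(trues, scattered, randoms):
    """Noise equivalent count rate (kcps) from coincidence rates (kcps)."""
    total = trues + scattered + randoms
    return trues ** 2 / total if total > 0 else 0.0

def scatter_fraction(trues, scattered):
    """Scatter fraction, typically evaluated at low activity."""
    return scattered / (trues + scattered)

# Hypothetical rates illustrating the shape of the calculation
print(round(nec(120.0, 30.0, 25.0), 1))       # kcps
print(round(scatter_fraction(8.0, 2.0), 2))   # 0.20, i.e. 20%
```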
Discussion

In recent decades, only a few PET systems dedicated to NHP imaging have emerged, such as the MicroPET Focus 220 small animal PET scanner (formerly supplied by Concorde/Siemens), more recent systems such as the LFER 150 PET/CT device (Mediso Ltd.), and the mini-EXPLORER with its long axial field of view [15,16]. Researchers have also used clinical PET systems or PET systems dedicated to imaging the human brain [22,23]. Although clinical systems exhibit good performance and allow the imaging of NHPs, dedicated scanners make it possible to achieve better performance, more in line with the subject size. As for preclinical PET systems dedicated to small animals, the performance of primate systems is assessed based on the NEMA NU 4-2008 standard recommendations.

Fig. 6: Coronal, sagittal, and axial slices of (column A) the CT, (column B) the PET, and (column C) the co-registered PET/CT images of the cerebral uptake of 18F-FDG of one Macaca fascicularis.

The volumetric resolution, corresponding to the product of the radial, tangential, and axial FWHM, was chosen to ensure a fair comparison with other NHP-dedicated PET systems. The values measured for the IRIS XL-220 PET with the point source reconstructed with the 3D OSEM algorithm were better than the resolutions reported for both the MicroPET Focus 220 and the LFER 150 PET/CT, which were obtained using 2D FBP (1.87 vs 6.73 and 5.17 mm3 for the IRIS XL-220, LFER 150 PET/CT, and MicroPET Focus 220, respectively). The large differences in spatial resolution are readily explained by the use of different reconstruction algorithms. Indeed, studies reported that using 2D ordered subset expectation maximization with the Focus 220 made it possible to visualize rods of 1.6 mm diameter [24,25]. Similarly, Sarnyai et al. [16] performed a small-animal Derenzo phantom study indicating that rods of 1 mm diameter can be distinguished with the LFER 150 when using an iterative 3D reconstruction method. In addition, it is important to mention that the spatial resolution values obtained at radial distances greater than 40 mm from the center of the FOV were measured in images reconstructed with a larger voxel size than the others. According to the NEMA procedure, the resolutions must be determined by linear interpolation between adjacent pixels at half (or one-tenth) the maximum value of the response function; the larger voxels therefore directly affect these values and may introduce a bias relative to values obtained from images reconstructed with thinner voxels.

The absolute sensitivity obtained with the IRIS XL-220 PET was substantially higher than that reported for the Focus 220 [24] and slightly lower than that reported for the LFER 150 PET/CT. However, the sensitivity values reported in the present paper were obtained with a single detector ring; the absolute sensitivity of the IRIS XL-220 PET will be significantly increased in a two-detector-ring configuration. Because a fair comparison of new preclinical PET systems based on NEMA standard evaluations alone has been questioned [26], we further studied the performance of the IRIS XL-220 PET system in real conditions using two distinct radiotracers. The images obtained from the 18F-FDG and (-)-[18F]FEOBV experiments were consistent with the regional distributions reported in the literature [20,27,28]. This revealed that the IRIS XL-220 PET provides excellent image quality in NHPs, but also in rats, given the spatial resolution of the system.

Conclusion

The IRIS XL-220 PET/CT system is a high-resolution system mainly dedicated to NHP imaging.
With performance comparable to its rare competitors and a large transverse field of view, the IRIS XL-220 PET/CT system allows a wide range of preclinical studies. Further studies are currently being conducted to assess the system's capabilities in terms of high-throughput imaging, consisting of imaging multiple rodents simultaneously for quantification purposes. The exceptional spatial resolution coupled with the large CT FOV also makes the IRIS XL an ideal system for ex vivo studies on large samples.
4,774
2022-02-05T00:00:00.000
[ "Physics" ]
New Mechanical Knee Supporter Device for Shock Absorption

Conventional knee supporters generally reduce knee pain by restricting joint movement; in other words, there have been no mechanical knee supporters that function powerfully. Considering this problem, we first devised a device in which a spring is inserted into the double structure of a cylinder and piston, and a braking action is applied to the piston. The mechanism retracts when the knee angle exceeds a certain level. Next, the knee and the device were modeled, and the dynamic characteristics of the device were investigated to find elements effective for knee shock absorption. Although various skeletal and muscular structures of the knee have been studied, we kept the configuration as simple as possible in order to isolate the effective elements of the device. A shock-absorbing circuit was devised, and air was used as the working fluid to allow smooth knee motion except during shock. Increasing the spring constant effectively reduced the knee load.

Introduction

Conventional knee supporters are generally made of elastic cloth or tape-like materials worn on the knee to restrict joint movement and reduce knee pain [1-5]. Others insert nylon rods in the direction of the bone axis or add a spring function behind the knee, but there have been no mechanical knee supporters that function as powerfully as, for example, a bicycle suspension. Dampers used in automobiles, motorcycles, and other vehicles use hydraulic pressure, but as far as the authors have been able to determine, none have been applied directly to the body structure. In addition, the use of oil has the disadvantages of being easily contaminated and, because oil is difficult to compress, of removing small impacts poorly. Considering these problems, the authors decided to develop a device using air pressure that can absorb knee shocks more powerfully than conventional devices during normal exercise such as walking, stair climbing, mountain climbing, and jogging. A below-the-knee body model of such a simple knee supporter has already been proposed in the fields of agriculture and nursing care by Shiraishi et al. However, that work was not sufficient as a basis for a new simplified supporter device: only a basic model was developed, and it was considered insufficient as a practical product in light of an actual human body model [6]. It was also thought that if this product could be provided safely and inexpensively, it would be useful not only in agriculture and nursing care, but also for preventing knee failures during light exercise in middle-aged and older people, and could be applied to lifelong sports as well [7,8].
This knee supporter is not an advanced power-assist device of the kind that has been widely researched and developed [9-12]. The reason for not using a power-assist device is that such devices have shortcomings: they impose forced motion on the body structure, causing excessive load and discomfort. They also take time to put on, are difficult to use casually, and require wearing a device with a different function for each movement. Therefore, we developed a mechanical knee supporter that is as inexpensive as possible, has a simple structure, and can handle high-speed movements, replacing conventional fabric knee supporters and power-assist devices. Humans can effectively reduce walking impact by flexing the knee after impact [13-15], and the stiffness and damping of the human leg are crucial for reducing joint and limb damage during impact [16,17]. The proposed supporter is secured with a belt, so it can be applied easily and does not load the body structure. Since its individual functions work according to each movement, it eliminates the trouble of wearing different devices, and because the device is small, it can be worn without discomfort. It is operated not by an actuator such as a motor, but only by a solenoid valve and a piezoelectric element, so it consumes little electricity. The basic structure consists only of dampers and springs, which are activated as needed by signals from sensors, and the mechanism allows the knee to be bent to a large extent (~180°). Conventional springs can reduce displacement but cannot absorb shock; in this study, we attempted to overcome this limitation by combining a spring and a damper. Shock-absorbing soles exist to absorb shock at the ankle, but they are difficult to adapt to different shocks, and supporters that restrict movement, such as cloth supporters, have no shock-absorbing function. Thus, we clarified the problems in the configuration of conventional device elements and attempted to develop a completely new device that solves them.

Concept

The concept is a device that can absorb knee shocks more powerfully than conventional ones during walking, stair climbing, mountain climbing, jogging, and similar activities. A simple knee supporter device is shown in Figure 1.

Development Objectives

The following four goals were set for the development of this product.

(1) The product is activated only when a shock is applied during knee bending, such as when ascending or descending stairs or running. It does not operate during normal, slow bending of the lower limb, nor when the knee is extended, so stretching exercises can be performed without discomfort (Figure 2a).
(2) As a mechanism to absorb a sudden load applied to the knee, a spring is used for the first large load, and air pressure for the subsequent loads.
(3) When the knee is bent more than 90°, the product is retracted, and the user can sit upright (Figure 2b).
(4) The actuator is operated only for short periods, unlike devices that run a motor continuously; it consumes little electricity and can be operated by a small battery.
Figure 2. (a) The product is activated only when a shock is applied during knee bending, such as when climbing stairs or running; it does not operate during normal slow leg-bending motions. (b) When the knee is bent more than 90 degrees, the product is stowed, and the user can sit upright.

Shock Mitigation and Absorption Mechanism

Shock mitigation and absorption at the knee are performed by the cylinder system, the pneumatic circuit, and the control unit. The device is shown mounted in Figure 3. As shown in the figure, the knee can be bent at various angles, eliminating the need for re-mounting each time; the device can be attached quickly and used for various purposes. The bending is also soft, so even small shocks can be absorbed.

Cylinder System for Shocks

As shown in Figure 4, the cylinder-piston section of the newly devised system is double, with a spring inserted between the two pistons. A piezoelectric element is attached to the piston section on the side opposite the cylinder rod; when the piezoelectric element is activated, a braking action is applied to the piston, and the spring force then acts as the rod is pushed down. The piezoelectric element moves in unison with the piston. The spring also functions to reduce displacement. When the piezoelectric element is not activated, the piston and spring move in unison and function in the same manner as a normal cylinder. Piezoelectric elements have the advantages of high response speed and large force per volume, but their voltage-to-displacement ratio is small; in this case, the displacement was 40 μm at 140 VDC.

Pneumatic Circuit

When an acceleration sensor attached to the sole detects a shock, the piezoelectric element and the pneumatic valve are activated. The valve power supply is 24 V, for which a smartphone battery can be substituted. The piezoelectric element requires voltage only for 0.2-0.3 s, so power consumption can be kept very low. The cylinder (damper) contains a spring and a piezoelectric element. A pneumatic circuit was devised that not only allows the cylinder to move freely, but also activates it only in the event of a shock; this circuit reduces the resistance to actuating the cylinder except when a shock is applied. Figure 5 shows the circuit when no shock is applied: the air flowing into and out of the cylinder passes through the open circuit of the solenoid switching valve without passing through the throttle valve, allowing the cylinder rod to move smoothly. (If oil were used as the cylinder's working fluid, the cylinder resistance during normal operation, when no shocks are applied, would increase.) Figure 6 shows the circuit when a shock is applied. First, when the acceleration sensor detects a shock, the piezoelectric element is activated to fix the piston in the cylinder for a certain period, and the initial large shock is absorbed by the spring. At the same time, the solenoid valve is closed. Subsequently, the piezoelectric element is disengaged and the cylinder rod is lowered; with no path for air to escape from the head side, the shock is received softly by the compressibility of the air.
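The event sequence above (shock detected, then piezo brake on and valve closed, then brake released after a fixed hold time) is essentially a small state machine. The following Python sketch illustrates that logic only; the threshold value, hold time, and I/O functions (read_sole_accelerometer, set_piezo, set_valve) are hypothetical placeholders, not the authors' firmware.

```python
import time

SHOCK_THRESHOLD_G = 3.0   # assumed trigger level for the sole accelerometer
PIEZO_HOLD_S = 0.25       # piezo energized ~0.2-0.3 s per the paper

def control_loop(read_sole_accelerometer, set_piezo, set_valve):
    """Shock-absorption sequence: brake the piston and close the valve on
    impact, then release the brake so air compressibility takes over."""
    while True:
        if read_sole_accelerometer() > SHOCK_THRESHOLD_G:
            set_piezo(True)    # fix the piston; spring absorbs the first peak
            set_valve(False)   # close solenoid valve: trap air on the head side
            time.sleep(PIEZO_HOLD_S)
            set_piezo(False)   # release brake; trapped air cushions the rest
        else:
            set_valve(True)    # open circuit: rod moves freely, low resistance
        time.sleep(0.005)      # ~200 Hz polling rate (assumed)
```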
Simulation of the Cylinder System

The effect of the cylinder system was confirmed by simulation. Figure 7 shows the cylinder rod displacement-time and acceleration-time relationships when only air compressibility is used, Figure 8 shows the case with only the spring, and Figures 9 and 10 show the combined spring and cylinder system. As a pseudo-shock, a step-like 14 × 9.8 N input was applied between 0.5 and 0.6 s. As shown in Figure 7, the method using only the compressibility of air changes the displacement gradually, but the displacement reaches about 0.25 m, which makes a practical total cylinder length difficult. Oil could be considered as the working fluid here, but oil would increase the operating resistance when the system is not absorbing a shock. In addition, as shown in Figure 8, when only a spring is used, the acceleration changes sharply, indicating that the shock is not effectively absorbed. Observing the combination of spring and pneumatic compressibility in Figures 9 and 10, the shock can be varied more gently than with the spring alone, at practical cylinder displacements. Figure 9 shows the piezoelectric element operating from 0.5 s to 0.6 s and Figure 10 from 0.5 s to 0.55 s; the displacement and acceleration of the rod differ significantly between the two cases.

Cylinder Slide Mechanism

When this product is worn, sitting, bending, and other movements are restricted by the cylinder length. Therefore, when the knee is bent significantly, the cylinder is slid to enable sitting upright. Figure 11 shows an overview of this structure. First, a position sensor detects that the knee has been bent significantly. At the same time, the bar restraining the cylinder to the rail is released, and the cylinder moves upward along the rail. Figure 12 shows the prototype developed for this project.
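The step-input cylinder simulations above (Figures 7-10) can be reproduced qualitatively with a lumped model of the rod: a mass driven by a step force and restrained by the spring and by the trapped air acting as a nonlinear spring (isothermal compression assumed). The sketch below is our own minimal reconstruction; the mass, areas, volumes, and stiffness are illustrative values, not the paper's parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 5.0            # effective mass on the rod, kg (assumed)
K = 2000.0         # spring constant, N/m (assumed)
A = 5e-4           # piston area, m^2 (assumed)
V0 = 1e-4          # initial trapped-air volume, m^3 (assumed)
P0 = 1.0e5         # initial absolute pressure, Pa
F_STEP = 14 * 9.8  # step load used in the paper, N

def force_in(t):
    return F_STEP if 0.5 <= t <= 0.6 else 0.0

def rhs(t, y):
    x, v = y                      # rod displacement and velocity
    V = max(V0 - A * x, 1e-7)     # trapped volume shrinks as the rod descends
    p = P0 * V0 / V               # isothermal gas law: p * V = const
    f_air = (p - P0) * A          # gauge-pressure reaction on the piston
    a = (force_in(t) - K * x - f_air) / M
    return [v, a]

sol = solve_ivp(rhs, (0.0, 1.5), [0.0, 0.0], max_step=1e-3)
print(f"peak rod displacement: {sol.y[0].max() * 100:.1f} cm")
```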
Calculation Results of the Model in the Dynamic Analysis of the Device

To investigate the dynamic characteristics of the device and find elements effective for knee shock absorption, the knee and the device were modeled. The knee is composed of various skeletal and muscular structures that have been studied in detail, but to isolate the effective elements of the device, we kept the configuration as simple as possible. The knee is modeled in an upright position. The device contains a spring and damper in series, but in the model they are inserted in parallel, as is common; this also makes the characteristics of the device elements easier to understand.

As shown in Figure 13, the spring constant and damper coefficient of the thigh section are represented by k2 and c2, and those of the knee section by k1 and c1; the spring constant and damper coefficient of the apparatus are represented by k and c. The equation of motion without the device is a two-degree-of-freedom system and can be expressed as Equations (1) and (2), where F(t) is the external force; Equations (3) and (4) are obtained by including the device. Based on the model equations, the effects of the spring constant and damper coefficient on the knee were investigated. Figure 14 shows the basic result without the device attached: a step-like load is input from 2 s, and the figure shows the displacement of the knee and the force applied to it. Without the device, the knee displacement is about 9 cm and the force about 0.8 N. Figure 16 shows the results of Figure 14 overlaid with the case where the device is inserted: the knee displacement is smaller with the device, while the load remains almost the same at the first peak and becomes smaller from the second peak onward. To investigate the effect of the device's spring constant, k was increased from 200 N/m to 600 N/m with c = 50 kg/s; the results, superimposed on Figure 15, are shown in Figure 17. The displacement becomes smaller as the device's spring constant is increased; however, the first and second peaks of the knee load change little, while the third and fourth peaks become smaller. Then, to investigate the effect of the device's damper coefficient, c was increased from 50 kg/s to 250 kg/s with k = 200 N/m (Figure 18). Increasing the damper coefficient stabilizes the displacement, and the knee load decreases from the second peak onward, although there is no significant change in the first peak.
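The paper's Equations (1)-(4) do not survive in the text extracted here. Reading Figure 13 as a standard two-mass model (a knee mass m1 grounded through k1 and c1, a thigh mass m2 coupled through k2 and c2, with the device adding k and c in parallel at the knee), such equations typically take the following form. This is a hedged reconstruction with assumed masses m1 and m2, not the authors' exact formulation.

```latex
% Without the device:
\begin{align}
m_1 \ddot{x}_1 &= -k_1 x_1 - c_1 \dot{x}_1
                + k_2 (x_2 - x_1) + c_2 (\dot{x}_2 - \dot{x}_1) + F(t) \\
m_2 \ddot{x}_2 &= -k_2 (x_2 - x_1) - c_2 (\dot{x}_2 - \dot{x}_1)
\end{align}
% With the device (spring k and damper c in parallel at the knee mass):
\begin{align}
m_1 \ddot{x}_1 &= -(k_1 + k) x_1 - (c_1 + c) \dot{x}_1
                + k_2 (x_2 - x_1) + c_2 (\dot{x}_2 - \dot{x}_1) + F(t) \\
m_2 \ddot{x}_2 &= -k_2 (x_2 - x_1) - c_2 (\dot{x}_2 - \dot{x}_1)
\end{align}
```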
Toward Practical Application

In the development of the knee supporter described in this paper, the goal was to create a strong supporter capable of handling a jump from a height. Therefore, (1) it absorbs shock more easily than a normal supporter, and (2) because ordinary springs and dampers alone lack flexibility and make walking difficult, we generated and developed a model that compensates for these two disadvantages. As a future development, it would be desirable to create a model that can respond immediately from the moment a shock is applied at the ankle to the moment it is transmitted to the knee. In generating such a model it is important to know when the force should be applied, so more rigorous calculations will be required for practical use.

Advantages of Pneumatics

The advantages of using air are (1) its compressibility, (2) the relative ease of producing motion with air, whereas motion is difficult with oil, and (3) the absence of contamination from oil leakage. On the other hand, air alone is too compressible, so the piston must be stopped by a piezoelectric element (stopper). In addition, although the piezoelectric element has a high instantaneous response, a large voltage must be applied to activate it.

Alternatives to Piezoelectric Elements

Electromagnets are a possible stopper, but on their own they provide a relatively weak braking force, and their weight would place a large burden on the knee. If an alternative exists that has greater instantaneous response and can generate a larger force than a piezoelectric element, it would be preferable to use it.

Regarding the Oscillations in the Graphs

In Figures 14 through 18, the knee motion oscillates. This may not seem natural, but it is easy to understand if one recalls, for example, how an inanimate object such as a doll vibrates when a force is applied to it at impact. The graphs in this paper assume a state of natural movement without significant muscle force: when the muscles are exerting force they do not vibrate, but when they are not, the limb does vibrate as shown in the graphs, for example when jumping or walking.

Comparison with Previous Studies

Methods of reducing walking impact in bipedal walking fall into two categories: configuring shock absorbers in the lower limbs, or modifying the joint arrangement of the lower limb [12]. For the former, the basic approach is shock-absorbing soles [18], but soles are limited in thickness and damping, so the shock they can absorb is also limited. Various mechanical devices have been developed to overcome this. Ueda et al. [19] developed a lower-limb-worn linkage mechanism for shock absorption in extreme environments; Jeongsu et al. [20] developed a powered exoskeleton with a shock-absorbing mechanism on the tibia that can effectively reduce the peak ground reaction force for a specific gait; and Long et al. created the AIDER, an exoskeletal robot for patients with lower-body paralysis that is actively driven by servomotors [12]. However, since these devices are not based on a simple mechanism and are not human-driven, various problems exist, such as donning time and weight. In contrast, the supporter system we have created is very simple yet easy to wear and functional. Although various advantages and disadvantages exist, the authors hope that this simple idea will be used universally.
Conclusions

In this study, we investigated a system that can absorb knee impacts more powerfully than a fabric supporter. A pneumatic circuit was devised in which the working fluid is air instead of oil, so that the knee can move smoothly except during impact. Simulations of the system's effectiveness showed that the spring alone is not effective as a shock-absorbing mechanism, and that if only the air pressure in the cylinder is used, the piston stroke must be increased to cope with the impact. Combining the air compressibility inside the cylinder with a spring, however, allows a smaller size and more efficient shock absorption. A detailed study of the effects of the elements showed that increasing the spring constant results in smaller knee displacement during impact, while increasing the damper coefficient does not change the magnitude of knee displacement but makes the response at impact more gradual; the knee load (impact) can also be effectively reduced. Based on various quantitative comparisons, dampers were found to be more effective than springs in absorbing impacts. The authors hope that the development of this simple model will enable many people to lead their daily lives without worrying about knee loading.

Figure 7. Time vs. rod displacement and acceleration using air compressibility.
Figure 8. Time vs. rod displacement and acceleration using the spring.
Figure 10. Time vs. rod displacement and acceleration using the spring and air compressibility (example 2).
Figure 12. The prototype developed for this project.
Figure 13. (a) Schematic of knee and device modeling. (b) Meaning of the symbols.
Figure 14. Fundamental model calculation result without the device.
Figure 15. Fundamental model calculation result with the device inserted (k = 200 N/m, c = 50 kg/s); Figure 15 is used as the basis of the device element study.
Figure 16. Comparison of calculation results with and without the device.
Figure 17. Comparison of device spring constants.
Figure 18. Comparison of device damper constants. Figure 19 compares the result of increasing both the spring constant (k = 600 N/m) and the damper coefficient (c = 250 kg/s) of the device with the case k = 200 N/m, c = 250 kg/s; although the knee displacement becomes smaller, there is no significant change in the first peak of the knee load.
Figure 19. Comparison of device damper and spring constants.
Figure 20. Comparison with an increased device damper constant: k = 600 N/m with c = 500 kg/s versus k = 600 N/m with c = 250 kg/s (Figure 19). The rise of knee displacement is slower and the knee load at the first peak is smaller.
4,678.8
2022-07-16T00:00:00.000
[ "Engineering" ]
Implementation of Vision-based Object Tracking Algorithms for Motor Skill Assessments

Assessment of upper extremity motor skills often involves object manipulation, drawing or writing with a pencil, or performing specific gestures. Traditional assessment of such skills usually requires a trained person to record the time and accuracy, resulting in a process that can be labor intensive and costly. Automating the entire assessment process will potentially lower the cost, produce electronically recorded data, broaden the implementations, and provide additional assessment information. This paper presents a low-cost, versatile, and easy-to-use algorithm to automatically detect and track single or multiple well-defined geometric shapes or markers. It can therefore be applied to a wide range of assessment protocols that involve object manipulation or hand and arm gestures. The algorithm localizes the objects using color thresholding and morphological operations and then estimates their 3-dimensional pose. The utility of the algorithm is demonstrated by implementing it for automating the following five protocols: the sport of Cup Stacking, the Soda Pop Coordination test, the Wechsler Block Design test, the visual-motor integration test, and gesture recognition.

Keywords—Vision-based Object Tracking; Motor Skill Assessment; Multi-marker Tracking; Computer-based Assessment.

I. INTRODUCTION

Assessment of upper extremity motor skills often involves manipulating physical objects, hand drawing and writing, or performing specific gestures [1]-[7]. Early assessment of such skills can potentially lead to early diagnosis of any deficits and thus result in better treatment outcomes in the long term [1], [8]. For example, motor skill deficiencies can be observed in, and are symptomatic of, learning or developmental disabilities, traumatic brain injury, and normal aging. Assessment of such skills by a human clinician may encounter several challenges, such as high cost [9]; time constraints [3]; and inconsistent professional awareness and expertise in diagnosis [10], [11]. Advancements in computing and sensing technologies have enabled automation of assessment tasks previously conducted by human administrators. Automation not only improves the accuracy and efficiency of tasks, but can also accomplish tasks that were previously impossible using human skills alone [12]. The assessment of motor skills would particularly benefit from automation, partly because of the increased accuracy, efficiency, and consistency of the measurements, but more notably because automation can yield quantitative information that is not available from traditional manual assessment methods. For example, the Box and Block Test of Manual Dexterity (BBT) was automated by installing RF readers in the two boxes and embedding RF tags in all the blocks [13], [14]. The system was able to automatically sense when the blocks were placed in either of the boxes based on the relative signals from the two readers [15], [16]. This produced the same assessment data as the manual assessment while being more time efficient and collecting more data about the block movements.
Automation of upper extremity motor skill assessment that involves object manipulation can be realized in two ways: i) by employing active objects with embedded sensing and communication capabilities, or ii) by using passive objects with external sensing device(s); a combination of the two is also possible. Over the past decade, a variety of active objects have been developed for a broad range of education, entertainment, and research purposes [17]. Several studies have used sensor-embedded blocks for measuring three-dimensional (3D) spatial cognitive abilities by observing construction patterns and performance [18]-[20]. Learning Block is a digitally augmented physical block system enriched with a speaker and LED display [21]; it aims to function as a playful learning interface for children via embedded gesture recognition. Another interesting application is the use of a sensor-embedded block system, called Navigation Blocks, for tangible navigation of digital information through tactile manipulation and haptic feedback [22]. Tangibles is also an active object system designed for tangible manipulation and exploration of digital information [23]. There are also block systems integrated with sound feedback: for example, AudioBlocks and Block Jam feature augmented sound feedback mechanisms that enable users to design musical sequences by manipulating tangible objects with visual and sound feedback [24], [25]. Multi-agent autonomous interactive blocks and games have been developed specifically for behavioral training of children with an autism spectrum disorder [26].

Most of the existing work on object manipulation and gesture detection using passive objects has been geared toward vision-based approaches [27], [28]. For example, a depth-sensing camera was used to build a height map of the objects on an interactive tabletop platform for recognizing objects and detecting interaction between the player and the objects [29]. PlayAnywhere is a projection-vision system that can detect hover and touch by a human finger on a tabletop with a projected image [30]. Another interesting system is Touch-Space, a game environment that combines reality with a virtual game environment based on ubiquitous, tangible, and social computing [31], [32]. Vision-based systems, compared to methods using active objects, allow flexibility in the game or test design and in the types of applications. However, most of these algorithms are computationally expensive [30], and sensing is limited to the vision range unless additional sensing devices are used. Using active objects with embedded sensors, or combining the two approaches, may overcome the limitations of a vision-only method, but the hardware can be costly, particularly if a large number of objects are employed, and the inflexibility of the hardware makes it difficult to create a versatile method [33], [34]. This paper presents an algorithm designed for assessing object manipulation skills and hand gestures using a single standard webcam; no additional equipment is required. The algorithm is based on color thresholding for initial localization and morphological operations to find the object's edges. The corners are then identified by traversing the edges and are used for pose estimation in real time. The result is the three-dimensional (3D) pose of the object, which can be used for test automation and additional behavioral assessments. The algorithm described here tracks well-defined objects or markers rather than directly
tracking hands and arms, in order to reduce computational complexity. Tracking hand motions would give a great deal of interesting information about a person's upper extremity motor skills, as explored by other researchers [35]. However, it is neither necessary nor ideal for object-based motor skill assessments, for several reasons. First, the assessments being automated do not rely on hand position information, but instead on the resulting position of objects; tracking the hands would be counterproductive, since it would add another layer of complexity to determine the objects' positions relative to the hands. Second, our goal is to make this method work in real time on a computer with normal computing capability; objects with simple, known shapes can be tracked without heavy computation, in contrast to hands with irregular shapes. The utility of the algorithm is demonstrated through the following applications: the sport of Cup Stacking, the Soda Pop Coordination test, the Wechsler Block Design test, a visual-motor integration test, and a simple hand gesture test.

A. Overview

Assessments of upper extremity motor skills often involve a set of objects that are manipulated by the person being evaluated, or a sequence of tasks such as extending the shoulder and twisting the elbow [4], [5], [13], [36], [37]. Resulting measurements include the time for completion, accuracy, and the extension/flexion range of each motion. Our approach aims to automate the evaluation process with real-time data collection by employing vision-based techniques using a standard webcam. The algorithm first identifies specific objects within the field of view, projects their positions into 3D space based on known shape information, and then tracks them in real time. To reduce processing time, the algorithm tracks objects being manipulated by or attached to human hands instead of directly tracking the hands. The output of this algorithm is the 3D pose of each object. The only requirements are that the item must have straight edges and must be distinguishable from the rest of the environment by its shape or color. Shape and color form a two-tiered classification structure that determines whether objects within a video frame are items of interest; these values can be altered for different applications via calibration.
A major advantage of the presented algorithm over similar approaches [38], [39] is its versatility: the algorithm works with a variety of different markers without requiring reprogramming. The limiting factor is that the markers must be unique in the environment to avoid false detections. A detailed description of the algorithm is provided in the following subsections. Section II-B describes the item localization method, based on the two-tiered classification scheme used to identify items of interest within the image and to detect the corner locations. Section II-C presents the pose estimation method, which uses the corner points to project the item from the 2D image frame into the 3D real-world frame using object shape information and internal camera properties; this process, called pose estimation, is a technique for extracting 3D information from a single camera frame. Lastly, Section II-D describes the camera calibration needed to introduce the algorithm to new markers and to determine the internal camera properties necessary for pose estimation. The code was written in C++ and utilized OpenCV for the computer vision implementations. The captured images are in RGB format with a resolution of 640 × 480 pixels. Post-processing of the data for some applications was performed in MATLAB.

B. Item Localization

Localization is the process of identifying the location of the target item(s) in the image frame. We employ a two-tiered classification approach for localizing items of interest. A two-tiered system achieves a high degree of accuracy in identifying items because two different properties are required to detect a match. Color and shape are used as the distinguishing properties: color refers to the normalized color of the item within a certain color range, and shape is defined as the number and relative positions of an item's corner points. Explicitly, the algorithm searches images for items with a known normalized color, locates the edges of those color regions using morphological operations, and then traverses the edges to locate the items' corners. The resulting corners are the outputs of localization and can be used to estimate the 3D positions of the items using known shape information.

1) Color classification: The first step of localization is to segment the image in order to identify which parts of the input image could potentially be items of interest. Normalized color initially distinguishes the potential object regions within the image. It was chosen as the distinguishing property because it is not affected by adverse lighting conditions and represents the inherent color of an object [40]. Color normalization compensates for intensity changes in lighting by forcing all intensity values to sum to 1. The well-known equations for normalizing the color at each pixel location are used:

r_n = r / (r + g + b),  g_n = g / (r + g + b),  b_n = b / (r + g + b)

where the intensity values r, g, and b correspond to the values of the three image planes (red, green, and blue) that make up the image.
The color of an image is thresholded by examining each pixel's values to determine whether they fall within certain threshold ranges. A binary value of 1 or 0 is assigned based on whether the pixel passes the threshold or not, respectively. The ranges are defined by the minimum and maximum values of the object's color ({r_min, r_max}, {g_min, g_max}, {b_min, b_max}). These ranges can be easily identified for an object of interest by normalizing its color and finding the minimum and maximum color values for the object. Fig. 1 shows examples of the binary images resulting from color normalization.

The resulting binary image often requires additional processing to increase accuracy. This is necessary when color normalization fails to compensate for all imperfect lighting conditions, or when the color threshold ranges do not completely reflect the item's actual color. One technique is conditional dilation, which is beneficial when the color threshold detects only part of an item: it recovers the rest of the item by expanding its area until the item's edges are reached. The morphological dilation operator is applied to the binary image, but the results are kept only if the color values are close to those of the neighboring pixels. Edges of objects are distinguishable by the dramatic change in the color range: color values remain similar within the same item, but once dilation approaches an edge the values change quickly and exceed the range accepted by the conditional dilation operator. To use this operator, the item's edges must be well defined.

2) Shape classification: The next stage of classification takes the outputs from color classification and further narrows down the regions of the image to detect the items of interest. Since color classification results in defined areas, the next step would often be blob detection; however, that is not the most convenient method here. Instead, the edges of the items are found and traversed in order to locate the corners of each item. The corners are then used to classify the item's shape, defined by the number of corners and their spacing. This method is chosen because it gives accurate corner positions for classification, which are essential for finding the object's 3D position. A morphological operator is used to find the object's edges in two steps. First, a dilation operator is applied to the color-thresholded image; the difference is then taken between the original image and the dilated image. The dilation operator expands the colored regions outward using a 3 × 3 rectangular structuring element, so the difference is an image containing only the edges of the colored areas. The edges are guaranteed to be 1-pixel thick, 4-connected, and to form a closed loop; these three properties make them easy to traverse. Examples of these ideal edges are shown in Fig. 2.
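The steps described so far (normalization, thresholding, and edge extraction by dilation-difference) are compact to express with array operations. Below is a minimal NumPy/OpenCV sketch of this pipeline; the paper's implementation was in C++, so this Python version and its threshold values are illustrative only.

```python
import cv2
import numpy as np

def localize_color(frame_bgr, lo, hi):
    """Normalize colors, threshold to a binary mask, and extract 1-px edges.
    lo/hi are (b, g, r) bounds in normalized [0, 1] color space (assumed)."""
    f = frame_bgr.astype(np.float32)
    s = f.sum(axis=2, keepdims=True) + 1e-6
    norm = f / s                         # each pixel's channels now sum to 1

    mask = np.all((norm >= lo) & (norm <= hi), axis=2).astype(np.uint8)

    # Edges = dilation minus original: 1-pixel-thick closed contours.
    kernel = np.ones((3, 3), np.uint8)
    edges = cv2.dilate(mask, kernel) - mask
    return mask, edges

# Example: a marker whose normalized green share dominates (bounds assumed).
mask, edges = localize_color(cv2.imread("frame.png"),
                             lo=(0.05, 0.45, 0.05), hi=(0.35, 0.80, 0.35))
```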
The identified edges are traversed with the aim of pinpointing the locations of the corners (Fig. 3). It takes three traversals of an edge set to find these corners. On the first traversal, spurs are removed, the object's centroid is found, and the edge points are put on a stack so they can be accessed easily on the next two traversals. The edges are traversed by first finding a point in the image that is part of an edge; the next point is found by checking the four coordinate directions in the order up, right, down, left, and moving in the first direction found to be part of the edge. Each visited point is pushed onto the edge stack so that it can be referenced later, and its location in the image is blacked out so that it is no longer recognized as an edge. A spur is recognized when a point other than the starting point is found to have no neighbors; the path is then retraced by popping values from the edge stack until a point is found that has another neighbor, meaning it originally had two neighbors. An edge is considered complete when it loops back to its starting location. The centroid of each region is calculated by keeping a running average of pixel locations.

After the first traversal of an object, all the edge points are conveniently on a stack and the centroid has been calculated. The second traversal finds a point on the edge that is guaranteed to be a corner: since the objects have straight edges, the edge location farthest from the object's centroid must be a corner point. This point is found by calculating the distance between the centroid and every point along the edge; the point with the greatest distance is the corner point of interest. The last traversal finds all additional corners by moving along the edge and calculating the slope at each point: a constant slope designates a straight edge, and a rapid change in slope indicates a corner. After the corners are found, they are refined by finding the intersection of lines fitted to the edges on either side of each corner. The resulting corner points are finally classified: if the number of points is not equal to the expected number of corners, or the spacing of the corners is not similar to that of the known shape, the item is concluded not to be an item of interest. For example, if the item of interest is a square, there must be four evenly spaced corners. If the item is found to be an item of interest, pose estimation can be used to obtain its 3D position. Fig. 4 shows an example of the processed image.
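The second and third traversals (farthest-point corner seeding, then slope-change detection) can be sketched as follows; the slope-change threshold and window size are assumptions, not values from the paper.

```python
import numpy as np

def find_corners(edge_pts, centroid, window=5, angle_thresh=0.5):
    """edge_pts: (N, 2) closed-loop edge points in traversal order.
    Returns the reordered points and indices of detected corners; adjacent
    hits around the same corner should be merged (omitted for brevity)."""
    # Second traversal: the point farthest from the centroid is a corner.
    dists = np.linalg.norm(edge_pts - centroid, axis=1)
    start = int(np.argmax(dists))
    pts = np.roll(edge_pts, -start, axis=0)  # begin the loop at that corner

    corners = [0]
    n = len(pts)
    # Third traversal: a rapid change in local edge direction marks a corner.
    for i in range(window, n - window):
        before = pts[i] - pts[i - window]
        after = pts[i + window] - pts[i]
        ang = np.arctan2(before[1], before[0]) - np.arctan2(after[1], after[0])
        ang = np.abs(np.mod(ang + np.pi, 2 * np.pi) - np.pi)
        if ang > angle_thresh:
            corners.append(i)
    return pts, corners
```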
C. Position Estimation using Shape Information

The corners of the object, found through item localization, are used to estimate the position of the object in 3D space using known information about the object's shape and the internal camera properties. The internal camera properties determine the perspective with which the camera views an object: by comparing the actual shape of an object with its warped shape within the camera frame, its pose relative to the camera can be determined. However, the relationship is nonlinear and typically cannot be solved directly. This can be circumvented using a variety of methods, including making additional assumptions about the object's position or iterating to find the best values instead of solving directly. In an image frame, objects can be scaled and their perspective altered depending on their position and orientation relative to the camera. If it is assumed that the object's deformation in the image is due to either scaling or perspective alone, the equations simplify greatly. If the object has only been scaled, then the distance to the camera is the same for all points on the object. At least two points are required to solve the resulting system of equations, but the calculation becomes more accurate when more than two points are known.

The following equations convert the {x, y, z} image coordinate system to the {X, Y, Z} real-world coordinate system. The relationship can be described by simple geometry, as shown in Fig. 5, given the camera's focal length broken down into f_x and f_y and the image center at (c_x, c_y). The Z axis is perpendicular to the image frame, and X and Y are parallel to it. The two image dimensions are independent and can be treated separately; rearranged to solve for the real-world values:

X = (x - c_x) Z / f_x,   Y = (y - c_y) Z / f_y      (1)

Fig. 5: The pinhole camera model in one dimension used for camera calibration.

There are three unknowns in (1): {X, Y, Z}. The problem is solved by using two points with a known relationship to each other that are at the same distance from the camera. This provides a known distance d between the points, represented by (ΔX)² + (ΔY)² + (ΔZ)² = d². The second simplification is that the points are at the same distance from the camera, so that Z_1 = Z_2 = Z (and hence ΔZ = 0), since the Z direction is perpendicular to the image. Under these assumptions, substituting (1) into the distance equation gives the real-world depth

Z = d / sqrt( (Δx / f_x)² + (Δy / f_y)² )

where Δx and Δy are the pixel differences between the two image points; X and Y for each point then follow from (1).
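A minimal sketch of this scaling-only pose estimate, using the equations reconstructed above (the intrinsics and marker size are placeholder values):

```python
import numpy as np

def pose_from_scale(p1, p2, d, fx, fy, cx, cy):
    """Estimate 3D positions of two image points a known distance d apart,
    assuming both lie at the same depth Z (scaling-only deformation)."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / fx, (y2 - y1) / fy
    Z = d / np.hypot(dx, dy)                      # depth from apparent size
    P1 = np.array([(x1 - cx) * Z / fx, (y1 - cy) * Z / fy, Z])
    P2 = np.array([(x2 - cx) * Z / fx, (y2 - cy) * Z / fy, Z])
    return P1, P2

# Two adjacent corners of a 40 mm square marker seen in a 640x480 frame
# (focal lengths and image center are assumed calibration results).
P1, P2 = pose_from_scale((300, 220), (360, 224), d=0.04,
                         fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(P1, P2)
```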
D. Calibration

Calibration is essential to determine the conditions in which the camera is used and the properties of the item of interest. To locate an object, its color and shape must be known, and the internal properties of the camera must be quantified to determine how it views the item and to project the item from the 2D camera space into 3D real space. The internal camera properties, or intrinsics, determine how a 3D object is projected onto the 2D camera plane; they include the focal length and image center and are different for every camera. A camera can be represented by the pinhole camera model, in which the light the camera captures passes through a pinhole and is then projected onto the image plane. The focal lengths f_x and f_y are the distances in the x and y directions between the pinhole and the image center, and the image center {c_x, c_y} is the location of the pinhole projected onto the image frame. The geometric relationships between the 2D image and 3D space were provided previously in (1). A commonly used object for determining the camera intrinsics is a checkerboard, owing to its defined number of points with known spacing: by analyzing the relative positions of the checkerboard points within the image, the camera intrinsics can be found. Fig. 6 shows how the camera intrinsics can be used to compensate for undesired perspective and orientation: the left image shows the camera's perspective on the area, and the right shows the perspective-corrected frame, altered so that the corners of the checkerboard form a perfect square and aligned so that the work area lines up with the camera frame.
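Checkerboard-based intrinsic calibration of the kind described here is directly supported by OpenCV. The sketch below shows the standard recipe; the board size and file pattern are placeholders, and this is not the authors' calibration code.

```python
import cv2
import numpy as np
import glob

BOARD = (9, 6)          # inner corners per row/column (assumed board)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)  # unit squares

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):            # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K holds fx, fy, cx, cy; dist holds the lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("fx, fy =", K[0, 0], K[1, 1], " cx, cy =", K[0, 2], K[1, 2])
```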
A. Overview

This section describes applications of the computer vision algorithm described above. The algorithm requires a uniquely colored square piece of paper to be placed visibly on the object to be tracked, or the object itself to contain a uniquely colored surface, along with a camera to capture the motion. The paper or surface can be any color that is unique in the environment; the only requirement is that it stay visible throughout the motion. The versatility of the tracking algorithm is demonstrated through its application to three different situations. The first is a sport played mostly by elementary school children called Cup Stacking. The second is a motor skill and coordination test developed by Hoeger & Hoeger, the Soda Pop Coordination test. The third is the Wechsler Intelligence Scale, one of the most widely accepted psychological assessment tools; among its subtests, we selected the Block Design test. For these applications, an automatic scoring system is implemented by identifying when certain events occur. In addition to these three specific examples, we also implemented the algorithm for potential applications in visual-motor integration assessment and gesture recognition.

B. Cup Stacking

Cup stacking (also called Sport Stacking) is an activity for individuals and teams in which specialized cups are used to create pyramids of three, six, and ten cups as quickly as possible. It is governed by the World Sport Stacking Association, and a variety of studies have assessed its influence on motor skills [36], [41], [42]. Specifically, a study in which second graders played cup stacking for 15 minutes a day for 12 weeks showed that it might improve central processing and perceptual-motor integration skills [36]. Another study of second and fourth graders playing cup stacking for 10-15 minutes a day for 3 weeks found no difference between a control group and the cup stacking group [41]. Cup stacking was also found effective in improving hand-eye coordination and reaction time in second graders playing the game for 20-30 minutes a day for 5 weeks [42]. Notably, the stacking sequences have a learning curve, so cup stacking cannot be used to directly measure motor skills unless a training period is allowed.

The scoring of cup stacking was automated by placing a marker on top of each cup. The 3D position of each cup, (X, Y, Z), was found using pose estimation and then saved for further analysis. A six-cup stacking game was employed, as shown in Fig. 7. Automatic scoring was performed in real time by recording when certain key actions occurred: when the cups first started to move, indicating the start of the activity; when three cups were placed as the base; when two cups were placed on top of the base; when the top cup was placed; and finally when all the cups came back together and stopped moving, indicating the activity was complete. A measure of cup placement accuracy is determined by the straightness of the bottom row of cups and by the relative angle between the bottom and middle rows, indicating how precisely the middle row is placed relative to the bottom row. The automatic scoring component can be evaluated by comparing manually and automatically recorded trials: the values were compared for 91 trials, and the resulting correlation is strong, with an r-squared value of 0.9615 and an average error of 0.35 ± 0.27 seconds.

C. Soda Pop Coordination Test

The Soda Pop Coordination Test is a motor skills test that is part of the American Alliance for Health, Physical Education, Recreation & Dance (AAHPERD) battery of tests. It is advantageous over similar tests because it uses commonly available materials and is easy to administer [37]. The test uses three soda pop cans and six marked locations on the table where the cans are placed, as shown in Fig. 8. In basic terms, the test involves flipping the soda cans over one at a time as fast as possible: Can A is moved from position 1 to position 2, Can B from position 3 to position 4, and Can C from position 5 to position 6; the cans are then moved back to their original positions in the reverse order. The hand must start with the thumb facing upward for the first set of movements and downward for the second set. The test is usually scored by the time it takes to go back and forth twice and can be done with either the dominant or non-dominant hand.
The advantage of an automated system for the Soda Pop Coordination test is that it increases scoring accuracy and simplifies data processing. Traditionally, the performance results would be a large amount of handwritten data (i.e., time, accuracy) that would have to be manually entered into a computer. With an automated system, the times are already saved on the computer and the data entry step can be skipped, leaving fewer opportunities for recording errors.

Fig. 8: Starting configuration for the Soda Pop Coordination test, with three soda cans and six locations.

The Soda Pop Coordination test is usually administered before and after some training regimen to demonstrate how an intervention has improved a person's abilities [42], [43], [44]. It can also be administered to monitor coordination skills and then compared to, or used to create, standardized scores [37], [45]. Examples of before-and-after testing include a study identifying the effects of 10 weeks of Tai-Chi-Soft-Ball training on the physical functional health of Chinese adults [43], a 5-week study of second graders identifying the effect of 20-30 minutes of sport stacking on hand-eye coordination and reaction time [42], and a study of the effect of a weight-bearing and water-based exercise program on osteopenic women [44]. An example of using standardized scores is a study of the elderly showing the relationship between heart rate variability and coordination [45].

The test was automated by placing a marker on top of Can A. The start time is set as the time the marker starts moving, and the stop time as the time the marker comes back into view and stops moving. The marker disappears as the can is turned over, and to accommodate false starts, the test is assumed to take more than two seconds to complete. Additionally, since the test consists of two back-and-forth sets that count as one round, the two consecutive times can simply be added together. The system was evaluated by comparing manually and automatically collected data to determine accuracy and usability. Data were collected manually and automatically for 87 laboratory rounds of the Soda Pop Coordination test: the correlation between the two is 0.985, and the average timing difference is 0.215 seconds.
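The start/stop logic used for both the cup stacking and soda pop timers reduces to watching the marker's visibility and displacement between frames. A simplified sketch of such an event timer follows; the frame rate, thresholds, and track format are assumptions.

```python
import numpy as np

def score_round(track, fps=30.0, move_thresh=0.005, min_duration=2.0):
    """Return (start_s, stop_s) from a per-frame marker track.
    track: list of 3D positions (np.array) or None when the marker is hidden."""
    start = stop = None
    prev = None
    for i, pos in enumerate(track):
        t = i / fps
        if pos is None:            # marker hidden (e.g., can flipped over)
            prev = None
            continue
        moving = prev is not None and np.linalg.norm(pos - prev) > move_thresh
        if start is None and moving:
            start = t              # first motion marks the start
        elif start is not None and not moving and t - start > min_duration:
            stop = t               # first rest after the minimum duration
            break
        prev = pos
    return start, stop
```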
D. Wechsler Block Design Test

The Wechsler Intelligence Scale for Children (WISC) and the Wechsler Adult Intelligence Scale (WAIS) are widely accepted psychological assessment tests used to measure intelligence in children and adults, initially developed by David Wechsler in the 1930s [46], [6], [7]. Both scales contain a subtest called the Block Design test, which measures a person's non-verbal conceptualization, spatial visualization, and fine motor control [47]. The Block Design test was first proposed by Kohs in 1923 [48], but has been incorporated in some form into most intelligence tests. The WISC and WAIS subtests involve recreating 2D red-and-white geometric patterns using 3D cubes with red, white, and red-and-white sides. The patterns can be made up of two, four, or nine blocks, and a score is awarded for each pattern based on the time taken to complete the assembly and whether the final assembly is correct [6], [7]. Typically, when this test is administered, a trained professional must be present to walk the testee through the process, keeping track of completion times, recording incorrect answers, scoring the test, and monitoring the test taker for psychological clues.

In this test, the algorithm was implemented to directly track the blocks instead of placing separate markers on them, because the blocks themselves satisfy the requirements for serving as markers: their sides appear as triangles and squares when either the white or red color is tracked, and they can form more complex shapes when put together. Scoring requires additional considerations because the system must recognize whether a testee has successfully created a pattern using multiple blocks. This means the resulting positions of the blocks must be used to estimate the pattern they form. The scoring process overlays a grid on the detected blocks and determines the color layout within the grid to match against the target pattern. The start time of a trial is indicated by the blocks being dispersed throughout the environment, and the trial is marked complete when the blocks form the goal pattern. If a pattern is not completed successfully, the timer is stopped and the trial marked incomplete when the blocks are dispersed for the next pattern. The score is assigned by the same conventions as in the WISC and WAIS Block Design tests. The automation was tested by comparing manually and automatically scored tests, showing 100% accuracy.
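The grid-overlay scoring described above can be sketched as follows: snap each detected block face to a grid cell and compare the resulting color layout with the target pattern. The grid geometry, cell size, and color labels here are assumptions for illustration.

```python
import numpy as np

def pattern_from_blocks(block_poses, colors, cell=0.05):
    """block_poses: list of (X, Y) block centers in meters; colors: labels
    like 'R', 'W', 'RW' per block. Returns a dict mapping grid cells to colors."""
    layout = {}
    xs = [p[0] for p in block_poses]
    ys = [p[1] for p in block_poses]
    x0, y0 = min(xs), min(ys)                 # grid origin at lower-left block
    for (x, y), col in zip(block_poses, colors):
        i, j = round((x - x0) / cell), round((y - y0) / cell)
        layout[(i, j)] = col
    return layout

def matches(block_poses, colors, target):
    return pattern_from_blocks(block_poses, colors) == target

# Example: a 2x1 target pattern (hypothetical).
target = {(0, 0): "R", (1, 0): "W"}
print(matches([(0.10, 0.20), (0.151, 0.199)], ["R", "W"], target))
```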
E. Visual-Motor Integration Test

Part of motor skill is reflected in how well a person can trace lines and shapes in 2D. The closeness of the followed path to the ideal path and the steadiness of the movements reflect the person's motor skills, in terms of how advanced their motor development is or whether they have difficulties with any individual joints or muscles. The idea is similar to that of the Beery-Buktenica Developmental Test of Visual-Motor Integration (Beery VMI), in which the subject must copy or trace lines and shapes using a pencil [49]. For our demonstration, a cup is used instead of a pencil to trace a pattern on the table and shapes in the air. The path can be anything as long as its shape is known, so that an ideal path is available for comparison. Table I shows six trials of drawing a straight line between two points using a cup as a marker. The correlation between the actual position data and an ideal fitted straight line was analyzed by performing a linear regression between the two; a higher value of r indicates a movement trajectory closer to the given straight line.

F. Gesture Recognition

Gesture recognition aims to classify the motion a person is performing [49], [50]. It has a wide range of applications, including aids for the hearing impaired, interpreting sign language, lie/stress/emotional state detection, and controls or tools for interaction with virtual environments [49]. A variety of methods can be used to interpret gestures, including principal component analysis, the CONditional DENSity propagATION (CONDENSATION) algorithm, Kalman filtering and more advanced particle filtering, and hidden Markov models [49]. The goal of this application is to create a simple gesture recognition tool that can identify the motion of tracing the geometry of a shape. It is also desired that the tool be unaffected by the speed or size of the motion, responding only to the shape or form of the motion.
F. Gesture Recognition Gesture recognition aims to classify the motion that a person is performing [49], [50]. It has a wide range of applications, including aids for the hearing impaired, interpreting sign language, lie/stress/emotional state detection, and controls or tools for interaction with virtual environments [49]. A variety of methods can be used to interpret gestures, including principal component analysis, the CONditional DENSity PropagATION (CONDENSATION) algorithm, Kalman filtering and more advanced particle filtering, and hidden Markov models [49]. The goal of this application is to create a simple gesture recognition tool that can identify the motion of tracing the geometry of a shape. It is also desired that the tool not be affected by the speed or the size of the motion, but respond only to the shape or form of the motion. Only a couple of distinct shapes were explored for this section, so a simple recognition method was chosen. The motions were to draw a circle, triangle, and square in the air, and the method used to recognize the shapes was a shape descriptor technique called shape signatures [51]. Shape signatures represent an object's shape as a one-dimensional function of its edge points. A variety of methods can be used to create this function, but a common one, used here, is the distance of each boundary point from the centroid as a function of its angle. The signature is made scale invariant by dividing all distances by the maximum distance, and orientation invariant by finding the angular position of the maximum point and making the function start at this value. The signature can then be analyzed, for example to count the corners of the mapped shape. In this case, the signature is simply examined at key locations that distinguish the different shapes. The first key feature is the relationship between the minimum and maximum values of the radius (R_min, R_max). This distinguishes circles (or, in this case, ellipses) from squares and triangles: unless the eccentricity is high, the ratio will be significantly higher for circles than for the other two. The second feature is the behavior of the signature at an angle of zero. Circles have extreme points at angles of π and 0, so the value at 0 should be high. Squares have extreme points at π, π/2, 0, and −π/2, so the value at 0 should be high. Finally, triangles have extreme points at π, 2π/3, and −2π/3, so the value at 0 should be low. For these three shapes, two features effectively take care of all possible cases. If more shapes needed to be identified, additional features would become necessary, but they would be easy to add to the current framework. Table II shows recognition results for each shape. P1 and P2 are calculated from the signature, and the shape is recognized as a circle if P1 > 0.4 and P2 > 0.6, a triangle if P1 < 0.4 and P2 < 0.6, or a square if P1 < 0.4 and P2 > 0.6, as illustrated in the sketch below.
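Since the paper's defining equations for P1 and P2 are not reproduced in this excerpt, the sketch below uses assumed definitions (P1 as the min/max radius ratio, P2 as the normalized radius opposite the aligned maximum) and thresholds recalibrated for ideal shapes. It illustrates the signature technique rather than replicating the exact published classifier.

```python
import numpy as np

def signature(points, n_bins=90):
    """Centroid-distance shape signature, scale- and rotation-normalized.
    points: (N, 2) trajectory samples forming a closed shape."""
    pts = np.asarray(points, float)
    d = pts - pts.mean(axis=0)                      # center on the centroid
    r = np.hypot(d[:, 0], d[:, 1])
    th = np.arctan2(d[:, 1], d[:, 0])
    th = (th - th[np.argmax(r)]) % (2 * np.pi)      # start at the maximum radius
    r = r / r.max()                                 # scale invariance
    sig = np.zeros(n_bins)
    bins = (th * n_bins / (2 * np.pi)).astype(int) % n_bins
    for b, ri in zip(bins, r):                      # keep max radius per angular bin
        sig[b] = max(sig[b], ri)
    return sig, r.min()                             # signature and R_min / R_max

def classify(points):
    sig, p1 = signature(points)                     # assumed P1 = R_min / R_max
    p2 = sig[len(sig) // 2]                         # assumed P2: radius opposite the max
    if p1 > 0.85:                                   # near-constant radius -> circle
        return 'circle'
    return 'square' if p2 > 0.75 else 'triangle'    # opposite corner vs. opposite edge

# Demo: an approximately square closed path.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
square = np.c_[np.clip(1.4 * np.cos(t), -1, 1), np.clip(1.4 * np.sin(t), -1, 1)]
print(classify(square))                             # -> 'square'
```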
IV. CONCLUSION AND DISCUSSION This paper presented an integrated low-cost, real-time vision processing algorithm that can be used for a variety of assessment tests for upper extremity motor skills involving object manipulation. While individual layers of the algorithm utilize existing techniques, the main contribution of this paper lies in the proper integration of these techniques, keeping the computational cost low for the target clinical and educational applications. The algorithm was implemented in four well-known games/tests and a simple gesture recognition application to demonstrate its potential utility. When such motor assessment tests need to be administered periodically to an individual or to a large group of people, automating the entire process can significantly reduce the time, cost, and labor intensity while also improving the quantity and quality of the measurable data. The specific applications presented in this paper were carefully selected to cover a broad range of motor skill assessment tests, so that others can readily adapt the system to their own needs. The presented algorithm still requires comparison with other vision-based object tracking algorithms to establish its time efficiency. To further improve the versatility of the algorithm, another layer of prior image processing could be added to determine the color threshold range automatically, instead of using a pre-defined value, so that arbitrary objects can be detected and tracked as long as they are distinguishable from the environment; a sketch of this idea follows the figure captions below. In addition, the benefits expected from implementing the algorithm need to be verified through human subject studies involving non-technical administrators (e.g., teachers, parents, and clinicians) and potential testees (e.g., students, children with varying cognitive/motor skills, and older adults). Our ongoing work involves human subject evaluation and cost analysis in addition to continuous improvements of the algorithm. Fig. 2: Examples of edges found using the difference between a morphological dilation and the original image. Fig. 3: Two images of blocks overlaid with centroids found after the first traversal of each object's edges. Fig. 4: A processed image showing the centroids of individual blobs and starting points (in green), overall centers (in red), and corners and block edges used to calculate tilt (in blue). Fig. 6: A sample frame from a webcam showing the experimental set-up from a near-vertical camera view (left) and the transformed image compensating for the initial camera angle (right). Fig. 7: Top view of the cups with green squares placed on top, illustrating five steps of six-cup stacking. TABLE I: Paths exhibiting a range of different accuracies between two points, shown as graphs of points and ideal straight lines along with the calculated correlation values for the match between the two. TABLE II: Signatures for the three shapes (circle, square, and triangle) and calculated feature values and logic gate outputs using the threshold values of P1 = 0.4 and P2 = 0.6.
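One simple way to realize the automatic color-threshold idea mentioned in the discussion above is to learn per-channel bounds from a small user-marked seed region. This is a hedged, minimal sketch (percentile bounds over a seed mask), not the authors' planned method; all names and values are illustrative.

```python
import numpy as np

def auto_threshold(image, seed_mask, lo_pct=5, hi_pct=95):
    """Derive a per-channel color threshold range from example pixels.
    image: (H, W, 3) array (any color space; HSV works well in practice).
    seed_mask: boolean (H, W) mask marking a few pixels of the target
    object, e.g. from one calibration click or box drawn by the user."""
    samples = image[seed_mask].astype(float)          # (N, 3) object pixels
    lower = np.percentile(samples, lo_pct, axis=0)
    upper = np.percentile(samples, hi_pct, axis=0)
    return lower, upper

def segment(image, lower, upper):
    """Binary mask of pixels falling inside the learned range."""
    return np.all((image >= lower) & (image <= upper), axis=2)

# Synthetic example: a red patch on a gray background.
img = np.full((64, 64, 3), 128, dtype=np.uint8)
img[20:40, 20:40] = (200, 40, 40)
seed = np.zeros((64, 64), bool)
seed[25:30, 25:30] = True                             # user marks part of the patch
lo, hi = auto_threshold(img, seed)
print(segment(img, lo, hi).sum())                     # -> 400 patch pixels
```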
Multiplex PCR Assays for the Detection of One Hundred and Thirty Seven Serogroups of Shiga Toxin-Producing Escherichia coli Associated With Cattle Escherichia coli carrying prophages with genes that encode Shiga toxins are categorized as the Shiga toxin-producing E. coli (STEC) pathotype. Illnesses caused by STEC in humans, which are often foodborne, range from mild to bloody diarrhea, with life-threatening complications of renal failure and hemolytic uremic syndrome, and even death, particularly in children. As many as 158 of the total 187 serogroups of E. coli are known to carry Shiga toxin genes, which makes STEC a major pathotype of E. coli. Seven STEC serogroups, called the top-7, which include O26, O45, O103, O111, O121, O145, and O157, are responsible for the majority of STEC-associated human illnesses. The STEC serogroups other than the top-7, called "non-top-7," have also been associated with human illnesses, more often as sporadic infections. Ruminants, particularly cattle, are principal reservoirs of STEC; they harbor the organisms in the hindgut and shed them in the feces, which serve as a major source of food and water contamination. A number of studies have reported on the fecal prevalence of top-7 STEC in cattle. However, there is a paucity of data on the prevalence of non-top-7 STEC serogroups in cattle feces, generally because of a lack of validated detection methods. The objective of our study was to develop and validate 14 sets of multiplex PCR (mPCR) assays targeting serogroup-specific genes to detect 137 non-top-7 STEC serogroups previously reported to be present in cattle feces. Each assay included 7-12 serogroups, and primers were designed to amplify the target genes with distinct amplicon sizes for each serogroup that can be readily identified within each assay. The assays were validated with 460 strains of known serogroups. The multiplex PCR assays designed in our study can be readily adapted by most laboratories for rapid identification of strains belonging to the non-top-7 STEC serogroups associated with cattle. INTRODUCTION The polysaccharide portion, called the O-antigen, of the lipopolysaccharide layer of the outer membrane of Escherichia coli provides antigenic specificity and is the basis of serogrouping. As many as 187 E. coli serogroups have been described based on the nucleotide sequences of O-antigen gene clusters (DebRoy et al., 2016). Escherichia coli serogroups that cause disease in humans and animals are categorized into several pathotypes. The serogroups that carry Shiga toxin genes on a prophage are categorized as the Shiga toxin-producing E. coli (STEC) pathotype. As many as 158 serogroups of E. coli are known to carry Shiga toxin gene(s), which makes STEC the most predominant E. coli pathotype (Table 1). Illnesses caused by STEC in humans, which are often foodborne, range from mild to bloody diarrhea with life-threatening complications of renal failure and hemolytic uremic syndrome (HUS), and even death, particularly in children (Karmali et al., 2010; Davis et al., 2014). Seven serogroups of STEC, O26, O45, O103, O111, O121, O145, and O157, called "top-7," are responsible for the majority of human STEC illnesses, including foodborne outbreaks (Brooks et al., 2005; Scallan et al., 2011; Gould et al., 2013; Valilis et al., 2018).
However, STEC serogroups other than the top-7, called "non-top-7," have also been reported to cause human illnesses, more often as sporadic infections, although a few are also known to cause severe infections, such as hemorrhagic colitis and HUS (Hussein and Bollinger, 2005; Bettelheim, 2007; Hussein, 2007; Bettelheim and Goldwater, 2014; Valilis et al., 2018). In a recent systematic review by Valilis et al. (2018), 129 O-serogroups of STEC were identified as associated with clinical cases of diarrhea in humans. Ruminants, especially cattle, are a major reservoir of STEC; they harbor the organisms in the hindgut and shed them in their feces. A number of studies have reported on the fecal prevalence of the top-7 STEC in cattle because of the availability of detection methods. For these serogroups, a culture method involving serogroup-specific immunomagnetic separation and media for selective isolation, and PCR assays to identify serogroups of putative isolates, have been developed, validated, and widely used (Bielaszewska and Karch, 2000; Chapman, 2000; Bettelheim and Beutin, 2003; Noll et al., 2015a). A number of studies have reported shedding of non-top-7 STEC in cattle feces (Table 2). However, not much is known about the prevalence of these STEC serogroups in cattle feces, in terms of their distribution and the proportion of animals in a herd positive for various serogroups, largely because of a lack of isolation and detection methods. Traditionally, identification of serogroups or serotyping of E. coli, conducted by agglutination reaction using serogroup-specific antisera, has been restricted to a few reference laboratories that possess the required antisera. Moreover, the method is time consuming and often exhibits cross-reactions with other serogroups (DebRoy et al., 2011a). A number of PCR-based assays, end point or real time, have been developed and validated for the detection of one or more clinically relevant serogroups of E. coli (Perelle et al., 2004; Monday et al., 2007; Fratamico et al., 2009; Bai et al., 2010, 2012; DebRoy et al., 2011b; Madic et al., 2011; Luedtke et al., 2014; Iguchi et al., 2015b; Noll et al., 2015b; Sanchez et al., 2015; Shridhar et al., 2016a). However, only a few mPCR assays have been described to detect certain STEC serogroups that are non-top-7 (Iguchi et al., 2015b; Sanchez et al., 2015; DebRoy et al., 2018). In recent years, DNA microarray and whole genome sequencing have been widely used to identify E. coli serogroups and serotypes (Liu and Fratamico, 2006; Lacher et al., 2014; Joensen et al., 2015; Norman et al., 2015). However, mPCR assays targeting serogroup-specific genes remain a rapid and simple means of identifying STEC that most laboratories can readily adopt. To understand the ecology and prevalence of these STEC serogroups in cattle, it is essential to detect the non-top-7 STEC serogroups shed in cattle feces in order to determine their impact on food safety and human health. Therefore, the objectives of the present study were to develop and validate mPCR assays targeting serogroup-specific genes to detect 137 non-top-7 STEC serogroups known to be associated with cattle. Design of the Assays A total of 14 mPCR assays, each targeting 7-12 STEC serogroups, were designed.
The genes targeted for serogroup-detection primer design included: wzx, which encodes the O-antigen flippase required for O-polysaccharide export (Liu et al., 1996); wzy, which encodes the O-antigen polymerase required for O-antigen biosynthesis (Samuel and Reeves, 2003); gnd, which encodes 6-phosphogluconate dehydrogenase for O-antigen biosynthesis (Nasoff et al., 1984); wzm, which encodes the transport permease for O-antigen transport; and orf469 and wbdC, which encode mannosyltransferases for O-antigen biosynthesis (Kido et al., 1995). The primers were designed based on the available nucleotide sequences of the target genes for each of the STEC serogroups from the GenBank database. The sequences for each serogroup were aligned using ClustalX version 2.0. The primers were designed to amplify the target genes with distinct amplicon sizes for each serogroup within an assay for easier visualization. The forward and reverse primer sequences for these serogroups are provided in Supplementary Tables 1A-N. Validation of PCR Assays The specificity of each assay was determined with pooled DNA of the positive controls from the other 13 sets and from the top-7 STEC plus O104 PCR assays. Additionally, each assay was validated with one or more strains of the targeted serogroups. A total of 460 STEC strains belonging to the 137 targeted serogroups were used for the validation of the assays (Table 4; Supplementary Tables 2A-N). The strains were obtained from our culture collection (n = 104), the E. coli Reference Center at Pennsylvania State University (n = 223), Michigan State University (n = 42), the University of Nebraska (n = 5), and the Food and Drug Administration (n = 86). Strains stored in CryoCare beads (CryoCare, Key Scientific Products, Round Rock, TX) at −80 °C were streaked onto blood agar plates (Remel, Lenexa, KS) and incubated overnight at 37 °C. Following incubation, colonies from the blood agar plates were suspended in 1 ml of distilled water, boiled for 10 min, and centrifuged at 9,300 × g for 5 min, and the supernatant was used for the PCR assays. RESULTS Out of the 158 serogroups of STEC, only five, namely O36, O66, O95, O184, and O187, have not been reported to be present in cattle feces, beef, or beef products (Table 1). A total of 14 mPCR assays, each targeting 7-12 O-types of the 137 non-top-7 serogroups, were designed (Table 3). Each mPCR set contained primer pairs that generated amplicons of different sizes for each target serogroup, which were readily differentiated using a capillary electrophoresis system (Table 3; Figures 1A-N). The PCR product sizes for all the assays ranged from 145 to 1,046 bp (Table 3; Figures 1A-N). The specificity of each assay was confirmed in that only the genes of the targeted serogroups were amplified, and none of the serogroups targeted by the other 13 sets or by the top-7 plus O104 PCR assays was amplified (data not shown). The assays were validated with 460 strains of known serogroups, and the results indicated that all the assays correctly identified the target serogroups (Table 4). The 14 sets of mPCR assays did not include the following 14 serogroups: O14, O30, O36, O52, O57, O59, O95, O97, O104, O158, O183, O184, O185, and O187. DISCUSSION Of the known 187 serogroups of E. coli, 158 serogroups have been shown to possess genes that encode Shiga toxin 1, 2, or both. Serogroups O26, O45, O103, O111, O121, O145, and O157 are the top-7 serogroups responsible for a majority of human STEC illness outbreaks (Scallan et al., 2011; Gould et al., 2013).
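Since the design described above hinges on each serogroup in a set yielding a band of clearly distinct size, a quick programmatic separation check is useful when assembling such panels. The sketch below is illustrative only: the serogroup-to-size mapping and the 40 bp minimum gap are assumed values, not the actual sizes from the paper's Table 3.

```python
# Hypothetical amplicon sizes (bp) for one multiplex set; the real
# per-serogroup sizes are listed in the paper's Table 3.
panel = {'O2': 145, 'O74': 320, 'O88': 452, 'O98': 610,
         'O109': 760, 'O136': 901, 'O171': 1046}

def separable(panel, min_gap_bp=40):
    """Check that all amplicons in a multiplex set differ by at least
    min_gap_bp so the bands resolve on capillary electrophoresis."""
    sizes = sorted(panel.values())
    gaps = [b - a for a, b in zip(sizes, sizes[1:])]
    return all(g >= min_gap_bp for g in gaps), gaps

ok, gaps = separable(panel)
print(ok, gaps)   # -> True [175, 132, 158, 150, 141, 145]
```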
Among the top-7, fecal shedding of the O157 serogroup has been studied extensively, but relatively fewer studies have examined fecal shedding of the other six non-O157 serogroups in cattle, particularly in the United States (Renter et al., 2005; Paddock et al., 2014; Dewsbury et al., 2015; Noll et al., 2015a; Cull et al., 2017). Among the six top-7 non-O157 serogroups, O26, O45, and O103 are the dominant serogroups in cattle feces, with prevalence ranging from 40 to 50%. However, only a small proportion of these serogroups (2-6%) carry Shiga toxin genes (Noll et al., 2015a). Because Shiga toxin genes are located on a prophage, it is suggested that the serogroups lacking these genes either have lost the prophage or have the potential to acquire it (Bielaszewska et al., 2007). A majority of the non-O157 top-six STEC have been shown to carry the Shiga toxin 1 gene. There is evidence that the type of stx gene carried by STEC in cattle depends on the age of the animal and the season. Shiga toxin genes of STEC strains in adult cattle are predominantly of the stx2 type, whereas the strains from calves primarily possess the stx1 type (Cho et al., 2006; Fernández et al., 2012). In a study on E. coli O157 in Argentina, strains of O157 detected in all seasons were predominantly of the stx2 type; in warm seasons, the proportion of strains containing stx1 decreased and the proportion of strains possessing both types increased (Fernández et al., 2009). Many PCR assays have been developed and validated, generally targeting top-7 STEC serogroups, and often in combination with major virulence genes (Shiga toxins 1 and 2, intimin, and enterohemolysin: Bai et al., 2010, 2012; DebRoy et al., 2011b; Fratamico et al., 2011; Lin et al., 2011; Anklam et al., 2012; Paddock et al., 2012; Noll et al., 2015b; Shridhar et al., 2016a). There has been limited development of PCR assays targeting the non-top-7 STEC in cattle feces. Individual primer pairs have been described and PCR assays developed for each of the 187 serogroups of E. coli (DebRoy et al., 2018). However, there are only a few multiplex PCR assays targeting non-top-7 STEC serogroups (Iguchi et al., 2015b; Sanchez et al., 2015). Sanchez et al. (2015) reported the development of three mPCR assays targeting 21 of the most clinically relevant STEC serogroups associated with infections in humans. The assays included the top-7 serogroups and O5, O15, O55, O76, O91, O104, O118, O113, O123, O128, O146, O165, O172, and O177. Iguchi et al. (2015b) designed primer pairs to develop 20 mPCR assays, with each set containing six to nine serogroups, to detect 147 serogroups that included STEC and non-STEC. STEC serogroups other than the top-7 have been reported to be involved in sporadic cases and a few outbreaks of human illness (McLean et al., 2005; Espie et al., 2006; Buchholz et al., 2011; Mingle et al., 2012). Among the non-top-7 STEC, certain serogroups, such as O1, O2, O8, O15, O25, O43, O75, O76, O86, O91, O101, O102, O113, O116, O156, O160, and O165, and specifically certain serotypes within these serogroups, have been involved in outbreaks associated with consumption of contaminated beef in the US and European countries (Eklund et al., 2001; Hussein, 2007). Many of the outbreaks included cases of hemorrhagic colitis and HUS. Serogroups O91 (mostly H21 and H14 serotypes) and O113 (mostly the H21 serotype) have been associated with severe cases of hemorrhagic colitis and HUS in the US and other countries (Feng et al., 2017).
Obviously, the difference in virulence between serogroups and serotypes is attributable to specific virulence factors encoded by genes in the chromosome, particularly on large, horizontally acquired pathogenicity islands, or on plasmids (Levine, 1987; Bolton, 2011). In contrast to humans, cattle are generally considered not to be susceptible to STEC infections. Only newborn calves, particularly those that are immunocompromised because of colostrum deprivation, have been shown to exhibit E. coli O157:H7 infections characterized by bloody diarrhea and attaching and effacing lesions (Dean-Nystrom et al., 1998; Moxley and Smith, 2010). Other serogroups that have been associated with diarrheal diseases of calves include O5, O8, O20, O26, O111, and O113 (Mainil and Daube, 2005). The majority of the serotypes causing infections in calves carried only the Shiga toxin 1 gene (Mainil and Daube, 2005). Moxley et al. (2015) reported the isolation of STEC O165:H25 from the colonic mucosal tissue of an adult heifer that died of hemorrhagic colitis. Of the 158 STEC serogroups, 130 have been associated with clinical cases of diarrhea in humans (Mainil and Daube, 2005; Hussein, 2007; Valilis et al., 2018). Therefore, there are 28 STEC serogroups that have not been reported to cause human infections, which is interesting because Shiga toxins are potent virulence factors. Either these STEC have not yet been linked to an illness, or they lack other virulence factors, such as those needed for attachment and colonization, necessary to cause infections. A further understanding and assessment of the virulence potential of these serogroups will require sequencing of the whole genome to obtain a comprehensive gene profile. In conclusion, the multiplex PCR assays designed in our study, which can be readily performed in most microbiology laboratories, will allow rapid identification of isolates belonging to the non-top-7 E. coli STEC serogroups that are prevalent in cattle feces, beef, or beef products. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS JB, TN, and CD conceived and designed the experiments. JL and XS performed the experiments. XS, JB, CD, ER, RP, and TN contributed reagents, materials, and analysis tools. PS, CD, XS, JB, and TN wrote the paper. All authors contributed to the article and approved the submitted version. FUNDING This material is based upon work supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award number 2012-68003-30155. The funders had no role in the study design, data collection and analyses, preparation of the manuscript, or decision to publish.
Design and Development of a Data Management System to Support the Preparation Process of Accreditation Forms for Study Programs ― In order to realize public accountability, a study program must actively build an internal quality assurance system. To prove that the internal quality assurance system has been implemented properly and correctly, the study program must be accredited by an external quality assurance institution. The criteria for evaluating commitments in accreditation are outlined in a number of standards and are presented by study programs in form instruments. The problem that arises is that the data/documents supporting the accreditation forms are not all available within the study program, so other units must still be involved to obtain them. The purpose of this study is to design a data management system that ensures the availability of the data/documents needed to support the process of preparing accreditation forms. The System Development Life Cycle (SDLC) method with the waterfall model is used in designing this system. This research produced an application system called Dokumen Akreditasi Online, which is oriented toward the standards of the accreditation form. Overall, the results of functional testing show that all features in the application work. The measured level of user satisfaction was 75.27%, which means that users are satisfied with this system. Thus, preparing for accreditation becomes faster and easier, and the system is also useful for other administrative needs. I. INTRODUCTION Accreditation is a quality assessment carried out by a team of peer experts (a team of assessors) based on established quality standards, to obtain recognition that an institution or study program has met the specified quality standards, making it feasible to carry out its program [1]. The explanation of each standard in the framework of accreditation is presented by the study program in the form of instruments, which are a collection of data and information about inputs, processes, results, and impacts, characterized by efforts to improve the quality of performance. Until now, the documents used to compile the forms have been fragmented, and some are still stored manually (not yet stored in one container). Officers must look for documents in piles of files, which takes quite a long time. It is therefore necessary to build a system to facilitate the management of the required supporting documents, especially for the visitation process, during which hardcopy accreditation documents become evidence to be shown to the assessor. The purpose of this study is to discuss the project activities in building a data center information system application that is able to provide facilities for meeting data needs in the study program. Documents are managed and documented based on the existing accreditation standards. With this system, the preparation of data availability, especially in the process of filling in accreditation forms, becomes faster and easier, and the system can also be used to help manage and maintain all documents produced by organizational, administrative, and academic activities. Problem This study aims to design and develop an information system oriented to the BAN-PT accreditation standards, to make the data needed in the study program readily available for the process of preparing accreditation forms.
Scope of Problem The scope of this research is as follows: (1) the research was conducted in the ITS Computer Engineering Study Program within the framework of preparing for the BAN-PT Study Program Accreditation process; (2) the preparation of the study follows the BAN-PT 2018 accreditation format; (3) the System Development Life Cycle (SDLC) method with the waterfall model is used, with the stages of requirements identification, design, implementation, and system testing. Purpose The purpose of this study is to discuss the activities in building a data center information system application that provides facilities for meeting data needs in the study program. Documents are managed and documented based on the existing accreditation standards. With this system, the preparation of data availability, especially in the process of filling in accreditation forms, becomes faster and easier, and the system also helps manage and maintain all documents produced by organizational, administrative, and academic activities. Benefit This research has the following benefits: (1) it helps manage and maintain all documents produced by organizational, administrative, and academic activities; (2) documents are more accessible for various purposes, and document loss through physical damage can be avoided; (3) it demonstrates the use of the SDLC method with the waterfall concept in designing a system. II. METHOD The research method used is the System Development Life Cycle (SDLC) with the waterfall model. This method uses four stages, namely requirements analysis, process design, implementation, and testing, as shown in Figure 1. III. RESULTS AND DISCUSSION This section discusses the research activities in building a data management system application, focusing on activities for the preparation of study program accreditation. Figure 2 illustrates the use case diagram for the user management to be developed. A use case diagram is a diagram used to illustrate the application functionality available to the users of the application. The operational needs of the system include the hardware and software requirements of the server. Data and document identification are carried out to answer the questions in each accreditation standard. The identification process follows Book III-A of the Accreditation Forms, after which the data collection process is carried out. Process Design Next, the design is discussed, covering the database design and the display design. 1) Database Design The next step is the database design, which explains the relationships between data in the system (Figure 3). Figure 3 shows the result of the database design, where a table in the database represents a data entity. The data table design provides information about what data is needed in developing the information system. (1) User table: used to store administrator data. This table contains each user's username and password, which can only be known by the user himself. Users in the system are distinguished by type, namely as a user or as an admin; (2) Prodi table: used to store the names of the study programs in the Faculty; (3) Standard tables: used to store accreditation form data in accordance with their respective standards.
The data in the database tables for standards 1 to 6 are the same, while the table for standard 7 differs slightly, with additions related to the amount and source of funding for lecturer activities; (4) Lecturer table: used to store lecturer data, with the main components being the lecturer's name, the lecturer's NIP, and the homebase of each lecturer; (5) Scientific seminar table: used to store data on lecturers' seminar activities, with components including the type of seminar, date of implementation, seminar name, location, and source of funds; (6) Lecturer achievement table: used to store the achievement data held by each lecturer; like the scientific seminar table, but with the event or activity name added; (7) PKM lecturer activities table: used to store data on community service activities conducted by the lecturer, with the main components stored being the date of implementation, location of activities, title of activities, and sources of funds; (8) Scientific journal table: used to store data related to journals/papers written by lecturers, with the main components being the date of receipt of the paper and the title of the published paper; (9) Funding units table: used to store data on the units providing the funding used for various lecturer activities, whether seminars, research, or PKM. 2) Display Design The display design serves as a reference for creating the user interface in the system implementation. This design includes the design of the user login page and the design of the main page, as shown in Figure 4 and Figure 5. (1) Dakon menu: the main page display menu. On the main page there are other menus whose functions are to carry out processes such as creating, reading, updating, deleting, and downloading documents; (2) Filter menu: designed to facilitate finding data quickly and precisely; (3) User status: the section that displays whether the current user is an admin or an ordinary user; (4) Identity: contains the ITS logo and shows the name of the Faculty and the name of the study program; this view changes according to the choice made in the Prodi menu; (5) Prodi menu: displays the study programs in the Faculty, because this application is also prepared for the accreditation needs of all study programs within the Faculty; (6) Standard menu: contains the Standard 1 to Standard 7 menus. When users want to enter data and upload a document, they must first choose a standard and then select the standard document type. The document types used are the list of standard documents resulting from the identification of the data requirements for accreditation; thus the uploaded documents are stored according to their respective standards; (7) Standard display: a menu that displays the number of documents that have been loaded and stored under each standard.
Here users can also view documents and download or delete them if desired; (8) Feature menu: intended to display lecturer data from the Tridarma activities that have been carried out, such as lecturer achievements, scientific seminars, scientific journals, and PKM activities. The amount of data entered is captured and appears on the main page in the features section; (9) Lecturer review: shows in detail the documents of each lecturer from all standards, based on the data that has been entered; (10) Stat display menu: a feature to view statistics on the number of research and community service activities and the total amount of funds used for both activities. Implementation The implementation of the system is the stage of realizing the design. The resulting application system is called Dokumen Akreditasi Online (DAKON). This application system is also prepared for the accreditation needs of all study programs within the scope of the Faculty, but its use is currently still limited to the Computer Engineering study program. At this stage, the design was implemented using the following applications and programming tools: (1) Notepad++, which functions as a text editor; (2) PHP, which makes the website display dynamic and interactive; (3) MySQLi, which functions as the data storage or database server. Figure 6 displays the login page, while Figure 7 displays the main page. On the login page, users must fill in their username and password to enter the system, where each user has different access rights. Testing System testing is a process to ensure the success of the system created. Testing of the Dokumen Akreditasi Online information system was done in two ways, namely functional testing and testing the level of user satisfaction using a questionnaire. 1) System Functional Testing Functional testing of this system was done with test cases such as the following. Activity: inputting lecturer research data in standard 7. Steps: (1) enter the application system by logging in; (2) choose standard 7, choose add standard 7 document, and choose the lecturer proposal document; (3) input the data and upload the document, as shown in Figure 8. Overall, the results of testing the system functionality are shown in Table 1. Table 1: Functional test results — system login ✓; operating the system as an admin ✓; operating the system as a user ✓; data save operation ✓; data edit operation ✓; data display operation ✓; data delete operation ✓; file upload operation ✓; file upload operation ✓; document filter operation ✓; system logout ✓. From Table 1, it can be concluded that all functional aspects of the Dokumen Akreditasi Online information system of the Computer Engineering study program function properly. 2) User Satisfaction Testing User satisfaction testing is the stage of measuring the level of user satisfaction with the Dokumen Akreditasi Online information system. This test was carried out by the admin, the study program management, the accreditation team, the admin staff, and the lecturers involved in the study program accreditation process, 11 respondents in total. In the testing process, respondents gave an assessment of 5 questions. Ratings were given based on the assessment indicators in Table 3, while the level of user satisfaction was determined using the satisfaction indicators shown in Table 4. The total value of the maximum rating indicator = 275.
The percentage of user satisfaction (%) was calculated as the total rating given by the respondents divided by the maximum rating value (275), multiplied by 100. Based on the measurement results, it can be concluded that users are satisfied with the Dokumen Akreditasi Online information system for the Computer Engineering study program, with a satisfaction percentage of 75.27%, which falls in the range 61%-80%. IV. CONCLUSION The Dokumen Akreditasi Online (DAKON) information system was developed using the SDLC method with the waterfall model in four stages, namely requirements analysis, design, implementation, and testing; the web-based system runs on localhost in a web browser (Mozilla Firefox). Through testing, it was found that all features function according to the expected needs. Testing with 11 respondents to measure the level of user satisfaction with the questions answered yielded a percentage of 75.27%, which means that users are satisfied with this system. This website-based information system can assist in providing the data and document storage needed in the process of compiling the accreditation forms of a study program, thereby providing effectiveness and efficiency benefits. V. RECOMMENDATION This information system does not yet provide innovative data analysis features, for example a recapitulation of scores from internal assessors that could give study programs an evaluation before being assessed by BAN-PT assessors, or dynamic standard data that can be adapted to new standards when the accreditation form assessment rules change. In the next stage of development, the version must be adjusted to the new accreditation instrument with the 9 accreditation standard criteria from BAN-PT. For management, the accreditation forms application is intended to be used as evaluation material, so that the study program can make performance improvements to optimize education quality standards in the study program.
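For concreteness, the reported figure follows directly from the ratio of the obtained total to the 275-point maximum. The rating distribution below is hypothetical, chosen only so that its total (207) reproduces the published 75.27%.

```python
def satisfaction_pct(ratings, n_items=55, max_per_item=5):
    """Percentage of user satisfaction: obtained total over the maximum
    possible total (11 respondents x 5 questions x 5 points = 275)."""
    assert len(ratings) == n_items
    return 100.0 * sum(ratings) / (n_items * max_per_item)

# Hypothetical distribution of the 55 ratings, totalling 207 points.
ratings = [5] * 10 + [4] * 22 + [3] * 23
print(round(satisfaction_pct(ratings), 2))   # -> 75.27
```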
Chemical waste risk reduction and environmental impact generated by laboratory activities in research and teaching institutions The environmental impact caused by teaching and research with regard to chemical waste is of increasing concern, and attempts to solve the issue are being made. Education and research institutions, in most laboratory and non-laboratory activities, contribute to the generation of small quantities of waste, much of it highly toxic. Some of this waste is listed by government agencies concerned about environmental pollution: discarded acids, metals, solvents, chemicals, and selected products of synthesis, whose toxicity is often unknown. This article presents an assessment of the problem and identifies possible solutions, indicating pertinent laws, directives, and guidelines; examples of institutions that have implemented protocols to minimize the generation of waste; harmonization of procedures for waste management; and waste minimization procedures such as reduction, reuse, and recycling of chemicals. INTRODUCTION The environmental impact of chemical waste produced by teaching and research is a topic that has been of great concern and discussion for at least two decades, as illustrated in the book by Ashbrook and Reinhardt (1985) on the generation of hazardous wastes in academic institutions. In the book, the authors stressed the need to implement a practice for the treatment of chemical waste in educational institutions, which in most laboratory and non-laboratory activities contribute to the generation of small quantities of waste, many of them highly toxic. Some of this is listed by governmental agencies concerned about the quality of the environment. Examples include the disposal of toxic acids, metals, solvents, and chemicals, and also products of synthesis whose toxicity is often unknown. Furthermore, it is noteworthy that the composition of waste from research labs changes constantly according to each project being developed. This situation can no longer be ignored by academic institutions, and various research and educational institutions in Brazil are concerned about this problem and are integrating hazardous waste management into their activities. Some of these activities are described in articles frequently published in the Química Nova Journal, among them articles by Jardim, 1998; Cunha, 2001; Amaral et al., 2001; Afonso et al., 2003; Alberguini et al., 2003; Bendassolli, 2003; Afonso, 2004; Gerbase et al., 2005; and Imbroisi et al., 2006. There are also several books by Brazilian authors addressing the management of chemical waste in universities (Alberguini et al., 2005; Figueredo, 2006). The work by Nolasco et al. (2006) analyses the implementation of programs for managing laboratory chemical waste in Brazilian universities, and states that several programs are responding to the requirements of the pillars of sustainability and ecological awareness, which were the main proposals of Agenda 21. The authors also mention that in the last decade, some of the oldest and most prominent Federal and State universities have been adapting and establishing proper measures for waste control.
These institutions include the Center for Nuclear Energy in Agriculture, University of São Paulo (Tavares, 2004), the University of Campinas - UNICAMP (Gerbase et al., 2005), the Institute of Chemistry of the University of Rio de Janeiro - IQ/UERJ (Barbosa et al., 2003), the Department of Chemistry, Federal University of Paraná - DQ/UFPR (Cunha, 2001), the Institute of Chemistry of the Federal University of Rio Grande do Sul - IQ/UFRGS (Amaral et al., 2001), the Regional Integrated University of High Uruguay and Missions - URI (Demaman et al., 2004), and the Federal University of Rio de Janeiro - UFRJ (Afonso et al., 2004). In addition to these sites, other initiatives are being carried out in other educational institutions: for example, Borghesan et al. (2003) cited the University of São Paulo, São Carlos, while Mortari (2003) mentioned the Franciscan University Center. Otenio et al. (2008) also described a case study associated with the management of biowaste at Embrapa Gado de Leite (dairy cattle research), pooling the opinions of researchers, analysts, and trainees on the problem of waste generated in biological research. The advantages of establishing and maintaining programs for waste management in universities and teaching and research institutions, both governmental and private, largely outweigh the operational costs that these entail. One of the most significant advantages is undoubtedly the fact that students are taught how to deal adequately with the waste produced in research and in classrooms, thereby minimizing damage to the environment. Moreover, another advantage, which should not be overlooked, is that of working in a safe, healthy, and clean environment, in line with the principles of ecology (Armour, 1996). The United States legislation related to environmental care in educational institutions is the Resource Conservation and Recovery Act (RCRA), also known as the Solid Waste Disposal Act, which came into force in 1976 and is an interesting example of the concern over the risks associated with ecological damage. Its objectives are to protect human health and the environment, reduce the generation of all types of waste, toxic or otherwise, and promote the conservation of energy and natural resources. This law gives the U.S. Environmental Protection Agency (EPA) the power to regulate the disposal of toxic waste in the U.S.A. and the authorization to bring civil and criminal charges against whoever violates it. There have already been cases not only of industries but also of various American universities being charged, condemned, and subjected to severe penalties. According to the amendment to this law, dated October 1990, individuals charged with this type of violation can be personally prosecuted, convicted, and sentenced to imprisonment in State or Federal prisons. Another penalty of significant importance to educational and research institutions committing this type of violation is that they may no longer receive funds or subsidies from government organizations to support or sponsor their research.
In 2006, the EPA proposed alternative and more flexible standards for the management of hazardous waste generated in academic institutions, as the environmental agency considered that the legislation, which had formerly been established for industries, needed to be adapted in various respects. Academic institutions present different characteristics from those of industry, since the amounts of waste generated are smaller, diverse, and distributed across various laboratories, manipulated by students in various situations that are not always supervised by trained individuals. Thus, the revised legislation came into force in 2008 (Monz, McDonough, 2006; Archer et al., 2000). As stated above, although the amount of waste generated in academic institutions is small, less than 1% of the total generated nationally, waste in educational institutions is considered heterogeneous and may include highly toxic compounds. Therefore, any teaching and research institution committed to its employees' and students' health must consistently uphold the laws related to workers' chemical safety and the laws on management of hazardous waste released by its laboratories. The concept of waste minimization encompasses any action that reduces the amount and/or toxicity of anything to be discarded as hazardous waste. It is therefore essential that the waste be properly handled, stored, and disposed of. When the waste-generating source has been identified, whether highly hazardous or otherwise, protocols or operational procedures aimed at its appropriate disposal should be implemented. Most of what is used in university laboratories, whether related to research or teaching, can at some point become hazardous. Examples are solvents, glassware, reagents, packaging of dangerous products, biological material, out-of-order, broken, or obsolete equipment, broken thermometers, and outdated or obsolete computers. A visitor to the most unpretentious academic laboratories would see such material, which can cause safety problems and have an impact on environmental health if disposed of in an indiscriminate manner. Thus, it is important to address this problem urgently, which can be done by cutting down on waste production and by properly treating and disposing of the waste that is produced. HARMONIZATION OF WASTE MANAGEMENT PROCEDURES The last few decades have seen great development in the creation of standards and quality systems in several professional activities, many of them aimed at improving the harmonization of procedures, the quality of manufactured products, and professional activities. Among these systems are those of the ISO (International Organization for Standardization), whose function is to ensure desirable characteristics of products or services, such as quality, environmental care, safety, reliability, and efficiency. Most of the ISO guides are specified for products, processes, or materials. ISO 9000 procedures focus on quality and ISO 14000 on the environment, and they are considered generic management standardization systems. The term generic here has the connotation that these systems can be applied to any organization, large or small, carrying out any activity in any sector of the economy, public administration, or government organizations. The ISO 9001 standard provides a series of requirements for implementing a quality management system, while ISO 14000 pertains to an environmental management system (www.iso.org).
When organizations follow the spirit of ISO 14000, it means that they promote changes in attitude, in operational procedures, and in management, which then yields many benefits. This standardized protocol provides common-sense information to help reduce the negative impact of several activities on the environment, thereby reducing costs by cutting down on waste and preventing pollution, in addition to contributing to the quality of the communities in which the organization operates (Rondinelli et al., 2000). Thus, institutions both private and public can benefit from the implementation of such systems, which significantly improve the environment in which they are located. A waste management program is an integral part of the environmental care recommended by ISO 14000. Such programs can and should be implemented in educational institutions, and this necessarily includes constant evaluation of laboratory activities and processes, aimed at reducing the generation of disposable material and increasing recycling. The waste generated by academic institutions such as universities, institutes, and high schools can be classified into four main categories: household, biological, chemical, and radioactive waste. The last three may or may not be considered dangerous. Waste considered hazardous should be discarded as such and not as common garbage, in order to minimize environmental impact and to adhere to the specific waste management laws enforced by European, American, and Brazilian legislation (Council Directive 91/689/EEC; USEPA, 1996; Brazil, 2002). Hazardous chemicals normally found in academia and requiring proper treatment are: • chemical wastes generated in research laboratories and during teaching activities; • old chemical agents, considered an institutional liability, often difficult to identify and abandoned in the laboratory; • chemical agents past their expiration date and therefore in need of re-evaluation of their effectiveness and of the need for disposal; • bottles of chemicals without labels or with wrong or unreadable labels; • material in a state of deterioration or in packages which are deteriorated or damaged; • unknown residues in chemical containers; • laboratory waste such as paper towels and rags; • personal protective equipment: aprons, glasses, masks, and gloves contaminated with harmful biological, chemical, or radioactive material; • non-recyclable batteries and gas cylinders; • photographic film processing solutions; • pesticides, equipment containing toxic compounds, different types of waste oils, used solvents, thinner, oil remover, wood preservers; • formaldehyde, formalin, and acrylamide waste in liquid or gel form; • mercury and other metals with high toxicity; • defunct electronics, computers, and thermometers; • sharp devices such as needles, syringes, chromatography needles, Pasteur pipettes, and tips; • bleach, ammonia, cleaning solvents, liquid wood polish; • chemical bottles (glass and plastic), empty but contaminated; • contaminated broken (or damaged) laboratory glass; • mercury-contaminated, broken (or damaged) thermometers; • carcinogenic and radioactive chemicals, pathogenic microorganisms.
RELEVANT LAWS RELATED TO WASTE DISPOSAL Although very little legislation is directly related to the management of hazardous waste in teaching and research activities, there are many laws, rules, and ordinances at the Federal, State, and local levels related to the subject in question. An interesting portal to be consulted for information on health laws and regulations is the site http://www.cvs.saude.sp.gov.br/publ_leis3.asp. Examples of pertinent legislation: • Resolution RDC 306/2004 (ANVISA) - technical regulations for the management of health services waste; • Resolution CONAMA 357/2005 - sets the conditions and standards for effluent release. Resolutions CONAMA n. 358 and ANVISA n. 306 Resolution CONAMA n. 358 of 29 April 2005 of the Ministry of Environment, and ANVISA Resolution RDC 306 of 7 December 2004, seek to minimize occupational hazards and protect the health of workers and the general population. This resolution, in its Article 1, specifies that it applies to all services related to human or animal health care, laboratories of analysis for health products and, subject to this chapter, health educational and research institutions, among others. This resolution classifies the various types of waste into 5 categories and indicates how each one should be handled: I - GROUP A: Waste with the possible presence of biological agents that, by their characteristics of greater virulence or concentration, may pose a risk of infection. a) A1: 1. Microorganism cultures and stocks; waste from the manufacture of biological products, except hemoderivatives; discarded attenuated or live microorganisms and vaccines; culture media of microorganisms and instruments used to transfer, inoculate, or mix cultures; waste from laboratories of genetic manipulation; 2. Waste resulting from the health care of individuals or animals with suspected or confirmed contamination with risk class 4 biological agents, with microorganisms of epidemiological relevance and risk of dissemination, or capable of causing emerging diseases that could become epidemiologically important, or whose transmission mechanism is unknown; 3. Bags containing blood, or blood rejected for transfusion due to contamination, poor storage, or expired validity date; 4. Remains of laboratory samples containing blood or body fluids, and containers and materials resulting from health care processes containing blood or body fluids. The generating unit should treat this type of waste before disposing of it. b) A2: Carcasses, body parts, and other waste from animals used in experimental processes involving inoculation of microorganisms, and the corpses of animals suspected of being carriers of microorganisms of epidemiological relevance and risk of dissemination. The generating unit should treat this type of waste before disposing of it. c) A3: Human body parts, and products of fecundation weighing less than 500 g and/or measuring less than 25 cm, with gestational age less than 20 weeks. The generating unit should arrange for the material to be cremated, incinerated, or buried. d) A4: 1. Kits of arterial, intravenous, and dialysis lines, when discarded; 2. Filters for air and gases aspirated from contaminated areas; filter membranes from hospital, clinical, and research equipment, among others; 3.
Remains of laboratory samples of feces, urine, and secretions, and their containers, from patients who do not contain, and are not suspected to contain, risk class 4 agents, whose microorganisms have no epidemiological relevance or risk of dissemination, do not cause an emerging disease that has become epidemiologically important or whose transmission mechanism is unknown, and are not suspected of prion contamination; 4. Fat residue waste from liposuction, liposculpture, or other plastic surgery procedures that generate this type of waste; 5. Containers and materials resulting from health care processes that contain no blood or body fluids; 6. Anatomical pieces (organs and tissues) and other waste from surgical procedures or from anatomical-pathological studies or diagnostic confirmation; 7. Carcasses, anatomical parts, or other waste from animals not submitted to experimentation processes with inoculation of microorganisms; 8. Empty or partially used blood transfusion bags. The generating unit should discard this material, without prior treatment, at sites previously designated for health services waste disposal. e) A5: Organs, tissues, body fluids, piercing or sharp material, and other materials from human or animal health care with suspected or confirmed prion contamination. The generating units must incinerate them. II - GROUP B: Waste containing chemicals that may present a risk to public health or the environment, depending on their characteristics of flammability, corrosivity, reactivity, and toxicity. a) Hormonal, antimicrobial, cytostatic, anticancer, immunosuppressant, digitalis, immunomodulatory, and anti-retroviral products, when discarded by health services, pharmacies, drugstores, and distributors; waste of medicines and pharmaceutical raw materials; b) Sanitizing and disinfectant waste; waste containing heavy metals; laboratory reagents, including containers contaminated with these materials; c) Image processor effluents (developers and fixers); V - GROUP E: piercing or sharp material, such as shaving blades, needles, scalp vein sets, glass ampoules, drills, endodontic files, diamond burs, scalpel blades, lancets, capillary tubes, micropipettes, and spatulas; also all broken glass utensils from the laboratory (pipettes, blood collection tubes, and Petri dishes), among others. This legislation considers risk class 4 agents (high individual risk and high risk to the community) to be the pathogens that represent a major threat to humans and animals and pose great risk to whoever might handle them, or that are likely to be transferred from one individual to another, or for which there are no preventive measures or treatment.
Hazardous chemical wastes are those listed in Group B, as per CONAMA resolution 283 of 12 July 2001, and classified as hazardous according to NBR 10004 because they present characteristics of toxicity, reactivity, flammability, and/or corrosivity. Non-hazardous chemical wastes are those resulting from the laboratory activities of institutions providing health services that do not exhibit the above characteristics, which are defined as follows: • Flammability: any liquid whose flash point is less than 60 °C; compressed gases with a high degree of ignition; oxidants; substances capable of catching fire, under normal pressure and temperature, by processes of friction, absorption of moisture, or spontaneous changes; • Corrosivity: aqueous solutions with pH below 2 or above 12.5; compounds with high reactivity in water or that form potentially explosive mixtures with water; those that are usually unstable, or that generate fumes, smoke, or toxic gases when mixed with water; cyanide or sulfide residues that generate toxic gases, smoke, or fumes at pH between 2 and 12.5; explosive compounds; and materials capable of corroding steel at a rate of 6.35 mm per year at a temperature of 55 °C; • Reactivity (causes irritation): non-corrosive chemical agents which, by contact with skin or mucosa, can cause inflammation. It is worth noting that corrosive substances at low concentrations may be irritating; hydrophilic agents, such as ammonia, are irritating to the upper respiratory tract; organic solvents are irritants due to dissolution of the dermal lipid layer; • Toxicity: toxic metals, pesticides, organic compounds, polychlorinated biphenyls, and dioxins, among other residues. Toxicity is proven by in vivo and in vitro toxicological tests, adopting harmonized and internationally accepted protocols (USEPA, OECD). The University of Houston suggests a procedure for chemical safety and risk management based on Title 40, Code of Federal Regulations, Protection of the Environment (U.S. Environmental Protection Agency). According to the EPA definition, pollution prevention is the reduction of waste at the source together with environmentally correct recycling; thus any plan to tackle this problem must consider the 3 Rs: Reduce, Reuse, and Recycle: • Reduce the use of hazardous material and, where possible, use small quantities of chemical agents; • Reuse material, i.e., transfer or share the use of hazardous material between the various laboratories of the institution, or store it properly to be used when necessary; • Recycle, using filtration and distillation systems, among others, that allow the reuse of solvents.
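As a rough illustration of how these characteristic thresholds could be turned into a first-pass triage aid, the sketch below flags the hazard characteristics described above. The function and its argument names are hypothetical, the thresholds follow the text, and it is in no way a substitute for the NBR 10004 classification procedure itself.

```python
def hazard_characteristics(flash_point_c=None, ph=None,
                           steel_corrosion_mm_per_yr=None, toxic=False):
    """Flag NBR 10004-style hazard characteristics using the thresholds
    summarized in the text above (a triage aid, not the standard)."""
    flags = []
    if flash_point_c is not None and flash_point_c < 60:
        flags.append('flammable')            # liquid flash point below 60 C
    if ph is not None and (ph < 2 or ph > 12.5):
        flags.append('corrosive')            # pH outside the 2-12.5 window
    if steel_corrosion_mm_per_yr is not None and steel_corrosion_mm_per_yr > 6.35:
        flags.append('corrosive to steel')   # > 6.35 mm/yr at 55 C
    if toxic:
        flags.append('toxic')                # per in vivo / in vitro protocols
    return flags or ['no hazardous characteristic flagged']

# Example: an ethanol-rich waste stream (flash point ~13 C, neutral pH).
print(hazard_characteristics(flash_point_c=13, ph=7))   # -> ['flammable']
```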
Another important way to minimize waste generation is to consider the use of less toxic chemicals, both in research laboratories and in the classroom. Avoid the use of unnecessary ingredients, such as emulsifiers in solvents to be used or discarded, and separate the different types of solvents for reuse or recycling. University laboratories tend to generate a considerable amount of chemical waste because they often use outdated techniques and large volumes of solvents. Other suggestions designed to minimize the generation of waste are: replace mercury thermometers with digital thermometers; for cleaning laboratory glassware, replace sulfochromic solution with alcoholic potassium hydroxide solution or, when possible, with sonication; replace tests that use strong acids and bases, which are more toxic, with tests using vinegar and ammonia; and replace carbon tetrachloride with cyclohexane, following the approaches described below. REDUCTION In the process of waste reduction at source, the goal is to facilitate any activity that reduces or eliminates the generation of hazardous chemical waste. This can be implemented through good management when acquiring materials, through the replacement of toxic materials with less harmful ones, and through good laboratory practice. Here are some suggestions from the University of Florida that allow the reduction of waste at source: • implement a policy of minimizing waste in the university's research and teaching laboratories, and train all those involved in these activities. Reduction at the source can be achieved by improving methods or processes and replacing ineffective equipment. In educational institutions this is not always possible, but one should consider using more modern extraction techniques, such as solid-phase extraction or supercritical fluid extraction, to minimize waste by using smaller volumes of organic solvents; • do not mix hazardous classes of waste with non-hazardous ones; • consider the possibility of using less toxic reagents, substituting products with lower toxicity. In this context, it is possible to consider replacing benzene, used as a solvent, with hexane or xylene; formalin or formaldehyde, used as a preservative for specimens in the laboratory, with ethanol; halogenated solvents in extraction processes with non-halogenated solvents; sodium dichromate with sodium hypochlorite in some oxidation reactions; toluene-based liquid scintillation cocktails, in studies using radioactive material, with a non-flammable solvent; and, in qualitative tests for heavy metals, sulfide ion with hydroxide ion. Replacement is not always possible because some substitutes do not produce fully satisfactory results, or are themselves toxic or too expensive. Thus, it is necessary to evaluate whether the replacement material is suitable and delivers acceptable results. A common practice is to make an inventory of the compounds used in laboratories and to identify likely replacements. The laboratory technician responsible for the use of these compounds needs to evaluate the possible replacements, using the information given by suppliers in the material safety data sheet (MSDS); • centralize the acquisition of chemical, biological and radioactive materials; • date all received material, thus facilitating earlier use of the oldest items; • make an inventory of chemical agents purchased and used in the laboratory: maintain a file containing their location, which should be updated
annually. This facilitates the reduction of the quantity stored and avoids the purchase of unnecessary material; • provide employees with updated material safety data sheets (MSDS) for the chemicals used in laboratories; • acquire any chemical, biological and radioactive materials in the least possible amount. The motto of the American Chemical Society (2008) is "Less is better": it is safe and environmentally correct to buy less material, use less and, ultimately, dispose of less, which reduces the risk of accidents, fires, or harm to human health, and at the same time reduces costs; • purchase the equipment needed for immediate use and avoid purchasing materials in large quantities, even if it seems to be economically advantageous, since stocking can be expensive or dangerous and may lead to products exceeding their expiration date. A significant part of the disposal done by universities is related to the purchase of unnecessary equipment; • label all reagents to allow their ready identification. Borrow material from other labs, or buy it in small quantities. There is a success story about a chemicals redistribution program at the University of Wisconsin-Madison, which has existed since 1980. About 30% of the excess chemicals purchased each quarter are redistributed by the university, which saves the university $10,000 to 20,000 on the disposal of chemicals. Thus, the institution donates chemicals to those who need them and at the same time reduces the amount to be discarded, and everyone benefits from this type of procedure; • consider the possibility of testing at micro-scale, using new glassware and techniques that reduce the quantities used to milligrams. This yields many benefits such as lower costs, since small-scale experiments use fewer solvents and other chemical agents and are generally processed more quickly (it is faster to heat or cool small volumes), and it reduces exposure to harmful agents and reduces harmful emissions. However, please note that this technique can be implemented only if it still achieves the proposed analytical objective; • consider the alternative of presentations/demonstrations on video, computer modeling and simulations, which eliminate environmental impacts, as substitutes for laboratory tests in the classroom. These multimedia simulations allow the student to observe more complex procedures than would be possible in traditional laboratory activities; • consider separating and weighing reagents in advance in the laboratory, avoiding contamination of several rooms and environments; • avoid using reagents containing toxic metals such as lead, chromium, arsenic, mercury, barium, silver, cadmium and selenium; • do not use sulfochromic solutions: substitute them with less toxic solutions such as biodegradable detergents like Alconox or Pierce RBS35. Evaluate the possibility of using hot water and detergent for cleaning glassware instead of solvents; • always keep the laboratory clean and in order; • discard waste into the sink leading to the sewage system only when it is suitable for this route. Some organic and inorganic compounds may be discarded into the sewage system, in quantities of up to 100 g and diluted 100 times. Generally, water-soluble organic compounds with boiling temperatures below approximately 50 ºC should not be discarded in this manner. Suitable compounds should be hydrophilic, present at levels of up to 3%, and of low toxicity. The compounds listed below are readily biodegradable and can be discarded in the sink: Organic compounds: alkyl alcohols with fewer than 5 carbon atoms, e.g., t-amyl alcohol;
Alkanediols with fewer than 8 carbon atoms: glycerol, sugars; Alkoxyalkanes with fewer than 7 carbon atoms: n-C4H9OCH2CH2OCH2CH2OH, 2-chloroethanol; Aldehydes: aliphatic compounds with fewer than 5 carbon atoms; Amides: RCONH2 and RCONHR with fewer than 5 carbon atoms, RCONR2 with fewer than 11 carbon atoms; Amines: aliphatic compounds with fewer than 7 carbon atoms, aliphatic diamines with fewer than 7 carbon atoms, benzylamine, pyridine; Carboxylic acids: alkanoic acids with fewer than 6 carbon atoms, hydroxyalkanoic acids with fewer than 6 carbon atoms, aminoalkanoic acids with fewer than 7 carbon atoms, ammonium, sodium and potassium salts of the acids mentioned above with fewer than 21 carbon atoms, chloroalkanoic acids with fewer than 4 carbon atoms; Esters: esters with fewer than 5 carbon atoms, isopropyl acetate. Compounds that have an unpleasant odor, such as dimethylamine, 1,4-butanediamine, butyric and valeric acid, must be neutralized and their salts discarded in the sink into the sewage drain diluted with at least 1,000 volumes of water; Ketones with fewer than 6 carbon atoms; Nitriles: acetonitrile, propionitrile; Sulfonic acids: sodium or potassium salts of these acids are acceptable. Reuse and recycling Reuse and recycle material whenever possible, whether in a new way, or treated and reused in the same way, or in another type of activity. Some examples of recycling are: • distillation of used solvents; • in cleaning processes, glassware can be initially washed with used solvents; • purchase compressed gas cylinders only from manufacturers that accept the return of empty or partially used ones; • in pesticide studies, it is advisable to establish the practice of returning any unused material to the research sponsor; • avoid the contamination of fuel with solvents or heavy metals; • share chemical agents among the various university units; • control the use of metallic mercury. If the above procedures are not suitable for minimizing waste in specific situations, an alternative may be the final chemical treatment of the generated hazardous waste. The techniques routinely used in reducing chemical waste are neutralization, precipitation, oxidation, reduction and distillation, practices to be conducted by trained laboratory technicians. Precipitation, oxidation and reduction These processes can remove hazardous components of chemical waste so that the final product can be discarded as common trash. Precipitates derived from these reactions may require more thorough waste treatment. The application of these chemical-treatment procedures in laboratories, apart from reducing hazardous waste, allows their incorporation as a common practice when teaching students the responsible management of chemical waste, fostering future generations of scientists with a better understanding of proper waste disposal. The recycling of solvents, among other materials used in technical analysis, allows the reuse of material that would otherwise be discarded as hazardous waste. These techniques require planning when they are incorporated into teaching laboratory activities. Solvent recycling, if well done, brings advantages to the academy in terms of risk reduction, harmful waste reduction and lower costs, and is also beneficial for the students because, as previously mentioned, it allows them to learn about waste management in a responsible manner and understand the university's commitment to hazardous waste reduction.
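Returning to the sink-disposal guidance above, the quantitative limits quoted in the text (up to 100 g, diluted 100 times, and at least 1,000 volumes of water for malodorous compounds after neutralization) can be captured in a small screening helper. The sketch below is purely illustrative: the function name, field names and example compound list are hypothetical, and an actual decision must follow the institution's waste rules.

# A minimal, illustrative sketch of the sink-disposal guidance given above.
MALODOROUS = {"dimethylamine", "1,4-butanediamine", "butyric acid", "valeric acid"}

def sink_disposal_advice(compound: str, mass_g: float, dilution_factor: float,
                         readily_biodegradable: bool) -> str:
    if compound.lower() in MALODOROUS:
        return "Neutralize first; discard the salts diluted with at least 1,000 volumes of water."
    if not readily_biodegradable:
        return "Not suitable for sink disposal; handle as chemical waste."
    if mass_g <= 100.0 and dilution_factor >= 100.0:
        return "May be discarded to the sewage system under the guidance above."
    return "Reduce the quantity and/or increase the dilution before considering sink disposal."

print(sink_disposal_advice("ethanol", mass_g=50.0, dilution_factor=150.0,
                           readily_biodegradable=True))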
According to IZZO, 2000, coordinating hazardous waste management at a university can be very complex: most universities are decentralized; most research laboratories have an unstable workforce, relying heavily on graduate students and postdoctoral associates who are usually at the university for a limited time; and the type of waste generated changes frequently as the focus of the research changes. As mentioned earlier, educational and research institutions generate pollution because they produce harmful waste, discard hazardous materials in the sink, and allow the evaporation of solvents, among other activities detrimental to the environment. The recommended process of reducing the production of harmful wastes is not always feasible, because the concept of research implies the study of new compounds and their disposal if the result is neither interesting nor relevant. This type of activity is quite different from industrial processes, where there are routine activities, constant use of raw materials and well-known waste products. In research, the multitude of non-routine activities is inherently more difficult to control. Nevertheless, the suggestions described here may contribute to the prevention of environmental pollution and minimize health and environmental costs. The proper disposal of hazardous waste involves direct costs to universities, as is the case at the University of Bristol, which produced 1,209 tonnes of waste between August 2007 and July 2008, excluding construction waste. Of a total of 1,861 tonnes, the amount recycled or reused was 704 tonnes (39%). The total cost of waste management in 2008 was over £200,000. By increasing the amount of waste that can be reused and recycled, it is possible to reduce the amount going to landfills and save money in the process (Bristol University, http://www.bristol.ac.uk/environment/waste/). Thus, institutions that reduce the production of waste and encourage recycling also reduce costs. Hazardous waste management programs encourage the minimization of waste in universities and provide incentives to these institutions to reduce health and environmental risks, while at the same time decreasing the amount of hazardous waste to be handled, stored and transported, and reducing the costs of packaging, transportation and disposal. Clemson University (http://ehs.clemson.edu/) offers an example of a campus safety and environmental health plan that applies to all faculty units, staff and students, as well as to all activities undertaken there. This plan includes the following: general safety on campus; compliance with occupational and environmental health laws; disaster management; emergency response to hazardous products; air quality in the external and internal environments of the university; management of lead used in buildings throughout the university; an ergonomics plan; a plan for the prevention, control and handling of chemical spills; policies against violence in the workplace; hygiene in the use of chemicals; a biosafety manual; risk communications; a nanotechnology manual; a manual for the management and disposal of chemical agents; a manual for respiratory protection and control of exposure to blood-borne pathogens; industrial hygiene; and a manual of radiation protection.
An example of pioneering initiatives in this area is the School of Pharmaceutical Sciences of the University of São Paulo, which is taking on board the values outlined above on health and environmental quality, having decided to implement an environmental management system to establish a more suitable environmental performance, according to more modern canons. CONCLUSION Green chemistry is a concept of more sustainable chemistry introduced by the EPA in the 1990s in collaboration with the American Chemical Society (ACS) and the Green Chemistry Institute. The green chemistry concept relates to the invention, development and application of methodologies that reduce or eliminate the use of dangerous chemicals and by-products harmful to human health or the environment. Several European countries, the U.S. and Japan, among others, are encouraging the implementation of this concept in industry and in teaching and research activities, including rewarding companies and researchers for developing chemical processes, services and products that do not damage the environment. In fact, this concept does not introduce anything new, because it encompasses values that have been discussed since the 1970s; it does, however, incorporate the values and parameters of sustainability discussed in Agenda 21, the Kyoto Protocol and Rio+10. The relevance of green chemistry is that it incorporates these concerns into the concept of danger or toxicity of chemicals. It is worth noting that toxicity is the potential capability of a substance to present a hazard to life and the environment under certain conditions (Klaassen, 2007). The aim of green chemistry is to avoid this hazard. Thus, in avoiding this hazard, one can make use of the basic paradigm of toxicology, the concept of risk, which includes the hazard or toxicity likely to produce harm under specific conditions, namely: Risk = Hazard (or toxicity) x Exposure. This risk approach allows both the intrinsic toxicity of chemical compounds and their conditions of exposure to be dealt with (Klaassen, 2007). Thus, while advocating the use of substances of lower toxicity, it also creates a policy of damage prevention for humans and the environment. In the process of establishing control over chemical exposure, alternatives to minimize chemical risk are also created. The laws related to minimizing chemical risk are based on maximum permissible or tolerable levels of exposure, the use of preventive measures to minimize exposure, the use of personal protective equipment, and measures for technological control and treatment of effluents. Since chemical safety is the opposite of chemical risk, any of the alternatives presented above, such as substitution with a less intrinsically toxic compound or alteration of the exposure conditions, is welcome. If low-risk products are used, no additional costs will be needed to secure the conditions of exposure (Tundo, 2000). In conclusion, the implementation of constant training of university teachers and students with regard to safety in the use, storage and disposal of dangerous products is key for all the above-mentioned procedures of health and management quality to become a reality. • Resolution CONAMA 358/2005 - treatment and final disposal of health services waste; • State Decree 8468/1976 - provides for environmental pollution prevention and control; • Law No.
12300/2006 - establishing the Solid Waste State Policy (São Paulo State); • Resolution SMA-31, 22-7-2003 - management of chemical waste from health service establishments in São Paulo State; • Nuclear Energy National Commission - Standard 6.05 (Management of Radioactive Waste in Radioactive Facilities), CNEN resolution 19/85, Federal Official Gazette, 17/12/1985; • Nuclear Energy National Commission - Standard 6.09 (Acceptance Criteria for Disposal of Low- and Average-Level Radioactive Waste), CNEN Resolution of 19/09/2002, Brazilian Federal Official Gazette, 23/09/2002; • ANVISA RDC 306 and Ministry of Health and the Environment - CONAMA 358/2005 - treatment of waste containing biological or pathogenic material. In the case of chemical waste from laboratories, there is no specific legislation for its classification, treatment and disposal; thus one should use NBR 10004:2004 - Standard for classification of solid waste, along with the other State or Federal resolutions and decrees mentioned above. The State Environmental Authority of São Paulo (CETESB) is in charge of controlling this activity. There are also other technical standards, such as those released by the Brazilian Association of Technical Standards (ABNT) (www.abnt.org.br), listed below: • NBR 12807 - Medical Waste Residues - Terminology; • NBR 12808 - Medical Waste Residues - Classification; • NBR 12809 - Procedures for Handling Health Services Waste; • NBR 12810 - Procedures for Disposal of Health Services Waste; • NBR 12980 - Collection, sweeping and packaging of solid waste - Terminology; • NBR 8419 - Design of landfills for urban solid waste; • NBR 9191 - Plastic bags for waste; • NBR 10004 - Solid Waste - Classification; • NBR 10005 - Procedure for waste leaching; • NBR 10006 - Procedure for solubilization of waste; • NBR 10007 - Waste sampling procedure; • NBR 10157 - Hazardous Waste Landfill - Criteria for Design, Construction and Operation. d) Effluent of automated equipment used in clinical testing; e) Other products considered dangerous under the classification of ABNT NBR 10004 (toxic, corrosive, flammable and reactive). The generating unit should treat them for specific final disposal, unless they are subjected to reuse, recycling or recovery processes. III - GROUP C: Any material resulting from human activities containing radionuclides in quantities exceeding the disposal limits specified in the rules of the National Commission of Nuclear Energy (CNEN) and for which reuse is inappropriate or not provided for. Any material resulting from research and educational laboratories, health, clinical and laboratory testing services, or nuclear and radiation medicine containing radionuclides exceeding the elimination limits falls into this group. The generating unit should follow the rules of the Nuclear Energy National Commission (CNEN) for the treatment of radioactive waste. IV - GROUP D: Biological, chemical or radiological waste not posing a health or environmental risk, which can be treated as household waste. a) Sanitary paper, diapers, sanitary napkins, disposable pieces of clothing, food remains from patients, material used in antisepsis, and other similar material not classified as A1; b) Food and food preparation scraps; c) Cafeteria food scraps; d) Waste from administrative areas; e) Sweepings, flowers and garden waste; f) Gypsum residue waste from health care.
8,496.6
2010-06-01T00:00:00.000
[ "Chemistry" ]
Imprints of Casimir wormhole in Einstein Gauss–Bonnet gravity with non-vanishing complexity factor This article investigates Casimir wormhole solutions in Einstein Gauss–Bonnet (EGB) gravity. It is well known that the null energy condition (NEC) need not be satisfied for a stable wormhole, due to the existence of exotic matter. As the Casimir effect acts as a negative energy source, it can be treated as a classical candidate for the exotic matter needed to discuss the stable dynamics of the wormhole. This work explores the Casimir effect together with the Generalized Uncertainty Principle (GUP) on wormhole geometry in EGB gravity, confining our results to D = 5. We have examined two GUP procedures, namely those of Kempf, Mangano and Mann (KMM) and of Detournay, Gabriel and Spindel (DGS). We have developed shape functions for Casimir wormholes and GUP-corrected Casimir wormholes and studied their existence. In addition, we investigate the effect of the Gauss–Bonnet (GB) coupling parameter and the minimal uncertainty (MU) parameter on the equation of state (EOS) parameter. The active gravitational mass and embedding diagrams for all developed shape functions are analysed. Moreover, the violation of the NEC by exotic matter, the equilibrium forces, and the complexity factor of Casimir wormholes and GUP-corrected Casimir wormholes have also been explored. Introduction In 1935, Einstein and Rosen discussed the concept of a hypothetical link through space-time, called a wormhole or Einstein-Rosen bridge, within the framework of General Relativity (GR) [1]. These bridges or tunnels connect two different regions of the same cosmos or create a shortcut. Still, the existence of wormholes remains to be explored experimentally. In 1988 Morris and Thorne established the idea of traversable wormholes [2]. They presented a new class of solutions of the Einstein field equations that describe wormholes and predict the existence of a wormhole throat, which does not suffer from a horizon problem. The stability of traversable wormholes depends on the presence of exotic matter. It is known that exotic matter in wormholes violates the NEC. The dynamics of wormholes have been studied intensively with different modified theories in the literature [3][4][5][6][7][8][9][10]. The cosmic evolution of wormhole geometries in f (R) modified gravity has been discussed [11,12]. The exact solution of the wormhole in the presence of phantom energy has been evaluated, and it was found that phantom energy is limited to the vicinity of the wormhole throat [13]. Wormhole geometries are discussed in other modified gravities, e.g. f (R) [11,14], f (T ) [15,16], BD theory [17][18][19][20], f (R, T ) [21], and scalar-tensor teleparallel gravity [22]. Recently, Kimet Jusufi and his fellow researchers [23] studied the existence of wormholes in 4D EGB gravity. In that article, they developed shape functions with various techniques and analyzed the wormhole conditions. Many efforts have been made in the literature to cut down the exotic matter content. Visser proposed one such model in which he suggested that a traversable path should not pass through the region of exotic matter [24].
Interestingly, no such traversable wormhole has been found because it is nearly impossible to have enough negative energy density. Scientists have proved that negative energy can be created in laboratories known as Casimir energy [25]. Dutch physicist Hendrik Casimir suggested the phenomenon of the Casimir effect in 1948. A force may exist between the uncharged, parallel, conducting plates [25]. Garattini [26] considered Casimir energy a potential source to study the existence of traversable wormholes. The Casimir effect strongly depends on geometry's shape and is an artificial source of energy. This form of energy is a suitable source for traversable wormhole as the quantum field of the vacuum between two parallel uncharged plates give rise to negative energy density. The most recent work in quantum mechanics deals with the idea of the minimal length of the order of Planck length. The purpose of using a minimal length scale is to restrict the resolution of small space-time distances. Therefore, it comes up with MU in the position. We can redefine the Casimir energy density with the generalized uncertainty principle (GUP). Therefore, scientists find it quite exciting to work with GUP-corrected wormholes with traversable wormholes. Jusufi et al. [28] studied three types GUP models with the source term Casimir energy density. The GUP-corrected Casimir wormholes are discussed in f (R, T ) modified theory by Tripathy [29]. The effects of the MU parameter and theory-coupled parameters on wormhole conditions, EOS, and energy conditions have been studied. In weak limit approximation, Javed et al. [30] investigated the weak deflection angle of the photon by Casimir wormhole. They use Gauss-Bonnet theorem on Gaussian optical spacetime to find Gaussian optical curvature. In another article, the authors explore the relationship between an absurdly benign traversable wormhole and Casimir energy [31]. They generalized the idea of Absurdly Benign Traversable Wormhole and explored that wormhole throat is Planckian, but huge. Muniz et al. studied the Casimir effect between the parallel plates in the space-time of a rotating wormhole [32]. The EGB Gravity is a particularly basic example of the larger class of gravitational theories known as Lovelock Gravities, which were proposed by Lovelock [33]. Higher power curvature terms are present in the actions of these gravities, but second-order derivatives in the metric are maintained in the subsequent equations of motion. As a result, Lovelock gravities act as GR most natural extension to higher-dimensional spacetimes. This theory, the most extensive of higher curvature gravities, demonstrates second-order equations of motion and has a number of attractive characteristics in common with Einstein gravity that are missing from other higher curvature gravities theories. Since, we know that the polynomial type of the Lagrangian in Lovelock theories, the first terms is the Einstein-Hilbert action, while the second order term correspond the GB invariant, which is defined as The Ricci scalar, Ricci tensor, and Riemann curvatures are represented by a particular combination known as the Gauss-Bonnet invariant, abbreviated as G. The f (R) formalism has previously been used to study Casimir wormholes in the setting of modified gravity, taking into account two distinct models: f (R) = R + α R 2 and f (R) = f 0 R n [34]. We have concentrated on EGB gravity in our research. 
A noteworthy property of EGB gravity, the potential presence of two separate maximally symmetric solutions, even with distinct curvature scale signs, is what drives the study of wormhole solutions. The flaring-out condition is a crucial need for traversable wormholes. In the background of GR, this condition entails the violation of the NEC. Extensive study has been done to lessen the dependency on exotic matter in light of the energy condition violations [35,36]. It has been interestingly found that higher-dimensional cosmological wormholes [37] and wormholes in modified gravity theories with higher-order curvature invariants can satisfy the energy requirements at the throat [38][39][40]. Since the higher-order curvature terms, which can be thought of as a gravitational fluid, support these non-conventional wormhole geometries, it has been shown that it is possible to impose matter threading the wormhole throat to stick to all of the energy conditions in modified gravity. In order to alleviate the energy condition violations, particularly at the throat area, wormhole geometries in higher-dimensional theories are therefore strongly encouraged. The GB term is defined in this theory along with the Lanczos tensor, which results in the Weyl tensor. The curvature of spacetime is quantified by the Weyl tensor. The Riemann curvature tensor is used to assess curvature in all other theories using the GB term, such as f (G), f (G, T ), f (G, R), etc. The Riemann curvature tensor can offer information on changes in a body's volume, whereas the Weyl tensor only indicates how tidal forces cause a body's shape to change. This is where the differences between the two tensors reside. Information on changes in volumes caused by tidal forces is precisely captured by the Ricci curvature, also known as the trace component of the Riemann tensor. As a result, the Weyl tensor can be thought of as the Riemann tensor's traceless component. The Riemann tensor's symmetries are shared by this structure, but it also has to be trace-free. The EGB theory thus appears as a plausible and suitable option to study the importance of the GB invariant in respect to the Lanczos tensor. This article investigates the wormhole space-time geometry in EGB gravity powered by Casimir wormhole. The Lorentzian wormhole solutions were studied in the Ndimensional EGB gravity [41]. The solution of these wormholes greatly depends on the space-time dimension and the GB parameter. These two parameters play an essential role as wormhole throat radius is also constrained by them. Moreover, they studied the dynamics of these parameters for weak energy conditions (WEC) in the neighbourhood of wormhole throats only [41]. Also, authors explored the Lorentzian wormhole solutions of third-order Lovelock gravity [42]. They explored that wormhole throat has a lower bound depending upon the lovelock coefficient, space-time dimension, and function shape. The reference [43] discussed the dynamical wormholes in lovelock gravity. They constructed the shape function by constraining the Ricci scalar and three scale factors. Maeda at el. [44] evaluated static and symmetric wormholes in EGB gravity for D ≥ 5. In a Similar work, authors discussed the WEC of traversable wormholes in 5D EGB gravity [45]. They have constructed shape functions by considering specific EOS and traceless energy moment ten-sors (EMT). In the background of N-dimensional EGB gravity, authors studied the Gaussian and Lorentzian distributed noncommutative geometry of wormhole [46]. 
They studied the dynamics of the GB coupling parameter for the fifth and sixth dimensions. Recently, Herrera [47] introduced the complexity factor for anisotropic self-gravitating systems and examined the conditions under which it vanishes. He explored a general mass function and used the orthogonal splitting of the curvature tensor to obtain the Tolman mass and the structure scalars. In 2009, Herrera et al. [48] studied the primary outcomes of gravitational collapse within the Israel-Stewart description of viscous dissipation with bulk and shear viscosity. Herrera et al. [49] discussed solutions for relativistic self-gravitating collapse in dissipative situations in the post-quasi-static approximation. Moreover, extending the work to the dissipative case in the form of free-streaming radiation and heat flow, Herrera and Santos [50] studied gravitational collapse in the framework of the Misner and Sharp approach. In addition, Herrera et al. [51] presented their findings for a self-gravitating collapsing source with anisotropic matter distribution, obtaining the condition for vanishing spatial gradients of the energy density. In 2019, the complexity factor for the self-gravitating system in modified GB gravity was also calculated [52]. The complexity factor for a class of compact stars in f (R, T ) modified gravity has been explored in [53]. Moreover, Abbas and Nazar [54] studied the complexity factor in f (R) modified gravity for an anisotropic system with non-minimal coupling. The order of the present paper is as follows: in Sect. 2, we present the basic formalism of higher-dimensional EGB gravity and develop the field equations for static wormholes. In the same section, we discuss the wormhole solution through the NEC. The Casimir effect and the solution for Casimir wormholes are discussed in Sect. 3. In Sect. 4 we discuss GUP-corrected Casimir wormholes in EGB gravity. The active gravitational mass and the wormhole geometry are discussed in Sects. 5 and 6, respectively. In Sects. 7 and 8, we calculate the equilibrium forces and the complexity factor of Casimir wormholes and GUP-corrected Casimir wormholes, respectively. We summarize our results in the last Sect. 9. Basic formalism of field equations of EGB gravity The action in the background of EGB gravity is expressed as in [45], where R is the Ricci scalar, D defines the dimension of spacetime and μ2 is the GB coefficient. The GB invariant is denoted by G; its explicit expression in terms of the curvature tensors is recalled below. By varying the action with respect to the metric tensor, the field equations can be written down; here G αβ represents the Einstein tensor, H αβ the GB tensor and T αβ the EMT, and the explicit expression for H αβ follows from the GB invariant. We have considered 8πG_D = 1, where G_D is the D-dimensional gravitational constant. The wormhole geometry for the (D − 2)-sphere is expressed as in [45], where dΩ²_{D−2} denotes the metric on the surface of the (D − 2)-sphere. The gravitational redshift function is denoted by Φ(r) and is known simply as the redshift function. The gravitational redshift is defined through the frequency a photon acquires when dragged out of a gravitational potential. Dragging photons out of a gravitational potential requires energy, which is proportional to the frequency. This change in energy, and hence in frequency, is encoded in the redshift function. It must be noted that a photon cannot have enough energy to escape if a wormhole has an event horizon. To avoid a horizon, the redshift function should be finite everywhere in the domain.
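The displayed equations referred to in this passage did not survive extraction. For orientation only, the standard expressions that the surrounding text describes, and which the GB invariant and the metric of Eq. (5) presumably take in the paper's notation, are reconstructed below; this is a sketch based on the textual description, not a verbatim copy of the original equations:

\mathcal{G} = R^{2} - 4R_{\alpha\beta}R^{\alpha\beta} + R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta},

ds^{2} = -e^{2\Phi(r)}\,dt^{2} + \left(1 - \frac{b(r)}{r}\right)^{-1} dr^{2} + r^{2}\, d\Omega^{2}_{D-2},

where \Phi(r) is the redshift function, b(r) is the shape function, and d\Omega^{2}_{D-2} is the line element on the unit (D-2)-sphere.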
The shape function is denoted by b(r), and it determines the shape of the wormhole. The shape of the wormhole can be seen through the embedding diagram [2]. The radial coordinate r is non-monotonic and ranges over r_0 ≤ r < +∞, where r_0 is the throat of the wormhole; the throat connects the two mouths of the wormhole. At the wormhole throat the shape function satisfies b(r_0) = r_0. The flaring-out condition says that b'(r_0) < 1, or equivalently (b(r) − r b'(r))/b(r)^2 > 0, where the prime denotes a derivative with respect to r. The asymptotic flatness condition is defined as b(r)/r → 0 as r → ∞. The adopted EMT is expressed below; here ρ(r) is the energy density, p_r(r) signifies the radial pressure and p_t(r) the transverse pressure. By using Eq. (3), the field equations in EGB gravity are expressed as follows; here, the prime signifies a derivative with respect to r. Also, we define μ = (D − 3)(D − 4)μ2 for convenience. We have three field equations, Eqs. (7)-(9), and five unknowns ρ(r), p_r(r), p_t(r), Φ(r) and b(r). Therefore, the number of unknowns exceeds the number of equations. To discuss the wormhole solutions, one may adopt various techniques; we outline below various plans of action for finding solutions of the field equations. Wormhole solutions In GR, it is well known that the energy conditions are violated for static, spherically symmetric wormholes in four-dimensional space-time [55]. The violation of these conditions follows from the flaring-out condition. Moreover, there is a possibility that the violation of the energy conditions can be avoided, or the conditions satisfied, in the neighbourhood of wormhole throats in higher-dimensional theories [42]. The NEC can be defined as below, where K is a null vector. For anisotropic fluid content, we can express the NEC as follows. From Eqs. (7)-(9), we have the corresponding NEC relationships. It can be verified from these equations that, for μ = 0 and Φ(r) = 0, the resulting expressions do not satisfy the NEC, due to the flaring-out condition. At the wormhole throat r = r_0 and for D = 5, Eq. (13) takes a simpler form. From the flaring-out condition, we know that b'(r_0) < 1; therefore μ plays a vital role in the violation of the NEC. A particular kind of exotic matter that satisfies the flaring-out criterion and defies the weak energy condition is necessary to maintain the integrity of the wormhole structure [2]. The weak energy conditions are broken by this type of matter, known as exotic matter [24], at least close to the wormhole throat. While violations of these requirements may appear unnatural in terms of classical relativity, quantum field theory argues that they occur naturally as a result of dynamic fluctuations in the topology of spacetime over time. The quantum field is perturbed by the presence of uncharged parallel plates, leading to a negative energy density between them. It is possible to consider this negative energy density as a source for workable traversable wormholes. The introduction of the idea of a minimal length scale, of the order of the Planck length, is another important advancement in present-day quantum mechanics. The precision with which short distances in spacetime may be determined is constrained by this minimal length scale. In models of quantum gravity, where the degree of positional uncertainty is constrained, the presence of a minimal length naturally arises. Because of this idea of a minimal length, the position-momentum uncertainty relation must be changed.
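Returning to the geometric requirements listed at the beginning of this section (throat, flaring-out, asymptotic flatness) and to the embedding diagram mentioned there, the short numerical sketch below checks them for a toy shape function b(r) = r_0^2/r, which is not one of the paper's solutions and is used purely for illustration; the embedding relation dz/dr = [r/b(r) − 1]^(-1/2) assumed here is the standard Morris-Thorne one.

import numpy as np
from scipy.integrate import quad

r0 = 2.0                       # throat radius used throughout the text
b = lambda r: r0**2 / r        # toy shape function, for illustration only
db = lambda r: -r0**2 / r**2   # its derivative b'(r)

# Throat condition b(r0) = r0, flaring-out b'(r0) < 1, asymptotic flatness b(r)/r -> 0
print("throat:", np.isclose(b(r0), r0))
print("flaring-out:", db(r0) < 1.0)
print("flatness:", b(1e6) / 1e6 < 1e-6)

# Embedding profile z(r), integrated outward from just above the throat
def z(r):
    val, _ = quad(lambda x: 1.0 / np.sqrt(x / b(x) - 1.0), r0 * (1 + 1e-8), r)
    return val

print("z(3) approximately", round(z(3.0), 3))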
The negative Casimir energy density is redefined by using the GUP. It is significant to note that the precise construction of maximally localised quantum states is a precondition for this redefined Casimir energy density. Therefore, while modelling traversable wormholes, it becomes interesting to take the consequences of the GUP into account. As a result, traversable wormholes can be stabilised via the Casimir effect, which produces a negative energy density. Therefore, we can study the possibility of utilising the Casimir effect to accomplish such goals by taking into account quantum phenomena inside a classical framework. In subsequent sections, we will study the effect of the GB coupling parameter on the wormhole conditions and energy conditions by considering the cases of Casimir and GUP-corrected Casimir energy densities. Casimir wormholes in EGB gravity The effects of quantum mechanics within the context of GR have been extensively studied. A universally acknowledged theory of quantum gravity, however, is still elusive despite these attempts. As a result, we are now focusing on investigating how the EGB theory affects this situation. The results of the EGB theory are being investigated in order to learn more about how gravity behaves in quantum mechanical contexts. Dutch physicist Hendrik Casimir suggested the phenomenon of the Casimir effect in 1948. A force may exist between uncharged, parallel, conducting plates [25]. This is because the plates disturb the vacuum of the electromagnetic field. It is linked with the zero-point energy of the quantum electrodynamics vacuum distorted by the plates, as suggested by Niels Bohr. Later on, it was confirmed by experimental research [56]. The theory of Casimir energy is based on the quantum effect that the vacuum state of quantum electrodynamics is what causes the parallel, uncharged plates to attract. Moreover, the Casimir energy represents the only artificial source of exotic matter generated in laboratories. The Casimir energy depends on the shape of the boundaries [26]. Generally, it has been noticed that exotic matter does not satisfy the energy conditions, especially the NEC. In particular, it seems logical to assume that the Casimir effect in traversable wormholes does not satisfy the NEC, as it involves exotic matter. Garattini has suggested the idea of traversable wormholes built from an equation of state coming out of the Casimir energy. These wormholes are called Casimir wormholes [26]. According to the Casimir effect, the attractive force between the two plates arises from the renormalization of a negative energy source, where A denotes the surface area of the plates and a represents the separation distance between the plates. It has been noticed that if we move the two plates closer together, the Casimir energy is lowered. The energy density is expressed below. We can obtain the pressure from the renormalization of the negative energy source expressed in Eq. (15). The expressions for ρ(a) and p(a) defined in Eqs. (16) and (17) lead to the EOS p = wρ with w = 3. The Casimir force F is expressed as the pressure multiplied by the surface area. The above equation shows that the force is attractive, as it contains a negative sign. It is reported that, for the existence of traversable wormholes, the NEC is violated, since exotic matter is present.
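Since the displayed expressions (15)-(18) did not survive extraction, a short numerical sketch of the standard parallel-plate Casimir quantities described in this paragraph may help fix conventions. The formulas used below are the textbook ones (E = −π²ħcA/720a³, ρ = −π²ħc/720a⁴, p = −π²ħc/240a⁴, F = pA, so that p = 3ρ and w = 3), written in SI units and intended only for illustration, not as a transcription of the paper's equations.

import numpy as np

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s

def casimir_plate_quantities(a, A):
    """Parallel-plate Casimir energy, energy density, pressure and force (standard formulas)."""
    E   = -np.pi**2 * hbar * c * A / (720.0 * a**3)   # total energy between the plates
    rho = -np.pi**2 * hbar * c / (720.0 * a**4)       # energy density
    p   = -np.pi**2 * hbar * c / (240.0 * a**4)       # pressure; note p = 3*rho, i.e. w = 3
    F   = p * A                                       # force; the negative sign means attraction
    return E, rho, p, F

# Example: two 1 cm^2 plates separated by 1 micrometre
E, rho, p, F = casimir_plate_quantities(a=1e-6, A=1e-4)
print(f"F is about {F:.3e} N (attractive)")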
It has been observed in [27] that we can have a stable wormhole by allowing the wormhole to collapse slowly; moreover, it is also feasible to analyze the stability of a traversable wormhole if its throat is large compared with the Planck scale. In order to find b(r) from the Casimir energy density in EGB gravity, we compare Eqs. (7) and (16), with the plate separation a replaced by the radial coordinate r. This yields an ODE of the following form. Now, confining our analysis to the case of 5D EGB gravity, we find the following b(r), where g_1 is a constant of integration fixed by b(r_0) = r_0. The final form of the shape function is given below; it contains two branches corresponding to the sign ±, i.e. b(r)_+ and b(r)_-. The asymptotic flatness condition is also satisfied and is plotted in Fig. 2a. Figure 2b shows that the wormhole throat is located at r_0 = 2, where b(r) − r cuts the r-axis. To understand the dynamics of the GB coupling parameter μ, we have plotted the wormhole conditions as contour plots in Fig. 3. It can be seen from Fig. 3a that, with the increase in μ, b(r) increases near the wormhole throat. This implies that μ has a direct relationship with the shape function near the wormhole throat. Moreover, Fig. 3b shows that b'(r) < 1 as μ increases. Henceforth, we restrict to b(r)_+; in this case the 5D EGB wormhole metric reads as given below. Keeping the redshift function constant, the expressions for p_r and p_t are as follows, where b* = 270r(r^4 + 4μ(μ + r_0^2)). Using the asymptotically flat shape function, the radial and tangential EOS parameters are defined as w_r(r) = p_r/ρ and w_t(r) = p_t/ρ, respectively. The radial and tangential EOS expressions in EGB gravity for D = 5 are expressed as follows. The plots of w_r and w_t against r are displayed in Fig. 4. It can be seen from Fig. 4a that μ is in a direct relationship with w_r at the wormhole throat r_0 = 2. The dynamics of w_t is shown in Fig. 4b. Before the wormhole throat, w_t increases, while away from the wormhole throat it decreases in a certain domain of μ. We found the regions in which the NEC is satisfied: ρ + p_r ≥ 0 for {μ < −3}, and ρ + p_t ≥ 0 for {μ < −90} and {0 < μ < 2}. In Fig. 5, contour plots of the NEC are presented, following the above validity ranges. GUP-corrected Casimir wormholes in EGB gravity The concept of the existence of a minimal length scale leads the way to modifying the uncertainty principle. The GUP issues relating to momentum and position are explored in [57,58]. We intend to find the Casimir effect corrected by the GUP. Because the position-momentum relation is modified, position and momentum are no longer conjugate variables in the usual sense, and the physical position can no longer be described by ordinary position eigenstates; instead, positions are discussed by projecting states onto maximally localized states, the so-called quasi-position representation [57]. There are two ways to construct maximally localized states: one is known as KMM [57] and the other as DGS [58]. In this article, we have used both methods to determine the impact on the Casimir wormhole.
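As a practical aside, the EOS and NEC combinations used in this section (w_r = p_r/ρ, w_t = p_t/ρ, ρ + p_r, ρ + p_t) can be tabulated mechanically once ρ, p_r and p_t are known on a radial grid. The sketch below is generic and uses made-up placeholder profiles, not the paper's actual D = 5 solutions.

import numpy as np

r = np.linspace(2.0, 10.0, 200)            # radial grid starting at the throat r0 = 2

# Placeholder profiles standing in for the paper's rho(r), p_r(r), p_t(r)
rho = -1.0 / r**4                          # Casimir-like negative energy density
p_r = 3.0 * rho                            # w = 3 equation of state, as in the plate case
p_t = -0.5 * rho                           # arbitrary transverse pressure, for illustration only

w_r, w_t = p_r / rho, p_t / rho            # radial and tangential EOS parameters
nec_r, nec_t = rho + p_r, rho + p_t        # NEC combinations

print("w_r at throat:", w_r[0], " radial NEC satisfied anywhere:", bool(np.any(nec_r >= 0)))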
In N-dimensional minimal length corrected commutation relation is defined as in below equation [59] [ where f ( p) and g( p) show generic functions, one can find out by using rotational and translational invariance of the commutation relation. We can introduce various generic functions which express different models and confirms maximally localized states Now we will discuss two different methods of GUP introduced by Kempf, Mangano, and Mann (KMM) [57] and Detournay, Gabriel and Spindel (DGS) [58]. The two interesting ways are KMM which employs squeezed state, and DGS, which works on variational principle. These models depend upon the number of specific models and dimensions. The model KMM depends on the choice of generic function f ( p 2 ) and g( p 2 ) [60]. The maximally localized states for KMM construction requires where α is MU parameter and γ is defined as γ = 1 + √ 1 + N /2 and N is known as number of spatial dimensions. The maximally localized states for DGS construction requires The idea of GUP and minimal length to get the finite energy between the uncharged plates was introduced by Frassino and Panella. Both find out the corrections to Hamiltonian and the Casimir energy because of minimal length. The Casimir energy of two different models of fabrication of maximally localized states are expressed as follows [59]. where ζ 1 = π 2 28 + 3 where i = 1, 2 which depicts two models (a) KMM and (b) DGS. According to the model introduced, the energy densities and pressure becomes as follows, In order to get GUP corrected Casimir wormholes in EGB gravity, we can replace plate separation distance a by radial coordinate r . By using Eqs. (7) and (39), we will get following equation. The GUP corrected shape function is expressed as follows. here we present our results only for D = 5, and g 2 is the constant of integration. The constant of integration is evaluated by b(r 0 ) = r 0 . g 2 = −π 2 αζ i + 2160r 4 0 + 2160μr 2 0 + 2π 2 r 2 0 log(r 0 ) Therefore, the final form of shape function is expressed as follows. The GUP corrected shape function has two branches according to the sign of ±. The shape function b(r ) + is asymptotically falt while b(r ) − does not satisfy the asymptotically flatness condition. We have considered shape function b(r ) + for further analysis. We have studied wormhole conditions from Figs. 6, 7, 8 and 9. The solid lines in Fig. 6 show dynamics of b(r ) when α = 2 while dotted lines show the evolution of b(r ) when μ = 2. It has been observed from Fig. 6a, b that dynamics of b(r ) grow positively as μ increases. However, the value of b(r ) decreases with an increase in the value of α parameter. Figure 7a, b follow the flaring-out condition, which says b (r ) < 1 for r > r 0 . The Fig. 8a, b confirm the asymptotic flatness condition is satisfied for both as r → ∞. In Fig. 9a, b, we have studied the wormhole throat radii for different values of μ and α. These plots show all conditions of the shape function allow us to study the GUP-corrected wormhole in EGB gravity. By using an asymptotically flat shape function, expressions of w r and w t are as follows. where We have studied the dynamics of radial and tangential EOS parameters from Figs. 10 and 11. The Fig. 10a is plotted in certain domain of μ when α is held constant while Fig. 10b is plotted in certain domain of α when μ is held constant for ζ 1 . It can be seen from Fig. 10a that μ is directly proportional Fig. 
10 The trajectory a shows dynamics of w r when α = 2 while the trajectory b shows dynamics of w r when μ = 1 Fig. 11 The trajectory a shows dynamics of w t when α = 2 while the trajectory b shows dynamics of w t when μ = 1 to w r keeping α constant. And α is inversely proportional to w r keeping μ constant. Moreover, we have observed that μ and w r are in an inverse relationship at wormhole throat. At r = r 0 ⇒ μ = − 2160r 6 0 π 2 αζ 1 w r +π 2 r 2 0 w r +2160r 4 0 . Therefore at wormhole throat, with the increase in the parameter μ, w r decreases. The dynamics of the tangential EOS parameter is plotted in Fig. 11. It can be seen from Fig. 11a that with increase in μ tangential EOS parameter decreases for ζ 1 . Figure 11b shows that with the increase in α tangential EOS parameter increases in terms of negative values, while when α decreases w t also decreases. Similar, behaviour of plots have been observed for ζ 2 DGS model therefore we haved displayed plots only for KMM model. Some energy conditions are not satisfied due to the presence of exotic matter in the wormholes. We have checked NEC in GUP corrected wormholes in Figs. 12 and 13 for ζ 1 . Firstly, we calculated valid regions through regional plots and then studied the dynamics of NEC in contour plots. The valid region for ρ+ p r ≥ 0 is {μ ≤ −16 and −100 < α < 100} for ζ 1 . For ρ + p t ≥ 0 we could not found any region in the domain of 2 ≤ r ≤ 10 for ζ 1 . We could not find any region closer to the wormhole throat, which is valid for NEC to be satisfied. We have plotted Fig. 12 by keeping α = 2 in a certain domain −10 ≤ μ ≤ 10. For the fixed value of μ = 1, we have plotted Fig. 13 in a certain domain of α which is −10 ≤ α ≤ 10. Now, the valid region for ρ + p r ≥ 0 is {μ ≤ −12 and − 100 ≤ α ≤ 100} for ζ 2 . For ρ + p t ≥ 0 we found {μ ≤ −13 and − 100 ≤ α ≤ 100} for ζ 2 . Similar, behaviour of plots have been observed for ζ 2 DGS model therefore we haved displayed plots only for KMM model. Active gravitational mass In this section, we will explore the active gravitational mass of Casimir and GUP-corrected Casimir wormholes. This mass exists inside the wormhole's region from the wormhole's throat r 0 to the boundary of the radius r . The active gravitational mass is denoted by M A and calculated by the expression below Fig. 12 The trajectories a, b show dynamics of NEC against r for ζ 1 when α is fixed Fig. 13 The trajectories a, b show dynamics of NEC against r for ζ 1 when μ is fixed The expression for M A of Casimir wormhole is expressed below. The expression for M A of GUP-corrected Casimir wormhole for ζ 1 is written below. The expression for M A of GUP-corrected Casimir wormhole for ζ 2 is calculated as The M A for Casimir wormhole and GUP-corrected Casimir wormhole is plotted in Figs. 14 and 15 respectively. It can be observed from Figs. 14 and 15 that M A decays with increase in r . The negative active gravitational mass is a sign of the existence of exotic matter. It is well known Fig. 14 The active gravitational mass of Casimir wormhole. Herein we set r 0 = 2 that such matter violates energy conditions, and it can be experienced by Sects. 3 and 6. Presently, the Casimir effect is the real representative of such exotic matter. Due to the presence of the Casimir effect, the active gravitational mass in the area of space is measured to be negative. Wormhole geometry This section focuses on understanding wormhole dynamics geometrically. By taking into account the wormhole spacetime represented by Eq. 
(5), we can envisage or see them. By fixing t = constant and θ = π/2, we can obtain an equatorial slice for such wormhole space-time. Consequently, metric becomes We will now create a 3D space and an embedded Euclidean 2D surface using this wormhole space-time. So, we'll present (z, r, φ) cylindrical coordinates. Following is an expression for the embedding space-time. In order to define z = z(r ), we can take advantage of the embedded surface's axial symmetry. The surface's line element is expressed as By comparing Eqs. (51) and (53), we get Here, we will use the shape function developed from the Casimir wormhole and GUP-corrected Casimir wormhole expressed in Eqs. (22) and (44) for ζ 1 and ζ 2 . The embedding diagram for upper (z > 0) and lower (z < 0) universe for the Casimir wormhole is displayed in Fig. 16. At the same time, the embedding diagram for both models of GUP-corrected wormholes have been evaluated and we have obtained similar results as presented in Fig. 16. Therefore, we have displayed only one figure. Figure 16 shows each wormhole has a radius of r = r 0 , implying an embedded surface is vertical. It can be observed that far away from the throat space showed asymptotically flatness behavior, i.e. dz dr → 0 as r → ∞. Equilibrium condition In this section, we will find the equilibrium configuration of wormhole solutions based on the shape function developed from the Casimir wormhole and GUP-corrected Casimir wormhole in EGB gravity. For this purpose, we will use the generalized Tolman Oppenheimer Volkoff equation and expressed as [46] − dp r dr The above equation tells us about the equilibrium condition for the wormhole geometries based on the following forces. where F g , F h , F h are known as a gravitational, hydrostatic, and anisotropic force, respectively. For equilibrium condition, F g +F h +F a = 0 must hold. In our case the equilibrium condition reduces to F h + F a = 0. Using the shape function developed from the Casimir energy density, expressed in Eq. (22), the forces F h and F a are calculated and written as follows. (60) Figure 17 shows the dynamics of hydrostatic and anis otropic forces for different values of μ. We can also observe from the expressions of hydrostatic and anisotropic forces they do not completely quit each other. Therefore, no equilibrium configuration has been examined near wormhole throat but forces are in equilibrium for r > 6. Using the shape function developed from the GUP corrected Casimir energy wormholes, expressed in Eq. (44), the forces F h and F a are calculated and expressed as follows. √ 10r 2 r 4 0 + 7π 2 √ 10r 2 r 2 0 45π 4 αr 2 0 Fig. 17 The dynamics of hydrostatic and anisotropic forces for different values of μ for Casimir wormhole where Fig. 18 shows dynamics of F h and F a when α = 2 is fixed and μ is varying for KMM. The plot (b) in Fig. 18 shows dynamics of F h and F a when μ = 1 is fixed and α is varying for KMM. The plot (a) in Fig. 19 shows dynamics of F h and F a when α = 2 is fixed and μ is varying for DGS. The plot (b) in Fig. 19 shows dynamics of F h and F a when μ = 1 is fixed and α is varying for DGS. It can be seen from these figures that equilibrium configuration has been experienced for D = 5 as they do not cancel out each other completely. Fig. 18 The trajectories a, b show dynamics of F h and F a for ζ 1 Fig. 
19 The trajectories a, b show dynamics of F h and F a for ζ 2 Complexity factor in Casimir sormholes andd GUP corrected Casimir wormholes In 2018, Herrera introduced the concept of complexity factor in the background of GR, for spherically symmetric and static self-gravitating systems [47]. Mainly, the idea of the complexity factor is based on simple or minimal complicated systems presenting homogeneous energy density and isotopic pressure. This type of fluid distribution shows zero complexity factor. Moreover, with anisotropic pressure and inhomogeneous energy density, zero complexity factor has been calculated in self-gravitating systems, as long as the effects of these two factors on the complexity factor cancel each other. The traced free scalar complexity factor Y T F is expressed as follows. where = p r − p t , which leads us to the following complexity factor for D = 5 and constant redshift function, Now, using the shape function developed from Casimir wormhole in the above expression, we have the following results It can be seen from Eq. (65), at the wormhole throat, the contribution of the first term, which comes from the integration of the derivative of energy density, becomes zero. The dynamics of complexity factor of casimir wormhole versus Fig. 20. We have observed that r → ∞, or away from wormhole throat Y T F → 0. According to [47], the minimal complexity factor shows homogenous energy density and isotropic pressure. Moreover, the zero complexity factor predicts inhomogeneous energy density and anisotropic pressure as long as these two effects cancel each other on the complexity factor. Therefore, near the wormhole throat, the complexity factor is monotonically increasing, and for higher values of radial coordinate, Y T F approaches zero. It has also been observed that for μ = 10, the energy density is homogenous at very high values of r , and pressure shows isotropic behaviour after r = 10. Therefore, in the case of a wormhole, we experience complexity factor approaches to zero for higher values of the radial coordinate. Moreover, in the dynamics of complexity factor, pressure isotropy plays a more vital role compared to the homogeneity of energy density. Using GUP corrected Casimir shape function in the definition of complexity factor, we have the following equation −2π 2 α Mr 2 + 3π 2 α Mr 2 0 + 4320r 2 r 4 0 + 4320μr 2 r 2 0 + π 2 r 2 r 2 0 − 4π 2 r 2 r 2 0 log It can be seen from Eq. (66), the last term disappears at r = r 0 , which is due to the contribution of integration from Eq. (64). We have plotted Fig. 21a for different values of μ when α is fixed while Fig. 21b shows dynamics of Y T F for different values of α when μ is fixed. It can be seen that with increase in μ and α, Y T F decreases. We have observed that for r → ∞, implies Y T F → 0. Therefore, for GUPcorrected Casimir wormholes, the complexity factor for both models approaches to zero. The plots of the complexity factor have similar behavior for both models, so we have plotted for KMM model only. Deviation of EGB gravity from general relativity The expressions of shape function for Casimir wormhole and GUP-corrected Casimir wormhole in EGB gravity are expressed in Eqs. (22) and (44). The expression of shape function for Casimir effect in GR [26] is written as where r 1 = π 3 l 2 p 90 . The Eq.'s (22) and (44) possess two independent maximally symmetric solutions. Both solutions can have different asymptotic behaviour, but this is not the case in GR. 
Deviation of EGB gravity from general relativity

The expressions of the shape function for the Casimir wormhole and the GUP-corrected Casimir wormhole in EGB gravity are given in Eqs. (22) and (44). The expression of the shape function for the Casimir effect in GR [26] is written in terms of the constant r_1 = π³ l_p² / 90. Equations (22) and (44) possess two independent maximally symmetric solutions, and the two solutions can have different asymptotic behaviour; this is not the case in GR. Mathematically, the shape function from GR is a function of r only, while in EGB gravity we have the additional parameter μ; in other words, we have an extra degree of freedom with which to understand the dynamics of the shape function in EGB gravity. Similarly, in the case of the Casimir wormhole with GUP correction, EGB gravity has twice as many degrees of freedom as GR, namely μ and α, whereas GR has only one, α. Here, we present a comparison of our results with those of GR. To measure the contribution of EGB gravity relative to GR, we have plotted Figs. 22 and 23, picking the asymptotically flat shape function. These plots compare the shape functions from the GR background [26] and the EGB gravity framework (the results of the present manuscript). The shape function in the EGB background allows us to understand the dynamics of the wormhole geometry over a wider range than in GR. To understand the influence of the GB coupling parameter on the wormhole geometry, we have also studied the wormhole conditions in terms of contour plots, treating the GB coupling parameter as an independent variable in these plots. The equation of state of dark energy can characterize cosmic inflation and the accelerated expansion of the universe. Figure 24 shows the dynamics of the radial and transverse EOS parameters. We can observe that w_r is an increasing function of r while w_t is a decreasing function of r for the Casimir wormhole, for certain values of μ, in both EGB gravity and GR. The dynamics of w_r lie in the phantom phase, while w_t lies in the non-phantom phase. We have also studied the EOS in terms of contour plots to explore the dynamics of the GB coupling parameter. We have plotted the embedding diagram of the Casimir wormhole for the shape function in GR and in EGB gravity, as shown in Fig. 25. We can see that the shape function expands more in the case of EGB gravity for larger values of the theory's coupling parameter (Fig. 26). The dynamics of the complexity factors are studied in comparison with the wormhole solution from GR [26]. It can be seen that the dynamics of both complexity factors are monotonically increasing, and the range of the modulus of Y_TF is larger in the case of EGB gravity than for GR.

Summary

In this manuscript, we have studied asymptotically flat, static, traversable wormhole geometries in the background of EGB gravity. We have worked with EGB gravity because it has been extensively studied in the scientific literature on cosmological phenomena such as wormholes, black holes, and stellar structures. The Casimir effect arises between two parallel, uncharged, closely spaced plates placed in a vacuum. When the two plates move closer to each other, only waves of shorter wavelengths fit between the plates, and the total energy between the plates becomes less than elsewhere in the vacuum; because of this, the plates attract each other. The Casimir effect was first predicted theoretically and later confirmed experimentally at the Philips laboratories. A wormhole connects two points of the same cosmos, and its solution is based on Einstein's field equations. Since the wormhole contains exotic matter, it violates the NEC; therefore, the quantum nature of the Casimir effect seems a suitable candidate for the modeling of wormholes in EGB gravity. In this paper, we have studied the traversable wormhole solutions in the framework of EGB gravity by exploring the dynamics of the quantum nature of the Casimir effect.
By comparing the Casimir energy density with the field equations of EGB gravity, integrating the resulting equation yields a shape function. The obtained shape function has two branches, with positive and negative signs. We have selected the positive branch for further analysis, as it satisfies the asymptotic flatness condition. The flaring-out and throat conditions are also satisfied and are displayed graphically. To understand the effect of the GB coupling parameter on the shape function, we have plotted contour plots over a specific domain of the GB parameter. It has been seen that, with an increase in the GB parameter, the obtained b(r) of the wormhole increases. The slope of b(r) is less than one for both μ < 0 and μ > 0. We have also studied the evolution of the radial and tangential EOS parameters over specific domains of μ. With an increase in the GB parameter, the value of w_r grows positively away from the wormhole throat, while near r_0 we find decreasing behavior; in the case of w_t, we observe the opposite behavior. We have studied the behavior of the NEC for positive and negative values of μ. It can be observed from the valid regions and the plots of the NEC that, near the wormhole throat (r_0 = 2), the NEC is not satisfied. The generalized uncertainty principle has been studied because of the idea of a minimal length in the background of quantum gravity theories. In the remaining part of this paper, we have analyzed the influence of the GB coupling parameter and the MU parameter on the dynamics of the wormhole. There are several adequate ways to generate the generic functions needed to develop maximally localized quasi-quantum states; here, we have considered the two models KMM and DGS to develop the quasi-quantum states. In addition, we have modeled Casimir wormhole geometries in the framework of EGB gravity. The behavior of both models with respect to the GB coupling parameter and the MU parameter on the wormhole geometries is essentially the same. The b(r) of the wormhole increases with an increase in the GB coupling parameter, while for the MU parameter we observe the opposite result. All wormhole conditions are satisfied for increasing values of the GB parameter and decreasing values of the MU parameter. The dynamics of w_r and w_t have also been studied for both models by keeping one parameter fixed and varying the other. The plots of the NEC for the GUP-corrected wormholes are also displayed, and the NEC is violated near the wormhole throat. It is evident that the GB coupling parameter has a prominent effect on the dynamics, in contrast with what has been discussed in Ref. [29]. We have explored the active gravitational mass of the Casimir and GUP-corrected Casimir wormholes; this mass exists inside the wormhole's region, from the wormhole's throat r_0 to the boundary radius r. We then discussed the wormhole geometry by providing mathematical modeling of the obtained wormholes in 3D and 2D space-time, with the Casimir energy density as the source. We have seen that the developed shape function satisfies all wormhole conditions. We have plotted embedding diagrams to display the wormhole shape for t = constant and θ = π/2 for the Casimir wormholes and the GUP-corrected Casimir wormholes; in addition, the two universes are presented in 3D and 2D space-time, illustrating the asymptotically flat wormhole. We have also probed the equilibrium forces for the Casimir wormhole and the GUP-corrected Casimir wormhole. The hydrostatic and anisotropic forces are calculated in each case; the plots show that they do not cancel each other out completely.
The complexity factors of the Casimir wormholes and the GUP-corrected Casimir wormholes have also been calculated, and a monotonically increasing behaviour of the complexity factor has been observed. In the present study, we have explored wormhole geometries in order to study the physical behavior of the theory's coupling parameter and the MU parameter in EGB gravity. A number of theories of gravity have been tested using useful wormhole solutions, which have been thoroughly researched in a variety of settings [61,62]. Researchers have also investigated wormholes in modified theories of gravity, such as f(R) gravity [11,14], f(τ) gravity (where τ denotes torsion) [15,16,63-65], f(R,T) gravity (where R represents the Ricci scalar and T is the trace of the energy-momentum tensor) [21], Brans-Dicke (BD) theory [18,22,66,67], scalar-tensor teleparallel gravity [22], and Einstein-Gauss-Bonnet gravity [45]. These investigations aim to explore the dynamics of wormhole properties in alternative theories of gravity. Agnese and Camera [66] investigated static spherically symmetric wormholes in the context of BD theory, which depend on the post-Newtonian parameter γ > 1. In the context of BD theory, traversable wormhole solutions can be found for both positive (ω > 0) and negative (ω < 0) values of the parameter. The scalar field acts as exotic matter in scalar-tensor theories [18,67]. Ebrahim and Riazi [20] introduced two Lorentzian wormhole solutions in BD theory by using a traceless energy-momentum tensor; these solutions were developed taking into account both closed and open universe theories. According to the literature now available, the topic of Casimir wormhole dynamics has been thoroughly investigated both within the framework of GR and in numerous modified theories of gravity. Garattini [26] put forward a model for a static traversable wormhole, using the negative energy density of the Casimir effect and analysing the outcomes within the context of GR. Building on a similar strategy, the effect of the GUP on the geometry of Casimir wormholes has been discussed [28]. Garattini [31] conducted additional research into Yukawa Casimir wormholes under the assumption of zero tidal force. Additionally, Sokoliuk [34] studied the possibility of Casimir wormholes in f(R) modified gravity in the absence of tidal forces, and three different Casimir wormhole systems have been studied within the framework of f(Q) modified theory by Hassan [68]. In this work, we analyse Casimir wormhole solutions, concentrating on the five-dimensional (D = 5) case, utilising the higher-dimensional Einstein-Gauss-Bonnet (EGB) theory of gravity. Studying the wormhole solution in EGB gravity serves a specific purpose: it highlights the theory's remarkable property of admitting up to two different maximally symmetric solutions, even with different signs of the curvature scale. These solutions display different asymptotic behaviours, which are discussed in Eqs. (22) and (44) of the manuscript. The curvature scale of the maximally symmetric solutions in EGB gravity is not uniquely determined, in contrast to general relativity (GR). Additionally, when investigating EGB black-hole solutions, one encounters two distinct branches with different asymptotic behaviours [69]. It is important to remember that, according to Zwiebach [70], EGB gravity can be considered the low-energy limit of some string theories.
Notably, Casimir wormholes and GUP-corrected Casimir wormholes have not previously been discussed in higher-dimensional gravity theory, and the existing literature has not yet examined how the Casimir wormhole's active gravitational mass and complexity factor are measured.

Data availability statement: This manuscript has no associated data, or the data will not be deposited. [Authors' comment: Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.]
Traffic-Light Control at Urban Intersections Using Expected Waiting-Time Information

We consider an optimal traffic-light control framework for urban traffic intersections to alleviate congestion phenomena. We analyze a scenario in which we provide drivers with information about the waiting time at the intersections. We model the drivers' information-based lane-changing behavior as the solution of a convex optimization problem. We compute the optimal traffic-light control mechanism as the solution to a bi-level optimization problem. We provide a complete analysis in terms of (i) the existence of a solution; (ii) an iterative algorithm to compute it; and (iii) sufficient conditions for the solution's uniqueness and the algorithm's convergence. Early simulation results show the proposed control scheme's effectiveness compared with an optimal control algorithm in the absence of waiting-time information.

I. INTRODUCTION

Transportation is the energy end-use sector with the fastest growth rate in terms of greenhouse gas emissions, and road traffic is estimated to be responsible for over 80% of this increase since 1970 [1]. Road traffic, moreover, is associated with several other problems of an environmental, financial, and social nature due to congestion, delays, and infrastructure maintenance or construction. For example, traffic congestion costs the economy billions of dollars every year [2]. All these negative consequences are exacerbated in the presence of high-density traffic and congestion. Therefore, as the number of road vehicles steadily increases every year¹, rethinking the way traffic is managed is necessary to guarantee a sustainable future for transportation. In this paper, we focus on an urban traffic setting and, in particular, on intersection control. For more than 50 years, computer-aided traffic lights have been the standard tool for controlling intersections [3], and quite an extensive literature exists on the topic, boasting many different approaches. Design solutions based on dynamic programming or informed by control theory have been proposed, for instance, in [4]-[11]; see also [12], [13] for broad reviews. Varaiya, in his seminal paper [14], proposed a traffic-light control algorithm to stabilize the queue length at the intersections. Since then, many algorithms have been proposed to achieve the same aim under various settings [15]-[18]. None of these papers, however, considered the possibility of showing drivers how much time they will have to wait, on average, to cross the intersection. When this information is provided, drivers can react by changing lanes based on the displayed expected waiting times at the lanes of an intersection. We seek to answer the following questions. First, if we inform the drivers of the waiting time at the intersections, can congestion be alleviated? Second, can we develop an optimal traffic-light control mechanism by considering the impact of the drivers' rerouting decisions based on the displayed information? We consider a network of intersections where each intersection may consist of multiple lanes. We propose and analyze the addition to traffic lights of a visual indication of each lane's expected waiting time.

¹See, e.g., https://www.acea.be/publications/article/report-vehicles-in-use-europe-2019 and https://www.gov.uk/government/statistics/road-traffic-estimates-in-great-britain-2019
Then, we propose a control policy deciding the green-light duration of each traffic light based on the observed and estimated future traffic flows. Compared to canonical traffic-light approaches, a visual indication of the expected waiting time allows us to considerably increase the duration of red lights: as drivers can see that their waiting time is too large, they can change lanes and find alternative paths. We thus enable an additional degree of freedom, for instance, to actively divert traffic to specific routes to avoid congestion. Compared with the state of the art, it is worth noting that incentivization mechanisms such as toll prices, and their impact on traffic flows, have been considered before [19]-[21]; however, these papers did not consider the control of traffic lights at intersections. The closest work to ours is [22], which investigated how vehicles reroute based on the traffic delays at various intersections. The authors in [22] then investigated the performance of various traffic-light control algorithms in alleviating congestion. However, that paper did not formalize how an optimal traffic-signal control algorithm should be computed by incorporating the drivers' delay-based behavior. Our proposed model formalizes how the drivers react to the information and provides an algorithm to optimally compute the green traffic-light durations at intersections based on the drivers' reactions to the expected waiting time. Further, [22] considered an information structure where the travel time for each vehicle is updated at every instant of time, which is computationally costly. Instead, in our methodology, we provide the drivers with the waiting-time information at an intersection, which is easier to implement in practice. We cast the design of the control policy at the traffic intersections as a convex optimization problem on the observed traffic characteristics. We model the fraction of drivers changing lanes based on the displayed information as the solution to a second convex optimization problem. Since the control decisions depend on the drivers' behavior, and vice versa, the traffic controller operates online and in closed loop, in which the loop is closed on humans [23], [24]. Indeed, the closed-loop system results in a bi-level optimization problem, which turns out to be non-convex. We show that an optimal solution exists. Further, we propose a fixed-point iterative algorithm to find an optimal solution, and we provide sufficient conditions under which the algorithm converges. We provide numerical results showing the promising performance of the proposed architecture compared with an optimal traffic-light control policy that does not provide visual information to the drivers. Specifically, our algorithm controls the flow at intersections in a manner that significantly reduces the mean queue length across the network compared to the optimal policy that does not provide visual information to the drivers.

II. THE MODEL

In this section, we first describe the network structure that we consider (Section II-A). Subsequently, we characterize the traffic-light control architecture (Section II-B) and the dynamics of the traffic flow (Section II-C). Finally, we characterize the constraints on the decision variables, i.e., the green traffic-light durations at the different lanes across the intersections (Section II-D).
A. The Road Network

The road network is modeled by a directed graph G = (V, E), in which V is the set of nodes and E ⊆ V × V is the set of edges. Nodes represent intersections and edges represent roads connecting adjacent intersections. In particular, if (i, j) ∈ E, then there is a path going from i to j. A nonempty subset N_e of nodes represents "terminal intersections", from where traffic can only originate or terminate (Fig. 1). The incoming traffic from these terminal nodes cannot be controlled and is treated as an exogenous variable. The set of nodes with direct outgoing edges toward node i is denoted by N_i^in. Similarly, the set of nodes with incoming edges from node i is denoted by N_i^out. We denote by P_i the set of paths through intersection i, i.e., the triplets (j, i, k) with j ∈ N_i^in and k ∈ N_i^out. For example, in Fig. 1, (A, C, F) ∈ P_C. Furthermore, we let P = ∪_{i∈V} P_i. At each intersection i, there is a queue for each path (j, i, k) ∈ P_i, containing vehicles coming from node j ∈ N_i^in and going towards node k ∈ N_i^out via node i. For notational convenience, we shall assume that there is a unique queue for each path in P.

B. Traffic Lights

We assume that there is a different traffic light for each path (j, i, k) ∈ P. Each traffic light is characterized by its duty cycle, defined as the relative duration of the green light in each period. In particular, the duty cycle of the traffic light on path (j, i, k) ∈ P is denoted by g_{j,i,k} ∈ [0, 1]. The duty cycles (g_{j,i,k})_{(j,i,k)∈P} are the variables controlled by the decision logic. The controller updates its decision every ∆t ∈ R_{>0} minutes; in the meanwhile, the duty cycles are kept constant. In every decision interval ∆t, each traffic light performs T ∈ N_{>0} cycles, each of duration ∆t/T, in which green and red lights are alternated according to the current value of the duty cycles (amber lights are neglected for simplicity). Hence, for the traffic light controlling the path (j, i, k), the green and red light durations in each period are given, respectively, by g_{j,i,k}∆t/T and (1 − g_{j,i,k})∆t/T. Traffic lights belonging to the same intersection may not be independent of each other; indeed, vehicles on potentially colliding paths cannot cross the intersection at the same time. For each intersection i, non-colliding paths are identified by a covering L_i of P_i, whose elements I_i ∈ L_i are sets of paths such that any two paths (j, i, k) and (j′, i, k′) in the same I_i are non-colliding. Each I_i is maximal² with respect to this property.

C. Traffic Dynamics

In this section, we formally describe the time evolution of traffic flows within an arbitrary decision interval of length ∆t, in which each traffic light operates T cycles. For every traffic-light cycle t ∈ {1, ..., T} and every node i ∈ V, we denote by Λ^t_{j,i} the number of vehicles going from node j ∈ N_i^in to node i within the traffic-light cycle t. If j ∈ N_e, i.e., j is a terminal node, then Λ^t_{j,i} is the amount of vehicles³ coming from outside the considered road network. We denote by α^i_{k,j} the fraction of vehicles in Λ^t_{j,i} that goes toward k ∈ N_i^out; when k ∈ N_e, those vehicles will then exit the network. We assume that α^i_{k,j} does not depend on the particular cycle t. Hence, the traffic going from node j toward node k through i during time period t is given by α^i_{k,j} Λ^t_{j,i} (excluding the fraction of vehicles that have their destination at node i itself). Vehicles may also start their journey from in-between every pair of connected nodes. In particular, we denote by ζ^t_{j,i,k} the amount of vehicles that initiate their journey between node j ∈ N_i^in and node i during cycle t, and that go towards node k.
Let Ñ^t_{j,i,k} be the total amount of vehicles arriving in the queue of path (j, i, k) ∈ P during cycle t. Then,

Ñ^t_{j,i,k} = α^i_{k,j} Λ^t_{j,i} + ζ^t_{j,i,k}. (1)

For ease of exposition, we assume that the traffic controller has access to⁴ ζ_{j,i,k} and α^i_{j,k} for each (j, i, k) ∈ P. Furthermore, in the following we make the simplifying assumption that, during a traffic-light cycle, only the vehicles present in the queue at the beginning of the cycle can move towards the next node. This is a standard assumption (see, e.g., [15], [25]), and it is justified in our context by the fact that, in an urban area, paths are short and traffic speed is small⁵. Let N⁰_{j,i,k} denote the amount of vehicles in the queue (j, i, k) at the end of the previous control interval and, for t = 1, ..., T, let N^t_{j,i,k} denote the amount of vehicles in the queue (j, i, k) at the end of cycle t. We assume that the number of vehicles at each queue, N^t_{j,i,k}, can be measured. Furthermore, let M^t_{j,i,k} be the total number of vehicles moving from node j ∈ N_i^in towards node k ∈ N_i^out during cycle t, and let v_{j,i,k} be the maximum traffic outflow for the path (j, i, k) within a traffic-light cycle (for simplicity, v_{j,i,k} is assumed to be independent of t). Then, for all (j, i, k) ∈ P and all t = 1, ..., T,

M^t_{j,i,k} = min{ N^{t−1}_{j,i,k}, v_{j,i,k} g_{j,i,k} }, (2)

which represents the total amount of vehicles that exit the queue (j, i, k) during cycle t. Note that v_{j,i,k} g_{j,i,k} represents the amount of vehicles which can exit the intersection. Hence, the total amount of vehicles in queue (j, i, k) ∈ P at the end of cycle t is given by

N^t_{j,i,k} = N^{t−1}_{j,i,k} + Ñ^t_{j,i,k} − M^t_{j,i,k}. (3)

Moreover, since in view of (2) the amount of vehicles going from node j through node i toward node k during cycle t is given by M^t_{j,i,k}, we have

Λ^t_{i,k} = Σ_{j∈N_i^in} M^t_{j,i,k}. (4)

D. Dependency Constraints on the Decision Variables

During each cycle t = 1, ..., T, the traffic lights controlling the non-colliding paths in each set I_i ∈ L_i of each intersection i ∈ V can all be green at the same time. For example, in Fig. 1, the traffic lights corresponding to the paths in I_C = {(B, C, D), (B, C, F)} can be green simultaneously. Moreover, due to the maximality of the sets I_i, no other traffic light associated with a path in P_i \ I_i can be green during such time. For each i ∈ V and each I_i ∈ L_i, we introduce a variable g_{I_i} ∈ [0, 1] that indicates the fraction of time per cycle in which the traffic lights in I_i can be green⁶. For each i ∈ V, the variables (g_{I_i})_{I_i∈L_i} must satisfy

Σ_{I_i∈L_i} g_{I_i} ≤ 1. (5)

The inequality in (5) indicates that it may happen that, for a certain fraction of time, all the paths in P_i are blocked by a red light. Moreover, since for every i ∈ V a path (j, i, k) may belong to multiple elements⁷ of L_i, the duty cycles must satisfy

g_{j,i,k} ≤ Σ_{I_i∈L_i : (j,i,k)∈I_i} g_{I_i}. (6)

If a path (j, i, k) belongs to only one element I_i ∈ L_i, then the maximum duty cycle for the traffic light controlling (j, i, k) is bounded by g_{I_i}. If all paths have this property, then we can simply set g_{I_i} = max_{(j,i,k)∈I_i} g_{j,i,k}. Otherwise, the variables (g_{I_i})_{I_i∈L_i, i∈V} represent a further set of control variables that must be decided by the controller. The controller decides the values of the duty cycles (g_{j,i,k})_{(j,i,k)∈P} and of the variables (g_{I_i})_{I_i∈L_i, i∈V}. Decisions are taken every ∆t minutes and are kept constant for T traffic-light cycles of duration ∆t/T minutes, in which traffic lights alternate green and red lights.
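Before formalizing the controller, it may help to see the dynamics (1)-(4) in executable form. The following Python sketch simulates a single intersection with two paths; all names and numbers (the paths, g, v, N, and the random arrivals) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal single-intersection sketch of the queue dynamics (1)-(4).
T = 5                                  # traffic-light cycles per interval
paths = [("B", "C", "D"), ("B", "C", "F")]
g = {p: 0.4 for p in paths}            # duty cycles (decision variables)
v = {p: 3.0 for p in paths}            # max outflow per cycle
N = {p: 10.0 for p in paths}           # initial queue lengths N^0

rng = np.random.default_rng(0)
for t in range(1, T + 1):
    Lam_out = {}                       # Lambda^t_{i,k}: flow leaving i toward k
    for p in paths:
        arrivals = rng.uniform(0.5, 1.5)          # stands in for Eq. (1)
        M = min(N[p], v[p] * g[p])                # outflow, Eq. (2)
        N[p] = N[p] + arrivals - M                # queue update, Eq. (3)
        k = p[2]
        Lam_out[k] = Lam_out.get(k, 0.0) + M      # aggregation, Eq. (4)
    print(f"cycle {t}: queues = { {p: round(N[p], 2) for p in paths} }")
```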
III. CONTROL OF TRAFFIC LIGHTS

In this section, we formalize the control problem in terms of an optimization problem cast on the traffic characteristics. First, in Section III-A, we approach the "classic" problem, where no additional information is shown to the drivers. In Section III-B, we then consider the control problem in the case in which the traffic lights display information about the expected waiting time and the vehicles redistribute based on this information. In Section III-C, we characterize the drivers' response to the expected waiting-time information. Subsequently, in Section III-D, we formulate the optimal control problem incorporating the drivers' response as a bi-level optimization problem.

A. Control Without Waiting-Time Information

As a further degree of freedom, we consider the case in which the duty cycles are lower bounded by an arbitrary designer-decided quantity g_min ≥ 0. Hence, the decision variables satisfy

g_{j,i,k} ≥ g_min for all (j, i, k) ∈ P. (7)

The controller is then obtained as a solution to the following optimization problem

Q: minimize a quadratic cost penalising the queue lengths (e.g., Σ_{t=1}^{T} Σ_{(j,i,k)∈P} (N^t_{j,i,k})²), subject to (1), (2), (3), (4), (5), (6), (7),

in the decision variables (g_{j,i,k})_{(j,i,k)∈P} and (g_{I_i})_{I_i∈L_i, i∈V}. We observe that, instead of the nonlinear constraint in (2), we can equivalently consider the following linear inequalities

M^t_{j,i,k} ≤ N^{t−1}_{j,i,k},  M^t_{j,i,k} ≤ v_{j,i,k} g_{j,i,k}. (8)

Then, Q is equivalent to Q′, the same problem subject to (1), (3), (4), (5), (6), (7), (8), in the decision variables (g_{j,i,k})_{(j,i,k)∈P} and (g_{I_i})_{I_i∈L_i, i∈V}. A solution of Q′ is also a solution of Q and vice versa; thus, they are equivalent. Q′ is a convex quadratic problem whose solution can easily be obtained with standard solvers.

B. Control With Waiting-Time Information

In this section, we suppose that the traffic lights inform the drivers in every queue (j, i, k) about the time w^t_{j,i,k} ∈ R_{≥0} that a vehicle is expected to wait before crossing the intersection during cycle t. Recall that at the end of cycle t, the amount of vehicles in the queue (j, i, k) is N^t_{j,i,k}, and recall also that v_{j,i,k} is the outflow for queue (j, i, k); hence, the total amount of vehicles which exit queue (j, i, k) during a traffic-light cycle is g_{j,i,k} v_{j,i,k}. The waiting time depends on the position of the vehicle within the queue and on the time required for all the vehicles in front to cross the intersection. Thus, the average waiting time displayed at the beginning of cycle t + 1 to drivers in the queue (j, i, k) is given by⁸

w^t_{j,i,k} = ( N^t_{j,i,k} / (g_{j,i,k} v_{j,i,k}) ) ∆t/T. (9)

(⁸If g_min > 0 in (7), then the waiting times w^t_{j,i,k} are bounded. If, instead, g_min = 0, then (9) may be saturated to ensure boundedness.)

Vehicles may want to change lanes depending on the shown value of the expected waiting time. We let β^{t+1}_{j,i,k,k′} be the fraction of the N^t_{j,i,k} vehicles in queue (j, i, k) which move⁹ toward queue (j, i, k′) at the beginning of cycle t + 1, after receiving the information about the average waiting time at the end of cycle t. Clearly, the vehicles in queue (j, i, k) can only move toward a queue (j, i, k′) which is accessible at node i for vehicles coming from node j. In particular, we let C_{j,i} := {k ∈ V : (j, i, k) ∈ P} ⊆ N_i^out be the set of all accessible nodes for vehicles going toward node i ∈ V from node j ∈ N_i^in. Whether a vehicle decides to change lane given the new information provided by the traffic light may depend on many factors. We postpone this discussion to Section III-C, in which we propose a model for the drivers' average behavior; in the remainder of this section we treat the quantities β^t_{j,i,k,k′} as parameters, only assumed to satisfy

Σ_{k′∈C_{j,i}} β^t_{j,i,k,k′} = 1 and β^t_{j,i,k,k′} ≥ 0, ∀k′ ∈ C_{j,i}, (10)

for all i ∈ V, j ∈ N_i^in, k ∈ C_{j,i}, and t = 1, ..., T.
The constraints (10) make the β^t_{j,i,k,k′} probability factors, ensuring a redistribution that preserves the amount of vehicles. Let N̂^t_{j,i,k} be the number of vehicles in queue (j, i, k) at the start of cycle t, after re-balancing; then

N̂^t_{j,i,k} = Σ_{k′∈C_{j,i}} β^t_{j,i,k′,k} N^{t−1}_{j,i,k′}. (11)

This is the state of the queue (j, i, k) ∈ P after re-balancing at the start of the traffic-light cycle t = 1, ..., T. As only the vehicles which are in the queue at the beginning of cycle t can move to the next intersection, in view of (11) we replace (2) and (3) with

M^t_{j,i,k} = min{ N̂^t_{j,i,k}, v_{j,i,k} g_{j,i,k} } (12)

and

N^t_{j,i,k} = N̂^t_{j,i,k} + Ñ^t_{j,i,k} − M^t_{j,i,k}, (13)

respectively. Denote β := (β^t_{j,i,k,k′})_{i∈V, j∈N_i^in, k,k′∈C_{j,i}, t=1,...,T}. Then, similarly to the control design without information display (Section III-A), for every fixed β satisfying (10) the control policy with waiting-time information is obtained as a solution to the following optimization problem

H(β): minimize the same quadratic queue-length cost as in Q, subject to (1), (5), (6), (7), (9), (11), (12), (13),

in the decision variables (g_{j,i,k})_{(j,i,k)∈P} and (g_{I_i})_{I_i∈L_i, i∈V}. We observe that H(β) is convex.

C. Models of Drivers' Reaction

Whether a vehicle changes lane in reaction to the displayed information about the waiting time depends on many factors. In this paper, we consider that, for each path (j, i, k) ∈ P, the values of the variables β_{j,i,k,k′}, k′ ∈ C_{j,i}, in a given cycle t are chosen as the solution of the following optimization problem, parametrized by the waiting times w^t := (w^t_{j,i,k})_{(j,i,k)∈P}:

D_{j,i,k}(w^t): min_β Σ_{k′∈C_{j,i}} ( η β_{j,i,k,k′} log β_{j,i,k,k′} + δ β_{j,i,k,k′} w^t_{j,i,k′} ) − β_{j,i,k,k}, subject to (10).

The first term in the cost function of D_{j,i,k}(w^t) is a negative entropy term; minimizing it maximizes dispersion. The second term, δ β_{j,i,k,k′} w^t_{j,i,k′}, weights the expected waiting time; minimizing this term leads to an arrangement toward the lanes for which the expected waiting time is minimum. Finally, the third term, −β_{j,i,k,k}, represents the inertia to changing lane, since vehicles may be reluctant to change lanes unless they obtain a larger saving in waiting time. This term is indeed minimized when β_{j,i,k,k} = 1, meaning that there is no change of lane. Solving D_{j,i,k}(w^t) therefore leads to a compromise between dispersion, reaction to waiting times, and inertia, in which the relative importance of the three terms is regulated by the parameters η and δ. In this respect, we observe that, even if η and δ are much larger than one (meaning that the inertia term has no effect), the presence of the entropy term implies that vehicles will not always change to the lanes where the average waiting time is smallest; vehicles may also move toward lanes where the waiting time is not minimal, albeit with a smaller probability. Indeed, as η decreases to zero, the probability distribution concentrates around the minimum-waiting-time lanes. The negative entropy term is important since vehicles may not move toward the lane with the smallest waiting time for various reasons; for instance, different vehicles may have different destinations or preferences for specific routes. Finally, we remark that negative entropy terms are customarily used to model the decisions of agents in the context of learning theory [26], [27], where they are used for regularization. Throughout this paper, we shall assume that η > 0. For fixed w^t, D_{j,i,k}(w^t) has a unique solution, given by the following lemma, whose proof is omitted for reasons of space.
Lemma 1: For every (j, i, k) ∈ P and t = 1, ..., T, let c_{k′} := δ w^t_{j,i,k′} − 1_{{k′=k}} for k′ ∈ C_{j,i}, where 1 denotes the indicator; then the unique optimal solution of D_{j,i,k}(w^t) is given by

β^t_{j,i,k,k′} = exp(−c_{k′}/η) / Σ_{m∈C_{j,i}} exp(−c_m/η).

Note that the probability that vehicles move toward the lanes associated with smaller waiting times is higher than the probability that they move toward the lanes associated with higher waiting times. As η increases, the decision becomes more random.

D. The Closed-Loop System

For fixed β, problem H(β) has a unique optimal solution, which produces a value of the waiting times w^t according to (9). Conversely, for fixed w^t, the problem D_{j,i,k}(w^t) has a unique solution, given by Lemma 1, for every (j, i, k) ∈ P; these solutions result in an optimal value for β. Therefore, the closed-loop system consists of a bi-level optimization problem, which we denote by K, obtained as the interconnection between H(β) and (D_{j,i,k}(w^t))_{(j,i,k)∈P}. Unfortunately, K is not convex, and its solution cannot in general be found efficiently. An approach to find an optimal solution to K is discussed in the next section.

IV. PATHWAYS FOR SOLVING K

As anticipated in Section III-D, the overall control problem K in the presence of waiting-time information is not convex. Nevertheless, it is given by the interconnection of two convex problems, and the following can be concluded by means of Brouwer's fixed-point theorem (details are omitted for reasons of space).

Theorem 1: K admits an optimal solution.

However, Theorem 1 is only an existence result: it does not guarantee uniqueness of the solution, nor does it give any analytical expression. In this section, we devise a solution procedure based on the Banach fixed-point iteration that can be used to solve K. Moreover, we give sufficient conditions guaranteeing that K has a unique solution and that the proposed procedure finds it. For fixed β satisfying (10) for all i ∈ V, j ∈ N_i^in, k ∈ C_{j,i}, and t = 1, ..., T, we denote by Γ(β) the unique solution to H(β). Likewise, we denote w := (w^t)_{t=1,...,T}, and by Ψ(w) the unique solution to D(w) = (D_{j,i,k}(w^t))_{(j,i,k)∈P, t=1,...,T} for fixed w. We recall that every assignment g of the decision variables (g_{j,i,k})_{(j,i,k)∈P} and (g_{I_i})_{I_i∈L_i, i∈V} produces a value of w according to (9), which we denote simply by w(g). As a consequence, every optimal solution g⋆ of K satisfies g⋆ = Γ(Ψ(w(g⋆))); namely, it is a fixed point of the map Γ∘Ψ∘w. This motivates the following iterative procedure, which starts at k = 0 from an arbitrary initial condition β̂⁰:

S1. Compute ĝ^{k+1} = Γ(β̂^k), the optimal solution of H(β̂^k).
S2. Compute β̂^{k+1} = Ψ(w(ĝ^{k+1})) according to Lemma 1.
S3. Set k ← k + 1 and repeat from S1 until convergence, or until a stopping criterion is met.

Convergence to an optimal solution of K cannot be established in general. Nevertheless, convergence can be concluded under a contraction-like property of the map Γ∘Ψ∘w. More precisely, we first notice that, under some conditions, the solution maps Γ and Ψ∘w are both Lipschitz, as established by the lemma below.

Lemma 2: Γ and Ψ are Lipschitz. If in addition g_min > 0 in (7), then w and Ψ∘w are also Lipschitz.

Let L_Γ and L_{Ψw} denote, respectively, the Lipschitz constants of Γ and Ψ∘w. Then, the following theorem guarantees that the procedure devised above always converges to an optimal solution if L_Γ L_{Ψw} < 1.

Theorem 2: Suppose that L_Γ L_{Ψw} < 1. Then, K has a unique optimal solution g⋆. Moreover, for every initial condition (β̂⁰, ĝ⁰), the procedure described by S1-S3 produces a sequence (ĝ^k)_{k∈N} that converges exponentially to g⋆.
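A minimal Python sketch of the two building blocks above: the softmax driver response of Lemma 1 and the S1-S3 fixed-point loop. The helper names (driver_response, fixed_point) and the toy stand-ins for Γ and Eq. (9) are our own assumptions; in practice Γ(β) is obtained by solving the convex QP H(β).

```python
import numpy as np

def driver_response(w, k_stay, eta=1.0, delta=2.0):
    """Softmax solution of D_{j,i,k}(w) from Lemma 1: w holds the displayed
    waiting times over the accessible lanes C_{j,i}; lane k_stay gets the
    inertia bonus from the -beta_{j,i,k,k} term."""
    c = delta * w
    c[k_stay] -= 1.0
    z = np.exp(-(c - c.min()) / eta)       # shift for numerical stability
    return z / z.sum()

def fixed_point(gamma, waiting_times, beta0, tol=1e-8, max_iter=100):
    """Banach-style iteration S1-S3. `gamma` stands in for the solver of
    H(beta); `waiting_times` stands in for Eq. (9)."""
    beta = beta0
    for _ in range(max_iter):
        g = gamma(beta)                            # S1
        beta_new = driver_response(waiting_times(g), k_stay=0)  # S2
        if np.linalg.norm(beta_new - beta) < tol:
            return g, beta_new                     # converged
        beta = beta_new                            # S3
    return g, beta

# Toy example: two accessible lanes, N = 10 vehicles, v = 3 per cycle.
gamma = lambda beta: 0.2 + 0.6 * beta              # mock Gamma(beta)
waiting = lambda g: 10.0 / (3.0 * g)               # mock Eq. (9), dt/T = 1
g_star, beta_star = fixed_point(gamma, waiting, beta0=np.full(2, 0.5))
print("duty cycles:", np.round(g_star, 3), "beta:", np.round(beta_star, 3))
```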
V. NUMERICAL RESULTS

We consider the road topology shown in Fig. 1. The routing fractions α^i_{k,j} are set so that the incoming flow is always split evenly among all the possible outgoing paths. For example, the flow between F and G is divided equally between the two possible options, D and K; therefore, α^G_{F,D} = 0.5 and α^G_{F,K} = 0.5. The amount of new vehicles entering the network is modulated by r_t, where r_t is sampled randomly from a uniform distribution on [0.5, 1]; hence, no vehicle is dissipated and no additional vehicle is created outside the terminal nodes. The maximum flows v^t_{j,i,k} are sampled randomly from a uniform distribution on [2, 4], for all t and all (j, i, k) ∈ P. The drivers react to the displayed waiting times as explained in Lemma 1, with η = 1 and δ = 2. The model is simulated using a 5-minute traffic-light cycle, and it runs for 480 minutes (8 hours). In these settings, we tested three controllers: A. the controller Q (without waiting-time display) with g_min = 10⁻⁴; B. the controller Q (without waiting-time display) with g_min = 0.05; C. the controller K with g_min = 10⁻⁴, η = 1, and δ = 2, computed using the algorithm proposed in Section IV. Controller A computes the traffic-light duty cycles without considering that the drivers can change path in response to the traffic-light durations in the considered time interval. Controller B is a variation of A in which g_min is larger; this is usually a sensible constraint to avoid excessive red times, which may frustrate drivers when they are not informed about them. Finally, controller C accounts for the fact that the traffic-light duty cycles influence the drivers' decisions to change their paths in the considered time interval. All the presented controllers act every T = 5 cycles, and therefore ∆t = 5T = 25 minutes. In the first hour of the simulation, all the controllers are disabled and the traffic lights have a fixed duty cycle, thus distributing the time equally among all the possible paths. The controllers assume that the values of v_{j,i,k} and ζ^t_{j,i,k} are constant for the next T traffic cycles; these constant values are selected as the mean of the last 60 minutes of measurements. The waiting time is limited to be less than 50 minutes. This constraint is not strict, and it is used mainly to avoid the computational problems that can arise when dealing with extremely large values. The quadratic optimization problems Q and H(β) are solved using YALMIP [28] with Gurobi [29] as a solver. Fig. 2 (left plot) shows the mean queue length at each time instant for the three controllers and for the case where no controller is enabled. An increasing queue length means that the network is congested and that more vehicles are coming in than going out. It is therefore clear that controller C successfully avoids congestion, while the other approaches fail to do so. The same conclusion can be reached by looking at Fig. 2 (right plot), where the flow balance of the network, i.e., the difference between the total outgoing and incoming flows, is shown. Here, we can see that controllers A and B have a negative flow balance; this results in an increasing number of vehicles inside the network and therefore in congestion. Conversely, controller C, after a small transient, is characterized by a flow balance that oscillates around zero. Therefore, the amount of vehicles inside the network does not increase, and congestion is avoided. Fig. 3 shows the queue length (normalized with respect to the maximum queue) in all the 43 paths of the network.
Here, it is possible to note that controller C manages to keep a more balanced distribution of vehicles across the network and to avoid the accumulation of traffic in a single queue. (Fig. 3: Queue length at time t = 420 minutes (7 hours) obtained using the three controllers. The length of the queues in each scenario is normalized with respect to the maximum queue obtained in that particular case.)

VI. CONCLUSIONS AND FUTURE WORK

We considered a scenario where waiting-time information is provided to the drivers at the intersections and the drivers can change lanes based on that information. By modeling the drivers' reaction to the traffic-light durations, we formulated the optimal selection of the traffic-light durations at a network of urban intersections as a bi-level optimization problem and proposed an iterative algorithm to solve it. We showed empirically that our approach can alleviate congestion compared to the scenario where waiting-time information is not provided to the drivers. We did not consider any driver-specific information, such as origin and destination, when computing the drivers' response; characterizing the optimal traffic-light durations by incorporating such finer specifications constitutes a direction for future research. Designing a decentralized algorithm that can be implemented at each intersection using only local information is also left for future work.
Self-affine subglacial roughness: consequences for radar scattering and basal water discrimination in northern Greenland

Subglacial roughness can be determined at a variety of length scales from radio-echo sounding (RES) data, either via statistical analysis of topography or inferred from basal radar scattering. Past studies have demonstrated that subglacial terrain exhibits self-affine (power-law) roughness scaling behaviour, but existing radar scattering models do not take this into account. Here, using RES data from northern Greenland, we introduce a self-affine statistical framework that enables a consistent integration of topographic-scale roughness with the electromagnetic theory of radar scattering. We demonstrate that the degree of radar scattering, quantified using the waveform abruptness (pulse peakiness), is topographically controlled by the Hurst (roughness power-law) exponent. Notably, specular bed reflections are associated with a lower Hurst exponent, and diffuse scattering is associated with a higher Hurst exponent. Abrupt waveforms (specular reflections) have previously been used as a RES diagnostic for basal water, and to test this assumption we compare our radar scattering map with a recent prediction for the basal thermal state. We demonstrate that the majority of thawed regions (above the pressure melting point) exhibit a diffuse scattering signature, which contradicts the prior approach. Self-affine statistics provide a generalised model for subglacial terrain and can improve our understanding of the relationship between basal properties and ice-sheet dynamics.

Introduction

With the development of the newest generation of thermomechanical ice-sheet models, there has been a growing awareness that better constraining the physical properties of the glacier bed is essential for improving their predictive capability (e.g. Price et al., 2011; Seroussi et al., 2013; Nowicki et al., 2013; Shannon et al., 2013; Sergienko et al., 2014; Ritz et al., 2015; Cornford et al., 2015). Notably, the basal traction parameterisation, which encapsulates the thermal state, basal roughness, and lithology, is potentially the largest single geophysical uncertainty in projections of the response of ice sheets to climate change (Ritz et al., 2015). Distinction between frozen and thawed regions of the glacier bed is particularly important in constraining ice dynamics, since appreciable basal motion can only occur in regions where the glacier bed is wet (Weertman, 1957; Nye, 1970; MacGregor et al., 2016). Airborne radio-echo sounding (RES) is the only existing remote sensing technique that can acquire bed data with sufficient spatial coverage to enable subglacial information to be obtained across the ice sheets (refer to Pritchard, 2014, and Bamber et al., 2013a, for recent Antarctic and Greenland coverage maps). Often, however, there is great ambiguity in RES-derived subglacial information (Matsuoka, 2011), or RES-derived information is suboptimal for direct applicability in ice-sheet models (Wilkens et al., 2015). Subsequently, data analysis methods which seek to improve the clarity and glaciological utility of RES-derived subglacial information are undergoing a period of rapid development (e.g. Oswald and Gogineni, 2008; Li et al., 2010; Fujita et al., 2012; Wolovick et al., 2013; Schroeder et al., 2013, 2016; Jordan et al., 2016).
RES data analysis methods for determining subglacial physical properties can be categorised in two ways: those which determine bulk properties (including the discrimination of basal water) and those which determine interfacial properties (subglacial roughness). Bulk material properties of the glacier bed can, in principle, be determined using the basal reflectivity (Bogorodsky et al., 1983; Peters et al., 2005; Jacobel et al., 2009; Schroeder et al., 2016). Performing basal reflectivity analysis on an ice-sheet-wide scale is, however, greatly limited by uncertainty and spatial variation in englacial radar attenuation (Matsuoka, 2011; Matsuoka et al., 2012; MacGregor et al., 2012, 2015; Jordan et al., 2016). In contrast to bulk properties, subglacial roughness analysis methods are (nearly) independent of radar attenuation. Subglacial roughness can be determined either via statistical analysis of topography (typically spectral analysis) (Taylor et al., 2004; Siegert et al., 2005; Bingham and Siegert, 2009; Li et al., 2010; Rippin, 2013) or inferred from the electromagnetic scattering properties of the radar pulse (Oswald and Gogineni, 2008; Schroeder et al., 2014; Young et al., 2016). Spectral analysis can provide valuable insight into aspects of past ice dynamics and landscape formation (Siegert et al., 2005; Bingham and Siegert, 2009; Rippin et al., 2014). However, since the technique is limited to investigating length scales greater than the horizontal resolution (typically ∼30 m or greater), the relevance of the method for informing contemporary basal sliding physics, which requires metre-scale roughness information (Weertman, 1957; Nye, 1970; Hubbard et al., 2000; Fowler, 2011), remains unclear. Radar scattering is sensitive to the length scale of the electromagnetic wave (Shepard and Campbell, 1999) (∼1-5 m in ice for the majority of airborne sounders) and can potentially reveal finer-scale roughness information, including the geometry of subglacial hydrological systems (Oswald and Gogineni, 2008; Schroeder et al., 2013, 2015; Young et al., 2016). High reflection specularity, such as occurs for deep (> 10 m) subglacial lakes (Oswald and Robin, 1973; Gorman and Siegert, 1999; Palmer et al., 2013), has been proposed as a RES diagnostic for basal water (Oswald and Gogineni, 2008, 2012). Degrees of radar scattering can be mapped either using the waveform properties of the bed echo, e.g. the waveform abruptness (pulse peakiness) (Oswald and Gogineni, 2008, 2012), or by constraining the angular distribution of scattered energy, e.g.
the specularity content (Schroeder et al., 2013; Young et al., 2016). Maps of both scattering parameters indicate well-defined spatial patterns but, to date, have not been integrated with topographic-scale roughness analysis (horizontal length scales of ∼10 s of metres and upwards). As such, there is a knowledge gap regarding the topographic control upon radar scattering. Observations indicate that subglacial roughness exhibits self-affine (fractal) scaling behaviour over length scales from ∼10⁻³ to ∼10² m (Hubbard et al., 2000; MacGregor et al., 2013). Self-affine scaling corresponds to the case in which the vertical roughness increases at a fixed, slower rate than the horizontal length scale, following a power-law relationship that is parameterised by the Hurst exponent (Malinverno, 1990; Shepard et al., 2001). It is observed for a wide variety of natural terrain (Smith, 2014), including the surface of Mars (Orosei et al., 2003), volcanic lava (Morris et al., 2008), and alluvial channels (Robert, 1988). If widely present, the self-affinity of subglacial roughness poses a challenge for integrating topographic roughness with existing glacial radar scattering models (Berry, 1973; Peters et al., 2005; MacGregor et al., 2013; Schroeder et al., 2015). This is because these are statistically stationary models which assume that roughness is independent of horizontal length scale, and hence an artificial scale separation between high-frequency roughness and low-frequency topography is present (Berry, 1973). Radar scattering models with non-stationary, self-affine statistics naturally incorporate the multiscale dependence of roughness and are in widespread use in other fields of radar geophysics (e.g. Shepard and Campbell, 1999; Franceschetti et al., 1999; Campbell and Shepard, 2003; Oleschko et al., 2003). In this study, we explore the connection between self-affine subglacial roughness and radar scattering using recent airborne Operation IceBridge (OIB) RES data from the north-western Greenland Ice Sheet (GrIS). Firstly, we review the theory of self-affine roughness statistics, using examples from ice-penetrating radargrams and bed elevation profiles to demonstrate its applicability to subglacial terrain (Sect. 2). We then outline analysis methods that enable topographic roughness and radar scattering (quantified using the waveform abruptness) to be extracted from RES flight-track data (Sect. 3). A self-affine radar scattering model, adapted from planetary radar sounding (Shepard and Campbell, 1999; Campbell and Shepard, 2003), is then used to predict the relationship between the Hurst exponent and the waveform abruptness (Sect. 4). We then present maps of the RES-derived roughness and scattering data for the northern GrIS and compare their spatial distribution with bed topography (Bamber et al., 2013a) and a recent prediction for the basal thermal state (MacGregor et al., 2016) (Sect. 5.1).
The radar scattering model is then used to quantify the self-affine topographic control upon radar scattering, via the Hurst exponent (Sect. 5.2). The statistics of the RES-derived data in predicted thawed and frozen regions of the glacier bed are then analysed (Sect. 5.3), with the purpose of testing the basal water discrimination method of Oswald and Gogineni (2008, 2012), which assumes that a specular scattering signature is present. Finally, we discuss the wider consequences of our study, including subglacial landscape classification, the relationship between bed properties and ice-sheet dynamics, basal thaw/water discrimination, and radar scattering theory applied to RES (Sect. 6).

Overview

Statistical methods to calculate the Hurst exponent, and thus to quantify self-affine scaling behaviour, are well established in the earth and planetary science literature (Malinverno, 1990; Shepard et al., 2001; Kulatilake et al., 1998; Orosei et al., 2003). These space-domain methods extract the Hurst exponent using the variogram (roughness versus profile length) and the deviogram (roughness versus horizontal lag). Our motivation for using these methods, rather than the spectral (frequency-domain) methods previously applied in studies of subglacial roughness (Taylor et al., 2004; Siegert et al., 2005; Bingham and Siegert, 2009; Li et al., 2010; Rippin, 2013), is that they better reveal self-affine scaling behaviour (Turcotte, 1992; Shepard et al., 1995, 2001). Since the theory of self-affine roughness and the related space-domain methods are not widely discussed in the glaciological literature (the only example being MacGregor et al., 2013), we now provide a review of the key concepts.

Interfacial roughness parameters

Topographic roughness can be measured by means of statistical parameters that are, in general, a function of horizontal length scale (Shepard et al., 2001; Smith, 2014). Two different interfacial roughness parameters, the root mean square (rms) height and the rms deviation, are typically employed in self-affine roughness statistics (Shepard et al., 2001). The rms height is given by

ξ(L) = [ (1/N) Σ_{i=1}^{N} ( z(x_i) − z̄ )² ]^{1/2}, (1)

where N is the number of sample points within the profile window of length L, z(x_i) is the bed elevation at point x_i, and z̄ is the mean bed elevation of the profile. ξ represents the standard deviation of the bed elevation about a mean surface and models the topographic roughness as a Gaussian-distributed random variable (Orosei et al., 2003). The rms deviation is given by

ν(∆x) = [ (1/N) Σ_{i=1}^{N} ( z(x_i) − z(x_i + ∆x) )² ]^{1/2}, (2)

where ∆x is the horizontal step size (lag). ν has a particular significance in the parameterisation of radar scattering models with self-affine statistics (Shepard and Campbell, 1999; Campbell and Shepard, 2003), and we focus upon this roughness parameter when integrating topographic-scale roughness with radar scattering data. The rms slope, which is proportional to the rms deviation, is also widely used in self-affine statistics, but we do not use it here.
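As a concrete illustration of Eqs. (1) and (2), the following Python sketch computes the rms height and rms deviation for a synthetic bed elevation profile; the random-walk profile and the 30 m spacing are assumptions standing in for real Level 2 bed picks.

```python
import numpy as np

def rms_height(z):
    """Eq. (1): rms height (std of elevation about the profile mean)."""
    return np.sqrt(np.mean((z - z.mean()) ** 2))

def rms_deviation(z, lag):
    """Eq. (2): rms deviation at a horizontal lag of `lag` samples."""
    d = z[lag:] - z[:-lag]
    return np.sqrt(np.mean(d ** 2))

# Synthetic 10 km profile at 30 m spacing (333 points); a random walk
# stands in for real bed elevations and behaves like Brownian terrain.
rng = np.random.default_rng(1)
z = np.cumsum(rng.normal(0.0, 1.0, 333))
print("xi =", round(rms_height(z), 2), "m")
for lag in (1, 2, 4, 8):                   # lags of 30, 60, 120, 240 m
    print(f"nu({30 * lag} m) =", round(rms_deviation(z, lag), 2), "m")
```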
Self-affine scaling behaviour and the role of the Hurst exponent

Self-affine scaling is a subclass of fractal scaling behaviour and can be parameterised using the Hurst exponent, H (Malinverno, 1990; Shepard et al., 1995, 2001). H quantifies the rate at which roughness in the vertical direction increases relative to the horizontal length scale (and is defined for 0 ≤ H ≤ 1). For a self-affine interface, the following power-law relationships hold:

ξ(L) = ξ(L₀) (L/L₀)^H (3)

and

ν(∆x) = ν(∆x₀) (∆x/∆x₀)^H, (4)

where L₀ is a reference profile length and ∆x₀ is a reference horizontal lag (Shepard and Campbell, 1999; Shepard et al., 2001). Three limiting cases of self-affine scaling are typically discussed (Shepard and Campbell, 1999). Terrain with H = 1 (where the roughness in the vertical direction increases at the same rate as the horizontal length scale) is referred to as "self-similar". Terrain with H = 0.5 (where the roughness in the vertical direction increases with the square root of the horizontal length scale) is referred to as "Brownian". Terrain with H = 0 (where the roughness in the vertical direction is independent of the horizontal length scale) is referred to as "stationary". For a stationary (H = 0) interface, it follows from Eqs. (3) and (4) that ξ and ν are independent of L and ∆x, respectively (i.e. the roughness parameters are independent of horizontal length scale). We will later demonstrate that subglacial terrain exhibits near-ubiquitous self-affine scaling behaviour with pronounced spatial structure and variation in H. Examples of OIB ice-penetrating radargrams (Z scopes) (Paden, 2015) and associated bed elevation profiles for terrain with different H are shown in Fig. 1. Clear differences are apparent between the different terrain examples. The black (H ≈ 0.9) terrain (Fig. 1a) and red (H ≈ 0.7) terrain (Fig. 1b) lie between Brownian and self-similar scaling behaviour. This terrain exhibits "persistent trends", where neighbouring measurements tend to follow a general trend of increasing or decreasing elevation (refer to Shepard and Campbell, 1999, for a full discussion). A feature of terrain with higher H is that it tends to appear relatively rough at larger length scales (low frequency) and smooth at smaller length scales (high frequency). By contrast, the green (H ≈ 0.3) terrain (Fig. 1d) is in the sub-Brownian scaling regime and exhibits "anti-persistent trends", where neighbouring measurements tend to alternate between increasing and decreasing elevation. A feature of lower-H terrain such as this example is that it tends to have similar roughness across length scales. The blue (H ≈ 0.5) terrain (Fig. 1c) is close to an ideal Brownian surface and exhibits no overall persistence (with some sections of the profile following an increasing/decreasing elevation trend and other sections alternating). The 10 km profile windows in Fig. 1 represent the length of flight-track data over which H is calculated (see Sect. 3.2).

Calculation of the Hurst exponent using the variogram and deviogram

In order to calculate H, and to identify the scale regime over which glacial terrain exhibits self-affine behaviour, ξ and ν are plotted as functions of L and ∆x, respectively, on double-logarithmic plots, referred to as the variogram and the deviogram (Kulatilake et al., 1998; Shepard et al., 2001). Variogram and deviogram plots of ξ(L) and ν(∆x) for the four terrain examples in Fig. 1 are shown in Fig. 2a and b, respectively.
It follows from Eqs. (3) and (4) that, on this double-logarithmic scale, a straight-line relationship is predicted for glacial terrain that is self-affine, with the gradient equal to H. In practice, a single self-affine relationship only holds over a limited scale regime, and a "break-point" transition is often observed (Shepard et al., 2001). We describe how we assess the break points for glacial terrain in Sect. 3.2, along with further details regarding the application of the variogram and deviogram to along-track RES data. Figure 2 clearly demonstrates the significance of the Hurst exponent and the horizontal length scale when assessing the relative roughness of different terrain. For example, the black (H ≈ 0.9) terrain is rougher than the red (H ≈ 0.7) terrain at larger length scales but is smoother at smaller length scales. The space-domain variogram and deviogram have an approximate correspondence to the frequency-domain power spectrum (Turcotte, 1992; Shepard et al., 1995, 2001). (Fig. 2 caption: The plots correspond to the subglacial terrain profiles in Fig. 1. The Hurst exponent is estimated from the linear gradient of the first five data points (indicated by dashed lines). These space-domain plots are approximate equivalents of frequency-domain roughness power spectra, and smaller length scales correspond to higher frequencies.) In frequency space, self-affine scaling occurs when the power spectrum, S, has a relationship of the form S(k) ∝ k^{−β}, where k is the spatial frequency and −β is the spectral slope. The relationship between β and H is dimensionally dependent and, for along-track data, is given by H = (β − 1)/2 (Turcotte, 1992). Despite this correspondence, the space-domain methods are recommended for calculating H, as they are less noisy and less likely to bias slope estimates than the power-spectrum method (Shepard et al., 1995). The study by Hubbard et al. (2000) observed self-affine scaling in the roughness power spectrum over length scales from ∼10⁻³ to ∼10 m at different sites across recently deglaciated terrain in the immediate foreground of Tsanfleuron glacier, Switzerland. Their range of measured values of β corresponds to 2.27 < β < 2.48, which implies H ≈ 0.7.

The RES data used in this study were acquired by the Multichannel Coherent Radar Depth Sounder (MCoRDS) during the 2011 and 2014 Operation IceBridge field seasons (Paden, 2015). For the flight lines considered, the along-track resolution after synthetic aperture radar (SAR) processing and multi-looking is ∼30 m, with an along-track sample spacing of ∼15 m (Gogineni et al., 2014). The 2011 and 2014 field seasons were used since they have a higher along-track resolution than other recent field seasons and hence enable a clearer connection to be made between radar scattering and topographic-scale roughness. The study focused on flight-track data from north-western Greenland and encompassed measurements close to three deep ice cores: Camp Century, NEEM, and NorthGRIP (Fig. 3). The first reason for selecting this region is that the data coverage for the 2011 and 2014 field seasons is of high density relative to most other regions of the ice sheet. The second reason is that confidence regarding the basal thermal state is high near the ice cores, which enables the validity of the basal water RES analysis by Oswald and Gogineni (2008, 2012) to be tested.
Measurements from MCoRDS are supplied as data products with different levels of additional processing (Paden, 2015). Level 2 data correspond to ice thickness, ice surface, and bed elevation data and are used to calculate topographic-scale roughness and the Hurst exponent (Sect. 3.2). Details regarding the semi-manual picking procedure are described by Paden (2015), and only the highest-quality picks were used. Level 1B data correspond to radar-echo strength profiles and are used to extract the waveform abruptness parameter from the bed echo (Sect. 3.3). Basal reflection values can also be extracted from Level 1B data, but we do not do this here because we do not wish to bias our interpretation due to uncertainty in radar attenuation. The preprocessing of the combined channel Level 1B data is also described by Paden (2015). Sequentially, this involves channel compensation between each of the antenna phase centres, pulse compression (using a 20 % Tukey window in the time domain), coherent averaging of the channels, SAR processing with an along-track frequency window, channel combination, and waveform combination.
Determination of topographic roughness and Hurst exponent from Level 2 data
The along-track spacing (∼ 15 m) of the Level 2 data is half the horizontal resolution (∼ 30 m), which represents the spacing at which bed elevation measurements are considered to be independent. Therefore, to remove local correlation bias, the Level 2 data were down-sampled, considering every second data point (corresponding to a ∼ 30 m along-track spacing). Each flight track was then divided into 10 km along-track profile windows, as shown in the examples in Fig. 1a. The windows overlap with a sample spacing of 1 km, with the centre of each window defined to be the point to which H and the roughness parameters are geolocated. This "moving window" approach was employed as it enables greater continuity in the estimates for H.
Prior to estimating H, ξ(L) and ν(Δx) were computed following Eqs. (1) and (2) respectively. These calculations used the "interleaving" sampling method described in Shepard et al. (2001), which enables all of the data points to be sampled effectively. The windowing method is similar to that described in Orosei et al. (2003) for the self-affine characterisation of Martian topography, where a non-overlapping 30 km window was assumed. The choice of 10 km for the profile window and 1 km for the effective resolution represents a good trade-off between resolution and the smoothness of the derived data fields.
In this study we are interested in calculating H at the length scale of the Fresnel zone (∼ 100 m), since this enables the most accurate parameterisation of the radar scattering model described in Sect. 4. Additionally, due to the break-point transitions that occur at larger length scales, the focus on smaller length scales is a robust approach to calculating H (Shepard et al., 2001). For the data we consider, the lower bounds of the horizontal length scales are ∼ 90 m for ξ(L) (since three elevation measurements are the minimum required to calculate ξ(L) using Eq. 1) and ∼ 30 m for ν(Δx). ν(Δx) therefore better enables the estimation of H at smaller length scales, and we primarily focused on the deviogram method (Fig. 2b). Additionally, as suggested in Fig. 2, the relationships for ν(Δx) are, in general, significantly smoother than those for ξ(L).
The upper length scales in the deviogram and variogram were set to Δx = 1 km and L = 1 km respectively, which follows from the recommendation by Shepard et al. (2001) that at least 10 independent sections of track are used in the calculations. As shown in Fig. 2, the gradients (H) were calculated using the first five data points (which, for the deviogram, span the range Δx ∼ 30-150 m). Self-affine scaling behaviour often extends beyond these smaller length scales, and we estimated the break points for ξ(L) and ν(Δx) using a segmented linear regression procedure. Briefly, this involved firstly calculating the gradient (H) for the first five data points. Additional data points at increasing length scales were then added into each linear regression model, and the gradient was recalculated. Finally, break points in the linear relationship were identified by testing whether the new gradient exceeded a specified tolerance from the original estimate.
Determination of waveform abruptness from Level 1B data
The post-processing of the Level 1B data (analysis of the basal waveform) uses the procedure described in Jordan et al. (2016), which, in turn, is largely based upon Oswald and Gogineni (2008). Firstly, this involved performing an along-track average of the basal waveform, where adjacent basal waveforms are stacked about their peak power values and arithmetically averaged. This averaging approach is phase-incoherent and acts to smooth power fluctuations due to electromagnetic interference (Oswald and Gogineni, 2008). The size of the averaging window varies as a function of the Fresnel zone radius, and subsequently each along-track-averaged waveform corresponds to an approximately separately illuminated region of the glacier bed (see Jordan et al., 2016, for details). The degree of radar scattering is quantified using the waveform abruptness,

A = P_peak / P_agg,

where P_peak is the peak power of the bed echo and P_agg is the aggregated power, which is calculated by a discrete summation of the bed-echo power measurements in each depth-range bin. P_agg was introduced by Oswald and Gogineni (2008) since, based upon energy conservation arguments, it is argued to be more directly related to the predicted (specular) reflection coefficients than equivalent peak power values. In radar altimetry, the waveform abruptness is commonly called "pulse peakiness" (e.g. Peacock and Laxon, 2004; Zygmuntowska et al., 2013). Observed values of A range from ∼ 0.03 to 0.60, and in Sect. 4.3 we theoretically constrain the maximum value to be ∼ 0.65. Three examples of basal waveforms, along with their corresponding A values, are shown in Fig. 4. Higher A values are associated with specular reflections from smoother regions of the glacier bed (e.g. the blue waveform), whilst lower A values are associated with diffuse reflections from rougher regions (e.g. the green waveform) (Oswald and Gogineni, 2008).
The positions of the peak power were established by firstly using the Level 2 data picks and then applying a local re-tracker to centre on the peak power. When calculating the summation for P_agg (both fore and aft of the peak power, so as to best capture the energy contained in the echo envelope), a signal-to-noise-ratio threshold was implemented by testing for decay of the peak power to a specified percentage above the noise floor. Thresholds of 1, 2, and 5 % were considered, and 2 % was found to give the best coverage whilst excluding obvious anomalies. Due to this quality-filtering step there are therefore sometimes small gaps in the along-track A data.
As RES over ice employs a nadir-facing sounder, the scattering contribution toward the waveform abruptness is mainly from coherent reflection (as opposed to side-looking SAR instruments, for which the contribution would be mainly diffuse scattering). Whether coherent pre-processing (either coherent pre-summing or Doppler focusing) of the raw data acts to increase or decrease the value of A depends upon the exact character and roughness of the surface. As a first example, if the specular/nadir component of the echo is assumed to be coherent, whilst the diffuse/off-nadir component is assumed to be incoherent (e.g. Grima et al., 2014), then coherent processing would cause the specular component of the signal to increase with coherent gain but not the diffuse (incoherent) signal. Therefore the measured A value would decrease with gain. As a second example, if both the specular/nadir and diffuse/off-nadir components of the echo are assumed to be coherent (e.g. Schroeder et al., 2013, 2015), then for small SAR processing angles (coherent pre-summing) the waveform abruptness should be largely unaffected. However, for larger angles (exceeding the angle spanned by the specular component of the echo in the scattering function) the A value will decrease with coherent pre-processing.
Figure 4. Examples of bed-echo waveforms and their abruptness (pulse peakiness). Observed values for A range from ∼ 0.03 (associated with diffuse scattering) to ∼ 0.60 (associated with specular reflection). For the purpose of comparative plotting, the waveforms are normalised about their peak power values, with the sample bin of the peak power set to zero. The sample bin spacing corresponds to a depth-range spacing of ∼ 2.81 m in ice.
The basal waveform (and hence the calculated values of A) results from a superposition of along-track and cross-track energy (Young et al., 2016). Subsequently, the anisotropy of radar scattering (and inferences regarding the anisotropy of subglacial roughness) is not explicitly revealed by A. Hence, the studies of Oswald and Gogineni (2008, 2012) treat A as an isotropic parameter, and we follow this approach here.
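As an illustration of the abruptness calculation described above, the following Python sketch (a hypothetical implementation, not the CReSIS/OIB processing code) computes A = P_peak / P_agg for a single along-track-averaged bed-echo power profile, using a decay threshold above the noise floor analogous to the 2 % quality filter mentioned in the text. The toy waveforms and all names are illustrative.

```python
import numpy as np

def waveform_abruptness(power, noise_floor, threshold=0.02):
    """Return A = P_peak / P_agg, aggregating bins fore and aft of the peak
    until the echo decays to within `threshold` of the peak above the noise floor."""
    i_peak = int(np.argmax(power))
    p_peak = power[i_peak]
    cutoff = noise_floor + threshold * (p_peak - noise_floor)

    p_agg = p_peak
    for step in (1, -1):                       # walk aft, then fore, of the peak
        i = i_peak + step
        while 0 <= i < power.size and power[i] > cutoff:
            p_agg += power[i]
            i += step
    return p_peak / p_agg

# Toy example: a broad (diffuse) echo gives lower A than a narrow (specular) one.
bins = np.arange(200)
specular = 1e-12 + np.exp(-((bins - 100) / 1.5) ** 2)
diffuse = 1e-12 + 0.3 * np.exp(-np.abs(bins - 100) / 15.0)
print("A (specular-like) ~", round(waveform_abruptness(specular, 1e-12), 2))
print("A (diffuse-like)  ~", round(waveform_abruptness(diffuse, 1e-12), 2))
```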
4 Radar scattering model for self-affine roughness
Overview
The waveform abruptness has so far been discussed without reference to roughness statistics; here we make that connection using a self-affine radar scattering model. Radar scattering models for natural terrain fall into two different categories: "coherent", which incorporates deterministic phase interference, and "incoherent", which incorporates random phase interference (Ulaby et al., 1982; Campbell and Shepard, 2003; Grima et al., 2014). Coherent scattering models are applicable where the reflecting region is orientated nearly perpendicular to the incident pulse (the nadir regime) and the reflecting region is fairly smooth at the scale of the illuminating wavelength (Campbell and Shepard, 2003), which is normally assumed to be a good approximation for the RES of glacier beds (e.g. Peters et al., 2005; MacGregor et al., 2013; Schroeder et al., 2015). Volume (Mie) scattering is typically neglected in basal RES scattering analysis and would hypothetically require scatterer dimensions of the order of the radar wavelength (∼ 0.5 to 5 m, dependent on the bed dielectric and the radar system). Neglecting volume scattering is justified given the ∼ 10^−6 to 10^−3 m scale of water pore radii in typical bed materials (Nimmo, 2004). Moreover, even in the extreme case of planetary ice regoliths (which are colder than terrestrial ice and will therefore sustain larger heterogeneities), scatterer dimensions are ∼ 10^−3 to 10^−2 m and volume scattering losses are small (Aglyamov et al., 2017).
Below we describe and adapt a coherent scattering model, first developed for the nadir regime of planetary radar sounding measurements, which incorporates self-affine roughness statistics (Shepard and Campbell, 1999; Campbell and Shepard, 2003). The model is parameterised using the Hurst exponent values derived from the subglacial topography (Sect. 3.2) and thus enables a connection to be made between topographic roughness and radar scattering. Coherent scattering models can be used to model a decrease in specularly reflected power as a function of rms roughness (Berry, 1973; Peters et al., 2005), and this is the central aspect of the model that we focus upon here. Specifically, we show that, under assumptions of energy conservation, this power decrease can be used to theoretically predict the relationship between the Hurst exponent and the waveform abruptness.
Modelling the coherent power
The physical assumptions behind the self-affine scattering model are summarised in Shepard and Campbell (1999). The central assumption that differentiates the model from coherent stationary (H = 0) models (Berry, 1973; Peters et al., 2005; MacGregor et al., 2013; Grima et al., 2014; Schroeder et al., 2015) is that the rms height increases as a function of radius, r, about any given point, following the self-affine relationship

ξ(r) = (ν_λ / √2) (r / λ)^H,  (6)

where ν_λ = ν(Δx = λ) is the wavelength-scale rms deviation. Equation (6) assumes radial isotropy for H and ξ. Since we are focusing upon constraining the (near-) isotropic abruptness parameter, this is a justifiable approximation. The statistical distribution for ξ(r) is assumed to be Gaussian, which is similar to most H = 0 models (but with an additional radial dependence). Via ν_λ, the self-affine model is explicitly formulated with respect to the horizontal scale of the rms roughness. The radio wavelength of MCoRDS in ice is ∼ 0.87 m, and hence the wavelength-scale rms deviation is approximately equivalent to the metre-scale rms deviation. An unavoidable caveat to the parameterisation of the radar scattering model using Eq. (6) is that the H values derived from the topography (length scale ∼ 30-150 m) are extrapolated downwards to the wavelength scale.
An expression for the radar backscatter coefficient (radar cross section per unit area) is then derived by considering a phase variation, 4πξ(r)/λ, integrated across the Fresnel zone (Shepard and Campbell, 1999; Campbell and Shepard, 2003). For nadir reflection this yields the radar backscatter coefficient of Eq. (7), where r̄ = r/λ is the wavelength-scaled radius, r̄_max is the wavelength-scaled radius of the illuminated area (the Fresnel zone), and R_e is the reflection coefficient for the electric field (Campbell and Shepard, 2003). The coherent power, P, can then be obtained by dividing Eq. (7) by 4π^2 r̄_max^2 (a geometric factor which follows from the backscatter coefficient of a flat conducting plate; Ulaby et al., 1982) to obtain

P = |R_e|^2 (4 / r̄_max^4) [ ∫_0^r̄_max exp(−(1/2)(4πξ(r̄)/λ)^2) r̄ dr̄ ]^2.  (8)

For the case where H = 0, ξ(r) in Eq. (6) is independent of radius. It follows that ξ^2 = ν_λ^2 / 2 and that the exponent in Eq. (8) is also independent of radius, which gives

P = |R_e|^2 exp(−(4πξ/λ)^2).  (9)

Equation (9) is the same power decay formula as in coherent H = 0 models (Peters et al., 2005; MacGregor et al., 2013; Grima et al., 2014; Schroeder et al., 2015), where it is sometimes multiplied by a first-order Bessel function (which enables some of the incoherent energy contribution to be captured; MacGregor et al., 2013). Thus the stationary limit of the self-affine model is consistent with previous glacial basal scattering models. It is clear that the coherent power for the self-affine model, Eq. (8), has two roughness degrees of freedom, H and ν_λ, which can be conceptually related to the gradient and the intercept of the deviogram (Fig. 2). This contrasts with the stationary model, Eq. (9), which has one degree of freedom: ξ.
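To illustrate the behaviour of the coherent power, the sketch below numerically evaluates a normalised form of the self-affine model. The prefactors are an assumption, chosen only so that a perfectly smooth bed returns P = |R_e|^2 and so that the H = 0 limit recovers the standard specular decay exp(−(4πξ/λ)^2); they may differ from the exact constants of Eqs. (7)-(8), and all function names are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def coherent_power(H, nu_lambda, wavelength=0.87, r_bar_max=100.0, R_e=1.0):
    """Normalised coherent power for a self-affine interface (sketch of Eq. 8)."""
    def xi(r_bar):                               # Eq. (6): rms height versus radius
        return (nu_lambda / np.sqrt(2.0)) * r_bar ** H

    def integrand(r_bar):                        # field average of the phase term 4*pi*xi/lambda
        g = (4.0 * np.pi * xi(r_bar) / wavelength) ** 2
        return np.exp(-0.5 * g) * r_bar

    field, _ = quad(integrand, 0.0, r_bar_max, limit=200)
    field *= 2.0 / r_bar_max ** 2                # normalise so that field -> 1 when xi = 0
    return abs(R_e) ** 2 * field ** 2

# Stationary check: H = 0 should match exp(-(4*pi*xi/lambda)**2) with xi**2 = nu**2 / 2.
nu, lam = 0.05, 0.87
xi0 = nu / np.sqrt(2.0)
print(coherent_power(0.0, nu), np.exp(-(4 * np.pi * xi0 / lam) ** 2))

# Higher H suppresses the coherent return for the same wavelength-scale roughness.
for H in (0.2, 0.5, 0.8):
    print(H, coherent_power(H, nu_lambda=0.05))
```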
Predicted relationship between the Hurst exponent and waveform abruptness
The utility of the waveform abruptness in quantifying different degrees of scattering rests upon the assumption that the majority of the overall energy is contained within the echo envelope (Oswald and Gogineni, 2008). In other words, it is assumed that, for reflection from the same bulk material, the aggregated/integrated power from a rough interface (ν_λ > 0) is equivalent to the peak power from a given smooth interface, i.e. P_agg ≈ P(ν_λ = 0). This energy equivalence was demonstrated to hold well for the waveform processing procedure and the Greenland RES systems by Oswald and Gogineni (2008). It follows from this energy equivalence that the abruptness, A, can be expressed in terms of the coherent power, Eq. (8), as

A ≈ C P / P(ν_λ = 0),  (10)

where C is a proportionality constant that corresponds to the theoretical maximum abruptness value, which occurs when the radar pulse is specularly reflected and P_agg = P_peak. For a perfectly specular reflection the pulse has the shape of the compressed chirp (the absolute value of a sinc function, with width determined by the signal bandwidth). If the depth-range sample spacing of the waveform (Fig. 4) were the same as the depth-range resolution, then C would be near unity. However, C can be estimated from the ratio of the sample spacing (∼ 2.8 m) to the range resolution (∼ 4.3 m) to give C ∼ 0.65. Finally, substituting Eq. (8) into Eq. (10) gives

A ≈ C (4 / r̄_max^4) [ ∫_0^r̄_max exp(−4π^2 (ν_λ/λ)^2 r̄^(2H)) r̄ dr̄ ]^2.  (11)

As is the case for P in Eq. (8), A in Eq. (11) has two roughness degrees of freedom: H and ν_λ. Shepard and Campbell (1999) note that the primary dependence for P (and hence A) is upon H, with a weaker secondary dependence upon ν_λ. In order to illustrate this dependency, we consider firstly the relationship between A and H for fixed ν_λ (Fig. 5a) and secondly the relationship between A and ν_λ for fixed H (Fig. 5b).
Figure 5a demonstrates that higher values of ν_λ (the black curve) result in negligible A for all but the lowest values of H. Intermediate values of ν_λ (the red and blue curves) exhibit a sharp transition from higher to lower values of A as H increases. Low ν_λ (the green curve) gives high A for all H. Figure 5b demonstrates a monotonic decrease in A with ν_λ for each value of H, with the decay length decreasing rapidly with increasing H. It is important to note that the predictions of the self-affine radar scattering model are consistent with the specular RES scattering signature that we would expect from electrically deep subglacial lakes. Under the self-affine roughness framework, a large, geometrically flat feature such as a lake would have negligible values of H and ν_λ. This scenario occurs for the low-H limit of the green curve in Fig. 5a, where the predicted values for A are ∼ 0.65 (corresponding to a perfectly specular reflection).
The physical explanation for the strong dependence of the coherent power upon H, and for the relationships which we observe in Fig. 5, is discussed by Shepard and Campbell (1999) and Campbell and Shepard (2003). It relates to the fact that significant coherent returns can only occur from annular regions where ξ(r) < λ/8 (the Rayleigh criterion). It follows from Eq. (6) that high values of H lead to a rapid increase in roughness with radius, which quickly exceeds this threshold. Subsequently, for high-H interfaces, the roughness at the wavelength scale, ν_λ, must be a couple of orders of magnitude smaller than the Rayleigh criterion to enable significant coherent returns (i.e. non-negligible A). The curves in Fig. 5 assume r̄_max = 100 (corresponding to a Fresnel zone radius of ∼ 115 m for the ice wavelength of ∼ 0.87 m).
In general, the relationships in Fig. 5 are insensitive to this choice of radius. This is because the radii of the coherent annular regions are typically significantly smaller than the Fresnel zone and thus set the dominant length scale for the integral in Eq. (11).
Results
Firstly, we describe maps of the rms deviation and Hurst exponent (topographic-scale roughness) and of the waveform abruptness (radar scattering) in northern Greenland (Sect. 5.1). In this analysis we compare the RES-derived data with the Greenland bed digital elevation model (DEM) (Bamber et al., 2013a) and the predicted basal thermal state (MacGregor et al., 2016). Secondly, by comparing the theoretical predictions of the self-affine radar scattering model with the observed relationship between the Hurst exponent and the waveform abruptness, we quantitatively assess topographic control upon radar scattering (Sect. 5.2). Thirdly, we perform a statistical analysis of the RES-derived data in predicted thawed and frozen regions of the glacier bed (Sect. 5.3), which enables us to assess the validity of the basal water discrimination algorithm of Oswald and Gogineni (2008, 2012). Finally, we present uncertainty estimates for the RES-derived data (Sect. 5.4).
Maps for self-affine roughness and radar scattering in northern Greenland
In Fig. 6 flight-track maps of the RES-derived roughness and scattering data are compared with the Greenland bed DEM (Bamber et al., 2013a) and the predicted basal thermal state (Fig. 11 in MacGregor et al., 2016). The flight-track maps all demonstrate a high degree of spatial structure, with some notable correlations present (both between each other and with the DEM). There is a clear inverse relationship between the rms deviation, ν (shown at two different length scales in Fig. 6a and b), and the waveform abruptness, A (Fig. 6c), with higher abruptness (specular reflections) present in smoother regions of the ice-sheet bed and lower abruptness (diffuse scattering) present in rougher regions. For example, smoother regions (lower ν, higher A) occur for flight tracks in the region inland from the settlement of Qaanaaq and around Camp Century (including the green profile, Fig. 1d), around the NorthGRIP ice core, and in a region ∼ 150 km ENE of the NEEM ice core. Whilst these smoother regions are at a range of bed elevations (from ∼ 800 m NE of Qaanaaq to around sea level in the interior), they are all spatially correlated with flatter bed topography (Fig. 6e). Correspondingly, many rougher regions (higher ν, lower A) are spatially correlated with more complex topography - e.g. the region of the ice sheet inland from the Melville Bugt coast (including the red profile, Fig. 1b). However, some rougher regions of the bed have a less obvious correlation with higher contour gradients - e.g. the flatter regions inland from the Humboldt glacier. Pronounced spatial variation in the Hurst exponent, H, is evident in Fig. 6d. H also has an inverse relationship with A and spatially correlates with the bed topography in a similar manner to ν. In other words, lower H is associated with higher A and flatter regions of the bed - e.g. near Camp Century - whilst higher H is associated with lower A and generally more complex bed topography - e.g. inland from the Melville Bugt coast and inland from Ryder glacier (including the black profile, Fig. 1a).
In Sect. 5.2 a quantitative assessment of this relationship is made using the radar scattering model. The simple notion that, at the topographic scale, rougher regions of the bed correspond to higher H can be related back to the power-law scaling relationship in the deviogram (Fig. 2b). The length scales for the rms deviation maps, ν(Δx = 30 m) in Fig. 6a and ν(Δx = 150 m) in Fig. 6b, are chosen as they are the lower and upper bounds in the deviogram calculation for H. It is notable that, despite the clear spatial variation in H in Fig. 6d, the overall spatial distributions for ν(Δx = 30 m) and ν(Δx = 150 m) are remarkably similar. Thus, from a purely visual inspection of ν at different length scales, the pronounced spatial variation in H is not immediately apparent.
The basal thermal state prediction by MacGregor et al. (2016) (Fig. 6f) represents an up-to-date best estimate for the GrIS at a 5 km resolution. It is based upon a trinary classification: likely thawed/above the pressure melting point (red), likely frozen/below the pressure melting point (blue), and uncertain (grey). The mask was determined using four independent methods: thermomechanical modelling of basal temperature, basal melting inferred from radiostratigraphy, surface velocity, and surface texture. The mask is therefore independent of our RES-derived data fields. There are some obvious correlations between the basal thermal state prediction and the RES-derived roughness and scattering data. For example, many predicted thawed regions toward the margins - e.g. the region of the ice sheet inland from the Melville Bugt coast - correspond to rougher terrain (higher H and ν) and diffuse scattering (lower A). However, there are regions of predicted thaw that demonstrate the opposite behaviour (lower H and ν and higher A) - for example, the two interior regions previously identified as smooth around the NorthGRIP ice core and the region ENE of NEEM. The scattering signature of predicted thawed regions is therefore non-distinct and can be either specular or diffuse. Predicted frozen regions tend to be smoother with specular reflections (higher A), although it is clear that spatial variation is present, with some regions exhibiting more diffuse scattering (lower A). Section 5.3 provides a more detailed statistical analysis.
There are some clear discontinuities in the flight-track maps for ν and H in Fig. 6. These can be explained by either roughness anisotropy or the self-affine terrain model breaking down in certain regions (e.g. at a sharp terrain discontinuity such as a subglacial cliff). By contrast, the map for A is smoother, which is consistent with its interpretation as an isotropic scattering parameter.
Statistics for topographic control upon radar scattering and comparison with the radar scattering model
Before we consider a quantitative comparison between the predictions of the radar scattering model and the RES-derived data, we first summarise the statistics for the Hurst exponent, H. The total frequency distribution for H, corresponding to the flight-track data in Fig. 6d, is shown in Fig. 7a.
The distribution is divided into three categories: (i) H > 0.75 ("high" H), (ii) 0.5 < H ≤ 0.75 ("medium" H), and (iii) H ≤ 0.5 ("low" H), which we later use to compare with the radar scattering model predictions. These categories correspond to approximately 30, 50, and 20 % of the total data respectively. Approximately 0.1 % of the H estimates are > 1, and none of the H estimates are < 0, representing near-ubiquitous self-affine scaling behaviour (0 < H < 1). An overall negative skew of the distribution of H is observed, with a mean value of 0.65, indicating that the majority of the subglacial terrain along the flight tracks lies between the Brownian (H = 0.5) and self-similar (H = 1) scaling regimes. The spatial coverage of the radar flight tracks in Fig. 6d is, however, more comprehensive in regions of higher H. Thus the mean value and skew of H in Fig. 7a are likely, respectively, overestimates and underestimates of the true (equal-area) averaged values for the region of the northern GrIS in Fig. 3.
The self-affine coherent scattering model (Sect. 4) predicts that there are two roughness degrees of freedom that control A: H (the primary control) and ν_λ (the secondary control). At the metre scale, ν_λ is significantly smaller than the along-track resolution (∼ 30 m) and therefore cannot be observed directly. Additionally, given the theoretically predicted primary dependence of A upon H, a natural starting point is to compare with the observed relationship between A and H (Fig. 6). Based upon the assumption that ν_λ varies spatially, a statistically distributed inverse relationship between H and A is predicted, which corresponds to the family of predicted curves in H-A space in Fig. 5. This approach assumes a downward extrapolation of H from the topographic scale to the wavelength scale in the radar scattering model.
In order to test this prediction, we considered the statistics of three separate A distributions, one for each H category, which are shown for high H in Fig. 7b, medium H in Fig. 7c, and low H in Fig. 7d. A nearest-neighbour interpolation was used to pair each A value (∼ 100-150 m along-track spacing) with each H value (1 km along-track spacing). The lowest mean value, smallest variance, and strongest positive skew are observed for the high-H category. This supports the general prediction in Fig. 5 that higher A values (specular reflections) are suppressed in regions of higher H, with lower A values (diffuse scattering) being more probable. The highest mean value, greatest variance, and weakest positive skew are observed for the low-H category. Again, this supports the prediction in Fig. 5 that A is less constrained in regions of lower H, with a tendency toward higher values (specular reflections). As would be expected, the A-distribution statistics for the medium-H category lie between those of the high-H and low-H categories, with intermediate mean values, variance, and skewness. Finally, the observed values of A in Fig. 7 range from ∼ 0.03 to 0.60, which is in agreement with the theoretically constrained maximum value of 0.65.
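The pairing and per-category statistics described above can be reproduced schematically as follows; the along-track positions, spacings, and synthetic H and A values are hypothetical stand-ins for the OIB data, so only the workflow (nearest-neighbour pairing, then mean, variance, and skewness per H category) reflects the text.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
x_H = np.arange(0, 100_000, 1_000.0)            # H geolocated every 1 km along track
x_A = np.arange(0, 100_000, 125.0)              # A sampled every ~125 m along track
H = np.clip(0.65 + 0.2 * rng.standard_normal(x_H.size), 0.0, 1.0)
A = np.clip(0.4 - 0.35 * np.interp(x_A, x_H, H)
            + 0.05 * rng.standard_normal(x_A.size), 0.03, 0.65)

# Nearest-neighbour pairing of each A sample with an H estimate.
idx = np.argmin(np.abs(x_A[:, None] - x_H[None, :]), axis=1)
H_paired = H[idx]

for name, mask in [("high H (>0.75)", H_paired > 0.75),
                   ("medium H (0.5-0.75)", (H_paired > 0.5) & (H_paired <= 0.75)),
                   ("low H (<=0.5)", H_paired <= 0.5)]:
    a = A[mask]
    print(name, "mean:", round(a.mean(), 3), "var:", round(a.var(), 4), "skew:", round(skew(a), 2))
```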
Statistics in thawed and frozen regions
Here we summarise the statistics of the RES-derived roughness and scattering data in predicted thawed and frozen regions of the glacier bed, with the overall purpose of testing the basal water discrimination algorithm of Oswald and Gogineni (2008, 2012). Conceptually, their approach assumes that water in thawed regions has a similar RES signature to deep subglacial lakes, which exhibit brighter and more specular reflections than surrounding regions (e.g. Oswald and Robin, 1973; Gorman and Siegert, 1999; Palmer et al., 2013). In their algorithm, wet regions are discriminated if (i) the relative bed reflectivity is above a threshold (using an attenuation model in which the attenuation rate has an inverse relationship with surface elevation) and (ii) the abruptness is also above a threshold (around 0.3). Thus, in their approach, high abruptness (specular reflection) is a necessary, but not sufficient, criterion for identifying basal water. A further feature of their approach is that spatial continuity for water is imposed, i.e. only larger-scale regions (∼ 100s of km² and upwards) are considered.
The distributions for all RES-derived data exhibit pronounced statistical differences between thawed and frozen regions (Fig. 8). The mean value of H in thawed regions is 0.74 with a strong negative skew (Fig. 8a), whereas the mean value of H in frozen regions is 0.54 with a weak negative skew (Fig. 8b). The mean value of ν(Δx = 30 m) in thawed regions is 6.36 m, which is over double the mean value of 2.80 m in frozen regions. A qualitatively similar distinction between thawed and frozen regions is also present for ν(Δx = 150 m), with a mean value of 21.7 m in thawed regions and 7.2 m in frozen regions (not shown). The thawed distribution for A is similar to that of the high-H category in Fig. 7b, with a mean A value of 0.165 and a strong positive skew. The frozen distribution is similar to that of the low-H category in Fig. 7d, with a mean A value of 0.264 and a weak positive skew. These statistics demonstrate a contradiction with the basal water discrimination algorithm of Oswald and Gogineni (2008, 2012). Lower abruptness (diffuse scattering) is more common in thawed regions, where basal water is likely to be present. Moreover, the necessary high-abruptness (specular reflection) condition for water is generally not satisfied (particularly at the larger spatial scales that were considered by Oswald and Gogineni (2008, 2012) when mapping basal water).
Uncertainty and consistency of RES-derived data
In RES data analysis, cross-over distributions at flight-track intersections can give an indication of uncertainty based upon internal consistency (e.g. MacGregor et al., 2015; Jordan et al., 2016). However, due to the anisotropy in Fig. 6d, cross-over analysis for H(ν(Δx)) cannot be applied directly. Hence repeat estimates were made using the variogram to calculate H(ξ(L)) (i.e. calculating H using the rms height). The map for H(ξ(L)) (not shown) has a similar spatial distribution to Fig. 6d but with greater high-frequency noise apparent. Differencing the estimates as H(ν(Δx)) − H(ξ(L)) and performing cross-over analysis gives a mean bias of −0.026 and a standard deviation of 0.10 (10 % of the parameter range). The small mean bias is potentially explained by the variogram estimates being at a slightly larger length scale (L ∼ 90-210 m). Additional cross-over analysis using different profile window sizes (e.g. 15 km) confirms that 0.10 serves as a reasonable estimate for the uncertainty of H.
Since A is assumed to be isotropic, its uncertainty can be estimated via cross-over analysis at flight-track intersections. This gives a cross-over standard deviation of ∼ 0.05 (again ∼ 10 % of the parameter range).
As part of the analysis we also considered estimation of the break-point transitions for H(ξ(L)) and H(ν(Δx)) using the segmented linear regression procedure described in Sect. 3.2. The exact values of the break points depend upon how strict the stopping criterion is, so here we just discuss some general trends. Firstly, the self-affine scaling relationships often extend over a much greater length scale than the upper length scale used in the calculation of H (often over 500 m, as occurs in Fig. 2). Secondly, the break points for H(ν(Δx)) generally occur at greater length scales than those for H(ξ(L)). Thirdly, the break points for both H(ν(Δx)) and H(ξ(L)) tend to be greater toward the ice-sheet margins, where H is higher.
Discussion
Our results demonstrate that self-affine scaling behaviour is a near-ubiquitous property of the subglacial topography of northern Greenland. Moreover, there is both spatial structure and variability in the Hurst exponent, which can range from near-self-similar (H ≈ 1) to sub-Brownian (H < 0.5). The Hurst exponent is valuable as it provides a way to integrate maps of topographic-scale roughness metrics (e.g. rms height and rms deviation) with maps of radar scattering parameters (e.g. the waveform abruptness), which provide finer-scale roughness information. Notably, theoretical predictions and observations both demonstrate that higher values of the abruptness (specular reflections) are suppressed in rougher regions of the bed with a higher Hurst exponent. Additionally, extended continuous regions of higher abruptness are generally limited to smoother regions with a lower Hurst exponent. This finding implies that maps of radar scattering information - including the waveform abruptness parameter in this study and in Oswald and Gogineni (2008, 2012), and the specularity content in Schroeder et al. (2013) and Young et al. (2016) - will benefit from analysis that incorporates self-affine topographic control.
The Hurst exponent provides information about the relationship that exists between vertical roughness and horizontal length scale. Whilst it is related to the slope of the roughness power spectrum, past spectral analysis of glaciological terrain tends to obscure this information (since an integrated "total roughness" metric is typically used) (Taylor et al., 2004; Siegert et al., 2005; Bingham and Siegert, 2009; Li et al., 2010; Rippin, 2013). Consequently, the Hurst exponent represents new subglacial roughness information that could potentially be utilised much more widely than in our current application of constraining radar scattering. For example, planetary scientists have previously employed the Hurst exponent in a geostatistical classification of Martian terrain (Orosei et al., 2003). Interestingly, the spatial distribution of the Hurst exponent for the Martian surface has a similar level of spatial variation and coherence to what we observe for glacial terrain. Additionally, the distribution of H for Martian terrain is skewed toward higher, self-similar values, with near-continuous regions of lower H limited to mid-latitude plains. For Greenland, this self-affine statistical landscape classification could be integrated with existing knowledge of geology (e.g. Henriksen, 2008) and of larger-scale landscape features, including subglacial drainage networks (Cooper et al., 2016; Chu et al., 2016; Livingstone et al., 2017) and palaeofluvial canyons (such as the "mega canyon" feature observed in Fig. 6e, which has Petermann glacier as its modern-day terminus) (Bamber et al., 2013b).
The Hurst exponent has previously been shown to play a dynamical role in the flow resistance of alluvial channels (Robert, 1988). Whilst basal sliding is clearly a different physical phenomenon - modulated by enhanced plastic flow and regelation (Weertman, 1957; Nye, 1970; Hubbard et al., 2000; Fowler, 2011) - it is possible that the Hurst exponent may provide a useful radar-derived parameter for our understanding of geometric control upon this process. In Sect. 5.1 and 5.3 we observed that toward the ice-sheet margins, such as inland from the Melville Bugt coast, predicted thawed regions are characterised by higher (often near-self-similar) values of H. One could therefore speculate that the persistent behaviour associated with high-H interfaces (neighbouring points follow a similar elevation trend; Sect. 2.3) could act to promote basal sliding. However, as is widely acknowledged, attributing a direct link between subglacial roughness and contemporary ice dynamics is a complex topic (Siegert et al., 2005; Bingham and Siegert, 2009; Rippin et al., 2014). Therefore, as with other measures of basal roughness, the spatial variation in the Hurst exponent is likely also to originate from different glaciological processes at a variety of spatial scales, including erosion and deposition. Additionally, we recommend that future work investigating the connection between the Hurst exponent and glaciological processes should discuss it with reference to anisotropy and flow direction.
The statistical analysis of the waveform abruptness in predicted frozen and thawed regions (Sect. 5.3) demonstrates that, overall, very different RES scattering signatures are present than were assumed by Oswald and Gogineni (2008, 2012). Firstly, the majority of the predicted thawed regions have lower abruptness (diffuse scattering). In their algorithm, this would correspond to false-negative detection of basal water (since the necessary high-abruptness condition is not satisfied). Secondly, high abruptness is often present in predicted frozen regions, many of which are interpreted as wet by Oswald and Gogineni (2008, 2012) (e.g. some of the region of higher abruptness near to the Camp Century ice core, which, at high bed elevation, is likely to correspond to harder bedrock). It is, however, important to note that some of the smoother regions discriminated as wet by Oswald and Gogineni (2008, 2012) are consistent with the basal thermal state prediction by MacGregor et al. (2016) (e.g. near NorthGRIP). Radar bed reflectivity was also used by Oswald and Gogineni (2008, 2012) in their discrimination of thawed beds. However, since these original studies, the role that uncertainty in radar attenuation plays in biasing the spatial distribution of radar bed reflectivity has become much better understood (Matsuoka, 2011; MacGregor et al., 2012; Jordan et al., 2016). For example, if an attenuation model has a constant systematic bias in attenuation rate, then there will be an ice-thickness-correlated bias in estimated bed reflectivity (Jordan et al., 2016). Thus, spatially correlated bias in the attenuation model is one explanation for why elevated reflectivity was observed in some predicted frozen regions. Additionally, geological transitions between less-reflective sediment and more-reflective bedrock (see Bogorodsky et al. (1983) and Peters et al. (2005) for reflectivity values) could also play a role in complicating the analysis.
Subglacial hydrological systems are understood to produce more complex and variable scattering signatures than the specular lake-like reflection assumed by Oswald and Gogineni (2008, 2012). For example, concentrated hydrological channels act as an anisotropic rough surface capable of orientation-dependent scattering (Schroeder et al., 2013; Young et al., 2016). Additionally, due to scattering from the lake bottom and related interference effects, shallower (depth < 10 m) subglacial lakes can produce diffuse scattering (Gorman and Siegert, 1999). Whilst the majority of the thawed regions have lower abruptness, there are some smaller, localised patches of higher abruptness present in Fig. 6c. These regions are consistent with the presence of deep lake-like water (in the sense that specular reflections are observed in a region predicted to be above the pressure melting point). However, it is not possible to confirm this without additional analysis. This is because the frozen abruptness distribution in Fig. 8f indicates that basal water is not required to produce highly specular reflections, and thus smooth regions of bedrock may be responsible for the high abruptness.
The presence of at least some localised patches of high abruptness in thawed regions is consistent with the recent discovery of two small subglacial lakes in north-western Greenland, ∼ 8 and ∼ 10 km² in extent (Palmer et al., 2013). More generally, however, the relative rarity of high abruptness in thawed regions is in agreement with hydrological potential analysis (Livingstone et al., 2013), which predicts that deep subglacial lakes are both rare and small in the north-west of the GrIS. Instead, channelised drainage networks - such as the system recently identified beneath Humboldt glacier (Livingstone et al., 2017) - are likely to be common in thawed regions (and are consistent with the generally diffuse scattering signature that we observe).
The anisotropy of the Hurst exponent was not considered in the radar scattering model, which is justifiable because we were interested in understanding how the Hurst exponent relates to the (near-) isotropic waveform abruptness. However, in certain regions of the ice sheets, basal radar scattering is known to be highly anisotropic, as revealed by maps of the specularity content for Thwaites glacier (Schroeder et al., 2013) and Byrd glacier (Young et al., 2016). Thus a clear direction for future research would be to modify the self-affine radar scattering model (Sect. 4) to take into account anisotropy in H and then to compare this model with maps of the specularity content. The pronounced spatial heterogeneity in H implies that estimation of roughness statistics from H = 0 radar scattering models (Eq. 9; Berry, 1973; Ulaby et al., 1982; Peters et al., 2005; Grima et al., 2014) may give erroneous results, particularly when comparing the overall spatial distribution between regions with different H values. Additionally, the radar scattering model is formulated with respect to wavelength-scale (approximately metre-scale) roughness and thus provides a way to estimate metre-scale roughness (i.e. given A and H, obtain an estimate for ν_λ in accordance with the curves in Fig. 5). This could have important glaciological consequences, since the physical processes which influence basal sliding operate at the metre scale (Weertman, 1957; Nye, 1970; Hubbard et al., 2000; Fowler, 2011).
Finally, geostatistically based interpolation methods which employ aspects of self-affine statistics (Goff and Jordan, 1988) have found recent application in generating synthetic subglacial topography (Goff et al., 2014). The self-affine characterisation of subglacial topography described here informs such techniques and, in turn, could be used to inform the ice-sheet-wide interpolation of future Greenland (Bamber et al., 2013a; Morlighem et al., 2014) and Antarctic (Fretwell et al., 2013) subglacial digital elevation models.
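As a sketch of the metre-scale roughness inversion suggested here, the snippet below solves A = C P(H, ν_λ) / P(H, 0) for ν_λ by root finding, reusing the normalised coherent-power form assumed in the earlier sketch (Sect. 4). The observed A and H values and the search bracket are hypothetical; C ≈ 0.65 and the ∼ 0.87 m ice wavelength follow the text, and the recovered ν_λ inherits the approximations of that normalised model.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

LAM, RBAR_MAX, C = 0.87, 100.0, 0.65            # ice wavelength (m), Fresnel-zone radius (wavelengths), max abruptness

def coherent_power(H, nu_lambda):
    """Normalised coherent power (same assumed form as the Sect. 4 sketch); P(H, 0) = 1."""
    integrand = lambda r: np.exp(-0.5 * (4 * np.pi * (nu_lambda / np.sqrt(2)) * r ** H / LAM) ** 2) * r
    field, _ = quad(integrand, 0.0, RBAR_MAX, limit=200)
    return (2.0 * field / RBAR_MAX ** 2) ** 2

def invert_nu(A_obs, H, nu_max=1.0):
    """Return the metre-scale rms deviation consistent with an observed abruptness and Hurst exponent."""
    f = lambda nu: C * coherent_power(H, nu) - A_obs
    return brentq(f, 1e-6, nu_max)

print(round(invert_nu(A_obs=0.3, H=0.5), 3), "m")   # hypothetical observation
```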
Summary and conclusions
In this study we used recent OIB RES data to demonstrate that subglacial roughness in northern Greenland exhibits self-affine scaling behaviour, with pronounced spatial variation in the Hurst (roughness power-law) exponent. We modified a planetary radar scattering model to predict how the Hurst exponent exerts control upon the degree of scattering, which we parameterised using the waveform abruptness. We then demonstrated agreement between the predictions of the radar scattering model and the statistically distributed inverse relationship that is observed between the Hurst exponent and the waveform abruptness. This enables us to conclude that self-affine statistics provide a valuable framework for understanding the topographic control which influences ice-penetrating radar scattering from glacier beds. Self-affine statistics also provide a generalised model for subglacial terrain and in the future could be used to further explore the relationship between bed properties, ice-sheet dynamics, and landscape formation.
An additional glaciological motivation behind our study was to establish whether the waveform abruptness could be used to aid the discrimination of basal water (and to test the prior assumption that subglacial hydrological systems in Greenland produce abrupt bed echoes; Oswald and Gogineni, 2008, 2012). To do this we compared our RES-derived data fields with a recent basal thermal state prediction for northern Greenland (MacGregor et al., 2016). The analysis demonstrated that thawed regions of the glacier bed have statistically lower values of the waveform abruptness than frozen regions (more diffuse scattering). The simple explanation is that many thawed regions are relatively rough with a higher Hurst exponent, whilst many frozen regions are relatively smooth with a lower Hurst exponent. This finding should not be viewed as a new RES diagnostic for basal water (since deep subglacial lakes do have the specular signature proposed by Oswald and Gogineni, 2008, 2012). However, it indicates that the diagnostic of Oswald and Gogineni (2008, 2012) is likely to yield both false negatives (failing to identify water in rougher regions and where hydrological systems have more complex scattering signatures) and false positives (identifying some smoother frozen regions as wet).
Figure 1. Example radargrams (top panel) and 10 km bed elevation profiles (bottom panel) for subglacial terrain with different Hurst exponent, H: (a) H ≈ 0.9 (near self-similar), (b) H ≈ 0.7 (between Brownian and self-similar), (c) H ≈ 0.5 (Brownian), and (d) H ≈ 0.3 (sub-Brownian). The locations of the profiles are shown in Fig. 3. Evident in the radargrams are the surface reflection (pink line), the bed reflection (red line), and reflections from internal layers in the ice. The bed elevation profiles are linearly detrended about zero with a horizontal resolution of ∼ 30 m. The horizontal-vertical aspect ratio of the bottom panels differs between (a), (b) and (c), (d) by a factor of ∼ 10.
Figure 2. (a) Variogram for rms height, ξ, versus profile length, L (log-log scale). (b) Deviogram for rms deviation, ν, versus horizontal lag, Δx (log-log scale). The plots correspond to the subglacial terrain profiles in Fig. 1. The Hurst exponent is estimated from the linear gradient of the first five data points (indicated by dashed lines). These space-domain plots are (approximate) equivalents of frequency-domain roughness power spectra, and smaller length scales correspond to higher frequencies.
Figure 3. Data coverage map for OIB flight tracks and the region of interest. The locations of the Camp Century, NEEM, and NorthGRIP ice cores are indicated, along with the terrain profile sections in Fig. 1.
Figure 5. Parametric dependence of the self-affine radar scattering model. (a) Abruptness, A, as a function of the Hurst exponent, H, for sections of constant wavelength-scale rms deviation, ν_λ. (b) A as a function of ν_λ for sections of constant H. The plots illustrate the primary dependence of A upon H and the secondary dependence of A upon ν_λ. High A is suppressed for high H except in the case of exceptionally small ν_λ.
Figure 7. Relationship between the Hurst exponent, H, and the waveform abruptness, A (corresponding to the flight-track data in Fig. 6). (a) Total distribution of the Hurst exponent. (b) Abruptness distribution for high H (H > 0.75). (c) Abruptness distribution for medium H (0.5 < H ≤ 0.75). (d) Abruptness distribution for low H (H ≤ 0.5). The observed distributions in (b), (c), and (d) confirm the theoretical prediction of the self-affine radar scattering model that a statistically distributed inverse relationship exists between H and A.
Figure 8. Distributions from basal RES analysis in thawed and frozen regions of the northern GrIS (corresponding to the flight-track data in Fig. 6): (a) Hurst exponent, H, in thawed regions; (b) H in frozen regions; (c) rms deviation, ν(Δx = 30 m), in thawed regions; (d) ν(Δx = 30 m) in frozen regions; (e) abruptness, A, in thawed regions; (f) A in frozen regions. The data subsets correspond to the red (thawed) and blue (frozen) regions of the map in Fig. 6f.
14,238
2017-05-24T00:00:00.000
[ "Environmental Science", "Geology", "Physics" ]
Emptiness and the Eight Consciousnesses: Toward a Deeper Understanding of Intuitive Judgment
This paper empirically investigates whether emptiness (according to the Mādhyamaka school) has a positive association with the intuitive judgment that results from the eight consciousnesses (according to the Vijñānavāda school). A questionnaire-based quantitative approach was used to collect data from 157 professional spirit mediums. The results show that emptiness is significantly correlated with pure brightness and that pure brightness, in turn, is significantly associated with intuitive judgment. Therefore, this paper argues that emptiness can improve or enhance the eight consciousnesses in making moral decisions. Finally, regarding the gap between moral judgment and action, this research provides new insight by asserting that this gap must have existed a priori.
Background
The Mahāyāna stream of Chinese Buddhism often distinguishes between the Mādhyamaka and the Vijñānavāda, taking the former as exclusively acknowledging the emptiness (śūnyatā) of all phenomena (Kaag, 2012) and the latter as concentrating single-mindedly on the ideation (vijñaptimātra) of the universe (Nedu, 2015). The central concepts investigated by these schools are often assumed to be in conflict (Duckworth, 2014), since the idea of emptiness is not at all present in the teachings of the Vijñānavāda school and idealism is absent from those of the Mādhyamaka (King, 1994). Although they are distinct, these two discourses are also closely intertwined, for Vijñānavāda practitioners analyze and describe how human experience is constructed by the mind in order to serve, pragmatically, the abstruse aims of the Mādhyamaka: the Prajñāpāramitā state that people undergo in attaining emptiness and freedom from cognitive obscurations and emotional obsessions (Waldron, 2006). However, one must also recognize that there was an essential development in the hermeneutics of the doctrine of emptiness in Vijñānavāda (King, 1994). This development raised critical questions about the effect of emptiness in Vijñānavāda. Until recently, however, empirical research on the role of emptiness in Vijñānavāda has been almost non-existent.
According to the Vijñānavāda school, conceptual knowledge appears at the level of mental consciousness (the sixth consciousness), which, in its turn, is determined by manas consciousness (the seventh consciousness). Because manas consciousness is responsible for the misconception of the individual self, this error will also characterize any form of conceptual knowledge that appears at the level of mental consciousness. Thus, the theory of the conditioning of decision-making states that this conditioning is affected by two factors: manas consciousness of individuality and the seeds of âlayavijnâna (the eighth consciousness) (Nedu, 2015).
Objectives
This paper empirically investigates whether emptiness has a positive association with the intuition that results from the eight consciousnesses (viz. Vijñānavāda). This analysis can provide much edification for inducing morally intuitive decisions. The task is especially challenging because of the unfamiliarity of most readers of English with Buddhist philosophy. The research results may shift modes of thought and lead to profound differences in how we make decisions from intuition. This paper addresses the concepts of Vijñānavāda before describing the empirical experiments.
Emptiness (Śūnyatā; Sanskrit)
Mādhyamaka and Yogācāra (called Vijñānavāda in China) represent two schools of Mahāyāna Buddhism. During the seventh century, after the famous pilgrim Tripitaka Master Xuanzang (玄藏, AD 602-664) introduced a new corpus of canonical Buddhist texts, Mādhyamaka and Yogācāra were considered doctrinally contradictory to each other in the polemic circumstance of Emptiness-Existence (i.e., kongyou 空有) and the Buddha nature (Lee, 2016). A comprehensive explication of the notion of emptiness, as found in the philosophical literature of the Mādhyamaka school, provides a doctrinal key to unlock the deep meanings of the Prajñāpāramitā sutras (King, 1994). In the Heart Sutra (心經), the phrase Prajna-paramita implies that the five aggregates of being are all empty (照見五蘊皆空), and emptiness here signifies that nothing has an independent "ego-nature" or "ego-appearance" of its own (Cheng Kuan, 2015). Although Yogācārins maintained that there was something given in experience, namely, a non-objective perception, Yogācāra accepts śūnyatā (i.e., emptiness) (King, 1994). Thus, it is difficult to ignore the centrality of the notion of śūnyatā to Vijñānavāda.
Śūnyatā can indicate the empty, emptiness, empty space, or the sky, among other meanings. If śūnyatā is used in a cosmological sense (as empty space), it is not congruent with the ontological concept of emptiness (śūnyatā, lack of inherent existence) as employed in Mahāyāna philosophy or Daoist mystery (Eskildsen, 2015). Śūnyatā is a specialized term in Buddhism. Precisely, it signifies that nothing has an independent ego-nature or ego-appearance of its own, because everything is constituted from various amalgamated parts, and these elements are interdependent and inter-related, forming an "apparent whole," which does not remain intact for even a short duration and is subject to the law of inconstancy. Everything changes from instant to instant, and, therefore, its ultimate ego-nature is ungraspable and unobtainable. Because the ego-appearances of things are unobtainable, it is said that the ego-nature of all beings is empty (Venerable, 2005). In other words, substances and phenomena have no fixed, unchangeable, eternal existence or "self" (Chan, 2015).
When the mind is thoroughly emptied and made calm, it merely observes things as they are without imposing its own biases and wishes. Perhaps when one calmly observes the things around one for what they are, one realizes that it is essential to the nature of all things to flourish and decline (Eskildsen, 2015). This state is called pure brightness. Pure brightness is expressed in the great concern for the purity of mind of spirit mediums: so as not to interfere with the flow of the god's thoughts, the medium's mind has to be calm, clear, and free of its self (Clart, 2003).
Eight Consciousnesses
The basic concept of Vijñānavāda is that everything is created from the mind as ideation (vijñaptimātra). Vijñānavāda uses the eight consciousnesses to explain the workings of the mind and the way it constructs the reality we experience (Clark, 2011). The doctrine of the Verses Delineating the Eight Consciousnesses is used to explain each level of consciousness in this paper. The Verses Delineating the Eight Consciousnesses (八識規矩頌), by Tripitaka Master Xuanzang (玄藏, AD 602-664), is a summary of the doctrine contained in Xuanzang's most celebrated work, the Treatise on Consciousness Alone (成唯識論).
Sixth consciousness (mental consciousness).
The first five consciousnesses are the physical senses (those instantiated in the eye, ear, nose, tongue, and body), and the sixth consciousness is mental consciousness (Clark, 2011). These five senses, accompanied by their objects, are posited by valid, straightforward cognition, solely using the bodily sense faculties (Berzin, 2013). Mental consciousness distinguishes all incoming data. Since the first five consciousnesses always arise together with mental consciousness, it and all the sense data are fed into the seventh consciousness (manas) (Clark, 2008). The character or nature of these six consciousnesses can be good, evil, or neutral (Tripitaka Master Xuanzang, 1998). On mental consciousness, it is stated in the Verses, "Whenever it is wholesome or unwholesome, they make distinctions and accompany it. The basic and subsidiary afflictions together with faith and other wholesome dharmas always arise jointly with the sixth consciousness." The primary function of mental consciousness is to make distinctions, such as between good and evil or between long and short. According to Vasubandhu, the fundamental afflictions are greed, anger, stupidity, arrogance, doubt, and improper views. Because these primary afflictions always come together, mental consciousness colors the incoming sense data and interprets it through the senses (Clark, 2011).
Seventh consciousness (manas consciousness).
The first six consciousnesses are separate and involve the quick registration of sense impressions. Manas coordinates thoughts and sensory information received from the first six consciousnesses and is capable of reflecting, considering, and making judgments (Clark, 2011). In other words, conceptual knowledge appears at the level of mental consciousness, which is determined by manas consciousness (Nedu, 2015). The nature of manas is neutral but obscuring; therefore, manas will be innately contaminated by defiling afflictions (Tripitaka Master Xuanzang, 1998).
The Verses state, "The eight derivative afflictions, the five universal interactions, the judgment of the particular states, greed, anger, doubt, and improper views all interact and accord with it… It continuously focuses its mental activity on inquiry, which results in the characteristic that is self." The eight derivative afflictions are "lack of faith, laziness, laxness, torpor, restlessness, distraction, improper knowledge, and scatteredness." Greed, anger, doubt, and improper views are four of the six primary afflictions. These eight derivative afflictions and four fundamental afflictions always interact with manas. Its primary function is to make judgments, which involves decision making based wholly on worldly knowledge, which is defiled by the self (Tripitaka Master Xuanzang, 1998). At the level of manas, the delusion of "I" arises, because it reflects the illusion that there is someone inside who is in charge, making decisions, acting on my preferences, and consciously pursuing my choices (Clark, 2008).
Eighth consciousness (âlayavijnâna).
The eighth consciousness, whose nature is a non-obscuring neutral, always arises together with the seventh consciousness (Tripitaka Master Xuanzang, 1998).Âlayavijnâna means the storehouse or seed consciousness, which induces transmigration or rebirth, causing the origination of a new existence (Clark, 2008).Âlayavijnâna receives impressions from all the functions of the other consciousnesses and accumulates potential energy for the mental and physical manifestation of life (Berzin, 2013).According to the traditional interpretation, the other seven consciousnesses are evolving or transforming consciousnesses, originating in this seed consciousness.Alayavijnâna is a complexly conditioned mode of cognitive awareness that simultaneously supports and informs all occurrences of manifest consciousness (Siderits, 2005).Although it is initially immaculate in itself, âlayavijnâna contains a mysterious mixture of purity and defilement, good and evil.Because of this mix, the transformation of consciousness from defilement to purity can take place (Waldron, 2008). Ninth consciousness (amalavijñāna). Amalavijñāna is the immaculate consciousness.This pure consciousness is identified with the nature of reality.Alternatively, amalavijñāna may be considered the real aspect of âlayavijnâna (Buswell& Lopez, 2013).The concept of amalavijñāna ultimately derives from Tathagatagarbha thought, which similarly emphasizes the inherent purity of mind (Buswell, 1989).The intrinsic Buddha nature is called the Tathagatagarbha.According to the Sutra of Queen Srimala of the Lion's Roar (勝蔓經), Tathagatagarbha is eternal, unchanging, and inherently pure.If there were no Tathagatagarbha, there would be no aspiration to seek nirvana.Tathagatagarbha signifies that everyone can be Buddha.The discovery of amalavijñāna clears away defilements and wrong views to find the emptiness that is the source of Tathagatagarbha (Clark, 2008). Three Types of Intuition Freud divided the mind into the conscious mind and the unconscious mind.Jung then split the unconscious into two layers: the personal unconscious and the collective unconscious.In introducing Buddhist concepts to Western audiences, Waldron (2006) found it useful to consider the conception of the âlayavijnâna as a form of Jung's collective unconscious, according to which the world is constructed through a shared language (Siderits, 2005).This paper borrows Jung's concept of intuition to express manas, âlayavijnâna, and amalavijñāna. Personal experience intuition (manas consciousness).The personal unconscious consists of complexes and individuals' experience (Jung, 1981).A compound is a core pattern of emotion, memories, perceptions, and wishes (Shultz & Shultz, 2009).The personal unconscious has a similar function to manas-vijñāna (Pine, 2013).Intuition from manas consciousness is labeled own experience intuition.Because of affliction and self-delusion, this intuition cannot make proper moral judgments. Collective archetype intuition (âlayavijinâna).The archetype is one element of the collective unconscious. 
Archetypes are unclear underlying forms from which images and motifs emerge. History, culture, and personal context shape these manifest representations (Jung, 1981). Intuition from âlayavijnâna can be called an intuition of the collective archetype. Because of karma, this intuition cannot make proper moral judgments. Collective universal intuition (amalavijñāna). Another component of the collective unconscious is the universal image (Raff, 2000). The universal image is an essential thing: the most far-fetched mythological motif and symbol (Jung, 1981). Intuition from here is labeled collective universal intuition. Immaculate and pure, collective universal intuition is similar to Tathagatagarbha, which represents the core concept of amalavijñāna. Hypotheses Based on the relevant literature, the results of the studies presented above, and essential variables identified by previous research on ethics, the following hypotheses are proposed concerning the association between emptiness and intuitive judgment. H1. Emptiness is positively associated with pure brightness. H2. Pure brightness is positively associated with personal experience intuition. H3. Pure brightness is positively associated with collective archetype intuition. H4. Pure brightness is positively associated with collective universal intuition. Samples Unlike other studies that have taken students or managers as samples, this study examined spirit mediums in particular. This was done for the following reasons. First, mediumship indicates the ability to communicate with the souls of the dead in spiritualist sessions. The function of intuition in the intermediary state unites mediums and intermediary beings (Pilard, 2015). Since the spirit medium's experience helps others understand unconscious and conscious psychology, spirit mediums are an essential subject for the new psychology (Shamdasani, 2009). Second, according to Fontein (2006), the performances of spirit mediums can involve "responding to, and engaging with, the social, political, and moral expectations of those around them." Also, spirit mediums may have a more profound realization of emptiness than most. The performance of Taiwanese spirit mediums derives from beliefs and practices from Daoism, Buddhism, and Confucianism (Marshall, 2003). Setting Members of the Taiwan Mediums' Association, founded in 1989, were contacted and invited to participate in the survey. The help of the association allowed the researchers to reach a population that is widely dispersed geographically and difficult to reach through conventional survey methods; 200 questionnaires were sent to 200 spirit mediums, each living in a different area of Taiwan. In all, 157 respondents returned completed questionnaires that could be used in the analyses; the remaining 43 responses did not answer all items and were therefore excluded from the study. The response rate was 78.5% (157 out of 200). Because Loehlin (2004) requires a minimum sample size of more than 100, our 157 responses meet the minimum requirement for structural equation modeling (SEM).
Design and measures This survey-based approach used one questionnaire to assess emptiness, pure brightness, and three types of intuition (see appendix). The questionnaire items were responded to on a five-point Likert-type scale: agree strongly (5), agree (4), not certain (3), disagree (2), and disagree strongly (1). The definition of each variable is as follows. Pure brightness. Pure brightness means that the mind is clear or free of any thoughts that would confuse it during decision-making. It also means that the mind is calm, without any emotions to agitate it (Eskildsen, 2015). This was measured using six items (see appendix). Emptiness. Śūnyatā means emptiness or vacuity, a highly specialized term in Buddhism. Specifically, it signifies that nothing has an independent ego-nature or ego-appearance of its own. It was measured using five items (see appendix). Personal experience intuition, collective archetype intuition, and collective universal intuition. These three types of intuition were measured using four items for PEI and three items each for CAI and CUI (see appendix). The reason we adopted the term clairvoyance for collective universal intuition is that it resembles the highest faculty that has access to some ultimate reality (Pilard, 2015). Procedure Similar to the samples used by Giacalone and Jurkiewicz (2003) and Ayoun, Rowe, and Yassine (2015), the snowball sampling technique, often used for hidden populations that are difficult for researchers to access, was employed. To ensure that a good sample would be obtained, the initial distribution of the surveys specifically targeted the Mediums' Association, with the request that recipients pass it on to others who might be interested. Any survey forwarded to expand the sample had a good chance of reaching valid respondents, as the target population was sufficiently diversified (spread throughout Taiwan) to accommodate minor variations in dissemination. Furthermore, as an extra precaution, one question in the survey requested the respondent's job title, to ensure that he or she was a spirit medium in a senior position. Another benefit of snowball sampling in this study was that it might have encouraged participation. A survey that measures sensitive subjects, such as spirituality and ethics, may be received better from a friend or familiar colleague than from an outsider. Ethical Considerations Items in the questionnaire were drawn from previous studies (see appendix). Before implementing the survey, the questionnaire was approved by the Chairman of the Taiwan Mediums' Association (Mr. Gao, Tian-Wen). The participants in this questionnaire were anonymous. Because the survey was anonymous and participants were not identified by name, this project presented no foreseeable risk to participants. Participants were told that their participation was voluntary and that they could decline to answer specific questions or end their involvement at any time without penalty. The participants were aware that the results of this study could be published in professional and social journals. Participants were also informed that the results could be used for educational purposes or presentations. However, no individual subject will be identified. The questionnaire was approved by the College of Business at Feng-Chia University (No. 20111018P9945756).
Analysis The data were analyzed with the SPSS package, using reliability, correlation, and regression analyses. SEM and a rival-model comparison were also used to test validity and model fit (Hancock, 2015). The use of SEM is commonly justified in the social sciences because of its ability to impute relationships between unobserved constructs (latent variables) from observable variables (Loehlin, 2004). Table 1 shows the ideal values of the model fit indexes in SEM. Once the researcher is confident that a plausible hypothesis supports the inferred relationship, he must then rule out the plausibility of rival hypotheses (Campbell & Stanley, 1963) and models. If the model survives to this point, it may be reasonable to conclude, probabilistically, that a causal relationship has been demonstrated (McCoach et al., 2007). Table 1. Ideal values of the model fit indexes (Bollen & Long, 1992); smaller values indicate better fit. Reliability All the measures were analyzed for reliability and validity. The appendix contains measured characteristics and sample measurement items. Nunnally (1978) stated that Cronbach's α denotes high reliability if it is between 0.7 and 0.98. If Cronbach's α is above 0.5, the research findings should be regarded as significant. If Cronbach's α is under 0.35, the research should be rejected (Hair, Anderson, Tatham, & Black, 1992). Cronbach's α was 0.682 for emptiness, 0.740 for pure brightness, 0.801 for personal experience intuition, 0.519 for collective archetype intuition, and 0.686 for collective universal intuition. Cronbach's α for the total 21 items was 0.857. All were over the 0.5 standard value. Therefore, the results of this survey should be considered reliable. Validity Content validity: This paper adopts the relevant references as the research basis, cooperating with an expert spirit medium to discuss, construct, pre-test, and revise the questionnaire. Therefore, this study should reach content validity. Composite reliability: Confirmatory factor analysis was used to test validity. The composite reliability values of emptiness, pure brightness, personal experience intuition, collective archetype intuition, and collective universal intuition were 0.727, 0.744, 0.805, 0.534, and 0.824, respectively. All were more than 0.5, which showed that they had good construct validity (Raines-Eudy, 2000). Correlation The variables for emptiness, pure brightness, personal experience intuition, collective archetype intuition, and collective universal intuition were analyzed using a correlation analysis. Demographic variables for age, years of experience, and number of students were also included. Correlation analysis (Table 2) indicated that emptiness had a strong correlation with pure brightness (0.571, p < 0.01). Pure brightness had significant correlations with personal experience intuition, collective archetype intuition, and collective universal intuition (0.181, p < 0.05; 0.345, p < 0.01; 0.476, p < 0.01, respectively). These results provide support for H1, H2, H3, and H4. Testing the Proposed Model The proposed model was examined further using SEM techniques with the software AMOS. The results of SEM for PEI indicated a good fit of the model (χ² = 166.140, df = 88, χ²/df = 1.888, GFI = 0.882, AGFI = 0.839, PNFI = 0.785, CFI = 0.883, RMR = 0.075, RMSEA = 0.075). χ²/df (1.888) was lower than 3.
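Cronbach's α used above follows the standard formula α = k/(k−1)·(1 − Σσ²_item / σ²_total). The sketch below shows how the reported reliabilities could be recomputed from raw item scores; the item matrix here is a random placeholder (the actual survey responses are not public), so only the function, not the printed value, is meaningful.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from 157 respondents to the six
# pure-brightness items; real item wording and data are in the appendix/survey.
rng = np.random.default_rng(0)
pure_brightness = rng.integers(1, 6, size=(157, 6))
print(round(cronbach_alpha(pure_brightness), 3))
```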
Additionally, GFI (0.882), AGFI (0.839), and PNFI (0.624) were over 0.8, 0.8, and 0.5, respectively, falling within the suggested ranges (Schumacker and Lomax, 2010). The CFI (Bentler, 1989), at 0.883, indicated a good fit. Overall, the proposed model performed well (Figure 1) (Bollen & Long, 1992). Our extremely parsimonious model posits that pure brightness will be positively associated with PEI; this implies a nomological status for CAI and CUI. This study compared (Table 4) the proposed model with two rival models on the following criteria: (1) overall fit of the model-implied covariance matrix to the sample covariance matrix, as measured by CFI; (2) the models' hypothesized parameters that are statistically significant; and (3) parsimony, as measured by the PNFI (James, Mulaik, & Brett, 1982). Because the model fit indexes of the three models were nearly the same, the proposed model and the two rival models all performed well. For the estimate of the path from emptiness to pure brightness in the three models, the critical ratios (t-values) were 5.395, 5.347, and 5.433, respectively. All were statistically significant (p < 0.001). Thus, H1 was supported. Regarding the paths from pure brightness to each intuition, all path estimates were significant (CR = 1.997, 3.561, and 4.627, respectively). This finding implies that pure brightness is significantly positively associated with PEI, CAI, and CUI. Thus, H2, H3, and H4 were supported. Outline of the results This study investigated the association between emptiness and intuitive judgment. With the use of correlation, regression, and SEM analyses, a statistically significant correlation was found between emptiness and pure brightness using a questionnaire-based survey of 157 spirit mediums. Pure brightness also had a significantly positive association with three types of intuition (PEI, CAI, and CUI). This finding indicates that emptiness has a critical role in explaining attitudes toward intuitive judgment. Thus, H1, H2, H3, and H4 were supported. Limitations of the study An investigation into the relationship between emptiness and intuition is a difficult endeavor and will be constrained by numerous limitations. Varying definitions of intuition constrain the comparability of this study to similar studies. Additionally, research on the dynamic associations between emptiness and intuition can be approached using various research questions, designs, and hypotheses. The results of this study, therefore, require careful interpretation to avoid conclusions that overreach the data. Furthermore, the fact that the respondents in this research were Taiwanese must be considered when interpreting the results. Including variables of different cultural and religious backgrounds in future research may improve the understanding of the nuances of intuitional influences on ethical decision-making processes. Practical Implications As Narvaez (2010) notes, because we think in and through our bodies, our thinking is bounded and shaped by them, making our conceptual systems unconscious, metaphorical, and imaginative. Consistent with this notion, reason and intuition should intricately interweave in moral psychology and neuroscience. Moral evaluation is, therefore, not only a process of reason but is also guided by metaphors of purity (Zhong & Liljenquist, 2006) and influenced by affect and emotions (Haidt, 2001). However, the influence of these dynamics on intuition is largely unconscious and is not well captured by current calculative and analytic tools (Zhong, 2011).
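For readers who want to sanity-check the reported fit statistics, the sketch below recomputes χ²/df and RMSEA from the quoted χ², degrees of freedom, and sample size, assuming the conventional Steiger–Lind RMSEA formula; with χ² = 166.140, df = 88, and N = 157 it reproduces χ²/df ≈ 1.888 and RMSEA ≈ 0.075, consistent with the values reported above.

```python
from math import sqrt

def chi_square_ratio(chi2: float, df: int) -> float:
    return chi2 / df

def rmsea(chi2: float, df: int, n: int) -> float:
    # Steiger-Lind RMSEA: sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported for the PEI model (N = 157 respondents).
chi2, df, n = 166.140, 88, 157
print(round(chi_square_ratio(chi2, df), 3))  # ~1.888, below the usual cut-off of 3
print(round(rmsea(chi2, df, n), 3))          # ~0.075, matching the reported RMSEA
```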
The school of Vijñānavāda claims that the experience of knowledge is determined solely by the individual predispositions of the knowing subject (his "imprints of linguistic constructions," Abhilapavasana) and not by an alleged "external reality" (Nedu, 2015, p. 52). Thus, Vijñānavāda texts consider that what is taken as fact is affected by the illusions of determined individuality and of subjectivity, which obstruct the absolute and liberated condition of reality. The manas, in its essence, is responsible for the illusory appearance of the ego of the individual. Through the determining relation it exerts on mental consciousness, the manas also transfers its obstructed nature to mental consciousness (Nedu, 2015). According to the concept of Mādhyamaka, Tathagatagarbha is inherently pure, and everyone has Buddha nature. However, how can this intrinsically pure nature be contaminated by extrinsic and other virulent defilements? As the Sutra of Queen Srimala of the Lion's Roar (勝蔓經) notes, there are defilements (such as fundamental and derivative afflictions), and there are defiled consciousnesses (such as mental consciousness and manas consciousness). In other words, both the mental and the manas consciousness should be inherently pure but have been innately contaminated by fundamental afflictions such as greed, anger, doubt, and improper views. This situation is called ignorance (avijjā) from, or before, the very beginning. Therefore, the manas is not moral intuition and cannot make appropriate ethical decisions if afflictions defile it. Mental consciousness will then follow its decision and act unethically. In other words, only moral intuition can result in a moral judgment and thereby moral action. The non-moral effect is caused precisely by non-moral judgment, which arises from non-moral intuition. This concept is in line with moral psychology, in its assertion that moral intuitions come first and directly cause moral judgments (Shweder & Haidt, 1993). Accordingly, the gap between judgment and action is similar to that in Kant's morality, which is universal and necessary (a priori). This gap between judgment and action also exists a priori. The gap that occurs does not involve an incongruity between moral judgment and non-moral action. Instead, the gap directly shows that the moral judgment is not genuinely moral and is thus able to result in non-moral action. This gap will not occur if the moral judgment is genuinely moral. Therefore, this paper argues that the gap between judgment and action is just an extended comparison between non-moral judgment and non-moral action. Conclusion The study reported here provides evidence that emptiness has a positive effect on intuition, no matter which type of intuition it is. It implies that emptiness (Mādhyamaka) can improve the decision-making process of the eight consciousnesses (Vijñānavāda). Much work has already been done to examine the link between intuitive cognition and moral judgment. However, much work is also required to understand the relationship between moral judgments and antecedent factors, such as karma or afflictions.
Although Mādhyamaka and Vijñānavāda are distinct from each other in their doctrines, this empirical study suggests their approaches are closely associated with intuitive moral judgments. This paper is not able to address all the potentially vital deficiencies in the current state of moral judgment theory. However, if a newly proposed theoretical model can at least adequately take into account the primary concerns of consciousness raised by Vijñānavāda, a potentially more robust model will have been developed for use by a broader range of empirical researchers. Figure 1. The SEM model of emptiness to personal experience intuition. Testing the Rival Model: Table 3. Regression model and results for H1.
5,868.8
2017-12-26T00:00:00.000
[ "Philosophy", "Psychology" ]
Identification and Detection of a Peptide Biomarker and Its Enantiomer by Nanopore Until now, no fast, low-cost, and direct technique exists to identify and detect protein/peptide enantiomers, because their mass and charge are identical. They are essential since l- and d-protein enantiomers have different biological activities due to their unique conformations. Enantiomers have potential for diagnostic purposes for several diseases or normal bodily functions but have yet to be utilized. This work uses an aerolysin nanopore and electrical detection to identify vasopressin enantiomers, l-AVP and d-AVP, associated with different biological processes and pathologies. We show their identification according to their conformations, in either native or reducing conditions, using their specific electrical signature. To improve their identification, we used a principal component analysis approach to define the most relevant electrical parameters for their identification. Finally, we used the Monte Carlo prediction to assign each event type to a specific l- or d-AVP enantiomer. ■ INTRODUCTION L-Amino acid enantiomers are the predominant forms found in animal tissues.−4 Altered regulation production of these isomers can have profound effects on organisms.For example, during the natural aging process, there is a decline in the level of D-serine. 5Conversely, in pathological aging conditions, such as Alzheimer's disease, there is excessive activation of serine racemase, resulting in hyperstimulation of the NDMA receptor due to an excess of D-Ser. 5Notably, levels of D-Arg levels are decreased. 6dditionally, several D-amino acids containing peptides (DAACPs) have been identified in various pathological conditions associated with diseases or natural aging processes, as evidenced by the presence of D-β-Asp-containing peptides in elastic fibers of sun-damaged skin, 7 D-Asp in αA-crystallin from the lenses of individuals with cataracts, 8 and the β-amyloid peptide of Alzheimer's patients. 9These DAACPs are mainly formed following the post-translational modification of L- amino acids, either by enzymatic racemization of L-amino acids into their D-form or spontaneously, despite a very slow process, within long-lived proteins and tissues. 1 It has been established that oxidative stress and free radicals can induce such modifications in proteins. 10,11Notably, the alteration of a specific residue from L-to D-configuration often leads to a modification of the biological activity of the peptide and could be developed as therapeutic molecules. 12Modifications of peptide functional properties are likely due to a change in the higher-order structure of a protein. 13,14The significant increase of DAACPs discovered in diseases has encouraged researchers to consider them as potential biomarkers, 1 and any new technique capable of detecting/quantifying or finding new DAACPs, even at low concentrations, would represent a major advancement in early diagnosis. 
As mentioned above, there is a clear need to develop fast, low-cost, and direct techniques to identify chiral amino acids in a polypeptide chain without prior chemical or enzymatic treatment. Due to the difference between L- and D-amino acids, we expect the polypeptide chains to adopt different conformations.−17 To the best of our knowledge, the classical methods used for the detection of L- and D-amino acids, such as mass spectrometry, liquid chromatography, or enzymatic assays, do not allow the simultaneous detection of proteins, their conformation, and their chiral amino-acid composition.14 The nanopore electrical detection technique offers a unique multiscale analytical tool for peptide and protein enantiomer biomarker detection.−40 Machine learning is now used to validate data ranging from the sequencing of individual amino acids37 to the identification of biomolecules41,42 and biomarkers.43 While the first study on detecting individual chiral amino acids (Trp, Phe, Tyr, Cys, and Asp) using an engineered Cu2+ phenanthroline alpha-hemolysin channel was published in 2012 by Bayley's group,18 only a few studies have been published since. Two other groups showed the ability to discriminate individual L- and D-amino acids using cyclodextrin adapters44 inside a covalent organic framework (His) or an alpha-hemolysin mutant45 (Phe, Trp, Tyr). Mubarak et al. developed a nanopipette system to detect single amino-acid enantiomers (Tyr, Trp, and Phe) using a polymeric conical nanopore functionalized with BSA.46 They used current rectification to detect each kind of enantiomer. The first example of detecting chiral amino acids within a polypeptide biomarker was described by Luchian's group, where they identified, using Cu2+ chiral recognition, the presence of histidine enantiomers.47 A recent study used mutant OmpF to control the side-chain orientation of a β-amyloid peptide inside the nanopore due to its lateral electric field.48 They were able to discriminate between D-Ser and D-Asp isoforms and mutants. Finally, Maglia's group studied the detection of D-Ala and D-Leu in enkephalin peptides using FraC and CytK mutants.49 Until now, no studies have been conducted to detect enantiomers within a biologically relevant peptide that has secondary and tertiary structures such as disulfide bonds. Furthermore, most previous studies needed an adapter or several pore mutations to detect chiral amino acids in the polypeptide sequence. To prove the ability to sense enantiomers containing elements of secondary and tertiary structure under native conditions, we used vasopressin as a model peptide. Vasopressin (L-Arg-AVP, later named L-AVP), a hormone produced in mammals, is involved in the regulation of water balance and blood pressure, as well as having an influence on social behavior, memory, and the cardiovascular system through its interaction with V1a, V1b, and V2 receptors.
50 Vasopressin quantification was performed to assess hyper- or hyposecretion pathologies. In the case of diabetes insipidus,51 quantifying vasopressin levels makes it possible to determine the cause of the disease, which can be AVP receptor insensitivity or a decrease in hormone production. Notably, the preferred technique for detecting L-AVP in biomedical analyses has shifted to HPLC-MS/MS, sidelining traditional immunological techniques, partly due to its low molecular weight. This choice, however, requires essential preliminary steps for extracting and purifying the peptide of interest. The vasopressin nonapeptide possesses two cysteine residues at positions 1 and 6, forming a constrained peptide loop structure via a disulfide bridge. NMR analysis has revealed saddle and open conformations within the cyclic part of the molecule, while the noncyclic part demonstrates greater flexibility.50 The D-Arg-AVP (later named D-AVP) enantiomer represents a synthetic derivative of L-AVP, primarily employed for research purposes.52 Conversely, the C-terminally deaminated form of D-AVP, known as desmopressin, is used for therapeutic applications. Indeed, desmopressin selectively binds to a single receptor, V2, unlike the natural hormone L-AVP, limiting its physiological effects to water retention. Including D-Arg in the peptide chain makes it more resistant to proteolysis and enhances the molecule's lifespan in the bloodstream.53 This work demonstrates that a wild-type (WT) aerolysin nanopore can discriminate, at the single-molecule level, enantiomeric peptides, L-Arg8 or D-Arg8, contained in the native peptide chain. L- and D-AVP can also be identified in a mixture. Interestingly, we can detect multiple AVP conformations, such as the open and saddle states already observed by NMR. Furthermore, we show that, after using a reducing agent, several conformations observed in native conditions disappear, and the nanopore is still sensitive enough to identify both L- and D-forms in single and mixed experiments. By changing the relative ratio of each component in the mixture, we demonstrate the identification of each enantiomeric state according to its blockade level or volume. To identify each kind of event population and attribute it to a specific conformation, we used a PCA approach to first determine the best electrical parameters for their discrimination. We also developed a Monte Carlo prediction approach to identify each enantiomer conformation.
■ RESULTS AND DISCUSSION Electrical Enantiomer Characterization in Native Conditions.L-and D-AVP are two analogous peptides of 9 amino acids differing from a single chiral amino acid Arg8 (Figure 1a,b).Chirality can be represented as the nonsuperposable mirror image of an object.Therefore, the main difference between L-and D-AVP is the orientation of the lateral chain of Arg8 (Figure 1b).The two peptides form a disulfide bond between Cys1 and Cys6, creating a looped peptide structure (Figure 1a).As well as a disulfide bond, it has been found that L-AVP can possess one or two beta-turns stabilized by hydrogen bonds: 50 the first one within the loop created by the disulfide bond creates the saddle conformation, and the other is formed across Pro7.L-and D-AVP structures are similar, 54 so we can hypothesize that D-AVP can also form these two beta-turns.Discriminating L-and D-AVP with a WT aerolysin nanopore would enable us to show that we can detect a slight difference in conformation with a single chiral amino acid in structurally complex peptides.By reducing the disulfide bonds, we want to show if we can detect a conformational difference between the native and reduced states and determine the best conditions to discriminate between L-and D-AVP. Using L-and D-AVP as model peptides, we want to show if with a wild-type aerolysin nanopore we can discriminate and identify a single amino acid enantiomer in both native and reducing conditions (Figure 1c).We performed nanopore experiments with L-and D-AVP, independently or in a mix, in the presence or absence of tris(2-carboxyethyl)phosphine (TCEP), a reducing agent.A WT aerolysin nanopore is inserted into a lipid bilayer separating two compartments (cis and trans) filled with an electrolyte solution (4 M KCl, 25 mM Tris, pH 7.5) (Figure 1c).A potential difference is applied by using two electrodes in each compartment.Ions in the electrolyte solutions flow through the pore, and we can measure an ionic current of 250 pA at 110 mV in our experimental conditions (I 0 ).In the presence of an analyte at the pore entry, we observe current blockades (I b ), characterized by a blockade level (DI b = I 0 − I b ) and by a dwell time (T t ) (Figure 1d).In native conditions, we observe two types of events for both L-and D-AVP: Type I with short dwell times of ∼440 and ∼390 μs, respectively, and a lower blockade current of ∼60 and ∼70 pA; and Type II with longer dwell times: for L-AVP ∼820 μs and D-AVP ∼1140 μs and a higher blockade current of ∼150 pA (Figure 1e) (Supporting Information S1, S2).In the presence of TCEP, a reducing agent, we observe only one type of event for each peptide characterized by long dwell times with a medium blockade current of ∼130 pA (D-AVP) and ∼120 pA (L-AVP) (Figure 1f).Interestingly, for these individual events, we can already observe a difference in the average blockade current for L-and D-AVP in either native or reducing conditions (Figure 1e and f, respectively). 
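A minimal sketch of how such blockades can be extracted from a raw current trace is shown below. The I0 − 7σ threshold and the 200 μs dwell-time cut-off follow the Methods section later in the paper; the function name, the simple median/standard-deviation baseline estimate, and the assumption that the trace starts and ends at the open-pore level are simplifications of this sketch, not the authors' exact analysis code.

```python
import numpy as np

def detect_blockades(current, fs=250_000, i0=None, n_sigma=7, min_dwell_us=200):
    """Crude threshold-based blockade detection on a nanopore current trace (pA).

    Returns (normalized blockade level DI_b/I_0, dwell time in microseconds)
    for every event longer than min_dwell_us. Assumes the trace begins and
    ends at the open-pore level.
    """
    current = np.asarray(current, dtype=float)
    if i0 is None:
        i0 = np.median(current)               # open-pore current estimate
    sigma = current.std()                     # rough noise estimate (simplification)
    threshold = i0 - n_sigma * sigma
    below = current < threshold               # samples inside a blockade
    edges = np.flatnonzero(np.diff(below.astype(int)))  # entry/exit transitions
    events = []
    for start, stop in zip(edges[::2] + 1, edges[1::2] + 1):
        dwell_us = (stop - start) / fs * 1e6
        if dwell_us >= min_dwell_us:
            ib = current[start:stop].mean()            # mean blocked current I_b
            events.append(((i0 - ib) / i0, dwell_us))  # (DI_b / I_0, T_t)
    return events
```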
For experiments in native conditions, the characteristic parameters of dwell time and average blockade level for each event were extracted from the current traces and plotted as a bidimensional cloud (Figure 2a,b,d,e,g,h,j,k).−57 We observed the two main types of events characterized by at least two defined current blockades, 0.43 ± 0.01 (Type IIa) and 0.72 ± 0.01 (Type I) for L-AVP, while 0.39 ± 0.01 (Type IIa) and 0.71 ± 0.01 (Type I) for D-AVP (Figure 2c,f) (Supporting Information S1).These results show that we can discriminate and identify L-and D-AVP in native conditions using the Type II events and their average current blockade.Unfortunately, the less frequent Type I events for L-and D-AVP cannot be discriminated.These events could be attributed to the entrance of the cycle formed by the disulfide bound.In fact, due to the volume of this cycle, we can suppose it is entropically unfavorable for entry into the pore.On the other hand, it could explain the highest blockade level observed.Furthermore, we observe a discrete subgroup in the Type II population that blocks the pore slightly more (Figure 2b,c,e,f).For L-AVP, we can measure a blockade level of 0.48 ± 0.01 (Type IIb) for this population and 0.43 ± 0.01 for D-AVP (Type IIb) (Supporting Information S1).Blockade levels depend on the volume of the chain interacting with or passing through the pore. 36,49,58,59everal studies showed that L-AVP could adopt multiple conformations.NMR studies 50,60 showed that L-AVP can be either in a "saddle" or "open" conformation with a ratio of 70:30, respectively (Supporting Information S5).These two populations with similar observed blockade levels (Types IIa and IIb) could be due to this conformation change, with the most probable population observed being the saddle conformation, or Type IIa, and the less probable population being the open conformation, or Type IIb.These two conformations have different values for molecule volume. 50o better discriminate and identify each population, we will discuss this later in the paper by performing semisupervised classification to assign types of events to specific conformations. 
Electrical Enantiomer Characterization in Reducing Conditions.In experiments repeated in the presence of 5 mM TCEP, the bidimensional clouds for dwell time and blockade level display a single population of events with an average current blockade of 0.53 ± 0.01 for L-AVP and 0.49 ± 0.01 for D-AVP (Figure 2h,k; Supporting Information S1, S4).This single population in the presence of a reducing agent shows that we indeed have a change or loss of conformation.We confirmed that this single population is not due to TCEP affecting the detection of the peptides (Supporting Information S6).By comparing the previous experiments in native conditions, we can attribute events Type I and II to the disulfide bond.In the presence of the reducing agent, we observe that for both peptides the current blockade for Type II events increases, from 0.39 ± 0.01 for D-AVP to 0.49 ± 0.01 for D-AVP+TCEP and from 0.43 ± 0.01 for L-AVP to 0.53 ± 0.01 for L-AVP+TCEP (Figure 2i,j) (Supporting Information S1).The reduction of the disulfide bond releases the constraint on the peptide conformation, conferring more flexibility.To confirm these results, we analyzed and compared the event frequency in native and reducing conditions (Supporting Information S7).We can observe that event frequency at similar concentrations of peptide and applied voltage increased drastically after treatment with TCEP, from 19.1 ± 1.0 Hz to 153.3 ± 8.5 Hz for L-AVP at 110 mV in native and reducing conditions and 28.1 ± 0.9 and 107.2 ± 3.6 Hz for D-AVP, respectively.This result shows a reduced energy barrier for the entry of peptides treated with TCEP inside the nanopore.The reduction of the disulfide bond allows the peptide chain to adopt different conformations, which makes the peptide more flexible.This result is confirmed by an increase in the blockade level between the native and the reduced peptides (Figure 2c,f,i,l). Enantiomer Discrimination in a Mix.Since we observed a significant difference in blockade level between L-and D-AVP in native and reducing conditions, we performed equimolar mixes in both conditions to determine (1) if we can discriminate them in a mix and ( 2) what are the best experimental conditions for their discrimination (native or reducing conditions, applied voltage). We compared the superposition of the histograms of each independent experiment and the mixes (Figure 3a,b,g,h).In the absence of TCEP, we focused on the Type II events that allowed discrimination between the peptides.We varied the applied voltage (Supporting Information S8) and salt concentration (Supporting Information S9) to determine the best experimental conditions, with data at 50 and 110 mV in 4 M KCl displayed in Figure 3.At 50 mV, the blockade level histograms of L-and D-AVP in native or reducing conditions overlap (Figure 3a,c,e,g,i).Indeed, in a mix, we could not discriminate between each peptide population at this voltage.By increasing the voltage to 110 mV, the blockade levels for each peptide were better resolved (Figure 3b,d,f,h,j). 
We measured the most probable mean blockade level for both mixes (Supporting Information S10).In native conditions, blockade levels of 0.39 ± 0.01 and 0.43 ± 0.01 were measured, while with TCEP they were found to be 0.49 ± 0.01 and 0.53 ± 0.01 (Figure 3d,j) (Supporting Information S11), confirming that we can discriminate L-and D-AVP in an equimolar mix in both conditions.To verify that each population observed is attributed to L-or D-AVP, we changed the concentration ratio to 75% L-AVP and 25% D-AVP (Figure 3e,f).We observed a decrease in the number of events for the population attributed to D-AVP compared to L-AVP.To further confirm this, we changed the ratio to 25% L-AVP and 75% D-AVP and showed the same ability to discriminate between the peptides (Supporting Information S12).Here, we demonstrated that we could discriminate and identify L-and D-AVP, two native or reduced peptides differing only by one chiral amino acid in a mix using a WT aerolysin nanopore at 110 mV. Identification by Principal Component Analysis and Monte Carlo Approaches.In order to accurately identify each enantiomeric peptide and their different conformations, we performed a principal component analysis (PCA).For the experimental data analysis, we considered just the blockade level DI b to characterize each blockade.−43 Then, we can consider four other parameters defined in Figure 4a: maximum and minimum blockades (DI bmax , DI bmin , respectively), duration (T t ), and standard deviation (σ).We removed the blockades characterized by a standard deviation smaller than 1 pA, which were due to bumping, or by a blockade level smaller than 0.2 (see above).These data are plotted in Figure 4b from the results obtained with 10 μM D-AVP and show two main clusters: a small one characterized by a high blockade level of around 0.7 and a large one around 0.4. If we want to take the five parameters together into account, it is necessary to perform a PCA to reduce the number of relevant parameters.Then, we consider the two most relevant parameters: the first and second principal components (PC1 and PC2, respectively; Figure 4e).We observe two clusters similar to those in Figure 4b: a small cluster and a large one.We are particularly interested in the second principal component (PC2) and its Gaussian distribution for the larger population, which is similar to the blockade levels (Figure 2c,f).The two peptides have very similar structures, so the corresponding distributions are very close.These two distributions overlap, making classical clustering analyses with well-separated domains impossible (Supporting Information S13). To overcome this difficulty, we considered the results obtained from experiments with individual peptides.As a first approximation, each distribution is fitted by a Gaussian centered at μ D (respectively μ L ) with a standard deviation of σ D (respectively σ L ) for the D-AVP (respectively L-AVP) peptides.Blockades belonging to the domain [μ-3•σ, μ+3•σ] are attributed to the corresponding peptide (D-AVP or L-AVP), allowing the classification of 99% of blockades (Figure 4e,f and Supporting Information S13).Fitting the histogram of the main PC2 population for the D-AVP like this determines μ D = −0.039and σ D = 0.043 (Figure 4e).The classification is performed by taking into account all of the blockades in the range [μ D -3•σ D , μ D +3•σ D ].These data are colored in orange.The remaining data are not labeled and are in cyan (Figure 4f). 
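The workflow just described (PCA on the five blockade parameters, a Gaussian fit of the main PC2 population, and labeling of blockades inside the μ ± 3σ band) can be sketched as follows. The feature matrix below is a random placeholder standing in for the normalized blockade parameters, so the fitted μ and σ will not match the values quoted for D-AVP; only the sequence of operations is meant to mirror the analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import norm

# X: one row per blockade, columns = [T_t, DI_b_min, DI_b_max, DI_b, sigma];
# here X is a hypothetical standardized feature matrix for D-AVP blockades.
rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 5))               # placeholder for real, normalized data

pca = PCA(n_components=2)
pcs = pca.fit_transform(X)                   # PC1, PC2 for each blockade
pc2 = pcs[:, 1]

# Fit a single Gaussian to the main PC2 population (the paper reports
# mu_D ~ -0.039, sigma_D ~ 0.043 for D-AVP); placeholder data give other values.
mu, sigma = norm.fit(pc2)

# Keep only blockades inside the mu +/- 3*sigma band and label them D-AVP.
in_band = np.abs(pc2 - mu) <= 3 * sigma
labels = np.where(in_band, "D-AVP", "unassigned")
print(round(mu, 3), round(sigma, 3), int((labels == "D-AVP").sum()))
```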
We followed the same approach with the L-AVP experimental data using the correlation matrix previously calculated from D-AVP data.We can observe a similar behavior: a PC2 distribution fitted by a Gaussian function as before, where μ L = 0.047 and σ L = 0.035 are the fitted values for L-AVP.All the blockades in the range [μ L − 3*σ L , μ L + 3*σ L ] are attributed to L-AVP peptides (in green).The other blockades are unassigned (in cyan) (Supporting Information S13). We used these criteria to discriminate between D-AVP and L-AVP in a mixture.First, we combined data from D-AVP and L-AVP current traces to define the training data (Figure 5a).These classifications led to labeling the previous data combined (Figure 5a).The PCA is computed using the correlation matrix previously calculated from D-AVP data.The PC2 distribution shows the two Gaussian peaks previously observed for each peptide (Figure 5b).As both distributions are overlaid, the logistic regression approach to define clusters is irrelevant (Supporting Information S14).We follow a Monte Carlo approach to discriminate the two peptides and label them (Figure 5c), followed by a comparison of our prediction with the original labeling of D-AVP and L-AVP by calculating a confusion matrix (Figure 5e).The success rates are 71% for D- AVP and 75% for L-AVP recognition.Nevertheless, the false- positive rates are similar for both peptides (around 22%) due to the overlap in the two distributions. We applied this approach to data collected from an equimolar mixture (10 μM D-AVP and L-AVP).The PC2 distribution (Figure 5f) is similar to the one observed for the combined data from D-AVP and L-AVP in Figure 5b.We used the same criteria as used previously to label the blockades: 1 = D-AVP (orange), 2 = L-AVP (green), and 3 = unlabeled data (cyan).Figure 5g shows the scatter plot of the two firstprincipal components according to this labeling, which leads to the blockade trace represented in Figure 5h. We applied this method to nonequimolar mixtures (2.5 μM L-AVP/7.5 μM D-AVP and 7.5 μM L-AVP/2.5 μM D-AVP), leading to the ratio calculation of the number of L-AVP blockades divided by that of D-AVP blockades (Supporting Information S15).The correlation between this ratio and the relative composition of the mixtures (1:3, 1:1, and 1:3 L- AVP:D-AVP, respectively) shows the accuracy of our approach. Using five parameters provides a more comprehensive view of the blockade phenomena and allows the determination of the most relevant parameters.The correlation matrix calculated from the D-AVP data was systematically used for each analysis.The scatter plots representing the first two principal components show a structure with several domains (a large and a small) comparable to classic representations of blockade levels versus durations, whether for D-AVP or L-AVP peptides. In the case of data collected from a mixture, we followed a Monte Carlo approach to assign the blockades to one peptide or the other (Figure 5f).Our approach was validated using the combined data sets from experiments performed with the L- AVP and D-AVP peptides separately.The corresponding confusion matrix shows a success rate of 73%, which is explained by the overlapping between the two distributions.This result is interesting and demonstrates the relevance of this approach in the statistical analysis of peptides by nanopores. Furthermore, these blockade level distributions (Figure 2c,f) highlight a shoulder attributed to the open conformation. 
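One plausible reading of the Monte Carlo assignment described above is to draw each blockade's label with probability proportional to the two fitted Gaussian densities evaluated at its PC2 value, and then score the result with a confusion matrix. The sketch below implements that reading with synthetic PC2 values drawn from the quoted (μ, σ) pairs, so it illustrates the procedure rather than reproducing the reported 71–75% success rates.

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import confusion_matrix

# Gaussian parameters fitted separately on the D-AVP and L-AVP PC2 histograms
# (values quoted in the text); pc2 and true_labels below are synthetic stand-ins.
mu_d, sig_d = -0.039, 0.043
mu_l, sig_l = 0.047, 0.035

rng = np.random.default_rng(2)
true_labels = rng.choice(["D-AVP", "L-AVP"], size=2000)
pc2 = np.where(true_labels == "D-AVP",
               rng.normal(mu_d, sig_d, 2000),
               rng.normal(mu_l, sig_l, 2000))

# Monte Carlo assignment: draw the label with probability proportional to the
# two Gaussian densities at each blockade's PC2 value.
p_d = norm.pdf(pc2, mu_d, sig_d)
p_l = norm.pdf(pc2, mu_l, sig_l)
prob_d = p_d / (p_d + p_l)
predicted = np.where(rng.random(pc2.size) < prob_d, "D-AVP", "L-AVP")

print(confusion_matrix(true_labels, predicted, labels=["D-AVP", "L-AVP"]))
```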
50he PC2 distributions clearly show two symmetrical shoulders for D-AVP (Figure 4e), whereas there is only one for L-AVP (Supporting Information S13).Then, we could also discriminate the two peptides from the PC2 shape distribution.Principal component analyses are very promising for further analysis of the different conformations of peptides at the singlemolecule level.■ CONCLUSION The world's population is increasing, and people worldwide are living longer.The degradation of the environment also impacts health.In this context, we expect an increase in chronic diseases, cancers, and neurodegenerative diseases. 40To answer this health challenge, we need a powerful, sensitive, and specific tool to perform early disease detection.Many molecules, including established peptide and protein clinical biomarkers and yet-to-be-identified biomarkers, remain underutilized or unexploited for clinical applications.This is due to the absence of techniques capable of discerning different conformations, post-translational modifications, and D/L amino acids.This study paves the way for the ability to identify, characterize, and quantify free or contained D-amino acids in peptides or proteins, which will be crucial for the early detection of human diseases.From a chemical and pharmaceutical point of view, this technique would also allow the scientific community to discriminate isomers 61 or follow the conversion of molecule chirality 62 and explain nonenzymatic racemization pathways. 63While this study focuses on a specific peptide model, there is no doubt that this technique can be extended to other enantiomer peptides (lanthipeptide and Aβ peptides), as shown in two recent publications using different protein nanopores such as OmpF, 48 CytK, and FraC. 49e showed that we can detect and discriminate biologically relevant peptides differing by a single amino-acid enantiomer, L-or D-Arg.The peptides can be identified individually or as a mixture.We detected two different peptide conformations, saddle (Type IIa) and open (Type IIb), which NMR already described.Due to their unique conformation, we could identify each peptide using two electrical parameters: dwell time and current blockade.Using a reducing agent, we did not detect any more conformational variants observed under native conditions.On the other hand, we can still identify L-and D-AVP that adopt different conformations.We also used a PCA approach to confirm our experimental data analysis and define the best electrical parameters to discriminate each population of events.This method is up-and-coming and will make it possible to study the conformations of biomarkers in more detail at the single-molecule level.Protein Production.Proaerolysin was produced by Dreampore SAS (Cergy, France) as described previously. 21riefly, C-terminally His-tagged protein was expressed into the periplasm of BL21 Ros 2 cells (MilliporeSigma), harvested by osmotic shock, before further purification by nickel affinity and buffer exchange chromatography (Cytiva, Malborough MA, USA) in standard buffers containing 350 mM NaCl.Purified protein was stored at 4 °C until further use, when it was activated using trypsin-bound beads (Thermo Scientific, Waltham, MA, USA).It was then used in nanopore experiments at a 0.5−1 nM final concentration of aerolysin monomers. Peptides Nanopore Experiments.Nanopore experiments were performed using a vertical planar lipid bilayer setup (Warner Instruments, Hamden CT, USA) with a 150 mM aperture, as described previously. 
57A 10 mg/mL stock of diphytanoylphosphatidyl-choline (DphPC, Avanti Polar Lipids, Alabaster, AL, USA), dissolved in decane, was used to create a planar lipid bilayer separating compartments containing 1 mL of electrolyte solution, 4 M KCl, and 25 mM Tris, pH 7.4.Once a single aerolysin pore was inserted, peptide analytes (individually or in a mix) were introduced at different concentrations before a 110 or 50 mV voltage differential was applied through Ag/AgCl electrodes. Data Acquisition and Analysis.Electrical measurements were performed by using an Axopatch 200B and a Digidata 1440 digitizer.Data were recorded at 250 kHz or 4 μs sampling time and filtered at 5 kHz using Clampex software (Axon Instruments, Union City, CA, USA).Three-minute recordings taken at 50 or 110 mV were cleaned and analyzed with Igor Pro (Wavemetrics, Portland, OR, USA), using inhouse algorithms to detect events and extract their characteristic parameters (current baseline level I 0 , values for the average (I b ), maximum (I bmax ), and minimum (I bmin ) current blockade level within the event, event noise or standard deviation of the current values within the event (I bs ), dwell time, and blockade time location in the data set.The average open pore current (I 0 ) and standard deviation (σ) for each recording were determined statistically, 64 and a threshold of I 0 − 7σ was used to define events for the target peptides in the characterization experiments (Figures 2, 3).Extracted parameters from multiple recordings were concatenated for low-frequency events to provide sufficient information for robust statistics.To remove bias introduced by short bumping events with a high blockade fraction, events with dwell times equal to or greater than 200 μs were used for further analysis.This is a reasonable approach given the overall distribution of average blockade fraction vs dwell time data (Supporting Information S2, S3), relative length of event dwell times, and the use of a 5 kHz filter.Histograms of the average blockade fraction for each event (DI b /I 0 , where DI b = I 0 − I b ; binned at 0.005) were fit with Gaussian or bi-Gaussian functions to determine the average blockade fraction for each population of events.Semilog histograms of the number of events against dwell time, with 30 bins per decade, were fit with an exponential function to determine the most probable dwell time (Supporting Information S2).Semilog distributions of interevent time, binned at 2 ms for global frequency and 4 ms for population frequency, were fit with single-exponential functions to determine the event frequency (Supporting Information S7).Each experiment contained at least 1700 events of more than 200 μs in dwell time.Depending on their blockade level distribution, populations were selected to determine each mean dwell time value.For Type I (Land D-AVP), the chosen points were found between 0.7 and 0.8 blockade level, and for Type II, between 0.35 and 0.55.In the presence of TCEP, points were selected between 0.4 and 0.8.The average and standard deviation determined from 3 independent fits were used for each fit to account for fit robustness. Semisupervised Clustering Using Principal Component Analysis and Logistic Regression Classification.The five parameters (T t , DI bmin , DI bmax , DI b , and I bs ) were determined (Figure 5b) using DI bmin = I 0 − I bmax , DI bmax = I 0 − I bmin , and DI b = I 0 − I b, respectively.PCA was performed using the PCA module of the scikit-learn library in Python. 
65,66riefly, the covariance matrix between these previously normalized five parameters was calculated to determine this matrix's eigenvalues and vectors.Each parameter is projected onto each eigenvector, with only the eigenvectors characterized by the most significant eigenvalues (PC1 and PC2) used for further analysis.The calculations with L-AVP or mixtures were performed using the correlation matrix already calculated with D-AVP.The logistic regression classification 67 was performed using the corresponding module in the scikit-learn Python library.The training is performed with 20% of the data set composed by concatenating data from D-AVP and L-AVP experiments (without or with TCEP).We can also follow a Monte Carlo type approach.We first used the histograms of PC2 obtained with each D-AVP or L-AVP peptide.These histograms are fitted by distinct Gaussian distributions centered in μ with a standard sigma deviation.We used these two Gaussian distributions to classify the blockages of a mixture according to a Monte Carlo approach.This method is evaluated using data obtained after the concatenation of data obtained previously with D-AVP and L-AVP peptides.We only take into account the blockages included in the band [μ-3•σ, μ+3•σ] of each Gaussian distribution.The predictions are compared to the true initial values to evaluate this method and calculate the corresponding confusion matrix using the corresponding sci-kit-learn library in Python. Additional experimental details and results as well as tables regrouping population characteristics in each condition (PDF) Figure 1 . Figure 1.(a) Isomeric representation of L-and D-AVP (chemical structures created with ChemDraw).(b) Chemical representation of the chirality of vasopressin Arg8.(c) Representation of experimental conditions for the analysis of L-AVP (green) and D-AVP (orange) using a wild-type aerolysin nanopore (ribbon representation of PDB 5JZT using ChimeraX), inserted in a lipid bilayer in 4 M KCl and 25 mM Tris, pH 7.5.With an applied voltage, Cl − and K + ions go through the pore toward the oppositely charged electrodes, creating an ionic current (I 0 ).(Created using BioRender.com.)(d) Example of an event showing the open pore current (I 0 ) and blockade current (I b ) resulting in a blockade level (DI b ) over a dwell time (T t ), characteristic of a peptide at 110 mV.(e) Example of the most representative types of events of L-AVP (green) and D-AVP (orange) at 110 mV.(f) Example of the most representative type of event of L-AVP (blue) and D-AVP (red) in the presence of 5 mM TCEP at 110 mV.(Image generated using Biorender.com.) Figure 2 .Figure 3 . Figure 2. Experimental result of the independent analysis of 10 μM L-and D-AVP with or without 5 mM TCEP using an aerolysin nanopore in 4 M KCl 25 and mM Tris pH 7.5.(a, d, g, j) Example of current traces (filtered at 5 kHz) for each independent experimental condition at 110 mV: L- AVP (green), D-AVP (orange), L-AVP+TCEP (blue), and D-AVP+TCEP (red).Open pore currents, I 0 = 243.82± 1.77 pA for L-AVP (a), I 0 = 246.78± 2.04 pA for D-AVP (d), I 0 = 259.22± 3.33 pA for L-AVP+TCEP (g), and I 0 = 250.16± 3.46 pA for D-AVP+TCEP (j).(b, e, h, k) Scatter plot representing the normalized average blockade level (DI b ) against the dwell time of each event longer than 200 μs between 0.2 and 1.0 in the blockade level representing the interacting population of each peptide in the absence and presence of TCEP.(Raw scatter plot can be found in Supporting Information S3 and S4.) 
(b, e) Scatter plot showing populations of events for L-and D-AVP in the absence of TCEP: Type I between 0.6 and 0.8 blockade level and Type II between 0.38 and 0.6 blockade level.(c, f, i, l) Histograms of normalized blockade levels as a function of the number of events fitted with a Gaussian function (black lines) to determine the most probable blockade level for Type I and a bi-Gaussian for Type II: L-AVP: Type I, 0.72 ± 0.01; Type IIa, 0.43 ± 0.01 for the population with the most events; Type IIb, 0.48 ± 0.01; and D-AVP: Type I, 0.71 ± 0.01; Type IIa, 0.39 ± 0.01; Type IIb, 0.43 ± 0.01.With TCEP average blockade levels fitted by a Gaussian for L-AVP + TCEP: 0.53 ± 0.01 and D-AVP + TCEP: 0.49 ± 0.01.Data shown are from a single recording, with fitted values being mean and standard deviation for three independent fits.NL-AVP = 2812 events; ND-AVP = 3961 events; NL-AVP+TCEP = 13383 events; ND-AVP+TCEP = 7644 events.The number of events was calculated by selecting the population with a dwell time superior to 200 μs, depending on their blockade level.(Image generated using Biorender.com.) Figure 4 . Figure 4. Principal component analysis (PCA).(a) Parameters used to characterize each blockade minimum, average, maximum blockades (DI bmin , DI b , DI bmax ), standard deviation (σ), and duration (T t ) of each blockade.(b) Scatter plot of the average blockade of each D-AVP blockade according to its duration.(c) PCA strategy to decrease the parameter numbers.(d) Second principal component (PC2) according to the first one (PC1).(e) PC2 histogram fitted by a Gaussian distribution (μ D = 0.0474, σ D = 0.0348, describing the most probable value and the standard deviation of the distribution, respectively).(f) Selection of the relevant blockades in the range [μ D ± σ D ] in 10 μM D-AVP using an aerolysin nanopore in 4 M KCl and 25 mM Tris, pH 7.5, V = 110 mV. Figure 5 . Figure 5. Prediction of D-and L-AVP mixtures.(a) Combined data from blockade traces obtained for D-(orange) or L-AVP (green), respectively.The gray events correspond to the ones with σ < 1 pA (10 μM D-or L-AVP using an aerolysin nanopore in 4 M KCl and 25 mM Tris, pH 7.5, V = 110 mV).(b) Distribution of the second principal component (PC2), fitted by bi-Gaussian (gray) or Gaussian (orange or green) curves.(c) Scatter plot of the two principal components.The area colored in orange or green corresponds to the bands centered in μ D (μ L ) and with a width of ±3 σ D (σ L ).It is labeled according to the nature of each isomer.(d) Evaluation of Monte Carlo prediction.Predicted blockade trace according to four types of labeling obtained from the data used in (a).(e) Confusion matrix calculated from the Monte Carlo prediction.(f) Equimolar mixture of 10 μM D-and L-AVP distribution of the second principal component (PC2), fitted by bi-Gaussian (gray) or Gaussian (orange or green) curves.ΔV = 110 mV.(g) Scatter plot of the two principal components.The area colored in orange or green is calculated according to a Monte Carlo algorithm.(h) Blockade trace according to four types of labeling: σ < 1 (gray), D-AVP (orange), L-AVP (green), and not attributed (cyan).
8,033.2
2024-05-03T00:00:00.000
[ "Medicine", "Chemistry" ]
The backtracking survey propagation algorithm for solving random K-SAT problems

Discrete combinatorial optimization has a central role in many scientific disciplines; however, for hard problems we lack linear time algorithms that would allow us to solve very large instances. Moreover, it is still unclear what are the key features that make a discrete combinatorial optimization problem hard to solve. Here we study random K-satisfiability problems with K = 3, 4, which are known to be very hard close to the SAT-UNSAT threshold, where problems stop having solutions. We show that the backtracking survey propagation algorithm, in a time practically linear in the problem size, is able to find solutions very close to the threshold, in a region unreachable by any other algorithm. All solutions found have no frozen variables, thus supporting the conjecture that only unfrozen solutions can be found in linear time, and that a problem becomes impossible to solve in linear time when all solutions contain frozen variables.

Optimization problems with discrete variables are widespread among scientific disciplines and often among the hardest to solve. The K-satisfiability (K-SAT) problem is a combinatorial discrete optimization problem of N Boolean variables, x = {x_i}_{i=1,...,N}, subject to M constraints. Each constraint, called a clause, is the OR of K literals (variables or their negations), and the problem is solvable when there exists at least one configuration of the variables, among the 2^N possible ones, that satisfies all constraints. The K-SAT problem for K ≥ 3 is a central problem in combinatorial optimization: it was among the first problems shown to be NP-complete [1,2,3] and is still very much studied. Efforts from the theoretical computer science community have been especially devoted to the study of the random K-SAT ensemble [4,5], where each formula is generated by randomly choosing M = αN clauses of K literals [6]; indeed, formulas from this ensemble become extremely hard to solve when the clause-to-variable ratio α grows [6], and at the same time the locally tree-like structure of the factor graph [7], which represents the interaction network among variables, makes the random K-SAT ensemble a perfect candidate for an analytic solution. The study of random K-SAT problems and of the related solving algorithms is likely to shed light on the origin of computational complexity and to allow for the development of improved solving algorithms.

Both numerical [8] and analytical [9,10] evidence suggests that a threshold phenomenon takes place in random K-SAT ensembles: in the limit of very large formulas, N → ∞, a typical formula has a solution for α < α_s(K), while it is unsatisfiable for α > α_s(K). It has been very recently proved in Ref. [11] that for K large enough the SAT-UNSAT threshold α_s(K) exists in the N → ∞ limit and coincides with the prediction from the cavity method in statistical physics [12]. A widely accepted conjecture is that the SAT-UNSAT threshold α_s(K) exists for any value of K. The main problem is that finding solutions close to α_s is very hard, and all known algorithms running in polynomial time fail to find solutions for α > α_a. The algorithmic threshold α_a depends on the specific algorithm, but it is well below α_s for most algorithms. There are two main open questions: (i) finding improved algorithms having a larger α_a, and (ii) understanding the theoretical upper bound to α_a. In the present manuscript we present progress on both issues.
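To make the setting concrete, here is a small self-contained sketch (ours, not the paper's code) that draws a formula from the random K-SAT ensemble at clause density α = M/N and evaluates the energy E(x), i.e. the number of unsatisfied clauses, of a candidate assignment.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_ksat(n, alpha, k=3):
    """Random K-SAT formula: M = alpha*N clauses, each over K distinct
    variables, each negated with probability 1/2. A literal is encoded
    as (variable index, sign in {+1, -1})."""
    m = int(alpha * n)
    clauses = []
    for _ in range(m):
        variables = rng.choice(n, size=k, replace=False)
        signs = rng.choice([-1, 1], size=k)
        clauses.append(list(zip(variables, signs)))
    return clauses

def energy(clauses, x):
    """E(x) = number of clauses with no satisfied literal; x is a
    vector of +/-1 spins, literal (i, s) is satisfied when x[i] == s."""
    return sum(all(x[i] != s for i, s in c) for c in clauses)

n = 1000
formula = random_ksat(n, alpha=4.0, k=3)
x = rng.choice([-1, 1], size=n)   # uniform random assignment
print(energy(formula, x))
```

For K = 3 a uniformly random assignment violates each clause with probability 1/8, so the printed energy should be close to M/8.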
The best prediction of the SAT-UNSAT threshold comes from the cavity method [12,13,14,15]: for example, α_s(K = 3) = 4.2667 [14] and α_s(K = 4) = 9.931 [15]. Actually, the statistical physics study of random K-SAT ensembles also provides us with a very detailed description of how the space of solutions changes as α spans the whole SAT phase (0 ≤ α ≤ α_s). Considering typical formulas in the large N limit and the vast majority of solutions in these formulas (i.e. typical solutions), we know that, at low enough α values, the solutions are many and well connected, so as to form a single cluster (in SAT problems we say two solutions are neighbors if they differ in the assignment of just one variable; in other problems, e.g. XOR-SAT [16], this notion of neighborhood needs to be relaxed, because a pair of solutions differing in just one variable is not allowed by construction; as long as the notion of neighborhood is relaxed to Hamming distances o(N), the whole picture of the solution space based on statistical physics remains unaltered). Increasing α, not only does the number of solutions decrease, but at α_d the random K-SAT ensemble undergoes a phase transition and the space of solutions shatters into an exponentially large (in the problem size N) number of clusters: two solutions belonging to different clusters have a Hamming distance O(N). Defining the energy function E(x) as the number of unsatisfied clauses in configuration x, it has been found [12] that for α > α_d there are exponentially many metastable states of positive energy, which may trap algorithms that look for solutions by energy relaxation (e.g. Monte Carlo simulated annealing).

Further increasing α, each cluster loses solutions and shrinks, but the most relevant change is in the number of clusters. The cavity method allows us to count clusters of solutions as a function of the number of solutions they contain [17], and from this very detailed description several other phase transitions have been identified [18,15]. For example, at α_c a condensation phase transition takes place, such that for α > α_c the vast majority of solutions belong to a sub-exponential number of clusters, leading to effective long-range correlations among variables in typical solutions, which are hard to approximate by any algorithm with a finite horizon. In general α_d ≤ α_c ≤ α_s holds. Most of the above picture of the solution space has been proven rigorously in the large K limit [19,20].

Moving to the algorithmic side, a very interesting question is whether such a rich structure of the solution space affects the performance of searching algorithms. While clustering at α_d may have some impact on Monte Carlo based algorithms and on algorithms that try to sample solutions uniformly [21], many algorithms exist that can find at least one solution for α > α_d [12,22,23].

A very likely conjecture is that the hardness of a formula is related to the existence of a subset of highly correlated variables, which is very hard to assign correctly altogether; the worst case being a subset of variables that can have a unique assignment. This concept was introduced with the name of backbone in Ref. [24]. The same concept applied to solutions within a single cluster leads to the definition of frozen variables (within a cluster) as those variables taking the same value in all solutions of the cluster [25]. It has been proven in Ref.
[26] that the fraction of frozen variables in a cluster is either zero or lower bounded by (αe²)^(−1/(K−2)); in the latter case the cluster is called frozen.

According to the above conjecture, finding a solution in a frozen cluster is hard (in practice it should require a time growing exponentially with N). So the smartest algorithm running in polynomial time should search for unfrozen clusters as long as they exist. Unfortunately, counting unfrozen clusters is not an easy job, and indeed a large deviation analysis of their number has been achieved only very recently [27], for a different and simpler problem (bicoloring random regular hypergraphs). For random K-SAT only partial results are known, which can be stated in terms of two thresholds: for α > α_r (rigidity) typical solutions are in frozen clusters (but a minority of solutions may still be unfrozen), while for α > α_f (freezing) all solutions are frozen. It has been rigorously proven [28,29] that α_f < α_s holds strictly for K > 8. For small K, which is the interesting case for benchmarking solving algorithms, we only know that α_f(K = 3) = 4.254(9) from exhaustive enumerations on small formulas (N ≤ 100) [30] and α_r(K = 4) = 9.883(15) from the cavity method [15]. In general α_r ≤ α_f ≤ α_s.

The conjecture above implies that no polynomial time algorithm can solve problems with α ≥ α_f, but also finding solutions close to the rigidity threshold α_r is expected to be very hard, given that unfrozen solutions become a tiny minority. And this is indeed what happens for all known algorithms. Since we are interested in solving very large problems, we only consider algorithms whose running time scales almost linearly with N, and we measure the performance of each algorithm in terms of its algorithmic threshold α_a.

Solving algorithms for random K-SAT problems can be roughly classified into two main categories: (a) algorithms that search for a solution by performing a biased random walk in the space of configurations and (b) algorithms that try to build a solution by assigning variables according to some estimated marginals. WalkSat [31], focused Metropolis search (FMS) [22] and ASAT [23] belong to the former category, while in the latter category we find Belief Propagation guided Decimation (BPD) [21] and Survey Inspired Decimation (SID) [32]. All these algorithms are rather effective in finding solutions to random K-SAT problems: e.g. for K = 4 we have α_a^BPD = 9.05, α_a^FMS ≈ 9.55 and α_a^SID ≈ 9.73, to be compared with the much lower algorithmic threshold α_a^GUC = 5.54 achieved by Generalized Unit Clause, the best algorithm whose convergence to a solution can be proven rigorously [33]. Among the efficient algorithms above, only BPD can be solved analytically [21] to find the algorithmic threshold α_a^BPD; for all the others we are forced to run extensive numerical simulations in order to measure α_a.
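As a concrete illustration of category (a), the sketch below implements a focused random walk in the spirit of WalkSat; it is a schematic reimplementation under the clause encoding introduced earlier, not one of the benchmarked solvers. At each step it picks a violated clause and flips one of its variables, choosing at random with probability p and otherwise greedily minimizing the number of clauses broken.

```python
import random

def walksat(clauses, n, p=0.5, max_flips=10**6, seed=0):
    """Focused random walk: literals are (var, sign) with sign +/-1;
    a literal is satisfied when x[var] == sign."""
    rnd = random.Random(seed)
    x = [rnd.choice((-1, 1)) for _ in range(n)]
    for _ in range(max_flips):
        unsat = [c for c in clauses if all(x[i] != s for i, s in c)]
        if not unsat:
            return x                  # satisfying assignment found
        clause = rnd.choice(unsat)
        if rnd.random() < p:          # noise move: random variable
            var = rnd.choice(clause)[0]
        else:                         # greedy move: minimize breaks
            def breaks(v):
                x[v] = -x[v]
                b = sum(all(x[i] != s for i, s in c) for c in clauses)
                x[v] = -x[v]
                return b
            var = min((v for v, _ in clause), key=breaks)
        x[var] = -x[var]
    return None                       # no solution within the budget
```

Production implementations keep per-variable occurrence lists so a flip costs time proportional to the local neighborhood instead of O(M); that bookkeeping is what makes the near-linear running times quoted above achievable.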
At present the algorithm achieving the best performance on several constraint satisfaction problems is SID, which has been successfully applied to the random K-SAT problem [12] and to the coloring problem [34]. The statistical properties of the SID algorithm for K = 3 have been studied in detail in Refs. [35,32]. Numerical experiments on random 3-SAT problems with a large number of variables, up to N = 3 × 10^5, show that in a time approximately linear in N the SID algorithm finds solutions up to α_a^SID ≈ 4.2525 [35], which is definitely smaller than, although very close to, α_s(K = 3) = 4.2667. In the region α_a^SID < α < α_s the problem is satisfiable for large N, but at present no algorithm can find solutions there.

To fill this gap we study a new algorithm for finding solutions to random K-SAT problems, the Backtracking Survey Propagation (BSP) algorithm. This algorithm (fully explained in the Methods section) is based, as SID, on the survey propagation (SP) equations derived within the cavity method [12,35,32], which provide an estimate of the total number of clusters, N_clus = exp(Σ). The BSP algorithm, like SID, aims at gradually assigning the variables so as to keep the complexity Σ as large as possible, i.e. trying not to kill too many clusters [35]. While in SID each variable is assigned only once, in BSP we allow unsetting variables already assigned, so as to backtrack on previous non-optimal choices. In BSP the parameter r is the ratio between the number of backtracking moves (unsetting one variable) and the number of decimation moves (assigning one variable); r < 1 must hold, and for r = 0 we recover the SID algorithm. Running times scale as 1/(1 − r), but the algorithm complexity remains practically linear in N for any r < 1.

The idea supporting backtracking [36] is that a choice made at the beginning of the decimation process, when most of the variables are unassigned, may turn out to be suboptimal later on, and re-assigning a variable that is no longer consistent with the current best estimate of its marginal probability may lead to a better satisfying configuration. We do not expect the backtracking to be essential when correlations between variables are short ranged, but approaching α_s we know that correlations become long ranged, and thus the assignment of a single variable may affect a huge number of other variables: this is the situation in which we expect the backtracking to be essential.

This idea may look similar in spirit to the survey propagation reinforcement (SPR) algorithm [37], where variables are allowed to change their most likely value during the run, but in practice BSP works much better. In SPR, once the reinforcement fields are large, the re-assignment of any variable becomes unfeasible, while in BSP variables can be re-assigned to better values until the very end, and this is a major advantage.

Results

In this section we show the results of extensive numerical simulations using BSP. In each plot having α on the abscissa, the right end of the plot coincides with the best estimate of α_s, in order to provide an immediate indication of how close to the SAT-UNSAT threshold the algorithm can work.

Probability of finding a SAT assignment

The standard way to study the performance of a solving algorithm is to measure the fraction of instances it can solve as a function of α. We show in Fig.
1 such a fraction for BSP run with three values of the r parameter (r = 0, 0.5 and 0.9) on random 4-SAT problems of two different sizes (N = 5 000 and N = 50 000). The probability of finding a solution increases both with r and with N, but an extrapolation to the large N limit of these data is unlikely to provide a reliable estimate of the algorithmic threshold α_a.

Order parameter and algorithmic threshold

In order to obtain a reliable estimate of α_a we look for an order parameter vanishing at α_a and having very small finite size effects. We identify this order parameter with the quantity Σ_res/N_res, where Σ_res and N_res are respectively the complexity (i.e. the log of the number of clusters) and the number of unassigned variables in the residual formula. As explained in Methods, BSP assigns and re-assigns variables, thus modifying the formula, until the formula simplifies enough that the SP fixed point has only null messages: the residual formula is defined as the last formula with non-null SP fixed point messages. We have experimentally observed that the BSP algorithm (like the SID one [35]) can simplify the formula enough to reach the trivial SP fixed point only if the complexity Σ remains strictly positive during the whole decimation process. In other words, on every run where Σ becomes very close to zero or negative, SP stops converging or a contradiction is found. This may happen either because the original problem was unsatisfiable or because the algorithm made some wrong assignments incompatible with the few available solutions. Thanks to the above observation we have that Σ_res ≥ 0, and thus a null value of the mean residual complexity signals that the BSP algorithm is not able to find any solution, and so provides a valid estimate of the algorithmic threshold α_a^BSP. As we see in the data shown in the upper panel of Fig. 2, the intensive residual complexity Σ_res/N_res is practically size-independent and a linear fit provides a very good data interpolation: tiny finite size effects are visible in the largest N datasets only close to the right end of the dataset. The linear extrapolation predicts α_a^BSP ≈ 9.9 (for K = 4 and r = 0.9), which is slightly above the rigidity threshold α_r = 9.883(15) computed in Ref. [15]. This means that BSP is able to find solutions in a region of α where the majority of solutions are in frozen clusters and thus hard to find. We show below that BSP actually finds solutions in atypical unfrozen clusters, as has been observed already for very smart algorithms solving other kinds of constraint satisfaction problems [38,39].
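The threshold estimate itself is a one-line extrapolation: fit Σ_res/N_res linearly in α and find where the fit crosses zero. A sketch with hypothetical stand-in values (not the measured data) follows.

```python
import numpy as np

# Hypothetical measurements: clause density and intensive residual
# complexity, averaged over instances (stand-ins for the Fig. 2 data).
alpha = np.array([9.60, 9.65, 9.70, 9.75, 9.80, 9.85])
sigma_res = np.array([0.0121, 0.0101, 0.0080, 0.0061, 0.0040, 0.0020])

# Linear fit sigma_res ~ a*alpha + b; the algorithmic threshold is the
# root of the fit, where the residual complexity extrapolates to zero.
a, b = np.polyfit(alpha, sigma_res, 1)
alpha_bsp = -b / a
print(f"alpha_a(BSP) ~ {alpha_bsp:.3f}")   # ~9.90 for these numbers
```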
The effectiveness of the backtracking can be appreciated in the lower panel of Fig. 2, where the order parameter Σ_res/N_res is shown for r = 0 and r = 0.5, together with linear fits to these datasets and to the r = 0.9 dataset (black line). We observe that the algorithmic threshold for BSP is much larger (on the scale measuring the relative distance from the SAT-UNSAT threshold) than the one for SID (i.e. the r = 0 dataset).

For random 3-SAT the algorithmic threshold of BSP, run with r = 0.9, practically coincides with the SAT-UNSAT threshold α_s (see Fig. 3), thus providing strong evidence that BSP can find solutions in the entire SAT phase. The estimate of the freezing threshold α_f = 4.254(9) obtained in Ref. [30] with N ≤ 100 is likely to be too small, given that all solutions found by BSP for N = 10^6 are unfrozen, even for α > α_f.

Computational complexity

As explained in Methods, the BSP algorithm performs O(f^(−1)(1 − r)^(−1)) elementary steps, where at each step fN variables are either assigned [with prob. 1/(1 + r)] or released [with prob. r/(1 + r)]. At the beginning of each step, the algorithm solves the SP equations with a mean number η of iterations, and then sorts the local marginals. The total number of elementary operations to solve an instance grows as f^(−1)(1 − r)^(−1)(ηN + N log N), i.e. the algorithm running time is at most O(N log N). Fig. 4 shows that η is actually a small number changing mildly with α and N, both for K = 3 and K = 4. The main change that we observe is in the fluctuations of η, which for K = 3 become larger beyond α_f (marked by a vertical line). We expect η to eventually grow as O(log N), but for the sizes studied we do not observe such a growth. Moreover, given that the sorting of local marginals does not need to be strict (i.e. a partial sorting [40] running in O(N) time can be enough), in practice the algorithm runs in a time almost linear in the problem size N.

Whitening procedure

Given that the BSP algorithm is able to find solutions even slightly beyond the rigidity threshold α_r, it is natural to check whether these solutions have frozen variables or not. We concentrate on solutions found for random 3-SAT problems with N = 10^6, since the large size of these problems makes the analysis very clean, and because we have a good number of solutions beyond the best estimate of the freezing threshold α_f = 4.254(9) [30].

On each solution found we run the whitening procedure (first introduced in [41] and deeply discussed in [42,26]), which identifies frozen variables by assigning the joker state ∗ to unfrozen variables, i.e. variables that can take more than one value without violating any clause, thus keeping the formula satisfied. At each step of the whitening procedure, a variable is considered unfrozen (and thus assigned to ∗) if it belongs only to clauses which either involve a ∗ variable or are satisfied by another variable. The procedure is continued until all variables are ∗ or a fixed point is reached: non-∗ variables at the fixed point correspond to frozen variables in the starting solution.

We find that all solutions found by BSP are converted to the all-∗ configuration by running the whitening procedure, thus showing that solutions found by BSP have no frozen variables. This is somewhat expected, according to the conjecture discussed in the Introduction: finding solutions in a frozen cluster would take an exponential time, and so the BSP algorithm actually finds solutions at very large α values by smartly focusing on the sub-dominant unfrozen clusters.
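The whitening rule just described is mechanical enough to state in a few lines. The following is a direct, unoptimized rendering of it (our sketch, reusing the literal clause encoding from the earlier snippets):

```python
def whitening(clauses, x):
    """Iteratively assign the joker state '*' to unfrozen variables.
    A variable is whitened when every clause it appears in either
    already contains a '*' variable or is satisfied by another
    variable. Returns the set of frozen (non-'*') variables."""
    n = len(x)
    star = [False] * n
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if star[v]:
                continue
            releasable = True
            for c in clauses:
                if all(i != v for i, _ in c):
                    continue  # clause does not involve v
                if any(star[i] for i, _ in c):
                    continue  # clause already contains a '*' variable
                if any(i != v and x[i] == s for i, s in c):
                    continue  # clause satisfied by another variable
                releasable = False  # clause relies on v: keep v frozen
                break
            if releasable:
                star[v] = True
                changed = True
    return {v for v in range(n) if not star[v]}
```

Indexing clauses per variable turns this quadratic scan into the near-linear procedure whose relaxation is analyzed in Fig. 5.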
The whitening procedure leads to a relaxation of the number of non-∗ variables, as a function of the number of iterations t, that follows a two-step relaxation process [25] with an evident plateau (see upper panel in Fig. 5), which becomes longer as α increases towards the algorithmic threshold. The time for leaving the plateau scales as the time τ(c) for reaching a fraction c of non-∗ variables (with c smaller than the plateau value). The latter has large fluctuations from solution to solution, as shown in the central panel of Fig. 5 for c = 0.4 (very similar, but shifted, histograms are obtained for other c values). However, after leaving the plateau, the dynamics of the whitening procedure is the same for each solution. Indeed, plotting the mean fraction of non-∗ variables as a function of the time to reach the all-∗ configuration, τ(0) − t, we see that fluctuations are strongly suppressed and the relaxation is the same for each solution (see lower panel in Fig. 5).

Critical exponent for the whitening time divergence

In order to quantify the increase of the whitening time approaching the algorithmic threshold, and inspired by critical phenomena, we check for a power law divergence as a function of (α_a − α) or Σ_res, which are linearly related. In Fig. 6 we plot on a double logarithmic scale the mean whitening time τ(c) as a function of the residual complexity Σ_res, for different choices of the fraction c of non-∗ variables defining the whitening time. Data points are interpolated via the power law τ(c) = A(c) + B(c) Σ_res^(−ν), where the critical exponent ν is the same for all the c values. Joint interpolations return the following best estimates for the critical exponent: ν = 0.269(5) for K = 4 and ν = 0.281(6) for K = 3. The two estimates turn out to be well compatible within errors, thus suggesting a sort of universality of the critical behavior approaching the algorithmic threshold α_a^BSP. Nonetheless, a word of caution is needed, since the solutions we are using as starting points for the whitening procedure are atypical solutions (otherwise they would likely contain frozen variables and would not flow to the all-∗ configuration under the whitening procedure). So, while finding universal critical properties in a dynamical process is definitely good news, how to relate it to the behavior of the same process on typical solutions is not obvious (and indeed for the whitening process starting from typical solutions one would expect the naive mean field exponent ν = 1/2, which is much larger than the one we are finding).
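The joint power-law fit with a shared exponent can be expressed by concatenating the datasets for the different c values and fitting per-c amplitudes plus a single global ν. The sketch below illustrates the idea with hypothetical arrays in place of the measured whitening times.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: residual complexity and mean whitening times for
# two choices of c (stand-ins for the Fig. 6 measurements).
sigma = np.array([0.002, 0.004, 0.008, 0.016, 0.002, 0.004, 0.008, 0.016])
which_c = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # dataset label per point
tau = np.array([62.0, 52.0, 43.0, 36.0, 80.0, 67.0, 55.0, 46.0])

def model(X, a0, b0, a1, b1, nu):
    """tau(c) = A(c) + B(c) * sigma**(-nu), with nu shared across c."""
    s, label = X
    a = np.where(label == 0, a0, a1)
    b = np.where(label == 0, b0, b1)
    return a + b * s**(-nu)

popt, pcov = curve_fit(model, (sigma, which_c), tau,
                       p0=(1.0, 10.0, 1.0, 10.0, 0.3))
print(f"shared exponent nu = {popt[-1]:.3f}")
```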
Discussion

We have studied the Backtracking Survey Propagation (BSP) algorithm for finding solutions to very large random K-SAT problems and provided numerical evidence that it works much better than any previously available algorithm; that is, BSP has the largest algorithmic threshold known at present. The main reason for its superiority is the fact that variables can be re-assigned at any time during the run, even at the very end. In other solving algorithms that may look similar, e.g. survey propagation reinforcement [37], re-assignment of variables actually takes place mostly at the beginning of the run, and this is far less efficient on hard problems. Even with a lot of helpful backtracking, the BSP running time is still O(N log N) in the worst case, and thanks to this it can be used on very large problems with millions of clauses.

For K = 3 the BSP algorithm finds solutions practically up to the SAT-UNSAT threshold α_s, while for K = 4 a tiny gap to the SAT-UNSAT threshold still remains, but the algorithmic threshold α_a^BSP seems to be located beyond the rigidity threshold α_r in the large N limit. Beating the rigidity threshold, i.e. finding solutions in a region where the majority of solutions belong to clusters with frozen variables, is hard, but not impossible. Even under the assumption that finding frozen solutions takes a time exponential in N, very smart polynomial time algorithms can look for a solution in the sub-dominant unfrozen clusters [38,39]. BSP belongs to this category, as we have shown that all solutions found by BSP have no frozen variables.

One of the main questions we tried to answer with our extensive numerical simulations is whether BSP is reaching (or closely approaching) the ultimate threshold α_a for polynomial time algorithms solving large random K-SAT problems. Under the assumption that frozen solutions cannot be found in polynomial time, such an algorithmic threshold α_a would coincide with the freezing transition at α_f (i.e. where the last unfrozen solution disappears). Unfortunately, for random K-SAT the location of α_f is not known with enough precision to allow us to reach a definite answer to this question. It would be very interesting to run BSP on random hypergraph bicoloring problems, where the threshold values are known [43] and a very recent work has shown that the large deviation function for the number of unfrozen clusters can be computed [27].

It is worth noticing that the BSP algorithm is easy to parallelize, since most of the operations are local and do not require any strong centralized control. Obviously, the effectiveness of a parallel version of the algorithm would largely depend on the topology of the factor graph representing the specific problem: if the factor graph is an expander, then splitting the problem over several cores may require too much inter-core bandwidth, but for problems having a natural hierarchical structure the parallelization may lead to further performance improvements.
The backtracking introduced in the BSP algorithm helps a lot in correcting errors made during the partial assignment of variables, and this allows the BSP algorithm to reach solutions at large α values. Clearly, the price we pay is that too frequent a backtracking makes the algorithm slower. A natural direction to improve this class of algorithms would be to use biased marginals, focusing on solutions which are easier for the algorithm to reach. For example, in the region α > α_r the measure is concentrated on solutions with frozen variables, but these cannot really be reached by the algorithm. The backtracking thus intervenes and corrects the partial assignment until a solution with unfrozen variables is found by chance. If the marginals could be computed from a new biased measure concentrated on the unfrozen clusters, this could make the algorithm go immediately in the right direction, and hopefully much less backtracking would be needed.

Methods

The Backtracking Survey Propagation (BSP) algorithm is an extension of the Survey Inspired Decimation (SID) algorithm for finding solutions to random K-satisfiability problems. A detailed description of the SID algorithm can be found in Refs. [12,13,32]. Here, we briefly recall SID and describe the novelties leading to the BSP algorithm [36].

Survey Inspired Decimation (SID)

The SID algorithm is based on the survey propagation (SP) equations derived by the cavity method [12,13], which can be written in the compact form

m_{a→i} = ∏_{j∈∂a\i} m_{j→a},

where ∂a is the set of variables in clause a, and ∂⁺_{ia} (resp. ∂⁻_{ia}) is the set of clauses containing x_i, excluding a itself, that are satisfied (resp. not satisfied) when the variable x_i is assigned so as to satisfy clause a.

Physical interpretation of the SP equations: m_{a→i} represents the fraction of clusters where clause a is satisfied solely by variable x_i (that is, x_i is frozen by clause a), while m_{i→a} is the fraction of clusters where x_i is frozen to an assignment not satisfying clause a.
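Expanded, the compact equation above corresponds to the standard survey propagation update of Braunstein, Mézard and Zecchina, in which m_{j→a} is built from products over ∂⁺_{ja} and ∂⁻_{ja}. The sketch below is a plain-Python rendering of one sweep under that standard form; the message containers and adjacency layout are our assumptions, not the paper's code.

```python
def sp_iteration(clauses, adj, m, damping=0.0):
    """One sweep of the standard SP update. m[(a, i)] is the survey
    that clause a sends to variable i; adj[j] lists (clause index,
    sign of x_j in that clause). Returns new messages and max change."""
    new_m = {}
    eps = 0.0
    for a, clause in enumerate(clauses):
        for i, s_i in clause:
            prod = 1.0
            for j, s_j in clause:
                if j == i:
                    continue
                # Split the other clauses of j by whether they are
                # satisfied when x_j is set so as to satisfy clause a.
                pi_plus = pi_minus = 1.0
                for b, s_b in adj[j]:
                    if b == a:
                        continue
                    if s_b == s_j:              # b in d+_{ja}
                        pi_plus *= 1.0 - m[(b, j)]
                    else:                       # b in d-_{ja}
                        pi_minus *= 1.0 - m[(b, j)]
                pi_u = (1.0 - pi_minus) * pi_plus   # j forced to violate a
                pi_s = (1.0 - pi_plus) * pi_minus   # j forced to satisfy a
                pi_0 = pi_plus * pi_minus           # j unconstrained
                denom = pi_u + pi_s + pi_0
                prod *= pi_u / denom if denom > 0 else 0.0
            val = (1.0 - damping) * prod + damping * m[(a, i)]
            eps = max(eps, abs(val - m[(a, i)]))
            new_m[(a, i)] = val
    return new_m, eps

# Typical use: initialize every message m[(a, i)] uniformly at random
# in (0, 1) and sweep until eps falls below a tolerance.
```

Here pi_u, pi_s and pi_0 are the weights of x_j being forced to violate a, forced to satisfy a, or left unconstrained; their normalized ratio is the m_{j→a} entering the compact product.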
The SP equations impose 2KM self-consistency conditions on the 2KM variables {m_{i→a}, m_{a→i}} living on the edges of the factor graph [7], which are solved in an iterative way, leading to a message passing algorithm (MPA) [4], where the outgoing messages from a factor graph node (variable or clause) are functions of the incoming messages. Once the MPA reaches a fixed point {m_{i→a}, m_{a→i}} that solves the SP equations, the number of clusters can be estimated via the complexity Σ, a function of the fixed-point messages in which K_a is the length of clause a (initially K_a = K) and ∂⁺_i (resp. ∂⁻_i) is the set of clauses satisfied by setting x_i = 1 (resp. x_i = −1). The SP fixed point messages also provide information about the fraction of clusters where variable x_i is forced to be positive (w⁺_i), negative (w⁻_i), or not forced at all (1 − w⁺_i − w⁻_i).

The SID algorithm then proceeds by assigning variables (decimation step). According to the SP equations, by assigning a variable x_i to its most probable value (i.e., setting x_i = 1 if w⁺_i > w⁻_i and vice versa), the number of clusters gets multiplied by a factor, called the bias b_i. With the aim of decreasing the number of clusters as little as possible, and thus keeping the number of surviving solutions as large as possible at each decimation step, SID assigns/decimates the variables with the largest b_i values. In order to keep the algorithm efficient, at each decimation step a small fraction f of variables is assigned, so that a solution can be found in O(log N) decimation steps. After each decimation step, the SP equations are solved again on the subproblem, which is obtained by removing the satisfied clauses and shortening the clauses containing a false literal (unless a zero-length clause is generated, in which case the algorithm returns a failure). The complexity and the biases are updated according to the new fixed point messages, and a new decimation step is performed.

The main idea of the SID algorithm is that, by fixing the variables which are almost certain to their most probable value, one can reduce the size of the problem without reducing too much the number of solutions. The evolution of the complexity Σ during the SID algorithm can be very informative [35]. Indeed, it is found that, if Σ becomes too small or negative, the SID algorithm is likely to fail, either because the iterative method for solving the SP equations no longer converges to a fixed point or because a contradiction is generated by assigning variables. In these cases the SID algorithm returns a failure. On the contrary, if Σ always remains well positive, the SID algorithm reduces the problem so much that eventually a trivial SP fixed point, m_{i→a} = m_{a→i} = 0, is reached. This is a strong hint that the remaining subproblem is easy, and the SID algorithm tries to solve it by WalkSat [31].

A careful analysis of the SID algorithm for random 3-SAT problems of size N = O(10^5) shows that the algorithmic threshold achievable by SID is α_a^SID = 4.2525 [35], which is close to, but definitely smaller than, the SAT-UNSAT threshold α_s = 4.2667.

The experimentally measured running time of the SID algorithm is O(N log N). However, it cannot be excluded that an extra factor log N is present in the convergence time of the iterative solution of the SP equations, but for sizes up to N = O(10^7) it has not been observed [32].
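From a fixed point one can compute the per-variable quantities w⁺_i and w⁻_i and rank variables for decimation. The sketch below uses the standard SP marginal formulas and, since the paper's precise definition of the bias b_i is not reproduced here, the common |w⁺_i − w⁻_i| ranking as a stand-in for it.

```python
def sp_marginals(adj, m):
    """Per-variable SP marginals in the standard form: w_plus[i] and
    w_minus[i] estimate the fraction of clusters where x_i is forced
    to +1 / -1; the remainder, 1 - w+ - w-, is unconstrained."""
    w_plus, w_minus = {}, {}
    for i, clauses_of_i in adj.items():
        pp = pm = 1.0
        for b, s_b in clauses_of_i:
            if s_b == +1:               # clause satisfied by x_i = +1
                pp *= 1.0 - m[(b, i)]
            else:                       # clause satisfied by x_i = -1
                pm *= 1.0 - m[(b, i)]
        pi_plus = (1.0 - pp) * pm       # forced to +1
        pi_minus = (1.0 - pm) * pp      # forced to -1
        pi_zero = pp * pm               # unconstrained
        z = pi_plus + pi_minus + pi_zero
        w_plus[i], w_minus[i] = pi_plus / z, pi_minus / z
    return w_plus, w_minus

def pick_decimation(w_plus, w_minus, f):
    """Select the fraction f of free variables with the largest bias
    proxy |w+ - w-| and return (variable, chosen value) pairs."""
    bias = {i: abs(w_plus[i] - w_minus[i]) for i in w_plus}
    top = sorted(bias, key=bias.get, reverse=True)
    top = top[:max(1, int(f * len(bias)))]
    return [(i, +1 if w_plus[i] > w_minus[i] else -1) for i in top]
```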
Backtracking Survey Propagation (BSP)

To improve the SID algorithm so that it finds solutions also in the region α_a^SID < α < α_s, one has to reconsider the way variables are assigned. It is clear that the fact that the SID algorithm assigns each variable only once is a strong limitation, especially in a situation where correlations between variables become extremely strong and long-ranged. In difficult problems it can easily happen that one realizes a variable is taking the wrong value only after having assigned some of its neighbouring variables. However, the SID algorithm is not able to resolve this kind of frustrating situation.

The BSP algorithm tries to solve this kind of problematic situation by introducing a new backtracking step, where a variable already assigned can be released and eventually re-assigned in a future decimation step. It is not difficult to understand when it is worth releasing a variable. The bias b_i, in terms of the SP fixed-point messages {m_{a→i}}_{a∈∂i} arriving at i, can be computed also for a variable x_i already assigned: if the bias b_i, which was large at the time the variable x_i was assigned, gets strongly reduced by the effect of assigning other variables, then it is likely that releasing the variable x_i may be beneficial in the search for a solution. So both the variables to be fixed in the decimation step and the variables to be released in the backtracking step are chosen according to their biases b_i: variables to be fixed have the largest biases and variables to be released have the smallest biases.

The BSP algorithm then proceeds similarly to SID, by alternating the iterative solution of the SP equations and a step of decimation or backtracking on a fraction f of variables, in order to keep the algorithm efficient (in all our numerical experiments we have used f = 10⁻³). The choice between a decimation and a backtracking step is made according to a stochastic rule (unless there are no variables to unset), where the parameter r ∈ [0, 1) represents the ratio of backtracking steps to decimation steps. Obviously for r = 0 we recover the SID algorithm, since no backtracking step is ever performed. Increasing r, the algorithm becomes slower by a factor 1/(1 − r), because most variables are reassigned O(1/(1 − r)) times before the BSP algorithm finishes, but its time complexity remains O(N log N) in the problem size.

The BSP algorithm can stop for the same reasons the SID algorithm does: either the SP equations cannot be solved iteratively or the generated subproblem contains a contradiction. Both cases happen when the complexity Σ becomes too small or negative. On the contrary, if the complexity always remains positive, BSP eventually generates a subproblem where all SP messages are null, and on this subproblem WalkSat is called.

The code used for collecting data shown in this study is available from the authors upon request.

Figure 1: Fraction of random 4-SAT instances solved by BSP. Such a fraction is computed over 100 instances at each α value for N = 5 000 (solid symbols) and N = 50 000 (empty symbols). On instances not solved, a second run rarely (< 1%) finds a solution.

Figure 2: BSP algorithmic threshold on random 4-SAT problems. The upper panel shows Σ_res/N_res as a function of α, for random 4-SAT problems solved by BSP with r = 0.9. This order parameter is mostly size-independent (few deviations appear only at the right end) and vanishes at α_a^BSP ≈ 9.9, slightly beyond the rigidity threshold α_r = 9.883(15), marked by a vertical line. The lower panel shows that the same linear extrapolation holds for other values of r, the black line being the fit to the r = 0.9 data shown in the upper panel; SID without backtracking (r = 0) has a much lower algorithmic threshold, α_a^SID ≈ 9.83.

Figure 3: BSP algorithmic threshold on random 3-SAT problems. Same as Fig. 2 for K = 3.
BSP does not show any change at the freezing threshold α_f = 4.254(9) measured on small problems [30], marked by the vertical line. A linear fit to the residual complexity (red line) extrapolates to zero slightly beyond the SAT-UNSAT threshold, at α_a^BSP ≈ 4.268, strongly suggesting that BSP can find solutions in the entire SAT phase for K = 3 in the large N limit. The black line is a linear fit vanishing at α_s.

Figure 4: BSP convergence time. The mean number η of iterations needed to reach a fixed point of the SP equations grows very mildly with α and N, both for K = 3 (upper panel) and K = 4 (lower panel).

Figure 5: Whitening random 3-SAT solutions. The upper panel shows the mean fraction of non-∗ variables during the whitening procedure applied to all solutions found by the BSP algorithm. The line is the SP prediction for the fraction of frozen variables in typical solutions at α = 4.262, and shows that solutions found by BSP are atypical. The central panel shows the histograms of the whitening times, defined as the number of iterations required to reach a fraction 0.4 of non-∗ variables. Increasing α, both the mean value and the variance of the whitening times grow. The lower panel shows the same data used for the upper one, but averaged at fixed τ(0) − t, the time to reach the all-∗ fixed point. Errors are tiny since the whitening dynamics below the plateau is the same for each solution.

Figure 6: Critical exponent for the whitening time divergence. The whitening time τ(c), defined as the mean time needed to reach a fraction c of non-∗ variables in the whitening procedure, is plotted on a double logarithmic scale as a function of Σ_res for random 3-SAT problems with N = 10⁶ (upper dataset) and random 4-SAT problems with N = 5 × 10⁴ (lower dataset). The lines are power-law fits with exponent ν = 0.281(6) for K = 3 and ν = 0.269(5) for K = 4.
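The stochastic alternation between decimation and backtracking is simple to state in isolation. The sketch below reproduces only that scheduling rule, with the bias-based selection of which variables to fix or release stubbed out as comments: each step is a backtracking step with probability r/(1 + r) and a decimation step otherwise, so released and assigned moves occur in ratio r.

```python
import random

def bsp_schedule(n_steps, r, seed=0):
    """Illustrates only the BSP decimation/backtracking alternation."""
    rnd = random.Random(seed)
    assigned = set()
    history = []
    for _ in range(n_steps):
        do_backtrack = assigned and rnd.random() < r / (1.0 + r)
        if do_backtrack:
            assigned.remove(rnd.choice(sorted(assigned)))  # stand-in for
            history.append("release")   # releasing the smallest-bias vars
        else:
            assigned.add(len(history))  # stand-in for fixing the
            history.append("fix")       # largest-bias variables
    return history

moves = bsp_schedule(10_000, r=0.9)
print(moves.count("release") / moves.count("fix"))   # ~= r = 0.9
```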
8,968.4
2015-08-20T00:00:00.000
[ "Computer Science", "Mathematics" ]
Nonadiabatic analysis of strange-modes in hot massive stars with time-dependent convection

We carry out a nonadiabatic analysis of strange modes in hot massive stars with time-dependent convection (TDC). We find that the instability of the modes excited at the Fe bump is weaker with TDC than with frozen-in convection (FC). But the instability still remains with TDC, and could be a possible candidate for the trigger of luminous blue variable (LBV) phenomena.

Introduction

"Strange modes" were originally found in a theoretical study [1]. They are one type of stellar pulsation mode, but have significantly different properties from the ordinary modes found in many observed pulsators. They appear only in very luminous stars with L/M ≳ 10⁴ L_⊙/M_⊙ [2], and have extremely short growth timescales, comparable with the pulsation periods.

As for massive stars, [4] first suggested instability of strange modes near the Humphreys-Davidson (HD, [3]) limit. Because of the rapid growth of the amplitude, the instability might be associated with envelope expansion in luminous blue variables (LBVs), and nonlinear analyses are ongoing to obtain the pulsationally driven mass loss (e.g. [5], [6], [7], [8]). A recent observation [9] found that the mass-loss rate in a luminous B star changes on a timescale comparable to the pulsation, which could be regarded as a strange mode [10].

Stability of strange modes has so far been analyzed with frozen-in convection (FC, e.g. [11], [12]). However, the Fe opacity bump in hot massive stars causes convection with a certain contribution to the energy transfer, and with a timescale comparable to the pulsation periods. Therefore, the convective effects on pulsation are not necessarily negligible. In this study, we implement the time-dependent convection (TDC) formulation of [13] in the nonadiabatic code of [14], and analyze the pulsational stability of radial modes in hot massive stars.

TDC analysis

Fig. 1 shows modal diagrams of radial modes in the 50 M_⊙ model sequence with X = 0.70, Z = 0.02, constructed with MESA [15], for the cases of FC and TDC. While the ascending sequences (e.g. A1, A2) correspond to ordinary modes, the descending ones correspond to strange modes. The instability on the D1 and D2 sequences is weaker with TDC than with FC. For these sequences, the excitation takes place around the Fe bump convection zone. When TDC is taken into account, damping effects due to the convection compensate the excitation and weaken the instability. On the other hand, the instability of D3 is hardly different between the FC and TDC cases. For the D3 sequence, the excitation occurs around the He opacity bump, where the ratio of the convective to the total luminosity, L_C/L_r, is negligible and the convective timescale is much longer than the pulsation periods; by contrast, the ratio L_C/L_r is non-negligible (∼ 10⁻¹) and the convective timescale is comparable to the pulsation periods in the Fe bump convection zone.

Fig. 1 also shows solutions obtained with the adiabatic approximation. On the high effective temperature side, the D1 sequence fits a sequence of adiabatic solutions. In this case, strange modes are excited by the κ-mechanism. On the other hand, the D1 sequence on the low effective temperature side, and the D2 and D3 sequences, do not have corresponding adiabatic sequences. In this case, the so-called "strange-mode instability" acts on the modes (see [16] for details).
We have shown that TDC weakens the instability of the strange modes. But the instability still remains with TDC, and could be a possible candidate for the trigger of LBV phenomena, although further nonlinear analyses are required.

Fig. 1. Radial modes in the 50 M_⊙ model sequence (X = 0.70, Z = 0.02) for FC (left) and TDC (right). The crosses are modes obtained with the adiabatic approximation. The open and filled circles are pulsationally stable and unstable modes, respectively, obtained with the nonadiabatic analysis. The vertical axis is the nondimensional pulsation frequency multiplied by the dynamical timescale √(R³/(GM)). The horizontal axis is the logarithm of the effective temperature. The vertical dashed line indicates the evolutionary stage crossing the HD limit.
884.8
2015-09-01T00:00:00.000
[ "Physics" ]
Rhizoctonia solani as causal agent of damping off of Swiss chard in Spain

During September 2011, post-emergence damping off of Swiss chard (Beta vulgaris subsp. cicla L.) was observed in a greenhouse in Villa del Prado (Spain). About 20% of the seedlings showed damping off symptoms. Lesions were initially water-soaked, dark brown necrosis of crown tissue, irregular in shape and sunken in appearance on large plants, causing the infected seedlings to collapse and eventually die. Rhizoctonia solani was isolated consistently from symptomatic plants. After morphological and molecular identification of the isolates, pathogenicity was tested by placing agar plugs of four isolates adjacent to the stem at the three or four true leaf stage. In inoculated plants, brown crown and stem necrosis occurred, while control plants did not show disease symptoms. Pathogenicity using non-germinated seeds was also tested. All four isolates produced extensive damping off when inoculated on non-germinated seeds. To our knowledge, this is the first report of damping off of Swiss chard caused by R. solani in Europe.

Additional key words: Beta vulgaris subsp. cicla; damping-off; morphological identification; molecular identification; silver beet.

The isolates were morphologically identified as Rhizoctonia solani (Sneh et al., 1991). Pythium spp. and Ulocladium spp. were also isolated from the samples, but always from a very low percentage of samples (< 8%).

Molecular identification was performed by sequencing the ITS1-5.8S-ITS2 region of the rDNA. PCR amplifications were carried out using the primer set ITS1/ITS4 and the conditions described by White et al. (1990). The fragments obtained were subsequently sequenced in both directions. Subsequent database searches with the BLASTN software indicated that the resulting sequence of 526 bp had 100% identity with the corresponding gene sequence of R. solani anastomosis group (AG) 4, a common soil fungus with a wide host range that causes a number of plant diseases. The sequences were deposited in the EMBL Sequence Database (Accession numbers HE655451, HE655450, HE655449). Four R.
solani isolates, the main pathogen isolated from diseased plants, were tested in pathogenicity assays. These isolates were maintained on PDA medium and stored at 4°C in the fungus collection of the Plant Production Department of the Technical University of Madrid. Pathogenicity was confirmed through inoculation of healthy Swiss chard plants of cv. Lyon Yellow, commonly used in the area. Four-week-old plants were grown in 1000 mL plastic greenhouse pots, previously filled with a disinfected (twice autoclaved at 105 kPa, 30 min at 120°C) mix of vermiculite and peat (1:1). Plants were inoculated with each of the isolates by placing a 5-mm PDA plug of mycelium at the stem base and covering it with a thin layer of substrate. Another four plants treated with non-inoculated PDA served as controls. The experiment was repeated. After inoculation, the plants were maintained at 23-28°C with a 14 h photoperiod of 10,000 lux cool white fluorescent light. Disease symptoms were graded on a 0 to 5 scale as follows: 0 = no symptoms; 1 = reddish brown colored crown; 2 = necrosis of crown tissue only; 3 = necrosis of crown and leafstalks; 4 = crown completely rotted with defoliation; 5 = completely rotted dead plant. A disease severity index (DSI) was calculated as the mean of four plants for each isolate and test replicate. Disease symptoms were recorded 3 weeks after inoculation. At the end of the experiment, plants were oven-dried at 80°C for 48 h before dry weight was determined.

The same four selected R. solani isolates were used to test their effect on Swiss chard seed germination and on the occurrence of damping off disease in cv. Lyon Yellow.

Swiss chard enlarges the vegetable assortment, being grown as a vegetable for its edible leaves and stalks. Swiss chard is also a very nutrient-demanding species (Pokluda & Kuben, 2002). The Swiss chard area in Europe is small. Annual Spanish Swiss chard production is around 66,500 tons grown on 2,100 ha. Villa del Prado is the main production area within the Community of Madrid, with 40.5 ha and a total annual production of 9,624 tons. In Villa del Prado (Madrid), chard harvest is manual and staggered, removing the outer leaves and allowing the inner ones to grow, so the most tender leaves are harvested (Hoyos et al., 2005).

In September 2011, symptoms of damping-off were observed on approximately 20% of the plants, at the stem base around the soil line, in Swiss chard seedlings of cv. Lyon Yellow in a greenhouse (800 m², tunnel type) in Villa del Prado (Spain) where the seedlings had been transplanted. The symptoms were not detected in any other greenhouses in the area. Lesions were initially water-soaked, dark brown necrosis of crown tissue, irregular in shape, and sunken in appearance on large plants, causing the infected seedlings to collapse and eventually die. These symptoms were similar to those caused by R. solani in other crops and prompted us to determine the etiology of this damping-off. R. solani has been previously reported to cause damping-off of Beta vulgaris subsp. cicla L. in California (USA) (Koike & Subbarao, 1999) and China (Yang et al., 2007), and it has been previously reported in Spain in many crops, for example in strawberry nurseries (Duhart et al., 2000; De Cal et al., 2004), cotton (Melero-Vara & Jimenez-Díaz, 1990), potato (Sardiña, 1945) and French beans (Tello et al., 1985; Sinobas et al., 1994).
Small pieces of symptomatic lower stem and roots of 25 symptomatic plants were surface disinfected in sodium hypochlorite (0.5% w/v) for 2 min and air dried. The sections were then placed on PDA (potato dextrose agar) medium and on a selective medium for Oomycetes (Ponchet et al., 1972) and incubated for 5 days at 25°C. Isolations from diseased stem and root tissue consistently yielded light gray to brown fungal colonies with abundant mycelial growth and dark brown sclerotia, in 92% and 84% of the samples plated on PDA and Ponchet media, respectively. The hyphae tended to branch at right angles when examined under the microscope. A septum was always present in the branch of a hypha near the point of origin, and a slight constriction at the branch was observed. The isolates were morphologically identified as R. solani.

Seeds were surface disinfested in NaOCl (40-50 mg L⁻¹ active Cl₂) for 3 min, rinsed five times in sterile distilled water and placed on surface-disinfested 36 × 52 × 7 cm plastic trays previously filled to two-thirds capacity with an autoclaved (105 kPa, 30 min at 120°C) vermiculite substrate (Agroalse S.L., Moncada, Valencia, Spain). Mycelium from one actively growing 10-14 day old culture on PDA, incubated under the same conditions indicated above, was homogenized in sterile distilled water, and a volume of 200 mL was added to each tray seeded with 50 non-germinated seeds. Following inoculation, seeds were covered with a 1 cm layer of autoclaved vermiculite. Control seeds were treated with PDA homogenized in sterile water. Inoculated and control plants were maintained at 20-25°C with a 14 h photoperiod of 18.8 µE m⁻² s⁻¹. After 14 days, seedlings were rated for damping off as described by Schumann & D'Arcy (2006), following the recommendations of the International Seed Testing Association standards (ISTA, 2004). The experiment was repeated.

Data collected in the experiments were subjected to one-way ANOVA, with DSI or weight as the dependent variable and isolate as the independent variable. Parametric analyses (one-way ANOVA) were used when Levene's test indicated no significant heterogeneity of variance. Non-parametric analyses (Kruskal-Wallis test) were used when the control treatment was not included in the analyses and the heterogeneity of variance was significant. Similar analyses were carried out for seed germination. All calculations were carried out using Statgraphics Centurion XV.II (Statistical Graphics Corp., Herndon, VA, USA).

Symptoms of inoculated plants included wilting and brown to black necrosis of the lower taproot of three or four leaf stage chard. Water-soaked, brown lesions, identical to the symptoms described above, were observed on the stem base of all inoculated plants, whereas no symptoms developed on the control plants. The fungus was isolated from affected crown samples, and its identity was confirmed by the microscopic appearance of the hyphae, fulfilling Koch's postulates. This pathogenicity test was conducted twice.

DSI values from inoculated Swiss chard were always higher than 2.5 and were not significantly different (p = 0.374) among isolates (Fig. 1A). All inoculated R. solani isolates caused dry weight reductions in inoculated chard plants when compared to that estimated for control plants (Fig. 1B). Isolate 1 caused the most severe decreases, with up to a 44% significant reduction (p < 0.05) in dry weight of seedlings, but isolate 2 did not cause significant reductions as compared to control plants.
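The Levene-gated choice between one-way ANOVA and the Kruskal-Wallis test described above is straightforward to express with scipy; the sketch below uses made-up 0-5 ratings purely as placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical DSI ratings (0-5 scale) for four plants per isolate;
# placeholders only, not the measured values.
groups = {
    "isolate1": np.array([4, 3, 4, 5]),
    "isolate2": np.array([3, 3, 2, 3]),
    "isolate3": np.array([4, 4, 3, 4]),
    "isolate4": np.array([3, 4, 4, 3]),
}
dsi = {name: ratings.mean() for name, ratings in groups.items()}
print(dsi)  # disease severity index = mean rating per isolate

# Levene's test for heterogeneity of variance gates the analysis.
samples = list(groups.values())
_, p_levene = stats.levene(*samples)
if p_levene > 0.05:                      # variances homogeneous
    _, p = stats.f_oneway(*samples)      # parametric one-way ANOVA
else:                                    # variances heterogeneous
    _, p = stats.kruskal(*samples)       # non-parametric alternative
print(f"p = {p:.3f}")
```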
The same isolates were tested for pathogenicity on non-germinated seeds. All four isolates produced extensive damping-off on the non-germinated seeds, without significant differences among them (Fig. 2), with a disease incidence above 90% for the different pathogen isolates.

Successful control of Rhizoctonia damping-off remains a serious problem for cucumber and melon cultivation in Spain (Tello et al., 1990), with the pathogen having been reported in different crops in the last few years (El Bakaki, 2000; Delgado et al., 2005). R. solani has been previously reported to cause damping-off of B. vulgaris subsp. cicla L. in California, USA (Koike & Subbarao, 1999) and China (Yang et al., 2007).

The current losses caused by R. solani have not been accurately evaluated in the Swiss chard production area in Spain, but taking our results into account, such an evaluation seems necessary in the near future. To our knowledge, this is the first report of damping-off caused by R. solani on Swiss chard in Europe. With the extended use of Swiss chard in crop rotations with vegetables such as cucumber, the occurrence of the damping-off pathogen needs to be taken into account when designing disease management programs or when selecting crops for rotations.

Figure 1. Disease severity index (A) and dry weight (B) of Swiss chard (Beta vulgaris ssp. cicla) seedlings following artificial inoculation with four isolates of R. solani. Thin bars indicate the standard deviation of the data.

Figure 2. Effects of inoculation with four isolates of Rhizoctonia solani on damping off in Swiss chard (Beta vulgaris ssp. cicla). Thin bars indicate the standard deviation of the data.
2,291.8
2012-10-30T00:00:00.000
[ "Environmental Science", "Biology" ]
From genes to environment in shaping of an embryo: understanding embryonic-extraembryonic interactions at the BSDB autumn meeting in Oxford

The British Society for Developmental Biology Autumn Meeting, held in Oxford in September 2018, was the third in a series of international workshops focussed on development at the extraembryonic-embryonic interface. This workshop, entitled "Embryonic-Extraembryonic Interactions: from Genetics to Environment", built on the two previous workshops held in 2011 (Leuven, Belgium) and 2015 (Göttingen, Germany). The workshop brought together researchers utilising a diverse range of organisms (including both vertebrate and invertebrate species) and a range of experimental approaches to answer core questions in developmental biology. This meeting report highlights some of the major themes emerging from the workshop, including an evolutionary perspective, as well as recent advances that have been made through the adoption of emerging techniques and technologies.

It was a warm late summer day in Oxford when 92 developmental biologists descended for the British Society for Developmental Biology Autumn meeting (September 10th-13th), held at Corpus Christi College at the University of Oxford. The organisers (Kat Hadjantonakis (Sloan Kettering Institute, US), Kristen Panfilio (University of Warwick, UK; University of Cologne, Germany), Tristan Rodriguez (Imperial College London, UK), Susana M. Chuva de Sousa Lopes (Leiden University Medical Centre, Netherlands) and Shankar Srinivas (University of Oxford, UK)) had put together a diverse and dynamic meeting programme. This meeting built on the success of the two previous meetings (Downs 2011; Stern 2015) and included an expanded contribution from researchers using invertebrate and non-model vertebrate systems. Indeed, participants presented not only state-of-the-art knowledge on the interplay between embryonic and extraembryonic tissues, but also bridged diverse invertebrate and vertebrate animal models and illustrated the utility of advanced cell and molecular biology approaches for addressing core questions in developmental biology.

Embryonic or extraembryonic? Cell fate specification in the early embryo

One of the core scientific questions recurring during the meeting was how cells are directed towards an embryonic or extraembryonic fate. In the case of mammals, the initial decisions are made in preimplantation embryos, where blastomeres first "decide" whether to form an inner cell mass (ICM), which gives rise to the future embryo body and extraembryonic membranes, or a trophectoderm (TE), which will create the embryonic part of the placenta. Later, differentiation events occur inside the ICM itself, with specification of the epiblast (EPI) and primitive endoderm (PE) lineages. The EPI gives rise to the embryo proper together with extraembryonic membranes, including the allantois and amnion, and the PE gives rise to the endodermal layer of the yolk sac. Many talks were also dedicated to other key developmental events, such as anterior-posterior (A-P) or dorsal-ventral (D-V) axis formation, gastrulation, and neurulation, both in mammals and in non-mammalian species.
The meeting was opened by Elizabeth Robertson (University of Oxford, UK), who discussed some of her seminal work on Nodal signalling in the segregation of embryonic and extraembryonic tissues in early post-implantation development of mouse embryos, including her exciting recent studies demonstrating that Smad2/3 are required to maintain distinct embryonic and extra-embryonic cell identity in the EPI during lineage priming (Senft et al. 2018). Elizabeth's talk also reinforced the insight that can be gained from integrating different approaches, such as ATAC-seq (Nelson et al. 2017), embryonic stem cells, and knockouts (Senft et al. 2018), to understand fundamental biological questions; this set the scene for what was an on-going theme throughout the meeting.

Several talks addressed the earliest stages of embryo lineage specification; in particular, Miguel Manzanares (Centro Nacional de Investigaciones Cardiovasculares, Spain) presented his results on the role of Notch signalling in ICM vs. TE differentiation in the mouse (Menchero et al. 2018). Takashi Hiiragi (European Molecular Biology Laboratory, Germany), on the other hand, used mouse blastocysts displaying pulsatile shape changes as an experimental model to investigate the interplay between embryo size, TE biomechanical properties and TE functionality (Chan et al. 2018). Véronique Azuara's (Imperial College London, UK) talk revealed the role of the BMI1 transcription factor in EPI/PE specification in mouse blastocysts. Claire Chazaud (GReD Research Centre, France) showed how experimental work can be complemented by mathematical modelling in order to dissect the roles of Nanog, Gata6 and FGF signalling in the differentiation of murine ICM cells (Tosenberger et al. 2017). Nestor Saiz from Kat Hadjantonakis' group (Sloan Kettering Institute, US) discussed a potential mechanism that links EPI/PE lineage size with fate decisions of individual ICM cells. Ayaka Yanagida from Jennifer Nichols' and Kevin Chalut's groups (University of Cambridge, UK) focused on the relationship between the biomechanical properties of EPI/PE progenitor cells and their ability to segregate into the separate EPI and PE layers typical of a mature mouse blastocyst. Nicolas Porchet from Jérôme Collignon's lab (Institut Jacques Monod, France) described the involvement of Nodal signalling in PE specification in mouse embryos.

Progressing to the post-implantation stages, Matthew Stower from Shankar Srinivas' group (University of Oxford, UK) and Go Shioi from Yasuhide Furuta's lab (RIKEN, Japan) showed how fluorescently tagged proteins and time-lapse imaging helped reveal the role of cellular rearrangements in the formation of the anterior visceral endoderm (AVE) and distal visceral endoderm (DVE) in mouse embryos (Shioi et al. 2017). Di Hu, again from Srinivas' group, stayed on the mouse theme and discussed the role of the Ets2 transcription factor in extraembryonic ectoderm (ExE) and AVE specification. Jennifer Nichols (University of Cambridge, UK) gave a comprehensive talk about the functions of Oct4 in mammalian embryonic development, starting from preimplantation stages through A-P axis formation to gastrulation (Mulas et al. 2018). Gastrulation, and particularly the involvement of Nodal signalling in this process, was also the main topic of Vasso Episkopou's (Imperial College London, UK) talk, presenting the dose-dependent functions of Nodal (Carthy et al. 2018).
Following on from this, Elisabetta Ferretti (Novo Nordisk Center for Stem Cell Biology, Denmark) discussed the molecular mechanisms of mesoderm formation. The 2018 Dennis Summerbell Award Lecture was delivered by Mariya Dobreva (VIB-KU Leuven Centre for Brain and Disease Research, Belgium). This award was given for Mariya's detailed and elegant lineage tracing experiments that defined the developmental origins of the amnion, the innermost extraembryonic membrane. Mariya's analyses indicate that the amniotic ectoderm arises from four types of progenitor cells residing in the early proximal anterolateral epiblast and that Smad5 has an inductive role in mediating spatial cues crucial for establishment of the amniotic ectoderm (Dobreva et al. 2018). Although the mouse is still the predominant model for scientists interested in preimplantation embryonic development, other mammalian species made an appearance at this meeting as well. A significant part of the conference was dedicated to marsupial, rabbit, bovine and even human preimplantation embryos. A highlight was Stephen Frankenberg's (University of Melbourne, Australia) talk on his pioneering research regarding embryonic lineage differentiation in marsupials (wallabies and dunnarts), focusing mostly on a potential role of Gata2 in TE formation. James Turner (Francis Crick Institute, UK) talked about his ground-breaking research using single-cell RNA-seq in marsupial (opossum) embryos to search for lineage specification markers and mechanisms of X-chromosome dosage compensation. Berenika Plusa (The University of Manchester, UK) compared the roles of Nanog, Gata6 and FGF signalling in mouse, rabbit and human embryos and demonstrated differences in lineage specification mechanisms between these species. The rabbit story was continued by Anna Piliszek (Institute of Genetics and Animal Breeding, Polish Academy of Sciences, Poland), whose results reveal that ICM cells in rabbits differentiate later than in mouse embryos (Piliszek et al. 2017). Zofia Madeja (Poznan University of Life Sciences, Poland) discussed the relationship between chromatin territory structure and gene expression (Orsztynowicz et al. 2017), and also the signalling pathways involved in maintaining pluripotency in bovine epiblast cells and embryonic stem cells (ESCs) (Madeja et al. 2015). Moving further through embryogenesis, Monika Bialecka from Susana M. Chuva de Sousa Lopes' lab (Leiden University Medical Center, the Netherlands) talked about primordial germ cell (PGC) differentiation in in vitro cultured human embryo outgrowths. Siegfried Roth (University of Cologne, Germany) presented diverse modes of D-V axis specification in various groups of insects, focusing on the intricate interplay between Toll and BMP signalling (Sachs et al. 2015). Federica Bertocchini (Instituto de Biotecnologia de Cantabria, Spain) talked about A-P axis formation in chicks (Arias et al. 2017) and its evolutionary analogies in reptiles, whereas Irene Yan (Universidade de São Paulo, Brazil) discussed transcriptional and post-transcriptional regulation of Scratch2, a conserved regulator of neural development, in chick neurulation. Extraembryonic membranes at the interface: From environment to genes to phenotype In all animals, extraembryonic membranes have important functions, including facilitating gas and nutrient exchange and protecting the embryo from mechanical stress. There are, however, key differences between animal groups in how the extraembryonic membranes are formed and how they function.
Myriam Hemberger (Babraham Institute, UK) talked about her project systematically identifying mutations in mice associated with placental defects (Perez-Garcia et al. 2018) and how she uses this approach to discover novel pathways of trophoblast differentiation and placentation. In mammals, placental function in particular has been associated with growth of the embryo, and environmental perturbations, including stress, are known to affect both the placenta and the embryo itself. Rosalind John's (Cardiff University, UK) pioneering research has implicated placental imprinting not only in embryonic growth but also in mediating maternal care (Creeth et al. 2018). Additionally, as demonstrated by David Harrison, working with Rosalind, placental imprinting may also alter offspring phenotypes. The effects of environmental exposures, such as alcohol, on the function of extraembryonic membranes were discussed by Jacinta Kalisch-Smith (University of Queensland, Australia), and Diana Laird (University of California, US) extended this discussion to incorporate how environmental exposures might affect subsequent generations, possibly via epigenetic mechanisms. Elizabeth Duncan (University of Leeds, UK) discussed the role of epigenetic mechanisms in how insects, like the honeybee, are able to respond to environmental cues to produce two or more entirely different phenotypes from the same genotype, a phenomenon known as phenotypic plasticity. In insects, there are generally two extraembryonic membranes, the amnion and the serosa. Kristen Panfilio (University of Cologne, Germany; University of Warwick, UK) discussed what we know about the developmental origins of these tissues in insects and their analogies with mammalian extraembryonic membranes. Kristen also delved into her work on the rupture of extraembryonic membranes (Hilbrant et al. 2016), a normal part of embryogenesis in insects but a pathological process in mammals. Maurijn van der Zee (Leiden University, the Netherlands) emphasised that, in addition to its key roles in morphogenesis, the serosa protects insect eggs against environmental exposures including pathogens and desiccation (Jacobs et al. 2014). This was reinforced by Nora Braak (Oxford Brookes University, UK), who showed that the serosa, although developing differently in butterflies, also has a role in embryonic immunity. Cutting-edge new methods applied to solving old puzzles An overarching theme of the talks was the increasing contribution of cutting-edge technologies to the further advancement of developmental biology research. Good examples are the highly sophisticated imaging techniques, often combined with transgenic reporters and advanced computational image analysis, that help to visualise the dynamics of developmental processes, as demonstrated by Matthew Stower and Kristen Panfilio (lightsheet microscopy), Go Shioi (spinning-disc microscopy) and Anna Ajduk from the University of Warsaw, Poland (optical coherence microscopy) (Karnowski et al. 2017). Another cutting-edge experimental approach with enormous future potential is single-cell RNA sequencing, allowing researchers to pinpoint transcriptional changes occurring in individual cells at multiple time points in development. Its utility for different organisms and biological questions was clearly illustrated in talks by James Turner, Di Hu, Elisabetta Ferretti and also by Sarah Teichmann (Wellcome Sanger Institute, UK).
Sarah's pioneering work using single-cell sequencing is helping to reveal how cell states and cellular phenotypes change during normal and pathological processes in various biological contexts, including in the immune system (Hagai et al. 2018) and at the maternal-fetal interface (Vento-Tormo et al. 2018), and is contributing to the Human Cell Atlas project (https://www.humancellatlas.org/). Staying with the molecular biology theme, 'omics research methods, for example ATAC-seq to identify putative enhancer regions and ChIP-seq to identify binding sites of transcription factors, are making massive contributions to our understanding of gene regulation during development, and this was clearly demonstrated by several talks, including those by our plenary speaker Elizabeth Robertson and by Laura Banaszynski (UT Southwestern Medical Center, US). It was also clear that the recent and unprecedented development of effective ways to produce transgenic animals has resulted in a number of projects dedicated to widespread functional analysis of mammalian genes. Myriam Hemberger and Jaime Rivera-Perez (University of Massachusetts Medical School, US) reported on projects looking for various embryonic lethal mutations in mice, focusing on placental (Perez-Garcia et al. 2018) or embryonic phenotypes, respectively. Another approach that is increasingly popular in developmental biology, and that definitely complements more traditional experimental embryo work, is in vitro culture of peri- and post-implantation embryos or of differentiating ESCs (Jennifer Nichols, Monika Bialecka, Elisabetta Ferretti). Indeed, Ali Brivanlou's (Sloan Kettering Institute, US) closing keynote lecture was not only a great summary of cell fate specification mechanisms, but also a very interesting introduction to in vitro models of gastrulation and neurulation. He presented results on the roles of the BMP4/Wnt/Nodal pathway and of TGFβ signalling, respectively, in these processes (Yoney et al. 2018). He also discussed the potential of creating a synthetic 3D embryo built exclusively from cells originating from ESCs and the contribution that this model would make to our understanding of developmental biology. An evolutionary perspective and future prospects By gathering researchers representing various branches of developmental biology, working on different animal models and stages of embryogenesis, the meeting organisers created a unique platform for broader scientific discussions, including an evolutionary perspective that is challenging yet crucial for developing a deeper understanding of developmental processes. Kristen Panfilio set the scene for evolutionary comparisons in her talk early in the meeting, discussing the analogies between insect and mammalian extraembryonic membranes. Although these tissues have different evolutionary and developmental origins, they can be considered functionally analogous. Indeed, it has been proposed that the evolution of extraembryonic membranes in insects facilitated the radiation of insects on land (Jacobs et al. 2013; Zeh et al. 1989), as one of these membranes, the serosa, protects against desiccation (Jacobs et al. 2013). Similarly, the evolution of the amniote egg has been implicated in supporting the radiation of vertebrates on land (reviewed in Ferner and Mess 2011).
The field of evolution and development (Evo-Devo) has highlighted aspects of development that are conserved amongst phylogenetically diverse animals and therefore were likely present in the bilaterian ancestor; for example, the role of BMP signalling in dorsoventral axis specification and the subdivision of the dorsoventral axis into distinct ectodermal domains (reviewed in Bier and De Robertis 2015). However, questions remain over how novelties, such as the extraembryonic membranes of insects and vertebrates, evolved and over the molecular mechanisms that underpin these novelties. Understanding the molecular and evolutionary origins of the extraembryonic membranes and the intricate interplay between these membranes and the embryo in normal development and morphogenesis, in a phylogenetically diverse range of animals, will allow us to examine the extent to which these molecular mechanisms overlap or converge. These comparisons may allow general inferences to be made about the evolution of novel cell types, cell fate and lineage specification, the role of co-option of conserved cell signalling pathways such as Wnt, FGF and Notch (Pires-daSilva and Sommer 2003), and the evolution of gene regulatory networks, transcription factors, genome organisation and the chromatin landscape in these processes. In amniotes, we understand most about early development in mice, and in insects, we know most about development in the fruit fly, Drosophila melanogaster, although our knowledge of other species is increasing. We now have access to an array of cutting-edge techniques that can be readily applied to a wide range of organisms, including CRISPR/Cas9-mediated genome editing, live imaging technologies and advances in 'omics technologies like single-cell sequencing. Applying these tools will rapidly advance this research field and deepen our understanding of early development, including the specification and function of the extraembryonic membranes, the intricate interactions between the embryo and these membranes in normal and pathological states, how robust or sensitive these developmental processes are to environmental stressors, as well as how these remarkable systems evolved independently in vertebrates and insects.
3,753.4
2019-02-23T00:00:00.000
[ "Biology", "Environmental Science" ]
An investigation into the effect of pre-bending on the tube hydro-forging technology Tube hydro-forging (THFG) is an advanced technology for manufacturing tubular components with complex cross-sections. Meanwhile, a curved axis often exists in such components, which is formed by pre-bending steps before THFG. However, the effect of pre-bending on the subsequent THFG, especially on the critical internal pressure required to inhibit wrinkling, has not been clarified yet. Considering the differences in cold work-hardening and thickness distribution caused by pre-bending, the change rule between the critical internal pressure and the hoop strain was re-established based on the energy method. It is found that cold work-hardening has a great influence on the change rule. Subsequently, by solving the three-dimensional mechanics conditions of single- and double-curvature differential segments respectively, the distribution of hoop strain after THFG was obtained with pre-bending taken into account. It is pointed out that the initial thickness has an obvious effect on the hoop strain distribution, while the effect of cold work-hardening is almost negligible. The maximum hoop strain was substituted into the change rule between critical internal pressure and hoop strain, and a new analytical model between the critical internal pressure and the punch stroke considering pre-bending was then built. The critical internal pressure considering pre-bending is determined by that of the outer straight wall, and its value is always greater than the critical internal pressure without considering pre-bending under the same punch stroke. With the reduction of the bending radius, the difference between the critical internal pressures with and without pre-bending becomes greater. Moreover, a smaller friction coefficient also makes this difference more prominent. In this work, our proposed new prediction model of the critical internal pressure is demonstrated in detail; it can improve the prediction accuracy by at least 74% when pre-bending is present. Introduction Tubular components with complex cross-sections are one of the important routes to lightweight design and have been widely used in the automotive and aerospace industries. Particularly, with the wide application of lightweight materials (such as high-strength steel and aluminum alloy), the weight reduction effect of such components is more and more obvious. To date, a series of methods have been developed to fabricate the cross-sectional shape. Here, high-pressure tube hydroforming (HPTH) has been the most popular technology [1]. However, due to the expansion deformation, HPTH causes excessive thinning of the formed part and cracking. In addition, as the strength of the material increases, the required forming internal pressure grows rapidly. For this reason, Nikhare et al. [2] proposed low-pressure tube hydroforming (LPTH) to replace the HPTH process. In contrast to HPTH, LPTH avoids the expansion deformation by introducing the action of die movement, so it can successfully control the occurrence of cracking defects. Meanwhile, LPTH can greatly reduce the required internal pressure (about 5 to 15% of HPTH) and the tonnage of the equipment [3]. Unfortunately, LPTH is only suitable for forming components with a constant perimeter along the axial direction [4]. Besides, a bending moment is introduced during LPTH; hence, cross-sectional springback is inevitable [5]. In recent years, Chu et al.
[6] first proposed tube hydro-forging technology (THFG) to form tubular components with complex cross-sections. In THFG, under sufficient internal pressure, the cross-section can be compressed by the upper die movement to obtain the required shape. Because the material always bears compression in three directions, the occurrence of thickness thinning and cracking can be effectively avoided [7]. A cross-section with variable perimeter along the axial direction can be produced simultaneously by setting different punch strokes. Moreover, this technology relies on the clamping force as the driving force; in other words, the internal pressure only plays a supporting role, thereby diminishing the demand for high-pressure equipment. As is well known, in order to meet the needs of space assembly, such tubular components often have a curved axis [8]. The initial tube needs preforming operations like pre-bending to obtain the same or a similar shape to the axis of the product. However, pre-bending leads to a series of changes, such as thinning of the outer layer and thickening of the inner layer, cold work-hardening, axis springback, and cross-sectional distortion [9]. Consequently, analyzing the influence of pre-bending on the subsequent cross-section forming process has gathered significant interest among researchers. For example, compared with a straight tube, the outer layer of a pre-bent tube is more prone to thinning and cracking during HPTH [10]. Besides, because of the curved axis, it is difficult for the material to feed along the axial direction, which makes cracking defects more pronounced. Prabhu et al. [11] pointed out that the process parameters of pre-bending have an important effect on HPTH and found that the formability in HPTH degrades with the reduction of the bending radius. Gao and Strano [12] proposed that an increase of the friction coefficient can aggravate the degree of thickness thinning during pre-bending, which also promotes the generation of fracture defects after HPTH. In order to overcome the above problems, Han et al. [13] put forward a method to adjust the contact sequence of the tube and die surface to solve the inhomogeneous deformation problem in hydroforming of a bent tube, effectively controlling the cracking defect. Moreover, a higher internal pressure is needed during HPTH because of the cold work-hardening caused by pre-bending. For LPTH, the cross-sectional distortion caused by pre-bending also leads to an unequal perimeter along the bending axis, which further increases the difficulty of forming complex cross-sections. As can be seen from the above, the pre-bending step has an important bearing on the final quality of the product. Although THFG has many advantages in forming complex cross-sectional tubular components, because of the compression the straight wall is prone to wrinkling when the internal pressure is insufficient [14], which also occurs in the LPTH process [15]. Fortunately, Chu et al. [6] emphasized that excessive hoop strain is the root cause and that there is a critical internal pressure (the minimum forming internal pressure needed to inhibit wrinkling). Through the energy method combined with finite element (FE) simulation, the critical internal pressure can be obtained. However, when studying the wrinkling behavior and the critical mechanics condition, it is necessary to focus on the influence of material properties, thickness, geometry of parts, etc.
Under double-sided constraint, Cao and Wang [16,17] established a more accurate analytical model to predict the critical pressure required to suppress wrinkling in sheet drawing. Thereafter, considering the thickness variation, a revised model for sheet drawing wrinkling was proposed in recent research [18]. Because of different geometric shapes, when solving the sidewall wrinkling model for Tee-joint hydroforming, the effects of the stress ratio (ratio of hoop stress to axial stress) must be comprehensively analyzed [19]. Meanwhile, the influence of thickness and cold work-hardening on the forming limit of the Tee-joint was studied, and it was found that the greater the thickness, the stronger the resistance to wrinkling [20,21]. For this new forming method, however, existing theoretical results have not considered the effect of pre-bending on THFG, especially on the critical internal pressure. Therefore, by combining the energy method and classical plastic theory, this paper will establish a new mathematical model for the critical internal pressure when considering pre-bending, in order to improve prediction precision. Figure 1 shows the schematic diagram of THFG combined with pre-bending. At first, the preforming part with the desired curved axis is obtained through the pre-bending process. During THFG, the internal pressure plays a supporting role, and then the upper die moves by ∆ to compress the cross-section. If the internal pressure is sufficient, the material at the inner/outer straight wall will be compressed and the perimeter of the cross-section will be reduced by 2∆; but when the internal pressure is insufficient, wrinkling will occur at the inner/outer straight wall instead of stable compression deformation, as shown in Fig. 1. It is a typical compression instability defect caused by excessive hoop compression strain. A reasonable loading path between the critical internal pressure p_cri and the punch stroke ∆ is the key to controlling wrinkling during THFG. Critical internal pressure As shown in Fig. 2, the traditional solution process of the critical internal pressure for a straight tube is mainly composed of three steps [6]: in step 1, the energy method is used to solve the change rule between the critical internal pressure p_cri and the hoop strain ε_θ during THFG; next, in step 2, the hoop strain distribution on the cross-section under a certain punch stroke ∆ is determined by the FE method; in step 3, by substituting the maximum hoop strain on the cross-section into the change rule, the critical internal pressure p_cri is obtained for that punch stroke ∆. For the straight tube, the initial state of each differential segment on the cross-section is the same, but for the pre-bent tube, the thickness and work-hardening degree of each differential segment are different. In addition, due to the existence of a curved axis, the mechanics conditions of wrinkling must be different from those of a straight tube during THFG. These changes not only affect the change rule but also decide the specific distribution of the hoop strain on the cross-section. Therefore, the above factors inevitably lead to a difference in critical internal pressure between the cases with and without pre-bending. Consequently, the state of each differential segment after pre-bending should be determined first. Then, on this basis, through analyzing the mechanics conditions during THFG, the change rule and the hoop strain distribution should be solved.
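In code form, this three-step route can be outlined as below. This is a minimal sketch only; the callables change_rule and hoop_strain_distribution, and the prebend_state object, are hypothetical stand-ins for the derivations developed in the remainder of this section, not functions defined in the paper:

```python
def critical_pressure(punch_stroke, prebend_state, change_rule,
                      hoop_strain_distribution):
    """Three-step outline for a pre-bent tube (hypothetical helpers).

    prebend_state carries the thickness t_bd and flow stress sigma_bd of
    every differential segment, as obtained from the pre-bending analysis.
    """
    # Step 1: energy method -> change rule p_cri(eps_theta), now dependent
    # on the work-hardened flow stress and the local thickness.
    p_of_eps = change_rule(prebend_state)
    # Step 2: mechanics/FE analysis -> hoop strain over the cross-section.
    eps_theta = hoop_strain_distribution(punch_stroke, prebend_state)
    # Step 3: the largest hoop strain on the cross-section governs wrinkling.
    return p_of_eps(max(eps_theta))
```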
Substituting the maximum hoop strain into the change rule, the critical internal pressure considering pre-bending can finally be obtained. Theoretical model Before the theoretical derivation, reasonable simplifications can not only ensure the calculation accuracy but also improve its efficiency: 1. The Hollomon hardening law is taken to describe the stress-strain relationship of the material, σ = K ε^n (1), where K is the strength coefficient and n is the strain-hardening exponent; both can be obtained from the uniaxial tensile test. 2. According to the von Mises yield criterion for isotropic material, the equivalent stress and the equivalent strain can be obtained respectively. 3. The pre-bending process is assumed to be plane strain and plane stress [22]; that is, the normal stress σ_t^bd and the hoop strain ε_θ^bd can be ignored, which is expressed by Eq. (4). 4. An arbitrary cross-section of the tube remains plane before and after pre-bending. 5. A plane strain state along the axial direction can also be assumed during THFG. 6. The distributions of the stress-strain state and thickness at each differential segment are continuous after pre-bending and before THFG. Results of pre-bending As previously mentioned, the state of each differential segment after pre-bending should be solved first, and its specific solution process is shown in Appendix 1. The distribution of thickness after pre-bending, t_bd, can be acquired, as shown in Eq. (6). Simultaneously, due to cold work-hardening, the distributions of flow stress σ_bd and equivalent strain ε_eq^bd are also solved. Meanwhile, we can notice that the thickness and flow stress of each differential segment at the inner or outer straight wall are the same. Next, based on the above pre-bending results, the new mathematical model of the critical internal pressure will be established for THFG. Change rule Compared with a straight tube, it is necessary to consider the cold work-hardening and thickness variation when calculating the change rule between the critical internal pressure p_cri and the hoop compression strain ε_θ for a pre-bent tube. Through Appendix 2, the change rule considering pre-bending can be obtained. Hoop strain distribution After obtaining the change rule, the hoop strain ε_θ distribution after THFG will be solved next. It can be seen from Sect. 3.1 that the degree of work-hardening on each differential segment is different. According to plastic theory, only when the equivalent stress of a differential segment reaches and exceeds its corresponding flow stress will subsequent yield occur, which then results in the generation of hoop strain ε_θ. In this case, whether subsequent yield occurs can be judged by comparing the equivalent stress σ_eq with the flow stress σ_bd. Here, the equivalent stress σ_eq can be obtained under the simultaneous action of a hoop stress σ_θ and a normal stress σ_t (Eq. (10)). If σ_eq > σ_bd, subsequent yield will occur; otherwise, it will not. The hoop stress σ_θ is related to the hoop force F and the thickness t_bd (Eq. (11)). Once the pre-bending radius R is fixed, the flow stress σ_bd and thickness t_bd can be determined. Therefore, whether subsequent yield occurs depends only on the hoop force F, so the distribution of the hoop force F during THFG will be analyzed below. Hoop force distribution As shown in Fig.
3, due to the existence of a curved axis, the differential segments on the cross-section can be divided into two categories: the double-curvature differential segment and the single-curvature differential segment. Owing to their different mechanics conditions, the distribution of the hoop force at these two kinds of differential segments will be solved individually. 1. The double-curvature differential segment The hoop force distribution is caused by the friction force. When the friction coefficient μ is fixed, the friction force f is determined only by the normal contact pressure p'. According to Fig. 3a, we first establish the static stress equilibrium model along the normal direction on the double-curvature differential segment (Eq. (12)). Among them, the hoop stress increment dσ_θ can also be solved by building the static stress equilibrium model along the hoop direction (Eq. (13)), which can be simplified to Eq. (14). According to Eq. (5), when subsequent yield occurs, the axial stress σ_z meets the condition σ_z = (σ_θ + σ_t)/2, and the normal stress σ_t can be expressed as σ_t = (p + p')/2. Consequently, the relationship between the hoop stress σ_θ and the axial stress σ_z is obtained. Substituting Eqs. (14) and (15) into Eq. (12) and simplifying, and after further arrangement, the normal contact pressure p' can be expressed explicitly. Assuming the double-curvature differential segment has unit length along the curved axis, the contact pressure p' at the double-curvature differential segment can be described accordingly. Here, we can define λ = [r/2 + (R + r cos φ)] / [r(R + r cos φ)] = 1/r + 1/[2(R + r cos φ)], which can be further simplified as λ ≈ 1/r + 1/(2R) when R > 2r. However, for a differential segment that does not enter subsequent yield, its axial stress σ_z remains in the stress state at the end of pre-bending (Eq. (19)). Similarly, if subsequent yield does not occur, substituting Eq. (19) into Eq. (12) and simplifying gives the corresponding result (Eq. (20)). After the contact pressure p' is obtained, the hoop force F distribution on the cross-section can be calculated. In this paper, BB' is the parting surface, so the direction of material flow during THFG is shown in Fig. 4. According to the flow law of the material, there must be two differential segments A (A') and D (D') that are fixed relative to the die; hence, the cross-section can be divided into two parts: the ABCD segment and the A'B'C'D' segment. When the punch stroke is ∆, the perimeters of ABCD and A'B'C'D' are compressed by ∆ simultaneously. For convenience, we can assume that the angle of the AB segment is φ_11, the length of the BC segment is l_11, and the angle of the CD segment is φ_12; likewise, the angle of the A'B' segment is φ_21, the length of the B'C' segment is l_21, and the angle of the C'D' segment is φ_22. Meanwhile, according to the geometric relationship, the following expressions hold: φ_11 + φ_21 = π, φ_12 + φ_22 = π, l_11 = l_21. Therefore, according to Fig. 4, the force balance formula on the double-curvature differential segment can be established. According to the analogy method, the recursion for F_n can be concluded, and when n → ∞, the limit of F_n can be taken. In the same way, if the differential segment has not entered subsequent yield, using Eq. (20) we can obtain its hoop force distribution. 2. The single-curvature differential segment Through the above analysis, the hoop force distribution on the double-curvature differential segment has been solved.
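Qualitatively, these recursions describe a hoop force that is largest at the parting surface and decays under friction towards the fixed segments A (A') and D (D'). As a rough intuition aid only, and not the paper's expressions (which also carry the internal pressure p and the geometric factor λ), the decay resembles the classical capstan (belt-friction) relation:

```python
import math

def capstan_decay(F_B, mu, phi):
    """Toy belt-friction analogy: hoop force at a hoop angle phi (rad)
    away from the parting surface B, decaying as exp(-mu * phi).
    Pressure-dependent terms of the actual model are omitted."""
    return F_B * math.exp(-mu * phi)

# Friction coefficients from Table 2 (MoS2-lubricated vs dry friction):
for mu in (0.018, 0.09):
    profile = [round(capstan_decay(1.0, mu, a), 3) for a in (0.0, 1.0, 2.0, 3.0)]
    print(f"mu = {mu}: {profile}")
```

The larger the friction coefficient, the steeper the decay away from the parting surface, which is consistent with the strain (and hence the wrinkling risk) concentrating near B and B'.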
Moreover, the hoop force distribution on the single-curvature differential segment can be derived using the same method. As shown in Fig. 3b, the static stress equilibrium model on the single-curvature differential segment is built first. By simplifying, the contact pressure p' on the single-curvature differential segment that has entered subsequent yield can be calculated. In the same way, when n → ∞, the limit of F_n can be taken. For the single-curvature differential segment that has not entered subsequent yield, substituting Eq. (19) into Eq. (27) gives the corresponding contact pressure; the hoop force for such a segment is then obtained by an analogous method. 3. Hoop force distribution on the whole cross-section Through the above efforts, the hoop force distribution on the whole cross-section has been obtained. For example, if all differential segments have entered subsequent yield, the hoop force distribution can be obtained in turn. For the ABCD segment, if the hoop force at differential segment A is F_A, we can get the hoop force F_B at differential segment B, the hoop force F_C at differential segment C, and the hoop force F_D at differential segment D. Similarly, for the A'B'C'D' segment, if the hoop force at differential segment A' is F_A', we can also get the hoop forces F_B', F_C' and F_D' at differential segments B', C' and D'. Since F_A' is equal to F_A, it can be found that the hoop force at any differential segment on the cross-section can be solved from the hoop force F_A. If a differential segment has not entered subsequent yield, it is only necessary to substitute Eqs. (26) and (31) into the above analysis. Stress-strain corresponding relationship With the hoop force distribution on the cross-section obtained, the hoop stress distribution can then be acquired by combining it with Eq. (11). In order to get the distribution of hoop strain on the cross-section, it is necessary to solve the stress-strain corresponding relationship (Fig. 5). At first, when the hoop stress does not satisfy the subsequent yield condition, there is no plastic deformation on the corresponding differential segment. However, if the hoop stress σ_θ reaches subsequent yield, the corresponding differential segment undergoes plastic deformation, and the hoop strain ε_θ is generated. Considering that THFG satisfies the plane strain condition (ε_z = 0), the result can be obtained by combining the above equations; consequently, the hoop force F can be described by Eq. (47). Through Eq. (47), the hoop strain ε_θ distribution on the cross-section can be obtained, and then the maximum hoop strain on the cross-section can also be determined. Relationship between maximum hoop strain and punch stroke According to the analysis in the preceding sections, through solving Eqs. (48)-(50), the three unknowns under any punch stroke ∆_i can be obtained. Substituting these three unknowns into the above theoretical derivation, the distributions of hoop force and hoop strain under any punch stroke ∆_i can be obtained, and then the maximum hoop strain under this punch stroke ∆_i as well. Figure 6 shows the specific calculation process for the critical internal pressure when considering pre-bending. In order to obtain the critical internal pressure under any punch stroke ∆_i, a cyclic calculation is needed.
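This cyclic calculation is a plain fixed-point iteration; a minimal sketch follows, elaborated step by step in the next paragraph. Here max_hoop_strain and change_rule_pressure are hypothetical stand-ins for the derivations above (Eqs. (47)-(50) and the change rule of Appendix 2), not functions defined in the paper:

```python
def critical_pressure_prebend(stroke, p0, max_hoop_strain, change_rule_pressure,
                              tol=0.01, max_iter=100):
    """Fixed-point iteration for the critical internal pressure p_cri^bd
    at a given punch stroke (see Fig. 6).

    max_hoop_strain(stroke, p): maximum eps_theta on the cross-section
    change_rule_pressure(eps):  p_cri from the change rule
    The convergence test |p0/p_cri - 1| <= tol matches the paper's 1% criterion.
    """
    for _ in range(max_iter):
        eps_max = max_hoop_strain(stroke, p0)
        p_cri = change_rule_pressure(eps_max)
        if abs(p0 / p_cri - 1.0) <= tol:
            return p_cri
        p0 = p_cri  # re-substitute and repeat
    raise RuntimeError("cycle calculation did not converge")

# Iterating stroke_{i+1} = stroke_i + d_stroke then traces the whole
# loading path p_cri^bd(stroke).
```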
At first, it is necessary to give an initial value of the internal pressure p_0. Subsequently, the corresponding maximum hoop strain on the cross-section can be solved. By substituting it into the change rule, we get a temporary value of the critical internal pressure p_cri,i. If it meets the condition |p_0/p_cri,i − 1| ≤ 1%, the critical internal pressure p_cri,i^bd of the pre-bent tube is p_cri,i (p_cri,i^bd = p_cri,i). Otherwise, let p_0 = p_cri,i and re-substitute p_0 into the cyclic calculation until the condition is met. Finally, by iterating ∆_{i+1} = ∆_i + d∆, the loading path between the critical internal pressure and the punch stroke can be built for the case considering pre-bending. Materials and experiment The material used in this paper is a DP800 steel tube. Strip-shaped specimens were cut from the tube along the axial direction. Through the uniaxial tensile test, the true stress-strain curve of the material was obtained, and the corresponding mechanical parameters are shown in Table 1. Besides, the cross-sectional shape parameters are shown in Fig. 7. In practical production, there are many methods to realize the pre-bending process, such as rotary draw bending (RDB), press bending, push bending, etc. RDB combined with computer numerical control (CNC) can achieve the required bending radius and angle stably, and through the application of a mandrel, both bending wrinkling and cross-sectional distortion can be controlled. In addition, the cross-sectional characteristics of the pre-bent tube formed by RDB are more consistent along the axis, which is also conducive to the following theoretical analysis. As shown in Fig. 8a, the experimental device for RDB included a pressure die, clamping die, mandrel, and bending die. The radius of the bending die was set to 100, 150, 200, and 250 mm respectively to study the influence of different bending radii R on the critical internal pressure. THFG needs a special experimental device: a hydro-forging machine that can accurately control the loading path between the internal pressure and the punch stroke. The machine was mainly composed of three parts: experiment setup, computer control system, and pressurization system, as shown in Fig. 8b. Additionally, the experiment setup included an upper die, lower die, and sealing means. All the digital signals were collected by the computer control system; thus, the experimental data could be output in real time. For the effect of the friction coefficient, two kinds of experimental conditions were set up [23]: dry friction and MoS2 lubrication. Their corresponding friction coefficients are shown in Table 2. In order to reduce the experimental error, each group of experiments was repeated three times, and the average value was taken as the experimental result. FE method According to the experiment requirements, the RDB simulation model and the THFG simulation model were established respectively, as shown in Fig. 9. The RDB model consisted of five parts: ① pressure die, ② clamp die, ③ mandrel, ④ tube, and ⑤ bending die. The dies were all defined as rigid bodies, while the tube was set as a deformable body with an isotropic material model. Meanwhile, the tube was discretized with elements of type C3D8R. In order to guarantee the accuracy of the FE simulation, the element sizes were 2 mm and 5 mm along the hoop and axial directions respectively, and five elements were defined through the thickness. The radius of the bending die was selected as 100, 150, 200, and 250 mm respectively.
The THFG model adopted a restart analysis, and the model mainly included three parts: ① upper die, ② tube (imported from the pre-bending step), and ③ lower die. Here, since the preformed tube in this step was imported from the pre-bending step, its settings were the same as before. According to Table 2, the friction coefficient μ between the die and the tube was set to 0.09 and 0.018 respectively, and the Coulomb friction criterion was used in the FE simulation. Experiment results Some of the experiment results are shown in Fig. 10. When the punch stroke was 3 mm, different wrinkling situations could be obtained by applying different internal pressures. For the straight tube, according to the previous mathematical model deduced by Chu et al. [6], the critical internal pressure p_cri^str was 46 MPa. For the pre-bent tube, however, this provided internal pressure proved insufficient. From the experiment results, we can conclude that under the same punch stroke ∆, a difference in critical internal pressure exists not only between the pre-bent tube and the straight tube but also between the inner and outer straight walls of the pre-bent tube. It can be seen that if the influence of pre-bending on THFG is not considered, the reliability of the calculated critical internal pressure drops considerably. Analysis of the pre-bending effect The distribution of flow stress and thickness on the cross-section after pre-bending is studied first, as shown in Fig. 11. When R = 150 mm, the thickness reduction at the B'C' segment (outer straight wall) is the most serious, and the thickness increase at the BC segment (inner straight wall) is the most prominent. Similarly, due to cold work-hardening, the flow stress at each differential segment increases to different degrees. The flow stress distribution on the cross-section is approximately concave; the cold work-hardening of the BC and B'C' segments is the most severe, while that of the neutral layer is the least. It can be found that the initial state of the differential segments at the inner/outer straight wall differs due to the effect of pre-bending. According to Sect. 3.2, we can obtain the change rule between the critical internal pressure and the hoop strain for the pre-bent tube. In Fig. 12, when pre-bending is considered, it can be clearly seen that the change rules on the inner/outer straight wall are basically the same. When R is 150 mm, the thicknesses of the inner and outer straight walls after pre-bending are 1.76 mm and 1.29 mm respectively, and their flow stresses are 875 MPa and 878 MPa respectively. A conclusion can be drawn by comparing the change rules of the inner/outer straight wall: the difference in thickness has little effect on the change rule, while cold work-hardening has a significant effect on it. In particular, for a fixed hoop strain, the critical internal pressure required for the pre-bent tube is always greater than that required for the straight tube. With the increase of hoop strain, the change rule difference between the pre-bent tube and the straight tube also increases. After obtaining the change rule between critical internal pressure and hoop strain, it is necessary to analyze the hoop strain ε_θ distribution.
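As a brief aside before turning to the strain distribution, the size of the work-hardening effect discussed above can be illustrated with the Hollomon law of Eq. (1). The K and n below are merely illustrative values typical of a DP800-class steel, not the paper's Table 1 data, and the equivalent strains are likewise hypothetical inputs:

```python
def hollomon_flow_stress(K, n, eps_eq):
    """Flow stress from the Hollomon law, sigma = K * eps**n (Eq. (1))."""
    return K * eps_eq ** n

# Illustrative parameters only (assumed, not taken from Table 1):
K, n = 1300.0, 0.13  # MPa, dimensionless
for eps in (0.02, 0.05, 0.10):
    sigma = hollomon_flow_stress(K, n, eps)
    print(f"eps_eq = {eps:.2f} -> flow stress ~ {sigma:.0f} MPa")
```

With such assumed values, equivalent bending strains of only a few percent already lift the flow stress toward the high 800 MPa range, the order of the 875-878 MPa figures quoted above.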
For comparative purposes, the critical internal pressure for the straight tube, p_cri^str, is applied to the pre-bent tube and the straight tube concurrently; that is, let p_pro = p_cri^str, as shown in Fig. 13, where the punch strokes ∆ are taken as 0.5, 1, 2, and 3 mm respectively. It can be seen from the figure that the fixed differential segments A (A') and D (D') on the pre-bent tube move with the increase in ∆, but for the straight tube, these fixed differential segments always remain at the midpoint of the arc. The different initial state after pre-bending leads to the asymmetry of plastic deformation at the ABCD and A'B'C'D' segments during subsequent THFG. More importantly, for the ABCD and A'B'C'D' segments, the hoop strain ε_θ at the straight wall is always greater than that at the arc under any punch stroke when pre-bending is considered. This is because the differential segments B and B' lie on the parting surface, where the hoop force is the largest. Under the action of friction, the hoop force diminishes along both sides of the parting surface and reaches its minimum value at the fixed differential segments A (A') and D (D') respectively. According to Eq. (47), the hoop strain is proportional to the hoop force, so the hoop strain at the arc is always smaller than that at the straight wall. In addition, it can also be found that when ∆ is 0.5 mm, the hoop strain does not occur at some differential segments: due to the cold work-hardening caused by pre-bending, the hoop force F does not reach the subsequent yield condition of these differential segments. From the above analysis, it can be known that the hoop strain at the straight wall is always greater than that at the arc and reaches its maximum value at differential segments B and B' respectively, when pre-bending is considered. Wrinkling is most likely to occur at the maximum hoop strain, so the following will study the effect of pre-bending on the strain at differential segments B and B'. Obviously, under the same ∆ and p_pro, the hoop strain at differential segment B' of the pre-bent tube is larger than that at the same position of the straight tube, while the hoop strain at differential segment B of the pre-bent tube is smaller than that at the same position of the straight tube, as shown in Fig. 13. This is because, when R is 150 mm, the thickness t_bd at differential segment B' reduces to 1.29 mm after pre-bending. On the one hand, the decrease of thickness results in a larger hoop stress under the same hoop force, which makes differential segment B' enter subsequent yield more easily, according to Eq. (10). On the other hand, based on Eq. (47), the thinner the wall, the greater the hoop strain will be under the same punch stroke at differential segment B'. In addition, the thickness at differential segment B increases to 1.76 mm, which causes the opposite result compared with differential segment B'. The thickness of a straight tube is between those of the above two differential segments, so its hoop strain is also between those of the above two differential segments. Meanwhile, according to the analysis of the change rule, although the provided internal pressure p_pro can inhibit the wrinkling of a straight tube, it is not enough for the pre-bent tube. Therefore, the wrinkling of the pre-bent tube will occur in this case.
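The thickness argument can be made quantitative. Reading the relation of Eq. (11) as hoop stress equal to hoop force per unit axial length divided by wall thickness (a slight simplification of the paper's expression), the thicknesses quoted above for R = 150 mm give:

$$\sigma_\theta = \frac{F}{t_{bd}} \quad\Longrightarrow\quad \frac{\sigma_\theta^{B'}}{\sigma_\theta^{B}} = \frac{t_{bd}^{B}}{t_{bd}^{B'}} = \frac{1.76\,\text{mm}}{1.29\,\text{mm}} \approx 1.36,$$

so for the same hoop force, the thinned outer wall at B' carries roughly 36% more hoop stress than the thickened inner wall at B, which is why it reaches the subsequent yield condition first.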
To complement the theory, the FEM results allow us to observe intuitively the effect of pre-bending on the distribution of hoop strain at differential segments B' and B, as shown in Fig. 14. When the effect of pre-bending is considered, the hoop strain at differential segment B' is always greater than that at differential segment B under any punch stroke. However, for a straight tube, the values of hoop strain at differential segments B' and B are equal, lying just between those at the inner/outer straight walls of the pre-bent tube. The reason has been explained above. In addition, with the increase of punch stroke, the hoop strain difference between differential segments B' and B also increases. Here, it can be seen that the error between the analytical and numerical results is relatively small, which preliminarily verifies the correctness of the foregoing analytical derivation. According to the specific calculation process shown in Fig. 6, the critical internal pressures required for the inner/outer straight walls to suppress wrinkling can be obtained respectively, as shown in Fig. 15. The FE simulation cannot accurately reflect the wrinkling of tubes, so the loading path between critical internal pressure and punch stroke is mainly verified by experiments [19]. When different internal pressures are applied, there are three different results: 1. When the provided internal pressure p_pro is greater than the critical internal pressure p_cri^out of the outer straight wall (p_pro > p_cri^out), the wrinkling can be completely inhibited. 2. However, when the provided internal pressure p_pro is smaller than the critical internal pressure p_cri^in of the inner straight wall (p_pro < p_cri^in), wrinkling occurs at the outer and inner straight walls simultaneously. 3. Besides, when p_cri^out > p_pro > p_cri^in is applied, wrinkling only occurs at the outer straight wall, not at the inner straight wall. Consequently, the experimental results in Sect. 5.1 can be reasonably explained. The third result needs a relatively high internal pressure, so its wrinkling height is less than that of the second result. Meanwhile, it can be seen that the critical internal pressure p_cri^str of the straight tube lies just between the critical internal pressures p_cri^out and p_cri^in, and the agreement between the analytical and experimental results is good. In conclusion, the critical internal pressure p_cri^out of the outer straight wall is the minimum internal pressure required to inhibit the wrinkling of the pre-bent tube entirely. Therefore, p_cri^out can be defined as the critical internal pressure p_cri^bd of the pre-bent tube, and p_cri^bd is always greater than p_cri^str under the same punch stroke ∆. Besides, with the increase of punch stroke, the effect of pre-bending is more significant. On the one hand, the growth of the punch stroke amplifies the hoop strain difference between the pre-bent tube and the straight tube; on the other hand, according to Fig. 12, the greater the hoop strain difference, the greater the change rule difference between the pre-bent tube and the straight tube. Therefore, the difference in the critical internal pressure enlarges with the increase of punch stroke. Different pre-bending radius The following will mainly study the influence of different pre-bending radii R on p_cri^bd through the combination of experiment and theory.
Here, bending radii of 100, 150, 200, and 250 mm are selected respectively. For the pre-bent tube, p_cri^bd is determined by its outer straight wall. Figure 16 shows the influence of different R values on the change rule between hoop strain and critical internal pressure for the outer straight wall. Under the same hoop strain ε_θ, the critical internal pressure p_cri^bd increases slightly with the decrease of the radius. According to the analysis in Sect. 5.2, the cold work-hardening caused by pre-bending is the main reason for the difference between the pre-bent tube and the straight tube. The smaller the R, the more severe the cold work-hardening on the outer straight wall, which in turn leads to a greater required critical internal pressure. It can be seen from the above that the maximum hoop strain of the pre-bent tube is located at differential segment B', and the effect of the pre-bending radius on the maximum hoop strain is shown in Fig. 17, where the lines are the theoretical results and the scatter points are the FEM results. Apparently, when the punch stroke is fixed, the maximum hoop strain difference at differential segment B' between the pre-bent tube and the straight tube decreases with the increase of the pre-bending radius. The thickness reduction of the outer straight wall is the main cause of the hoop strain difference. According to Eqs. (6) and (7), as the bending radius increases, the effect of pre-bending on the thickness reduction is weakened, which leads to the decrease of the maximum hoop strain difference. Then, by substituting the hoop strain at differential segment B' into the change rule, the influence of R on the critical internal pressure p_cri^bd can be obtained, as shown in Fig. 18, where the lines are the theoretical results and the scatter points are the experimental results. On the one hand, according to the change rule shown in Fig. 16, the smaller the R, the higher the critical internal pressure p_cri^bd under a fixed hoop strain; on the other hand, with the decrease of R, the maximum hoop strain difference between the pre-bent tube and the straight tube increases. Therefore, combining the above reasons, there is an important conclusion: with the decrease of R, the critical internal pressure difference between p_cri^bd and p_cri^str increases under the same punch stroke. For example, if the punch stroke ∆ is 4 mm (shown in Fig. 18d), the minimum value of the difference, p_min^(∆=4), is 24.2 MPa when R is 250 mm, and the difference reaches its maximum value, p_max^(∆=4) = 52.9 MPa, when R reduces to 100 mm, which cannot be ignored in actual production. On the contrary, with the decrease of punch stroke, the difference in critical internal pressure between p_cri^bd and p_cri^str also decreases. When R is 100 mm, if the punch stroke decreases to 2 mm, the maximum value p_max^(∆=2) reduces to 23.4 MPa. The experimental results are consistent with the theoretical results, which proves the correctness of the theoretical derivation. Friction coefficient Through the above theoretical analysis, it is found that friction is one of the important factors affecting the hoop force distribution. Therefore, in this paper, two lubrication conditions (dry friction and MoS2) are selected to study the influence of friction on the critical internal pressure p_cri^bd; the specific friction coefficient values are shown in Table 2. From Fig.
19, whether or not pre-bending is considered, an increase of the μ value leads to an increase of the critical internal pressure under the same punch stroke. The reason is that, according to the theoretical analysis in Sect. 3.3, in order to produce the same punch stroke, a larger μ value increases the hoop force and the hoop strain required at differential segment B', and then the required critical internal pressure is also enlarged. In addition, with the decrease of μ, the effect of pre-bending becomes more pronounced; that is, the critical internal pressure difference between p_cri^bd and p_cri^str increases with the reduction of the friction coefficient. Therefore, the smaller the friction coefficient, the more important it is to consider the influence of pre-bending. Practical application The automobile longeron is a typical complex cross-sectional tubular component with a curved axis, as shown in Fig. 20. There are two difficulties in forming this part: (1) because of the 3D curved axis, the axis springback after pre-bending is large and difficult to control (the springback was 10 mm); (2) the cross-sectional shape is complex, and the perimeter difference between the maximum cross-section and the minimum cross-section can reach 15%. THFG combined with pre-bending is very suitable for manufacturing this kind of component. According to the model of this paper, the critical internal pressure required for THFG when considering pre-bending is 54 MPa. Finally, not only can the axis springback be controlled (dropped almost to zero) [24], but the variable cross-section along the axis can also be obtained without wrinkling. Conclusion By combining the energy method with mechanics analysis, the influence of pre-bending on the subsequent THFG, especially on the critical internal pressure required to inhibit wrinkling, was clarified in this research. The main conclusions are as follows: 1. Through the energy method, the change rule of the critical internal pressure with the hoop strain can first be obtained when pre-bending is present. Because of cold work-hardening, the critical internal pressure of the pre-bent tube is higher than that of a straight tube under the same hoop strain, and their difference becomes more obvious with the decrease of the bending radius. On the contrary, the variation of thickness caused by pre-bending has little effect on this change rule. 2. Based on the mechanics analysis, the hoop strain distribution and the maximum hoop strain considering pre-bending can both be determined. For the pre-bent tube, mainly due to the thickness variation, the maximum hoop strain of the outer straight wall is greater than that of the inner straight wall, and the decrease of the R value leads to an increase of the maximum hoop strain difference between the pre-bent tube and the straight tube. 3. After substituting the maximum hoop strain at the inner/outer straight walls of the pre-bent tube into the change rule, the mathematical models of the corresponding critical internal pressures p_cri^out and p_cri^in can be obtained respectively. When the provided internal pressure p_pro < p_cri^in, wrinkling occurs at both the outer and inner straight walls; when p_cri^out > p_pro > p_cri^in, wrinkling only occurs at the outer straight wall; and only under p_pro > p_cri^out can wrinkling be suppressed entirely. Therefore, p_cri^out is defined as the critical internal pressure p_cri^bd of the pre-bent tube. 4.
The critical internal pressure p_cri^bd of the pre-bent tube is always greater than that of a straight tube, p_cri^str, under the same punch stroke. Moreover, with the decrease of the bending radius, the difference between p_cri^bd and p_cri^str increases. If the punch stroke is 4 mm, the critical internal pressure difference is 33% when R is 250 mm, while when R decreases to 100 mm, the difference enlarges to 74%. Besides, the bigger the punch stroke, the more significant the pre-bending effect. 5. Whether or not pre-bending is considered, the smaller the friction coefficient, the smaller the required critical internal pressure. However, a smaller friction coefficient makes the influence of pre-bending more prominent. The above results were verified by experiment and FE simulation. In conclusion, this work provides a new prediction model of the critical internal pressure, which can improve the accuracy by at least 74% when pre-bending is present. Appendix 1. Mathematical model for pre-bending According to Eq. (4), the expression σ_θ^bd = σ_z^bd/2 is satisfied. Subsequently, by combining with Eqs. (2) and (3), the equivalent stress and the equivalent strain after pre-bending reduce to σ_eq^bd = (√3/2)σ_z^bd and ε_eq^bd = (2/√3)ε_z^bd. On the basis of the previous study [25], the radius of the neutral layer ρ_u after pre-bending can be determined, as shown in Fig. 21. The normal strain ε_t^bd and the axial strain ε_z^bd can then be expressed in terms of the bending geometry, where t_0 is the initial thickness and t_bd is the thickness after pre-bending. In the case ε_t^bd = −ε_z^bd, the distributions of thickness t_bd and equivalent strain ε_eq^bd after pre-bending can be achieved by combining Eqs. (54) and (55). By substituting Eq. (57) into the Hollomon hardening law, the flow stress distribution σ_bd after pre-bending can be obtained. Appendix 2. Solution of the critical internal pressure by energy method As shown in Fig. 22 of the Appendix, if there is a small edge displacement u_φ/2 and the internal pressure p is sufficient, wrinkling will not occur and the plate remains perfect, which requires the plastic strain energy of a perfect plate, E_0. However, when the internal pressure p is insufficient, wrinkling occurs, and the bending energy in a wrinkled plate is E_w. According to the energy method [16], if the wrinkling can be suppressed by the internal pressure p exactly, the difference between E_0 and E_w is the minimum external work W_p which is executed by the internal pressure p. First, we can assume that the mode shape of the wrinkled plate is a cosine wave, in which L denotes the wrinkling wavelength and δ is the maximum wrinkling amplitude. Besides, m is the frequency of the cosine mode. For a perfect plate, if a certain edge displacement u_φ/2 is applied, the hoop compression strain ε_θ can be determined, and therefore m can also be expressed in terms of it. Assuming that the lengths of the plate before and after wrinkling are equal, both being L, then according to the arc-length integral criterion and a Taylor expansion, the maximum wrinkling amplitude δ can be calculated from the hoop compression strain ε_θ. In addition, Cao and Wang [16] take the wrinkling in a deep-drawing operation as the research object and assume that the external force F satisfies F = F_max − F_max(y/δ − 1)^2, so the critical internal pressure p_cri can be described. According to Eq.
(5), a plane strain state along the axial direction is assumed, so the equivalent strain can be expressed in terms of the hoop compression strain (Eq. (66)). Substituting Eq. (66) into the Hollomon hardening law gives the equivalent stress. For the pre-bent tube, in contrast to a straight tube, the effect of the thickness variation must also be considered. According to Cao's mathematical model [17], for two given wrinkling lengths L_1 and L_2 there is a corresponding transition strain ε_th, so a relationship between them can be built. Eventually, numerical software can be used to obtain ε_th for different wrinkling lengths L, and this value is then substituted into Eq. (70) to build the relation between the hoop compression strain ε_th and the critical internal pressure p_cri.
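As a rough illustration of the Appendix 1 quantities, the sketch below evaluates the fibre strains, thickness change and Hollomon flow stress across a pre-bent tube wall. The specific relations used (bending strain ε_θ = ln(ρ/ρ_u), thickness strain ε_t = −ε_θ, plane-strain equivalent strain 2|ε_θ|/√3) are standard thin-wall bending assumptions rather than the paper's exact Eqs. (54)–(57), and the material constants K, n and the geometry values are illustrative only.

```python
import numpy as np

# Illustrative material and geometry values (assumed, not taken from the paper).
K, n = 600.0, 0.2          # Hollomon law: sigma = K * eps_eq**n  [MPa, -]
t0 = 2.0                   # initial wall thickness [mm]
R = 100.0                  # bending radius of the tube axis [mm]
rho_u = R                  # neutral-layer radius; taken equal to R here for simplicity

# Radial positions of wall fibres, from inner to outer surface.
rho = np.linspace(R - t0 / 2, R + t0 / 2, 11)

eps_theta = np.log(rho / rho_u)                    # bending (longitudinal) strain of each fibre
eps_t = -eps_theta                                 # thickness strain from eps_t = -eps_theta
t_bd = t0 * np.exp(eps_t)                          # local thickness after pre-bending
eps_eq = 2.0 / np.sqrt(3.0) * np.abs(eps_theta)    # plane-strain equivalent strain
sigma_bd = K * eps_eq ** n                         # Hollomon flow stress after pre-bending

for r, e, s in zip(rho, eps_eq, sigma_bd):
    print(f"rho = {r:6.2f} mm   eps_eq = {e:6.4f}   flow stress = {s:6.1f} MPa")
```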
10,607.8
2021-12-20T00:00:00.000
[ "Engineering", "Materials Science" ]
Transverse Coherence Limited Coherent Diffraction Imaging using a Molybdenum Soft X-ray Laser Pumped at Moderate Pump Energies Coherent diffraction imaging (CDI) in the extreme ultraviolet has become an important tool for nanoscale investigations. Laser-driven high harmonic generation (HHG) sources allow for lab scale applications such as cancer cell classification and phase-resolved surface studies. HHG sources exhibit excellent coherence but limited photon flux due poor conversion efficiency. In contrast, table-top soft X-ray lasers (SXRL) feature excellent temporal coherence and extraordinary high flux at limited transverse coherence. Here, the performance of a SXRL pumped at moderate pump energies is evaluated for CDI and compared to a HHG source. For CDI, a lower bound for the required mutual coherence factor of |μ12| ≥ 0.75 is found by comparing a reconstruction with fixed support to a conventional characterization using double slits. A comparison of the captured diffraction signals suggests that SXRLs have the potential for imaging micron scale objects with sub-20 nm resolution in orders of magnitude shorter integration time compared to a conventional HHG source. Here, the low transverse coherence diameter limits the resolution to approximately 180 nm. The extraordinary high photon flux per laser shot, scalability towards higher repetition rate and capability of seeding with a high harmonic source opens a route for higher performance nanoscale imaging systems based on SXRLs. For sufficiently narrow-band HHG radiation a spatial resolution below the wavelength of illumination near the Abbe limit can be achieved 17,18 . In turn, plasma-based soft X-ray lasers (SXRL) emitting short pulses in the XUV range between 3 and 40 nm (ref. 19) are table-top sources with high single-shot photon flux. Among the numerous schemes proposed for soft X-ray lasing, the transient collisional excitation scheme has proved to be the most reliable and promising for the development of compact laser-pumped SXRLs 20 . Combining this scheme with the so-called grazing incidence pumping (GRIP) geometry compact systems are feasible 21,22 . Using the GRIP scheme strong XUV emission in the range between 10 and 20 nm with pulse energies of the SXRL up to 3 µJ have been demonstrated for pump laser energies of about 1.5 J (refs 23 and 24). Due to the properties of the gain medium the spectral bandwidth of the XUV emission is in the order of a few picometer resulting in a very high temporal coherence 25 . The transverse coherence of the SXRL strongly depends on the geometry of the excitation scheme, i.e. one versus two stage amplifiers or seeded versus unseeded operation, and the pumping conditions. In dependence on the pumping conditions the highly coherent part of the beam contains between 1% (unseeded, pump energy 1-2 J, ref. 26) and 90% (seeded or two stage SXRL with pump energies up to 10 J, refs 27 and 28) of the total SXRL energy. There are only very few data sets concerning the transverse coherence of a single stage, and thus more easily realizable, SXRL pumped at moderate pump energies below 1 J (ref. 29). On the other hand, these systems are relatively attractive for applications requiring high average photon flux, since pump lasers with pulse energies below 1 J are easily scalable to repetition rates up to 200 Hz. There are only very few examples for coherent diffraction imaging (CDI) experiments using laser plasma-based SXRL in literature. Kang et al. 
demonstrated 30 a SXRL operating at 13.9 nm pumped by a 1.5 J, 10 Hz Ti:Sapphire laser and its application to coherent diffraction imaging. They achieved a resolution in the order of 100 nm. Here, we investigate the applicability of SXRLs towards CDI in the light of the high XUV pulse energy, the excellent temporal coherence and the limited transverse coherence. While there is a wide array of previous publications discussing CDI with partially coherent sources 31,32 in-depth and modifications to established reconstruction algorithms 33, 34 , we limit ourselves in this work to standard plane wave CDI methods in order to retrieve the optical properties of the SXRL and subsequent limitations towards applications such as CDI. We demonstrate that the captured diffraction data of a known object allows estimating the transverse coherence, which is compared to a quantitative measurement of the transverse coherence employing double slits. Moreover, we directly compare the SXRL-CDI measurement to a HHG-CDI measurement within the same instrument to benchmark the performance. Finally, this paper outlines a concept and operation parameters for a SXRL pumped at moderate energies. Experimental setup and properties of the SXRL The experimental setup is depicted in Fig. 1a. The SXRL consists of a molybdenum target that is pumped with a 70 mJ/150 ps long pre-pulse and 270 mJ/2 ps short pulse, respectively, at a repetition rate of 100 Hz (see Methods for further details). Typical far-field mode profiles feature a rectangular shaped beam profile, with higher frequency modulations hinting at the plasma and single pass nature of the source (Fig. 1b). The generated soft X-ray radiation passes a plane molybdenum-silicon multilayer mirror (M1, reflectivity approximately 22%, bandwidth 2.5 nm) at 45 degree and is refocused onto the sample for coherent diffraction imaging by two concave spherical mirrors (radius of curvature (ROC), ROC 2 = 2 m, ROC 3 = 1 m) with a molybdenum-silicon multilayer coating near normal incidence. The design of all mirrors was chosen such that the reflectivity peaks at the central wavelength according to the SXRL emission line at 18.9 nm. The mirrors were additionally optimized for reflectivity for higher throughput, while the total bandwidth is mainly determined by the strong monochromatic SXRL molybdenum emission line 35 . The curved multilayer mirrors (M2 and M3) have been realized with a dual ion beam deposition system described in ref. 36. From the model calculations and the reference measurement at the PTB synchrotron beamline at BESSY II, a reflectivity of 14.6% (38.2% per single mirror) under near normal incidence (2 degrees with respect to the normal) at 18.9 nm was determined. The combined reflectivity of all multilayer mirrors (M1-3) is approximately 3.2%. The experiment was arranged such that incoherent plasma emission, that appears as a more than two orders of magnitude weaker background in the spectrum in Fig. 1c, could potentially only reach the detector via the mirrors aligned for the directed SXRL emission. The distance of 1 m between M2 and the source geometrically selects the directed SXRL emission over the existing 4π plasma emission. Further, the multilayer mirrors spectrally select a narrow portion of the emission centred near the laser line to further reduce possible emission not relating to lasing to reach the target. In the experiment the SXRL emission was optimized regarding the number of pump laser exposures on the same position on the Mo target (see Methods). 
Subsequent exposures on the same target position generate a microstructure on the initially polished target, and the resulting inhomogeneous plasma exhibits reduced laser emission. During the experiment, no significant background from this plasma emission was observed in the absence of laser emission after too many subsequent exposures, confirming sufficient selectivity of the SXRL emission over the plasma emission in the experiment. For imaging applications and coherence measurements, different samples were introduced into the refocused beam. A soft X-ray sensitive CCD captured the diffracted light downstream. The photon flux per shot, measured with a calibrated XUV-sensitive photodiode, is (3.2 ± 0.3) × 10^10 photons per shot at the source, which gives the system an overall capability of producing up to 3 × 10^12 photons per second in less than 0.01% bandwidth centred around 18.9 nm. Coherence of the SXRL For quantification of the transverse coherence of the SXRL radiation, 300 nm wide double slits with different slit distances (d = 0.92 to 6 µm) were introduced into the refocused beam after the curved mirrors. The fringe patterns were subsequently recorded by a CCD. As a next step, the pixels of the CCD were binned down vertically to obtain typical fringe patterns along the horizontal axis of the beam. Typical fringe patterns obtained in this way are depicted in Fig. 2a-d. For increased slit separation, the fringe visibility decreases, indicating reduced coherence between the wavelets emitted from the slits. As a sanity check that the narrower fringe spacing for larger slit spacing does not reduce the visibility merely because of the possibly limited resolution of the detector, these larger distances were also measured at a larger sample-to-CCD distance, which confirmed the limited visibility. From scanning diffracting objects across the beam, the beam diameter in the focus was determined to be approximately 15 µm at full width at half maximum (FWHM), resulting in a source size of approximately 30 µm considering that the curved mirrors in the setup demagnify the beam by a factor of two. Hence, homogeneous illumination can be assumed for the slit distances discussed here. Fitting the well-known formula for a two-slit diffraction pattern 37 at position z along the propagation axis, I(x) = I_0 sinc²(π α x / (λ z)) [1 + |μ_12| cos(2π β x / (λ z))], where α and β are the width and spacing of the slits, respectively, to the experimental data allows deducing the modulus of the complex coherence factor |μ_12|. We assume I_0 to be unity for normalized data and the fringe pattern to be centered around x = 0. In Fig. 2e the modulus of the complex coherence factor is plotted over the slit distance, indicating that the SXRL exhibits partial transverse coherence. By fitting a Gaussian to the experimental data 37 and using the convention that the transverse coherence diameter D_coh (the diameter of the coherent patch) is defined by |μ_12| = 0.88 (ref. 38), the transverse coherence diameter of the reimaged source is deduced to be D_coh = 1.3 ± 0.1 μm. Hence, the transverse coherence diameter is estimated to be on the order of ~8% of the beam diameter. This value agrees very well with the value of 1/20 of the beam diameter obtained by Liu and co-workers 29 for a fully saturated nickel-like Cd SXRL, as well as with the values obtained by Wang and co-workers 28 for an unseeded high-energy pumped nickel-like Mo SXRL and a neon-like Se SXRL 39.
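A minimal sketch of this fitting step is given below: the partially coherent two-slit formula quoted above is fitted to a binned lineout and returns |μ_12|. Only the wavelength and slit width are taken from the text; the slit-to-detector distance z, the slit spacing β and the synthetic stand-in data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

lam = 18.9e-9        # wavelength [m] (from the text)
alpha = 300e-9       # slit width [m] (from the text)
z = 0.1              # slit-to-detector distance [m] (assumed)
beta = 2.0e-6        # slit spacing [m] (assumed, one of the tested separations)

def fringe(x, mu12, i0):
    """Partially coherent two-slit pattern: sinc^2 envelope times fringe term."""
    envelope = np.sinc(alpha * x / (lam * z)) ** 2        # np.sinc(u) = sin(pi u)/(pi u)
    return i0 * envelope * (1.0 + mu12 * np.cos(2.0 * np.pi * beta * x / (lam * z)))

# x: detector coordinate of the binned lineout, y: normalized intensity (synthetic here).
x = np.linspace(-5e-3, 5e-3, 2000)
y = fringe(x, 0.8, 1.0) + 0.01 * np.random.randn(x.size)

(mu12_fit, i0_fit), _ = curve_fit(fringe, x, y, p0=[0.5, 1.0], bounds=([0, 0], [1, np.inf]))
print(f"|mu12| = {mu12_fit:.2f}")
```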
These coherence fractions are furthermore comparable to the results reported for the FLASH XUV-FEL, which has a relative transverse coherence diameter of ~6% for multi-shot accumulation 40 and ~40% for single-shot exposure 41. While the temporal coherence due to the narrow linewidth is outstanding for the presented SXRL source, the transverse coherence limits the usability towards coherent diffraction imaging (CDI). It is important to note that the measurement was performed in the refocused beam, ensuring the highest fluence on the sample and the shortest exposure time for imaging. By doing so, the Fraunhofer diffraction condition is furthermore explicitly satisfied on the double slits, given that in the focus of an ideal beam a flat wave front with an infinite radius of curvature can be assumed. This is advantageous for the coherence characterization of sources applied to coherent diffraction imaging, because it is known that the transverse coherence of a divergent beam can change over the propagation distance 42. As a result, a far-field measurement of the transverse coherence does not necessarily yield a reliable value for the transverse coherence present in the refocused beam, and apart from special cases the transverse coherence is known to be improved in a refocused beam 43. Finally, the temporal coherence, or longitudinal coherence length, of the SXRL can be estimated. The measured bandwidth of Δλ = 25 pm, as depicted in Fig. 1c, is limited by the resolution of the spectrometer. However, precise measurements of the bandwidth of a Mo-based SXRL using a high-resolution spectrometer 25 yielded a bandwidth of 1.8 pm, corresponding to a long temporal coherence length of order λ²/Δλ. Hence, no deterioration of the diffraction pattern due to the finite bandwidth of the beam is expected, the transverse coherence length being substantially smaller. To verify that, a double slit diffraction pattern with the narrowest double slit, having a slit separation of 0.92 µm and a slit width of about 300 nm, was measured in detail. The fringe pattern depicted in Fig. 3 indicates that fringes are observed at very high diffraction angles due to the excellent temporal coherence, given that the slit separation is smaller than the transverse coherence diameter. Hence, two wavelets exiting either slit and observed under large angles, in the presented case 42° with respect to the normal of the sample plane, can still interfere with high fringe contrast. The slight decrease in the mutual coherence factor (red dots in Fig. 3) for high momentum transfers can be attributed to limitations in the dynamic range of the detector (~10^3). CDI using a SXRL As object for coherent diffraction imaging, a 50 nm thick silicon nitride membrane with 50 nm of gold deposited onto it was employed, in which an aperture was fabricated using a focused ion beam (STEM image shown in Fig. 4c). The object was placed in the refocused soft X-ray beam with a large-area X-ray-sensitive CCD camera (Andor iKon L, 27.6 × 27.6 mm² chip size) placed 15 mm downstream for capturing the diffracted light. For this geometry, the numerical aperture was 0.67, allowing for a half-pitch resolution of 14 nm. A typical diffraction pattern exhibiting fringes extending to the edge of the detector could be recorded within 300 laser shots (Fig. 4a). Even with a single laser shot, diffraction fringes were observed in an area of about 30% of the detector centred around the central speckle.
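Before turning to the reconstructions, the quoted imaging geometry can be checked directly; the short sketch below recomputes the numerical aperture and the Abbe half-pitch resolution from the stated chip size, sample-to-CCD distance and wavelength, assuming the diffraction pattern is centred on the chip.

```python
import numpy as np

lam = 18.9e-9          # SXRL wavelength [m]
chip = 27.6e-3         # CCD chip size [m]
dist = 15e-3           # sample-to-CCD distance [m]

# Half-angle subtended by half of the chip (pattern assumed centred on the chip).
theta = np.arctan((chip / 2) / dist)
na = np.sin(theta)
half_pitch = lam / (2 * na)

print(f"NA = {na:.2f}")                                       # ~0.67, as quoted
print(f"half-pitch resolution = {half_pitch * 1e9:.1f} nm")   # ~14 nm, as quoted
```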
For the iterative phase retrieval a guided version of the hybrid input-output algorithm was employed that proved to work well in previous CDI experiments 15,17. The phase retrieval algorithm was not able to reconstruct the object in full or at the anticipated resolution due to the limited transverse coherence of the SXRL (Fig. 4e). Figure 4 (partial caption): comparing the simulated Fourier modulus (b) to the measured data (d), the measurement is best represented by convolving the simulated pattern with a Gaussian of 1.7 pixels width (f) to account for the degree of decoherence; (g) the resulting simulation of the object space from the filtered Fourier space compares well to the reconstruction of the experimental data using established phase retrieval algorithms (e); the additional amplitude modulation in the retrieved object (e) compared to the simulation (g) can be explained by the non-uniform wavefront of the SXRL focus (cf. the far-field intensity distribution in Fig. 1b); panel (a) is plotted on a logarithmic scale, while panels (b), (d) and (f) are plotted on a linear scale; the scale bar in (a) is 10 µm⁻¹ and those in (c), (e) and (g) are one micron. In Fig. 4b to g this is studied in detail by investigating the effect of limited transverse coherence on the central part of the diffraction pattern under the assumption that temporal coherence is not a limiting factor at low spatial frequencies. Comparing the calculated modulus in Fourier space (Fig. 4b) to the measured one (Fig. 4d), smearing of the fringes due to limited transverse coherence is observed. For determining the degree of decoherence, the calculated pattern (Fig. 4b) is convolved with a Gaussian of varying width and the error between the calculated and measured pattern is minimized to find the best agreement (Fig. 4f). The Gaussian for best agreement was found to have a standard deviation of 1.7 pixels. A forward calculation of the object space from the diffraction data with limited transverse coherence (Fig. 4g) shows reasonable agreement with the object space reconstructed from experimental data (Fig. 4e). The small discrepancy in the amplitude distribution between Fig. 4e and g potentially arises from a non-uniform illumination amplitude in the refocused SXRL light. This can be expected from the spatial distribution of the measured far-field profiles (Fig. 1b), which likely translates into the focus. For the simulation (Fig. 4g) a uniform illumination with a flat wavefront was assumed. For the reconstruction presented, the object-space support was kept fixed to a support retrieved from the STEM image (Fig. 4c). Hence, a determination of the achieved resolution by established methods is not possible. Here it should be stressed that an idealized support is fed into the reconstruction in order to retrieve the coherent contribution of the object-space amplitude and observe the limitations due to spatial decoherence. One finds that in the experiment the object can be reconstructed with sufficient amplitude up to a spatial extent of approximately 1.5 microns, which compares well to the result for D_coh retrieved from the double slit measurement. All distances larger than this in object space result in fringes that cannot be properly phased due to incoherence. For a direct comparison to a high harmonic source, the same sample was investigated with the same instrument, while the source was switched (details of the laser system and HHG source can be found in ref. 7).
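A hedged sketch of the decoherence estimate described above follows: the coherent simulated pattern is blurred with Gaussians of varying width and the blur that best matches the measurement is kept. The file names and the intensity-matching step are assumptions for illustration; the actual processing details are not given in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# simulated: |FFT|^2 of the known object (fully coherent case); measured: recorded pattern.
# Both are 2-D arrays on the same pixel grid; the file names here are hypothetical.
simulated = np.load("simulated_pattern.npy")
measured = np.load("measured_pattern.npy")

def mismatch(sigma):
    """Normalized squared error between the blurred simulation and the measurement."""
    blurred = gaussian_filter(simulated, sigma)
    blurred *= measured.sum() / blurred.sum()          # match total intensity
    return np.sum((blurred - measured) ** 2) / np.sum(measured ** 2)

sigmas = np.arange(0.1, 5.0, 0.1)                      # candidate blur widths in pixels
errors = [mismatch(s) for s in sigmas]
best = sigmas[int(np.argmin(errors))]
print(f"best-fit decoherence blur: {best:.1f} pixels") # ~1.7 px in the experiment
```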
For the HHG measurement, the multilayer mirrors were replaced with a pair of mirrors that select the 23rd harmonic at 35 nm wavelength. A diffraction pattern with signal coverage on the CCD comparable to that of the SXRL (Fig. 4a) was captured within 600 s or 600,000 laser shots. Since the wavelength was approximately doubled while the geometry was otherwise kept constant, the achievable resolution is effectively reduced because the accessible k-space is approximately halved compared to the SXRL measurement. Due to the power-of-four scaling law of integration time versus achievable resolution, this results in a 16 times higher required integration time for the HHG source to achieve the same signal in k-space as with the SXRL, neglecting that in this case the numerical aperture would already be larger than one. Comparing the SXRL, which can run at up to 100 Hz repetition rate, to a standard kHz HHG source in terms of achievable resolution with respect to the integration time, we find an advantage of more than three orders of magnitude for the SXRL. For reconstructing the object space from the measured diffraction pattern, the same implementation and procedure as described for the SXRL data was used, except that the shrink-wrap algorithm 44 was employed to find the support iteratively during the reconstruction without any a priori knowledge. See the methods section in ref. 17 for further details. The support retrieved by gently applying shrink-wrap constitutes a soft edge of the object, allowing an estimate of the achieved resolution (see Supplementary Fig. 1 for a comparison with a fixed-support reconstruction). For the SXRL data the fixed support can be considered a soft edge as well, since amplitudes are not retrieved towards the edge of the support, which would otherwise limit the retrievable resolution. The reconstruction of the HHG experimental data (Fig. 5a) shows the object (Fig. 4c) in detail. The smallest features of the sample are well resolved (the vertical sections of the letter "n" are approximately 80 nm wide). In contrast, the reconstruction of the SXRL measurement (Fig. 5b) features only an approximately 1.5-micron-wide fraction of the object, as discussed before. Because the phase is not stable across the object, the support was kept fixed in the reconstruction of the SXRL data. This could hint at significant fluctuations of the SXRL wavefront, as would be expected from the complex shape of the far-field beam profile of the SXRL (Fig. 1b). Directly comparing the diffraction patterns (Fig. 5c and d) shows the effects of limited transverse coherence. From the direct comparison of the CDI reconstruction and the quantitative measurement of the transverse coherence, it can be concluded that for plane-wave CDI applications the modulus of the complex coherence factor needs to be larger than 0.75 for a given object size. It is worth noting that the measured limiting value for the complex coherence factor constitutes the lower bound required. For more realistic samples exhibiting hard edges but a non-zero background, such as binary masks and lithographic structures 8,45 in reflection-geometry CDI, and also for soft-matter samples exhibiting soft edges, this estimate remains valid (see also Supplementary Fig. 2). However, in general a higher photon flux or a better signal-to-noise ratio on the detector will be required for achieving comparable imaging conditions, due to the lower scattering cross section of such samples.
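As a back-of-the-envelope check of the integration-time comparison above, the sketch below combines the quoted exposure times, repetition rates and wavelengths. Treating the wavelength ratio with the power-of-four law in this way is one reading of the argument rather than the authors' exact procedure, and the 3 s SXRL exposure is inferred from the quoted 300 shots at 100 Hz.

```python
# Integration-time comparison using only figures quoted in the text.
lam_hhg, lam_sxrl = 35.0, 18.9          # wavelengths [nm]
t_hhg = 600.0                           # HHG exposure for comparable CCD coverage [s]
t_sxrl = 300 / 100.0                    # 300 SXRL shots at 100 Hz -> 3 s

k_space_penalty = (lam_hhg / lam_sxrl) ** 4   # power-of-four law; ~16x if treated as a factor of two
print(f"resolution / integration-time penalty for the HHG source: ~{k_space_penalty:.0f}x")

advantage = (t_hhg / t_sxrl) * k_space_penalty
print(f"overall SXRL advantage: ~{advantage:.0f}x (more than three orders of magnitude)")
```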
An important measure for the quality of a CDI measurement is the achieved transverse resolution. A comparison of the CDI data taken with the SXRL and the HHG source is shown in Fig. 6a and b, respectively. The advantage of the HHG source is evident, while the limited transverse coherence of the SXRL source allows only a glimpse of the shape of the object. For a quantitative analysis, line profiles were taken across the left vertical bar of the letter "H" (Fig. 6a and b, indicated by the white dotted line), which is measured by STEM to be 200 nm wide, and analysed at the 90/10% level 46 (Fig. 6c). For the HHG source a resolution of 48 nm was determined, while the SXRL reconstruction features approximately 180 nm spatial resolution. Due to the high degree of transverse incoherence, the resolution in the SXRL case (Fig. 6a) is not constant over the object, and alternative methods such as measuring the cut-off of the phase retrieval transfer function are precluded here by the use of a fixed support constraint. Hence, for this particular measurement using the SXRL only a rough estimate of the achieved transverse resolution can be made. It is important to note that for both experiments the speckles in the captured diffraction patterns extended well beyond a numerical aperture of 0.5. Assuming a fully coherent signal, one would expect a resolution on the order of the wavelength. In the HHG experiment the achieved resolution amounts to 1.4 wavelengths. The limitation here is likely related to the temporal coherence of the source, i.e. the linewidth of the harmonic line (Δλ/λ ≈ 1/30), which effectively limits the resolution 17. For the SXRL the resolution expected from the extent of the fringes (Fig. 4a) and the wavelength is at least 20 nm, which is about an order of magnitude away from the value achieved in the reconstruction. We attribute this to the limited transverse coherence of the SXRL source. The convergence and quality of the reconstruction are severely limited by the fact that the object is much larger than the coherence length of the source. It is expected that for an object smaller than the transverse coherence diameter, under otherwise identical illumination conditions, the object can be readily reconstructed at a resolution near the value expected from the largest measured momentum transfer k. This is further indicated by the excellent fringe modulation measured for a double slit spacing of 0.92 µm (Fig. 2a). Established methods in iterative phase retrieval, such as shrink-wrap, that allow retrieval of the object without a priori information are thus prone to fail in this case, as a sharp edge of the object cannot be determined. This constraint can be resolved if the SXRL is used in conjunction with ptychography 47,48 or if isolated objects are imaged that are smaller than 1.5 micron, which is also suggested by the high fringe contrast obtained from the SXRL for sufficiently small slit spacing (Fig. 2a). Summary In summary, we have studied the transverse coherence properties of a transient nickel-like Mo soft X-ray laser pumped at moderate energies and applied this source to coherent diffraction imaging. A direct comparison to a high harmonic source indicates that the extraordinarily high flux of the SXRL allows for a more than three orders of magnitude shorter integration time for collecting diffraction data that can potentially result in the same imaging resolution. A drawback of using a SXRL for table-top CDI is the fixed wavelength and the limited transverse coherence diameter, which renders only a fraction of the wavefront useful for CDI.
Direct comparison to a double slit experiment suggests that a modulus of the complex coherence factor of |μ_12| > 0.75 is required for plane-wave CDI. For measuring the coherence properties, it is beneficial to measure the coherence directly in the refocused beam, i.e. the reimaged source, as this avoids propagation effects on the coherence properties. In this case a known complex-shaped aperture and established algorithms for phase retrieval in coherent diffraction imaging can be employed to estimate the transverse coherence properties, given that the diffracting object is larger than D_coh. For the experiment presented, the transverse coherence limits the object size, or possible field of view for CDI, to 1.5 microns. The presented approach of employing a SXRL pumped at moderate energies enables scaling the concept to much higher repetition rates (up to 1 kHz, cf. ref. 49). Under the conditions of pumping the SXRL at moderate laser energies (<500 mJ), coherence properties comparable to those of SXRLs pumped with energies exceeding 1 J are confirmed. Figure 5. Comparison of CDI using a HHG and SXRL source. The reconstruction of the object from the diffraction pattern captured using a HHG source (a) shows the object (Fig. 4c) in detail, while the reconstruction from the SXRL (b) is incomplete and features unstable phases. Comparing the raw data measured ((c) and (d)), the effect of the limited transverse coherence of the SXRL becomes obvious. In panels (a) and (b) the complex-valued object space is depicted, where the brightness and hue encode the amplitude and phase, respectively (see inset in panel (b)). The scale bars are one micron. In panels (c) and (d) the measured intensity of the diffraction pattern around the central speckle is depicted. The image area shown was cropped for easy comparison of the two. Further, it is shown that the transverse coherence of the SXRL compares to that of SASE-FELs for multi-shot exposure. Future directions in using SXRLs for single-shot and high-resolution diffraction imaging might employ seeding with a high harmonic source 28,50,51, which might combine the best of two worlds. Despite the observed limitations, the presented source and scheme offer a wealth of possible applications, e.g., in imaging of quantum dots or lithographic mask inspection in combination with a ptychographic scanning technique. Methods Molybdenum soft X-ray laser. The soft X-ray laser (SXRL) operating in grazing incidence pump (GRIP) geometry was pumped by two pulses of a high repetition rate 100 Hz thin disk laser (TDL) chirped pulse amplification (CPA) system. The TDL system consists of a front-end with an Yb:KGW oscillator, stretcher and Yb:KGW regenerative amplifier, followed by two regenerative amplifiers and one multipass amplifier. The front-end delivers an output energy of 0.3 mJ at a pulse duration of about 1.5 ns at 1030 nm. The output is divided into two pulses and each of these is subsequently amplified in a regenerative amplifier to a level of about 100 mJ. The pulse from the first regenerative amplifier is compressed to a duration of approximately 150 ps using a grating compressor, while the output of the second regenerative amplifier is fed into a thin disk multipass amplifier, which amplifies the pulses to an energy of up to 400 mJ; these pulses are subsequently compressed in a grating compressor to about 2 ps pulse duration.
The long pre-pulse (150 ps, E ≈ 70 mJ) is focused by a cylindrical lens (f = −500 mm) and a spherical lens (f = 380 mm) onto the target at normal incidence, giving a line focus of about 30 µm in width. The generated plasma column is then heated by a short pulse (2 ps, E ≈ 270 mJ) focused according to the GRIP method by a spherical mirror (Edmund Optics, f = 762 mm) into the preformed plasma. For the Mo target an optimum GRIP angle of 24 degrees was determined. The delay between the two pulses has proven to be a very critical parameter 21,22,52. Therefore, the delay between the long and short pulse can be adjusted by adapting the round-trip time of the two regenerative amplifiers, with fine tuning provided by an additional delay stage. Because of the high repetition rate, all optical components are protected against debris by thin glass plates or foils, and the SXRL output is guided through an aperture to reduce debris contamination of the following optical elements. A Mo slab target with a length of 50 mm and a width of 5 mm is used in the experiments. The target was attached to a motorized stage with four degrees of freedom, allowing adjustment along three axes as well as continuous renewal of the target surface by translating the slab. The most stable SXRL operation was found when the target surface was renewed after 5-10 laser shots. The spectral output of the SXRL has been measured using a flat-field grating spectrometer. It consists of a filter wheel equipped with Al filters (thickness 0.2 to 1 µm), an entrance slit (100 µm), an aberration-corrected concave grating (HITACHI #0437, 1200 l/mm) on a rotational stage and a back-illuminated CCD camera (ANDOR DO420A-BN, 1024 × 256 pixels). For a 70 mJ/150 ps long pulse and a 270 mJ/2 ps short pulse, a lower limit for the energy of a single SXRL pulse was estimated to be 300 nJ, with a divergence of about 10 mrad.
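A quick cross-check of the quoted source figures, assuming only the standard photon-energy relation E = hc/λ: the measured 3.2 × 10^10 photons per shot at 18.9 nm correspond to roughly 0.34 µJ per pulse, consistent with the ≥300 nJ lower limit, and to about 3 × 10^12 photons per second at 100 Hz.

```python
h, c = 6.62607e-34, 2.99792e8            # Planck constant [J s], speed of light [m/s]
lam = 18.9e-9                            # SXRL wavelength [m]
photons_per_shot = 3.2e10                # measured with the calibrated XUV photodiode
rep_rate = 100.0                         # repetition rate [Hz]

e_photon = h * c / lam                   # ~1.05e-17 J (~66 eV)
pulse_energy = photons_per_shot * e_photon
print(f"photon energy: {e_photon / 1.602e-19:.1f} eV")
print(f"pulse energy: {pulse_energy * 1e9:.0f} nJ")                 # ~340 nJ, vs. the >=300 nJ estimate
print(f"photons per second: {photons_per_shot * rep_rate:.1e}")     # ~3e12, as quoted
```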
6,781.6
2017-07-13T00:00:00.000
[ "Physics", "Materials Science" ]
Factors related to suboptimal recovery of renal function after living donor nephrectomy: a retrospective study Background The renal function of the remaining kidney in living donors recovers up to 60~70% of pre-donation estimated-glomerular filtration rate (eGFR) by compensatory hypertrophy. However, the degree of this hypertrophy varies from donor to donor and the factors related to it are scarcely known. Methods We analyzed 103 living renal transplantations in our institution and divided them into two groups: compensatory hypertrophy group [optimal group, 1-year eGFR ≥60% of pre-donation, n = 63] and suboptimal compensatory hypertrophy group (suboptimal group, 1-year eGFR < 60% of pre-donation, n = 40). We retrospectively analyzed the factors related to suboptimal compensatory hypertrophy. Results Baseline eGFRs were the same in the two groups (optimal versus suboptimal: 82.0 ± 13.1 ml/min/1.73m2 versus 83.5 ± 14.8 ml/min/1.73m2, p = 0.588). Donor age (optimal versus suboptimal: 56.0 ± 10.4 years old versus 60.7 ± 8.7 years old, p = 0.018) and uric acid (optimal versus suboptimal: 4.8 ± 1.2 mg/dl versus 5.5 ± 1.3 mg/dl, p = 0.007) were significantly higher in the suboptimal group. The rate of pathological chronicity finding on 1-h biopsy (ah≧1 ∩ ct + ci≧1) was much higher in the suboptimal group (optimal versus suboptimal: 6.4% versus 25.0%, p = 0.007). After the multivariate analysis, the pathological chronicity finding [odds ratio (OR): 4.8, 95% confidence interval (CI): 1.3–17.8, p = 0.021] and uric acid (per 1.0 mg/dl, OR: 1.5, 95% CI: 1.1–2.2, p = 0.022) were found to be independent risk factors for suboptimal compensatory hypertrophy. Conclusion Chronicity findings on baseline biopsy and higher uric acid were associated with insufficient recovery of the post-donated renal function. Background End-stage renal disease (ESRD) substantially increases the risk of death and cardiovascular disease [1][2][3][4]. Renal transplantation is the best treatment option for ESRD [5]. In Japan, due to the shortage of deceased donors, 89.2% of renal transplants are from living donors [6]. To minimize the risk of ESRD after donation, the selection of living donors requires great care [7]. The renal function of the remaining kidney in living donors usually recovers up to 60~70% of baseline function through a compensatory hypertrophy mechanism [8,9]. However, the degree of this compensatory hypertrophy varies from donor-to-donor. The reason for this between-donor difference is unclear; however, considering the wide range of the health status among living donors, the presence of subtle metabolic syndromes or preclinical renal diseases prior to transplantation are possible, [5] which could affect functional renal recovery after the donation. Despite meticulous efforts to avoid adverse events for living donors, the 15-year risk of ESRD in donors is 3.5 to 5.3 times higher than that of a matched population [10,11]. Therefore, accurate estimation of the residual glomerular filtration rate (eGFR) is crucial in order to maintain a donor's life-long renal function and to prevent cardiovascular events. We hypothesized that donors' baseline characteristics and findings on baseline renal biopsy would predict the extent of compensatory hypertrophy after renal donation [5,10]. Therefore, our aim in this study was to identify the factors related to a suboptimal recovery of renal function in living donors after donation. 
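A minimal sketch of the grouping rule used in this study follows, written with the Japanese-coefficient MDRD eGFR equation quoted in the Methods below and the 60% cut-off from the abstract; the example creatinine values and donor characteristics are made up for illustration.

```python
def egfr_jpn(scr_mg_dl: float, age: int, female: bool) -> float:
    """Modified IDMS-MDRD eGFR for Japanese subjects (ml/min/1.73 m^2)."""
    egfr = 194.0 * scr_mg_dl ** -1.094 * age ** -0.287
    return egfr * 0.739 if female else egfr

def hypertrophy_group(egfr_baseline: float, egfr_1yr: float) -> str:
    """Optimal if the 1-year eGFR reaches at least 60% of the pre-donation value."""
    return "optimal" if egfr_1yr >= 0.6 * egfr_baseline else "suboptimal"

# Illustrative donor: creatinine 0.70 mg/dl pre-donation, 1.05 mg/dl at one year, age 57, female.
pre = egfr_jpn(0.70, 57, True)
one_year = egfr_jpn(1.05, 57, True)
print(f"{pre:.1f} -> {one_year:.1f} ml/min/1.73m2: {hypertrophy_group(pre, one_year)}")
```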
Study population We conducted a retrospective analysis of consecutive 111 cases of living renal transplantations performed at our institution from 2011 to 2016. The donor's split renal function was calculated by using MAG3 scintigraphy to determine the side of the kidney graft. Living donor nephrectomy was performed using a pure retroperitoneoscopic approach. Of these 111 cases, 8 cases were excluded due to unavailability of baseline biopsies (n = 3) and loss to follow-up (n = 5). The remaining 103 cases were divided into two groups: the compensatory hypertrophy group [optimal group, with a 1-year eGFR ≥60% of the pre-donation eGFR, n = 63] and suboptimal compensatory hypertrophy group (suboptimal group, with a 1-year eGFR < 60% of pre-donation eGFR, n = 40). The cut-off eGFR of 60% for classification of suboptimal compensatory hypertrophy was based on a previous study that reported a typical range of postdonation eGFR of 62.5~67% from baseline renal function [8]. We evaluated between-group differences in baseline characteristics and findings through the baseline biopsy obtained during kidney transplantation. Definition of the measurements EGFR was calculated using the following formula for the modified IDMS-MDRD Study equation for Japanese individuals: eGFR (ml/min/1.73 m2) = 194 × (Serum creatinine) -1.094 × (Age) -0.287 × 0.739 (if female) [12]. EGFRs were assessed at the initial visit and the annual visit at one year after the donation. Japan Diabetes Society (JDS) HbA1c values were converted into National Glycohemoglobin Standardization Program (NGSP) HbA1c values using the following formula, as recommended by the JDS: NGSP value (%) = 1.02 × JDS value (%) + 0.25%. We diagnosed hypertension as follows using the criteria defined by the Japanese Society of Hypertension (JSH): systolic blood pressure ≥ 140 mmHg or diastolic blood pressure ≥ 90 mmHg [13]. Hyperlipidemia was defined as follows using the criteria recommended by the Japan Atherosclerosis Society; low-density lipoprotein cholesterol (LDL-C) ≥140 mg/dl, high-density lipoprotein cholesterol (HDL-C) ≤40 mg/dl, or triglycerides (TG) ≥150 mg/dl [14]. Salt consumption per day and estimated fractional excretion of sodium in urine were calculated using the formula recommended by the Japanese Society of Hypertension [13]. Hyperuricemia was defined as serum uric acid level ≥ 7.0 mg/dl in men and ≥ 6.0 mg/ dl in women [15]. In our study, 14 patients were diagnosed with hyperuricemia before donation, and none of them underwent uric acid treatment. Pathological diagnosis Baseline kidney biopsy was defined as biopsy performed at 1 h after re-perfusion during kidney transplant operation. No other biopsies with different timings or causes (e.g. 1-year protocol biopsy or episode biopsy) were included in this study. Pathological findings were evaluated using the current Banff score [16] of the chronic renal changes (at 1 h) identified in the baseline biopsy specimen, namely: interstitial fibrosis (ci), tubular atrophy (ct), arteriolar hyalinosis (ah), and glomerular atrophy. Based on the percentage of the renal cortical area visible, the ci was classified as minimal (≦5%), mild (6-25%), moderate (26-50%), or severe (≧50%), which corresponded to Banff scores of ci of ci-0, ci-1, ci-2, and ci-3, respectively. Ct was similarly categorized according to Banff scores of ct-0, ct-1, ct-2, and ct-3. Ah was classified as none, mild-to-moderate, moderate-to-severe, or severe, corresponding to a Banff score of ah 0, ah 1, ah 2, and ah 3, respectively. 
Glomerular atrophy was evaluated as the proportion of atrophic glomeruli to the total number of glomeruli in the specimen. Baseline biopsy data were collected retrospectively from the pathology reports. Statistical analysis Between-group differences were evaluated using Student's t-test for continuous data and the chi-squared (χ2) test for categorical data. We performed a logistic regression analysis, using a forward selection method, for sex, body surface area (BSA), and the characteristics with significant between-group differences. All analyses were performed using SPSS (version 20, IBM, Chicago, Illinois, USA). Two-tailed p-values ≤0.05 were considered statistically significant. Values are expressed as mean ± standard deviation, unless otherwise specified. Histological findings The chronic histological changes on baseline biopsy are shown in Table 2. In terms of ah, ci, and ct, there were no significant differences between the two groups. The combined ct and ci score (ct + ci≧1) tended to be higher in the suboptimal group, but this between-group difference was not significant. However, the incidence of having both an ah score ≧1 and a ct + ci score ≧1 (ah≧1 ∩ ct + ci≧1) was significantly higher in the suboptimal than in the optimal group (optimal versus suboptimal: 6.4% versus 25.0%, p = 0.007). The rate of glomerular atrophy was not significantly different between the two groups (optimal versus suboptimal: 9.1% versus 11.4%, p = 0.280). Changes in values from pre-donation to 1-year post-donation are shown in Fig. 1 (A: eGFR, B: HbA1c, C: BUN and D: uric acid). BUN and uric acid rose significantly from pre- to post-donation in both the optimal and suboptimal groups, whereas HbA1c did not. Discussion We identified hyperuricemia and chronic pathological changes on the 1-h baseline biopsy as independent risk factors for suboptimal compensatory hypertrophy. Although pre-donation eGFRs did not differ between the optimal and suboptimal groups, the post-donation eGFR was nearly 10 ml/min/1.73m2 lower in the suboptimal group than in the optimal group. We defined suboptimal compensatory hypertrophy at 1-year post-donation by an eGFR < 60% of baseline, based on the findings of Colin et al. [8], who reported that renal function after donation recovers to about 62.5~67% of baseline values, which is consistent with the findings of other studies [8,9,17,18]. In addition, the rate of GFR decline was significantly higher in patients with a baseline GFR < 50 ml/min/1.73m2 [2,19,20]. The risk of cardiovascular events and uremic symptoms increased significantly in patients with an eGFR < 45 ml/min/1.73m2 [3,20], with this risk increasing from 13 to 51% over the eGFR range of 7.5 to 15 ml/min/1.73m2 at 1 year [21]. Thus, by setting the cut-off at 60%, we were able to differentiate donors close to chronic kidney disease (CKD) stage IIIA (45~59 ml/min/1.73m2) from those with CKD stage IIIB (30~44 ml/min/1.73m2), which allowed us to identify the clinically relevant risk factors for suboptimal compensatory hypertrophy. Interstitial fibrosis and tubular atrophy (IFTA) on baseline biopsy are more closely associated with lower long-term renal function in living donors than other abnormalities, including glomerulosclerosis and arteriolar hyalinosis [10].
However, IFTA is a pattern of injury that has many underlying causes [22], which is why, in our study, we strove to specify the cause of IFTA by combining the ct/ci and ah scores; this identified chronic ischemia induced by arteriosclerosis as the main cause of IFTA. Interestingly, the impact of this combination was independent of age, which is suggestive of a discrepancy between actual and biological age. Moreover, there was no correlation between the chronicity score (ah≧1 ∩ ct + ci≧1) and glomerular atrophy. This result is consistent with the well-known fact that tubular atrophy is superior to glomerular pathology as a predictor of declining renal function [23]. It is impractical to obtain a baseline renal biopsy specimen as a component of the primary donor selection process. Instead, Ohashi et al. [5] showed that metabolic syndrome in donors is associated with chronic histological changes in the kidney and subsequently protracted recovery of kidney function after donation. In our study, hypertension, hyperlipidemia, and BMI were not significantly different between the two groups. Furthermore, HbA1c tended to be higher in the suboptimal group, but was not retained as an independent predictor in the multivariate analysis. This may be due to the small number of donors. However, uric acid was an independent risk factor for suboptimal recovery of donor renal function. Although the uric acid levels of both groups were within the normal range in our study, this result is in line with the report in [25], which demonstrated a significant association between uric acid and the development of early GFR loss. Sumiyoshi et al. [26] and Nagahama et al. [15] reported that higher uric acid levels were independently associated with a greater risk of incident metabolic syndrome and that hyperuricemia tends to be accompanied by a clustering of cardiovascular risk factors. In addition, Antonini et al. [27] showed that carotid arterial stiffness is related to uric acid, independently of established cardiovascular risk factors. Although pre-donation hyperuricemia is not included in the donor evaluation guidelines [7], caution should be exercised when hyperuricemia is detected in a donor, regardless of normal renal function. Some limitations of our study are that it was a single-institution study with a small sample size, that the analysis was retrospective in nature, and that the follow-up term was relatively short. As biopsies are difficult to perform prior to donor selection, these findings cannot be included in the donor selection process. Additional studies are needed to investigate the contribution of other factors to the health status of donors, such as presarcopenia, in order to predict chronic renal pathology from clinical findings. Conclusions Pathological findings in the 1-h biopsy specimen and a higher uric acid level were associated with insufficient recovery of renal function at 1 year after donation. Living donors with hyperuricemia and a high chronicity score (ah≧1 ∩ ct + ci≧1) should be followed up with caution after donation.
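The analysis pipeline summarized in the Methods (Student's t-test, chi-squared test, and logistic regression reported as odds ratios with 95% confidence intervals) can be sketched as follows. This is a schematic re-implementation in Python rather than the SPSS procedure actually used; the variable names and the simulated donor table are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical donor table; columns mirror variables named in the Methods.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "suboptimal": rng.binomial(1, 0.4, 103),       # 1 = 1-year eGFR < 60% of baseline
    "age": rng.normal(58, 10, 103),
    "uric_acid": rng.normal(5.1, 1.3, 103),
    "chronicity": rng.binomial(1, 0.13, 103),      # ah>=1 and ct+ci>=1
})

# Student's t-test for a continuous variable, chi-squared for a categorical one.
t, p_t = stats.ttest_ind(df.loc[df.suboptimal == 1, "uric_acid"],
                         df.loc[df.suboptimal == 0, "uric_acid"])
chi2, p_chi2, _, _ = stats.chi2_contingency(pd.crosstab(df.suboptimal, df.chronicity))

# Logistic regression; odds ratios and CIs are exponentiated coefficients.
X = sm.add_constant(df[["age", "uric_acid", "chronicity"]])
fit = sm.Logit(df["suboptimal"], X).fit(disp=0)
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```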
2,854.6
2019-11-08T00:00:00.000
[ "Medicine", "Biology" ]
Tunable UV source based on an LED-pumped cavity-dumped Cr:LiSAF laser : We developed a light-emitting diode (LED)-pumped Cr:LiSAF laser operating in Q-switched and cavity-dumped regimes. The laser produces 1.1 mJ pulses with a pulse duration of 8.5 ns at a repetition rate of 10 Hz on a broad spectrum centered at 840 nm with a full width at half maximum of 23 nm. After frequency tripling in two cascaded LBO crystals, we obtained 7 ns pulses with an energy of 13 µJ at 280 nm and with a spectral width of 0.5 nm, limited by the spectral acceptance of the phase matching process. By rotating both LBO crystals, UV emission is tuned from 276 nm to 284 nm taking advantage of the broad infrared spectrum of the Cr:LiSAF laser. Introduction With the impressive development of high-power LED in the visible, LED-pumped lasers have found a regain of interest since their first demonstration in the 1960's [1]. Initially concentrated on Nd-doped crystals [1], LED-pumped lasers using tunable media have been reported with polymers [2], fibers [3] nearly 40 years later. However, tunability has been really demonstrated only very recently on LED-pumped lasers with crystals doped with transition-metal ions like alexandrite (Cr 3+ :BeAl2O4) with a tuning range between 715 nm and 800 nm [4], Cr:LiSAF (Cr 3+ :LiSrAlF6) between 810 nm and 960 nm [5], and Ti:sapphire (Ti 3+ :Al2O3) between 755 nm and 845 nm [6]. In these cases, the high pump power density necessary to reach the threshold is achieved by LED-pumped Ce:YAG concentrators emitting in the yellow-orange [7]. Providing a pump power density 10 to 20 times higher than the power density of a single LED, LED-pumped concentrators have unlocked the potential of LED-pumped lasers. Compared to other pumping sources, LED-pumped concentrators combine the advantages of semi-conductors (compacity and stability), with the advantages of flashlamps (energy, robustness and low cost). They can provide an energy in the 100 mJ range and a peak power reaching 10 kW/cm 2 [6]. With such pumping parameters, LED-pumped lasers can now move from simple configurations (for example two mirror cavities) to more complex designs. The purpose of this paper is to demonstrate the next step towards complex LED-pumped laser systems. With a broad tuning range in the near infrared, LED-pumped transition-metal lasers offer many opportunities for new low-cost and very robust laser sources like ultrashort lasers, or LiDAR for atmospheric sensing, altimetry and vegetation monitoring. In addition, the 280-360 nm band can be reached by frequency conversion on purpose to detect improvised explosive devices, chemical or biologic compounds [8]. This band can be addressed by frequency doubling of alexandrite for the wavelengths higher than 350 nm and frequency tripling of Ti:sapphire or Cr:LiSAF for lower wavelengths, typically below 300 nm. In this paper, we chose to investigate an LED-pumped Cr:LiSAF because it presented much better performance than LED-pumped Ti:sapphire [5,6]. Thanks to its better spectroscopic parameters. Cr:LiSAF has often been implemented in oscillators under longitudinal pumping with Nd:lasers [9] or laser diodes [10][11][12] with a maximum pump power in the watt level in continuous wave. For energies reaching the mJ range [13,14] and the J range [15], some papers reported transverse pumping of Cr:LiSAF with flashlamps at low repetition rates, with the wellknown drawbacks of theses pump sources (stability and lifetime). 
Pump pulsed operation with laser diodes were reported [16][17][18][19] including complex transverse pumping configuration including 4 laser diode stacks at 690 nm (675 W peak power each) for a total pump energy of 200 mJ during 74 µs at 10 Hz : this led to 47 mJ pulses of 300 ns in Q-switched operation in multimode operation [16]. Pulsed pumping of Cr:LiSAF with laser diodes has also been demonstrated at higher repetition rates (kHz) but with lower energies [17]. LED-pumped concentrators can provide the same range of pump energy than stacks in a simpler setup. We recently demonstrated a Cr:LiSAF oscillator emitting 8 mJ in free running operation transversally pumped by a Ce:YAG concentrator [5]. The paper presents the first active Qswitching and cavity dumped Q-switching of an LED concentrator pumped Cr:LiSAF oscillator and its frequency conversion in the ultraviolet. Description of the setup The experimental setup is reported on Fig. 1. A 5.5% doped Cr:LiSAF crystal is transversally pumped by a Ce:YAG concentrator, itself pumped by 2240 blue LEDs in a design presented elsewhere [5]. The Cr:LiSAF is 14 mm long with a section of 1x1 mm². The pump power remaining after propagation through the crystal thickness (1 mm) is reflected back into the Cr:LiSAF by a gold mirror. The Cr:LiSAF laser facets are cut at Brewster angle, with its c-axis in the plane of incidence. In order to take benefit of the strong absorption of the Cr:LiSAF on the c-axis, the pumping head is oriented in the vertical plane, perpendicular to the plane of the laser cavity . For mechanical reasons related to the size of the pumping head, the crystal is designed in a prism configuration that enables the laser beam to be in the same half space after refraction in the Cr:LiSAF. The Cr:LiSAF crystal is directly in contact with a water cooled heat sink (piece of aluminum linked to the LED's printed circuit board cooling system). For LiDAR applications, the axial resolution is related to the pulse duration. Less than 10 ns pulses are generally required to ensure an axial resolution in the meter range. This duration can hardly be obtained on simple Q-switched lasers like Cr:LiSAF which small signal gain is usually limited to values below 1.5 [5]. As a matter of fact, even in a short cavity (less than 10 cm), the pulse duration remained higher than 40 ns [5]. Hence, we choose to design a laser operating in cavity dumping as it has been previously reported with alexandrite lasers for LiDAR applications [20]. We designed a 3-mirror cavity (Fig. 1) ensuring a TEM00 laser operation with the largest size possible in the Cr:LiSAF, regarding its small aperture of 1 x 1 mm². The waist is fixed to 260 µm in the first part of the cavity including the Cr:LiSAF crystal. In the second part of the cavity, the waist radius is set at 660 µm, adapted to operate with a Pockels cell in Q-switched operation. The two concave mirrors (M1 R=0.5 m and M2 R=1 m) operate at distance of half their radii of curvature from the two waists. In this configuration, the cavity length is 1.27 m corresponding to a cavity roundtrip of 8.5 ns tailored to generate sub 10 ns pulses in cavity-dumping operation. Cavity-dumping in the infrared The laser operates at a repetition rate of 10 Hz in order to reduce thermal effects occurring in the Cr:LiSAF. The pump pulse duration is adjusted to 100 µs, slightly above the fluorescence lifetime of Cr:LiSAF (67 µs). The pump duration is optimized by monitoring the fluorescence signal emitted by the Cr:LiSAF. 
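A one-line check of the cavity timing quoted above: the 1.27 m cavity length corresponds to a round-trip time 2L/c of about 8.5 ns.

```python
c = 2.998e8            # speed of light [m/s]
cavity_length = 1.27   # [m]

round_trip = 2 * cavity_length / c
print(f"cavity round-trip time: {round_trip * 1e9:.1f} ns")   # ~8.5 ns, matching the text
```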
Indeed, we observed that the amplitude of the fluorescence signal increases for pump durations up to 100 µs and then tends to decrease for longer pump pulses. This can be related to thermal quenching and to the temperature increase induced by the pump. As a matter of fact, the temperature of the Cr:LiSAF crystal is 39°C in this operating range, below thermal quenching (the temperature is controlled measuring the fluorescence lifetime). At 100 µs, the pump energy delivered by the LED-pumped concentrator is 103 mJ. Despite the poor spectral matching between Ce:YAG emission spectrum and Cr:LiSAF absorption spectrum, we estimate that an energy of 32 mJ is absorbed in the Cr:LiSAF taking into account the gold mirror reflecting the unabsorbed pump energy during the first pass in the laser crystal. Because of transverse pumping, only a part of this energy can be used by the laser beam. Considering the pump intensity profile (Fig. 2), the volume of the laser mode in the Cr:LiSAF (radius of 260µm), and the Cr:LiSAF storage lifetime, we estimate that the maximum energy that can be used at the laser wavelength is 3.4 mJ. Before operation in cavity dumping, we tested operation in the Q-switched regime with a plane output coupler M3 having a transmission of 10 % at 850 nm. For this purpose, we inserted a KDP Pockels cell (QX-1020, Gooch & Housego) and a polarizer with a transmission of 98% for the horizontal polarization and a reflection of 80% for the vertical one. The Pockels cell is initially adjusted to be a quarter wave plate (λ/2 in double pass). The laser operation starts by applying a first step voltage transforming the Pockels cell into a half-wave plate (λ in double pass). We obtained a pulse energy of 2 mJ, a pulse duration of 135 ns and a buildup time of 530 ns. The long pulse duration in this configuration is related to the long cavity and the relatively low gain. Next, the output coupler is replaced by a highly reflective mirror. In this configuration, the buildup time is reduced to 480 ns. Once the pulse reaches its maximum, a second step voltage is applied on the Pockels cell, forcing the pulse to be ejected from the cavity by the polarizer. Fig. 3 presents the results obtained under cavity dumping. After the polarizer, we obtained pulses with an energy of 1.1 mJ. A photodiode put close to the mirror M1 gives the pulse evolution in the cavity (Fig. 3(a)). The output pulse is monitored by a second fast photodiode (rise time 1 ns). Fig. 3(b) shows few ripples after the ejection of the main pulse: this can be attributed to the residual transmission of 20% for the polarizer in the vertical polarization in the cavity axis. This means that a part of the pulse remains in the cavity after the cavity dumping, leading then to a few-10s ns small tail in the pulse as shown in fig. 3(b). The output spectrum of the pulses after cavity dumping is plotted on Fig. 4 (measured with an Ocean Optics HR 4000 spectrometer directly on a reflection of the pulse). It is worth to note that the spectrum is 23-nm broad at FWHM (826 nm-849 nm) despite the prism shape of the Cr:LiSAF that could have induced a spectral selectivity. Indeed, due to the small dispersion index in Cr:LiSAF, the angle variation between a beam at 849 nm and a beam at 826 nm is only of 0.5 mrad after passing through the Cr:LiSAF prism. This value represents one quarter of the total divergence of the beam in this part of the cavity (full divergence of 2 mrad for a beam waist radius of 260 µm). 
This low deviation value explains why the Cr:LiSAF prism induces no significant spectral selectivity. The inset of Fig. 4 gives the output beam profile, which is very close to a TEM00 profile. We measured an M² of 1.10 in the horizontal plane and 1.13 in the vertical plane. The pulse-to-pulse stability is measured to be better than 10%. It is worth noting that no particular attention was paid in the laser design to improving this stability: the laser is composed of separate mechanical mounts on a classical breadboard without air-flow regulation. Frequency conversion down to the UV The frequency conversion to the UV is obtained by frequency tripling in a two-stage cascade configuration (Fig. 5). The first stage consists of second harmonic generation (SHG) in an 8-mm-long LBO crystal. The LBO is cut in type I for SHG at 850 nm (θ = 90°, φ = 27°). The output beam of the Cr:LiSAF laser is focused by an f = 100 mm lens to obtain a waist radius of 43 µm in the LBO. We obtained a beam with an energy of 108 µJ at 420 nm. The blue beam is measured to be circular (Fig. 6 inset), in agreement with the angular acceptance of 4.3 mrad for this phase-matching configuration, close to the divergence of the fundamental beam in the LBO crystal. The blue spectrum was measured and a bandwidth of 1.1 nm FWHM was found, which is far below the spectral width of the infrared pulses. Indeed, the blue spectrum is limited by the spectral acceptance. This also explains why the frequency conversion is only 10% despite an IR peak power density reaching 2.2 GW/cm² in the LBO crystal. Far from being a drawback, this limited spectral acceptance can be used to advantage to tune the blue pulses. By rotating the LBO around the φ-axis, we are able to tune the blue pulses from 408 nm to 435 nm (a tunability of 9 nm FWHM, plotted in Fig. 6). The second stage of frequency conversion is realized in a 10-mm-long LBO crystal cut for sum-frequency generation between 850 nm and 425 nm. The crystal is in type I with the following angles: θ = 90°, φ = 54.6°. This configuration requires rotating the blue polarization by 90° so that it is parallel to the infrared polarization. For this purpose, the blue and infrared beams are separated by a dichroic mirror (HT 400-450 nm, HR 800-900 nm) and then recombined by another dichroic mirror (HR 400-450 nm and HT 800-900 nm). A half waveplate is inserted in the blue beam path to rotate its polarization. The beam waist in the first LBO crystal is reimaged into the second one using a 2f-2f configuration with an f = 100 mm lens. After wavelength separation by a UV silica prism, we are able to characterize the UV beam. In the optimal configuration, we found an energy of 13 µJ at 280 nm, corresponding to the third harmonic generation (THG) of the peak of the infrared spectrum at 840 nm. This is a 1.2% efficiency, limited by the spectral acceptance. The UV average power is measured with a Gentec thermal power meter (noise in the 10 µW range); the measurement is performed with and without phase matching to ensure that the background noise and other wavelengths are subtracted. The pulse-to-pulse stability in the UV is measured to be 21%. We measured a spectral bandwidth of 0.5 nm (Fig. 8). The pulse duration is measured with a fast photodiode, and a value of 7 ns FWHM is found (Fig. 7(b)). By adjusting the rotation of the two LBO crystals, it is possible to tune the UV from 276 nm to 284 nm. Conclusion To conclude, an LED-pumped Cr:LiSAF laser operating in cavity-dumping is reported for the first time.
We obtain 1.1 mJ pulses at 10 Hz with a pulse duration of 8.5 ns. After a first SHG stage in an LBO crystal, pulses with an energy up to 108 µJ at 420 nm are demonstrated with a Gaussian beam profile. After a second THG stage in a second LBO crystal, the laser produces 7 ns pulses with an energy up to 13 µJ at 280 nm, a Gaussian beam profile, and a tunability over 8 nm. This new source can be seen as an alternative to classical sources that cover this UV wavelength range with the appropriate parameters (kW peak power, ns pulse duration, good beam quality, and tunability) but rely on relatively complex and expensive pump systems. For instance, OPOs pumped by frequency-quadrupled Nd:YAG lasers can cover the 300-2340 nm range [21]. Similarly, Ce:LiCAF laser oscillators pumped by the fourth harmonic of Nd:YAG can reach the 280-300 nm range [22]. In order to increase the efficiency of the frequency conversions, the spectrum of the fundamental pulse can be narrowed with a birefringent filter or an etalon. A broader tuning range in the UV can also be obtained by tuning the fundamental pulse. In this work, however, the simplicity of the source is required for the application, and tuning the fundamental would compromise this simplicity, as it requires realignment of the cavity and of the nonlinear conversion stages. Frequency tripling of Cr:LiSAF has already been proposed as a solution for this wavelength range [23][24][25]. In [24], with a pump energy of 60 J from flashlamps at 1 Hz, laser pulses of 20 ns at 840 nm with an energy of 120 mJ were generated, leading to 20 mJ pulses in the blue and 6 mJ in the UV. This is the first time that an LED-pumped laser source can be considered for a real application, as the performance achieved in the UV is suitable for mid-range LiDAR applications. Namely, the detection range being proportional to the square root of the pulse energy, 10 µJ UV pulses can be used to detect improvised explosive devices up to a range of 100 m with meter-scale axial resolution. In addition, this cavity-dumped oscillator can easily be transformed into a regenerative amplifier seeded by femtosecond pulses. The spectral width obtained in free running (23 nm FWHM) indicates that such an amplifier could support sub-50 fs pulses at 850 nm with a potential to reach the mJ range. Moreover, energy scaling can easily be achieved by increasing the concentrator dimensions and the Cr:LiSAF length. As Ce:YAG crystals are available in large sizes at low cost, and as 10-cm-long Cr:LiSAF crystals are common in flashlamp-pumped systems, LED-pumped Cr:LiSAF systems have the potential to deliver 10-100 mJ pulses. Consequently, this work opens the route to a new generation of high-energy femtosecond amplifiers with unique properties of simplicity, robustness, long lifetime, and stability.
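To make the range-scaling argument above concrete, the following minimal Python sketch evaluates the square-root dependence of the LiDAR detection range on pulse energy, using the 10 µJ / 100 m reference point quoted in the text. The sample energies are illustrative only, and the simple proportionality neglects atmospheric attenuation and detector details.

# Minimal sketch of the square-root scaling of LiDAR detection range with
# pulse energy. The reference point (10 µJ -> 100 m) is taken from the text;
# the other energies are purely illustrative.

def detection_range(pulse_energy_uj, ref_energy_uj=10.0, ref_range_m=100.0):
    """Return the estimated detection range in metres, assuming R proportional to sqrt(E)."""
    return ref_range_m * (pulse_energy_uj / ref_energy_uj) ** 0.5

if __name__ == "__main__":
    for energy_uj in (1.0, 10.0, 13.0, 100.0):
        print(f"{energy_uj:6.1f} µJ -> {detection_range(energy_uj):6.1f} m")

With this scaling, the 13 µJ UV pulses reported above would correspond to a detection range of roughly 114 m under the same assumptions.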
4,027.8
2019-07-31T00:00:00.000
[ "Physics" ]
Introduction to Reliability Tests of Unmanned Aircraft Used in the Armed Forces of the Republic of Poland This paper is a theoretical introduction to the reliability tests of unmanned aerial vehicles used in the Polish armed forces. The purpose of this article is to determine the type/model of unmanned aircraft in service with the Polish Armed Forces whose reliability test results can serve as the basis for generalization to the largest possible group of aircraft in subsequent research. In order to achieve this goal, the author first reviews the terms and definitions describing the subject of the study. The trends occurring in the description of the examined subject matter are identified. Then, the typologies and classifications of unmanned aerial vehicles are analyzed on the basis of Polish and international sources, as well as normative documents. The last part of the paper comprises a comparison of the tactical and technical data of unmanned aerial vehicles used by the Polish Armed Forces. Introduction The broad spectrum of uses of unmanned aerial vehicles (UAVs) on the battlefield and their relatively low cost (several dozen times lower than that of crewed machines) generate increasing interest in this equipment, not only in developed but also in developing countries. This is reflected both in the unmanned fleet already owned by Poland and in the Plan of Technical Modernization of the Polish Armed Forces [Plan Modernizacji Technicznej Sił Zbrojnych RP], where UAVs are the most frequently mentioned devices in three main operational priorities (OP): OP Image and satellite reconnaissance, OP Modernization of Artillery (as an accessory to the RAK system), and the task concerning the Warmate loitering munition (Ocena astanu, 2019) (Dziennik Zbrojny, n.a.). From the point of view of maintaining these capabilities, it is equally important to increase the reliability parameters of the devices already owned by the Polish Army. The constant need to carry out reliability tests is determined by the necessity to improve the UAVs' operation process, as well as by the need to enhance the reliability required to complete combat tasks. Bearing in mind the fact that reliability expresses the probability that an object will perform its function in a given time under certain conditions, one may even be tempted to state that possessing modern equipment, as UAVs undoubtedly are, and not carrying out an analysis and evaluation of possible improvements to its reliability parameters is unacceptable. Such neglect can cause the failure of a mission, and in extreme cases, it may endanger the life and health of the soldiers who operate a given device, or whose task depends on the success of the UAV's mission (e.g., reconnaissance). Considering the above, it can be concluded that there is a justified need to perform reliability tests on unmanned aerial vehicles, and it is in the interest of the armed forces to perform such tests on the equipment used by the Polish Army. However, due to the existence of different types of UAVs, detailed testing of all unmanned aerial vehicles could prove too costly and time-consuming, and hence economically unjustified. The solution to this problem may be to find an unmanned aircraft whose test results could then be extrapolated onto a larger group of aircraft (more types). Still, it is necessary to take into account the construction differences that occur between different types of UAVs. 
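Since the argument above rests on the standard definition of reliability as the probability that an object performs its function over a given time under stated conditions, a minimal numerical illustration may help. The sketch below assumes a constant failure rate (exponential reliability model); the failure rate and mission time are purely hypothetical and are not data for any UAV discussed in this paper.

import math

# Minimal illustration of the reliability definition used above:
# R(t) = P(no failure up to time t). Under a constant failure rate
# lam (exponential model), R(t) = exp(-lam * t).
# The failure rate and mission time below are hypothetical values.

failure_rate_per_hour = 0.01   # hypothetical: on average one failure per 100 flight hours
mission_time_hours = 3.0       # e.g. a three-hour reconnaissance sortie

reliability = math.exp(-failure_rate_per_hour * mission_time_hours)
print(f"Mission reliability R({mission_time_hours} h) = {reliability:.3f}")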
The analysis of the subject literature indicated shortcomings in the area of uniform studies related to this research area. Most researchers focus on one type of UAV (e.g., MALE) (Goetzendorf-Grabowski, Frydrychewicz, 2006), or their studies are very general and do not analyze any specific aircraft (Petritoli, Leccese, 2017) (Caswell, Dodd, 2014). Bearing in mind the lack of synthetic scientific studies on the reliability of UAVs and their ever-increasing impact on state security, the author decided to undertake research aimed at determining the type/model of the unmanned aircraft used by the Polish Armed Forces. The results of the conducted reliability tests will be methodologically generalized and referred to the largest possible group of UAVs. To achieve this goal, it was decided to begin the research by analyzing the term "unmanned aerial vehicle" in a way that captures the criterial semantic features (Anusiewicz, 1994) of the examined subject. This was dictated by the results of the preliminary analysis, which indicated the interchangeable use of several concepts related to the subject of the study, i.e., the unmanned aerial vehicle. The results of the tests carried out at this stage also provide additional value in the form of an attempt to systematize the terminology associated with unmanned aerial vehicles. Then, the typology of unmanned aerial vehicles was analyzed in order to finally select one model. The most important research methods employed for this study include: analysis, synthesis, comparison, abstraction, and inductive and deductive inference. It should be emphasized that this research is a basis for further work related to the enhancement of the reliability of unmanned aerial vehicles carried out by the author in her doctoral dissertation. The Semantic Problem An analysis of the subject literature showed the occurrence of various terms related to unmanned aerial vehicles. These terms, often used interchangeably, can cause cognitive problems related to semantics. Therefore, further investigations were carried out to refine the scope of the research, including the reliability tests. In the analysis of the literature, seven basic terms related to the subject of the study were distinguished: 1. Unmanned Air System (UAS); 2. Unmanned Aircraft; 3. Remotely Piloted Aircraft (South Africa); 4. Remotely Piloted Aircraft System (RPAS); 5. Radio-Controlled Aircraft (RC Aircraft); 6. Unmanned Aerial Vehicle; 7. Drone. The analysis indicated that the term "unmanned aerial system" falls within the scope of the broader term "Unmanned Systems", which defines the group of unmanned platforms. The analysis also took into account the environment in which these systems operate. Currently, one can distinguish three basic types of Unmanned Systems: air, sea, and land (Cwojdziński, 2014). Due to the limited scope of the paper, the focus is put solely on unmanned aerial systems. In NATO, the term is defined as "a system whose components include the unmanned aircraft, the supporting network and all equipment and personnel necessary to control the unmanned aircraft" (AAP-6, 2011). 
Further analysis showed that unmanned aerial systems include (Adamski, Rajchel, 2013): • a flight ground control station (GCS - Ground Control Station) with an antenna system and a data transmission system; • data transmission and exchange terminals and software; • communication systems (ground/air; air/ground); • a specified number of unmanned aircraft (including spares); • UAV take-off and landing (recovery) devices; • means of communication (voice and data exchange) with air traffic management cells; • devices (equipment) necessary for the operation, storage and transport of UAVs; • all necessary documentation (technical, operational) regarding the above-mentioned elements; • additional devices necessary to carry out tasks (still camera, video camera, means of destruction). When comparing the above with the term "unmanned aerial vehicle", it should be noted that this term first appeared in military semantics in the 1990s (Gregorski, 2017). One of the first definitions of this term describes it as a reusable aerial apparatus (vehicle, ship, object) of any aerodynamic configuration, capable of carrying armament or other equipment, with no pilot-operator on board and capable of flying along a programmed route (Popularna Encyklopedia, 2002). This definition does not correspond to the current reality; e.g., the Polish Armed Forces are in possession of disposable UAVs that are intended for "kamikaze" attacks. Another definition was created in 2005 and describes a UAV as a powered and unmanned apparatus. In order to stay in the air, it can use the lift generated according to the laws of aerodynamics on fixed (wings) or movable (rotor) support surfaces, or aerostatic buoyancy (aerostat). It can be controlled by autonomous systems or remotely by an operator (from the ground, air, or a ship). It is designed to return and be reused, although it can also be a single-use aircraft (Karpowicz, Kozłowski, 2003). The above definition seems to reflect the essence of the term in question; however, it is very complex. Therefore, in order to find an appropriate definition elucidating the subject of the study that would also take into account the environment in which the research is carried out, the author has adopted the NATO definition, which states that a UAV is a power-driven aircraft, disposable or reusable, that uses aerodynamic forces to provide lift, flies independently or is remotely piloted, and is capable of carrying lethal or incapacitating loads (AAP-6, 2011). In 2011, ICAO introduced (ICAO, 2011) the concept of "remotely controlled aircraft," which is part of the remotely controlled air system. Pursuant to air traffic regulations, the term "remotely controlled aircraft" covers an "unmanned aircraft which is piloted from a remote piloting station" (Załącznik do obwieszczenia, 2011). This means that a remote-controlled aircraft is a much narrower concept than an unmanned aerial vehicle, as it does not include autonomous systems. However, similarly to the previously considered unmanned air system, the remotely controlled air system includes all other devices (elements) necessary for the performance of the flight; in this case, it will be a remote pilot station (ICAO RPAS, n.a.) (ICAO, n.a.). Similar to a remote-controlled aircraft, a radio-controlled aircraft is defined as a UAV subtype. The main difference resulting from semantics is the way the aircraft is controlled. 
In the case of the previous term, the word "remotely" specifies that the aircraft is controlled from a distance without restricting the control method, while in the term radio-controlled aircraft the word "radio" limits the control method to radio control. In addition, other characteristics distinguishing a radio-controlled vessel can be found in the literature, such as limiting the number of operators to one and the time of operation in the air to two hours (Ministerstwo Infrastruktury, 2019). An analysis of the literature on the subject of research shows that the concept of "unmanned aerial vehicle" is synonymous with "unmanned aircraft". This is because the term "unmanned aerial vehicle" refers to the medium in which the vessel operates, i.e., the air, whereas "unmanned aircraft" refers to the activity of the vessel in the air, i.e., flight. According to the Aviation Law and the terminology accepted by the scientific community, every flying apparatus (floating in the air) is an aircraft (Prawo lotnicze, 2002). The use of two terms for the same device results from a previously misunderstood characteristic of UAVs (BSP in the Polish literature), namely the use of the term "Unmanned Aerial Vehicle", which, given that a pilot always operates the aircraft, was considered a mistake. The last term discussed is "drone"; in the case of unmanned aerial vehicles, "drone" and BSP describe the same devices. The existence of yet another term for the same device results from the interchangeable use of these terms by the media (in particular Western media) (BOTLINK, n.a.). It should be emphasized that the word "drone" is becoming more popular due to its frequent use by the media, as shown in the figure below; the charts illustrate the process of all other terms being gradually replaced by the term "drone". However, despite the growing popularity of the term "drone", owing to the proliferation of unmanned systems themselves and the increasing use of the term by the media, it should be noted that it is only slowly replacing the term "unmanned aerial vehicle" (Dougherty, 2016). Notwithstanding the above, the analysis of the literature indicated that the most common term describing the subject of research in the scientific literature is "unmanned aerial vehicle." The Problem of Typology and Classification Technical parameters and reliability parameters depend directly on the UAV type. Since the UAV type determines the construction, it also determines the structural elements used and the tasks the UAV will carry out, and, in consequence, the external factors to which it will be exposed while carrying out tasks. Given the above, it was considered important to analyze the UAV typology. The typology of all aircraft, including UAVs, may depend on many factors; the most common in the subject literature are typologies associated with aircraft attributes, i.e., their characteristics. The characteristics of aircraft may be related to their flight or take-off and landing characteristics (e.g., vertical take-off and landing). Other frequently used parameters describing aircraft include: operating radius, flight time, equipment, load capacity, structure, aerodynamic system, etc. In addition, the division of aircraft may depend on their function and the scope and purpose of use. The generally accepted division concerns their use on the civil and military markets. In relation to UAVs, other divisions can be found in the literature, including, e.g., the responsibility and risk associated with their use, or the business model, where UAVs are divided into product and service. 
Keeping in mind the purpose of this paper, some of the most common UAV typologies are described below. The first typology refers to the functions that can be implemented by using UAVs. The general division of the implemented functions distinguishes civil functions and military functions. In the civilian area, UAV functions are classified as follows (Ministerstwo Infrastruktury, 2019): 1. Monitoring-related functions - terrain or air imaging to obtain data for further analysis. 2. Functions related to transport - activities related to the movement of people and material goods. 3. Functions related to communication (telecommunications) - ensuring the safe use of airspace by many types of UAVs, especially autonomous unmanned aerial vehicles. In the military area, the general division distinguishes reconnaissance, combat, and special UAVs. It should be noted, however, that the division both on the civil market and in the military area is directly related to the currently performed tasks (functions) of these devices. Therefore, it is not difficult to imagine that this typology will evolve as the concept of using UAVs in both areas develops further. This typology can also take various shapes, e.g., by including the equipment carried by UAVs. One such example is the following breakdown of military reconnaissance UAVs: 1. IMINT - optical reconnaissance - equipment: infrared sensors, lasers, and radar sensors; 2. SIGINT - interception and recognition of electromagnetic waves; 3. MASINT - detection and tracking of ballistic missiles, tracking and detection of means of air attack with the possibility of determining their impact parameters, and detection of traces of submarines and aircraft using boosters; 4. OTHER - warning against radiation, electromagnetic attack, and other threats, especially in systems intended for SEAD tasks. However, in the subject literature, the most common classifications refer to the UAVs themselves. An example of such a classification distinguishes these assets by their range of activity: 1. Close range - up to 50 km; 2. Short range (performing reconnaissance and tracking operations) - up to 150 km; 3. Medium range (carrying out tasks complementary to manned aircraft); 4. Long range (high altitude) - acquisition of information about the target; 5. UAVs of vertical take-off and landing used in the Navy (Jane's Airport Review, 2007). Another characteristic feature of UAVs, and thus the basis of the most common division of this type of aircraft, is their weight. In the subject literature, the most common classification divides UAVs into five categories; this typology is presented in the table below. The mass division was adopted not only in the scientific and military environment, but was also sanctioned by Polish legislation, as the binding regulation covers the classification of UAVs in Poland and divides them into two basic categories on the basis of their mass (Rozporządzenie Ministra Transportu, 2013). It should be noted, however, that due to technological development, including the miniaturization of electronic systems, a typology based on the mass of the device is progressively less useful. Therefore, more and more often one can find "hybrids" of various UAV attributes. Partial data resulting from the literature analysis are presented in the table below and include a classification taking into account four attributes. However, it should be noted that the above typologies have serious limitations. 
They make one attribute dependent on another, which, given the continuous miniaturization of aviation technology resulting from technological progress, prevents the proposed classification from reflecting real capabilities. Given the above, it can be concluded that a UAV typology that takes into account such attributes as unladen mass, or joins together two features (e.g., mass and operating radius), is not precise. Moreover, these typologies do not have significant cognitive value in determining UAV reliability. Therefore, guided by research inquisitiveness, a new division was made, which distinguishes UAVs according to their structural elements, and in particular their aerodynamic system, namely: fixed-wing aircraft, rotorcraft, and aerostats (balloons and airships). It should be further noted that the aerodynamic system that includes the largest group of UAVs is the system with fixed bearing surfaces, i.e., fixed-wing vehicles. Therefore, it seems justified to carry out the reliability analysis taking into account this broadest UAV group. It should be emphasized that in the subject literature there are also other examples of typologies focusing on UAV construction. One of them takes into account the propulsion system: piston, jet, turbojet, or electric. Another one divides UAVs using the take-off and landing criterion: folding and retractable landing gear, fixed landing gear, UAVs fired from a launcher, UAVs carried by carriers, vertical take-off and landing, and multi-variant take-off systems that can also be equipped with classic landing systems, i.e., using a hook and airport braking ropes, a net, or a parachute (which is often treated as an emergency system). However, the analysis of the above classifications has shown that it is impossible to unequivocally indicate the most common types of UAV structures within them. Comparison of Tactical and Technical Data of UAVs Used by the Polish Armed Forces Tactical and technical data are a source of information on the structural elements used and constitute necessary knowledge about the expected operational values of particular UAVs, including the values affecting their reliability. On the basis of the analysis and synthesis of the data on particular technical parameters, it is also possible to indicate differences and similarities in UAV design, which will ultimately allow the adopted goal to be achieved. FlyEye The first example of a UAV introduced in the Polish Armed Forces was the FlyEye, produced by WB Electronics. Currently, the armed forces are equipped with 15 sets of this UAV model. This unmanned aerial vehicle is characterized by its composite structure and the possibility of taking off at an almost vertical, so-called steep angle. There is also the possibility of carrying out a two-stage steep-angle landing, which facilitates the completion of tasks in adverse conditions. The UAV possesses fully automated flight control systems and the ability to coordinate and correct them (FLYEYE, n.d.). The mounted reconnaissance payload includes specialized optical as well as thermal imaging cameras. The UAV can perform a flight with a radius of up to 30 km and stay airborne for up to three hours with constant data transmission in real time. After completing the task, it performs a two-phase landing: in the first phase, the container with the sensor head and electronics is dropped on a parachute, and in the second, the airframe itself lands. 
The FlyEye, thanks to its potential and modularity, can be transported by just one soldier, while a second soldier carries the other pieces of equipment, such as the ground flight control and data communication station (Brzezina, 2013). Orbiter Another UAV used by the Polish Armed Forces is the Orbiter, manufactured by the Israeli company Aeronautics Defense System Ltd. This aircraft was built in a flying-wing arrangement with a single electric motor mounted in the rear part of the fuselage, while the reconnaissance elements were installed in the forepart of the vehicle. The set includes: a portable launcher, one or more reconnaissance cameras, and a communication console. The mounted reconnaissance elements are designed to operate in daytime and nighttime conditions. It also has a GPS receiver and inertial navigation systems (Brzezina, 2013). In order to perform a combat task, the UAV is placed on a small catapult or launched by hand from a standing position, after prior preparation. The main task of the service crew and the operator is to prepare the ground and flight control station for operations. Using the console components, one can plan the flight route and observe images in real time. If necessary, control can be performed manually using the built-in joystick located next to the flight and mission console. The latest modernization of this system is the Orbiter-2B version, which is characterized by a twice larger range and flight duration, as well as newer elements of the head with a built-in camera for HD reconnaissance (Modernization Plan, n.a.) (Aeronautics, n.a.). Taking into account the development of unmanned aerial vehicles and the adopted methodological limitations, only the construction of the Orbiter-2B version was characterized. The analysis of the tactical and technical data of the two discussed UAVs has shown that there are many similarities regarding their capabilities. The FlyEye has a longer operating range, can stay longer in the air, and is able to carry a heavier load than the Orbiter. On the other hand, the Orbiter has a higher maximum speed and a higher operating altitude. Nevertheless, the differences are small and do not significantly affect their combat abilities. Conclusions Concluding the presented results of the theoretical research, it can be stated that despite the existence of several terms related to the subject of the study, the most suitable is "unmanned aerial vehicle". It should be emphasized, however, that this term is much narrower than "unmanned aerial system" and wider than "remotely controlled aircraft". Moreover, the research has shown that the variety of functions and equipment, as well as the dynamic development of these objects, often makes the adopted typologies obsolete or makes it impossible to assign a particular UAV to one class (one type); in consequence, it is difficult to formulate a detailed, unambiguous description of UAV types. The research has also shown that, due to the large number of different types of UAVs, there is a justified need to limit reliability tests to a specific type of structure. Additionally, it has been demonstrated that despite the existence of numerous UAV classifications, it is the reliability tests that determine the usefulness of the vehicles. The typologies focusing on tactical and technical data, although used in legislative documents, have little cognitive value from the point of view of the reliability of the objects. 
Therefore, in this particular case, the typologies based on structural elements seem to be the most suitable for the research assumptions, and the typology based on the aerodynamic system showed unambiguously that the largest UAV group is the fixed-wing aircraft. Thus, when limiting the research group (which is justified from an economic point of view), it is expedient to carry out further research on UAVs of the fixed-wing type.
5,303.2
2019-12-31T00:00:00.000
[ "Computer Science" ]
Visible-Light-Induced Decarboxylation of Dioxazolones to Phosphinimidic Amides and Ureas A visible-light-induced, external-catalyst-free decarboxylation of dioxazolones was realized for the formation of N=P and N–C bonds to access phosphinimidic amides and ureas. Various phosphinimidic amides and ureas (47 examples) were synthesized in high yields (up to 98%) by this practical strategy in the presence of the system's ppm-level Fe. Scheme 1. The construction of N=P and N-C bonds from dioxazolones. Optimization of the Reaction Conditions We commenced our study with the model reaction between 3-phenyl-1,4,2-dioxazol-5-one (1a) and triphenylphosphine (2a) under visible light and a N2 atmosphere. The results are shown in Table 1. Initially, the reaction was carried out by employing DCE as the solvent under irradiation of a 10 W 430 nm blue LED at room temperature, and the desired product N-(triphenyl-λ5-phosphinylidene)benzamide (3a) could be detected in 11% yield (entry 1). Afterwards, the solvent effect on the yield was investigated (entries 2-8). Different solvents, such as 1,4-dioxane, CH3OH, acetone, CH3CN, DMF, THF, and CH2Cl2, were surveyed, and the reaction exhibited excellent performance in CH2Cl2, providing the target product in 81% yield (entry 8). Further examination of the LED wavelengths and substrate ratios showed no further improvement (entries 9-14). Control reactions confirmed that nearly no amidation product 3a was detected at room temperature in the absence of visible light (entry 15). Moreover, when the reaction was carried out in air, only a trace amount of the product 3a was detected (entry 16). Therefore, the optimized reaction conditions were as follows: 1a (0.1 mmol), 2a (0.1 mmol), and CH2Cl2 (1 mL) under a N2 atmosphere, under irradiation with a 430 nm blue LED (10 W) for 24 h at room temperature (entry 8). 
With the optimized conditions in hand, the scope of the organophosphorus compounds and dioxazolones 1 was investigated (Scheme 2). To our delight, various 3-phenyl dioxazolones bearing different electron-donating groups (-CH3, -tBu, and -OCH3) or electron-withdrawing groups (-CF3, -F, -Cl, and -CN) on the phenyl ring at different positions could react smoothly with 2a to produce the desired products (3a-3n) in moderate to excellent yields (42-98%). Among these cases, a slight steric hindrance effect was observed, and para-substituted 3-phenyl dioxazolones (3a-3h, 63-98%) showed higher reactivities than ortho-substituted 3-phenyl dioxazolones (3m-3n, 42-47%). Moreover, the desired products 3o and 3p, which contain thiophene and furan skeletons, could also be successfully obtained in 43% and 50% yields, respectively. Additionally, electron-poor and electron-rich triphenylphosphine derivatives were all applicable to this transformation, giving the desired products (3q-3v) in 51-91% yields. In addition, the phosphorus ligand 2,2'-bis(diphenylphosphino)-1,1'-binaphthyl (BINAP) was also a suitable substrate to react with 1a, providing the corresponding product 3w in 51% yield. Scheme 2. Substrate scope for the synthesis of phosphinimidic amides. Then, we expanded the photocatalytic decarboxylation reaction of dioxazolones to the synthesis of unsymmetrical urea compounds (Scheme 3). To our delight, a wide range of 3-phenyl dioxazolones all reacted efficiently with diisopropylamine 4a to furnish the corresponding aryl ureas (5a-5m) in moderate to excellent yields (37-98%). In these cases, 3-phenyl dioxazolones bearing electron-donating groups (-CH3, -tBu, and -OCH3) showed better reaction efficiency than those bearing electron-withdrawing groups (-CF3, -F, -Cl). Moreover, a broad scope of commercially available secondary amines all reacted smoothly in this transformation, leading to the formation of the desired ureas (5n-5v) in good to excellent yields (80-96%). In addition, a primary amine, aniline, was also a suitable substrate for reaction with 1a, providing the corresponding product 5w in 52% yield. However, cyclohexylamine (4l) and benzylamine (4m) were not suitable in this transformation and did not react with 1a to give the corresponding products 5x and 5y. Compared with the previous report [32], our method effectively avoids the harsh conditions of high temperature, showing good sustainability. 
To our satisfaction, this method is also suitable for the reaction between 3-(p-tolyl)-1,4,2-dioxazol-5-one 1b and 1,3-diphenylpropane-1,3-dione 6 to give the corresponding amide product in 58% yield (Scheme 4a), a reaction previously reported in the presence of an additional FeCl3 catalyst [29]. To verify the practicability of this synthetic protocol, the gram-scale synthesis of 3a was carried out (for details, see the Supplementary Materials). When the reaction was performed at a 5 mmol scale, the desired product 3a was isolated in 80% yield, indicating that this approach has good practicability and application prospects (Scheme 4b). Furthermore, we also evaluated the sensitivity of the reaction of 1a and 2a. Compared with the standard conditions, the changes in concentration, temperature, oxygen level, water level, light intensity, and scale were examined. The yields were measured by 31P NMR and the yield deviations were calculated (for details, see the Supplementary Materials). Among these parameters, light intensity and oxygen level are important for the reaction. Moreover, this transformation is moderately sensitive to water. Other parameters, such as concentration and temperature, can be regarded as sources of random error, which have a negligible impact on reaction efficiency (Figure S4, Supplementary Materials). Next, we calculated the E-factor [54,55] and EcoScale scores [56,57] of the chemical process to evaluate the safety, economic, and ecological properties of the method. The results are summarized in Tables S2-S5, Supplementary Materials. As can be seen, the E-factor is extremely low, at 0.38 and 0.82, respectively, and the EcoScale penalty is also low, at 21.5 and 15.5. Both parameters reflect the excellent green chemistry metrics of the protocol. To understand the mechanism of this transformation, a set of control experiments was performed (Scheme 5). The phosphorylation of 4-methylbenzamide 8 with triphenylphosphine 2a was performed to determine whether the N=P bond was formed through an amide intermediate. However, 4-methyl-N-(triphenyl-λ5-phosphaneylidene)benzamide 3b was not detected (Scheme 5a). Moreover, intermolecular competition experiments of 1a and 8 were conducted, and only product 3a was obtained, in 48% yield (Scheme 5b). These results demonstrated that the phosphorylation of dioxazolones does not proceed through amide intermediates. Furthermore, various radical trapping experiments were conducted (Scheme 5c). When (2,2,6,6-tetramethylpiperidin-1-yl)oxidanyl (TEMPO) was added to the model reaction under standard conditions, the reaction was significantly inhibited. The TEMPO-trapped acyl nitrene adduct was detected by high-resolution mass spectrometry (HRMS), with a peak at m/z 277.1922. 
Subsequently, when another radical scavenger, 2,6-di-tert-butyl-4-methylphenol (BHT), was added under the standard conditions, the reaction was also severely suppressed, indicating a radical process in the phosphorylation of dioxazolones with triphenylphosphine. Then, the radical trapping experiments of 1a and 4a were conducted (Scheme 5c). The decreased yields of product 5a indicated that this transformation also involves a radical process. In 2021, Yu and Bao et al. disclosed that FeCl3 (15 mol%) was required for the imidization of phosphines with dioxazolones under visible light irradiation [29]. In our case, by contrast, the transformations worked very well without any other additives. Considering the contamination issues known for coupling reactions [58], we reasoned that some iron contamination might have been introduced during the manufacture of the starting materials. Therefore, the model reaction mixture was analyzed with inductively coupled plasma mass spectrometry (ICP-MS). 
Consequently, it was found that the Fe content of the reaction mixtures for the preparation of phosphinimidic amide (3a) and urea (5a) is approximately 27 ppm and 3 ppm, respectively (for details, see the Supplementary Materials). ICP-MS experiments were also performed on the starting materials of the model reactions (dioxazolone, PPh3, and amine), and the results showed that the iron contents of the dioxazolone, PPh3, and amine were 123 ppm, 420 ppm, and 0.9 ppm, respectively (for details, see the Supplementary Materials). It is reasoned that iron contamination of commercial chemicals is unavoidable during the production and transportation processes. When an additional iron catalyst, FeCl3 (5 mol%), was added to the model reaction under standard conditions, the reaction time was shortened and the yield was increased. These results confirmed that this reaction can be facilitated by iron catalysis (for details, see the Supplementary Materials). They also suggest that, although this is not a truly transition-metal-free system, it is still a synthetically useful procedure for the synthesis of phosphinimidic amides and ureas, especially from an industrial chemistry standpoint. Based on these control experiments and previous literature reports, a plausible reaction pathway is proposed in Scheme 6. Initially, the N atom of dioxazolones 1 coordinates with the Fe center to form complex B, which is excited by visible light to generate the highly active iron-aminyl radical C with the release of CO2. Subsequently, radical C reacts with triphenylphosphine 2a to form complex D, which then undergoes a reduction and elimination process to give product 3. On the other hand, intermediate C undergoes Curtius rearrangement to form intermediate E, which further reacts with secondary amines 4 to give product 5. General Information All nuclear magnetic resonance (NMR) spectra were recorded on a Bruker Avance 400 MHz spectrometer in CDCl3 at room temperature (20 ± 3 °C), using tetramethylsilane as the internal standard. High-resolution mass spectra (HRMS) were recorded on a 3000-mass spectrometer, using a Waters Q-Tof MS/MS system with the ESI technique. Photochemical reactions were carried out under visible light irradiation by a blue LED at 25 °C. The RLH-18 8-position Photo Reaction System manufactured by Beijing Roger Tech Ltd. 
was used in this system (Figure S1, Supplementary Materials). Eight 10 W blue LEDs were equipped in this photochemical reactor. The wavelength for the blue LED is 430 nm, with a peak width at half-height of 18.4 nm (Figure S2, Supplementary Materials). The distance from the light source to the irradiation vessel was approximately 15 mm. General Experimental Procedures for the Synthesis of (3a-3w) In a 25 mL reaction tube, dioxazolones 1 (0.2 mmol, 1.0 equiv) and organic phosphine substrate 2 (0.2 mmol, 1.0 equiv) in 1 mL CH2Cl2 were allowed to stir under irradiation of a 10 W blue LED under a N2 atmosphere at room temperature for 24 h. After the reaction, the solvent was evaporated under vacuum, and the residue was purified by column chromatography on silica gel to afford the desired products 3a-3w.
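As a rough illustration of the E-factor metric reported above, the short sketch below shows one common form of the calculation, E = (total mass of inputs − mass of isolated product) / mass of isolated product. The masses are placeholders for a generic reaction, not the actual data behind the reported values of 0.38 and 0.82, which are detailed in the Supplementary Materials.

# Illustrative E-factor calculation. The masses below are placeholders,
# not the actual inputs behind the reported E-factor values.

def e_factor(input_masses_g, product_mass_g):
    """E-factor: mass of waste generated per unit mass of isolated product."""
    waste_g = sum(input_masses_g) - product_mass_g
    return waste_g / product_mass_g

if __name__ == "__main__":
    inputs_g = [0.55, 0.30, 0.15]   # hypothetical substrate, reagent and solvent losses
    product_g = 0.65                # hypothetical isolated product mass
    print(f"E-factor = {e_factor(inputs_g, product_g):.2f}")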
4,090.8
2022-06-01T00:00:00.000
[ "Chemistry", "Biology" ]
Mosaic Evolution of Brainstem Motor Nuclei in Catarrhine Primates Facial motor nucleus volume coevolves with both social group size and primary visual cortex volume in catarrhine primates as part of a specialized neuroethological system for communication using facial expressions. Here, we examine whether facial nucleus volume also coevolves with functionally unrelated brainstem motor nuclei (trigeminal motor and hypoglossal) due to developmental constraints. Using phylogenetically informed multiple regression analyses of previously published brain component data, we demonstrate that facial nucleus volume is not correlated with the volume of other motor nuclei after controlling for medulla volume. Our results show that brainstem motor nuclei can evolve independently of other developmentally linked structures in association with specific behavioral ecological conditions. This finding provides additional support for the mosaic view of brain evolution. Introduction Two competing models of brain evolution have dominated the neuroscience literature over the past 15 years. The first posits that the interspecific scaling of vertebrate brain components is explained mostly by a conserved pattern of neurogenesis, such that structures that develop later tend to be relatively large [1,2]. This is supported by the fact that later developing structures exhibit larger allometric exponents when scaled against overall brain size [1]. Supporters of the developmental correlation model argue that brain structure evolves due primarily to selection on overall brain size, as opposed to the specialization of particular areas for specific functions [2]. Thus, individual brain structures vary in size according to general scaling principles that constrain adaptive evolution, thereby limiting the impact of behavioral ecological conditions on brain structure. The alternative model posits that natural selection can act to expand or contract the size of individual brain components, independent of overall brain size, without necessarily altering the size of functionally unrelated regions [3,4]. Supporters of this mosaic evolution model argue that the coordinated evolution of individual brain regions is due to functional and/or structural connections [5,6]. According to this model, developmental constraints can be overridden by selection to enlarge separate neural systems in response to specific behavioral ecological conditions. This idea is supported by comparative analyses of neural specialization in species as diverse as primates [7], birds [8], and fish [9]. The mosaic evolution model also posits a role for constraints, but supporters of this model tend to emphasize energetic trade-offs influencing overall brain size [10] rather than developmental correlations per se [11]. In a previous paper, we examined the coordinated evolution of brain regions involved in producing and processing facial expressions in anthropoid primates [12]. The results of our study revealed that social group size is positively correlated with the relative size of the facial motor nucleus, which sends motor neurons from the brainstem to the muscles of facial expression [13]. This pattern, which we observed in catarrhines but not platyrrhines, is consistent with the idea that facial communication is an important form of conflict management and bonding within catarrhine social groups [14][15][16]. 
In addition, we found that facial nucleus volume is positively correlated with primary visual cortex volume, after controlling for the size of the rest of the brain [12]. These results bolster the mosaic view of brain evolution. However, they do not preclude the possibility of developmental correlations within the brainstem, that is, correlated size changes in functionally unrelated motor nuclei. The purpose of the present study is to test the hypothesis that the size of the facial motor nucleus in catarrhines evolves in coordination with other brainstem motor nuclei due to developmental correlation. Developmental covariation among brainstem motor nuclei is to be expected since these nuclei show similar patterns of growth-factor receptor expression and coordinated modulation of neuronal proliferation and survival [17]. We will examine two comparative predictions of the developmental correlation model: (i) facial nucleus size is positively correlated with trigeminal motor nucleus size and hypoglossal nucleus size, after controlling for the size of the medulla and (ii) the relative sizes of the trigeminal motor nucleus and hypoglossal nucleus are positively correlated with social group size. The former prediction addresses the essence of the developmental correlation model: coordinated size changes due to a shared developmental basis. The latter prediction derives from the fact that facial nucleus size is correlated with group size [12]. To assess the specificity of this group-size effect, we examine the possibility that the other two brainstem orofacial motor nuclei are also correlated with group size. The results of our study contribute to debates regarding the relative importance of developmental constraints versus adaptive specializations in mammalian brain evolution. Materials and Methods Brain component volumes for 14 group-living, nonhuman catarrhine species were taken from previously published sources [13,18,19]. Group size data were taken from an unpublished dataset available on C. Nunn's website [20] (http://www.people.fas.harvard.edu/∼nunn/index.html). We examined trait correlations using multiple regression analyses. Two sets of analyses were carried out: (i) we examined the volume of the trigeminal motor nucleus and hypoglossal nucleus in relation to facial nucleus volume after controlling for medulla size [13] and (ii) we examined the relative volume of the trigeminal motor nucleus and hypoglossal nucleus in relation to group size. Autocorrelation, which can occur when the independent variable represents a large part of the dependent variable, is not a serious issue for our analyses because each nucleus comprises less than 0.5% of the volume of the total medulla [21]. All data were log-transformed (natural) prior to analysis. Regression coefficients and standard errors were generated using a phylogenetic generalized least-squares (PGLS) approach [22]. We used COMPARE 4.6b [23] to perform PGLS multiple regressions based on a single consensus tree with chronometric branch lengths [24], which we downloaded from the 10kTrees website (version 2) (http://10ktrees.fas.harvard.edu/). To take into account phylogenetic uncertainty, we also ran each analysis on a block of trees (N = 1000) in which the position of any given node varies as a function of its Bayesian posterior probability [24]. The null hypothesis (slope = 0) was assessed using 95% confidence intervals for each regression coefficient [25]. 
In the case of the tree block analyses, we incorporated both sampling variance and variance due to phylogenetic uncertainty into the calculation of confidence intervals [26]. Results and Discussion Figure 1 demonstrates the strong degree of covariation between brainstem orofacial motor nuclei in catarrhines prior to size correction. However, the multiple regression results in Table 1 indicate that this pattern of covariation disappears after controlling for medulla size; that is, neither trigeminal motor nucleus volume nor hypoglossal nucleus volume is a significant predictor of facial nucleus volume independent of medulla volume. Moreover, social group size is not positively correlated with either trigeminal motor nucleus volume or hypoglossal nucleus volume after size correction (Table 1). The results of the tree block analyses are identical to the consensus tree results because the degree of phylogenetic uncertainty in our sample is negligible. Thus, the hypothesis that catarrhine brainstem motor nuclei evolve in coordination with each other due to a shared developmental basis is not supported by our results. Instead, it appears that relative facial motor nucleus size evolves independently of the rest of the medulla and in association with social group size. Taken together, the results of our previous work [12] and the present study provide additional support for the mosaic model of brain evolution [3]. Proponents of this model assert that natural selection can target functionally interconnected neural systems, resulting in structural changes that are relatively unconstrained by developmental processes [11]. We found that the catarrhine facial motor nucleus evolves independently of other brainstem orofacial motor nuclei in response to a specific behavioral ecological condition, group size, and in coordination with a functionally linked region, the primary visual cortex [12]. The latter result is particularly striking because it involves coevolution between two brain components that are not structurally interconnected by direct axonal pathways. Moreover, because the medulla and neocortex undergo neurogenesis at different times [1], developmental correlation is an unlikely explanation for this pattern of correlated evolution. It appears that general trends in brain evolution observed at higher taxonomic levels can mask adaptive diversity at lower taxonomic levels. For example, the negative relationship between relative neocortex size and relative limbic structure size observed at the ordinal level in mammals does not apply within orders [27]. Similarly, it has been suggested that the mammalian central visual system exhibits a high degree of evolutionary conservatism [28]. However, numerous studies of the primary visual cortex in primates have demonstrated that species can deviate from allometric expectations [7,12,29-32]. Thus, it seems that much of the debate concerning the relative importance of adaptive specialization versus developmental constraint is driven by differences in taxonomic sampling. Conclusions Previous research has shown that correlated evolution may occur within structurally interconnected neural systems [5,6]. Our findings are unique in demonstrating that mosaic brain evolution can also involve coordinated changes in the volume of brain components that are not structurally linked by direct axonal pathways, but that participate in a common adaptive complex [12]. 
These results also provide further support for the idea that neural specializations in mammals are not restricted to executive brain functions. Brainstem structures can also undergo adaptive specialization in response to the motor and/or sensory demands of specific behavioral ecological conditions [33][34][35][36][37].
2,135
2011-05-29T00:00:00.000
[ "Biology", "Psychology" ]
Explosive wave propagation in the presence of antiseismic protective curtain

The objective of this work is to study theoretically the stress wave formation in the borehole-to-borehole space and on the boundary with the contour set of boreholes in which a directed fracture system is created. The features of the interaction of a weak stress wave, generated by large-scale blasting in the short-delay blast (SDB) mode, with the massif and with the protective contour set of boreholes are considered. The effect of the contour set parameters on the wave processes is noted. Constructive solutions are proposed in order to reduce the dynamic loading on the peripheral rock massif. The proposal consists in applying a combined construction of garland charges to form a seismo-protective structure consisting of a contour fracture and a one-sided disrupted zone within the boundaries of the blasted block.

Introduction

The development of a seismically secure switching circuit for the detonating system of large-scale blasting involves the primary blast of a contour set of boreholes [1], which reduces the loading imposed by the last-line blast on the peripheral rock massif; this approach stems from a simplified representation of the interaction between charge groups. In this case, three consecutive mechanisms of interaction are considered: at the level of interference of the stress waves, at the level of development of the fracture system, and at the level of contact between the rock blocks bounded by adjacent groups. The first level provides the preliminary disturbance of microstructural bonds in the massif being destroyed, the development of microdefects, and the creation of a stress state in the massif. The following two levels essentially solve the problem of separating the moving massif into standard-sized parts. However, under real conditions of large-scale blasting, the processes of charge interaction and the resulting dynamic phenomena [2] become more complicated, acquiring additional features that should be studied.

Prerequisites for setting the research tasks

When a large-scale blast is implemented in the SDB mode, a relatively long, time-extended oscillation field, formed in three stages, is actually created in the rock massif. These stages involve the blasting of borehole charges in a certain sequence. The first stage is realized during hundreds of microseconds in the process of sequential dynamic loading of the massif during the detonation decomposition of a separate elongated charge in the group and the radiation of the shock wave at its contact with the rock massif. Taking into account that the propagation velocity of the detonation wave in the charge is 4-5 thousand m/s for modern explosives, at a charge length of 10 m its detonation decomposition will take about 500 microseconds. With the parameters of the borehole charge system of 5 × 5 m, during this time the stress wave front at the level of the intermediate detonator activation, at its velocity in hard rock of also about 5 thousand m/s, will cross the 2 next groups of charges in the SDB scheme, which should be blasted much later, in tens of milliseconds. In this case, the stress wave front is located at an angle of 45° to the horizontal. Accordingly, there are changes in the geometry of displacement and interaction of the rock masses driven by the charges in the set, as well as by the adjacent groups.
Thus, in the interval between two adjacent groups of charges, owing to the non-simultaneous arrival of the stress wave, the time difference in the dynamic "pre-destruction" can be about 500 microseconds, and the time of the first arrival of the stress front at an adjacent group will be 250 microseconds. The second stage is related to the sequence of blasting of the borehole charges within a group. Owing to the way the charge system is assembled, only in theory should the first and last charges in the group be blasted simultaneously. In fact, these charges are activated in sequence, either along a common segment of detonating cord or through individual segments of the detonating waveguide of any modern initiation system of the "NONEL" type. In the latter variant, each subsequent charge in the group, at a waveguide detonation velocity of 2 thousand m/s, is activated 2.5 milliseconds after the previous one. Altogether, the whole group of 5 charges is activated within 10 milliseconds. The third stage assumes sequential blasting of the charge groups with a delay of a few tens of milliseconds (from 20 to 70-100 ms).

Material and research results

The processes of blasting in mining and construction are accompanied by the propagation of stress waves outside and within the boundary of the massif being destroyed; along with breaking the massif to the specified granulometric composition, these waves have a harmful effect on the surrounding buildings, mine workings and the massif itself in the form of elastic-plastic (residual) and elastic deformation [3]. Taking into account the danger of seismic phenomena within the boundary of the bench to be destroyed, it is considered appropriate, where possible, to protect the massif outside the blast block from damage, which would otherwise weaken the adjacent massif up to disintegration of the rock. These phenomena either become a source of increased oversize output or require an increase in explosives consumption. Consequently, a decrease in the seismic effect outside the massif being blasted will reduce the consumption of explosive materials (EM). Under specific production conditions, the question arises of developing an integrated approach to managing the explosion energy: increasing the fraction of energy consumed for destruction and reducing the energy of the seismic waves. One effective method of influencing the result of large-scale blasting is shielding [1]. The essence of the shielding method is the formation, on the boundary of the block to be destroyed, of a contour set of boreholes designed in two variants: either with the formation of a continuous single gap in the plane of the contour charge system, or with a planar zone of intense fracturing. The contour set acts as a curtain that protects the surrounding massif from seismic oscillations, reflecting, scattering and refracting the waves generated by the blast. Beyond the shield, the refracted wave carries only a part of the energy into the medium. It is important to note that the energy of the reflected wave, returning into the massif being destroyed, increases the destructive effect of the blast.
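Returning to the timing of the second stage described above, the following minimal sketch reproduces the quoted in-group activation delays from the borehole spacing and the waveguide detonation velocity; the function and variable names are illustrative, and only the 5 m spacing, the 2000 m/s waveguide velocity and the five-charge group come from the text.

```python
def in_group_delays(spacing_m: float, waveguide_velocity_m_s: float, n_charges: int):
    """Delay between successive charges fired through a detonating waveguide,
    and the total activation time of the group (first to last charge)."""
    per_charge_delay_s = spacing_m / waveguide_velocity_m_s
    total_group_time_s = (n_charges - 1) * per_charge_delay_s
    return per_charge_delay_s, total_group_time_s

# Values from the text: 5 m spacing, 2000 m/s waveguide velocity, group of 5 charges.
delay, total = in_group_delays(5.0, 2000.0, 5)
print(f"delay between adjacent charges: {delay * 1e3:.1f} ms")      # 2.5 ms
print(f"activation time of the whole group: {total * 1e3:.1f} ms")  # 10.0 ms
```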
Physical phenomena that contribute to the shielding process have not been studied sufficiently; it is therefore proposed to apply shields in combination, that is, to apply both common and new types of shields with altered geometric, kinematic and dynamic parameters in order to increase the effectiveness of blasting operations. Solving the problem of determining the optimal parameters of such shields will ultimately increase the effectiveness of blast destruction and reduce the seismic effect in the protected zone. Such shielding consists in creating, on the boundary of a blasted block, a zone of broken rock with a correspondingly reduced transmission capacity for elastic waves of sufficiently high intensity. In general, the effectiveness of destruction and the degree of shielding are estimated by the ratio of mass velocities in the massif with and without the shield at equal distances from the charge. To assess the capabilities of such a shield, let us consider in a simplified form the interaction of two adjacent boreholes filled with a gas-generating emulsion substance, wherein the first charge plays the active role, that is, it is the detonating charge, while the second plays the passive role. First, the patterns of stress wave propagation into the rock massif upon blasting of the cylindrical explosive charge are investigated, as well as its interaction with the material filling the adjacent borehole. The movement of the detonation products, the mine rock and the explosive materials is studied within the framework of continuum mechanics [4]. This motion is described by the laws of momentum and mass conservation; for the case of axial symmetry they are written in the form given in [5], with z and r the spatial coordinates and t the time. The expansion of the detonation products (DP) follows the binomial isentrope proposed in [6], whose constants A, B, n and γ are specific to the given explosive material. For mine rocks, the relations between stresses and deformations are written on the basis of the differential theory of plasticity [7]: the components of the deformation tensor are represented as the sum of elastic and plastic deformations; the connection between stresses and elastic deformations is established by the generalized Hooke's law for an isotropic material, with E, G and ν the physical and mechanical characteristics of the mine rock; and in the case of plastic deformations the corresponding relation uses the plasticity condition with linear kinematic displacement, where λ is the coefficient of displacement and σT is the ultimate strength. The emulsion explosive material (EEM) of a cylindrical charge is modeled as a two-component nonlinear-elastic medium consisting of air and liquid components [8]. It is assumed that under dynamic loading each component has the same pressure and moves at the same velocity. At atmospheric pressure, the density of the EM is determined by the volume-weighted mixture rule ρ0 = a1·ρ10 + a2·ρ20, where a1 and a2 are the volumetric contents of the components and ρ10 and ρ20 are their densities. The equation of volumetric compression of the nonlinear-elastic two-component medium involves, for the i-th component, its sound velocity and isentrope index; the subscripts DP and RM denote the detonation products and the rock mass, respectively.
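For orientation, the block below states the generic textbook forms that the constitutive relations described above commonly take; they are written under the usual small-strain, isotropic assumptions and are not necessarily the exact expressions used in [5-8].

```latex
% Generic forms of the constitutive relations described in the text (illustrative only).
\begin{align*}
  \varepsilon_{ij} &= \varepsilon^{e}_{ij} + \varepsilon^{p}_{ij}
      && \text{(elastic--plastic split of the strain tensor)} \\
  \varepsilon^{e}_{ij} &= \frac{1+\nu}{E}\,\sigma_{ij} - \frac{\nu}{E}\,\sigma_{kk}\,\delta_{ij}
      && \text{(generalized Hooke's law for an isotropic rock)} \\
  \rho_{0} &= a_{1}\rho_{10} + a_{2}\rho_{20}
      && \text{(density of the two-component emulsion at atmospheric pressure)}
\end{align*}
```

The binomial isentrope of the detonation products and the volumetric compression law of the two-component medium would take analogous closed forms in terms of the constants A, B, n, γ and the component sound velocities and isentrope indices named in the text.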
To solve the problem, the finite difference method is chosen in a moving Lagrangian coordinate system with a moving grid, which is automatically extended at the end of each computational cycle. A finite-difference scheme of the 'cross' type with second-order accuracy [5] was used. The algorithm and a program for the PC were developed. In the numerical solution of the problem, the blast of an EEM charge in granite was considered, with the detonation characteristics of the EEM as specified for the calculation. Since the energy of the explosive pulse is commensurate with the area under the curve shown in Fig. 1, the duration of the pulse is extremely important. A known means of reducing the maximum pressure level in the initial pulse, while simultaneously lengthening the pulse duration and maintaining the total area of the pulse, is the use of air gaps in the charge, that is, the dispersion of the elongated charge. This method reduces PMAX, decreasing the size of the zone of rock overmilling and, consequently, the energy loss in the near zone of the blast, and at the same time increases the pulse duration. Owing to the energy losses in the zone of rock overmilling, or in the zone of plastic deformations, the shock wave quickly transforms into a stress wave. The further development of the process is shown in Fig. 2, where curve 1 corresponds to the distribution of the maximum pressure (MPa) in the incident stress wave, which propagates into the borehole-to-borehole space a = 5 m in the mine rock, and curve 2 to that in the reflected stress wave. The reflection phenomenon is a sign of the effectiveness of the obstacle placed in the path of the incident stress wave. From Fig. 2 it is evident that in the borehole-to-borehole space, at a distance of 25-35 radii of the active (detonating) charge, a significant jump in pressure relative to the background value is observed. The greatest pressure in the reflected wave is observed in the mine rock at a distance of 30 charge radii (3 m) from the active charge and exceeds 7 MPa. At the boundary with the adjacent borehole (R0 = 45), the value of the average hydrostatic pressure reaches 3.8 MPa (Fig. 2). Since the passive borehole plays the role of the shielding cavity in this problem, it should be filled with an effective absorbing material. If such a shield has to persist for a certain time, it should simultaneously keep the borehole walls from being destroyed under dynamic loading. In the problem we consider the borehole filled with gas-generating emulsion material in two variants (Fig. 3). The dependence of the density on the coordinate in the passive charge under the action of the stress wave of the active charge has a nonlinear character, since the emulsion is deformed in accordance with the law of the nonlinear-elastic medium. As follows from the calculations (Fig. 3),
this compliant material with an initial density of 1200 kg/m3 acquires a new density under the action of a pressure of 3.8-3.5 MPa. In the first variant, the emulsion density increases from 1200 to 1478.5 kg/m3 on the front wall of the borehole, and as the wave passes through the borehole it decreases to 1476 kg/m3 on the opposite wall. In practice, such a change can be considered insignificant. This means that, since in the first variant of the problem the emulsion is saturated with only 2% of gas bubbles, its role as a means of absorbing the wave energy is insignificant. In the second variant of the problem, an emulsion with a density of 800 kg/m3 was used as the absorbing material. The character of curve 2 in Fig. 3 shows that, by virtue of the greater absorption capacity of the shield material, the dissipation of the stress wave energy also increases significantly. With a significant compaction of the emulsion material (up to 1250 kg/m3) on the front wall of the borehole, its density on the opposite wall is much smaller, 1050 kg/m3. Such a decrease in density across the borehole diameter indicates a significant increase in the effectiveness of the shield material. Consequently, the shield effectiveness depends to a large extent on the filler material, as evidenced by curves 1 and 2 in Fig. 3. However, the filtering performance of the protective structure with respect to the stress wave is determined by its size in the direction of wave motion, which should be commensurate with the wavelength. It is known [9] that when the relative density of the shield material is 0.9, the relative wave resistance is 0.63, whereas at a relative density of 0.6 this value is reduced to 0.12, that is, by almost 5 times. However, a change in the ratio of the shield width to the wavelength from 0.05 to 0.8, that is, by 16 times, changes the ratio of the energy in the refracted wave to that in the incident wave only from 0.65 to 0.33, that is, by about a factor of two. So, if the effects of these factors are considered separately, the construction of a protective shield should begin with the selection of the filler material, and only then should the shield size be designed. Traditionally in mining, the contour set is arranged in such a way that the peripheral rock massif receives a minimum of damage. For this purpose, the diameter of the boreholes containing the economical garland charges is reduced; these charges are intended to create only a fracture between the boreholes in the plane of the shield. Such a technique severely limits the use of the main factor, the shield density, because of the resulting insufficient shield width. Based on the above, another technical solution is possible. In a variant of the combined method, boreholes of increased diameter are drilled in the contour set and filled with a material that absorbs the wave energy. To enhance the absorption capacity of the shield, the rock massif surrounding the borehole on the side facing the incident wave should be pre-saturated with a system of radial and slabbing fractures. In this case, the peripheral rock massif is minimally damaged.
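A quick numerical comparison of the two sensitivities quoted above from [9] makes the design recommendation explicit; the numbers below are exactly those cited in the text, and the script only forms their ratios.

```python
# Sensitivity of the shield to filler density versus shield width (values quoted from [9]).

# Relative wave resistance at two relative densities of the shield material.
resistance_at_density = {0.9: 0.63, 0.6: 0.12}
density_change = 0.9 / 0.6                                                    # 1.5x change
resistance_change = resistance_at_density[0.9] / resistance_at_density[0.6]  # ~5.25x

# Refracted-to-incident energy ratio at two shield width / wavelength ratios.
energy_at_width = {0.05: 0.65, 0.8: 0.33}
width_change = 0.8 / 0.05                                                     # 16x change
energy_change = energy_at_width[0.05] / energy_at_width[0.8]                  # ~2x

print(f"density changed {density_change:.1f}x -> wave resistance changed {resistance_change:.1f}x")
print(f"width changed {width_change:.0f}x -> transmitted energy changed {energy_change:.1f}x")
# The filler density is by far the stronger lever, hence: choose the filler material first,
# then dimension the shield.
```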
To construct a contour shield, it is possible to exploit both factors together by applying garland charges of directional action. The peculiarity of the method is that the construction of such charges and the parameters of their placement are based on the application of systems of concentrated charges of special shape, including a cumulative (shaped-charge) concavity. These charges are manufactured mainly at the blast site.

Conclusions

The analysis of the stress wave transformation under the conditions of a shielding blast of rock massif blocks by a system of borehole charges indicates the existence of a significant number of physical and technological factors determining the effectiveness of the contour method for protecting the rock massif adjacent to the place of large-scale blasting. Theoretical studies and existing experience make it possible to develop and apply a new solution for the creation of a protective planar zone on the boundary between the large-scale blasting site and the adjacent rock massif. It is considered expedient to use the possibility of controlled formation of a protective fractured zone, exploiting the features of the mechanical effect that develops when concentrated charges of conical shape, manufactured at the site of the blasting operations, are detonated.

Fig. 1. Theoretical dependence of the initial pressure on time in the mine rock at the boundary with the detonation products during the blast of the EEM cylindrical charge. The pulse has a shape characteristic of a shock wave: the maximum pressure is reached almost instantaneously (in fact, in about 200 microseconds), while the decay of the stress to zero takes up to 100 milliseconds.

Fig. 2. The distribution of the maximum pressure Pmax in the incident (1) and reflected (2) stress waves between adjacent boreholes.

Fig. 3. The density distribution of the porous filler under the action of a stress wave in the section of the contour borehole with an emulsion gassed with sensitizer in the amount of: 1 - 2%; 2 - 15%.
4,137
2018-10-01T00:00:00.000
[ "Geology" ]
Various Correlations in the Anisotropic Heisenberg XYZ Model with Dzyaloshinski—Moriya Interaction

Various thermal correlations, as well as the effect of intrinsic decoherence on the correlations, are studied in a two-qubit Heisenberg XYZ spin chain with the Dzyaloshinski—Moriya (DM) interaction along the z direction, i.e. Dz. It is found that the tunable parameter Dz may play a constructive role for the concurrence C, the classical correlation (CC) and the quantum discord (QD) in thermal equilibrium, while it plays a destructive role for the correlations in the intrinsic decoherence case. The entanglement and the quantum discord exhibit collapse and revival under the phase decoherence. With a proper combination of the system parameters, the correlations can effectively be kept at high steady-state values despite the intrinsic decoherence.

Entanglement is a kind of quantum nonlocal correlation and has been studied in depth in recent years [1−4]. Quantum discord (QD), which measures a more general type of quantum correlation, is found to have nonzero values even for separable mixed states [5]. QD is built on the fact that two classically equivalent ways of defining the mutual information turn out to be inequivalent in the quantum domain. In addition, QD is responsible for the quantum computational efficiency of deterministic quantum computation with one pure qubit [6−8], albeit in the absence of entanglement. In recent years, QD has been intensively investigated in the literature both theoretically [9−30] and experimentally [7,31]. Generally, it is somewhat difficult to calculate QD, and analytical solutions can hardly be obtained except for some particular cases, such as the so-called X states [10]. Some research shows that QD, concurrence and classical correlation (CC) are independent measures of correlations with no simple relative ordering, and that QD is more practical than entanglement [7]. Dakic et al. [24] have introduced an easily and analytically computable quantity, the geometric measure of discord (GMD), and have given a necessary and sufficient condition for the existence of nonzero QD for any dimensional bipartite states. Moreover, the dynamical behavior of QD under decoherence [27,32,33], in both the Markovian [11] and non-Markovian [12,34,35] cases, has also been discussed. In previous studies, the QD of a two-qubit one-dimensional Heisenberg chain with an external magnetic field in thermal equilibrium has been examined [36], where many unexpected behaviors different from those of the thermal entanglement have been shown. In Ref. [37] the authors investigated the effect of the Dzyaloshinski-Moriya (DM) interaction [38], which arises from spin-orbit coupling, on QD in an anisotropic spin model, and showed that with the increase of the DM interaction the QD gradually reduces at finite temperature. The effect of the DM interaction on QD in a Heisenberg model has also been discussed in Ref. [16], in which the authors showed that QD can describe more information about quantum correlation than quantum entanglement. There are interesting papers discussing the QD qualitatively and quantitatively in Heisenberg spin chain models with various factors such as temperature, anisotropies and magnetic field [17]. In this Letter, we study the QD, the CC and the entanglement in an anisotropic Heisenberg model with the DM interaction, in both the thermal equilibrium case and the intrinsic decoherence case, and discuss how the DM interaction influences the correlations in such a system.
The present study of the correlations in a Heisenberg spin chain model will help us to understand more comprehensively the effect of the DM interaction on the correlations and the resistance of the correlations to phase decoherence. We consider the two-qubit (spin-1/2 ⊗ spin-1/2) anisotropic Heisenberg model with the anisotropic, antisymmetric DM interaction along the z direction. The Hamiltonian of such a model contains the three coupling constants Jx, Jy and Jz together with the DM strength Dz, and in the standard basis |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩ it can be written as a 4 × 4 matrix whose nonzero elements are combinations of these parameters.

We give a brief overview of the various correlation measures. Given a bipartite quantum state ρ in a composite Hilbert space ℋ = ℋ_A ⊗ ℋ_B, the concurrence [2], as an indicator of entanglement between the two qubits, is C = max{0, λ1 − λ2 − λ3 − λ4}, where λi (i = 1, 2, 3, 4) are the square roots of the eigenvalues, in descending order, of the "spin-flipped" density operator R = ρ (σy ⊗ σy) ρ* (σy ⊗ σy); σy is the Pauli matrix and ρ* denotes the complex conjugate of ρ in the standard basis |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩.

Let us now recall the original definition of QD. In classical information theory, the total correlations in a bipartite quantum system with subsystems A and B are measured by the quantum mutual information I(ρ_AB) = S(ρ_A) + S(ρ_B) − S(ρ_AB), where ρ_A(B) = Tr_B(A)(ρ_AB) is the reduced density matrix of subsystem A (B) obtained by tracing out subsystem B (A), and S(·) is the von Neumann entropy. The quantum generalization of the conditional entropy is not obtained by simply replacing the Shannon entropy with the von Neumann entropy, but through a projective measurement on subsystem B described by a set of complete projectors B_k, with outcomes labeled by k. The conditional density matrix then becomes ρ_k = (I_A ⊗ B_k) ρ_AB (I_A ⊗ B_k) / p_k, which is the locally post-measurement state of subsystem A after obtaining outcome k on subsystem B with probability p_k = Tr[(I_A ⊗ B_k) ρ_AB (I_A ⊗ B_k)], where I_A is the identity operator on subsystem A. The projectors can be parameterized as B_k = V |k⟩⟨k| V†, k = 0, 1, with V a unitary transformation matrix. The conditional von Neumann entropy (quantum conditional entropy) and the quantum extension of the mutual information can then be defined as [5] S(ρ_AB|{B_k}) = Σ_k p_k S(ρ_k) and I(ρ_AB|{B_k}) = S(ρ_A) − S(ρ_AB|{B_k}). Following the definition of the CC in Ref. [5], CC(ρ_AB) = max_{B_k} [S(ρ_A) − S(ρ_AB|{B_k})], and the QD, defined as the difference between the quantum mutual information I(ρ_AB) and the CC(ρ_AB), is given by QD(ρ_AB) = I(ρ_AB) − CC(ρ_AB). If we denote S_min(ρ_AB) = min_{B_k} S(ρ_AB|{B_k}), then a variant expression of CC and QD is [5,12] CC(ρ_AB) = S(ρ_A) − S_min(ρ_AB) and QD(ρ_AB) = S(ρ_B) − S(ρ_AB) + S_min(ρ_AB).

A typical solid-state system at thermal equilibrium at temperature T (canonical ensemble) is described by ρ(T) = exp(−H/kT)/Z, where Z = Tr[exp(−H/kT)] is the partition function and k is the Boltzmann constant; the elements of this thermal density matrix follow directly from the matrix form of H.

According to the above definitions of C, CC and QD, we will now discuss them with the corresponding plots. Figure 1(a) shows that at finite temperature the concurrence C increases monotonically with increasing Dz, by which one can also achieve maximum entanglement even at finite low temperatures. Both QD and CC are zero when the temperature is zero, which is totally different from the case of C (which takes its maximum there). However, as the temperature is increased gradually starting from zero, there is an apparent increase, which is sharper for larger Dz, followed by a gradual decrease. More interestingly, the QD and CC show the same characteristic behavior with increasing absolute value of Dz, which is different from that of the entanglement. The saddle-like structure of QD and CC in this case reveals the constructive role of Dz for the two correlations, one quantum and one classical, which is one of the interesting results of this work.
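To make these definitions concrete, the following is a minimal numerical sketch that evaluates the concurrence and the quantum discord of an arbitrary two-qubit density matrix by brute-force search over projective measurements on qubit B; the example state and its parameter are illustrative assumptions, not states or values taken from the Letter.

```python
import numpy as np
from itertools import product

sy = np.array([[0, -1j], [1j, 0]])

def entropy(rho):
    """Von Neumann entropy in bits (base-2 logarithm)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def concurrence(rho):
    """Wootters concurrence C = max(0, l1 - l2 - l3 - l4)."""
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def quantum_discord(rho, n_grid=60):
    """QD = I(rho) - CC(rho), minimizing over projective measurements on qubit B."""
    rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    rho_b = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)
    mutual = entropy(rho_a) + entropy(rho_b) - entropy(rho)
    s_min = np.inf
    for theta, phi in product(np.linspace(0, np.pi, n_grid),
                              np.linspace(0, 2 * np.pi, n_grid)):
        b0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
        b1 = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
        cond = 0.0
        for b in (b0, b1):
            P = np.kron(np.eye(2), np.outer(b, b.conj()))   # measure qubit B
            sub = P @ rho @ P
            p = np.real(np.trace(sub))
            if p > 1e-12:
                cond += p * entropy(sub / p)
        s_min = min(s_min, cond)
    cc = entropy(rho_a) - s_min
    return mutual - cc, cc, concurrence(rho)

# Illustrative example (assumed): a Werner state p|Psi-><Psi-| + (1-p) I/4 with p = 0.7.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = 0.7 * np.outer(psi, psi.conj()) + 0.3 * np.eye(4) / 4
qd, cc, c = quantum_discord(rho)
print(f"concurrence = {c:.3f}, classical correlation = {cc:.3f}, discord = {qd:.3f}")
```

For X-shaped states such as the thermal states of this model, the minimization can be restricted to a smaller family of measurements, but the brute-force grid keeps the sketch general.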
All of the above three correlations do not undergo sudden death; instead, they tend asymptotically towards zero as the temperature is increased. Overall, we conclude that Dz is an efficient parameter for increasing the various correlations, such as C, CC and QD, at finite temperature. This is partly contrary to the result reported in Ref. [38], in which the increase of the DM interaction suppresses the QD. Moreover, the QD shows a behavior different from that of C in response to the variation of the DM interaction.

Now we take the influence of intrinsic decoherence on the various correlations into account. According to Milburn's equation [39], which follows from the assumption that for sufficiently short time steps the system does not evolve continuously under unitary transformation, the master equation for pure phase decoherence is given by dρ(t)/dt = −i[H, ρ(t)] − (γ/2)[H, [H, ρ(t)]], where γ is the phase decoherence rate. In the limit γ → 0 the Schrödinger (von Neumann) equation is recovered. The formal solution of the above master equation can be written as [40] ρ(t) = Σ_{k=0}^{∞} (γt)^k/k! · M^k(t) ρ(0) M^{k†}(t), where ρ(0) is the density operator of the initial system and M^k(t) is defined by M^k(t) = H^k exp(−iHt) exp(−γtH²/2). By inserting the completeness relation Σ_m |ψ_m⟩⟨ψ_m| = 1 of the energy eigenstates into the master equation [39], we can write the explicit expression for the density matrix of the system as ρ(t) = Σ_{m,n} exp[−(γt/2)(E_m − E_n)² − i(E_m − E_n)t] ⟨ψ_m|ρ(0)|ψ_n⟩ |ψ_m⟩⟨ψ_n|, where E_m and |ψ_m⟩ are the eigenvalues and eigenvectors of the Hamiltonian. We assume that the system is initially prepared in a Bell state |Ψ(0)⟩; the time evolution for this initial state, and hence the matrix elements of ρ(t), then follow directly from the expression above.

In order to highlight the effect of the phase decoherence on the various correlations, we plot the time evolutions of the correlations for different parameter values in Fig. 2. It can be seen from the lower part of the figure that the time evolution of the entanglement and quantum discord exhibits the interesting phenomena of "sudden death" and "sudden revival" [4], which occur when the spin-orbit coupling is large and the spin-spin coupling is small. Secondly, with the aim of clarifying the joint influence of the system parameters and the phase decoherence on the time evolution, the combination of the system parameters for the upper part of the figure is chosen as the optimum one based on numerical analysis. All three correlations exhibit oscillatory behavior, which ultimately ends in a steady-state value. The oscillations are obviously suppressed as the phase decoherence rate increases. The CC ends at its maximum value, while the other two end at steady-state values that are smaller than the initial maximum value. Importantly, the final steady-state values of the entanglement and quantum discord both remain high despite the phase decoherence, implying that the optimum combination of the system parameters can keep the correlations highly immune to the pure phase decoherence. Moreover, one can see that the quantum discord is more fragile under the phase decoherence than the entanglement, which differs from the earlier finding that it is more resistant to the environment than entanglement [11]. Last but not least, the larger the DM interaction, the more severe the collapse of the correlations, which is the opposite of the thermal case.

In conclusion, we have studied the various correlations, particularly the quantum discord, in an anisotropic two-qubit Heisenberg model in the presence of the DM interaction. Results are presented for the case of thermal equilibrium and under phase decoherence, and they show that the role of the DM interaction in controlling the thermal quantum discord is opposite to that reported previously [38].
The DM interaction is found to be constructive in the model considered here. However, this is no longer the case under phase decoherence, where the DM interaction becomes destructive. The time evolution of the entanglement and quantum discord shows the well-known phenomena of collapse and revival. Although the quantum discord is shown to be more sensitive to the phase decoherence than the entanglement, an optimum combination of the system parameters can, on the whole, protect the correlations effectively against the influence of the phase decoherence.
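As a closing illustration of the phase-decoherence dynamics discussed above, the following minimal sketch propagates an arbitrary two-qubit density matrix with the explicit energy-eigenbasis solution of Milburn's equation quoted earlier; the XYZ-plus-DM Hamiltonian convention and all parameter values in the example are assumptions chosen only to make the script self-contained, not values from the Letter.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def xyz_dm_hamiltonian(Jx, Jy, Jz, Dz):
    """One common convention for the two-qubit XYZ Hamiltonian with a z-axis DM term."""
    return (Jx * np.kron(sx, sx) + Jy * np.kron(sy, sy) + Jz * np.kron(sz, sz)
            + Dz * (np.kron(sx, sy) - np.kron(sy, sx)))

def milburn_evolve(rho0, H, t, gamma):
    """rho(t) under intrinsic (phase) decoherence:
    rho_mn(t) = exp(-gamma*t*(Em-En)^2/2 - 1j*(Em-En)*t) * rho_mn(0) in the eigenbasis of H."""
    E, U = np.linalg.eigh(H)              # H = U diag(E) U^dagger
    r = U.conj().T @ rho0 @ U             # transform to the energy eigenbasis
    dE = E[:, None] - E[None, :]
    r = r * np.exp(-0.5 * gamma * t * dE**2 - 1j * dE * t)
    return U @ r @ U.conj().T             # back to the computational basis

# Example (assumed parameters): Bell state (|01> + |10>)/sqrt(2), weak decoherence.
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())
H = xyz_dm_hamiltonian(Jx=0.8, Jy=0.5, Jz=0.3, Dz=1.0)
for t in (0.0, 1.0, 5.0, 20.0):
    rho_t = milburn_evolve(rho0, H, t, gamma=0.1)
    coherence = abs(rho_t[1, 2])          # the |01><10| coherence that feeds entanglement
    print(f"t = {t:5.1f}   |rho_01,10| = {coherence:.4f}")
```

Feeding rho_t into concurrence and discord routines of the kind sketched earlier would allow one to trace, qualitatively, the oscillations and the approach to steady-state values described in the text.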
2,536.8
2013-01-01T00:00:00.000
[ "Physics" ]
Tunable Broadband Nonlinear Optical Properties of Black Phosphorus Quantum Dots for Femtosecond Laser Pulses Broadband nonlinear optical properties from 500 to 1550 nm of ultrasmall black phosphorus quantum dots (BPQDs) have been extensively investigated by using the open-aperture Z-scan technique. Our results show that BPQDs exhibit significant nonlinear absorption in the visible range, but saturable absorption in the near-infrared range under femtosecond excitation. The calculated nonlinear absorption coefficients were found to be (7.49 ± 0.23) × 10−3, (1.68 ± 0.078) × 10−3 and (0.81 ± 0.03) × 10−3 cm/GW for 500, 700 and 900 nm, respectively. Femtosecond pump-probe measurements performed on BPQDs revealed that two-photon absorption is responsible for the observed nonlinear absorption. The saturable absorption behaviors observed at 1050, 1350 and 1550 nm are due to ground-state bleaching induced by photo-excitation. Our results suggest that BPQDs have great potential in applications as broadband optical limiters in the visible range or saturable absorbers in the near-infrared range for ultrafast laser pulses. These ultrasmall BPQDs are potentially useful as broadband optical elements in ultrafast photonics devices. Introduction The upsurge of two-dimensional (2D) materials was triggered by the discovery of graphene in 2004 when it was isolated from its parent graphite [1]. Due to their outstanding physical and chemical properties and promising applications in various fields such as photovoltaics [2,3] and electronics [4,5], 2D materials became a new class of nanomaterials that have a groundbreaking impact on nanotechnology. Current studies on 2D materials mainly focus on graphene and wide-bandgap transitional metal dichalcogenides (TMDs) such as molybdenum disulfide (MoS 2 ) [6]. Most importantly, there is significant interest in studying 2D materials' optical properties and their applications as nonlinear optical materials (optical limiters and saturable absorbers). Graphene and graphene oxide have been reported to display broadband optical-limiting properties due to strong two-photon absorption [7][8][9][10]. Broadband nonlinear absorption was also discovered in TMDs [11,12] and their applications in ultrafast lasers have been investigated [13,14]. Besides 2D materials, ultrasmall quantum dots (QDs) are being also extensively studied due to their unique electronic and optical properties arising from the quantum confinement effect [15]. Black phosphorous (BP), also known as phosphorene, is the latest member of the family of 2D materials [16,17]. BP is the most thermodynamically stable allotrope of phosphorus [18] and its unique structure as well as fascinating optical and electronic properties have attracted a lot of research interests [19]. As bulk BP consists of phosphorene monolayers stacked together by van der Waals force, it can be mechanically [20] or chemically exfoliated [21][22][23] into few-layer or single-layer nanosheets, or quantum dots. Due to its attractive properties, BP has been applied to field-effect transistors (FET) [17,24]. Few-layer BP can be used in thin-film solar cells [25] and p-n diodes [26]. It is also theoretically predicted to be used in flexible ambipolar transistors [27], energy storage [28] and moveable vibratory devices [29]. 
In particular, it has been previously reported that few-layer BP can act as an effective saturable absorber for ultrashort pulse generation in solid-state and fiber mode-locked lasers operating in the 1000-2000 nm wavelength range [30][31][32][33][34][35]. Unlike graphene's zero-bandgap nature which limits its electronic and photonic applications, it is noteworthy that the bandgap of BP depends on the number of layers, ranging from 0.3 (bulk) to 2.0 eV (single layer) [36,37]. It has also been found that the applied strain force [38,39], stacking order [25] and external electric field [40] can also modulate the bandgap of BP. The tunable bandgap of BP shows great potential in bridging the space between zero-bandgap semi-metallic graphene and wide-bandgap TMDs (1-2 eV). Hence, BP is considered to be suitable for extremely broadband nonlinear optical applications, including optical limiters and saturable absorbers [30][31][32][33][34][35]. The past decades have witnessed significant research efforts in developing broadband optical materials. Optical-limiting materials exhibit decreased transmittance at high-input laser intensity, which can be used to protect human eyes and sensitive instruments from damage by high-intensity laser beams [41]. In contrast, saturable absorbers show an increased transmittance at high-input laser intensity, which can be utilized in pulse compression, mode locking and Q-switching [42]. Herein, for the first time, black phosphorus quantum dots (BPQDs) are extensively investigated in broadband femtosecond nonlinear optical properties from visible to near-infrared (near-IR) range. Based on the simple solution exfoliation method, a suspension of BPQDs was prepared in N-Methylpyrrolidone solvent (NMP). The utrasmall BPQDs were experimentally demonstrated to display nonlinear optical responses in a broad wavelength range by femtosecond open-aperture Z-scan measurements. Under femtosecond laser excitation, BPQDs exhibited significant nonlinear absorption in the visible range, but saturable absorption in the near-infrared (near-IR) range. Femtosecond pump-probe measurements performed on BPQDs revealed that two-photon absorption is primarily responsible for the observed nonlinear absorption. The results suggest that BPQDs have great potential in applications as broadband optical limiters in the visible range or as saturable absorbers in the near-IR range for ultrafast laser pulses. These ultrasmall BPQDs are potentially useful as broadband optical elements in fiber lasers and other ultrafast photonics devices. Sample Preparation and Characterizations Mechanical and liquid exfoliation methods are commonly used as simple and effective techniques to prepare 2D nanomaterials and QDs from bulk crystals, such as BP [21,22] and graphene [43]. In the simple liquid exfoliation technique, solvents with a suitable surface energy can serve as stable dispersions for layered materials. Here in this article, the solvent exfoliation combined with probe sonication and bath sonication were used to fabricate BPQDs dispersed in N-Methylpyrrolidone (NMP) solvent [44]. BP has been known to be sensitive to water and oxygen and can be oxidized under visible-light irradiation [45,46]. In our experiments, BPQDs were prepared in ambient conditions and dispersed in NMP solution to avoid any exposure to air as much as possible. The absorption and Raman spectra were measured and compared before and after nonlinear optical property measurements to check for any possible degradation. 
More experimental details can be found in Section 3. Transmission electron microscopy (TEM) measurements were conducted to examine the morphology of the as-prepared BPQDs. The TEM image (Figure 1a) shows that the ultrasmall BPQDs have an average lateral size of about 2-3 nm, which corresponds to a stacking number of layers of 2 ± 1. The absorption spectra (Figure 1b) show that BPQDs have a broad absorption band, spanning from the UV to the near-IR range. The absorption band in the UV-visible range is similar to those of other 2D layered materials such as graphene oxide, as well as few-layer BP. Compared to BP, a huge absorption band in the near-IR range from 1250 to 1630 nm with two absorption maxima at 1438 and 1550 nm was observed in the BPQD dispersion. This near-IR broad absorption band might be due to the defect state absorption induced by the ultrasmall size of the QDs [47], which indicates possible interesting nonlinear optical properties of BPQDs both in the visible and near-IR regions. BPQDs were also characterized by Raman spectroscopy. Samples were prepared by spin-coating the BPQD dispersion onto the quartz substrates, drying them on a heating plate at 348 K for 5 min, and then keeping them in N2 for the next 6 h. As shown in Figure 2, the three observed peaks can be attributed to one out-of-plane phonon mode (A1g) at 361.16 cm−1, and two in-plane modes, B2g and A2g, at 438.22 and 465.65 cm−1, respectively, consistent with the reported values [15]. Compared to the bulk BP, both the B2g and A2g modes of BPQDs are red-shifted by 2.30 cm−1, while the A1g mode is red-shifted by 2.32 cm−1.
It can be explained by the fact that the Raman shift is dependent on thickness and lateral dimensions. The frequency difference between the A1g and A2g modes of BPQDs equals 104.3 cm−1, which is larger than the reported value for bulk BP [48] and confirms that we successfully reduced the thickness of the BP.

The Nonlinear Optical Properties of BPQDs in the Visible Range

Open-aperture femtosecond Z-scan measurements were used to characterize the nonlinear optical properties of BPQDs. The BPQD dispersion was placed in a 1 mm cuvette for Z-scan measurements. The detailed experimental setup is described in Section 3. Briefly, femtosecond laser pulses with a pulse duration of 120 fs and a repetition rate of 1 kHz were focused onto the cuvette with the samples, which were moved towards and away from the focus by using a motorized translational stage to study the excitation-intensity-dependent transmission of the samples. Similar Z-scan measurements were performed with femtosecond laser pulses at 500, 700 and 900 nm, which were generated by an optical parametric amplifier (TOPAS-Prime) pumped by a mode-locked Ti:sapphire oscillator-seeded regenerative amplifier. The Z-scan results showed that a very weak saturable absorption effect was exhibited in the visible spectral range at low excitation intensity due to ground-state bleaching. As the excitation intensity increased, reverse saturable absorption occurred and finally became dominant. The results under the high excitation peak power intensity of 147 GW/cm2 are shown in Figure 3. Subfigures (a-c) correspond to the Z-scan results at 500 nm (a), 700 nm (b) and 900 nm (c), respectively.
NMP solvent in a 1 mm cuvette was measured and no nonlinear absorption was found, which excludes the effects of the NMP and the 1 mm cuvette. For all wavelengths, the normalized transmittance gradually decreased as the sample moved towards the focal point (Z = 0), indicating an optically induced reverse saturable absorption. The Z-scan curves can be fitted to obtain the nonlinear absorption coefficient β with the following approximate equation:

T(Z) ≈ 1 − β I0 Leff / [2√2 (1 + Z²/Z0²)],

where Z is the distance between the sample and the focus; Z0 is the Rayleigh diffraction length; I0 is the peak power density of the excitation laser pulses; Leff = (1 − e^(−α0 L))/α0 is the effective sample length; L is the sample length; and α0 is the linear absorption coefficient. The fitted values of the nonlinear absorption coefficient β at 500, 700 and 900 nm were found to be (7.49 ± 0.23) × 10−3, (1.68 ± 0.078) × 10−3 and (0.81 ± 0.03) × 10−3 cm/GW for BPQDs, respectively. The obtained nonlinear absorption coefficients of these BPQDs are higher than those of other optical-limiting materials such as CdSe QDs with an average size of around 2 nm ((1.1 ± 0.15) × 10−3 cm/GW) [49], CdO nanoflakes (0.69 × 10−3 cm/GW) [50] and Fe2O3 hexagonal nanostructures (0.82 × 10−3 cm/GW) [51], obtained using similar femtosecond laser pulses. These results indicate that BPQDs can act as good broadband optical-limiting materials in the visible range. The results are similar to previous studies on the nonlinear absorption properties of BP nanoplatelets at 800 nm [52]. These nonlinear absorption properties can be analyzed on the basis of the band structure of BP. The bandgap of bulk BP is ~0.3 eV, and the bandgap increases as the number of layers decreases. As the size and layer number of the BPs decrease, quantum effects become increasingly important. Upon excitation by a photon with an energy of 1.3-2.48 eV (equivalent to 500-900 nm), electrons cannot be promoted from the top of the valence band (VB) by a direct transition but can only jump via an indirect transition or through defect levels, as the photon energy is much larger than the bandgap of BP. For the energy (2.6-4.96 eV) of two photons at 500-900 nm, when the excitation intensity is high, electrons can be directly promoted from the top of the valence band through the two-photon absorption (TPA) process, which accounts for the observed significant optical-limiting properties.

The Ultrafast Carrier Dynamics of BPQDs

To better understand the possible mechanisms behind the nonlinear absorption properties of the BPQDs, transient absorption (TA) spectra as well as the single-wavelength dynamics were measured under excitation at 400 nm (3.1 eV) with a pump fluence of 16 µJ/cm2. TA spectra of BPQDs in NMP at various delay times are shown in Figure 4a. A broad negative transient differential transmission band spanning from 450 to 750 nm was found. The observed negative transmission change signal is commonly assigned to excited-state absorption. This result is also consistent with the previously reported photo-induced, free-carrier-mediated interband transient absorption in few-layer BP and graphene [53,54]. As mentioned above, the excitation photon energy (3.1 eV) is much larger than the bandgap of BP. Thus, the negative transmission change signal induced by interband transient absorption could be attributed to the multiphoton absorption process. Our pump-probe results further support the nonlinear absorption behaviors observed by the Z-scan measurement in the visible range.
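Returning to the open-aperture Z-scan analysis above, the following is a minimal sketch of how the nonlinear absorption coefficient β could be extracted by fitting the normalized transmittance with the approximate two-photon absorption expression quoted there; the synthetic data, the assumed values of I0, z0, α0 and L, and the use of SciPy are illustrative assumptions, not the measured data of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def oa_transmittance(z_mm, beta_cm_per_GW, I0_GW_cm2=147.0, z0_mm=3.0,
                     alpha0_per_cm=2.0, L_cm=0.1):
    """Normalized open-aperture transmittance T(z) ~ 1 - beta*I(z)*Leff/(2*sqrt(2)),
    with I(z) = I0/(1 + z^2/z0^2). Units: beta in cm/GW, I0 in GW/cm^2."""
    L_eff = (1.0 - np.exp(-alpha0_per_cm * L_cm)) / alpha0_per_cm
    I_z = I0_GW_cm2 / (1.0 + (z_mm / z0_mm) ** 2)
    return 1.0 - beta_cm_per_GW * I_z * L_eff / (2.0 * np.sqrt(2.0))

# Synthetic "measurement" generated with an assumed beta, plus noise, then refitted.
rng = np.random.default_rng(1)
z = np.linspace(-20, 20, 81)                       # sample position in mm
data = oa_transmittance(z, beta_cm_per_GW=7.5e-3) + rng.normal(scale=2e-4, size=z.size)

popt, pcov = curve_fit(oa_transmittance, z, data, p0=[1e-3])
beta_fit, beta_err = popt[0], np.sqrt(pcov[0, 0])
print(f"beta = ({beta_fit*1e3:.2f} +/- {beta_err*1e3:.2f}) x 10^-3 cm/GW")
```

In practice α0 would come from the linear absorption spectrum and z0 from the beam geometry; only the fitted β would then be compared across wavelengths, as in the values quoted above.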
Figure 4b shows the single-wavelength dynamics probed at the wavelengths of 520 and 700 nm. The photo-induced absorption decays probed at different wavelengths show no difference and could be well fitted with a bi-exponential equation with time constants of τ1 = 78 ± 6 ps (35%) and τ2 = 612 ± 30 ps (65%). The two characteristic time constants, on the order of tens of picoseconds and hundreds of picoseconds, could be attributed to electron-phonon scattering and slower phonon-phonon scattering during the interband transition.
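A bi-exponential fit of the kind used for Figure 4b can be sketched as follows; the synthetic decay trace, the noise level and the use of SciPy are illustrative assumptions rather than the actual pump-probe data, and only the two time constants and their 35%/65% weights are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t_ps, a1, tau1_ps, a2, tau2_ps):
    """Two-component exponential decay of the photo-induced absorption signal."""
    return a1 * np.exp(-t_ps / tau1_ps) + a2 * np.exp(-t_ps / tau2_ps)

# Synthetic trace built from the quoted time constants (78 ps and 612 ps, 35%/65% weights),
# plus noise; a real analysis would fit the measured -dT/T(t) trace instead.
t = np.linspace(0, 3000, 300)
rng = np.random.default_rng(2)
signal = biexp(t, 0.35, 78.0, 0.65, 612.0) + rng.normal(scale=0.01, size=t.size)

popt, _ = curve_fit(biexp, t, signal, p0=[0.5, 50.0, 0.5, 500.0])
a1, tau1, a2, tau2 = popt
print(f"tau1 = {tau1:.0f} ps ({a1/(a1+a2):.0%}), tau2 = {tau2:.0f} ps ({a2/(a1+a2):.0%})")
```

With a convolution against the instrument response added, the same routine would apply directly to measured ΔT/T traces.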
The Nonlinear Optical Properties of BPQDs in the Near-IR Range

We also investigated the nonlinear absorption responses of BPQDs in NMP at 1050 and 1350 nm. The open-aperture Z-scan measurement results under different excitation peak power intensities at the focal point are shown in Figure 5. The normalized transmittance gradually increased as the BPQD sample approached the focal point (Z = 0), indicating that the absorption of the BPQDs becomes saturated with increasing incident pump intensity. This is well known as saturable absorption behavior. As shown in the absorption spectra, there is a strong absorption band in the near-IR range. The observed saturable absorption activity can be ascribed to the ground-state bleaching induced by photo-excitation. Figure 5 shows the power-dependent saturable absorption. It can be seen that the peaks of the open-aperture Z-scan curves increased with increasing input power intensity. These results further confirm that the saturable absorption responses indeed originate from the intrinsic optical absorption effects in BPQDs rather than from artifacts such as sample damage or contamination. When the excitation intensity was increased further, up to the photo-damage threshold, the reverse saturable absorption effect was not observed. These results can be ascribed to the smaller two-photon absorption coefficients of BPQDs in the near-IR spectral range, which are not large enough to overcome the ground-state bleaching. These results are consistent with the wavelength dependence of the two-photon absorption coefficient in the visible spectral range: the two-photon absorption coefficient decreased with increasing wavelength.
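To illustrate how saturable absorption of this kind is commonly quantified, the following minimal sketch evaluates a simple saturable-absorber transmittance model with a modulation depth, a non-saturable loss and a saturation intensity; the functional form is one common textbook choice rather than the fit used in this work, the modulation depth and saturation intensity are of the order of those reported below for the 1550 nm measurement, and the non-saturable loss is a pure assumption.

```python
import numpy as np

def sa_transmittance(I_GW_cm2, delta_T=0.08, T_ns_loss=0.30, I_sat_GW_cm2=2.5):
    """Transmittance of a saturable absorber versus peak intensity I:
    T(I) = 1 - delta_T * exp(-I / I_sat) - T_ns_loss,
    where delta_T is the modulation depth and T_ns_loss the non-saturable loss."""
    return 1.0 - delta_T * np.exp(-I_GW_cm2 / I_sat_GW_cm2) - T_ns_loss

# Illustrative transmittance rise between low intensity and deep saturation.
for intensity in (0.1, 1.0, 2.5, 10.0, 38.0):
    print(f"I = {intensity:5.1f} GW/cm^2 -> T = {sa_transmittance(intensity):.3f}")
```

Fitting an expression of this kind to the intensity-dependent transmittance extracted from a Z-scan trace is one common way to obtain a saturation intensity and modulation depth of the sort quoted below.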
Similar measurements have also been performed at 1550 nm. Figure 6 shows the Z-scan curve of BPQDs under the excitation peak power intensity of 38 GW/cm2 and the corresponding fitting curve of saturable absorption. Based on the relation between the laser beam spot size and the relative separation, the nonlinear saturable absorption curve could be derived (Figure 6b). The onset of saturation intensity was around 2.5 GW/cm2 and the modulation depth was 8.2% at 1550 nm. Therefore, the measured results indicate that the as-prepared BPQDs can be used as a saturable absorber for ultrafast laser pulse generation in near-IR fiber lasers.

Synthesis of BPQDs

NMP was used to isolate the BPQDs from the bulk BP crystal.
Synthesis of BPQDs

NMP was used to isolate the BPQDs from the bulk BP crystal. First, the BP bulk crystal (99.998%, purchased from smart elements) was pulverized to prepare BP powder. The BPQD suspension was then prepared by ultrasound probe sonication followed by ice-bath sonication of the bulk BP powder in 1-methyl-2-pyrrolidone (NMP, 99.5%, anhydrous, purchased from Aladdin Reagents). Then 25 mg of the BP powder was dispersed into 25 mL of NMP in a 50 mL sealed conical tube and sonicated with a sonic tip for 3 h at a power of 1200 W. The ultrasonic frequency ranged from 19 to 25 kHz, and the probe worked for 2 s with an interval of 4 s. Afterwards, an ultrasonic bath was used to sonicate the dispersion for another 10 h at a power of 300 W. During sonication, the temperature of the sample solution was monitored and kept below 277 K in an ice bath. The dispersion was then centrifuged at 7000 rpm for 20 min, and the supernatant containing BPQDs was collected. To further increase the concentration of the BPQD dispersion, the collected solution was centrifuged for 20 min at 12,000 rpm. The resulting supernatant was collected and kept in a sealed tube for further tests.

Transmission electron microscopy (TEM) measurements were conducted to examine the morphology of the as-prepared BPQDs. The TEM images were taken on a Tecnai G2 F20 S-Twin transmission electron microscope at an acceleration voltage of 200 kV. The absorption spectra were taken on a Shimadzu UV3600 spectrophotometer with a QS-grade quartz cuvette at room temperature.

Nonlinear Optical Property Measured by the Z-Scan Technique

The open-aperture Z-scan measurements were performed using an optical parametric amplifier (TOPAS-Prime) pumped by a mode-locked Ti:sapphire oscillator-seeded regenerative amplifier (Spectra-Physics Spitfire Ace), which delivers laser pulses with a central wavelength tunable from the UV to the near-IR, a pulse duration of ~120 fs, and a repetition rate of 1 kHz. The laser beam was focused onto the sample with a beam radius of ~23 µm. Samples were prepared in a quartz cuvette with an optical path length of 1 mm. The transmittance of the samples was measured as a function of input intensity, which was varied by moving the samples in and out of the beam focus along the z-axis. At least five cycles were repeated to reduce the experimental error, and the average was used to plot the Z-scan curves.
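Given the beam radius (~23 µm) and pulse duration (~120 fs) quoted above, the on-axis peak intensity at the focus and the open-aperture transmittance expected for a two-photon absorber can be estimated as sketched below. The pulse energy, the Gaussian-beam profile, and the thin-sample approximation are assumptions made for illustration, not the authors' stated values.

# Peak intensity at focus for a Gaussian beam, and the standard
# open-aperture Z-scan transmittance for a two-photon absorber,
#   T(z) ~ 1 - q0 / (2*sqrt(2) * (1 + (z/z0)^2)),  valid for small q0.
import numpy as np

E_pulse = 100e-9        # J, hypothetical pulse energy
tau = 120e-15           # s, pulse duration (~120 fs)
w0 = 23e-6              # m, beam radius at focus (~23 um)
wavelength = 900e-9     # m, wavelength at which beta is quoted below

P_peak = 0.94 * E_pulse / tau                 # Gaussian temporal profile
I0 = 2.0 * P_peak / (np.pi * w0**2)           # on-axis peak intensity, W/m^2
I0_GW_cm2 = I0 * 1e-4 / 1e9                   # convert W/m^2 -> GW/cm^2
print("I0 ~ %.1f GW/cm^2" % I0_GW_cm2)

z0 = np.pi * w0**2 / wavelength               # Rayleigh range, m
beta_cm_per_GW = 0.81e-3                      # value reported at 900 nm
L_eff_cm = 0.1                                # ~1 mm cuvette, low linear loss

z = np.linspace(-5 * z0, 5 * z0, 201)
q0 = beta_cm_per_GW * I0_GW_cm2 * L_eff_cm / (1 + (z / z0)**2)
T = 1 - q0 / (2 * np.sqrt(2))                 # normalized transmittance dip
print("minimum transmittance ~ %.4f at z = 0" % T.min())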
Pump-Probe and Transient Absorption Spectra Measurement

In our experiments, a Ti:sapphire oscillator-seeded regenerative amplifier laser system (Spectra-Physics Spitfire Ace), with an output pulse energy of 2 mJ at 800 nm and a repetition rate of 1 kHz, was used as the source of the pump and probe pulses. The detailed experimental setup is shown in Figure 7. The 800 nm laser beam was split into two portions. One portion passed through a BBO crystal to generate the 400 nm pump beam by second-harmonic generation; the other was used to generate a white-light continuum in a 2.5 mm sapphire plate. The white-light beam was in turn split into two parts, one serving as the probe and the other as the reference. The pump beam was focused onto the sample with a spot 300 µm in diameter and overlapped with the smaller probe beam (100 µm in diameter). The probe beam was collected after passing through the sample, and its intensity was monitored by a photodiode detector. The detectors for the probe and reference beams were both connected to a lock-in amplifier to correct for pulse-to-pulse intensity fluctuations. The time delay between the pump and probe pulses was varied by a computer-controlled translation stage, and the pump beam was modulated by an optical chopper at a frequency of 500 Hz. In a single-wavelength dynamics scan, a fixed probe wavelength was selected and the pump-induced transmittance change of the probe (∆T/T) was monitored as a function of the delay between the pump and probe beams. The transient absorption spectra at different delay times were measured by passing the probe through a monochromator before the detector.

Conclusions

In conclusion, BPQDs were fabricated by a simple solvent exfoliation method. Their nonlinear absorption properties were investigated with Z-scan measurements using femtosecond laser pulses at 500, 700, 900, 1050, 1350, and 1550 nm. The ultrasmall BPQDs in NMP suspension exhibited broadband optical-limiting activity in the visible range, with nonlinear absorption coefficients of (7.49 ± 0.23) × 10⁻³, (1.68 ± 0.078) × 10⁻³, and (0.81 ± 0.03) × 10⁻³ cm/GW at 500, 700, and 900 nm, respectively. Femtosecond pump-probe measurements on the BPQD suspension further support that nonlinear absorption due to two-photon absorption plays an important role in the observed optical limiting. Furthermore, saturable absorption due to ground-state bleaching was found in BPQDs under photo-excitation at 1050, 1350, and 1550 nm. These results provide a basis for applications of BPQDs in broadband optoelectronic devices, such as optical limiters in the visible range and saturable absorbers in the near-IR range.
7,534.8
2017-02-01T00:00:00.000
[ "Materials Science", "Physics" ]
INFORMATION AND WEB TECHNOLOGIES. This article examines the features of information technology development in Spain, with a focus on 2018-2022 statistical data. The article highlights the steady growth of the IT sector in Spain, with a projected growth rate of 5.5% for 2022. The impact of the COVID-19 pandemic on the digital transformation of industries in Spain is also discussed, along with the Spanish government's support for the development of the IT sector. The article also draws attention to the growing concern over cybersecurity threats in Spain and the need for robust cybersecurity measures. Finally, the article presents conclusions based on the statistical data and analysis, highlighting the opportunities and challenges arising from the development of information technology in Spain. Overall, this article provides an overview of the current state and future prospects of the IT sector in Spain, making it a useful resource for anyone interested in this topic.

Information technology (IT) has significantly transformed the global economy and society in the past few decades. Spain, a country rich in culture and history, has emerged as one of the leaders in IT development in Europe. The Spanish economy is one of the largest in Europe and has been expanding rapidly, with the IT sector being one of the most dynamic and promising industries in recent years. This article aims to provide an overview of the features of IT development in Spain, including the main companies, research centers, and startups that are driving the industry forward. The article examines the role of the Spanish government in supporting IT development, as well as the impact of IT on other sectors of the economy, such as tourism, healthcare, and transportation. By analyzing the key features of IT development in Spain, this article contributes to a better understanding of the factors driving the country's economic growth and its position in the global IT market.

According to the Annual Report on the Recovery and Resilience Facility (RRF), which is based on the pillar reporting methodology, a total of almost €130 bn in expenditure is allocated to the digital transformation pillar, of which more than a third is for the digitalization of public services (36%, €47 bn), followed by measures supporting the digitalization of businesses (20%, €26 bn) and human capital (20%, €26 bn). The highest expenditure in the digital pillar in absolute terms comes from Italy and Spain (€27 bn and €18 bn, respectively). Among the countries devoting the highest percentages of their GDP to the RRF digital pillar are those lagging behind in the DESI (Romania, Bulgaria, Greece, Portugal, Croatia), which are thus making a strong effort to close the gap [1].

The development of information technology in Spain has been accelerating rapidly in recent years. According to the latest statistics available, the IT sector in Spain is one of the fastest-growing industries in the country's economy. In 2020, the IT sector accounted for 6.8% of Spain's GDP, generating €61.4 billion in revenue. Furthermore, the IT industry in Spain employed over 446,000 people, accounting for 2.4% of the country's total workforce [2].
Software development is one of the main drivers of the IT sector in Spain. The software industry in Spain has been growing at an impressive rate, with a compound annual growth rate (CAGR) of 6.7% between 2017 and 2022. In 2020, Spain's software industry generated revenue of €16.4 billion, making it the largest segment of the country's IT industry. It is also the most profitable segment, with a profit margin of 10.5% [2]. Cloud computing is another rapidly growing segment of the IT industry in Spain. In 2020, the cloud computing market in Spain was valued at an estimated €2.9 billion, and it is expected to grow at a CAGR of 23.7% between 2020 and 2025. The COVID-19 pandemic has accelerated the adoption of cloud computing in Spain, with many companies and organizations turning to cloud-based solutions to enable remote work and digitalization [3].

The mobile app industry is also experiencing significant growth in Spain. In 2020, the mobile app market in Spain was valued at an estimated €1.7 billion, with a CAGR of 9.5% expected between 2020 and 2025. The mobile app industry in Spain is dominated by gaming apps, which accounted for 45% of total revenue in 2020.

Spain is home to many innovative startups in the IT sector. In 2020, Spain had over 4,200 active startups, the majority of them operating in the IT industry. Some of the most successful IT startups in Spain include Cabify, Glovo, Jobandtalent, and Wallapop.

The Spanish government has been actively supporting the development of the IT sector in the country. In 2021, the Spanish government announced the creation of a €4 billion fund to support the digitalization of the Spanish economy. The fund will focus on areas such as 5G infrastructure, cybersecurity, and digital skills training [4].

In conclusion, the development of information technology in Spain has been accelerating rapidly in recent years, driven by the software, cloud computing, and mobile app industries. The IT sector is one of the most dynamic and promising industries in the Spanish economy, and it is expected to continue growing in the coming years. The Spanish government's support for the digitalization of the economy is expected to further accelerate this growth.

The impact of information technology on other sectors of the Spanish economy is also significant. For example, the tourism industry in Spain has been transformed by IT in recent years. The use of digital platforms for travel bookings and reservations has become ubiquitous, and many tourism companies have embraced digital technologies to improve their services and increase their efficiency. Table 1 summarizes the growth rate of the IT sector in Spain and the number of reported cybersecurity incidents from 2018 to 2022. The data reveal a consistent growth rate in the IT sector, increasing steadily each year. However, the number of reported cybersecurity incidents has also risen each year, which highlights the need for robust cybersecurity measures as part of the development of information technology in Spain. Overall, this table provides a clear and concise overview of the key statistical trends in the development of information technology in Spain over the past five years.
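As a quick check on figures like the CAGR values quoted above, compound growth can be computed and projected as follows. The helper functions are a generic sketch, and the sample values are taken from the cloud-market figures in the text.

# Compound annual growth rate (CAGR) and projection helpers.
def cagr(start_value, end_value, years):
    # CAGR = (end / start)^(1/years) - 1
    return (end_value / start_value) ** (1.0 / years) - 1.0

def project(value, rate, years):
    # Value after compounding `rate` for `years` periods.
    return value * (1.0 + rate) ** years

# Example: cloud market of EUR 2.9 bn in 2020 growing at a 23.7% CAGR.
market_2025 = project(2.9, 0.237, 5)
print("projected 2025 cloud market: %.2f bn EUR" % market_2025)
print("implied CAGR check: %.1f%%" % (100 * cagr(2.9, market_2025, 5)))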
The healthcare industry in Spain has also been transformed by IT. The use of electronic medical records and telemedicine has become more widespread in recent years, enabling doctors and other healthcare professionals to provide more efficient and effective care to patients.

The transportation industry in Spain has likewise been affected by IT. Digital platforms for ride-sharing and taxi services have become increasingly popular, with companies such as Uber and Cabify operating in several Spanish cities.

In addition to the impact on specific industries, the development of information technology in Spain has had broader social and economic effects. For example, the growth of the IT sector has led to the creation of many new jobs, including highly skilled positions in software development and data analysis. IT has also enabled greater connectivity and communication, making it easier for individuals and businesses to collaborate and innovate.

However, there are also challenges associated with the rapid development of IT in Spain. One of the main challenges is the digital divide, the unequal access to digital technologies and skills. While IT has the potential to benefit all sectors of society, many individuals and communities in Spain still lack access to digital technologies and skills. Addressing this divide will be crucial for ensuring that the benefits of IT are shared equitably across society.

In conclusion, the development of information technology in Spain has been rapid and dynamic in recent years, driven by the software development, cloud computing, and mobile app industries. The Spanish government's support for digitalization is expected to further accelerate the growth of the IT sector, with significant impacts on other sectors of the economy and broader society. While there are challenges associated with this rapid development, addressing them will be crucial for ensuring that the benefits of IT are shared equitably.

Another challenge facing the development of information technology in Spain is cybersecurity. As the use of digital technologies becomes more widespread, the risk of cyber attacks also increases. In 2020, Spain experienced a significant increase in cyber attacks, with over 120,000 cybersecurity incidents reported. These incidents included phishing attacks, malware infections, and ransomware attacks, which can cause significant damage to businesses and individuals [1].

To address these challenges, the Spanish government has implemented several initiatives to improve cybersecurity in the country. In 2021, it launched a cybersecurity strategy aimed at improving national cyber resilience.

The development of information technology in Spain has also been shaped by the COVID-19 pandemic. The pandemic accelerated the adoption of digital technologies, as many businesses and organizations had to shift to remote work and digital solutions to continue their operations. Digital technologies have also played a crucial role in supporting healthcare systems, enabling telemedicine and remote consultations.
The pandemic has also highlighted the importance of digital skills and literacy. As many individuals and businesses had to rely on digital technologies for work, education, and communication, the need for digital skills and literacy became more pressing. The Spanish government has recognized this and launched several initiatives aimed at improving digital literacy and skills across the country.

In conclusion, the development of information technology in Spain has been shaped by several factors, including the software development, cloud computing, and mobile app industries. The growth of the IT sector has had significant impacts on other sectors of the economy, as well as on broader society. However, there are also challenges associated with this rapid development, including the digital divide and cybersecurity risks. The COVID-19 pandemic accelerated the adoption of digital technologies in Spain, highlighting the importance of digital skills and literacy. The Spanish government's support for digitalization and cybersecurity is expected to continue to drive the growth of the IT sector in Spain in the coming years.

Looking ahead, the development of information technology in Spain is expected to continue at a rapid pace. The Spanish government has set ambitious targets for digitalization, with the aim of making Spain a leader in digital transformation in Europe, as laid out in its Digital Spain 2025 strategy.

One priority area is the Internet of Things (IoT). The Spanish government has identified the IoT as a priority area for investment and research, with the aim of promoting the development of innovative IoT solutions and services. The government has launched several initiatives to support the IoT sector, including funding for research and development, support for startups and SMEs, and collaboration between public and private entities.

Another area of focus is artificial intelligence (AI), the ability of machines to perform tasks that typically require human intelligence, such as recognizing speech, interpreting data, and making decisions. The Spanish government has identified AI as a priority area for investment and research, with the aim of promoting the development of innovative AI solutions and services, and has launched several initiatives to support the AI sector, including funding for research and development, support for startups and SMEs, and collaboration between public and private entities.

In conclusion, the development of information technology in Spain has been rapid and dynamic in recent years, with significant impacts on other sectors of the economy and broader society. The Spanish government's support for digitalization is expected to continue to drive the growth of the IT sector, with a focus on areas such as the IoT and AI. Addressing challenges such as the digital divide and cybersecurity risks will be crucial for ensuring that the benefits of IT are shared equitably across society. The future of information technology in Spain is bright, and the country is well-positioned to remain a leader in digital transformation in Europe.
Based on the statistical data and analysis presented in the article, we can draw the following conclusions about the features of the development of information technology in Spain:

1. The IT sector in Spain has experienced steady growth over the past years, with a growth rate of 5.5% projected for 2022. This indicates that there is a strong demand for IT products and services in Spain.

2. The COVID-19 pandemic has accelerated the digital transformation of many industries in Spain, leading to an increased adoption of digital technologies and a higher demand for IT services.

3. The Spanish government has taken steps to support the development of the IT sector, including investing in research and development, promoting the adoption of new technologies, and providing incentives for businesses to innovate and expand their operations.

4. Cybersecurity is a growing concern in Spain, with the number of reported cybersecurity incidents increasing each year. This highlights the need for robust cybersecurity measures to protect businesses and individuals from cyber threats.

5. Despite the challenges posed by cybersecurity threats, the growth of the IT sector in Spain presents opportunities for businesses and individuals to innovate, create new products and services, and improve efficiency and productivity.

In summary, the development of information technology in Spain is characterized by steady growth, government support, and a growing focus on cybersecurity. While there are challenges to be addressed, the opportunities presented by the growth of the IT sector in Spain are significant, and businesses and individuals can benefit from this development by embracing new technologies and innovations.
3,109.4
2023-03-20T00:00:00.000
[ "Computer Science", "Economics" ]
IoT-Based Smart Home Scheme Using Bluetooth - In today's world, automatic systems are preferred over manual ones, and home automation plays a significant part in human life. This paper is intended for monitoring and controlling home appliances via the World Wide Web; the appliances communicate with the home automation system through an Internet gateway by means of standard communication protocols. The home automation scheme uses handheld or wearable devices as the user interface. This paper aims at supervising household appliances via a smartphone, using Bluetooth as the communication protocol and an interface to an Arduino board. It integrates a passive infrared (PIR) sensor, a temperature sensor, a gas sensor, and a light-dependent resistor (LDR) sensor. Here, the PIR and temperature sensors are used for controlling the light and fan. Communication through the server permits the user to select the appropriate device. In the proposed system, an Android app was developed so that the user can access the system from anywhere via the Internet of Things. The gas sensor is used to indicate the concentration of gas in the air; a buzzer alert is given to warn neighbouring homes, and the owner is also notified over the Internet via the smartphone. The LDR is used to switch the garden light. This project provides a low-cost and efficient home automation system.

Introduction

Home automation systems are becoming more popular day by day due to their numerous advantages. Automation can be achieved by local networking or by remote control. Home automation plays a vital role in controlling energy use and saving costs: electrical devices at home can be regulated without visiting the house, which saves time. Home automation is the use of one or more computers to control basic household functions and features automatically, and sometimes remotely; an automated home is often called a smart home. The technique of controlling or operating equipment, industrial processes, and other applications using various control systems, with little or no human intervention, is described as automation. "Smart home" is the term frequently used to describe a home whose appliances, lighting, heating, air conditioning, TVs, computers, audio and video entertainment systems, security, and camera systems are capable of communicating with one another and can be controlled remotely on a time schedule, from any room in the home, as well as from any location in the world, by phone or Internet. Nowadays, the automation of homes is becoming prevalent. Directly controlling and seamlessly staying connected to the household systems you use every day via a portable device would meaningfully improve your quality of life. Home automation delivers improved convenience, comfort, energy efficiency, and security, owing to tremendous advances in present-day technology. Communication is the process of transporting information from one end to the other; it can be done in two ways, either wireless or wired. It is essential that the different controllable appliances be interconnected and communicate with each other. The basic intention of home automation is to control or monitor signals from different appliances or basic services.
A smartphone or a web browser can be used to control or monitor the home automation system. Home automation is precisely what it sounds like: automating the ability to control items around the house with a simple push of a button (or a voice command). Some activities, like setting a light to turn on and off at your command, are simple and relatively affordable; others, like advanced surveillance cameras, may require a more serious investment of time and money. There are numerous smart-home product categories, so you can control everything from lighting and temperature to locks and security in your home. Smart homes combine diverse areas of microchip technology, construction, computation, and communications. A smart home realizes complete control of a vast number of appliances: it commands the on/off state of domestic devices such as the fridge, television, laundry, cooking, and cleaning gadgets, as well as electrical devices such as motors and pumps, in order, for example, to water the houseplants according to ambient moisture and soil humidity. It also governs environmental systems such as HVAC and fans. This paper describes the method by which we implement a control and monitoring scheme to switch numerous household appliances with a smartphone.

Related Work

As per our survey, at present there exists no comparable scheme at a low price. Many schemes are hard to install and problematic to use and maintain, and present systems are in general proprietary and closed, not readily customizable by the end user. Much of the prior work is represented by a prototype for a low-cost and flexible home control and monitoring system using an embedded micro web server with Internet Protocol connectivity, for accessing and controlling devices and appliances remotely through an Android-based smartphone app [1]. Another scheme provides remote control of home appliances and also provides security against accidents when the home owner is not at home [2]; it is primarily concerned with the automatic control of lights or other home appliances via the Internet, and is meant to save electric power and human effort. The Internet of Things (IoT) offers capabilities to identify and connect numerous physical devices into a unified, secured network [3]; as part of the IoT, serious concerns are raised over access to individual and global data pertaining to devices and over personal privacy, and that work investigates how such devices can have revised access policies applied to them. Wireless Bluetooth technology can deliver remote access from a PC or smartphone; one project retains the existing electrical switches and delivers additional safety control on the switches with a low-power activating method [4]. That scheme proposes to control electrical appliances and devices in the home with a relatively low-cost design, a user-friendly interface, and ease of installation, so that each home appliance, such as lights, air conditioners, fans, and washing machines, can be operated through the application on a smartphone.
A Home Automation System (HAS) has been designed for mobile phones running the Android platform to drive an 8-bit Bluetooth-interfaced microcontroller that controls a number of home appliances, such as lights, fans, and bulbs, using on/off relays [5]. In existing systems, only a limited set of home appliances can be controlled.

Proposed System

This paper aims at controlling home appliances through a smartphone, using Bluetooth as the communication protocol and an interface to an Arduino board. In the proposed scheme, an Android application was developed to control the whole setup, so that the user can access the system from anywhere through the IoT automation arrangement. An additional feature that improves protection from fire accidents is the system's ability to detect smoke in the home using the gas sensor module and raise an audible alarm [6]. Using this technique we obtain an effective home automation system that is also scalable and cost-efficient. Figure 1 shows the block diagram of the proposed system.

The proposed system is placed in the home where the appliances have to be monitored. Individual sensors are used to monitor particular home appliances. The prototype comprises different sensors: temperature, gas, and PIR. The PIR and temperature sensors are used for controlling the light and fan [7]. If the PIR sensor senses any movement inside the area, an indication is given to the user, and the user can turn the lights ON through the smartphone. If there is any rise in temperature, an indication is given to the user, and the user can turn the fan ON via the smartphone; the buzzer is used to indicate any gas leakage or excessive temperature in the home. If the temperature exceeds the room setpoint, the fan is turned on automatically, and it turns off when the temperature returns to normal [8]. Correspondingly, when there is a leakage of gas in the house, an alarm is raised, giving the alert. The user can also monitor the electrical appliances over the network via the web server. An electrical relay is used to switch electrical appliances such as the light and fan. In this way the home appliances are monitored and controlled.

Arduino Uno

The Arduino Uno is low-priced, cross-platform, and easy to program; both the Arduino hardware and software are open source and extensible [9]. The Arduino is also powerful despite its compact size. The Arduino Uno is a microcontroller board based on the ATmega328. It has 14 digital input/output pins, of which 6 can be used as PWM outputs, and 6 analog inputs. It has a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started [10]. The supply voltage ranges from 1.8 V to 5.5 V. The Arduino Uno board has several models; here the Arduino Uno R3 model is used. The front view of the Arduino board is shown in Figure 2, and Table 1 shows the technical specification of the Arduino Uno board.
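The firmware on the board would normally be an Arduino C/C++ sketch; purely as an illustration of the control logic described in the Proposed System section (PIR to light, temperature threshold to fan, gas to buzzer, app commands over Bluetooth to relays), a minimal Python simulation is given below. All sensor-reading functions, thresholds, and command names here are hypothetical stand-ins, not the authors' actual firmware.

# Simulation of the proposed control loop. read_*() stand in for the
# PIR, LM35, gas, and LDR inputs; set_device() stands in for relay outputs.
import random
import time

TEMP_FAN_ON_C = 30.0       # hypothetical fan threshold, degrees C
GAS_ALARM_PPM = 400.0      # hypothetical gas-leak threshold, ppm
LDR_DARK_LEVEL = 200       # hypothetical darkness level, ADC counts

def read_pir():  return random.random() < 0.1          # motion detected?
def read_temp(): return random.uniform(20.0, 40.0)     # degrees C
def read_gas():  return random.uniform(0.0, 600.0)     # ppm
def read_ldr():  return random.randint(0, 1023)        # ADC counts

def set_device(name, on):
    print("%s -> %s" % (name, "ON" if on else "OFF"))

def handle_command(cmd):
    # Commands the Android app might send over Bluetooth SPP (hypothetical).
    if cmd == "LIGHT_ON":
        set_device("light", True)
    elif cmd == "LIGHT_OFF":
        set_device("light", False)

handle_command("LIGHT_ON")               # example app command
for _ in range(5):                       # a few iterations of the main loop
    if read_pir():                       # motion: notify user / switch light
        set_device("light", True)
    set_device("fan", read_temp() > TEMP_FAN_ON_C)
    set_device("buzzer", read_gas() > GAS_ALARM_PPM)
    set_device("garden_light", read_ldr() < LDR_DARK_LEVEL)
    time.sleep(0.1)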
Temperature Sensor

The LM35 is a precision IC temperature sensor whose output is proportional to the temperature (in °C). The sensor circuitry is sealed and is therefore not exposed to corrosion and other degradation. With the LM35, temperature can be measured precisely; it also has low self-heating, causing no more than a 0.1 °C temperature rise in still air. Table 2 shows the specification of the temperature sensor [11-13]. The LM35 is used to measure the ambient temperature and is active the whole time the system is on; its operating range is from -55 °C to 150 °C. It reports the room temperature to the user when the user interface is refreshed, and in the automatic mode the sensed temperature is used to control the speed of the fan connected at the output.

Gas Sensor

Gas sensors are used to measure and indicate the concentration of certain gases in the air using various technologies. Figure 3 shows the gas sensor. Typically deployed to prevent toxic exposure and fire, gas sensors are frequently battery-operated devices used for safety purposes. Although many of the older, classic gas sensor units were originally designed to detect one gas, contemporary multifunctional or multi-gas devices are capable of detecting several gases at once. Some detectors may be used as standalone units to monitor small workstation areas, or units can be combined or linked together to form a protection system.

IR Sensor Circuit

An infrared sensor is an electronic component that emits and/or detects infrared radiation in order to sense some aspect of its surroundings. Infrared sensors can measure the heat of an object as well as detect motion. Many of these sensors only measure infrared radiation, rather than emitting it, and are therefore known as passive infrared (PIR) sensors. All objects emit some form of thermal radiation, usually in the infrared spectrum. This radiation is invisible to our eyes but can be observed by an infrared sensor that receives and interprets it. In a typical infrared sensor, such as a motion detector, radiation enters at the front and reaches the sensing element at the center of the device. This element may be composed of more than one individual sensor, each made from a pyroelectric material, whether natural or synthetic. An IR sensor module comprises a photodiode and an IR LED, which form the receiver and transmitter, respectively. Figure 4 shows the IR sensor module, and Table 3 shows its specification.

Light Dependent Resistor

A light-dependent resistor, also called an LDR, photoresistor, photoconductor, or photocell, is a variable resistor whose value decreases with increasing incident light intensity. Here the LDR is used to switch the outdoor or garden lights depending on the intensity of the ambient light. An LDR is made of a high-resistance semiconductor. If light falling on the device is of a high enough frequency, photons absorbed by the semiconductor give bound electrons enough energy to jump into the conduction band; the resulting free electrons (and their hole partners) conduct electricity, thereby lowering the resistance. Two of its earliest applications were as part of smoke and fire detection systems and in photographic camera light meters. Because cadmium sulfide cells are low-cost and widely obtainable, LDRs are still used in electronic devices that need light-detection capability, such as security alarms, street lamps, and clock radios.
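On the Arduino, the LM35 and the LDR are read through the 10-bit ADC, and the conversion from raw counts to physical quantities can be sketched as below. The 5 V reference and the LM35's 10 mV/°C scale factor are standard datasheet values; the LDR divider threshold is a hypothetical example.

# Convert 10-bit ADC readings (0-1023, 5 V reference) from the LM35 and LDR.
ADC_REF_MV = 5000.0
ADC_MAX = 1023.0

def lm35_celsius(adc_counts):
    # The LM35 outputs 10 mV per degree Celsius.
    millivolts = adc_counts * ADC_REF_MV / ADC_MAX
    return millivolts / 10.0

def ldr_is_dark(adc_counts, threshold=200):
    # In a divider wired so that brighter light gives higher counts,
    # a low reading means darkness (threshold is illustrative).
    return adc_counts < threshold

print(lm35_celsius(62))    # ~30.3 degrees C
print(ldr_is_dark(150))    # True -> switch the garden light on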
Buzzer

The buzzer is an audio signalling device, which may be mechanical, electromechanical, or piezoelectric. The typical use of buzzers and beepers is to give an audible alert to the user. Figure 5 shows the buzzer, and Table 4 shows its specification.

Arduino UNO Programming

The Arduino board can be programmed using the Arduino IDE software. When the Arduino IDE is opened, the editor window appears. A sketch consists of two significant parts: the setup section and the main loop. The Arduino ports, such as the input, output, and constant definitions, are declared in the setup section, and the repeating logic is coded in the main loop. The program is then compiled to check for errors and warnings; after successful debugging, the code is written to the controller through the upload option.

Bluetooth Module

The Bluetooth module provides the communication link between the microcontroller and the mobile phone. The user turns on Bluetooth on the mobile phone, and the Bluetooth module connected to the controller conveys the appliance status to the smartphone, where the details are viewed through the Android app. The HC-05 module is an easy-to-use Bluetooth SPP (Serial Port Protocol) module designed for transparent wireless serial connection setup.

Android

Android is the name of the mobile operating system most commonly installed on a variety of smartphones and tablets. In this paper, an Android app is used to control and monitor the home appliances; the loads are the light and fan. In the app, the user selects the connect option and chooses the Bluetooth device. The app then asks to turn ON the Bluetooth on the smartphone, and via Bluetooth the home appliances are monitored. Figure 6 shows the output window of the Android app.

Conclusion

In the proposed system, the design and implementation of a smart home automation system using Bluetooth on an Android mobile phone has been discussed. The purpose is to use the mobile phone's built-in Bluetooth capability for the automation of home appliances. IoT is used to control and monitor the home appliances from anywhere in the world, and the user can access the web server for connecting over IoT. In this paper, the home appliances are controlled and monitored using the smartphone through the Android app.
3,418.8
2020-12-30T00:00:00.000
[ "Computer Science" ]
Experimental Study on Thermal Runaway Behavior of Lithium-Ion Battery and Analysis of Combustible Limit of Gas Production: Lithium-ion batteries (LIBs) are widely used in electric vehicles (EVs) and energy storage stations (ESSs). However, combustion and explosion accidents during the thermal runaway (TR) process limit their further application. It is therefore necessary to investigate the uncontrolled TR exothermic reaction for safe battery system design. In this study, different LIBs are tested by lateral heating in a closed experimental chamber filled with nitrogen, and the relevant thermal characteristic parameters, gas composition, and deflagration limit during the battery TR process are calculated and compared. Results indicate that the TR behavior of NCM batteries is more severe than that of LFP batteries, and that the TR reaction becomes more severe as energy density increases. Under the inert nitrogen atmosphere, the primary generated gases are H2, CO, CO2, and hydrocarbons. The TR gas deflagration limits and characteristic parameters for different cathode materials are refined and summarized, guiding safe battery design and battery selection for power systems.

Introduction

With the large consumption of traditional energy sources such as coal, oil, and natural gas in recent years, energy supply based on fossil fuels is facing severe problems [1-4], and the demand for clean new energy is increasing all over the world [5-7]. Lithium-ion batteries (LIBs) are widely used in electric vehicles (EVs), energy storage systems (ESSs), and various household digital products due to their high specific energy, high output voltage, low environmental pollution, and long cycle life [8-13]. However, owing to problems with manufacturing processes, materials, and the use environment of LIBs, fire accidents caused by battery combustion and explosion occur frequently [14-17]. According to survey results, there were more than 30 safety accidents at LIB ESSs in South Korea from 2017 to 2019 [18]. Investigation and analysis of the causes of these accidents show that LIBs are prone to TR under overcharge, external extrusion, high temperature, and overheating. Moreover, LIBs have an extremely high energy density, and their constituent materials contain flammable components [1,19]; many flammable gases are generated after TR, leading to the rapid spread of fire and inducing a chain reaction [20]. A fire caused by an LIB is difficult to extinguish, and the likelihood of reignition is high. Understanding the structure and material composition of LIBs helps us understand and prevent LIB TR disasters. The main internal structure of an LIB includes five parts: cathode, anode, separator, electrolyte, and shell. In addition, there are cathode and anode insulation plates, leads, PTC elements, gaskets, exhaust holes, and safety valves [21,22]. When TR occurs in an LIB, the internal temperature and pressure rise rapidly.

Battery Sample

Four different types of commercially available LIBs were used in the experiment. The cathode of sample 1 is LiNi0.6Co0.2Mn0.2O2, that of sample 2 is LiNi0.8Co0.1Mn0.1O2, that of sample 3 is LiNi0.9Co0.05Mn0.05O2, and that of sample 4 is LiFePO4; the anode is mainly composed of graphite. The cathode current collector of the four batteries is aluminum foil, and the anode current collector is copper foil.
The specific parameters of the battery samples are shown in Table 1.

Experimental Instruments

The experimental device used in this study is a sealed experimental chamber with a volume of 1000 L. The device is equipped with a pressure sensor to monitor the pressure in real time. Multiple K-type thermocouples are used to monitor the battery temperature (TC1, TC2) and the ambient temperature (TC3-TC6). The chamber also houses the battery test bench, the inert-gas replacement pipeline, the gas collection pipeline, and a glass observation window; its structure is shown in Figure 1. During the experiment, the battery is held by the fixing clamp inside the chamber. The door of the chamber is hydraulically driven and achieves a complete internal seal, and the chamber is equipped with a vacuum pump to replace the gas inside. The gas generated by TR is collected through the gas discharge pipeline and then passed to the gas analysis equipment for analysis; after the gas composition test, the remaining exhaust gas is discharged through the exhaust pipe. The gas analysis equipment used in the experiment was a Thermo Fisher Scientific gas chromatography (GC) analyzer, model Trace 1300 (manufactured by Thermo Fisher, Singapore), equipped with four detectors and eight chromatographic columns.

Experimental Design

A battery and a heating plate of the same size are placed side by side, and battery TR is triggered by lateral heating. The heating plate shell is stainless steel and contains a resistance wire; it is driven at a constant power of 550 W. A thermocouple (TC1) is arranged at the center of the large surface of the battery, and a thermocouple (TC2) is arranged between the battery and the heating plate; the two thermocouples measure the real-time temperatures of the battery and the heating plate. A heat shield is arranged outside to reduce the heat loss of the heating plate. Two copper wires are connected to two crocodile clips, which are insulated with rubber over the outside of the wire and then clamped onto the battery tabs to measure the real-time voltage during heating. A fixing clamp is also used. The experimental layout is shown in Figure 2.
During the experiment, the battery is arranged on the bench, and multiple thermocouples are arranged to measure the average ambient temperature in the test chamber. TR of the four battery samples was triggered under a nitrogen (N2) atmosphere, in which the nitrogen serves three purposes: (1) as a carrier gas, it provides a dry, inert, oxygen-free atmosphere; (2) it prevents the risk of a fire outbreak in the reaction vessel; and (3) it keeps the temperature of the gas released from the reaction vessel at a manageable level for chemical and quantitative analysis. A lithium iron phosphate battery generates massive amounts of electrolyte, particles, and some gases after TR. To make the gas composition measurement more accurate for the lithium iron phosphate battery, we first heated the ambient temperature in the experimental chamber to 150 °C and waited three minutes for the chamber temperature to stabilize; we then turned on the heating plate on the side of the battery and heated until the battery underwent thermal runaway. The purpose was to bring the ambient temperature up to the boiling point of the electrolyte inside the battery, so that the electrolyte would vaporize and remain gaseous during the eruption process, allowing a more accurate gas volume and gas composition to be obtained [36].

The experimental steps mainly include the following: (1) measure the open-circuit voltage of the battery, ensure that the battery is in the specified state of charge, and record the initial mass of the battery before the test; (2) arrange the battery, heating plate, and heat insulation plate on the battery rack, arrange the temperature-measuring thermocouples, and fix everything with the clamps; (3) evacuate the test chamber and flush it with N2 to reach the specified experimental atmosphere; (4) turn on the heating plate and heat until battery TR is triggered. After TR, the internal temperature and pressure of the chamber are left to reach a stable state. While waiting for the natural cooling of the battery and the settling of particles inside the chamber, the gases in the chamber were collected, and the gas chromatograph (GC) was used to analyze the gas composition. To capture the full range of gases produced by battery TR, the gas analysis equipment was calibrated for more than ten common gases, including H2, CO, CO2, and short-chain olefins.

Battery Temperature and Voltage Changes

The surface temperature and voltage responses of the four battery samples during laterally heated TR are shown in Figure 3. During the experiment, we monitored the center temperature of the battery surface opposite the heated side and used this value to define a new temperature boundary, the critical temperature: when the battery is heated laterally and its surface temperature exceeds this value, the temperature rises sharply and the battery enters an uncontrolled, severe TR stage.
Due to the heating effect of the heating plate, the surface temperatures of the four battery samples increased continuously and finally reached the critical temperature. Beyond this point, the battery temperature rises rapidly and TR proceeds until the maximum temperature is reached. As shown in Figure 3a, the heater was turned off after battery TR. In processing the data, time 0 is set as the moment when the battery surface temperature suddenly rises. The temperature curve before time 0 corresponds to the battery being heated by the heating plate: the temperature gradually increases, and the curves fluctuate because different batteries have different volumes and heat absorption capacities. As heating proceeds, the internal separator of the battery eventually melts due to the rising temperature, resulting in direct contact between the cathode and the anode and an internal short circuit (ISC), which produces a sudden voltage drop. As seen in Figure 3b, the voltage drops of three of the batteries occurred two to eight seconds before the sudden temperature rise. Although the voltage drop of the NCM811 battery occurred approximately seven seconds after TR, its voltage had already decreased by 0.2 V approximately five seconds before TR. For the development of TR, the moment of TR can be confirmed more clearly from the voltage change rate; Figure 4 shows the voltage-change-rate curves. It can be seen that the voltage of the four sample batteries first decreased at a low rate, which may be caused by shrinkage of the internal separator [37]. Later, owing to separator failure, the anode and cathode of the battery came into contact, and the short circuit drove the voltage to decrease rapidly until it reached 0. In the overall TR process, the temperature rise and the voltage drop of the battery form a positive feedback, each promoting TR.

The TR critical temperatures of the four battery samples are: NCM622, 154.7 °C; NCM811, 120.8 °C; NCM9/0.5/0.5, 130.8 °C; LFP, 145.2 °C. Moreover, it can be observed from Figure 3a that the temperature of the NCM9/0.5/0.5 battery is the highest during TR, reaching 842.1 °C; the maximum temperature of the NCM811 battery is 803.4 °C, that of the NCM622 battery is 559.1 °C, and that of the LFP battery is only 360.9 °C. The surface temperature of the battery during TR also reflects the safety and harmfulness of the battery: the higher the critical temperature of TR, the safer the battery, and the lower the maximum temperature of TR, the less harmful the battery [38]. The battery is heated until its surface temperature reaches the critical value, after which the temperature rises sharply and the battery undergoes TR. When the heater is turned off, the battery enters the cooling stage: the surface temperature of the battery gradually decreases, and the pressure and ambient temperature in the experimental chamber gradually decrease and finally stabilize.
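The paper's use of the voltage change rate to pinpoint the TR moment can be reproduced with a simple numerical derivative, as sketched below on synthetic data; the sampling rate, voltage trajectory, and rate threshold are illustrative assumptions, not measured values.

# Locate the TR moment from the voltage change rate dV/dt.
import numpy as np

t = np.linspace(0, 60, 601)                      # s, hypothetical 10 Hz sampling
v = np.full_like(t, 3.65)                        # nominal cell voltage, V
v -= 0.2 * np.clip((t - 40.0) / 5.0, 0.0, 1.0)   # slow sag (separator shrinkage)
v = np.where(t > 48.0, np.maximum(0.0, 3.45 - 3.0 * (t - 48.0)), v)  # collapse

dvdt = np.gradient(v, t)                         # V/s
THRESHOLD = -1.0                                 # V/s, illustrative trigger level
onset = t[np.argmax(dvdt < THRESHOLD)]
print("TR onset detected at t = %.1f s" % onset)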
Battery Mass Loss

In the case of battery TR, various chemical reactions occur between the electrolyte and the electrode materials, generating heat and releasing a large amount of gas, so the internal pressure of the battery rises sharply in a short time. After the gas pressure reaches the release pressure of the safety valve, the safety valve opens, ejecting the internal gas, electrolyte, and active substances [39]. For the four battery samples, the mass loss rates before and after TR were 38.12%, 37.80%, 72.89%, and 22.80%, respectively. Comparing the different battery systems, the NCM622, NCM811, and NCM9/0.5/0.5 batteries mainly eject black solid particles after TR, while the LFP battery ejects unreacted electrolyte. Owing to the different degrees of TR and different ejection products, the mass loss rates of the NCM622, NCM811, and NCM9/0.5/0.5 batteries are higher than that of the LFP battery. Among them, the NCM9/0.5/0.5 battery, because of its high energy density, ejected a large amount of the internal coil material during TR, so its mass loss rate is much higher than those of the other three samples. Based on the above analysis, the battery safety of the NCM system is lower than that of the LFP system. Through a comprehensive analysis of the temperature characteristics of the TR process, it can be found that the TR hazard of the LFP battery is the lowest.
In contrast, among the NCM batteries, the TR hazard increases with increasing energy density. Table 2 lists the critical parameters of the four battery samples during TR. The exhaust time is the time from the opening of the safety valve to the end of the battery's pressure relief; at that point the gas volume in the chamber is stable, and the battery's gas production can be measured more accurately.

Gas Production and Composition

The concentration and composition of the toxic and combustible gas produced during LIB TR are important indicators for measuring battery safety. The reactants in battery TR include the cathode, the anode, and the electrolyte. For LIBs, the cathode is composed of lithium-intercalated transition metal oxides; the more common cathodes are lithium iron phosphate (LiFePO4), lithium nickel cobalt manganese oxide (Li[NixCoyMnz]O2, x + y + z = 1), and lithium cobaltate (LiCoO2). The anode is usually composed of graphite, and the electrolyte consists of one lithium salt and two or more solvents. Lithium hexafluorophosphate (LiPF6) is generally used as the lithium salt, and the solvent is ethylene carbonate (EC) combined with dimethyl carbonate (DMC), diethyl carbonate (DEC), and ethyl methyl carbonate (EMC). Because the decomposition temperature of the cathode material of a ternary lithium battery is low, oxygen and other combustible gases are generated during TR, so the gas content generated during TR is higher than that of a lithium iron phosphate battery.

All four battery samples produce a large amount of gas during TR. The test results show that the gas produced by TR consists mainly of CO, CO2, H2, C2H4, CH4, and small amounts of other hydrocarbons. Although the cathode materials of the four samples differ, the types of gases released after TR are roughly the same. The reason is that the battery cathode mainly releases O2 without generating other gases, while the intercalation of lithium into the anode material is the essential factor determining the gas species [31,40]. In addition, the electrolyte in the battery vaporizes at high temperature to generate macromolecular organic compounds. Although the gas species of the four samples are almost the same, there are significant differences in gas production. The gas production of the battery is calculated as

n = P2·V2/(R·T2) − n0

where n is the gas production, P2 is the real-time chamber pressure after TR, V2 is the volume of the test chamber, R is the ideal gas constant, T2 is the chamber ambient temperature after stabilization, and n0 is the amount of gas initially in the chamber. Because the temperature and pressure rise sharply after TR, an accurate gas quantity can be obtained only once the temperature and pressure inside the test chamber are stable. The gas production of the NCM622 battery is 3.93 mol, that of the NCM811 battery is 12.01 mol, that of the NCM9/0.5/0.5 battery is 17.14 mol, and that of the LFP battery is 10.3 mol. Although the gas production of NCM622 is the lowest, its capacity is only 50 Ah, much smaller than that of the other three samples.
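The ideal-gas bookkeeping above is straightforward to implement; the sketch below uses hypothetical chamber readings, with the initial amount n0 computed from the pre-test pressure and temperature.

# Gas production from chamber pressure/temperature via the ideal gas law:
#   n = P2*V2 / (R*T2) - n0
R = 8.314          # J/(mol*K)
V2 = 1.0           # m^3 (the 1000 L chamber)

def moles(p_pa, t_k, v_m3=V2):
    return p_pa * v_m3 / (R * t_k)

# Hypothetical readings: chamber filled with N2 before the test, then
# stabilized pressure/temperature after TR.
n0 = moles(101325.0, 423.15)          # before heating (150 C ambient)
n = moles(145000.0, 450.15)           # stabilized values after TR

print("gas produced ~ %.2f mol" % (n - n0))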
It can be seen that the gas production of battery TR is related to the battery capacity. In contrast, the LFP battery generates less gas after TR, which is attributed to the strong covalent P-O bond in the cathode of the lithium iron phosphate battery, which reduces oxygen release [41,42]. Figure 5 shows the gas components and their volume percentages for the four samples. It can be seen from Figure 5 that the toxic and hazardous gases produced by the four samples are mainly CO, CO2, and H2, but no HF is detected. Previously, some scholars have studied the mechanism of battery gas generation. In the process of battery TR, the gas mainly comes from the following reactions. First, the SEI film decomposes: (CH2OCO2Li)2 → Li2CO3 + C2H4 + CO2. After that, as the reaction releases heat and the temperature increases, the intercalated lithium reacts with the organic solvents in the electrolyte: 2Li + C4H6O3 (PC) → Li2CO3 + C3H6 (5), and 2Li + C3H6O3 (DMC) → Li2CO3 + C2H6 (6). At the same time, the electrolyte inside the battery also decomposes and produces gas at high temperature. Some scholars have shown that the source of HF is mainly the TR reaction products and their reoxidation [43]. HF is a highly toxic gas, and even a tiny amount may cause vomiting or coma. A commonly reported generation path is the decomposition of the lithium salt, LiPF6 → LiF + PF5, followed by hydrolysis of the intermediate, PF5 + H2O → POF3 + 2HF [44]. HF is not detected in this experiment because there is an inert nitrogen atmosphere in the test chamber. Some studies have shown that HF is not formed when the water content is below 10 ppm, whereas a large amount of hydrofluoric acid is produced at 300 ppm [45]. However, the water content in the chamber is very low because of the inert-gas replacement, so the intermediate substance PF5 that would generate HF has almost nothing to react with; therefore, the HF content is too low to be measured. To analyze the gas components of the four samples, the volume concentrations of the main gas components were normalized: the amount of each type of gas produced by each battery sample was divided by the capacity of the corresponding battery, giving the normalized results in mol/Ah shown in Figure 6. Through the analysis of the gas production results of the four samples and the energy density of the battery, it can be concluded that increasing the energy density of the battery leads to an increase in the CO content and a decrease in the content of the other hydrocarbons. Figure 7 shows the fitting curve of CO concentration against energy density. The fitting curve shows that the increase of CO concentration is positively correlated with the energy density, with a coefficient of 0.39. In the process of TR of the batteries, the formation of CO has a high priority and captures most of the C atoms, while the remaining C atoms go into hydrocarbons.
It can be predicted that the concentration of CO will increase with a continued increase of energy density, but the concentration of other toxic hydrocarbons will not necessarily increase. Another gas with a noticeable concentration change is H2. Figure 8 shows the fitting curve between the H2 concentration and the energy density of the battery. From the fitting curve, it can be seen that the change of H2 concentration is negatively correlated with the energy density, with a coefficient of −0.28. The reason for this may be that, with the increase in energy density of the battery, the TR reaction inside the battery becomes fiercer and the reaction time is shorter, leaving insufficient reaction time for H2 generation. It is also possible that the reducing gas oxidizes rapidly, leaving insufficient reactants to generate H2. Until now, there has been limited detailed research on the mechanism of H2 generation in batteries, and further experimental research is needed in the future.
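The coefficients of 0.39 for CO and −0.28 for H2 quoted above come from fitting gas concentration against battery energy density. The sketch below shows how such a first-order least-squares fit can be reproduced with NumPy; the (energy density, concentration) pairs are illustrative placeholders, not the measured values of this study, and would have to be replaced by the data behind Figures 7 and 8.

```python
import numpy as np

# Placeholder (energy density [Wh/kg], CO volume fraction [%]) pairs -- replace
# with the measured values for the four battery samples.
energy_density = np.array([150.0, 210.0, 250.0, 280.0])
co_fraction    = np.array([20.0, 28.0, 35.0, 41.0])

# First-order least-squares fit: co_fraction ~ slope * energy_density + intercept
slope, intercept = np.polyfit(energy_density, co_fraction, deg=1)
print(f"CO vs energy density: slope = {slope:.3f} %/(Wh/kg), intercept = {intercept:.1f} %")

# If the reported coefficient is a correlation rather than a slope,
# np.corrcoef(energy_density, co_fraction)[0, 1] gives that value instead.
# The same calls applied to the H2 fractions would yield the negative trend
# reported for H2.
```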
In addition, the TR of the battery will also generate many hydrocarbons and CO2, which are combustible or toxic gases. Although CO2 is non-toxic, if people breathe it for a long time in a confined space, it increases the uptake of asphyxiants, resulting in hypoxia, coma, syncope, and other symptoms. CH4, C2H4, and other gases have extremely high combustion risks and promote flame spread. Preventing the potential combustion risk of the gas and its toxicity risk to the human body is crucial to improving battery safety. Gas Production Characteristics and Deflagration Limit. Because of the different cathode materials and energy systems of LIBs, the gas production characteristics differ; the most obvious difference is in the gas volume. In order to better analyze the difference in the gas production characteristics of the four samples, the total gas production of each sample battery is normalized to the cell capacity. Figure 9 shows the normalization results of the gas production of the four samples. It can be seen from the figure that there are significant differences in the overall capacity of the four battery samples, leading to a large difference in the gas production during the TR process. However, the normalized gas production results after TR show that the gas production of the battery is positively related to the energy density. Moreover, the normalized gas production results of the NCM622, NCM811, and NCM9/0.5/0.5 batteries are higher than those of the LFP battery, indicating that the gas production of the NCM system is higher than that of the LFP system at the same battery capacity. With the increase of Ni content, gas production also increases significantly, indicating that the risk of TR gradually increases with the increase of battery energy density. Among the gases generated during the TR process of the battery, the public is most concerned about the fire risk caused by the eruption of combustible gas. It can be seen from Figure 5 that the gases generated after the TR of the battery are H2, CO, CO2, CH4, and some combustible alkene gases. In order to obtain the flammability limit of the gases, this paper uses the formula calculation method (Le Chatelier, or L-C, formula) to calculate and analyze the flammability limit for the different battery samples. This method has been used in previous studies: Li et al. [46] obtained the flammable concentration limits of the gas released by a commercial 18650 LIB through experiments and compared the results with those calculated by the Le Chatelier formula; the numerical error between the two was minimal, proving that it is reliable to calculate the explosion limit of the TR gas produced by the battery using the L-C formula. At the same time, because the produced gas contains CO2, the calculation method that accounts for inert gas is used.
The "pairing elimination method" is used to pair the inert gas with the combustible gases in the mixture, treating each pair as a "new" combustible gas, and the explosion limit of the mixed gas is then calculated with the theoretical formula for the explosion limit of a mixture of several combustible gases [47][48][49][50]. The explosion limit of the gas mixture is L_m = 100 / (V_1/L_1 + V_2/L_2 + ... + V_n/L_n) (16), where L_m is the explosion limit of the combustible gas mixture, %; L_1, L_2, ..., L_n are the explosion limits of the components in the mixed gas, %; and V_1, V_2, ..., V_n are the volume fractions of the components in the mixed gas, %. There are two aspects to evaluating battery safety by the gas flammability limit. The first is the value of the lower flammability limit of the gas: from the perspective of the three factors of a combustion reaction (fuel, oxygen, and ignition source), the lower the flammability limit of the gas, the more easily it acts as a fuel, and the more likely it is to burn and explode. The second is the flammable concentration range (between the upper and lower flammability limits): the wider the flammable concentration range, the more easily the concentration of combustible gas in the environment meets the conditions for combustion and explosion. When the concentration of combustible gas is below the lower flammability limit (LFL) or above the upper flammability limit (UFL), there is no explosion risk even if an ignition source is present, because the gas concentration or the oxygen concentration in the environment is too low. Figure 10 shows the calculated gas flammability limit results. The flammable concentration range of the NCM622 battery is the largest, 54%, with a lower flammability limit of 7%; that of the NCM9/0.5/0.5 battery is the smallest, 41%, with a lower flammability limit of 8.2%.
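Before moving on, Eq. (16) can be illustrated with a short calculation. In the sketch below, the component flammability limits are commonly quoted handbook values for the pure gases, the volume fractions are placeholders rather than the compositions measured here, and for simplicity the inert CO2 share is simply dropped and the combustible part renormalized, which is cruder than the pairing elimination method described above.

```python
def le_chatelier_limit(fractions, limits):
    """Le Chatelier rule: L_m = 100 / sum(V_i / L_i).

    fractions: volume percentages of the combustible components
               (renormalized to sum to 100 after removing the inert part)
    limits   : corresponding flammability limits (LFL or UFL) in vol.%
    """
    return 100.0 / sum(v / l for v, l in zip(fractions, limits))

# Hypothetical combustible-gas composition (vol.%), CO2 excluded.
# LFL/UFL values are commonly quoted handbook figures for the pure gases.
gases     = ["H2", "CO", "CH4", "C2H4"]
fractions = [35.0, 40.0, 15.0, 10.0]
lfl       = [4.0, 12.5, 5.0, 2.7]
ufl       = [75.0, 74.0, 15.0, 36.0]

lfl_mix = le_chatelier_limit(fractions, lfl)
ufl_mix = le_chatelier_limit(fractions, ufl)
print(f"mixture LFL ~ {lfl_mix:.1f} vol.%, UFL ~ {ufl_mix:.1f} vol.%")
```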
It can be seen that, with the increase of the battery energy density, the flammability limit range of the gas generated by TR gradually narrows, especially in the NCM system, while the lower flammability limit gradually increases with the energy density. The above analysis shows that, from the perspective of the gas deflagration limit, increasing the battery energy density reduces the deflagration risk of the vented gas. It is worth noting that the lower flammability limit of the LFP battery is the lowest among the four samples, 6%, when the effect of the electrolyte eruption is taken into account. For the LFP battery, narrowing the deflagration limit range and raising the lower flammability limit of the TR gas is therefore also an important way to improve battery safety. Conclusions and Summary. In this study, the surface temperature, voltage change, mass loss, and gas production characteristics of four kinds of LIBs with different chemistries were obtained by conducting side-heating experiments on the samples in an inert atmosphere in a self-made sealed test chamber. Based on the analysis of the data, the following conclusions are obtained: (1) The critical triggering temperature of TR of the ternary high-nickel system battery is lower than that of the LFP system, and the maximum surface temperature of the battery is much higher than that of the LFP system. Furthermore, the voltage drop of the battery during heating occurs 2-8 s before TR; (2) The NCM batteries eject gas and black solid particles during TR, while the LFP battery ejects unreacted electrolyte, and the mass loss rate of the NCM batteries during TR is higher than that of the LFP battery; (3) The batteries of the NCM system and the LFP system produce CO, CO2, H2, CH4, C2H4, and other gases in the process of TR. The higher the energy density of the battery, the greater the concentration of CO produced and the smaller the concentration of H2.
The normalized gas production of the NCM9/0.5/0.5 battery is the highest, and that of the LFP battery is the lowest. (4) The deflagration limit of the gas generated by the TR of LIBs is related to the battery energy density: the higher the energy density, the lower the deflagration risk of the generated gas. Among the four battery samples used in this study, the gas produced by TR of the LFP battery had the lowest lower flammability limit. Prospect Work. Beyond this study, it is necessary to analyze the TR characteristics and gas production components of power and energy-storage LIBs with other materials, capacities, and battery types. By summarizing the results obtained, it is hoped that appropriate evaluation standards (including fire safety warnings and fire protection measures) can be given for the fire safety and gas flammability limits of LIBs, and that safer schemes can be proposed for their application in electric vehicles.
9,154.6
2022-11-21T00:00:00.000
[ "Engineering" ]
A unified cosmological evolution driven by a mass dimension one fermionic field A unified cosmological model for a Universe filled with a mass dimension one (MDO) fermionic field plus the standard matter fields is considered. After a primordial quantum fluctuation the field slowly rolls down to the bottom of a symmetry breaking potential, driving the Universe to an inflationary regime that increases the scale factor by about 71 e-folds. After the end of inflation, the field starts to oscillate and can, in principle, transfer its energy to the standard model particles through a reheating mechanism. Such a process is briefly discussed in terms of the admissible couplings of the MDO field with the electromagnetic and Higgs fields. We show that even if the field loses all its kinetic energy during reheating, it can evolve as dark matter due to a gravitational coupling (of spinorial origin) with baryonic matter. Since the field acquires a constant value at the bottom of the potential, a non-null, although tiny, mass term acts as a dark energy component nowadays. The torsion plays an important role in this model, giving rise to a `bump' in the Hubble function at the final stage of inflation. After this event, it becomes proportional to, and of the same order of magnitude as, $H(t)$ until the present time. Therefore, we conclude that the MDO fermionic field is a good candidate to drive the whole evolution of the Universe in a unified fashion, in such a way that the inflationary field, dark matter and dark energy are described by different manifestations of a single field. I. INTRODUCTION A new class of mass dimension one fermionic fields has been proposed by Ahluwalia and Grumiller [1][2][3], which are constructed by means of charge conjugation spinors. In its first formulation the spin sums of the quantized fields were shown to be Lorentz violating due to the intrinsic definition of their dual fields, which raised several doubts about the applicability of this new field. Such an old construction was named Elko fields (from the German Eigenspinoren des Ladungskonjugationsoperators, i.e., eigenspinors of the charge conjugation operator). However, recently [4,5] a subtle deformation in the dual structure solved the problems of Lorentz violation, putting the theory on solid bases from the point of view of quantum field theory, and the new fields are called just mass dimension one (MDO) fermionic fields. In this work we are interested just in the classical analogue of MDO fermionic fields, thus we can use Elko or MDO fields without loss of generality. The fermionic fields constructed from charge conjugation spinors are natural candidates to describe dark matter particles in the universe, since they are neutral and have canonical mass dimension one, which makes them couple very weakly to other particles of the standard model. Indeed, the only admissible couplings are with scalar and Higgs fields [6][7][8] and with the electromagnetic stress tensor [9], in addition to quartic self-couplings. Different from the Dirac fermion, which has mass dimension 3/2, is parity conjugation invariant and admits several couplings with standard model particles, the MDO field is born with an intrinsic dark character. Consequently, detecting it and constraining its physical properties is difficult, since it does not couple directly with standard model particles.
Several cosmological applications of this new field have been performed, first in torsion-free frameworks [10][11][12][13][14][15][16][17][18][19] and more recently considering its coupling to torsion [20][21][22][23] in an Einstein-Cartan framework, which makes the system of equations richer than in the previous case, especially for inflationary applications. In particular, in Refs. [22,23] the numerical results describing inflation and the dark matter evolution were presented together with the energy density evolution, in good agreement with the required inflationary number of e-folds and also with the expected energy density for the dark matter component. Notwithstanding, we refer the reader to other important recent results regarding the MDO fermionic field in the context of quantum field theory [24][25][26][27], and also in thermal field theory [28]. The study of scalar and tensor perturbations for the Elko field has been done in [13,14] in the torsion-free case, and the study of first-order vector perturbations has been done in [15]. In the present article we consider the fermionic MDO field as a candidate to drive the complete evolution of the Universe. A symmetry breaking potential is responsible for the inflationary phase, while a reheating-like mechanism drives the field to the radiation phase. The effective mass of the field plays an essential role in the energy transfer process. After the field rolls down to the true vacuum of the potential, the system follows a dark matter evolution due to a natural (gravitational) coupling between the MDO field and the baryonic energy density. Finally, a dark energy accelerated phase can be obtained from the constant quadratic mass term, which acts exactly like a cosmological constant term. All the different phases are connected and the free parameters can be constrained by observational data and by some reasonable physical assumptions. The cosmic coincidence problem can also be naturally understood in such a scenario. It is worth stressing that the search for a unified cosmological model is an old task and most of the models are based on a single scalar field [29][30][31]. The advantage of using scalar fields is that the inflationary phase and the generation of primordial perturbations are in good agreement with observations [32][33][34]. After reheating, if the decay of the scalar field is incomplete, it may act as dark matter, while its zero-point energy acts as dark energy. Although this is a standard framework, scalar fields do not furnish any physical interpretation for inflation, and no scalar field has been observed yet, except the Higgs boson. On the other hand, the general idea of considering the usual Dirac fermions to explain inflation, dark matter and dark energy has already been considered in the literature (see, e.g., [35][36][37][38]). In this spirit, the aim of the present article is to show that the MDO fermionic field is a more reasonable field to drive inflation and the subsequent stages of evolution of the Universe in a unified fashion. Being neutral and not interacting with other particles of the standard model, it is a good candidate to describe dark matter without the need of supposing an incomplete decay during reheating, as happens in the case of scalar dark matter particles. Furthermore, as pointed out by Pereira et al. [22,23], the "MDO inflation" can be interpreted in light of the Pauli exclusion principle.
When the fermionic field rolls down to the minimum energy state of the potential, the Pauli exclusion principle starts to act, not allowing all particles to occupy the lowest energy state. If the potential is stronger than the degeneracy pressure, the whole system responds with an abrupt expansion in order to accommodate all particles in the lowest energy state, since the spacing between energy levels in a bound system is inversely proportional to the size of the system. This effect allows all the particles to accommodate very close to the lowest energy state after inflation. Moreover, since the MDO particles satisfy a Klein-Gordon-like equation, the set of equations describing the evolution of the Universe is very similar to the scalar field case. Thus, several features of scalar field cosmology are recovered, but for a fermionic field instead. The paper is organized as follows. Section II presents the basic Friedmann-Lemaître-Robertson-Walker (FLRW) equations already derived in [21][22][23] in the presence of torsion terms. The dark matter and dark energy behavior of the MDO field is presented in Section III, where some parameters are constrained with observational data. In Section IV the MDO inflation is studied by means of a numerical solution of the evolution of the field subject to a symmetry breaking potential. Two possible ways of implementing the reheating phase in the MDO cosmological scenario are briefly discussed in Section V, and in Section VI we present our conclusions. II. DYNAMIC EQUATIONS FOR THE MDO FERMIONIC FIELD The action for the model is the Einstein-Cartan action for gravity coupled to the MDO field given in Refs. [20,21] [Eq. (1)], where κ² ≡ 8πG = 8π/m_pl², with c = ℏ = 1, and S_m is the usual action for the other matter fields, such as baryonic matter or radiation. The tilde represents torsion contributions to the Ricci scalar R̃ and to the covariant derivatives through the spin connections. In the Einstein-Cartan framework the contorsion terms generalize the affine connection, Γ̃^ρ_μν = Γ^ρ_μν + K^ρ_μν, and the only non-vanishing torsion components obeying the cosmological principle are parametrized by two functions h(t) and f(t), which must be determined [39]. Considering the flat FLRW metric, the lapse function, the two Friedmann equations, the dynamic field equation for φ(t) and the torsion functions h(t) and f(t) can be obtained by taking the variation of the Lagrangian of the model with respect to N(t), a(t), φ(t), h(t) and f(t), respectively (see [20][21][22][23] for further details). Thus we obtain (setting N → 1 at the end) the set of equations (5)-(8) for H ≡ ȧ/a, φ(t) and the torsion functions, where ρ_m and p_m are the energy density and the pressure of the other matter components, which satisfy the usual perfect-fluid conservation equation ρ̇_m + 3H(ρ_m + p_m) = 0. On the other hand, the energy density and the pressure of the MDO field are given by Eqs. (10) and (11). The set of Eqs. (5)-(11) looks like a generalization of the standard scalar field model, but it is important to keep in mind that here φ(t) is just the temporal part of the fermionic field Λ. Now, by substituting Eqs. (5) and (6) into Eqs. (10) and (11), it is possible to write the above expressions of ρ_φ and p_φ in the different, useful form of Eqs. (12) and (13), which can be straightforwardly combined to yield the balance equation (14). Notice also the presence of a coupling between the MDO field and the energy density and pressure of the standard matter in the first terms of Eqs. (12) and (13).
Such terms are a manifestation of the coupling between the spin components of the MDO field and gravity, such that if one takes the Minkowski spacetime (H = 0) then ρ_φ and p_φ for the scalar field are exactly recovered, as can be directly checked by means of Eqs. (10) and (11). From the above formulation, we see that the present model is equivalent in form to a model with zero torsion containing only perfect fluid components, such that one of them has the energy density and pressure defined by Eqs. (12) and (13). Moreover, Eq. (14) is obviously equivalent to Eq. (7), and Eqs. (5) and (6) can now be written in the standard form of Eqs. (15) and (16). Regarding the potential V(φ), two forms were previously considered in the literature. The first one is a symmetry breaking potential [22], V_1(φ) = A⁴ (1 − φ²/φ_c²)² (17), where A and φ_c are positive constants. It was shown that, as the field rolls down to the true vacuum of the potential at φ = φ_c, inflation occurs with the correct number of e-folds, depending on the initial value of φ and also on the constants A and φ_c. After inflation, a dark matter evolution follows naturally, leading to the correct energy densities for the different phases. On the other hand, in Refs. [21,23] the potential was chosen to be of the form V_2(φ) = (1/2) m² φ² + (1/4) α φ⁴ (18), where m is the physical mass of the field and α is a dimensionless coupling constant. Considering such a potential, the inflationary and dark matter evolutions were also obtained [23], with the correct numerical estimates for the energy density at different epochs. Furthermore, a scenario where φ(t) is a slowly varying function at late times was interpreted as a time-varying cosmological term proportional to H² [21]. Despite the success in describing some individual phases of evolution with correct numerical estimates, the radiation phase and a smooth transition to late cosmic acceleration were not completely addressed by the model. In order to describe all the phases of evolution of the Universe in a consistent unified fashion, here we will consider a potential of the form V(φ) = V_1(φ) + V_2(φ), with V_1 and V_2 given by (17) and (18), and also the presence of the standard matter fields. As will be seen in the following sections, all the stages of evolution of the Universe can be recovered in a natural way. In order to better understand the role of the above potential in the dynamics of the field, let us write it in the expanded form V(φ) = (1/2) m²_eff φ² + (1/4) ᾱ φ⁴ + C, where m²_eff = m² − 4A⁴/φ_c² represents an effective mass of the field, ᾱ = α + 4A⁴/φ_c⁴ is an effective self-coupling constant and C = A⁴ is a constant. The potential is attractive for any value of φ if ᾱ > 0, which leads to α > −4A⁴/φ_c⁴, and the minimum occurs at φ_min = (−m²_eff/ᾱ)^{1/2}. For m² ≪ 4A⁴/φ_c² and α ≪ 4A⁴/φ_c⁴ the minimum occurs at φ_min ≈ φ_c, which will be the case when we fix the parameters with observational constraints. III. DARK MATTER AND DARK ENERGY EVOLUTION From now on we will consider that the only additional standard matter present is of pressureless baryonic type, namely ρ_m = ρ_b and p_m = p_b = 0. Let us start with the late-time evolution of the Universe in order to constrain the parameters φ_c, m and α with observational data. The value of φ_c is of particular interest in the inflationary epoch, for which we give a detailed description in the next section. For now, it is enough to suppose that, at early times, the field φ is initially at rest around the false vacuum of V_1 and that, after a quantum fluctuation, it rolls down to the bottom of the potential acquiring, after a long enough period, the constant value φ_c.
Hence, when the kinetic energy of the MDO field is finally negligible, with the field satisfying φ̇ ≈ 0 and φ ≈ φ_c, we have V_2 ≫ V_1. Therefore, at late times, when only the baryonic matter and the MDO field are relevant for the cosmic dynamics, we can use Eqs. (12) and (13) to obtain ρ_φ = (κ²φ_c²/8) ρ_b + V_2(φ_c) and p_φ = −V_2(φ_c), where ρ_b is the baryonic energy density and the potential has the fixed value V_2(φ_c) = (1/2) m² φ_c² + (1/4) α φ_c⁴. From the above expressions, it is clear that at late times the energy density of the MDO field is the sum of two distinct contributions. The first one behaves as a pressureless fluid following the evolution of the baryonic matter, and the second one is an effective cosmological constant given by Λ_eff = κ² V_2(φ_c) (24). Inserting ρ_φ and ρ_b into the Friedmann equation (15), the Hubble function can be obtained as H²(z) = H_0² [Ω_b (1 + z)³ + Ω_DM,φ (1 + z)³ + Ω_Λ,φ] (25), where Ω_b = ρ_b,0/ρ_crit,0, and now it is clear that the second term on the right-hand side of the above equation can be interpreted as a dark matter component with the present density parameter defined as Ω_DM,φ ≡ (κ²φ_c²/8) Ω_b = π φ_c² Ω_b/m_pl² (26), which comes from the gravitational coupling of the MDO field with standard baryonic matter. The last term acts as a cosmological constant term, with Ω_Λ,φ ≡ Λ_eff/(κ² ρ_crit,0). Notice that, by means of (24) and (26), Eq. (25) can be rewritten in terms of the cosmological parameters. From (24) we obtain directly the value of the effective cosmological constant, namely Λ_eff = 4.37 × 10⁻⁶⁶ eV², almost exactly the value obtained by the Planck collaboration [40]. From (27), (18) and the values of α and ᾱ from (20) and (21) we can also constrain the value of the physical mass of the field. With A ∼ 10¹⁴ GeV (see next section) and φ_c ≃ 1.3 m_pl, the α parameter must lie in the interval −10⁻¹⁸ ≲ α ≲ 10⁻¹²⁵. Thus, the range for the mass is 0 < m ≲ 2.4 × 10¹⁰ GeV. It is important to remark that, contrary to scalar field dark matter models [29], here the contribution to dark matter comes just from the value of the incomplete field decay, namely φ_c, through (26), and not from the mass m of the field. The mass and the α parameter are related to the cosmological-constant-like term, Ω_Λ,φ, which represents the dark energy sector. This opens the possibility for the mass to lie in a very large range, which must be constrained by other methods, such as the growth of primordial perturbations. IV. MDO INFLATION WITH A SYMMETRY BREAKING POTENTIAL Now, we can ask ourselves whether the energy density of the MDO fermionic field can be dominant at the very early stages of evolution of the Universe. If we remember that one of the admissible couplings of the MDO field Λ is with the Higgs field Φ, we may give an affirmative answer as follows. A scenario analogous to the hybrid inflationary model [41] can be pictured for the present case. An interaction between Λ and Φ makes the latter go to zero rapidly, while the Λ energy density could remain large for a much longer time, since Λ does not interact appreciably with other fields. In the present article, nevertheless, we do not consider the details of such a coupling, but we just consider the stage of inflation at large energy density for the MDO field while Φ = 0, as has already been discussed in the literature by one of us and other authors [22,23]. The coupling with the Higgs field could be particularly important in the last stage of inflation, and we intend to take this into account in forthcoming investigations. Therefore, let us analyze the inflationary epoch by assuming that the energy density of the Universe is initially dominated by the MDO field.
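Before doing so, the late-time constraints above can be checked with a quick numerical sketch: it evaluates φ_c = m_pl [Ω_DM,φ/(π Ω_b)]^{1/2} and the effective cosmological constant Λ_eff = 3 Ω_Λ H_0² for illustrative Planck-like parameters (Ω_b ≈ 0.049, Ω_DM ≈ 0.26, Ω_Λ ≈ 0.69, H_0 ≈ 67.4 km s⁻¹ Mpc⁻¹). These input numbers are assumptions made only for the example, not values taken from this paper.

```python
import math

# Illustrative Planck-like parameters (assumed for this example).
omega_b, omega_dm, omega_lambda = 0.049, 0.26, 0.69
H0_km_s_Mpc = 67.4

# phi_c in Planck masses, from Omega_DM,phi = pi * phi_c^2 * Omega_b / m_pl^2.
phi_c_over_mpl = math.sqrt(omega_dm / (math.pi * omega_b))
print(f"phi_c ~ {phi_c_over_mpl:.2f} m_pl")   # ~1.3 m_pl, the value used in the text

# Effective cosmological constant Lambda_eff = 3 * Omega_Lambda * H0^2 in eV^2,
# using hbar = c = 1 so that H0 [1/s] -> H0 * hbar [eV].
hbar_eV_s = 6.582e-16
H0_eV = H0_km_s_Mpc * 1.0e3 / 3.086e22 * hbar_eV_s   # km/s/Mpc -> 1/s -> eV
lambda_eff = 3.0 * omega_lambda * H0_eV**2
print(f"Lambda_eff ~ {lambda_eff:.2e} eV^2")  # ~4.3e-66 eV^2, close to the quoted 4.37e-66 eV^2
```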
At the high energy regime of inflation the only relevant potential is the symmetry breaking potential V_1(φ). At the time t_i when inflation begins, the initial value of the field satisfies φ_i ≪ φ_c, and the constant A can be chosen as A ≃ (3H_i² m_pl²/8π)^{1/4} ∼ 5.3 × 10¹⁴ GeV, such that the Hubble parameter assumes the value H(t_i) ≡ H_i ∼ 10³⁵ s⁻¹ [34] at the GUT scale. Conversely, it is remarkable that the constant φ_c is not fixed by inflation, but by the ratio between the amount of dark matter and baryonic matter in the Universe, as was shown in the last section [Eq. (26)]. The MDO field is initially in a nearly equilibrium point of the false vacuum of V_1(φ). Since the field has mass dimension one (or energy dimension), we can use a quantum uncertainty relation in order to establish its initial value. A quantum fluctuation ∆φ in a time interval ∆t must satisfy ∆φ∆t ≥ 1. Since inflation starts at t_i ∼ 10⁻³⁵ s, we can choose ∆t of this order to arrive at ∆φ ∼ 1/t_i ∼ 5.4 × 10⁻⁹ m_pl, which will be assumed as the initial condition for the field in what follows. On the other hand, the initial value of the first derivative of the field, φ̇(t_i) ≡ φ̇_i, can be obtained by taking the equation of motion (7) at the time t_i. Let us consider that the condition φ̈ ≪ Hφ̇ is satisfied when inflation starts; then we are led to the corresponding slow-roll expression for φ̇_i. If one takes φ_c = 1.3 m_pl, for instance, the initial value is φ̇_i = 1.86 × 10²⁶ m_pl s⁻¹. Having settled the initial conditions, one can numerically solve the system of differential equations (5)-(8) in order to study the evolution of the dynamical parameters of the system. In this section we consider just the MDO field as a source of energy, not including baryonic matter or radiation, which are assumed to be subdominant in that phase. In Fig. 1, the evolution of the field φ is shown as a function of the cosmic time t. From this figure it is clear that the field evolves from φ_i to φ_c while rolling down to the bottom of the potential, with small oscillations at the end of its evolution, which will be discussed in the next section. In order to study the duration and the kinematics of inflation, it is useful to analyze the evolution of the slow-roll parameters governing the inflationary epoch. They are defined in the usual way as ε ≡ |Ḣ|/H² and η ≡ φ̈/(Hφ̇). Moreover, it is convenient to analyze the evolution of the relevant quantities as functions of the number of e-folds achieved at the time t, defined as N(t) = ∫_{t_i}^{t} H(t′) dt′. In Fig. 2.a, the evolution of ε and η is shown, together with the evolution of the Hubble parameter, the torsion function and the field φ itself. Notice that, assuming the above specified (well motivated) initial conditions, at the end of inflation (ε = 1) we have a total of N ≃ 71 e-folds. However, from the uncertainty relation, the initial value of the field can be higher, yielding a smaller number of e-folds during inflation. In this case, the field spends less time in the slow-roll regime, which leads, therefore, to a decrease of N. Although we have imposed |η| ≪ 1 initially, this parameter grows very rapidly in the beginning of inflation and assumes a constant value (η ≃ 0.3) during almost the entire inflationary period. In the remaining 10 e-folds until the end of inflation |η| is nearly zero, and finally it becomes large in the last e-fold of inflation, corresponding to a substantial variation of ε, which indicates the end of inflation.
Notice that the change in the parameter η is associated with the increase of the MDO field in the last 15 e-folds and also with a "bump" in the Hubble parameter in the same period. Such a bump, corresponding to a moderate increase in the energy density of the MDO field with respect to its initial value, is notably correlated with an increase of the torsion function h(t). From Eq. (8) we see that initially |h(t_i)| is much smaller than the Hubble parameter, but it becomes of the same order as H when φ → φ_c. In Fig. 2.b it is also possible to note that the height of the bump and the corresponding transient increase of |h(t)| are strongly dependent on the value of φ_c. The bump can even disappear for a small enough value of φ_c, but in this case changes in the initial conditions would be required in order to achieve the desired total e-folds of inflation. An interesting question about the bump at the end of inflation is whether it could leave an imprint in the cosmic microwave background (CMB) radiation, since it is characterized by a sudden increase in the energy density at that time. A better quantitative treatment must be done in order to study the constraints on the CMB spectrum due to such a bump. We have also studied the evolution of the deceleration parameter during the inflationary period. In the left panel of Fig. 3 one can notice that q(t) < −1 for a short period preceding the maximum of the bump, at which we have again q = −1 since Ḣ = 0 there. Therefore, the present inflationary scenario for the MDO field exhibits a phase with a phantom-like behavior. Again, this is explained by the increase of the function |h(t)|, which contributes significantly to the increase of the effective energy density at that epoch. After the end of inflation, the torsion remains important until the present time and its contribution scales with the Hubble function H(t), since the field acquires a constant value of the order of the Planck mass. Such a behavior is expected, since we are assuming the whole Universe homogeneously filled with the MDO field, and, therefore, each point of space containing a fermionic field interacts with torsion, contributing to the energy-momentum tensor and providing a contribution to the evolution. On the other hand, in the case of chaotic inflation with a potential described by a sum of a quadratic and a quartic self-interacting term, the field goes to zero after inflation; thus the torsion, which was initially important, vanishes after inflation [23]. In this case, there is no bump at the final stage of inflation. V. REHEATING After the end of inflation (ε = 1 and q = 0) the amplitude of the MDO fermionic field φ(t) starts to oscillate coherently around the minimum of the potential with a frequency ω_φ = [V″(φ)]^{1/2}, as can be seen in Fig. 1, and a reheating mechanism takes place. During this period, the deceleration parameter oscillates around its average value ⟨q(t)⟩ = 1/2, as shown in the right panel of Fig. 3. Therefore, during the regime of coherent oscillations, the field behaves on average as non-relativistic particles, i.e. ⟨p_φ⟩ = 0, in the same way as happens for a typical scalar field subject to a quadratic potential. Such a result has already been verified in Ref. [23] for the MDO inflation with a quadratic mass term plus a self-interacting potential, and also for a symmetry breaking potential, as can be seen in Ref. [22].
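To make the slow-roll behavior described above easier to reproduce qualitatively, here is a minimal numerical sketch that integrates the torsion-free scalar-field analogue of the system, φ̈ + 3Hφ̇ + V′(φ) = 0 with H² = (8π/3 m_pl²)[φ̇²/2 + V(φ)] and the symmetry breaking potential V_1(φ) = A⁴(1 − φ²/φ_c²)², counting e-folds until ε = |Ḣ|/H² reaches 1. It deliberately neglects the torsion functions h(t), f(t) and the spinorial corrections of Eqs. (5)-(8), and the parameter values are illustrative, so it only mimics the qualitative behavior of the full MDO system rather than reproducing the 71 e-folds quoted in the text.

```python
import numpy as np

# Rough scalar-field analogue of the MDO inflaton (torsion and spinorial
# corrections neglected); units with m_pl = 1, so kappa^2 = 8*pi.
A4, phi_c = 1.0e-10, 1.3            # A^4 and phi_c (illustrative values)
V  = lambda p: A4 * (1.0 - (p / phi_c) ** 2) ** 2
dV = lambda p: -4.0 * A4 * p / phi_c**2 * (1.0 - (p / phi_c) ** 2)

def hubble(p, dp):
    return np.sqrt(8.0 * np.pi / 3.0 * (0.5 * dp**2 + V(p)))

phi, dphi = 5.4e-9, 0.0             # start near the false vacuum at phi ~ 0
N, dt = 0.0, 1.0e-3 / hubble(phi, dphi)   # fixed step set by the initial Hubble time

while True:
    H = hubble(phi, dphi)
    ddphi = -3.0 * H * dphi - dV(phi)
    # For a single canonical scalar field, |Hdot|/H^2 = 3*KE/(KE+V) exactly.
    eps = 3.0 * (0.5 * dphi**2) / (0.5 * dphi**2 + V(phi))
    if eps >= 1.0 or phi >= phi_c:
        break
    phi += dphi * dt
    dphi += ddphi * dt
    N += H * dt

print(f"e-folds until epsilon = 1: N ~ {N:.1f}")
```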
Now, let us briefly discuss two possible mechanisms for a successful reheating process after the MDO inflation, namely: (i) the electromagnetic coupling of the MDO field at high energies, and (ii) the coupling of the MDO field with the Higgs boson in a preheating phase preceding the reheating. If the reheating process occurs with an MDO decay rate Γ, we need to add a damping term Γφ̇ on the left-hand side of Eq. (7), which is negligible in the inflationary epoch. Therefore, the energy density of the MDO field is not conserved, and a term −(1 + κ²φ²/8)Γφ̇² appears on the right-hand side of Eq. (14). From Eqs. (12) and (13) one can show that, averaging over one period of oscillation, ⟨(1 + κ²φ²/8)φ̇²⟩ = ρ_φ. Hence, by considering mechanism (i), the field decays into photons and the evolution of the MDO field and of the decay product's energy density are described by the coupled equations ρ̇_φ + 3Hρ_φ = −Γρ_φ and ρ̇_r + 4Hρ_r = Γρ_φ, where ρ_r is the energy density of radiation. Notice that the above equations are identical in form to the case of reheating with a scalar field. As a result, while the MDO field dominates the energy density, the deceleration parameter is 1/2 on average and the Universe is decelerating during the whole reheating stage. Then, there is a transition to a radiation dominated epoch until the deceleration parameter reaches q_rad = 1, which indicates the end of the reheating stage. In Ref. [9], the interaction of the MDO field with photons at the tree level was investigated. The second possible mechanism for energy transfer after inflation is justified by the coupling of the MDO field with the Higgs field through the addition of an interaction term in the Lagrangian density of the action (1). Such an interaction has been considered to investigate the possibility of discovering the MDO field at the LHC [6][7][8]. The full Lagrangian is then given by the Lagrangian of the action (1) supplemented by this interaction term and by the Higgs Lagrangian. At the time of reheating, the MDO field oscillates with a slowly varying amplitude φ_0(t), which is initially of the order of the Planck mass. Then, if we vary the above Lagrangian with respect to Φ, one can obtain the equations of motion of the Higgs field coupled with φ(t), which are similar to a Mathieu equation. The solutions of this equation are characterized by a parametric resonance which results in a fast amplification of Φ. After such a period (called preheating) we have the reheating, since the coupling of the Higgs field Φ with the standard-model particles leads to thermalization. Obviously, the temperature at the end of reheating depends on the details of the model. In practice, both of the above described processes can happen simultaneously, although one might expect that one of them is dominant. Moreover, the success of the reheating process in the MDO cosmological scenario depends essentially on how strong the couplings with the electromagnetic field and with the Higgs field are. In this respect, a further investigation is necessary in order to constrain the parameter space of the models from a cosmological perspective. VI. CONCLUSIONS In the present article we have studied a unified cosmological evolution driven by a MDO fermionic field.
We have shown that the MDO inflation with a symmetry breaking potential can successfully achieve a number of e-folds as large as 71, with an initial condition for the field given by a primordial quantum fluctuation. It is worth stressing that the dark matter behavior of the MDO fermionic field, in the way it is understood in the present article, is quite different from the preceding works on the topic [22,23]. In those works, the coherent oscillation of the field after inflation is interpreted as dark matter. This is because the oscillating field is pressureless on average, in the same manner as an oscillating scalar field subject to a quadratic potential. However, this is possible only if the interaction of the MDO field with any other field is negligibly small. Conversely, if the field couples with radiation or with the Higgs boson in the still highly energetic regime after the inflationary epoch, then its energy density will decay exponentially in a reheating phase, as discussed in general in Section V. In this sense, the MDO dark matter behavior only survives if the field acquires a non-null constant value after reheating which is of the order of the Planck mass or, more precisely, φ_c = m_pl [Ω_DM,φ/(π Ω_b)]^{1/2}. Therefore, if this condition is satisfied, it does not matter what kind of potential the field is subject to (nor the inflationary mechanism): the gravitational coupling of the field with baryonic matter leads to a contribution in the form of pressureless matter. It is worth emphasizing that such a coupling has its origin in the definition of the spin connections in curved spacetimes; no explicit coupling between the MDO field and other fields was assumed. Moreover, if the potential is not null at φ = φ_c, then the MDO field has an additional constant contribution to the energy density that works exactly as a cosmological constant. Such a term is also responsible for constraining the mass of the field to the range 0 < m ≲ 2.4 × 10¹⁰ GeV. Finally, the scenario described in this article provides us with a natural explanation for the cosmological coincidence problem. This is because the evolution of the energy density of the MDO field scales with the energy density evolution of the other matter fields, resulting in a present value of the same order of magnitude as that of the usual matter. Therefore, we conclude that the MDO fermionic field is a good candidate to drive the whole evolution of the Universe in a unified fashion, in such a way that the inflationary field, dark matter and dark energy are described by different manifestations of a single fermionic field. Additionally, as already pointed out in the Introduction, a possible interpretation of the inflationary phase as a consequence of the Pauli exclusion principle puts this model on very interesting physical grounds.
7,420
2018-10-29T00:00:00.000
[ "Physics" ]
A distributional approach to fractional Sobolev spaces and fractional variation: asymptotics II We continue the study of the space $BV^\alpha(\mathbb R^n)$ of functions with bounded fractional variation in $\mathbb R^n$ and of the distributional fractional Sobolev space $S^{\alpha,p}(\mathbb R^n)$, with $p\in [1,+\infty]$ and $\alpha\in(0,1)$, considered in the previous works arXiv:1809.08575 and arXiv:1910.13419. We first define the space $BV^0(\mathbb R^n)$ and establish the identifications $BV^0(\mathbb R^n)=H^1(\mathbb R^n)$ and $S^{\alpha,p}(\mathbb R^n)=L^{\alpha,p}(\mathbb R^n)$, where $H^1(\mathbb R^n)$ and $L^{\alpha,p}(\mathbb R^n)$ are the (real) Hardy space and the Bessel potential space, respectively. We then prove that the fractional gradient $\nabla^\alpha$ strongly converges to the Riesz transform as $\alpha\to0^+$ for $H^1\cap W^{\alpha,1}$ and $S^{\alpha,p}$ functions. We also study the convergence of the $L^1$-norm of the $\alpha$-rescaled fractional gradient of $W^{\alpha,1}$ functions. To achieve the strong limiting behavior of $\nabla^\alpha$ as $\alpha\to0^+$, we prove some new fractional interpolation inequalities which are stable with respect to the interpolating parameter. The asymptotic behavior of the fractional gradient ∇^α as α → 1⁻ was fully discussed in [28] (see also [14, Theorem 3.2] for a different proof of (1.10) below for the case p ∈ (1, +∞) via Fourier transform). Precisely, if f ∈ W^{1,p}(R^n) for some p ∈ [1, +∞), then f ∈ S^{α,p}(R^n) for all α ∈ (0, 1) with $\lim_{\alpha\to1^-}\|\nabla^\alpha f-\nabla f\|_{L^p(\mathbb R^n;\mathbb R^n)}=0$. (1.10) If f ∈ BV(R^n) instead, then f ∈ BV^α(R^n) for all α ∈ (0, 1) with D^α f ⇀ Df in M(R^n; R^n) and |D^α f| ⇀ |Df| in M(R^n) as α → 1⁻ and $\lim_{\alpha\to1^-}|D^\alpha f|(\mathbb R^n)=|Df|(\mathbb R^n)$. (1.11) We underline that, differently from the limits (1.6) and (1.8), the renormalizing factor $(1-\alpha)^{1/p}$ does not appear in (1.10) and (1.11). This is motivated by the fact that the constant µ_{n,α} encoded in the definition (1.3) of the operator ∇^α already incorporates this behavior, since it vanishes linearly in (1 − α) as α → 1⁻. Concerning the asymptotic behavior of ∇^α as α → 0⁺, at least for sufficiently regular functions, the fractional gradient in (1.3) is converging to the operator $Rf(x):=\mu_{n,0}\,\lim_{\varepsilon\to0^+}\int_{\{|y-x|>\varepsilon\}}\frac{f(y)\,(y-x)}{|y-x|^{n+1}}\,dy$, x ∈ R^n. (1.12) Here and in the following, µ_{n,0} is simply the limit of the constant µ_{n,α} defined in (1.5) as α → 0⁺ (thus, in this case, no renormalization factor has to be taken into account). The operator in (1.12) is well defined (in the principal value sense) at least for all f ∈ C^∞_c(R^n) and, actually, coincides (possibly up to a minus sign, see Section 2.1 below) with the well-known vector-valued Riesz transform Rf, see [40,73,74]. The formal limit ∇^α → R as α → 0⁺ can be also motivated either by the asymptotic behavior of the Fourier transform of ∇^α as α → 0⁺ or by the relation ∇^α = ∇ I_{1−α}, where I_α stands for the Riesz potential of order α ∈ (0, n). In a similar fashion, the fractional α-divergence in (1.4) is converging as α → 0⁺ to the corresponding operator div⁰, which is well defined (in the principal value sense) at least for all ϕ ∈ C^∞_c(R^n; R^n). As a natural target space for the study of the limiting behavior of ∇^α as α → 0⁺, in analogy with the fractional variation (1.1), we introduce the space BV⁰(R^n) of functions f ∈ L^1(R^n) such that the quantity $|D^0f|(\mathbb R^n):=\sup\Big\{\int_{\mathbb R^n}f\,\mathrm{div}^0\varphi\,dx : \varphi\in C^\infty_c(\mathbb R^n;\mathbb R^n),\ \|\varphi\|_{L^\infty}\le1\Big\}$ is finite. Surprisingly, it turns out that D⁰f ≪ L^n for all f ∈ BV⁰(R^n), in contrast with what is known for the fractional α-variation in the case α ∈ (0, 1], see [27, Theorem 3.30]. More precisely, we prove that $BV^0(\mathbb R^n)=H^1(\mathbb R^n)$, (1.13) where $H^1(\mathbb R^n)$ is the well-known (real) Hardy space.
Having the identification (1.13) at disposal, we can rigorously establish the validity of the convergence ∇ α → R as α → 0 + . For p = 1, we prove that For p ∈ (1, +∞) instead, since the Riesz transform (1.12) extends to a linear continuous operator R : L p (R n ) → L p (R n ; R n ), the natural target space for the study of the limiting behavior of the fractional gradient is simply L p (R n ; R n ). In this case, we prove that The limits in (1.14) and (1.15) can be considered as the counterparts of (1.7) in our fractional setting. However, differently from (1.7), in (1.14) and in (1.15) we obtain strong convergence. This improvement can be interpreted as a natural consequence of the fact that, generally speaking, the L p -norm of the fractional gradient ∇ α allows for more cancellations than the W α,p -seminorm. Since the Riesz transform (1.12) extends to a linear continuous operator R : H 1 (R n ) → H 1 (R n ; R n ), the limit in (1.14) can be improved. Precisely, we prove that Here is (an equivalent definition of) the fractional Hardy-Sobolev space, see [75] and below for a more detailed presentation. One can recognize that so that (1.16) is indeed a reinforcement of (1.14). Naturally, if f / ∈ H 1 (R n ), then we cannot expect that ∇ α f → Rf in L 1 (R n ; R n ) as α → 0 + . Instead, as suggested by the limit in (1.7), we have to consider the asymptotic behavior of the rescaled fractional gradient α ∇ α f as α → 0 + . In this case, we prove that ( 1.17) for all f ∈ α∈(0,1) W α,1 (R n ). Note that (1.17) is consistent with both (1.7) and (1.14). Indeed, on the one side, by simply bringing the modulus inside the integral in the definition (1.3) of ∇ α , we can estimate for all f ∈ W α,1 (R n ) (see also [27,Theorem 3.18]), so that, by (1.7), we can infer lim sup [74, Chapter III, Section 5.4(c)] for example), and thus for all f ∈ H 1 (R n ) ∩ α∈(0,1) W α,1 (R n ) the limit in (1.17) reduces to in accordance with the strong convergence (1.14). Let α ∈ (0, 1) be fixed. In the standard fractional framework, by a simple splitting argument, it is not difficult to estimate the W β,1 -seminorm of a function f ∈ W α,1 (R n ) as for all β ∈ (0, α). Inequality (1.19) implies the bound 20) in agreement with (1.7). In a similar fashion (but with a more delicate analysis), an interpolation inequality of the form (1.19) has been recently obtained by the third and the fourth author for the for all β ∈ (0, α), where c n,α,β > 0 is a constant such that see [28,Proposition 3.12] (see [28,Proposition 3.2] also for the case α = 1). Here and in the following, we let [f ] BV α (R n ) be the total fractional variation (1.1) of f ∈ BV α (R n ). Thanks to (1.23), inequality (1.21) implies the bound coherently with (1.17). Although strong enough to settle the asymptotic behavior of the fractional gradient ∇ β when β → α − thanks to (1.22), because of (1.24) inequality (1.21) is of no use for the study of the strong L 1 -limit ∇ β → R as β → 0 + . To achieve this convergence, we thus have to control the interpolation constant c n,α,β in (1.21) with a new interpolation constant c n,α > 0 independent of β ∈ (0, α), at the price of weakening (1.21) by replacing the L 1 -norm with a bigger norm. This strategy is in fact motivated by the non-optimality of the bound (1.24) since, in view of the limit in (1.17), we can still expect some cancellation effect of the fractional gradient for a subclass of L 1 -functions having zero average. 
Note that this approach cannot be implemented to stabilize the standard interpolation inequality (1.19), since the bound in (1.20) is in fact optimal due to (1.7). At this point, our idea is to exploit the cancellation properties of the fractional gradient ∇ β by rewriting its non-local part in terms of a convolution kernel. In more precise terms, recalling the definition in (1.3), for R > 0 we can split for all Schwartz functions f ∈ S(R n ), where the convolution kernel K β,R is a smoothing of the function y → y |y| n+β+1 χ [R,+∞) (|y|). By the Calderón-Zygmund Theorem, we can extend the functional defined in (1.26) to a linear continuous mapping ∇ β ≥R : H 1 (R n ) → L 1 (R n ; R n ) whose operator norm can be estimated as 27) for some dimensional constant c n > 0. By combining the splitting (1.25) with the bound (1.27) and arguing as in [28], we get that for all β ∈ [0, α) and all f ∈ H 1 (R n ) ∩ BV α (R n ), whenever α ∈ (0, 1]. Exploiting (1.28) together with an approximation argument, we thus just need to establish (1.14) for all sufficiently regular functions, in which case we can easily conclude by a direct computation. To achieve the limit in (1.15) for p ∈ (1, +∞) and the stronger convergence in (1.16) for the case p = 1, we adopt a slightly different strategy. Instead of splitting the fractional gradient as in (1.25), we rewrite it as is the usual fractional Laplacian with renormalizing constant given by Since the Riesz transform extends to a linear continuous operator on L p (R n ) and H 1 (R n ) as mentioned above, to achieve (1.15) and (1.16) we just have to study the continuity properties of (−∆) Exploiting the good decay properties of the derivatives of m α,β (uniform with respect to the parameters 0 ≤ β ≤ α ≤ 1), by the Mihlin-Hörmander Multiplier Theorem the convolution operator in (1.31) can be extended to two linear operators continuous from L p (R n ) to itself and from H 1 (R n ) to itself, respectively. Going back to (1.29) and (1.30), we can exploit the continuity properties of the (extensions of) the operator T m α,β to deduce two new interpolation inequalities. On the one hand, given p ∈ (1, +∞), there exists a constant c n,p > 0 such that for all 0 ≤ γ ≤ β ≤ α ≤ 1 and all f ∈ S α,p (R n ). In the particular case γ = 0, thanks to the L p -continuity of the Riesz transform, we also have for all 0 ≤ β ≤ α ≤ 1 and all f ∈ S α,p (R n ). On the other hand, there exists a dimensional constant c n > 0 such that for all 0 ≤ γ ≤ β ≤ α ≤ 1 and all f ∈ HS α,1 (R n ). Again, in the particular case γ = 0, thanks to the H 1 -continuity of the Riesz transform, we also have for all 0 ≤ β ≤ α ≤ 1 and all f ∈ HS α,1 (R n ). Having the interpolation inequalities (1.33) and (1.35) at disposal, as before we just need to establish (1.15) and (1.16) for all sufficiently regular functions, in which case we can again conclude by a direct computation. As the reader may have noticed, in the above line of reasoning we can infer the validity of (1.32) and (1.34) only if we are able to prove the identifications (1.36) for p ∈ (1, +∞), and (1.37) respectively, with equivalence of the naturally associated norms, where (Id − ∆) − α 2 is the standard Bessel potential. While (1.37) follows by a plain approximation argument building upon the results of [75], the identification in (1.36) is more delicate and, actually, answers an equivalent question left open in [27], that is, the density of C ∞ c (R n ) functions in S α,p (R n ), see Appendix A for the proof. 
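Setting aside the identifications for a moment, the Fourier-side mechanism behind the limits (1.14)-(1.16) can be sanity-checked numerically: writing ∇^α as the Riesz transform composed with the fractional Laplacian (−∆)^{α/2} (up to the sign convention discussed in Section 2.1), the relevant multiplier factor |2πξ|^α tends to 1 uniformly on frequency bands away from the origin as α → 0⁺. The short sketch below, which assumes this standard symbol for the fractional Laplacian, simply quantifies that convergence on a fixed band; it is only an illustration and plays no role in the proofs.

```python
import numpy as np

# Symbol factor of the fractional Laplacian (-Delta)^(alpha/2) with the
# convention |2*pi*xi|^alpha (an assumption of this illustration).
def multiplier_gap(alpha, xi_min=0.1, xi_max=10.0, num=2001):
    """sup over the band [xi_min, xi_max] of | |2*pi*xi|^alpha - 1 |."""
    xi = np.linspace(xi_min, xi_max, num)
    return np.max(np.abs((2.0 * np.pi * xi) ** alpha - 1.0))

for alpha in (0.5, 0.1, 0.01, 0.001):
    print(f"alpha = {alpha:6.3f}  ->  sup gap on band = {multiplier_gap(alpha):.4f}")
# The gap shrinks to 0 as alpha -> 0+, which is the Fourier-side reason why the
# fractional gradient approaches the Riesz transform on nice functions.
```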
In other words, the equivalence (1.36) allows to identify the Bessel potential space with the distributional fractional Sobolev space S α,p (R n ) in (1.2). Thanks to the identification L α,p (R n ) = S α,p (R n ), many of the results established in [13,14] and in [67,68] can be proved in a simpler and more direct way. See also Appendix B for other consequences of this identification. Complex interpolation and open problems. To achieve the interpolation inequalities (1.28) and (1.32) -(1.35), we essentially relied on a direct approach exploiting the precise structure of the fractional gradient in (1.3). Adopting the point of view of [52,62], a possible alternative route to the above fractional inequalities may follow from complex interpolation techniques. According to [15,Theorem 6.4.5(7)] and thanks to the aforementioned identification L α,p (R n ) = S α,p (R n ), for all α, ϑ ∈ (0, 1) and p ∈ (1, +∞) we have the complex interpolation Here and in the following, we write A ∼ = B to emphasize the fact that the spaces A and B are the same with equivalence (and thus, possibly, not equality) of the relative norms. As a consequence, (1.38) implies that, for all 0 < β < α < 1 and p ∈ (1, +∞), there exists a constant c n,α,β,p > 0 such that for all f ∈ S α,p (R n ). In a similar way (we omit the proof because beyond the scopes of the present paper), for all α, ϑ ∈ (0, 1) one can also establish the complex interpolation and thus, for some constant c n,α,β > 0, for all f ∈ HS α,1 (R n ). Inequalities (1.39) and (1.41) suggest that, in order to obtain (1.33) and (1.35) with complex interpolation methods, one essentially should prove that the identifications (1.38) and (1.40) hold uniformly with respect to the interpolating parameter. We believe that this result may be achieved but, since we do not need this level of generality for our aims, we preferred to prove (1.32) -(1.35) in a more direct and explicit way. 1.5. Organization of the paper. We conclude this introduction by briefly presenting the organization of the present paper. Section 2 provides the main notation, recalls the needed properties of the fractional operators ∇ α and div α and, finally, deals with the properties of the space HS α,1 (R n ). Section 3 is devoted to the proof of the identification BV 0 (R n ) = H 1 (R n ), together with some useful consequences about the relation between H 1 (R n ) and W α,1 (R n ). In Sections 4 and 5, the core of our work, we detail the proof of the interpolation inequalities (1.28), (1.32) and (1.34) and, consequently, we prove both the strong convergence of the fractional gradient ∇ α as α → 0 + given by (1.15), (1.16) and the limit (1.17). We close our work with three appendices: in Appendix A we prove the density of C ∞ c (R n ) functions in S α,p (R n ); in Appendix B we state some properties of S α,p -functions; in Appendix C we establish some continuity properties of the map α → ∇ α . Preliminaries We start with a brief description of the main notation used in this paper. In order to keep the exposition as reader-friendly as possible, we retain the same notation adopted in the previous works [27,28]. 2.1. General notation. We let L n and H α be the n-dimensional Lebesgue measure and the α-dimensional Hausdorff measure on R n respectively, with α ≥ 0. A measurable set is a L n -measurable set. We also use the notation |E| = L n (E). All functions we consider in this paper are Lebesgue measurable. We let B r (x) be the standard open Euclidean ball with center x ∈ R n and radius r > 0. 
We set B r = B r (0). Recall that ω n := |B 1 | = π n 2 /Γ n+2 2 and H n−1 (∂B 1 ) = nω n , where Γ is the Euler's Gamma function, see [9]. For m ∈ N, the total variation on Ω of the m-vector-valued Radon measure µ is defined as We thus let M (Ω; R m ) be the space of m-vector-valued Radon measure with finite total variation on Ω. For k ∈ N 0 ∪ {+∞} and m ∈ N, we let C k c (Ω; R m ) and Lip c (Ω; R m ) be the spaces of C k -regular and, respectively, Lipschitz-regular, m-vector-valued functions defined on R n with compact support in the open set Ω ⊂ R n . For m ∈ N, we let S(R n ; R m ) be the space of m-vector-valued Schwartz functions on R n . where x a := x a 1 1 · . . . · x an n for all multi-indices a ∈ N n 0 . We let S ′ (R n ; R m ) be the dual of S(R n ; R m ) and we call it the space of tempered distributions. See [40, Section 2.2 and 2.3] for instance. For any exponent p ∈ [1, +∞], we let L p (Ω; R m ) be the space of m-vector-valued Lebesgue p-integrable functions on Ω. We let be the Fourier transform of the function f ∈ L 1 (R n ; R m ). As it is well known, the Fourier transform maps S(R n ; R m ) onto itself and may be extended to S ′ (R n ; R m ) (see [40, Sections 2.2 and 2.3] for instance). We let be the space of m-vector-valued Sobolev functions on Ω, see for instance [46,Chapter 10] for its precise definition and main properties. We let be the space of m-vector-valued functions of bounded variation on Ω, see for instance [4,Chapter 3] or [34,Chapter 5] for its precise definition and main properties. For α ∈ (0, 1) and p ∈ [1, +∞), we let be the space of m-vector-valued fractional Sobolev functions on Ω, see [32] for its precise definition and main properties. For α ∈ (0, 1) and p = +∞, we simply let , the space of m-vector-valued bounded α-Hölder continuous functions on Ω. Given α ∈ (0, n), let We recall that, if α, β ∈ (0, n) satisfy α + β < n, then we have the following semigroup property then there exists a constant C n,α,p > 0 such that the operator in (2.1) satisfies . As a consequence, the operator in (2.1) extends to a linear continuous operator from L p (R n ; R m ) to L q (R n ; R m ), for which we retain the same notation. For a proof of (2.2) and (2.3), we refer the reader to [ Given α ∈ (0, 1), we also let For α ∈ (0, 1) and p ∈ (1, +∞), let see [63,Theorem 27.3]. In particular, the function defines a norm on L α,p (R n ; R m ) equivalent to the one in (2.6) (and so, unless otherwise stated, we will use both norms (2.6) and (2.8) with no particular distinction). We recall that C ∞ c (R n ) is a dense subset of L α,p (R n ; R m ), see [1, Theorem 7.63(a)] and [63,Lemma 27.2]. Note that the space L α,p (R n ; R m ) can be defined also for any α ≥ 1 by simply using the composition properties of the Bessel potential (or of the fractional Laplacian), see [1,Section 7.62]. All the properties stated above remain true also for α ≥ 1 and, moreover, For m ∈ N, we let be the m-vector-valued (real) Hardy space endowed with the norm We refer the reader to [ Chapter III] for a more detailed exposition. We warn the reader that the definition in (2.9) agrees with the one in [74] and differs from the one in [41,73] for a minus sign. We also recall that the Riesz transform (2.9) defines a continuous operator R : L p (R n ; R m ) → L p (R n ; R mn ) for any given p ∈ (1, +∞), see [40,Corollary 5.2.8], and a continuous operator In the sequel, in order to avoid heavy notation, if the elements of a function space F (Ω; R m ) are real-valued (i.e. 
m = 1), then we will drop the target space and simply write F (Ω). 2.2. Overview of ∇ α and div α and the related function spaces. We recall the definition (and the main properties) of the non-local operators ∇ α and div α , see [27,28,70] and the monograph [61, Section 15.2]. Let α ∈ (0, 1) and set We let The non-local operators ∇ α and div α are well defined in the sense that the involved integrals converge and the limits exist. Moreover, since Thanks to [27, Proposition 2.2], given α ∈ (0, 1) we can equivalently write for all f ∈ Lip c (R n ; R n ) and ϕ ∈ Lip c (R n ; R n ), respectively. The fractional operators ∇ α and div α are dual in the sense that for all f ∈ Lip c (R n ) and ϕ ∈ Lip c (R n ; R n ), see [69,Section 6] and [27,Lemma 2.5]. In addition, given f ∈ Lip c (R n ) and ϕ ∈ Lip c (R n ; R n ), we have for all p ∈ [1, +∞], see [27,Corollary 2.3]. The above results and identities hold also for functions f ∈ S(R n ) and ϕ ∈ S(R n ; R n ). Given α ∈ (0, 1) and p ∈ [1, +∞], inspired by the integration-by-parts formula (2.11), we say that a function f ∈ L p (R n ) has bounded fractional α-variation if is a Banach space and that the fractional variation defined in (2.13) is lower semicontinuous with respect to L p -convergence. In the sequel, we also use the notation In the case p = 1, we simply write BV α,1 (R n ) = BV α (R n ). The space BV α (R n ) resembles the classical space BV (R n ) from many points of view and we refer the reader to [27, Section 3] for a detailed exposition of its main properties. Again motivated by (2.11) and in analogy with the classical case, given α ∈ (0, 1) and p ∈ [1, +∞] we define the weak fractional α-gradient of a function f ∈ L p (R n ) as the We notice that, in the case f ∈ Lip c (R n ) (or f ∈ S(R n )), the weak fractional α-gradient of f coincides with the one defined above, thanks to (2.11). As above, the reader can verify that the distributional fractional Sobolev space endowed with the norm is a Banach space. In the case p = 1, starting from the very definition of the fractional gradient ∇ α , one can check that [27,Theorem 3.23]. In the case p ∈ (1, +∞), the density of the set of test functions in the space S α,p (R n ) was left as an open problem in [27,Section 3.9]. More precisely, defining endowed with the norm in (2.15), it is immediate to see that S α,p 0 (R n ) ⊂ S α,p (R n ) with continuous embedding. The space (S α,p 0 (R n ), · S α,p (R n ) ) was introduced in [67] (with a different, but equivalent, norm) and, in fact, it satisfies S α,p 0 (R n ) = L α,p (R n ) for all α ∈ (0, 1) and p ∈ (1, +∞), see [67,Theorem 1.7]. In Theorem A.1 in the appendix, we positively solve the problem of the density of C ∞ c (R n ) in the space S α,p (R n ). As a consequence, we obtain the following result. According to Corollary 2.1, in the sequel we will also use the symbol S α,p to denote the Bessel potential space L α,p . In addition, consistently with the asymptotic behavior of the fractional gradient ∇ α as α → 1 − established in [28], we will sometimes denote the Sobolev space W 1,p as S 1,p for p ∈ [1, +∞). Thanks to the identification given by Corollary 2.1, we can prove the following result. Proof. By Corollary 2.1, we equivalently have to prove that the set S 0 (R n ) is dense in L α,p (R n ). To this aim, let us consider the functional M : Clearly, the linear functional M cannot be continuous and thus its kernel S 0 (R n ) must be dense in S(R n ) with respect to the L p -norm. 
Since the Bessel potential (Id − ∆) − α 2 : (S(R n ), · S α,p (R n ) ) → (S(R n ), · L p (R n ) ) is an isomorphism, the conclusion follows. 2.3. The fractional Hardy-Sobolev space HS α,1 (R n ). Following the classical approach of [75], for α ∈ [0, 1] let be the (real) fractional Hardy-Sobolev space endowed with the norm In particular, HS 0,1 (R n ) = H 1 (R n ) coincides with the (real) Hardy space and H 1,1 (R n ) is the standard (real) Hardy-Sobolev space. As remarked in [75, p. 130], HS α,1 (R n ) can be equivalently defined as In particular, the function For the reader's convenience we briefly prove the following density result. Proof. Since the set S ∞ (R n ) is dense in H 1 (R n ) by [74, Chapter III, Section 5.2(a)], the set is dense (and embeds continuously) in HS α,1 (R n ). Thus the conclusion follows. Exploiting Lemma 2.3, for α ∈ (0, 1), the space HS α,1 (R n ) can be equivalently defined as the space Indeed, if f ∈ S ∞ (R n ), then, by exploiting Fourier transform techniques, we can write for all f ∈ S ∞ (R n ), thanks to the H 1 -continuity property of the Riesz transform and the fact that where R j is the j-th component of the Riesz transform R. By Lemma 2.3, the validity of (2.18) extends to all f ∈ HS α,1 (R n ) and the conclusion follows. As a consequence, note that HS α,1 (R n ) ⊂ S α,1 (R n ) for all α ∈ (0, 1) with continuous embedding. We note that the well-posedness and the equivalence of the definitions of HS α,1 (R n ) given above and the stated results hold for any α ≥ 0 thanks to the composition properties of the operators involved. We leave the standard verifications to the interested reader. 3. The BV 0 (R n ) space 3.1. Definition of BV 0 (R n ) and Structure Theorem. Somehow naturally extending the definitions given in (2.10) to the case α = 0, for f ∈ Lip c (R n ) and ϕ ∈ Lip c (R n ; R n ) we define ∇ 0 f := I 1 ∇f and div 0 ϕ := I 1 divϕ. It is immediate to check that the integration-by-parts formula holds for all given f ∈ Lip c (R n ) and ϕ ∈ Lip c (R n ; R n ). Hence, in analogy with [27, Definition 3.1], we are led to the following definition (which is well posed, since div 0 ϕ ∈ L ∞ (R n ) for ϕ ∈ Lip c (R n ; R n )). The proof of the following result is very similar to the one of [27, Theorem 3.2] and is omitted. for all ϕ ∈ C ∞ c (R n ; R n ). In addition, for all open sets U ⊂ R n it holds As already announced in [28], the space BV 0 (R n ) actually coincides with the Hardy space H 1 (R n ). More precisely, we have the following result. Theorem 3.3 (The identification BV for every f ∈ BV 0 (R n ). Proof. We prove the two inclusions separately. Proof of H 1 (R n ) ⊂ BV 0 (R n ). Let f ∈ H 1 (R n ) and assume f ∈ Lip c (R n ). By (3.1), we immediately get that D 0 f = Rf L n in M (R n ; R n ) with Rf = ∇ 0 f in L 1 (R n ; R n ), so that f ∈ BV 0 (R n ). Now let f ∈ H 1 (R n ). By [74, Chapter III, Section 5.2(b)], we can for all k ∈ N. Passing to the limit as k → +∞, we get , Rf is well defined as a (vector-valued) distribution, see [74, Chapter III, Section 4.3]. Thanks to (3.2), we also have that Rf, ϕ = D 0 f, ϕ for all ϕ ∈ C ∞ c (R n ; R n ), so that Rf = D 0 f in the sense of distributions. Now let (̺ ε ) ε>0 ⊂ C ∞ c (R n ) be a family of standard mollifiers (see e.g. [27, Section 3.2]). We can thus estimate for all ε > 0, so that f ∈ H 1 (R n ) by [74, Chapter III, Section 4.3, Proposition 3], with D 0 f = Rf L n in M (R n ; R n ). ∈ (0, 1). The following hold. Relation between ( Proof. We prove the two statements separately. Proof of (i). 
Let f ∈ H 1 (R n ). By the Stein-Weiss inequality (see [66, Theorem 2] for instance), we know that u := I α f ∈ L n n−α (R n ). To prove that |D α u|(R n ) < +∞, we exploit Theorem 3.3 and argue as in the proof of [27,Lemma 3.28]. Indeed, for all ϕ ∈ C ∞ c (R n ; R n ), we can write by Fubini's Theorem, since f ∈ L 1 (R n ) and I α |div α ϕ| ∈ L ∞ (R n ), being thanks to the semigroup property (2.2) of the Riesz potentials. This proves that D α u = D 0 f = Rf L n in M (R n ; R n ), again thanks to Theorem 3.3. We end this section with the following consequence of Proposition 3.4. Interpolation inequalities 4.1. The case p = 1 via the Calderón-Zygmund Theorem. Here and in the rest of the paper, let (η R ) R>0 ⊂ C ∞ c (R n ) be a family of cut-off functions defined as η R (x) = η |x| R , for all x ∈ R n and R > 0, For α ∈ (0, 1) and R > 0, let T α,R : S(R n ) → S ′ (R n ; R n ) be the linear operator defined by for all f ∈ S(R n ). In the following result, we prove that T α,R is a Calderón-Zygmund operator mapping H 1 (R n ) to L 1 (R n ; R n ). Lemma 4.1 (Calderón-Zygmund estimate for T α,R ). There is a dimensional constant τ n > 0 such that, for any given α ∈ (0, 1) and R > 0, the operator in (4.3) uniquely extends to a bounded linear operator T α,R : Proof. We apply [41, Theorem 2.4.1] to the kernel First of all, we have so that we can choose A 1 = 2nω n R −α in the size estimate (2.4.1) in [41]. We also have where c n > 0 is some dimensional constant, so that we can choose A 2 = c ′ n R −α in the smoothness condition (2.4.2) in [41], where c ′ n > c n is another dimensional constant. Finally, since clearly 4.3) in [41]. Since n R −α for some dimensional constant c ′′ n ≥ c ′ n , the conclusion follows. With Lemma 4.1 at our disposal, we can prove the following result. Remark 4.3 (H 1 − W α,1 interpolation inequality). Thanks to for all R > 0. Hence for all R > 0, and the desired inequality follows by optimizing the parameter R > 0 in the right-hand side. (i) For all given p ∈ (1, +∞), the operator in (4.11) uniquely extends to a bounded linear operator T m α,β : The operator in (4.11) uniquely extends to a bounded linear operator T m α,β : Proof. Without loss of generality, we can directly assume that 0 ≤ γ < β < α ≤ 1. We prove the two statements separately. Proof of (i). Given f ∈ S α,p (R n ), we can write thanks to Lemma 4.4(i). By performing a dilation and by optimizing the right-hand side, we find that because f ∈ L α+γ,p (R n ) and by the L p -continuity property of the Riesz transform, we get that ∇ γ f ∈ S α,p (R n ; R n ) according to the definition given in (2.7) and the identification established in Corollary 2.1. Repeating the above computations for (each component of) the function ∇ γ f ∈ S α,p (R n ; R n ) with exponents α − γ and β − γ in place of α and β respectively and then optimizing, we get where c n,p = σ n n 1/2p max p, 1 p−1 . Thanks to Theorem A.1, Proposition B.1 and Proposition B.4, inequality (4.12) follows by performing a standard approximation argument. In the case γ = 0, inequality (4.13) follows from (4.12) by the L p -continuity of the Riesz transform. This concludes the proof of (i). Proof of (ii). Given f ∈ HS α,1 (R n ), arguing as above, we can write thanks to Lemma 4.4(ii). By performing a dilation and by optimising the right-hand side, we find that by Proposition 3.4(ii). Moreover, because f ∈ HS α+γ,1 (R n ) and by the H 1 -continuity property of the Riesz transform. Thus ∇ γ f ∈ HS α,1 (R n ; R n ). 
Repeating the above computations for (each component of) the function ∇ γ f ∈ HS α,1 (R n ; R n ) with exponents α − γ and β − γ in place of α and β respectively and then optimizing, we get where c n = σ n n 1/2 . Thanks to Lemma 2.3, inequality (4.14) follows by performing a standard approximation argument. In the case γ = 0, inequality (4.15) follows from (4.12) by the H 1 -continuity of the Riesz transform. This concludes the proof of (ii). Asymptotic behavior of fractional α-variation as α → 0 + In this section, we study the asymptotic behavior of ∇ α as α → 0 + . For β ∈ (0, α), the operator and where c n,p is as in (5.2). Finally, if p < +∞ and f ∈ C 0,α loc (R n ) ∩ L p (R n ), then ∇ 0 f is well defined and belongs to C 0 (R n ; R n ), (5.3) holds for β = 0, for all bounded open sets U ⊂ R n we have Proof. We divide the proof in four steps. Remark 5.2. It is easy to see that a result analogous to Lemma 5.1 can be proved for the fractional divergence operator. In particular, if ϕ ∈ C 0,α (R n ; R n ) ∩ L p (R n ; R n ) for some α ∈ (0, 1] and p ∈ [1, +∞], then div β ϕ ∈ L ∞ (R n ) for all β ∈ (0, α) with where c n,p > 0 is the constant defined in (5.2). If p < +∞, then div β ϕ ∈ L ∞ (R n ) for all β ∈ [0, α), the above estimate holds also for β = 0 and we have As an immediate consequence of Lemma 5.1 and Remark 5.2, we can show that the fractional α-variation is lower semicontinuous as α → 0 + . 5.2. Strong and energy convergence of ∇ α as α → 0 + . We now study the strong and the energy convergence of ∇ α as α → 0 + . For the strong convergence, we have the following result. (i) If f ∈ α∈(0,1) HS α,1 (R n ), then Remark 5.5. Thanks to Corollary 3.5, Theorem 5.4(i) can be equivalently stated as We prove Theorem 5.4 in Section 5.3. For the convergence of the (rescaled) energy, we instead have the following result. Proof of Theorem 5.4. Before the proof of Theorem 5.4, we need to recall the following well-known result, see the first part of the proof of [37,Lemma 1.60]. For the reader's convenience and to keep the paper as self-contained as possible, we briefly recall its simple proof. Proof. By means of the Fourier transform, the problem can be equivalently restated as follows: if ϕ ∈ S(R n ) satisfies ∂ a ϕ(0) = 0 for all a ∈ N n 0 such that |a| ≤ m, then ϕ(ξ) = n 1 ξ i ψ i (ξ) for some ψ 1 , . . . , ψ n ∈ S(R n ) with ∂ a ψ i (0) = 0 for all i = 1, . . . , n and all a ∈ N n 0 such that |a| ≤ m − 1. This can be achieved as follows. Fixed any ζ ∈ C ∞ c (R n ) such that supp ζ ⊂ B 2 and ζ ≡ 1 on B 1 , we can define for all i = 1, . . . , n. It is now easy to prove that such ψ i 's satisfy the required properties and we leave the simple calculations to the reader. Thanks to Lemma 5.7, we can prove the following L p -convergence result of the fractional α-Laplacian of suitably regular functions as α → 0 + , as well as analogous convergence results for the fractional α-gradient. As a consequence, if p ∈ (1, +∞) and f ∈ S 0 (R n ), then Proof. Let f ∈ S 0 (R n ) be fixed. If p ∈ (1, +∞), then by the L p -continuity of the Riesz transform, so that (5.11) is a consequence of (5.10). To prove (5.10), given x ∈ R n we write is the constant appearing in (2.4). One easily sees that On the one hand, we can estimate Integrating by parts, the reader can easily verify that for all p ∈ [1, +∞]. Hence we get for all p ∈ [1, +∞], so that we obtain (5.10) and (5.11). 
Finally, let f ∈ S ∞ (R n ), so that Rf ∈ S 0 (R n ; R n ), R(Rf ) ∈ S 0 (R n ; R n 2 ) and (−∆) and thus lim α→0 + ∇ α f − Rf H 1 (R n ; R n ) = 0 thanks to (5.10) (which clearly holds also for vector-valued functions). Thus, we obtain (5.12), and the proof is complete. We can now prove Theorem 5.4. Proof of Theorem 5.4. We prove the two statements separately. Proof of (i). Let f ∈ HS α,1 (R n ). By Lemma 2.3, there exists (f k ) k∈N ⊂ S ∞ (R n ) such that f k → f in HS α,1 (R n ) as k → +∞. If β ∈ (0, α), then we can estimate (4.15) in Theorem 4.5(ii) and the H 1 -continuity of the Riesz transform, where c n , c ′ n > 0 are dimensional constants. Thus lim sup (5.12) in Lemma 5.8, where c ′′ n = c n + c ′ n . Hence (5.7) follows by passing to the limit as k → +∞ and the proof of (i) is complete. Proof of (ii). We argue as in the proof of (i). Let f ∈ S α,p (R n ). By Proposition 2.2, (4.13) in Theorem 4.5(i) and the L p -continuity of the Riesz transform, where the constants c n,p , c ′ n,p > 0 depend only on n and p. Thus lim sup (5.11) in Lemma 5.8, where c ′′ n,p = c n,p + c ′ n,p . Hence (5.8) follows by passing to the limit as k → +∞ and the proof of (ii) is complete. Remark 5.9 (Direct proof of (1.14)). The proof of (1.14), i.e., immediately follows from Theorem 5.4(i) and Remark 5.5. As briefly discussed in Section 1.3, one can directly prove (1.14) by combining the interpolation inequality proven in Theorem 4.2 with an approximation argument as done in the proof of Theorem 5.4. We let the interested reader fill in the easy details. 5.4. Proof of Theorem 5.6. We now pass to the proof of Theorem 5.6. We need some preliminaries. We begin with the following result. Lemma 5.10. Let f ∈ L 1 (R n ) and let R ∈ (0, +∞) be such that supp f ⊂ B R . If ε > R, then Proof. Since µ n,α → µ n,0 as α → 0 + , we just need to prove that We now divide the proof into two steps. Thanks to Lemma 5.10, we can prove the following result. We are now ready to prove Theorem 5.6. In the following result, we recall the self-adjointness property of the fractional Laplacian. The following result provides an L p -estimate on translations of functions in S α,p (R n ). It can be stated by saying that the inclusion S α,p (R n ) ⊂ B α p,∞ (R n ) is continuous, where B α p,q (R n ) is the Besov space, see [46, Chapter 14]. For a similar result in the W α,p (R n ) space, we refer the reader to [30]. Thanks to Corollary 2.1, this result can be derived from the analogous result already known for functions in L α,p (R n ). However, the estimate in (B.2) provides an explicit constant (independent of p) that may be of some interest. The proof of Proposition B.2 below can be easily established following the one of [27, Proposition 3.14] (and exploiting Minkowski's integral inequality and Theorem A.1) and we leave it to the reader. Proposition B.2. Let α ∈ (0, 1) and p ∈ [1, +∞). If f ∈ S α,p (R n ), then for all y ∈ R n , where γ n,α > 0 is as in [27, Proposition 3.14]. A similar result holds for the spaces BV α (R n ): indeed, from [27, Proposition 3.14], one immediately deduces that the inclusion BV α (R n ) ⊂ B α 1,∞ (R n ) holds continuously for all α ∈ (0, 1). The next result shows that this inclusion is actually strict whenever n ≥ 2. Proof. By [27, Theorem 3.9], we just need to prove that B α 1,∞ (R n ) \ L n n−α (R n ) ≠ ∅. Let η 1 ∈ C ∞ c (R n ) be as in (4.1) and (4.2), and let f (x) = η 1 (x)|x| α−n for all x ∈ R n . On the one side, we clearly have f ∉ L n n−α (R n ).
On the other side, for all h ∈ R n with |h| < 1, we can estimate where C > 0 is a constant depending only on n and α (that may vary from line to line). Thus f ∈ B α 1,∞ (R n ) and the conclusion follows. We conclude with the following result which, again, can be derived from the theory of Bessel potential spaces. We state it here since our distributional approach provides explicit constants (independent of p) in the estimates that may be of some interest. The proof is very similar to the one of [28, Proposition 3.12] and we leave it to the interested reader.
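Returning for a moment to the function f = η 1 |x| α−n used in the proof above, the two computations behind it can be sketched as follows (a hedged sketch: constants are suppressed and the cut-off is ignored where harmless):
\[
\int_{B_{1/2}} f(x)^{\frac{n}{n-\alpha}}\,dx \;=\; \int_{B_{1/2}} |x|^{-n}\,dx \;=\; +\infty ,
\]
so that f ∉ L n n−α (R n ), while, for 0 < |h| < 1,
\[
\int_{\{|x|\le 2|h|\}} |f(x+h)-f(x)|\,dx \;\lesssim\; \int_{\{|x|\le 3|h|\}} |x|^{\alpha-n}\,dx \;\lesssim\; |h|^{\alpha},
\qquad
\int_{\{|x|> 2|h|\}} |f(x+h)-f(x)|\,dx \;\lesssim\; |h| \int_{\{|x|> 2|h|\}} |x|^{\alpha-n-1}\,dx \;\lesssim\; |h|^{\alpha},
\]
which together give the estimate ‖f(· + h) − f‖ L 1 (R n ) ≤ C |h| α and hence f ∈ B α 1,∞ (R n ).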
10,539.2
2020-11-08T00:00:00.000
[ "Mathematics" ]
A Numerical Method for Computing the Roots of Non-Singular Complex-Valued Matrices A method for the computation of the n th roots of a general complex-valued r × r non-singular matrix ? is presented. The proposed procedure is based on the Dunford–Taylor integral (also ascribed to Riesz–Fantappiè) and relies, only, on the knowledge of the invariants of the matrix, so circumventing the computation of the relevant eigenvalues. Several worked examples are illustrated to validate the developed algorithm in the case of higher order matrices. Introduction Complex-valued matrices are a natural extension of complex numbers, and matrix operations are well known to the reader [1] likely with the only exception of roots' computation. The normal situation for a complex number is that the n th root always has n determinations. The equivalent situation for an r × r matrix A is that the n th root of A should have n r determinations. The problem arises in the case of matrices of a special type, for which the computation of roots is ill-posed in the sense of J.Hadamard, as they admit no roots or, conversely, an infinite number of those. In general, the problem of computing the n th roots of general complex-valued matrices has not received the necessary attention so far. In relation to the simple case of 2 × 2 matrices, a few articles appeared in Mathematics Magazine [2][3][4] and in Linear Algebra and its Applications [5,6]. The Newton-Raphson method was applied by N.J. Higham in [7] for computing the square roots of general matrices, whereas I.A. al-Tamimi [8] and S.S. Rao et al. [9] used the Cayley-Hamilton theorem for computing roots of general 2 × 2 non-singular matrices. A necessary and sufficient condition for the existence of the n th root of a singular complex matrix A was given by P.J. Psarrakos in [10]. Two alternative techniques for the computation of the n th roots of a non-singular complex-valued matrix has been recently proposed. The first method was presented in [11] and is based on the Cayley-Hamilton theorem in combination with the representation of non-singular matrix powers in terms of Chebyshev polynomials of the second kind [12,13]. In this way, it is possible to express the roots of non-singular 2 × 2 or 3 × 3 complex-valued matrices by making use of pseudo-Chebyshev functions [14,15]. Unfortunately, the extension of this technique to higher order matrices can be hardly achieved, owing to the complicated inductive procedure that is necessary for this purpose. A second method, which is referred to as the FKN (F k,n functions)procedure, was described in [16]. This technique can be applied to non-singular r × r (r ≥ 2, r ∈ N) complex-valued matrices, though it is not valid for general matrices, such as nilpotent matrices. The F k,n functions can be expressed by generalized Lucas polynomials of the second kind [17][18][19][20]. Since the considered problem cannot be solved in its generality when dealing with rootless matrices [4], nilpotent matrices (i.e., Jordan blocks), or matrices with infinite roots [21], the present study is devoted to the illustration of a "canonical" method for computing the roots of general matrices in the regular case, so excluding the aforementioned exceptions. In particular, we will show that the FKN expansion can be avoided when representing the n th roots of a non-singular matrix A. As a matter of fact, the problem can be solved in an effective way by making use of the Dunford-Taylor integral, which traces back to previous works by F. Riesz and L. 
Fantappié, as well as a known formula for the resolvent of a matrix reported in [22]. It is shown that the evaluation of the roots of a given non-singular matrix A can be performed only on the basis of the knowledge of the matrix invariants, which are the coefficients of the characteristic equation (or equivalently, the elementary symmetric functions of the eigenvalues), and the relevant spectral radius R, which can be estimated using Gershgorin's theorem. In this way, a numerical quadrature rule can be adopted to compute a contour integral extended to a circle centered at the origin and having a radius larger than R. We also stress that, by using the proposed methodology, one can easily evaluate all the determinations of the root of a given matrix. The paper is organized as follows. First, we recall the formula for the resolvent of a matrix and then apply this formula in combination with the Dunford-Taylor integral to derive an explicit representation of a matrix root. Several worked examples are prepared and presented in the paper to prove the effectiveness of the procedure for arbitrary higher order non-singular complex-valued matrices. To this end, the computer program Mathematica © is used. The Dunford-Taylor Integral The Dunford-Taylor integral [23] is an analogue of the Cauchy integral formula in function theory. It can be applied to holomorphic functions of a given operator. In the finite-dimensional case, an operator is nothing but a matrix A. Theorem 1. Suppose that f (λ) is a holomorphic function in a domain ∆ of the complex plane containing all the eigenvalues λ h of A, and let γ ⊂ ∆ be a closed smooth curve with positive direction that encloses all the λ h in its interior. Then, the matrix function f (A) can be defined by the Dunford-Taylor integral: where (λI − A) −1 denotes the resolvent of A. As an example, given the natural number n, the n th power of A can be evaluated as: It is worth noting that an alternative technique has been presented in the scientific literature to compute matrix powers through the Cayley-Hamilton theorem and the so-called F k,n functions, which are solutions of linear recursions [12,13]. This method is purely algebraic and does not make use of quadrature rules, which are necessary to avoid Cauchy's residue theorem. It is useful to mention that, if A is non-singular, both the Equation (2) and the results reported in [12,13] are still valid for negative values of the exponent n, since the FKN functions can be defined for n < 0 as well [24]. For holomorphic functions of general matrices, the evaluation of the Dunford-Taylor integral is more convenient than the application of Cauchy's residue theorem from a computational standpoint. In fact, such an approach relies, only, on the knowledge of the entries of the considered matrix and the relevant invariants, whereas the knowledge of the eigenvalues is necessary where Cauchy's residue theorem is applied. Recalling the Resolvent of a Matrix The resolvent of an operator is an important tool for using methods of complex analysis in the theory of operators on Banach spaces. The holomorphic functional calculus gives a formal justification of the procedure used. The spectral properties of the operator are determined by the analytical structure of the functional. In this study, we consider the finite-dimensional case, so that the general operator under consideration can be represented as a r × r (r ∈ N) complex-valued matrix A. 
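To make Theorem 1 concrete, the matrix function f (A) = (1/2πi) ∮ γ f (λ)(λI − A) −1 dλ can be approximated by trapezoidal quadrature on a circle that encloses the spectrum, with the radius estimated via Gershgorin's theorem. The following Python/NumPy sketch is only an illustration (function names are ad hoc, and the paper itself uses Mathematica together with the invariant-based resolvent formula (6)); for simplicity the resolvent is obtained here by solving a linear system, and the sanity check uses the entire function f (λ) = λ^3, in the spirit of Equation (2).

import numpy as np

def gershgorin_bound(A):
    # Gershgorin circles: every eigenvalue lies in a disc centred at some a_ii with
    # radius sum_{j != i} |a_ij|, hence |lambda| <= max_i sum_j |a_ij|.
    A = np.asarray(A, dtype=complex)
    return float(np.max(np.sum(np.abs(A), axis=1)))

def dunford_taylor(A, f, center=0.0, radius=None, nodes=512):
    # Trapezoidal quadrature of (1/(2*pi*i)) * contour integral of f(lam) (lam I - A)^{-1} dlam
    # over the circle |lam - center| = radius; the circle must enclose the whole spectrum of A
    # and stay inside a region where f is holomorphic.
    A = np.asarray(A, dtype=complex)
    r = A.shape[0]
    if radius is None:
        radius = 1.1 * gershgorin_bound(A)
    identity = np.eye(r, dtype=complex)
    F = np.zeros((r, r), dtype=complex)
    for k in range(nodes):
        lam = center + radius * np.exp(2j * np.pi * k / nodes)
        # dlam = i (lam - center) dtheta, so the 1/(2*pi*i) prefactor leaves (lam - center)/nodes
        F += f(lam) * (lam - center) * np.linalg.solve(lam * identity - A, identity)
    return F / nodes

# Sanity check with the entire function f(lam) = lam**3, cf. Equation (2)
A = np.array([[2.0, 1.0], [0.5, 3.0]])
print(np.allclose(dunford_taylor(A, lambda lam: lam**3), np.linalg.matrix_power(A, 3)))  # expected: True

Because the integrand is periodic and analytic in a neighbourhood of the circle, the trapezoidal rule converges geometrically, so a few hundred nodes already reach machine precision for small matrices.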
We denote as: the invariants of A, i.e., the coefficients of the characteristic polynomial: which are invariants under similarity transformations. Let: be the roots of P (λ), i.e., the eigenvalues of A. In [22] (pp. 93-95), the following representation of the resolvent (λI − A) −1 in terms of the invariants of A was proven: By using (5) and (6), we can easily derive the representation formula for matrix functions reported in [18]. Theorem 2. Let f (λ) be a holomorphic function in a domain ∆ of the complex plane containing the spectrum of A, and denote by γ ⊂ ∆ a simple contour enclosing all the zeros of P (λ). Then, the Dunford-Taylor integral is written as: It should be noted that, if ∆ does not contain the origin, a simple choice of γ is a circle centered at the origin and having a radius larger than the spectral radius of A, which can be determined using Gershgorin's theorem. Let us consider now the function f (λ) = λ 1/n , where n is a fixed integer number. As this function is always holomorphic in the open set ∆ = (C − {0}), i.e., in the whole plane excluding the origin, the preceding theorem can be re-formulated as follows: Theorem 3. If A is a non-singular complex-valued matrix and γ ⊂ ∆ is a simple contour enclosing all the zeros of P (λ), then the n th roots of A can be represented as: with the general coefficient ξ k given by: Recalling Cauchy's residue theorem [25] and denoting the integrand in (9) as Φ = Φ (λ), the contour integral can be evaluated as: Assuming, for simplicity, that the eigenvalues are all simple and, therefore: we find: where we have put, by definition, (λ − λ 0 ) = (λ − λ r+1 ) := 1. Finally, upon combining (10) and (11), it is trivial to verify that (9) becomes: A similar result can be found in the case of multiple roots of the characteristic polynomial by using the following more general equation, which holds true for a pole of order m at the point λ : Remark 1. Note that the knowledge of the eigenvalues is not necessary unless the integral in (7) is computed by means of Cauchy's residue theorem. In general, only the knowledge of the matrix invariants is sufficient to evaluate the considered contour integral by choosing, as γ, a circle centered at the origin with a radius larger than the spectral radius of A. Examples of computations using Cauchy's residue theorem were given in [16]. Remark 2. Note that, upon selecting the different determinations of the root function of degree n in the formal expression of A 1/n , it is possible to evaluate the various roots of A through Equation (8). Based on Theorem 3, the proposed algorithm for the evaluation of the n th roots of a matrix A can be described through the flowchart illustrated in Figure 1. It is straightforward to verify that the computational complexity of the relevant numerical procedure is O r 4 in the worst case. Square Root of a 6 × 6 Non-Singular Matrix Consider the matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: Putting: we find: It is not difficult to verify that: with O 6 denoting the zero matrix of order six. Cubic Root of a 5 × 5 Non-Singular Matrix Consider the matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: We find: The relevant invariants are: whereas the corresponding eigenvalues are found to be: Putting: we find: It is not difficult to verify that: with O 5 denoting, as usual, the zero matrix of order five. 
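As a purely hypothetical usage sketch (the 6 × 6 and 5 × 5 matrices of the examples above are not reproduced here), the quadrature routine dunford_taylor from the previous sketch can be applied with f (λ) = λ 1/n and the result checked in the same spirit as above, i.e. by verifying that B^n − A is numerically the zero matrix. One caveat: λ 1/n has a branch point at the origin, so in this sketch the contour is a circle centred on the positive real axis and contained in the principal-branch domain, rather than a circle centred at the origin as in the text; the test matrix is built with its spectrum on the positive real axis precisely so that such a circle exists. As in Remark 2, the other determinations of the root are obtained by multiplying f by the n th roots of unity.

import numpy as np

# assumes dunford_taylor() from the previous sketch is available in the same session
n = 3                                      # cube root
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4))
A = Q @ np.diag([1.0, 2.0, 3.5, 5.0]) @ np.linalg.inv(Q)   # non-singular, spectrum {1, 2, 3.5, 5}

def principal_root(lam, k=0):
    # k-th determination of the n-th root: principal branch times an n-th root of unity
    return np.exp((np.log(lam) + 2j * np.pi * k) / n)

# circle centred at 3 with radius 2.9 encloses {1, 2, 3.5, 5} and avoids the branch cut (-inf, 0]
B = dunford_taylor(A, principal_root, center=3.0, radius=2.9, nodes=2048)
print(np.allclose(np.linalg.matrix_power(B, n), A))        # expected: True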
Fifth Root of a 4 × 4 Non-Singular Matrix Consider the matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: Putting: we find: It is not difficult to verify that: with O 4 denoting the zero matrix of order four. Square Root of a 3 × 3 Non-Singular Matrix Consider the complex-valued matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: By taking the various determinations of the square root function, we find: For each one of the n r = 8 roots, it is not difficult to verify that: with O 3 denoting the zero matrix of order three. Conclusions A general method for computing the n th roots of complex-valued matrices was detailed. The proposed procedure was based on the application of the Dunford-Taylor integral in combination with a suitable representation formula of the matrix resolvent. In this way, it was possible to overcome the limitations of the techniques already available in the scientific literature, in terms of the matrix order, as well as of the root degree. The presented approach provided an effective means for evaluating all the determinations of the root of a given matrix with a reduced computational complexity and burden since it relied, only, on the knowledge of the matrix invariants, so circumventing the need for computing the relevant eigenvalues, while the spectral radius could be estimated using Gershgorin's theorem. Several worked examples were provided to prove the correctness of the procedure for higher order real-and complex-valued matrices.
2,851.8
2020-06-05T00:00:00.000
[ "Mathematics" ]
Introduction to Energy Systems Modelling SummaryThe energy demand and supply projections of the Swiss government funded by the Swiss Federal Office of Energy and carried out by a consortium of institutes and consulting companies are based on two types of energy models: macroeconomic general equilibrium models and bottom-up models for each sector. While the macroeconomic models are used to deliver the economic, demographic and policy framework conditions as well as the macroeconomic impacts of particular scenarios, the bottom-up models simulate the technical developments in the final energy sectors and try to optimise electricity generation under the given boundary conditions of a particular scenario. This introductory article gives an overview of some of the energy models used in Switzerland and — more importantly — some insights into current advanced energy system modelling practice pointing to the characteristics of the two modelling types and their advantages and limitations. Introduction Controversial discussions in energy science and energy policy communities about the perspectives, feasibility and impacts of future energy demand and supply can often be traced back to the different types of energy models used and their results (e.g. Krause 1996). Detailed techno-economic (or process-oriented) models can simulate the market penetration and related cost changes of a new energy technology or policy with a certain degree of technical detail (which is why they are called "bottom-up" models). However, they cannot project the corresponding economic, structural, or employment net impacts or net cost for society. The results of these models are often cited by environmentally-concerned scientists, NGOs and politicians to elucidate the feasibility of major changes to the energy system, particularly in the context of urgent and extensive change of the mainly fossil fuelled energy systems in almost all countries. On the other hand, macroeconomic models (also called top-down models) can simulate sector-specific future energy demand and supply including the impacts on economic growth, employment or foreign trade. However, they rely very much on energy price changes and financial policies and are not well suited to describe the development of specific technologies or sectoral policies and related changes in energy demand, related emissions, and investments at a sufficiently detailed level. They may also reflect rather constant trends in structural changes of the economy and, to an unsatisfactory extent, saturation processes and innovations. The results of these models are often cited by representatives of trade associations, large energy-intensive or energy supply companies, and conservative politicians. The present discussions in Switzerland or Germany are good examples, where anti-nuclear NGOs refer to the feasibility of a non-nuclear, low carbon Swiss energy system and the German government has re-decided for a phase out in the beginning 2020s by citing the results of bottom-up models. On the other hand, economiesuisse (the Swiss Business Federation) and other Swiss trade associations as well as the BDI (the German Trade Association of Industry) point to the high and unacceptable economic risks of such an energy system by referring to macroeconomic results of top-down models such as losses in economic growth, employment and competitiveness. Each of the two types of models has its specific advantages and limitations. 
This paper addresses in greater detail some of the challenges that have to be met when combining the two types of models via electronically-based modules (a so called 'hard' link). There is no doubt that combining the advantages of both types of energy models into a hybrid model will more adequately describe and project the changes of the energy system, which may improve the quality of discussions on future energy perspectives by being able to differentiate among technologies and sectors, and to analyse the macroeconomic implications of large policy portfolios and major changes of the energy system in a consistent and transparent manner (Böhringer, 1998). Overview of Energy Modelling Approaches -State-of-the-Art Energy models are used to project the future energy demand and supply of a country or a region. They are mostly used in an exploratory manner assuming certain developments of boundary conditions such as the development of economic activities, demographic development, or energy prices on world markets. They are also used to simulate policy and technology choices that may influence future energy demand and supply, and hence investments in energy systems, including energy efficiency policies. However, policy and technology choices induce a dilemma in the choice of energy model (Böhringer, 1998). Detailed techno-economic (or process-oriented) models were first developed in the early 1970s, particularly after the first oil crisis in 1973, when analysts started to examine the options of oil use and the more efficient use of final energies. Modern macroeconomic energy models have their origin in the late 1950s, when energy supply companies and energy administrations had to make decisions about the future energy supply to meet the rising energy demand of the rapidly developing OECD countries. Every modelling approach abstracts to a certain degree from reality using stylized facts, statistical average figures, past trends as well as other assumptions. Consequently, energy models represent a more or less simplified picture of the real energy system and the real economy; at best they provide a good approximation of today's reality. Nevertheless, it would be impossible to answer very specific questions on energy technologies or economic implications without making some cut backs and approximations, with an uncertain reliability on quantitative figures used by those models. A large diversity of modelling approaches has been developed over time depending on their target group (policy makers, scientific and research communities, large energy supply companies), intended use (data analysis, ex post evaluation, forecasting, simulation, optimization, estimation of parameters, etc.), regional coverage (regional, national, multinational), conceptual framework (top-down: underlying economic theory, bottom-up: technological focus/explicitness) and the information available (data on final energy, useful energy, energy demand by branches in the service, transport, or industrial sector). Obviously, both types of energy models, top-down and bottom-up, have specific advantages and limitations, of which modellers, users of the results, and policymakers are, however, often not sufficiently aware. Bottom-up models are generally constructed and used by engineers, natural scientists, and energy supply companies, whereas top-down models tend to be developed and used by economists and public administrations. 
The understanding of the two approaches has increased substantially over the last decade (Böhringer and Rutherford, 2006, 2008; Hourcade et al., 2006). The first few attempts to combine both approaches in one hybrid energy model system have only been made during the past decade, probably because of the lack of interdisciplinary research teams and of the larger funds necessary for their operation (Hourcade et al., 2006; Schade et al., 2009; Catenazzi, 2009). Recent or current projections and studies of energy demand and supply using energy models (E3Mlab, 2007; BFE, 2007; WWF, 2009; IEA, 2010; Prognos, 2011) are not just made for routine decisions by decision makers in administrations or large energy companies; they also increasingly serve as a scientifically derived information basis for societal debate among governments, energy companies, trade associations, and NGOs. The recent discussions about greenhouse gas emission targets (and target sharing) in EU Member countries, phasing out nuclear energy after Fukushima in some European countries and Japan, and the speed of introducing renewable energies and realising energy efficiency potentials have been increasingly influenced by the results taken from various energy demand and supply models developed during the last two decades. The following sections describe selected top-down and bottom-up models, their main advantages and limitations as well as examples of their implementation. These descriptions are also intended to shed some light on the existing preferences of the various stakeholders in the energy field and in the economy. Top-Down Energy Models Top-down energy models try to depict the economy as a whole on a national or regional level and to assess the aggregated effects of energy and/or climate change policies in monetary units. In contrast to bottom-up modelling, these equation-based models take an aggregated view of the energy sectors and the economy when simulating economic development, related energy demand and energy supply, and employment. Driven by economic growth, inter-industrial structural change, demographic development, and price trends (rather than energy-related technological progress, innovations, or intra-industrial structural change), macroeconomic models try to equilibrate markets by maximizing consumer welfare using various production factors (labour, capital, etc.) and applying feedback loops between welfare, employment, and economic growth. Currently, macroeconomic energy models are often being used to evaluate the economic costs and environmental effects of general energy or climate policy instruments, such as energy or CO 2 taxes or surcharges, emission trading schemes (ETS), feed-in tariffs of renewable energies, etc. (Bataille, 2005). In the past, conventional top-down energy models considered technology developments mainly in the context of price-based policies (taxes, surcharges, or investment subsidies) and regulatory policies (technical standards, bans, and technological targets). In current top-down modelling approaches, efforts are made to extend the energy demand forecasting framework of the existing models to include technological and economic feedbacks (Löschel, 2002; Böhringer and Löschel, 2006) as well as non-price policies (technical standards, norms, etc.; Worell et al., 2004).
A good example in this context is the global optimization model MERGE which combines a top-down approach to model economy and energy demand with a bottom-up approach to depict the energy sector (Manne and Richels, 2004).The following subsections present four different types of top-down models: input-output models, econometric models, computable general equilibrium models and system dynamics. Input-Output Models The traditional Input-Output Analysis is based on Francois Quesnays Tableau économique (1758) and Leon Walras and Wassily Leontiefs Input-Output Economics (1966). Used for a structural description of the regarded economy, it describes the total flow of goods and services of a country subdivided into different sectors and users in terms of value added and specific input/output coefficients. Input-output tables are more suitable for short-term evaluation of energy policies rather than long-term ones as they can only give a current picture of the underlying economic structure based on historical data (Catenazzi, 2009). For Switzerland, developed a two-step approach for generating a balanced input-output table based on the European Union inputoutput table structure (Eurostat, 2008). By using country comparisons (particularly with Austria and Germany) and the cross-entropy method, tackled data limitations concerning the Swiss economy and generated a good overall picture of its sectoral interdependencies. However, there is still room for general model improvements relating to deficient data as well as on a sectoral level for specific industries (e.g. paper industry) where major uncertainties have been observed. Various studies additionally introduced energy issues into the economic framework of input-output analysis. For example, showed for the special case of Germany that the application of input-output tables could be useful to examine the interrelationships between material use and the energy demand of an economy with regard to material efficiency improvements using sectorspecific energy intensities. Another example is the hybrid energy input-output table of the German Federal Statistical Office within the Environmental-Economic Accounting for Germany (Mayer, 2007) or the input-output analysis of the United Nations (United Nations, 1999). Econometric Models Econometric analysis has been defined as a combination of economic theory, mathematical tools and statistical methods (Tinter, 1953). Early (basic) econometrics was aimed at testing economic theory (estimation of economic relationships) using empirical evidence. However, over the course of time the requirements made of econometrics increased from pure hypothesis testing to the development of complex econometric models. Additionally, most of the econometric energy models are open-ended, growth-driven macro econometric models using/analysing time series data on a higher level of aggregation, e.g. output, etc. with no assumption of equilibrium, while cross section and panel data tend to be applied more in micro econometrics (Wooldridge, 2002;Greene, 2003). One major disadvantage of econometric models is their heavy reliance on data. To be able to generate credible results, econometric models need huge amounts of data for fairly long time periods. 
There may be a general problem of data availability for the modellers in the case of small macro econometric models, which will probably be exacerbated in the case of multi-country analyses where data for some countries might not be available at all or not comparable due to national accounting or census differences. In this context the credibility and adequacy of data represent additional uncertainties for model quality (e.g. rough guesses, methods of estimation, etc.). E3ME (CAMECON, 2011) is an annual macro econometric model simulating GDP at the level of 41 branches for all EU Member States, with implications for employment, gross value added, prices and several other economic, energy and environmental variables. It has been developed by Cambridge Econometrics with the aim of addressing the long-term effects of energy-economy-environment (E3) policies at the European level, especially those concerning investment, R&D, and environmental taxation and regulation. Computable General Equilibrium Models Historically, Computable General Equilibrium (CGE) models have their origins in the general equilibrium theory developed by Léon Walras (Éléments d'économie politique pure, ou théorie de la richesse sociale, 1874) in the 1870s, Vilfredo Pareto (Manuale d'economia politica, 1906) in 1906 and Kenneth Arrow and Gerard Debreu (The Existence of an Equilibrium for a Competitive Economy, 1954) in the 1950s. However, current CGE models may use different approaches to analyse policy implications for economies (e.g. Keynesian models). Generally, these kinds of models assume that all markets are in perfect equilibrium to start with (no excess demand or supply, no obstacles to profitable potentials of energy efficiency). CGE models use Social Accounting Matrices (SAM) to represent their benchmark data in equilibrium. After policy intervention (e.g. introduction of special taxes or subsidies, etc.), the equilibrium is preserved by price adjustments which cannot be influenced by the involved agents (e.g. households, firms, and government) as they act as price takers and try to maximize their welfare or profits under certain constraints and quantity adjustments. In practice, researchers and international institutions commonly use CGE models for long-term simulations such as the GEM-E3 model of the European Commission, the GTAP model consortium, or the World Bank models. Through their equilibrium approach, CGE models rule out energy efficiency gaps and adjustment delays and consequently neglect the importance of market failures and obstacles. Additionally, like the majority of pure macroeconomic models, CGE models do not take technological details into account which might be important for the assessment of certain policy measures (see Hourcade et al., 2006). The recursive dynamic GEM-E3 CGE model simulates interactions between the economy, the energy system and the environment as well as the macroeconomic effects of environmental policies (taxes, standards, tradable permits, etc.) for 15 (or 27) European countries and considering four economic agents (households, firms, governments and foreign trade) (Capros et al., 1996a, 1996b). On the supply side, the market consists of five production sectors, agriculture, energy, manufactured goods and services (divided into 18 branches), with labour and capital as input factors. Technological progress enters the GEM-E3 model through its production function, either modelled endogenously or taken from outside the model (expert estimates, etc.).
On the demand side, 13 categories of durable and non-durable consumption goods enter the markets. In addition, foreign trade (imports and exports) enters the model under the "Armington" assumption of imperfect substitutes. Extensions were made to the basic GEM-E3 model to cover macroeconomic effects under imperfect competition as well as geographical expansion (Eastern European countries, Switzerland) (CES, 2008). GEM-E3 Switzerland was developed by Bahn and Frei (2000) and was later merged into the European GEM-E3 model for the European Union. Their main aim was to compile a reliable Swiss database for GEM-E3 as well as to evaluate different CO 2 -emission reduction strategies for Switzerland. Another CGE model of that family is the GEMINI-E3 model (developed by the French Ministry of Equipment and the French Atomic Energy Agency). This model family consists of several CGE models, one of them specifically tailored to Switzerland computing the costs of the Kyoto Protocol for Switzerland with and without international emissions trading. GEMINI-E3 models indirect taxation and social contribution rates in great detail and puts a special focus on the measurement and analysis of the welfare cost of policies (Bernard and Vielle, 2008). System Dynamics The modelling concept of System Dynamics (SD) was developed by Forrester (1958, 1962, 1971, 1980) in the 1950s at the Massachusetts Institute of Technology (MIT) and used to analyse the long-term behaviour of social systems like large industrial companies or entire cities. The aim here is to explain the behaviour of an interacting social system as a result of the assumed interdependencies considering dynamic changes over time (differential equations and analysis) among the various components that constitute the defined system. The model developer defines flows, stocks, and central components of the defined system, whose interconnections are established by feedback control systems or feedback loops represented by non-linear differential equations. Another element which is integrated in system dynamics theory and methodology concerns the decision theory of (designed) modelled complex social systems. In addition, information technologies for computational analysis and a graphical interface to represent feedback loops are typical features of system dynamics models and form the basis for discussions of the analysed social systems by interdisciplinary teams (Krail and Schade, 2010). The development over time of the defined systems is described using differential analysis with mathematical formulations. Forrester and his associates developed software tools in order to use difference equations to calculate the dynamic equations of the feedback system and its development over time incorporating expert judgements. These programmes provide the capabilities to follow an experimental modelling approach if no analytical or data-based solutions are available. Examples of system dynamics approaches applied to analyse long-term developments in the energy sector include the TIME-model investigating long-term structural developments within the worldwide energy system (de Vries et al., 1999), the POLES model replicating the whole energy system (Russ and Criqui, 2007) and the ASTRA model in the transport sector (Schade, 2004) that was extended to macroeconomic structures (Krail and Schade, 2010).
The macroeconomic module integrates neoclassical production functions with Keynesian consumption and investment behaviour and with elements of endogenous growth theory to incorporate technological progress. Drawbacks of system dynamics relate to the validation and calibration of the assumed feedback loops, in particular with reference to the modelling of longterm developments in the energy systems (Fichtner et al., 2003) and also the inability to make detailed analyses and projections of sectoral technologies. Bottom-Up Energy Models The main characteristic of a conventional bottom-up energy model is its relatively high degree of technological detail (compared to top-down energy models) used to assess future energy demand and supply. In contrast to top-down models, bottom-up models use a business economics approach for the economic evaluation of the technologies simulated. They usually cannot consider macroeconomic impacts of energy or climate policies or related investments. They do not consider transaction costs which are implicitly covered by top-down models. Their technological detail and transparency considering technological progress and the diffusion of new energy technologies make bottom-up energy models unsuitable for very long-term energy demand and supply projections in technology areas with re-investment cycles of less than 20 years (e.g. future generations of a new technology may be quite different from its present type). Regarding the mathematical form, bottom-up energy models have been developed in the form of simulation or optimisation models, and more recently of multi agent models (see below). Bottom-up modellers try to identify the best technologies by assessing policies, their effects, investment, costs, and benefits, by calculating external benefits (e.g. environmental, etc.) of energy efficiency measures, by identifying synergy-effects between sectors, and sectoral costs and surpluses. However, most bottom-up models limit their investment and cost calculations to the conversion sector and cross cutting technologies in the final energy sectors if it comes to energy efficiency options. This fact is often overlooked and leads to questionable conclusions in cases of scenario comparisons and model comparisons. Partial Equilibrium Models Partial equilibrium models do not differ greatly from the already mentioned CGE models as the framework and mechanisms are very similar. However, partial equilibrium models assess only one sector or a certain subset of sectors. Partial equilibrium energy models focus on energy demand and supply. By neglecting certain interrelations and effects on the broader economy, they can include many more technological details than conventional CGE models. Important partial equilibrium models include, amongst others: the POLES (Prospective Outlook on Long-term Energy System) model from Enerdata, the WEM (World Energy Model) of the International Energy Agency, and the PRIMES Energy System Model of the European Commission. It should be pointed out, however, that the cited models are not pure partial equilibrium models because they already attempt to bridge macroeconomic and process-oriented approaches, e.g. by combining explicit technology choices with microeconomic relationships. The POLES model analyses the international energy markets for seven world regions, eleven sub-regions and 32 countries, considers about 40 technologies of power and hydrogen production and the final energy sectors in some detail. 
It is based on a recursive simulation process in which the energy demand and supply for each national or regional module reacts to international price changes in the previous period. Each module considers not only price effects but also technological and economic constraints and trends (Enerdata, 2011). The WEM is a large-scale mathematical model that generates medium- to long-term sectoral and regional projections of energy demand, power generation, etc. on a global and regional level. The model consists of several demand modules (final energy demand of industry, transport, the residential sector and services), a refinery module, a power generation module, three fossil fuel supply modules (gas, oil, and coal) as well as a module which calculates, amongst others, the CO2 content factors for coal, oil and gas for different sectors and regions. Additionally, the model estimates supply-side investments as well as the net change to demand-side investments based on its energy supply and demand projections for three scenarios (IEA, 2011). PRIMES is used to analyse, for example, the impacts of carbon emission trading and of renewable and energy efficiency policies on energy markets by simulating a market equilibrium for energy demand and supply up to 2030 within each of the EU Member States, including endogenous energy price formation. The model consists of 11 sub-models, including several demand- and supply-side modules, and, in contrast to other energy models, has more recently utilised agent-based objective functions which, however, are not publicly documented (E3Mlab, 2007). Optimisation Models Optimisation models try to define the optimal set of technology choices to achieve a specific target at minimised costs under certain constraints, leaving prices and the quantities demanded fixed at their equilibrium values. The MARKAL model analyses energy demand and supply on a country level using a bottom-up, dynamic modelling approach. Like the above-mentioned partial equilibrium models, MARKAL already combines a detailed bottom-up model with a simplified macroeconomic approach. It was developed by the International Energy Agency and designed to support policymakers by providing them with detailed information about energy technologies on the demand and (mostly) supply sides. According to ETSAP (2011), MARKAL aims at identifying a least-cost energy system with cost-effective responses to restraints on emissions. Additionally, price-based policies (taxes, etc.) as well as new technologies and trends in technological change are evaluated and the degree of regional cooperation is estimated. Today, there are several versions of the original MARKAL model including a small macroeconomic model, a microeconomic model, and various added features such as endogenous energy demand projection, responsiveness to price changes, trade of emission permits, and uncertainties regarding endogenous technology learning (Seebregts et al., 2002; Loulou et al., 2004). The TIMES model (The Integrated MARKAL-EFOM System) is one of these MARKAL family models and is based on the same modelling approach as the conventional MARKAL model used for overall- and single-sector analysis of the energy market. In comparison with the usual MARKAL model, the TIMES model provides some special features: flexible time periods, data decoupling, process generality, flexible processes, commodity-related variables, climate equations, etc. (ETSAP, 2005). 
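The least-cost logic that MARKAL/TIMES-type models apply at a much larger scale can be illustrated with a deliberately tiny linear programme: choose generation from a few typified technologies so that demand and an emission cap are met at minimum cost. The technologies, costs, and the cap below are invented for the example, and the formulation omits everything (time slices, capacity expansion, trade) that makes the real models useful.

```python
from scipy.optimize import linprog

# Toy least-cost dispatch: three technologies, one demand constraint, one CO2 cap.
# All numbers are illustrative assumptions.
techs = ["coal", "gas", "wind"]
cost = [30.0, 50.0, 60.0]        # EUR per MWh
emis = [0.9, 0.4, 0.0]           # t CO2 per MWh
potential = [80.0, 60.0, 40.0]   # maximum generation per technology, TWh
demand = 100.0                   # TWh that must be served
co2_cap = 45.0                   # Mt CO2 allowed (1 TWh * 1 t/MWh = 1 Mt)

# Minimise cost subject to: total generation >= demand, emissions <= cap, bounds.
res = linprog(
    c=cost,
    A_ub=[[-1.0, -1.0, -1.0], emis],   # -sum(x) <= -demand ; emis . x <= cap
    b_ub=[-demand, co2_cap],
    bounds=list(zip([0.0] * 3, potential)),
    method="highs",
)

for name, x in zip(techs, res.x):
    print(f"{name:5s}: {x:6.1f} TWh")
print(f"total cost: {res.fun:.0f} million EUR")
```

With these assumptions the cheap but emission-intensive technology is pushed back by the CO2 cap; shadow prices of the constraints are what give such models their policy interpretation.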
Euro MM (European Multi-regional MARKAL), another offspring of the MARKAL model family, is a multi-country energy system optimisation model which evaluates policy and climate change impacts on the energy conversion sector by calculating the least-cost solutions for the energy system (Schade et al., 2009). Another energy supply optimisation model frequently cited in the literature is MESSAGE (Model for Energy Supply Strategy Alternatives and their General Environmental Impact), developed by the Austrian International Institute for Applied Systems Analysis (IIASA), covering 11 regions and computing the evolution of the energy sector up to the year 2100 (Messner and Strubegger, 1995). The DIME (Dispatch and Investment Model for Electricity markets in Europe) model is designed as a linear optimisation model for medium- and long-term forecasting of the European (13 Central and Western European countries including Switzerland) electricity generation market, covering 11 technologies for electricity generation. Based on the assumption of a competitive power generation market, it minimises costs and is applied to simulate allocation as well as investment decisions regarding the supply side of the electricity sector (EWI, 2011). The use of optimisation models is limited to discrete energy conversion technologies and typified energy uses (such as cars or different types of insulated houses), as information on investment and operating cost is needed for the optimisation. This central requirement of optimisation models limits their application to certain technological areas and final energy sectors. It is impossible, for instance, to simulate the energy demand of the service and industrial sectors in full detail due to their technological variety, for which cost information cannot be made available. In addition, optimisation models neglect severe market imperfections and obstacles in many final energy sectors and also in the conversion sector (e.g. co-generation), leading to unrealistically low projections of energy demand. Simulation Models Simulation models aim to replicate consecutive rules that describe the associations and interrelationships among various system elements, i.e., simulation models attempt to provide a descriptive, quantitative illustration of energy demand and conversion based on exogenously determined drivers and technical data with the objective to model observed and expected decision-making that does not follow a cost-minimising pattern. Replicating final user behaviour in a simplified manner by modelling their technological choices is based on the variation of pre-defined drivers (e.g. income, population, employees, living area, mileage, government policies, energy prices, etc.). These drivers are correlated with the general economic and demographic development (i.e. scenarios) as well as other boundary conditions (e.g. energy and climate change policies). Traditionally, the operation and planning of the electricity sector have been simulated using cost minimisation models (see Section 2.2.2). However, these models are not well suited to the more recent framework which developed due to the electricity sector being restructured and liberalised in most countries (Linares et al., 2008), and therefore several scientists have developed models that also consider imperfect competition, as reviewed in Ventosa et al. (2005). 
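A minimal sketch of the driver-based simulation logic just described: final energy demand is rolled forward from exogenous drivers (population, per-capita income, energy price) with assumed elasticities and an autonomous efficiency trend. The drivers, elasticities, and the single-equation form are invented for illustration and are far simpler than POLES, WEM, or PRIMES.

```python
# Driver-based demand simulation:
# E_t = E_{t-1} * (driver growth with elasticities) * (1 - efficiency trend).
# Illustrative numbers only.
energy = 800.0            # final energy demand in the base year, PJ (assumed)
pop_growth = 0.008        # population growth per year (assumed)
income_growth = 0.015     # per-capita income growth per year (assumed)
price_growth = 0.02       # real energy price growth per year (assumed)
eps_income = 0.6          # income elasticity of demand (assumed)
eps_price = -0.25         # price elasticity of demand (assumed)
efficiency_trend = 0.007  # autonomous efficiency improvement per year (assumed)

for year in range(2011, 2031):
    energy *= (1 + pop_growth)                    # scale effect of population
    energy *= (1 + income_growth) ** eps_income   # activity effect
    energy *= (1 + price_growth) ** eps_price     # price response
    energy *= (1 - efficiency_trend)              # technology/efficiency trend
    if year % 5 == 0:
        print(f"{year}: {energy:6.1f} PJ")
```

Everything here is exogenous except the demand itself, which is precisely the property that makes such simulation models transparent but dependent on the plausibility of their scenario drivers.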
Simulation models are flexible and allow aspects such as strategic behaviour or the absence of complete information to be integrated, which helps to mirror market imperfections and failures. Well-known examples of this category of simulation model include system dynamics (SD) and agent-based simulation models (see Section 2.2.4). Specific examples include the Residential End-Use Energy Planning System (REEPS); the World Energy Model (WEM); Mesures d'Utilisation Rationnelle de l'Energie (MURE); and the National Energy Modelling System - Residential Sector Demand Module (NEMS-RSDM) (Mundaca and Neij, 2009). Game theory and accounting framework modelling approaches are also considered to be types of simulation models (Sensfuss, 2008), but are likewise limited to the energy conversion sector. Game theory simulation methods concentrate on the interaction of players on energy markets (strategic decisions) and are commonly used in energy conversion modelling for market design aspects and market power analysis, especially with respect to the analysis of stable equilibria (Nash equilibrium). In addition, models such as Cournot, Bertrand and Supply Function Equilibria are simulation approaches employed to research oligopolistic electricity markets. A recent model application for electricity sector analysis is the hybrid Bertrand-Cournot model, where Yao and Oren propose a simulation model of a simplified electricity sector with special emphasis on the transmission situation and prices and the analysis of market power (Yao et al., 2010). Accounting frameworks can be considered to be a simple form of simulation model which aims to account for the physical and economic flows of the energy system (Heaps, 2002; Mundaca and Neij, 2009). Instead of explicitly modelling players' decisions, this type of model accounts for the outcomes of the assumed development (i.e. of a scenario as a consistent bundle of boundary conditions or of a penetration of a particular new technology) in a descriptive manner (e.g. development of technologies resulting from re-investments of new generations of technologies; description of the present routines in decision making in energy technologies) or in a prescriptive manner (e.g. impacts from highly efficient technologies or renewable energies resulting from one or various policy instruments). This approach is commonly applied to project future energy demand of final energy sectors and the related emissions. Examples of accounting frameworks include models such as Long-Range Energy Alternatives Planning (LEAP); National Impact Analysis (NIA); Bottom-Up Energy Analysis System (BUENAS); Model for Analysis of Energy Demand (MAED); and the Policy Analysis Modelling System (PAMS). Due to their simple structure, accounting frameworks are not commonly applied to simulate decision processes. Multi-Agent Models Multi-agent modelling is a simulation approach which considers market imperfections such as strategic behaviour, asymmetric information and other non-economic influences. The concept and architecture of multi-agent models is derived from the distributed artificial intelligence concept, whose application has been greatly extended across several research areas (e.g. macro level complexities) since the early 1990s. 
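Before turning to agent-based examples, the Cournot-type equilibrium analysis mentioned above can be made concrete with a toy two-firm electricity market: a linear inverse demand curve, constant marginal costs, and iterated best responses that converge to the Nash equilibrium quantities. All numbers are invented; real market-power studies add networks, many firms, and uncertainty.

```python
# Toy Cournot duopoly: inverse demand p(Q) = a - b*Q, constant marginal costs.
# Firms repeatedly play the best response to the other firm's last quantity.
a, b = 100.0, 0.5          # demand intercept and slope (assumed)
mc = [20.0, 30.0]          # marginal cost of firm 0 and firm 1 (assumed)

def best_response(q_other, own_mc):
    """Profit-maximising quantity against q_other: q = (a - own_mc - b*q_other) / (2b)."""
    return max((a - own_mc - b * q_other) / (2 * b), 0.0)

q = [10.0, 10.0]
for _ in range(200):
    q = [best_response(q[1], mc[0]), best_response(q[0], mc[1])]

price = a - b * sum(q)
print(f"Cournot quantities: q1={q[0]:.1f}, q2={q[1]:.1f}, market price={price:.1f}")
```

With these assumptions the iteration settles at q1 = 60, q2 = 40 and a price of 50, above marginal cost, which is the kind of mark-up that market-power analyses of liberalised electricity markets try to quantify.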
Advances in computational methods and resources and in complex, multi-disciplinary ecological and natural resource research methodologies, combined with progress in more specialised statistical approaches, have allowed researchers to expand the use of agent-based modelling, especially towards decision- and policymakers (Foley et al., 2005; Heemskerk et al., 2003). Agent-based models are considered to be more than just innovative research tools for analysing complex systems, but are also regarded as an instrument for end-users to improve decision-making as well as to test specific policies and project alternative scenarios and futures (Alexandridis and Pijanowski, 2006). An important aspect with respect to micro-level interactions relates to the role of the defined agents as well as to the decisions and interactions between heterogeneous actors in the system. In this respect, agents have in common the ability to act autonomously, interact with other agents, react to the environment, and take the initiative to act (Wooldridge, 1995, 2009). Agent-based models applied to the electricity sector are widespread in the literature. Traditionally, they tended to focus on operational aspects rather than on long-term simulations, until recently, when some agent models have been applied to long-term planning on the reasoning that capacities are built up as the result of investment decisions. Fichtner et al. (2003) suggest the combined application of an agent-based approach and a linear optimisation model for strategic planning patterns of electricity suppliers in liberalised markets. Further examples in this field include the research carried out by Wittmann (2008), who developed an agent-based model of energy investment decisions in urban energy systems with a focus on decentralised converting technologies. In addition, the tool PowerACE was also developed within an agent-based platform in order to analyse the German electricity market, focusing on three main topics. First, Sensfuss (2008) and Sensfuss et al. (2008) analysed the impact of renewable electricity generation on the electricity market. Another part of the model investigates the role of learning algorithms in price building mechanisms on the electricity market and the impact of market structure and design on electricity prices (Weidlich and Veit, 2008). The third topic looked at long-term developments in terms of investment decisions in the conventional power sector (Genoese et al., 2007) and market power (Möst and Genoese, 2009). So far, multi-agent models are limited to applications in energy conversion technologies and a few applications in final energy sectors (e.g. Jochem, 2009). One major obstacle to developing and using multi-agent models is the enormous demand for additional empirical data in order to simulate the behaviour of the different agents. Top-Down versus Bottom-Up Between the late 1970s and the 1990s many debates about the quality of energy demand projections and the related impacts on the economy and society could be witnessed at international conferences, energy symposia, or seminars (Krause, 1996; Manne and Richels, 1990). There was insufficient understanding of the strengths, weaknesses, and limitations of the top-down and bottom-up models used by the different disciplinary communities, but with the increasing interdisciplinary competence of energy research teams over the last decade there has also been a growing demand to combine the two approaches and, hence, their advantages. 
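Before comparing the two model families, the basic mechanics behind agent-based electricity market tools of the kind described above (individual plant agents submitting bids, a uniform-price clearing against demand) can be sketched as follows. The agents, bids, and mark-up rule are invented for illustration and do not reproduce PowerACE or any of the learning algorithms discussed above.

```python
# Minimal agent-based market sketch: each plant agent bids its capacity at
# marginal cost plus an individual mark-up; the market clears at the price of
# the last unit needed (uniform pricing). Illustrative numbers only.
from dataclasses import dataclass

@dataclass
class PlantAgent:
    name: str
    capacity: float        # MW
    marginal_cost: float   # EUR/MWh
    markup: float          # EUR/MWh, the agent's strategic component

    def bid(self):
        return (self.marginal_cost + self.markup, self.capacity)

agents = [
    PlantAgent("wind", 300, 0.0, 0.0),
    PlantAgent("lignite", 500, 25.0, 2.0),
    PlantAgent("ccgt", 400, 45.0, 5.0),
    PlantAgent("oil_peaker", 200, 90.0, 10.0),
]
demand = 1000.0   # MW to be served in this hour (assumed)

# Merit order: sort bids by price, dispatch until demand is met.
order = sorted(a.bid() + (a.name,) for a in agents)
served, clearing_price, dispatch = 0.0, None, []
for price, cap, name in order:
    take = min(cap, demand - served)
    if take <= 0:
        break
    dispatch.append((name, take))
    served += take
    clearing_price = price

print("dispatch:", dispatch)
print(f"uniform clearing price: {clearing_price} EUR/MWh")
```

In a full agent-based model the mark-ups would be updated by learning rules and the capacities by investment decisions, which is where the data requirements mentioned above become demanding.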
Strengths, Weaknesses and Limitations In relation to the above-mentioned models one can generalise certain advantages and disadvantages for top-down and bottom-up modelling. One major advantage of top-down energy models is their application of feedback loops to welfare, employment, and economic growth. This endogenous assessment of economic and societal effects results in higher consistency and facilitates a comprehensive understanding of energy policy impacts on the economy of a country or region. On the other hand, top-down models suffer from a lack of technological detail and deliver rather generalised information. Consequently, they might not be able to give an appropriate indication of technological progress (as they do not directly model technological change; this is only considered via substitution elasticities), non-monetary barriers to energy efficiency, or specific policies for certain technologies or branches. Especially in the long run, when substantial technological change, saturation, and intra-sectoral structural change can be expected and have to be included in a plausible model, top-down models are not suited to showing credible technology futures in a transparent manner. Furthermore, driven by the assumption of efficiently allocating markets, top-down modelling approaches tend to underestimate the complexity of obstacles and their non-monetary form, like lack of knowledge, inadequate decision routines, or group-specific interests of technology producers or wholesalers. CGE models assume that any policy implies additional cost, although highly profitable (but unrealised) investments in energy efficiency may reduce cost and increase profits and tax income. Transaction costs are only implicitly covered and cannot be changed by relevant policies such as technical standards or energy efficiency networks. Finally, as they are focused on monetary terms, they consequently tend to favour monetary-related policies, e.g. price-based policies (taxes, subsidies, etc.), emission certificates, and regulatory (bans and rules) policies (Hourcade et al., 2006). In contrast to macroeconomic modelling, bottom-up modelling approaches incorporate a high degree of technological detail which enables them to present very detailed pictures of energy demand and energy supply technologies, as well as plausible technology futures. Bottom-up models can also give detailed evaluations of sector- or technology-specific policies (Catenazzi, 2009). However, this high degree of detail means that bottom-up modellers are heavily dependent on data availability and credibility with regard to their many assumptions on technology diffusion, investments and operating cost. There are also criticisms of bottom-up modelling concerning the neglect of programme costs, the feedback of energy policies as well as the lack of macro-effects of the presumed technological change on overall economic activity, structural changes, employment, and prices. Hybrid Energy System Models To overcome the above-mentioned weaknesses and limitations of conventional top-down and bottom-up energy models, energy modelling is currently moving in the direction of hybrid energy system modelling, combining at least one macroeconomic model with at least one set of bottom-up models for each final energy sector and the conversion sector. According to Hourcade et al. 
(2006) and Bataille (2005), a high-quality hybrid model system should incorporate at least three properties: (1) technological explicitness, (2) microeconomic realism and (3) macroeconomic completeness. Top-down modelling on its own provides energy modellers with a high degree of macroeconomic completeness through the feedback loops for economy, welfare, etc., combined with microeconomic realism, e.g. the decision-making processes of the different agents, etc. (see Section 2.1). Pure bottom-up modelling, on the other hand, offers a high level of technological explicitness and a low level of macroeconomic completeness (see Section 2.2). Merging these three properties into one hybrid system can take place in several different ways. The simplest form of linking top-down and bottom-up approaches, also called 'soft linking', is the manual transfer of data, parameters and coefficients. If this transfer is further evolved using automatic routines, a 'hard link' is established between the different models. This form of a 'soft link' has been applied in Swiss energy demand and supply projections in a rather complex model setting of a macroeconomic model and several bottom-up models for all final energy sectors and the conversion sector by four model teams (BFE, 2007). The rather simple hybrid bottom-up CGE model SCREEN (Sustainability Criteria for Regional Energy policies) has also been developed for Switzerland, combining technological details of the electricity sector with a macroeconomic CGE framework (Kumbaroglu and Madlener, 2001). It was used to analyse the effects of a CO2 tax in Switzerland, but was not further developed. The next step for connecting top-down and bottom-up models is to apply partial model elements (top-down or bottom-up) in their modelling counterparts. Catenazzi (2009) defines two hybrid energy model systems in this context: 'macroeconomic models with bottom-up energy supply models' and 'bottom-up models with some limited macroeconomic sub-models'. The MARKAL model family is an example of the latter (see Section 2.2.2). A similar approach is used by Hourcade et al. (2006), who define the following three categories of hybrid energy models: 'bottom-up models with macroeconomic feedbacks', 'bottom-up models with microeconomic behavioural parameters for technology choices' and 'top-down models with more technological explicitness or parameters for endogenous technological change'. One of the presently established hybrid model systems was applied in the ADAM project (Adaptation and Mitigation Strategies), a European energy model. The model system in ADAM combines a macroeconomic model (E3ME) with a set of bottom-up models for the four final energy sectors (industry, services, transport and the residential sector, which has been split up into buildings and electrical appliances). This hybrid system has been applied to project the energy demand and supply of 29 European countries up to 2050 in various scenarios (Schade et al., 2009). Challenges facing hybrid energy modelling include, amongst others, the need to keep such combined model systems theoretically consistent and empirically valid without constructing huge models that are incomputable. Additionally, the endogenous consideration of structural change (inter-sectoral as well as intra-sectoral) and technological progress are important issues that require further attention and research. 
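The 'soft link' described above, in which a few drivers are handed from the macroeconomic model to the bottom-up models and investment figures are handed back, can be caricatured as a fixed-point iteration between two placeholder functions. Both functions and all numbers are invented; in practice the exchange covers many more variables and real models such as E3ME and the sectoral bottom-up models.

```python
# Caricature of a soft link: iterate between a macro model (GDP responds to
# energy investment) and a bottom-up model (investment responds to GDP-driven
# energy demand) until the exchanged figures stop changing.

def macro_model(energy_investment):
    """Stand-in for the macro model: returns GDP in bn EUR, mildly
    stimulated by energy-sector investment (assumed multiplier)."""
    return 500.0 + 0.8 * energy_investment

def bottom_up_model(gdp):
    """Stand-in for the bottom-up models: returns energy-sector investment
    in bn EUR needed to serve GDP-driven demand (assumed relationship)."""
    return 10.0 + 0.02 * gdp

investment = 10.0      # initial guess handed to the macro model
for iteration in range(1, 20):
    gdp = macro_model(investment)
    new_investment = bottom_up_model(gdp)
    print(f"iteration {iteration}: GDP={gdp:.2f} bn EUR, investment={new_investment:.3f} bn EUR")
    if abs(new_investment - investment) < 1e-6:
        break
    investment = new_investment
```

A 'hard link' automates exactly this exchange; the convergence question illustrated by the loop is one reason why scenarios that deviate strongly from the reference case may need one or two iterative runs, as noted in the next section.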
Linking the Results of Process-Based Models and Macroeconomic Models Presently, there are scarcely any hard links between process-oriented energy models and macroeconomic models. This is due to the disciplinary cultures in which each type of energy model has been developed. Linking the models is mostly limited to a manual transfer of a few major drivers (e.g. population, gross value added of the economic sectors, or energy prices on world markets) and of investment figures (often only from the energy conversion sector of bottom-up models being transferred to macroeconomic models). On top of this, the few existing links have not been implemented in electronically based transformation modules ('hard links'), but are manually transferred by the researchers and teams involved (e.g. the Energy Perspectives of Switzerland; see BFE, 2007). The need to link these two 'worlds' is the challenge presently facing energy demand and supply modelling. Analysts have to simulate the projected futures in both types of models in a consistent way, which may induce the need for one or two iterative runs between the two types of models when simulating policy scenarios that deviate substantially from a reference scenario (e.g. a climate change scenario with substantial reductions of greenhouse gas emissions during the next few decades, assuming substantial increases in energy and material efficiency and intensive use of renewable energies). For example, the TRANSFORM module of the ADAM project translates monetary production data of basic goods industries into physical production. It also incorporates improvements in material efficiency and material substitution and saturation via the MATEFF model, a bottom-up model for simulating those effects on basic products like steel, cement, non-ferrous metals, paper, glass, etc. Finally, the IMPULSE module of this hybrid model system collects all data of the bottom-up models on investments, changing operating cost (including energy cost), and the programme costs stemming from the policies assumed in a particular scenario (Schade et al., 2009). Relating the specific energy demand of basic materials to their physical production and not to monetary production data is very important in order to arrive at realistic and transparent results in energy demand projections of basic product industries. Additional policy efforts in energy and material efficiency as well as in renewable energies imply a substitution of energy uses by increasing capital investments and related employment. These policies generally induce programme costs for governments and industrial associations which can be relevant in real terms, or at least in the political debate. Hybrid models can analyse essential questions for detailed climate change policies reflected in bottom-up models on the one hand and the impacts of those mitigation scenarios on the economy by macroeconomic models on the other hand. Conclusions Hybrid energy system models help to understand the advantages and limitations of the existing bottom-up and top-down energy models and to improve the consultation process of the energy analysts for decision-makers in governments, international institutions (e.g. IEA, UNEP) and large energy supply companies as well as energy technology producers. 
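The role of a transformation module of the kind described in the preceding section can be illustrated with a short calculation: gross value added of a basic-goods branch is converted into physical output via an assumed value per tonne, a material-efficiency trend is applied, and specific energy consumption then gives the energy demand handed to the bottom-up side. All coefficients are invented example numbers and the calculation is far simpler than TRANSFORM or MATEFF.

```python
# Monetary-to-physical translation for one basic-goods branch (illustrative).
gva = 12.0                   # gross value added of the branch, bn EUR (assumed)
gva_growth = 0.01            # annual GVA growth from the macro model (assumed)
value_per_tonne = 600.0      # EUR of GVA per tonne of product (assumed)
material_efficiency = 0.005  # annual reduction of tonnes needed per unit GVA (assumed)
spec_energy = 18.0           # GJ per tonne of product (assumed)
energy_efficiency = 0.01     # annual reduction of GJ per tonne (assumed)

for year in range(2010, 2051, 10):
    tonnes = gva * 1e9 / value_per_tonne      # physical production in tonnes
    energy_pj = tonnes * spec_energy / 1e6    # GJ -> PJ
    print(f"{year}: GVA={gva:5.1f} bn EUR, output={tonnes/1e6:5.2f} Mt, energy={energy_pj:6.1f} PJ")
    # roll the drivers forward by one decade
    gva *= (1 + gva_growth) ** 10
    value_per_tonne /= (1 - material_efficiency) ** 10   # fewer tonnes per unit GVA
    spec_energy *= (1 - energy_efficiency) ** 10
```

The point of anchoring specific energy demand to tonnes rather than to euros is visible immediately: monetary growth alone does not inflate the projected energy demand of the basic-product industry.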
While the energy models on both levels (bottom-up and top-down) are further improved by more detailed structures, more empirically based equations and the addition of multi-agent aspects, progress in developing hard links between the two modelling levels is of crucial interest. In the near future, transformation modules should intensify the interaction between process-oriented models and macroeconomic models by implementing computer-based hard links; examples are: (1) the development of living areas of the residential sector derived from relationships of demographic variables, income per capita, and other preferences of private households; (2) the mileage of cars, trucks, ships, or public transport depending on demographic variables, per capita income, foreign trade and industrial production, and inter-industrial structural change. In addition, the different impacts on material substitution or material efficiency in energy-intensive industries should be modelled in more detail based on numeric factors and relationships. In this context, export/import ratios and detailed recycling data of the different basic products should be taken into account. In the more distant future, company size (e.g. small, medium, big) and the influences of barriers and supporting factors of energy efficiency measures should also be implemented in bottom-up models in order to improve the transparency between potentials, obstacles, and impacts of sector- or technology-oriented policies. The progress expected will lead to a more transparent simulation of sector- and technology-oriented policies by governments and trade associations and to more reliable information on the impacts of those policies at the economic and societal level under the conditions of a particular scenario. This introductory article gives an overview of some of the energy models used in Switzerland and, more importantly, some insights into current advanced energy system modelling practice, pointing to the characteristics of the two modelling types and their advantages and limitations.
9,991.8
2012-04-01T00:00:00.000
[ "Economics" ]
New Mixed Exponential Sums and Their Application The main purpose of this paper is to introduce new mixed exponential sums and then use analytic methods and the properties of Gauss sums to study the computational problems of the mean value involving these sums, giving an interesting computational formula and a sharp upper bound estimate for these mixed exponential sums. As an application, we give a new asymptotic formula for the fourth power mean of Dirichlet L-functions weighted by these mixed exponential sums. Introduction Let q ≥ 3 be an integer, and let χ be a Dirichlet character mod q. Then, for any integer n, the famous Gauss sums G(n, χ) are defined as G(n, χ) = ∑_{a=1}^{q} χ(a) e(na/q), where e(y) = e^{2πiy}. This sum and other exponential sums (such as Kloosterman sums) play a very important role in the study of analytic number theory, and many famous number-theoretic problems are closely related to them. The distribution of primes, the Goldbach problem, the estimate of character sums, and the properties of Dirichlet L-functions are some good examples. In this paper, we introduce new mixed exponential sums, written C(m, n, k, χ; q) in what follows, where m, n, and k are any integers. We will study the arithmetical properties of C(m, n, k, χ; q). About this problem, it seems that no one has studied it yet; at least we have not seen any related results before. The problem is interesting because this sum has a close relationship with the general Kloosterman sums, and it is also analogous to the famous Gauss sums, so it must have many properties similar to these sums. It can also help us to further understand and study Kloosterman sums and Gauss sums. The main purpose of this paper is to use the analytic method and the properties of Gauss sums to study the fourth power mean of C(m, n, k, χ; q) and its upper bound estimate, and to prove three conclusions, in which χ_0 denotes the principal character mod q, (m, n, q) denotes the greatest common divisor of m, n, and q, and exp(y) = e^y. In Theorem 1, we only discussed the case in which there exist two variables. For the general case (with k (≥ 3) variables), whether there exists a sharp estimate for the sums is an interesting problem. Let k ≥ 3; whether there exists an exact computational formula for the 2k-th power mean is also an open problem. Several Lemmas In this section, we give several lemmas which are necessary in the proof of our theorems. Hereinafter, we use many properties of character sums, Kloosterman sums, and Gauss sums; all of these can be found in [1, 5-7], so they will not be repeated here. First, we have the following. Lemma 4. Let p be an odd prime; then, for any integers m, n, and k, one has an identity in which ā denotes the solution x of the congruence equation a·x ≡ 1 mod p. Proof of the Theorems In this section, we complete the proof of our theorems. First we prove Theorem 1. The estimate for Kloosterman sums (see [6]) is the classical bound |∑_{a=1}^{p-1} e((ma + nā)/p)| ≤ 2√p. Let p be an odd prime. Then, for any integers m, n, and k satisfying the stated coprimality condition, one has the asymptotic formula of Theorem 1. In fact, from Lemmas 4 and 5, we may immediately deduce the estimate, which takes one form if χ is the Legendre symbol mod p and another if χ is a complex character mod p.
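Since the displayed formulas did not survive in the text above, a small numerical sketch may help fix ideas: it evaluates the classical Gauss sum G(n, χ) = ∑_{a=1}^{q} χ(a) e(na/q) for the Legendre-symbol character modulo a small odd prime and checks the textbook fact |G(n, χ)| = √p whenever p does not divide n. This only illustrates the standard Gauss sum, not the paper's new mixed sums, whose definition is not reproduced here.

```python
import cmath
import math

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion; the quadratic character mod p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def gauss_sum(n, p):
    """G(n, chi) = sum_{a=1}^{p} chi(a) * e(n*a/p), with chi the Legendre symbol mod p."""
    return sum(legendre(a, p) * cmath.exp(2j * math.pi * n * a / p)
               for a in range(1, p + 1))

p = 23
for n in range(1, 6):
    g = gauss_sum(n, p)
    print(f"n={n}: |G(n,chi)| = {abs(g):.6f}   (sqrt(p) = {math.sqrt(p):.6f})")
```

Every printed modulus equals √23 up to floating-point error, which is the separability property of Gauss sums that proofs of mean-value formulas of this kind typically exploit.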
792.4
2014-06-19T00:00:00.000
[ "Mathematics" ]
Activation of PPARα by Oral Clofibrate Increases Renal Fatty Acid Oxidation in Developing Pigs The objective of this study was to evaluate the effects of peroxisome proliferator-activated receptor α (PPARα) activation by clofibrate on both mitochondrial and peroxisomal fatty acid oxidation in the developing kidney. Ten newborn pigs from 5 litters were randomly assigned to two groups and fed either 5 mL of a control vehicle (2% Tween 80) or a vehicle containing clofibrate (75 mg/kg body weight, treatment). The pigs received oral gavage daily for three days. In vitro fatty acid oxidation was then measured in kidneys with and without mitochondria inhibitors (antimycin A and rotenone) using [1-14C]-labeled oleic acid (C18:1) and erucic acid (C22:1) as substrates. Clofibrate significantly stimulated C18:1 and C22:1 oxidation in mitochondria (p < 0.001) but not in peroxisomes. In addition, the oxidation rate of C18:1 was greater in mitochondria than peroxisomes, while the oxidation of C22:1 was higher in peroxisomes than mitochondria (p < 0.001). Consistent with the increase in fatty acid oxidation, the mRNA abundance and enzyme activity of carnitine palmitoyltransferase I (CPT I) in mitochondria were increased. Although mRNA of mitochondrial 3-hydroxy-3-methylglutaryl-coenzyme A synthase (mHMGCS) was increased, the β-hydroxybutyrate concentration measured in kidneys did not increase in pigs treated with clofibrate. These findings indicate that PPARα activation stimulates renal fatty acid oxidation but not ketogenesis. Introduction The kidney is an organ with a high energy requirement due to its central role in the elimination of water-soluble metabolic waste products. Thus, energy metabolism is very active and important for renal physiology. In support of the high energy metabolism, renal fatty acid oxidation and carnitine biosynthesis are very active, generating ketone bodies when fatty acids are catabolized and in maintaining carnitine homeostasis, respectively [1]. Recently, a strong link between impaired renal energy metabolism and chronic kidney disease has been highly identified [2,3]. Peroxisome proliferator-activated receptor α (PPARα), a member of a large nuclear receptor superfamily, is expressed primarily in the liver, the intestine, and the kidney [4,5]. The critical role of PPARα activation in regulation of hepatic fatty acid oxidation, lipid metabolism, and inflammatory and vascular responses has been well studied [6]. In contrast with the liver, however, the data on the role of PPARα activation in the regulation of renal fatty acid oxidation and metabolism is scant, especially for developing animals. By comparison, both mitochondrial and peroxisomal β-oxidation enzymes are expressed in the liver and the kidney, but the enzymes in peroxisomes are less abundant in the kidney than in the liver. The response of mitochondrial and peroxisomal β-oxidation enzymes to PPARα activation in the kidney is also moderate [7]. Despite all this, the importance of peroxisomal β-oxidation in short-, long-, and very long-chain fatty acids has been well recognized. Moreover, the essential role of PPARα-induction of fatty acid metabolism in the prevention of renal ischemia and renal damage induced by drugs has been observed in rodent species [8][9][10]. Potential ligands for the PPARα transcription factor include fatty acids, eicosanoids, and pharmacological drugs such as the fibrates. 
Clofibrate is a potent PPARα activator that stimulates peroxisome proliferation and increases fatty acid oxidation in rodent species. The target genes of PPARα encode enzymes involved in peroxisomal and mitochondrial β-oxidation and ketone body synthesis. The peroxisome proliferation elicited by fibrates has drawn much attention because peroxisome proliferation has been associated with oxidative stress and hepatocellular carcinoma [11]. However, less is known about the impacts of the agonist in the kidney. Fatty acids are the preferred energy substrate for the kidney, and defects in fatty acid oxidation and mitochondrial and peroxisomal dysfunction are involved in acute renal injury and chronic disease. Indeed, PPARα signaling may play a protective role in acute free fatty acid-associated renal tubule toxicity [12]. PPARα activation has been recognized as essential for kidney function under both healthy and pathophysiological states [7]. Data regarding inborn errors in the kidney such as neonatal urea cycle defects and disorders of long-chain fatty acid oxidation associated with energy deficiency in infants is very limited in the literature. Understanding the renal kinetics and adaptation of energy metabolism is very important for human infant health. The domestic neonatal pig (Sus scrofa) ranks among the most prominent research models for the study of pediatric nutrition and metabolism due to the similarity of human infant and piglet physiology [13]. Unlike rodent species, the peroxisome proliferation and hepatocarcinogenic potencies of clofibrate are not observed in the livers of humans or pigs [14,15]. Peroxisomal β-oxidation enzymes increase with age in the renal cortex of suckling rat pups, and this might be involved in PPARα-mediated mechanisms [16]. Similarly, previous work from our laboratory showed that fatty acid β-oxidation capacity was increased with age in the kidney of pigs as well, and the capacity was higher during the preweaning period than in adults [17]. The enzymatic responses to PPARα activation also were compared in the heart, kidney, and liver of pigs in our previous work, but effects of the activation on fatty acid oxidative metabolism were not determined. Promoting energy supply and thermogenesis after birth is critical for the survival of neonatal piglets [17]. Therefore, to provide basic knowledge on the regulation of energy metabolism in the developing kidney, the present study assessed changes in peroxisomal and mitochondrial long-chain fatty acid oxidation in the kidney during early development in response to the activation of PPARα by clofibrate. β-Hydroxybutyrate Concentration No differences were detected in β-hydroxybutyrate concentration measured in plasma and kidney tissues between control and clofibrate-treated pigs (p > 0.05). The concentration of β-hydroxybutyrate was on average 8-fold higher in kidney tissue compared with plasma (Figure 1). 
Clofibrate tended to increase the accumulation of 14 C in acid-soluble product (ASP) in peroxisomes from both [1-14 C]-C18:1 and C22:1 oxidation (p = 0.06), but the accumulation of 14 C in ASP from C18:1 and C22:1 in mitochondria and in homogenate were increased in clofibrate-treated compared to the control pigs (p < 0.006; Figure 2B). There was no difference between C18:1 and C22:1 in 14 C-ASP accumulation in peroxisomes, but the 14 C-ASP accumulation from C18:1 was greater than that from C22:1 in mitochondria. The accumulations of 14 C-ASP in the homogenates also were 1.5-fold higher from [1-14 C]-C18:1 compared with C22:1 (p < 0.001). No difference was observed in the percentage of 14 C accumulation in CO2 (less than 2%) in peroxisomes (p = 0.9), but clofibrate reduced the percentage of accumulation of C22:1 in CO2 in mitochondria (p < 0.01) (Figure 3A). Over 98% of the oxidative metabolites were ASP in peroxisomes, while only about 60% (54-64%) of the ASP was detected in mitochondria (Figure 3B). Clofibrate administration did not affect the percentage of ASP from C18:1 (p = 0.13) but increased the ASP from C22:1 significantly (p < 0.04) (Figure 3B). The percentage of total oxidation (CO2 + ASP) from C22:1 in peroxisomes was 1.5-fold higher than that from C18:1, and the percentage of total oxidation from C18:1 in mitochondria was 1.5-fold higher than that from C22:1 (Figure 3C). Renal Enzyme Activity The activity of carnitine palmitoyltransferase I (CPT I) was increased 25% by clofibrate (p < 0.05), but no effect on the activity of CPT II was detected (p > 0.05; Figure 4A). The activity of acyl-CoA oxidase (ACO) was increased 2.2-fold in clofibrate-treated pigs (p < 0.05; Figure 4B). Discussion Activation of PPARα by oral clofibrate administration to newborn piglets resulted in a significant increase in renal fatty acid β-oxidation. Similar observations were reported in humans and rats [18]. Fatty acid β-oxidation is the primary pathway of ATP production for the kidney to meet its daily function requirement. Therefore, this result implied that PPARα could play an important regulatory role in ATP production and energy metabolism in the developing kidney. 
We also noticed that the induction profiles were different in mitochondria and peroxisomes for the long- and very long-chain fatty acids, suggesting that the response of renal fatty acid β-oxidation to PPARα activation depends on the subcellular compartment and the substrate. The activation had no significant impact on fatty acid β-oxidation (14C accumulation in CO2 and/or ASP) in renal peroxisomes, although the ACO activity increased 2.2-fold in clofibrate-treated piglets. Only a tendency towards an increase in ASP (p = 0.06) was observed, and the mild response of peroxisomal β-oxidation to the PPARα agonist was similar to that reported in adult rats [19]. As in mitochondria, fatty acid β-oxidation in peroxisomes involves multiple enzymes that ultimately yield acetyl-CoA [20]. However, peroxisomal fatty acid β-oxidation is not coupled with ATP synthesis, and catalase is required to remove the H2O2 produced in peroxisomes when electrons are transferred to O2. It was reported that the activation of PPARα had no influence on catalase activity in 14-day-old piglets [21], and catalase activity increases rapidly after birth [22]. This result could be related to catalase or other enzymes of the peroxisomal β-oxidation system, such as the bifunctional protein and 3-ketoacyl-CoA thiolase, during development. In addition, we did not find any difference in renal PPARα and ACO mRNA enrichments between control and clofibrate-treated piglets. A low response of PPARα and ACO mRNA to clofibrate induction was observed in the livers of newborn, 24-hour-old, and 4-day-old fasted neonatal piglets [21,23,24]. Besides, the ACO activity measured in kidneys of 14-day-old control pigs was not different from pigs treated with clofibrate [21]. Because the rates of mitochondrial and peroxisomal β-oxidation of palmitate change during postnatal development and food deprivation in pig kidneys [22], age or physiological status and even species could contribute to these differences. A similar 14C-accumulation rate in CO2 and/or ASP from both C18:1 and C22:1 was detected in peroxisomes, indicating that the chain-length of these two fatty acids had no effect on peroxisomal fatty acid β-oxidation. However, the percentage of peroxisomal fatty acid β-oxidation increased with the increase in the fatty acid chain-length. The percentage of β-oxidation of C22:1 was on average 40% higher than that of C18:1, although the total fatty acid oxidation rate did not differ. A similar result was detected in the liver [23], demonstrating that C22:1 has a preference to be oxidized in peroxisomes. The preference for C22:1 appeared to be associated with the affinity of the fatty acid activation systems for long-chain and very-long-chain fatty acids identified in rats [25]. It was very interesting that a high percentage (about 42-67%) of the fatty acids were oxidized in renal peroxisomes, with 98-99% as ASP and 1-2% as CO2, and the activation of PPARα had no influence on the percentage distribution of fatty acid oxidation. The contribution of peroxisomal fatty acid β-oxidation to the total fatty acid β-oxidation in the kidney was similar to that measured in the liver (40-47%) and 2-fold higher than that in rats (20-35% [26]). Mitochondrial fatty acid oxidation was increased significantly by the activation of PPARα induced by clofibrate administration. Consistent with the increase in fatty acid β-oxidation, the CPT I activity was increased by 25% and mRNA expression was increased 3.5-fold. 
In addition, the chain-length of the fatty acid significantly affected mitochondrial β-oxidation, and the 14C-accumulations were much greater from C18:1 than C22:1 in both CO2 (2.6-fold) and ASP (2.3-fold). Similar results were observed in livers of PPARα-activated neonatal pigs with clofibrate administration [23]. Swine milk fat is known to be composed of mainly long-chain fatty acids (LCFAs) and very long-chain fatty acids (VLCFAs). These results indicate that mitochondrial oxidation of LCFAs provides an important source of energy for kidneys, and activation of PPARα could promote the utilization of LCFAs and VLCFAs in developing kidneys. The 14CO2 accumulation rates from C18:1 and C22:1 (µmol/h·g protein) were on average 64% and 50% higher in the kidney (10.7 and 4.3) than in the liver (3.9 and 2.2; [23]), while the 14C accumulations in ASP from C18:1 and C22:1 were 52% and 55% greater in the liver (44.9 and 30.8; [23]) than in the kidney (29.6 and 19.9). It was recently demonstrated that, in rat kidneys, proximal tubules do not generate energy via glycolysis and are completely dependent on oxidative phosphorylation for ATP production, although energy production is primarily from fuels such as lactate, glutamine, and free fatty acids [27]. On the other hand, fatty acid elongation can occur in both livers and kidneys, but it was reported that the specific activity of fatty acid elongation in the kidney is about 30% of that in the liver. Different incorporation rates of [1-14C] acetate into fatty acids were observed in the mitochondrial elongation system between livers and kidneys in the presence of reduced nicotinamide adenine dinucleotide (NADH), reduced nicotinamide adenine dinucleotide phosphate (NADPH), or both NADH and NADPH as the hydrogen donor [28]. Thus, the results demonstrated that fatty acid catabolic metabolism in mitochondria and the citric acid cycle is the primary energy source in developing kidneys and that activation of PPARα might benefit kidney development by improving fatty acid utilization. Compared with kidneys, the liver may need to produce more ASP, in which acetate was found to be one of the primary products in piglets [29]. The mitochondrial 3-hydroxy-3-methylglutaryl-CoA synthase (mHMGCS) mRNA increased 9.7-fold in clofibrate-treated pigs, but the induction of mHMGCS had no influence on plasma and renal β-hydroxybutyrate concentrations. Although the activity of mHMGCS was not measured in this study, available evidence confirms that the enzyme activity in the liver remains low until weaning in pigs [30]. Ketone bodies are transferred in and out of cells by monocarboxylate transporter 1. In wild-type mice, treatment with WY 14,643 increased mRNA concentrations of monocarboxylate transporter 1 in the liver, the small intestine, and the kidney, but no upregulation was observed in PPARα-null mice [31]. This suggested that activation of PPARα could potentially promote ketone body production and transfer from organs to plasma. However, we found that the β-hydroxybutyrate concentration was 8-fold higher in the kidney tissue than in plasma, suggesting that the contribution of the kidney to plasma ketone bodies is minimal in this species. It has been well known that suckling pigs are hypoketonemic despite elevated dietary fat after birth [30]. Experiment Design and Animal Model All experimental procedures were approved by the North Carolina State University Animal Care and Use Committee. 
Ten male newborn pigs (Landrace × Yorkshire × Duroc), 2 from each of 5 L, were used in this experiment. The selected newborn piglets (Body weight (BW) = 1.61 ± 0.06 kg) were allocated randomly into two treatments: control and clofibrate. The control piglets were orogastrically gavaged with 2 mL of 2% Tween 80, and the clofibrate-treated piglets were orogastrically gavaged to 2 mL of 2% Tween 80 containing clofibrate (75 mg/kg BW; Cayman Chemicals, Ann Arbor, MI, USA) at 8:00 a.m. of each day for 4 days as described previously [23]. All piglets were kept with their dams and siblings at the North Carolina State University Swine Educational Unit in Raleigh, North Carolina during the experiment. The piglets were euthanized by AVMA-approved electrocution on Day 4 after gavaging and feeding, and kidney and blood samples were collected. Fresh kidney samples were collected and stored in a homogenate buffer, and extra kidney samples were immersed in liquid nitrogen and stored at −80 • C. The blood was sampled with vacutainer containing sodium heparin and centrifuged at 2500 rpm × 10 min. The plasma was collected and stored at −20 • C. β-Hydroxybutyrate Concentration A BioVision β-hydroxybutyrate assay kit (K632-100; BioVision, Milpitas, CA, USA) was used to measure the β-hydroxybutyrate concentration in the plasma and kidney samples. The standard curve and samples were prepared according to the BioVision assay procedure and allowed to develop at room temperature for 30 min. The samples were measured with a BioTek reader (Synergy HT, Winooski, VT, USA) at an absorbance of 450 nm. Fatty Acid Oxidation In Vitro Fresh kidney homogenates (~5 mg) were incubated in 3 mL of reverse Krebs-Henseleit bicarbonate medium with or without rotenone and antimycin A (10 + 50 µmol/L), blockers of mitochondrial respiratory system. Mitochondrial and peroxisomal fatty acid oxidations were measured in the medium using either [1-14 C]-labeled oleic acid (C18:1) or erucic acid (C22:1) purchased from American Radiolabeled Chemicals (ARC; Saint Louis, MO, USA) as substrate. The biochemical and radio-chemical purities of both C18:1 and C22:1 were greater than 99% based on TLC and HPLC analyses. The fatty acids were bound to fatty acid-free BSA (5:1, molar ratio) and dissolved in the reaction medium. The measurements were performed in 25 mL Erlenmeyer flasks containing 2 mL of the reaction medium. The medium was incubated with 2 µmol [1-14 C]-C18:1 (0.98 kBq/µmol) or [1-14 C]-C22:1 (1.37 kBq/µmol). The incubation was stopped after 30 min by the addition of 0.5 mL of 35% HClO 4 . The 14 C accumulation in CO 2 and acid-soluble products (ASP) were collected, processed, and analyzed by liquid scintillation spectrometry (Beckman LS 6000IC, Fullerton, CA, USA) according to the procedures by Lin et al. [24]. CPTI Activity Kidney mitochondria were isolated from fresh samples. The samples were homogenized in an isolation buffer and centrifuged with a gradient centrifugation [32]. The mitochondria pellet was collected, and the protein concentration was determined using the biuret method as previously described [32]. The CPTI activity was assayed in the mitochondria at 30 • C with 80 µmol/L palmitoyl-CoA following the method used previously [32]. The assays were performed with or without supplementation of 4.7 µg/mL of malonyl-CoA. The assay was initiated by the addition of 20 µL of 3 H-carnitine (166.5 kBq/µmol) purchased from ARC and terminated with the addition of 4 mL of 6% HClO 4 after 6 min incubation. 
The activity was determined by measuring the 3 H-labeled palmitoyl-carnitine generated from the reactions. The radioactivity was determined using the Beckman liquid scintillation spectrometry (Beckman LS 6000IC, Fullerton, CA, USA). ACO Activity The fatty acyl-CoA oxidase (ACO) activity was measured by using a fluorometric procedure with scopoletin, a fluorescing compound as described previously [24]. The reduction of the ACO produced H 2 O 2 was coupled to the oxidation of scopoletin to its non-fluorescing product. The control and treatment kidney samples were prepared as described previously [32] and were incubated at 37 • C for 20 min. A standard curve was generated consisting of (0-0.1 µm) concentrations of H 2 O 2 . The samples were measured with a BioTek reader (Synergy HT, Winooski, VT, USA) with an emission at 460 nm and an excitation at 360 nm. mRNA Expression Total mRNA was extracted using guanidine isothiocynate and phenol, and was quantified using NanoDrop spectrometer (Thermo Scientific, Wilmington, DE, USA). The mRNA was treated with Turbo DNase (Ambion, Austin, TX, USA) and transcribed using iScripTM Select cDNA synthesis kit (Bio-Rad Laboratories, Hercules, CA, USA). Primers were designed with the use of GenBank as described previously [32]. The mRNA abundances were measured with MyiQ Single Color RT-PCR (Bio-Rad Laboratories, Hercules, CA, USA). Statistical Analysis Data from plasma β-hydroxybutyrate, tissue enzyme activity and mRNA enrichment assays, were analyzed using the GLM procedure of SAS (Proprietary Software 9.3 (TS1M1), SAS Institute Inc., Cary, NC, USA) according to a randomized complete block design with 2 treatments (control and clofibrate), blocked by litter. Data from in vitro fatty acid oxidation measurements were analyzed with a split-plot design, including a main plot (control vs. clofibrate) in randomized blocks and a subplot modeling fatty acid chain length (C18:1 vs. C22:1) effects, subcellular (mitochondria vs. peroxisomes) differences, and interactions. Multiple comparisons between treatments were performed using Tukey's test, with significance declared when p ≤ 0.05 and tendencies noted when 0.05 ≤ p ≤ 0.1. Conclusions Activation of PPARα by clofibrate resulted in a greater increase in mitochondrial long-chain fatty acid oxidation in developing kidneys. The increase was elicited with induced enzyme activity and mRNA expression implies that PPARα activation could improve renal energy utilization during development. More than 40% of the catabolic metabolism occurred in mitochondria and citric acid cycle, suggesting that mitochondrial fatty acid oxidation plays a primary role in energy generation in developing kidneys. However, the activation did not alter the β-hydroxybutyrate concentration in plasma or kidneys.
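As a rough illustration of how the in vitro oxidation rates reported above are obtained from the radiolabel measurements: the disintegrations counted in the CO2 trap and in the acid-soluble fraction are converted to µmol of substrate oxidised via the specific activity, then normalised to incubation time and protein content. The counts, counting efficiency, and protein mass below are invented example numbers; only the unit conversions mirror the described procedure.

```python
# Convert liquid-scintillation counts into an oxidation rate (illustrative numbers).
cpm_co2 = 1500.0            # counts per minute recovered in the CO2 trap (assumed)
cpm_asp = 4200.0            # counts per minute in the acid-soluble products (assumed)
counting_efficiency = 0.95  # fraction of disintegrations actually counted (assumed)
specific_activity = 0.98    # kBq per umol, as stated for [1-14C]-C18:1
incubation_h = 0.5          # 30 min incubation
protein_g = 0.004           # protein in the assay, g (assumed)

def rate_umol_per_h_g(cpm):
    dpm = cpm / counting_efficiency            # disintegrations per minute
    bq = dpm / 60.0                            # 1 Bq = 60 DPM
    umol = bq / (specific_activity * 1000.0)   # kBq -> Bq, then Bq per umol
    return umol / (incubation_h * protein_g)

print(f"CO2 production : {rate_umol_per_h_g(cpm_co2):.2f} umol/(h*g protein)")
print(f"ASP production : {rate_umol_per_h_g(cpm_asp):.2f} umol/(h*g protein)")
print(f"total oxidation: {rate_umol_per_h_g(cpm_co2 + cpm_asp):.2f} umol/(h*g protein)")
```

With these assumed counts the rates land in the same order of magnitude as the values quoted in the discussion, which is only meant to show the arithmetic, not to reproduce the study's data.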
5,717.6
2017-12-01T00:00:00.000
[ "Biology" ]
The microstructure and properties of high-strength structural steel for fruit auxiliary picking equipment

The microstructure and mechanical properties of high-strength structural steel used in fruit-picking equipment were studied by using a metallographic microscope, a tensile tester, and an impact toughness tester. The results revealed that the structure of the experimental steels was a mixed microstructure of ferrite and pearlite, with an average ferrite grain size of 12 μm. The yield strength was between 515 and 540 MPa, and the tensile strength was between 635 and 645 MPa. The impact toughness at room temperature reached over 200 J. The mechanical properties of the studied steels completely fulfilled the requirements of the high-strength structural steel for fruit auxiliary picking equipment. The new method dispenses with the tempering heat treatment process and decreases production costs.

Introduction

The application of fruit auxiliary picking equipment in the agricultural and rural fruit planting industry has become a focus of research both at home and abroad. The fruit auxiliary picking equipment is agricultural machinery with a rotating mechanical arm device. The mechanical arm of this equipment has to bear a large load; therefore, it requires the use of high-strength structural steel with a yield strength of at least 460 MPa and a tensile strength of over 630 MPa [1-3]. This kind of steel for fruit auxiliary picking equipment is usually prepared with an ultra-low carbon content (0.04-0.07 wt.%) and is produced by using the thermo-mechanical controlled process (TMCP) combined with a tempering heat treatment to improve the quality of the produced steel [4]. However, this production process has certain drawbacks, such as a long process duration and a high cost. In this context, a novel type of high-strength structural steel for use in fruit auxiliary picking equipment was designed in the present study, with a medium carbon content of 0.16-0.18 wt.% [5-6]. The chemical composition, manufacturing technique, microstructure, and mechanical properties of this steel in different thickness specifications (8, 10, 12, and 14 mm) were investigated. Certain micro-alloying elements, such as Nb, V, and Ti, were added to refine the grain during the smelting process, which also improved the quality of the steel. In the production process, controlled rolling (CR) and controlled cooling (CC) were adopted. The produced steel plate could be used directly after rolling without the tempering treatment, which dispenses with the tempering heat treatment process and reduces the production cost. The present study would serve as a reference basis for the production of such kinds of steel.
Experimental materials and methods

Molten iron pretreatment, converter steelmaking, LF furnace refining, RH furnace vacuum degassing, dynamic soft reduction, and electromagnetic stirring were employed to control the production of the continuously cast (CC) slab during the test steel smelting. The slab is 200 mm in thickness, and its chemical composition is shown in Table 1. Steel samples with different thicknesses of 8, 10, 12, and 14 mm were rolled by using different rolling procedures, then subjected to accelerated cooling, cooling-bed cooling, trimming, and quality inspection, and finally were stored until use. The flowchart for the production process (steelmaking followed by steel rolling) is presented in Figure 1. The rolling process consisted of two stages: rough rolling and finish rolling. The heating temperature for rough rolling was controlled at 1,240 °C, and heat preservation was performed for 180 min. After the descaling process, rough rolling was conducted at 980-1,040 °C. After the completion of rough rolling, the temperature-control stage commenced. After the completion of temperature control, finish rolling was conducted at 960-830 °C. After the completion of finish rolling, accelerated cooling was performed by using the relevant device, with water as the coolant. The cooling was performed at a rate of 20-25 °C/s until the final cooling temperature of 600 °C was reached. Subsequently, natural air cooling was performed on a cooling bed down to 300 °C, after which the plates were cooled to room temperature in the air. The entire rolling process is illustrated in Figure 2.

Microstructure and morphology of test steels

The structure of the test steels is depicted in Figure 3. As visible in the figure, the metallographic microstructure of the test steel comprised black pearlite and white ferrite [7-8]. The black pearlite was distributed between the white ferrite in strips. The ferrite grain size of the 8 mm thick test steel was relatively small, with an average ferrite grain size of 12 μm. With an increase in the thickness of the experimental steel, the ferrite grain size also increased, while the ferrite grain-size grade decreased. A statistical analysis was performed next. As depicted in Figure 3(a), the ferrite grain size reached Grade 14 for a thickness of 8 mm, Grade 13.5 for a thickness of 10 mm, and Grade 13 for thicknesses of 12 mm and 14 mm.

Mechanical properties of test steel

The yield strength of the steel was predicted and estimated with the additive superposition of strengthening contributions of Formula (1) [9-10]:

σ_y ≈ σ_0 + σ_A + σ_D + σ_P + σ_IN + k·d^(-1/2), (1)

where σ_0 is the lattice friction stress. The calculated σ_A was approximately 43.1 MPa. In the formula, σ_D denotes the dislocation strengthening, which was estimated through the general formula based on the dislocation density ρ:

σ_D = α·μ·b·ρ^(1/2),

where α is 0.5 in the case of a cubic crystal and 1.1 for a close-packed hexagonal crystal. With the dislocation density ρ = 1.5 × 10^8 mm^-2, the shear modulus μ = 8 × 10^4 MPa, and the Burgers vector b = 2.5 × 10^-7 mm, the calculated σ_D was approximately 123.2 MPa. The precipitation strengthening σ_P was estimated by using the Orowan-type relation of Formula (4), with the particle diameter d = 20 nm and the precipitation spacing l = 250 nm; the calculated σ_P was approximately 99.6 MPa. With σ_IN = 9.4 × 10^4 × f and a volume fraction f of 10^-3, the calculated σ_IN was approximately 94.3 MPa. When the contribution of grain refinement was considered, k = 20 N·mm^-3/2 was used with an average grain size d ≈ 12 μm, and the calculated value of k·d^(-1/2) was approximately 115.2 MPa. After the above estimation, the yield strength was determined to be approximately 525 MPa, which was equivalent to the measured value presented in Table 2.

The determined mechanical properties are shown in Table 2. The yield strength was between 515 and 540 MPa, and the tensile strength was between 635 and 645 MPa. The steel fulfilled all requirements for the steel used in fruit auxiliary picking equipment, exhibiting excellent impact toughness, with the impact energy value reaching over 200 J.
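The arithmetic of the strengthening estimate can be checked directly. The snippet below is a minimal numeric sketch using only the parameter values quoted above; the additive form of Formula (1) is the reconstruction given in the text, not a verified reproduction of the original equation.

```python
# Minimal numeric check of the strengthening contributions quoted above.
import math

mu = 8e4        # shear modulus, MPa
b = 2.5e-7      # Burgers vector, mm
rho = 1.5e8     # dislocation density, mm^-2
alpha = 0.5     # coefficient for a cubic crystal

sigma_D = alpha * mu * b * math.sqrt(rho)   # dislocation strengthening
print(f"sigma_D ~ {sigma_D:.1f} MPa")       # ~122 MPa, vs. 123.2 MPa quoted

f = 1e-3
sigma_IN = 9.4e4 * f                         # per the relation quoted in the text
print(f"sigma_IN ~ {sigma_IN:.1f} MPa")      # ~94 MPa, vs. 94.3 MPa quoted

# Summing the quoted contributions (43.1 + 123.2 + 99.6 + 94.3 + 115.2 MPa)
# plus a lattice-friction term reproduces the ~525 MPa estimate.
print(f"sum of quoted terms: {43.1 + 123.2 + 99.6 + 94.3 + 115.2:.1f} MPa")
```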
Conclusions

(1) The structure of the experimental steel comprises a mixture of ferrite and pearlite. The grain-size grade of the ferrite is between Grades 13 and 14, and the average grain size of the ferrite is 12 μm.

(2) The yield strength of the experimental steel is between 515 and 540 MPa, and the tensile strength is between 635 and 645 MPa. The steel exhibits excellent impact toughness at room temperature, with the impact energy value reaching over 200 J.

(3) The tested steel is designed with a medium carbon content and produced by rough rolling and finish rolling, without tempering. The mechanical properties of the studied steel completely fulfill the requirements of the high-strength structural steel for fruit auxiliary picking equipment. The new method dispenses with the tempering heat treatment process and decreases production costs.

Figure 3. Microstructure and morphology of the test steel.

Table 2. Mechanical properties of the experimental steel.
1,703.8
2024-03-01T00:00:00.000
[ "Materials Science" ]
The Standard Model Higgs as the origin of the hot Big Bang

If the Standard Model (SM) Higgs is weakly coupled to the inflationary sector, the Higgs is expected to be universally in the form of a condensate towards the end of inflation. The Higgs decays rapidly after inflation - via non-perturbative effects - into an out-of-equilibrium distribution of SM species, which thermalize soon afterwards. If the post-inflationary equation of state of the universe is stiff, $w \simeq +1$, the SM species eventually dominate the total energy budget. This provides a natural origin for the relativistic thermal plasma of SM species, required for the onset of the `hot Big Bang' era. The viability of this scenario requires the inflationary Hubble scale $H_*$ to be lower than the instability scale for Higgs vacuum decay, the Higgs not to generate too large curvature perturbations at cosmological scales, and the SM dominance to occur before Big Bang Nucleosynthesis. We show that successful reheating into the SM can only be obtained in the presence of a non-minimal coupling to gravity $\xi \gtrsim 1$, with a reheating temperature of $T_{\rm RH} \gtrsim \mathcal{O}(10^{10})\xi^{3/2}(H_*/10^{14}{\rm GeV})^2~{\rm GeV}$.

I. INTRODUCTION

Compelling evidence supports the idea of an inflationary phase in the early Universe [1]. Its specific particle physics realization is, however, uncertain, so inflation is often parametrised in terms of an inflaton scalar field with a vacuum-like potential. After inflation, the reheating stage follows, converting all the energy available into different particle species. The latter eventually 'thermalize' and dominate the total energy budget; an event that signals the onset of the 'hot Big Bang' thermal era. The details of reheating depend strongly on the model of inflation and its connection to other matter sectors. Particle production mechanisms in reheating have been investigated in detail; see [2,3] for reviews and references therein. With few exceptions, e.g. [4-8], most works have focused on understanding the energy transfer from the inflaton into some matter sector, with no connection whatsoever to the Standard Model (SM) of particle physics. However, to reheat the universe successfully, the relativistic thermal plasma dictating the expansion of the universe before Big Bang Nucleosynthesis (BBN) must be dominated by SM species; this is a physical constraint that cannot be evaded. Therefore, even though the inflationary framework is not connected a priori to the SM, such a connection must exist. As the Higgs is the only scalar field in the SM, this naturally suggests its role as a gate connecting the SM and inflation. There are essentially three possibilities: 1) the Higgs is identified with the inflaton, 2) the Higgs is not the inflaton but it is coupled to it (either directly or via intermediators), or 3) the Higgs is neither identified with the inflaton nor coupled to it. In category 1) we find scenarios where the Higgs gravitational interaction is not minimal, its kinetic term is not canonical, and/or the Higgs is mixed with a hidden sector. Belonging to this category we find different scenarios, e.g. Higgs-Inflation [9], new Higgs-Inflation [10], Higgs G-inflation [11,12], or Higgs-portal inflation [13]. In this paper we will rather consider the inflationary sector to be characterized, as usual, by a singlet scalar inflaton field φ, unrelated to the SM Higgs H.
As |H|² is the only SM operator of dimension Δ = 2 that is Lorentz and gauge invariant, the Higgs bilinear can be coupled to the inflaton, for instance through the scale-free quartic operator g²φ²|H|² with dimensionless coupling g², or via a trilinear interaction Mφ|H|² with M some mass scale. This corresponds to category 2) above. If we consider e.g. the scale-free interaction, we learn that in order to avoid spoiling the inflationary predictions, it is required that g² ≲ g²_max ∼ O(10^-3) for direct couplings [14], or even g² ≲ g²_max ∼ O(10^-7) for couplings radiatively induced from hidden sectors [15]. At the same time, in order to achieve an efficient energy transfer into the Higgs via non-perturbative broad resonance effects a la preheating, one needs g² ≳ g²_np ∼ O(10^-8). The window g²_np ≲ g² ≲ g²_max can therefore be rather narrow. Furthermore, the inflaton-induced Higgs effective mass m²_H = g²φ² will be sub-Hubble during inflation if g² < g²_min ∼ O(10^-10). Therefore, unless the inflaton-Higgs coupling is in the range g²_min ≲ g² ≲ g²_max, the Higgs will be a light degree of freedom during inflation. This brings up category 3), where the Higgs is so weakly coupled to the inflationary sector (g² ≲ g²_min) that in practice it is decoupled from it. We will refer to this condition as the weak coupling limit. In the case of Higgs-inflaton trilinear interactions or irrelevant operators of dimension Δ > 4, similar considerations can be put forward, defining the equivalent limit for the corresponding couplings. The inflationary constraints discussed above allow only for small coupling strengths. Hence, from this point of view, the weak coupling limit can be simply regarded as a specific choice within the allowed parameter space.

We will argue that in this limit the Higgs is universally excited in the form of a condensate around the time inflation ends. Following inflation, the Higgs condensate decays rapidly into the other SM species, due to non-perturbative parametric effects [16-20]. The SM particles, initially out of equilibrium, reach a thermal state soon afterwards. If the equation of state (EoS) of the inflationary sector becomes sufficiently stiff after inflation, the Higgs and its decay products will eventually dominate the energy budget of the universe; this provides a natural origin for the thermal plasma of SM species needed for the onset of the hot Big Bang thermal era. We will discuss the physical constraints that this mechanism needs to satisfy in order to successfully reheat the universe, without spoiling other cosmological observations. From now on m_p = 1/√(8πG) ≃ 2.44 × 10^18 GeV is the reduced Planck mass, a(t) the scale factor, t conformal time, and a subscript * denotes evaluation at the end of inflation.

II. UNIVERSAL HIGGS EXCITATION DURING INFLATION, AND LATER DECAY

In the unitary gauge the SM Higgs can be written as a real degree of freedom H = h/√2, with effective potential V = λ(h)h⁴/4, where the self-coupling λ(h) encapsulates the radiative corrections to the potential [21,22]. Let us characterize inflation as a de Sitter period with constant Hubble rate H_*. We require H_* ≫ M_EW, where M_EW ∼ O(10²) GeV is the electroweak scale, in order that the Higgs potential remains quartic. The running of λ becomes negative above some critical scale µ_c, with µ_c ∼ 10^11 GeV for the SM best-fit parameters [23-25], though this scale can be pushed up to 10^16 GeV by considering the top quark mass 2-3σ below its best fit.
For simplicity we will characterize inflation as a de Sitter background with physical Hubble rate H_* ≤ H_*^max ≃ 9 × 10^13 GeV [1]. To guarantee the stability of the SM all the way up to inflation, we demand λ > 0, considering it as a free parameter, albeit chosen within the reasonable range 10^-5 < λ ≲ 10^-2 [19]. Within the weak coupling limit, we can consider two options:

(i) Higgs minimally coupled to gravity -. In this case, the Higgs behaves as a light spectator field during inflation [16,26], performing a random walk at superhorizon scales. In de Sitter space, it reaches an equilibrium distribution within a relaxation time of ∼ 1/√λ e-folds, with variance [27,28]

⟨h²⟩ ∼ H_*²/√λ. (1)

In large-field inflation the adiabatic attractor is not reached and this result is corrected [29], but we do not expect our results to change significantly.

(ii) Higgs non-minimally coupled to gravity -. An interaction ξ|Φ|²R with the Ricci scalar R is required by the renormalization of the SM in curved space [30,31]. If ξ ≲ 0.1, the Higgs is light and we recover case i). If ξ ≳ 0.1, the Higgs is heavy and hence is not excited during inflation. The sudden drop of R at the transition from the end of inflation to a standard power-law post-inflationary regime induces, however, a non-adiabatic excitation of the Higgs, which acquires a variance [32]

⟨h²⟩ ∼ H_*²/√ξ. (2)

In the weak coupling limit the Higgs is therefore always excited in the form of a condensate with a large vacuum expectation value (VEV): either during inflation [case i)] with a typical amplitude h_rms ∼ H_*/λ^(1/4), or around the time when inflation ends [case ii)] with typical amplitude h_rms ∼ H_*/ξ^(1/4). Given the weak dependence on λ and ξ, respectively, the main difference between the two cases, rather than in the amplitude, lies in the scale over which the Higgs condensate amplitude varies: while the correlation length is exponentially large in case i), H_* l_* ≫ 1 [27], it is only of the size of the horizon at the end of inflation in case ii), H_* l_* ∼ 1 [32].

Soon after inflation ends, the Higgs condensate oscillates around the minimum of its potential. Each time the Higgs crosses zero, particle species coupled to the Higgs - the electroweak gauge bosons and charged fermions of the SM - are created in non-perturbative bursts [16-20,33]. Contrary to the standard case of inflaton preheating, where the inflaton dominates the energy budget of the universe, the Higgs here is rather a sub-dominant energy component of the total budget. One can easily see this by considering the Higgs amplitudes of Eqs. (1), (2), from which the ratio of the initial Higgs energy density V_* ∼ (λ/4) h_rms⁴ to that of the inflationary sector, ρ_Inf = 3 m_p² H_*², is found as

r_* ≡ V_*/ρ_Inf = λ h_rms⁴/(12 m_p² H_*²) ≪ 1. (3)

The post-inflationary decay of the Higgs has been studied recently in a series of papers [16-20]. Lattice simulations of the dynamics of the Higgs and the energetically dominant electroweak gauge bosons were carried out in [19,20], incorporating the nonlinear and non-perturbative effects of the SM interactions. During the initial Higgs oscillations, there is an abrupt transfer of energy from the Higgs into the gauge bosons, as expected in broad resonance. Eventually the gauge bosons backreact onto the Higgs condensate and break it apart into higher modes, making the Higgs VEV decrease significantly. The transfer of energy from the Higgs into the SM species ends at a time t = t_end, when the (conformal) amplitude of the Higgs condensate stabilizes.
This moment signals as well the onset of energy equipartition and a stationary regime, from which the system is expected to evolve towards equilibrium. The time t_end, computed within an Abelian approach, is given by Eq. (5) of [19]; the Higgs decay after inflation is generically expected to be fast. The analysis in [19,20] describes the dynamics of the Higgs and the dominant decay species, the W± and Z gauge bosons. The creation of fermions through parametric non-perturbative effects [17], and the decay (scattering) of gauge bosons into (with) fermions, and vice versa, were not included. Therefore, the value of t_end given by Eq. (5) should be interpreted only as an indicative scale of the relaxation time of the fields towards equilibrium. In section III.A.3 we will derive a simple estimate of the thermalization time scale, though its precise value will be unimportant in this paper.

III. REHEATING INTO THE SM

The oscillation-averaged energy density of the Higgs condensate, given the quartic nature of its potential, scales as 1/a⁴ for as long as the Higgs remains homogeneous within its correlation domain. When the Higgs condensate breaks apart into a distribution of other SM species, the energy density of the decay products also scales as 1/a⁴ [19]. Therefore, the energy density of the SM species after inflation scales as that of relativistic degrees of freedom, ρ_SM = 3 m_p² H_*² r_*/a⁴, where we have set a_* = 1. The energy density of the inflaton in the period following inflation evolves as ρ_Inf = 3 m_p² H_*²/a^{3(w+1)}, with w the time-averaged value of the EoS during that period, dictated by the inflaton potential. The ratio of the energy density of the SM species to that of the inflaton therefore evolves as

r(a) ≡ ρ_SM/ρ_Inf = r_* a^{3w-1}, (6)

with r_* ≪ 1 [Eq. (3)] representing the initial suppression of the energy density of the SM relative to the inflaton.

The EoS w between the end of inflation and BBN is unconstrained by observations. We require −1/3 < w ≤ 1 after inflation (by definition w < −1/3 during inflation). Although it is typically assumed that 0 ≲ w ≲ 1/3, there is no reason to exclude a stiff case 1/3 < w ≤ 1. This is the case, for instance, of steep inflation [34,35] in brane-world scenarios (there the Friedmann equation is modified during inflation but recovered just after it, after which an expansion history with a stiff equation of state develops). In fact, a post-inflationary stiff EoS can be easily implemented within any inflationary sector. Denoting by V and K the inflaton potential and kinetic energy, during inflation a slow-roll regime V ≫ K is typically attained. If a feature in the inflaton potential makes its amplitude drop to V < K/2, this triggers the end of inflation, as the EoS w = (K − V)/(K + V) > 1/3 becomes stiff at that moment. The simplest realization of this Kination-domination (KD) regime [36,37] is to assume a rapid transition from V ≫ K during inflation to some small value V ≪ K after inflation, the actual value of V being irrelevant. If after inflation V = 0, then w = +1, while if V/K ≪ 1 but V ≠ 0, then w ≃ +1 − O(V/K).

If we define δw ≡ (w − 1/3), then r(a) = r_* a^{3δw}. If δw ≤ 0 (the standard assumption of w ≤ 1/3), then r(a) either remains as small as r_* (w = 1/3), or decreases even further as ∝ a^{−3|δw|} (0 ≤ w < 1/3). However, for a stiff EoS, 0 < δw ≤ 2/3 and r(a) grows. Despite starting from a very small value, r(t_*) = r_* ≪ 1, for a stiff EoS there is always a time t_SM for which r(t ≥ t_SM) ≥ 1. By construction 1 = r_* a_SM^{3δw}, with a_SM ≡ a(t_SM) = r_*^{−1/(3δw)}. Using a(t) ∝ (H_* t)^{2/(2+3δw)}, we find

H_* t_SM ∼ r_*^{−(2+3δw)/(6δw)}. (7)

The energy budget of the universe becomes dominated by the SM fields at a time t = t_SM after inflation.
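As a quick numerical illustration of Eqs. (6)-(7), the sketch below evolves the energy ratio for the fiducial kination case; the value of r_* is an illustrative assumption, not a number from the text.

```python
# Minimal numeric sketch of Eqs. (6)-(7): growth of the SM-to-inflaton
# energy ratio r(a) = r_* a^(3*dw) during a stiff post-inflationary epoch,
# and the scale factor / time at which the SM species start to dominate.

r_star = 1e-7        # initial suppression r_* << 1 (illustrative assumption)
w = 1.0              # stiff equation of state (kination)
dw = w - 1.0 / 3.0   # delta_w = w - 1/3

a_SM = r_star ** (-1.0 / (3.0 * dw))                  # from r(a_SM) = 1
Ht_SM = r_star ** (-(2.0 + 3.0 * dw) / (6.0 * dw))    # from a ~ (H_* t)^(2/(2+3dw))

print(f"a_SM / a_* ~ {a_SM:.2e}")   # ~3e3 for these inputs
print(f"H_* t_SM  ~ {Ht_SM:.2e}")   # ~1e7 for these inputs
```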
If the SM particles are already in thermal equilibrium when their dominance begins, one can compute the temperature T_SM of the system at t = t_SM. Using ρ_SM(t_SM) ≡ (π²/30) g_SM T_SM⁴ = 3 m_p² H_*² r_*/a_SM⁴, it is obtained as

T_SM = [90 m_p² H_*² r_*^{(3δw+4)/(3δw)}/(π² g_SM)]^{1/4}, (8)

with g_SM the SM thermal degrees of freedom at t_SM. The process just described can of course be identified with a reheating mechanism, identifying Eq. (8) with a reheating temperature, but only if certain non-trivial circumstances are met, which we discuss next.

A. Requirements for successful reheating

1) Ensuring small cosmological perturbations -. A sufficiently long period of KD allows the Higgs to generate the total energy density, making the Higgs a curvaton candidate. At t = t_SM, the Higgs field perturbations are converted into adiabatic perturbations. In case i), where the Higgs field perturbations were generated during inflation, we have δh ∼ H_*, and the power spectrum generated by the Higgs field, using Eq. (1), is ∼ δh²/⟨h²⟩ ∼ λ^{1/2}. Unless λ is finely tuned to a very small value, the result is far larger than the observed perturbation amplitude of ∼ 10^-9, which rules out case i), in agreement with [38]. In case ii) the Higgs is heavy during inflation, leading to its perturbations being exponentially suppressed. Fortunately, this does not lead to a completely smooth universe: unavoidable gravitational couplings between the inflaton and the Higgs field mean that the inflaton perturbations are preserved, even after the inflaton energy density becomes negligible at t > t_SM [39,40]. Therefore case ii) remains observationally viable, provided that the inflaton field is chosen such that it generates the observed perturbation spectrum.

2) Ensuring SM dominance before BBN -. For the above mechanism to represent a viable reheating scenario, we need the SM dominance to occur before BBN, i.e. T_SM > T_BBN ∼ 1 MeV. Noting that any rapid transition from V ≫ K to V ≪ K in the inflationary sector implies w ≃ +1 shows that the 'stiffness' requisite is not a strong constraint. Therefore, from now on we will adopt w = +1 as a fiducial case.

It turns out that applying Eq. (8) for obtaining T_SM in case ii) - the only viable case - is misleading. This is because the Higgs field becomes tachyonic once the KD regime is established after inflation, acquiring a mass m_h² = −6ξH_*² (m_h² = −3|3w − 1|ξH_*² for an arbitrary stiff EoS 1/3 < w ≤ 1). Eq. (8) was derived, however, on the basis of the Higgs amplitudes of Eqs. (1), (2), implicitly assuming an outgoing m² ≥ 0 mass state [30]. As the tachyonic condition makes the Higgs amplitude grow exponentially fast after inflation, h ∝ exp{√(6ξ) ∫(ȧ/a)dt}, this actually solves the problem: the Higgs self-interactions will naturally shut off the tachyonic instability, on a time scale much shorter than the initial Hubble time 1/H_*, when λh² ∼ 6ξH_*² (neglecting the time evolution of the Hubble rate for simplicity). In order to avoid a cosmological catastrophe with the Higgs reaching a deeper vacuum than the electroweak one [41-43], there is a maximum amplitude h ≤ h_vac that the Higgs should not surpass, with h_vac(m_t, m_H, α_s) a function depending sensitively on the top quark mass m_t, the Higgs mass m_H, and the strong coupling constant α_s [23-25]. The maximum Hubble rate that maintains vacuum stability, H_*^vac, is correspondingly determined by h_vac.
The Higgs amplitude and Higgs energy fraction at the moment of tachyonic stabilization are

h_stab ∼ (6ξ/λ)^{1/2} H_*, (9)

r_* ∼ 3 (ξ²/λ) (H_*/m_p)². (10)

The rapid tachyonic phase makes the Higgs amplitude experience a significant growth until the Higgs self-interactions stabilize the amplitude at Eq. (9). Afterwards the non-minimal coupling to gravity quickly becomes unimportant, since ξR ∼ −6ξH_*²/a⁶, so the Higgs oscillates around its potential as if ξ = 0. To compute the reheating temperature T_RH taking into account the impact of the tachyonic phase, we need to use Eq. (10) [instead of Eq. (3), which was used to derive Eq. (8)]. The temperature at the time the SM dominates is then

T_SM ∼ 3 × 10^10 (ξ²/λ)^{3/4} (H_*/10^14 GeV)² GeV, (11)

where we have assumed w = 1 ⇔ δw = 2/3. The maximum temperature is obtained using H_* = H_*^vac. From T_SM > T_BBN ∼ 1 MeV [44], we deduce that H_* ≥ H_*^min ≡ (ξ²/λ)^{−3/8} × 10^7 GeV. Successful reheating is therefore only possible for relatively large inflationary energy scales. In Fig. 1 we plot the temperature as a function of various model parameters.

Let us note that in reality we need the SM thermal plasma to dominate some time before BBN, so that when BBN is ignited the expansion rate is sufficiently close to that of radiation domination (RD). If we demand that T_SM = p T_BBN with p > 1, then the Hubble rate at BBN will be H(T_BBN) = H^(RD)(T_BBN) (1 + p^{-2})^{1/2}, where H^(RD) is the theoretically correct BBN expansion rate in an exact RD background. The relative difference is then ΔH/H^(RD) ≃ 1/(2p²). Therefore, it is enough that p ≥ 10 so that the deviation of the initial BBN expansion rate with respect to an ideal RD case is less than 1%.

3) Ensuring thermal equilibrium before SM dominance -. The thermalization time of the SM fields can be estimated as t_Eq ∼ 1/(α²T_Eq), with T_Eq the temperature of the system when thermal equilibrium is first established, and α = g²/(4π) the relevant coupling(s). Defining t_Eq ≡ γ t_end, with t_end given by Eq. (5), and using T_Eq ∼ 1/(α²γt_end) and a_Eq ≃ (2γH_* t_end)^{1/2}, with g_Eq the SM thermal degrees of freedom at t_Eq, we find γ ∼ O(10³)/ξ, where we have set g² ≃ 0.3 and g_Eq ≃ 100. We confirm a posteriori, therefore, that in fact t_Eq ≪ t_SM. Although more elaborate calculations of t_Eq could be made [45-47], the precise value of t_Eq is irrelevant for the purpose of reheating the universe into the SM, as long as t_Eq ≪ t_SM. At t ≥ t_SM the expansion of the universe becomes driven by a thermal relativistic plasma of SM species, as required by the standard hot Big Bang paradigm. The temperature T_SM [Eq. (11)] can therefore be identified with a reheating temperature, defined as the highest temperature reached by the SM thermal plasma when it first dominates the energy budget. For instance, for KD with w ≃ 1 (δw ≃ 2/3), H_* = 0.1 H_*^vac = 0.01 H_*^max, and λ = 0.005, we obtain T_SM ≃ 10^9 ξ^{3/2} GeV.

4) Other considerations -. The inflaton, as well as the Higgs field, undergoes a non-adiabatic change in mass during the rapid transition from inflation to KD. The inflaton dominates the energy budget of the universe at this time, so even if only a small fraction of the inflaton condensate decays into radiation during this transition, the inflaton decay products might forever dominate over the Higgs energy density, spoiling our goal of achieving the hot Big Bang from the Higgs field. In the limit of a fast transition, the fraction of energy in inflaton decay products immediately after the transition can be estimated as r_φ ∼ η_φ² (H_*/m_p)², where m_φ is the effective inflaton mass just before the transition and η_φ ≡ m_φ²/(3H_*²) < 1 is a slow-roll parameter.
Comparing this to the Higgs energy after its tachyonic growth is stabilised, Eq. (10), we see that for ξ ≥ 1 the Higgs strongly dominates the total radiation component at this time, by a factor ∼ ξ²/(λη_φ²) ≫ 1.

IV. DISCUSSION

The cosmological implications of the SM Higgs in the early Universe remain to be clarified. The possible role of the Higgs as the inflaton, or as a mediator field connected somehow to the inflationary sector, remains unknown. The circumstances needed to prevent a catastrophic cosmological instability, by which the Higgs might reach a deeper (and negative) vacuum different from the electroweak one during inflation or preheating, have recently attracted a great deal of attention [15-20, 41-43, 48-59]. In this letter we consider the SM stable all the way up to the inflationary scale, and the SM Higgs sufficiently weakly coupled to the inflationary sector; a circumstance that we refer to as the weak coupling (or decoupling) limit. The Higgs is then universally excited either during, or shortly after the end of, inflation. We introduce a period of KD with a stiff EoS and show that under such a circumstance the Higgs becomes a curvaton, which generates unacceptably large perturbations in the absence of a non-minimal coupling to gravity. If a sufficiently large non-minimal coupling to gravity is considered, ξ ≳ 1, the post-inflationary decay of the Higgs provides a simple explanation for the origin of the relativistic thermal plasma of SM species (the Higgs decay products) necessary to begin the 'hot Big Bang' radiation era.

Currently the relation between two of the fundamental pillars of our understanding of the Universe, the SM of particle physics and the inflationary framework, is unknown. Therefore, obtaining a mechanism providing an origin of the thermal universe dominated by the SM species is not trivial. The mechanism we propose provides a possible explanation for the reheating of the Universe into the SM fields after inflation, with a reheating temperature that can be rather large. Our major requirement on the inflationary sector is that the background energy density be dominated by its kinetic part after inflation; a condition which is independent of the inflaton potential during inflation. A potentially observable consequence of the KD regime after inflation is that the otherwise (almost) scale-invariant background of gravitational waves expected from inflation will be boosted at the high-frequency end of the spectrum [60-62]. Another consequence, though rather unlikely to be observable, is the production of a background of gravitational waves from the Higgs decay products themselves [17,63], with a peak amplitude today of Ω_GW ∼ 10^-16 (for H_* = H_*^max) at f_p ∼ 10^11 Hz. It will be interesting to explore the introduction of an inflation-to-KD transition as a model-dependent feature in the inflaton potential, as well as a proper study of the thermalization of the SM species after inflation. The need to produce dark matter [64-67] and to realize baryogenesis [68-70] within the setup we are proposing are also interesting avenues to be explored. In summary, we have shown for the first time that one can generate the entire post-inflation SM radiation bath from the Higgs field, without spoiling the successful predictions of the observed perturbation spectrum from inflation, and without any contribution or coupling to the inflaton field.
We then quantified the required parameter space in which this is possible, and found that an order-one or larger non-minimal coupling between the Higgs field and gravity is required in order not to spoil the observed spectrum of primordial perturbations.
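A minimal numeric sketch of the reheating-temperature scaling quoted in Eq. (11), and of the BBN lower bound on H_*, follows; the parameter values are illustrative assumptions, and the precise prefactor of Eq. (11) is a reconstruction from the surrounding text.

```python
# Sketch of the T_SM scaling of Eq. (11) for the fiducial kination case w = +1.
def T_SM_GeV(xi, lam, H_star_GeV):
    """Temperature at SM dominance (GeV), w = +1, i.e. delta_w = 2/3."""
    return 3e10 * (xi**2 / lam) ** 0.75 * (H_star_GeV / 1e14) ** 2

# Illustrative parameter points (assumed values, not taken from Fig. 1):
for xi in (1.0, 10.0):
    print(f"xi = {xi:4.1f}: T_SM ~ {T_SM_GeV(xi, lam=0.005, H_star_GeV=1e13):.2e} GeV")

# BBN viability, T_SM > ~1 MeV, translates into the quoted minimal Hubble rate:
xi, lam = 1.0, 0.005
H_min = (xi**2 / lam) ** (-3.0 / 8.0) * 1e7
print(f"H_*^min ~ {H_min:.2e} GeV")
```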
6,055
2016-04-13T00:00:00.000
[ "Physics" ]
Bounds on the superconducting transition temperature: Applications to twisted bilayer graphene and cold atoms

Understanding the material parameters that control the superconducting transition temperature T_c is a problem of fundamental importance. In many novel superconductors phase fluctuations determine T_c, rather than the collapse of the pairing amplitude. We derive rigorous upper bounds on the superfluid phase stiffness for multi-band systems, valid in any dimension. This in turn leads to an upper bound on T_c in two dimensions (2D), which holds irrespective of pairing mechanism, interaction strength, or order-parameter symmetry. Our bound is particularly useful for the strongly correlated regime of low-density and narrow-band systems, where mean-field theory fails. For a simple parabolic band in 2D with Fermi energy E_F, we find that k_B T_c ≤ E_F/8, an exact result that has direct implications for the 2D BCS-BEC crossover in ultra-cold Fermi gases. Applying our multi-band bound to magic-angle twisted bilayer graphene (MA-TBG), we find that band structure results constrain the maximum T_c to be close to the experimentally observed value. Finally, we discuss the question of deriving rigorous upper bounds on T_c in 3D.

Our work is motivated by the fundamental question: what limits the superconducting (SC) transition temperature T_c? Within BCS mean-field theory, and its extensions like Eliashberg theory, the amplitude of the SC order parameter is destroyed by the breaking of pairs, and T_c scales with the pairing gap ∆. The material parameters that control the mean-field T_c are the electronic density of states (DOS) at the chemical potential, N(0), and the effective interaction, determined by the spectrum of fluctuations that mediate pairing. Beginning with the pioneering experiments of Uemura [1] and the theoretical ideas of Emery and Kivelson [2] on underdoped cuprates, it became clear that the mean-field picture of T_c scaling with the pairing gap is simply not valid in many novel superconductors. The loss of SC order is then governed by fluctuations of the phase of the order parameter, rather than the suppression of its amplitude, and T_c is related to the superfluid stiffness D_s. The material parameters that determine D_s are rather different from those that determine the pairing gap ∆.

The question of mean-field amplitude collapse versus a phase-fluctuation-dominated SC transition is brought into sharp focus by a variety of recent experiments on narrow-band and low-density systems. One of the most exciting recent developments is the observation of very narrow bands in magic-angle twisted bilayer graphene (MA-TBG), leading to correlation-induced "Mott" insulating states [3] and superconductivity [4] in their vicinity. Flat bands are also expected to arise in various topological states of matter; see, e.g., [5-8]. BCS theory-based intuition suggests that narrow bands have a large DOS N(0) and lead to high-temperature superconductivity. Is this true, or do phase fluctuations limit the T_c? The extensive compilation of data in Fig. 6 of ref. [4] suggests that all known superconductors have a T_c that scales at most like a constant times the "Fermi energy E_F", though there is considerable leeway in defining E_F in strongly correlated and multi-band materials. We also note that ultra-cold Fermi gases in the strongly interacting regime of the BCS-BEC crossover [9,10] exhibit experimental values [11] of k_B T_c/E_F larger than those observed in the solid state.
All of these observations raise the question of ultimate limits on the T_c of a superconductor or paired superfluid. In this paper, we obtain sharp answers to these questions, especially in 2D. First, we derive an upper bound on the superfluid stiffness, D_s(T) ≤ D(T), where D is proportional to the optical conductivity sum rule. This inequality is valid in all dimensions and for arbitrary interactions. We then use the Berezinskii-Kosterlitz-Thouless (BKT) theory in 2D to obtain k_B T_c ≤ (π/2) D(T_c). While the bound on T_c is of completely general validity, it is most useful in the strongly correlated regime of narrow-band and low-density systems, precisely where conventional mean-field approaches fail. We show that D is necessarily "small" in such systems, and, in many cases of interest, D is essentially determined by the (non-interacting) band structure.

We give several examples that illustrate the usefulness of our bounds for a variety of systems. For a single parabolic band we show that k_B T_c ≤ E_F/8 in 2D. This exact result poses stringent constraints on the T_c of the 2D BCS-BEC crossover in ultra-cold atoms. We also describe bounds on T_c for the 2D attractive Hubbard model, relevant for current optical lattice experiments [12], that demonstrate the tension between the breaking of pairs and phase fluctuations, and highlight the connection with a pairing pseudogap [13,14]. Turning to multi-band systems, we use available band structure results [15-19] for MA-TBG to calculate D and thus constrain its T_c without any assumptions about the pairing mechanism or order-parameter symmetry. We obtain a rigorous (but weak) bound of 15 K. Using physically motivated approximations, we estimate a bound on T_c as low as 6 K. Finally, we discuss the question of deriving similar bounds in 3D. We show that the presence of non-universal pre-factors in the relation between T_c and D_s, as well as their scaling behavior near a SC quantum critical point, pose challenges in deriving a rigorous bound in 3D.

Results: We first outline our main results and then give a detailed derivation and specific applications. We consider a Fermi system described by the general Hamiltonian

H = H_K + H_int, with H_K = Σ_{k,m,σ} ε_m(k) c†_{kmσ} c_{kmσ}, (1)

where k is crystal momentum, m is a band label, and σ the spin. H_K is the kinetic energy and H_int describes interactions (electron-phonon, electron-electron, etc.), including those that give rise to superconductivity. The external vector potential A enters H through a Peierls substitution in the tight-binding representation of H_K, but does not affect H_int. For now, we ignore disorder and return to it at the end.

The macroscopic superfluid stiffness D_s determines the free-energy cost of distorting the phase of the SC order parameter |∆|e^{iθ} via the Boltzmann factor exp[−D_s ∫d^d r |∇θ|²/(2k_B T)]. It is related to the London penetration depth via 1/λ_L² = (4µ₀e²/ħ²) D_s in 3D. Microscopically, D_s can be calculated as the static, long-wavelength limit of the transverse current response [20,21] to a vector potential. (Our results are equally valid for neutral superfluids, with rotation playing the role of the magnetic field.) We obtain a rigorous upper bound, valid in any dimension,

D_s(T) ≤ D ≡ (ħ²/4Ω) Σ_{k,σ} Σ_{m,m′} M⁻¹_{mm′}(k) ⟨c†_{kmσ} c_{km′σ}⟩, (2)

where Ω is the volume of the system and M⁻¹_{mm′}(k) is an inverse mass tensor that depends only on the electronic structure of H_K; see eq. (5) below. The temperature and interactions impact D only through ⟨c†_{kmσ} c_{km′σ}⟩, where the thermal average is calculated using the full H.
We next use D_s to provide an upper bound on the SC transition temperature in 2D. We use the Nelson-Kosterlitz [22] universal relation to obtain

k_B T_c ≤ (π/2) D(T_c). (3)

For a weak-coupling superconductor, T_c is well described by mean-field theory and our result, though valid as an upper bound, may not be very useful. On the other hand, as we show below, for a strongly interacting system the bound gives insight both into the value of T_c and into its dependence on parameters.

Bound on superfluid stiffness: The intuitive idea behind D_s ≤ D is as follows. (2πe²/ħ²) D is the optical conductivity spectral weight integrated over the bands in eq. (1), and (4πe²/ħ²) D_s is the coefficient of the δ(ω) piece in Re σ(ω) in the SC state (note: ∫₀^∞ dω δ(ω) = 1/2). The inequality (2) says that the weight in the SC delta function must be less than or equal to the total spectral weight. To derive (2), we use the Kubo formula for D_s as a linear response [20,21] to an external vector potential in an arbitrary direction a,

D_s = D − χ⊥_{j_a j_a}(q → 0, ω = 0), (4)

where D is the diamagnetic response ∼ δ²H/δA_a², while χ⊥ is the transverse current-current correlation function. D is given by eq. (2) with

M⁻¹_{mm′}(k) = (1/ħ²) Σ_{α,β} U†_{m,α}(k) [∂²t_{αβ}(k)/∂k_a²] U_{β,m′}(k). (5)

Here α, β label orbitals/sites within a unit cell of a Bravais lattice, t_αβ(k) is the Fourier transform of the hopping t_αβ(r_iα − r_jβ), and U_{α,m}(k) is the unitary transformation that diagonalizes t_αβ(k) to the band basis ε_m(k)δ_{m,m′}. The inverse mass tensor in eq. (5) also depends on the direction a = x, y, ... through the derivative with respect to k_a on the right-hand side; however, we do not show this a-dependence explicitly, to simplify the notation. These results are derived in Appendix A, and the relation to the optical sum rule is shown in Appendix B; see also ref. [23].

We next turn to the second term in eq. (4). From its Lehmann representation we see that χ⊥(q → 0, ω = 0) ≥ 0 at all temperatures (see Appendix C), and we thus obtain D_s ≤ D.

For a single-band system eqs. (2) and (5) simplify greatly and we get D = (ħ²/4Ω) Σ_{k,σ} [∂²ε(k)/∂k_a²] n_σ(k), where the momentum distribution n_σ(k) = ⟨c†_{kσ} c_{kσ}⟩. This allows us to recover well-known special cases. (1) With nearest-neighbor (NN) hopping on a square or cubic lattice, ∂²ε(k)/∂k_a² ∼ ε(k), and D is proportional to the kinetic energy. (2) A parabolic dispersion ε(k) = ħ²k²/2m leads to the simple result D = ħ²n/4m, independent of T and of interactions. Here D_s(T) = ħ²n_s(T)/4m and our bound simply says that the superfluid density n_s(T) ≤ n, the total density.

For materials with a non-parabolic dispersion and/or multiple bands, D depends on T and interactions. It is thus illuminating to derive a bound for D which depends only on the density. We describe the single-band result here, relegating the multi-band generalization to Appendix D. We write H_K = −Σ_{R,δ,σ} t(δ) c†_{R+δ,σ} c_{R,σ} + h.c., with translationally invariant hopping amplitudes t(δ) that depend only on the vector δ connecting lattice sites R and R + δ. We couple the system to a vector potential and compute D, which involves terms like ⟨c†_{R+δ,σ} c_{R,σ}⟩. We note that D ≥ 0, since it is the sum rule for Re σ(ω) ≥ 0. We then use the triangle inequality and the Cauchy-Schwarz inequality |⟨c†_i c_j⟩| ≤ (⟨n_i⟩⟨n_j⟩)^{1/2} = n to obtain D_s ≤ D ≤ (n/2) Σ_δ δ_a² |t(δ)|. This shows that for small hopping and/or low density, one necessarily has a small D_s.

T_c bound in 2D: For a BKT transition in 2D, T_c and the stiffness D_s are related by the universal ratio [22] k_B T_c = (π/2) D_s(T_c⁻); since D_s ≤ D, we then immediately obtain eq. (3). In an anisotropic system D depends on a = x, y through the ∂²/∂k_a² in eq. (5).
We can use D = max(D_x, D_y) to obtain a bound on T_c; a refinement of this choice for anisotropic systems is discussed in Appendix H. We emphasize that eq. (3), with D(T_c) on the RHS, is sufficient to derive the rigorous results below. However, to obtain the intuitively more appealing result k_B T_c ≤ (π/2) D(0), we need to assume that D_s(T) is a decreasing function of temperature.

2D Parabolic Dispersion: Consider a single band with ε(k) = ħ²k²/2m and density n, so that the Fermi energy E_F = πħ²n/m, and arbitrary interactions that lead to pairing and superconductivity. Then M⁻¹(k) = m⁻¹ and Ω⁻¹ Σ_{k,σ} n_σ(k; T) = n, independent of T and interactions, so that D = ħ²n/4m. Eq. (3) then leads to the simple result

k_B T_c ≤ E_F/8, (6)

which must be obeyed independent of the strength of the attraction or the order-parameter symmetry, provided the system exhibits a BKT transition. In a weak-coupling superconductor T_c will actually be much smaller than E_F/8 but, as we discuss next, the bound can be saturated in systems with strong interactions, such as the 2D BCS-BEC crossover experiments in ultra-cold Fermi gases.

2D BCS-BEC crossover: In ultra-cold Fermi gas experiments the two-body s-wave interaction between atoms is tuned using a Feshbach resonance. This has led to deep insights into the crossover [9,10] from the weak-coupling BCS limit with large Cooper pairs all the way to the BEC of tightly bound diatomic molecules. Asymptotically exact results are available in both the BCS and BEC limits; however, the crossover regime between the two extremes is very strongly interacting, with a pair size comparable to the inter-particle spacing, and is much less understood. It is precisely here that our exact upper bound (6) is relevant. The 2D crossover for s-wave pairing is parameterized by the dimensionless interaction [24] log(E_b/E_F), where E_b is the binding energy of the two-body bound state in vacuum and E_F the Fermi energy. In the weak-coupling limit (E_b ≪ E_F), k_B T_c ∝ (E_b E_F)^{1/2}, with a pre-factor that has been computed including the Gorkov-Melik-Barkhudarov (GMB) correction [25,26]. Clearly this T_c is much smaller than our bound. In the BEC limit (E_b ≫ E_F) the composite bosons have mass 2m, density n/2, and an inter-boson scattering length a_B, and T_c is suppressed only logarithmically below our bound [27], a result which is valid in the regime log log[1/(n a_B²)] ≫ 1. This too is smaller than our bound, though our exact result cautions against a naive extrapolation of the BEC-limit result into the strong-interaction regime. The results of the 2D Fermi gas experiment of ref. [28] seem to violate eq. (6) in the crossover regime. We note, however, that our bound is obtained for a strictly 2D system in the thermodynamic limit, while the experiment is on a quasi-2D system in a harmonic trap, from which it is difficult to accurately determine the BKT T_c. The finite size of the trap raises T_c; even the non-interacting Bose gas in a 2D harmonic trap has a non-zero T_c.

Magic-angle twisted bilayer graphene: Let us next turn to a multi-band system of great current interest. The existence of very narrow bands in MA-TBG was predicted by continuum electronic structure calculations [15,16] that pointed out the crucial role of the dimensionless ratio w/(ħv_F⁰ Kθ), where θ is the twist angle between the two layers, w is the interlayer tunneling, v_F⁰ the bare Fermi velocity, and K the Dirac-node location in monolayer graphene. It was predicted that v_F in TBG can be tuned to zero [15], with a bandwidth less than 10 meV, by choosing certain magic angles θ, the largest of which, ≈ 1.1°, has now been achieved in experiments [3,4]. Recently, pressure tuning of w has also resulted in very narrow bands [29].
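To make the single-band version of the bound concrete before turning to MA-TBG, here is a toy numerical sketch (our own illustration, not the paper's calculation): it evaluates D = (ħ²/4Ω) Σ_{k,σ} [∂²ε/∂k_x²] n_σ(k) for a square-lattice NN band with a step-function occupation, and the resulting 2D bound k_B T_c ≤ (π/2)D. The filling n = 0.7 matches the Hubbard-model example discussed below; units of the hopping t, with ħ = k_B = lattice constant = 1, are an assumption of the sketch.

```python
# Toy evaluation of the stiffness bound for a single square-lattice NN band.
import numpy as np

t, n_target, L = 1.0, 0.7, 200             # hopping, filling, k-grid size
ks = 2 * np.pi * np.arange(L) / L
KX, KY = np.meshgrid(ks, ks, indexing="ij")
eps = -2 * t * (np.cos(KX) + np.cos(KY))    # NN dispersion
d2eps = 2 * t * np.cos(KX)                  # d^2(eps)/dkx^2

# T = 0 band-theory occupation: fill the lowest states up to density n.
mu = np.quantile(eps, n_target / 2.0)       # per-spin filling = n/2
nk = (eps <= mu).astype(float)              # step-function n_sigma(k)

D = (1.0 / (4 * L * L)) * np.sum(2 * d2eps * nk)  # factor 2 = spin sum
print(f"D ~ {D:.3f} t,  bound: k_B T_c <= (pi/2) D ~ {np.pi * D / 2:.3f} t")
```

With these inputs the bound comes out of order a few tenths of t, consistent with the T_c ≤ 0.3t quoted below for the step-function estimate at n = 0.7.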
Little is known at this time about the nature of the SC state or the pairing mechanism, though the observed nonlinear I-V characteristics [3,4] are consistent with a BKT transition. Proximity to a "Mott" insulator and the narrow bandwidth suggest the importance of electron correlations, while the extreme sensitivity of the dispersion to structure suggests that electron-phonon interactions could also be important. We argue here that, simply using the available electronic structure information for MA-TBG, and without any prejudice about the interactions responsible for SC, we can put strong constraints on its superconducting T_c.

There are two bands for each of the two valleys, one above and the other below the charge neutrality point (CNP). Each band has a two-fold spin degeneracy, with the bands for one valley related to those of the other by time reversal. We include these eight bands in the Σ_{mm′,σ} in eq. (2), while the Σ_k is over the moiré Brillouin zone, a hexagon with side 2K sin(θ/2) ≃ Kθ. We use the tight-binding model of ref. [17], a multi-parameter fit to the continuum dispersion [15], to calculate the M⁻¹_{m,m′}(k) of eq. (5), which is block-diagonal in the valley index, so that there are no cross-valley terms in eq. (2).

We can obtain a more stringent T_c bound if we use further physical inputs. The "Mott" gap in the correlated insulator is experimentally [3,4] known to be ≈ 0.3 meV, and we expect a superconducting gap which is at most that value. Thus we may assume that, at half-filling away from the CNP on the hole-doped side, say, the bands above the CNP are essentially empty and unaffected by pairing. Before proceeding, we derive a general result, valid for arbitrary interactions, which shows that inter-band terms do not contribute to eq. (2) for completely filled or empty bands. To prove this, we again use the Cauchy-Schwarz inequality: |⟨c†_{kmσ} c_{km′σ}⟩|² ≤ n_{mσ}(k) n_{m′σ}(k) = 0 when either band m or m′ is empty. A similar argument works for the filled case after a particle-hole transformation; see Appendix E. Thus ⟨c†_{kmσ} c_{km′σ}⟩ = 0 for m ≠ m′ whenever either of the two bands is completely filled or empty, and only m = m′ terms survive in eq. (2).

To bound T_c for MA-TBG near half-filling on the hole-doped side of the CNP, we take n_m(k) = 0 for the empty bands above the CNP, as explained above. Keeping only band-diagonal terms and using the triangle inequality, we obtain D ≤ (ħ²/4Ω) Σ_{k,m,σ} |M⁻¹_{mm}(k)| n_{mσ}(k). Using n(k) ≤ 1 for the bands below the CNP, we obtain the bound T_c ≤ 14.4 K near half-filling for hole doping, using the tight-binding model of ref. [17]. A similar calculation leads to T_c ≤ 15.0 K near half-filling for electron doping; see Appendix F. We note that using M⁻¹ and general constraints on n(k) leads to rigorous results, but weakens the bounds.

Finally, we make a physically motivated estimate of D, which yields an improved, but approximate, result. We use the T = 0 band-theory result ⟨c†_{kmσ} c_{km′σ}⟩ = δ_{m,m′} Θ(µ − ε_m(k)), with the chemical potential µ determined by the density Ω⁻¹ Σ_{k,m,σ} n_{mσ}(k). This, together with the M⁻¹_{mm}(k) calculated from the tight-binding model of ref. [17], leads to the density-dependent estimate of D plotted in Fig. 1. We note that using ∂²/∂k_x² versus ∂²/∂k_y² to calculate M⁻¹ affects our estimates by less than a percent. The integrated optical spectral weight, given by (2πe²/ħ²) D, vanishes at the band insulators, when all bands are either filled or empty.
Clearly our band-structure-based estimate does not know about the "Mott" insulating states at half-filling away from the CNP. (π/2) times the D plotted in Fig. 1 is an estimated upper bound on the SC T_c. The system is not SC over most of the doping range, but our bound is the maximum attainable T_c if the system were to exhibit superconductivity. We find the maximum T_c to be about 6 K, while the experimental value is 3 K [29]. We note that the T_c bounds are sensitive to the precise electronic structure results we use as input for calculating M⁻¹. As shown in Appendix F, using the tight-binding results of ref. [18] for MA-TBG leads to a T_c estimate about 2.5 times higher than the one presented above, based on the band structure of ref. [17]. We emphasize that these differences arise from the fact that the details of the non-interacting band structure of MA-TBG are not very well established. Irrespective of that, our results suggest that MA-TBG is a strongly correlated SC in a phase-fluctuation-dominated regime.

2D attractive Hubbard model and optical lattices: We next obtain important insights into the value of T_c and its interaction dependence for the 2D attractive Hubbard model, where we can compare our bound with sign-problem-free Quantum Monte Carlo (QMC) simulations [30]. This system has also been investigated in recent optical lattice experiments [12]. Consider nearest-neighbor (NN) hopping on a square lattice, with H = −t Σ_{⟨i,j⟩,σ} c†_{iσ} c_{jσ} + h.c. − |U| Σ_i (n_{i↑} − 1/2)(n_{i↓} − 1/2). For n ≠ 1 the system has an s-wave SC ground state, exhibiting a crossover from a weak-coupling BCS state (|U|/t ≪ 1) to a BEC of hard-core on-site bosons (|U|/t ≫ 1). The QMC estimate [30] of T_c, obtained from the BKT jump in D_s, is a non-monotonic function of |U|/t at a fixed density n; see Fig. 2. The BCS mean-field T_c^MFT correctly describes the weak-coupling T_c. (For a more accurate estimate, one should take into account the GMB correction [25], which suppresses the numerical pre-factor but does not alter the functional form of T_c^MFT.) For |U|/t > 2, T_c^MFT is the scale at which pairs dissociate, and it lies well above T_c. In the |U|/t ≫ 1 limit we see T_c ∼ t²/|U|, the effective boson hopping.

Our bound permits us to understand T_c(|U|/t) in the intermediate-coupling regime, where there are no other reliable analytical estimates. To estimate D analytically, we need to make an approximation for n(k). If we choose a step function (as we did for MA-TBG) we get T_c ≤ 0.3t for n = 0.7, independent of |U|/t. To obtain a better estimate, we note that, as |U|/t increases, the pair size shrinks and n(k) broadens. In the extreme |U|/t limit of on-site bosons, n(k) is flat (k-independent), leading to D → 0, since ∂²ε/∂k_x² is a periodic function with zero mean, whose k-sum vanishes. To model this broadening of n(k), we use the results of the T = 0 BCS-Leggett crossover theory; see Appendix G. This gives us the (approximate) bound plotted in Fig. 2, which has the correct t²/|U| asymptotic behavior at large |U|. For temperatures between the pairing scale T_c^MFT and the T_c at which phase coherence sets in, the "normal state" exhibits a pseudogap due to pre-formed pairs [13,14].

Three-dimensional systems: Experiments suggest that there may be an upper bound on T_c in 3D systems; see, e.g., Fig. 6 of ref. [4]. We have not succeeded in deriving a rigorous bound on the 3D T_c, unlike in 2D.
There are two challenges that one faces in trying to derive a bound in 3D, one related to rigorous control of numerical pre-factors and the other to the functional form of the relation between T_c and D_s. Both are related to the fact that in 3D the superfluid stiffness does not have dimensions of energy, unlike in 2D. Following Emery and Kivelson (EK) [2], we focus on the 3D phase-ordering temperature k_B T_θ = A D_s(0) a, which could provide a bound on T_c. Here A is a (dimensionless) constant and a is the length scale up to which one has to coarse-grain to derive an effective XY model. EK use a² = πξ², where ξ is the coherence length, and suggest, based on Monte Carlo results for classical XY models, that A ≃ 4.4 gave a reasonable account of experiments on underdoped cuprates and other materials. However, the coefficient A is non-universal and can vary from one system to another.

Consider the 3D problem of the BCS-BEC crossover in ultra-cold Fermi gases [10], with ħ²k²/2m dispersion and an interaction, characterized by the s-wave scattering length a_s, tuned using a Feshbach resonance. At unitarity (|a_s| = ∞), the experimental k_B T_c ≃ 0.17 E_F [11], while QMC estimates [31,32] range from k_B T_c ≃ 0.15 E_F to 0.17 E_F. QMC shows the expected non-monotonic behavior of k_B T_c/E_F as a function of 1/k_F a_s, with a maximum k_B T_c/E_F ≃ 0.22 at a small positive 1/k_F a_s. The maximum value of k_B T_c/E_F is larger than the non-interacting BEC result, consistent with the rigorous result [33] that repulsive interactions increase the T_c of a dilute Bose gas in 3D. We choose ξ ≃ k_F⁻¹ near unitarity [34] and try to use k_B T_θ = A (ħ²n/4m)(√π ξ) as a bound on T_c. Consistency with the observed k_B T_c/E_F ≃ 0.22 then requires A ≃ 7.4, quite different from the 4.4 quoted above. We do not know if there is a definite value of A that would give a "phase-ordering" upper bound on T_c in 3D.

The following argument suggests that there may, in fact, be no general bound on T_c that is linear in D_s(0) in 3D. From a practical point of view, one is interested in learning about the highest T_c in a class of materials. But, if a general bound were to exist, it should be equally valid in situations where both T_c and D_s(0) are driven to zero by tuning a (dimensionless) parameter δ → 0⁺ toward a quantum critical point (QCP). From the action S = (1/2) D_s ∫₀^β dτ ∫d^d r |∇θ|² + ..., describing the phase fluctuations of the SC order parameter, we get the quantum Josephson scaling relation [35] D_s(0) ∼ δ^{(z+d−2)ν}. One also obtains, as usual, T_c ∼ δ^{zν}, where z and ν are the dynamical and correlation-length exponents in d spatial dimensions. Thus T_c ∼ [D_s(0)]^{z/(z+d−2)} near the QCP. In 2D, this gives a linear scaling between T_c and D_s(0). However, in 3D we get T_c ∼ [D_s(0)]^{z/(z+1)}, which, sufficiently close to the QCP, will necessarily violate an upper bound on T_c that is conjectured to scale linearly with D_s(0). This is not just an academic issue, as experiments see precisely such a deviation from linear scaling, with T_c ∼ [D_s(0)]^{1/2}, consistent with z = 1, both in highly underdoped [36,37] and in highly overdoped [38,39] cuprates.

Concluding remarks: We have thus far ignored disorder. We note that the D_s of the pure system is necessarily larger than that of the disordered system. This can be seen by generalizing Leggett's bound [40] on the superfluid density (derived in the context of supersolids) to the case of disordered systems [41].
Thus our upper bounds for translationally invariant systems continue to be valid in the presence of disorder, although they can be improved. Although we have focused on narrow band and low density systems here, our bounds have also important implications for systems close to insulating states, either correlation-driven or disorder-driven. In either case, if there is a continuous superconductor to insulator transition, the superfluid stiffness will eventually become smaller than the energy gap and control the SC T c . As a design principle, it is interesting to ask if one can have multi-band systems where a narrow band has a large energy gap and large "mean field" T c interacting with a broad band that makes a large contribution to the superfluid stiffness, thus getting the best of both worlds. where we use the notation R = (r iα + r jβ )/2 and r = r iα −r jβ for simplicity. Since we are eventually interested in the long wavelength limit q → 0, we choose a very slowly varying vector potential and write riα r jβ Within linear response theory we can Taylor expand the exponential retaining terms which are linear (paramagnetic) and quadratic (diamagnetic) in A. We transform to Fourier space using t αβ (k) = r t αβ (r)e −ik·r and c iα = Ω −1/2 k e ik·riα d kα . We can then write the current operator j x = δH K /δA x as the sum of the paramagnetic (P ) and diamagnetic (D) current operators given by where we only show the x-component for simplicity. Note that the paramagnetic current operator, when transformed to the band basis, will in general have interband matrix elements [7,8]. The only property of j P x (q) that we will need to use below, however, is that it is a Hermitian operator; see equation (C1). The superfluid stiffness D s is defined as the static longwavelength limit of the transverse response of the current density j to a vector potential A with q x = 0, q ⊥ → 0, ω = 0 (A6) and ⊥ represents the orthogonal directions to x. Standard linear response theory leads to the Kubo formula where the first term is the diamagnetic term, which is of central interest in this work, and the second is the transverse paramagnetic current-current correlation function. We will focus on the latter in Appendix C, where we show that χ ⊥ jxjx ≥ 0 at all temperatures. Here we focus on the first term that can be read off from the form of the diamagnetic current operator. We find it convenient to write it in the band basis as with the inverse mass tensor given by The unitary transformation U that transforms from the orbital to the band basis is defined by This allows us to write the final result in the band basis using We note several important points about the inverse mass tensor M −1 mm (k). (i) It depends only on the bare band structure, and is independent of temperature and interactions, (ii) it has both diagonal and off-diagonal terms in the band indices. and (iii) it is not simply related to the curvature of the bands ∂ 2 m (k)/∂k 2 x , in contrast to the single-band case in equation (A12). The standard reference on the formalism for calculating the superfluid stiffness in lattice systems is Scalapino, White and Zhang (SWZ) [21]. Our normalization conventions differ from them and, more importantly, they focus on the special case of a single band model with nearest-neighbor (NN) hopping on a square (or cubic) lattice. Thus it may be useful for us to provide a "dictionary" relating our results to theirs. 
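Alongside that dictionary, the multi-band construction of M⁻¹ described above can be made concrete with a small numerical sketch before specializing to the single-band case below. The two-orbital model used here is hypothetical (it is not the MA-TBG parametrization of ref. [17]); it only illustrates the sequence: build t_{αβ}(k), take ∂²t/∂k_x², and rotate to the band basis with the unitary U(k) that diagonalizes t_{αβ}(k), as referenced above.

```python
import numpy as np

# Hypothetical two-orbital toy model (NOT the MA-TBG fit of ref. [17]) illustrating the steps:
# build t_ab(k) in the orbital basis, differentiate twice in k_x, and rotate to the band basis
# with the unitary U(k) that diagonalizes t_ab(k).

t1, t2, g = 1.0, 0.4, 0.6    # assumed hoppings and inter-orbital hybridization (arbitrary)

def t_orb(kx, ky):
    """2x2 Hermitian hopping matrix t_ab(k) in the orbital basis."""
    d1 = -2.0*t1*(np.cos(kx) + np.cos(ky))
    d2 = -2.0*t2*(np.cos(kx) + np.cos(ky))
    off = g*(np.cos(kx) - np.cos(ky)) + 1j*g*np.sin(kx)
    return np.array([[d1, off], [np.conj(off), d2]])

def inv_mass_xx(kx, ky, h=1e-3):
    """Band energies and the xx component of M^{-1}_{mm'}(k) in the band basis (hbar = 1)."""
    d2t = (t_orb(kx + h, ky) - 2.0*t_orb(kx, ky) + t_orb(kx - h, ky)) / h**2
    evals, U = np.linalg.eigh(t_orb(kx, ky))       # columns of U are the band eigenvectors
    return evals, U.conj().T @ d2t @ U             # rotate d^2 t / dk_x^2 to the band basis

bands, Minv = inv_mass_xx(0.7, 0.3)
print("band energies:", np.round(bands, 4))
print("M^{-1} (xx, band basis):\n", np.round(Minv, 4))
# The off-diagonal (inter-band) entries are generically nonzero, and the diagonal entries need
# not equal the band curvature d^2 e_m(k)/dk_x^2, illustrating points (ii) and (iii) above.
```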
In the single-band case our expression for D reduces to where the momentum distribution This result is valid for arbitrary one-band dispersion. For the special case of nearest-neighbor (NN) hopping on a square (or cubic) lattice, it is easy to see that the right hand side of equation (A12) is proportional to the kinetic energy in the x-direction, −K x in the notation of SWZ. Our result thus reduces to Finally, we note that our superfluid stiffness D s is related to that of SWZ by Appendix B: Relation between D and optical spectral weight To see that D is proportional to the optical sum rule spectral weight, we identify the dynamical conductivity σ(ω) as the current response to an electric field E = Using the Kramers-Krönig relation and Reχ jxjx (ω → ∞) → 0, we obtain the sum rule for the optical conductivity as A similar argument for completely filled bands follows from a particle-hole transformation Thus we conclude that for filled and empty bands, the inter-band terms do not contribute to the sum in equation (E1), even in the presence of arbitrary interactions. Finally, we note the simple fact that within band theory there are no inter-band contributions to D. In the absence of interactions (denoted by subscript 0) we obtain where f is the Fermi function. Magic angles in twisted bilayer graphene were first predicted by the continuum model [15]. Following up on the experimental discovery of correlation-induced insulators and superconductivity in MA-TBG, there has been considerable progress in understanding its electronic structure [17][18][19]. We first focus on the bounds that we obtain from the tight binding model of Koshino et. al. [17], and then at the end of the Appendix compare these with the results we obtain from the tight binding model of Kang and Vafek [18]. The continuum model dispersion [15] is accurately reproduced by the multi-parameter tight binding fit of Koshino et. al. [17] (see Fig. 3) which takes into account hopping over distances up to 9|L M | where L M is the moire lattice vector. We use the hopping integrals presented in the Supplementary Information file eff hopping ver2.dat of ref. [17] to construct the noninteracting Hamiltonian H K of equation (A2). We then identify the unitary matrix U (k) that diagonalizes t αβ (k) (see equation (A10)) and use it together with t αβ (k) to compute the inverse mass tensor Note that we have made explicit here the direction a = x, y as an additional subscript on M −1 . The inverse mass tensor, obtained from the band structure information as described above, is used to compute D x and D y and bound T c as described in the paper. The additional input needed to determine D using equation (A8) is c † km c km , and we took two different approaches to compute this. In the first approach, we looked at SC near half-filling on the hole-doped side of the CNP, and argued that the chemical potential was sufficiently far from the CNP that we can take the band above the CNP to be empty. Then using the result of Appendix E we can ignore all interband terms with m = m . For the occupied band we only used the general constraint that n(k) ≤ 1. Using the triangle inequality, we then obtain where the empty bands above the CNP are excluded from the sum. A similar reasoning also works for SC in the vicinity of half-filling on the electron-doped side of the CNP, where we need to use the fact that the bands below CNP are filled to eliminate inter-band terms following Appendix E. 
We use a particle hole transformation c mk → h † mk , under which t αβ (k) → −t αβ (k) and thus M −1 → −M −1 . We write D in terms of the hole momentum distribution functions n h m (k) = h † mk h mk to get 4. Comparison of (a) the band structure and (b) the integrated spectral weight D for the models in ref. [17] (in black) and ref. [18] (in red). We have first used m U β,m (k)U † m,α (k) = δ β,α , which follows from the unitarity of U , and then the fact that ∂ 2 t αα (k)/∂k 2 a is a periodic function with zero mean, whose k vanishes. Using the triangle inequality and the general constraint n h (k) ≤ 1, we obtain an expression for electron doping which is similar to the hole-doped case: where now the filled bands below the CNP are excluded from the sum. These bounds, though rigorous, are weak because they involve |M −1 | and only very general constraints on n(k). The second (approximate) approach was to simply use a T = 0 (non-interacting) band-theory estimate. We thus use equation (E4) to obtain with the chemical potential µ determined by the density. We found that D x and D y calculated from the tight binding model of ref. [17] differ by less than a percent. The resulting density-dependent D is shown in Fig. 1 of the main paper. We note that there are many different tight binding models for describing the narrow bands in MA-TBG and our T c bounds depend on this input. We have focused above on the results based on ref. [17] with an electronic structure that has separate charge conservation at the K and K valleys. A rather different model without valley-charge conservation was derived [18] using only time-reversal and point group symmetry. We compare in Fig. 4(a) the band structures of ref. [17] in black and that of ref. [18] in red. The corresponding integrated spectral weights D are shown in Fig. 4(b) using the same color convention. The maximum T c based on the band structure of ref. [18] is 15 K, which is 2.5 times larger than that estimated from ref. [17]. strong coupling regime, and less useful in the weak coupling regime where T c is, in fact, well described by T MFT c , the pair breaking energy scale.
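To make the Appendix G step referred to in the main text concrete, the following sketch solves the T = 0 BCS-Leggett gap and number equations for the 2D attractive Hubbard model and feeds the resulting broadened n(k) = v_k² into the same assumed normalization of D used in the step-function sketch earlier; the paper's own implementation may differ in details and prefactors.

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch of the T = 0 BCS-Leggett crossover estimate (Appendix G) for the 2D attractive
# Hubbard model.  Gap and number equations,
#   1 = (|U|/N) sum_k 1/(2 E_k),     n = (1/N) sum_k [1 - xi_k/E_k],
# with xi_k = eps_k - mu and E_k = sqrt(xi_k^2 + Delta^2), are solved for (Delta, mu).  The
# broadened n(k) = v_k^2 = (1 - xi_k/E_k)/2 then enters the same assumed normalization of D
# used earlier; prefactors may differ from the paper's convention.

t, n, L = 1.0, 0.7, 200
k = 2.0*np.pi*np.arange(L)/L
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2.0*t*(np.cos(kx) + np.cos(ky))
d2eps = 2.0*t*np.cos(kx)

def equations(x, U):
    Delta, mu = x
    E = np.sqrt((eps - mu)**2 + Delta**2)
    return [1.0 - (U/(L*L))*np.sum(0.5/E),          # gap equation
            n - np.mean(1.0 - (eps - mu)/E)]        # number equation

x0 = [0.5*t, -1.0*t]
for U in [1.0, 2.0, 4.0, 8.0, 16.0]:
    x0 = fsolve(equations, x0, args=(U,))           # warm start from the previous |U|
    Delta, mu = x0
    vk2 = 0.5*(1.0 - (eps - mu)/np.sqrt((eps - mu)**2 + Delta**2))
    D = 2.0*np.mean(vk2*d2eps)/4.0                  # factor 2 for spin
    print(f"|U|/t = {U:4.1f}:  Delta = {Delta:.3f} t,  (pi/2) D = {0.5*np.pi*D:.3f} t")
# The bound decreases as n(k) flattens with increasing |U|/t, approaching the t^2/|U| behavior
# expected in the BEC limit, which is where the bound is most informative.
```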
8,689.8
2019-09-17T00:00:00.000
[ "Physics" ]
Combined hyperthermia and radiotherapy for prostate cancer: a systematic review Abstract Optimization of treatment strategies for prostate cancer patients treated with curative radiation therapy (RT) represents one of the major challenges for the radiation oncologist. Dose escalation or combination of RT with systemic therapies is used to improve tumor control in patients with unfavorable prostate cancer, at the risk of increasing rates and severity of treatment-related toxicities. Elevation of temperature to a supra-physiological level has been shown to both increase tumor oxygenation and reduce DNA repair capabilities. Thus, hyperthermia (HT) combined with RT represents a compelling treatment strategy to improve the therapeutic ratio in prostate cancer patients. The aim of the present systematic review is to report on preclinical and clinical evidence supporting the combination of HT and RT for prostate cancer, discussing future applications and developments of this combined treatment. Introduction Radiation therapy (RT) represents one of the standard treatments for prostate cancer. Despite the curative intent, a variable proportion of patients treated with RT will develop a local relapse (LR) over time, defined as a prostate-specific antigen (PSA) increase in conjunction with positive prostate biopsy and/or positive positron emission tomography (PET) scanner findings [1]. In the definitive setting, this proportion ranges from 3 to 10%, the highest percentages being found in the high-risk disease population [2]. To date, several randomized trials showed both improved local and biochemical control with dose-escalation [3][4][5], with a dose of 76-78 Gy recommended as the standard dose for conventional fractionation. Still, if 78 Gy protocols could achieve a 95.6% and a 99.5% rate of 15-year local and biochemical control in low-risk disease population, these proportions dropped, respectively to 91.3 and 88.7% in high-risk disease patients [6]. Similarly, in the post-operative setting, the development of LR in the prostate bed represents a challenging situation for salvage RT, characterized by high variability in treatment paradigms and an overall poor outcome [7][8][9]. Treatment intensification with dose-escalated RT or addition of systemic agents, such as androgen deprivation therapy (ADT) has been explored in both the definitive [10,11] and salvage [12] settings to optimize outcomes of patients with unfavorable characteristics. However, despite improvements in irradiation techniques including routine use of intensity-modulated and image-guided RT (IMRT and IGRT), improvements in tumor control are often associated with increased long-term side effects [13], highlighting the need to find strategies to improve the therapeutic ratio. The combination of hyperthermia (HT) and RT represents an appealing treatment strategy to improve tumor local control without increasing the risk of toxicities to the surrounding healthy tissues. While several studies suggest a synergistic effect between RT and HT, Kok et al. estimated that the addition of HT provides an equivalent delivered dose of 10 Gy higher than RT alone [14]. From a clinical point of view, the addition of HT has been shown to improve local control in many tumor sites [15]. A 10-20% benefit in local control rates has been reported for cervix [16], rectal [17], and bladder cancer [18], translating also in a substantial survival benefit for cervix cancer (3-year overall survival of 51 vs. 
27% with or without additional HT, respectively, p ¼ 0.009) [19,20]. Recent technical advances in HT systems make the combination of HT and RT an innovative strategy to improve tumor control and avoid long-term toxicity in the curative treatment of prostate cancer. The aim of the present systematic review is to report evidence supporting the combination of HT and RT for prostate cancer, including future applications and development of this new treatment strategy. Materials and methods A systematic review was performed using the Pubmed database on 21 July 2021, with the terms (prostate cancer) AND (radiotherapy) AND (hyperthermia). We used the Preferred Reporting Items for Systematic Reviews and Meta-Analysis statement for reporting [21]. After screening of 126 records, 54 were excluded, because they did not directly address the issue of combined RT and HT ( Figure 1). Limitations were placed with respect to publication language (English language was mandatory), year of publication, and full-text availability. Synergistic mechanisms of HT and RT From a radiobiological point of view, HT acts as a radiosensitizing agent through distinct mechanisms. The elevation of the temperature (between 41 and 43 C) to a supraphysiological level generates vascular dilatation and thus an increase in the oxygen supply to tissues, thereby reducing hypoxia and increasing radiosensitivity [22,23]. HT has been proven to enhance the effectiveness of RT by inhibiting the repair processes of DNA damage through inhibition of both base excision repair (BER) [24] and homologous recombination (HR) [25] pathways. The dominant mechanism of HT depends on the temperature level. Enhanced oxygenation is probably the more important mechanism at temperatures around 41 C, while inhibition of DNA repair may be more significant at temperatures around 43 C [14]. Combined HT and RT have also demonstrated a direct cell-killing effect, specifically on radioresistant hypoxic tumor cells [26][27][28], probably related to an accumulation of lactic acid [29]. HT induces the production of heat shock proteins and increases immune cell infiltration, leading to the activation of both innate and adaptive immune cells [30]. Pre-clinical data Both in vitro and animal experiments highlight the synergistic tumor-killing effect of HT and RT for prostate cancer. One of the first in vitro experiments was performed on DU145 prostate cancer cells cultured as spheroids. A higher rate of DNA damage was reported after exposure of cells to an HT session of 43 C for 90 min, performed with a radiofrequency (RF) capacitive system, followed by a 4 Gy external beam radiation therapy (EBRT) irradiation, as opposed to HT or EBRT only treatments [31]. As a result, a higher percentage of apoptotic cells was observed among cells treated with combined HT and RT (64.48 ± 3.40% vs. 27.70 ± 3.5% without HT), emphasizing the ability of HT to increase sensitivity to RT. Additionally, on the same cell lines, a DNA damage survey demonstrates that combined EBRT (4 Gy) and HT was found to be equally effective as a dose-escalated schedule combining 2 Gy delivered with EBRT with 2.24 Gy brachytherapy (BT) boost [32]. Even in radioresistant cell populations, such as prostate cancer stem cells, the combination of HT and RT has proven its ability in reducing colony survival fractions in comparison with RT alone (up to a factor of 100), indicating that the combined treatment may be a promising approach to enhance the radiation-induced cytotoxic effect [33]. 
Still, these preclinical models should be considered with caution, as each cell type has a distinct sensitivity to HT (e.g., Dunning R3327 cells being particularly heat resistant [34]), and each cell culture model its own limitations (e.g., spheroid cultures being probably more appropriate than monolayer cultures [35]). The combined effect of RT and HT was also investigated in athymic nude mice models inoculated with a prostatic carcinoma xenograft. Kaver et al. provided evidence that the combined treatment was the most effective in slowing tumor growth. The median tumor volume doubling time was 35.5 days with combined HT and RT, compared to 18 and 25.5 days for HT or RT alone, respectively [36], confirming at least an additive effect. In another study, Cohen et al. combined HT with a single dose of 12 Gy, in mice transplanted with human prostate cancer cells [37]. HT was delivered by the RF Oncotherm LAB EHY-100 device. The results were consistent with previous experiments, with an additive effect of HT and RT treatments (33.4 days doubling time when treatments were combined, as opposed to 30.4 and 4.5 days with RT and HT alone, respectively). In a rat population transplanted with a prostate anaplastic carcinoma, combined treatments also demonstrated improved therapeutic ratio, with a significant tumor growth delay when multiple HT sessions were delivered before BT (44 days of tumor growth delay when a single HT session was performed before RT compared to 53.7 days when HT was performed both before and after BT) [34]. Deger et al. included in a phase II trial 57 patients diagnosed with localized prostate cancer. Patients were treated with interstitial HT, performed once a week, combined with 3DCRT at the dose of 68.4 Gy in 38 fractions [50]. Cobalt-palladium thermoseeds were placed homogeneously within the prostate gland by a urologist under spinal anesthesia. HT sessions were conducted weekly and performed simultaneously with RT sessions. The 2-year biochemical relapse-free survival (bRFS) rate reached 95%, demonstrating excellent results with respect to the radiation dose delivered, significantly lower than the EBRT doses currently proposed in current guidelines (i.e., total equivalent doses of 78-80 Gy) [1]. These results were confirmed with a longer follow-up, as the median PSA level decreased from 11.6 to 0.5 ng/mL at 48 months. In another study, Hurwitz et al. recruited 37 patients with stage T2b-T3b prostate cancer and a median PSA level of 13 ng/mL [49]. HT treatment was performed using a transrectal ultrasound system. Placement of interstitial temperature probes, for online monitoring, was accomplished via a transperineal route using transrectal ultrasound guidance. The ultrasound power was delivered from a watercooled 16 element partial-cylindrical intracavitary array. The end point was defined as the attainment of a temperature of 42.0 C by at least one intra-prostatic temperature sensor for 60 min. Two HT treatments were administered at least one week apart during the first four weeks of a 70 Gy EBRT course. Patients had to receive radiation within 1 h of completion of hyperthermia. Six months of ADT was allowed. The 7-year failure-free survival and overall survival rates reached 61 and 94%, respectively. As an indirect comparison, in a cohort of patients treated for stage T1-T3 prostate cancer, prostate irradiation at a dose of 70 Gy resulted in a 6-year control rate of 43% only [5]. 
The 2-year disease-free survival benefit of this combined treatment was estimated at 20%, with respect to the short-term ADT arm of the RTOG 92-02 trial (84 vs. 64%). In another study, Anscher et al. included 12 prostate cancer patients with locally advanced disease treated with definitive RT (doses ranging between 65 and 70 Gy) and HT sessions performed once or twice a week with the goal of delivering at least 42.5 C to the prostate gland. All HT sessions followed the irradiation and were delivered using a Sigma 60 annular phased array (APA) microwave device (82 MHz). Although the desired tumor temperature was obtained in only 3.5% of the HT sessions, mostly due to pain experienced during HT, the patients achieved 36-months local control and disease-free survival rates of 93 and 68%, respectively, higher than an expected local control rate with RT alone of <50% [38]. Van Vulpen et al. also reported the outcomes of patients diagnosed with locally advanced prostate cancer (defined as T3, T4, Nx/0, M0 tumors) [39,40]. HT was delivered weekly, using either a regional or interstitial technique, and a total dose of 70 Gy was delivered to the prostate gland. Regional HT was delivered with the coaxial transverse electrical magnetic (TEM) system and the interstitial HT treatment was delivered with the 27 MHz multi-electrode current source interstitial HT technique (MECS-IHT). Despite the absence of ADT and prophylactic lymph node irradiation, clinical outcomes compared favorably with most published series, with a 70% bRFS rate at 36 months. Yahara et al. explored the combination of RT and HT in a population of patients diagnosed with high-or very high-risk prostate cancer (Gleason score 8-10, PSA > 20 ng/mL, T3b-T4 tumors) [41]. Regional HT was applied after the RT sessions, once or twice a week, using a Thermotron RF-8 system (Yamamoto Vinita, Osaka, Japan). The outcomes after combined treatment were retrospectively compared with a population of patients with the same initial characteristics, treated with RT alone, and suggested a benefit in 3-year bRFS with the addition of HT (78 vs. 72%, p ¼ 0.3). Kalapurakal et al. reported the outcomes of 13 patients diagnosed with either locally advanced hormone-refractory or locally recurrent prostate cancer [44,45]. Patients received 66.6 Gy in fractions of 1.8 Gy as definitive treatment, and 39.6 Gy in fractions of 1.8 Gy in case of re-irradiation. HT was performed twice weekly, at least 72 h apart, and $1 h after the RT session. A radiofrequency BSD-2000 Sigma-60 applicator was used, allowing the temperature delivered to the prostate gland to increase stepwise, as long as the patient could tolerate it. The median progression-free survival (PFS) was 12 months (4-27 months), while the observed local control rate was 77% (only three local failures were observed). Results were encouraging in this population, considering that rectal and bladder invasion was present in 46 and 62% of cases, respectively. Tilly et al. reported the outcomes of 22 patients, treated to 68.4 Gy in conventional fractionation in combination with weekly regional HT, performed either before or after RT sessions [46]. Of these, 15 patients were diagnosed with primary prostate cancer and 7 were diagnosed with an LR after primary RP. With a 6-year follow-up, a 60% bRFS rate was observed for patients treated in the primary setting, while the corresponding rate in patients treated with salvage intent decreased to 43%. Kaplan et al. 
analyzed six prostate cancer patients with an LR after a 125 Iodine BT implant, treated with split-course RT at a total dose of 60 Gy with or without HT [51]. An APA system was used to deliver regional HT. Three out of four patients treated with this multimodal modality were considered disease-free at the last follow-up, while one patient died of a metastatic progression. Toxicity Published studies report relatively low rates of treatmentrelated toxicities when combining HT and EBRT. Grade 2 acute genito-urinary (GU) toxicity rates ranged from 0% [44,45,52] to 55% [46], while grade 3 acute GU toxicities were reported in only three studies [41,42,46]. In the primary setting, this proportion remained anecdotic, ranging from 1% [41] to 4% [42]. In the case of re-irradiation, this incidence reached an 18% rate [46]. Grade 2 acute gastrointestinal (GI) toxicity was more frequently reported, reaching a 48% rate in the study conducted by Anscher et al., mostly consisting of diarrhea [38]. Grade 3 acute GI effects were only observed in patients undergoing re-irradiation for an LR, with a 14% rate of toxicity reported in the study by Van Vulpen et al. [39]. In a study by Yahara et al. no difference was observed in the occurrence of acute GU or GI grade !2 toxicities between patients treated with or without regional HT treatment [41]. As long-term toxicity is concerned, no late grade !3 toxicity was reported. Only two cases of grade 4 late toxicity were observed in two patients in the salvage setting (hemorrhagic cystitis in a patient with factor XI deficiency and chemotherapy for lymphoma, and rectovesical fistula after disappearance of a large, necrotic tumor, in another patient) [40]. Specific HT toxicity, consisting of skin burn or pain, was inconsistently reported across studies. Tilly et al. reported a 68% and a 9% rate of acute grade 1 and grade 2 skin toxicity, respectively [46]. Yahara et al. reported similar results, with grade 2 skin burn occurring in 6% of the patients [41]. Symptoms presented as a subcutaneous induration, resolving spontaneously after the completion of HT [41]. Within the study led by Kalapurakal et al., no patient developed HT-induced skin burns [45]. Kukielka et al. reported toxicity outcomes after combined interstitial HT and BT [47]. In a heterogeneous study population, the authors included patients treated with HT and exclusive BT (45 Gy in three fractions for low-risk patients) or EBRT with a BT boost (50 Gy þ 21 Gy in three fractions). Other patients with a radio-recurrent relapse were treated with 30 Gy in three fractions and HT. Early toxicity profiles showed the safety of the combined approach with HT performed before BT, with a toxicity profile similar to patients treated with exclusive BT. The most frequent GU toxicity consisted in urinary frequency (27%) and was mostly reported in patients treated with BT and HT in the primary definitive setting (67%). No grade 3 urinary toxicities were observed, even in the population of patients being re-irradiated after a previous EBRT course. Additionally, no early rectal complications were observed. The role of HT combined with EBRT was also evaluated in the HT-Prostate trial (NCT0415905) for patients in biochemical recurrence after RP [53]. In this study, 7-10 sessions of HT were performed in combination with RT at a dose of 70 Gy to the prostate bed. The interim analysis of this trial reported relatively low acute toxicity rates, with a 10 and 4% rate of acute grade 2 GU and GI toxicities, respectively. 
Forty-two percent of patients experienced acute grade 1 HTspecific toxicity, which consisted mostly of hotspots and skin pain. Thermal parameters and technical challenges Within this review, most treatments were performed using electromagnetic heating, through the use of RF devices [54]. While the majority of studies preferred regional HT (radiative with APA, or capacitive), a few studies preferred a local approach either through interstitial HT (usually coupled with an interstitial BT technique) or transrectal/transurethral ultrasound techniques. APA devices, which consist in positioning multiples antennas around the patient, have been widely used for deep HT treatments of pelvic tumors. This approach brings the great advantages of both remaining non-invasive and providing accurate tumor heating while avoiding normal tissue hotspots. Capacitive HT is another regional approach using RF. Still, it carries the disadvantage of preferentially heating fat subcutaneous tissues, thus it is considered less suitable for Caucasian prostate cancer patients. Application of ultrasound heating for prostate cancer patients can be considered challenging due to near-and far-field risks of thermal build-up. Careful adjustment of acoustic parameters is required. Ideally, multi-planar or 3D temperature monitoring should be available online. Only three studies reported outcomes with ultrasound heating, through a transrectal approach [42,48,49]. Another clinical pilot study reported 3 D-controlled HT with catheter-based ultrasound applicators in conjunction with high-dose-rate (HDR) BT [55]. Some trials have shown the correlation between thermal parameters and clinical outcomes. A higher bRFS was found among patients receiving HT over 43 [56]. Higher thermal parameters were also found to be associated with an improved bRFS in the study led by Yahara et al. [41]. Still, the optimal thermal doses to the prostate carcinoma required for an efficient combination treatment are not exactly known. Kok et al. estimated that the addition of HT delivered once or twice weekly provide an equivalent delivered dose of 10 Gy higher than RT alone [14]. In the present study, we reported studies using different RT-HT timings, HT being performed before, during, or after RT. While the rationale for performing HT before RT includes increasing tumor oxygenation through vascular dilation, paradoxical effects with increased hypoxia have also been reported at higher temperatures, due to vascular damage. Additionally, some studies demonstrated in animal models increased normal tissue toxicity when HT was performed before RT [57]. As the best results in terms of radiosensitization were found with simultaneous HT and RT [58], this sequence could be favored, still technical realization remains challenging. As to date, no consensus has been reached on combined HT and RT, careful planning and monitoring of heating are advised whatever the sequence used. Technical challenges exist with regards to treatment delivery, as temperature goals are difficult to be achieved on the prostate gland. In the RF approach using an APA device, disappointing results were published by Anscher et al., as temperature targets were reached in only 3.6% of the patients [38]. Despite the increased conformality for local treatments and the potential to achieve higher temperatures within the prostate gland, even with a TRUS approach, only 36% of the patients reached the temperature target of 42.5 C in the study led by Fosmire et al. [48]. 
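One standard way to make the thermal-parameter comparisons above quantitative is the cumulative equivalent minutes at 43 °C (CEM43) metric of Sapareto and Dewey, which converts an arbitrary time-temperature trace into an equivalent heating time at 43 °C. The sketch below uses the usual R values (0.5 at or above 43 °C, 0.25 below) and a purely hypothetical temperature trace; it is meant only to illustrate the calculation, not to reproduce any of the cited treatments.

```python
import numpy as np

# Cumulative equivalent minutes at 43 C (CEM43), Sapareto-Dewey formulation:
#   CEM43 = sum_i dt_i * R^(43 - T_i),  with R = 0.5 for T >= 43 C and R = 0.25 for T < 43 C.
# The temperature trace below is purely illustrative, not data from any of the cited trials.

def cem43(temps_c, dt_min):
    temps_c = np.asarray(temps_c, dtype=float)
    R = np.where(temps_c >= 43.0, 0.5, 0.25)
    return float(np.sum(dt_min * R**(43.0 - temps_c)))

# Hypothetical 60-minute session sampled every minute: ramp-up, plateau near 42 C, cool-down
trace = np.concatenate([np.linspace(37, 42, 10), np.full(45, 42.0), np.linspace(42, 39, 5)])
print(f"CEM43 = {cem43(trace, dt_min=1.0):.1f} equivalent minutes at 43 C")
# A 60-min plateau held at 41 C, 42 C or 43 C would give roughly 3.75, 15 or 60 CEM43,
# illustrating how strongly the metric rewards actually reaching the target temperature.
```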
Recent developments in ultrasound HT, such as magnetic resonance (MR)-guided focused ultrasound hold the potential to improve the therapeutic ratio, by delivering uniform isotherms inside the target, allowing at the same time a greater sparing of surrounding healthy tissues [59]. Lastly, the comparability between studies of HT-induced prostate temperatures is limited by several factors. A great disparity exists in temperature monitoring between studies, which could be performed either with urethral [39,40,46,50], rectal [41,47,49], or bladder probes [52]. Additionally, evaluation of thermal parameters in prostate HT remains hampered by a lack of information about both vasculature and perfusion [60], which can lead to an overestimation of the prostate temperature of 1-2 . As invasive thermometry remains the clinical gold standard technique, MR-based thermometry holds the potential to improve treatment outcomes, by providing full 3D temperature distribution registered with anatomical imaging [61]. Still, proton resonance frequency shift (PRFS) MR thermometry has the major drawback to be highly sensitive to tissue motion, which can be limiting for organs, such as the prostate gland. Indeed, it is prone to non-periodic motion originating from the neighboring rectal wall, due to the presence of moving gas pockets [62], gradual bladder filling, or cough. This matter is being extensively investigated in the field of prostate RT (prostate intrafraction motion is estimated to be around 3 mm in both infero-superior and antero-posterior axis [63,64]), data are still lacking for considerably longer HT sessions. Good accuracy of MR thermometry was obtained in prostate thermotherapy using the TUSLA system from Profound Medical (Mississauga, Canada), over sessions of 11-52 min [65]. Although MR thermometry was performed to monitor ablative temperatures (above 57 C), it allowed to obtain temperature radii around the prostate down to 37 C, as part of a safety strategy. To date, the system was used in 224 patients [66]. However, the setup carries the drawback of being invasive, as it includes a rigid urethral applicator of ultrasound to stabilize the prostate gland. Overall, using appropriate parameters for PRFS MR data acquisition and post-processing filtering, a global precision of thermometry near to 1 C was obtained. Dynamic MRI thermometry has been evaluated by different teams, for abdominal organs, with satisfactory results [67][68][69][70][71] Discussion Strategies able to enhance the therapeutic ratio for prostate cancer patients with locally advanced disease or recurrent tumors treated with curative RT are eagerly required. In men with high-risk or local advanced prostate cancer, dose-escalation performed either on the whole prostate using EBRT [3,5,72] and/or BT [73] techniques, or with a focal boosting on the dominant intraprostatic lesion [74], represents one of the mostly investigated strategies used to improve tumor control. Improvements in bRFS rates have been demonstrated with dose escalation, although this benefit has often been counterbalanced by an increase in GU or GI toxicity rates [48,58]. In the specific situation of locally-advanced tumors, definitive RT combined with longterm ADT remains the cornerstone treatment [75], providing a 5-year RFS of 74% in historical trials [76]. The addition of HT to conventional hormone-RT may represent a valuable alternative in this setting. 
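To put the dose-escalation trade-offs discussed here, and the roughly 10 Gy equivalent-dose benefit attributed to HT by Kok et al., into a common currency, the linear-quadratic EQD2 conversion is often used. The sketch below assumes α/β ≈ 1.5 Gy for prostate tumour and ≈3 Gy for late-reacting normal tissue, values that are commonly quoted in the prostate literature but not taken from this review; the fractionation schemes are illustrative.

```python
# Linear-quadratic (LQ) bookkeeping for the dose-escalation comparisons above.
#   EQD2 = D * (d + a/b) / (2 + a/b),
# with D the total dose, d the dose per fraction and a/b the alpha/beta ratio.  The values
# alpha/beta ~ 1.5 Gy (prostate tumour) and ~3 Gy (late-reacting normal tissue) are commonly
# quoted assumptions, not figures taken from this review; the schedules below are illustrative.

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

ab_tumour, ab_late = 1.5, 3.0
for total, d in [(70.0, 2.0), (78.0, 2.0), (60.0, 3.0)]:
    print(f"{total:.0f} Gy in {d:.0f} Gy fractions: "
          f"tumour EQD2 = {eqd2(total, d, ab_tumour):.1f} Gy, "
          f"late-tissue EQD2 = {eqd2(total, d, ab_late):.1f} Gy")

# If HT adds the equivalent of ~10 Gy of tumour dose (the Kok et al. estimate cited above)
# without a matching increase in normal-tissue dose, a 70 Gy course plus HT would approach the
# tumour EQD2 of a 78-80 Gy escalated schedule while keeping the normal-tissue exposure of the
# 70 Gy course, which is the therapeutic-ratio argument made in this review.
```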
In a population of patients diagnosed mostly with locally-advanced tumors (61% rate of T3-T4 tumors), Yahara et al. reported an improved disease control in both bRFS and cancer-specific survival at 5-year with the addition of HT [41]. Based on the promising results of studies presented in Table 1, prospective trials exploring the role of combined HT and RT vs. RT alone are eagerly awaited in this population of patients. Non-surgical salvage local therapies offer a chance of a curative local approach in radiorecurrent prostate cancer. Reirradiation, either performed with EBRT or BT, has been associated with approximately a 50% bRFS at 5-year [77], sometimes at the cost of long-term severe radiation-induced toxicities [78]. Although literature data are lacking, the combination of HT and RT holds the potential to improve disease control without the need to escalate the RT dose, often limited in these situations by the risk of inducing severe toxicities to the nearby organs. The ongoing HETERERO trial (NCT04889742) will probably provide in the next future new insights on the role of this treatment combination. Additionally, the combination of HT with BT is also currently being evaluated in the salvage setting, within two ongoing single-arms prospective trials (NCT02899221 and NCT03238066) ( Table 2). Prostate bed RT with or without ADT is the main treatment for patients with a biochemical recurrence after RP. The combination of HT with RT offers in this setting an appealing treatment alternative to ensure durable local control without increasing the radiation-induced toxicity expected with dose escalation. As interim results have been published, the HT-Prostate phase II trial (NCT0415905) is still recruiting patients with biochemical relapse after radical prostatectomy, with a primary endpoint of acute tolerance [53]. As prostate bed RT is associated with high rates of GU toxicities, both in the adjuvant and salvage setting (54% and 70% of grade !2 toxicity reported within the recently published ANZUP RAVES trial [79]), the combination of HT with RT may also represent a compelling strategy to de-escalate the RT dose while assuring the same bRFS rate of standard prostate bed RT doses. Similarly, the presence of a macroscopic LR within the prostate bed may represent another clinical situation for which the long-term local control can potentially be enhanced by this combined strategy. The limitations of this systematic review include the retrospective and non-randomized nature of the reported studies, as well as their time of publication. Indeed, many of the studies performed on prostate cancers were conducted in the 1990s, and most of the RT treatments were performed with 2D or 3DCRT techniques, at doses lower than our current standards. All these biases thus could have negatively affected the external validity of these studies. Moreover, the lack of data on ADT use and duration in many studies may have hampered our conclusions on long-term disease control of this combined strategy. Conclusions Assumed the expected dose-benefit of 10 or more Gy, the addition of HT to RT represents a promising treatment strategy to improve outcomes without increasing treatmentrelated toxicities of prostate cancer patients treated with curative RT. Patients at higher risk of LR, including those with high-or very high risk disease, with macroscopic LR after RP, or with radiorecurrent tumors, represent probably the optimal candidates for this combined treatment. 
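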
With new technological developments being implemented in both HT and RT delivery techniques, the results of prospective randomized trials are now awaited to define the role of this treatment strategy in the therapeutic management of prostate cancer. Disclosure statement The authors declare no conflict of interest.
6,098.4
2022-03-21T00:00:00.000
[ "Medicine", "Engineering" ]
How to Heat a Planet? Impact of Anthropogenic Landscapes on Earth’s Albedo and Temperature Today anthropogenic climate change is underway and predicted future global temperatures vary significantly. However, the drivers of current climate change and their links to Earth’s natural glacial cycle have yet to be fully re-solved. Currently, many on a local level understand, and are exposed to, the heat energy generated by what’s referred to as the urban heat island effect (UHI), whereby natural flora with higher albedos is replaced by manmade urban areas with lower albedos. This heat effect is not constrained to these regions and all anthropogenic surfaces with lower albedos need to be studied and quantified as the accumulated additional heat energy (infrared energy) is trapped within Earth’s atmosphere and could affect the Earth on a planetary level. Deployed satellites have detected critical changes to Earth’s albedo to lower levels, however the cause and impact of these changes have yet to be fully understood and incorporated into Global Circulation models (GCMs). Here it’s shown that industrialization of anthropogenic landscape practices of the past century has displaced millions of square kilometres of naturally high albedo grasslands with lower albedo agricultural landscapes. Utilising a fundamental Energy Balance Model, (EBM) it’s demonstrated these specific changes have generated vast amounts of additional heat energy which is trapped by the atmosphere, transferred and stored within the oceans of the Earth as shown in Figure 1. The total additional heat energy accumulated over the preceding 110 years correlates to that required to warm the Earth to the levels seen to date, altering Earth’s overall energy budget. This energy will continue to accumulate and warm the Earth to a predicted 1.60 ± 0.20 Celsius by 2050 over 1910 levels. These findings are independent of anthropogenic Greenhouse Introduction Anthropogenic landscapes have been around for millennia with mankind domesticating crops in abundant quantities to realize and exploit these new sustainable agricultural methods [1]. These practices have been overwhelmingly beneficial for humanity. Today, Earth's surface is vastly different to that of the early 1900's [2]. Currently anthropogenic landscapes are the largest disruptive development to the planet's ecosystem, with over 52 ± 13 million square kilometres (50%) [3] of habitable land converted to agriculture/urban areas. This equates to 33% of the Earth's land surface. Many first think of deforestation as the major alteration when it comes to land clearing changes in the last century. However, it's been estimated up to 90% [4] of the Worlds Grasslands have suffered the greatest clearing and this has occurred at a faster rate than forests due to grassland's general topography, annual rainfall, rich dark fertile soils [5] and ease of conversion to cropland. As it stands, these areas are one of the least protected regions of the world [5]. The calculated conversion ratio between grassland to cropland and forest to cropland is estimated at (60%:40%) [2]. With the onset of industrialization, today, most grasslands have been converted into agricultural landscapes. The transformation of the World's natural landscapes to agricultural land has increased by 6.7 ± 1.6 million square kilometres within the last 110 years (Figure 2), and now estimated at 15.0 ± 3.5 million square kilometres (1/10 of the Earth's land area) [2]. 
The juxtapose heat flux or albedo properties of these altered flora surfaces has been overlooked in the causation climate change debate. While much attention has been focused on the correlated anthropogenic CO 2 , (Greenhouse Gas Emissions (GHG)) increases, these increases have lagged temperature rise and questions remain about the overall causation mechanism [6] and have marred the study of other climate drivers. Without fully accounting for anthropogenic landscape heat effects over the past 110 years, the conclusion drawn to date maybe casuistic. At present there is no alternate, credible theory to readily explain and account for the energy necessary to raise the global temperature by the levels seen. Furthermore, using the fundamental laws of thermodynamics encompassing energy conservation, additional accumulated heat energy generated from the removal & conversion of 4.02 ± 1.6 million square kilometres of natural grasslands with high albedo properties to newly created anthropogenic darker agricultural landscapes with lower albedos, can be identified as the main contributor and causation of climate change [7]. The word Albedo (al-bee-doh) refers to a measure of how much light energy is reflected by a surface, or stays as short-wave energy, while the remaining amount is absorbed and transformed to longwave energy, i.e. heat/infrared radiation. Surfaces that appear whiter reflect most of the light energy that is radiated upon the surface and therefore has a high albedo, while darker surfaces absorbs/transforms most of the light energy into longwave, heat energy, indicating a lower albedo. The albedo scale is between 0 for full absorption to 1 for full ref- Figure 2) [2]. To date, these changes have not been incorporated into GCMs as it has been assumed that such changes do not play a significant role in Earth's overall heat flux budget [10]. This measured change to heat fluxes and additional build-up of heat energy is correct for short durations, however over decades and centuries the accumulated energy has significant differences. The natural grasslands converted to grafted anthropogenic agricultural croplands have been judged to exhibit the exact same heat flux properties in all GCMs [11]. While deforestation and clearing of the world's forested areas with associated lower albedos 0.142 ± 0.011 to make way for agricultural land with higher albedos of 0.163 ± 0.013 have partially offset the temperature rise in this time frame [10], these areas are less favoured croplands as they are documented to be less fertile with lower productivity to that of grassland/cropland conversions [6]. These cooling changes and heat fluxes have been recognized and accounted for in the GCMs currently used [11]. It's been further recognized that, changes to surface albedos are powerful climate drivers of local, regional land areas and ultimately Earth's climate [12]. Moreover, concerns remain for the arctic region as the high albedo (0.60 ±.10) sea ice is slowly transformed to deep ocean low albedo (0.08 ± 0.02) properties due to the shorter winter season caused by the warmer ocean temperatures. Additionally, fires and burnt areas of the world are seemingly increasing in frequency and area and while these are not directly linked to causing global warming, they also contribute to lower albedo terrain. Both alterations will only further exacerbate the warming currently underway due to the positive feedbacks, however, have not been considered in the accumulated heat energy calculations performed herein. 
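A minimal calculation makes the sign of the two conversions discussed above explicit. The forest (0.142) and cropland (0.163) albedos are those quoted in the text; the grassland albedo and the mean surface shortwave flux (the value used later in the Methodology) are assumptions introduced here only to illustrate the per-square-metre bookkeeping, not the paper's exact inputs.

```python
# Sign-and-magnitude sketch for the land-conversion argument above.  Extra heat absorbed per
# square metre = (albedo_before - albedo_after) * incident shortwave flux.  The forest (0.142)
# and cropland (0.163) albedos are taken from the text; the grassland albedo (0.22) and the
# 184 W/m^2 mean surface flux are assumptions used only for illustration.

FLUX = 184.0                       # W per m^2, mean shortwave reaching the surface (assumed)
albedo = {"grassland": 0.22, "forest": 0.142, "cropland": 0.163}

def extra_absorbed(before, after, flux=FLUX):
    """Positive value means more heat absorbed after conversion (warming tendency)."""
    return (albedo[before] - albedo[after]) * flux

print(f"grassland to cropland: {extra_absorbed('grassland', 'cropland'):+.1f} W/m^2 (warming)")
print(f"forest to cropland:    {extra_absorbed('forest', 'cropland'):+.1f} W/m^2 (cooling)")
```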
In real terms additional worldwide cropland areas (6.7 ± 1.6 million square kilometres) has roughly increased by 90% of the area of Australia (7.6 million square kilometres) within 110 years. The full impact of these anthropogenic landscape changes has only recently been introduced into some land surface models (LSM) and have been an area of consideration when it comes to the associated net energy flux impacts to the Earth's energy budget [4] [14]. While at the same time other research has touted increased albedo/reflectance changes with lower heat fluxes or new geo-engineering practices to cropland areas as a major way to fully mitigate the current global warming trend currently being experienced [15]. Ultimately, global warming is dictated by the iron-clad laws of thermodynamics. If the additional heat energy is more than the previous year the Earth warms. In reverse, the Earth cools and if the heat energy stays the same, the Earth's temperature remains unchanged. Methodology In the following calculations the Earth is considered using an Energy Balance Model (EBM). EBMs do not simulate the climate, but instead consider the balance between the energy entering the Earth's atmosphere from the sun and the heat released back out to space and are the foundation and basis upon which To heat the Planet, you must essentially heat the oceans that stores 93% of the energy [19]. Due to the immense size of the ocean, the epipelagic zone or top 150 ± 50 meters (5.4 × 10 7 ± 1.8 × 10 7 cubic kilometres) of the ocean can mix with the atmosphere, with deeper parts taking thousands of years to completely overturn and absorb additional energy and increase in temperature. Furthermore, it's been estimated that the oceans take 2640 years to fully overturn [20]. When considering the energy impacts of the past 110 years, this equates to 4% or the dawn of atmospheric atomic testing, this completely removes any atmospheric cooling effects that may have been experienced at this time period [21]. The modelled temperature simulations (1910-2050) charts can also be expressed as additional accumulated energy contained within the oceans/atmosphere, and NOAA currently measures this at 1.6 × 10 23 Joules above 1990 levels [22]. To start this series of temperature calculations, an average sea depth of 100 m or 3.6 × 10 19 Litres will be set. Utilizing the heat capacity of saltwater (3.89 Joules per gram per Kelvin or 3890 Joules per kilogram per Kelvin), this equates to 1.45 × 10 23 Joules per Kelvin. Now applying the laws of thermodynamics, once the amount of accumulated additional heat energy reaches this figure the temperature of the Earth increases by 1-degree Kelvin or Celsius. To continue the EBM calculations the following 7 assumptions are made: 1) The surface change for the averaged albedo difference of the converted land area (considering the cropping cycles) is constantly maintained year on year due to the same anthropogenic practices. 2) The area of the change is permanent, once the alteration has been initiated. 3) The amount of energy at the surface of the Earth is taken from the average surface energy of the planet obtained from Earth's Global Energy Budget figure, now estimated at 184.0 Watts per square meter [23]. 4) Energy reflected by clouds and the Atmosphere is constantly maintained at 79 Watts per square meter [23], throughout this time period This may indeed overestimate the contribution of clouds in the early 1900's. 
5) Average sunlight per square meter of terrain is 12 hours per day or (43,200 seconds per day), 365 days per year. 6) The amount of energy required to heat the Earth, 1 Celsius or Kelvin is estimated at 1.45 × 10 23 Joules based on 100 m ocean depth (3.6 × 10 7 cubic kilometres). 7) The remaining of Earth's radiation budget quantities and annual mean where c is the specific heat of the material from which an object is made. Results The first series of modelled temperature calculations derived from Equations (1)-(3) are performed using 10%, 15% and 20% reduction on the anthropogenic albedo surface alteration converting from grasslands to croplands for the entire introduced 6.7 million square kilometres anthropogenic area, with no accounting for the forest to cropland conversions resulting in a (100%:0%) ratio. making these highly probable they are correlated. The total energy (Joules) is plotted on the secondary y axis in Figure 4(e). An additional temperature chart is constructed, shifting the resulting modelled temperatures from 1946 to 1981 thereby excluding the Atmospheric atomic testing years; resulting in Figures 5(a)-(c). These temperature charts can be used to predict the future global atmospheric/ocean mean temperatures and are compared to the 1910 adjusted NOAA temperature chart 5(i). To date, Earth's global temperature has risen 1.33 Celsius above 1910 levels, (an increase of 0.25 Celsius above the normally quoted 1880 temperature increase of 1.08 Celsius), here it's shown that additional accumulated heat energy generated from darker anthropogenic landscape changes totalling 2.28 × 10 23 Joules from 1910, will continue increasing Earth's temperature but not exceeding 1.60 ± 0.20 Celsius by 2050 as shown in Figure 5 (b). Discussion While there are some inherent errors that exist within the calculations performed, from the tessellated grafted anthropogenic surface area changes, associated land and cloud albedos and resulting heat fluxes that culminates in a ±0.2 Celsius estimated error, these are outweighed by the basic, fundamental laws of thermodynamics employed as well as the reasonable, logical assumptions made in the semiempirical calculations performed within the EBM. This combined with the validation calculations predicting Earth's temperature and albedo at the last glacial maxima, shows how large-scale albedo changes can drive the planetary temperatures experienced in Earth's past, and may indeed be aligned to an Earth albedo cycle associated with Gaia theory [25]. Looking closely at the recorded NOAA global temperatures from 1910 to present and comparing them to the modelled temperatures seen in Figure 6(b), the modelled results appear are on the lower side of the actual NOAA global temperatures, and only 10 of the 110 of the NOAA global temperatures yearly results fall outside the predicted error results. This stems from the conservative estimates for the anthropogenic induced heat fluxes and total converted areas utilized. While lower latent heat in the early 1900's may also contribute to additional heat energy entering the system due to reduced cloud formation and therefore less reflected energy resulting in higher transmission of energy reaching Earth's lowered albedo surfaces and ultimately transformed to additional International Journal of Geosciences heat energy. This would result in increased temperatures than otherwise modelled. 
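Before taking up the latent-heat point further, the year-by-year accumulation described in the Methodology (Equations (1)-(3) are cited above but not reproduced in this text) can be sketched as follows. The surface flux, sunlight hours and ocean heat capacity are the stated values; the linear area ramp and the effective albedo reduction are placeholder assumptions, so the output illustrates the bookkeeping rather than reproducing the paper's quoted 2.28 × 10²³ J. The last lines also cross-check the 5.88 W/m² top-of-atmosphere figure quoted in the Conclusion, assuming a solar constant of about 1361 W/m².

```python
import numpy as np

# Bookkeeping sketch for the EBM accumulation described in the Methodology above.  The surface
# flux (184 W/m^2), 12 h of sunlight per day and the 1.45e23 J per kelvin ocean heat capacity
# are taken from the text; the linear area ramp and the effective albedo reduction below are
# assumed placeholders, so this reproduces the method, not the paper's exact temperature curve.

FLUX       = 184.0          # W/m^2 at the surface (from the text)
SECONDS    = 43_200 * 365   # 12 h/day of sunlight, per year (assumption 5 in the text)
HEAT_CAP   = 1.45e23        # J per kelvin for the top ~100 m of ocean (from the text)
AREA_FINAL = 6.7e12         # m^2 of converted land (6.7 million km^2, from the text)
D_ALBEDO   = 0.03           # assumed effective albedo reduction of the converted area

years = np.arange(1910, 2051)
area = AREA_FINAL * np.clip((years - 1910) / 110.0, 0.0, 1.0)   # assumed linear ramp
energy_per_year = D_ALBEDO * FLUX * area * SECONDS              # extra absorbed J per year
cumulative = np.cumsum(energy_per_year)
dT = cumulative / HEAT_CAP

print(f"accumulated energy by 2020: {cumulative[years == 2020][0]:.2e} J")
print(f"implied warming by 2020:    {dT[years == 2020][0]:.2f} K  (placeholder inputs)")

# Cross-check of the top-of-atmosphere figure quoted in the Conclusion: with a solar constant
# of ~1361 W/m^2, a TOA albedo drop from 0.3160 to 0.2987 changes the mean absorbed flux by
S0 = 1361.0
print(f"delta absorbed TOA flux:    {S0/4*(0.3160 - 0.2987):.2f} W/m^2  (close to the 5.88 quoted)")
```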
Current research indicates that latent heat measurements have increased over the century, resulting in greater cloud formation and rainfall [26,27]. However, from the results obtained by the two separate methods above, the heat fluxes used appear to be maintained, accurate, and within acceptable limits over the 110-year period. Conclusion In conclusion, this paper shows, utilising an EBM, that the unintended additional accumulated heat energy arising from all ALHE alterations has, year on year, resulted in increased global average temperatures since 1910, except during the atmospheric atomic testing years (1946-1981). A Pearson's correlation of 0.97, as well as a paired t-test result of 1.6 × 10⁻¹⁰, was calculated between these results and the NOAA average global temperatures, indicating a strong correlation. Using the Stefan-Boltzmann law, the Earth's average absorbed heat flux will increase by 5.88 Watts per square meter from 1910 to 2050, as an indirect result of Earth's TOA albedo decreasing from 0.3160 to 0.2987 over this timeframe. Without any contrary change or off-setting, this additional accumulated heat energy, totalling 2.29 × 10²³ Joules, will continue to enter Earth's system, warming the planet to a predicted temperature of 1.60 ± 0.20 Celsius above 1910 levels by 2050, independent of anthropogenic GHG increases. These findings should be closely studied and incorporated into more complex GCMs. If all global warming seen to date can be attributed to surface albedo changes, this raises the question of whether surface albedo changes may, in fact, have been a larger contributor to the global temperature fluctuations seen in the distant past and therefore the primary driver of natural climate change, i.e., Ice Ages/interglacial periods, and may indeed be aligned to an Earth albedo cycle associated with Gaia theory. Conflicts of Interest The author declares no conflicts of interest regarding the publication of this paper.
3,466.4
2020-06-08T00:00:00.000
[ "Environmental Science", "Geology" ]
The Investigation on Ultrafast Pulse Formation in a Tm–Ho-Codoped Mode-Locking Fiber Oscillator We experimentally investigate the formation of various pulses from a thulium–holmium (Tm–Ho)-codoped nonlinear polarization rotation (NPR) mode-locking fiber oscillator. The ultrafast fiber oscillator can simultaneously operate in the noise-like and soliton mode-locking regimes with two different emission wavelengths located around 1947 and 2010 nm, which are believed to be induced from the laser transition of Tm3+ and Ho3+ ions respectively. When the noise-like pulse (NLP) and soliton pulse (SP) co-exist inside the laser oscillator, a maximum output power of 295 mW is achieved with a pulse repetition rate of 19.85-MHz, corresponding to a total single pulse energy of 14.86 nJ. By adjusting the wave plates, the fiber oscillator could also deliver the dual-NLPs or dual-SPs at dual wavelengths, or single NLP and single SP at one wavelength. The highest 61-order harmonic soliton pulse and 33.4-nJ-NLP are also realized respectively with proper design of the fiber cavity. Introduction With the rapid development of ultrafast fiber lasers, increasingly complex ultrafast dynamics are discovered. The investigation of different ultrafast dynamics not only helps toward better understanding of the pulse evolution in an optical fiber, but also is useful for designing a mode-locking oscillator. Various ultrafast pulse evolution dynamics can be investigated theoretically based on the nonlinear Schrödinger equation [1,2], the coupled/complex Ginzburg-Landau equations [3,4], the Hirota bilinear formula [5], the Bogoyavlenskii-Schiff equation [6], and the Hirota-Satsuma-Ito equation [7]. The formation of soliton pulse is a well-known ultrafast dynamics which arises from the balance between the optical nonlinearity and anomalous chromatic dispersion [8][9][10][11]. The soliton pulse (SP) maintains its shape in both temporal (ps or fs) and spectral domain when propagating inside the fiber oscillator. The symmetrical Kelly sidebands distributed on the emission spectrum is a typical characteristic of the soliton pulse (see Figure 1a). In comparison, dissipative soliton is usually realized in the normal dispersion regime, always featured with a ps-long Gauss or sech shape in time domain and a square spectrum shape in frequency domain (see Figure 1b) [12][13][14]. Both of these two basic soliton pulses, SP and dissipative soliton, can turn into the dissipative soliton resonance (DSR) if large identical dispersion and high gain are simultaneously introduced into the mode-locking oscillator [15][16][17][18]. The DSR is characterized with a square pulse shape with an ns-or ps-pulse duration, smooth spectrum profile and large pulse energy (see Figure 1c). Furthermore, the basic soliton pulse will break into multiple pulses when the intracavity nonlinearity is overdriven by a large energy pulse. According to the pulse profiles in time domain, the pulse can be divided into different types: noise-like pulse (NLP) [19][20][21][22][23], bunched soliton pulses/optical soliton molecules [24][25][26][27] and soliton rain [28], et.al. NLP consists of a large number of small pulses randomly underlying in the same pulse envelope. Its spectrum is smooth without any spikes or modulations (see Figure 1d). Bunched soliton pulses also referred as optical soliton molecules are formed by multiple pulses gathered equal temporal distance. 
The typical characteristic of the bunched soliton pulse is the interference fringes on top of the spectrum (see Figure 1e). Soliton rain comprises three parts: a high-peak pulse called the condensed soliton phase, similar to an NLP with a group of multiple pulses under the envelope; drifting pulses, named drifting solitons, emerging from the noise background and vanishing on reaching the condensed soliton phase; and a wide noise background, which manifests itself as a small peak on top of the spectrum in the frequency domain (see Figure 1f). These different types of pulses (soliton pulse, DSR, NLP, bunched soliton pulses, etc.) can exist in the harmonic mode-locking regime, in which the pulse reproduces itself at a multiple of the fundamental pulse repetition rate, forming harmonic solitons [29], harmonic dissipative solitons [13], harmonic bunched solitons [27], and harmonic NLPs [30]. Attractively, these pulses can also co-exist in the same fiber oscillator, which greatly enriches the ultrafast dynamics in a mode-locking laser. For example, near the zero-dispersion wavelength region of the glass fiber, two different SPs with unequal pulse intensity were observed in an Er-doped fiber oscillator [31]. Additionally, in an Er-doped fiber oscillator, harmonic bunched solitons and an NLP were simultaneously achieved with a highly nonlinear fiber [25]. On the other hand, fiber oscillators operating above the zero-dispersion wavelength region provide a natural anomalous dispersion environment. These fiber oscillators, including 2 µm thulium (Tm)-doped, holmium (Ho)-doped, or Tm-Ho-codoped fiber oscillators, provide another platform for the investigation of pulse evolution dynamics [32][33][34][35]. The large gain of the Tm-doped fiber is mainly located in the <2000 nm wavelength region, while with the assistance of the Ho3+ ion the large net gain can easily be extended to the >2000 nm wavelength region in a Tm-Ho-codoped fiber, resulting in broadband wavelength emission ranging from 1.7 µm to 2.1 µm. Besides that, a dual-ion-doped ultrafast fiber laser can provide more abundant pulse dynamics due to the interaction between the co-doped ions. In this work, we first report the coexistence of an NLP and an SP in a nonlinear polarization rotation (NPR) mode-locking Tm-Ho-codoped fiber oscillator. The harmonic soliton pulse and NLP can also be obtained separately with proper design of the fiber cavity. In the co-existence regime, a maximum average output power of 295 mW is realized with a pulse repetition rate of 19.85 MHz, resulting in a pulse energy of 14.86 nJ. The dual NLPs or SPs at two different wavelengths, or a single NLP and SP at one wavelength, are also obtained by adjusting the wave plates. Moreover, harmonic soliton mode-locking with a 61st-order pulse is realized by increasing the cavity length. The physical formation mechanism for the coexistence of the different mode-locking pulses is analyzed. Results and Discussion The NLP and SP coexisting mode-locking operation is realized by carefully adjusting the wave plates at pump powers above 1.2 W. The power performance is recorded in Figure 2a. As the pump power scales up, the fiber laser gradually evolves from the continuous-wave (cw) regime to the Q-switched mode-locking (QML) regime and finally to the dual-pulse coexisting mode-locking regime. The maximum average output power reaches 292 mW at a pump power of 4.23 W.
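Since all pulses circulating in the ring share one round trip, the single-pulse energy follows directly from the average power divided by the repetition rate. A minimal sketch with the quoted figures (the text quotes both 295 mW and 292 mW; 295 mW reproduces the stated 14.86 nJ):

```python
# Single-pulse energy E = P_avg / f_rep for the coexisting regime,
# using the values quoted in the text.
p_avg = 0.295     # W   (295 mW maximum average output power)
f_rep = 19.85e6   # Hz  (fundamental repetition rate)

energy_nj = p_avg / f_rep * 1e9
print(f"pulse energy ~ {energy_nj:.2f} nJ")   # ~14.86 nJ, as reported
```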
Figure 2b shows that the spectrum of the NLP is located at the short wavelength (~1947 nm) and possesses a smooth profile with a bandwidth of 22 nm. The small spikes riding on the NLP spectrum are attributed to the absorption of water vapor in air. The long-wavelength emission spectrum, which is the spectrum of the SP, verified by the symmetrically distributed Kelly sidebands, is located around ~2010 nm with a spectral bandwidth of about 5 nm. The typical mode-locking pulse train is shown in the inset of Figure 2b, giving a pulse-to-pulse fluctuation of about 12%, which is deteriorated by the unstable NLP mode-locking. To further estimate the stability of the mode-locking operation, the radio frequency (RF) spectrum was measured over different scanning ranges, as shown in Figure 2c. The RF spectrum of the SP is overwhelmed by the RF spectrum of the NLP, which possesses a wide width and two sidelobes at the bottom of the fundamental frequency. The fundamental frequency is 19.85 MHz with a signal-to-noise ratio (SNR) of 70 dB, matching well with the fiber cavity length. Over a broad RF range up to 2 GHz, the RF spectrum (inset of Figure 2c) shows a broad comb of harmonics with an SNR higher than 40 dB. The pulse auto-correlation trace also features the characteristics of the NLP, consisting of a narrow femtosecond spike on a pedestal of hundreds of picoseconds (Figure 2d) [22,30]. The cross-section of the pedestal in the auto-correlation trace increases as the pump power scales up, implying a simultaneous increase of the pulse energy of the NLP. In order to investigate the SP separately, a filter is utilized to remove the NLP in the short-wavelength region (<2000 nm). The performance of the SP is shown in Figure 3. After the filter, the intensity of the NLP is reduced remarkably, while the intensity of the SP is relatively almost unchanged (see Figure 3a). Figure 3b shows the measured SP trains at time scales of 2 µs and 20 µs. The pulse-to-pulse fluctuation is reduced from 12% (see Figure 2b) to 5%. The RF spectrum in Figure 3c shows that the SNR of the fundamental frequency is 49 dB and there are no obvious NLP-induced sidelobes. The inset of Figure 3c indicates that the SNR of the harmonic comb is still larger than 20 dB over the 2 GHz scanning range. The measured SP auto-correlation trace shown in Figure 3d has a pedestal with a 20 ps duration, arising from the residual NLP (see Figure 3a). Assuming a sech² pulse shape, the pulse duration is determined to be 1 ps (inset of Figure 3d). The time-bandwidth product is evaluated to be 0.353, approaching the Fourier-transform-limited value of 0.315. By carefully rotating the wave plates at the maximum pump power, the SPs at one wavelength (1966.6 nm) and the other mode-locking regimes are obtained, as shown in Figure 4. The wavelength spacing of the dual center wavelengths is always around 60 nm in the different mode-locking regimes. Among these regimes, the maximum average output power of 512 mW is realized for the single-NLP mode-locking at 1990.0 nm, resulting in a pulse energy of 25.8 nJ. By slightly changing the parameters of the laser cavity, increasing the cavity length to ~25.4 m, the soliton harmonic mode-locking and NLP mode-locking are also realized, respectively. As the pump power scales up from 0.23 W to 4 W, the fiber oscillator can operate in the cw regime, the soliton harmonic mode-locking (HML) regime, and the NLP mode-locking regime. These different regimes can easily be distinguished from the emission spectra, which are shown in Figure 5a. In the soliton HML regime, the pulse repetition rate can reach
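The quoted time-bandwidth product can be checked from Δν = cΔλ/λ². The sketch below, taking the "~5 nm" bandwidth and 1 ps duration at face value, gives ≈0.37, consistent with the reported 0.353 given the rounding of the quoted bandwidth.

```python
# Time-bandwidth-product check for the filtered soliton pulse.
c = 3.0e8        # m/s
lam = 2010e-9    # m   center wavelength of the SP
d_lam = 5e-9     # m   quoted ~5 nm spectral bandwidth
tau = 1e-12      # s   sech^2 pulse duration

d_nu = c * d_lam / lam**2          # bandwidth converted to frequency
print(f"TBP ~ {d_nu * tau:.3f}")   # ~0.371 (reported: 0.353; sech^2 limit: 0.315)
```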
497.15 MHz, corresponding to the 61st order of the fundamental pulse repetition rate. This is the highest soliton order reported so far for Tm-Ho-codoped HML fiber oscillators. The pulse duration is 2.29 ps, assuming a sech² pulse shape (inset of Figure 5b), and the RF spectrum shows an SNR of 41 dB (see Figure 5c). In the NLP mode-locking regime, dual wavelengths with a spacing also around ~60 nm are observed, and the central wavelengths are close to those in Figure 2b. The broader pedestal of the NLP in Figure 5b indicates a much higher pulse energy (33.4 nJ) than that in Figure 2d. The RF spectrum in Figure 5d shows more obvious sidelobes at the bottom of the fundamental frequency, with an SNR of 51 dB. It should be noted that the limited resolution of the instruments used for characterizing ultrafast pulses, the output instability of the fiber oscillator itself, and environmental fluctuations can introduce some uncertainty into the measured ultrafast pulse performance. In addition, the phase noise uncertainty can be precisely measured as investigated in Reference [36]. In the experiment, dual-wavelength operation is always achieved in the different mode-locking regimes. The dual-wavelength operation could arise from the spectral filter effect in the birefringent fiber, or from the emissions of the Tm3+ and Ho3+ ions in the Tm-Ho-codoped active fiber. The period Δλ of the spectral filter induced by birefringence can be expressed as in [37], where λ is the emission wavelength, L is the length of the birefringent fiber, B_m = n_x − n_y is the modal birefringence, n_x and n_y are the refractive indices for the two polarizations, n_2 is the nonlinear refractive index, P is the instantaneous power of the laser, θ is an angle depending on the rotation of the wave plates, and A_eff is the effective mode area. In the experiment, the SMF functions as the birefringent fiber, as in Reference [37], so the calculated modal birefringence B_m is 4.9 × 10⁻⁶ and the nonlinear refractive index n_2 is 2.7 × 10⁻²⁰ m²/W [37]. According to the experimental results, we set λ = 1950 nm, P = 0.1 W, A_eff = 254 µm², and Δλ ≈ 60 nm, but no suitable θ can be found for the experimental cavity lengths L of 10.4 m or 25.4 m. Therefore, we believe that the dual-wavelength emission is independent of the spectral filter effect in the SMF-based birefringent fiber. Moreover, we find that the dual-wavelength emission is only observed when the pump power exceeds a certain value, as shown in Figure 5. This is a main characteristic of a Tm-Ho-codoped laser, which requires a high pump power for the energy transfer between Tm3+ and Ho3+ ions that enables dual-wavelength emission. The formation mechanism of the dual-wavelength emission is different from the previously reported methods [30,[38][39][40][41][42][43][44]]. The emission and absorption spectra of Tm3+ and Ho3+ ions are shown in Figure 6a. Although the gain wavelength region of the Tm3+ ion partly overlaps with the absorption of the Ho3+ ion, there still exists net gain in the wavelength region below 2000 nm. With the assistance of the Ho3+ ion, the large net gain can be extended to above the 2000 nm wavelength region in a Tm-Ho-codoped system. The ion transition processes in the Tm-Ho-codoped active fiber are shown in simplified form in Figure 6b. The Tm3+ ions in the ground state 3H6 are excited to the upper energy level 3F4 by the 1560 nm pump laser.
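The birefringent-filter equation itself is not reproduced in this extract, but its leading term is the standard Lyot-type period Δλ ≈ λ²/(B_m L). The sketch below, under that assumption, shows why no wave-plate angle θ can make the period equal 60 nm: the linear term misses 60 nm for both cavity lengths, while the Kerr correction, of order n₂P/A_eff, is about five orders of magnitude smaller than B_m.

```python
# Order-of-magnitude check on the birefringence-induced filter period,
# assuming the standard Lyot-type leading term d_lam = lam^2 / (B_m * L).
lam  = 1950e-9    # m
B_m  = 4.9e-6     # modal birefringence (quoted)
n2   = 2.7e-20    # m^2/W (quoted)
P    = 0.1        # W
Aeff = 254e-12    # m^2

for L in (10.4, 25.4):  # the two experimental cavity lengths, in m
    print(f"L = {L} m: d_lam ~ {lam**2 / (B_m * L) * 1e9:.1f} nm")
# -> ~74.6 nm and ~30.6 nm, neither close to the observed ~60 nm spacing
print(f"Kerr term n2*P/Aeff ~ {n2 * P / Aeff:.1e}  (vs B_m = {B_m:.1e})")
```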
When the pump power is low, most of the Tm3+ ions at the 3F4 level return to 3H6, accompanied by laser emission in the short-wavelength region below 2000 nm (laser emission 1 in Figure 6b). The energy transfer between 3F4 of the Tm3+ ion and 5I7 of the Ho3+ ion can then be ignored, so only one emission wavelength is observed under weak pumping. As the pump power scales up, the 3F4 level of the Tm3+ ion becomes strongly occupied, which results in a large energy transfer between the Tm3+ and Ho3+ ions. Thus, in addition to the laser emission in the short-wavelength region, the transition from the 5I7 level to the 5I8 level of the Ho3+ ions generates another laser emission above 2000 nm (laser emission 2 in Figure 6b). It should be noted that, for coexisting pulses at dual wavelengths, the laser emission in the short-wavelength region always possesses a larger gain than that in the long-wavelength region, due to the low concentration of Ho3+ ions. For example, as shown in Figures 2b, 4e and 5a, under strong pumping the SP or NLP with a low pulse energy forms at the long wavelength via the emission transition of the Ho3+ ion, while the pulse formed at the short wavelength accumulates a large energy owing to the large gain, which facilitates the formation of a high-energy NLP. However, in Figure 4b-d, we find that SPs and NLPs with large energies also form around 2000 nm. We believe this is attributable to the co-interaction of the Tm3+ and Ho3+ ions in this wavelength region (see Figure 6a). Materials and Methods The schematic diagram of the Tm-Ho-codoped mode-locking fiber oscillator is shown in Figure 7. The pump source is a continuous-wave 1562 nm Er-doped fiber laser amplifier (FLA), which delivers a maximum output power of 4.23 W with a power instability of 0.3% measured over 60 min. The pump laser is guided into the Tm-Ho-codoped fiber ring cavity by a wavelength division multiplexer (WDM). The fiber ring cavity consists of a 4.3 m long Tm-Ho-codoped single-mode active fiber (Coractive SM-TH512, 23 dB/m at 1570 nm, −56 ps²/km at 1900 nm, Canada), a polarization-independent isolator (PI-ISO), a group of NPR free-space optical components, and a 5.4 m long passive single-mode fiber (Nufern SMF-28e, −67 ps²/km at 1900 nm, USA). The NPR optical components include two quarter-wave plates, a half-wave plate, and a polarization beam splitter (PBS). The PBS simultaneously functions as both polarizer and output coupler. Taking into account the pigtail fibers of all the optical components inside the cavity, the total length of the fiber ring cavity is close to 10.4 m. Mode-locking operation can be realized by carefully adjusting the wave plates. The output pulse train is detected by an InGaAs PIN detector (EOT ET-5000, USA) and observed with an oscilloscope (Tektronix DPO 4102B-L, USA). Conclusions In this work, we first observed the coexistence of a noise-like pulse and a soliton pulse in a thulium-holmium-codoped ultrafast fiber oscillator. By carefully adjusting the wave plates, the coexisting noise-like pulse and soliton pulse can evolve into dual noise-like pulses or dual soliton pulses at dual wavelengths, or into a single noise-like pulse or single soliton pulse at one wavelength. A 61st-order harmonic soliton pulse and a 33.4 nJ noise-like pulse are also realized, respectively, by prolonging the fiber oscillator length.
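As a sanity check on the quoted repetition rates, the fundamental rate of a ring cavity is f = c/(n_eff L). Assuming a typical effective silica-fiber index of ~1.45 (not given in the text), the sketch below recovers both the ~19.85 MHz fundamental of the 10.4 m cavity and, via its 61st harmonic, the ~497.15 MHz rate of the 25.4 m cavity.

```python
# Fundamental repetition rate of a fiber ring: f = c / (n_eff * L).
# n_eff ~ 1.45 is an assumed typical value for silica fiber.
c, n_eff = 3.0e8, 1.45

f_short = c / (n_eff * 10.4)   # ~19.9 MHz (reported: 19.85 MHz)
f_long  = c / (n_eff * 25.4)   # ~8.1 MHz fundamental of the long cavity
print(f"{f_short/1e6:.2f} MHz; 61st harmonic of long cavity: "
      f"{61 * f_long/1e6:.1f} MHz")   # ~497 MHz (reported: 497.15 MHz)
```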
We believe the dual-wavelength emissions are attributable to the transitions of the Tm3+ and Ho3+ ions of the gain fiber, respectively. The coexistence of the NLP and SP at different wavelengths depends on the different gain available under strong pumping.
3,985.8
2021-06-01T00:00:00.000
[ "Physics", "Engineering" ]
The transition function of $G_2$ over $S^6$ We obtain explicit formulas for the trivialization functions of the $SU(3)$ principal bundle $G_2 \to S^6$ over two affine charts. We also calculate the explicit transition function of this fibration over the equator of the six-sphere. In this way we obtain a new proof of the known fact that this fibration corresponds to a generator of $\pi_{5}(SU(3))$. Introduction The well-known classification of simple Lie groups shows that $G_2$ is the smallest among the exceptional types. Its further interesting properties and applications are numerous. In this paper we revisit the compact real form $G_2$ from the viewpoint of differential geometry. We identify $G_2$ with $\operatorname{Aut}\mathbb{O} \subset O(7)$, the automorphism group of the Cayley octonions. It is a classical fact that there is a fibration $p : G_2 \to S^6$, which makes $G_2$ a locally trivial $SU(3)$-bundle over $S^6$. It is also known that the principal $SU(3)$-bundles over $S^6$ are classified by $\pi_5(SU(3)) = \mathbb{Z}$. A natural question is: to which element of $\pi_5(SU(3)) = \mathbb{Z}$ does the fibration $G_2 \to S^6$ correspond? In other words, what is the homotopy class of the transition function $S^5 \to SU(3)$ of the above fibration, where $S^5 \subset S^6$ is the equator of the six-sphere? The answer was already used in the physics literature before the first rigorous mathematical proof appeared. The structure of the paper is as follows. In Section 2 we give a brief introduction to the algebra of Cayley octonions and to several known facts about the group $G_2$. The new results of the paper are obtained in Section 3. Acknowledgements. The author would like to thank Gábor Etesi and Szilárd Szabó for several helpful comments and discussions. 2. Some known facts about $G_2$ 2.1. Cayley octonions. To perform calculations in the group $G_2$ we collect some known facts about the Cayley algebra of octonions. We follow [6] and for completeness we reproduce the proofs of the required results. Let $A$ be an algebra over the reals. A linear mapping $a \mapsto \bar{a}$ of $A$ to itself is said to be a conjugation, or involutory antiautomorphism, if $\bar{\bar{a}} = a$ and $\overline{ab} = \bar{b}\bar{a}$ for any elements $a, b \in A$ (the case $\bar{a} = a$ is not excluded). Definition 2.1 (Cayley-Dickson construction [6], [1]). Consider the vector space given by the direct sum of two copies of an algebra with conjugation: $A^2 = A \oplus A$. A multiplication on $A^2$ is defined as $(a, b)(u, v) = (au - \bar{v}b,\ b\bar{u} + va)$. It is easy to check that, relative to this multiplication, the vector space $A^2$ is an algebra of dimension $2 \cdot \dim(A)$. This is called the doubling of the algebra $A$. Remark. The correspondence $a \mapsto (a, 0)$ is a monomorphism of $A$ into $A^2$. Therefore we will identify the elements $a$ and $(a, 0)$ and thus assume that $A$ is a subalgebra of $A^2$. If $A$ has an identity element, then the element $1 = (1, 0)$ is obviously an identity element in $A^2$. A distinguished element in $A^2$ is $e = (0, 1)$. It follows from the definition of multiplication that $be = (0, b)$ and hence $(a, b) = a + be$ for all $a, b \in A$. Thus every element of the algebra $A^2$ can be written uniquely as $a + be$. Moreover, the following identities are true: (1) $a(be) = (ba)e$, $(ae)b = (a\bar{b})e$, $(ae)(be) = -\bar{b}a$. Lemma 2.2. If $A$ is a metric algebra, then $A^2$ is also metric. Proof. For any $a, b \in A$, $(a + be)\overline{(a + be)} = (a + be)(\bar{a} - be) = a\bar{a} + (be)\bar{a} - a(be) + \bar{b}b = a\bar{a} + \bar{b}b$, since $(be)\bar{a} = a(be)$ according to the rules of multiplication in (1). Therefore $(a + be)\overline{(a + be)} \in \mathbb{R}$, and it is obviously positive if $a$ or $b$ is not $0$. To iterate the Cayley-Dickson construction it is necessary to define a conjugation in $A^2$.
This will be done by the formula $\overline{a + be} = \bar{a} - be$. This is involutory, $\mathbb{R}$-linear, and simultaneously an antiautomorphism. Using this definition, the doubling $\mathbb{R}^2$ of the field $\mathbb{R}$ is the algebra $\mathbb{C}$ of complex numbers, and the doubling $\mathbb{C}^2$ of $\mathbb{C}$ is the algebra of quaternions $\mathbb{H}$. In the latter case $e$ is denoted by $j$ and $ie$ is denoted by $k$, and thus a general quaternion is of the form $r = r_1 + r_2 i + r_3 j + r_4 k$, where $r_i \in \mathbb{R}$, $i = 1, 2, 3, 4$. Due to the identities (1), $ea = \bar{a}e$ for all $a \in A$. Therefore $A^2$ is not commutative if the original conjugation is not the identity mapping. In particular $\mathbb{H}$ is not commutative. The doubling of the algebra of quaternions leads to an 8-dimensional algebra over the reals. By definition every octonion is of the form $\xi = a + be$, where $a$ and $b$ are quaternions. The basis of $\mathbb{O}$ consists of $1$ and the seven elements $i, j, k, e, f = ie, g = je, h = ke$. The square of each of these elements is $-1$, and they are orthogonal to $1$. It is known that the Cayley algebra is an 8-dimensional alternative division algebra [6]. The algebra $\mathbb{O}$ is alternative, i.e. $(\xi\eta)\eta = \xi(\eta\eta)$ and $\xi(\xi\eta) = (\xi\xi)\eta$ for all $\xi, \eta \in \mathbb{O}$. The associator of three elements is the trilinear map defined by $[a, b, c] = (ab)c - a(bc)$. The algebra is alternative precisely if $[a, a, b] = 0$ and $[a, b, b] = 0$ for all $a, b$. Both of these identities together imply that for an alternative algebra the associator is totally skew-symmetric (or alternating), that is, $[a_{\sigma(1)}, a_{\sigma(2)}, a_{\sigma(3)}] = \operatorname{sgn}(\sigma)\,[a_1, a_2, a_3]$ for any permutation $\sigma$. From this it follows that $[a, b, a] = 0$, which means that $(ab)a = a(ba)$. Due to Lemma 2.2 the algebra $\mathbb{O}$ is metric. Next we show that it is also normed. Lemma 2.4. The algebra $\mathbb{O}$ is a normed algebra with the norm generated by the metric. In particular, it is a division algebra. Suppose $v = \lambda + v'$, where $\lambda \in \mathbb{R}$ and $v' \in \mathbb{H}'$, and hence $\bar{v}' = -v'$. From this the claim follows, because $aub + b\bar{u}\bar{a}$ is real and therefore commutes with $v'$. Since $\mathbb{O}$ is a normed algebra with the norm induced by the metric, $\langle ab, ab \rangle = \langle a, a \rangle\langle b, b \rangle$ for all $a, b \in \mathbb{O}$. Polarizing this first in $b = x + y$ and then in $a = u + v$ we obtain (9). Proof. Due to the identities of alternativity and elasticity, and due to the skew-symmetry of the associator, the two sides can be rewritten so that, taking into account the skew-symmetry of the associator, the sums of the first two terms on each side are equal; for the same reason, so are the sums of the last two terms. Now we replace $b$ with $\lambda b$, where $\lambda \in \mathbb{R}$, then divide both sides by $\lambda$ and take $\lambda = 0$. This results in (10); (11) can be proved similarly. Moreover, (12) can be proved by using again the skew-symmetry of the associator. 2.2. The subgroup $SU(3)$. Consider the subset of the vector space $\mathbb{O}'$ consisting of the elements $\xi$ such that $|\xi| = 1$. This set is a 6-dimensional sphere, which is denoted by $S^6$. An automorphism $\Phi : \mathbb{O} \to \mathbb{O}$ sends the elements $i$, $j$ and $e$ to elements $\xi = \Phi i$, $\eta = \Phi j$ and $\zeta = \Phi e$ in $S^6$ such that $\eta$ is orthogonal to $\xi$, and $\zeta$ is orthogonal to $\xi$, $\eta$ and $\xi\eta$. The next theorem shows that these conditions are not only necessary but also sufficient for the existence of the automorphism $\Phi$. The statement of the following theorem is classical. Let $H$ be a unital (and therefore closed under conjugation) subalgebra of $\mathbb{O}$ other than $\mathbb{O}$, and let $\xi$ be an octonion in $S^6$ orthogonal to $H$. Lemma 2.9. For any element $b \in H$ the octonion $b\xi$ is orthogonal to $H$. In particular, $b\xi \perp 1$ (since $1 \in H$), so $\overline{b\xi} = -b\xi$. Proof. First, applying (8) to $x = \xi$, $y = b$, and taking into account that $\xi \perp b$ and therefore $\xi \perp \bar{b}$, we obtain an equation which is equivalent to the first identity. Using the identities of Lemma 2.10 we also obtain the second. Proof of Theorem 2.7.
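As a numerical companion to the doubling formula of Definition 2.1, the hedged sketch below (Python with NumPy; coordinates in the basis $1, i, j, k, e, f, g, h$, matching the recursive doubling order) implements the recursive multiplication and spot-checks that the resulting 8-dimensional algebra is normed and alternative but not associative. It is an illustration of the construction, not part of the paper's argument.

```python
import numpy as np

def conj(x):
    # Cayley-Dickson conjugation: (a, b)~ = (a~, -b); reals are fixed.
    if len(x) == 1:
        return x.copy()
    n = len(x) // 2
    return np.concatenate([conj(x[:n]), -x[n:]])

def cd_mult(x, y):
    # Doubling formula of Definition 2.1: (a, b)(u, v) = (au - v~ b, b u~ + v a).
    if len(x) == 1:
        return x * y
    n = len(x) // 2
    a, b = x[:n], x[n:]
    u, v = y[:n], y[n:]
    return np.concatenate([
        cd_mult(a, u) - cd_mult(conj(v), b),
        cd_mult(b, conj(u)) + cd_mult(v, a),
    ])

rng = np.random.default_rng(0)
p, q, r = (rng.standard_normal(8) for _ in range(3))

# Norm is multiplicative, |pq| = |p||q|: O is a composition algebra (Lemma 2.4).
assert np.isclose(np.linalg.norm(cd_mult(p, q)),
                  np.linalg.norm(p) * np.linalg.norm(q))
# Alternativity: (pp)q = p(pq); associativity fails for generic triples.
assert np.allclose(cd_mult(cd_mult(p, p), q), cd_mult(p, cd_mult(p, q)))
assert not np.allclose(cd_mult(cd_mult(p, q), r), cd_mult(p, cd_mult(q, r)))
```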
It follows from the fact that $\xi, \eta \in S^6$ that $\xi\eta \in \mathbb{O}'$, and because $|\xi\eta| = |\xi||\eta| = 1$, one has $\xi\eta \in S^6$. Consequently $(\xi\eta)^2 = -1$. Using the identity of alternativity, $\xi(\xi\eta) = (\xi\xi)\eta = -\eta$ and $(\xi\eta)\eta = \xi(\eta\eta) = -\xi$. Applying (9) with $u = v = \xi$, $x = \eta$ and $y = 1$ yields $\langle \xi\eta, \xi \rangle = \langle \xi, \xi \rangle\langle \eta, 1 \rangle = 0$, from which it follows that $\xi\eta \perp \xi$. Similarly, it can be shown that $\eta(\xi\eta) = \xi$, and this means that, multiplying any number of the elements $\xi$ and $\eta$ in any order, only the elements $\pm 1, \pm\xi, \pm\eta$ and $\pm\xi\eta$ can be obtained. That is, the elements of the form $\lambda + \mu\xi + \nu\eta + \rho(\xi\eta)$, with $\lambda, \mu, \nu, \rho \in \mathbb{R}$, constitute a 4-dimensional subalgebra $H$ of $\mathbb{O}$, which is an associative subalgebra due to Lemma 2.11. That is, the correspondences $i \mapsto \xi$, $j \mapsto \eta$, $k \mapsto \xi\eta$ define an isomorphism of the algebra of quaternions $\mathbb{H}$ onto the algebra $H$. Because $\zeta$ is by assumption orthogonal to the elements $1, \xi, \eta$ and $\xi\eta$, it is orthogonal to the entire algebra $H$, and therefore the identities of Lemma 2.10 hold for it. From the second of these identities it follows that for the subalgebra generated by $H$ and $\zeta$ the identity (8) is also true. Therefore, it is possible to extend the isomorphism $\mathbb{H} \to H$ linearly to a homomorphism of a subalgebra of $\mathbb{O}$ onto the subalgebra generated by $H$ and $\zeta$, by sending $e$ to $\zeta$. If a nonzero homomorphism of a unital division algebra is given, then it is a monomorphism, because if some $\xi \neq 0$ were mapped to $0$, then $\xi^{-1}$ would not have a finite image. Therefore the extended homomorphism $\Phi$ is a monomorphism of $\mathbb{O}$ into itself, and in this way it is bijective, i.e. it is an automorphism of $\mathbb{O}$. This means that we have constructed an automorphism $\Phi : \mathbb{O} \to \mathbb{O}$ sending the elements $i, j, e$ to $\xi, \eta, \zeta$ (and, of course, $k, f, g, h$ to $\xi\eta, \xi\zeta, \eta\zeta, (\xi\eta)\zeta$, respectively). From Theorem 2.7 it follows that the group $G_2 = \operatorname{Aut}\mathbb{O}$ acts transitively on $S^6$, i.e. the mapping $p : G_2 \to S^6$ defined by the formula $\Phi \mapsto \Phi i$ is surjective. Let us denote by $K$ the stabilizer (isotropy) group of $i$; this means that $K = \{\Phi \in G_2 : \Phi i = i\}$. Due to the standard theorem on transitive Lie group actions ([5], Theorem 9.24), $G_2/K \cong S^6$. The subspace $V = \operatorname{Span}\{1, i\}^{\perp}$ of the algebra $\mathbb{O}$ is closed under multiplication by $i$, and thus it can be considered as a vector space over the field $\mathbb{C}$ with basis $j, e, g$. The scalar product in $\mathbb{O}$ induces in $V$ a Hermitian scalar product with respect to which the basis $j, e, g$ is orthogonal. Any automorphism $\Phi : \mathbb{O} \to \mathbb{O}$ which leaves the element $i$ fixed, i.e. which is in the subgroup $K$, defines an operator $V \to V$ that is linear over $\mathbb{C}$. This operator preserves the scalar product, and therefore it is a unitary operator. Its determinant is $1$ because $\operatorname{Aut}\mathbb{O} \subset SO(7)$, and therefore the group $K$ is identified with some subgroup of the group $SU(3)$. From Theorem 2.7 it also follows that the group $K$ coincides with the entire group $SU(3)$. Thus it may be assumed that $K = SU(3)$. Corollary 2.12. Consider the evaluation mapping $p : G_2 \to S^6$, $\Phi \mapsto \Phi i$. This $SU(3)$ action clearly carries fibers to fibers. Since $SU(3)$ acts freely and transitively on itself, it behaves the same way on $p^{-1}(i)$ and therefore on all of the fibers. 2.3. The subgroup of inner automorphisms. In an associative division algebra, such as the quaternions over the reals, the mapping $q_r : x \mapsto rxr^{-1}$ is always an automorphism for any invertible element $r$; it is called an inner automorphism. In a non-associative algebra it is not always true that $(rx)r^{-1} = r(xr^{-1})$ for all $x, r$. Moreover, not every invertible element generates an inner automorphism. Still, in the case of the octonions a well-defined linear transformation associated to an element $r$ can be defined, because of the following lemma. Proof. If the coordinates of $r$ in the standard basis are
$(r_1, \ldots, r_8)$, then $r^{-1} = \bar{r}/|r|^2 = (2r_1 - r)/|r|^2$. Therefore, using the identity of elasticity, the claim follows. The following result classifies those elements $r$ for which the linear map $q_r$ is an automorphism of $\mathbb{O}$. For completeness, we reproduce its original proof. Proof. From (12), with $a = r$, $b = xr^{-1}$ and $c = ryr$, it follows, and therefore, substituting into (14), that $r((xy \cdot r))r = r(xy)r^2$ for all $x, y, r \in \mathbb{O}$. Multiplying this by $r^3$ from the right we get (16). Comparing (15) to (16), we see that in order for $q_r$ to be an automorphism, $r^3$ must be a scalar. 3. $G_2$ as an $SU(3)$-bundle over $S^6$ 3.1. The trivialization functions. Our aim is to determine the transition function of the fibration $p : G_2 \to S^6$, $\Phi \mapsto \Phi i$, between the two charts of $S^6$ given by $S^6 \setminus \{S\}$ and $S^6 \setminus \{N\}$. The preimage of $i$ is the set $p^{-1}(i) = \{(i, \eta, \zeta) : \eta \perp i,\ \zeta \perp \operatorname{Span}\{i, \eta, i\eta\}\}$. As mentioned above, this is isomorphic to $SU(3)$, and this isomorphism will be called $\theta_1$. From now on, elements in $p^{-1}(i)$ will be considered either as orthonormal vector triples in $V_i = T_i S^6$ or as operators that leave the vector $i$ fixed. Proposition 3.1. The trivialization map over $S^6 \setminus \{S\}$ is given by the formula below, where $\varphi(i)$ is the image of $i$ under $\varphi$ and $\theta_{\varphi(i)}(\varphi)$ is given by (17) below. Proof. It follows from the earlier considerations that there is a complex structure $J_i(v) = iv$. This is clearly a $V_i \to V_i$ mapping, and $J_i^2 = -\operatorname{id}$. Thus there is an isomorphism $\theta_i : V_i \to \mathbb{C}^3$, which allows us to assign to each operator $\Phi \in p^{-1}(i)$, $\Phi : V_i \to V_i$, its matrix representation in the complex basis $\{j, e, g\}$. As a consequence, for any $\xi \in S^6 \setminus \{S\}$ and any $\varphi \in p^{-1}(\xi)$, $\varphi$ restricts to a mapping $V_i \to V_\xi$ which is complex linear, unitary, and has determinant $1$. We will choose a complex orthonormal basis in $V_\xi$ and write the images of $j, e$ and $g$ in this basis. That is, we choose particular identifications $V_i \approx \mathbb{C}^3$, $V_\xi \approx \mathbb{C}^3$, and we define $\theta_\xi : p^{-1}(\xi) \to SU(3)$ by assigning to each automorphism $\varphi \in p^{-1}(\xi)$ the matrix of the mapping $\varphi : V_i \to V_\xi$. To find a basis in $V_\xi$ we will define a translating automorphism $Q_\xi$ such that $Q_\xi(i) = \xi$. Then, for $a = Q_\xi(j)$, $b = Q_\xi(e)$ and $c = Q_\xi(g)$, the set of vectors $\{a, b, c\}$ is a complex orthonormal basis in $V_\xi$ with respect to the complex structure $J_\xi(v) = \xi v$. In particular, using this, the trivializing map (17) is obtained. Similarly, the preimage of $-i$ is diffeomorphic to $SU(3)$, and in this case the complex structure on $V_{-i}$ is given by $J_{-i}(v) = -iv$. Therefore $\tilde{\theta}_{-i}$ is defined as \[\begin{pmatrix} \langle\eta, j\rangle + I\langle\eta, -k\rangle & \langle\zeta, j\rangle + I\langle\zeta, -k\rangle & \langle\eta\zeta, j\rangle + I\langle\eta\zeta, -k\rangle \\ \langle\eta, e\rangle + I\langle\eta, -f\rangle & \langle\zeta, e\rangle + I\langle\zeta, -f\rangle & \langle\eta\zeta, e\rangle + I\langle\eta\zeta, -f\rangle \\ \langle\eta, g\rangle + I\langle\eta, -h\rangle & \langle\zeta, g\rangle + I\langle\zeta, -h\rangle & \langle\eta\zeta, g\rangle + I\langle\eta\zeta, -h\rangle \end{pmatrix}.\] As in the previous case, for a general point $\xi \in S^6 \setminus \{N\}$ we will choose a translating automorphism $\tilde{Q}_\xi$ with the property that $\tilde{Q}_\xi(-i) = \xi$, and therefore $\tilde{Q}_\xi(j), \tilde{Q}_\xi(e), \tilde{Q}_\xi(g) \in V_\xi$ form a complex orthonormal basis. Then we define $\tilde{\theta}_\xi : p^{-1}(\xi) \to SU(3)$ by assigning to $\varphi \in p^{-1}(\xi)$ the matrix of the corresponding linear mapping from $V_{-i}$ onto $V_\xi$, written in the bases $\{j, e, g\}$ at $V_{-i}$ and $\{\tilde{a}, \tilde{b}, \tilde{c}\} := \{\tilde{Q}_\xi(j), \tilde{Q}_\xi(e), \tilde{Q}_\xi(g)\}$ at $V_\xi$. Similarly as in the proof of Proposition 3.1 we obtain the morphism (18). Finally, the analogue of Proposition 3.1 holds for this chart. Proposition 3.4. The trivialization map over $S^6 \setminus \{N\}$ is then given by the corresponding formula, where $\varphi(i)$ is the image of $i$ under $\varphi$ and $\tilde{\theta}_{\varphi(i)}(\varphi)$ is given by (18).
To summarize, if $Q_\xi, \tilde{Q}_\xi \in G_2$ are known as functions of $\xi$ with the properties $Q_\xi(i) = \xi$ and $\tilde{Q}_\xi(-i) = \xi$, then an appropriate basis in $V_\xi$ is $a = Q_\xi(j)$, $b = Q_\xi(e)$, $c = Q_\xi(g)$, which are the translations of the basis $j, e, g$ from $V_i$, in the case of the first chart. In the case of the second chart, $\tilde{Q}_\xi$ translates $j, e, g$ from $V_{-i}$ to $V_\xi$. Thus, we need to find elements $Q_\xi \in G_2$ and $\tilde{Q}_\xi \in G_2$. Knowing the first one is enough, because the second is then given due to the corresponding identities. It will be convenient to look for $Q_\xi$ in the form of an inner automorphism generated by an element $r \in \mathbb{O}$. The easiest is to look for a unit-length octonion that induces $Q_\xi$. For a unit-length octonion $r$ the conjugate of $i$ with $r$ is $(\ldots,\ 2(r_2 r_3 + r_1 r_4),\ 2(r_2 r_4 - r_1 r_3),\ 2(r_2 r_5 + r_1 r_6),\ 2(r_2 r_6 - r_1 r_5),\ 2(r_2 r_7 - r_1 r_8),\ 2(r_1 r_7 + r_2 r_8))$. Since $r_\xi i \bar{r}_\xi = \xi = (0, x_2, \ldots, x_8)$ is needed, the following system of equations is to be solved. From Theorem 2.14 it follows that $r_1 = \frac{1}{2}$ is required. The general solution of this system of equations for an arbitrary $\xi \in S^6 \setminus \{S\}$ is given in (19). 3.2. The transition function over the equator. As in the previous sections, we cover the base space $S^6$ with two trivializing charts given by $S^6 \setminus \{S\}$ and $S^6 \setminus \{N\}$. We are interested in the transition function between the two trivializations over the equator. This is enough to reconstruct the whole fibration, since the equator is a deformation retract of the intersection of the charts. The equator $S^5$ can be identified with a submanifold of $V_i$, where the coordinate functions $u, v$ and $w$ are the duals of $j, e$ and $g$, respectively. Proposition 3.5. The transition function between the two trivializations of the principal $SU(3)$-bundle $G_2 \to S^6$ at the equator is as stated below. From now on we assume that any $\xi \in \mathbb{O}$ is in the equator of $S^6$, and thus $x_2 = 0$. In this case the solution (19) simplifies accordingly. Due to the fact that $i\xi = (0, 0, -x_4, x_3, -x_6, x_5, x_8, -x_7)$ we obtain the simplified expression. It is easy to check that $r_\xi$ is really a solution, because in this case, due to the identity of elasticity and Lemma 2.13, we may perform the multiplication in arbitrary order. Consequently, the required automorphisms for an arbitrary $\xi \in S^6 \setminus \{S, N\}$ are obtained. Once again, the trivializing maps and the transition function between the two trivializations follow. As discussed above, the meaning of $\psi_1$ is the following: \[(\xi, \eta, \zeta) \mapsto \begin{pmatrix} \langle\eta, Q_\xi j\rangle + I\langle\eta, Q_\xi k\rangle & \langle\zeta, Q_\xi j\rangle + I\langle\zeta, Q_\xi k\rangle & \langle\eta\zeta, Q_\xi j\rangle + I\langle\eta\zeta, Q_\xi k\rangle \\ \langle\eta, Q_\xi e\rangle + I\langle\eta, Q_\xi f\rangle & \langle\zeta, Q_\xi e\rangle + I\langle\zeta, Q_\xi f\rangle & \langle\eta\zeta, Q_\xi e\rangle + I\langle\eta\zeta, Q_\xi f\rangle \\ \langle\eta, Q_\xi g\rangle + I\langle\eta, Q_\xi h\rangle & \langle\zeta, Q_\xi g\rangle + I\langle\zeta, Q_\xi h\rangle & \langle\eta\zeta, Q_\xi g\rangle + I\langle\eta\zeta, Q_\xi h\rangle \end{pmatrix}.\] The mapping $Q_\xi(v) = r_\xi v \bar{r}_\xi$ is linear in $v$, because $\mathbb{O}$ is distributive and scalars commute with everything. By construction, $Q_\xi$ maps the subspace $V_i$ to $V_\xi$ isomorphically. Proof. To compute $Q_\xi(v)$, four groups of identities will be necessary. (1) The first follows from the definition of the scalar product in $\mathbb{O}$ and (8). (2) Similarly, summing over the two corresponding equations leads to the second. (3) With essentially the same tricks one obtains the third. (4) Once again, putting these together gives the result, where in the sixth equality the formulas (20), (21), (22), (23) and (24) were used, while in the seventh equality the rule $v\xi = -\xi v - 2\langle v, \xi\rangle$ was applied. Using this result, the inverse function $Q_\xi^{-1} : V_\xi \to V_i$ can be calculated as well, by observing that the roles of $i$ and $\xi$ are played by $-\xi$ and $-i$, respectively.
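Continuing the numerical sketch above (reusing np, rng, conj and cd_mult), one can spot-check the criterion of Theorem 2.14: conjugation by a unit octonion $r$ with $r_1 = \frac{1}{2}$, so that $r^3 = -1$ is a scalar, acts as an algebra automorphism. This is illustration only, not the paper's proof.

```python
# Check Theorem 2.14 numerically: q_r(x) = r x r^{-1} is an automorphism
# of O when r^3 is a scalar; unit octonions with r_1 = 1/2 qualify.
def q(r, x):
    r_inv = conj(r) / np.dot(r, r)         # r^{-1} = r~ / |r|^2
    return cd_mult(cd_mult(r, x), r_inv)   # well defined by Lemma 2.13

u = rng.standard_normal(8); u[0] = 0.0; u /= np.linalg.norm(u)  # unit imaginary
r = np.zeros(8); r[0] = 0.5; r += (np.sqrt(3.0) / 2.0) * u      # r^3 = -1

x, y = rng.standard_normal(8), rng.standard_normal(8)
# Multiplicativity q_r(xy) = q_r(x) q_r(y) holds for such r:
assert np.allclose(q(r, cd_mult(x, y)), cd_mult(q(r, x), q(r, y)))
```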
Taking into account that any $v \in V_\xi$ is perpendicular to $\xi$, virtually the same calculation leads to (26). Moreover, for an arbitrary $v \in V_i$ more preparation is needed. (1) Applying (5) we obtain (27). (2) By changing the order of the terms in the multiplications, and using (9) and the definition of multiplication, one obtains (28). (3) By exchanging $\xi$ with $i\xi$ in (27) one has (29). (4) If $a, b \in \mathbb{O}'$ and $a \perp b$, then $ab$ is orthogonal to both $a$ and $b$; thus (30). (5) Finally, taking into account again the orthogonality assumptions and (9), we obtain (31). As a consequence, this leads to the required expression. To simplify the calculation it is useful to get rid of the constant factor. According to Lemma 3.6 we have the stated computation, where in the fourth equality the formulas (26), (28), (29), (30) and (31) were used. To sum up, the required transformation is given by the resulting formula. Proof of Proposition 3.5. As mentioned earlier, the subspace $V_i$ is a complex linear space with basis $j, e, g$ and complex structure $J_i(v) = iv$. Since $\xi \in V_i$, the coordinate expression of $\xi$ in $V_i$ can be written as $\xi = uj + ve + wg$, where $u, v, w \in \mathbb{C}$, $u = u_1 + u_2 I$, $v = v_1 + v_2 I$, $w = w_1 + w_2 I$, and $u_i, v_i, w_i \in \mathbb{R}$ for $i = 1, 2$. Because $V_i = V_{-i}$ as a subspace, $\xi$ can be expressed as an element of $V_{-i}$ as well. Here the basis is the same, but the complex structure is given by $J_{-i}(v) = -iv$. Therefore, the coordinate expression of the same $\xi$ here is $\xi = \bar{u}j + \bar{v}e + \bar{w}g$. According to the multiplication rule of the basis vectors of $\mathbb{O}$ (which is represented by the Fano plane), it is possible to compute the multiplication of $\xi$ with the basis vectors from the left; the resulting vector, whose terms are calculated here, is in $V_{-i}$. Similarly, $\xi i = -u_1 k + u_2 j - v_1 f + v_2 e + w_1 h + w_2 g = (u_2 + u_1 I)j + (v_2 + v_1 I)e + (w_2 + w_1 I)g = (uI)j + (vI)e + (wI)g$. Then, putting it all together, we obtain the matrix which represents the mapping; to get the matrix of the same function as a $V_{-i} \to V_{-i}$ mapping, each complex coordinate of $\xi$ should be conjugated. This proves the statement. 3.3. The class of $G_2$. As is known, the principal $SU(3)$-bundles over $S^6$ are classified by $\pi_5(SU(3))$. The following fact is well known, but again we include a sketch proof of it.
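For reference, the clutching classification underlying this section can be stated as follows; this is a standard background fact (cf. Steenrod's classification of bundles over spheres), stated here on our own authority rather than quoted from the source.

```latex
% Standard clutching classification of principal bundles over spheres:
\[
  \{\text{principal } SU(3)\text{-bundles over } S^6\}/\text{iso}
  \;\xrightarrow{\ \cong\ }\; \pi_5(SU(3)) \cong \mathbb{Z},
  \qquad [E] \longmapsto [\,g_E \colon S^5 \to SU(3)\,],
\]
% where g_E is the transition function of E restricted to the equator S^5.
```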
5,871.6
2018-11-08T00:00:00.000
[ "Mathematics" ]
Clay Mineralogy of Basaltic Hillside Soils in the Western State of Santa Catarina A commonly accepted concept holds that highly fertile, shallow soils are predominant in the Basaltic Hillsides of Santa Catarina State, in southern Brazil, but that their agricultural use is restricted by excessive stoniness, low effective depth, or steep slopes. Information about soil properties and distribution along the slopes in this region is, however, scarce, especially regarding genesis and clay fraction mineralogy. The objective of this study was to evaluate the soil properties of 12 profiles distributed in three toposequences (T) of the Basaltic Hillsides of the State of Santa Catarina, two located in the valley of the Peixe River (Luzerna, T1, and Ipira, T2) and one in Descanso, in the far west of the state (T3). The main focus was the mineralogical composition of the clay fraction, identified by X-ray diffractometry (XRD), and its relations with the soil chemical properties. The morphological, chemical, and mineralogical properties of the soils of the toposequences differed from each other. In most soils, the position of the most intense XRD reflections indicated a predominance of kaolinite (K); however, because these reflections were broad and asymmetric, a contribution of interstratified kaolinite-smectite (K-S) was assumed. The soils of T2 and T3, located in regions with higher temperatures, lower water surplus, and lower altitude than those of T1, were more fertile, mostly redder, and contained higher proportions of smectites (S) and interstratified K-S minerals, accounting for the higher activity of the clay fraction of most of these soils. The T1 soils were generally less fertile, with lower clay activity, and, aside from kaolinite, contained smectites with interlayered hydroxy-Al polymers (HIS). The low estimated smectite contents of the most fertile soils of all toposequences are at odds with their high cation exchange capacity (CEC) and clay activity values, which would not be expected for purely kaolinitic soils. The broad and asymmetric reflections of most of the supposed kaolinites identified as dominant minerals indicate the presence of K-S interstratification, most likely contributing to raising the CEC of the soils. INTRODUCTION The area known as the Basaltic Hillsides corresponds to little more than a quarter of the territory of Santa Catarina and comprises small municipalities with small and medium-sized rural properties, whose main economic activities are agriculture and animal husbandry (Epagri, 2014). According to a widespread concept, the soils of the Basaltic Hillsides are predominantly shallow to moderately deep and highly fertile, whereas their agricultural use is restricted by excessive stoniness, low effective depth, or limitations due to steep slopes (Potter et al., 1998). The geology of the region is determined by the Serra Geral mountain range, recently reclassified into the category of the Serra Geral Group (Santa Catarina, 2014), and composed of basic and intermediate lava flows that occurred approximately 120-135 million years ago in the Paraná Basin (Castro, 1994; Santa Catarina, 2014). During and after the lava flows, pedogenesis on the basaltic hillsides was strongly influenced by climatic variations in the late Pleistocene and throughout the Holocene, e.g., the occurrence of glacial and interglacial periods (Nakata and Coelho, 1986; Ledru, 1993), forming the geomorphological unit of the Dissected Plateau of the Iguaçu/Uruguai Rivers (Santa Catarina, 2014).
The area has a predominantly rugged, multilevel topography, delineated by the succession of lava flows, forming relatively young, narrow valleys with intense dissection and a dynamic relief (Bigarella et al., 1965; Leinz and Amaral, 1969). Currently, these valleys have the peculiar characteristics of specific microclimates (Sacco, 2010), resulting in a great variety of combinations of different relief phases, which may cause differences in soil properties within small distances. Soils with mainly fine texture were derived from the effusive rocks of the Serra Geral mountain range, with highly variable depth and color, according to the relief and climate conditions. The predominantly reported classes are Latossolos and Nitossolos (Oxisols) at sites with flatter relief, and Neossolos and Cambissolos (Entisols and Inceptisols) where the relief is more rugged (Embrapa, 2004). The mineralogy commonly observed in the soils developing from these rocks is predominantly kaolinitic. However, some soils have a mineralogical composition in which 2:1 minerals and oxyhydroxides (mainly hematite and goethite) predominate (Kämpf et al., 1995; Paisani and Geremia, 2010; Pedron et al., 2012; Teske et al., 2013). Another relevant and common occurrence in the mineralogy of these soils is that of kaolinites with diffraction patterns different from those of the kaolinites generally found in other Brazilian soils. These kaolinites are characterized by broad and asymmetric reflections in the 001 and 002 atomic planes, resulting in d001 values equal to or greater than 0.72 nm, which may indicate interstratified minerals, mainly of the kaolinite-smectite type (Bortoluzzi et al., 2007; Teske et al., 2013). In addition to the interstratified minerals, descriptions of 2:1 clay minerals with interlayered hydroxy-Al polymers, such as those studied by Kämpf et al. (1995), are very frequent. Based on the assumptions of a relatively young landscape, the mineralogical components of the parent rock, and the relatively high clay fraction activity values of most of these soils, the hypothesis of a considerable content of expandable, high-CEC clay minerals in the clay fraction was established. The objective of this study was to describe and characterize 12 soil profiles distributed in three toposequences of the Basaltic Hillsides of the State of Santa Catarina, with emphasis on the mineralogical composition of the clay fraction, identified by X-ray diffractometry (XRD), and its relation to the soil chemical properties. MATERIALS AND METHODS The Basaltic Hillside region of western Santa Catarina is located mainly on flows of the basic sequence, which occurred 120-135 million years ago, with predominantly basalts and phenobasalts of the Serra Geral Group (Santa Catarina, 2014). From the geomorphological point of view, the region corresponds to the Dissected Plateau of the Iguaçu and Uruguay Rivers, cut by deep valleys with levelled slopes (Embrapa, 2004; Santa Catarina, 2014). Three toposequences with different microclimates in the western region of the state of Santa Catarina were selected. Four soil profiles, representing different segments of the landscape, were described for each (Table 1).
The first two toposequences were located in the Peixe River valley (toposequence I, in the municipality of Luzerna, with a Cfb climate, and toposequence II, between the municipalities of Ipira and Peritiba, with a Cfa climate); toposequence III was described in the Antas River valley, in the far west of the state, in the municipality of Descanso (Figure 1), also under a Cfa climate, but with a lower soil water surplus, since average temperatures are higher, favoring evapotranspiration. Figure 1. Image of the study area with the location of the municipalities where the three evaluated toposequences were described. The soil morphology of each profile was described as proposed by Santos et al. (2005). After sampling, the material was air-dried, crumbled, and ground, and the fraction <2 mm was separated by sieving to obtain the air-dried fine earth (ADFE) fraction. Physical analyses The particle-size distribution was determined after dispersion of the ADFE fraction with water and dispersant (NaOH 1 mol L⁻¹). The gravel, pebble, and sand fractions were separated by wet sieving. The clay fraction was determined by the densimeter method and silt was calculated by subtraction (Claessen, 1997). Chemical analyses Chemical analyses included pH(H₂O), determined by potentiometry, and total organic carbon, determined by the (adapted) Walkley-Black method, by oxidation of the organic compounds and quantification by the colorimetric method, according to Tedesco et al. (1995). Contents of Ca²⁺ and Mg²⁺ were determined by plasma spectrometry after extraction with KCl 1.0 mol L⁻¹; K⁺ and Na⁺ were extracted with 1.0 mol L⁻¹ ammonium acetate and quantified by flame photometry; Al³⁺ was extracted with 1.0 mol L⁻¹ KCl solution and quantified by titration with 0.025 mol L⁻¹ NaOH; and potential acidity (H+Al) was determined after extraction with calcium acetate buffered at pH 7.0 and quantified by titration with NaOH (0.0606 mol L⁻¹). Based on the contents of these elements, the following parameters were calculated: sum of bases (S), effective CEC, CEC at pH 7.0, and base saturation (V). All these analyses were performed according to Claessen (1997). The SiO₂, Fe₂O₃, and Al₂O₃ contents were determined by plasma spectrometry after digestion by the sulfuric attack method, according to Claessen (1997). From these contents, the Ki index [Ki = (SiO₂ × 1.70)/Al₂O₃] was calculated. Mineralogical analyses The clay fraction of the sub-horizons of the main diagnostic horizons of each profile was analyzed using treatments specific for the identification of the component minerals. The preparation consisted of saturation of part of the samples with K (KCl) and with Mg (MgCl₂) solutions and removal of the excess salts; oriented clay slides were then prepared and air-dried. The dried slides of the K-treated samples were subsequently heated to 100, 350, and 550 °C in a muffle oven, with readings performed after each heat treatment. After drying, the Mg-treated slides were placed in an ethylene-glycol-saturated atmosphere at 65 °C. The K- and Mg-saturated, air-dried samples were analyzed in the angular range from 3 to 30 °2θ. The K-saturated and heat-treated samples, as well as the Mg-treated and ethylene-glycol-solvated samples, of toposequences 1 and 2 were analyzed in the angular range from 3 to 15 °2θ, and those of the last toposequence from 3 to 40 °2θ.
This procedure was adopted because, in the first two, the objective was to investigate only possible changes in the position of the reflections of the 2:1-layer minerals, which occur at the lower angles, while in the latter possible changes that could occur at larger angles were also of interest. For the identification of the mineral phases, a Philips X-ray diffractometer (XRD) was used, with a vertical goniometer and θ/2θ geometry, in step-scan mode (0.02 °2θ), using a Cu tube and Kα radiation. The results were interpreted based on the mineral-specific interplanar spacings, as proposed by Brown and Brindley (1980) and Whittig and Allardice (1986). In order to compare the kaolinite reflection pattern of the studied soils with that of more crystalline kaolinites, a fine clay fraction sample of a sandstone-derived Oxisol, with a small width at half height (WHH), was used as reference. For the analysis of this sample, a cobalt tube was used. The minerals in the clay fraction were semi-quantified based on the relative area of the main reflection of each mineral in relation to the total area of the minerals present, using the "fit profile" option of the software X'Pert HighScore Plus, version 2.2b (Panalytical B.V., Netherlands, 2006). Toposequence 1 Toposequence 1 consists of four profiles (Table 1): the first two (P1 and P2) are located on the higher-altitude plateau, on the interfluvial slope and upper hillside, respectively, and correspond to a Nitossolo Bruno Distroférrico and a Nitossolo Háplico Distroférrico (Santos et al., 2013), or Humic Hapludox by Soil Taxonomy (Soil Survey Staff, 2014). Both are dystrophic, with low CEC values at pH 7. The third profile [Cambissolo Háplico Ta Eutrófico - Typic Dystrudept (Soil Survey Staff, 2014)] is located at middle altitude, on the third-level pediment, with a declivity of around 20 %, and has high CEC values at pH 7, as well as a high sum of bases and base saturation in the B horizon (eutrophic), being the youngest profile of this toposequence. The presence of gravel and pebbles in the soil mass is high (around 35 %), which, together with the weak granular structure and steep slopes, restricts its use for annual crops. The fourth profile, classified as Nitossolo Vermelho Eutroférrico [Humic Rhodic Eutrudox (Soil Survey Staff, 2014)], was sampled in a middle-third slope position, at a lower altitude, also with a high sum of bases and CEC at pH 7. However, the excessive amount of stones, both in the soil mass and on the surface, hampers soil use and management. In this toposequence, not only are the soils situated at higher altitudes (between 800 and 575 m from P1 to P4), but the shape of the valleys is also more open than in toposequence II. The climate is colder, favoring a greater accumulation of organic matter, with more brown/yellowish-brown colors (hues 7.5YR and 5.0YR) prevailing in the first three profiles (Table 2). The mineralogical composition of the clay fraction was similar in most of the B horizon samples, with reflections at prevailing d values around 0.720 and 0.360 nm, usually attributed to kaolinite, followed by less intense reflections (d values around 1.400 nm) indicating the presence of clay minerals of the 2:1-layer type, in some cases with interlayered hydroxy-Al polymers (2:1 HE) in higher or lower proportions (Figure 2). There were also weak reflections indicating aluminum hydroxide (gibbsite) (d = 0.480 nm) and some related to the main reflection of the iron oxyhydroxide goethite (d = 0.410 nm).
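For readers less familiar with XRD conventions, the d-spacings quoted throughout convert to diffraction angles via Bragg's law, d = λ/(2 sin θ). A small sketch, assuming Cu Kα radiation with the usual wavelength of ≈0.15406 nm (the exact value is not stated in the text):

```python
import math

# Bragg's law: lambda = 2 d sin(theta)  ->  2theta = 2 asin(lambda / (2 d)).
LAMBDA_CU = 0.15406   # nm, assumed standard Cu K-alpha wavelength

def two_theta(d_nm: float) -> float:
    """Diffraction angle (degrees 2-theta) for a given d-spacing in nm."""
    return 2.0 * math.degrees(math.asin(LAMBDA_CU / (2.0 * d_nm)))

for d in (1.40, 0.72, 0.48, 0.41, 0.36):   # typical d values quoted in the text
    print(f"d = {d:.2f} nm  ->  {two_theta(d):5.2f} deg 2theta")
# 1.40 nm -> ~6.3 and 0.36 nm -> ~24.7: inside the 3-30 deg 2theta scan range
```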
In all samples, reflections with a mean d value around 0.720 nm showed asymmetry at lower 2θ angles and a width at half height (WHH) ranging from 0.890 to 1.080 °2θ (Table 4). This suggests a kaolinite pattern different from that of other Brazilian environments, such as described by Melo et al. (2002) in soils derived from sedimentary rocks of the Barreiras Group (ES), with WHH between 0.29 and 0.39 °2θ. This asymmetry pattern, along with high WHH values, was observed in all the soil profiles analyzed in this study, at varying degrees of intensity. These characteristics indicate the presence of interstratified minerals, possibly of the kaolinite-smectite (K-S) type, associated or not with kaolinites, according to Środoń (2006) and Ryan and Huertas (2009). However, the ethylene glycol impregnation and/or heat treatments generally used to identify their presence (Ryan and Huertas, 2009) induced no modification in the reflection patterns at this position, indicating that the interlayer space of the expandable portion of the interstratification may be blocked by hydroxy-Al polymers (Delvaux et al., 1990; Bühmann and Grubb, 1991). Thus, with the XRD tools alone, the type and proportion of these interstratified minerals in the soil samples could not be identified. In the B horizon of profile 1 (Nitossolo Bruno - Humic Hapludox), there was a small shift of the 1.40 nm reflection to higher d values after ethylene glycol solvation and a reduction in the intensity of the reflections relative to the Mg treatment. This indicates a slight expansion of some of the minerals, confirming the presence of an expansive mineral, probably smectite (Figure 2). The K-saturation and heat treatments up to 350 °C caused dilution of the reflections between 1.40 and 1.00 nm, indicating an irregular shrinkage of the layers, probably resulting from their differentiated occupation by hydroxy-Al polymers in the interlayers. Heating to 550 °C resulted in the definition of a broad reflection at a mean position around 1.050 nm, indicating smectites with interlayered hydroxy-Al polymers (SIHP). A similar conclusion was drawn by Ryan and Huertas (2009) for soils with smectites in Costa Rica. The semi-quantification of the minerals suggested that SIHP correspond to approximately 15 %, and the reflection at the position of the 001 plane of "kaolinite" to 85 %, of the clay minerals (determined from the relation between the areas of the minerals shown in Table 4). This large amount of low-charge minerals accounts for the low values of clay fraction activity in the B horizon (5.3-8.9 cmolc kg⁻¹) (Table 3). These values are therefore compatible with those generally attributed to kaolinite, suggesting that the presence of hydroxy-Al polymers, both in the SIHP and in the 2:1 layers of the interstratified K-S possibly present in association with kaolinite, leads to drastically lower CEC values in these minerals than in their pure counterparts, so that they contribute very little to an increase in soil CEC. Table notes: H+Al extracted with calcium acetate buffered at pH 7 and determined by titrimetry; CEC pH 7 = S + (H+Al); S = sum of bases; T = clay fraction activity, determined by the formula T = (CEC pH 7/clay) × 100; V = base saturation; nc = not calculated.
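Following the table-note definition above, clay activity T rescales the soil's CEC to a per-kilogram-of-clay basis. A small illustrative sketch (the input numbers are hypothetical, chosen only to show the arithmetic):

```python
def clay_activity(cec_ph7_cmolc_kg: float, clay_pct: float) -> float:
    """T = (CEC pH 7 / clay) x 100, i.e. CEC re-expressed per kg of clay."""
    return cec_ph7_cmolc_kg / clay_pct * 100.0

# Hypothetical example: CEC pH 7 = 16.5 cmolc/kg of soil, 50 % clay.
print(f"T = {clay_activity(16.5, 50.0):.1f} cmolc/kg clay")   # T = 33.0
```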
In the B horizons of profiles 2 (Nitossolo Háplico - Humic Hapludox) and 3 (Cambissolo Háplico - Typic Dystrudept), the diffractogram pattern is similar to that of profile 1, differing only in the lower expression of the main gibbsite reflection and in the lower intensity of the reflections around 1.400 nm in P3 (Figure 2). The complete dilution of the reflections between 1.00 and 1.40 nm in the K-treated samples heated to 350 °C in P2, combined with the incomplete shrinkage of the layers on heating to 550 °C, in both P2 and P3, indicates that the expandable 2:1-layer phyllosilicates have interlayered hydroxy-Al polymers (Barnhisel and Bertsch, 1989); they are probably SIHP, as interpreted based on the studies of Ryan and Huertas (2009). However, the amount of Al polymers in the P3 sample appears to be smaller, since the reflection is better defined and has a mean position closer to 1.005 nm. In the B horizon of profile 4 (Nitossolo Vermelho - Humic Rhodic Eutrudox), intense reflections corresponding to diffraction in the 001 and 002 planes of kaolinite (0.723 and 0.355 nm), together with the absence of reflections in the interval between 1.00 and 1.40 nm in the Mg, glycolation, K, and heat (25 to 350 °C) treatments, apparently indicate that kaolinite is the only clay mineral present. However, heating of the potassium-saturated sample to 550 °C, apart from causing the disappearance of the kaolinite reflection (at 0.723 nm), generated a broad and asymmetric reflection with an average position around 1.005 nm, indicating that 2:1 expandable minerals with hydroxy-Al polymers are also present. An explanation for the lack of definition of the reflections around 1.40 nm in the Mg-saturated sample read at room temperature is that the filling of the interlayer spaces of these phyllosilicates with polymers is most likely irregular, so that they behave as an interstratified mineral of the mica-smectite or mica-chlorite type. On the other hand, in almost all samples of these profiles the reflections of the assumed kaolinite are broad and strongly asymmetrical; this pattern may indicate the presence of interstratified minerals, possibly of the kaolinite-smectite type, in association with kaolinite. The presence of interstratified kaolinite-smectite in basalt-derived Nitossolos and Latossolos in southern Brazil was recently confirmed based on the analysis of crystallographic parameters and identification techniques for interstratified minerals using the software Newmod (Testoni et al., 2017). The mineralogical composition of the samples of the profiles of this toposequence is therefore scarcely compatible with the calculated clay fraction activity values (Table 3). This may be due to the following main reasons: the profiles with the highest clay activity in the B horizon are, in descending order, P3 (33 cmolc kg⁻¹), P4 (16.4 cmolc kg⁻¹), P2 (14.3 cmolc kg⁻¹), and P1 (5.28 cmolc kg⁻¹). In P3, the presence of 2:1-layer clay minerals was not clearly evident, except after the heat treatment at 550 °C; in P1, with the lowest activity, the amount of these minerals was highest, although they are 2:1 minerals with a high amount of hydroxy-Al, which may drastically reduce the CEC in relation to their pure counterparts. Thus, the hypothesis is plausible that the CEC of most of these soils is, in some way, due to the participation of 1:1-2:1 interstratified minerals in association with kaolinite.
Toposequence 2 Toposequence 2 is located at altitudes slightly lower than toposequence 1 (Table 1), with higher temperatures and higher evapotranspiration. The valley in which the profiles were collected is more closed, forming narrow ledges, which favors intense colluvial deposition, described mainly in profiles 6 and 7 (Chernossolo and Argissolo, respectively; Typic Argiudoll). All profiles have a reddish-brown color, reflecting the comparatively warmer climate relative to toposequence 1 (Table 2). The values of sum of bases, base saturation, clay fraction activity, and pH were higher than in toposequence 1; in addition, exchangeable Al was absent or present in low quantity, resulting in chemically more fertile soils, even in profile 5 (Nitossolo Vermelho - Humic Rhodic Eutrudox), located in the interfluve position (Tables 2 and 3). The soil corresponding to profile 8 (Neossolo Litólico - Lithic Udorthent), located at a lower altitude than the other soil profiles of this toposequence and in the lower third of a steep slope (declivity of 35 %), although receiving a contribution of colluvial material, is very prone to losses by water erosion, forming a constantly renewed soil. In this sense, the limiting factors for intensive exploitation of the soils of this toposequence are physical, consisting of the presence of rocks and the steep slopes and, in the case of profile 8, the small thickness of the soil profile. The main mineralogical differences between the soils of this toposequence and the first are expressed in the type and quantity of clay minerals present in the clay fraction. The predominant mineral appears to be kaolinite (most intense reflections with d values around 0.72 and 0.36 nm). However, these reflections are very wide and asymmetrical, resulting in high WHH values (1.07 to 1.40 °2θ) (Figure 3), indicating the contribution of interstratified minerals of the kaolinite-smectite type. In addition, there is only a small expression of reflections at d values around 1.00 and 1.40 nm, particularly in samples of the horizons of profiles 5, 6, and 7 (Figure 3), apparently indicating the near absence of expandable or non-expandable 2:1-layer minerals. In the P8 samples, the presence of 2:1 minerals is more evident. In profile 5 (horizon A), the reflections with the largest area and expression occur at d values around 0.72 and 0.36 nm (Figure 3), but they are broad and asymmetrical, possibly indicating kaolinite in association with interstratified kaolinite-smectite, as evidenced by several authors analyzing younger basalt-derived soils (Bühmann and Grubb, 1991; Vingiani et al., 2004; Teske et al., 2013). The presence of goethite is indicated by the reflection at 0.417 nm. A small background rise in the region around 1.405 nm, coupled with the dilution of this reflection towards lower 2θ angles after glycolation, indicates a low smectite content in the sample. The confirmation of the expansive character of this mineral is clearest in the K-saturated samples heated to higher temperatures, where a reflection around 1.0 nm is very evident. However, after heating to 550 °C, along with the disappearance of the kaolinite reflections, the formation of a plateau on the left portion of the 1.00 nm reflection was observed, attributed to the contribution of interstratified kaolinite-smectite (Wilson and Cradwick, 1972).
The sum of the kaolinite and interstratified kaolinite-smectite, plus the small relative amount of 2:1 minerals observed help to explain why this soil has a clay fraction activity exceeding 20 cmol c kg -1 , but not higher than 27 cmol c kg -1 (Table 3). If the reflections at 0.72 nm were interpreted as due only to kaolinite, the small portion of identified 2:1 minerals would not be sufficient to explain why the clay fraction activity is so high, for being much higher than the reference standard for most kaolinites (Singh and Gilkes, 1992). The mineralogical pattern of profiles 6 and 7 was similar ( Figure 3). There is greater asymmetry of the reflection of 0.720 nm in relation to P5, both of the K and Mg treatments. In the Mg-saturated samples, although the reflections of 2:1-layer minerals are not clearly expressed in the diffractograms, there is a slight elevation of the background in the region between 1.30 and 1.40 nm, indicating their presence in small quantities. In the P6 sample, this elevation dilutes towards the lower 2θ angles after glycolation and in P7, a weak reflection is formed at d ≈1.761 nm, confirming the expansive character of these minerals and the presence of smectites. In K-saturated samples heated to 350 °C, symmetrical reflection occurs around 1.00 nm, indicating minerals with little or no Al in the interlayers. When heated to 550 °C, aside from the disappearance of the kaolinite reflection, a reflection at around 1.00 nm was observed, resulting from polymer dehydroxylation, but maintaining a "plateau" or shoulder toward the lower 2θ angles, indicating non-regular interstratified K-S (kaolinite-smectite) (Wilson and Cradwick, 1972). In the first three profiles of toposequence 2, the mineralogical composition of the clay fraction is therefore similar, with predominance of kaolinite, in association with interstratified K-S. Although smectite could be present, its semi-quantification was not possible, considering the absence of definition of reflections of this mineral (d ≈ 1.40 nm) in the Mg samples. In the B horizon of the Chernossolo profile (P6 -Typic Argiudoll), clay activity was high (2Bt2 >40 cmol c kg -1 ), while in the subhorizons B of profiles 5 and 7, these values were less than 30 cmol c kg -1 , but still high (Table 3). Therefore, these relatively high CEC values are not compatible with soils with no or very low amounts of 2:1-layer minerals, but are consistent when taking the presence of interstratified K-S in association with kaolinites into account. In the diffractograms of profile 8 (Neossolo Litólico -Lithic Udorthents) ( Figure 3) the broadest and most asymmetric reflections of all studied soils were observed in the kaolinite position (Table 4), indicating a more significant participation of interstratified kaolinite-smectite. In the Mg sample, a weak reflection at d ≈ 0.991 nm was also observed, indicating micas or illites, as well as another more intense reflection at d values between 1.30 and 1.40 nm, increasing to d values ≈ 1.725 nm in the treatment with ethylene glycol, confirming the expansive character of this mineral. When smectite peaks do not follow a rational series (absence of reflection 003, at d =5.00 nm, for example), and when the glycolation treatment promotes shifting of the reflections to values above 1.70 nm, as observed, this indicates the presence of interstratified K-S (Bühmann and Grubb, 1991;Righi et al., 1999). 
This higher content of smectite and interstratified kaolinite-smectite explains, therefore, the high CEC and clay activity values of this soil (Table 3). Toposequence 3 The soils of toposequence 3 are located at lower altitudes, developing under warmer microclimate conditions and with lower water surplus than those of the previous toposequences (Table 1). The valleys in this region are more open, with medium-sized slopes, but with steep declivity (between 10 and 30 %). In this toposequence, the sequence of descriptions was initiated with the profile of the valley bottom (P9 -Cambissolo Háplico -Dystric Eutrudept), followed by two protruding profiles of the footslope and the middle third of the slope (P10 -Chernossolo Argilúvico and P11 -Cambissolo Háplico -Typic Argiudoll and Dystric Eutrudept, respectively) and finally the interfluve profile (P12 -Nitossolo Vermelho -Humic Eutrudox). All toposequence soils had high values of pH, sum and base saturation, being eutrophic, but with physical restrictions to intensive agricultural use, due to the accentuated slopes, as well as the large volume of stones in the soil mass, particularly in profiles 10 and 11. The mineralogical composition of the soils of profiles 9 and 11 (Cambissolos Háplicos Eutroférricos -Dystric Eutrudepts) was similar, with more intense reflections at d values around 0.726 and 0.356 nm. However, the reflections are broad and asymmetric, indicating a probable participation of interstratified kaolinite-smectite in association with kaolinite ( Figure 4). Reflections at 0.416 nm were also observed, indicating goethite. In the samples treated with Mg, a reflection occurred around 1.5 nm in profile 9 and the background elevation formed a band near this value in profile 11, shifting to d values around 1.70 nm after glycolation, suggesting smectites. However, a high level was maintained as of this value, a feature which, as already mentioned, indicates the participation of interstratified K-S. Heat treatments at the highest temperatures shifted the reflections from d ≈ 1.50 to 1.00 nm, confirming the expansive property and absence or low number of hydroxyl-Al polymers in the interlayers of the smectite mineral, although their content is very low in these soils. As clay activity was high in both soils (T >40 cmol c kg -1 ) ( Table 3) and since a very low quantity of smectite was identified, the high CEC of these soils is possibly due to the expressive participation of interstratified K-S. In the B horizon of the soil of profile 10 (Chernossolo Argilúvico -Typic Argiudoll), the most intense reflection occurred at d ≈ 1.583 nm in the Mg sample, which shifted to 1.779 nm at the point of greatest sharpness, maintaining a "shoulder" in the direction of the smaller angles. This pattern indicates smectites, the dominant mineral in the clay fraction of this soil, with a probable contribution of interstratified K-S. The presence of this latter mineral is confirmed by the wide and asymmetric reflection at d ≈ 0.719 nm, probably in association with the small amount of kaolinite. The CEC was highest in this soil, resulting in a very high clay activity, compatible, therefore, with the observed mineralogy. The most intense reflections in the Mg-saturated sample of profile 12 (Nitossolo Vermelho -Humic Eutrudox) occurred at d ≈ 0.722 and 0.358 nm, followed by a less intense reflection around 1.376 nm (Figure 4) that may indicate, respectively, kaolinite and 2:1-layer minerals. 
There was no change in the position of the 1.40 nm reflections of the Mg-treated compared to the ethylene glycol-solvated samples, so they may indicate either smectites or vermiculites with interlayered hydroxy-Al polymers. After the gradual heating of the K-saturated samples, a reflection formation around 1.356 nm at 350 °C was observed and a great dilution of the reflections between 1.00 and 1.40 nm, forming a plateau with an elevated background, which refer to SIHP, and higherorder peaks of non-regular interstratified K-S-type minerals, as previously discussed. The reflections at d ≈ 0.722 nm have a strong asymmetry at lower 2θ angles, albeit to a lesser degree than in the previous profiles. Since there was no change in the position of the reflections in this region due to heating or glycolation treatments, this behavior indicates that the interstratified K-S have hydroxyl-Al polymers in the interlayer spaces, as described by Bühmann and Grubb (1991). Considering these results, the presence of SIHP, together with K-S interstratified with hydroxyl-Al polymers, although occurring in significant quantities, probably contribute little to the increase of CEC in this soil (clay fraction activity in the B2 horizon 14.04 cmol c kg -1 ), i.e., compatible with the observed mineralogy. This Nitossolo Vermelho (Humic Eutrudox) is located in a top elevation position, where vertical water flows are more intense, which probably favors a higher soil weathering degree, and therefore a clay mineralogy differentiated from the other soils. The greater evapotranspiration of the region, conditioning a water balance with a lower volume of surplus water, however, seems to have been favorable for the preservation of a sufficient quantity of bases to maintain the eutrophic character of the soil. Supposedly, the strong asymmetry in the kaolinite reflections, discussed extensively above and shown in detail in the two soil profiles ( Figure 5), indicates the presence of interstratified kaolinite-smectite. Comparing the different diffractograms, particularly the region of kaolinite reflections (between 10 and 14 °2θ), great variation in the width at half height (WHH) of the kaolinites of the different soils was observed. For the kaolinites with higher crystallinity of sandstone-derived Latossolo Vermelho (Figure 5a) used for comparison, the reflection in plane 001 is symmetric, (WHH = 0.38 °2θ). However, for the case of the two studied soils, less or more accentuated asymmetries were observed (Figure 5b and 5c, respectively), with far higher WHH values when calculated from the "medium" reflection shape. In the samples with more asymmetrical reflections, the WHH values were 0.96 and 1.24 °2θ, respectively, for Nitossolo Vermelho and Neossolo Litólico. These values are much higher than those generally cited for kaolinites in the literature and in comparison to the sample used as a reference ( Figure 5). When the three diffractograms are inserted simultaneously for comparison (Figure 5d), the differences in the reflection pattern of the supposed kaolinite appear with greater clarity. This pattern, although already mentioned as indicating the presence of interstratified K-S in many environments, in particular in basalt-derived soils (Bühmann and Grubb, 1991;Vingiani et al., 2004), is often ignored during interpretation, and will require more attention in the future. 
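The reflection positions quoted throughout this discussion alternate between d values (nm) and diffraction angles (°2θ); the two are related through Bragg's law. A minimal sketch of the conversion is given below, assuming Cu Kα radiation (λ ≈ 0.15418 nm), which is the usual source for clay XRD but is not stated explicitly in this text; the numerical values are therefore illustrative only.

```python
import math

CU_KALPHA_NM = 0.15418  # assumed wavelength; the study does not state the X-ray source

def d_from_two_theta(two_theta_deg, wavelength_nm=CU_KALPHA_NM):
    """Bragg's law (n = 1): d = lambda / (2 sin(theta))."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta_rad))

def two_theta_from_d(d_nm, wavelength_nm=CU_KALPHA_NM):
    """Inverse relation: 2theta = 2 arcsin(lambda / (2 d))."""
    return 2.0 * math.degrees(math.asin(wavelength_nm / (2.0 * d_nm)))

# The kaolinite 001 region discussed above (10-14 deg 2theta) corresponds to d of roughly
# 0.63-0.88 nm, bracketing the 0.72 nm reflection; smectite 001 positions near 1.4-1.7 nm
# fall at much lower angles.
for two_theta in (10.0, 12.3, 14.0):
    print(f"2theta = {two_theta:5.1f} deg  ->  d = {d_from_two_theta(two_theta):.3f} nm")
print(f"d = 0.72 nm -> 2theta = {two_theta_from_d(0.72):.1f} deg")
```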
The evolution degree of the soils of the third toposequence is the lowest of the three, possibly due to the lower water surplus in this region, favored by the higher temperatures in the far west of Santa Catarina, increasing potential evapotranspiration. The active elements of the climate, notably precipitation, evapotranspiration, and temperature, which influence the leaching flows, have a marked influence on the mineralogical composition and consequently on the chemical fertility of the soils. These conditions are responsible for the formation and persistence of larger amounts of smectite and interstratified K-S, as well as greater preservation of bases in the soils of this toposequence. The precipitation -evapotranspiration balance, establishing a water deficit or excess (surplus water), is fundamental to the understanding of the current or past processes of soil differentiation (Kämpf and Curi, 2012). In this sense, in spite of a certain homogeneity of rainfall on the basaltic hillsides, the mean temperature variations that affect evapotranspiration can promote a higher or lower water surplus in the soil, thus facilitating or limiting leaching flows, influencing the mineralogical composition and chemical reserve of soils. This may explain the significant differences observed between the mineralogical composition and soil fertility of toposequences 1 and 2, both situated in the Peixe River valley, within a distance of a little over 45 m in a straight line. Fifty percent of the soils of the former are dystrophic (P1 and P2), while the latter are all eutrophic, with higher values of sum (S) and base saturation (V) ( Table 3). The soils of the second toposequence are situated closer to the Uruguay river channel, where average temperatures are higher, thus favoring a lower amount of surplus water in the soils. The relief, on the other hand, is also a conditioning factor of the water flows. Long, soft slopes generally favor internal vertical water flows; on the other hand, steep slopes stimulate horizontal flows, mainly those on the surface, favoring erosive processes that lead to the formation of shallower soils, such as the Neossolo Litólico (Lithic Udorthents) of toposequence 2 and Cambissolos (Inceptisols) of the other toposequences. CONCLUSIONS In all studied soils of the three toposequences, the mineralogical composition in the clay fraction was similar, with predominance of minerals with main reflections in the kaolinite position, followed by varying proportions of smectites with or without interlayered hydroxy-Al polymers, goethite and/or hematite and very little or no gibbsite. The reflections in the kaolinite position are broad and asymmetrical in most samples, indicating that the dominant minerals are composed of a mixture of kaolinite-smectite (K-S) and kaolinite. The soils situated in toposequences 2 (Ipira) and 3 (Descanso) are more fertile and have a higher clay fraction activity than those of toposequence 1 (Luzerna), incompatible with the small amounts of 2:1-layer phyllosilicates identified by XRD and have been interpreted as being due to the contribution of interstratified K-S in association with kaolinites. The clay fraction mineralogy and the chemical fertility of the soils was shown to be related to the climatic variations between the sites of the different toposequences, where the smaller water surpluses in toposequences 2 and 3 were less favorable for leaching and weathering.
8,204
2018-02-01T00:00:00.000
[ "Geology", "Agricultural And Food Sciences" ]
Beyond the Hubble Sequence - Exploring galaxy morphology with unsupervised machine learning Conventionally, galaxy morphological classifications are defined by visual assessment. However, visual classification systems such as Hubble types can be intrinsically biased due to the subjective judgement of human classifiers. Additionally, since morphological "classifications" into types is an important and complementary process, it is not clear if we know what these "best types" are, such as whether a classification scheme results in relatively unique physical properties of the galaxies or traces the merger history in each class. As most machine learning applications in astronomical studies focus on the improvement of accuracy and efficiency, we instead apply machine learning to gain new insight into galaxy morphological classification. In particular, the machine decides a classification system to describe the variation shown in galaxy morphology in the dataset. We explore galaxy morphology from SDSS imaging data with an unsupervised machine learning technique composed of a feature extractor using a vector-quantised variational autoencoder and hierarchical clustering. Our methodology results in 27 machine-defined classes which are physically distinctive from each other in stellar mass, absolute magnitude, physical size, and colour. When we merge these clusters into 2 clusters for binary classification, the unsupervised method provides an accuracy of 87% for separating early-type (ETG) and late-type galaxies (LTG) using a dataset with the morphology distribution of nearby galaxies (i.e., the imbalanced dataset). INTRODUCTION Galaxy structure and visual morphology have a strong connection with their stellar population properties, such as surface brightness, colour, and the formation history of galaxies (Holmberg 1958;Dressler 1980). The dominant visual morphological classification system in use today was first constructed by Hubble (1926), which was then revised by adding a class for lenticulars (S0), a type of galaxy that has a disk structure without apparent spiral arms (Hubble 1936;Sandage 1961). Since then, a number of more detailed classification systems were proposed, such as ones including the notation for inner and outer ring structures (de Vaucouleurs 1959) and different arm classes (Elmegreen & Elmegreen 1982, 1987), among others. However, visual classification systems can be intrinsically biased due to the subjective judgement of different human classifiers. These human errors are unavoidable and sometimes cannot be reproduced for carrying out a statistical analysis. This greatly limits the ability to use galaxy classification in a formal quantitative way. These issues led astronomers to search for a quantitative description of galaxy structure based on the shape, structure, and physical properties of galaxies which can in principle be connected with visual morphology. For example, Principal Component Analysis (PCA) was applied to determine the number of dominant features needed to reproduce the variance shown in observations in Whitmore (1984) as well as to provide an objective procedure for analysing galaxy properties (also see Conselice 2006). Other studies such as non-parametric methods, e.g., concentration, asymmetry, smoothness/clumpiness, and Gini coefficient (Bershady et al. 2000;Abraham et al. 2003;Conselice 2003;Lotz et al. 2004;Law et al. 
2007), and parametric methods, e.g., Sérsic profile (Sérsic 1963(Sérsic , 1968 for measuring galaxy structure were also proposed to provide a more objective and quantitative classification systems than visual assessment alone. Even though quantitative measures of galaxy structure are extremely useful for measuring properties such as the merger history (e.g., Conselice 2003), morphological 'classifications' into types is still an important and complementary process. However, it is not clear if indeed we know what these best 'types' are. Thus, in this study we build a galaxy morphological classification system that does not involve human bias; we do this through a machine learning approach. For this purpose, we use unsupervised machine learning which is trained without any prior knowledge (e.g., galaxy labels, such as Hubble types). This approach is able to give us suggestive classifications from the machine's perspective based upon input features. However, with an unsupervised machine learning technique it becomes more challenging to have a 'sensible' classification, that is one with more consistency with human opinion, when the dimensionality of a feature space becomes high (curse of dimensionality , Bellman 1954;Keogh & Mueen 2017). In astronomical studies, unsupervised machine learning applications have been mostly used in the studies of spectroscopic data which is less dimensional than applying to imaging data (e.g., Geach 2012; Krone-Martins & Moitinho 2014;Carrasco Kind & Brunner 2014;Siudek et al. 2018). Therefore, unsupervised learning for galaxy classification is still in its infancy. There are currently several types of astronomical studies that apply unsupervised machine learning techniques to images which reach reasonable results, including: galaxy morphology (Hocking et al. 2018;Martin et al. 2019), strong lensing identification (Cheng et al. 2020), and anomaly detection (Xiong et al. 2018;Margalef-Bentabol et al. 2020). For example, Hocking et al. (2018) and Martin et al. (2019) apply a technique called Growing Neural Gas algorithm (Fritzke 1994), which is a type of Self-organising Maps (SOMs, Kohonen 1997), to extract features from images. These features are then connected with a hierarchical clustering algorithm (Hastie et al. 2009). On the other hand, Cheng et al. (2020) use a fundamentally different approach by using a convolutional autoencoder (Masci et al. 2011), which includes an architecture of convolutional neural networks, for feature extraction. This method connects the ex-tracted features with a Bayesian Gaussian mixture model from which a clustering analysis can be done. In this study, we apply an architecture consisting of a convolutional autoencoder, as convolutional neural networks have demonstrated their capability for capturing representative and meaningful features from images (Krizhevsky et al. 2012). We do not use the same convolutional autoencoder as Cheng et al. (2020), but we apply a newly developed technique from Google DeepMind (van den Oord et al. 2017;Razavi et al. 2019) called 'Vector-Quantised Variational Autoencoder (VQ-VAE)'. This technique includes a vector quantisation method that accelerates the time-consuming process of feature extraction when using a convolutional autoencoder, as explained in Cheng et al. (2020). 
On the other hand, for clustering algorithms, we decide to apply a modified hierarchical clustering method to group the data in order to explore connections between the distances amongst extracted features in feature space, and the number of classification clusters. In this paper, we use this unsupervised machine learning technique to develop a galaxy morphology classification system defined by a machine, and compare it with traditional visual classification system such as the Hubble sequence. We furthermore also compare our machine developed classification with galaxy physical properties, such as stellar mass, colour, and physical size of galaxies. We use monochromatic images throughout to focus only on the impact of galaxy shape and structure on morphological classifications in this paper. The methodology we develop is introduced in Section 2, while the detailed description of how to approach using our method and the data used in this study are shown in Section 3. Section 4 presents the results in this study. Finally, we conclude the work in Section 5. METHODOLOGY In this section we explain our unsupervised machine learning methodology that is used throughout this paper. We give a brief overview here, before going into detail in the following subsections. Our unsupervised machine learning technique includes a feature learning phase with a vector-quantised variational autoencoder (VQ-VAE; Section 2.1 and Section 2.2) and a clustering phase using a hierarchical clustering algorithm (HC; Section 2.3). Several novel approaches for unsupervised machine learning applications are made in this paper: (1) the VQ-VAE considers both reconstruction and preliminary clustering results in the feature learning phase (Section 2.2 and also see Section 3.3); (2) multiple different distance thresholds are used to draw the decision lines on the merger tree in the clustering process (see details in Section 2.3). Vector-Quantised Variational Autoencoder (VQ-VAE) The vector-quantised variational autoencoder (VQ-VAE) was built by Google DeepMind (van den Oord et al. 2017;Razavi et al. 2019) and was originally used for high-fidelity image emulation. The task of image emulation is to learn the distribution of the data given a set of training images, and then to reproduce the images with the learnt distribution. In details, the structure of an autoencoder ( Fig. 1) contains an encoder with a posterior distribution q (z|x) and a prior distribution p (z) where x is the input data and z represents latent variable, and a decoder with a distribution p (x|z) for reproducing the input data. The VQ-VAE is a type of autoencoder which includes the structure of convolutional neural networks and applies a vector quantisation process (van den Oord et al. 2017) to make the posterior and prior distribution become categorical. By using a categorical distribution, the computational time for training an autoencoder is significantly reduced compared to other machine learning methods. For example, in Cheng et al. (2020), it takes up to 5 days to train 100,000 images by a convolutional autoencoder running on a NVIDIA GeForce GTX 1080 Ti GPU, while a VQ-VAE takes up to a few hours to train the same amount of data with the same device. This is an enormous difference and shows the power of the VQ-VAE method. Following the top coloured area in Fig. 1, the posterior categorical distribution q (z|x) is defined as (van den Oord et al. 2017;Razavi et al. 
2019): where ze (x) is the output of the encoder (the blue part at the left in the figure), the value ej represents a vector in the codebook which is used for vector-quantising the ze (x), and k is the index for the vector used in the selected codebook (the top box of the yellow part in the figure). We then measure the vector-quantised representation zq (x), which is the input of the decoder (the blue shading at the right side in the figure), through Equations 1 and 2. The vector quantisation process is shown as the yellow part in Fig. 1. The output of an encoder, ze (x) can be represented by a combination of the index of different vectors, k, in the codebook (the square in the middle of the yellow part). For example, in Fig. 1, a three dimensional 'pixel' in the output of an encoder is represented by a vector, e3, after the vector quantisation. We then use the index of these vectors to build a two dimensional index map. For the pixel used in our example the value is 3. With this index map, we can rebuild the distribution, zq (x), with the same dimension as ze (x) but in this case each 'pixel' in zq (x) is quantised to one of the vectors shown in the codebook. For our example, the vector e3 is used for the pixel. The distribution of zq (x) is then used as the input for the decoder to reconstruct the images. The loss function of the original VQ-VAE contains three parts: reconstructed loss, codebook loss, and commitment loss. An additional penalty is considered later in the modified version of the VQ-VAE (see Section 2.2). The reconstructed loss is measured by comparing the reconstructed images with the input images. The codebook loss is used to make the selected codebook, ej, approach the output of the encoder, ze (x), while the commitment loss is applied to encourage the ze (x) to be as close as possible to the chosen codebook from the previous epoch. With these definitions, the loss function, L, for the VQ-VAE is defined as (Razavi et al. 2019): where the value sg is the stopgradient operator and β is used for adjusting the weight of the commitment loss. The study of van den Oord et al. (2017) found that these results correlate with the value of β, and no apparent change occurs when β ranges from 0.1 to 2.0. Therefore, we set β = 0.25 in this study which follows the setting in van den Oord et al. (2017). The details of the VQ-VAE architecture is shown in Table 1. Four convolutional layers are used in both the encoder and decoder, and residual neural networks (ResNets, He et al. 2016) are used in this architecture to create a deeper neural network with less complexity. The activation function applied in the convolutional layers is the Rectified Linear Unit (ReLu) (Nair & Hinton 2010) The VQ-VAE code is based upon the example provided in sonnet library (DeepMind 2018) 1 which is built on top of TensorFlow (Abadi et al. 2015) 2 . To train the VQ-VAE, we apply the Adam Optimiser (Kingma & Ba 2014) and the learning rate is set to 0.0003 which is used in Razavi et al. (2019). Modified VQ-VAE In this study, we apply a modification to our original VQ-VAE to consider both image reconstruction and a preliminary clustering result when extracting the representative features from images ( Fig. 1). To achieve this goal, a penalty defined by silhouette score (Rousseeuw 1987, Equation 4) is added (Equation 5). 
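To make the quantisation step just described concrete, the sketch below shows how an encoder output is mapped to the nearest codebook vector and summarised as an index map, in the spirit of Equations 1 and 2. It is a minimal NumPy illustration, not the sonnet/TensorFlow implementation used by the authors; the array shapes are arbitrary examples, and only the codebook size of 16 follows the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoder output z_e(x): an 8x8 grid of D-dimensional 'pixels' (shapes are illustrative only).
H, W, D, K = 8, 8, 16, 16          # K = number of codebook vectors (the paper adopts 16)
z_e = rng.normal(size=(H, W, D))
codebook = rng.normal(size=(K, D))  # e_1 ... e_K

# Vector quantisation: each 'pixel' is assigned the index k of its nearest codebook vector
# (k = argmin_j ||z_e - e_j||), giving a 2D index map.
dists = np.linalg.norm(z_e[:, :, None, :] - codebook[None, None, :, :], axis=-1)
index_map = dists.argmin(axis=-1)   # shape (H, W), values in 0..K-1

# The quantised representation z_q(x) simply looks the vectors back up from the codebook;
# this is what the decoder receives for reconstruction.
z_q = codebook[index_map]           # shape (H, W, D)

print(index_map)
print("quantisation error:", float(np.mean((z_e - z_q) ** 2)))
```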
The silhouette score indicates how well clusters are separated from each other and is defined as s = (b − a)/max(a, b) (Equation 4), where a represents the mean intra-cluster distance while b is the distance between a cluster and its nearest neighbouring cluster. Therefore, a larger silhouette score indicates a better separation between clusters in feature space. To train our VQ-VAE, we minimise a final loss function combining the loss described in Equation 3 with a penalty term (Equation 5) that decreases as the silhouette score s increases, where λ is a constant used for making the magnitude of this penalty of the same order as the other losses used in the VQ-VAE (Section 2.1). The value of λ is equal to 0.1 in our case. As shown in Fig. 1, during the training of the VQ-VAE, we incorporate an instance-based clustering algorithm called 'k-medoid clustering' (Maranzana 1963;Park & Jun 2009) to obtain two preliminary classification clusters using a flattened index map. The two clusters are then used for measuring a silhouette score to evaluate the performance of the clustering. The Hamming distance (Hamming 1950) is used as the distance metric, as our data are represented by the indices of the vectors in the codebook, whereby the number itself only represents a category rather than a real value of the vector (more description in Section 2.3). Figure 1. A schematic architecture of the modified VQ-VAE used for feature extraction of images. The top part with a coloured background is the main architecture of the VQ-VAE, which is then modified to consider the silhouette score, calculated using the two preliminary clusters given by k-medoids clustering, as a part of the loss function when training the VQ-VAE (see details in Section 2.2). The blue shading at the left and right represents the encoder and the decoder, respectively, while the yellow part shows the vector quantisation process. The details of each layer (type, number of channels, kernel size, stride size, and activation function) are shown in Table 1. The 'k-medoid clustering' is used here for a fast evaluation; in the main clustering process after feature extraction, we apply hierarchical clustering algorithms (Section 2.3). Uneven Iterative Hierarchical Clustering In this section we describe our hierarchical clustering procedure for identifying different types of clusters. Hierarchical Clustering (HC; Johnson 1967; Hastie et al. 2009), in particular agglomerative HC (sometimes called 'bottom-up'), first assigns each input as an individual group, then recursively merges the two nearest (most similar) groups together based upon the measured pair distance in the feature space. The 'bottom-up' HC structure allows a different number of datapoints in each cluster because it starts with individuals (Fig. 2). Other kinds of clustering, such as 'top-down' HC and the k-medoid clustering used in Section 2.2, start with clusters themselves, which makes it more difficult to provide a starting point with an uneven number of datapoints in the initial clusters. The distance (similarity) measured in this study is the Hamming distance (Hamming 1950). As stated in Section 2.2, our data are represented by the indices of the vectors selected from the codebook. This is such that an index indicates a category rather than the real value of a vector. We compare two data sets represented by a set of features labelled with indices. The Hamming distance is defined as the number of mismatched indices between the pair over the number of features used to represent the data. 
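A minimal sketch of the two ingredients just described, the Hamming distance between flattened index maps and a silhouette-based penalty, is given below. The k-medoids step and the exact form of the paper's Equation 5 are not reproduced in this text, so the λ(1 − s) form used here is only one plausible choice consistent with the description above and should be treated as an assumption.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def hamming_distance(a, b):
    """Fraction of mismatched codebook indices between two flattened index maps."""
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    return np.mean(a != b)

def silhouette_penalty(index_maps, labels, lam=0.1):
    """Penalty that shrinks as the two preliminary clusters become better separated.

    index_maps: (N, F) array of codebook indices (one flattened index map per image).
    labels:     (N,) preliminary cluster labels, e.g. from k-medoids.
    lam:        weight making the penalty comparable to the other VQ-VAE losses (0.1 in the paper).
    """
    n = len(index_maps)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = hamming_distance(index_maps[i], index_maps[j])
    s = silhouette_score(dist, labels, metric="precomputed")  # s in [-1, 1]
    return lam * (1.0 - s)  # assumed form: larger silhouette -> smaller penalty

# Example: two images with identical index maps and one completely different map.
maps = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [4, 3, 2, 1]])
print(hamming_distance(maps[0], maps[2]))            # 1.0, as in the worked example below
print(silhouette_penalty(maps, labels=[0, 0, 1]))
```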
For example, assuming that an image can be represented by four different features labelled with the indices 1, 2, 3, 4 after the VQ-VAE: in this case the Hamming distance is 0 if the other image is represented as 1, 2, 3, 4 as well, and the Hamming distance is 1 if it is represented by 4, 3, 2, 1. For further clarification, Fig. 2 illustrates the clustering process. Within this study, we realise that when all the data are considered, the merging point can be less accurate due to the mixture of blindly measured distances from a great variety of extracted features in images. Therefore, we carry out an iterative clustering process with a reverse concept, in which we control the data used for the HC from top to bottom. We first make the HC merge all data into two top parent branches, then apply a second round of HC to the data of a parent branch to obtain two child branches, and apply the same procedure again to the sub-data of a child branch to get two grandchild branches, and so on. The iterative action stops when it reaches a certain condition (the black circle in Fig. 2; see Section 3.4). In a typical HC, a uniform distance is used to determine the final clusters. However, a uniform distance threshold is not appropriate considering that galaxies of different morphological types have different levels of complexity in appearance, such that spiral galaxies have a larger diversity in appearance than elliptical galaxies. Therefore, in this study, we propose to allow a different stopping point/distance threshold for each branch depending on the complexity of the objects in the branch (see Section 3.4). For example, a branch which consists of galaxies which can look very different within a class may continue for many iterations, while others may reach the stopping criteria with fewer iterations due to a relatively monotonous structure within the data of the branch. For example, spiral galaxies can have a variety of spiral arm appearances, i.e., different numbers of arms, different positions of arms, etc. Therefore, the distances between spiral-like galaxies are generally larger than the distance between two elliptical-like galaxies. This consideration is sensible and is of great importance in the morphological classification of galaxies; however, it is neglected in a typical HC algorithm. Therefore, to distinguish it from a typical HC algorithm, we call this setup 'uneven clustering', which provides us with a more precise distinction in galaxy shape, structure, and morphology. IMPLEMENTATION The pipeline of this study includes three main steps: (1) feature selection; (2) feature learning (using the modified VQ-VAE); and finally (3) the clustering process. The data used in this study are introduced in Section 3.1. The feature selection is described in Section 3.2, and the setup for the feature learning process using the modified VQ-VAE (Section 2.2) is discussed in Section 3.3. Finally, in Section 3.4 we explain the details of the clustering process we use to classify galaxies. Data Sets The imaging data used throughout this work are from the Sloan Digital Sky Survey (SDSS) Data Release 7 (York et al. 2000;Abazajian et al. 2009) with a redshift cut of z < 0.2. In order to focus on the impact of galaxy shape and structure on morphological classifications, we utilise monochromatic r-band images. An extension including colour and other factors is something to consider for the future. Here we focus on single-band morphological classification based on the features seen, and not in general on a physical classification that might result from considering galaxy colours and colour distributions. Table 2. The classification scheme used in this work and in Domínguez Sánchez et al. (2018, DS18; presented in T-Type). In DS18, they define the T-Type of -3 for ellipticals (E), -2 for lenticulars at the early stage (S0−), -1 for lenticulars at the intermediate to late stages (S0), 0 for S0/a, and positive values of T-Type for different stages of spirals. Finally, the T-Type of 10 represents irregular galaxies (Irr). To examine what types of systems our classification clusters contain, as well as to have flexibility in the data distribution of our datasets, we use morphology labels defined by T-Type (de Vaucouleurs 1964) and the probability of being a barred galaxy (Pbar). Both quantities are obtained using deep learning techniques from Domínguez Sánchez et al. (2018, hereafter, DS18). We define eight labels, including barred galaxies, that contain significant features shown in the Hubble morphological system: ellipticals (E), lenticulars (S0), early spirals (eSp), late spirals (lSp), irregulars (Irr), barred lenticulars (SB0), early barred spirals (bar eSp), and late barred spirals (bar lSp). A comparison of the classification schemes is shown in Table 2, in which S0, eSp, and lSp are separated into barred and non-barred galaxies based on the value of Pbar. We additionally include labels of irregular galaxies from three other works: Fukugita et al. (2007), Nair & Abraham (2010), and Oh et al. (2013) to provide more irregular galaxies in our database. The morphological labels in our datasets are not used for training our machine, but to prepare an appropriate dataset with a specific data distribution, and as a way to examine the obtained clusters in terms of these types. To investigate the differences between the classification systems defined by humans and those from a machine, as well as the potential application of our unsupervised machine learning technique in future surveys, we prepare two different datasets: 'balanced' and 'imbalanced'. In the balanced dataset, we artificially allocate the same number of galaxy images to each morphological type. The eight human-defined morphological types have visually distinctive differences from each other; therefore, the purpose of this arrangement is to allow our VQ-VAE to consider fairly the characteristics of each morphology type when extracting the representative features from input images. Otherwise it is possible that some type of bias would result if the types we select were input into our VQ-VAE in the same fractions as they are found in the nearby universe. In this case we would find that the late-type disks would dominate over early disks and ellipticals (e.g., Conselice 2006). On the other hand, it is of great importance to know how an unsupervised machine learning technique can be applied in future surveys to explore the morphology of large numbers of unknown galaxies in an 'as is' situation. That is, we need to know how our VQ-VAE performs when galaxies are input from imaging observations of the real universe with no balancing. For this goal, we set up the 'imbalanced dataset' with a realistic distribution in terms of galaxy morphological types which follows the distribution of nearby galaxies at z = 0.033-0.044 as presented in Oh et al. (2013). 
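A sketch of how the two datasets could be assembled from a labelled catalogue is shown below. The eight label names follow the scheme defined above; the catalogue format, column names, and the per-type counts are hypothetical placeholders, and the target fractions for the imbalanced case would have to be taken from Oh et al. (2013) rather than the illustrative numbers used here.

```python
import pandas as pd

TYPES = ["E", "S0", "eSp", "lSp", "Irr", "SB0", "bar_eSp", "bar_lSp"]

def balanced_sample(catalogue, n_per_type, seed=42):
    """Same number of galaxies per morphological type ('balanced' dataset)."""
    parts = [catalogue[catalogue["label"] == t].sample(n_per_type, random_state=seed)
             for t in TYPES]
    return pd.concat(parts, ignore_index=True)

def imbalanced_sample(catalogue, n_total, type_fractions, seed=42):
    """Per-type counts follow a target distribution ('imbalanced' dataset),
    e.g. the nearby-galaxy morphology fractions of Oh et al. (2013)."""
    parts = []
    for t in TYPES:
        n_t = int(round(n_total * type_fractions[t]))
        parts.append(catalogue[catalogue["label"] == t].sample(n_t, random_state=seed))
    return pd.concat(parts, ignore_index=True)

# 'catalogue' is assumed to be a DataFrame with an image identifier and a 'label' column
# holding one of the eight types above (e.g. derived from the DS18 T-Types and P_bar).
# Illustrative fractions only -- not the values of Oh et al. (2013):
example_fractions = {"E": 0.12, "S0": 0.20, "eSp": 0.16, "lSp": 0.28,
                     "Irr": 0.04, "SB0": 0.05, "bar_eSp": 0.06, "bar_lSp": 0.09}
```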
The type distributions of the balanced and imbalanced datasets are shown in Fig. 3. Figure 3. The type distributions of the balanced (left) and imbalanced (right) datasets. The latter follows the distribution of nearby galaxies (Oh et al. 2013). The number shown above each coloured bar represents the fraction of the type in all data. The fractions of barred galaxies are highlighted with hashed lines. The orange and light blue colouring represent early-type galaxies and late-type galaxies, respectively. Feature Selection In this section we discuss a preprocessing procedure to reject irrelevant information from images. The feature selection procedure is used to select the pixels in images that are significant and which reflect the shape or structure of the targets. Cheng et al. (2020) showed that the background noise can result in an overfit to the noise when training a convolutional autoencoder. To solve this, Cheng et al. (2020) applied a simplified convolutional autoencoder to denoise the images and emphasise the pixels from the targets themselves before the main task is computed. However, a denoising process by another autoencoder is time-consuming and could potentially add artificial structure when reconstructing the images. Therefore, in this study, we simply use a one-sigma clipping of pixel values, measured from the background noise, as our selection threshold. Any pixel value below this criterion is set to 0 (Martin et al. 2019). Whilst this will remove noise, it will also potentially remove the outer fainter portions of the galaxies themselves. However, this will retain the brighter portions of the inner parts of galaxies where classification is done in any case. Removing this fainter light does not have an effect on our measurements as it would if we were measuring, for example, surface brightness profiles. Feature Learning As described, in this study, we apply a modified vector-quantised variational autoencoder (VQ-VAE) (see Section 2.2) to carry out our unsupervised learning. Our VQ-VAE basically learns the representative features from our images. It considers a preliminary clustering result by including an additional penalty (Equation 5) in the VQ-VAE (Section 2.2). This modification helps to find not only better representative features for image reconstruction, but also features that can be well separated into two initial groups in feature space. The main advantage of the VQ-VAE technique is to accelerate the unsupervised feature extraction process, which is over 30 times faster than using a typical convolutional autoencoder (e.g., Cheng et al. 2020) without a significant trade-off in the reconstruction accuracy (Razavi et al. 2019). This is achieved by quantising the values used for reconstruction (Section 2.1). The hyper-parameter settings used in this study follow the setup described in Razavi et al. (2019) except for the codebook size. This determines the number of vectors available in the quantisation process (Section 2.1). This number of vectors decides the 'resolution' of the reconstructed images. Namely, the more available vectors, the more details can be presented in images. Razavi et al. (2019) use 512 vectors in their codebook to generate high-fidelity emulated images of different animals, e.g., dogs and cats. However, with a different goal from emulation in our study, we realised during analysis that a larger codebook size leads to a worse clustering result. 
This is because the machine with a larger codebook uses too many details of the images into account when carrying out the clustering. These details help to complete the puzzle when emulating images but they blur the boundary in the feature space when doing clustering. In this study, after a series of tests, we choose a size of 16 for our codebook, which forces the machine to use the provided vectors on the most significant features while still retaining a certain level of the reconstruction quality. This number of 16 was determined through experimental method, and is not based on any basic principles related to galaxies or machine learning. It may, and probably does, differ within different instances of use. Clustering Within the clustering task, we apply an uneven iterative hierarchical clustering (Section 2.3) on the data represented by a set of vector-quantised features obtained after the VQ-VAE. In this study, we propose a new approach to decide the number of clusters within unsupervised machine learning applications. This approach can be used in other instances beyond using a VQ-VAE. Part of this is inspired by the fact that the clusters can be highly sensitive to galaxy orientation. The concept we use is to take the threshold measured by the features of galaxy orientation on the merger tree to find where the effect of galaxy orientation in a branch starts to appear (e.g., gray dotted lines in Fig. 2). In other words, this threshold also provides the number of classification clusters that are not separated based on the galaxy orientation. This threshold is defined by the average distance between the artificially rotated images in a branch (drot), where N is the number of datapoints in the branch, and dij represents the distance between an image i and image j. The distance, dij, is measured through the Hamming distance. In this process we stop a branch and decide the number of clusters within that branch when one of two criteria is satisfied: (1) the drot suggests fewer than two clusters (≤ 2) in a branch; (2) the difference between the drot measured using the data of a parent branch and the data of a child branch are smaller than 0.015: that is, dp,rot −dc,rot ≤ 0.015. The first criterion indicates that galaxy orientation is considered when having more than two clusters (> 2) in this branch (e.g., circle 1 and 2 on Fig. 2). Two clusters are the minimal number to split; therefore, we stop the iterative clustering in a branch when this criterion is satisfied. On the other hand, the second criterion is used to decide whether a branch (the parent branch) should have more sub-branches (the child branches). The variation between branches is less significant when the difference in the distance between the data of a parent branch and a child branch is small (≤ 0.015). The value used in the second criterion is measured based on the branches stopped due to the first criterion. Therefore, there is no need to split a parent branch when the second criterion is satisfied. The suggested number of clusters by the drot of the parent branch is then the number of clusters in the branch without having the effect of galaxy orientation. For example in Fig. 2, the branch stops at the circle 3 by satisfying the second criterion, and the drot (gray dotted line) suggests three clusters without showing the effect of galaxy orientation in this branch. featured group less featured group Figure 4. Examples of galaxies found within our two preliminary clusters using the balanced dataset. 
Galaxies in one cluster have more features (left), and galaxies in the other group have relatively fewer features (right). Unsupervised Binary Classification Starting with a simple examination, we force our machine to merge all galaxies in the balanced dataset into two preliminary clusters. Examples of galaxies within the two clusters are shown in Fig. 4. Galaxies in one cluster have clearly more features (featured group; e.g., arm structure) than the galaxies of the other cluster (less featured group; more elliptical). We examine the morphological distribution in both clusters (left column in Fig. 5); one cluster has ∼96% late-type galaxies (LTGs) and the other one has ∼60% early-type galaxies (ETGs). Due to the unequal numbers of ETGs and LTGs in the balanced dataset (Fig. 3), the fractions of ETGs and LTGs in each cluster might be biased. We therefore examine another quantity, 'dominance', which represents the ratio of the fraction of a certain type in a given cluster to the fraction of this type within the dataset (right column in Fig. 5). This quantity removes the statistical influence of the different numbers of each type used in the input datasets; hence, it gives a better representation of the galaxy features emphasised in the cluster. Through the dominance distribution, we observe that the featured and less featured groups are clearly dominated by the features of LTGs and ETGs, respectively. We further investigate the potential structural factors considered when separating the two clusters. With the analysis of the two clusters, we can determine the major structural factors in the clustering process. First of all, it is clear that with our unsupervised learning we obtain a separation into two main clusters, where one correlates with late-type galaxies and the other with early-type galaxies. This verifies, with a machine, the basic dichotomy which has existed in classification schemes for over 100 years. However, we also want to compare our clusters with more quantitative measures. In Fig. 6, we compare a variety of structural measurements such as concentration, asymmetry, smoothness/clumpiness, Sérsic index, Gini coefficient, M20, apparent half-light radius (Re, arcsec), and r-band apparent magnitude (mr) between the two clusters. These measurements, except for the r-band magnitude, are taken from the catalogue of Meert et al. (2015), and the r-band magnitudes are from Simard et al. (2011). Within these measurements, the asymmetry, Sérsic index, Gini coefficient, and M20 show a clear separation between the two clusters in Fig. 6. This indicates that our machine takes into account galaxy structure, which correlates with measurable structural parameters (asymmetry, Gini coefficient, M20) and light distribution (Sérsic index), rather than the apparent size and the apparent brightness of galaxies, when categorising galaxies into the two clusters. This is good, as it shows that our method does not depend on distance or the apparent sizes of galaxies but on the inherent morphologies and structures of the galaxies themselves. Note that the concentration and smoothness distributions show fewer differences between the two clusters. These two quantities also do not show apparent differences between the LTGs and ETGs in our dataset, because the galaxies in our datasets are relatively faint (∼74% of galaxies are fainter than mr = 16) and the image resolution is limited by the ground-based seeing (>1 arcsec; the image sampling is 0.396 arcsec per pixel). Although we cannot straightforwardly confirm the correlation between the two clusters and the concentration parameter, the Gini coefficient and M20 provide a connection with the concept of concentration. Based on our visual assessment, we proceed to associate the featured group with LTGs and the less featured group with ETGs in order to compare these machine-predicted labels with the catalogue labels. Using the balanced dataset, the machine-predicted and the catalogue labels agree with an accuracy of ∼0.75 in this binary classification. The accuracy is defined as the number of correct matches between the machine labels and the catalogue labels divided by the total number of galaxies in the dataset. In Fig. 7, we present the T-Type distribution between the two clusters. It shows that the main confusion in binary classification by our machine happens when classifying early spirals into either ETGs or LTGs, in particular, Sab galaxies (T-Type = 2). Figure 5. The distribution of visual galaxy morphology in each cluster obtained using the balanced input dataset. The left column shows the fraction of each morphology type in the clusters while the right column presents the dominance of each type. The 'dominance' is defined by the fraction of a certain morphology type in the cluster divided by the fraction of this type within the dataset. The top row shows the distribution of the 'featured group' while the bottom row presents the statistics for the 'less featured group'. When we exclude early spirals from the balanced dataset, the accuracy increases to ∼0.87 for binary classification. We discuss some plausible reasons for this misclassification by our machine relative to the visual classification. For example, one uncertainty originates from the provided labels, which combine the uncertainty of both the visual classifications and the machine learning predictions. Second, from our machine's perspective, in addition to the potential machine learning uncertainty, another possible uncertainty is caused by the reconstruction inaccuracy in the VQ-VAE, particularly for spiral galaxies with insignificant arm structures. However, although these causes are unavoidable, these conditions exist only in a fairly small fraction of the data in the input imaging dataset. The main reason for the mixture of early spirals in both clusters is the intrinsic difficulty of classifying this type into either ETGs or LTGs based only on galaxy structure. The 'early spirals' in fact include a wide range of transitional features which are difficult to accurately define. The separation may become better when including colour information; however, with our method, we note the difficulty of discriminating early spirals when considering only galaxy appearance/structure in an unsupervised machine learning methodology. Machine Classification Scheme In the previous section, we forced our machine to provide two initial clusters for a preliminary examination. However, the main motivation for this study is to investigate the classification system a machine would suggest when 'looking' at galaxies and classifying them through machine learning. We use the method proposed in Section 3 with the balanced dataset to let the machine explore freely and suggest a number of clusters to categorise the galaxies in the dataset. Galaxies in our dataset are categorised into 27 classification clusters by our machine, compared with previous work on unsupervised learning which produced 160 clusters (Martin et al. 2019). 
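To make the accuracy and 'dominance' definitions used in the preceding subsection concrete, a minimal sketch is given below; the label arrays are hypothetical placeholders rather than the actual catalogue.

```python
import numpy as np

def binary_accuracy(machine_labels, catalogue_labels):
    """Fraction of galaxies whose machine label matches the catalogue label."""
    machine_labels = np.asarray(machine_labels)
    catalogue_labels = np.asarray(catalogue_labels)
    return np.mean(machine_labels == catalogue_labels)

def dominance(cluster_labels, dataset_labels, morph_type):
    """Dominance of a morphology type in a cluster:
    (fraction of the type within the cluster) / (fraction of the type within the whole dataset)."""
    cluster_labels = np.asarray(cluster_labels)
    dataset_labels = np.asarray(dataset_labels)
    frac_cluster = np.mean(cluster_labels == morph_type)
    frac_dataset = np.mean(dataset_labels == morph_type)
    return frac_cluster / frac_dataset

# Toy example (hypothetical labels): a cluster of 4 galaxies drawn from a dataset of 8.
dataset = ["E", "E", "S0", "eSp", "lSp", "lSp", "lSp", "Irr"]
cluster = ["lSp", "lSp", "lSp", "Irr"]
print(dominance(cluster, dataset, "lSp"))                              # 0.75 / 0.375 = 2.0
print(binary_accuracy(["LTG", "ETG", "LTG"], ["LTG", "LTG", "LTG"]))   # 2/3
```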
Our method suggests a significantly smaller number of galaxy morphology classifications, which is more in line with what one would surmise to be a more accurate number of classes for galaxies. In addition to the different implementations applied in the two works, the difference in the number of obtained clusters might be due to the fact that we only consider monochromatic images to investigate the impact of galaxy structure in this study, while Martin et al. (2019) used coloured images. Figure 6. The comparison of structural measurements including concentration, asymmetry, smoothness/clumpiness, Sérsic index, Gini coefficient, M20, half-light radius (Re), and r-band apparent magnitude (mr) between the two initial clusters. The blue shading represents the featured group while the orange shading is for the less featured group. Additionally, to have more available measurements of galaxy structure and properties, we choose to use the imaging data from the Sloan Digital Sky Survey (SDSS; York et al. 2000;Abazajian et al. 2009), which has a worse resolution and image sampling (0.396 arcsec per pixel) than that used in Martin et al. (2019, 0.168 arcsec per pixel). This may be another reason for the smaller number of clusters obtained in our work. To further investigate galaxy morphology classifications, colour information and images with better resolution will be considered in future work. Examples of images from each of the 27 clusters are shown in Fig. 8. The number shown on the bottom left is the average value of the T-Type in the cluster and the identification number of the cluster is shown on the top right. The identification numbers of the groups are generated on the merger tree from left to right; therefore, they are simply labels without physical interpretation. Table 3 lists the characteristics of each cluster in terms of structural measurements, galaxy properties, and statistics. This can be used to co-examine the figures shown from this section to Section 4.4. Table 3 (caption, in part). The statistics of each cluster are presented in the last four columns, where Ng shows the number of galaxies in the cluster and Fg indicates the percentage of the total sample. Dg lists the dominant types in each cluster, which are selected based on the dominance of each morphology type, and Fg,D shows the fraction of the dominant types in a cluster. Fg,bar is the fraction of barred galaxies in a cluster. Finally, Dg,bar and Dg,nobar are the dominance of barred galaxies and non-barred galaxies in a cluster, respectively. The ordering follows the group IDs, which are simply labels for convenience. Through visual assessment in Fig. 8, we observe that galaxies in some clusters show bars (e.g., g15 and g16 in Fig. 8) or are more elongated in shape than those in others. In Fig. 9, we re-examine the influence of the major structural parameters, such as the Sérsic index, asymmetry, Gini coefficient, and M20 (Section 4.1), in separating the clusters. Each coloured circle represents one cluster and is coloured by the average value of the T-Type in the cluster. We confirm again a clear correlation between our machine classification clusters and the major structural features. Additionally, the clusters show a transition along the T-Type sequence. This suggests the clusters are correlated with the visual morphology, roughly from early-types to late-types. Machine Classifications versus Human Visual Classifications It is important to note that the goal of this work is not to find a perfect agreement between our machine-based classification and the visual morphologies. Our goals are to understand the features used by our method to categorise galaxy images, and to introduce a novel classification scheme 'proposed' by our machine. That is, we want to develop a scheme whereby galaxies are classified in a reproducible, scientific, computational way and not by human opinion. To better understand our machine-based classes, we compare them with visual morphological classes such as the Hubble sequence, and discuss the visual features extracted by our machine. To do this comparison, we associate each cluster with one or a mix of Hubble types based on the dominance of each type within each of the clusters (Fig. 10). As mentioned in Section 4.1, the 'dominance' of each type is the ratio of the fraction of a given morphology type in the cluster to the fraction in the dataset. We associate a given cluster with one or several morphology types when the dominance of a certain type is > 1. This selection indicates which kinds of visual features considered in a visual morphology type dominate in a cluster. Figure 8. Examples of images from each cluster, listed in the order of the average value of the T-Type within that cluster (Table 2). The number shown at the bottom left corner is the average value of the T-Type in the cluster. At the top right corner, the identification number of the cluster to which the image belongs is presented. In Fig. 10, we show the accumulated distribution of the classification clusters over one or a mix of visual morphology types. Each coloured bar represents one cluster, and deeper blue colours indicate more barred galaxies than non-barred galaxies within that given cluster. In Fig. 10, the darkest blue represents a cluster with strong bar dominance, Dg,bar ≥ 1 and non-bar dominance Dg,nobar < 1 (see the last column in Table 3; e.g., g16 in the table). The medium blue is for a cluster with both bar and non-bar dominance ≥ 1 (weak bar dominance; e.g., g27 in Table 3). This criterion indicates that the features of a barred galaxy are not distinctive in such a cluster. The lightest blue is used when the bar dominance is Dg,bar < 1 (no/less dominance; e.g., g14 and g19 in Table 3). By highlighting the bar dominance of the clusters in Fig. 10, our machine is shown to successfully discriminate between barred and non-barred galaxies. Examples of clusters with different bar dominance are shown in Fig. 11. We observe in Fig. 10 that no cluster is dominated by either elliptical galaxies or early spirals only. The features of elliptical galaxies are recognised by our machine as being very similar to those of some lenticular galaxies. Visually, we separate ellipticals and lenticulars mainly based on the disk structure. However, compared to the cluster dominated by only lenticulars (g25 in Table 3) in Fig. 12, the galaxies in the two clusters dominated by E/S0 (g22; g23) lack significant disk structure, where 'g22' denotes the 22nd cluster, and so on (also see Fig. 8 and Table 3). However, clusters with more disky galaxies, such as g27 (blue solid line in Fig. 12), are dominated by a mix of S0 and eSp. This is likely an indication of an uncertainty in distinguishing ellipticals, lenticulars, and early spirals in the visual classification system we use, and not a defect of our unsupervised learning. 
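The per-cluster comparison of structural parameters described above (Fig. 9) amounts to averaging each measurement within every machine-defined cluster and ordering the clusters by their mean T-Type. A hedged pandas sketch is shown below; the column names are placeholders for quantities that would come from the Meert et al. (2015) and DS18 catalogues.

```python
import pandas as pd

# 'df' is assumed to hold one row per galaxy with its machine cluster ID and the
# structural measurements discussed above (hypothetical column names).
def cluster_summary(df):
    """Mean structural parameters and mean T-Type for each machine-defined cluster."""
    cols = ["sersic_n", "asymmetry", "gini", "m20", "t_type"]
    summary = df.groupby("cluster_id")[cols].mean()
    return summary.sort_values("t_type")  # order clusters from early- to late-type on average

# Example with a tiny fake table:
df = pd.DataFrame({
    "cluster_id": ["g22", "g22", "g15", "g15"],
    "sersic_n":   [3.8, 4.2, 1.1, 1.4],
    "asymmetry":  [0.02, 0.03, 0.15, 0.12],
    "gini":       [0.55, 0.58, 0.42, 0.44],
    "m20":        [-2.1, -2.2, -1.6, -1.5],
    "t_type":     [-2.5, -3.0, 4.0, 5.0],
})
print(cluster_summary(df))
```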
The comparison of the major structural features such as the Gini coefficient, M20, Sérsic index and the asymmetry as a function of each cluster from Section 4.1. Each circle represents one classification cluster from our unsupervised machine learning process which is coloured by the average T-Type in the cluster. The average value of the data in the clusters are used for each structural feature value. Clearly strong trends can be seen that varies along the clusters. fication system we use and not a defect of our unsupervised learning. Only the lenticulars with a moderate range of Sérsic index (peaks at ∼ 3; yellow solid line in Fig. 12) can be separated from other morphology types. Additionally, as stated in Section 4.1, early spirals are difficult to categorised into either ETGs or LTGs, and as such it is difficult to have a distinctive cluster dominated by only this morphology type (Fig. 10) due to the broad transitional features in this type. This again indicates the intrinsic difficulty of visually separating early spirals from other morphology types, such as lenticulars and late spirals. Most of our clusters have a mixture of different Hubble types within them which indicates galaxies with similar features in appearance can be visually classifying into a variety of morphology types (see examples in Fig. 13). In other words, a mix of galaxy structure in fact exists in a visually defined morphology type. This result reveals an intrinsic vagueness of the visual classification systems such that they are not always accurately defined, with many galaxies not optimally classified as a certain T-Type due to the diversity of properties beyond a guessed at morphology. One exception from the above discussion is our cluster 21 (g21 in Table 3 with a mix of four morphology types: S0, eSp, lSp, Irr). This cluster is shown to have galaxies with bright companions which overwhelms the brightness of the central objects (the 'g21' row shown in Fig. 13). After the feature selection and normalisation in Section 3.2, the central objects might become negligible to the machine learning compared to the companions. This can result in difficulty for our machine to capture the structure of the central objects as well as group these galaxies correctly. On the other hand, galaxies with companions are more likely to experience galaxy mergers, and thus this cluster can be used as an indication to find potential merger events or compact groups of galaxies. Machine Classifications versus Physical Properties In previous sections, we show that our machine learning classifications trained with monochromatic images are categorised based on structural features (Section 4.2) and visual features (Section 4.3). In this section, we use the machine classification scheme to study the correlation of galaxy physical properties and galaxy morphology using the colourmagnitude diagram and the mass-size relation of galaxies. In Fig. 14, we examine our the machine classification clusters plotted on the colour-magnitude plane (left) and the mass-size plane (right). The colours and physical sizes are again taken from Simard et al. (2011) while the stellar mass originates from Mendel et al. (2014). Each circle represents one cluster, coloured by the average value of the stellar mass of the galaxies in the cluster for the colour-magnitude diagram and by the average value of colour for the masssize relations. 
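To make the construction of these two diagrams concrete, the following is a minimal sketch in Python with pandas and matplotlib. The catalogue file name and the column names (cluster, g_r_colour, abs_mag_r, log_mass, log_re) are hypothetical placeholders, not the actual catalogue schema of Simard et al. (2011) or Mendel et al. (2014).

import pandas as pd
import matplotlib.pyplot as plt

# one row per galaxy: machine-assigned cluster plus colour, magnitude, mass and size
cat = pd.read_csv("galaxy_catalogue.csv")   # hypothetical file

# average each quantity within every machine-defined cluster
per_cluster = cat.groupby("cluster").agg(
    colour=("g_r_colour", "mean"),
    mag=("abs_mag_r", "mean"),
    mass=("log_mass", "mean"),
    size=("log_re", "mean"),
).reset_index()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# colour-magnitude diagram: one point per cluster, coloured by mean stellar mass
sc1 = ax1.scatter(per_cluster["mag"], per_cluster["colour"], c=per_cluster["mass"], cmap="viridis")
ax1.set_xlabel("absolute r-band magnitude")
ax1.set_ylabel("g - r colour")
fig.colorbar(sc1, ax=ax1, label="mean log stellar mass")

# mass-size relation: one point per cluster, coloured by mean colour
sc2 = ax2.scatter(per_cluster["mass"], per_cluster["size"], c=per_cluster["colour"], cmap="coolwarm")
ax2.set_xlabel("mean log stellar mass")
ax2.set_ylabel("mean log half-light radius")
fig.colorbar(sc2, ax=ax2, label="mean g - r colour")

plt.tight_layout()
plt.savefig("cluster_diagrams.png")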
These two plots show that each galaxy cluster as defined by the machine has distinctive physical properties in galaxy colour, absolute magnitude, stellar mass, and physical size. Additionally, our machine classes show a clear transition between galaxy morphology and galaxy properties on both the colour-magnitude diagram and the mass-size relations. Each star shows the average value of the data with a certain visual morphology type (written in black) for comparison. The machine-defined morphology types fill in the gap within the correlation of galaxy morphology and galaxy properties along with the Hubble types. This indicates that the machine classification scheme can complete the missing morphologies in the visual classification systems without involving human potential bias. It will be interesting to investigate the correlation of these machine-defined classifications with galaxy environment and other galaxy properties, but this will be left to study in a future paper. Additionally, we notice on the mass-size diagram (right in Fig. 14) that the five orange clusters above the eSp star- Figure 10. The accumulated distribution of the classification clusters compared with Hubble sequence morphological types. The x-axis shows one or a mix of visual morphology types which dominates the clusters listed in Table 3. All 27 clusters are plotted here, and each coloured bar represents one cluster. The different colours of the bars show different dominance levels of barred galaxies in the cluster, such that from deep to light blue represent more barred galaxies to no/fewer barred galaxies in the cluster. label are dominated by barred galaxies, in particular, the top cluster with the largest average size has ∼ 80% barred galaxies in the cluster (g16 in Table 3). Galaxies in this cluster have larger sizes, larger stellar masses, and are redder in colour than other clusters with a mix of typical spiral galaxies. Dataset with a realistic distribution To test the capability of our method on a realistic data distribution, we apply our method to the imbalanced dataset ( Fig. 3) which follows the distribution of intrinsic morphology for nearby galaxies (Oh et al. 2013, Section 3.1). In this section, we examine the performance using this dataset for: (1) binary classification (Section 4.5.1) and (2) multiple classification clusters (Section 4.5.2) using the imbalanced dataset, and compare the results with the one using the balanced dataset. Unsupervised binary classification Similar to Section 4.1 for the balanced dataset, we merge the imbalanced dataset into two preliminary clusters (Example of galaxies in each is shown in Fig. 15). Although the imbalanced data has a significantly different distribution in galaxy types from the balanced dataset, our machine obtains two preliminary clusters with similar features to the two clusters provided using the balanced dataset (Fig. 4). As before, one cluster is dominated by galaxies with many distinct features while the other has galaxies with significantly fewer features. Fig. 16 shows the morphological fractions of different types (left column) and the dominance of each morphology type in each cluster (right column). The dominance is, again, the ratio between the morphological fraction in the cluster to the fraction in the dataset. This quantity removes the impact of the imbalanced numbers between each type, and indicates the visual features emphasised in a cluster. The two clusters are clearly dominated by LTGs and ETGs, respectively. 
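A minimal sketch of how the morphological fractions, the dominance, and the bar-dominance labels described above can be computed is given below (Python/pandas; the catalogue file and the column names cluster, hubble_type and barred are hypothetical placeholders).

import pandas as pd

# one row per galaxy, with the machine-assigned cluster and the visual labels
cat = pd.read_csv("galaxy_catalogue.csv")

# fraction of each morphology type within every cluster, and in the whole dataset
frac_in_cluster = pd.crosstab(cat["cluster"], cat["hubble_type"], normalize="index")
frac_in_dataset = cat["hubble_type"].value_counts(normalize=True)

# dominance = fraction in the cluster / fraction in the dataset;
# a cluster is associated with every type whose dominance exceeds 1
dominance = frac_in_cluster.div(frac_in_dataset, axis="columns")
associated_types = dominance.gt(1.0)

# bar-dominance labels as used in the text
d_bar = cat.groupby("cluster")["barred"].mean() / cat["barred"].mean()
d_nobar = (1 - cat.groupby("cluster")["barred"].mean()) / (1 - cat["barred"].mean())
label = pd.Series("no/less bar dominance", index=d_bar.index)
label[(d_bar >= 1) & (d_nobar >= 1)] = "weak bar dominance"
label[(d_bar >= 1) & (d_nobar < 1)] = "strong bar dominance"

print(dominance.round(2))
print(label)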
Additionally, the dominance distribution for the imbalanced dataset is fully consistent with that of the balanced dataset (Fig. 5). This confirms that, no matter which data distribution is used, our machine is capable of separating the two clusters based on the specific features present in the corresponding morphology types. Applying our method to the imbalanced dataset, we obtain an initial accuracy of ∼0.87 in separating ETGs from LTGs. The accuracy is again defined as the fraction of correct matches out of the total sample. The reason for the higher accuracy compared with the balanced dataset is the lower fraction of early spirals in the imbalanced dataset (∼8%) than in the balanced dataset (∼25%). When we exclude the early spirals from the imbalanced dataset, the accuracy barely changes and is consistent with the accuracy obtained using the balanced dataset (accuracy: ∼0.87; Section 4.1). These results show the ability of our method to achieve reliable binary morphological classification for large surveys with unknown morphological mixes.

Multiple classification clusters

Following Section 3.4, and using the imbalanced dataset, we obtain the same number of clusters, 27, as for the balanced dataset, through our method of determining the number of clusters (Section 4.2). The clustering results for both datasets are very close to each other, with only minor differences. For example, 7 clusters are separated under the less featured group using the balanced dataset, while 8 clusters are obtained using the imbalanced dataset. Conversely, we obtain 20 clusters for the featured group using the balanced dataset, and 19 using the imbalanced dataset. In Fig. 17, we associate the classification clusters for the imbalanced, realistic dataset with Hubble types based on the dominance of each type. We find no clean clusters for ellipticals (E), lenticulars (S0), early spirals (eSp), or irregulars (Irr) when using the imbalanced dataset. The lack of clusters for E and eSp is due to the same reasons as for the balanced dataset, discussed in Section 4.2: these two visual morphologies are intrinsically difficult to separate from other morphology types. Additionally, in Section 4.2 we conclude that to obtain a clean S0 cluster, galaxies have to show a moderate disk structure (Fig. 12). However, there are not enough lenticulars with the relevant features, due to the low fraction of this type in the imbalanced dataset (Fig. 3). It is impossible for the machine to classify a galaxy type that does not exist in some abundance within the dataset; therefore, we miss the pure S0 cluster when using the imbalanced dataset. On the other hand, irregular galaxies do not have a specific structure; therefore, they are easily confused by our machine with late spirals of less structured appearance, since the machine relies only on galaxy structure and has no prior knowledge of 'late spirals' or 'irregulars'. They also suffer from the same cause as the missing S0 cluster: the insufficient number of irregular galaxies in our imbalanced set decreases the chance that the distinctive irregulars are picked out by our machine. As for the balanced dataset, the separation between clusters might 'improve', in the sense of moving closer to a more physical classification, when colour information is included; this will therefore be an important part of future work.
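For reference, the binary accuracy quoted in this section can be computed as follows. This is a minimal sketch with hypothetical input arrays; because an unsupervised method does not name its clusters, the mapping of the two clusters onto ETG/LTG is taken as the better of the two possible assignments.

import numpy as np

def binary_accuracy(cluster_id, is_etg):
    """cluster_id: array of 0/1 unsupervised labels; is_etg: boolean array of visual ETG labels.
    Try both possible mappings of {cluster 0, cluster 1} to {ETG, LTG} and keep the better one."""
    cluster_id = np.asarray(cluster_id)
    is_etg = np.asarray(is_etg, dtype=bool)
    acc_a = np.mean((cluster_id == 0) == is_etg)
    acc_b = np.mean((cluster_id == 1) == is_etg)
    return max(acc_a, acc_b)

# excluding the ambiguous early spirals (hypothetical boolean mask is_esp):
# acc_no_esp = binary_accuracy(cluster_id[~is_esp], is_etg[~is_esp])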
CONCLUSION In this paper, we present an unsupervised machine learning technique by applying a combination of a feature extractor -a vector-quantised variational autoencoder (VQ-VAE) and a hierarchical clustering algorithm (HC). This method involves a vector quantisation process which provides a rate of classification with a feature extractor in the learning phase at least 30 times faster than a typical convolutional antoencoder used in Cheng et al. (2020) on the same device. Figure 13. Examples of images of galaxies from clusters with a mix of many visual morphology types. Each row shows five randomly picked examples within the cluster, where 'g22' represents the 22th cluster, and so on. The morphology information is shown below each image. To sensibly explore galaxy morphologies and investigate the suggestive number of galaxy morphological classes, we propose some novel modifications to the machine learning algorithms used in this work (Section 2). First, we include a preliminary clustering result in the VQ-VAE architecture during the feature learning process. This helps to extract features that can not only reproduce the input images but also be well separated into two preliminary clusters in feature space. Second, different distance thresholds are used within each branch in the merger tree in the HC process rather than m featured group less featured group Figure 15. Examples of galaxies within the two preliminary clusters using the imbalanced dataset. Galaxies in one cluster are with more features (left), and galaxies in the other group are with relatively fewer features (right). a single distance threshold for a whole tree. This flexibility prevents the creation of unnecessary clusters separating galaxies with few features, while allowing more clusters for galaxies that show larger variation. Another innovation is to use galaxy orientation (a potential problem when classifying galaxies) to our advantage, helping to decide the number of clusters (Section 3.4). Using the monochromatic images from the Sloan Digital Sky Survey (SDSS), we first explore galaxy classifications using a dataset with a balanced number of galaxies in each morphological class (Section 3.1). This is done to reduce potential biases associated with number imbalances. Using this method we obtain 27 clusters within this balanced dataset. We find that our method separates the classification clusters based on galaxy shape and structure (e.g., Sérsic index, asymmetry, Gini coefficient, M20). We then associate our classification clusters with the Hubble sequence based on the dominance of each type in a given cluster (Section 4.2). Clusters with barred, weak-barred, and non-barred galaxies are well distinguished by our machine. However, when using the balanced dataset, no clean clusters are found for ellipticals or early spirals (Fig. 10). Additionally, most clusters are associated with a mixture of Hubble types. We thus conclude that there is a fundamental difficulty in separating accurately galaxies with transitional features such as lenticular galaxies and early spirals with a machine. This applies both to visual and machine classifications. In addition, we find that each machine classification cluster has characteristic galaxy properties (e.g., colours, masses, luminosities, sizes) that transition smoothly along the Hubble sequence. 
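The per-branch distance thresholds summarised above can be sketched roughly as follows. This is a simplification under stated assumptions: scipy's agglomerative (Ward) clustering on already extracted feature vectors, with the two preliminary groups clustered separately so that each branch of the merger tree can be cut at its own threshold; it does not reproduce the VQ-VAE feature extractor or the orientation-based choice of the number of clusters.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_with_branch_thresholds(features, preliminary_label, t_featured, t_less_featured):
    """features: (n_galaxies, n_dims) array of feature vectors (hypothetical).
    preliminary_label: 0 for the 'less featured' group, 1 for the 'featured' group.
    Each preliminary branch is cut at its own distance threshold."""
    labels = np.empty(len(features), dtype=int)
    offset = 0
    for group, threshold in [(0, t_less_featured), (1, t_featured)]:
        mask = preliminary_label == group
        Z = linkage(features[mask], method="ward")             # merger tree for this branch only
        sub = fcluster(Z, t=threshold, criterion="distance")   # cut this branch at its own threshold
        labels[mask] = sub + offset
        offset += sub.max()
    return labels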
Overall, the machine classification clusters provide a reasonable and detailed scheme for galaxy morphological classification based on a combination of multiple structural parameters, avoiding human errors and biases. The dominated features in our classification clusters can be used as the foundation of an objective alternative to the Hubble sequence. Since our system separates well galaxies with different shape, structure, and physical properties, it may prove useful in generic galaxy formation and evolution studies. The system may be improved by including multi-colour imaging and velocity maps. It will also be interesting to apply our technique to higher redshift galaxies to see how the classification cluster change. To test the performance of our method with realistic morphological distributions, we also apply it to an imbalanced dataset which follows the morphological distribution of nearby galaxies. The results are very similar to the ones obtained with the balanced dataset, showing that our system is able to deal with large galaxy samples with more realistic morphological mixes. It also shows that our set up is not sensitive to different distributions of input galaxy morphologies, but can handle a range of distributions of various galaxy input 'types'. In the future, we intend to apply the techniques developed here to multi-colour images with better resolution such as the data from the Dark Energy Survey and the Euclid Space Telescope. Velocity maps from integral-field spectroscopic surveys could also be included. The resulting classification system(s) should prove very useful to better understand galaxy properties, their formation and evolution. We also expect that the future development of this work will result in a fundamental change in how we approach galaxy morphological classification -both visually and when using machine learning. Figure 17. The accumulated distribution of the classification clusters compared with the Hubble sequence for the imbalanced dataset. The x-axis shows one or a mix of visual morphology types. Each coloured bar represents one cluster. Different colours are different dominance of barred galaxies in the cluster, such that from deep to light blue represent more barred galaxies to no/fewer barred galaxies in the cluster.
13,469.4
2020-09-24T00:00:00.000
[ "Computer Science", "Physics" ]
Testing the Existence of the Ricardian Equivalence in Ghana in this 21st Century

The Ricardian Equivalence Hypothesis, formulated by the classical British economist David Ricardo, argues that a tax reduction now implies a tax increase in the future, so that the substitution of debt for current taxes has no effect on aggregate demand. The main objective of this paper is to examine empirically the existence of Ricardian equivalence in Ghana, using time series data running from 1990 to 2017 and the ARDL bound testing approach to cointegration and Error Correction Model framework developed by Pesaran and Shin (1995, 1999). We examined the long run relationship between the dependent variable, household final consumption expenditure, and the independent variables government expenditure, deficit, GDP per capita and gross debt. The long run results showed a positive and significant relationship between GDP per capita and household consumption expenditure. The results of the analysis support the conventional Keynesian theory and provide strong evidence against the existence of the Ricardian Equivalence Hypothesis in Ghana.

Introduction

After the 2008 financial crisis, most countries witnessed an abnormal increase in their fiscal deficits, since public revenues failed to increase at the same rate as expenditures. This budgetary phenomenon was initially observed in many advanced countries such as Spain, France, Portugal, Greece and the United States; however, the effect extended to many developing countries, such as Ghana. The phenomenon was accompanied by a huge rise in the public debt of both developed and developing economies. Aside from the slow recovery, budget deficits in developed and developing economies are mostly associated with austerity measures. The challenges associated with public debt are likely to persist in the medium term, owing to substantial financing needs and the lack of additional public revenue. In this situation, it is necessary to consider how households will behave economically when they anticipate an upcoming increase in public debt. The Ricardian equivalence hypothesis, as its name implies, was formulated by the classical British economist David Ricardo. It was later forcefully argued by the neoclassical economist Robert Barro, on the grounds that the hypothesis deserves professional attention and produces important policy prescriptions (Heijdra, 2002). Ricardian equivalence is of great importance when investigating the possible mechanisms linking fiscal policy to household consumption and savings. The question can be approached in two different ways: the Keynesian proposition and the Ricardian equivalence hypothesis. The Keynesian argument is that an increase in government spending financed through a budget deficit raises domestic output and stimulates the economy in the short run, making households feel richer and increasing total private consumption and expenditure. The Ricardian view, by contrast, holds that when households know that the current level of government debt will lead to higher taxes in the long run, they tend to save in anticipation of those future tax increases. The present value of this long run saving, driven by the anticipated increase in future taxes, would completely offset the deficit, so that replacing taxes with debt has no effect on the wealth of the private sector (Descamps & Page, 1994).
In this case consumption will not change which does not support the Keynesian theory, which states that an increase in public deficit will increase aggregate demand. In a whole, the effectiveness of a macroeconomic fiscal policy to a large extent, associated with as to whether households that are prevailing in the economy or the Ricardian ones. Literature Review This section provides both theoretical and empirical literature that are relevant for the study. Theoretical literature, is on theoretical requirement for the existence of the Ricardian equivalent hypothesis. The empirical literature is on studies that support or reject the Ricardian equivalence hypothesis. Theoretical Requirement for the existence of the Ricardian equivalence hypothesis It is very important to have in mind that for the Ricardian equivalence hypothesis to hold, mechanism based on intergeneration transfer must exist, in this sense, individual most always have it mind to leave a positive legacy for their offspring's. However, for household to become Ricardian, then they need to decide on their consumption on the basis of their fix income, which is associated with the present value of their wage after tax deduction. Household discounted expected present value of future taxes will be the same as current reduction in taxes or present rise in public spending. In other words, household must be anticipating and accept the rational expectation hypothesis. The existence of a perfect capital market (liquidity unconstraint) is an essential element to support the REH. According to Hayashi (1987), if consumers face quantity constraint (due to high-interest rate) on their borrowing, they face the liquidity constraint. Therefore, they are not able to smooth out their consumption over an entire lifetime, and they will lack an opportunity to select the tax burden, and they will be indifferent. The other prerequisite for the existence of REH is the presence of lump-sum taxes. The lumpsum taxation requires that a tax now be precisely equivalent to a tax next year which raises the same present value of revenue by assumption. Debt and taxes must be equivalent. Moreover, failure to allow fully for the future by virtue either of finite horizons or fiscal illusion are inconsistent with the lump-sum assumption. Any lump-sum tax must temporally be neutral, such that it does not distort between the present and future consumption when used in all periods at a constant rate and in the sense that a tax differential between periods does not induce any taxpayer response (Brennan and Buchanan, 1980). However, in reality, taxes are not lumpsum. The reality is the tax liability is substantial if future income is high and low if the income is low. Hence, household's lifetime resources became uncertain, which may lead to an increase in current consumption (Romer, 1996;Marinheiro, 2001). According to Romer (1996), if individuals do not optimize their consumption fully over the long horizon, the Ricardian equivalence will not hold. Further, the perfect foresight assumption is one of a strong assumption for the occurrence of REH even though it is difficult to exist in an uncertain world (De Grauwe, 1996Marinheiro, 2001. The empirical literature in table1 shows the results of studies conducted on the Ricardian equivalency. Some of the studies supported the Ricardian equivalency hypothesis, whiles other rejected the REH. The reason being the kind of variable used their studies, the model, methodology, period of study and country of case study. 
Generally, in most studies conducted in developed and developing countries, the Ricardian equivalence hypothesis is rejected.

Methodology

The Ricardian Equivalence Hypothesis is examined empirically by analysing the effect of substituting debt for taxes on aggregate consumption and the interest rate. Most studies of Ricardian equivalence use specifications that can be categorized into reduced-form (structural) consumption functions and Euler equation specifications. The reduced-form (structural) consumption function is faced with the problem of endogeneity. However, when instrumental variables and accurate income, interest rate and wealth variables are used in the estimation, the reduced-form (structural) consumption function performs well compared with the Euler equation specification approach under rational expectation conditions (Bernheim, 1987). This study adopts the structural consumption function approach proposed by Bernheim (1987). His standard model for private consumption is specified in Eq. (1):

C_t = β0 + β1*Y_t + β2*DEF_t + β3*G_t + β4*D_t + β5*W_t + γ*X_t + ε_t    (1)

where C is household final consumption, Y is GDP, DEF is the budget deficit, G is government expenditure, D is government debt, W is wealth and X represents a vector of variables capturing the socio-economic conditions of the country. Because of data problems, we eliminate variables such as wealth and the vector X, retain the main REH variables, and estimate the model in equation (2):

HFC_t = β0 + β1*GDP_t + β2*G_t + β3*D_t + β4*DEF_t + ε_t    (2)

where HFC_t represents household final consumption expenditure, measured as the market value of all goods and services at time t, GDP_t is Gross Domestic Product at current market prices at time t, G_t is government expenditure at time t, D_t is total government debt at time t, and DEF_t is the budget deficit at time t. The coefficient β0 is the constant term of the equation, and β1, β2, β3 and β4 are the long run coefficients to be estimated. The Ricardian Equivalence Hypothesis will hold in the case of Ghana if β2 = β3 = β4 = 0, that is, if government expenditure, government debt and the deficit have no effect on household consumption.

Data Sources

The data for the study are primarily secondary data drawn from the International Monetary Fund (IMF) and World Bank (World Development Indicators) databases. The data set was cross-checked against other international databases for consistency before being used in the analysis.

Estimation Procedure

To test for the existence of the Ricardian equivalence hypothesis in Ghana, the study uses the bound testing approach to cointegration and the error correction model within the ARDL framework developed by Pesaran and Shin (1999). We first test the time series properties of the data. This is done by carrying out unit root tests with the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests to determine the stationarity of the variables used in the study. The long run and short run relationships among the variables are then determined using the ARDL bound testing approach to cointegration and the Error Correction Model. Finally, stability and diagnostic test statistics of the ARDL model are computed to ensure the reliability and goodness of fit of the model.

ARDL Model Specification

The long run relationship and the dynamic relations among the variables of interest were determined empirically using the ARDL bound test developed by Pesaran and Shin (1999) and modified by Pesaran, Shin, and Smith (2001). We used the Bernheim (1987) approach to develop a general ARDL model for the study.
This is specified in equation (3). The coefficients b1, b2, b3 and b4 in equation (3) represent the long-run multipliers and α0 is the constant term. The short-run dynamic structure is represented by the coefficients of the lagged first differences of the variables. The symbol Δ represents the first difference operator, and p is the optimal lag length.

The unit root tests are conducted using the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests to determine the stationarity of the variables. The results show that government expenditure is stationary in levels, with order of integration zero, I(0). Household final consumption, GDP per capita, gross debt and the deficit are not stationary in levels; according to the ADF and PP tests these variables become stationary after first differencing, so their order of integration is one, I(1). The results of the analysis in Tables 2 and 3 call for a test of cointegration among the variables. In our case we use the bound test, which is appropriate for ARDL estimation: ARDL estimation is appropriate if the variables are purely I(0), purely I(1), or a mixture of both (Duasa, 2007). Since our regressors are mixed, we test for cointegration using the bound test instead of the Johansen cointegration test. The optimum lag selection for our ARDL model is carried out using the final prediction error, the Akaike information criterion (AIC), the Schwarz information criterion (SC) and the Hannan-Quinn information criterion, producing the output in Table 4, which reports the results of the lag selection. A maximum lag of 3 is chosen by all the criteria, and this lag is used in the bound testing and error correction estimation. We tested for the presence of cointegration among our variables using the Wald F-test statistic against the critical values of Pesaran and Shin (1995); the results show that the null hypothesis of no cointegration must be rejected at all significance levels (Table 5). This implies that there exists a long run equilibrium relationship among the variables. The error correction model below yields the error correction term governing the adjustment of the model back to equilibrium whenever a shock pushes the system out of equilibrium:

EC = HFC − (−0.8571*GROSS_DEBT + 0.0168*GDP_CAPITA_PPP + 1.3309*EXPENDITURE − 0.5201*DEFICIT − 42.7995)

Table 6 reports the results of the error correction model. The error correction term ECT(−1) measures how quickly the variables in the model return to equilibrium. The error correction term for our model is statistically significant with a negative sign (−1.5970). The t-statistic of −8.1351 and p-value of 0.000 confirm long run causality running from the independent variables to the dependent variable. The ECT(−1) coefficient of −1.5970 implies a very high speed of convergence of the dependent variable towards its equilibrium relationship with the independent variables. If the dependent variable, household final consumption, is out of equilibrium, the system converges back to equilibrium at a rate of about 159% per year; that is, any deviation of actual household consumption from its equilibrium value is corrected within the year, so that full adjustment takes less than a year. Note: ***, **, * represent significance at the 1%, 5% and 10% levels, respectively. Source: authors' construction from Eviews 10.
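For readers who wish to reproduce this kind of ARDL/ECM analysis, a minimal sketch is given below. It assumes a recent statsmodels version (0.13 or later, which provides the ARDL and UECM classes) and uses hypothetical file and column names rather than the exact series labels of this study.

import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ardl_select_order, UECM

# annual data 1990-2017; hypothetical column names
df = pd.read_csv("ghana_macro.csv", index_col="year")
y = df["hfc"]                                             # household final consumption
X = df[["gdp_capita", "expenditure", "gross_debt", "deficit"]]

# unit root (ADF) test for each series, as in the estimation procedure described above
for name, series in df.items():
    stat, pvalue = adfuller(series.dropna())[:2]
    print(f"ADF {name}: stat={stat:.3f}, p={pvalue:.3f}")

# select the ARDL lag order (here by AIC, up to 3 lags) and estimate the
# unrestricted error correction form underlying the bounds test
sel = ardl_select_order(y, maxlag=3, exog=X, maxorder=3, trend="c", ic="aic")
uecm = UECM.from_ardl(sel.model)
res = uecm.fit()
print(res.summary())
# The coefficient on the lagged level of consumption plays the role of the ECT(-1)
# term discussed above; res.bounds_test(case=3) would give the Pesaran-Shin-Smith
# bounds test if available in the installed statsmodels version.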
The coefficient of determination (R²) measures the percentage of the variation in household consumption that is explained by the explanatory variables: the deficit, GDP per capita, gross debt and government expenditure. It also indicates the fit of the model: an R² value of 0.9188 means that about 92% of the variation in household consumption is explained by government expenditure, the deficit, GDP per capita and gross debt, while the remaining 8% is explained by unknown and unobserved factors in Ghana from 1990 to 2017. This also implies that the model fits the data and can be used to predict household consumption in Ghana. The F-statistic measures the overall significance of the explanatory variables. The F-statistic value of 186.0333, well above 5, with an F-statistic probability of 0.0000, implies that the explanatory variables in the model jointly explain the trend in household consumption from 1990 to 2017.

The results in Table 7 show that there is a significant positive relationship between government expenditure and household consumption. The positive coefficient for government expenditure (1.3309) implies that, with all other explanatory variables held constant, a 1% increase in government expenditure will increase household consumption by approximately 1.3309%. The long run results also show a positive and significant relationship between GDP per capita and household consumption. The coefficient of −0.8571 for gross debt implies that a 1% increase in gross debt will reduce household consumption by approximately 0.8571% when all other variables in the model remain unchanged. The negative and statistically significant influence of gross debt on household final consumption supports the conventional Keynesian theory. Holding the effect of all the variables in the long run model constant, the negative and statistically significant constant term means that household final consumption in Ghana would be reduced by approximately 42.8% owing to the effect of all other factors not considered in the model. The positive relationship between government expenditure and household consumption supports neither the conventional Keynesian theory nor the Ricardian Equivalence Hypothesis. Moreover, the results of our analysis show that β2 ≠ β3 ≠ β4 ≠ 0: the coefficients are neither equal to one another nor equal to zero. This shows that Ricardian equivalence does not hold in the case of Ghana. As discussed in the theoretical literature, the Ricardian equivalence theory would hold if β2 = β3 = β4 = 0; in our analysis, none of the coefficients is equal to zero. The estimated coefficients are −0.8571 for gross debt, −0.5201 for the deficit and 1.3309 for government expenditure. The results of our analysis are in line with previous studies conducted in developing countries, specifically in Africa, which found no evidence of Ricardian equivalence: Pickson and Ofori-Abebrese (2018) for Sub-Saharan Africa, Aderemi (2014) for Nigeria, and Mosikari and Eita (2017) for Lesotho.

Model Diagnostic and Stability Test

There is an empirical warning that parameters estimated from time series data might vary over time (Hansen, 1992). Based on this evidence, it is important to conduct parameter stability tests, because there is a possibility that the model is specified incorrectly. This may result from unstable parameters, which have a high probability of producing biased results. In order to check for such misspecification in the ARDL estimation, the significance of the variables included in the model is checked using diagnostic and structural stability tests.
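A minimal sketch of how such residual diagnostics and stability checks can be computed is shown below (Python/statsmodels; hypothetical data file and column names, with a static OLS regression standing in for the fitted ARDL model).

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_breuschpagan, breaks_cusumolsresid
from statsmodels.stats.stattools import durbin_watson, jarque_bera

df = pd.read_csv("ghana_macro.csv", index_col="year")    # hypothetical data file
res = smf.ols("hfc ~ gdp_capita + expenditure + gross_debt + deficit", data=df).fit()

# serial correlation: Breusch-Godfrey LM test and Durbin-Watson statistic
bg_stat, bg_pvalue, _, _ = acorr_breusch_godfrey(res, nlags=2)
dw = durbin_watson(res.resid)

# normality of the residuals: Jarque-Bera test
jb_stat, jb_pvalue, _, _ = jarque_bera(res.resid)

# heteroscedasticity: Breusch-Pagan test
bp_stat, bp_pvalue, _, _ = het_breuschpagan(res.resid, res.model.exog)

# parameter stability: CUSUM test on the OLS residuals
cusum_stat, cusum_pvalue, _ = breaks_cusumolsresid(res.resid, ddof=res.df_model)

print(f"Breusch-Godfrey p = {bg_pvalue:.3f}, Durbin-Watson = {dw:.3f}")
print(f"Jarque-Bera p = {jb_pvalue:.3f}, Breusch-Pagan p = {bp_pvalue:.3f}")
print(f"CUSUM (OLS residuals) p = {cusum_pvalue:.3f}")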
These diagnostic tests for the study can be seen in Appendix 2. The diagnostic tests show that there is no evidence of serial correlation, based on the Breusch-Godfrey serial correlation LM test, and the Jarque-Bera test for normality shows that the errors are normally distributed. Additionally, the model passes the Breusch-Pagan-Godfrey test for heteroscedasticity. The Durbin-Watson test statistic of 1.993 also shows that there is no evidence of serial correlation in the residuals. The stability tests on the residuals (CUSUM and CUSUM of squares) indicate that all the coefficients in the model are stable (the null hypothesis of parameter stability cannot be rejected).

Conclusion

The main objective of this study is to test the existence of the Ricardian Equivalence Hypothesis in Ghana in this 21st century, using the ARDL bound testing approach to cointegration and the Error Correction Model framework developed by Pesaran and Shin (1995, 1999). We examined the existence of a long run relationship between household final consumption and GDP per capita, government expenditure and gross debt. We ran an ARDL model in this framework, using the Bernheim (1987) approach, to test for the existence of the REH. The results of our analysis showed that there is a long run relationship running from GDP per capita, government expenditure and gross debt to household final consumption. However, we found strong evidence against the Ricardian Equivalence Hypothesis in Ghana, and support for Keynesian debt non-neutrality, for the period 1990 to 2017. Ricardian equivalence would hold in the case of Ghana only if government expenditure and government debt did not affect the level of household final consumption, and if all the theoretical assumptions of the Ricardian Equivalence Hypothesis were met.

Variable definitions and sources

Net lending (+)/borrowing (−): calculated as revenue minus total expenditure. This is a core GFS balance that measures the extent to which general government is either putting financial resources at the disposal of other sectors in the economy and non-residents (net lending), or utilizing the financial resources generated by other sectors and non-residents (net borrowing). This balance may be viewed as an indicator of the financial impact of general government activity on the rest of the economy and non-residents. Source: IMF.

GDP per capita (PPP): GDP at purchaser's prices is the sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products. It is calculated without making deductions for depreciation of fabricated assets or for depletion and degradation of natural resources. Source: WDI.

Household final consumption expenditure: the market value of all goods and services, including durable products (such as cars, washing machines, and home computers), purchased by households. It excludes purchases of dwellings but includes imputed rent for owner-occupied dwellings. It also includes payments and fees to governments to obtain permits and licenses.

Government expenditure: total expenditure consists of total expense and the net acquisition of nonfinancial assets. Source: IMF.

Gross debt: gross debt consists of all liabilities that require payment or payments of interest and/or principal by the debtor to the creditor at a date or dates in the future. This includes debt liabilities in the form of SDRs, currency and deposits, debt securities, loans, insurance, pensions and standardized guarantee schemes, and other accounts payable.
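Returning to the central restriction examined in the results section, the condition β2 = β3 = β4 = 0 can be checked formally with a joint Wald test. The sketch below is illustrative only: it uses hypothetical file and column names and a static OLS long-run regression as a stand-in for the ARDL long-run estimates reported above.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ghana_macro.csv", index_col="year")    # hypothetical file/column names

# long-run (static) regression as a stand-in for the ARDL long-run relationship
ols = smf.ols("hfc ~ gdp_capita + expenditure + gross_debt + deficit", data=df).fit()

# joint Wald test of the Ricardian restriction: the coefficients on government
# expenditure, gross debt and the deficit are all zero
print(ols.wald_test("expenditure = 0, gross_debt = 0, deficit = 0"))

# equality of the three coefficients (the weaker condition beta2 = beta3 = beta4)
print(ols.wald_test("expenditure = gross_debt, gross_debt = deficit"))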
4,530.4
1970-01-01T00:00:00.000
[ "Economics" ]
Multi-core On-The-Fly Saturation Saturation is an efficient exploration order for computing the set of reachable states symbolically. Attempts to parallelize saturation have so far resulted in limited speedup. We demonstrate for the first time that on-the-fly symbolic saturation can be successfully parallelized at a large scale. To this end, we implemented saturation in Sylvan’s multicore decision diagrams used by the LTSmin model checker. We report extensive experiments, measuring the speedup of parallel symbolic saturation on a 48-core machine, and compare it with the speedup of parallel symbolic BFS and chaining. We find that the parallel scalability varies from quite modest to excellent. We also compared the speedup of on-the-fly saturation and saturation for pre-learned transition relations. Finally, we compared our implementation of saturation with the existing sequential implementation based on Meddly. The empirical evaluation uses Petri nets from the model checking contest, but thanks to the architecture of LTSmin, parallel on-the-fly saturation is now available to multiple specification languages. Data or code related to this paper is available at: [34]. Introduction Model checking is an exhaustive algorithm to verify that a finite model of a concurrent system satisfies certain temporal properties. The main challenge is to handle the large state space, resulting from the combination of parallel components. Symbolic model checking exploits regularities in the set of reachable states, by storing this set concisely in a decision diagram. In asynchronous systems, transitions have locality, i.e. they affect only a small part of the state vector. This locality is exploited in the saturation strategy, which is probably the most efficient strategy to compute the set of reachable states. In this paper, we investigate the efficiency and speedup of a new parallel implementation of saturation, aiming at a multi-core, shared-memory implementation. The implementation is carried out in the parallel decision diagram framework Sylvan [16], in the language-independent model checker LTSmin [22]. We empirically evaluate the speedup of parallel saturation on Petri nets from the Model Checking Contest [24], running the algorithm on up to 48 cores. Related Work The saturation strategy has been developed and improved by Ciardo et al. We refer to [13] for an extensive description of the algorithm. Saturation derives its efficiency from firing all local transitions that apply at a certain level of the decision diagram, before proceeding to the next higher level. An important step in the development of the saturation algorithm allows on-the-fly generation of the transition relations, without knowing the cardinality of the state variable domains in advance [12]. This is essential to implement saturation in LTSmin, which is based on the PINS interface to discover transitions on-the-fly. Since saturation obtains its efficiency from a restrictive firing order, it seems inherently sequential. Yet the problem of parallelising saturation has been studied intensively. The first attempt, Saturation NOW [9], used a network of PCs. This version could exploit the collective memory of all PCs, but due to the sequential procedure, no speedup was achieved. By firing local transitions speculatively (but with care to avoid memory waste), some speedup has been achieved [10]. More relevant to our work is the parallelisation of saturation for a shared memory architecture [20]. 
The authors used CILK to schedule parallel work originating from firing multiple transitions at the same level. They reported some speedup on a dual-core machine, at the expense of a serious memory increase. Their method also required to precompute the transition relation. An improvement of the parallel synchronisation mechanism was provided in [31]. They reported a parallel speedup of 2× on 4 CPUs. Moreover, their implementation supports learning the transition relation on-the-fly. Still, the successful parallelisation of saturation remained widely open, as indicated by Ciardo [14]: "Parallel symbolic state-space exploration is difficult, but what is the alternative?" For an extensive overview of parallel decision diagrams on various hardware architectures, see [15]. Here we mention some other approaches to parallel symbolic model checking, different from saturation for reachability analysis. First, Grumberg and her team [21] designed a parallel BDD package based on vertical partitioning. Each worker maintains its own sub-BDD. Workers exchange BDD nodes over the network. They reported some speedup on 32 PCs for BDD based model checking under the BFS strategy. The Sylvan [16] multi-core decision diagram package supports symbolic on-the-fly reachability analysis, as well as bisimulation minimisation [17]. Oortwijn [28] experimented with a heterogeneous distributed/multi-core architecture, by porting Sylvan's architecture to RDMA over MPI, running symbolic reachability on 480 cores spread over 32 PCs and reporting speedups of BFS symbolic reachability up to 50. Finally, we mention some applications of saturation beyond reachability, such as model checking CTL [32] and detecting strongly connected components to detect fair cycles [33]. Contribution Here we show that implementing saturation on top of the multi-core decision diagram framework Sylvan [16] yields a considerable speedup in a shared-memory setting of up to 32.5× on 48 cores with pre-learned transition relations, and 52.2× with on-the-fly transition learning. By design decision, our implementation reuses several features provided by Sylvan, such as: its own fine-grained, work-stealing framework Lace [18], its implementation of both BDDs (Binary Decision Diagrams) and LDDs (a Listimplementation of Multiway Decision Diagrams), its concurrent unique table and operations cache, and finally, its parallel operations like set union and relational product. As a consequence, the pseudocode of the algorithm and additional code for saturation is quite small, and orthogonal to other BDD features. To improve orthogonality with the existing decision diagrams, we deviated from the standard presentation of saturation [13]: we never update BDD nodes in situ, and we eliminated the mutual recursion between saturation and the BDD operations for relational product to fire transitions. The implementation is available in the open-source high-performance model checking tool LTSmin [22], with its language-agnostic interface, Partitioned Next-State Interface (PINS) [5,22,25]. Here, a specification basically provides a next-state function equipped with dependency information, from which LTSmin can derive locality information. We fully support the flexible method of learning the transition relation on-the-fly during saturation [12]. As a consequence, our contribution extends the tool LTSmin with saturation for various specification languages, like Promela, DVE, Petri nets, mCRL2, and languages supported by the ProB model checker. See Sect. 
4 on how to use saturation in LTSmin. The experiments with saturation in Sylvan are carried out in LTSmin as well. We used Petri nets from the MCC competition. Our experimental design has been carefully set up in order to facilitate fair comparisons. Besides learning the transition relation on-the-fly, we also pre-learned them in order to measure the overhead of learning, and eliminating its effect in comparisons. It is well known that the variable ordering has a large effect on the BDD sizes [29]. Hence, our experiments are based on two of the best static variable orderings known, Sloan [26] and Force [1]. In particular, our experiments measure and compare: -The performance of our parallel algorithm with one worker, compared to a state-of-the art sequential implementation of saturation in Meddly [4]. Preliminaries This paper proposes an algorithm for decision diagrams to perform the fixed point application of multiple transition relations according to the saturation strategy, combined with on-the-fly transition learning as implemented in LTSmin. We briefly review these concepts in the following. Partitioned Transition Systems A transition system (TS) is a tuple (S, →, s 0 ), where S is a set of states, →⊆ S × S is a transition relation and s 0 ∈ S is the initial state. We define → * to be the reflexive and transitive closure of →. The set of reachable states is R = {s ∈ S | s 0 → * s}. The goal of this work is to compute R via a novel multi-core saturation strategy. In this paper, we evaluate multi-core saturation using Petri nets. Figure 1 shows an example of a (safe) Petri net. We show its initial marking, which is the initial state. A Petri net transition can fire if there is a token in each of its source places. On firing, these tokens are consumed and tokens in each target place are generated. For example, t 1 will produce one token in both p 2 and p 5 , if there is a token in p 4 . Transition t 6 requires a token in both p 3 and p 1 to fire. The markings of this Petri net form the states of the corresponding TS, so here |S| = 2 5 = 32. From the initial marking shown, four more markings are reachable, connected by 10 enabled transition firings. This means |R| = 5, and |→| = 10. Notice that transitions in Petri nets are quite local; transitions consume from, and produce into relatively few places. The firing of a Petri net transition is called an event and the number of involved places is known as the degree of event locality. This notion is easily defined for other asynchronous specification languages and can be computed by a simple control flow graph analysis. To exploit event locality, saturation requires a disjunctive partitioning of the transition relation →, giving rise to a Partitioned Transition System (PTS). In a PTS, states are vectors of length N , and → is partitioned as a union of M transition groups. A natural way to partition a Petri net is by viewing each transition as a transition group. For Fig. 1 this means we have N = 5 and M = 6. After disjunctive partitioning, each transition group depends on very few entries of the state vector. This allows for efficiently computing the reachable state space for the large class of asynchronous specification languages. LTSmin supports commonly used specification languages, like DVE, mCRL2, Promela, PNML for Petri nets, and languages supported by ProB. x2 : Decision Diagrams Binary decision diagrams (BDDs) are a concise and canonical representation of Boolean functions B N → B [7]. 
A BDD is a rooted directed acyclic graph with leaves 0 and 1. Each internal node v has a variable label x i , denoted by var(v), and two outgoing edges labeled 0 and 1, denoted by low(v) and high(v). The efficiency of reduced, ordered BDDs is achieved by minimizing the structure with some invariants: The BDD may neither contain equivalent nodes, with the same var(v), low(v) and high(v), nor redundant nodes, with low(v) = high(v). Also, the variables must occur according to a fixed ordering along each path. Multi-valued or multiway decision diagrams (MDDs) generalize BDDs to finite domains (N N → B). Each internal MDD node with variable x i now has n i outgoing edges, labeled 0 to n i − 1. We use quasi-reduced MDDs with sparse nodes. In the sparse representation, values with edges to leaf 0 are skipped from MDD nodes, so outgoing edges must be explicitly labeled with remaining domain values. Contrary to BDDs, MDDs are usually "quasi-reduced", meaning that variables are never skipped. In that case, the variable x i can be derived from the depth of the MDD, so it is not stored. A variation of MDDs are list decision diagrams (LDDs) [5,16], where sparse MDD nodes are represented as a linked list. See Fig. 2 for two visual representations of the same LDD. Each LDD node contains a value, a "down" edge for the corresponding child, and a "right" edge pointing to the next element in the list. Each list ends with the leaf 0 and each path from the root downwards ends with the leaf 1. The values in an LDD are strictly ordered, i.e., the values must increase to the "right". LDD nodes have the advantage that common suffixes can be shared: The MDD for Fig. 2a requires two more nodes, one for [2,4] and one for [1], because edges can only point to an entire MDD node. LDDs suffer from an increased memory footprint and inferior memory locality, but their memory management is simpler, since each LDD node has a fixed small size. Variable Orders and Event Locality Good variable orders are crucial for efficient operations on decision diagrams. The syntactic variable order from the specification is often inadequate for the saturation algorithm to perform well. Hence, finding a good variable order is necessary. Variable reordering algorithms use heuristics based on event locality. The locality of events can be illustrated with dependency matrices. The size of those matrices is M × N , where M is the number of transition groups, and N is the length of the state vector. The order of columns in dependency matrices determines the order of variables in the DD. Figure 3a shows the natural order on places in Fig. 1. A measure of event locality is called event span [29]. Lower event span is correlated to a lower number of nodes in decision diagrams. This can be seen in LDDs in Figs. 4a and b that are ordered according to columns in Figs. 3a and b respectively. Event span is defined as the sum over all rows of the distance from the leftmost non-zero column to the rightmost non-zero column. The event span of Fig. 3a is 22 (= 4+2+2+5+5+4); the event span of Fig. 3b is 16, which is better. Optimizing the event span and thus variable order of DDs is NP-complete [6], yet there are heuristic approaches that run in subquadratic time and provide good enough orders. Commonly used algorithms are Noack [27], Force [1] and Sloan [30]. Noack creates a permutation of variables by iteratively minimizing some objective function. 
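Event span itself, the quantity these reordering heuristics try to reduce, is cheap to compute directly from the dependency matrix. The following minimal sketch counts both endpoints of each row's span, which matches the per-row values quoted above for Fig. 3a; the small matrix at the end is a hypothetical example, not the matrix of Fig. 3.

def event_span(matrix):
    """Sum over all rows of the distance from the leftmost to the rightmost
    non-zero column, counting both endpoints."""
    span = 0
    for row in matrix:
        cols = [j for j, x in enumerate(row) if x]
        if cols:
            span += max(cols) - min(cols) + 1
    return span

# hypothetical 3-transition, 4-place dependency matrix
m = [
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 0],
]
print(event_span(m))   # 2 + 2 + 2 = 6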
The Force algorithm acts as if there are springs in between nonzeros in the dependency matrix, and tries to minimize the average tension among them. Sloan tries to minimize the profile of matrices. In short, profile is the symmetric counterpart to event span. For a more detailed overview of these algorithms see [3]. In our empirical evaluation we use both Sloan and Force, because these have been shown to give the best results [2,26]. The Saturation Strategy The saturation strategy for reachability analysis, i.e., the transitive closure of transition relations applied to some set of states, was first proposed by Ciardo et al. See for an overview [11,13]. Saturation was combined with on-the-fly transition learning in [12]. Besides reachability, saturation has also been applied to CTL model checking [32] and in checking fairness constraints with strongly connected components [33]. Saturation is well-studied. The core idea is to always fire enabled transitions at the lower levels in the decision diagram, before proceeding to the next level. This tends to keep the intermediate BDD sizes much smaller than for instance the breadth-first exploration strategy. This is in particular the case for asynchronous systems, where transitions exhibit locality. There is also a major influence from the variable reordering: if the variables involved in a transition are grouped together, then this transition only affects adjacent levels in the decision diagram. We refer to [13] for a precise description of saturation. Our implementation deviates from the standard presentation in three ways. First, we implemented saturation for LDDs and BDDs, instead of MDDs. Next, we never update nodes in the LDD forest in situ; instead, we always create new nodes. Finally, the standard representation has a mutual recursion between saturation and firing transitions. Instead, we fire transition using the existing function for relational product, which is called from our saturation algorithm. As a consequence, the extension with saturation becomes more orthogonal to the specific decision diagram implementation. We refer to Sect. 3 for a detailed description of our algorithm. We show in Sect. 5 that these design decisions do not introduce computational overhead. Multi-core Saturation Algorithm To access the three elements of an LDD node x, Sylvan [16] provides the functions value(x), down(x), right(x). To create or retrieve a unique LDD node using the hash table, Sylvan provides LookupLDDNode(value, down, right). Furthermore, Sylvan provides several operations on LDDs that we use to implement reachability algorithms, such as union (A, B) to compute the set union A ∪ B and minus(A, B) to compute the set difference A \ B. For transition relations, Sylvan provides an operation relprod(S, R) to compute the successors of S with transition relation R, and an operation relprodunion(S, R) that computes union(S, relprod(S, R)), i.e., computing the successors and adding them to the given set of states, in one operation. All these operations are internally parallelized, as described in [16]. We implement multi-core saturation as in Algorithm 1. We have a transition relation disjunctively partitioned into M relations R 0 . . . R M −1 . These relations are sorted by the level (depth) of the decision diagram where they are applied, which is the first level touched by the relation. 
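Before the detailed walk-through of Algorithm 1 below, the following toy sketch shows only the control structure of the strategy: relations are sorted by the level at which they apply, and the relations of a level are fired to a fixed point only after the deeper levels have been saturated. It is our own simplification, not the Sylvan implementation: it works on explicit Python sets of state vectors instead of LDDs, and omits the hash table, operation cache and parallel task creation (spawn/sync).

def saturate(states, relations, k=0):
    # relations: list of (level, successor_fn) pairs, sorted by level
    if k == len(relations):
        return frozenset(states)
    level = relations[k][0]
    k2 = k
    while k2 < len(relations) and relations[k2][0] == level:
        k2 += 1                                           # k2: first relation of a deeper level
    while True:
        states = saturate(states, relations, k2)          # saturate the deeper levels first
        new_states = states
        for _, step in relations[k:k2]:                   # then fire this level's relations once
            new_states = new_states | {t for s in new_states for t in step(s)}
        if new_states == states:                          # fixed point at this level reached
            return states
        states = new_states

# tiny hypothetical example (not the Petri net of Fig. 1): two bounded counters,
# one transition group per counter/level
r0 = (0, lambda s: [(s[0] + 1, s[1])] if s[0] < 2 else [])
r1 = (1, lambda s: [(s[0], s[1] + 1)] if s[1] < 2 else [])
print(sorted(saturate({(0, 0)}, [r0, r1])))   # the 9 reachable states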
We say that relation R i is applied The saturate algorithm is given the initial set of states S and the initial next transition relation k = 0 and the initial decision diagram level d = 0. The algorithm is a straightforward implementation of saturation. First we check the easy cases where we reach either the end of an LDD list, where S = 0, or the bottom of the decision diagram, where S = 1. If there are no more transition relations to apply, then k = M and we can simply return S. When we arrive at line 4, the operation is not trivial and we consult the operation cache. If the result of this operation was not already in the cache, then we check whether we have relations at the current level. Since the relations are sorted by the level where they must be applied, we compare the current level d with the level d k of the next relation k. If we have relations at the current level, then we perform the fixed point computation where we first saturate S for the remaining relations, starting at relation k , which is the first relation that must be applied on a deeper level than d, and then apply the relations of the current level, that is, all R i where k ≤ i < k . If no relations match the current level, then we compute in parallel the results of the suboperations for the LDD of successor "right" and for the LDD of successor "down". After obtaining these sub results, we use LookupLDDNode to compute the final result for this LDD node. Finally, we store this result in the operation cache and return it. The do in parallel keyword is implemented with the work-stealing framework Lace [18], which is embedded in Sylvan [16] and offers the primitives spawn and sync to create subtasks and wait for their completion. The implementation using spawn and sync of lines 12-14 is as follows. The implementation of multi-core saturation for BDDs is identical, except that we parallelize on the "then" and "else" successors of a BDD node, instead of on the "down" and "right" successors of an LDD node. To add on-the-fly transition relation learning to this algorithm, we simply modify the loop at line 9 as follows: The learn-transitions function provided by LTSmin updates relation i given a set of states S. The function first restricts S to so-called short states S i , which is the projection of S on the state variables that are touched by relation i. Then it calls the next-state function of the PINS interface for each new short state and it updates R i with the new transitions. Updating transition relations from multiple threads is not completely trivial. LTSmin solves this using lock-free programming with the compare-and-swap operation. After collecting all new transitions, LTSmin computes the union with the known transitions and uses compare-and-swap to update the global relation; if this fails, the union is repeated with the new known transitions. Contributed Tools We present several new tools and extensions to existing tools produced in this work. The new tools support experiments and comparisons between various DD formats. The extension to Sylvan and LTSmin provides end-users with multicore saturation for reachability analysis. Tools for Experimental Purposes For the empirical evaluation, we need to isolate the reachability analysis of a given LDD (or BDD or MDD). To that end, we implemented three small tools that only compute the set of reachable states, namely lddmc for LDDs, bddmc for BDDs and medmc for MDDs using the library Meddly. 
These tools are given an input file representing the model, compute the set of reachable states, and report the number of states and the time required to compute all reachable states. Additionally, we provide the tools ldd2bdd and ldd2meddly that convert an LDD file to a BDD file and to an MDD file. The LDD input files are generated using LTSmin (see below). These tools can all be found online. Tools for On-The-Fly Multi-core Saturation On-the-fly multi-core saturation is implemented in the LTSmin toolset, which can be found online. The examples in this section are also available online. On-the-fly multi-core saturation for Petri nets is available in LTSmin's tool pnml2lts-sym. This tool computes all reachable markings with parallel saturation. The command line to run it on Fig. 1 is pnml2lts-sym pnml/example.pnml --saturation=sat. The tool reports: pnml2lts-sym: state space has 5 states, 16 nodes. The final LDD thus has 16 nodes. Here the syntactic variable order of the places in pnml/example.pnml is used. To use a better variable order, the option -r is added to the command line. For instance, adding -rf runs Force, while -rbs runs Sloan's algorithm (as implemented in the well-known Boost library). Running pnml2lts-sym pnml/example.pnml --saturation=sat -rf reports that the final LDD has only 12 nodes. The naming convention of LTSmin's binaries follows the Partitioned Next-State Interface (PINS) architecture [5,22,25]. PINS forms a bridge between several language front-ends and algorithmic back-ends. Consequently, besides pnml2lts-sym, LTSmin also provides {pnml,dve,prom}2lts-{dist,mc,sym} and several other combinations. These binaries generate the state space for the languages PNML, DVE and Promela, by means of distributed explicit-state, multi-core explicit-state and multi-core symbolic algorithms, respectively. Additionally, LTSmin supports checking for deadlocks and invariants, and verifying LTL properties and µ-calculus formulas. In this work we focus on state space generation with the symbolic back-end only. We now demonstrate multi-core saturation for Promela models. Consider the file Promela/garp 1b2a.prm, which is an implementation of the GARP protocol [23]. To compute the reachable state space with the proposed algorithm and the Force order, run: prom2lts-sym --saturation=sat Promela/garp 1b2a.prm -rf. On a consumer laptop with 8 hardware threads, LTSmin reports 385,000,995,634 reachable states within 1 min. To run the example with a single worker, run prom2lts-sym --saturation=sat Promela/garp 1b2a.prm -rf --lace-workers=1. On the same laptop, the algorithm runs in 4 min with 1 worker. We thus have a speedup of 4× with 8 workers for symbolic saturation on a Promela model. Empirical Evaluation Our goal with the empirical study is five-fold. First, we compare our parallel implementation with only 1 core to the purely sequential implementation of the MDD library Meddly [4], in order to determine whether our implementation is competitive with the state of the art. Second, we study parallel scalability up to 16 cores for all models and up to 48 cores with a small selection of models. Third, we compare parallel saturation with LDDs to parallel saturation with ordinary BDDs, to see if we get similar results with BDDs. Fourth, we compare parallel saturation without on-the-fly transition learning to on-the-fly parallel saturation, to see the effects of on-the-fly transition learning on the performance of the algorithm.
Fifth, we compare parallel saturation with other reachability strategies, namely chaining and BFS, to confirm whether saturation is indeed the better strategy. To perform this evaluation, we use the P/T Petri net benchmarks obtained from the Model Checking Contest 2016 [24]. These are 491 models in total, stored in PNML files. We use parallel on-the-fly saturation (in LTSmin) with a generous timeout of 1 hour to obtain LDD files of the models, using the Force variable ordering and using the Sloan variable ordering. In total, 413 of potentially 982 LDD files were generated. These LDD files simply store the list decision diagrams of the initial states and of all transition relations. We convert the LDD files to BDD files (binary decision diagrams) with an optimal number of binary variables. We also convert the LDD files to MDD files for the experiments using Meddly. This ensures that all solvers have the same input model with the same variable order. See Table 1 for the list of solving methods. As described in Sect. 4, we implement the tools lddmc, bddmc and medmc to isolate reachability computation for the purposes of this comparison, using respectively the LDDs and BDDs of Sylvan and the MDDs of Meddly. The on-the-fly parallel saturation using LDDs is performed with the pnml2lts-sym tool of LTSmin. We use the command line pnml2lts-sym ORDER --lace-workers=WORKERS --saturation=sat FILE, where ORDER is -rf for Force and -rbs for Sloan, and WORKERS is a number from the set {1, 2, 4, 8, 16}. All experimental scripts, input files and log files are available online. When reporting on parallel executions, we use the number of workers to indicate how many hardware threads (cores) were used. Overview. After running all experiments, we obtain results for 413 models in total, of which 196 models with the Force variable ordering and 217 models with the Sloan variable ordering. In the remainder of this section, we study these results. Table 2 shows the number of models for which each method could compute the set of reachable states within 20 min. To correctly compare all runtimes, we restrict the set of models to those where all methods finish within 20 min with any number of workers. We retain in total 301 models where no solver hit the timeout. See Table 3 for the cumulative times for each method and number of workers, and the parallel speedup. Notice that this is the speedup for the entire set of 301 models and not for individual models. Comparing LDD saturation with Meddly's saturation. We evaluate how ldd-sat with just 1 worker compares to the sequential saturation of Meddly. The goal is not to directly measure whether there is a parallel overhead from using parallelism in Sylvan, as the algorithm in lddmc is fundamentally different: it uses LDDs instead of MDDs and does not saturate nodes in place, as also explained in Sect. 3. The low parallel overheads of Sylvan are already demonstrated elsewhere [15,16,18]. Rather, the goal is to see how our version of saturation compares to the state of the art. Table 2 shows that Meddly's implementation (mdd-sat) and our implementation (ldd-sat 1) are quite similar in the number of solved models. Meddly solves 375 benchmarks and our implementation solves 388 within 20 min. See Table 3 for a comparison of runtimes. Meddly solves the 150 models with Sloan almost 2× as fast as our implementation in Sylvan, but is slower than our implementation for the 151 models with Force.
We observe for individual models that the difference between the two solvers is within an order of magnitude. Parallel Scalability. As shown in Table 3, using 16 workers we obtain a modest parallel speedup for saturation of 6.2× (with Sloan) and 4.7× (with Force). On individual models, the differences are large. The average speedup of the individual benchmarks is only 1.8× with 16 workers, but there are many slowdowns for models that take less than a second with 1 worker. We take an arbitrary selection of models with a high parallel speedup and run these on a dedicated 48-core machine. Table 4 shows that even up to 48 cores, parallel speedup keeps improving. We even see a speedup of 52.2×. For this superlinear speedup we have two possible explanations. One is that there is some nondeterminism inherent in any parallel computation; another, already noted in [20], is related to the "chaining" effect in saturation. Comparing LDD saturation with BDD saturation. As Table 3 shows, the ldd-sat and bdd-sat methods have similar performance and similar parallel speedups. On-the-fly LDD saturation. Comparing the performance of offline saturation with on-the-fly saturation, we observe the same scalability with the Sloan variable order, but on-the-fly saturation requires roughly 2× as much time. With the Force variable order, on-the-fly saturation is slower but has a higher parallel speedup of 7.9×. Comparing saturation, chaining and BFS. We also compare the saturation algorithm with other popular strategies to compute the set of reachable states, namely standard (parallelized) BFS and chaining, given in Fig. 5 (both strategies are also sketched below, after the conclusion). As Tables 2 and 3 show, chaining is significantly faster than BFS, and saturation is again significantly faster than chaining. In terms of parallel scalability, we see that parallelized BFS scales better than the others, because it can already parallelize in the main loop by computing successors for all relations in parallel, which chaining and saturation cannot do. For the entire set of benchmarks, saturation is the superior method; however, there are individual differences, and for some models saturation is not the fastest method. Conclusion We presented a multi-core implementation of saturation for the efficient computation of the set of reachable states. Based on Sylvan's multi-core decision diagram framework, the design of the saturation algorithm is mostly orthogonal to the type of decision diagram. We showed the implementation for BDDs and LDDs; the transition relation can be learned on-the-fly. The functionality is accessible through the LTSmin high-performance model checker. This makes parallel saturation available for a whole collection of asynchronous specification languages. We demonstrated multi-core saturation for Promela and for Petri nets in PNML representation. We carried out extensive experiments on a benchmark of Petri nets from the Model Checking Contest. The total speedup of on-the-fly saturation is 5.9× on 16 cores with the Sloan variable ordering and 7.9× with the Force variable ordering. However, there are many small models (computed in less than a second) in this benchmark. For some larger models we showed an impressive 52× speedup on a 48-core machine. From our measurements, we further conclude that the efficiency and parallel speedup of the BDD variant are just as good as those for LDDs. We compared the efficiency and speedup of saturation versus other popular exploration strategies, BFS and chaining.
As expected, saturation is significantly faster than chaining, which is faster than BFS; this trend is maintained in the parallel setting. Our measurements show that the variable ordering (Sloan versus Force) and the model representation (pre-computed transition relations versus relations learned on-the-fly) both have an impact on efficiency and speedup. Parallel speedup should not come at the price of reduced efficiency. To this end, we compared our parallel saturation algorithm with one worker to saturation in Meddly. Meddly solves fewer models within the timeout but is slightly faster in some cases; parallel saturation quickly overtakes Meddly with multiple workers. Future work could include the study of parallel saturation on exciting new BDD types, like tagged BDDs and chained BDDs [8,19]. The results on tagged BDDs showed a significant speedup compared to ordinary BDDs in experiments in LTSmin with the BEEM benchmark database. Another direction would be to investigate the efficiency and speedup of parallel saturation in other applications, like CTL model checking, SCC decomposition, and bisimulation reduction.
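For contrast with the saturation sketch given earlier, the BFS and chaining strategies used in the comparison above can be rendered roughly as follows. This is again only an assumed sketch reusing the same hypothetical declarations (rel, M, and simplified lddmc_* calls without their projection/meta arguments); it is not the code of Fig. 5.

```c
/* Breadth-first search: in every round, compute the successors of the current
   frontier under all M relations (these relprod calls can run in parallel),
   keep only the states not seen before, and add them to the visited set. */
LDD reach_bfs(LDD initial)
{
    LDD states = initial, frontier = initial;
    while (frontier != lddmc_false) {
        LDD successors = lddmc_false;
        for (int i = 0; i < M; i++)   /* parallelizable across relations */
            successors = lddmc_union(successors, lddmc_relprod(frontier, rel[i]));
        frontier = lddmc_minus(successors, states);
        states   = lddmc_union(states, frontier);
    }
    return states;
}

/* Chaining: apply the relations one after another, feeding the result of each
   application into the next; this usually needs fewer iterations than BFS but
   is inherently more sequential. */
LDD reach_chaining(LDD initial)
{
    LDD states = initial, prev;
    do {
        prev = states;
        for (int i = 0; i < M; i++)
            states = lddmc_relprod_union(states, rel[i]);
    } while (states != prev);
    return states;
}
```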
7,260.4
2019-04-06T00:00:00.000
[ "Computer Science" ]
Interaction of the Integrin β1 Cytoplasmic Domain with ICAP-1 Protein* In a yeast two-hybrid screen, a protein named ICAP-1 (β1 integrin cytoplasmic domain associated protein) associated with the integrin β1 cytoplasmic tail but not with tails from three other integrin β subunits (β2, β3, and β5) or from seven different α subunits. Likewise in human cells, ICAP-1 associated specifically with the β1 but not β2, β3, or β5 tails. The carboxyl-terminal 14 amino acids of β1 were critical for ICAP-1 interaction. ICAP-1 is a ubiquitously expressed protein of 27 and 31 kDa, with the smaller form being preferentially solubilized by Triton X-100. Phosphorylation of both 27- and 31-kDa forms was constitutive but was increased by 1.5–2-fold upon cell spreading on fibronectin, compared with poly-l-lysine. Also, ICAP-1 contributes to β1 integrin-dependent migration because (i) ICAP-1 transfection markedly increased chemotactic migration of COS7 cells through fibronectin-coated but not vitronectin-coated porous filters, and (ii) support of β1-dependent cell migration (in Chinese hamster ovary cells transfected with various wild type and mutant β1 forms) correlated with ICAP-1 association. In summary, ICAP-1 (i) associates specifically with β1 integrins, (ii) is phosphorylated upon β1 integrin-mediated adhesion, and (iii) may regulate β1-dependent cell migration. Integrin-dependent cell adhesion helps to control cell proliferation and apoptosis, as well as cell spreading, migration, morphogenesis, and differentiation (1)(2)(3)(4)(5). Upon cell adhesion, integrin engagement leads to downstream activation of focal adhesion kinase, mitogen-activated protein kinase, and many other key signaling molecules (6). At the same time, integrindependent reorganization of cytoskeletal proteins and signaling complexes facilitates growth factor signaling (7,8). A distinctive property of integrins is that they not only deliver "outside-in" signals upon engagement with ligand but also their function is regulated by "inside-out" signals (9 -12). In this regard, integrin function can be strongly modulated upon overexpression of various oncogenes (13)(14)(15) or upon engagement of various cell-surface receptors with ligands or antibodies (10,16,17). We also have undertaken a yeast two-hybrid screen to identify ␤ 1 tail-associated proteins. In the yeast, we initially identified two candidate ␤ 1 tail-interacting proteins. These were (i) a fragment of RACK1 and (ii) a protein called ICAP-1 (integrin cytoplasmic tail associated protein). Additional yeast two-hybrid studies suggested that the RACK1 interaction was nonspecific. However, the ICAP-1 protein did show specific interaction with the ␤ 1 tail, both in yeast and in human cell lines. Furthermore, the site of ICAP-1 association was mapped to the 14 carboxyl-terminal amino acids of ␤ 1 (which includes an NPXY motif); ICAP-1 phosphorylation was found to be regulated upon cell spreading on fibronectin, and ICAP-1 appeared to play a role in ␤1-dependent cell migration. Yeast Two-hybrid Screening-Yeast genetic screening for proteins that bind to the integrin ␤ 1 cytoplasmic tail was carried out essentially as described (49,50). Integrin ␤ 1 subunit cDNA encoding for carboxylterminal amino acid residues 750 -798 (see Table I) was amplified from full-length ␤ 1 cDNA by polymerase chain reaction (PCR). This PCR product was ligated into plasmid pEG202N to generate the LexAintegrin fusion "bait" plasmid pEG202-␤1. 
Host yeast strain EGY48 (MAT␣, his3, trp1, ura3-52, leu2::pLEU 2-LexAop6, constructed by E. Golemis, Massachusetts General Hospital, Boston) was cotransformed with pEG202N bait and pSH18-34 reporter plasmids to verify that the bait plasmid is itself transcriptionally inert. Also, by using the pJK101 reporter plasmid we confirmed that baits (LexA-integrin cytoplasmic tail fusions) could be expressed inside the nucleus of EGY48 yeast cells and bind to LexA operator. A yeast expression library with a complexity of 10 6 was generated from oligo(dT)-primed cDNA from HeLa (human cervical carcinoma cell line) mRNA. The cDNA was cloned unidirectionally into the EcoRI/XhoI sites of galactose-inducible, TRPϩ yeast expression vector pJG4-5 (constructed by J. Gyuris, Massachusetts General Hospital, Boston). For genetic screening, yeast strain EGY48 was transformed sequentially with pEG202-␤1, pSH18-34, and pJG4-5 using the lithium acetate method (51). Approximately 2 ϫ 10 6 yeast transformants were pooled and subjected to selections as described (49,50). Positive interaction is defined as (i) growth on leucine-deficient, galactose-conditioned medium but not on leucine-deficient, glucose-conditioned medium, and (ii) forming blue colonies on galactose X-gal plates but not on glucose X-gal plates. Plasmid DNAs from positive colonies were rescued using Escherichia coli KC8. Retransformation of EGY48 with prey plasmid DNA, pSH18-34, and pEG202N-␤1cyto plasmid DNA was done to confirm the interaction. As described above for pEG202-␤1, other bait plasmids were constructed to encode for the various integrin cytoplasmic domains listed in Table I. Also constructed were bait plasmids containing chimeric ␤ 1 /␤ 5 integrin cytoplasmic domains (bottom of Table I). Cloning of Full-length ICAP-1 cDNA-A human HeLa S3 5Ј-stretch plus cDNA library in bacterial phage lambda gt11 vector (CLONTECH, Palo Alto, CA) was plated and transferred to Colony/Plaque Screen membrane (NEN Life Science Products) according to the manufacturer's protocol. The library was screened by in situ hybridization (52) with 32 P-labeled insert as a probe (obtained from Clone No. 4, Fig. 1). After an additional two rounds of purification, eight positive lambda phage clones were obtained, and the cDNA inserts were sequenced using a double-stranded DNA cycle sequencing kit (Life Technologies, Inc.). For stable eukaryotic expression, we used the pECE vector containing a full-length ␤ 1 insert (54) which is designated here as pECE-␤ 1 cyto1.1. The pECE-␤ 1 cyto5.5 construct codes for wild type ␤ 1 extracellular and transmembrane domains fused to the ␤ 5 cytoplasmic domain (25). Additional ␤ 1 /␤ 5 cytoplasmic tail exchange mutants, in the pECE vector, were generated by sequential PCR (52). These contained ␤ 1 extracellular and transmembrane domains, fused to ␤ 1 , ␤ 5 , or ␤ 1/5 chimeric cytoplasmic tails listed at the bottom of Table I. CHO cells negative for dihydrofolate reductase gene (dhfrϪ) were grown in MEM ␣ϩ medium with 10% FCS, and then switched to MEM ␣Ϫ with 10% dialyzed fetal calf serum (JRH Biosciences, Lenexa, KS) after transfection. The dhfrϩ p901 vector (55) was provided by Dr. M. Rosa (Biogen Co., Cambridge, MA). For transfection, CHO cells were electroporated with a mixture of p901 dhfrϩ plasmid DNA and pECE-␤ 1 or -␤ 1 /␤ 5 mutant plasmid DNA at a ratio of 1 to 10, using a gene pulser (Bio-Rad) set at 280 V and 960 microfarads. 
Growth of transfected CHO cells in MEM ␣Ϫ medium and selection of positive clones by flow cytometry were carried out as described (25). The CHO-␤ 1 and -␤ 1 /␤ 5 mutants were selected to have comparable surface expression levels as measured by flow cytometry using anti-␤ 1 mAb A-1A5. For eukaryotic transient expression, full-length ICAP-1 cDNA was ligated in frame into a modified pMT2HA vector (56) to form pMT2HA-ICAP-1, which encodes for the influenza hemagglutinin (HA) antigen epitope just upstream of ICAP-1. Calcium phosphate (52) was used to transiently transfect pMT2HA-ICAP-1 into HEK293 or COS7 cells, and cells were analyzed after 48 h. Immunofluorescence analysis revealed that ICAP was typically expressed in 20 -40% of HEK293 cells, and transfection into COS7 was at least as efficient. In Vitro Immunoprecipitation and GST Fusion Protein Assays-The HEK293 cell line was labeled with ϳ0.15 mCi [ 32 P]orthophosphate (10 mCi/ml) in sodium phosphate-deficient DMEM supplemented with 10% dialyzed FCS and antibiotics. Labeling was typically begun at 48 h after HEK293 cell transfection and continued for 3 h unless otherwise indicated. For immunoprecipitation, cells were lysed either in Triton X-100 buffer (1% Triton X-100, 25 mM HEPES, pH 7.4, 150 mM NaCl, 5 mM MgCl 2 , 2 mM phenylmethylsulfonyl fluoride, 20 g/ml aprotinin, and 10 g/ml leupeptin) or in RIPA buffer (Triton X-100 buffer supplemented with 0.2% SDS and 1% deoxycholate) at 4°C for 1 h. After centrifugation at 12,000 rpm for 10 min, soluble material was precleared by incubation with normal rabbit serum immobilized on protein A-Sepharose 4B beads (Amersham Pharmacia Biotech) at 4°C for 1 h. Next, immune complexes were collected on protein A beads already pre-bound with rabbit anti-ICAP-1 antibodies and washed four times with cell lysis buffer. Immunoprecipitated proteins were separated on SDS-polyacrylamide gel electrophoresis, and then dried gels were exposed with Tail sequences from each integrin subunit include residues from the end of the transmembrane domain to the carboxyl terminus. Amino acid sequence derived from the ␤ 1 tail is shown in bold. b These peptides were also incorporated into GST fusion proteins. c Besides being tested in yeast, the listed carboxyl-terminal domains, together with ␤ 1 extracellular and transmembrane domains, were tested in the context of intact integrin (Figs. 6 and 8C). For GST fusion protein assays, HEK293 and CHO cell lysates (prepared as above) were incubated with glutathione-conjugated Sepharose beads (Amersham Pharmacia Biotech) for 1 h to remove background binding material. Lysates were then incubated overnight at 4°C with GST or GST fusion proteins pre-bound to glutathione-conjugated Sepharose beads. Beads were then washed three times with lysis buffer, and bound proteins were eluted in Laemmli sample buffer and subjected to SDS-polyacrylamide gel electrophoresis under reducing conditions. Separated proteins were electrophoretically transferred to nitrocellulose membranes (Schleicher & Schuell) at 4°C overnight. Membranes were blocked with 5% fat-free dried milk in PBS/Tween 20 buffer at 25°C for 1 h and then sequentially blotted with specific mAb and horseradish peroxidase-conjugated goat anti-mouse IgG antibody, followed by four washes (15 min each) with PBS/Tween 20 buffer after each blot. Proteins were visualized using Renaissance chemiluminescent assay (NEN Life Science Products). 
Cell Migration Assay-Migration assays were performed essentially as described (25), using 96-well chambers and framed polycarbonate filters with 8-m pores (Neuroprobe, Cabin John, MD). Filters were spotted with fibronectin, vitronectin, or poly-L-lysine diluted in 0.1 M NaHCO 3 , allow to dry, rinsed with PBS, and assembled with matrixside down in the chamber. Lower wells of the chamber contained 33 l of MEM ␣ medium (for CHO cells) or DMEM (for COS7 cells) with 10% FCS, unless indicated otherwise. Cells harvested in PBS with 2 mM EDTA were labeled using BCECF-AM (2Ј,6Ј-bis(2-carboxyethyl)-5(6)carboxyfluorescein acetoxymethyl ester; Molecular Probes, Eugene, OR) for 30 min, pelleted, and resuspended at 3 ϫ 10 5 cells/ml in 1% FCS (for CHO cells) or 0.1% FCS (for COS7 cells). After no preincubation (COS7 cells) or with anti-hamster ␣ 5 ␤ 1 mAb PB1 for 30 min on ice (CHO cells), cells (suspended in 100 l) were added to upper wells of the chamber and allowed to migrate at 37°C for 4 h. After migration, cells attached to the upper side of the filter were mechanically removed by scraping, and cells on the lower side were quantitated using a Cytofluor 2300 fluorescence measurement system (Millipore Corp., Bedford, MA). Percent cell migration equals: (cell fluorescence on filter with matrix coating Ϫ control cell fluorescence on filter without matrix)/(total fluorescence of input cells) ϫ 100. Yeast 2-Hybrid Selection and Cloning of ␤ 1 Integrin Tailbinding Proteins-A HeLa cell library was expressed in 2 ϫ 10 6 yeast transformants, selection for interaction with the integrin ␤ 1 cytoplasmic tail protein was carried out, and 25 positive clones were obtained. Among these clones, 9 coded for protein fragments that included the carboxyl-terminal half of the receptor of activated protein kinase C (RACK1) protein. The carboxyl-terminal half of RACK1 interacted strongly with ␤ 1 , weakly with ␤ 5 , and not at all with ␤ 2 or ␤ 3 integrin tail bait proteins. However, yeast 2-hybrid analyses also revealed interactions between the RACK1 carboxyl-terminal fragment and integrin ␣ V and ␣ 4 cytoplasmic tail bait proteins (Table II). Because the ␣ V and ␣ 4 tail sequences show no obvious similarity to the ␤ 1 tail (Table I), the RACK1 interactions appeared to be nonspecific and were not pursued further. Another 7 of the initial 25 positive clones coded for a related group of polypeptides, with identical carboxyl termini but variable amino termini (Fig. 1). These results suggest that regions essential for interaction with the integrin ␤ 1 tail reside within the 162 residues present in the shortest clones (clones 4 and 5). Two of the polypeptide sequences contained divergent amino termini (clones 6 and 7), which did not appear in full-length clones (as obtained below), and thus may be cloning artifacts. Open reading frames coding for the longest polypeptides did not include a methionine start site. Thus, to obtain a full-length sequence for the ICAP-1 protein, we used a cDNA probe corresponding to clone 4 to screen a bacteriophage lambda gt11-HeLa cell cDNA library. The resulting sequence contained a putative ATG start codon, just upstream of the sequence represented in clone 1. This methionine is present in a near consensus translation initiation sequence (57) and is located downstream of stop codons in all three frames, suggestive of an authentic start codon (Fig. 2). 
The full-length ICAP-1 consists of 200 amino acids and is rich in serine (16%), with the amino-terminal 50 amino acids containing 19 serine residues. There are three possible protein kinase C phosphorylation sites at Ser-20 and Ser-46 and Ser-197 (58), and one cAMP or cGMP-dependent protein kinase phosphorylation site at Ser-10 (59). No signaling motifs or domains, such as SH2 or SH3, were found in ICAP-1. Subsequent to our isolation of ICAP-1, an identical protein was described and named "ICAP-1" (44). Also, an unpublished sequence coding for the mouse homologue of ICAP-1 has appeared in GenBank TM (accession number AJ001373). Interaction of ICAP-1 with the Integrin ␤ 1 Cytoplasmic Tail Is Highly Specific-In the context of the yeast two-hybrid system, ICAP-1 (as a pJG4.5-ICAP-1 prey construct) interacted strongly with the ␤ 1 tail but failed to interact with the integrin ␤ 2 , ␤ 3 , or ␤ 5 tails (Table II). Also ICAP-1 did not associate with 7 different integrin ␣ chain tails ( Table II). All of the pEG202encoded bait proteins containing integrin ␣ or ␤ chain cytoplasmic domains were able to bind to LexA but by themselves were transcriptionally inert, thus they meet the criteria for bait constructs suitable for study in the two-hybrid system. In other yeast two-hybrid experiments, ICAP-1 failed to interact with additional bait proteins including phosphatidylinositol 3-kinase 85-kDa subunit, Max, v-Myc, p300 CH3 domain, CD2 cytoplasmic domain, and the LAR phosphatase cytoplasmic domain. To determine the subregion of the ␤ 1 cytoplasmic domain that is critical for ICAP-1 association, we utilized bait plasmid pEG202 to synthesize chimeric ␤ 1 /␤ 5 cytoplasmic tail mutants (listed in Table I, bottom). In yeast, both the wild type ␤ 1 tail (cyto.11) and the cyto.51 chimera showed strong interaction with ICAP-1, whereas cyto.15 and cyto.55 did not. Tissue Expression and Biochemical Features of ICAP-1-Northern blotting showed that ICAP-1 mRNA is present in nearly all human tissues (Fig. 3). It was highly expressed in heart, colon (mucosal lining), skeletal muscle, and small intestine, barely detectable in liver, and present at intermediate levels in all other tissues. The major ICAP-1 transcript was 1.2 kilobase pairs, with variable amounts of another form at ϳ1.8 kilobase pairs. Anti-ICAP-1 antiserum immunoprecipitated a protein of The clone B1-7 encoding RACK1 141-317 residues was used as a prey. b Positive interaction denotes growth of blue colonies only on X-gal indicator plates lacking leucine. c ␤-Galactosidase (␤-Gal) activity is the mean of four independent measurements. FIG. 3. Tissue expression of ICAP-1 mRNA. Filters containing mRNA from multiple human tissues (CLONTECH) were used for Northern blotting according to manufacturer's instructions. ICAP-1 cDNA probe was prepared by EcoRI/XhoI digestion from pJG4.5 vector and labeling with [␣-32 P]dCTP using RadPrime DNA kit (Life Technologies, Inc.). After stripping of the ICAP-1 probe, filters were rehybridized with 32 P-labeled human actin cDNA. kb, kilobase pair; PBL, peripheral blood lymphocyte. ϳ27 kDa from Triton X-100 lysate of 35 S-labeled ICAP-1-transfected HEK293 cells that was not obtained from mock-transfected cells and was not seen using preimmune rabbit serum (not shown). A protein of ϳ27 kDa was also obtained upon anti-HA Western blotting of HA-tagged ICAP-1 from Triton X-100 lysate of ICAP-1-transfected COS7 cells (data not shown). 
From A431 cells, HeLa cells, Jurkat cells, human endothelial cells, and human fibroblasts lysed in RIPA buffer, anti-ICAP-1 antiserum blotted endogenously expressed ϳ27and ϳ31-kDa proteins, with the latter being particularly prominent in endothelial cells and fibroblasts (not shown). Whereas the ϳ27-kDa protein was routinely observed when using mild detergent extraction (e.g. Fig. 4, lanes 3 and 4), visualization of the ϳ31-kDa protein was enhanced by use of stringent detergent lysis conditions (Fig. 4, lanes 7 and 8). Analysis of the Triton X-100 pellet revealed that the ϳ31-kDa protein was indeed retained in the Triton-insoluble fraction (compare lanes 9 and 10). In contrast, analysis of the RIPA-insoluble fraction (lane 12) indicated that the majority of both 27-and 31-kDa proteins had already been extracted (lane 11). Adhesion to fibronectin (in comparison to cell suspension) did not alter the appearance of either form of ICAP-1 protein (Fig. 4, compare lanes 3 and 4 and 7 and 8). In a reciprocal experiment we next analyzed binding of solubilized ␤ 1 integrin to immobilized GST-ICAP-1. First, CHO cells were transfected to stably express wild type or mutant human ␤ 1 subunits. In each case, the ␤ 1 extracellular and transmembrane domains were present, whereas the cytoplasmic tail was either unaltered or fully or partly exchanged with regions of the ␤ 5 tail (See Table I, bottom, for sequences). Upon incubation with GST-ICAP-1 fusion protein, selective binding of wild type ␤ 1 (␤ 1 cyto.11) and ␤ 1 cyto.51, but not ␤ 1 cyto.55 or ␤ 1 cyto.15, was observed (Fig. 5B), as detected by Western blotting with anti-human ␤ 1 mAb A-1A5. No ␤ 1 was found to associate with immobilized GST control protein (not shown). Wild type human ␤ 1 and various tail mutants were present in CHO cells at comparable levels as seen by cell-surface flow cytometry (Fig. 6) and also as indicated by blotting with antihuman ␤ 1 mAb A-1A5 (not shown). Regulation of ICAP-1 Phosphorylation-Because of the high serine composition and putative protein kinase C phosphorylation sites, we tested whether ICAP-1 might be phosphorylated. First, ICAP-1-transfected HEK293 cells were incubated with [ 32 P]orthophosphate for 2 h while in suspension, and for another 1 h while spreading, prior to lysis using RIPA buffer. Then, anti-ICAP-1 antibody was used to immunoprecipitate phosphorylated proteins of ϳ27 and 31 kDa from 32 P-labeled 1-8). Alternatively, HA-ICAP-1-HEK293 cells were lysed (with Triton or RIPA) in suspension at 4°C for 30 min (lanes 9 and 11), and insoluble materials were further solubilized in Laemmli sample buffer (lanes 10 and 12). After separation by SDS-polyacrylamide gel electrophoresis, Western blotting was carried out using anti-HA mAb 12CA5. HEK293 cells (Fig. 7A, lanes 3 and 4). These proteins were not precipitated using preimmune serum or from mock-transfected cells (lanes 1, 2, and 5-8). Notably, phosphorylation was enhanced by ϳ2-fold for the 27-kDa protein, and 1.5-fold for the 31-kDa protein when ICAP-1-transfected HEK293 cells were spread on FN (lane 4) compared with poly-L-lysine (lane 2). In contrast, the level of phosphorylation of background proteins was unchanged as determined by comparison of protein band densities (FN/PLL ratios ϭ 1.0). A long exposure of Fig. 7A confirmed that none of the many phosphorylated non-ICAP-1 proteins (including Control Band 1 and Control Band 2) were altered. 
In a separate experiment (not shown), phosphorylation of 27-and 31-kDa ICAP-1 proteins was again increased (by 1.8and 2.0-fold), respectively, upon adhesion to fibronectin compared with PLL. Again, phosphorylation of all other (non-ICAP-1) proteins was unchanged. A report elsewhere (44) has suggested that the more slowly migrating form of ICAP-1 may represent a phosphorylated form of the protein that may appear at elevated levels upon cell adhesion and spreading on fibronectin for 15 or 30 min (44). Thus, to supplement our results obtained upon adhesion to fibronectin for 1 h (Fig. 7A), we analyzed additional time points (Fig. 7B). At no time point from 15 to 120 min did we observe that the slowly migrating form of ICAP-1 (ϳ31 kDa) was highly phosphorylated relative to the 27-kDa protein, even though the 31-kDa protein was well represented (e.g. see Fig. 4). Indeed, under identical extraction conditions, the 31/27-kDa ratio was 0.42 in terms of total protein but only 0.13 in terms of phos-phorylated protein. ICAP-1 May Contribute to Cell Migration-In further experiments, COS7 cells transiently transfected with ICAP-1 were found to undergo increased transwell migration, when the FCS chemoattractant was held constant at 10% and different FN levels were coated onto the underside of the filter (Fig. 8A, left panel). Also, ICAP-COS7 cells showed preferential migration compared with Mock-COS7 cells when FN coating was held constant at 10 g/ml and different FCS chemoattractant levels were used (Fig. 8A, right panel). Although ICAP-1 caused an elevation of ␤ 1 -dependent migration on fibronectin, it did not alter ␤ 1 -independent migration on vitronectin, as seen in two separate experiments (Fig. 8B, right and left panels). Because COS cells express moderate to high amounts of ␣ V and ␤ 5 , but little ␤ 3 , we suspect that vitronectin-dependent migration is largely mediated by ␣ V ␤ 5 . CHO transfectants stably expressing comparable surface levels of human wild type or chimeric ␤ 1 (see Fig. 6) were also tested for migration. The assay was performed in the presence of anti-hamster ␣ 5 ␤ 1 mAb PB1 to block the contribution of endogenous hamster ␣ 5 ␤ 1 (Fig. 8C). The CHO-␤ 1 .cyto11 and -␤ 1 .cyto51 transfectants showed substantially more migration than either the CHO-␤1.cyto55 or -␤1cyto1.5 transfectants. This differential migratory behavior precisely coincides with the differential abilities of these mutants to bind to ICAP-1 (e.g. as seen in Table I and Fig. 5B). In the absence of 10% FCS as a chemoattractant, none of the cells showed very much migration (not shown). DISCUSSION Specific Association of ICAP-1 with Integrin ␤ 1 Tail-Here we have identified and characterized ICAP-1, a 200 amino acid phosphoprotein specifically associating with the ␤ 1 integrin tail. Interaction seen in a yeast two-hybrid assay was confirmed in reciprocal experiments using ICAP-1-and ␤ 1 integrin-transfected human and hamster cell lysates. In both systems the association was highly specific. In both mammalian cell lysates and in yeast, replacement of the carboxyl-terminal 14 amino acids of ␤ 1 with the carboxyl-terminal 24 amino acids of ␤ 5 resulted in loss of ICAP-1 association. Conversely, the reciprocal exchange (␤ 5 tail with terminal 14 residues from ␤ 1 ) allowed strong association. Thus the carboxyl-terminal "SAVT-TVVNPKYEGK" sequence in ␤ 1 is required for ICAP-1 interaction. 
While this work was in progress, another group de-scribed ICAP-1 as a protein that associated selectively with the integrin ␤ 1 tail (44). Consistent with results shown here, residues critical for ICAP-1 association resided within the carboxyl-terminal 13 residues of the ␤ 1 tail (44). Association of ␤ 1 Tail with RACK1?-In another report, the carboxyl-terminal portion of RACK1 was isolated by a yeast two-hybrid approach and suggested to interact specifically with integrin ␤ 1 , ␤ 2 , and ␤ 5 tails (43). We also obtained a carboxylterminal fragment of RACK1 upon yeast two-hybrid screening but did not study it further due to an apparent lack of interaction specificity. Although it is still possible that RACK1 could specifically participate in integrin functions, future studies will need to explain its ability to bind to multiple peptide sequences that are seemingly unrelated. Distribution and Size of ICAP-1-Northern blotting showed FIG. 7. Effects of cell adhesion on ICAP-1 phosphorylation. A, ICAP-1-HEK293 and mock-HEK293 cells were incubated with [ 32 P]orthophosphate while in suspension for 2 h. Then these cells were plated on plastic surfaces that had been pre-coated with either 10 g/ml fibronectin (Life Technologies, Inc.) or 10 g/ml PLL (Sigma) and blocked with 0.1% heat-inactivated bovine serum albumin (Sigma). An additional incubation with [ 32 P]orthophosphate (in phosphate-free DMEM) was then carried out for 1 h at 37°C while cells were adhering and spreading. Cells were then washed, lysed in RIPA buffer, and immunoprecipitated as described under "Experimental Procedures," using the indicated antibodies. The indicated ratios (FN/PLL) were determined using integrated density values (AlphaImager 2000 Documentation & Analysis System, Alpha Innotech Co., San Leandro, CA) for phosphorylated protein bands obtained from cells on fibronectin and poly-L-lysine. B, ICAP-1-transfected HEK293 cells were incubated as in A, except that adhesion to fibronectin was carried out for various periods. that the ICAP-1 protein is widely expressed in many human tissues, as previously shown for the ␤ 1 integrin subunit. Also by Western blotting, ICAP-1 was ubiquitously expressed in most cultured cell lines. It is not yet clear whether the appearance of RNA of two different sizes (1.8 and 1.2 kilobase pairs) represents alternative splicing or different polyadenylation sites as previously suggested (44). Chang et al. (44) described an apparent alternatively spliced 16-kDa form of ICAP-1 (ICAP-1␤) that lacks amino acids 128 -177 and does not interact with integrin ␤ 1 cytoplasmic domain. We did not observe such a form while screening for full-length ICAP-1, possibly because we screened a different cDNA library. In analyses of several cell lines, and cells transfected with ICAP-1 cDNA, we detected major (ϳ27 kDa) and minor (ϳ31 kDa) ICAP-1 proteins, with the latter only being seen using stringent detergent conditions. Both the ϳ27 and ϳ31-kDa proteins incorporated 32 P label, with phosphorylation of the more rapidly migrating ϳ27-kDa form being particularly prominent. Elsewhere, it was suggested that the more slowly migrating form of ICAP-1 may be preferentially phosphorylated, because it disappeared upon incubation of lysate in the absence of phosphatase inhibitors (44). Our direct phosphorylation results contradict that conclusion. 
We cannot explain why phosphatase inhibitors may have facilitated the maintenance of the more slowly migrating form, except to suggest that this effect may be indirect and possibly involve other components in the cell lysate. At present, the biochemical basis for the larger size of the 31-kDa ICAP-1 protein and its relative resistance to detergent extraction (compared with the 27-kDa protein) are not clear. Elsewhere it was also shown that appearance of the larger ICAP protein form was favored upon cell adhesion to fibronectin, whereas it was greatly diminished when cell matrix interaction was disrupted (44). We did not observe an adhesion-dependent change in levels of either the 27-or 31-kDa form of ICAP-1 (e.g. see Fig. 4). This disparity possibly could be explained by our use of a human embryonic kidney cell line (HEK293) instead of the osteosarcoma cell line (UTA-6) used in the other study. Functional Relevance of ␤ 1 Tail Association with ICAP-1-Association of the ␤ 1 tail with ICAP-1 may be relevant for multiple reasons. First, phosphorylation of both 27-and 31-kDa forms of ICAP-1 was selectively promoted upon cell adhesion and spreading on fibronectin but not on poly-L-lysine. Thus, ICAP-1 phosphorylation appears to be regulated during the outside-in signaling that occurs upon integrin engagement with ligand. In future studies, it will be important to place ICAP-1 phosphorylation into the context of established integrindependent signaling events, such as the phosphorylation of focal adhesion kinase, paxillin, and other downstream targets (6). A previous report suggested that constitutively activated RhoA might down-regulate ICAP-1 phosphorylation (44). In contrast, we found that RhoA.V14 transfection into NIH3T3 cells caused no elevation in ICAP-1 phosphorylation (not shown). This discrepancy is perhaps easily explained, considering (as discussed above) that Chang et al. (44) appear not to have been actually measuring ICAP-1 phosphorylation. Second, ICAP-1 interactions with the integrin ␤ 1 tail may support cell migration. In one set of experiments, expression of ICAP-1 in COS7 cells was associated with increased ␤ 1 -dependent cell migration on fibronectin but not ␤ 1 -independent migration on vitronectin. In another set of experiments, the carboxylterminal amino acids within the ␤ 1 tail that were needed for ICAP-1 association were also required for enhanced cell migration. The carboxyl terminus of ␤ 5 could not substitute for ␤ 1 to support migration. These results may help to explain previously noted differences between the integrin ␤ 1 and ␤ 5 tails in terms of supporting cell migration (25). Other functions known to require the carboxyl-terminal 14 amino acids of ␤ 1 could potentially also involve ICAP-1. For example, the carboxyl-terminal "SAVTTVVNPKYEGK" sequence in the ␤ 1 tail includes amino acids (Thr-788, Thr-789, Asn-792, and Tyr-795 in human ␤ 1 ) that help to regulate integrin affinity for ligand (33) and amino acids (Asn-792 and FIG. 8. ICAP-1 effects on cell migration. Migration that was both chemotactic and haptotactic was carried out as described under "Experimental Procedures." A, migration of ICAP-transfected COS7 cells was determined using porous filters coated on the underside with various doses of fibronectin with 10% FCS in the lower well (left panel), or using 10 g/ml fibronectin with different doses of FCS in the lower well (right panel). B, filters were coated with either 10 g/ml fibronectin or 10 g/ml vitronectin, with 10% FCS in the lower well. 
Two separate experiments were carried out on different days, using cells derived from different transfections. In each experiment, a common pool of transiently transfected COS7 cells was divided for testing on the two different substrates. In each of the seven experiments on fibronectin (A and B), ICAP-COS7 migration was significantly greater than Mock-COS7 migration (two-tailed p value Ͻ 0.008). Migration on vitronectin was not significantly different. C, migration of CHO-␤ 1 and -␤ 1/5 transfectants was determined using 10 g/ml fibronectin coating and 10% FCS in the lower well, with 1% FCS in the upper well. For all migration experiments (A-C) each data point represents the mean Ϯ S.D. from six replicates. Tyr-795) required for cytoskeletal association (32). Deletion of the carboxyl-terminal 13 amino acids results in loss of integrin co-localization with talin, ␣-actinin, and focal adhesion kinase (60). Furthermore, alternatively spliced ␤ 1 B (21) and ␤ 1 C (61) tails lack the critical carboxyl-terminal residues present in ␤ 1 A and thus should not bind to ICAP-1. This could at least partially explain why functions of ␤ 1 A are markedly different from the functions of ␤ 1 B or ␤ 1 C (21,23). In this regard, expression of ␤ 1 B in CHO cells resulted in a severe reduction of cell motility on fibronectin (62), analogous to the diminished motility of our non-ICAP-1 binding ␤1cyto.15 mutant. Other ␤ 1 Tail-associated Proteins-At present, the integrin ␤ 1 tail has been suggested to interact directly with cytoskeletal proteins (␣-actinin (38), talin (39,40), filamin (40), and paxillin (37,41)), protein kinases (focal adhesion kinase (37), integrinlinked kinase (42)), and other proteins (RACK1 (43) and ICAP-1 (44)). Interestingly, the ICAP-1 protein shows no similarity to any of these other proteins, and as far as we are aware, the ICAP-1 interaction is the only one to map to the carboxyl-terminal 14 amino acids of the ␤ 1 tail. In addition, the ICAP-1 protein is completely distinct from proteins that may interact with other integrin ␤ tails, such as cytohesin-1 (63) and ␤ 3 -endonexin (64). Despite the growing list of integrin cytoplasmic tail-associated proteins detected by yeast two-hybrid screening (65), few if any of these associations have yet been independently established by more than one laboratory. A strength of the current report is that now the ICAP-1 interaction with the ␤ 1 tail has been independently determined, by at least two distinct laboratories, upon screening of two different cDNA libraries. In conclusion, we have demonstrated a direct and specific interaction between the widely expressed ICAP-1 protein and the widely expressed integrin ␤ 1 A cytoplasmic tail. Furthermore, we provide evidence that the integrin-ICAP-1 interaction may be relevant to cell migration and adhesion and spreading functions carried out by ␤ 1 integrins, and we suggest that phosphorylation of ICAP-1 could play a role in these events.
7,364
1999-01-01T00:00:00.000
[ "Biology" ]
Proposed System of New Generation LMS Using Visual Models to Accelerate Language Acquisition Language skill is a rule-like operation, based on generalized connections. The main property of language skills is awareness. They are formed with conscious mastery of the language means of communication (phonetic, lexical and grammatical). Language acquisition is a complex process which includes a large number of different parameters. Therefore, the study and improvement of language learning and teaching require creative collaboration between experts from different domains. We propose to transfer knowledge about the structure of the language from the verbal to the visual form, thereby creating the opportunity to use them as an indicative basis for planning, managing, controlling and correcting the training of primary language skills both by the teacher and by the student himself. The proposed method allows to develop the ability to organize sentences to convey meaning by means of Visual Models and to describe the sequential steps to choose the most effective ways for working with audio to improve listening skills. The use of visual tools for the analysis of abstract systems and theories makes it possible to discover new patterns and simplify their understanding. The aim of our study is to put principles for building a new effective system using new methods for acquiring language skills. Introduction This paper is an extension of work originally presented in the Sixth International Conference on Digital Information, Networking, and Wireless Communications (DINWC), 2018, Beirut, Lebanon [1]. It presents the use of new methods to facilitate teaching and learning. The new results were obtained by using a Structural Visual Method and Visual Models and tools of systems analysis and information technologies. Distinguishing feature of this method is using colors to encode meanings in structural diagram. The 21st century is the century of the dominance of information and information technology. They are rapidly conquering the world, penetrating into all spheres of human activity, which led modern society to a general historical process called Informatics. This process consists in the free access of any citizen to information, penetration of information technologies into scientific, industrial, public spheres, high level of information services. To accelerate the acquisition of language for adults requires a new approach that differs about the traditional goal-setting, the selection of new education type, and is realized in a practical way. Our research singled out among the teaching methods "central" -a support for the language and gives a practical experience of adult learners who, in the process of learning, analyze and understand, generalize and evaluate it. This is the basis for selecting the content of education, the choice of new methods and the organization of the pedagogical process as a whole. under study, but "minus" in the existing stereotypes of thinking, established ideas that impede the perception of the new not appropriate. The effectiveness of the adult education process will be high when a person is put in the position of a researcher who independently searches for a solution and is able to coordinate it with others. The goal of this article is to use the visual modeling system that helps you to learn how to build English sentences without thinking about rules and theoretical aspects. 
This work is under way to introduce this approach into training applications, programs and new type of LMS. The visual model comes here to the rescue, allowing you to analyze English grammar not in words, but with the help of models, visual images. This model is abstract forms, colors, arrows, images. They are similar to the subway scheme. All schemes allow you to control meanings using a visual system that is much more powerful than a verbal one. This method helps to understand the grammatical of English. In just one session you get a holistic view of the dynamics of English grammar. And this is important for an adult. Of course, this does not exclude the need for intensive training. But now it becomes clear what to train. In addition, today this article has a unique grammar training system, similar to which not met anywhere. This paper is organized as follows, Section 2 we do the analysis of the research methods about possible training to get a new proposed way of competency. Section 3 we display the methods for improving adult's foreign language skills as well as the methodology of the study. Sections 4 we create the structure of new LMS. Section 5 we describe a stage of designing and development a new system for adults. The conclusion is discussed in the last section. Research of possible training ways Analysis of the model of the psyche [2], in spite of its simplicity, allows us to see interesting patterns. The first conclusion is that the only thing that a person is able to manage and what it is possible to teach is activities and ways of doing it. As Galperin [3] showed, the training of mental activity necessarily includes the stage of its implementation in the form of external, physically performed actions. Therefore, the acquisition and assimilation of information is only an intermediate stage in the process of forming the skills of performing certain activities and should not be an end in itself of the learning process, as often happens. The second conclusion -motivation and interest are the most important component of the learning process, and without them the process cannot be effective. The following is that human behavior is controlled only by a small degree of consciousness and thinking, and there are at least several different behavioral control loops, often unconscious and inaccessible to direct impact and change. The human psyche is a very complex and complex phenomenon, and any of its models, including the most detailed and complex, cannot reflect all its properties and ensure full compliance. Therefore, several simple models can bring more practical benefits than attempts to create a complete and all-encompassing model. Let's consider known models of obtaining skills (competence) and trace the ways of their obtaining on the model of the psyche. The model (Figure 1) for obtaining unconscious competence (skill), (usually attributed to Abraham Maslow [4], which is not true). It is widely known in the online learning environment and motivational and psychological training. It is structurally identical with the model of the psyche and is directly derived from it, and vice versa [2]. 1. A person observes the activities of other people, but in the absence of motivation he does not realize the need for such a skill for himself. 2. Awareness of the need causes emotion and the desire to receive this skill. 3. A person studies information and acquires knowledge, which leads to the ability (conscious competence) to perform the necessary actions. 4. 
Training and practice lead to the formation of a skill that allows performing actions without conscious control, automatically. Practical applications for real problems lead to better skills and increased skill. This is the classical way of conscious learning. It is declared in most educational systems and organizations, but, unfortunately, in the absence of motivation for students, a formal approach to the subject of instruction and a minimum of practice, it degenerates into a formal educational path typical for formal educational structures (path 1-3 in Figure 1, shown by the black arrow). The knowledge obtained in this way is not supported by practice, causes a minimum of interest and does not imply subsequent practical use. Therefore, after a formal check and receipt of marks, they are usually very quickly forgotten, and this all ends. In Figure 1, we can observe other ways that lead to the desired skill, and correlate them with known or new training approaches. So, direct way 1-4 is a way of imitation, copying. Very simple and effective, as it involves specially designed for this brain mechanisms -mirror neurons. Unfortunately, it is used mainly only with young children and primary school students. Although in some areas it can give better and faster results in comparison with the theoretical approach and for adults. The path of imitation, reinforced by motivation and interest (1-2-4), we called modeling. It is well represented in the well-known classical studies (for example, in the concept of social training Bandura [5]). Path 2-4 is a path of trial and error, possible in the absence of sources of information and a sample for modeling. He is also the path of insight or insight in the implementation of new, previously non-existent activities. Also on the scheme, you can observe more exotic, but at the same time more efficient ways. For example, as in the approach of Galperin [3] 2-1-4-3-4: motivation -demonstration -independent execution in reality -pronouncing out loud -talking to oneselffull automation of the skill. Of special interest is the path of visual modeling proposed by the authors of this article [6] (1-2-1-4). In this case, the verbal, abstract description of the mode of activity is replaced by a schema, a visual model, which serves as an indicative basis for the activity being mastered see Figure 2. Such an approach has recently been rapidly gaining popularity in connection with the development of computer systems, visual interfaces, visualization tools, infographics and methods of visual thinking. Methods for Improving Adult's Foreign Language Skills The objective of this project is to elaborate a new inductive methodology of language skills on the basis of the Structural-Visual method (SVM) and the Visual-Auditory Shadowing method (Nakayama & Mori, 2012) [7]. The Structural-Visual Method is a new inductive language learning methodology, based on mapping of the structure of linguistic knowledge in a graphic form using color to encode the most common patterns [2]. The models thus obtained replace the textual explanations (rules) in the formation of the corresponding skills. The Visual-Auditory Shadowing method (VAS) is also an inductive language learning methodology which facilitates learning of phonological knowledge (pronunciation) and ideographical knowledge (spelling) [8]. Improving listening skills Shadowing is the best method to develop a phonological loop to form the skills of understanding by using communication of auditory images with speech movements. 
according to Tamai (2005) [9], Shadowing is "an act or a task of listening in which the learners track what he/she heard in speech and repeats it as accurately as possible while listening attentively to the in-coming information (p.34)", the shadowing method is effective in improving phonological loop process, and it leads to improvement in listening skills. VA shadowing method is a combination of auditory shadowing and visual shadowing. Auditory shadowing requires so-called online processing of auditory input, which requires listening and repetition at the same time. Visual shadowing requires learners to read text aloud employing online processing [10] according to Nakayama (2017), VA shadowing method better improves learners' listening skills, compared to shadowing method alone. By using auditory images allow one to understand different voices and accents, it is played an important role to implement the concept of "Repetition without repetition", where each presentation of the same material is produced by different voices, with different speed, tone, intonation and accent, is of interest. For more rapid formation of sound filters and pronunciation patterns, it is proposed to use the technologies Textto-Speech, voice recognition, audio file decoding and other achievements of computer technology. The need for multiple repetitions can be used to simultaneously develop both speech skills and linguistic competencies. For this purpose, a combination of the VA shadowing technique, enhanced by the use of computer technologies, and the technique of Structural-Visual modeling. Improving language skills In the study of language, the language is the subject of research, and at the same time the research tool, which is already methodologically incorrect. The researcher is produced in the native language of the researcher, and the grammar structure of this language inevitably mediates the way and structure of his thinking. Therefore, language implicitly is also a method of investigation. The result of the research is usually a scientific publication, which is a language product -text. Thus, in the study of thinking, language, language activity and teaching methods, language is also a subject of activity, an instrument of activity, a mode of activity and the result of activity. What cannot but lead to confusion, contradictions and excessive complexity due to logical closure and looping. To eliminate this confusion, it is suggested to make a description of the structure of the language from the sphere of the same language and apply the Visual Metalanguage. Moreover, information about the structure and laws of processes is not coded by words and terms, but by the parameters of abstract visual objects and figures-by the color, shape, size, relative location, boundaries, etc., as well as by special signs and symbols. Particularly productive was the use of this approach to explain to the student the structure of the language being studied and the principles of constructing sentences. As you know, conscious practice is much more effective than mechanical copying and repetition. This is repeatedly confirmed in the studies and underlies the theory of successive formation of mental actions Galperin [3] and developed on the basis of its methods of practical development of skills in various fields of activity [3,11]. 
But in the field of language teaching this approach does not work, because of the contradiction described above: the language must simultaneously be the subject of the educational activity, the instrument of this activity, the way of this activity and the result of this activity. If the student does not know how to express an idea and form a phrase in the language he is studying, he cannot do it because of this ignorance. And if he has obtained the knowledge of how to do it, in the form of rules and terms, he still cannot do it, because the speech area of the brain that should carry out the act of speaking is occupied with the linguistic information about how to carry out this activity. The proposed visual approach to the coding of grammatical information allows the orientation basis of the activity, the information on how to perform this activity correctly, to be transferred from the verbal form to the visual form. This releases the speech zone from the functions of planning and controlling utterances and creates the conditions for easy and unhindered speech activity.

Language activity in itself is meaningless. Language is an instrument for thinking and communicating, and it manifests its properties only in such activities. Therefore, the study of language as an end in itself is also unjustified and illogical. It is necessary to teach not the language but activity through language, that is, thinking and communication, which is emphasized in the documents of the Council of Europe [12]. Psychological science did a great deal in the 20th century to study human activity and to find the patterns of acquiring the skills for its implementation (Skinner [13], Bandura [5], Leontiev [14], Galperin [3], etc.). The data obtained made it possible to develop sufficiently detailed theories and, on their basis, to create systems for the accelerated training of specialists in specific industries (the army, special services, certain spheres of production and technology, large corporations). The closed nature of these areas and the lack of interest in disseminating such experience have led to these studies being unknown to the overwhelming majority of specialists engaged in the development of similar systems in other industries and countries. This forces them to use outdated, inefficient models and approaches that were developed for other conditions and tasks and are still applied in pedagogy because of the conservatism of educational systems. Galperin [3], in his theory of the step-by-step formation of mental actions, pointed to the need for an orientation stage in mastering the skill of a mental action, emphasizing the importance of auxiliary tools that facilitate this orientation, the so-called schemes of the basis of activity. A similar approach in Western pedagogy is widely known as instructional scaffolding; this theory was developed around the same time by Ninio and Bruner [15]. Both theories are based on Vygotsky's idea of a zone of proximal development [16]. Additional support tools and instructions can be provided through the sensory, motor and verbal channels. A similar phenomenon was discovered in the experiments of Zhinkin [17] in the middle of the last century, and the low effectiveness of grammatical rules in the mastering of oral speech has been repeatedly noted by many linguists and psychologists (Krashen [18], Pinker [19], etc.). To resolve this discrepancy, we suggest applying SVM.
SVM in linguistics is a layout of the structure of linguistic knowledge in graphical form, using the color of visual objects to encode the most common patterns. The models thus obtained replace the textual explanations (rules) in the formation of the corresponding skills. For this it is proposed to use another part of the psyche, the nonverbal visual system, the one that is usually located in the right hemisphere of the brain. The higher efficiency of this method compared with other visual aids follows from the features of the functioning of the human visual system shown in recent studies by Kozlovsky [20]. In these studies, Baddeley's conceptual model of working memory [21] was confirmed at the physiological level, in particular the part that postulates a visual working memory (the "visual-spatial matrix") functioning independently of verbal working memory. The visual approach to learning a foreign language removes the main contradiction of the grammatical method: grammar rules block speaking. If the student does not know how to say a phrase in English, he cannot say anything, because he does not know how to do it; but once he has learned the rule for how to do it, he still cannot say the phrase, because the area of the brain that is responsible for speaking is occupied with this rule.

The main training material for Visual-Body Modeling of the structure of the English language consists of:
- Diagrams: the structure of relations between elements of the English language.
- Dynamics: the structure of the Russian language and its relations to the described processes and phenomena.
- Models: the structure of the English sentence and how to build it.

The Visual-Body method of modeling the structure of the English language radically changes the very approach to the study of English grammar. The division of the grammatical system into its component parts is changed, as is the sign system used to describe this structure. Verbal rules are replaced by diagrams:
- Abstract scientific names of forms and phenomena are replaced by specific visual parameters of the graphic elements of the schemes: color, shape, location.
- Instead of searching for direct links between the grammatical forms of two languages, which are extremely complex, ambiguous and not formalized, an intermediate sign system is introduced in the form of abstract graphic images uniquely related to real situations in a certain context.
- Instead of classifying grammatical phenomena by form, another structure is introduced, in which the elements are linked by the general logic of the development of processes and meaning.
- Instead of analyzing abstract sentences from educational texts, specific physical actions of the teacher and student are performed in reality.

The logical operations needed to understand structure and connections are thus carried out not on the verbal plane but with the help of figurative and object-action thinking. We developed the LingvoMap, a visual model of the structure (grammar) of the English language. This technique [22] replaces:
- complicated rules with easy and comprehensible schemes;
- abstract theory with practical and convenient tools;
- formal book phrases with usual physical actions;
- hundreds of pages of verbal description with a complete and harmonious system located on one sheet.

Without going into the theory, the learner can create phrases of any complexity and understand how the English language is organized in just a few hours.
The hundreds of hours of time saved can then be devoted to practicing phonetics, vocabulary and conversation. The models contain the essential set of tense structures and sentence types required for a given level of language proficiency or for the curriculum of the training course. For instance, the simplest model, for building a basic declarative sentence in four tense forms for an action verb, is shown in Figure 3. Figure 4 shows a more detailed model that allows sentences of different forms (declarative, interrogative, negative) and more tenses to be built on it. Figure 5 shows the full model of all types of active-voice forms. It can be applied at higher levels of grammatical competence to systematize knowledge and to understand the complete structure of the tense system. Models have also been created for studying individual grammatical topics: passive voice, modal verbs, impersonal verbs, conditional sentences, and many other topics that are difficult to understand.

Training skills

The research method is to conduct classes with control groups of trainees using various methods of teaching foreign languages. Based on the results of each lesson, a detailed statistical analysis is carried out, dynamic learning curves for each student are plotted, tables of coefficients are established, the speed of formation of listening comprehension skills is determined, and speech levels from the initial level to the threshold level of spontaneous speaking are determined in accordance with the CEFR scale [12]. The peculiarity of the proposed approach is the logical interconnection of the entire language system, the economy of the time needed to master the material, and the dynamics of the use of language structures. Results are achieved by maintaining the level of the student's effort over a sufficiently long interval of time (months, years), which plays a decisive role in foreign language learning (Figure 6). The research will make it possible to achieve the following goals:
1. Identify the mathematical patterns of the formation of basic language skills and test some hypotheses of linguists about the nature of these processes (Krashen [18], Pinker [19], etc.).
2. Integrate models of neurolinguistics and cognitive psychology into pedagogical practice and methodological development.
3. Create effective computer tools for measuring, managing and controlling basic language skills.
4. Create a framework for effective interaction between representatives of computer technologies and the humanities.

After completing the training, without going into the theory and without using formal rules, the learner will be able to:
- correctly formulate his thoughts in English;
- see the logic of the application of different constructions, their interrelation and their correspondence to different contexts;
- make phrases of any complexity, in all persons, voices, tenses and forms;
- use as learning materials not abstruse textbooks and tedious texts but his favorite films, series, songs and books;
- freely transform any sentence from these sources into hundreds and thousands of other grammatically correct ones;
- independently control the correctness of his statements in conversations, letters, texts, translations and exercises, without depending on teachers or translators.
Training, like any other theoretical preparation, requires the material to be consolidated with practical exercises in the proportion of "1 hour of theory to 10 hours of practice". Without long practice, the knowledge and skills gained will not do any good. Language is a skill, and all its aspects should be brought to full automaticity. The fact that new tools allow a skill to be acquired more easily and faster does not eliminate the need for practice; it only increases its effectiveness.

Methodology of the Study

Learning activities for mastering a language are also infinitely complex and have an unlimited number of parameters and connections. Therefore, to manage this activity it is necessary to apply an abstract scientific model with a minimized number of parameters. In [23] we distinguish the following minimal sets.

Set of events (single event): a single event is one and the same event, the development of which in time is described by the same verb in different forms. For example: I will eat an apple. I am eating the apple. I have eaten the apple. I ate the apple.

The minimum set of vocabulary: we confine ourselves to the most minimal set of vocabulary sufficient for conducting measurements.

The minimum grammatical set: to measure the speed of acquiring a skill, we select the minimum unit of the grammatical skill, namely the ability to compose a phrase according to one template (one form of the sentence for one tense form and one person). For example, the first template: This is an apple. This is a pear. This is a peach. He is eating a peach. He isn't eating a banana. Is he eating a peach? Yes, he is. Is he eating a banana? No, he isn't. Who is eating a peach? What is he doing? What is he eating? As we can see, this grammatical set forms 8 x 4 x 7 = 224 variants of constructions for describing a single-type event.

Structure of New LMS for IT Development

In the modern era of information technology development, informatization penetrates all spheres of human activity, including education. Modern information technologies (IT) are being improved and disseminated on a mass scale. The relevance of studying the use of IT in the teaching of foreign languages lies in the fact that information technologies have high communicative capabilities, promote the development of speaking and listening skills, actively include learners in educational activities and effectively develop communicative competence. All this is necessary for a successful life in the modern world. The enormous popularity of the Internet and computer technologies among young people should also be noted. The purpose of the study is to consider the possibilities of using IT in the teaching of EFL. IT includes various software and hardware devices that operate on the basis of computer technology, as well as modern means and systems of information exchange that provide the collection, storage, production and transmission of information and thereby facilitate the production of a new type of software for the self-teaching of languages. The use of computer technology contributes to the removal of the learner's psychological barrier to using a foreign language as a means of communication. One of the manifestations of this barrier is the so-called fear of error. Trainees note that when using computer technology they do not feel uncomfortable when making mistakes and receive fairly clear instructions on how to overcome them.
It should be noted that ITs are not only a means of supplying material but also a means of control. They provide a high-quality presentation of the material and use various communication channels (text, touch, graphics, sound, etc.). New technologies allow the learning process to be individualized and intensified: the trainee can choose his educational route and move along it at a convenient pace. A differentiated approach creates the conditions for the successful activity of each student, provoking positive emotions and thus increasing his educational motivation. Another positive aspect of the use of IT in the learning process is its ability to make the assessment of the student more objective: assignments with pre-determined evaluation criteria make it possible to avoid subjectivity in the assessment. To control the process of formation of the language and working skills of adults, the authors recommend a conceptual solution in the form of a training system [24]. The generalized structure of the new-generation LMS is shown in Figure 7. This new training system [8] will integrate the following points:
1) The methodical principles substantiated in the works of Bandura [5] and Galperin [3].
2) The unification of visual models and interactive visual-auditory tracking, which generates a synergistic effect both in the first stage of mastering a foreign language and in the "barrier to overcome" phase.
3) The use of achievements in the field of information technology as a tool to ensure the implementation of learning objectives, with continuous monitoring of the current situation and learning results guaranteed in a limited number of steps.
4) Further development of the main components of the LMS in the following interrelated areas.
5) The use of SVM in combination with the modified VA method.

Using a new type of LMS provides:
- step-by-step deceleration and acceleration of the speed of speech by software, to facilitate the formation of correct pronunciation;
- application of reference sound templates and speech simulators;
- repetition without repetition: for each pass, the material is changed in some parameter (voice, tempo, pitch, lexical or grammatical transformations).

Setting up a subsystem for continuous assessment and real-time learning management will make it possible to provide logarithmic support for the learning curve and to compensate for the expected losses of efficiency. The use of modern information technology together with effective skill-acquisition models reduces the impact of various psychological obstacles and thus accelerates learning and improves its success, conveying a synergistic effect to all phases of language skill formation [24], especially the "barriers to overcome" (Figure 8). Training organized in this way provides control over the formation of speech skills up to a threshold level, allowing the transition from studying the language to improving its use. Measuring and verifying the digital parameters of this process will give access to the data needed to build an automatic language acquisition control system, which in fact opens a new direction of research into the construction of the LMS.
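As a rough illustration of what such a continuous-evaluation subsystem could compute, the sketch below fits a simple logarithmic learning curve to per-session scores and extrapolates it. The scores, the curve form and the target level are assumed example values chosen for illustration only; they are not data or models produced by the system described here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical per-session scores (percent of correctly produced phrases) for one learner.
sessions = np.arange(1, 11)
scores = np.array([22, 35, 44, 50, 55, 58, 61, 63, 65, 66], dtype=float)

def log_curve(n, a, b):
    # Simple logarithmic learning curve: performance grows with the log of practice.
    return a + b * np.log(n)

(a, b), _ = curve_fit(log_curve, sessions, scores)
print(f"fitted curve: score = {a:.1f} + {b:.1f} * ln(session)")

# Extrapolate to estimate when the learner might reach an assumed target level of 75 %.
target = 75.0
n_target = np.exp((target - a) / b)
print(f"estimated number of sessions to reach {target:.0f} %: {n_target:.0f}")
```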
The Proposed Approach

The result of our analysis is a link between theory and practice, between IT and the humanities, and between science and social processes, with the aim of creating a new generation of Learning Management System (LMS). This system is currently under construction: the requirements for the software have been collected, and we are now moving to the design and development stage. The main contributions of this paper are:
- Work with real-time voice processing and voice recognition that provides the user inputs and system outputs needed to reach a correct pronunciation.
- An API for JavaScript specifically designed to support building such a design for the "LingvoMap" (a map for constructing a sentence, SVM).
- A demonstration of the effectiveness of this approach through several prototypes of the "LingvoMap" developed with the proposed API, requiring well-timed inputs and outputs in real time.
- A prototype of the software that runs on both desktop and Android platforms, corroborating that a cross-platform web LMS can accelerate learning with learner inputs provided via voice and recognition processing and the display of action images.

The software developers formulated several requirements that the system will be developed in accordance with:
- Availability: the ability of the system to locate and access training components from a remote access point and to supply them to other points.
- Adaptability: the ability to adapt the curriculum to the needs of organizations and individuals.
- Efficiency: the ability to increase productivity by reducing the time and cost of delivering instruction.
- Longevity: the ability to accommodate new technologies without additional and expensive refinement.
- Interoperability: the ability to use learning materials regardless of the platform on which they were created.

Prototype of Platform with Speech Recognition System

In order to achieve the prototype [25], we consider the following steps.

Database design: relational data tables contain the information about the lessons from which the system extracts its content, and the result of any training lesson is stored in the database.

Implementation of the LMS platform: the platform is implemented using APIs for C# and JavaScript. With the emergence of the HTML5 standard, new opportunities appeared for the development of robust and efficient APIs that support voice processing; this powerful, widely adopted standard includes interaction with different media, protocols and programming languages [26]. An HTML5 page can process voice and recognition captured directly from devices available on the user's hardware. In addition, WebGL programs consist of control code written in JavaScript and special-effects (shader) code rendered in HTML; WebGL graphics are used for the visual programming of the SVM, with programmable visual effects that interact with the web page by means of scripts.

Adapting the speech recognition system in the LMS: we suggest a mechanism to facilitate interaction between the user and the platform through an interface that uses voice recognition. In our platform, the template consists of an interactive image and an SVM representation, which reflects the internal structure of the voice recognition task. This module is integrated into the server and is also linked to the ASR system. To do this, the following operations are required.

Voice-to-text: voice recognition is adapted through an ASR system.
We will focus specifically on the Google Speech-to-Text (STT) API service, because it is a cloud computing system and does not compromise the performance of the local computer.

Text-to-Speech: to convert text to speech (TTS), we propose to use the cloud services of Google. Using Google Translate for this is, however, a less complex process compared with ASR.

System Architecture

Figure 9 shows the system architecture, which is explained as follows.

Information Request and Display Module (IRDM): in this module the images that represent the action of a sentence are displayed as an action image and an SVM, and the module allows the user to control the options of the program through the keyboard and the computer screen. This information is sent to a web server; once the information has been processed, the requested data are shown to the user as lesson data.

Knowledge Data Formation Module (KDFM): this module consists of a database schema that contains all the relational tables and is used to describe and represent the area of knowledge, such as the lesson data. The database contains stored procedures that provide a way to create the content of a lesson on user request, depending on the selected options. The web server contains the API for the client side, which transforms the data received from the database server into a graphical interface for the client. The connection from the web server to the database goes through data access layers; these contain a connection manager and business layers, which in turn contain a set of objects (classes) that return the data as datasets to be used by the web services, JavaScript and WebGL, together with the Google services for speech and recognition.

Speech Recognition Module (SRM): this module allows the learner to interact with our platform by using his voice. The system returns information to the SRM in order for it to be converted to speech using TTS (Text-to-Speech). The user speaks into the microphone, and the utterance is converted to text by the ASR system. On the web server, JavaScript functions check whether this input matches the text displayed in the IRDM. Finally, the feedback on correct speaking is saved in the database for evaluation.

Working of Software

An educational object of the LMS is any educational material that can be displayed in a web browser (for example texts, pictures, voice and maps, web pages), as well as any combination thereof, intended for educational purposes and assembled in a special way. In addition, JavaScript must be implemented and enabled in the browser. Thus, this software standard describes a database of training materials that contains:
- educational materials in the form of data in relational tables;
- arbitrary dynamic content: JavaScript code and other objects that can be displayed inside the browser; dynamic content can inform the LMS of the student's progress;
- study materials that are all structured (i.e. broken into lessons or lesson levels);
- a description of the sequence of the material flow; for example, a specific text should be provided to the learner only after he has read other texts or passed the test.

The main flow of events begins when the user intends to work with training courses containing SVM objects and voice recognition. The system should offer one of the following options: 1. Load the lesson data from a database. 3. The simulator should speak and the learner should listen, and vice versa: the learner should speak, and the simulator should understand it and give feedback. 4.
The result of the training will be saved in the database to display the level of training on the learning curve. In the IST (Interactive Speech Trainers), the data are loaded from the database into the interface. The interface is set up so that the user first selects a lesson and then the level of that lesson; the level determines which variant of the structural visual model is used. Each lesson level contains an SVM structure that starts from a simple one and progresses to the full SVM model, according to the structure of the lesson and its words, and the lesson data are set up from a back-end content management system (CMS). Figure 10 shows the initial workflow of a lesson: the material of a lesson level contains a set of sentences with their components, which are generated automatically when the system runs (images, sound, SVM). The user interacts with this material by listening to and speaking a sentence, and the feedback is saved in the database so that the results can be displayed on a learning curve. The user interface of the web page is shown in Figure 11; it lets the user log in to the system. After a successful login the user can access the material, which contains a list of lessons from which the target lesson can be selected. Visually, the LMS can be represented as a set of lessons, each describing a certain part of the learning process (Figure 6). From the list of lessons displayed on the screen, any lesson can be selected to load its content and to display the SVM and the images of the lesson (Figure 12). In addition to providing training courses, the continuous evaluation subsystem can inform us about the results of the training, so that the user can repeat the evaluation until a satisfactory level is reached (Figure 8). For example, when completing a task, the student received a certain number of points and spent a certain amount of time on the task; learning curves are adequate for evaluating the performance improvement due to the positive effect of learning.

Content Management System

Managing web content is becoming a top priority for improving business performance. Careful planning and elaboration of this process is required, and the timely and prompt updating of the information content of the resource plays one of the main roles in the success of the project. The educational content of the current software is understood as the set of educational lessons, collected as sentences, SVM, action images and learning results. These content units are designed as templates in such a way that they can be used repeatedly in different lessons. For example, once the template of the lesson content has been created, it can be used in any new lesson with a change of words whenever the need arises. To control the content and images of a lesson, the data for each lesson have to be filled in. Figure 13 shows the flow chart of a lesson, which is composed of a template, a language and words; the words are related to word types (verb, object, ...) and tense rules (past simple, future, ...). To create a lesson, there are main components that have to be considered in order to complete the structure of the lesson: the template, the language and the words together form the complete data of the lesson (a minimal sketch of how such a template expands into sentences is given below).
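The Template mechanism described in the next paragraph can be illustrated with a small sketch. The template string follows the structure notation given below; the word lists are illustrative examples rather than the actual content of the system's database, and subject-verb agreement is ignored for brevity.

```python
# Minimal sketch of expanding a lesson template such as "Subject+Verb+Article+Object"
# into concrete sentences by substituting word lists for each word type.
from itertools import product

template = "Subject+Verb+Article+Object"

words = {
    "Subject": ["I", "He", "She"],
    "Verb": ["eat"],          # toy example: no subject-verb agreement
    "Article": ["the"],
    "Object": ["apple", "pear", "peach"],
}

slots = template.split("+")
for combination in product(*(words[slot] for slot in slots)):
    sentence = " ".join(combination)   # '+' is replaced by a space, e.g. "I eat the apple"
    print(sentence)
```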
Template: a template is a set of structures and form types (positive sentence, negative sentence, question, ...) of a sentence. A structure is a combination of word types (Subject, Verb, Object, Auxiliary, ...), aggregated by adding the symbol '+' between them, for example (Subject+Verb+Article+Object). By generating the function of a given template of a lesson, we obtain a set of lesson data from this structure: the Subject is replaced by (I, He, She, ...), the Verb by (eat), the Article by (the) and the Object by (apple), and each '+' is replaced by a space; the result is a sentence (I eat the apple). For instance, the set of structures for the English language for a given lesson at a given level is (Subject+Auxiliary+Verb+Article+Object, Subject+Verb+Article+Object, Subject+Auxiliary+not+Verb+Article+Object, Auxiliary+Subject+Verb+Article+Object+?, ...).

Conclusion

In this paper we have discussed the different techniques and new methods of learning to be used in the current technology framework for the acquisition of a language, and we have reviewed the results obtained in previous papers on this subject. The approach to explaining linguistic phenomena and teaching foreign languages proposed in this work allows the planning and control of speech activity to be transferred to the right hemisphere and thus greatly simplifies the process of teaching languages. A prototype was created. To date, we have obtained encouraging results from experiments with 300 students during training related to the development of this LMS. Exploratory verification of the training materials, test systems and the proposed educational program components of the SVM on a restricted group of pupils showed outcomes similar to those obtained through the rapid development of other training methods based on the Galperin theory [3]: a reduction of the training time needed to perform particular work by a factor of 3 to 30 and an increase in training success from 10-25 % to 80-95 %. These are preliminary results that require additional verification and independent testing. Based on this approach, we are developing a new generation of training and computer programs, and we are working to carry out intensive testing and the collection of statistical data. Visual models and pedagogical theories, which have received a new incarnation in them, open up further horizons and large-scale prospects for improving educational technologies.
Very low frequency IEPE accelerometer calibration and application to a wind energy structure . In this work, we present an experimental setup for very low frequency calibration measurements of low-noise integrated electronics piezoelectric (IEPE) accelerometers and a customised signal conditioner design for using IEPE sensors down to 0.05 Hz. AC-response IEPE accelerometers and signal conditioners have amplitude and phase deviations at low frequencies. As the standard calibration procedure in the low-frequency range is technically challenging, IEPE accelerometers with standard signal conditioners are usually used in frequency ranges above 1 Hz. Vibrations on structures with low eigenfrequencies like wind turbines are thus often monitored using DC-coupled micro-electro-mechanical system (MEMS) capacitive accelerometers. This sensor type suffers from higher noise levels compared to IEPE sensors. To apply IEPE sensors instead of MEMS sensors, in this work the calibration of the entire measurement chain of three different IEPE sensors with the customised signal conditioner is performed with a low-frequency centrifuge. The IEPE sensors are modelled using infinite impulse response (IIR) filters to apply the calibration to time-domain measurement data of a wind turbine support structure. This procedure enables an amplitude and phase-accurate vibration analysis with IEPE sensors in the low-frequency range down to 0.05 Hz. Introduction In recent years, the expansion of offshore wind energy has been driven forward with ever larger wind turbines. This leads to ever smaller natural frequencies of wind turbine support structures. Waves in the low-frequency range down to 0.05 Hz also have an impact on these structures. For instance, Penner et al. (2020) observed that the highest forces and displacements occur in the frequency range between 0.05 and 0.2 Hz when monitoring a suction bucket offshore foundation. Therefore, low-frequency structural dynamics should be considered when monitoring such structures. Structural health monitoring (SHM) based on dynamic measurements relies on measurement data from a sensor network installed on the structure to be monitored. For a reliable monitoring of support structures of offshore wind turbines, the measurement chain should be designed for the low-frequency range. For onshore settings, the displacement for this frequency range can, for instance, be measured using photogrammetry (Ozbek et al., 2010). However, optical measurement systems require fixed reference points, which is generally not available for offshore installations. Furthermore, the resolution of the camera limits the obtainable signal-to-noise ratio (SNR). Strain gauges can also be used to monitor low-frequency vibration. However, field experiences in offshore wind energy turbines show that strain sensors are less reliable than accelerometers for long-term applications (Maes et al., 2016). Therefore, various virtual sensing concepts have been developed to estimate dynamic strains at fatigue-critical locations using accelerometers (Tarpø et al., 2020). Acceleration sensors are commonly used in the wind energy industry for support structure monitoring of wind turbines. In the low-frequency range, DC-coupled microelectro-mechanical system (MEMS) capacitive accelerometers are usually applied, because these sensors have a linear transfer behaviour in the low-frequency range (Anslow and O'Sullivan, 2020). A relatively high noise level is a disadvantage of this sensor type. 
This limits the range of application of MEMS sensors, since a high SNR is an important prerequisite for reliable displacement and strain estimation using accelerometers. In addition, a high SNR also leads to better identification of modal parameters (Au, 2014). Regarding low-noise accelerometers, the integrated electronics piezoelectric (IEPE) sensor type is the industry standard. This type of sensor is a piezoelectric (PE) sensor with a preamplifier integrated into the sensor casing. In contrast to conventional PE sensors, this leads to a low output impedance, which results in a significantly improved noise behaviour (Levinzon, 2005). The integrated preamplifier requires a constant current source. To connect the sensor with standard analogue digital converters (ADCs), a high-pass filter is integrated into the supply. The sensor supply consisting of the current source and the filter is also called the IEPE signal conditioner. Due to the measurement principle, IEPE sensors are AC-response sensors. This sensor class cannot measure constant acceleration, leading to a frequency-dependent transfer behaviour. Thus, low-noise IEPE sensors are typically used in the frequency range above 1 Hz. Occasionally, IEPE sensors are also used for monitoring the tower of offshore wind turbines in the frequency range above 0.05 Hz due to their low noise level (Weijtjens et al., 2017). However, to the best of our knowledge, the transfer behaviour of IEPE sensors in the low-frequency range has not been considered specifically so far. For laboratory experiments on rotor blades, there are experiments with IEPE acceleration sensors where a calibration for frequencies starting at 0.5 Hz was carried out (Gundlach and Govers, 2019). In order to correct measurement errors in the frequency range below 1 Hz, the transfer behaviour should be represented using a filter model. The simplest model of a measuring chain with an IEPE sensor consists of two cascaded first-order high passes (D'Emilia et al., 2019). To determine the filter coefficients, it is necessary to calibrate the sensor and the signal conditioners below 1 Hz. The transfer behaviour of an IEPE signal conditioner can be analysed using a frequency generator and an IEPE simulator (Ripper et al., 2014). Klaus et al. (2015) calibrated different IEPE signal conditioners in the frequency range from 0.1 Hz to 100 kHz using a sinusoidal excitation. It was shown that the different designs of the built-in high-pass filters lead to large deviations in the frequency range below 3 Hz. The calibration of acceleration sensors is regulated in the ISO 16063 "Methods for the calibration of vibration and shock transducers" series of standards. ISO 16063-21 (2016) regulates the calibration using a reference sensor in the frequency range from 0.4 Hz to 10 kHz. In the calibration procedure, an acceleration sensor is excited using a electrodynamic shaker. In addition, the excitation is measured using a calibrated reference acceleration sensor. In the low-frequency range, long-stroke shakers are used for the calibration, so that sufficient displacement is achieved. To be able to calibrate frequencies down to 0.002 Hz, He et al. (2014) developed a special long-stroke shaker with a stroke of 1 m. However, the amplitudes in the low-frequency range are still very low. To achieve higher amplitudes at low frequency, the sensor can also be rotated in the Earth's gravity field (Dosch, 2007). This results in acceleration amplitudes of ±1 g independently from the rotation frequency. 
Seismic sensors can have a measuring range smaller than 2 g. The acceleration amplitude can therefore be adjusted by tilting the centrifuge. For example, Olivares et al. (2009) describe a tilted non-motorised centrifuge which is used to calibrate a gyroscope. In addition to the frequency response, the spectral noise level is also an important parameter for evaluating a measurement chain. A widely used method to measure the noise level is the huddle test (Holcomb, 1989). In this test, several sensors are measured simultaneously, while the external accelerations of all sensors have to be the same. This is achieved by mounting the sensors on a stiff plate and aligning them in the same direction. When using two sensors, it is assumed that both sensors have the same noise level. For three sensors, the three-channel test is recommended to determine the noise level of each individual sensor (Sleeman et al., 2006).

In this work, we present an approach for the design of very low frequency measurement chains for low-noise IEPE accelerometers. This measurement chain can be used in different applications, such as vibration-based SHM in heterogeneous sensor setups or load monitoring of offshore wind turbine support structures. Our approach is to use a custom IEPE signal conditioner with a low cutoff frequency to achieve a higher SNR compared to a standard signal conditioner. To determine the transfer behaviour, we apply a motorised centrifuge to perform a low-frequency calibration between 0.027 Hz and 1 Hz. The limits of this frequency band are determined by the technical limitations of the centrifuge. Using this approach, constant acceleration up to ±1 g is possible in the low-frequency range with a cost-effective experimental setup. We calibrate three different IEPE sensors to study differences in their transfer behaviour. To apply the calibration results to measurement data, a filter model is identified for each sensor. This is used to investigate the physical noise level with and without calibration. Finally, the filter models are applied to measurements from other calibration procedures as well as to measurements of tower vibrations of a wind turbine in order to demonstrate the calibration of time-domain measurement data down to 0.05 Hz.

Theory

In this section, we present the theoretical foundation of the proposed calibrated measuring chain. First, we give a summary of IEPE sensor technology. Then we introduce the theory of calibration of accelerometers using a centrifuge. For data evaluation and further processing, the Vold-Kalman filter and other filter theories are presented. Finally, the numerical methods for the identification of the calibration filter coefficients are introduced.

IEPE sensors

Piezoelectric sensors have been used in vibration analysis for frequency ranges above 1 Hz for a long time. In the first generation of such devices, the piezoelement is directly connected to the measurement line. This results in a high-impedance setup with very low current in the measurement lines. Due to cable microphonics and the susceptibility to stray fields, noise and hum issues arise, especially in setups with long measurement cabling, and the signal quality deteriorates. This can be improved to a limited extent by using very expensive, highly shielded cables with low microphonic interference. In an industrial environment, however, mechanical vibrations and electromagnetic interference have to be expected. The current generation of piezoelectric sensors are IEPE devices.
The key difference to the previous generation is a preamplifier, which is integrated into the sensor casing. For the measurement system, the sensor thus becomes a low-impedance load, which leads to an improved noise characteristic (Levinzon, 2005). The IEPE sensor is a two-terminal design, which is realised by employing a field effect transistor (FET) with the gate connected to the piezo crystal. The preamplifier is powered by an IEPE signal conditioner, which provides a constant current and a bias voltage of around 10 V. The IEPE sensor typically has a measuring range of ±5 V. The sum of the bias voltage and the measuring range delivers an output voltage of 5 to 15 V. To interface with standard analogue-digital converters (ADCs), a coupling capacitor C_C is introduced, as shown in Fig. 1. This coupling capacitor separates the constant excitation current I_e, and thus the bias voltage, from the ADC. For zero acceleration, the voltage at the ADC input is thus zero. A large resistance R_C is placed across the ADC, which defines the output impedance and enables charging of the coupling capacitor. This decoupling circuit results in a first-order high-pass filter behaviour with a cutoff frequency of

f_c = \frac{1}{2 \pi R_C C_C}.    (1)

The time constant is defined by the time required for the step response of the high pass to decay by about 63 % of its initial output value. After some time, the measured signal thus becomes zero for constant accelerations. The same effect takes place inside the sensor as well, since the piezo crystal discharges due to leakage current. This is the reason why this type of accelerometer cannot be used to measure constant accelerations. Inside the sensor, a resistor R_S is placed parallel to the piezoelement to limit the settling time and avoid temperature drift, as shown in Fig. 1. This further elevates the cutoff frequency. For low-frequency applications, the resistor R_S needs to have a large value, which leads to long settling times of several minutes. To optimise the transfer behaviour of the sensor, further electronics are installed in the sensor by the manufacturer. Seismic-grade sensors can attain cutoff frequencies below 0.1 Hz, coupled with very low noise and high sensitivity (Levinzon, 2012).

To exploit the full range of IEPE sensors, the cutoff frequency of the IEPE signal conditioner must be lower than that of the sensor itself. Off-the-shelf measurement systems with an integrated IEPE signal conditioner usually have cutoff frequencies around 0.4 Hz. This is due to space restrictions in the casing, since film capacitors with the required capacitance rating have case dimensions of several centimetres. Discrete IEPE signal conditioner units typically achieve a cutoff frequency of 0.1 Hz, and consequently they have larger case dimensions. Regarding the amplitude response, this results in an acceptably low amplitude loss. However, the phase response is still affected. Signal conditioners with even lower cutoff frequencies lead to longer settling times, which is undesirable for most applications. However, some manufacturers offer special versions with a long settling time for low-frequency applications. In addition to the sensor and the signal conditioner, a measuring channel also consists of the cable and the measuring system. Since these components have a linear response in the low-frequency range, they play a minor role and are not considered further.
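As a quick numeric check of Eq. (1), the following minimal sketch evaluates the cutoff frequency and time constant of such a decoupling high pass. The component values are the ones of the custom signal conditioner described later in this paper (C_C = 47 µF, R_C = 330 kΩ); the script itself is only an illustration.

```python
# Cutoff frequency and time constant of the RC decoupling high pass (Eq. 1).
import math

C_C = 47e-6   # coupling capacitance in farad
R_C = 330e3   # resistance across the ADC input in ohm

tau = R_C * C_C                    # time constant in seconds
f_c = 1.0 / (2.0 * math.pi * tau)  # -3 dB cutoff frequency in hertz

print(f"time constant:    {tau:.1f} s")          # about 15.5 s
print(f"cutoff frequency: {f_c * 1e3:.2f} mHz")  # about 10.3 mHz (0.0103 Hz)
```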
Calibration of the measurement chain

A measured signal y generally consists of a deterministic signal component s and a stochastic noise component n,

y[k] = s[k] + n[k],

where the index k is the time variable of the time-discrete signal normalised to its sampling rate. The deterministic part corresponds to the physical quantity to be measured. The stochastic part is attributed to the noise of the measuring chain. It consists of a frequency-independent component (white noise) and a frequency-dependent component (1/f noise or pink noise). In the low-frequency range, 1/f noise is the decisive component. The huddle test is employed to investigate the incoherent noise of the accelerometer measurement chain. In this test, at least two identical sensors are placed as close as possible to each other. By means of the coherence function γ_{1,2} between the two sensors, the auto power density spectra of both signals, S_{1,1} and S_{2,2}, can be separated into the signal component S_{s,s} and the noise component S_{n,n} as a function of the frequency f (Brincker and Larsen, 2007).

For the calibration of the deterministic signal component, it is assumed that the entire measurement chain is a linear time-invariant system. Hence, the frequency response does not depend on the time or the amplitude of the input (e.g. Klaus et al., 2015). Therefore, the transfer between the signal input x(z) and the output y(z) can be described with the time-invariant transfer function H(z),

y(z) = H(z) \, x(z),

where z is the discrete frequency obtained from the z transform. The aim of the calibration is to determine this transfer function. Various excitation signals can be used for calibration, the most common being a mono-frequent sinusoidal signal x(t) = A \sin(\Omega t + \varphi) with the amplitude A, the angular frequency Ω and the phase shift ϕ. In order to calibrate the low-frequency range, a long measuring time with a mono-frequent sinusoidal signal is necessary. Therefore, a multi-sine excitation of the form

x(t) = \sum_{i} A_i \sin(\Omega_i t + \varphi_i)

can be used to reduce the length of the time series required for the calibration (Bruns and Volkers, 2018).

For the calibration of an IEPE signal conditioner, there is already an established procedure (e.g. Ripper et al., 2014; Klaus et al., 2015). Following this approach, an excitation signal is generated by a signal generator. The signal type used for this procedure is arbitrary and only limited by the type of waveforms the signal generator can generate. Due to the electrical impedance and bias voltage mismatch, a signal generator cannot be directly connected to the IEPE signal conditioner. Therefore, an IEPE simulator is used as an impedance converter, which is connected between the signal generator and the IEPE signal conditioner. This calibration setup is shown in Fig. 2. The standard ISO 16063-21 (2016) for the calibration of acceleration sensors proposes a long-stroke shaker for the calibration of sensors in the low-frequency range. Using a shaker, the obtainable acceleration amplitude is very low in the low-frequency range due to the physical relationship between the displacement u and the acceleration a of a harmonic signal,

\hat{a} = \Omega^2 \hat{u},

which makes shaker-based calibration below 1 Hz technically challenging and expensive. In order to obtain a frequency-independent acceleration for the calibration, one possibility is to employ the Earth's gravity field and an inclined plane of rotation, as Olivares et al. (2009) proposed for the calibration of a gyroscope. The rotation of the centrifuge shown in Fig. 3 leads to a tilting motion relative to the gravitational acceleration g.
The acceleration resulting from this motion depends on the tilting angle θ and the angular frequency Ω. If the sensor measures in the tangential direction of the rotational motion, one revolution of the centrifuge translates to one oscillation period for the sensor. In addition to the gravitational acceleration a_grav, the centripetal acceleration a_cent acts on the sensor as well. There are additional influences on the acceleration, such as higher harmonics of the centrifuge motor and measuring uncertainty, which are summarised in the term a_e. The acceleration acting on the sensor is thus

a = a_grav + a_cent + a_e.

The acceleration due to gravity measured at the sensor depends on the tilting angle θ of the centrifuge, the angular velocity Ω and the gravitational acceleration g:

a_grav(t) = g \sin(\theta) \cos(\Omega t).    (11)

The centripetal acceleration is determined by the distance from the rotation axis and the angular velocity. For a constant angular velocity, the centripetal acceleration is

a_cent = \Omega^2 \sqrt{r_x^2 + r_y^2},

where r_x and r_y denote the position of the sensor relative to the axis of rotation. The centripetal acceleration acts in the radial direction, perpendicular to the axis of rotation. To avoid distortion of the measured signal, the centripetal acceleration should be as low as possible. Since the centripetal acceleration is constant, it cannot be measured with IEPE sensors; however, due to the transverse sensitivity of the sensor, it can negatively impact the measurement result. In the case of DC-capable MEMS sensors, the centripetal acceleration can easily be removed by digital filtering, since it is constant.

Vold-Kalman filter

The measured acceleration data obtained at the centrifuge are contaminated with noise and higher harmonic frequency components and, in the case of MEMS sensors, also with the centripetal acceleration. Digital filtering is thus required for an accurate determination of amplitude and phase. To avoid phase shifts, we use the second-generation Vold-Kalman filter (Vold and Leuridan, 1993). This filter is a time-domain decomposition method for order tracking. For a given phase signal, the filter extracts only the harmonic component from the measured signal. The method is based on two equations for each time step, which are minimised in a system of equations over the entire data length. The data equation

a[k] = A[k] \, e^{j \omega[k]} + a_e[k]

ensures that the signal components a_e[k] that do not originate from the harmonic component are minimised, where a[k] is the measured signal, A[k] is the instantaneous amplitude and ω[k] is the given phase signal. The second equation is the structural equation. This equation leads to a smooth amplitude trend by keeping the change in amplitude as low as possible over several time steps k; thus, it acts as a low-pass filter for the amplitude. Therefore, abrupt amplitude changes in the measurement data lead to transient oscillations at the filter output. The structural equation depends on the filter order. For a first-order filter, the equation is

A[k+1] - A[k] = η[k],

where η[k] is the change of the amplitude. The entire system of equations is solved using a least-squares algorithm. The filter property is changed by a weighting factor between the structural equation and the data equation. This weighting factor can be calculated from the so-called filter bandwidth B (Tuma, 2005). In this work, we calculate the filter bandwidth with the bandwidth factor κ as

B = κ Ω.    (15)

The bandwidth thus depends on the excitation frequency Ω. The more accurate the phase signal and the more constant the amplitude, the smaller the factor κ can be selected.
A small bandwidth leads to a more precise determination of amplitude and phase. However, the settling time of the filter increases with decreasing bandwidth, as described by Tuma (2005) and Herlufsen et al. (1999). Due to its non-causal filter characteristics, the Vold-Kalman filter cannot be applied in real time and has to be used as a postprocessing method.

Filter model of the transfer behaviour of the measurement chain

The IEPE signal conditioner and the IEPE sensor act as high-pass filters (D'Emilia et al., 2019). The discrete transfer function of a first-order high-pass filter can be expressed in terms of the discrete frequency z obtained from the z transform and the period duration T_s of the sampling frequency. The cutoff frequency f_c of an RC filter can be calculated according to Eq. (1). The transfer behaviour of an exemplary high-pass filter is shown in the Bode diagram in Fig. 4. By multiplication of the transfer functions in the z domain, several filters can be combined into one:

H(z) = H_1(z) \cdot H_2(z) \cdots H_n(z).

To calibrate the measurement data, the filter representing the measurement chain has to be inverted. The transfer function can be inverted by exchanging the numerator and denominator coefficients,

H^{-1}(z) = \frac{1}{H(z)}.

The high-pass filter characteristic of the IEPE measurement chain removes the mean acceleration from the signal, so that it cannot be reconstructed in the calibration. In addition, 1/f noise enters the signal from the measurement chain, which can lead to drift when the high-pass filter is inverted. This is due to the pole of the inverted high pass at 0 Hz, which makes the filter semi-stable. In order to enable calibration in the low-frequency range, a shelving high-pass filter can be used. A shelf gain G_shelf is introduced into the transfer function of the high pass to limit its amplitude response in the low-frequency range. Using this shelved filter, low-frequency components are limited in amplitude when the filter is inverted. This prevents drifting and thus leads to valid signals when the calibration is conducted. The Bode diagram of a high-pass filter with and without shelf is shown in Fig. 4. However, the shelf filter introduces a phase error below the cutoff frequency. Therefore, the phase behaviour should be carefully considered when selecting the shelf gain.

Identification of filter parameters

In order to identify filter coefficients for the measured transfer functions, a parameter identification is required. This is accomplished using a numerical optimisation method. As the objective function, we use a weighted Euclidean distance between the measured and the modelled complex transfer function,

e = \sqrt{\sum_i \left| \frac{H_{model}(f_i) - H_{meas}(f_i)}{\left| H_{meas}(f_i) \right|} \right|^2},    (20)

where H_meas is the measured and H_model is the modelled transfer function. A weighting based on the measured transfer function is necessary because the magnitude of the transfer function H of a high-pass filter approaches zero quickly below the cutoff frequency. This is shown in Fig. 5a. In order to weight each measured point equally along the curve, a normalisation with the inverse of the absolute measured transfer function is performed, as shown in Fig. 5b. This plot demonstrates that the real part dominates above the cutoff frequency and the imaginary part dominates below it. Equation (20) can be minimised with respect to the cutoff frequency using a global optimisation algorithm, such as the global pattern search algorithm (Hofmeister et al., 2019). To calibrate the measurement data, the identified filter model is then applied inversely to the measurement data.
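As an illustration of this identification step, the sketch below fits the cutoff frequency of a first-order high-pass model to a measured complex frequency response by minimising a weighted sum of squared distances of the kind described above. The "measured" response is synthetic, a continuous first-order model is used instead of the discrete one identified in this work, and SciPy's differential evolution stands in for the global pattern search algorithm; all of these are assumptions made only for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

freqs = np.logspace(-2, 0, 30)   # evaluation frequencies from 0.01 to 1 Hz

def highpass_response(f, f_c):
    """Complex response of a first-order RC high pass at frequencies f (Hz)."""
    s = 1j * 2 * np.pi * f
    return s / (s + 2 * np.pi * f_c)

# Synthetic "measured" response of a chain with f_c = 0.05 Hz plus a little noise.
rng = np.random.default_rng(0)
H_meas = highpass_response(freqs, 0.05)
H_meas *= 1 + 0.005 * (rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size))

def objective(params):
    # Weighted distance: normalising by |H_meas| gives points below the cutoff equal weight.
    H_model = highpass_response(freqs, params[0])
    return np.sum(np.abs((H_model - H_meas) / np.abs(H_meas)) ** 2)

result = differential_evolution(objective, bounds=[(1e-3, 1.0)], seed=1)
print(f"identified cutoff frequency: {result.x[0] * 1e3:.2f} mHz")   # close to 50 mHz
```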
The well-known effect of the drift of the measured data after time integration can also be observed when the inverse filter is applied. By employing shelving high-pass filters, the increase in amplitude in the low-frequency range can be limited, thus preventing drift. Due to the phase error introduced by the shelf, there is a trade-off between phase fidelity and amplitude limiting, as illustrated in Fig. 4. For a postprocessing calibration, the amplitude in the low-frequency range can instead be reduced without changing the phase by applying a high-pass filter forwards and backwards in time. To design this high-pass filter, the design variable can be the maximum gain G_max of the measurement signal through the filter sequence. Using the objective function, the corresponding cutoff frequency of the high-pass filter can be determined by means of numerical optimisation. The identification of the cutoff frequency for a given order of the high-pass filter is carried out with a target function that involves the transfer function H_HP(z) of the high pass.

IEPE signal conditioner circuit

For a precise investigation of the sensor behaviour in the low-frequency range, an IEPE signal conditioner with a low cutoff frequency and low noise is required. Therefore, we propose a custom IEPE signal conditioner circuit which fulfils these design criteria. The circuit diagram is shown in Fig. 1. As the current source, the LT3092 integrated circuit is used. The coupling capacitor C_C is a foil type with a low dissipation factor and a capacitance of 47 µF. The resistor R_C has an electrical resistance of 330 kΩ. According to Eq. (1), the cutoff frequency of this signal conditioner design is 0.0103 Hz. Figure 4 shows the theoretical transfer behaviour of the resulting high-pass filter. To enable long-distance cabling under adverse electromagnetic conditions, we implement a shielding concept. We use twisted-pair cabling for the signal ground and sense wires to protect against magnetic fields. A common copper mesh shields against interference from electric fields. The standard connector on industrial-grade IEPE accelerometers is of the type MIL-C-5015, which enables a full enclosure of the signal wires inside the metallic shield. For the signal conditioner, we use cheaper XLR connectors instead of MIL-type connectors. This type of connector also fully encapsulates the signal wires and mechanically ensures inverse-polarity protection. The sensor cables are thus designed to convert from XLR male to MIL-C-5015 female connectors. The housing and the electrical circuit board of the signal conditioner are shown in Fig. 6.

Calibration of the IEPE signal conditioner

In order to check the functionality of the custom IEPE signal conditioner and to obtain its exact transfer function, we carry out the calibration according to Ripper et al. (2014). For comparison, the integrated signal conditioner of the measuring system is calibrated as well. According to the data sheet, it has a cutoff frequency of 0.34 Hz. The measurement setup is shown in Fig. 2. We employ a commercial IEPE simulator with a flat low-frequency response down to DC. To speed up the calibration measurement, a multi-sine signal is fed to the IEPE simulator using a signal generator. A signal with one fundamental and seven higher harmonics is applied, with an amplitude of 0.5 V each. After a settling time of 2 min, 18 periods of the fundamental oscillation or at least 5 min of measuring time are used for the calibration.
The data analysis is carried out using the second-generation first-order Vold-Kalman filter described in Sect. 2.3. We set the bandwidth factor introduced in Eq. (15) to κ = 0.001. In the evaluation, the first and last six periods of the fundamental oscillation or at least 120 s are not used, due to the transient response of the Vold-Kalman filter. The phase signal required for filtering is calculated from the known frequencies of the signal generator.

The calibration of the signal conditioners results in an amplitude dispersion of less than 0.02 % and a phase scatter below ±0.01°. Besides the Vold-Kalman filter, the signal generator, the IEPE simulator and the measuring system contribute to this measurement uncertainty. Klaus et al. (2015) estimate the expanded uncertainty of this calibration method in the per mille range, which is consistent with our results. The frequency responses of both signal conditioners are shown in Fig. 7. The Bode diagrams resemble the transfer behaviour of first-order high-pass filters as described in Eq. (16). Model identification using the approach proposed in Sect. 2.5 provides the cutoff frequencies of the signal conditioners: 0.0106 Hz for the custom signal conditioner and 0.3474 Hz for the integrated one. The results of the model identification are also shown in Fig. 7. The measurement data of both sensor supplies fit the model very well. Both identified cutoff frequencies are higher than the theoretical or manufacturer's specifications. One cause could be the input impedance of the measuring device, which is connected in parallel to the signal conditioner output and thus reduces the effective value of R_C.

The determined transfer functions clearly show that, for the application of IEPE sensors in the low-frequency range, the transfer behaviour of the signal conditioner should be examined in detail. If the cutoff frequency of the signal conditioner lies in the frequency range to be measured, there is a considerable amplitude error. This can be calibrated, but the SNR will suffer. If the cutoff frequency is lower, the amplitude error is small, but a significant phase error still remains. The phase distortion leads to a group delay, which is imposed onto the signal characteristics.

In the following section, the calibration of the entire measurement chain down to 0.027 Hz is carried out. The custom signal conditioner is used for the calibration, as it leads to a higher SNR in the low-frequency range due to its lower cutoff frequency. This calibration method covers the entire measurement chain including the signal conditioner. Therefore, the calibration of the signal conditioner with the IEPE simulator is not strictly necessary for applications. The dedicated calibration of the signal conditioner may still be useful to check its functionality. Further, it illustrates the influence of the signal conditioner on the low-frequency performance of the measurement chain and also enables a validity check of the filter coefficients determined in the calibration using the centrifuge.

Low-frequency calibration of IEPE accelerometer measuring chains

In addition to the IEPE conditioner, the sensor itself has a high-pass characteristic. Therefore, the sensor should be calibrated for accurate measurement in the low-frequency range as well. We perform the procedure considering the entire measurement chain on a custom motorised centrifuge with a tilted plane of rotation.
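Before turning to the centrifuge calibration, the loading explanation can be made quantitative. The input impedance of the measuring device is not stated in the text, so the 10 MΩ used below is purely a hypothetical value; with it, the identified cutoff of the custom conditioner is reproduced almost exactly:

```python
import numpy as np

R_C, C_C = 330e3, 47e-6
R_in = 10e6                               # assumed input impedance (hypothetical)
R_eff = R_C * R_in / (R_C + R_in)         # parallel combination seen by C_C
print(1.0 / (2.0 * np.pi * R_eff * C_C))  # ~0.0106 Hz vs. the 0.0103 Hz design
```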
The calibration is carried out in the frequency range from 0.027 to 1 Hz. The phase signal required for the application of the Vold-Kalman filter is measured by an optical tachometer. The setup is shown in Fig. 8. The inclination angle of the rotation plate is θ ≈ 2.46° (recovered from Eq. 23 below via sin θ = 0.421/9.8126). According to information from the German Federal Agency for Cartography and Geodesy (2022), the gravitational acceleration at the site of the centrifuge is g = 9.812628 m s^-2. According to Eq. (11), the expected frequency-independent acceleration in the measurement plane is

a = g sin(θ) cos(Ωt) = 0.421 m s^-2 cos(Ωt). (23)

For the calibration measurement, we use a 24-bit delta-sigma AD converter with a sampling rate of 2500 Hz and a measuring range of ±10 V. A digital Bessel filter with a cutoff frequency of 500 Hz is used to prevent aliasing. The AD converter is connected to the previously calibrated custom IEPE signal conditioner and mounted on the rotating platform of the centrifuge. We calibrate three different IEPE sensors to investigate differences in their low-frequency transfer behaviour. The first and second sensors (IEPE A, B) are low-frequency variants of a general-purpose sensor. The third sensor, IEPE C, is a seismic high-sensitivity type. For comparison, a DC-capable MEMS sensor is calibrated in addition to the IEPE sensors. The characteristics of these sensors are listed in Table 1. The centrifuge is controlled so that each calibration frequency comprises at least 50 oscillation periods and a minimum measuring time of 200 s. We chose 17 calibration frequencies equally spaced on a logarithmic scale in the range of 0.027 to 1 Hz. The time required to complete the calibration procedure with these parameters is 152 min. For the sensor calibration, we ignore the first and last 12 rotations at each frequency so that transient oscillations of the centrifuge and of the Vold-Kalman filter do not falsify the resulting frequency response data.

The data measured on the centrifuge are contaminated with higher harmonic oscillations and other disturbing effects. Figure 9a shows five periods of the calibration measured with the sensor IEPE B at an excitation frequency of 0.055 Hz. The Vold-Kalman filter is applied to extract only the fundamental oscillation component of the signal. The required phase signal is calculated on the basis of the tachometer signal, which triggers once per revolution of the centrifuge. Figure 10 shows the influence of the selected bandwidth factor of the Vold-Kalman filter for the IEPE B sensor at an excitation frequency of 0.055 Hz. Figure 10a shows the least-squares error between the Vold-Kalman filter and a bandpass filter (passband 0.0275-0.0825 Hz) as a function of the bandwidth factor. A bandwidth factor that is too high results in larger errors. Figure 10b and c show the resulting amplitude and phase between the excitation and the measurement. Below a bandwidth factor of 0.003, the amplitude increases briefly; this is probably a numerical effect. Therefore, we set the bandwidth factor of the Vold-Kalman filter to κ = 0.005 for the further analysis. Such a low bandwidth factor is possible because no amplitude changes are to be expected within one excitation frequency step. The result of the filtering with the Vold-Kalman filter is shown in Fig. 9b. The filter output contains a signal which is phase-shifted and attenuated when compared to the excitation. This error results from the combined frequency response of the sensor and the signal conditioner.
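As an aside, a small sketch of the calibration grid and of the reference signal from Eq. (23); the variable names are ours, and the tilt angle is recovered from the stated amplitude rather than quoted:

```python
import numpy as np

freqs = np.logspace(np.log10(0.027), np.log10(1.0), 17)  # 17 log-spaced points

g = 9.812628                      # local gravitational acceleration, m/s^2
theta = np.arcsin(0.421 / g)      # tilt angle, ~2.46 deg, from Eq. (23)

fs = 2500.0                       # AD converter sampling rate, Hz
t = np.arange(0.0, 200.0, 1.0 / fs)        # minimum measuring time per point
omega = 2.0 * np.pi * freqs[0]             # rotation frequency of first point
a_ref = g * np.sin(theta) * np.cos(omega * t)  # expected in-plane acceleration
```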
Figure 9c shows the noise signal resulting from the difference between the measured signal and the filtered signal. The calibration results are verified statistically by computing the average values and the minima and maxima of the instantaneous phase and amplitude obtained from the Vold-Kalman filter. Figure 11 shows the relative deviation of the amplitude and the absolute phase deviation of all investigated sensors. The amplitude varies by up to 0.4 %, and the phase deviation is below 0.2°. One reason for the scattering of the phase at higher frequencies is the low quality of the employed phase trigger: with elevated speed of the centrifuge, the phase signal becomes less accurate, which is reflected in the statistical scatter. Moreover, mechanical warping due to centrifugal forces at higher frequencies can lead to higher deviations.

The results of the calibration are shown in Fig. 12. All three IEPE sensors have a typical high-pass characteristic of higher order. A significant difference in amplitude between the three sensors can be observed below 0.2 Hz, while the phase differs significantly below 0.5 Hz. As expected, the MEMS sensor has a linear transfer behaviour in the low-frequency range, which results in a flat amplitude and phase response. It should be noted that IEPE C was calibrated in another calibration run with slightly different frequencies, which result from friction in the drivetrain of the centrifuge.

A filter model is required to apply the calibration of the IEPE sensors to measurement data. To identify a filter model using the method proposed in Sect. 2.5, the number of cascaded high passes and the gain of the shelf have to be defined in advance. The number of cascaded high passes is determined by analysing the high-pass behaviour of the corresponding sensor. A first-order high-pass filter would lead to a phase shift of 90° within 3 decades in the low-frequency range, as shown in Fig. 4; a steeper phase response corresponds to a higher-order high-pass filter. This consideration results in a filter order of 3 for the sensor IEPE A, whereas the sensor IEPE B can be modelled with two cascaded high passes and IEPE C with a single first-order high pass. In addition to the high-pass filters of the sensors, a further high pass is introduced for the signal conditioner. When setting the shelf gain G_shelf, it should be taken into account that a higher gain reduces the settling time of the filter and leads to higher noise suppression in the low-frequency range. However, in addition to an amplitude deviation, the shelf gain has a strong influence on the phase response of the model, as shown in Fig. 4. To prevent this phase error of the shelf from distorting the filter model, only measurement data above 0.04 Hz are used for the filter parameter identification. The adaptation of the model to the determined transfer functions is achieved using the complex objective function of Eq. (20) and the global pattern search algorithm. The transfer functions of the filter models are shown superimposed on the measured transfer behaviour in Fig. 12. Generally, the identified models fit the measured data well. The phase deviation in the lower frequency range is caused by the shelf filter. Additionally, small amplitude deviations can be observed; IEPE C in particular seems to have a more complex transfer behaviour than a first-order high-pass filter. Table 2 lists the selected orders, shelf amplitudes and cutoff frequencies for each sensor.
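A sketch of this parameter identification under stated assumptions: the sensor is modelled as a cascade of identical shelved first-order high passes, the conditioner stage is omitted for brevity, the weighting follows the 1/|H_meas| normalisation of Eq. (20), and SciPy's differential evolution stands in for the global pattern search used in the paper.

```python
import numpy as np
from scipy import optimize, signal

def model_response(fc, f, fs, n_cascade, g_shelf):
    # Cascade of n_cascade identical shelved first-order high passes.
    wc = 2.0 * np.pi * fc
    b, a = signal.bilinear([1.0, g_shelf * wc], [1.0, wc], fs=fs)
    _, h = signal.freqz(b, a, worN=f, fs=fs)
    return h ** n_cascade

def objective(params, f, h_meas, fs, n_cascade, g_shelf):
    # Weighted Euclidean distance between measured and modelled complex
    # transfer functions (cf. Eq. 20); weighting by 1/|H_meas| keeps the
    # points below the cutoff from being drowned out.
    h_mod = model_response(params[0], f, fs, n_cascade, g_shelf)
    return np.sum(np.abs((h_mod - h_meas) / np.abs(h_meas)) ** 2)

fs = 2500.0
f = np.logspace(np.log10(0.04), 0.0, 17)        # use data above 0.04 Hz only
h_meas = model_response(0.02, f, fs, 3, 0.01)   # synthetic "measurement"
res = optimize.differential_evolution(
    objective, bounds=[(1e-3, 1.0)], args=(f, h_meas, fs, 3, 0.01), seed=0)
print(res.x)                                     # recovers fc ~ 0.02 Hz
```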
Application

In this section, we apply the calibration results to measurement data. First, the noise of the calibrated sensors is analysed and the sensor types are compared. Second, the calibration filters are applied to time series measurement data.

Noise level of the measurement chain

The noise level of the measurement chain is an important parameter for sensor selection. In the low-frequency range, the 1/f noise typically dominates the noise amplitude. Furthermore, the acceleration amplitudes in the low-frequency range are often low, so the noise level is more important for low-frequency measurements than for high-frequency measurements. In addition, the calibration of the IEPE sensors using an inverse high-pass filter increases the noise amplitude in the lower frequency range. We determine the noise level by means of a coherence analysis of two sensors of the same type, as described in Sect. 2.2. The spectral noise is calculated according to Eq. (5) using data measured over a time span of 100 min. For the evaluation with the Welch method, a rectangular window of 1000 s length is used. This leads to a frequency resolution of 0.001 Hz. The resulting spectral noise of the three sensor types is shown in Fig. 13. In Fig. 13a, the uncalibrated sensor IEPE C shows a typical 1/f noise characteristic. In the case of the sensors IEPE A and B, the noise flattens below 0.04 Hz due to the high-pass characteristic of the sensor and the signal conditioner. The 1/f characteristic is recovered for all IEPE sensors when the data are calibrated, as shown in Fig. 13b. An interesting effect is that IEPE B has significantly lower noise in the low-frequency range than IEPE A in the uncalibrated data; after the calibration is applied, however, the differences in the noise level between IEPE A and B diminish. Therefore, only calibrated sensor signals should be used when comparing sensors based on their noise level. The noise level of the MEMS sensor is better than that of the calibrated sensors IEPE A and B only below 0.1 Hz. Thus, using these IEPE sensors, an SNR higher than with the MEMS sensor can be expected above 0.1 Hz. The seismic sensor IEPE C has a significantly better noise performance than the other sensors down to 0.01 Hz. However, it also has a smaller measuring range, which makes it suitable only for applications with low acceleration amplitudes.

Differences in the design of the internal electronic components of the sensor types used in this study lead to varying noise levels. MEMS acceleration sensors are capacitive sensors that require a carrier frequency for measurement, leading to a higher noise level. The different noise levels exhibited by the IEPE sensors have several reasons. A sensor with a lower measuring range usually houses a larger piezoelectric crystal, which in turn leads to a lower noise level due to lower impedance. Another influence is the integrated electronic pre-amplifier, which significantly affects the noise level of the IEPE sensor (Levinzon, 2005).

Calibration of measurement data

To calibrate measurement data, the developed filter model is inverted and applied to the time-domain data. A shelf is added to the filter model to limit the amplitude increase at low frequencies; however, the shelf leads to a phase error. For a phase-true calibration, the shelf gain therefore cannot be set high enough to suppress the low-frequency noise and keep the settling time low.
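Returning to the noise analysis above, a sketch of the two-sensor coherence approach; the paper's Eq. (5) is not reproduced here, so the noise estimate below is one common variant, and the signals are synthetic placeholders:

```python
import numpy as np
from scipy import signal

# Two simultaneous recordings of the same excitation with independent
# sensor noise; synthetic stand-ins replace the measured data.
fs = 100.0                                   # reduced rate to keep the demo small
rng = np.random.default_rng(0)
s = rng.standard_normal(600_000)             # common (coherent) part
x1 = s + 0.1 * rng.standard_normal(s.size)   # sensor 1 = signal + own noise
x2 = s + 0.1 * rng.standard_normal(s.size)   # sensor 2 = signal + own noise

# Welch estimates with a rectangular window; 1000 s segments give a
# frequency resolution of 0.001 Hz, as in the paper.
nperseg = int(1000 * fs)
f, Pxx = signal.welch(x1, fs, window="boxcar", nperseg=nperseg)
_, Cxy = signal.coherence(x1, x2, fs, window="boxcar", nperseg=nperseg)

# Incoherent (noise) part of sensor 1's auto-spectrum: one common way
# to write the two-sensor noise estimate.
Pnn = Pxx * (1.0 - Cxy)
```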
To avoid amplifying the low-frequency noise too much, frequency components below the frequency range of interest should be removed after the calibration. However, this also changes the phase response when used in a real-time filtering scenario. If the calibration is applied in post-processing, the phase can be maintained by filtering forwards and backwards in time. To determine the required high-pass filters, the objective function from Eq. (21) is used. For this purpose, it is necessary to determine the maximum gain. The maximum gain G_max is chosen to be as low as possible without affecting the amplitude response above 0.05 Hz. Figure 14 shows the resulting calibration filters. The settings of the high-pass filters applied for the removal of low-frequency noise are listed in Table 3. In general, the lower the cutoff frequency and order of the high pass of the sensor, the lower the maximum gain can be set; a lower maximum gain thus leads to increased noise suppression below 0.05 Hz.

We apply the calibration procedure to measurement data taken during another run of the centrifuge. The graphs in Fig. 15 show two periods of rotation at a frequency of 0.055 Hz. For comparison, the signal of the MEMS sensor is shown. For these plots, an additional second-order low-pass filter with a cutoff frequency of 2.5 Hz is applied forwards and backwards in time to remove high-frequency signal contamination. In particular, the sensors IEPE A and B show a significant phase and amplitude deviation at this excitation frequency. The deviations are not as pronounced in the case of IEPE C due to its excellent low-frequency performance. Using the filter models identified for the respective IEPE sensors, the amplitude and phase deviations between the MEMS and the IEPE sensors can be completely corrected.

Calibration of tower vibrations of a wind turbine

The MEMS sensor and the sensors IEPE B and C, combined with the custom IEPE conditioner, were used to measure the tower vibrations of a 3.4 MW onshore wind turbine. The sensor setup is shown in Fig. 16. These measurement data enable a sensor comparison in a realistic scenario. The measuring point is located at a height of 96 m. During the startup process of the wind turbine, strong tower vibrations are observed in the measurement data. Figure 17 shows a part of this vibration time series. The fundamental oscillation frequency in these data is around 0.3 Hz. For better visualisation, all signal components above 10 Hz are removed by a low-pass filter. Figure 17a shows the time series of the uncalibrated sensor data. In this case, a phase shift is discernible between the sensors. This phase shift is larger from IEPE B to the MEMS sensor than from IEPE C. This observation is in agreement with the previously determined transfer characteristics of the sensors. By applying the calibration filters, the phase of the signal can be corrected, as shown in Fig. 17b. During operation of the wind turbine, lower-frequency signal components can be observed. By applying a double time integration, these signal components become visible in the measurement signal. The displacement estimation is shown in Fig. 18. To prevent drift due to the integration, the measurement data below 0.04 Hz are removed by a high-pass filter. Figure 18a shows the calculated displacement of the uncalibrated IEPE sensors compared to the MEMS sensor. Besides the phase error of the IEPE sensors, an amplitude error is visible. This leads to a different time series.
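A sketch of the drift-free double integration behind this displacement estimate; the 0.04 Hz cutoff follows the text, while the zero-phase Butterworth filter and its second order are our assumptions:

```python
import numpy as np
from scipy import integrate, signal

def displacement_from_acceleration(a, fs, f_hp=0.04):
    # Zero-phase high pass (forwards and backwards in time) before and
    # after each integration suppresses the well-known integration drift.
    sos = signal.butter(2, f_hp, btype="highpass", fs=fs, output="sos")
    a = signal.sosfiltfilt(sos, a)
    v = integrate.cumulative_trapezoid(a, dx=1.0 / fs, initial=0.0)  # velocity
    v = signal.sosfiltfilt(sos, v)
    x = integrate.cumulative_trapezoid(v, dx=1.0 / fs, initial=0.0)  # displacement
    return signal.sosfiltfilt(sos, x)
```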
The calibration filters can correct both errors, as shown in Fig. 18b. It should be noted that a tilt error occurs in the measurement data due to gravitational acceleration (e.g. Tarpø et al., 2021); this is not taken into account in the evaluation.

The advantage of the lower noise level of IEPE sensors in frequency ranges above 0.1 Hz becomes apparent at low signal levels. These are expected at lower measurement planes in the tower as well as during downtime of the wind turbine. For the latter, an auto power spectral density (PSD) is shown in Fig. 19. In this case, the lower noise level of IEPE C is observable in comparison with the other sensors. Above 0.02 Hz, the IEPE B sensor shows a lower noise level than the MEMS sensor. These values are for illustrative purposes only, as they depend on the measured acceleration and thus on the excitation of the structure.

Summary and outlook

In this work, we demonstrate that measurements of very low frequency structural dynamics down to 0.05 Hz can be achieved using IEPE accelerometers. To this end, we introduce a custom IEPE signal conditioner with low noise and a low cutoff frequency. The necessary low-frequency calibration of the measurement chain is carried out in the range from 0.027 up to 1 Hz using a tilted-plane centrifuge. For the signal calibration, it is sufficient to model the transfer behaviour of the IEPE sensors, including the signal conditioner, with a high-order high-pass filter. In comparison to a MEMS sensor, the investigated calibrated IEPE sensors have a better signal-to-noise ratio in the range above 0.1 Hz. The sensor comparison in the tower of a wind turbine shows that the calibrated IEPE sensors provide amplitude- and phase-faithful signals above 0.05 Hz. The lower noise level of the IEPE sensors leads to improved measurements at low acceleration amplitudes, such as during downtime of the wind turbine, at low wind speeds, and at lower measurement planes. Precise measurements, also at low amplitudes, are an important prerequisite for lifetime extrapolation based on measurements. Taking into account the transfer behaviour of the measurement chain, the use of IEPE accelerometers designed for the low-frequency range is therefore recommended for all wind turbine components in the frequency range above 0.1 Hz. In the range of 0.05 to 0.1 Hz, both sensor types have similar performance, and a decision has to be made considering the requirements of each particular case. This recommendation is only valid when signal conditioners with a very low high-pass cutoff frequency are employed. Seismic IEPE sensors should not be considered for rotating systems due to their small measuring range. Regardless of the acceleration sensor type, the tilt error has to be considered for measurements in the low-frequency range due to the contamination of the structural acceleration with the gravitational acceleration caused by the bending of the structure (Tarpø et al., 2021). This should be compensated for when exact acceleration amplitudes are desired.

In the future, a precise uncertainty investigation and refinement of the presented calibration method should be carried out. To this end, a more accurate phase sensor and a centrifuge optimised for low frequencies should be used. In addition, more precise filter models of the IEPE sensors should be investigated. The techniques developed in this paper can be used to estimate the displacement and strain of large structures using low-noise IEPE accelerometers.
Thus, further investigations should validate the displacement estimated from the acceleration of low-noise IEPE accelerometers by comparison with independent displacement measurements. In addition, a modal expansion technique can be used to estimate strains in areas of structures where measurement is difficult or impossible, such as offshore structures below sea level. For this purpose, the influence and the correction of the tilt error of accelerometers in the monitoring of support structures of wind turbines should be investigated. In many applications of SHM, heterogeneous sensor networks are used. In contrast, operational modal analysis techniques rely on a homogeneous sensor network to obtain in-phase mode shapes. In frequency ranges with linear transfer behaviour of all sensors, the use of heterogeneous sensor networks is possible for modal analysis. The calibration method for IEPE accelerometers allows them to be included in heterogeneous sensor networks in the low-frequency range without phase and amplitude errors as well. Furthermore, the influence of the lower noise level of the measurement chain on the uncertainty of the modal analysis should be investigated.

Code availability. The source code of the signal processing can be requested by contacting the corresponding author.

Data availability. The data that support the findings of this study are available from the corresponding author, Clemens Jonscher, upon request.
3D Vase Design Based on Interactive Genetic Algorithm and Enhanced XGBoost Model

The human–computer interaction attribute of the interactive genetic algorithm (IGA) allows users to participate in the product design process, in which candidate products must be evaluated; requiring a large number of evaluations leads to user fatigue. To address this issue, this paper uses an XGBoost proxy model improved by particle swarm optimization, together with a graphical interaction mechanism (GIM), to construct an improved interactive genetic algorithm (PXG-IGA), which is then applied to 3D vase design. Firstly, the 3D vase shape is designed by using a bicubic Bézier surface, and the individual genetic code is binary and includes three parts: the vase control points, the vase height, and the texture picture. Secondly, the XGBoost evaluation proxy model is constructed from user online evaluation data, and the particle swarm optimization algorithm is used to optimize the hyperparameters of XGBoost. Finally, the GIM is introduced after several generations, allowing users to change product styles independently to better meet their expectations. Based on the PXG-IGA, an online 3D vase design platform has been developed and compared to the traditional IGA and to the KD tree, random forest, and standard XGBoost proxy models. Compared with the traditional IGA, the number of evaluations is reduced by 58.3% and the evaluation time by 46.4%. Compared with the other proxy models, the prediction accuracy is improved by 1.3% to 20.2%. To a certain extent, the PXG-IGA reduces users' operation fatigue and provides new ideas for improving user experience and product design efficiency.

Introduction

As a distinctive craft and utility item, ceramics have held significant importance throughout the history of human civilization. From the earliest earthenware to the later exquisite porcelain, the history of ceramics is full of stories and legends. In ancient civilizations, ceramics were essential in everyday life, serving as vessels for food and other necessities. With the advancement of technology and the refinement of craftsmanship, ceramics diversified into a plethora of forms and functions.

With the advancement of society, people's demand for ceramics has gradually surpassed traditional styles, and they have begun to pursue more unique designs and shapes. However, traditional vase designs often rely on manual drawing by designers with accumulated experience. While such designs possess a high degree of artistry, they also have certain limitations. Designers' personal aesthetics and creativity may be constrained by their own knowledge and experience, so they cannot fully explore all possibilities in the design space. Additionally, the manual design and modification process is time-consuming and laborious, making it difficult to quickly generate a large number of design proposals with different styles and forms.
One of the key steps in ceramic design is modeling, which involves transforming creative ideas into actual product forms. Modeling requires mathematically representing the design's appearance and structure, often using mathematical surfaces to describe the shape of the product, with Bézier surfaces being widely applied. Previous studies have used neural networks to control 3D models to address wound reconstruction in the medical field [1,2], providing insights into using algorithms to control modeling to solve various problems. Therefore, ceramic modeling designs combined with artificial intelligence have emerged. However, AI-generated designs may produce unexpected ceramic shapes, failing to meet the desired outcomes [3].

In this context, the interactive genetic algorithm (IGA) [4,5], as an intelligent optimization algorithm, demonstrates unique advantages and potential. By simulating the processes of natural selection and genetic mutation, the interactive genetic algorithm is able to automatically generate and optimize design solutions. Users play the role of "selectors" in the design process, evaluating each generation of design individuals. The algorithm continuously adjusts and optimizes design solutions based on these evaluations. This interactive process not only combines human aesthetic judgment with the powerful computational capabilities of computers but also enables the exploration and discovery of innovative designs that traditional methods may find difficult to achieve in a relatively short period of time.

The IGA is developed from the genetic algorithm (GA). The GA is an evolutionary optimization algorithm that can solve problems definable by mathematical formulas, and it has been used to optimize target systems, models, and performance in multiple fields. In recent years, the GA has been applied to effective feature selection for IoT botnet attack detection [6], active disturbance rejection control of bearingless permanent magnet synchronous motors [7], gesture recognition CAPTCHA [8], state-of-charge estimation of lithium-ion batteries [9], and residential virtual power plants [10]. The application of the GA in these areas has become increasingly widespread, with more and more researchers applying the GA to solve practical problems in their own fields.

The IGA is also an optimization algorithm. The most significant difference between the IGA and the traditional GA and other metaheuristic algorithms is that it realizes human-computer interaction. Through communication with people, the IGA can be guided in the process of evolution. Unlike the GA, the IGA is not limited to solving problems with clearly defined formulas but is also able to deal with problems that cannot be clearly defined by formulas. Currently, the IGA has been widely applied in automatic terrain generation systems [11], 3D gaming model design [12], fashion design [13,14], and music melody composition [15], etc.
However, many factors affect the IGA, such as the individual's knowledge reserve, personal preferences, thinking, and emotions, all of which influence the fitness evaluation of the algorithm. These factors can cause fluctuations in fitness, resulting in different optimal solutions for everyone. In the IGA, the evolutionary direction of the population is uncertain. Since the user only needs to evaluate the individuals' fitness, the IGA recombines the population characteristics according to individual fitness, thereby generating the next generation. This evolutionary process does not align with the users' thinking, so the next generation it produces does not always suit the users' aesthetic tastes. Therefore, users usually need to evolve the population for multiple generations. In small-population evolution, the limited population size also limits the number of individual characteristics, which makes it easy to arrive at a local optimum at the end of evolution. If no new individuals are added, the population may fall into a local optimum that is difficult to escape. Each step of the population evolution needs to receive the users' fitness evaluation. The core issue with the IGA, however, is that a large number of user evaluations and interactive operations may lead to user fatigue. Specifically, the interactive genetic algorithm requires users to evaluate and provide feedback on multiple individuals in each generation of the population. As the number of generations increases, the amount of information and the number of interactions that the user needs to handle increase dramatically. This high frequency of interactions not only consumes the user's energy and time but also may lead to subjective bias and inconsistent evaluation standards during the evaluation process. For instance, users may carefully evaluate each design individual at the initial stage but, as time goes on, increasing fatigue may cause users to become impatient and make hasty evaluations. This situation not only reduces the reliability and accuracy of user evaluations but also may affect the convergence speed of the algorithm and the quality of the final optimization results.

To address the above issues, many researchers have used proxy model methods to predict the fitness value, thereby reducing the number of user evaluations and alleviating fatigue. Huang et al. [16] constructed KD tree proxy models and random forest proxy models to assist with evaluations based on historical user evaluation information. Lu et al. [17] constructed a user cognitive proxy model based on the BP neural network (BPNN). Gypa et al. [18] proposed an IGA integrated with a support vector machine for propeller optimization. Zhen and Nie [19] constructed the objective fitness values of the IGA based on weight values. Sheikhi and Kaedi [20] tackled the user fatigue problem in the interactive genetic algorithm by using the candidate elimination algorithm. As users may not be professional product designers, they may not be able to evaluate a product accurately and can only give a rough interval estimate. Therefore, some researchers have used individual interval fitness values to reduce the uncertainty of fitness values. Sun et al.
[21] proposed an improved semi-supervised learning co-training algorithm to assist the IGA, which considers the uncertainty of interval-based fitness values when training and weighting two co-training models. Gong et al. [22] proposed an IGA based on a proxy model of individual interval fitness.

XGBoost is an efficient and flexible machine learning algorithm that has been combined with the GA in various fields. Deng et al. [23] proposed a hybrid gene selection method based on XGBoost and a multi-objective genetic algorithm for cancer classification. Wu et al. [24] used an improved genetic algorithm and an XGBoost classifier for transformer fault diagnosis. Ghatasheh et al. [25] employed a genetic algorithm to optimize XGBoost for spam prediction. Gu et al. [26] used the genetic algorithm in combination with an XGBoost model to predict maximum settlement in mines. Li et al. [27] utilized a genetic algorithm and the XGBoost algorithm to identify mixed mine water inrush.

So far, however, XGBoost has rarely been combined with the IGA. In order to address the core problem of user fatigue, this study proposes a method that improves the IGA with an XGBoost proxy model and the GIM, using particle swarm optimization to tune the hyperparameters of the XGBoost proxy model. The method uses the collected user evaluation information to construct the XGBoost model. The proxy model predicts the fitness value of each individual, assisting users in their evaluations. If users feel that a predicted score differs significantly from their expectation, they can modify it. The GIM helps users to adjust the shape and appearance of the product independently and enables them to find satisfactory individuals more quickly. The main contribution of this paper is the proposal to use the XGBoost proxy model to improve the IGA. By predicting users' fitness evaluations from collected historical user data, the number of user evaluations is reduced, thereby addressing the issue of user fatigue. Additionally, the GIM is integrated into the interactive interface, allowing users to fine-tune the shape of the evaluated individuals, enhancing user satisfaction while avoiding the fatigue caused by repetitive operations. In this study, the algorithm is applied to a 3D vase design platform to verify its accuracy, optimization capability, and ability to mitigate user fatigue.

The Proposed Method

The algorithm of this study is based on the combination of the interactive genetic algorithm and an XGBoost proxy model improved by particle swarm optimization. The proxy model is constructed from users' historical data to predict individuals' fitness values, helping users to evaluate and reducing user operations. To allow users to participate more intuitively in the product design process, a graphical interaction mechanism has been implemented. Users can freely modify the characteristics of an individual product, enabling the interactive genetic algorithm to efficiently generate a customized design. Such an approach reduces the time required for design and enhances users' satisfaction.
Principle of the XGBoost Algorithm

XGBoost [28], originally proposed by the team led by Tianqi Chen, is an optimized distributed gradient boosting library designed for high efficiency, flexibility, and portability. It is an efficient and widely used gradient boosting algorithm for machine learning and data mining tasks. The basic components of XGBoost are decision trees, referred to as weak learners, which collectively form the XGBoost ensemble. The core idea is to grow the model by constantly adding trees and constantly splitting features. Each time a tree is added, a new function $f(x)$ is learned. The decision trees that make up XGBoost are built in sequence; the generation of each subsequent tree takes the prediction results, and hence the deviation, of the previous trees into account. Each decision tree is trained on the entire data set, so the process of generating each tree can be regarded as the construction of a complete decision tree.

When predicting a new sample, the sample is passed through each decision tree of XGBoost in turn. The first decision tree produces a predictive value, the second produces another, and so on through all the trees. Finally, the predicted values of all the decision trees are summed to obtain the final prediction for the new sample. The prediction model is defined as

$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i)$, (1)

where $f_k(x_i)$ is the predicted value of the $k$-th decision tree and $K$ is the number of decision trees. During the $t$-th round of training, the XGBoost algorithm retains the predictions of the previous $t-1$ rounds and adds a new function $f_t(x_i)$ to the model, which is the prediction of the $i$-th sample in the $t$-th round.

The prediction accuracy of the model is determined by its deviation and variance. The loss function represents the deviation of the model and, to keep the variance small, a regularization term is added to the objective function to prevent over-fitting. The objective function is therefore composed of the loss function of the model and a regularization term that suppresses model complexity:

$Obj = \sum_{i} l(y_i, \hat{y}_i) + \sum_{k} \Omega(f_k)$, (2)

where the first term is the sum of the errors between the true values and the predicted values of the test samples, i.e. the loss function of XGBoost, and $\Omega(f)$ is the regularization penalty on model complexity, defined as

$\Omega(f) = \gamma T + \frac{1}{2}\lambda \lVert w \rVert^2$, (3)

where $\gamma$ is the penalty coefficient and $\lambda$ is a fixed coefficient. $\gamma$ controls the number of leaf nodes $T$ in the decision tree; when the number of leaf nodes is too large, over-fitting occurs easily. $\lambda$ controls the leaf weights of each decision tree, ensuring that their values are not too large and avoiding leaving too little room for subsequent decision trees. $T$ is the number of leaf nodes of the decision tree, $w$ is the vector formed by all the leaf node values of the decision tree, and this formula expresses the complexity of a single classifier.
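As an illustration of how the two penalty terms of $\Omega(f)$ surface in practice, a minimal training sketch with the xgboost Python package follows; the data are synthetic and all parameter values are arbitrary rather than taken from the paper.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((500, 5))
y = X @ rng.random(5) + 0.1 * rng.standard_normal(500)

# gamma penalises the number of leaves (T in Eq. 3), reg_lambda the
# leaf weights (w) -- the two parts of the complexity penalty Omega(f).
model = xgb.XGBRegressor(
    n_estimators=200, learning_rate=0.1, max_depth=4,
    subsample=0.8, colsample_bytree=0.8,
    gamma=0.1, reg_lambda=1.0,
    objective="reg:squarederror")
model.fit(X, y)
print(model.predict(X[:3]))
```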
The objective function is simplified by a second-order Taylor expansion to

$Obj^{(t)} \approx \sum_{j=1}^{T}\left[G_j w_j + \frac{1}{2}(H_j + \lambda) w_j^2\right] + \gamma T$, (4)

where $G_j$ is the sum of the first-order gradients of the samples at the $j$-th leaf node and $H_j$ is the sum of the second-order gradients at the $j$-th leaf node. When each decision tree is generated, its structure is determined, and $G_j$, $H_j$, and $w_j$ are then determined as well.

To achieve the best performance of XGBoost, it is essential to construct the optimal structure of the decision tree with appropriate methods. There are two ways to split the nodes in the XGBoost algorithm: the greedy algorithm and the approximation algorithm. The greedy algorithm is the main node-splitting method in XGBoost. Starting from the root node, the greedy strategy selects the best splitting feature as the splitting node to segment the training data, and splitting continues greedily until the decision tree can no longer be split. The information gain of each splitting feature is calculated, and the feature with the largest information gain is the best splitting feature:

$Gain = \frac{1}{2}\left[\frac{G_L^2}{H_L+\lambda} + \frac{G_R^2}{H_R+\lambda} - \frac{(G_L+G_R)^2}{H_L+H_R+\lambda}\right] - \gamma$, (5)

where $Gain$ is the information gain and the terms correspond to the left subtree, the right subtree, and the undivided decision tree, respectively. When $Gain < 0$, the decision tree gives up the split.

Principles of Particle Swarm Optimization

Particle swarm optimization (PSO) [29] is an optimization algorithm based on the foraging behavior of birds in nature, used to solve optimization problems. The PSO simulates the social behavior and collaborative learning processes between individuals in bird flocks to find the optimal solution.

1. Individual representation: In the particle swarm optimization algorithm, each candidate solution in the solution space is called a particle, and each particle represents the hyperparameters of XGBoost, which include the objective function, the learning rate, the maximum depth of each tree, the sub-sampling rate of the training samples of each tree, the sub-sampling rate of the features of each tree, and the number of trees. The loss function minimizes the mean squared error, as shown in Equation (6):

$L = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$. (6)

2. Fitness function: The problem solved by the particle swarm optimization algorithm usually requires maximizing or minimizing an objective function, called the fitness function. Here, the fitness function is the RMSE of the XGBoost proxy model; the smaller its value, the better the model's effectiveness, as shown in Equation (7):

$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$. (7)

3. Initialization: At the onset of the algorithm, a certain number of particles are randomly generated, and their positions and velocities are initialized. In general, the positions of the particles are randomly distributed in the solution space, and the velocities are initialized as zero vectors.

4. Fitness evaluation: For each particle, the fitness value corresponding to its position is calculated.

5. Individual optimal position update: For each particle, the current fitness value is compared with the fitness value of the individual optimal position, and the individual optimal position is updated if the current one is better.

6. Global optimal position update: The position of the particle with the best fitness value among all individuals is selected as the global optimal position.

7.
Velocity and position update: According to certain rules, the velocity and position of the particles are updated so that they move towards the individual optimal position and the global optimal position. The velocity update depends on the historical velocity of the particle, the individual optimal position, and the global optimal position, as shown in Equation (8):

$v_i^{k+1} = \omega v_i^k + c_1 r_1 (p_i - x_i^k) + c_2 r_2 (g - x_i^k)$, (8)

where $p_i$ is the historical individual optimal position of the $i$-th particle and $g$ is the global optimal position. The inertia weight $\omega$ represents the influence of the velocity of the previous generation of particles on the velocity of the current generation: a larger inertia weight contributes to global optimization, while a smaller inertia weight contributes to local optimization. $k$ is the $k$-th generation of the population, $c_1$ and $c_2$ are the individual velocity factor and the global velocity factor, respectively, and $r_1$ and $r_2$ are random numbers between 0 and 1. The position is updated according to Equation (9):

$x_i^{k+1} = x_i^k + v_i^{k+1}$, (9)

where $x_i^k$ is the position of the $i$-th particle at time $k$.

8. Termination condition: According to the set termination condition (such as the number of iterations reaching a preset value or the fitness reaching a threshold), determine whether to end the algorithm. If the termination condition is not met, go back to step 4.

9. Output results: The solution corresponding to the global optimal position is output as the optimal hyperparameters of XGBoost.

Proxy Model Flowchart and Pseudocode

When using PSO to optimize the hyperparameters of XGBoost, the hyperparameters of XGBoost are typically treated as the optimization variables of the PSO. The specific steps are as follows. First, the dataset for training and testing is collected, including features and labels. Next, a fitness function is defined, which takes the hyperparameters of XGBoost as input and returns the RMSE of the model on the training data. Then, the parameters of the PSO algorithm are set, as detailed in Table 1. Subsequently, the PSO algorithm searches for the optimal hyperparameter combination of XGBoost that minimizes the fitness function. Once the optimal hyperparameters are found, they are used to train the XGBoost model. Finally, the performance of the trained XGBoost model is evaluated on the test set, and the trained model is saved to a file for future use. The PSO-XGBoost algorithm flow is shown in Figure 1, and the pseudo-code is shown in Algorithm 1.

Table 1. PSO parameter settings.
Parameter | Numerical Value
number of particles | 10
number of dimensions | 5
number of iterations | 50
inertia weight | 0.5
c1, c2 | 1.5
r1, r2 | random generation (0-1)

Figure 1. The implementation process of evaluation of the proxy model.
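A condensed sketch of the PSO-XGBoost loop with the Table 1 settings follows; the hyperparameter bounds and the synthetic data are our assumptions, and the real platform would use the stored user evaluation data instead.

```python
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 5))
y = X @ rng.random(5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

def fitness(p):
    # p = [learning_rate, max_depth, subsample, colsample_bytree, n_estimators]
    model = xgb.XGBRegressor(
        learning_rate=p[0], max_depth=int(p[1]), subsample=p[2],
        colsample_bytree=p[3], n_estimators=int(p[4]),
        objective="reg:squarederror")
    model.fit(X_tr, y_tr)
    return np.sqrt(mean_squared_error(y_te, model.predict(X_te)))  # RMSE, Eq. (7)

lo = np.array([0.01, 2, 0.5, 0.5, 50.0])     # assumed lower bounds
hi = np.array([0.50, 10, 1.0, 1.0, 300.0])   # assumed upper bounds
n, w, c1, c2 = 10, 0.5, 1.5, 1.5             # Table 1 settings
x = lo + rng.random((n, 5)) * (hi - lo)      # random initial positions
v = np.zeros((n, 5))                         # zero initial velocities
pbest = x.copy()
pcost = np.array([fitness(p) for p in x])
gbest = pbest[pcost.argmin()]
for _ in range(50):                          # 50 iterations (Table 1)
    r1, r2 = rng.random((n, 5)), rng.random((n, 5))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (8)
    x = np.clip(x + v, lo, hi)                                 # Eq. (9)
    cost = np.array([fitness(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[pcost.argmin()]
print(gbest, pcost.min())                    # optimal hyperparameters and RMSE
```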
Data Collection and Update

The evaluation proxy model helps users to assess products. The proxy model, also known as an approximation model, is a model constructed to substitute for users in evaluation. Generally, a large amount of evaluation sample information makes the proxy model's evaluation more accurate. The number of evaluations per user is limited, and too many ratings cause user fatigue. To solve this problem, this paper collects all the evaluation information of users who have used the 3D vase design system [30], including the users' personal information, the individual characteristic data, and the fitness value information of the vases. These data are used to construct a PSO-XGBoost model, which allows the current user to quickly find other similar users and use their historical evaluation data to predict the fitness of a new individual.

However, it should be noted that the fitness value predicted by a proxy model built on similar individuals' information may differ from the actual evaluation of the user. Therefore, users are allowed to modify and submit the individual fitness predicted by the proxy model, and the user's evaluation data are saved. Considering that updating the proxy model is a time-consuming process, and to avoid affecting the user's design experience, the system automatically updates the proxy model when the population evolution is terminated by the user, as shown in Figure 2. This ensures that the model is updated in a timely manner.

Vase Construction and Coding

In order to meet the various needs of different users and generate different types of vases, the vases are designed based on bicubic Bézier surfaces [31]. First, a mesh model of the vase is constructed from the control points of the Bézier surface; changing the coordinates of the control points then changes the shape of the vase, thereby generating different vases. A bicubic Bézier surface is defined as

$S(u,v) = \sum_{i=0}^{3}\sum_{j=0}^{3} P_{i,j} B_{i,3}(u) B_{j,3}(v)$, (10)

where $B_{i,3}(u)$ and $B_{j,3}(v)$ are cubic Bernstein basis functions and $P_{i,j}$ are the control points of the surface. The equation can also be written in matrix form (Eq. 11).

The construction of the vase model is relatively simple and mainly includes three parts: the bottle mouth, the bottle body, and the bottle bottom. Due to the central rotational symmetry of the vases, the vase model can be considered as formed by the rotation of a Bézier curve. To enhance the intricacy of the vase curve, this study uses two cubic Bézier curves for the vase contour [32], as depicted in Figure 3. The control points $P_0$-$P_6$ form a cubic spline curve, with $P_3$ serving as the connection point of the two curves. To ensure a smooth connection between the two curves, points $P_2$, $P_3$, and $P_4$ must remain collinear. On the right side of the image is the control point grid model of the bicubic Bézier surfaces. To maintain the central rotational symmetry of the vase, the x and y coordinates of the initial control points in the same row are multiplied by the same scaling factor. This ensures a uniform reduction or enlargement of the control points in the same row, preventing deformation of the vase. The control points aligned with $P_0$, such as $P_{00}$-$P_{03}$ in Figure 3, are scaled by the factor applied to the $P_0$-$P_6$ control points. Therefore, this study adjusts the scaling factors of $P_0$-$P_6$, allowing the control points of the vase to be modified and thereby
simplifying the user's operation in shaping the vase. Before constructing the vase, the vase mesh model must be generated from the control points. When creating the mesh model, a quadrilateral mesh is used; however, in rendering and computation, triangular meshes are generally more stable, especially during deformation or transformations. Therefore, when using Three.js to add materials and render the vase, the mesh model is converted into a form composed of triangles. The vase mesh model is illustrated in Figure 4. As two cubic Bézier curves are employed to form the vase contour, the vase body must be divided into upper and lower sections; the upper section comprises four bicubic Bézier surfaces, and the lower section is likewise formed by four surfaces. The vase mouth is constructed from a circle formed by a curve, and the bottom is composed of a circle formed by the concatenation of four Bézier surfaces. When joining the upper and lower surfaces, the last row of the upper surface serves as the first row of the lower surface. Similarly, when joining the left and right surfaces, the last column of the left surface serves as the first column of the right surface. To ensure the smoothness of the surface concatenation, the symmetric control points at the junction are first adjusted uniformly, and then the normal vectors of the surfaces are calculated. This ensures the correct computation of normal vectors at each surface point. Correct normal vectors are crucial for rendering smooth and realistic surfaces, especially for handling lighting and shading. When light strikes the surface, normal vectors are used to calculate the angle between the light and the surface, influencing the scattering and reflection of light. This is essential for achieving visual smoothness and a realistic texture.

To make the designed vase more realistic, a richer pattern is incorporated onto its surface: using image texture mapping technology [33], the pattern is fused with the vase to create a harmonious and lifelike appearance.

The individual uses binary encoding consisting of control point parameters, the vase height, and the texture image, as illustrated in Figure 6. The x and y coordinates of control points $P_0$-$P_6$ are represented by seven 8-bit binary codes, scaling them from 0 to 2.55 times their original values. The z-axis coordinates of control points $P_0$-$P_6$ (the vase height) are scaled jointly by one 8-bit binary code, also ranging from 0 to 2.55 times the original value. There are a total of 64 texture images, each represented by a 6-bit binary code. Therefore, the genetic encoding of a vase comprises 70 bits (7 × 8 + 8 + 6). Once the population evolution has been completed, decoding the corresponding binary codes retrieves the individual vase characteristics, which can then be displayed on the interactive platform.

Evolutionary Operators

The evolutionary operators use roulette selection with an elite strategy, multi-point crossover, and uniform mutation. Figure 7 shows the operation diagram of the crossover and mutation operators. When chromosomes cross over, multiple crossover points are randomly set in the individual chromosomes, and the genes between them are exchanged. When a chromosome mutates, the mutation probability of each gene point is the same, and a gene point mutates when the probability is met.
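A small sketch of the surface evaluation behind Eq. (10) follows; the control grid is a random placeholder rather than an actual vase profile.

```python
import numpy as np
from math import comb

def bernstein3(i, t):
    # Cubic Bernstein basis function B_{i,3}(t).
    return comb(3, i) * t**i * (1.0 - t) ** (3 - i)

def bezier_surface_point(P, u, v):
    # Bicubic Bezier surface point S(u, v) from a 4x4 grid of 3D control
    # points P with shape (4, 4, 3), following Eq. (10).
    S = np.zeros(3)
    for i in range(4):
        for j in range(4):
            S += bernstein3(i, u) * bernstein3(j, v) * P[i, j]
    return S

P = np.random.default_rng(0).random((4, 4, 3))   # placeholder control grid
print(bezier_surface_point(P, 0.5, 0.5))
```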
Graphic Interaction Mechanism

In this paper, the GIM is introduced in the middle and late stages of individual evolution. With the GIM, users can modify the characteristics of individual vases according to their personal preferences, so that they can quickly find products they are satisfied with. Considering that not all users have a good understanding of vase design, computer graphics are integrated [34] and a parametric method is used to construct the 3D model. As shown in Figure 8, the picture of the vase on the left is a 3D vase generated by the bicubic Bézier surfaces, and the buttons on the right control the parameters of the vase. With the GIM, the user can change the vase parameters by dragging the buttons, which changes the shape of the vase. When users are not satisfied with the shape of a vase, they can use the GIM to change the shape and add their preferred individuals to the population. This enriches the population diversity, so that the evolution does not easily fall into a local optimum.

Algorithm Procedure

In response to the IGA's core problem of user fatigue, this study improves the IGA by adding a proxy model that assists users in evaluating individuals. If users are unsatisfied with the fitness values predicted by the proxy model, they have the option to adjust them to better align with their expectations. However, some users may have a limited understanding of the designed products, resulting in uncertainty in the initial evaluation. In this study, the GIM has been introduced in the middle and later stages of evolution, allowing users to freely adjust the vase model according to their preferences. Once the evolution reaches the set number of generations, the evolution concludes and the user's evaluation data are stored in the database to update the proxy model, thereby enhancing its performance. The algorithm flow chart is shown in Figure 9, and Algorithm 2 shows the pseudo-code of the algorithm.

Parameter Settings of the Interactive Genetic Algorithm

The main problem of human-computer interaction is user fatigue. An appropriate population size and evolutionary termination generation can not only help users reduce fatigue but also improve efficiency. Therefore, the population size of the system is set to six, and the number of generations of evolution is set to 20. The system terminates the evolution automatically when the number of iterations reaches the set value. The crossover probability is set to 0.9 and the mutation probability to 0.1. The genetic parameters of the IGA are shown in Table 2.
Table 2. Genetic parameters of the IGA.
Parameter | Numerical Value
maximum generation | 20
population size | 6
crossover probability | 0.9
mutation probability | 0.1

Comparison of Proxy Models

In order to verify the prediction performance of the PSO-XGBoost proxy model, this study compares the accuracy of the XGBoost model and the PSO-XGBoost model. Since the vase design platform currently has a single database, in which all user evaluation data and individual evaluation details are stored, the platform's database is used as the dataset for training the surrogate model. The dataset is divided into training and testing sets in a 9:1 ratio. The training set is used to train the surrogate model, while the testing set is employed to validate the effectiveness of the predicted results. Because the test set contains a large amount of data, 100 samples are randomly selected from it for comparison, and the predicted values of the models are compared with the real values, as shown in Figure 10. It can be seen from Figure 10 that the PSO-XGBoost model curve matches the real-value curve better than the XGBoost model curve. The performance indicators of the two models are shown in Table 3, which compares the RMSE, the MAE, the accuracy, and R² of the two proxy models on the same data set. XGBoost (1), XGBoost (2), and XGBoost (3) are models trained with different XGBoost hyperparameters; the specific parameters are shown in Table 4. Compared with the XGBoost models, the RMSE of the PSO-XGBoost model decreases by 0.0391-0.0572, the MAE decreases by 0.024-0.077, the accuracy increases by 1.1-1.3%, and R² increases by 0.016-0.021. It is concluded that the PSO-XGBoost model has better performance and a better prediction effect.

In order to further prove the effectiveness of the PSO-XGBoost model, this study compares the K-D tree (KDT) proxy model, the random forest (RF) proxy model, and the PSO-XGBoost proxy model. Again, 100 samples are randomly selected from the test set, and the different proxy models are used to predict the user evaluations, as shown in Figure 11. It can be seen from Figure 11 that the predicted-value curve of the PSO-XGBoost model is consistent with the real-value curve, while the predicted-value curves of the KDT model and the RF model show certain gaps from the real-value curve. The PSO-XGBoost proxy model is compared with the KDT model and the RF model in terms of performance indicators in Table 5, which compares the RMSE, the MAE, the accuracy, and R² of the different proxy models on the same data set. Compared with the KDT and RF proxy models, the RMSE of the PSO-XGBoost model decreases by 1.5391 and 0.1247, respectively, the MAE decreases by 1.335 and 0.132, respectively, the accuracy increases by 20.2% and 3.9%, respectively, and R² increases by 0.277 and 0.048, respectively. It is concluded that the prediction effect of the PSO-XGBoost model is better than that of the KDT and RF models. In addition to algorithms from the field of machine learning, this paper also compares the BPNN algorithm [17] from the field of deep learning; the performance indicators show that the prediction performance of PSO-XGBoost is slightly better. To further demonstrate the performance of PSO-XGBoost, this study also takes into account the randomness of the PSO-optimized parameters: PSO-XGBoost was therefore run 10 times to obtain its average performance. From Table 5, it can be observed that, compared to the optimal PSO-XGBoost model, the performance of
PSO-XGBoost (average) is slightly inferior, but it is better than that of the previous three algorithms.In addition, this paper also utilized Balancing Composite Motion Optimization (BCMO) [35] to optimize the hyperparameters of XGBoost, referred to as BCMO-XGBoost.As shown in Table 5, BCMO-XGBoost exhibits slightly better performance in terms of RMSE and accuracy compared to PSO-XGBoost.However, BCMO-XGBoost shows slightly worse performance in terms of MAE and R 2 compared to PSO-XGBoost.The optimization stability of BCMO-XGBoost is expected to be better, albeit with longer runtime.In general, PSO-XGBoost performs well. Comparison of Optimization Ability In order to prove the effectiveness of the methods proposed in this study, the traditional IGA, IGA using KDT proxy model (KDTGIM-IGA) [17], IGA using RF proxy model (RFGIM-IGA), IGA using XGBoost proxy model (XGBGIM-IGA), and IGA using PSO-XGBoost proxy model (PXG-IGA) were compared.Using the method of controlling variables, the evolutionary parameters of the five methods are set to the same value.The evolutionary operator uses roulette selection and elite strategy, multi-point crossover, and uniform mutation.In this study, five users were selected to operate each algorithm once.The users operated through the interactive interface, and the scoring interval was 0-10 points.When the population evolves to the end of the 20th generation, the interactive interface is shown in Figure 12.This study mainly compares the average fitness, average maximum fitness, number of user evaluations, and user evaluation time of five different algorithms to verify the ability of the method proposed in this study to optimize performance, reduce the number of user evaluations, and alleviate the user's fatigue.According to the experimental results, Figures 13 and 14 compare the fitness distribution of the five users using the five methods.Figure 13 shows the trend graph of the average fitness value of each generation of evolutionary individuals.These five curves represent the change trend of the average fitness value of each generation of the five methods.The curve of the PXG-IGA proposed in this study shows an upward trend with the evolution of generations, and the curve of the PXG-IGA is higher than that of the other algorithms after the tenth generation, indicating that the PXG-IGA is better than the other algorithms.In the first 10 generations, there was no significant difference between the traditional IGA and other IGA curves with the proxy models.However, after 10 generations, due to the use of the graphical interaction mechanism, the curves of the other four algorithms are significantly improved compared with the IGA curves.From Figure 14, the change trend of the average maximum fitness value of the user evaluation individual with the increase in the generations can be observed.Obviously, the average maximum fitness curve of the PXG-IGA is higher than that of the other methods. 
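As a sketch of how the proxy model described above can be tuned, the code below trains an XGBoost regressor whose hyperparameters (here max_depth, learning_rate and n_estimators, chosen as an assumption since the paper does not list the search space) are optimized with a small hand-rolled particle swarm. Random data stands in for the platform's user-evaluation database, and the 9:1 train/test split follows the text.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# X: individual feature encodings, y: user evaluation scores (0-10).
# Random data replaces the (unpublished) vase-evaluation dataset.
rng = np.random.default_rng(0)
X, y = rng.random((1000, 16)), rng.random(1000) * 10
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)  # 9:1 split

def rmse_for(params):
    depth, lr, n_est = int(params[0]), params[1], int(params[2])
    model = XGBRegressor(max_depth=depth, learning_rate=lr,
                         n_estimators=n_est, verbosity=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_te, model.predict(X_te)) ** 0.5

# Minimal particle swarm over (max_depth, learning_rate, n_estimators).
lo, hi = np.array([2, 0.01, 50]), np.array([10, 0.3, 400])
n_particles, n_iter, w, c1, c2 = 10, 20, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([rmse_for(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([rmse_for(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (max_depth, lr, n_estimators):", gbest, "RMSE:", pbest_val.min())
```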
Comparison of User Fatigue Alleviation The greatest feature of the IGA is using human-computer interaction to solve those implicit performance index problems.However, humans are prone to fatigue and if users are required to operate too frequently it will increase their sense of fatigue.The user's fatigue will lead to noise in user evaluation and deviation in fitness.This study introduces the PSO-XGBoost proxy model to help users to evaluate, helping users to reduce the number of evaluations, thereby reducing fatigue.In order to prove the effect of the algorithm proposed in this study, the experiment compares the number of evaluations and time of evaluations of users using different algorithms.The number of evaluations refers to the number of individuals evaluated by the user.The more users evaluate the number of individuals, the more likely they are to be fatigued.The time of evaluations can reflect the difficulty of the user's evaluation of the population and the performance of the system.The longer the evaluation time, the longer the user participates in the design, and the more likely the user is to be fatigued.Therefore, the number of evaluations and time of evaluations can be used to measure the indicators of the user's fatigue.The number of evaluations and time of evaluations of the five algorithms are shown in Table 6. Figures 15 and 16 illustrate the number of evaluations and times of evaluations, as well as their means and standard deviations, for five users on the vase design platform using IGA, KDTGIM-IGA, RFGIM-IGA, XGBGIM-IGA, and PXG-IGA, respectively.From the comparison of mean values, the number of evaluations of the PXG-IGA is 3-70 evaluations fewer than that of the other algorithms, and the times of evaluations are 36.6-206.8s shorter than those of the other algorithms.From the perspective of standard deviation comparison, because the number of evaluations of the IGA is 120 times each time, its standard deviation is 0, and the standard deviation in the number of evaluations in the PXG-IGA is the smallest compared with the other three algorithms.The standard deviation in the time of evaluations of the PXG-IGA is lower than that of IGA, KDTGIM-IGA, XGBGIM-IGA, and higher than that of RFGIM-IGA.This is because the evaluation behavior of different users may be very different, resulting in large fluctuations in the times of the evaluations of different users, so the standard deviation in the number of evaluations of the PXG-IGA is the smallest.On the contrary, the standard deviation in the time of evaluations will be larger than that of the RFGIM-IGA.By comparing the number of evaluations and time of evaluations of these five algorithms, it can be seen that the performance of the IGA with the proxy model is better than the traditional IGA.The average number of evaluations and time of evaluations of the PXG-IGA are lower than the other proxy model algorithms, and the stability of the algorithm is more stable than the other algorithms.This shows that the effectiveness of the PXG-IGA is better and can effectively mitigate the user's fatigue. 
Conclusions This system aims to move product design in a more intelligent and more convenient direction. A method for improving the IGA has been proposed which uses PSO-XGBoost as the proxy model and introduces the GIM. Based on this method, a 3D vase design system has been constructed. Drawing on the user's historical data, the method assists users in evaluating individual products by training the PSO-XGBoost proxy model, and it continually adds data from new users to update the model and improve its accuracy. To help users design preferred products faster, this study uses the GIM, which allows users to dynamically change individual features, introduce new features into the population, and increase its diversity.

The experimental results indicate that the PSO-XGBoost model has clear advantages in prediction performance in the proxy-model comparison. In the comparison of optimization ability, the PXG-IGA is clearly superior to the IGA, KDTGIM-IGA, XGBGIM-IGA, and RFGIM-IGA in terms of average fitness and average maximum fitness, especially after the tenth generation, where its advantage is most pronounced. In the comparison of user-fatigue alleviation, the PXG-IGA shows a significant reduction in the number of evaluations and in evaluation time compared to the IGA, KDTGIM-IGA, XGBGIM-IGA, and RFGIM-IGA, providing users with a better evaluation experience. It can therefore be concluded that the proposed method effectively mitigates the user's fatigue and enables faster design of products that satisfy users. However, different users may have varying habits and preferences, and this study does not examine the impact of such individual differences on the effectiveness of fatigue relief; this aspect requires further, in-depth research.

Normal University (KJ19015), the Program for the Introduction of High-Level Talent of Zhangzhou, and the National Natural Science Foundation of China (no. 61702239).

Figure 2. Update the proxy model. Figure 4. Vase mesh model. The IGA's gene coding is made up of three parts: texture picture, vase height, and control point parameters; their random combination constitutes different chromosomes representing different vase models. The composition of the vase gene coding is shown in Figure 5. The curve of the bottle body is composed of anchor points (points 0, 3, and 6) and curvature control points (points 1, 2, 4, and 5); the mouth and bottom of the vase are controlled by points 0 and 6, respectively. Figure 5. The coding composition of the vase. Figure 10. The predicted values of the XGBoost and PSO-XGBoost models compared with the real values. Figure 11. The predicted values of the KDT, RF, and PSO-XGBoost models compared with the true values. Figure 13. The average fitness comparison of users' evaluation individuals. Figure 14. Comparison of average maximum fitness values of users' evaluation individuals. Figure 15. Comparison of the mean value of the number of evaluations and time of evaluations. Figure 16. Comparison of the standard deviation of the number of evaluations and time of evaluations. Table 3. Performance comparison of proxy models. Table 5. Performance comparison of proxy models.
9,639
2024-06-21T00:00:00.000
[ "Computer Science", "Engineering" ]
VEGA-CONSTELLATION TOOLS TO ANALIZE HYPERSPECTRAL IMAGES Creating high-performance means to manage massive hyperspectral data (HSD) arrays is an actual challenge when it is implemented to deal with disparate information resources. Aiming to solve this problem the present work develops tools to work with HSD in a distributed information infrastructure, i.e. primarily to use those tools in remote access mode. The main feature of presented approach is in the development of remotely accessed services, which allow users both to conduct search and retrieval procedures on HSD sets and to provide target users with tools to analyze and to process HSD in remote mode. These services were implemented within VEGAConstellation family information systems that were extended by adding tools oriented to support the studies of certain classes of natural objects by exploring their HSD. Particular developed tools provide capabilities to conduct analysis of such objects as vegetation canopies (forest and agriculture), open soils, forest fires, and areas of thermal anomalies. Developed software tools were successfully tested on Hyperion data sets. INTRODUCTION Recently a remarkable progress has been observed towards greater open access to hyperspectral data (HSD).So, for example, NASA space data centers and NASA contractors increased access to data of hyperspectrometers Hyperion (EO-1, 2003) and HICO (HICO, 2014) mounted onboard satellites NASA EO-1 and International Space Station (ISS), respectively.In 2013, the Russian satellite "Resurs-P" with domestic hyperspectrometer HSA produced by S.A. Zverev Krasnogorskiy Zavod was launched (Arkhipov et al., 2014), whose data also became available for wide research community.This new situation calls for creating data provider services to manage HSD, including not only data ordering capabilities but also a set of analytic tools for remote analysis and processing HSD on data provider facilities. One of the main obstacles to widening hyperspectrometer application is the fact that satellite hyperspectrometers cannot provide regular and sufficiently frequent coverage of Earth areas on global or even on regional scale.This is why currently hyperspectrometer applications are interesting mainly for scientific researchers.In this regard, the possibility of joint analysis of HSD with multispectral data (MSD) is of great interest.It is necessary primarily for a clear and deeper understanding of various objects and phenomena that we can now permanently monitor using a number of satellite Earth observing systems.Therefore, the introduction of capabilities to work simultaneously with hyperspectral and multispecral data into Earth observation (EO) information systems (IS), whose services and information resources are designed to solve scientific and applied tasks, is actual and relevant. Assimilation of HSD in EO IS implies not merely adding HSD sets but also upgrading existing functionality to deal with unique HSD features.At least, such software (SW) tools as spectral analysis, HSD classification, and hyperspectral indices exploration should be added.One should keep in mind that these tools have to manage massive and rapidly growing HSD arrays in distributed infrastructure.A common approach to deal with the problem was elaborated in (Loupian et al., 2012).The main conclusion of this work is that efficient usage of EO data (fast data processing, reliable and comprehensive results, capabilities to analyze multiyear global data sets, etc.) 
can be achieved if we provide users not only with EO data sets and products but also with remotely accessible SW tools to manage and analyze these data sets via Web interfaces.This approach to managing HSD is implemented in the present work. The main aim of the work was to develop tools to work with HSD in a distributed information infrastructure, i.e. primarily in the remote users access mode via Web interfaces.They should allow users to not only do search-and-retrieval operations on HSD but also provide capabilities to analyze and configure HSD processing procedures in remote mode. Currently there is an abundance of EO IS (see e.g.references in (Loupian et al., 2012)) providing Web access to catalogues and archives of EO data.However, implementations of SW tools to work with EO data, i.e. to analyze and process them by user requests, in remote mode via common Web interfaces of an EO IS are scarce.Such type of IS was implemented in VEGA-Constellation IS family by research and development team of the Space Research Institute of the Russian Academy of Sciences (IKI RAS) (Loupian et al., 2011;Bartalev et al., 2012;Uvarov et al., 2014;Savorskiy et al., 2014;VEGA, 2015).Design and development of VEGA-Constellation information systems is based on original GEOSMIS technology also elaborated by the IKI RAS team. GEOSMIS TECHNOLOGY ARCHITECTURE GEOSMIS technology was designed by IKI RAS development team (Tolpin et al., 2011;Savorskiy et al., 2012).GEOSMIS architecture consists of two functional levels whose functional interactions are shown in Figure 1 Presentation level is a subsystem that supports interaction with user.This level allows users to work with the system via Web interface or via specialized applications, e.g.GIS.As typical GIS is provided with all basic tools to work with map data, its interactions with GEOSMIS should run on data service level.This is the reason why GEOSMIS is oriented on the development of unified data access interfaces.Presentation level is shown in Figure 2 (Tolpin et al., 2011). Work of such interfaces is enabled by a kernel consisting of two object modules:  Map object module is implemented as smisMap;  Metadata object module is implemented as smisMeta. Map object module displays map information retrieved by data services.In addition, the module allows to send spatial queries and to export displayed data to print service or even to other systems.Metadata object module supports catalog data search and retrieval of data necessary for information displaying on rendering service. Application level is a subsystem that should enable interaction between interfaces and data.This level consists of CGI services, core services, system API, and plugins as shown in Figure 3 (Tolpin et al., 2011;Savorskiy et al., 2012).All three types of Web services are realized in frame of one common strategy i.e. are integrated in one common service. Those common services enable support of map data receiving (via GetMap request that is generated in interface by smisMap module), queries on metadata receiving (via GetMetadata request that is generated in interface by smisMeta module), requests on data addition and modification, e.g.addition of polygons and information about them.So, CGI interfaces are capable to provide direct interaction with metadata retrieving services and map information displaying.It is important that they can interact with any amount of spatially distributed services. 
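The text does not spell out the exact request syntax used by the smisMap module, but the GetMap/GetMetadata pattern is analogous to the standard OGC WMS protocol. The snippet below therefore builds a generic WMS-style GetMap URL purely for illustration; the endpoint, layer name and bounding box are hypothetical and do not refer to a real GEOSMIS service.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and layer name (not a real GEOSMIS URL).
BASE_URL = "https://geosmis.example.org/cgi-bin/wms"

params = {
    "SERVICE": "WMS",
    "REQUEST": "GetMap",              # counterpart of the GetMap request built by smisMap
    "VERSION": "1.3.0",
    "LAYERS": "hyperion_scene_rgb",   # hypothetical layer identifier
    "CRS": "EPSG:4326",
    "BBOX": "60.5,88.5,61.0,89.5",    # illustrative lat/lon window
    "WIDTH": "1024",
    "HEIGHT": "1024",
    "FORMAT": "image/png",
}

print(BASE_URL + "?" + urlencode(params))
# A metadata query would follow the same pattern with a GetMetadata-style request
# (the catalogue query issued by the smisMeta module).
```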
System API is a separated part of Application level.It is composed of a set of Perl modules that are located on servers and can get data and work with them on server side.All the other Application level services work with data via those modules.Each module takes responsibility for a definite data type or system part.This allows flexible system configuration, error fixing, and fast software development. The above approach formed the basis for flexible and robust development of analytical SW tools running in remote access mode with distributed Earth observation data sets.This is proved by GEOSMIS technology implementation (Tolpin et al., 2011) in various monitoring systems operating in Russia, including:  Integrated interface to work with the data of informational system of remote monitoring of the Federal Forestry Agency of Russia (Bartalev et al., 2010;Efremov et al., 2011);  Web interface to work with data of industry-wide monitoring system of Federal Agency for Fisheries of Russia (Solodilov et al., 2011);  Integrated data catalog of the Scientific Center for Earth Operative Monitoring of Russian Space Agency.The interface provides the capability to analyze data from various satellite systems (Bourtsev et al., 2011a);  Joint access system of the European, Siberian and Far Eastern centers for receiving and processing satellite data "Planeta" of Rosgidromet (Bourtsev et al., 2009;Bourtsev et al., 2011b). HSD TOOLS IMPLEMENTING TASKS VEGA-Constellation is a long-term project of IKI RAS aiming at creating information systems or services based on a unified GEOSMIS technology.The different information systems within the VEGA-Constellation are described in detail in (Loupian et al., 2011;Bartalev et al., 2012;Uvarov et al., 2014;Savorskiy et al., 2014).Its interface is shown in Figure 4.In order to work in information systems of VEGA family with HSD, it was necessary to solve the following tasks:  create a technology for obtaining HSD from various sources including remote ones;  create a technology for HSD backup which should ensure rapid search, selection and display of HSD from long-term and operational archives;  create basic interfaces that should provide capabilities to analyze HSD, including complex analysis of HSD and their products, together with the other information products stored in information systems of VEGA-Constellation family. HIGH PERFORMANCE ALGORITHMS FOR HYPERSPECTRAL DATA MASSIVE MANAGEMENT The data files from Hyperion USGS archive (Hyperion, 2014) were used as the main source of HSD during the work phase of design and implementation of basic HSD management and analytic software for usage in VEGA environment.For these purposes the IKI RAS team developed its original technology of data exchanges.This technology is described in detail in (Loupian et al., 2012a).It should be mentioned that using the same approach one can realize HSD uploading from another, different from USGS, data sources or providers. The data were downloaded in the form of zip-compressed files containing 242 GeoTiff images, one per each spectral channel of Hyperion (in the band range from 355.59 to 2577.8 nm), and an ASCII text file with metadata in MTL format.Each file contains a 16-bit (int16) image in UTM projection.Each pixel represents radiation flux intesity. 
After downloading, all data are automatically included into a 242-channel GeoTiff file.This procedure is completed by inclusion in the file of a 5-level (5-scale) pyramid.Scale pyramid is formed in order to provide scaling of data in order to significantly speed up access to data sets.This is particularly important for viewing many images in on-line remote mode.After testing, the best compression DEFLATE algorithm was chosen.It allows reducing the storage capacity by 10% compared to the original archive volume.Original spatial resolution, projection and values of the basic scale image remain unchanged.So, we are able to display, analyze and process data in subsequent procedures without losing their original quality. Along with the data file, the text annotation in a format suitable for mastering a specialized database is automatically generated.This specialized database is organized by basic software for management of satellite data archives, also developed by IKI RAS (Balashov et al., 2013).Depending on the scene angle, the special product identifiers specific to daytime and nighttime data are received.Such information is supplied to archives built using FDB (File Data Base) technology developed by ISR RAS (Efremov et al., 2004) as well.Hyperion data archive is organized in the same way as the majority of VEGA-Constellation archives, developed by IKI RAS.Namely, high spatial resolution data sets (Landsat, SPOT, Canopus-B, etc) are stored in the same way (Loupian et al., 2011;Bartalev et al., 2012;Uvarov et al., 2014).Curreently IKI RAS HSD archives contain more than 11,000 scenes (mostly of the territory of Northern Eurasia).Their total volume exceedes 3.5 TB. TECHNOLOGY OF HSD ANALYTIC TOOLS Specialized capabilities that operate within the Web mapping interfaces based on GEOSMIS technology were created (Kashnitskii et al., 2015) in order to work with HSD.These capabilities allow integration of functions that can work with EO data in a variety of information systems based on GEOSMIS technology.Created capabilities allow developing analytic tools to deal with data of individual spectral channels, build color synthesis of arbitrarily selected channels, conduct joint analysis of HSD and data from other satellite sensors, to deal with hyperspectral indices, etc. (see Section 6-8 for descriptions of these capabilities).The technology enabling HSD analysis in remote access mode should possess specialized SW components (aggregated in 3 subsystems as presented in Figure 5) in order to implement the following typical milestones for satellite data processing (Kashnitskii et al., 2015): According to the above requirements, GEOSMIS technology principal schematic was designed (Figure 5).The technology was implemented via the following main components (Kashnitskii et al., 2015): . IC provides remote control of processing procedures (data selection, parameter setting, execution control, etc.) 
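A rough equivalent of this ingestion step can be written with the GDAL Python bindings; the sketch below stacks per-channel GeoTIFFs into one DEFLATE-compressed multi-band file and builds a five-level overview (scale) pyramid. File names, the overview resampling method and the tiling options are assumptions, since the paper does not state which tools the IKI RAS software actually uses.

```python
from osgeo import gdal
import glob

# Hypothetical file layout: one GeoTIFF per Hyperion channel, e.g. EO1H..._B001.tif ... _B242.tif.
channel_files = sorted(glob.glob("EO1H*_B*.tif"))

# Stack the 242 single-band files into one multi-band dataset via a virtual mosaic,
# then materialise it as a DEFLATE-compressed GeoTIFF (projection and resolution unchanged).
vrt = gdal.BuildVRT("hyperion_stack.vrt", channel_files, separate=True)
gdal.Translate("hyperion_stack.tif", vrt,
               creationOptions=["COMPRESS=DEFLATE", "TILED=YES", "BIGTIFF=IF_SAFER"])
vrt = None

# Add a 5-level overview pyramid so that coarse zoom levels can be served quickly
# to remote viewers.
ds = gdal.Open("hyperion_stack.tif", gdal.GA_Update)
ds.BuildOverviews("AVERAGE", [2, 4, 8, 16, 32])
ds = None
```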
and analysis of the results.IC is implemented in frame of Web mapping interfaces that are based on GEOSMIS technology.Standard SW means for data search and selection of information from satellite data archives are used (Tolpin et al., 2011).With the development of GEOSMIS technology, a number of universal interfaces, which can be used for various data management purposes, were built. Task assignments and results storage capacities (TARSC).TARSC consists of assignments database (ADB), file archive with the results of data processing, SW access library, and standardized modules of data source interfaces for the display service, aka smiswms (Tolpin et al., 2011).  Task execution component (TEC).Functionally TEC consists of a task manager with a separate plugin for each type of data processing and programs to control job queue. Data preparation component (DPC).DPC prepares a list of data sets from the archives in accordance with the user settings and transfer listed data to processing procedures.The IKI RAS team designs all components, which are created as part of GEOSMIS technology, as multipurpose ones for application in various information systems and remote sensing data centers.In accordance with the described principal schematic, a variety of data processing procedures is supported by common life cycle of processing request which is shown in Figure 6.Via Web mapping interface (running on IC) user can make preferable job and task settings.Since the performance of individual processing procedures can take a long time, the interface supports control of processing execution.User task assignment is recorded in ADB.After it, user is informed that the task is queued.Each processing stage with all its parameters is recorded in ADB.If user waits for the job completion, Web interface periodically interrogates ADB task manager TEC and informs user on the status of the job running. VEGA DATA SELECTION POSSIBILITIES Selection tools are of great importance for HSD handling due to the necessity to search and retrieve HSD sets associated with disparately located and rarely observed objects.Therefore, search and detection of target objects by standard means (Savorskiy et al., 2014), i.e. without any auxiliary information, is very difficult time-consuming procedure leading to inefficient use of system services.To solve this problem, many efforts were devoted to enhancing the selection capabilities of VEGA system with different kinds of auxiliary information available in VEGA databases.VEGA Web interfaces allow using MSD obtained in simultaneous observations, e.g.Landsat, SPOT, Canopus-B data from IKI archives, and active wild fire maps from IKI databases as auxiliary information when selecting HSD sets (Savorskiy et al., 2014). Along with studying mobile flares from wild forest fires, VEGA services added a capability to detect and explore static, or immobile flares that often can be attributed to anthropogenic activities.The procedure for selecting HSD subsets that describes thermal anomalies of anthropogenic origin is based on VEGA information product called Permanent Flares.These objects are registered in special VEGA DB.They are retrieved from multispectral or hyperspectral observations as fire flames.However, unlike wild fires, e.g.forest or prairie ones, permanent flares do not change their location within few days. 
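Since the GEOSMIS components themselves are Perl services whose code is not shown in the paper, the following Python/SQLite sketch only mirrors the described life cycle: the interface component records an assignment in the ADB, the task execution component picks up queued tasks, and the Web interface polls the task status. The table layout and status values are invented for illustration.

```python
import sqlite3, time

# Stand-in for the assignments database (ADB).
adb = sqlite3.connect(":memory:")
adb.execute("CREATE TABLE tasks(id INTEGER PRIMARY KEY, kind TEXT, params TEXT, status TEXT)")

def submit(kind, params):                       # done by the interface component (IC)
    cur = adb.execute("INSERT INTO tasks(kind, params, status) VALUES(?,?,'queued')",
                      (kind, params))
    return cur.lastrowid

def worker_step():                              # done by the task execution component (TEC)
    row = adb.execute("SELECT id FROM tasks WHERE status='queued'").fetchone()
    if row:
        task_id = row[0]
        adb.execute("UPDATE tasks SET status='running' WHERE id=?", (task_id,))
        # ... the plugin for this processing type would run here and write its
        #     results to the file archive prepared by the DPC ...
        adb.execute("UPDATE tasks SET status='done' WHERE id=?", (task_id,))

def poll(task_id):                              # the Web interface periodically interrogates the ADB
    return adb.execute("SELECT status FROM tasks WHERE id=?", (task_id,)).fetchone()[0]

tid = submit("hyperspectral_index", "scene=EO1H...;index=NDVI")
while poll(tid) != "done":
    worker_step()
    time.sleep(0.1)
print("task", tid, "finished")
```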
VEGA services allow visualizing the Permanent Flares objects and use this capability for the identification and localization of permanent flares depicted as tiny blue circles in Figure 7. Figure 8 shows an image of one of such permanent flare sites (© Google Planet Earth) registered as Permanent Flares object in VEGA DB.Analysis of the location indicates that this object is apparently a gas-flaring torch on the territory if a petroleum-producing enterprise. Figure 7. VEGA visualizing and subselection services: "Permanent Flares" objects (Hyperion observations, 20.05.13,West Siberia).Target gas flare is located in red circle.1 Figure 9 shows a hyperspectral image of the Permanent Flares object presented in Figure 8. Spectral features confirm that there is a possible gas flare.Note that image in Figure 9 is presented in default visualization mode, however, even in this case one can detect particular hyperspectral features associated with fires (see GENERAL-PURPOSE ANALYTIC TOOLS In addition to tools implementing selection capabilities of VEGA system, IKI RAS team developed VEGA Web interfaces that incorporate a set of general-purpose analytic tools.They are designed for exploration of a variety of Earth objects in contrast to more specialized ones oriented for investigation of a definite type, or class, of natural objects (Section 8).These tools can also handle both HSD and MSD.Their functionalities were realized in the following remotely accessible user services (Savorskiy et al., 2014;Kashnitskii et al., 2015):  Web mapping (cartographic) interfaces for user access to HSD resources combined with means of MSD access;  space data visualization procedures which were enhanced to work both with HSD and MSD;  data classification Web tools which were upgraded to work with HSD (Figure 10),  HSD spectral analyzer. VEGA capabilities were tested by applying the VEGA Web tools for classification of landscapes that contained vegetation cover and thermal anomalies.For example, results presented in Figure 10 show a determination of a Open Flame class on a typical Siberian forest landscape.One can easily select fire fronts (red ring in the center), both visually and by numeric analysis (as belonging to Open Flame class).One of the main conclusions of the test analysis is an independent confirmation of the necessity to develop dimensionality reduction algorithms in order to increase reliability of class differentiation due to decreasing a required volume of teaching data samples for reaching necessary representative statisitics.This is one of the main objectives for introducition of hyperspectral indices (HSI) for using in HSD analysis (Section 8.2).They allow determining both thermal anomaly location and type.One of such features is a presence of spectral intensity maximum near 2.2 μm that is comparable with spectral intensities near 1.1 μm.In absence of flame on a forest plot, the spectral intensities near 2.2 μm is substantially lower than spectral intensities near 1.1 μm.This is illustrated in Figure 11, where one can see an increase of registered spectral intensities in spectral band over 2.2 μm in presence of open flame. SPECIAL ANALYTIC TOOLS In addition to general-purpose analytic tools, VEGA incorporates a set of special tools designed for investigation of particular classes of natural objects.Two of such tools are presented in Sections 8.1 and 8.2.They are spectral portraits system and hyperspectral index system (HSI) for investigation of vegetation canopy. 
Spectral portrait system The spectral portraits system accumulates knowledge about spectral properties of different surface types.A library of spectral portraits is being created to enable comparability of spectral images of individual plots of surface with reference standards to make decisions on their belonging to determined classes.The library is populated with reference spectral portraits, each produced by averaging a set of spectral profiles of the same definite surface type.It allows us to reduce the effect of uncertainties caused by spatial/temporal variability of surface parameters and provide representative statistics. Spectral profiles obtained from HSD are distinguished by a high degree of details and appear in the shape of continuous curves.A large number of hyperspectral channels makes it difficult to use the entire data array for classification.Therefore, reference spectral portraits, which are of particular interest for research purposes, have to contain only the most informative parts of the spectrum for later use in classification tasks.Using reference spectral portraits implies the following user scenario: 1. Conduct visual detection of a group of characteristic points in a satellite hyperspectral image that can be assumed to belong to a particular surface type.2. Start Web interface depicting spectral profiles related to the selected set of points of the hyperspectral image. 3. Conduct visual analysis of spectral profiles for the group of points in order to estimate the representativeness of the samples set and spectral separability of object types.4. Assign the averaged profile of the group of characteristic points to be standard spectral portrait of particular surface type, name the spectral portrait, and save it in "Spectral portrait" DB. 5. In order to check class affiliation of any new surface, compare its spectral profile with previously defined standard spectral portraits. Storage of spectral portraits is accomplished by MS MySQL DB.VEGA table structure allows to store unlimited number of portraits for every user if they have unique portrait names.Each portrait may include a series of spectral profiles that are based on various characteristics, such as spectral brightness or spectral reflectivity. Implementation of the spectral portrait system is illustrated in Figure 12.It shows spectral portraits of some plant species obtained under the described technology (Figure 12 (a)).Figure 12 (b) presents the map of vegetation cover prepared using the spectral portrait data. Hyperspectral index system VEGA system also includes a toolkit called Image Algebra.It allows user to perform arithmetic, logic and various mathematical transformations on any data, including hyperspectral data.The values in the individual channels of satellite images can be converted by a formula given by user directly in the interface.The result is bitmaps created from existing raster layers using arithmetic and logical expressions involving integer and floating point numbers, normalization procedures, and mathematical functions. 
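Before turning to the Image Algebra toolkit, here is a minimal sketch of the spectral-portrait workflow listed in steps 1-5 above: a reference portrait is obtained by averaging the profiles of a sample group, and a new pixel is assigned to the nearest stored portrait, using the spectral angle as one possible similarity measure. The paper does not specify which comparison metric VEGA applies, and the portrait name and data below are placeholders.

```python
import numpy as np

# profiles: spectral profiles (rows) of points the analyst assigned to one surface type,
# one value per Hyperion channel; random numbers stand in for real data.
rng = np.random.default_rng(1)
profiles = rng.random((25, 242))

# 1) The reference spectral portrait is the averaged profile of the sample group.
portrait = profiles.mean(axis=0)

# 2) To check the class affiliation of a new pixel, compare its profile with the
#    stored portraits; the spectral angle is one simple similarity measure.
def spectral_angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

library = {"birch_forest": portrait}            # hypothetical portrait name
new_pixel = rng.random(242)
best = min(library, key=lambda name: spectral_angle(new_pixel, library[name]))
print("closest portrait:", best)
```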
With Image Algebra, the VEGA cartographic Web interface is capable of calculating different spectral indices "on the fly" and of experimentally testing new thematic products. For example, the user can build difference indices using absorption and emission spectral bands to study the mineralogical and chemical composition of the underlying surface. It should be noted that such operations can be carried out on all data sets available in the archives of VEGA family systems. This means that users can combine not only HSD channels of one scene, but also any channels of multi-temporal scenes of any instrument stored in VEGA archives. The effectiveness of preset hyperspectral index applications is illustrated by the VEGA presentation of an Aerosol Free Vegetation Index (AFVI1600) (Karnieli et al., 2001) map of the vicinity of a gas flare (Figure 16). Notice the drastic enhancement in visualization capability of the AFVI1600 map (Figure 14.b) in the presence of smoke in comparison to NDVI (Rouse et al., 1974) (Figure 14.a) or RGB composites (Figure 9). AFVI1600 mapping is also a very productive way to obtain information on the forest fire environment required to produce reliable forecasts of fire dynamics, since, unlike an NDVI map, it provides information on wood stock under a smoke layer (Figure 15).

CONCLUSIONS The paper presents a variety of remotely accessed services that enable both search and retrieval procedures on HSD sets and their processing and analysis. These services were developed by the IKI RAS team within VEGA-Constellation information systems on the basis of GEOSMIS technological solutions. In particular, the developed tools enable spectral analysis of HSD sets in remote access mode. They also enable flexible application of hyperspectral indices for exploration of natural resources. The efficiency of the proposed approach is confirmed and illustrated by Hyperion (EO-1) data analysis results. The tools proved to be especially useful for HSD analysis in the presence of clouds, aerosols or smoke.

The two functional levels shown in Figure 1 (Tolpin et al., 2011) comprise a Presentation level, which includes Web and GIS interfaces, and an Application level, which includes Web services (map services, metadata services, and data management services) and the System API, enabling access to data, metadata, and various resources and interfaces. Figure 3. Structure of Application level functional modules. CGI services receive data via HTTP protocols; to provide Application level functionality, three types of CGI services are envisaged: a map service, a metadata service, and a data service. Figure 5. The principal scheme of GEOSMIS technology implementation. Figure 6. Life cycle of a processing request. Figure 9 shows a hyperspectral image of the Permanent Flares object presented in Figure 8; spectral features confirm that there is a possible gas flare. Note that the image in Figure 9 is presented in default visualization mode; however, even in this case one can detect particular hyperspectral features associated with fires (see Figure 10.b), and substantial improvement in recognition of targeted objects can be achieved by application of the hyperspectral index technique (see Section 8.2 and Figure 14). Figure 8. "Permanent Flares" object recognized as a gas flare at a petroleum-producing enterprise. May 2014, West Siberia. © Google Planet Earth. Figure 10. Determination of the Open Flame class (5) in the zone of a forest fire. Hyperion observations (08.06.2011, central part of East Siberia).
Figure 11. Spectral profiles of controlled forest plots: a) undamaged by forest fire, b) flame front. Hyperion data (08.06.2011, 60°46′ N 88°59′ E). Spectral features of HSD can be analyzed on-line with the VEGA Spectral Analyzer (VSA) instrument. VSA is a Web interface tool developed by the IKI RAS team for retrieving the parameters of spectral profiles at interactively selected points of a hyperspectral image attributed to the explored Earth objects. It is one of the principal analytic tools designed as Web services for HSD applications. As an example, Figure 11 shows the results of examining a forest fire. This is the same forest fire as was used in the classification presented in Figure 10, so these services are complementary to each other and available via one common Web interface.

Figure 13. TCARI map of an irrigated agriculture area (Hyperion, Saratov region). In addition to the ability to create, store and edit new hyperspectral indices, the VEGA system provides users with preset hyperspectral indices for such target areas as vegetated (forest and agriculture) plots, open soil surfaces, forest fire locations, and areas of thermal anomalies. For each of them VEGA offers a specific set of indices for estimating parameters of the studied objects with good accuracy and reliability. Experiments on the use of different indices for Hyperion data showed good results in the study of the characteristics of these objects. A general view of the VEGA Web interface for work with hyperspectral indices is shown in Figure 13. It demonstrates a Transformed Chlorophyll Absorption Ratio Index (TCARI) (Haboudane et al., 2002) map of an agricultural area near the southern part of the Volga River.
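The kind of band arithmetic that Image Algebra evaluates can be reproduced off-line in a few lines; the sketch below computes NDVI and an aerosol-free index in the form that uses the 1.6 µm SWIR band with a 0.66 weighting (Karnieli et al., 2001). The channel indices are only indicative and would have to be taken from the actual Hyperion band list, and synthetic reflectances stand in for a real scene.

```python
import numpy as np

# cube: a (bands, rows, cols) reflectance array read from the stacked Hyperion GeoTIFF;
# synthetic data is used here, and the channel indices are illustrative only: the true
# channel numbers for the red (~0.66 um), NIR (~0.86 um) and SWIR (~1.6 um) bands
# must be looked up in the Hyperion band table.
cube = np.random.default_rng(2).random((242, 256, 256)).astype(np.float32)
red, nir, swir16 = cube[30], cube[50], cube[145]

eps = 1e-6                                   # avoid division by zero
ndvi = (nir - red) / (nir + red + eps)
# Aerosol-free vegetation index in the form using the 1.6 um band (Karnieli et al., 2001).
afvi16 = (nir - 0.66 * swir16) / (nir + 0.66 * swir16 + eps)

print(float(ndvi.mean()), float(afvi16.mean()))
```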
5,895.4
2016-06-13T00:00:00.000
[ "Computer Science" ]
B′-protein phosphatase 2A is a functional binding partner of delta-retroviral integrase To establish infection, a retrovirus must insert a DNA copy of its RNA genome into host chromatin. This reaction is catalysed by the virally encoded enzyme integrase (IN) and is facilitated by viral genus-specific host factors. Herein, cellular serine/threonine protein phosphatase 2A (PP2A) is identified as a functional IN binding partner exclusive to δ-retroviruses, including human T cell lymphotropic virus type 1 and 2 (HTLV-1 and HTLV-2) and bovine leukaemia virus (BLV). PP2A is a heterotrimer composed of a scaffold, catalytic and one of any of four families of regulatory subunits, and the interaction is specific to the B′ family of the regulatory subunits. B′-PP2A and HTLV-1 IN display nuclear co-localization, and the B′ subunit stimulates concerted strand transfer activity of δ-retroviral INs in vitro. The protein–protein interaction interface maps to a patch of highly conserved residues on B′, which when mutated render B′ incapable of binding to and stimulating HTLV-1 and -2 IN strand transfer activity. The prokaryotic expression construct for His 6 -TEV-B'(11-380) (PDB ID: 2JAK) (28) was obtained from Source Bioscience UK Limited and provided from the University of Oxford Structural Genetics Consortium (Stockholm clone name PPP2R5CA-c005). Constructs used to express C-terminally His 6 -tagged HTLV-1 IN was described previously (56), and pHTLV-2 IN-His 6 was kindly provided by Peter Cherepanov (Cancer Research UK, Clare Hall Laboratories). Unless stated otherwise, all ORFs were amplified using a HeLa cDNA library (Clontech) as template. pCDF-H6P-PPP2CA was generated by amplifying the PPP2CA gene using primers GM93 and GM95 followed by amplification of this PCR product with primers GM94. This produces the following sequence His 6 -BamHI-HRV 3C-EcoRI-PPP2CA-STOP-SalI-HindIII-NotI which allows the expression of an N-terminally His 6 -fused protein of which the His 6 -tag can be removed by Human Rhinovirus (HRV) 3C digestion. The B'' ORF was amplified using primers GNM278 and GNM279 and cloned in between BamHI/SalI restriction site of pGEX-6P1 (GE Healthcare), giving pGM-GST-B''. pCDF-H6P- and pCDF- were generated using the Stockholm clone PPP2R5CA-c005 (28) as a template with primers GM142 and GNM276 for deletion mutant B' and GNM271 and GM143 for deletion mutant B' . The amplicons were digested with EcoRI/SalI (B'(11-194)) or EcoRI/NotI (B'(195-380)) and cloned in between the respective restriction sites of pCDF-H6P-PPP2CA. pET28a-SUMO-PPP2R5D(76-501) was amplified using GM112 and GM55 and ligated in between the BamHI/SalI restriction sites of pET28aSUMO (kindly given by Dr. Andre Ambrosio). For the expression of the  isoform of the scaffolding subunit, PPP2R1A was amplified using primers GM103 and GM97 and cloned in between the BamHI/XhoI restriction sites of pET28aSUMO. pCDF-H6P-PPP2R5E(51-401) was generated by ligating the amplicon made with primers GNM315 and GNM316 and digested with MfeI and SalI into EcoRI/SalI digested pCDF-H6P-PPP2CA. The HTLV-1 IN synthetic gene was amplified from pQHTLV-1 IN S -Flag (15) using primers GM109 and GM110 and ligated into EcoRI/SalI digested pET28aSUMO, giving pET28aSUMO-HTLV-1 IN s . The HTLV-2 IN ORF was amplified using pHTLV-2 IN-His 6 as a template with primers GM144 and GM131. This amplicon was digested with MfeI/SalI and ligated into EcoRI/SalI digested pET28aSUMO. 
All B'(11-380) point mutants described were sub-cloned into pET28a-SUMO to express as His 6 -SUMO fusions to produce recombinant protein. All plasmids were sequence verified. Protein purification For expression of HTLV-1 IN-His 6 , HTLV-2 IN-His 6 , HIV-1 IN-His 6 , FIV IN-His 6 , and GST-B'', the corresponding prokaryotic expression plasmids were transformed into the PC2 strain (22). Bacteria were grown in Luria Bertani medium at 30°C until an OD 600nm of 0.9 was reached. The temperature was reduced to 25°C and protein expression was induced by addition of 0.01% IPTG. Four hours later, bacterial pellets were collected and stored at -80°C until use. All other recombinant proteins were expressed in the Rosetta2(DE3)pLacI strain (Novagen) and grown in Terrific Broth. Transformed bacteria grown in the appropriate selective media were allowed to reach an OD 600nm of 2.5-3 upon which the temperature was reduced to 25°C and protein expression induced by addition of 0.01% IPTG. Four hours later the bacteria were collected by centrifugation and pellets were frozen at -80°C until further use. All further procedures were done on ice or at 4°C. To purify the IN-His 6 proteins, bacterial pellets were thawed, resuspended and sonicated in core buffer (50 mM Tris pH7.4, 1 M NaCl, 7.5 mM CHAPS) supplemented with 1 mM PMSF. Cellular debris was removed by centrifugation at 50 000 g. Supernatant was supplemented with 10 mM imidazole and bound to HisSelect resin (Sigma). After extensive washes in wash buffer (core + 10 mM imidazole), IN proteins were eluted in 10 1 ml fractions with elution buffer (core buffer + 200 mM imidazole). Positive fractions were pooled, supplemented with 5 mM DTT, concentrated and supplemented with 10% glycerol final concentration, aliquoted and snap frozen in N 2 (l). To produce untagged IN proteins, the His 6 -SUMO-HTLV-1 and 2 IN proteins eluted from the HisSelect column were supplemented with 5 mM DTT and Ulp1 sumo-protease to remove the His 6 -SUMO tag. Digestion was done overnight at 4°C. Cleaved protein was then diluted four fold in ice cold buffer A (25 mM Tris pH7.4, 7.5 mM CHAPS) before binding to an SP sepharose column (GE Healthcare) equilibrated with 25 mM Tris pH7.4, 250 mM NaCl, 7.5 mM CHAPS. After extensive washes, untagged HTLV-1 or 2 IN was eluted by applying a linear NaCl gradient. Positive fractions were pooled and further purified by size exclusion chromatography (HiLoad 16/60 SD200 column) in 25 mM Tris pH7.4, 1 M NaCl, 7.5 mM CHAPS. Positive fractions were pooled, supplemented with 5 mM DTT, concentrated and snap frozen in N 2 (l). Purification of B'(11-380) was done as previously described (28). Bacterial pellets were resuspended in 25 mM TrisHCl pH7.4, 0.5 M NaCl, 1 mM PMSF, supplemented with 0.1 mg/ml lysozyme, sonicated and the soluble fraction was bound to HisSelect. After extensive washes in the used sonication buffer supplemented with 10 mM imidazole, His 6 -SUMO-B'(11-380) point mutants were eluted by increasing the imidazole concentration to 200 mM. Positive fractions were pooled, supplemented with 5 mM DTT and the His 6 -SUMO tag was cleaved off by Ulp1 protease treatment overnight at 4°C. Following a 5 fold dilution of the cleaved protein in 25 mM Tris pH7.4, the proteins were purified as wild type B'(11-380). His 6 -SUMO-A was expressed and purified as the His 6 -SUMO-B'(11-380) point mutants, with the exception that size exclusion chromatography of the untagged protein was done in ice cold 25 mM Tris pH7.4, 500 mM NaCl. 
B'(76-501) was expressed as a His 6 -SUMO fusion protein. The bacterial pellets were resuspended in 25 mM Tris pH7.4, 150 mM NaCl, 1 mM PMSF. After sonication, the soluble supernatant was bound to HisSelect, after elution the His 6 -SUMO-tag was removed by Ulp1 cleavage overnight. The untagged protein was then purified by size exclusion in 25 mM Tris pH7.4, 150 mM NaCl. Positive fractions were pooled, supplemented with 5 mM DTT, concentrated and flash frozen in N 2 (l). GST-B'' was extracted from the bacterial pellets in 25 mM Tris pH8, 100 mM NaCl, 1% TX-100, 1 mM CaCl 2 , 1 mM PMSF. Following sonication and removal of debris by centrifugation, GST-B'' was allowed to bind glutathione sepharose (GE Healthcare). After extensive washes B'' was released from the beads by HRV 3C protease digestion overnight. Untagged B'' was further purified by anion exchange chromatography (linear NaCl gradient from 50 mM to 500 mM). Positive fractions were pooled, supplemented with 2 mM DTT, concentrated and flash frozen in N 2 (l). Tissue culture, stable cell lines and immunostaining HEK293T and HeLa cell lines were maintained in Dulbecco's Modified Eagle Medium (Sigma) supplemented with 10% fetal bovine serum (Sigma), 100 IU/mL penicillin, and 100 µg/mL streptomycin (Sigma). The HEK293T cell line stably expressing Flag-tagged HIV-1 IN s was published previously (57), and was maintained in 300 g/ml hygromycin B supplemented medium. Retroviral particles were produced as described previously (55). Fourty eight h post-infection the cells were selected with 0.5 g/ml puromycin. For immunostaining, HeLa cells were plated out in 8-well Lab-Tek II Chamber slides (Nunc) to reach 80% confluence the next day. HeLa cells were transfected using 150 ng of plasmid DNA in total by X-tremeGENE 9 transfection reagent (Roche) following manufacturer's instructions. Twenty h post-transfection the cells were fixed for 10 min in 4% paraformaldehyde (diluted in phosphate buffered saline (PBS)) followed by permeabilization using 0.1% Triton X-100 diluted in PBS. All antibodies were diluted in blocking buffer (10% FBS, 20 mM NH 4 Cl in PBS). Flag-tagged IN proteins were detected using the monoclonal M2 anti-Flag antibody (Sigma, 1:500), and EGFP-B'(11-380) was detected using the rabbit anti-EGFP antibody (Life Technologies, 1:2000). Goat anti-mouse IgG conjugated to Texas Red (Life Technologies) and Alexa 488 conjugated goat anti-rabbit IgG (Life Technologies) were diluted 1:400. DNA was visualized by 4',6-diamidino-2-phenylindole (DAPI, Life Technologies) staining. Images were acquired using an Olympus microscope with a 60x Plapon oil objective (NA 1.4). DAPI was excited with a 405 nm laser beam, whilst 488 nm, respectively 559 nm laser beams were used to excite the Alexa 488 and TexasRed dyes. Images were acquired sequentially at 40 s/pixel, 1024x1024 image resolution. To make the extracts, the resuspensions were allowed to thaw fast at 37°C and immediately centrifuged at 16 000 g, 30 min at 4°C. Supernatants were collected, supplemented with 0.1 M NaCl, 0.5 % Nonidet P-40 (NP-40) and 0.5 mM PMSF. The extracts were pre-cleared over 100 l washed Protein G agarose (GE Healthcare) followed by binding to 100 l anti-Flag agarose (Sigma). Flag-tagged protein complexes were allowed to bind to the beads for 3h by end-over-end rocking at 4°C. Beads were washed extensively with wash buffer (FTB supplemented with 0.5 % NP-40, 0.1 M NaCl) and bound proteins were eluted with 0.04 mg/ml Flag peptide (Sigma) in wash buffer. 
Eluted proteins were precipitated by trichloric acid, pellets were dissolved in SDS loading buffer and proteins were separated on a 4-20% BisTris gel (Life Technologies). The gels were stained in Colloidal Coomassie (Sigma) and bands were excised and sent for tandem mass spectrometry analysis to the Taplin Mass Spectrometry Facility. The MS data was analyzed by the Taplin Mass Spectrometry facility and Sequest was used to search data. The data was filtered based on XCorr and dCn values and then manually inspected for proteins that only had three or few peptide matches. Values of 1.5 for peptides with one or two charges, 3.0 for three charges for XCorr and 0.1 or higher for dCn were used. The data was also searched allowing for either partial tryptic peptides or with no enzyme specificity and required that all peptides be tryptic. Only proteins with minimally 2 unique peptide matches, and that were absent in the negative control sample are listed in Supplementary Tables S2 and S3. For small scale IPs, 293T cells grown in 1 10 cm dish, were harvested by trypsinization and washed in ice cold PBS. All procedures were done on ice or at 4°C. Cells were lysed in 5 volumes of IP buffer (10 mM TrisHCl pH7.5, 150 mM NaCl, 10% glycerol, 1% NP-40, 2 mM MgCl 2 , Complete EDTA free (Roche), 2 mM DTT), left on ice for 10 min and cellular debris was removed by centrifugation at 16 000 g for 30 min. To verify the binding between Flagtagged B'(11-380) and the scaffold and catalytic subunit, IP buffer without detergent was used. Supernatants were allowed to bind to 25 l pre-washed anti-Flag agarose beads (Sigma) by end-over-end rocking at 4°C. Beads were washed 4 times in 1 ml of IP buffer. After removing all remaining liquid from the beads, proteins were eluted by boiling the beads in 45 l of Laemmli buffer. After separation of the proteins on an 11% SDS-PAGE denaturing gel, proteins were electrotransferred onto nitrocellulose membrane. Blots were blocked in 5% milk/PBS and probed with the following antibodies: horse radish peroxidase ( Phosphatase assays To isolate B'-PP2A holo-enzymes from mammalian cells, a HEK293T cell line was generated that stably expresses full-length wild type Flag-B'by retroviral transduction as described above. The cell line was maintained in 0.5 g/ml puromycin. Flag-B'-PP2A holoenzymes were purified as described previously (31) and verified by gel and western blot to confirm the presence of all three subunits. To quantify the amount of Flag-B'-PP2A purified, 10 l was separated on an 11% SDS-PAGE gel next to a dilution series of BSA. Following silver staining, using ImageJ it was estimated that the concentration of holo-enzyme in our eluate was 28 nM. The colorimetric malachite green phosphatase assay was used to measure PP2A enzymatic activity using the PP2A specific phospho-Threonine peptide (K-R-pT-I-R-R) as a substrate. Absorbance was read at 620nm. A standard curve was made using a dilution series of potassium phosphate ranging from 0 to 2000 pmoles phosphate. Reactions with the phospho-Thr substrate were done in the following phosphatase assay buffer: 25 mM Tris-HCl pH 7.4, 1 mM EDTA, 1 mM EGTA, 1 mM DTT and 0.25 mg/ml BSA (31) and allowed to take place for 30 min at 37°C before the malachite green substrate was added. 
Absorbance was measured following a 15 min incubation at room temperature with the malachite green reagent.

Ni-NTA pull-downs Five µg of His6-tagged bait protein was allowed to bind to 5 µg of prey protein in a volume of 0.8 ml pull-down buffer (PDB; 25 mM TrisHCl pH 7.4, 150 mM NaCl, 2 mM DTT, 20 mM imidazole, 0.5% CHAPS), to which 40 µl of Ni-NTA slurry pre-equilibrated in PDB was added. Ten µg of BSA was added to reduce non-specific binding. After 3 h of end-over-end rocking at 4°C, the Ni-NTA beads were pelleted by centrifugation (1 000 g, 2 min, 4°C) and washed extensively in PDB. Bound proteins were eluted by boiling the beads in 20 µl 2x Laemmli buffer supplemented with 5 mM EDTA. Ten µl was loaded on gel. Representative gels of pull-downs repeated at least 3 times are shown.

Co-precipitating proteins (number of unique peptides):
(protein name not recoverable in this excerpt): 11
Isoform 1 of Thyroid receptor-interacting protein 13: 7
Isoform 1 of HEC1/NDC80-interacting centrosome-associated protein 1: 7
HAUS8, Isoform 1 of HEC1/NDC80-interacting centrosome-associated protein 1: 7
Isoform ATE1-1 of Arginyl-tRNA-protein transferase 1: 6
RCL2, reticulocalbin 2: 6
26S proteasome non-ATPase regulatory subunit 3: 5
weakly similar to Uro-adherence factor A (Fragment): 5
isoform 1 of CDC42 effector protein 1: 5
T complex protein 1 subunit 3: 4
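Returning to the malachite green phosphatase assay described above, converting the measured absorbance at 620 nm into pmol of phosphate released only requires the linear standard curve; the sketch below shows the calculation with invented standard-curve readings, since the actual absorbance values are not reported in the text.

```python
import numpy as np

# Standard curve: absorbance at 620 nm of a potassium-phosphate dilution series
# (0-2000 pmol phosphate). The readings below are invented for illustration only.
pmol_std = np.array([0, 250, 500, 1000, 1500, 2000], dtype=float)
a620_std = np.array([0.02, 0.11, 0.20, 0.39, 0.58, 0.76])

slope, intercept = np.polyfit(pmol_std, a620_std, 1)   # A620 = slope * pmol + intercept

def pmol_phosphate(a620):
    """Convert a sample absorbance into pmol of phosphate released."""
    return (a620 - intercept) / slope

# Example: absorbance of a PP2A reaction that ran for 30 min at 37 C.
released = pmol_phosphate(0.33)
rate = released / 30.0                                  # pmol phosphate per minute
print(f"{released:.0f} pmol released, {rate:.1f} pmol/min")
```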
3,361.4
2015-12-10T00:00:00.000
[ "Biology", "Medicine" ]
Treatment of petroleum refinery wastewater by adsorption using activated carbon fixed bed column with batch recirculation mode

Water pollution and the lack of access to clean water are global problems that result from the expansion of industrial and agricultural activities. Petroleum refinery wastewaters are considered a major challenge to the environment and their treatment is mandatory. The present work concerns the removal of chemical oxygen demand (COD) from petroleum refinery wastewater taken from Iraq's Al-Diwaniyah petroleum refinery plant by using an activated carbon fixed-bed column operated in a batch recirculation mode. The fixed bed column used in this work was composed of three sections: upper, central, and bottom compartments. The bottom compartment serves as a feeding chamber to the central adsorption chamber, while the upper compartment serves as an effluent collecting chamber. By adopting response surface methodology (RSM), the impacts of various operational parameters such as packing level, pH, and time on the COD removal efficiency were investigated. The optimal conditions were an activated carbon packing level of 80%, pH of 5.7, and adsorption time of approximately 73 min, which resulted in a COD removal efficiency of 96.70%. The results indicated that the packing level of activated carbon had the major effect on COD elimination, followed by pH, while time had a minor effect. The model equation's adequacy was demonstrated by its strong R2 value (0.975). The present study demonstrates that adsorption on activated carbon is an effective method for removing COD from Al-Diwaniyah petroleum refinery wastewaters.

Introduction One of the most serious environmental challenges nowadays is waste oil created by industrial sectors, particularly oil refineries and petroleum distribution businesses [1-3]. Oil refinery wastewaters contain various organic substances with a high COD value owing to the variance in crude oil characteristics and in the processes used for treating crude oil. Due to the presence of metal ions and organic hydrocarbon components in these wastewaters, their discharge without treatment can be extremely detrimental to the environment [4,5]. The importance of treating these wastes has also resulted in the development of various cleaning methods, including biological treatment [5,6], reverse osmosis [7], ion exchange resins [8,9], chemical precipitation [9], granular activated carbon adsorption [10-15], coagulation and coagulant aids [16], electrocoagulation [17,18], catalytic vacuum distillation [19], and electrochemical oxidation [20-23]. Petroleum refinery wastewaters (PRWs) are considered refractory wastewaters which contain complex aromatic organic and inorganic compounds. Wastewaters generated from refineries have been recognized as extremely poisonous and more refractory to natural degradation than other types of wastewater generated by various industrial activities [24]. Coelho et al. [25] documented that, during the production stage in oil refinery processing, the amount of water consumed varies between 0.4 and 1.6 times the volume of processed oil, resulting in substantial environmental damage.
Based on the complexity of the refinery, the generated wastewater in oil refineries comprises many different chemical compositions where typically COD value could be in the range of 300-600 mg/L; phenol concentration in the range of 20-200 mg/L; benzene in the range 1-100 mg/L; and heavy metals levels, for example, chromium (0.1-100 mg/L) and lead (0.2-10 mg/L) [24,[26][27][28]. Adsorption is one of the most effective procedures for reducing organic and inorganic chemicals remaining in effluents following conventional treatment. The widest adsorption process is based on the adsorption by activated carbon [29]. Some of the relative advantages of adsorption over other advanced oxidation approaches include: (1) It has the capability in removing both organic and inorganic elements at extremely low concentrations, (2) There is no formation of sludge, (3) It is a simple and safe method of operation, and (4) The adsorbent is regenerable and reusable. Besides, the procedure is affordable because it utilizes readily available materials that can be employed as adsorbents after proper processing [30 -32]. Using of adsorption contacting systems for PRWs treatment has become more prevalent in recent years. Activated carbon is the most frequently utilized adsorbent in adsorption applications [33]. Activated carbon (AC) adsorbents are complicated substances that are difficult to identify based on their behavior, surface qualities, properties, or utility. However, they are frequently categorized according to their particle shape and size, with around 55% of activated carbons produced as a powder, 40% as granular, and the remainder as pellets. Around 80% of the total output (powder, granular, and pellets) is used in liquid-phase applications, whereas 20% is used in gaseous-phase applications [34]. Activated carbon exhibits several unique qualities, including a high internal surface area, chemical properties, and excellent access to internal pores. There are three types of pores: macropores (diameter greater than 50nm), mesopores (diameter between 2 and 50 nm), and micropores (diameter less than 2 nm) [35]. Micropores typically account for a significant portion of the interior surface area. Macro and micropores can be thought of as entrances to the carbon particle. Combining the appropriate raw material and activation technique results in the desired pore shape for an activated carbon product [35]. Award. et al., [36] verified a matched pair method using activated carbon (AC) for refinery wastewater processing and accomplished a powerful COD elimination (90%). Many works have been conducted for the treatment of petroleum refinery process wastewater using activated carbon [10][11][12][13][14][15]. Most of these works adopted batch or continuous operation mode in the adsorption. However, many scientists have turned to use a reactor with a batch recirculation mode as a highly adaptable laboratory -scale reactor [37]. Based on the authors' knowledge, no previous work on the reduction of COD from petroleum refinery wastewater using granular activated carbon (AC) adsorption process operated at batch recirculation mode had been conducted. Therefore, this work aims to investigate the feasibility of COD removal from petroleum refinery wastewater using granular activated carbon adsorption technology in a batch recirculation mode of operation. 
Response surface methodology (RSM) combined with Box-Behnken design (BBD) was used to study and optimize the effects of the main operating factors, including packing level of activated carbon, pH, and time, on COD removal from wastewater produced by the Al-Diwaniyah petroleum refinery. Experimental work From the Al-Diwaniyah refinery plant, 50 liters of wastewater were taken from the feeding tank prior to the biological treatment unit and kept covered in containers at a temperature of 4 ºC until use. Table 1 provides the specifications of the raw effluent. The adsorption system was composed of a cylindrical tank with a capacity of 1.25 L, an adsorbing column, a dosing pump (type HYBL5LNPVF001, Italy) with a maximum pressure of 10 bar and a flow rate in the range of 1-3 L/h, and a liquid flow meter (type ZYIA, 25-250 ml/min, China). The acrylic cylindrical reservoir has dimensions of 20 cm in height, 10 cm in outer diameter, and 0.4 cm in thickness, with a cover of 12 cm outside diameter and 1 cm thickness. The reservoir has two outlets, one at its bottom and the other at its lateral side located 3 cm above its base. Each outlet was provided with a PVC valve. The cover was provided with two inlets; the first is for the recycle from the adsorbing column, while the second is for feeding the solution. Figure 1 shows the schematic diagram of the adsorption system. The adsorbing column is the backbone of the adsorption system. It is a new design adopted in the present work. It was made from transparent acrylic material, Perspex type. It was composed of three compartments: upper, central, and bottom. The bottom compartment serves as a feeding chamber for the adsorption process. It has a cylindrical shape with dimensions of 7 cm outside diameter, 5 cm total length, and 0.4 cm thickness, ending at its upper face with a flange (10 cm in diameter and 1 cm in thickness) containing four holes (0.5 cm in diameter) for fixing the compartment to the others via bolts and nuts. The bottom compartment has an inlet pipe with a diameter of 1 cm, located at the side of the compartment, through which the solution enters. Inside the cavity of the bottom compartment, a bed of spherical glass beads (0.5 cm in diameter) was placed, which serves as a calming section. The central compartment has a cylindrical shape with dimensions of 7 cm outside diameter, 7 cm total length, and 0.4 cm thickness, ending at its upper and lower faces with flanges (10 cm in diameter and 1 cm in thickness), each containing four holes. The upper flange was provided with a perforated disc (outside diameter of 6.8 cm and thickness of 0.3 cm) made from the same material as the compartment, and the lower flange was provided with an identical disc. Both discs were perforated uniformly with 1 mm holes spaced at equal distances. The upper compartment serves as a collecting chamber. It has a cylindrical shape with dimensions of 7 cm outside diameter, 5 cm total length, and 0.4 cm thickness, ending at its lower face with a flange (10 cm in diameter and 1 cm in thickness) containing four holes. Figure 2 shows the schematic design of the adsorbing column. All chemicals used in the present work were of analytical grade: H2SO4 (98%, Thomas Baker, India) and NaOH (purity 99%, BDH, England). Activated carbon was supplied by Zhengzhou Kelan Company, China. 
Table 2 shows the characteristics of the material as provided by the supplier. The BET surface area of the AC was measured using the ISO 9277:2010 method at the Petroleum R&D Center, Ministry of Oil, Iraq, using a BET surface area analyzer (model Qsurf 9600, Thermo Finnigan Co., USA). The IR spectra of the AC samples were obtained using a Perkin Elmer 1100 series FT-IR operating in the range 4000-400 cm-1 and utilizing KBr pellets with a resolution of 1 cm-1. For the infrared analysis, a pellet was created by mixing the sample with KBr crystals and pressing it. Before each experiment, activated carbon was sieved to between 1.7 mm and 0.85 mm, then 200 g of AC was rinsed with 0.5 L of distilled water several times until its pH became 7; the washed AC was then separated from the water by filtration and dried at 100 ºC for 1 h in an oven (type LA MER, Germany). 1 L of wastewater was poured into a 2 L beaker mounted on a magnetic hot plate stirrer (Heidolph, MR Hei-Standard, Germany), its pH was adjusted to the desired value using 1 M H2SO4 or 1 M NaOH, and it was then transferred to the reservoir of the adsorption system. The amount of activated carbon corresponding to the required packing level was placed in the central compartment of the adsorbing column. The dosing pump was turned on to circulate the solution through the adsorbing column at a liquid flow rate of 200 ml/min, and the adsorption process was continued for the required time at a constant temperature of 25±2 ºC. At the end of each run, and before carrying out the COD tests, a sample of treated wastewater was taken, filtered, and tested for its COD value. A digital pH meter was used to measure the electrolyte pH (HANNA Instruments Inc., pH211, Romania), whereas conductivity and TDS were determined using a conductivity meter (model COM-100, HM Digital Inc., Korea). Solution turbidity was measured with a Jenway 6035 turbidimeter (Germany). SO4^2- and Cl- were analyzed using a PhotoFlex series photometer (WTW, model no. 14541, Germany). Effluent COD was used to quantify the organic compounds in the waste stream. The COD of the petroleum refinery effluents was determined by taking a sample (0.2 ml) of effluent and digesting it for 120 minutes at 150 °C with K2Cr2O7 as an oxidizing agent using a thermoreactor (RD125, Lovibond). The digested sample was then cooled to room temperature and the COD was measured by spectrophotometer (MD200, Lovibond). Method 8047 of the Hach Company/Hach Lange GmbH, USA, was used to measure phenol concentration. The COD was measured three times, and the averages were used in this study. The COD removal efficiency was determined using Eq. 1 [38]: RE% = ((CODi − CODf)/CODi) × 100, where RE% is the removal efficiency, CODi is the initial COD (mg/L), and CODf is the final COD (mg/L). Design of experiments Response surface methodology (RSM) can be summarized as a collection of mathematical and statistical tools for determining a regression model equation that correlates an objective function with its independent variables [39]. A Box-Behnken design (BBD) was adopted to examine the impact of the process variables on COD removal. The removal effectiveness of COD (RE%) was considered as the response, while packing level (X1), time (X2), and pH (X3) were taken as the process parameters [39]. 
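For clarity, the short Python sketch below simply applies Eq. 1. The initial COD of 2428 mg/L is the raw-effluent value reported later in the paper; the final COD of about 80 mg/L used in the example is back-calculated from the reported 96.70% removal and is illustrative only.

```python
def cod_removal_efficiency(cod_initial, cod_final):
    """Eq. 1: RE% = (COD_initial - COD_final) / COD_initial * 100."""
    if cod_initial <= 0:
        raise ValueError("Initial COD must be positive")
    return (cod_initial - cod_final) / cod_initial * 100.0

# Raw effluent COD of 2428 mg/L reduced to roughly 80 mg/L after treatment
print(round(cod_removal_efficiency(2428.0, 80.1), 2))  # ~96.7 %
```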
The levels of the process factors were designated as follows: low (-1), middle or center point (0), and high (+1). Table 3 shows the process parameters with their selected levels, while Table 4 shows the experimental array provided by the BBD for the current work, which was obtained with the Minitab 17 program. The amount of AC for each packing level corresponds to 10 g for 20%, 25 g for 50%, and 40 g for 80%, which results in adsorbent doses of 10 g/L, 25 g/L, and 40 g/L, respectively. In this work, the following second-order model, fitted with the least-squares approach, was used to determine the correlation between COD removal and its independent variables [40,41]: Y = a0 + Σ ai·Xi + Σ aii·Xi² + Σ aij·Xi·Xj (Eq. 2), where a0 is the intercept term, X1 is the first process variable, X2 the second, and Xk the last; ai represents the first-order (linear) effects, aii the quadratic effects, and aij the interaction effects. Analysis of the variability and calculation of the correlation coefficient (R²) confirmed the model's suitability. Characteristics of activated carbon The BET surface area analysis gave a surface area of 1204.2337 ± 39.5518 m²/g, which is a desirable quality in wastewater treatment applications. The average pore volume and diameter were found to be 0.636180 cm³/g and 2.11314 nm, respectively, which are the properties of a mesoporous substance. The adsorption-desorption plot for the AC is presented in Fig. 3. Nitrogen uptake rose as the relative pressure increased across the whole pressure range. The AC exhibited type IV features with a hysteresis loop at 0.4 ≤ p/p0 ≤ 0.9, which is consistent with the categorization of the International Union of Pure and Applied Chemistry (IUPAC). As a result, the presence of mesopores with highly adsorbent surfaces was confirmed, as has been previously reported in similar investigations [42,43]. Results of experimental design According to the BBD, fifteen runs were performed to investigate the optimum conditions for COD removal. Table 5 summarizes the experimental findings regarding COD removal effectiveness (RE%). The results showed that the efficiency of COD removal was in the range of 84%-96.26%. As a preliminary inspection, a comparison between run 1 and run 15 showed that the packing level of activated carbon has a considerable impact on the efficiency of COD removal: RE% increased from 87% to 95.9% as the packing level of activated carbon increased from 20% to 80% at pH 5 and a time of 60 min, while the comparison between runs 5 and 6 showed that pH followed the packing level in its effect on the COD removal efficiency. In Eq. 3 (the fitted regression model), X1X2, X1X3, and X2X3 denote the interaction effects of the model parameters on one another, while (X1)², (X2)², and (X3)² reflect the quadratic main effects. The expected values of COD removal efficiency were computed using equation 3 and summarized in Table 4. In Eq. 3, a positive coefficient in front of a parameter reveals that RE% increases as that parameter increases, and vice versa. The acceptability of the BBD model was assessed by analysis of variance (ANOVA). Figure 3. N2 adsorption-desorption isotherm of AC. ANOVA is an analytical technique that utilizes Fisher's F- and p-tests to determine the significance of the model and its parameters [44]. In general, larger F-values and smaller p-values indicate that the coefficient terms are more important [45]. The ANOVA of the response surface model is presented in Table 6. 
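As a sketch of how the second-order model of Eq. 2 can be fitted to a three-factor Box-Behnken array, the snippet below uses ordinary least squares in coded units. The design matrix follows the standard 15-run BBD layout (12 edge points plus 3 center points), but the response values are placeholders for illustration only, not the data of Tables 4 and 5.

```python
import numpy as np

# Coded levels (-1, 0, +1) for packing level (X1), time (X2), pH (X3)
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
# Placeholder RE% responses (one per run), not the paper's measured values
y = np.array([87.0, 95.9, 88.0, 96.0, 84.0, 94.0, 90.0, 96.2,
              89.0, 90.0, 93.0, 94.0, 92.0, 92.5, 91.8])

def design_matrix(X):
    x1, x2, x3 = X.T
    # intercept, linear, quadratic, and two-way interaction terms of Eq. 2
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

coeffs, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
pred = design_matrix(X) @ coeffs
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print(coeffs.round(3), round(r2, 3))
```

With the paper's actual responses in place of the placeholders, the resulting coefficient vector would correspond to the fitted Eq. 3 and the R² to the value reported in the ANOVA.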
In this table, Contr.% denotes the percentage contribution of each variable, DF represents the degrees of freedom of the model and its parameters, and the statistical terms are represented by the sequential sum of squares (Seq. SS), the adjusted sum of squares (Adj. SS), and the adjusted mean square (Adj. MS), respectively. A P-value of 0.002 and an F-value of 22.54 were obtained, which indicate that the regression model is highly significant. The model's coefficient of multiple correlation was 0.9759, indicating that the regression is statistically significant and that the model leaves only 0.0241 of the total variation unexplained. The adjusted multiple correlation coefficient (adj. R²) equals 0.9326, while the predicted multiple correlation coefficient (pred. R²) equals 0.790; the model is well matched since the difference between them is less than 0.2 [46]. The results of Table 6 showed that packing level has the major effect, with a contribution of 52.27%, followed by pH with a contribution of 23.06%, while time has a lower effect with a contribution of 5.09%. These results confirm that the adsorption is governed by two operating parameters (packing level and pH). These results are expected because, during the adsorption process, with increasing pH the negative surface charge of the adsorbents decreases, resulting in a strengthening of the electrostatic adsorption force between the adsorbent and positively charged contaminants, hence boosting the removal of pollutants [47]. Besides, increasing the packing level means increasing the adsorbent dosage, hence more active sites of the AC are available for adsorbing more organic materials, resulting in higher removal of COD. The interactions among the variables are non-significant except for the highly significant double interaction of pH. In the present study, the lack-of-fit P-value (0.677 > 0.05) indicates that the lack of model fit was not statistically significant in comparison to the pure error [48]. As a result, the model can generate an adequate prediction that corresponds to the response values. The contribution of the linear terms was 80.42%, while the square and 2-way interaction terms contributed 15.82% and 1.35%, respectively; hence the interaction effect is generally significant. The influence of process factors on the efficiency of COD removal Graphical representations of RSM can be used to illustrate the interactive effects of the selected variables and their effect on the response. Figures 4-a and 4-b illustrate the influence of pH at various packing levels (20%-80%) over a constant period (70 min). The response surface plot is shown in Figure 4-a, and the contour plot is shown in Figure 4-b. The shape of the contour plot indicates the nature and extent of the interactions. From the layout of the surface, it was observed that, at any pH value, as the packing level is increased from 20% to 80%, the efficiency of COD elimination increases. The increase in RE% appears to be linear. A similar observation was made in previous works [14]. This behavior agrees with the fact that adsorption occurs via the formation of carbon-oxygen surface complexes. The nature and quantity of carbon-oxygen bonds are determined by the carbon surface, the oxidative treatment, the surface area, temperature, and pressure [49]. At any packing level, RE% increases with increasing pH from 3 to 7 and then becomes approximately constant at higher pH values. 
The related contour map demonstrates that the 96 percent COD removal efficiency occurred inside a limited area with a pH range of 5-7 and a packing level in the range of 60%-80%. Figures 5-a and 5-b show the impact of time on the RE% for various values of packing level (20%-80%) at constant pH (5). Figure 5-a demonstrates that the efficiency of COD removal increases exponentially with increasing time at low packing levels, while at high packing levels the effect of time is small and not significant. The results showed that the contact time has a positive effect on the progress of the adsorption process only at low packing levels. This is explained by the fact that there are initially a large number of vacant surface sites accessible for adsorption. Additionally, it was hypothesized that there was a strong attraction between the pollutants and the sorbent and that, as contact time grew, the remaining unoccupied surface sites became harder to occupy due to saturation. This could also be a result of a shortage of suitable sorption sites after the sorption process, resulting in nearly constant removal efficiency. This outcome is consistent with prior research [50]. The related contour map (Fig. 5-b) reveals that the 96 percent COD removal efficiency is concentrated in a narrow area, with packing levels ranging from 70% to 80% and time frames ranging from 60 to 80 minutes. As a result, the implementation of RSM enables the identification of feasible optimum values for the studied parameters and provides valuable information about the interactions between the variables. The optimization and confirmation test Optimizing the process conditions is critical and should be accomplished. Numerous criteria have been identified for optimizing the system by maximizing the desirability function (DF) while varying the importance or weight, which may alter the objective's characteristics. The target field of a variable has five options: maximize, target, minimize, within range, and none. The target for COD removal was selected as 'maximize' with a corresponding weight of 1.0. The independent parameters examined in the study were specified within their designed ranges: activated carbon ratio ('packing level') from 20% to 80%, time from 60 to 80 min, and pH from 3 to 7. The lower limit of the COD removal efficiency was assigned to be 87%, whereas the upper limit was assigned to be 96.26%. The optimization procedure was carried out within those constraints, and the outcomes are reported in Table 7 with a desirability of 1. Two confirmatory experiments at the predicted optimum parameters were conducted for validation; the results are shown in Table 8. After 72.7 minutes of adsorption at an 80 percent packing level and pH 5.7, an average COD removal efficiency of 96.8 percent was obtained, which is within the range of the ideal value obtained through the optimization analysis using a desirability function of 1 (Table 7). As a result, combining the Box-Behnken design with the desirability function is effective and efficient in maximizing COD removal. Table 9 compares the parameters of the raw wastewater effluent and the treated effluent based on the results of the current work with AC. As can be observed, the treated wastewater has improved characteristics and conforms to the standard limits for effluent discharge (Table 9). 
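To make the optimization step concrete, the sketch below maximizes a fitted quadratic model of RE% over the coded design region and converts the coded optimum back to real units using the level ranges stated above (packing 20-80%, time 60-80 min, pH 3-7). The coefficient vector is assumed to come from a least-squares fit such as the one sketched earlier, not from the paper's Eq. 3, and the routine is a plain bounded maximization rather than the desirability-function procedure used in the study.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_re(x, coeffs):
    # Quadratic response surface in coded variables x = (x1, x2, x3)
    x1, x2, x3 = x
    terms = np.array([1, x1, x2, x3, x1**2, x2**2, x3**2,
                      x1 * x2, x1 * x3, x2 * x3])
    return float(terms @ coeffs)

def optimize_re(coeffs):
    # Maximize RE% by minimizing its negative over the coded region [-1, 1]^3
    res = minimize(lambda x: -predicted_re(x, coeffs), x0=np.zeros(3),
                   bounds=[(-1.0, 1.0)] * 3)
    # Decode to real units: packing 20-80 %, time 60-80 min, pH 3-7
    centers = np.array([50.0, 70.0, 5.0])
    half_ranges = np.array([30.0, 10.0, 2.0])
    packing, time_min, ph = centers + half_ranges * res.x
    return {"packing_%": packing, "time_min": time_min, "pH": ph, "RE_%": -res.fun}
```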
The present study established the efficacy of activated carbon adsorption in the treatment of wastewater generated by the Al-Diwaniyah petroleum refinery plant by achieving a COD removal efficiency of 96.8 percent, a phenol removal efficiency of 93.2 percent, and a turbidity removal efficiency of 95.87 percent based on the raw effluent properties. The results of the present work reveal that adsorption using an activated carbon system can be applied successfully for the treatment of Al-Diwaniyah petroleum refinery wastewater. Starting from an initial COD of 2428 ppm, a COD removal efficiency of 96.70% could be achieved at 73 min with a packing level of 80%. These results show that the adsorption process removes the refractory natural and organic compounds that exist in petroleum refinery wastewater proficiently. FTIR spectral analysis To investigate the role of adsorbed organic compounds in the removal of COD from petroleum refinery wastewater, the FTIR spectrum was obtained for the activated carbon before and after adsorption at the optimum conditions, as shown in Figure 6. The FTIR spectrum of a material can provide valuable information about its chemical composition; during adsorption, shifting of the spectra as well as disappearance or reduction of peaks can indicate the efficiency of the adsorption process [51]. The peaks at 3845.45, 3830.02, 3780.07, and 3737.84 cm-1 are assigned to the stretching vibration of the O-H bond caused by the presence of chemisorbed water and surface hydroxyl groups, which may be responsible for the organic adsorption interaction [51,52,53]. The peak at 3391.07 cm-1 corresponds to O-H stretches in hydroxyl, carboxylic, and phenolic groups [54]; this peak is not found in the activated carbon before treatment. The peak at 3031.45 cm-1 corresponds to aromatic C-H groups [55,51]; this peak disappears in the activated carbon after treatment. The peaks at 2435.8, 2355.02, and 2319.4 cm-1 are assigned to C≡C stretching vibrations in alkyne groups [56,57]. The peak at 1684.9 cm-1 is due to the C=O stretching vibrations of ketones, aldehydes, lactones, and carboxyl groups [58,51,55]; this peak is not found in the activated carbon before treatment. The peaks at 1582.98 and 1533.14 cm-1 could be due to C=C stretching in monosubstituted and para-disubstituted benzene rings [59,56,53,57]. The peaks at 1152.66 and 1185.14 cm-1 correspond to C-O stretching vibrations in alcohols, phenols, or ethers [60,56]. Finally, the peak at 675.89 cm-1, corresponding to C-C stretching, is found in the activated carbon after treatment [61]. Comparison with previous works Most of the previous works were conducted in either batch or once-through continuous mode of operation. Table 10 shows a comparison between our results and the results of comparable works. As can be observed, the current system performed better in terms of COD, turbidity, and phenol removal. The reason behind these results could be the higher turbulence promoted by the batch recirculation mode of operation. Therefore, adopting batch recirculation for COD removal is considered a promising step in the application of this mode of operation to the treatment of different types of wastewater generated from different industrial activities. Conclusions This study was concerned with COD removal from petroleum refinery effluent using an activated carbon-adsorption process operated in a batch recirculation mode. 
The response surface approach was used to conduct experiments to determine the influence of operating parameters such as packing level of activated carbon, pH, and time on the removal of COD from petroleum refinery effluent generated by the Al-Diwaniyah refinery plant in Iraq. Based on the BBD, the best conditions were achieved at a packing level of 80%, a pH of 5.7, and a time of 73 minutes, at which a COD removal of 96.7% was obtained. The high R², adj. R², and pred. R² values indicate that the model fitted the experimental data very well. The results indicate that RSM can be successfully used to analyse the impact of various operating factors and develop the required optimum conditions, thus reducing the number of runs, the time, and the cost of experiments. The efficiency of the activated carbon-adsorption process was found to depend on two main parameters (packing level and pH), while time was found to have the least effect. The batch recirculation mode was able to operate the system without operational problems (experimental observations) and attained good COD removal during a circulation time of 73 min. From this study, the activated carbon-adsorption process appears to be an environmentally friendly process for removing COD from petroleum refinery wastewater.
Simulation of a single interaction of an abrasive particle with the surface of a part during blasting. It is shown that when modeling a single interaction of an abrasive particle with the surface of a part during collision under air (or liquid) pressure, it is necessary to take into account the specifics of impact-abrasive wear, in which the separation of a wear particle is preceded by destruction of the metal. In schematizing the contact interaction during abrasive blasting, some commonality with the shot blasting process and a number of assumptions typical for grinding, when micro-cutting with a single abrasive grain is considered, are taken into account. Introduction Abrasive blasting of the metal surfaces of parts refers to grinding with a free abrasive, the totality of which is directed at the surface to be treated at a certain angle α (the angle of attack) and under pressure p of compressed air or as part of an anti-corrosion liquid (hydro-abrasive blasting). It is known that the grinding process has much in common with the process of abrasive wear during friction, but there are a number of distinctive features: 1) the working surface of the tool is much rougher compared to the surface of the abrasive body; 2) grinding grains have high hardness, heat resistance, and wear resistance combined with relatively high brittleness; 3) the intensity of metal removal per unit time is high, and the chips produced during grinding are much larger than friction wear products. The systematization of the force contact of the abrasive with the wear surface is based primarily on the types of friction - sliding friction, rolling friction, impact of the abrasive against the metal surface - and on the transformation of the geometric and physical-mechanical properties of the surface layer of the metal. This systematization has signs of universality. For example, under conditions of sliding friction, the nature of the force interaction of a single abrasive particle with the wear surface is close to the case in which, instead of a separate abrasive particle, a certain protrusion acts on the friction surface, imitating a particle fixed at the contact (a grain on a grinding wheel). Regardless of the differences in the fundamental schemes of the interaction of abrasive particles with the wear surface, they have an element of commonality, which consists in the fact that in each case the separation of the wear particle is preceded by the destruction of the metal, i.e., mechanical (abrasive) wear is observed. The specificity of abrasive blasting of metal surfaces is that the sliding friction of abrasive particles is preceded by an impact with a destructive effect, thereby causing shock-abrasive wear. With this type of wear [1,2], the direct penetration of a solid particle into the metal under the action of a shock pulse creates a depression in the form of a dimple on its surface, which approximately copies the geometry of the particle. Methods The polydeformation process inherent in abrasive blasting, due to the many single penetrations of particles at each successive impact, forms a kind of macro-profile on the wear surface in the form of alternating dimples and bridges between them, without the characteristic marks of directional orientation typical of abrasive wear during sliding friction. At the initial moment of dynamic contact upon impact, the abrasive particle must overcome the resistance of the metal to penetration, which is possible only if its hardness and strength are higher than those of the metal. 
Depending on the angle of attack α of the abrasive particle, the regime parameters, and the physical-mechanical properties of the processed (worn) material, abrasive grains that have penetrated to a certain depth can, owing to their reserve of kinetic energy and elastic aftereffect, form short risks in the contact zones in the form of small scratches, i.e., carry out micro-cutting while sliding along the formed surface [3,4]. Abrasive blasting is characterized by simultaneous and multiple impacts of abrasive grains having different angles of attack within the flow falling on the metal surface in the form of a so-called solid-particle jet. The essence and thermodynamics of the contact interaction during the impact of deforming and cutting particles on the treated surface depend on many factors: the physical and mechanical properties of the contacting materials, the size and impact speed of the solid particles, the pressure of the working medium (air, liquid), the processing time, the angle of attack, and the flow density [5][6][7]. Due to the complexity and polydeformational nature of dynamic contact during processing with free abrasive particles, schematization of this interaction of solids is necessary in order to build a mathematical model of the abrasive blasting process. When drawing up the schematization of the contact interaction for abrasive blasting, some common features with the shot blasting process [8][9][10][11][12] were taken into account and a number of assumptions were made that are typical for grinding when micro-cutting with a single abrasive grain is considered. The schematization of abrasive blasting is thus based on the following assumptions. 1. From the flow of the jet of solid particles, we select one abrasive grain and assume that it hits the surface of the body being treated with the average flow velocity ʋ at a given angle of attack α; moreover, some of the abrasive particles fall on the surface at an angle close to 90°. The particle penetrates the body and then slides. 2. To describe the deformation process in dynamic contact, we model it as a single-act collision of a rigid, non-deformable solid particle of spherical shape. The assumption of non-deformability of the abrasive particles is acceptable due to their increased hardness and strength. 3. The surface of the machined (worn) part is assumed to be smooth, and the deformable body (surface layer) is represented as an elastic half-space. The legitimacy of this assumption is determined by the fact that the dimensions of the plastic imprint (hole) and the traces of micro-cutting (scratching) are significantly smaller than the dimensions of the body. 4. The processed material is considered homogeneous and isotropic, which is fundamental in the theory of elasticity and plasticity. 5. The intensity of impacts of abrasive particles on any treated area remains constant with the same specific gravity. 6. The probability of two impacts hitting a treated area of area ΔS within a very small but finite period of time is negligible compared to that of a single impact. 
Results and discussion In the contact force action of a hard abrasive particle on the treated surface, the sliding wear mechanism is manifested, in which two successive stages can be distinguished (Figure 1). The first stage is characterized by the impact of the abrasive particle on the treated surface and ends with its penetration into the thin surface layer of the metal to a certain depth h. A necessary condition for its realization is the superiority in hardness and strength of the abrasive particle over the metal of the treated surface, as well as sufficient kinematic and dynamic conditions for the contact of the solids. At the second stage, the abrasive particle, having penetrated to a certain depth, performs translational motion and forms the wear surface, carrying out a set of complex interrelated processes: plastic deformation, micro-cutting (scratching), elastic displacement, etc. Ultimately, the features of these phenomena during the contact interaction determine the mechanism of wear of the surface layer of metal in the abrasion zone. The external force action of a single abrasive particle on the workpiece surface is inevitably accompanied by its deformation and the further formation of local fracture centers with the separation of wear particles. Depending on the intensity of the force factor, the deformation in the contact zone of the abrasive particle with the metal can be elastic, or plastic if the intensity of the normal stresses qi exceeds the physical (or conditional) yield strength of the material being processed, σт (σ0.2). Since abrasive wear is characterized by continuous and in many cases significant removal of metal from the friction surface, taking into account the final result of the action of an abrasive particle, one should keep in mind mainly plastic deformation. The complex deformation mechanism in the zone of movement of an abrasive particle along the friction surface is predetermined by the complex shape, geometry, and size of the particle; i.e., on the surface of one abrasive particle there can be areas with different cutting properties, which creates heterogeneity of the deformation processes during micro-cutting. In the grinding process, the grains of the wheel carry out mass micro-cutting - scratching of the surface layer of the material being processed; therefore, the study of the operation of a separate grinding grain reduces primarily to studying the mechanism of the scratching process. The scheme of micro-cutting (scratching) of a material by a rounded grain cutting element [13] does not fundamentally differ from the classical scheme of free and non-free cutting adopted in the theory of metal cutting [14,15]. Fig. 2 shows a diagram of the process of micro-cutting by a single free abrasive particle having a rounded cutting element (rounding radius ρ) that comes into contact with the material being processed after impact under compressed air pressure. Let us consider the case of micro-cutting during the translational movement of the scratching element, which has a rounded apex of radius ρ and is acted upon by the external impact force Pud. Resolving the force Pud into the components Pz and Py, we establish that the force Pz cuts off the chip, while the force Py presses the scratching element against the surface being machined. The rounding of the scratching element provides high mechanical strength and large actual cutting angles. 
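As a simple numerical illustration of this force decomposition, the hedged sketch below resolves the impact force Pud into a tangential (chip-cutting) component Pz and a normal (pressing) component Py for a given angle of attack. The sine/cosine assignment assumes the planar geometry implied by Fig. 2, with α measured from the treated surface; that assignment is an assumption of this sketch, not a relation stated in the paper.

```python
import math

def impact_components(p_impact, attack_angle_deg):
    """Resolve the impact force P_ud into a tangential component Pz and a
    normal component Py, assuming Pz acts along the treated surface and Py
    perpendicular to it (angle of attack measured from the surface)."""
    alpha = math.radians(attack_angle_deg)
    p_z = p_impact * math.cos(alpha)  # along the surface: drives micro-cutting
    p_y = p_impact * math.sin(alpha)  # normal to the surface: drives penetration
    return p_z, p_y

# At an angle of attack near 90 degrees the normal component dominates
# (pure indentation); at small angles the chip-cutting component dominates.
print(impact_components(1.0, 30.0))
print(impact_components(1.0, 90.0))
```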
In the process of scratching, plastic deformation of the metal occurs in front of the scratching element in the zones A1, A2, on its sides (in the zones L1, L2), and below the cut line in the zones H1, H2 (Fig. 2). An increase in the thickness of the removed layer a leads to an increase in the volume of metal involved in plastic deformation in all the indicated directions: if a2 > a1, then A2 > A1, L2 > L1, and H2 > H1. The translational movement of the scratching element, accompanied by continuous chip removal, is possible under the action of shear stresses τ in the shear plane that exceed the true resistance of the material being processed to shear: τ ≥ τshear, where τshear is the shear yield strength of the material being processed [16]. In this way, the plastic deformation that occurs during the contact interaction of the abrasive particle with the metal surface of the workpiece is fully responsible for the quality and condition of the thin surface layer, characterized by a set of geometric and physical-mechanical parameters. In metals, the process of plastic deformation is mainly carried out by slip, realized through the movement in the slip plane of individual imperfections (defects) of the spatial crystal lattice - dislocations [17,18]. At present, the theory of dislocations is widespread due to its universality and allows a wide range of problems in the plasticity, fracture, and strength of metals, thermal physics, and thermodynamics to be solved. The undoubted advantage of the theory of dislocations is that it connects the micro- and macro-representations of the process of plastic deformation of metals through such parameters as the shear modulus, Poisson's ratio, the Burgers vector, the dislocation density, normal and shear stresses, and the yield strength.
The effect of perceived convenience and perceived value on intention to repurchase in online shopping: the mediating effect of e-WOM and trust Abstract This article investigated the effect of perceived convenience and perceived value on intention to repurchase in online shopping. We also assessed trust and e-WOM as mediators between perceived value and repurchase intention. During March-July 2022, a sample of 298 responses was collected from consumers that use online shopping in North Macedonia. We analysed the research model using PLS structural equation modelling (SEM) and used the bootstrapping technique for testing the hypotheses. The findings showed that all independent variables (perceived value, perceived convenience, trust, and e-WOM) affected repurchase intention. Moreover, the findings revealed that trust and e-WOM mediate the relationship between perceived value and repurchase intention. Perceived convenience and value contributed significantly to repurchase intention during online shopping, and perceived value had a greater impact on e-WOM. The results provide some theoretical and practical implications regarding the effects of factors that impact repurchase intention during online shopping in North Macedonia. Introduction In recent years, internet technologies have enabled companies to use their websites to reach their potential targets. But the real challenge for companies remains their connectedness with customers. Retaining and gaining new customers requires companies to improve the products and services offered to their targets (Shin et al., 2022; Wu et al., 2014). It is not enough for companies just to be present online; they must find ways to get interconnected with their customers. In most of the research regarding repurchase intention, many authors consider perceived convenience to affect repurchase intention in online endeavours (Jebarajakirthy & Shankar, 2021; Shankar & Rishi, 2020). Moreover, perceived value positively and significantly impacts repurchase intention (Dlačić et al., 2014). Previous research emphasises the transaction costs that consumers incur during online purchasing, but these studies do not fully explain what motivates consumers in their repurchase intention (Wu et al., 2014; Yu & Chen, 2018). Therefore, it is crucial to obtain customers' insights regarding what consumers evaluate most during their repurchase intention (Galetić & Dabić, 2021). Although the role of value in repurchase intention is recognised in previous studies (Chakraborty, 2019; Chen & Lin, 2019; Zeithaml, 1988), there is still scarce research examining perceived value and its relationship with repurchase intention during online shopping. Furthermore, some studies analysed repurchase intention from both transaction cost and value perspectives in order to comprehend their impacts on consumers' repurchase intentions (Woodruff, 1997). Shopping convenience seems to be very meaningful to customers in their intention to repurchase (Arya et al., 2022; Fernandes et al., 2022; Jiang et al., 2013). Convenience motivates customers in their intention to shop online. Kruh et al. 
(2017) indicate that convenience during shopping has become one of the most important reasons why consumers decide to shop online. In addition, a substantial review of the literature on customer convenience in a service economy has been conducted (Berry et al., 2002; Seiders et al., 2007), defining convenience in the service industry as the time and effort that customers save when purchasing or using a service. Based on this, the effort and time costs incurred during the process of online shopping influence perceived convenience in the service industry. Although some literature distinguishes between goods and service convenience (Kelley, 1958), Berry et al. (2002) point out that all businesses provide services to their clients, hence convenience appears to be important for both goods and services. Therefore, the primary determinants of perceived service convenience are related to non-monetary expenses, namely time and effort. Based on the above, this article attempts to examine the relationship of the perceived convenience dimensions with the intention to repurchase products online, by proposing a research framework based on perceived convenience and online repurchase intention. The current study uses the convenience dimensions of Jiang et al. (2013) and also investigates their relationship with perceived value, e-WOM, and trust. This study therefore contributes by expanding previous research and providing more theoretical and empirical evidence regarding repurchase intention in an emerging economy. Second, this article develops a conceptualisation of perceived convenience and perceived value and assesses their impact on consumers' repurchase intentions and perceived value. In addition, this article adds a more robust explanation of trust and e-WOM as mediators of the effects of perceived convenience and perceived value on repurchase intention, hence enriching the existing literature. Lastly, since most previous studies address the relationship of online convenience with purchase intentions, the current study closes this gap by exploring the relationship between online perceived convenience, perceived value, trust, and e-WOM. Therefore, this study supports managers by identifying dimensions that may positively influence repurchase intentions in order to improve service delivery to customers (Hur et al., 2021). The study follows this outline. First, it introduces the research problem in the context of North Macedonia, framing consumer repurchase intention in online settings. We then review the literature to develop and present the hypotheses and explain the methodology used in the study. We then proceed with the data presentation and analysis of the findings. The final section provides some implications of the research for theory and practice, and its limitations, before offering some useful directions for future research. Perceived convenience The convenience concept was first coined by Copeland (1923), who used it to describe a category of goods that consumers buy frequently, with low involvement, and at easily accessible convenience stores. In this line, some studies have used the term convenience to classify products that are purchased by customers with low risk and low involvement in their buying process (Bucklin, 1963; Brown, 1990; Copeland, 1923). Additionally, convenience saves consumers' time and effort, which speeds up their intention to repurchase (Seiders et al., 2005). 
The convenience concept has been used in marketing since it integrates both goods and services, and it needs to be analysed more thoroughly (Berry et al., 2002). The convenience concept was initially associated with the convenience that saved customers' effort and time during the purchasing of goods (Farquhar & Rowley, 2009; Yale & Venkatesh, 1986). Thus, convenience studies have pointed out that consumers' convenience is linked to all products, whether tangible or intangible, that save the consumer's effort and time during the process of shopping (Berry et al., 2002). The convenience dimensions of time and effort are found to be very consistent in previous research and were used as the notion of convenience for products and services that reduce the non-monetary price (Kelley, 1958; Kotler & Zaltman, 1971). Apart from its focus on products, the convenience concept has also received attention with respect to service convenience attributes (Jiang et al., 2013). According to Berry et al. (2002), most researchers have linked convenience to consumers' interest in preserving their time and effort when they intend to repurchase products. The convenience concepts used in this study are based on Jiang et al. (2013) and constitute the dimensions below. Access convenience Access as a convenience factor characterises the ease and speed of reaching a retailer (Seiders et al., 2000). In the retailing sector, access convenience is a very significant element, since it provides the consumer with an opportunity to access an online service (Duarte et al., 2018). In contrast to physical retailers, consumers in an online environment can shop from different locations. According to King and Liou (2004), the access convenience dimension is thought to be the most vital aspect of consumers' perceptions in online shopping. Search convenience Search convenience applies to the convenience during the process of identifying and searching for a product or a service by consumers during their repurchase intention. Beauchamp and Ponder (2010) define search convenience as how easily and how fast consumers identify and select products during their purchase intention. The Internet has provided companies with new ways of using different tools to improve their communication by providing useful information for their clients through their websites, paid ads, or any other form of social media (Duarte et al., 2018). Consumers benefit from these tools since they prevent them from wasting time (Beauchamp & Ponder, 2010; Shankar & Rishi, 2020) and require much less effort by avoiding travel to physical stores (Seiders et al., 2000). Evaluation convenience Evaluation as a convenience factor means the degree to which products are available to be evaluated by potential consumers. Jiang et al. 
(2013) associate evaluation convenience with various presentation contents that are easily understood, such as texts and videos on company websites. When companies engage in creating good content, consumers have a clear picture of products with less required time and effort. Information and website content positively influence consumers' opinions about products (Chen & Wells, 1999). In this line, Elliott and Speck (2005) state that product information relates to product characteristics, accuracy, amount of information, infographics, audio, and video. Therefore, websites that provide product information and facilitate the process of locating and utilising that information in a timely manner satisfy customers (Kim & Gupta, 2009). Transaction convenience Transaction convenience refers to the consumer's perception of avoiding time and effort during any online transaction with a company. Transaction convenience is defined as the ease and speed of effecting transactions (Seiders et al., 2005) and amending transactions (Beauchamp & Ponder, 2010). Consumers value online payment that is easy and requires no extra effort. Online users search for rapid and easy transactions due to the nature of online buying (Srinivasan et al., 2002) and are more likely to buy online when the transaction process itself is less complicated and risk-free (Dekimpe et al., 2020). Therefore, according to Jiang et al. (2013), transaction convenience refers to the customer's time and effort incurred during the process of fulfilling a transaction. Possession and post-purchase convenience Possession as a convenience element means the perceived effort and time required by consumers to gain what they want from the company. Possession convenience means the money and time spent by consumers to get the desired product (Jiang et al., 2013), and how easily and at what pace consumers can attain their desired products (Seiders et al., 2000). Therefore, with online stores, consumers have to wait for product delivery, timely delivery, and safe product shipment before the product is in their possession (Jiang et al., 2013). On the other hand, post-purchase convenience is very important for consumers because they may need to contact the company for any eventual after-sale service. Nowadays, post-purchase convenience is crucial for consumers since they face many obstacles when they need to return products bought online (Berry et al., 2002). Therefore, positive perceived online convenience is achieved when consumers successfully handle a failed service with little time and effort. Based on the above, we propose the following hypotheses: H1: Perceived convenience positively impacts repurchase intention. H2: Perceived convenience positively impacts perceived value. Perceived value Perceived value is a key reason why consumers decide to purchase online, because of the little effort they have to make (Sharma & Klein, 2020). Perceived value as a concept is very important in marketing since consumers are attracted by products that offer perceived value to them. Perceived value is based on the value that customers perceive for a product or service (Zeithaml, 1988), or arises when a consumer compares the benefits and costs perceived from a marketing offer (Lovelock, 2001; Hasani & Zeqiri, 2015). Customers' perceived value can be explained from different viewpoints. Perceived value provides consumers with 
benefits; for example, Kuo et al. (2009) consider that, besides benefits, perceived value also represents money and quality for customers. According to Bishop (1984), value is created when consumers spend less on products. Many studies point out that perceived value is positively related to repurchase intention (Kuo et al., 2009; Wang et al., 2004; de Morais Watanabe et al., 2020). Therefore, value may be characterised in terms of a low price, what customers desire from the goods, the quality received for the money, and what is received for what has been given (Rahab et al., 2015). Thus, we posit the following hypothesis: H3: Perceived value has a positive impact on repurchase intention. Perceived trust is thought to play a more significant role in online market settings than in traditional offline markets because of the perceived risk and uncertainty that may be present in the online shopping context (Kim et al., 2017). Consumer perceived value leads to consumer engagement and involvement in the online shopping process. Consumer perceived value is strongly correlated with perceived trust, and this relationship in turn has a significant link with consumers' intentions to engage in online shopping (Sharma & Klein, 2020). The findings reveal that confidence in a website significantly increases visitors' intentions to make purchases there (Chen, 2012). Additionally, trust is a crucial determinant of consumer behaviour and an essential component of online shopping success. Therefore, we hypothesise the following: H4: Perceived value has a positive impact on trust. Perceived value refers to what consumers receive from a product or service, and how they evaluate the utility of that offer (Rouibah et al., 2015). Consumers rely more on information received from friends and companions during their communication, because such sources are considered more candid than commercial ads; hence, people have come to trust word of mouth more (Ismail & Changalima, 2022; Palalic et al., 2021). Consumers are more likely to use e-WOM and spread bad words to others if a product or service does not deliver what it was expected to deliver (Talwar et al., 2021). Using e-WOM to spread negative words therefore negatively affects the performance of the company; hence, evaluating negative WOM is a very important issue for companies to tackle (Chen & Zhang, 2022). On the contrary, using e-WOM to spread positive words enhances brand credibility (Banerjee & Sreejesh, 2022). Therefore, we hypothesise the following: H5: Perceived value has a positive impact on e-WOM. Trust Trust is a very important factor in online shopping because customers are separated from products and salespersons. This makes online shopping riskier due to possible monetary and other losses. Trust and risk are crucial factors in determining customer behaviour in online settings (Chen, 2012; Sharma & Klein, 2020). Consumers are reluctant to engage in online transactions because they lack confidence (Jarvenpaa et al., 2000). Because consumers face perceived risk and uncertainty in online transactions, consumers' trust plays a more crucial role in the online market than in physical markets (Head & Hassanein, 2002). Trust helps consumers overcome their reluctance in their repurchase intention in an online environment. Thus, consumers buy products from online stores they trust in order to reduce uncertainty and possible risk during online shopping. A lot of 
research on online shopping has revealed that customers' intentions to make purchases from an online store are positively influenced by their trust in the retailer (Chae et al., 2020; Lien et al., 2015; Ponte et al., 2015). Therefore, we propose the following hypothesis: H6: Trust is positively related with repurchase intention. E-WOM When consumers want to buy products online, in most cases they look for online reviews and comments from other consumers' experiences before deciding to purchase from online stores. Therefore, consumer power lies in e-WOM, where online reviews and the experiences of previous customers empower the online shopper (Park et al., 2011). e-WOM refers to a statement made by a customer about a product or a company (Handi et al., 2018). e-WOM statements can be positive or negative regarding the customer's experience with the product or the company. Thus, consumers can use various online tools to spread their opinions to other consumers, which may affect their intention to repurchase products. Previous studies reveal that e-WOM is an important factor, apart from other factors, and is positively related with repurchase intention. Sweeney et al. (2014) noted that services, compared to physical goods, are more difficult to evaluate because of the intangible nature of services. Therefore, consumers rely more on online word of mouth before any decision-making process when they need to repurchase a service. Moreover, both positive and negative e-WOM are strongly related to repurchase intention (Sweeney et al., 2014; Liang et al., 2018; Sampat & Sabat, 2021). Therefore, we posit the following hypothesis: H7: A positive significant relation exists between e-WOM and repurchase intention. Based on the above, we propose the following research concept. Figure 1 presents the conceptual research framework of this study. Data collection and scales The hypothesised relationships were analysed using data collected from an online survey. The original survey scales were in English and were then translated into Albanian and Macedonian because of the respondents from North Macedonia. The online questionnaire was pretested by sending the link by mail to some respondents in order to check for any mistakes or misunderstandings. The structured questionnaire was designed in two sections. The first section dealt with the demographic profiles of respondents, and the second section with the dimensions proposed in the model. Respondents were asked to evaluate the dimensions using 5-point Likert scales, indicating the extent of their agreement with the statements concerning the dimensions in the proposed model. The items used for the dimensions were developed from the literature review. Items for the convenience dimension were developed from Jiang et al. (2013) and Benoit et al. (2017), comprising 17 items; perceived value with 3 items from De Toni et al. (2018); e-WOM with 3 items from Kajtazi and Zeqiri (2020); the trust dimension with 9 items from Raman (2019) and Doney and Cannon (1997); and the repurchase intention items from Toska et al. (2022). 
Sample The structured questionnaire was distributed using Google Forms with a convenience sampling technique. A sample of 298 responses was collected from consumers that use online shopping in North Macedonia. Data analysis The obtained data were analysed using SPSS 26 statistical software and SmartPLS version 3.3.9, carrying out partial least squares SEM analysis to assess the measurement model and the bootstrapping technique to assess the structural model. Initially, the measurement model was used to evaluate construct reliability and validity; then the structural model assessed the significance of the relationships of the proposed hypotheses (Table 1). Measurement model This analysis (the measurement model) evaluates the quality of the constructs in the study, commencing with the evaluation of the factor loadings, followed by the construct reliability and construct validity assessment (Emini & Zeqiri, 2021), before the hypothesised model is assessed. Convergent validity is used as a test to assess the closeness of the items and how much they are related to each other within a construct. The convergent validity tests analyse the factor loadings, AVE (average variance extracted), Cronbach's alpha, and composite reliability (Rahman et al., 2015). The analysis in Table 3 reveals that the Cronbach's alpha values of all dimensions vary from 0.724 to 0.939, showing that these results are above the proposed threshold of 0.60 (Ursachi et al., 2015), which is recommended in social sciences research. In addition, the composite reliability values range from 0.841 to 0.949, above the proposed threshold of 0.70. Moreover, the AVE (average variance extracted) values vary from 0.645 to 0.904, denoting that those values are over the suggested threshold of 0.50 recommended by Fornell and Larcker (1981). Based on the results presented in Table 3, convergent validity was reached (Henseler, 2017). Factor loadings Factor loadings denote the extent of the correlation coefficient of an item with a given variable in the correlation matrix. Loading values can vary from −1.0 to +1.0, where items with higher loading values denote a higher correlation of that item with a given factor (Pett et al., 2003). In our study, all items had factor loadings above the threshold value of 0.50 suggested by Hair et al. (2016). Hence, no items needed to be removed, as shown in Table 3. Indicator multicollinearity In order to test for issues related to the multicollinearity of indicators, the Variance Inflation Factor (VIF) statistic was utilised (Fornell & Bookstein, 1982). When VIF values are below 5, there are no multicollinearity issues (Hair et al., 2016). The results in Table 3 reveal that the VIF values for each of the indicators in this study are below the suggested threshold. Reliability analysis Reliability refers to consistency of results. When a scale is measured repeatedly and produces the same results, the scale is considered reliable. Reliability, according to Mark (1996), represents the degree to which a measure is consistent and stable. A measure is consistent when it provides the same results or findings when used again and again. Cronbach's alpha and composite reliability (CR) are the two most commonly used methods for checking scale reliability. 
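For readers who want to reproduce reliability figures of this kind from raw item scores, a minimal sketch of the Cronbach's alpha calculation is shown below; the data are simulated purely to illustrate the formula and are not the study's survey responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert scores for one construct.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy example: three items driven by one latent construct plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
scores = np.column_stack([latent + rng.normal(scale=0.8, size=200) for _ in range(3)])
print(round(cronbach_alpha(scores), 3))
```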
Results
The study used partial least squares (PLS-SEM) to analyse the proposed research model. We used structural equation modelling (SEM) to assess both the measurement and the structural model. The measurement model analyses provide information concerning construct validity, such as convergent and discriminant validity.

Convergent validity
In order to establish convergent validity, we checked the outer loadings and the average variance extracted (AVE) values to find out how closely the items converge while measuring the same construct (Ramayah et al., 2018; Zeqiri et al., 2022). Hence, the convergent validity test provides information regarding Cronbach's alpha, composite reliability, AVE, and the factor loadings (Sarstedt et al., 2019; Zeqiri et al., 2022). When the AVE values are greater than or equal to the suggested threshold value of .50, convergent validity is established, as suggested by Fornell and Larcker (1981). Cronbach's alpha ranged from .724 to .939, whereas the composite reliability statistics ranged from .841 to .949, as can be seen in Table 4. Based on the obtained results, we can conclude that both indicators of reliability are over the required threshold of .70 (Hair et al., 2017). Therefore, all constructs established reliability. In addition, the AVE values in this study vary from 0.646 to 0.909, denoting that all values are above the recommended threshold of 0.50. Thus, the convergent validity statistics show that all constructs exceed the recommended AVE threshold. Table 4 reveals the AVE value for each of the dimensions.

Discriminant validity
Conversely to convergent validity, discriminant validity shows the extent to which dimensions are unrelated or different in the construct. Bagozzi et al. (1991) state that discriminant validity is achieved when the construct measures do not correlate highly with each other. Therefore, discriminant validity provides evidence about the extent to which construct measures are or are not highly correlated with each other. In addition, Fornell and Larcker (1981) pointed out that the criterion for establishing discriminant validity is met when the square root of the AVE for a construct is greater than its correlation with all other constructs. This study reveals that the square root of the AVE for each construct is larger than its correlations with the other constructs (Table 5).

Validating higher order constructs
Perceived convenience was the higher-order construct used in this study, based on five lower-order constructs: access, search, evaluation, post-purchase, and transaction convenience. To establish the validity of a higher-order construct, we should assess outer weights and loadings, t-statistics, p-values, and VIF. Based on the obtained results, the outer weights are significant (Hair et al., 2017). In addition, the outer loadings are greater than the recommended threshold value of .50 for each of the lower-order constructs (Sarstedt et al., 2019). Finally, in order to check for collinearity issues, we assessed the VIF values. Table 6 shows that all VIF values are less than the suggested value of 5 (Hair et al., 2016). Therefore, based on all assessments, all criteria for the HOC validity are met.

Structural model
The PLS-SEM was used to analyse the obtained empirical data in order to assess the hypothesised relationships and validate the proposed model and hypotheses. Figure 2 presents the structural model results. Perceived convenience had a significant effect on PV (B = 0.422, t = 7.892, p < .000). Therefore, we support H1.
Perceived convenience (PC) showed a positive impact on repurchase intention (RI) (B = 0.224, t = 3.369, p < .001), thus supporting H2. The results also showed a positive relationship between perceived value (PV) and repurchase intention (B = 0.286, t = 4.720, p < .000). Hence, H3 is supported. In addition, H4 evaluated whether perceived value (PV) was positively related to trust. The results revealed that PV had a positive significant impact on trust (B = 0.394, t = 7.531, p < .000). Thus, H4 was supported. Moreover, H5 evaluates the impact of PV on e-WOM. The results revealed that PV impacts e-WOM (B = 0.504, t = 9.057, p < .000), in support of H5. The results also revealed that trust had a positive effect on repurchase intention (RI) (B = 0.136, t = 2.177, p < .000), supporting H6. Finally, H7 evaluated whether e-WOM had an impact on repurchase intention. Table 7 revealed that e-WOM had a positive relationship with repurchase intention (RI) (B = 0.293, t = 4.842, p < .000), supporting H7.

Mediation effect
The proposed model analysed the mediation effects involving perceived value, e-WOM, and trust. As provided in Table 8, the findings showed that e-WOM mediates the effect of perceived value on repurchase intention (B = 0.151, t = 4.210, p < .000). In addition, the results revealed that trust does not mediate the relationship between perceived value and repurchase intention (B = 0.054, t = 1.879, p = .060). Furthermore, perceived value mediates the relationship between perceived convenience and trust (B = 0.167, t = 4.315, p < .000), and finally, perceived value mediates the relationship between perceived convenience and e-WOM (B = 0.213, t = 4.913, p < .000). Since the direct effects of the predictors were significant, we can conclude that the mediators partially mediated the relationships between the predictors and the observed variables.

Theoretical contributions
The findings of this research offer some useful insights regarding the role that perceived convenience and perceived value play in repurchase intention. In addition, this research enhances the existing theoretical literature by providing an original framework that investigates how perceived value and trust are related to consumers' intention to repurchase. As pointed out by other research, perceived convenience and perceived value are crucial in the decision to repurchase products online (Kuo et al., 2009; Wang et al., 2004; de Morais Watanabe et al., 2020). This study provides further evidence for understanding the factors that drive repurchase intention during online shopping. First of all, this study addresses some important research issues in the context of online shopping. It explains the relationship between perceived convenience in the context of online shopping and its eventual implications for repurchase intention.

Results showed that customers value products they purchase frequently, with low involvement, and in a convenient shopping environment. Moreover, our findings support previous research showing that convenience during the purchasing process satisfies the customer's ability to realise his or her intent, since it conserves customers' time and effort during their purchasing of goods (Yale & Venkatesh, 1986; Farquhar & Rowley, 2009), thereby facilitating repurchase intention (Seiders et al., 2005).
Secondly, trust appears to be very important in the online shopping process. Based on the empirical evidence, this study adds to the existing literature, in line with other research studies finding that trust is positively related to perceived value (Kim et al., 2017; Sharma & Klein, 2020; Chen, 2012) and to customers' intention to repurchase. In addition, the perceived value arising from online shopping convenience and online trust affect customers' repurchase intention. Thus, customers are more inclined to make online purchases from the stores they trust. The findings in this research are in line with many previous studies of the online shopping context, revealing that customers' intentions to make purchases from an online store were positively influenced by their trust in the retailer (Chae et al., 2020; Lien et al., 2015; Ponte et al., 2015).

Importantly, perceived convenience translates into more perceived value for consumers: consumers buy products with low risk and low involvement, conserving their time and effort and thereby increasing their perceived value as they fulfil their intention to buy online, while perceived value has the greatest influence on e-WOM. Our findings support previous studies showing that perceived value has an impact on e-WOM (Rouibah et al., 2015; Ismail & Changalima, 2022; Palalic et al., 2021; Talwar et al., 2021; Chen & Zhang, 2022; Banerjee & Sreejesh, 2022).

This study also investigates how search, access, evaluation, transaction, and post-purchase convenience affect perceived convenience in online shoppers' repurchase intention, and shows that the evaluation, transaction, and search convenience components of this higher-order construct (perceived convenience) significantly affect repurchase intentions, providing more empirical evidence for the theoretical part. Therefore, consumers' ability to evaluate products online, the ease of transaction, and the search for product information on, for example, company websites contributed most to repurchase intention during online shopping. Consumers gain many benefits from these tools, since they do not waste time or spend much effort in the decision-making process (Beauchamp & Ponder, 2010; Shankar & Rishi, 2020; Seiders et al., 2000).

Another contribution of this research lies in the fact that the conceptualisation of this study explores the mediating roles of trust and e-WOM at the same time, together with perceived convenience, perceived value, and repurchase intention. This makes this research among the first studies exploring the mediation effect of trust and e-WOM on online repurchase intention.

Managerial implication
This study offers some additional insights to marketing managers and companies seeking to improve their marketing activities around the online repurchase intentions of their customers, in the following directions.

First of all, it is very important to enhance the perceived convenience offered to customers, i.e., marketing managers and companies need to create for their clientele a more convenient shopping environment that is meaningful to them in their decision to repurchase products (Jiang et al., 2013). Perceived convenience during online shopping is crucial for creating real positive value for customers in their intention towards online shopping. This evidence is also supported by a previous study by Kruh et al.
(2017), which revealed that convenience during shopping is among the main reasons why consumers intend to shop online. Therefore, it is imperative for companies that expect to sell products through online channels to develop convenience strategies that save customers' effort and time, for example by securing wide-ranging and innovative ways for customers to carry out their online shopping. In this way, managers can use strategies that promote convenience by providing detailed information about their marketing offer.

Second, companies and managers should know that the more convenience customers perceive, such as evaluation, transaction, and search convenience, the more likely customers are to repurchase, use e-WOM, and eventually recommend the product to other customers. In addition, our findings provide interesting insights for managers. For example, the possibility of evaluating product information was found to be the most important factor determining perceived convenience for customers during their online shopping. Moreover, the results revealed that access and transaction convenience were very essential for customers. Therefore, managers should ensure that their customers have easy access to information regarding their offer. Specifically, marketing managers should pay attention to the information shared on company websites and social media platforms, along with search engine optimisation, so that it contains valuable information for their customers; they should not just post information, but information that represents value for customers. Therefore, from a managerial perspective, the findings offer managers some important insights regarding the convenience dimensions and which dimensions and factors to improve in order to provide customers with more convenience and value in their intention to repurchase products from online stores, and thereby enhance trust and e-WOM.

Limitations and further research
Like other studies, this study acknowledges some limitations, since it collected data only from customers and did not analyse any industry specifically. Therefore, the obtained results are general customer perceptions regarding the factors that might enhance the intention to repurchase online. In addition, a larger sample could produce different and more robust results.

The focus of the research was to explore the relationship between perceived convenience, perceived value, and repurchase intention, which is limited to only the perceived benefits during repurchase processes. Therefore, future studies should focus on other factors; for example, new research could combine a more comprehensive model embedding both perceived benefits and perceived risks. In addition, since our study explored products as a concept entailing both goods and services, a multigroup analysis could offer more consumer insights concerning the factors that contribute to repurchase intention.

Our research revealed that evaluation convenience contributed most to the perceived convenience dimension, surpassing access and search within the convenience dimension. These results suggest some future directions for research. First, we provide some clues for further research to focus on the information posted in online contexts and environments, since customers value the content of information.
Moreover, we found that certain mediators, like trust and e-WOM, strengthened the relationships of perceived convenience and perceived value with repurchase intention; accordingly, using other moderators may yield further insightful results, for example using internet penetration and online service usage as moderators to find out whether they moderate repurchase intention during online shopping.
7,499.8
2023-01-25T00:00:00.000
[ "Economics", "Business" ]
Antimicrobial and Thermal Properties of Coating Systems Modified with ZnO Nanoparticle and its Hybrid Forms: (A Review) This review examines the unparalleled chemical and physical properties of ZnO nanoparticles and their hybrid forms. The influence of these multifunctional materials within the polymeric matrix of organic coatings is discussed. The scanning electron microscope is seen to provide relevant information about the dispersion of the hybrid and composite coating systems. This review provides concise information about the antimicrobial and thermal stability of composites.

INTRODUCTION
The concerns over the increase in worldwide epidemic and infectious diseases have accentuated the need to design antibiotics and to promote sanitary practices. The world's smallest life forms (microbes) have been established as the most virulent since the stone age. Diseases like malaria caused by protozoa (Plasmodium spp.), pneumonia caused by bacteria, fungi and viruses, white pox caused by Serratia marcescens, whooping cough (Bordetella pertussis) and tuberculosis (Mycobacterium tuberculosis) top the list of world infectious diseases 1 . Microbes such as Escherichia, Streptococci and Staphylococci are responsible for most hospital (nosocomial) infections. The United States of America alone has a statistical record of two (2) million cases of hospital-acquired infections and about 90,000 deaths per year 2 . The development of drugs such as antibiotics has helped to reduce the deadliness of these infections. Years after their application (i.e. antibiotics), scientists have noticed resistant strains of bacteria against the designed antibiotics. The misuse and protracted use of antibiotics, together with bacterial evolution, were observed to have caused the resistant strains. The campaign against microbes has geared manufacturers towards formulating products like mouthwash, room spray, kitchen soaps, glass cleaner, shampoo, surface cleaners and perfume with antifungal and antibacterial chemicals 3 . The instinctual ability of bacteria to attach themselves to surfaces and form thin colony layers, which (for example) are capable of covering medical device surfaces and thereby creating a medium of infection, could be tackled by developing coating formulations able to inhibit their growth. Devices like catheters, mechanical heart valves, contact lenses, intrauterine devices, surgical consumables and orthopedic implants can become mediums for infection if not rightly protected 4 . There are various types of protection that could inhibit the growth of these microbes. This review primarily emphasizes the formulation of eco-friendly coating systems modified with ZnO nanoparticles and its hybrid forms derived from seed oils. Coating systems are formulations uniquely designed for the surface covering of substrates. They provide functional, decorative, and in some cases both functions on the applied substrate 5 . Plant seed oil based coating systems have recently been studied by researchers in a bid to substitute coating systems derived from conventional petrochemical feedstocks. Unlike petrochemical feedstocks, which create environmental concerns ranging from air and water pollution to global warming; plant seed oils are renewable, biodegradable resource materials that can be formulated into effective coating systems [6][7][8] . However, the nature of most plant seed oils (i.e.
the composition of their fatty acid profiles) deters them from being used directly in coating formulations; hence, the need to modify the seed oils in order to create reactive functional sites. Modified polyol systems prepared through aminolysis, transesterification, hydroformylation, epoxidation, partial glyceride (PG) formation, and oil blowing are pathways towards creating functional sites for coating systems [8][9][10][11] . Coating systems such as polyesteramides, polyetheramides, and polyesteramide-urethanes are synthesized from plant-based polyol platforms, as exemplified in Scheme 1 8,10 . The synthesized pristine organic coating products, however, show limited thermal, corrosion, antimicrobial, rigidity, and chemical characteristic properties required in structural applications 12-14 . It is in the light of this that researchers are incorporating designed nanoparticles within the polymeric matrix of pristine coatings in order to reinforce the characteristic properties of the polymer composite, thereby creating innovative solutions in the areas of optics, electronics, biomedicine and materials science. Quite a number of nanoparticles have been used in composite coating formulation processes. Commonly used nanoparticles in this regard are TiO2, Al2O3, SiO2, CaCO3, ZnO, etc. The latter (i.e. ZnO nanoparticle) possesses unique characteristics such as improved chemical, biological and semiconductor properties and drug delivery. Nano-sized ZnO particles have also been found to be inert and non-toxic in nature and to provide a shield against UV radiation (due to its wide band gap of 3.37 eV, large bond strength and its heavy 60 meV exciton binding energy at room temperature) [15][16][17][18] . The inorganic amphoteric oxide (i.e. ZnO) is nearly insoluble in water and reacts slowly with the fatty acids of triglycerides; hence, the need to modify the peripheral hydroxyl functional groups of the ZnO with organic compounds [19][20] . This will provide the required ambience for reaction between the triglyceride polyols, polyesters, polyethers and urethanes and the modified hybrid ZnO nanoparticles. This paper aims to review synthesis routes for preparing organic coating composites via the introduction of hybrid ZnO nanoparticles (ZnO-APTMS or ZnO-APTES) within the pristine polymer matrix and discusses the thermal and corrosion stabilities, chemical resistance and antimicrobial sensitivity of seed oil based coating composites. Section 2 presents basic discussions on the ZnO nanoparticle and synthesis routes for the preparation of hybrid-ZnO nanoparticles and their composites. In Section 3, property evaluations (such as thermal stability, corrosion studies, chemical resistance and antimicrobial sensitivity) of coating composites are reviewed. The concluding remarks are presented in Section 4.

ZnO nanoparticles
Over the years, ZnO nanoparticles have received extensive attention owing to their versatile industrial applications. The lack of a symmetry center in its wurtzite structure (Figure 1), combined with strong electromechanical coupling, results in strong pyroelectric and piezoelectric properties. These effects result in the use of ZnO material in mechanical actuators and piezoelectric sensors 21 . This inorganic metal oxide (ZnO), with high thermal and mechanical stability at room temperature, belongs to the II-VI semiconductor group, and its bonding lies at the boundary between the ionic and covalent semiconductor limits [22][23] .
The conductometric method of analysis shows that the antibacterial activity of CaO, MgO and ZnO can be ascribed to the reactive surface oxygen species on these oxides 24 . These inorganic oxides contain mineral elements requisite in human systems, which when administered in small amounts exhibit antimicrobial agent tendencies. This tendency is reported to be dependent on the particulate size of these oxides 25 . This multifunctional material (ZnO) creates new chemistries for organic base coatings. However, ZnO nanoparticles have a significant tendency to agglomerate in coating systems. Hence the need to modify the peripheral hydroxyl functional groups of ZnO nanoparticles, especially via the sol-gel method of creating hybrid materials (Scheme 2) for various industrial applications 26 . This method is a simple, reliable, low-cost, and repeatable synthetic route that provides for surface modification 27 .

Synthesis of hybrid-ZnO (ZnO-APTMS or ZnO-APTES) nanoparticles and its seed oil based composites
Recently, researchers have designed several methods of modifying the surface of ZnO material in a bid to reduce the effects of agglomeration and intercalation (when combined with other materials such as montmorillonite) without impairing the physicochemical properties of the compound 28 . The following sections present scientific contributions towards modifying ZnO hybrid systems and composite formulations for coating purposes.

Dhoke et al 29 reported an attempt to synthesize a nano-composite by incorporating nano-ZnO within a developed alkyd-based waterborne coating at different loading percentages. The composites were synthesized by combining the alkyd-based waterborne coating with hexamethylolmelamine as cross-linker in a mixing ratio of 70:30. This base matrix coating was treated with 0.01, 0.02, and 0.03 wt.% nano-ZnO. Dispersion was carried out using a mechanical stirrer initially, culminating with ultrasonication. The application of the coating was done via dipping of pretreated mild steel panels in the coating (nanocomposite). Curing was at 130 ºC for 15 minutes. A similar procedure was reported by Dhoke et al 30 , where the mechanical and heat-resistance properties of waterborne silicone-modified alkyd-based coatings were investigated alongside the effect of nano-ZnO addition. Li et al 31 prepared reinforced polyurethane coatings by mixing hydroxyl-acrylic resin (HAR) with an average molecular weight of 15,600 and hexamethylene-1,6-diisocyanate (HDI trimer). A planetary ball milling instrument was used for mixing along with various percentages of ZnO nanoparticles. 30 g of thinner was mixed with some ZnO nanoparticles for 1.5 hours at 40 rpm. Subsequently, the suspension obtained was agitated with 40 g of HAR and 10 g of HDI trimer. The mixture was later ball-milled for 1 hour at 40 rpm. Reinforced polyurethane composite coatings were thus prepared. Jena et al 32 modified the periphery of ZnO with 3-aminopropyl-triethoxysilane and afterwards incorporated the 3-aminopropyl-triethoxysilane-ZnO (APTES-ZnO) within the polymeric matrix of hyperbranched polyurethane-urea. This was achieved by adding 10 g of ZnO into 50 g of toluene in a round-bottomed flask. The content was stirred with a magnetic stirrer for 5 minutes. An ultrasonic bath was used to improve the dispersion of ZnO in toluene.

Table 1: Antimicrobial activity of pure polymer and PGU-APTMS-ZnO hybrid coatings 33
1 g of APTES coupling agent was added to the suspension and stirred at room temperature. After the appearance of a yellow-transparent dispersion, the mixture was refluxed for 24 hours. A rotary evaporator was used to dry the modified material. Unreacted APTES molecules were removed by washing with ethanol (3 times). The resultant powder 33 was dried at 50 ºC for 1 hour. Finally, the powder was ground and dried at 100 ºC for 2 hours. In the same vein, Siyanbola et al 8,33 also modified ZnO using 3-aminopropyl-trimethoxysilane.

Evaluations
Antimicrobial sensitivity
Over the years, researchers have formulated different types of coating systems from seed oil based feedstocks, which have shown reasonable antimicrobial inhibitive tendencies. Siyanbola et al 33 compared the antimicrobial sensitivity of a pristine polyurethane coating of a partial glyceride polyol with those that had their polymeric matrix doped with ZnO and modified ZnO (APTMS-ZnO) in varying percentages. The coating films were tested on gram-positive (Staphylococcus aureus and Bacillus subtilis) and gram-negative (Escherichia coli and Klebsiella pneumoniae) bacterial strains and a fungal strain (Aspergillus niger) grown on Czapek-Dox medium. The report shows inhibition zones beneath and in the surroundings of the coating films. The polyurethane composite with 2% APTMS-ZnO was found to be highly inhibitive towards the growth of Staphylococcus aureus and Escherichia coli. Bacillus subtilis growth was not impaired by the placing of the coating in the medium (as shown in Table 1). The inhibitive activities of the hybrid coatings were perceived to be chiefly a result of the ZnO nanoparticles present in the composite films, as noted by Chao et al 34 . Similarly, Siyanbola et al 8 carried out antibacterial tests on the same base seed oil plant (Thevetia peruviana), but on its fatty amide polyol [N,N'-bis (2-hydroxyethyl) Thevetia peruviana fatty amide] urethane pristine coatings and composites. The synthesized composites in this case contained higher percentages of the hybrid materials, which was also reflected in the degree of the zone of inhibition. Furthermore, the antibacterial test carried out by Li et al 31 on polyurethane resin composites vividly shows the effect of increasing the ZnO nanoparticle percentage in the synthesized composites. As the percentage weight of ZnO increases in the coating system, the zone of inhibition increases, especially for E. coli.
Thermal stability
Differential thermogravimetric analysis (DTG) reveals that synthesized coating composites with ZnO and its modified forms usually have two degradation steps 8,35 . On the contrary, the systems developed by Siyanbola et al 33 showed three degradation steps (shown in Figure 2). This observation may be due to the surface morphology achieved during the coating formulation (Figure 3), which may also relate to the material percentage composition within the polymer matrix. The thermal stability of ZnO and its hybrid coating composites generally increases as the percentage composition of the materials increases (Figure 4) 33,35 . Siyanbola et al 33 reported a 278.85 °C onset degradation step for PGU-2% APTMS-ZnO, while PGU-1% APTMS-ZnO showed an onset 22.70 °C lower than that of PGU-2% APTMS-ZnO. The storage modulus of these coating films also reflects the influence of the hybrid composites on the pristine polyurethane. The modulus profile results from strong network structures formed between the urea groups and the surface hydroxyl groups of the polyurethane composites through hydrogen bonding. Mishra et al 35 also synthesized an aqueous polyurethane hybrid dispersion using nano-ZnO as filler. Their research report corroborates the findings of higher thermal stability for hybrid films than for the mother (pristine) polymer.

CONCLUSION
The peripheral hydroxyl functional groups surrounding ZnO confer on this oxide robust modification pathways. These pathway developments have led to the formulation of different coating composites (when the hybrid material is dispersed within pristine polymeric coatings). These composites are capable of impeding microbial growth and withstanding thermal interference.
2,807
2017-02-18T00:00:00.000
[ "Materials Science" ]
Multilanguage Semantic Interoperability in Distributed Applications JOSI is a software framework that tries to simplify the development of such kinds of applications, both by providing the possibility of working on models for representing such semantic information and by offering some implementations of such models that can be easily used by software developers without any knowledge of semantic models and languages. This software library allows the representation of domain models through Java interfaces and annotations and then the use of such a representation for automatically generating an implementation of the domain models in different programming languages (currently Java and C++). Moreover, JOSI supports interoperability with other applications both by automatically mapping the domain model representations into ontologies and by providing an automatic translation of each object obtained from the domain model representations into an OWL string representation.

Introduction
Semantic information is assuming more and more importance both for the development of knowledge-based applications and for supporting the interoperability among different applications [1][2][3][4]. In particular, ontologies have been gaining interest for the representation of application domain models, and their use has been spreading in different application fields [5][6][7][8].

Domain models are increasingly specified as formal ontologies through the use of a semantic Web language (e.g., OWL [9,10]), but such models remain difficult to utilize in applications developed with the usual software languages and libraries. In fact, the mapping of such models into the code of a typical application development language often is not possible because of the different expressive power of the modeling and the implementation language. Moreover, when it is possible, the obtained implementation is too complex to be used by the large part of software developers.

However, the development of domain models that represent semantic information is very difficult without the use of a semantic language. To cope with this problem, a possible direction is to integrate usual programming techniques with some meta-programming techniques. In particular, the Java programming language supports meta-programming through annotations and reflection [11]. In fact, while annotations allow the decoration of the Java code with new concepts and idioms, reflection allows the retrieval of the information associated with annotations and then its use for either modifying the usual execution of the Java code or building new Java code.

In this paper, we present a software framework, called JOSI (Java and OWL for System Interoperability), whose goal is to simplify the development of the software libraries for managing the data that implement the domain models shared by the systems of a distributed enterprise application. The next section introduces related work on the use of annotations for the development of software and on the mapping between OWL ontologies and Java code. Section 3 describes the JOSI software framework. Section 4 describes how a domain model is represented. Section 5 presents how an implementation of a domain model is built starting from its JOSI representation. Section 6 introduces how domain model implementations are used in a software application. Sections 7 and 8 present and discuss the experimentation of the JOSI software framework. Finally, Section 9 concludes the paper, sketching some future research directions.
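As a minimal illustration of the annotation-plus-reflection mechanism mentioned above (a sketch with hypothetical names; this is not JOSI code), a custom annotation with runtime retention can be read back from an interface method and used to drive code generation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Hypothetical annotation that decorates a method with the name of the
// attribute it exposes.
@Retention(RetentionPolicy.RUNTIME)
@interface Attribute {
    String value();
}

interface Person {
    @Attribute("name")
    String getName();
}

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Reflection retrieves the metadata attached to the interface method;
        // a code generator could use it to emit an implementing class.
        Method method = Person.class.getMethod("getName");
        Attribute attribute = method.getAnnotation(Attribute.class);
        System.out.println("getName() reads attribute: " + attribute.value());
    }
}
```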
Related Work e idea of using Java annotations for extending the Java language is not new and several research teams worked in that direction. AspectJ [12] is probably the �rst important work that shows how Java annotation can provide a meta-programming layer on the top of Java programming structures.In particular, AspectJ is an aspect-oriented extension of the Java programming language that uses Java annotations for realizing declaring aspects, point-cuts, and advices. Andreae et al. [13] proposed a soware framework that supports pluggable type systems in the Java programming language by the de�nition of custom constraints on Java types through Java annotations. AVal [14] is a soware framework for the de�nition and checking of rules for programs written by using an attribute domain-speci�c built on the top of Java.is soware framework allows the validation of such kinds of program through a set of prede�ned Java annotations.Moreover, it allows to the users of the framework to add new annotations to provide new kinds of validation. Bordin and Vardanega [15] used Java annotations to embed in the source code a declarative speci�cation of the required concurrent semantics and then for producing the source code that implements the declared concurrent semantics. Cimadamore and Viroli [16] proposed a soware framework that tried to simplify the seamless integration of Prolog code into Java applications taking advantage of Java annotations to incorporate the declarative features of Prolog into Java programs. A lot of work has been done also towards the mapping of OWL ontologies into Java code and vice versa. e �rst important work that shows the partial translation of OWL ontologies in Java code is the Protégé Bean Generator [17].In particular, it transforms Protégé frame-based ontologies into Java source code for developing JADE agents [18,19]. RDFReactor [20] is a toolkit for dynamically accessing an RDF model through domain-centric methods (getters and setters).In particular, it allows the access to the RDF model through a set of proxy objects that provide the methods for querying and updating the RDF elements. A more sophisticated approach was presented by Kalyanpur et al. [21].is approach deals with issues as multipleinheritance by mapping OWL classes in Java interfaces.However, there is not a soware tool which takes advantage of this approach for mapping OWL ontologies into Java code. SeRiDA [22] is a methodology for enabling a three-tier mapping along ontologies, object-oriented java beans and relational database.In particular, it allows the generation of both an object-oriented and a relational model starting from a domain conceptualization expressed in OWL.is methodology has been experimented by realizing a soware tool that generates programming interfaces as enterprise Java beans and Hibernate object-relational mappings from OWL ontologies. Quasthoff and Meinel [23] presented a mechanism that allows application developers, with limited knowledge about RDF and OWL, to easily map arbitrary Java classes and interfaces to corresponding OWL concepts by using Java annotations.In particular, this mechanism has already been experimented in the development of a social network application testing new access control mechanisms on usergenerated content with the help of Semantic Web rules [19]. 
OWLET [24] is a Java software environment based on an object-oriented model, which allows a simple and complete representation of ontologies defined using the OWL DL profile, and provides a complete set of reasoning functions together with a graphical editor for the creation and modification of ontologies. OWLET supports the development of heterogeneous and distributed semantic systems where nodes differ in their capabilities (i.e., CPU power, memory size, etc.). In fact, it offers a layered reasoning API that allows the deployment of a system where high-power nodes take advantage of all the OWLET reasoning capabilities, medium-power nodes take advantage of a limited set of OWLET reasoning capabilities (e.g., reasoning about individuals), and low-power nodes delegate reasoning tasks to the other nodes of the system.

Finally, the OWL API can be considered the reference Java API for managing ontologies [25,26]. In fact, besides providing the manipulation of ontologies, it offers a general-purpose reasoner interface, validators for the various OWL profiles, and support for parsing and serializing ontologies in a variety of syntaxes. The API also has a very flexible design that allows third parties to provide alternative implementations for all major components.

Different works cope with the problem of defining models for integrating different data sources in enterprise information systems.

Astrova and Kalja [27] proposed an approach for system interoperability that maps relational database schemas into OWL ontologies and allows an improvement of database schemas by identifying "hidden" (implicit) semantic relationships and bad design solutions.

Lin and Harding [28] proposed a general manufacturing system engineering knowledge representation scheme to facilitate communication and information exchange in inter-enterprise, multidisciplinary engineering design teams. It has been developed and encoded in the standard Semantic Web language. The proposed approach focuses on how to support information autonomy, which allows the individual team members to keep their own preferred languages or information models rather than requiring them all to adopt standardized terminology.

Salguero et al. [29] proposed a framework which encompasses the entire data integration process. The data source schemas as well as the integrated schema are expressed using an OWL extension which allows the incorporation of metadata to support the integration process.

Software Framework Overview
JOSI (Java and OWL for System Interoperability) is a software framework that tries to simplify the development of the software libraries for managing the data that implement the domain models shared by the systems of a distributed enterprise application.

The main features of this software library are as follows: (i) a strict separation between the representation of a domain model and its implementation; (ii) the automatic generation of an implementation of the representation of the domain model in different programming languages; (iii) the automatic generation of an OWL ontology from the representation of a domain model and vice versa; and (iv) the possibility of using an OWL string representation of the domain model data to support the interoperability between systems implemented in different programming languages, and so the possibility of translating domain model data to OWL string representations and vice versa.
JOSI is implemented in Java and takes advantage of Java interfaces and annotations to build a representation of a domain model, and it uses Java reflection to drive the processing of the information maintained by such interfaces and annotations for generating the source code of the classes that define the concrete implementation of the domain model.

The following sections will describe how a domain model is represented through Java interfaces and annotations, how the Java classes providing a concrete implementation of such a domain model are generated from such interfaces and annotations, and how such a software framework enables an application to use a concrete implementation of a domain model.

Table 1: Java annotations used in the representation of a domain model. @Abstract @Getter @Name @Symmetric @AllValuesFrom @HasValue @Ordered @SomeValuesFrom @Binding @Immutable @Set @Transitive @Cardinality @InverseOf @Setter @Version

Domain Model Representation
A domain model is represented by a set of Java interfaces. Each domain entity is represented by a Java interface (from here called entity interface) that defines the methods for reading and modifying its attributes. Moreover, an additional Java interface (from here called factory interface) provides both some general information about the domain model and the factory methods for the creation of the Java classes which implement the different entity interfaces. Figures 1 and 2 show some entities of two domain models represented through the use of Java interfaces and annotations (Figure 2 presents two entities of a domain model describing the life-cycle of a software agent). Table 1 lists the Java annotations used in the representation of a domain model.

To support the creation of the implementation of such entities, each Java interface is enriched with some Java annotations and constant declarations.

The two annotations @Getter and @Setter are applicable to the entity interface methods and define the reading and modifying methods of a specific attribute. The type of the attribute is identified by both the return type of the reading method and the type of the argument of the modifying method (of course, they need to identify the same type). In particular, the value of any attribute must be a Java primitive value, an instance of the String class, an instance of a class implementing an entity interface, or an array of the previous kinds of value.

The four annotations @Abstract, @Immutable, @OneOf, and @Singleton are applicable to the entity interfaces. The first annotation identifies an abstract entity, that is, an entity that does not have any direct implementation. The second annotation identifies an entity that has an immutable implementation, that is, the interface cannot define methods that modify the value of its attributes, and the implementation of its reading methods will be defined to return either the value of an attribute (if it is an immutable value) or a copy of the value (if it is a mutable value). The third annotation is used for identifying entities that have an extensional description (e.g., that can be defined through an enumeration). Finally, the fourth annotation is used for the definition of some special entities that can be represented by a single class object.
Oen the use of an implementation of a domain model inside an application needs the availability of operations for the comparison and ordering of their entities.In a Java implementation, such operations can be performed by implementing the compareTo, equals, and hashCode methods.e annotation @Comparator is introduced for this scope.In fact, it identi�es the sequence of attributes on which the previous three methods must work. In a domain model oen is necessary both to restrict the value that some attributes can assume and to establish a relationship between the attributes of some entities.It is done by associating some additional annotations to the reading methods of the entity interfaces. e four annotations: @AllValueFrom, @SomeValues-From, @Cardinality, and @HasValue, de�ne the most known constraints that OWL applies to the properties of an ontology.In particular, the �rst annotation constrains the values of an attribute to belong to speci�c type (of course, an implicit constraint of such a kind, is de�ned when the reading and modifying methods of an attributed are de�ned.�owever, an additional constraint can be added by imposing that the values of an attribute must belong to a subtype of the declared attribute type).e second annotation imposes that some of the values of an attribute must belong to a speci�c type (of course, such a type must be a subtype of the declared attribute type).e third annotation imposes that an attribute can have either a �xed number of values or a variable number of values de�ned by a minimum and�or a maximum value.Finally, the forth annotation imposes that an attribute must always contain some values (in this case, for the limited set of value types that can be associated with the attributes of an annotation, the values of such constraints are de�ned through constant variables and the annotations refer to the names of such constant variables). In some cases it can be necessary to impose that an attribute does not have duplicated values and that its values are maintained ordered: the two annotations: @Set, and @Ordered, impose the previous two constraints (in particular, the second constraint is implemented either by using the natural ordering between values or the ordering de�ned by the compareTo method built through the @Comparator annotation introduced above). e three annotations: @InverseOf, @Symmetric, and @Transitive, de�ne the most known constraints that ��L applies to the relationship between properties of an ontology.e �rst annotation de�nes an inverse relationship between attributes.e second annotation de�nes a symmetric relationship between the entities that have such kind of attribute.Finally, the third annotation de�nes a transitive relationship between the entities that have such kind of attribute. Finally, the two annotations: @Name and @Version, are applicable to the factory interfaces: the �rst annotation indicates the name associated with the domain model and the second annotation identi�es the version of the model.Lastly, the annotation @Binding is associated with a factory method of a model interface.is annotation identi�es the attribute that each argument of the factory method will initialize. 
Domain Model Implementation
A domain model representation, defined as described in the previous sections, contains all the information needed for building an implementation of such a domain model. This implementation is realized by an annotation processor that builds a Java class for each Java interface of the model. Figures 3 and 4 show the source code of the Java classes obtained from the naming domain model introduced in the previous section. The result of such an annotation processor is a set of Java files. Each Java file contains the source code of a class that implements an interface of the domain model representation. Moreover, each class that implements an entity interface provides a method for building an OWL string representation of an entity class instance, and each class that implements a model interface provides a method for building an entity class instance from its OWL string representation.

The annotation processor used for generating the domain model implementation is composed of two software modules. The first module, called the processing module, extracts the information from the domain model representation, generates an intermediate representation, and then calls the second module. The second module, called the generation module, builds the domain model implementation from the intermediate representation.

The intermediate representation is based on a two-level tree where the root object maintains the information about the model interface and each leaf object maintains the information about an entity interface.

The processing module is independent of the implementation of the generation module, because it calls a generation module through a Java interface and the generation module implementation is a parameter of the processing module constructor. Therefore, it is very easy to provide different implementations of some domain model representations by defining new generation modules able to process the intermediate representation built by the processing module in different ways. In particular, the current version of the software framework provides another generation module which builds OWL ontologies from the domain model representations and stores them in RDF format [30].

Domain Model Application
After the creation of an implementation of a domain model, its use inside an application is very simple. In fact, the JOSI software framework provides a class, called DataStore, which has the duty both of maintaining the information about the different domain models available to the current application and of providing access to their implementations through the creation of an instance of the class that implements their domain interface. In particular, the DataStore instance can access the list of the domain models used by the application through a property file. Therefore, after the creation of an instance of the DataStore class, the code of the application can create instances of any class implementing the factory interface of a domain model and then use it for creating instances implementing any entity interface of such a domain model. Figure 5 shows a sample of Java code performing the operations described above.
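Since Figure 5 is not reproduced here, the following is a hedged sketch of the usage pattern just described. DataStore is named in the paper, but the method names (getFactory, toOWL) and the AgentModel/Agent types carry over from the earlier hypothetical example and are assumptions, not JOSI's actual API:

```java
public class DomainModelDemo {
    public static void main(String[] args) {
        // The DataStore reads the list of available domain models from a
        // property file and exposes access to their implementations.
        DataStore store = new DataStore();

        // Create an instance of the class implementing the factory interface
        // of a domain model (method name assumed)...
        AgentModel model = store.getFactory(AgentModel.class);

        // ...and use it to create instances implementing an entity interface.
        Agent agent = model.createAgent("agent-007");

        // Each entity class also provides a method for building an OWL string
        // representation of the instance (method name assumed).
        System.out.println(agent.toOWL());
    }
}
```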
Experimentation
We are using the JOSI software framework for the development of the models, and then the implementations, of the data necessary for supporting the basic interactions among the components of a distributed system realized with the HDS software framework. Moreover, JOSI was experimented with for defining the domain models of some applications in the fields of distributed information sharing and social networks.

HDS (Heterogeneous Distributed System) is a software framework that tries to simplify the realization of pervasive applications by merging the client-server and the peer-to-peer paradigms and by implementing all the interactions among the processes of a system through the exchange of typed messages and the use of composition filters for driving and dynamically adapting the behavior of the system [31].

Typed messages are one of the elements that mainly characterize such a software framework. In fact, typed messages can be considered an object-oriented "implementation" of the types of message defined by an agent communication language, and so they are a means that makes HDS a suitable software framework both for the realization of multi-agent systems and for the reuse of multi-agent models and techniques in non-agent based systems.

In particular, the type of a message is defined by its content, and its content is defined by an entity of a specific domain model defined with the JOSI software framework. Therefore, we used JOSI for the definition of the domain models that support the basic interactions among HDS processes, that is, the management of the processes themselves and of the resources that they can use in a distributed application. Moreover, we used JOSI for defining the domain models used for realizing the typical coordination algorithms of intelligent distributed systems.

RAIS (Remote Assistant for Information Sharing) is a peer-to-peer multi-agent system supporting the sharing of information among a community of users connected through the Internet [32]. RAIS offers a search facility similar to Web search engines, but it avoids the burden of publishing the information on the Web and it guarantees controlled and dynamic access to information through the use of agents. The use of agents in such a system is very important because it simplifies the realization of the three main services: (i) the filtering of the information coming from different users on the basis of the previous experience of the local user; (ii) the pushing of new information of possible interest to a user; and (iii) the delegation of access capabilities on the basis of a network of reputation built by the agents on the community of users.

RAIS is composed of a dynamic set of agent platforms connected through the Internet. In this case, JOSI has been used for the definition of the domain models supporting the definition of the interactions of agents for the retrieval and pushing of information and for the management of the user profiles.

Regarding applications in the field of social networks, we are starting the development of a system for the study of the most known social networks and, in particular, of the social networks that provide semantic support for the management of both the profiles and the information published by the users [33].
In particular, we built a system that can simulate the behavior of some of the most known social networks and can compare them with some enhanced versions of such networks that provide semantic support through the use of JOSI domain models. In particular, we defined some domain models for representing the user profiles of different social networks and some domain models for supporting users in the publishing and retrieval of information related to some sample topics (e.g., computer science and music).

Experimental Results
The results of the experimentation of the software framework showed that the definition of a domain model can be done by any programmer with knowledge of the Java programming language, without requiring any knowledge of knowledge engineering or semantic Web techniques and technologies. Moreover, if the entities of a domain model are defined as immutable objects, then the performance of managing such entities is similar to that of managing JavaBean objects.

Other important results come from some tests that compared the work of groups of students who developed domain models using JOSI with the work of other groups of students who developed domain models without using it. In fact, while the first set of groups developed the domain model in little time, spending a very limited part of it on code correction, the second set of groups developed the domain model in a very long time, spending the largest part of it on code correction. Moreover, the performance measures of the tests showed that the implementations of the domain model based on the JOSI framework provided measures better than, or at least similar to, the ones provided by the "custom" implementations. Of course, while the use of JOSI guaranteed implementations in different programming languages (currently Java and C++) without additional costs, this was not true for "custom" implementations.

Conclusion
This paper presented a software framework, called JOSI (Java and OWL for System Interoperability), that has the goal of simplifying the development of the software libraries for managing the data that implement the domain models shared by the systems of a distributed enterprise application.

This software framework allows the representation of a domain model through Java interfaces and annotations and then the use of such a representation for automatically generating a Java implementation of the domain model. Moreover, it provides interoperability with other kinds of systems both by automatically mapping the Java domain representation into an OWL ontology and by providing an automatic translation of each object defined by the domain model representation into an OWL string representation.

JOSI derived from O3L (Object-Oriented Ontology Library), a software library that provides a complete representation of ontologies compliant with OWL 2 W3C [34]. O3L does not have the goal of being used for the creation and manipulation of ontologies, but provides a simplified and efficient API for the realization of applications that interoperate through the use of shared ontologies, and allows: (i) the use of OWL individuals as data of the applications; (ii) the exchange of OWL individuals between applications; (iii) reasoning about OWL individuals; and (iv) the classification of OWL classes and properties. The experimentation of O3L showed that it is a powerful means for developing applications, but with two main limits: developers must have a good knowledge of semantic techniques and technologies, and often applications cannot provide the required performance.
Current and future research activities are dedicated, besides continuing the experimentation of the current implementation of JOSI, to: (i) the development of a software generation module that allows the automatic generation of C++ and Python implementations from a JOSI model representation; (ii) the generation of a JOSI model representation from an OWL ontology compliant with the JOSI domain model representation; (iii) the generation of OWL ontologies compliant with such a representation from OWL ontologies that contain classes and properties that cannot be defined through the annotations defined in the JOSI software framework; and (iv) the introduction of new annotations for increasing the expressive power of the JOSI model representation.

Figure 3: Java implementation of the entities of the naming domain model. Figure 4: Java implementation of the model of the naming domain model. Figure 5: Java code for creating instances of the entities of two domain models.
5,857.6
2013-01-27T00:00:00.000
[ "Computer Science" ]
Improving robustness against electrode shift of high density EMG for myoelectric control through common spatial patterns

Background
Most prosthetic myoelectric control studies have concentrated on low density (fewer than 16 electrodes, LD) electromyography (EMG) signals, due to their better clinical applicability and low computational complexity compared with high density (more than 16 electrodes, HD) EMG signals. Since HD EMG electrodes have recently become more convenient to wear than previous versions, HD EMG signals have become an alternative for myoelectric prostheses. Electrode shift, which may occur during repositioning or donning/doffing of the prosthetic socket, is one of the main reasons for degradation in classification accuracy (CA).

Methods
HD EMG signals acquired from the forearm of the subjects were used for pattern recognition-based myoelectric control in this study. Multiclass common spatial patterns (CSP) with two types of schemes, namely one versus one (CSP-OvO) and one versus rest (CSP-OvR), were used for feature extraction to improve the robustness against electrode shift for myoelectric control. Shifts transversal (ST1 and ST2) and longitudinal (SL1 and SL2) to the direction of the muscle fibers were taken into consideration. We tested nine intact-limb subjects for eleven hand and wrist motions. The CSP features (CSP-OvO and CSP-OvR) were compared with three commonly used features, namely time-domain (TD) features, time-domain autoregressive (TDAR) features and variogram (Variog) features.

Results
Compared with the TD features, the CSP features significantly improved the CA by over 10 % in all shift configurations (ST1, ST2, SL1 and SL2). Compared with the TDAR features, (a) the CSP-OvO feature significantly improved the average CA by over 5 % in all shift configurations; (b) the CSP-OvR feature significantly improved the average CA in shift configurations ST1, SL1 and SL2. Compared with the Variog features, the CSP features significantly improved the average CA in the longitudinal shift configurations (SL1 and SL2).

Conclusion
The results demonstrated that the CSP features significantly improved the robustness against electrode shift for myoelectric control with respect to the commonly used features.

Introduction
Surface electromyography (EMG) signals, which contain neural information [1], have long been used as control inputs of myoelectric prostheses [2][3][4]. With most conventional, commercially available myoelectric prostheses, a control scheme based on using the amplitude or power of the EMG signals to control one degree of freedom (DOF) has been employed for several decades. To improve the functionality and provide more intuitive control of myoelectric prostheses, pattern recognition methods have been employed to classify EMG signals towards multifunctional prosthesis control for more than 20 years [5][6][7][8][9]. The pattern recognition-based control scheme is based on the assumption that amputees can activate consistent (same motion) and distinctive (different motions) EMG patterns using residual stump muscles [10]. In general, there are two types of surface EMG, low density (fewer than 16 electrodes, LD) EMG and high density (more than 16 electrodes, HD) EMG, classified by the number of electrodes. Electrode shift is a recognized problem in both LD and HD EMG applications. It may occur during repositioning or donning/doffing of the prosthetic socket. It is one of the main reasons for degradation in classification accuracy (CA) [11].
For LD EMG, several researchers have proposed methods to reduce the CA degradation caused by electrode shift. Hargrove et al. proposed training the classifier with EMG signals from all expected displacement locations [12]. However, this strategy requires lengthy training sessions, which can be frustrating for the user and lead to frequent device abandonment [13]. Young et al. demonstrated that larger electrodes reduced the sensitivity to shift but performed worse than smaller electrodes when no shift was present [11]. They suggested that electrodes oriented longitudinally with respect to the muscle fibers performed better than those oriented transversally. They also showed that time-domain autoregressive (TDAR) features achieved the best classification performance and were least affected by electrode displacements. They further demonstrated that a greater interelectrode distance improved classification performance, and that a combination of longitudinal and transversal electrode configurations also improved performance in the presence of electrode shift [7]. Recently, HD EMG signals have become an alternative for myoelectric prostheses [14][15][16][17][18][19]. Huang et al. showed that a double differential spatial filter applied to HD EMG signals could improve myoelectric control performance for targeted muscle reinnervation (TMR) patients [14]. However, electrode shift is also common and serious in HD EMG applications. Stango et al. used the variogram (Variog) of HD EMG signals to provide features robust to electrode number and shift for myoelectric control [18]. The Variog is a statistical measure of spatial correlation widely used as a spatial-domain feature for classification in geostatistics [20,21]. It is also called the semivariogram, since it is a graph of the semivariance against distance. To address the electrode shift problem of HD EMG, common spatial patterns (CSP), a method widely used in electroencephalogram (EEG) studies, drew our attention [22,23]. EEG typically uses many electrodes (64 ∼ 128), a situation similar to HD EMG. Therefore, we expected that the capabilities CSP has shown for EEG could also transfer to HD EMG. Indeed, Hahne et al. demonstrated that CSP features showed a higher robustness against noise than time-domain (TD) features for myoelectric control [17]. However, they did not investigate the performance of CSP features in the presence of electrode shift. Huang et al. also used an improved CSP for EMG classification, but they targeted LD EMG and did not consider the problem of electrode shift [24]. In this study, we investigate whether CSP applied to HD EMG signals can improve myoelectric control performance under electrode shift for eleven classes of hand and wrist motions. We test nine able-bodied subjects. The performance of the CSP features is compared with the commonly used TD, TDAR and Variog features. Linear discriminant analysis (LDA) classifiers are used to process the EMG data. Subjects Nine able-bodied subjects (eight males and one female; aged 22-27; referenced as Sub1-Sub9) participated in the experiment. The subjects had no neurological disorders. This work was approved by the Ethics Committee of Shanghai Jiao Tong University. All subjects participating in the experiment signed informed consent, and the procedures complied with the Declaration of Helsinki.
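To make the variogram idea above concrete, the following is a minimal sketch (not the implementation of Stango et al.) of an empirical semivariogram computed over the channels of one EMG analysis window. It assumes NumPy, a hypothetical electrode-coordinate array, and a simple distance tolerance for grouping channel pairs into lags; the exact lag binning used in [18] is not specified here and is an assumption.

```python
import numpy as np

def semivariogram(window, positions, lags, tol=0.5):
    """Empirical semivariogram of one EMG window (illustrative sketch).

    window    : array (channels, samples) of EMG values
    positions : array (channels, 2) of electrode coordinates in mm (assumed layout)
    lags      : 1-D array of inter-electrode distances (mm) at which to evaluate
    Returns one semivariance value per lag, usable as a spatial feature vector.
    """
    # pairwise Euclidean distances between electrodes
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    gamma = []
    for h in lags:
        i, j = np.where(np.abs(d - h) < tol)   # channel pairs at (approximately) lag h
        keep = i < j                            # count each unordered pair once
        i, j = i[keep], j[keep]
        if len(i) == 0:
            gamma.append(np.nan)                # no pairs at this lag
            continue
        # semivariance: half the mean squared difference between paired channels,
        # averaged over time samples and over all pairs at this lag
        diffs = window[i] - window[j]
        gamma.append(0.5 * np.mean(np.mean(diffs ** 2, axis=1)))
    return np.asarray(gamma)
```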
Experiment setup Eleven classes of hand and wrist motions were performed by the subjects in order, i.e., hand close (HC), hand open (HO), key grip (KG), tip prehension (TP), wrist flexion (WF), wrist extension (WE), radial deviation (RD), ulnar deviation (UD), forearm supination (FS), forearm pronation (FP) and "no movement" (NM). In each trial, the subjects were asked to perform each motion for 10 s. Ten trials were performed by each subject. To avoid fatigue, the subjects had a 1-min rest between trials. Data acquisition Monopolar surface EMG signals were measured and collected using a grid of 192 electrodes (three semi-disposable adhesive matrices of 64 electrodes each, ELSCH064NM3) arranged in 8 rows and 24 columns, with 10 mm interelectrode distance (IED) (Fig. 1). The skin surface of the forearm was rubbed lightly with alcohol to reduce impedance. The grid was mounted around the circumference of the forearm (Fig. 1), starting from the ulnar bone. The grid was fixed to the skin by adhesive foam, and a reference electrode was mounted at the wrist. The matrices were connected to a multichannel surface EMG amplifier (EMG-USB2+, OT Bioelettronica, Torino, Italy), and the signals were amplified with a gain of 500, band-pass filtered (pass band 10-500 Hz), sampled at 2048 Hz, and A/D converted with 12-bit resolution. Common spatial patterns CSP is a supervised two-class method that designs linear spatial filters which simultaneously maximize the variance of one class and minimize the variance of the other class [22]. In this way, the classes can be maximally separated by their variances. CSP is widely used in motor imagery-based brain computer interfaces (BCI) for classification of EEG signals [23,25]. The raw EMG signals of class j and class k were represented as X_j and X_k with dimensions c × l, where c was the number of channels and l was the number of samples per channel (here l was 408). The objective was to find the spatial filter ω of y = ω^T X that maximized the variance of class j while minimizing the variance of class k. Thus, the optimization was formulated as

ω* = argmax_ω (ω^T Σ_j ω) / (ω^T Σ_k ω),    (1)

where Σ_j = 1/(n − 1) · X_j X_j^T and Σ_k = 1/(n − 1) · X_k X_k^T were the covariance matrices of class j and class k, respectively. This was realized by finding the matrix W that simultaneously diagonalized both Σ_j and Σ_k:

W Σ_j W^T = D_j,    (2)
W Σ_k W^T = D_k,    (3)
D_j + D_k = I.    (4)

The row vectors of W were c spatial filters. Applying the full filter matrix W to the raw EMG signals gave c output signals Y = W X, which were called components. The variance of each component for class j was indicated by the corresponding eigenvalue of D_j, and for class k by that of D_k. With the constraint (4), the eigenvector corresponding to the largest eigenvalue of D_j had the smallest eigenvalue of D_k, and the eigenvector corresponding to the largest eigenvalue of D_k had the smallest eigenvalue of D_j. These two eigenvectors were chosen as the spatial filters in this study. Multiclass CSP Since there were eleven motion classes in this study, we extended the two-class CSP to multiclass CSP using one versus one (CSP-OvO) and one versus rest (CSP-OvR) schemes [17]. In the CSP-OvO scheme, the two-class CSP was designed for all possible class combinations, and the filters were chosen in the same way as in the two-class CSP. Thus, there were M = N * (N − 1)/2 combinations for N classes. The features of all selected components were concatenated into one feature vector.
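Before turning to the one-versus-rest scheme, the two-class CSP computation described above can be sketched as follows. This is an illustrative implementation, not the authors' code; it assumes NumPy and SciPy and that the class covariances are estimated by averaging normalized trial covariances. The simultaneous diagonalization with D_j + D_k = I is solved as the generalized eigenvalue problem Σ_j w = λ (Σ_j + Σ_k) w.

```python
import numpy as np
from scipy.linalg import eigh

def csp_two_class(X_j, X_k):
    """Two-class CSP filters from raw EMG windows (sketch).

    X_j, X_k : arrays (trials, channels, samples) for class j and class k.
    Returns two spatial filters (channels, 2): one maximizing the variance
    of class j, the other maximizing the variance of class k.
    """
    def class_cov(X):
        # average spatial covariance over trials, mean-removed per channel
        C = np.zeros((X.shape[1], X.shape[1]))
        for trial in X:
            trial = trial - trial.mean(axis=1, keepdims=True)
            C += trial @ trial.T / (trial.shape[1] - 1)
        return C / len(X)

    S_j, S_k = class_cov(X_j), class_cov(X_k)
    # generalized eigenproblem S_j w = lambda (S_j + S_k) w is equivalent to
    # simultaneous diagonalization under the constraint D_j + D_k = I
    eigvals, eigvecs = eigh(S_j, S_j + S_k)   # eigenvalues in ascending order
    # first column: largest variance for class k (smallest for class j);
    # last column:  largest variance for class j (smallest for class k)
    return eigvecs[:, [0, -1]]
```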
In the CSP-OvR scheme, each filter was designed to maximize the variance of one class and minimize the average of the variances of all other classes. The filters were chosen in the same way as in the two-class CSP. This process was repeated for all classes, so there were N combinations for N classes. The features of all selected components were concatenated into one feature vector. Feature extraction The logarithms of the variances of the selected CSP components were calculated as features in the CSP-OvO and CSP-OvR schemes. The length of the analysis window was set to 200 ms and the increment between two adjacent windows to 50 ms. The length and increment were chosen to keep the response time of the system below 300 ms, reducing the users' perceived lag [5]. A feature set was computed on each CSP component and then concatenated to form a feature vector. To compare the proposed feature extraction method with the state of the art, TD features, TDAR features and Variog features, which have been reported to be effective and relatively robust to electrode shift [2,5,11,18,26,27], were used in this study. These features were extracted using the same window length and increment as specified above. Classification As a simple and efficient classifier, the LDA classifier has been widely used for pattern recognition of EMG signals [7,28]. Previous studies have shown that the LDA classifier performs comparably to more sophisticated classifiers [29] and generalizes better than the nonlinear multilayer perceptron classifier in the presence of electrode shift [11]. Hence, the LDA classifier was employed to classify the CSP features (CSP-OvO and CSP-OvR) and the two classic feature sets (TD and TDAR) in this study. Since the Variog features performed better with a support vector machine (SVM) classifier than with an LDA classifier [18], the SVM classifier was employed for the Variog features in this study [30]. A five-fold cross-validation procedure was used: four fifths of the data were randomly selected as the training set, while the remaining fifth was used as the testing set. Electrode shift Shifts transversal and longitudinal to the direction of the muscle fibers were taken into consideration. We expected shifts in the longitudinal or transversal direction to be the extreme situations, and the influence of an electrode shift occurring along both axes to lie between the influences of shifts in the longitudinal and transversal directions. Since a shift of 10 mm or less is considered most likely in clinical applications [11], the shift distance was set to 10 mm to simulate the worst shift situation in the current study. To simulate the shift transversal to the direction of the muscle fibers, half of the columns were used for training and the remaining half for testing, which corresponded to a 10-mm shift for a configuration of 96 electrodes. Figure 2 shows the shift in the transversal direction of the muscle fibers. Shift leftwards (ST1): the white electrodes were used for training, while the red electrodes were used for testing. Shift rightwards (ST2): the red electrodes were used for training, while the white electrodes were used for testing. To provide a control for the transversal direction shift, the same color electrodes in Fig. 2 were used for both training and testing, referred to as ST.
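A compact sketch of the feature extraction and classification pipeline described above is given below: a 200-ms window slid with a 50-ms step, log-variance of the CSP components per window, and an LDA classifier evaluated with five-fold cross-validation. NumPy and scikit-learn are assumed; the feature-matrix builder in the usage comment is a hypothetical helper, and scikit-learn's default fold splitting stands in for the random four-fifths/one-fifth split described in the text.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 2048                     # sampling rate in Hz, as in the recordings
WIN = int(0.200 * FS)         # 200-ms analysis window
STEP = int(0.050 * FS)        # 50-ms increment between adjacent windows

def windowed_log_variance(emg, filters):
    """Slide a 200-ms window (50-ms step) over (channels, samples) EMG and
    return the log-variance of each spatially filtered component per window."""
    feats = []
    for start in range(0, emg.shape[1] - WIN + 1, STEP):
        Y = filters.T @ emg[:, start:start + WIN]
        feats.append(np.log(np.var(Y, axis=1)))
    return np.vstack(feats)           # shape: (windows, components)

# Hypothetical usage, assuming X (feature vectors) and y (motion labels)
# have been assembled from all trials and motions:
# X, y = build_feature_matrix(...)
# scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
# print("mean CA: %.1f %%" % (100 * scores.mean()))
```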
It should be noted that in this case the electrode distance in the transversal and longitudinal directions was 20 mm and 10 mm, respectively. A similar method was used to simulate the shift in the longitudinal direction of the muscle fibers (Fig. 3). To provide a control for the longitudinal direction shift, the same color electrodes in Fig. 3 were used for both training and testing, referred to as SL. In this case the electrode distance in the transversal and longitudinal directions was 10 mm and 20 mm, respectively. Quantification of feature space To investigate the variations in the EMG feature space before and after the electrode shift, the relative center shift (RCS) was defined in the current study. RCS was defined as the ratio between the mean Mahalanobis distance, across the N motions, between the same motion before and after the electrode shift, and the mean Mahalanobis distance between different motions after the electrode shift:

RCS = [ (1/N) Σ_i d(μ_i, μ_si) ] / [ mean over i ≠ j of d(μ_si, μ_sj) ],

where μ_i and μ_si are the centroids of the ellipsoid of motion i before and after the electrode shift, S_i and S_si are the covariances of the data for motion i before and after the electrode shift, and d(·,·) denotes the Mahalanobis distance computed with these covariances. The value of RCS is positively correlated with the relative center shift in the EMG feature space. As different feature sets have different feature-vector dimensionalities, prior to computing the RCS, Fisher linear discriminant (FLD) analysis [31] was adopted to reduce the dimension of the feature vectors to the same level of N − 1, where N is the number of motions (eleven here). Since the Variog features were classified by the SVM classifier rather than the LDA classifier, the FLD was not suitable for processing the Variog features; therefore, the RCS was not computed for the Variog features. Visualization of CSP patterns To understand the improvements of the CSP features, the corresponding patterns of the motions before and after the electrode shift were visualized for a representative subject (Sub3). CSP patterns are the columns of the inverse of the filter matrix W. The ith pattern represents the source-signal distribution over the sensors that produces activity in the ith CSP component. CSP patterns provide valuable information about the underlying electrophysiological processes and the related muscles. In contrast to EMG amplitude patterns, which only show muscle activation, the CSP patterns emphasize the locations that provide the most information for discriminating different motions. Figures 4 and 5 show the last CSP patterns of motion 1 and motion 4 for the CSP-OvO extension scheme in the transversal direction shift (ST1) and longitudinal direction shift (SL1), respectively. Figures 6 and 7 show the first CSP pattern of each active motion versus the rest of the motions for the CSP-OvR extension scheme in the transversal direction shift (ST1) and longitudinal direction shift (SL1), respectively. Statistical analysis A two-way repeated measures ANOVA was used to analyze CA. The ANOVA included the following two factors: Shift (ST1, ST2, SL1 and SL2) and Feature (CSP-OvO, CSP-OvR, TD, TDAR and Variog). Similarly, a two-way repeated measures ANOVA was used to analyze RCS, with the factors Shift (ST1, ST2, SL1 and SL2) and Feature (CSP-OvO, CSP-OvR, TD and TDAR). In all ANOVA tests, the full model was fitted first. When a significant interaction was detected, a simple-effects analysis was conducted by fixing the levels of one of the interacting factors.
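The RCS computation described above can be sketched as follows. This is an assumption-laden illustration rather than the authors' definition: the text does not specify how the per-motion covariances are pooled inside the Mahalanobis distance, so the average of the two class covariances is used here as one reasonable choice, and the input dictionaries of FLD-reduced feature vectors are hypothetical.

```python
import numpy as np

def mahalanobis(mu_a, mu_b, cov):
    diff = mu_a - mu_b
    return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))

def relative_center_shift(feats_before, feats_after):
    """Sketch of the RCS metric.

    feats_before / feats_after : dicts {motion: (windows, features) array}
    of (FLD-reduced) feature vectors before and after the electrode shift.
    """
    motions = list(feats_before)
    # numerator: mean distance between the same motion's centroids before/after shift
    same = []
    for m in motions:
        cov = 0.5 * (np.cov(feats_before[m].T) + np.cov(feats_after[m].T))  # assumed pooling
        same.append(mahalanobis(feats_before[m].mean(0), feats_after[m].mean(0), cov))
    # denominator: mean distance between centroids of different motions after the shift
    diff = []
    for a in motions:
        for b in motions:
            if a == b:
                continue
            cov = 0.5 * (np.cov(feats_after[a].T) + np.cov(feats_after[b].T))
            diff.append(mahalanobis(feats_after[a].mean(0), feats_after[b].mean(0), cov))
    return np.mean(same) / np.mean(diff)
```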
When no interaction was detected, a reduced ANOVA model with only the main factors was performed. Whenever significance was detected for the main factors, a Tukey comparison was performed. Only significant differences are reported. Figure 8 shows the average CA of all features (CSP-OvO, CSP-OvR, TD, TDAR and Variog) across all subjects for the half-grid configuration of 96 electrodes (ST and SL) and the different shift configurations (ST1, ST2, SL1 and SL2). The average CA of CSP-OvO and CSP-OvR was slightly higher than that of TD and comparable with that of TDAR for the half-grid configuration without electrode shift (ST and SL). The average CA of CSP-OvO, CSP-OvR, TD and TDAR was 8 % higher than that of Variog for the half-grid configuration without electrode shift (ST and SL). Since the average CA of all features without electrode shift was over 90 %, the half-grid configuration without electrode shift was sufficient to provide good myoelectric control performance for all features (CSP-OvO, CSP-OvR, TD, TDAR and Variog). However, the average CA for TD decreased to 67.2 % in ST1, 65.0 % in ST2, 81.9 % in SL1, and 85.4 % in SL2; the average CA for TDAR decreased to 72.1 % in ST1, 74.5 % in ST2, 87.9 % in SL1, and 89.9 % in SL2; and the average CA for Variog decreased to 78 % in ST1, 78 % in ST2, 82.8 % in SL1, and 84.4 % in SL2. The average CA for the CSP features (CSP-OvO and CSP-OvR) remained around 80 % or higher under electrode shift. The two-way ANOVA revealed a statistically significant interaction between Shift and Feature (p < 0.001). A simple-effects analysis was conducted to break down the ANOVA into subsequent one-way ANOVAs, looking separately at ST1, ST2, SL1 and SL2 for a main effect of Feature. Classification accuracy For the shift ST1, the one-way ANOVA revealed a main effect of Feature (p < 0.001). Tukey comparison showed that the CA of CSP-OvO was not significantly different from that of CSP-OvR (p = 0.997) and Variog (p = 0.869) but significantly higher than that of TD (p < 0.001) and TDAR (p = 0.002); the CA of CSP-OvR was not significantly different from that of Variog (p = 0.973) but significantly higher than that of TD (p < 0.001) and TDAR (p = 0.006); the CA of TD was not significantly different from that of TDAR (p = 0.138) but significantly lower than that of Variog (p < 0.001); and the CA of TDAR was significantly lower than that of Variog (p = 0.04). For the shift ST2, the one-way ANOVA revealed a main effect of Feature (p < 0.001). Tukey comparison showed that the CA of CSP-OvO was not significantly different from that of CSP-OvR (p = 0.531) and Variog (p = 0.777) but significantly higher than that of TD (p < 0.001) and TDAR (p = 0.019); the CA of CSP-OvR was not significantly different from that of TDAR (p = 0.536) and Variog (p = 0.995) but significantly higher than that of TD (p < 0.001); the CA of TD was significantly lower than that of TDAR (p < 0.001) and Variog (p < 0.001); and the CA of TDAR was not significantly different from that of Variog (p = 0.3). For the shift SL1, the one-way ANOVA revealed a main effect of Feature (p < 0.001).
Tukey comparison showed that the CA of CSP-OvO was not significantly different from that of CSP-OvR (p = 0.995) but significantly higher than that of TD (p < 0.001), TDAR (p < 0.001) and Variog (p < 0.001); the CA of CSP-OvR was significantly higher than that of TD (p < 0.001), TDAR (p < 0.001) and Variog (p < 0.001); the CA of TD was not significantly different from that of Variog (p = 0.944) but significantly lower than that of TDAR (p < 0.001); and the CA of TDAR was significantly higher than that of Variog (p < 0.001). For the shift SL2, the one-way ANOVA revealed a main effect of Feature (p < 0.001). Tukey comparison showed that the CA of CSP-OvO was not significantly different from that of CSP-OvR (p = 0.783) but significantly higher than that of TD (p < 0.001), TDAR (p < 0.001) and Variog (p < 0.001); the CA of CSP-OvR was significantly higher than that of TD (p < 0.001), TDAR (p = 0.002) and Variog (p < 0.001); the CA of TD was not significantly different from that of Variog (p = 0.933) but significantly lower than that of TDAR (p = 0.006); and the CA of TDAR was significantly higher than that of Variog (p < 0.001). Figures 9 and 10 show the average confusion matrices of the five features (CSP-OvO, CSP-OvR, TD, TDAR and Variog) across all subjects in ST1 and ST2, respectively. The improvements of the CSP features came mainly from NM, WF and UD in ST1, and from NM, WF and WE in ST2. Figures 11 and 12 show the average confusion matrices of the five features across all subjects in SL1 and SL2, respectively. The improvements of the CSP features came mainly from NM, WF and UD in SL1, and from NM, TP and WE in SL2. Comparing panels (a) and (b) of Figs. 9, 10, 11 and 12, we found that the CA for each motion was similar between the two CSP features in all shift configurations (ST1, ST2, SL1 and SL2). Furthermore, the misclassifications of one motion as another (e.g. HO vs. UD) were also similar between the two CSP features in all shift configurations. These results demonstrated that the separability of one motion from another was very similar between the two CSP features, which could explain why the classification performance of the two CSP features was not significantly different in any shift configuration. Comparing panels (a)-(d) and (e) of Figs. 9, 10, 11 and 12, we found that the misclassifications of one motion as another for the Variog features were quite different from those of the other four features. We suggest that this was caused by the different type of classifier used for the Variog features. The two-way ANOVA on RCS revealed a statistically significant main effect of Feature (p < 0.001). No other significant two-way interaction or main effect was revealed. For the factor Feature, Tukey comparison showed that the RCS of CSP-OvO was not significantly different from that of CSP-OvR (p = 1.0) but significantly smaller than that of TD (p < 0.001) and TDAR (p < 0.001). It also showed that the RCS of CSP-OvR was significantly smaller than that of TD (p < 0.001) and TDAR (p < 0.001). However, the RCS of TD was not significantly different from that of TDAR (p = 1.0). These results demonstrated that the significant improvement in CA of the CSP features (CSP-OvO and CSP-OvR) was associated with their significantly smaller RCS in the feature space compared with the classic features (TD and TDAR). Unlike for the CSP features (CSP-OvO and CSP-OvR), the significant difference in CA between TDAR and TD was not reflected in the RCS.
The main reason was likely that the difference in CA between TDAR and TD was much smaller than the difference in CA between the CSP features and the classic features. Figures 6 and 7 show the first CSP pattern of each active motion versus the rest of the motions for the CSP-OvR extension scheme in the transversal direction shift (ST1) and longitudinal direction shift (SL1), respectively. We found that the locations emphasized by the CSP patterns before and after the shift were very similar. We believe this was because the underlying electrophysiological processes did not change even in the presence of electrode shift. Thus, the CSP patterns of the EMG signals before the electrode shift could still emphasize the most discriminative locations after the electrode shift and improve the CA in all electrode shift configurations (ST1, ST2, SL1 and SL2). Discussion As shown in Fig. 8, the CSP features (CSP-OvO and CSP-OvR) significantly improved the CA by more than 10 % with respect to the TD features in all shift configurations (ST1, ST2, SL1 and SL2) (p < 0.05). The CSP-OvO feature achieved the highest average CA in all electrode configurations (ST, SL, ST1, ST2, SL1 and SL2) and significantly improved the average CA by more than 5 % with respect to the TDAR features in all shift configurations (p < 0.05). In all shift configurations except ST2 (i.e. ST1, SL1 and SL2), the CSP-OvR feature significantly improved the average CA with respect to the TDAR features. Thus, the CSP features improved the robustness against electrode shift for myoelectric control with respect to the classic features. Although there was no significant difference between the CA of the CSP-OvO feature and that of the CSP-OvR feature in any electrode configuration in Fig. 8, the average CA of the CSP-OvO feature was slightly higher than that of the CSP-OvR feature in all electrode configurations. We attribute this to the fact that the number of features extracted in the CSP-OvO scheme is much larger than the number extracted in the CSP-OvR scheme; therefore, the CSP-OvO feature extracted more information helpful for classification from the HD EMG signals than the CSP-OvR feature. Furthermore, the results showed that the CSP features (CSP-OvO and CSP-OvR) performed best in the longitudinal shift configurations (SL1 and SL2), presumably because the electrode configuration was shifted along the muscle fiber direction in this case. The results also showed that the TDAR features significantly improved the CA with respect to the TD features in shift configurations ST2, SL1 and SL2 (p < 0.05). This confirms the finding of a previous study [11] that TDAR features significantly reduce the sensitivity to electrode shift compared with TD features. Furthermore, the results showed that the Variog features significantly improved the average CA with respect to the TD features in shift configurations ST1 and ST2 (p < 0.05) and improved the average CA with respect to the TDAR features in shift configuration ST1. However, the CA of the Variog features was not significantly different from that of the TDAR features in shift configuration ST2 (p = 0.3) and was significantly lower than that of the TDAR features in the longitudinal direction shifts (SL1 and SL2) (p < 0.05). These results were partially consistent with the results of a previous study [18]. Since parameter selection is very important when using the Variog features and SVM classifiers, this discrepancy might be because we could not find the optimal parameters in the current study.
Moreover, it might be because eleven motions were considered in the current study, compared with seven in that previous study. As shown in Fig. 13, the average RCS of the CSP features (CSP-OvO and CSP-OvR) across all subjects was significantly smaller than that of the classic features (TD and TDAR) in all shift configurations (p < 0.001). Since the mean feature vector of each motion and the covariance of all EMG data determine the parameters of the LDA classifier, a smaller RCS indicates that the LDA classifier trained before the electrode shift remains more suitable for identifying the EMG data after the electrode shift. Thus, the CA of features with a smaller RCS should be greater than that of features with a larger RCS. Here, we attributed the improvement of the CSP features (CSP-OvO and CSP-OvR) with respect to the classic features (TD and TDAR) to their relatively smaller RCS. Regarding noise, Hahne et al. have already evaluated the performance of CSP features with a high baseline noise on individual channels and showed that the CSP features outperformed the classic features [17]. Thus, we did not test this effect and concentrated only on electrode shift in the current study. The results showed that the proposed CSP features improved the robustness against electrode shift for myoelectric control compared with the commonly used features. However, a limitation of the current study is that the proposed CSP features are not suitable for LD EMG. Geng et al. used a CSP method to select LD channels from HD EMG, but they targeted channel selection and did not consider the problem of electrode shift [19]. Huang et al. also used an improved CSP for EMG classification, but they targeted LD EMG and did not consider the problem of electrode shift [24]. For LD EMG applications, the proposed CSP features should be modified to common spatio-spectral pattern (CSSP) features, whose performance against electrode shift should then be evaluated. In CSSP, several finite impulse response (FIR) spectral filters are embedded into CSP to constitute a spatio-spectral filter [20]. Since the embedded FIR filters increase the effective number of channels available to CSP, they could make CSP suitable for LD EMG. We will investigate the performance of CSSP features against electrode shift for LD EMG applications in the future. As this work is an offline analysis, an online study should also be carried out. In the future, the CSP features (CSP-OvO and CSP-OvR) will be tested in real-time experiments and evaluated with three performance metrics, i.e., motion completion rate, motion completion time and motion selection time [32,33]. Another limitation of the current study is that the subjects were intact-limbed. Although Scheme et al. showed that results from intact-limb subjects can be generalized to amputees [34], the CSP features (CSP-OvO and CSP-OvR) should be tested on amputees in future work. To assess the practical applicability of the CSP features, whether the computational capability of current microcontrollers is sufficient for the analysis of HD EMG signals in myoelectric control should also be investigated in the future.
7,036.4
2015-12-01T00:00:00.000
[ "Computer Science" ]
Generation of an avian influenza DIVA vaccine with a H3-peptide replacement located at HA2 against both highly and low pathogenic H7N9 virus ABSTRACT A differentiating infected from vaccinated animals (DIVA) vaccine is an ideal strategy for viral eradication in poultry. Here, in response to the emerging highly pathogenic H7N9 avian influenza virus (AIV), a DIVA vaccine strain, named rGD4HALo-mH3-TX, was successfully developed, based on a substituted 12 peptide of H3 virus located at HA2. To meet the safety requirements of vaccine production, the multi-basic amino acid motif located at the HA cleavage site was modified. Meanwhile, six internal viral genes from the H9N2 AIV TX strain were introduced to increase viral yield. The rGD4HALo-mH3-TX strain displayed a reproductive ability similar to rGD4 and low pathogenicity in chickens, suggesting good productivity and safety. In immunized chickens, rGD4HALo-mH3-TX induced an antibody level similar to rGD4 and provided 100% clinical protection and 90% shedding protection against highly pathogenic virus challenge. The rGD4HALo-mH3-TX strain also produced good cross-protection against the low pathogenic AIV JD/17. Moreover, serological DIVA characteristics were evaluated with a competitive inhibition ELISA established on the basis of the 3G10 monoclonal antibody, which showed strong reactivity with antisera of chickens vaccinated with H7 subtype strains but not with rGD4HALo-mH3-TX antisera. Collectively, rGD4HALo-mH3-TX is a promising DIVA vaccine candidate against both highly and low pathogenic H7N9 subtype AIV. Introduction At the beginning of 2013, H7N9 subtype avian influenza virus (AIV), which caused five waves of human infection in China [1,2], was initially identified as a low pathogenic AIV (LPAIV) for poultry and spread widely across provinces, especially in live poultry markets [3,4]. Since the second half of 2016, H7N9 subtype AIV has evolved into highly pathogenic AIVs (HPAIV), as confirmed by a multi-basic amino acid motif appearing at the cleavage site of HA [5]. Subsequently, H7N9 subtype HPAIV caused outbreaks in chickens, leading to a large number of chicken deaths in several provinces of China [2]. At present, vaccination has become one of the most effective strategies to control avian influenza. After the implementation of the national vaccination program with the H5/H7 bivalent and trivalent inactivated vaccines, the prevalence of H7N9 virus in poultry was dramatically reduced [6]; more importantly, the vaccination of poultry successfully prevented the emergence of new waves of human H7N9 infection [7]. Avian influenza is an important target for eradication and is listed in the national medium- and long-term plan for prevention and control of animal epidemics devised by the government of China [8]. Pathogen surveillance is an ideal screening strategy; however, it demands high diagnostic sensitivity and the monitoring window is very short, which limits the screening of positive poultry. Serological studies can detect infection with influenza virus in the absence of symptoms or positive virus detection [9][10][11]. In influenza serological assays, such as hemagglutination inhibition (HI) or microneutralization (MN), the quantification of virus-specific antibodies can serve as an indicator of infection [12]. However, serological monitoring can be confounded by vaccination. Therefore, it is necessary to design a novel vaccine that allows differentiating infected from vaccinated animals (DIVA).
This strategy has been successfully applied to the prevention and control of pseudorabies. Since 1990, pseudorabies has been eradicated in some farms and regions by using a DIVA strategy based on gE-deleted vaccines and matched diagnostic kits, such as gE-ELISA kits, which can effectively detect antibodies induced specifically by wild-type virus [13][14][15]. Several DIVA strategies have been designed for avian influenza vaccines. Nonstructural protein 1 (NS1), which is only detected in infected cells, was considered a target for DIVA vaccine design; however, NS1 antibody levels are too low to be reliably detected [16]. Neuraminidase (NA) is another design target, based on different NA subtypes between the vaccine strain and the epidemic strain [17]; however, possible failures cannot be ignored, because continually emerging viruses may acquire the same NA subtype as the vaccine strain. Hemagglutinin (HA) is a glycoprotein located on the AIV envelope membrane that can induce protective neutralizing antibodies. Current vaccines mainly induce neutralizing antibodies against the HA1 protein, which can provide very effective protection against homologous strains. However, the epitopes located on the HA1 protein are prone to mutation, and the vaccine needs to be updated continuously to guarantee immune efficacy. The HA2 protein, located in the stem of HA, has high homology among different AIV subtypes, and chimeric recombination is not prone to occur there [18]. If conserved peptides can be identified in the HA2 protein and replaced with the corresponding peptides of other virus subtypes, a DIVA strategy becomes possible. Therefore, HA2 is a very promising candidate target for DIVA vaccine design. Previously, we screened a specific epitope on the HA2 protein of H7N9 subtype AIV, named the H7-12 peptide, based on peptide microarray technology. Subsequently, a serological DIVA vaccine was developed using the chimeric HA epitope approach based on the LPAIV JD/17 strain [19]. As the highly pathogenic H7N9 strain has become the dominant epidemic strain, an H7N9 avian influenza DIVA vaccine was developed in this study based on substitution of the 12 peptide of H3 subtype influenza virus located at HA2. In addition, the multi-basic amino acid motif located at the HA cleavage site was removed for attenuation to ensure the safety of vaccine production, and a viral backbone from an H9N2 subtype AIV (TX strain) was introduced to increase viral yield. Furthermore, the immune protection of the vaccine against H7N9 subtype LPAIV and HPAIV was evaluated. The DIVA properties can be easily detected by our established competitive inhibition ELISA based on the 3G10 McAb, which is a method well suited for wide application in production practice. Biosafety and animal care 3- and 6-week-old specific pathogen-free (SPF) chickens were purchased from Sinopharm Yangzhou Weike Biological Engineering Co., Ltd. All experiments involving H7 viruses were approved by the Institutional Biosafety Committee of Yangzhou University and were performed in animal biosafety level 3 (ABSL-3) facilities according to the institutional biosafety manual (CNAS BL0015). The protocols of all animal studies were approved by the Jiangsu Province Administrative Committee for Laboratory Animals (approval number: SYXK-SU-2016-0020) and complied with the guidelines of Jiangsu Province Laboratory Animal Welfare and the ethics of the Jiangsu Province Administrative Committee of Laboratory Animals.
Viruses, cells, and plasmids A total of 15 AIV strains were isolated from live poultry markets or diseased chickens by our laboratory (Table S1), including ten different subtypes of AIV (H1N1, H3N2, H4N6, H5N2, H5N6, H5N1, H6N2, H7N9, H9N2, and H10N3), and antiserum against each virus was prepared from SPF chickens immunized with whole inactivated virus. Viruses were propagated in 10-day-old SPF embryonated chicken eggs (ECEs) at 37°C, and the viral allantoic fluids were collected and stored at −80°C. The infectivity of each strain was determined by serial titration in 10-day-old ECEs and calculated as the 50% egg infective dose (EID50)/0.1 mL by the Reed-Muench method [20]. Madin-Darby canine kidney (MDCK) cells and human embryonic kidney (293T) cells were maintained in Dulbecco's Modified Eagle's Medium (HyClone, USA) supplemented with 10% fetal bovine serum (Gibco, USA) at 37°C in an atmosphere of 5% CO2. Chicken embryo fibroblast (CEF) cells were prepared and maintained in M199 medium (HyClone, USA) with 4% fetal bovine serum. The eight plasmids of the TX strain (HA, NA, PB2, PB1, PA, NP, M, and NS, based on the pHW2000 vector) were constructed previously [21] and kept in our laboratory. Generation of antiserum The viral allantoic fluids were inactivated by mixing with 0.1% formaldehyde solution at 4°C for 24 h, and complete inactivation was then verified by serial passage in eggs. The completely inactivated virus was emulsified with white oil adjuvant (Sinopharm-vacbio, Yangzhou, China) to prepare inactivated vaccines. Immune antiserum was generated by vaccinating 21-day-old SPF chickens (n = 5) with the inactivated vaccine and was characterized by hemagglutination inhibition (HI) assays. For preparation of infected antiserum, 7-week-old SPF chickens (n = 3) were infected intranasally with the GD4 strain at 10^4 EID50/0.1 mL; at 14 days post-infection (d.p.i.), infected antiserum was collected once the HA titer was ≥7. In addition, 3-week-old SPF chickens (n = 5) were infected intranasally with the JD/17 strain at 10^6 EID50/0.1 mL; at 21 d.p.i., infected antiserum was collected once the HA titer was ≥7. Phylogenetic analyses HA gene sequences from JD/17, XZ-1, GD4, HZLH2, and XT-3 were obtained by sequencing, and the gene sequences of all other viruses were obtained from GISAID and GenBank. The complete sequences were aligned with MEGA (version 6), and maximum likelihood phylogenetic trees were then inferred with 1000 bootstrap replicates. Thermostability and pH stability For the thermostability assay, all viruses were diluted to the same EID50 and incubated in a water bath at 37°C or 42°C, and the HA titer was determined at days 1, 3, and 5. For thermostability at 56°C, all viruses were incubated in a water bath at 56°C for 0, 5, 10, 15, 30, 60, or 90 min, then quickly placed at 4°C for 5 min, and the HA titer was determined. The pH stability was assayed as previously described [22]. In brief, viruses were mixed with an equal volume of buffer at different pH levels and incubated at 37°C for 10 min. The titers of all samples were determined using a hemagglutination assay with 1% chicken red blood cells. Construction of the H7N9 rGD4HALo-mH3-TX strain Briefly, the HA gene of GD4 with the multi-basic amino acid motif removed was amplified by overlapping PCR using the primer pairs HALo-1-F, HALo-1-R, HALo-2-F, and HALo-2-R.
Furthermore, the HALo gene of the GD4 virus with substitution of the H3 subtype 12 peptide at HA2 was generated by overlapping PCR using the primer pairs HALo-mH3-1-F, HALo-mH3-1-R, HALo-mH3-2-F, HALo-mH3-2-R, HALo-mH3-3-F, and HALo-mH3-3-R (Table 1). Next, the HALo-mH3 gene was cloned into the pHW2000 vector and combined with the NA plasmid from GD4 and the high-yield internal viral backbone from the H9N2 subtype TX strain (containing the PB2, PB1, PA, NP, M, and NS plasmids) to generate a recombinant strain, named rGD4HALo-mH3-TX, by a reverse genetics method as described previously [23]. Sequencing results showed that all the viral genes were genetically stable without any unwanted mutation. In Table 1, italics indicate the restriction endonuclease BsmBI site and underlining indicates the changed sequence. IVPI assay Fresh infective allantoic fluid was diluted 1:10 in sterile phosphate-buffered saline (PBS). A sample (0.1 mL) of the diluted virus was injected intravenously into each of ten 6-week-old SPF chickens (n = 10). All chickens were examined daily for 10 days and scored based on their condition as described in the OIE Manual [24]. Vaccination and challenge assay 3-week-old SPF chickens (n = 10) were injected subcutaneously in the neck with 0.3 mL of each formalin-inactivated vaccine (rGD4HALo-mH3-TX or rGD4, 10^6 EID50/0.1 mL) or with 0.3 mL of PBS as controls. Every week after vaccination, serum from all chickens was collected for testing. Three weeks after vaccination, the chickens were challenged intranasally with 10^6 EID50 of H7 virus (GD4 or JD/17). Oropharyngeal and cloacal swabs were collected at 1, 3, and 5 days post-challenge for the detection of virus shedding. All chickens were observed for signs of disease or death for 14 days after challenge. Preparation of McAb Four 6-week-old SPF BALB/c mice were subcutaneously injected with 50 μg of H7-12 peptide-BSA conjugate emulsified with the same volume of Freund's complete adjuvant (Sigma Aldrich, St. Louis, MO, USA). Three weeks later, the mice were boosted by subcutaneous injection of H7-12 peptide-BSA emulsified with the same volume of Freund's incomplete adjuvant (Sigma Aldrich, St. Louis, MO, USA). The HI titer of serum isolated from the immunized mice was determined. Once a high titer was obtained, the spleen cells were isolated and fused with cultured SP2/0 myeloma cells at a ratio of 5:1 in the presence of 1 mL PEG1500 (Roche, Switzerland) to prepare hybridoma cells [25]. The supernatant of cell culture from individual wells containing hybridoma cells was collected after 10 days and tested by indirect ELISA (iELISA) as described [26]. After limiting dilution and three rounds of subclonal screening, hybridoma cells with high affinity and specificity against the H7-12 peptide were injected intraperitoneally into BALB/c mice for the production of McAb in ascites [27], and the soluble IgG antibodies were then purified. IFA CEF cells were infected with different AIVs at a multiplicity of infection (MOI) of 10 and cultured for 12 h at 37°C. The cells were then fixed with cold methanol and washed with PBS, and the plates were incubated with the monoclonal antibody against the H7-12 peptide (3G10, 1:2000) for 1.5 h at 37°C, then rinsed three times for 5 min each. Alexa Fluor 488 goat anti-mouse IgG (H+L) (ThermoFisher, USA), diluted 1:5000, was added and incubated at 37°C for 1 h. Finally, the plates were rinsed three times and visualized under a fluorescence microscope (Olympus, Tokyo, Japan).
Competitive inhibition ELISA ELISA plates were coated with the H7-12 peptide (50 µg/mL, GL Biochem Ltd, China) overnight at 4°C. The plates were then blocked for 2 h at 37°C with 5% nonfat milk. Infected or immune sera were prediluted 1:4 and incubated on the plates for 1 h at 37°C. After washing three times, the plates were incubated for 1 h at 37°C with the enzyme-labeled monoclonal antibody (3G10) prediluted 1:200 (100 μL/well). After three washes, the plates were overlaid with 3,3′,5,5′-tetramethylbenzidine (TMB, Beyotime Biotechnology, China). Reactions were stopped with 2 M H2SO4, and the optical density (OD) of each well was read at 450 nm. The inhibition rate was calculated as I = (1 − P/N) × 100%, where P is the OD450 of the test serum and N is the OD450 of the negative control. Thirty negative sera were tested by the competitive inhibition ELISA; the average (X̄) inhibition rate was 8.17% and the standard deviation (SD) was 4.03%. X̄ + 2SD was taken as the negative cutoff value (16.14%) and X̄ + 3SD as the positive cutoff value (20.23%). Values between 16.14% and 20.23% were considered suspect and retested. Statistical analysis Results are expressed as means ± standard deviations (SD). One-way ANOVA was employed to compare the variance between the different groups; *P < 0.05, **P < 0.01. H7N9 subtype HPAIV shows a distant genetic relationship to H7N9 subtype LPAIV The phylogenetic tree showed that the mean genetic distances of HA within the HPAIV and LPAIV groups were 0.004 and 0.018, respectively, whereas the average genetic distance of HA between the HPAIV and LPAIV groups was 0.026 (Figure 1). These results show that GD4, HZLH2, XT-3, and SD098, which belong to the HPAIV group, are genetically distant from the LPAIVs. Therefore, it is necessary to develop a novel vaccine against both HPAIV and LPAIV. Rescue of the rGD4HALo-mH3-TX virus The 50% tissue culture infective doses (TCID50) in chicken embryo fibroblast (CEF) cells and the 50% egg infective doses (EID50) of three HPAIVs, namely GD4, HZLH2 (A/Chicken/Guangdong/HZLH2/2017, H7N9), and XT-3 (A/Chicken/Hebei/XT-3/2017, H7N9), were determined to evaluate the basic biological characteristics of the strains. The results showed that GD4 had higher TCID50 and EID50 titers than the other two strains (Table 2). Furthermore, the thermostability and pH stability of the viruses were assessed to evaluate viral stability. GD4 and HZLH2 showed higher thermostability at 37°C, 42°C, and 56°C than the XT-3 strain (Figure 2(a-c)), and GD4 displayed the highest pH stability among the strains (Figure 2(d)). Therefore, GD4 was chosen for developing a recombinant vaccine candidate strain by reverse genetics. The multi-basic amino acid motif located at the HA cleavage site was removed for attenuation, and the H7-12 peptide (463-ADSEMDKLYERVKRQLRENA-482) located at HA2 of GD4 was replaced by the H3 subtype 12 peptide (463-ADSEMNKLFEKTKKQLRENA-482), which serves as the marker for the DIVA strategy (Figure 3(a)). The recombinant HA fragment, named HALo-mH3, combined with the NA plasmid from GD4 and the high-yield viral backbone from the H9N2 subtype TX strain (containing the PB2, PB1, PA, NP, M, and NS plasmids), was used to construct a recombinant virus based on our established reverse genetics procedure [22], named rGD4HALo-mH3-TX (Figure 3(b)). Sequencing results showed that all the viral genes were genetically stable without any unwanted mutation after five passages.
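The inhibition-rate and cutoff calculations described above are simple enough to sketch directly. The following is an illustrative computation, not the authors' analysis script; only NumPy is assumed, and the panel of negative sera is passed in as plain OD-derived inhibition rates.

```python
import numpy as np

def inhibition_rate(od_sample, od_negative):
    """I = (1 - P/N) * 100 %, with P the OD450 of the test serum
    and N the OD450 of the negative control."""
    return (1.0 - od_sample / od_negative) * 100.0

def cutoffs(negative_rates):
    """Negative and positive cutoffs from a panel of negative sera:
    mean + 2 SD and mean + 3 SD, as described in the text."""
    mean = np.mean(negative_rates)
    sd = np.std(negative_rates, ddof=1)
    return mean + 2 * sd, mean + 3 * sd

def interpret(rate, neg_cut, pos_cut):
    """Classify one serum's inhibition rate against the two cutoffs."""
    if rate >= pos_cut:
        return "positive"
    if rate < neg_cut:
        return "negative"
    return "suspect, retest"

# Example with the values reported in the text (mean 8.17 %, SD 4.03 %):
# neg_cut, pos_cut would be close to 16.14 % and 20.23 %.
```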
The rGD4HALo-mH3-TX strain displays a good reproductive ability and low pathogenicity After removal of the multi-basic amino acid motif located at the HA cleavage site, the EID50 titer of rGD4HALo (8.0 log10/0.1 mL) was lower than that of rGD4 (8.5 log10/0.1 mL). Notably, after substitution of the high-yield viral backbone from the H9N2 subtype TX strain, the EID50 titer of the rGD4HALo-mH3-TX strain was the same as that of rGD4 (8.5 log10/0.1 mL). Meanwhile, the HA titer of rGD4HALo-mH3-TX remained high (10 log2). These data suggest that our vaccine design strategy maintained a good viral reproductive ability (Table 3). In the chicken embryo infection assay, rGD4 killed the embryos within 36 h, whereas the chicken embryos infected with rGD4HALo or rGD4HALo-mH3-TX were still alive at least 120 h after infection. Furthermore, the chicken intravenous pathogenicity index (IVPI) assay was used to evaluate pathogenicity in vivo. Chickens inoculated intravenously with rGD4 died within 48 h, whereas those inoculated with rGD4HALo or rGD4HALo-mH3-TX survived; the IVPI values were 2.55, 0.10, and 0.01, respectively. These results indicate that the pathogenicity of rGD4HALo-mH3-TX was significantly attenuated. The rGD4HALo-mH3-TX strain shows a good immune efficacy and cross-immunoprotection against H7N9 subtype HPAIV and LPAIV Three-week-old SPF chickens were immunized with inactivated rGD4 or rGD4HALo-mH3-TX virus, and the protective efficacy was evaluated. The HI titers (log2) of rGD4 and rGD4HALo-mH3-TX immune serum against GD4 were 6.10 ± 1.57 and 7.15 ± 1.63, respectively, at 2 weeks after the first immunization, and rose to 8.70 ± 1.22 and 8.60 ± 1.60, respectively, at 3 weeks after the first immunization (Figure 4). The cross-antibody titers showed that the average HI titers (log2) of chickens vaccinated with rGD4 or rGD4HALo-mH3-TX against JD/17 (6.85 ± 1.63 and 7.25 ± 1.55, respectively) were lower than those against GD4 (8.70 ± 1.22 and 8.60 ± 1.60, respectively) at 3 weeks after the first immunization (Table 4). These results indicate that rGD4HALo-mH3-TX induces antibody levels similar to rGD4 and also produces good cross-reactive antibody levels against the LPAIV JD/17. Next, chickens were challenged with 10^6 EID50 of the HPAIV (GD4) strain at 3 weeks post-vaccination; all PBS-treated chickens died within 5 days, whereas all chickens vaccinated with rGD4HALo-mH3-TX or rGD4 showed 100% clinical protection. Of note, two chickens vaccinated with rGD4 showed cloacal shedding and one showed oropharyngeal shedding (80% shedding protection), whereas the chickens vaccinated with rGD4HALo-mH3-TX showed no cloacal shedding and low oropharyngeal shedding (1/10), corresponding to 90% shedding protection. After challenge with 10^6 EID50 of the heterologous H7 LPAIV (JD/17), all PBS-treated chickens shed virus within 5 days, whereas all chickens vaccinated with rGD4HALo-mH3-TX or rGD4 showed no clinical signs during observation. The chickens vaccinated with rGD4HALo-mH3-TX had a higher shedding protection (80%) than the rGD4 group (70%). These data suggest that rGD4HALo-mH3-TX provides good immune efficacy and cross-immunoprotection against H7N9 subtype HPAIV and LPAIV. The rGD4HALo-mH3-TX strain shows a good DIVA property with a matched competitive inhibition ELISA The established competitive inhibition ELISA based on the 3G10 McAb was used to test H1, H3, H4, H5 (Re-8, Re-11, and Re-12), H6, H7 (H7-Re2), H9, and H10 positive sera and negative serum.
Meanwhile, GD4, JD/17, and rGD4HALo-mH3-TX immune or infected sera were also tested. As shown in Table 5, the inhibition rates of the H7-Re2, GD4, and JD/17 immune sera were 48.56 ± 3.98%, 52.24 ± 1.83%, and 54.87 ± 2.81%, respectively, all higher than the negative cutoff of 16.14%, while the inhibition rates of the sera against the other HA subtypes were below 16.14%, demonstrating that the established competitive inhibition ELISA has good specificity for identifying H7 subtype wild-type viruses. In addition, the inhibition rates of the GD4 and JD/17 infected sera were 56.79 ± 3.76% and 55.33 ± 3.06%, respectively, showing a strongly positive response and indicating that sera from simulated infections can also be accurately detected by this method. Importantly, the inhibition rate of the rGD4HALo-mH3-TX immune serum was 4.89 ± 1.43%, implying an ideal DIVA property for distinguishing between vaccine strains and wild-type strains. Discussion Vaccination against AIV is a major prevention and control measure in the poultry industry. However, traditional vaccines, such as inactivated whole-virus vaccines, interfere with the serological diagnostic assays used for surveillance and identification of infected animals, which are critical for the eradication of epidemic disease. In this study, a novel DIVA vaccine was developed with good immune efficacy and cross-immunoprotection against H7N9 subtype HPAIV and LPAIV. In addition, the developed competitive inhibition ELISA based on the 3G10 McAb has good specificity and can accurately distinguish between immune serum of rGD4HALo-mH3-TX and infected serum of wild-type H7N9 strains. Current influenza vaccines mainly induce anti-HA antibodies, which specifically target the antigenic sites located in the globular head domain of the HA1 region to block receptor binding [28,29]. However, mutations occur frequently in the HA1 region, which can result in low neutralizing activity of the antibodies induced by influenza vaccines. Compared with HA1, HA2 is more conserved and has gradually become a main target protein in the study of universal epitopes [30], implying that a conserved peptide could be chosen as a DIVA marker. The IFA experiments also showed that the 3G10 McAb against our H7-12 peptide located at HA2 only reacted with cells infected by H7 subtype AIVs and not with cells infected by the other subtype AIVs, including H1, H3, H4, H5 (Re-8, Re-11, and Re-12), H6, H9, and H10, suggesting that the H7-12 peptide has good specificity for H7 subtype AIVs. Furthermore, the 3G10 McAb recognized both H7 subtype HPAIV and LPAIV despite their recent genetic divergence, indicating that the H7-12 peptide is highly conserved in H7 subtype AIVs. To achieve the DIVA strategy, the H7-12 peptide was replaced by the corresponding H3 sequence, which differs by five amino acids. A recombinant chimeric influenza virus, the rGD4HALo-mH3-TX strain, was successfully rescued by reverse genetics, implying that these five amino acid substitutions did not affect the biological properties of the virus. Moreover, the 3G10 McAb could not recognize the chimeric virus after the H7-12 peptide was replaced with the five amino acid mutations, as shown by the IFA assay. These results reveal that the rGD4HALo-mH3-TX strain is a promising marker vaccine candidate for distinguishing infected from vaccinated animals, and that the 3G10 McAb, with its good specificity and broad spectrum, is promising for application in DIVA diagnosis.
Classic serological tests, such as hemagglutination inhibition or microneutralization, can detect virus-specific antibodies as indicators of infection [12]; however, these methods cannot distinguish between antibodies induced by vaccination and by wild-type virus infection in the host. Although a peptide microarray for DIVA identification was successfully developed in our previous study and was much more sensitive than the HI test [19], it currently requires special equipment. In comparison with these serological methods, ELISA is widely regarded as an essential approach for early diagnosis, being superior in accuracy, speed, and throughput, and it has obvious advantages for wide promotion and application [31]. Among the different kinds of ELISA, competitive inhibition ELISA, also called epitope-blocking ELISA, can distinguish immune serum from infected serum. Therefore, a highly specific McAb is needed that recognizes a broadly conserved antigenic epitope shared by H7 subtype strains, one that consistently induces an antibody response in infected hosts but not in rGD4HALo-mH3-TX-vaccinated hosts. Our 3G10 McAb-based competitive inhibition ELISA showed that only serum against wild-type H7 subtype HPAIV or LPAIV had a positive inhibition rate, whereas serum against the chimeric virus rGD4HALo-mH3-TX had a typical negative inhibition rate, demonstrating that the 3G10 McAb has high specificity and a broad spectrum covering both H7 subtype HPAIV and LPAIV, and that the 3G10 McAb-based competitive inhibition ELISA has an excellent capacity for DIVA diagnosis. H7N9 subtype AIV is an important zoonotic pathogen, so the safety of vaccine production also needs to be considered. HPAIVs usually contain consecutive basic amino acids at the cleavage site of the HA gene. Our previous study successfully achieved the attenuation of an H5N1 subtype HPAIV by modifying the basic amino acids at the cleavage site of the HA gene [32]. Meanwhile, the attenuation strategy may result in a decline in virus yield; thus, replacing the internal backbone with a high-yield one is also an important solution. Subbarao et al. [33] removed the multi-basic amino acid motif in the HA gene of epidemic strains and, for the first time, rescued a recombinant virus in a background of internal genes derived from PR8, successfully developing a new attenuated recombinant candidate strain. The H7N9 viruses initially reported in 2013 arose on the basis of complete internal gene sets from H9N2 subtype AIVs [34,35], implying that the internal backbone of the H9N2 virus is a good match for the H7N9 virus. We previously found that TX is a naturally attenuated strain of the avian H9N2 subtype and replicates strongly in chicken embryos [36], implying that its internal backbone could be used for virus production. Therefore, in consideration of production safety and high yield, the multi-basic amino acid motif located at the HA cleavage site was removed and the TX internal backbone was successfully introduced into the H7N9 GD4 strain. The constructed rGD4HALo-mH3-TX recombinant strain maintained a reproductive ability comparable to rGD4 and showed low pathogenicity, with the chicken intravenous pathogenicity index (IVPI) markedly decreased to 0.01. Whether immune efficacy is affected by the DIVA design also needs to be considered.
Our rGD4HALo-mH3-TX strain carries only a minimal mutation of five amino acids at HA2, but not at HA1; thus, the strain induced an antibody level similar to that of rGD4, indicating that the substitution at HA2 did not affect antibody production and thereby overcoming the poor antibody inducibility of some DIVA vaccines [37,38]. In addition, the rGD4HALo-mH3-TX strain is well matched to the current epidemiological characteristics of highly pathogenic H7N9 strains in China, and it provided 100% clinical protection and 90% shedding protection against the HPAIV GD4. Meanwhile, the rGD4HALo-mH3-TX strain also produced good cross-reactive antibody levels and 80% shedding protection against the LPAIV JD/17, comparable with the rGD4 strain. These data suggest that the HA2 replacement did not affect the cross-protection capacity of the rGD4HALo-mH3-TX strain against both highly and low pathogenic H7N9 viruses. Although avian influenza viruses evolve rapidly, our research platform based on this novel DIVA vaccine strategy enables rapid vaccine updating by HA modification to match epidemic strains. Taken together, we successfully developed an rGD4HALo-mH3-TX-based inactivated H7N9 recombinant vaccine with high safety, high yield, and high immune efficacy. Moreover, a DIVA strategy was successfully implemented based on the replacement of the H7-12 peptide at HA2, and infection serum of wild-type strains and vaccine immune serum can be accurately distinguished by our established 3G10 McAb-based competitive inhibition ELISA, which is expected to advance the eradication of H7N9 subtype avian influenza. Disclosure statement No potential conflict of interest was reported by the author(s).
6,491.8
2022-03-14T00:00:00.000
[ "Biology" ]
Janthinobacterium sp. Strain SLB01 as Pathogenic Bacteria for Sponge Lubomirskia baikalensis Sponges (phylum Porifera) are ancient filter-feeding metazoans of marine and inland waters. In recent years, diseased sponges have been increasingly observed in marine and freshwater environments. Endemic freshwater sponges of the Lubomirskiidae family are widely distributed in the coastal zone of Lake Baikal. The strain Janthinobacterium sp. SLB01 was isolated previously from the diseased sponge Lubomirskia baikalensis (Pallas, 1776), although its pathogenicity is still unknown. The aim of this study was to confirm whether the Janthinobacterium sp. strain SLB01 is the pathogen found in Baikal sponges. To address this aim, we infected the cell culture of primmorphs of the sponge L. baikalensis with strain SLB01 and subsequently reisolated and sequenced the strain Janthinobacterium sp. PLB02. The results showed that the isolated strain has more than 99% homology with strain SLB01. The genomes of both strains contain the vioABCDE genes of violacein biosynthesis and floc-formation genes responsible for a strong biofilm, in addition to the type VI secretion system (T6SS) as the main virulence factor. Based on a comparison of complete genomes, we showed the similarity of the studied Janthinobacterium spp. strains with the described strain Janthinobacterium lividum MTR. This study will help expand our understanding of microbial interactions and identify one of the causes of disease development and death in Baikal sponges. Earlier, an analysis of 16S rRNA gene amplicons revealed a significant increase in the number of opportunistic microorganisms, including Betaproteobacteria of the Oxalobacteraceae family, in diseased freshwater sponges and in the cell culture of primmorphs [17,18]. Moreover, we isolated, sequenced, and analyzed the genome of the strain Janthinobacterium sp. SLB01. Sampling of Sponges and Cell Culture of Primmorphs Specimens of the healthy sponge L. baikalensis Pallas, 1776 (Demospongiae, Haplosclerida, Spongillida, Lubomirskiidae) were collected by scuba divers in individual containers from Lake Baikal in the Olkhon Gate Strait, Central Siberia, Russia (53°02′21″ N; 106°57′37″ E), at a depth of 10 m (water temperature 3-4 °C). The cell cultures of primmorphs were obtained via the mechanical dissociation of cells according to the previously described technique [30]. A clean sponge sample was crushed, and the obtained cell suspension was subsequently filtered through sterile 200-, 100-, and 29 µm nylon meshes. The gel-like suspension was diluted 10-fold with Baikal water, placed in a refrigerator, and stored for 3 min at 3-6 °C until a dense precipitate formed. Healthy primmorphs were placed into 200-500 mL culture bottles (Nalge Nunc International, Rochester, NY, USA). Cell cultures of primmorphs were cultivated in natural Baikal water (NBW) at 3-4 °C under illumination with a light intensity of 47 lx or 0.069 W, with a 12 h day/night cycle, for a month. The cell culture of primmorphs was then used for experimental infection. Bacteria Isolation In this study, we used the strain Janthinobacterium sp. SLB01, isolated from a sample of the diseased sponge L. baikalensis collected in Lake Baikal, Central Siberia, Russia [19], for subsequent experimental infection. Healthy primmorphs (diameters of 2-4 mm) were transferred to 24-well plates (Nalge Nunc International, Rochester, NY, USA), with one piece per well in 2 mL of NBW, and infected with the Janthinobacterium sp. 
strain SLB01 with an initial dose of bacteria of 2.5 × 10⁴ CFU/mL in 50 µL. The infection was repeated at least three times. The infected primmorphs were cultivated at 3-6 °C with a 12 h day and night cycle for 14 days. During the infection of primmorphs, observations were carried out with daily descriptions and sampling for DNA isolation and sequencing. The cell suspension from the infected primmorphs was then homogenized and filtered using an MF-Millipore membrane filter of 0.45 µm pore size (Merck, Zug, Switzerland), and 10 µL was transferred to the nutrient medium. The bacteria were cultured on R2A nutrient medium (0.05% yeast extract, 0.05% tryptone, 0.05% casamino acids, 0.05% dextrose, 0.05% soluble starch, 0.03% sodium pyruvate, 1.7 mM K₂HPO₄, 0.2 mM MgSO₄, final pH 7.2 adjusted with crystalline K₂HPO₄ or KH₂PO₄) agar plates (Merck KGaA, Darmstadt, Germany) at pH 7.2. The dishes were inoculated in three replicates and cultivated at a temperature of 22 °C for 5 days, with the growth of the strain observed daily. Microscopy We observed daily changes in cell cultures of primmorphs infected with Janthinobacterium sp. strain SLB01 over 14 days. The samples were stained with a NucBlue Live ReadyProbes reagent (Thermo Fisher Scientific Inc., Waltham, MA, USA). The cell morphology was determined via light microscopy on an Axio Imager Z2 microscope (Zeiss, Oberkochen, Germany) equipped with fluorescence optics (self-regulating, blue HBO 100 filter, 358/493 nm excitation, 463/520 nm emission). The samples were prepared for Scanning Electron Microscopy (SEM) analysis. Fixation was performed according to the following procedure: pre-fixation in 1% OsO₄ (10 min), washing in a cacodylate buffer (30 mM, pH 7.9) (10 min), fixation in 1.5% glutaraldehyde solution in a cacodylate buffer (30 mM, pH 7.9) (1 h), washing in a cacodylate buffer (30 mM, pH 7.9) (30 min), postfixation in 1% OsO₄ solution in a cacodylate buffer (30 mM, pH 7.9) (2 h), and washing in filtered Baikal water for 15 min at room temperature, followed by dehydration in a graded ethanol series. The specimens were placed onto SEM stubs, critical-point dried with liquid carbon dioxide (BalTec CPD 030), and coated using a Cressington 308 UHR sputter coater before examination under a Sigma series scanning electron microscope (Zeiss, Oberkochen, Germany) operating at 5.0 kV. The samples were also prepared for Transmission Electron Microscope (TEM) analysis. We took both healthy primmorphs and primmorphs infected with the strain Janthinobacterium sp. SLB01 for 24 h, 3 days, and 7 days. The samples were fixed with 2.5% glutaraldehyde in a 0.1 M cacodylate buffer (pH 7.2) for 24 h at 4 °C. The material was then washed in a 0.1 M cacodylate buffer (pH 7.2) 3 times for 1 h, and postfixed in 1% OsO₄ diluted in 0.1 M cacodylate buffer (pH 7.2) for 30 min. After washing out the fixative in distilled water (3 times for 30 min), the material was dehydrated in a series of increasing concentrations of ethanol and acetone. Next, the material was embedded in a mixture of Epon and Araldite (Sigma, MO, USA). Semi-thin and ultra-thin sections were made using a Leica UC7 ultramicrotome (Leica Microsystems, Wetzlar, Germany). Ultrathin sections were contrasted with a 0.5% aqueous solution of uranyl acetate (20 min) and Reynolds lead citrate (10 min). 
Ultrathin sections were analyzed using a Libra 200 FE transmission electron microscope (Carl Zeiss, Oberkochen, Germany) and a Libra 120 (Carl Zeiss, Oberkochen, Germany). Genome Assembly, Annotation and Phylogenetic Relationship Raw read error correction and filtering with the FastP tool were performed with default settings [31]. Genomes were assembled with SPAdes version 3.11.0 [32] using the default settings. Contigs from the draft assembly with a length of more than 10 Kbp were scaffolded with Ragout [33] (https://github.com/fenderglass/Ragout, accessed on 9 March 2021) using the Janthinobacterium sp. LM6 chromosome (GenBank accession no. CP019510) as the reference. We used the same software set for genome assembly and annotation to prevent genome variations depending on reference and assembly software versions. Although the prokaryotic genome annotation pipeline (PGAP) version used for annotating the genome of Janthinobacterium sp. strain SLB01 was 4.13 (the version in use when the genome was released in NCBI RefSeq), to annotate Janthinobacterium sp. PLB02 we used the newest available NCBI PGAP version (5.1), which can annotate more genes because its database is richer. Gene annotations were performed using PGAP (https://github.com/ncbi/pgap, accessed on 9 March 2021). Core genome construction was accomplished with Roary version 3.13.0 using default settings [34]. Genome completeness analysis was performed with benchmarking universal single-copy orthologs (BUSCO) version 5.0.0 using the dataset "burkholderiales_odb10" [35]. Identification of Janthinobacterium spp. strains was carried out via phylogenetic analysis with PhyloPhlAn 3.0 [36] based on a comparison of 400 universal marker genes (a maximum-likelihood method) [37] using the "supermatrix_aa" and "low diversity" modes with the "phylophlan" database. We acquired 10 closely related strains of Janthinobacterium by 16S rRNA from the Basic Local Alignment Search Tool (BLAST/NCBI) to build a phylogenetic tree. Statistical Analysis All of the infection experiments were performed at least three times. The data are reported as the means ± standard deviation (SD). A statistical analysis was then carried out (single-factor ANOVA followed by Tukey's multiple range test) using SPSS 16 software. Differences in mean values were considered significant at p < 0.05. Bacterial Isolation and Microscopy We used a model cell culture of healthy primmorphs of the sponge L. baikalensis for experimental infection with the Janthinobacterium sp. strain SLB01 isolated from a diseased sponge, L. baikalensis, to determine the bacterial pathogenicity. The healthy primmorphs were bright green in color, with bright red chlorophyll autofluorescence in the cells due to the presence of green symbiotic microalgae of the taxon Chlorophyta. We observed that the cells of the sponges contained a strict arrangement of symbiotic microalgae in the amoebocytes of the uninfected primmorphs (Figure 1A,B). A completely different picture was observed in primmorphs infected with the Janthinobacterium sp. strain SLB01. Dirty scurf, a fetid odor, and biofilm formation were observed in the infected cultures, which were likely associated with the growth of bacteria. The primmorphs lost their green color after infection with the strain Janthinobacterium sp. SLB01 (Figure 1C). We observed destroyed amoebocyte cells and a chaotic arrangement and adhesion of microalgae. On the third day of cultivation of primmorphs infected with Janthinobacterium sp. 
strain SLB01, we observed the suppression of autofluorescence (Figure 1D). After 7 days of cultivation of the infected culture of primmorphs, we observed dead sponge cells and a chaotic arrangement of microalgae, with an increase in the number of short rod-shaped bacteria (Figure 1E,F). The loss of chlorophyll autofluorescence and the death of microalgae were observed in all experimental samples. Figure 1 (legend, in part): the autofluorescence of chlorophyll-containing intracellular microalgae is shown in red; (C,D) chaotic arrangements of green microalgae and increased bacteria in primmorphs infected with Janthinobacterium sp. strain SLB01 on day 3 of cultivation, with bacteria shown in blue (indicated by arrows); (E,F) primmorphs infected with Janthinobacterium sp. strain SLB01 on day 7 of cultivation (bacteria in blue; shown by arrows). The samples of primmorphs were stained with the NucBlue Live ReadyProbes reagent for fluorescence microscopy. Scale bars: 10 µm. Using SEM, we found that the microalgae contained spheroidal cells 2.5-3.0 µm in diameter with a clean cell wall in the healthy primmorphs (Figure 2A). However, we experimentally observed the interaction of bacteria with host cells in the primmorphs infected with Janthinobacterium sp. strain SLB01 (Figure 2B). The squamous epithelium was destroyed, and the symbiotic microalgae were packed entirely in a thick microbial biofilm with short rod-shaped bacteria (Figure 2B). We observed an interaction of bacteria with the host cells of the primmorphs and their symbiotic microalgae infected with the Janthinobacterium sp. strain SLB01 through the use of ultrastructural analysis (Figure 3). We found that amoebocyte cells were filled with green symbiotic microalgae in the healthy primmorphs (Figure 3A). The amoebocyte cells were up to 20 µm in diameter, containing a nucleus with a prominent nucleolus. The cytoplasm of amoebocytes contains dictyosomes of the Golgi apparatus and cisterns of the endoplasmic reticulum. A distinctive feature of amoebocytes is the presence in the cytoplasm of specialized vacuoles, symbiosomes, with symbiotic microalgae of the Chlorophyceae family enclosed in them. 
Microalgal cells (2.5-3.0 µm in diameter) have a thin electron-dense polysaccharide envelope separated from the cell's outer membrane by a narrow supramembrane space (Figure 3B). There is also a chloroplast, which can contain electron-transparent inclusions, most notably starch grains. Granules are often present between the thylakoid membranes and directly in the cytoplasm of the microalgae (Figure 3B). In addition, it was noted that no bacteria were found in the mesohyl of healthy primmorphs. In the mesohyl of infected cell cultures, rod-shaped bacteria were found in the primmorphs 24 h after infection (Figure 3D). The bacteria had an enlarged, folded outer membrane, and an electron-transparent halo was observed around the bacterial cells, indicating their ability to lyse the surrounding components of the extracellular matrix. In addition to the mesohyl, bacteria were present in amoebocytes, which they penetrated via phagocytosis. The system of intracellular membranes was destroyed, and vacuolization was enhanced. The symbiosomes with microalgae enclosed in them were preserved in the cytoplasm of amoebocytes. Some symbiotic microalgae left the host cells and were located within the extracellular matrix. The destruction of primmorph cells reached the terminal stage on day 7 after infection (Figure 3E). The microalgae were located directly in the mesoglea, where they became accessible to the action of bacteria on day 7 after the start of the infection. The cytoplasm in the cells of primmorphs was fragmented, resulting in the absence of whole, functionally active cells. We observed that the extracellular matrix also contained symbiotic microalgae infected with bacteria, which, through division, subsequently formed colonies of bacteria united by contact processes (Figure 3F). The formation of bacterial colonies was accompanied by the lysis of the components of the microalgal cytoplasm, with a polysaccharide shell remaining that enclosed the bacteria (Figure 3G). Thus, as the infection progressed, the cells of primmorphs became lysed. The strain Janthinobacterium sp. PLB02 was isolated from a cell culture of primmorphs infected with the Janthinobacterium sp. strain SLB01. The bacteria were rod-shaped, motile, and aerobic; in addition, the purple pigment violacein appeared on the second day. A morphological analysis showed that the bacteria are short rods up to 2.0 µm long and 0.3 µm in diameter, with the two-layer outer membrane typical of Gram-negative bacteria (Figure 3C). The cytoplasm, in most cases, was granular and electron-dense, with a well-defined nucleoid zone, and the cells were flagellated. Comparison of Genomes of Janthinobacterium spp. Strains In the present study, we compared the genomic contents of two strains: Janthinobacterium sp. SLB01 and the reisolated Janthinobacterium sp. PLB02. We assembled the genome of the Janthinobacterium sp. 
strain PLB02 from sequence data in the same way as was done for the Janthinobacterium sp. SLB01 strain. The final genome assembly statistics of the raw read count, genome size, number of genes, pseudogenes, protein-coding sequences, tRNA and noncoding RNA, and references to genome reports are presented in Table 1. A genome completeness analysis with benchmarking universal single-copy orthologs (BUSCO) [35] showed results for the strains Janthinobacterium spp. SLB01 and PLB02 of 99.1% complete (not fragmented) and 0.9% missing BUSCOs. The fully assembled genomes comprised 6,467,981 bp for strain Janthinobacterium sp. SLB01 and 6,417,505 bp for strain Janthinobacterium sp. PLB02, and exhibited similar G+C contents (62.63% and 62.65%, respectively). Genome annotation with PGAP revealed 5643 genes (5502 protein-coding) for strain SLB01 and 5651 (5510 protein-coding) for strain PLB02, as shown in Table 1. We compared the genomic contents of the Janthinobacterium sp. strain SLB01 and Janthinobacterium sp. strain PLB02 using Roary [34] and found that most of the genes were the same (with a homology of more than 99%). Phylogenetic Relationship We built a phylogenetic tree for the Janthinobacterium species to compare the genomic features of both strains (Janthinobacterium spp. SLB01 and PLB02) with closely related species [38]. Strain Janthinobacterium sp. PLB02 showed the highest phylogenetic affiliation to strain Janthinobacterium sp. SLB01 of the phylum Proteobacteria, family Oxalobacteraceae (Figure 4). The phylogenetic tree based on 400 universal marker genes using PhyloPhlAn (a maximum-likelihood method) [35] showed that the genomes of Janthinobacterium sp. SLB01 and Janthinobacterium sp. PLB02 are homologous to each other and very closely related to the psychrotolerant strain J. lividum MTR (Figure 4). Analysis of the Virulence Genes We compared the obtained genomes of the strains Janthinobacterium sp. PLB02 and Janthinobacterium sp. SLB01 with each other and analyzed the encoded virulence proteins and key genes, such as the genes for violacein, floc formation, and the type VI secretion system [21]. We found that the strains were 100% homologous to each other in terms of virulence factors. Earlier, we showed that the strain Janthinobacterium sp. SLB01 was able to produce violacein and contained the violacein synthesis operon vioABCDE [21,22]. Isolated from primmorphs, the strain Janthinobacterium sp. 
PLB02 also produced the pigment violacein, and the genome contained the violacein synthesis operon vioABCDE. The flanking regions, gene coordinates and locus names are presented in Figure 5 and Table 2.

Table 2 (partial).
Gene   Locus tag (SLB01)   Locus tag (PLB02)   Start (SLB01)   Start (PLB02)   End (SLB01)   End (PLB02)   Identity (%)   Length SLB01 (bp)   Length PLB02 (bp)
vioA   -                   -                   1353472         3909260         1354779       3910567       100            1308                1308
vioB   F3B38_RS17240       J3P46_17340         1354776         3910564         1357796       3913584       100            3021                3021
vioC   F3B38_RS17245       J3P46_17345         1357798         3913586         1359087       3914875       100            1290                1290
vioD   F3B38_RS17250       J3P46_17350         1359087         3914929         1360205       3915993       100            1119                1065
vioE   F3B38_RS17255       J3P46_17355         1360216         3916004         1360216       3916585       100            582                 582

We found 100% homology between the strains Janthinobacterium spp. SLB01 and PLB02. We used the generally accepted names of the genes of the violacein synthesis operon vioABCDE instead of the annotated names when comparing their genomes (Table 2). Our previous study discovered gene clusters involved in floc formation [21,22]. We found that the clusters of genes of the Janthinobacterium sp. strain PLB02 have 100% structural similarity to the genome of the strain Janthinobacterium sp. SLB01 (Figure 6). Previously, we showed that the Janthinobacterium sp. strain SLB01 formed a strong biofilm rich in exopolysaccharides (EPS) in the stationary phase [21,22]. Interestingly, the isolated strain Janthinobacterium sp. PLB02 also formed a strong biofilm. The strain produced floc and biofilm via exopolysaccharide biosynthesis and PEP-CTERM/XrtA protein expression. A genome analysis showed that the Janthinobacterium sp. strain PLB02 has all the necessary gene cassettes for flocculation, similar to strain Janthinobacterium sp. SLB01. Both genomes contain the system glycosyltransferase, the putative exosortase XrtA (previously called EpsH), the PEP-CTERM system histidine kinase PrsK, the PEP-CTERM system-associated sugar transferase, the sensor histidine kinase of a two-component system, and the PEP-CTERM-box response regulator transcription factor PrsR. Localization, annotation, and identity percentages of these genes are presented in Table 3. Moreover, we analyzed the type VI secretion system (T6SS) as the primary virulence factor in the genome of the strain Janthinobacterium sp. PLB02 and found that the genes of both strains were 100% identical to each other. As in the strain Janthinobacterium sp. SLB01, the genome of the isolated strain Janthinobacterium sp. PLB02 contained all three categories of genes required for the type VI secretion system to function. The isolated strain Janthinobacterium sp. 
PLB02 was identical to strain Janthinobacterium sp. SLB01. Discussion In this study, we showed that the strain Janthinobacterium sp. SLB01 isolated from a diseased sponge L. baikalensis and infected cell culture of primmorphs is the same and that the genomes of the strains are identical. The strains Janthinobacterium sp. SLB01 and Janthinobacterium sp. PLB02 are pathogens for cell cultures of primmorphs and the sponge L. baikalensis. After experimental infection of the cell culture of primmorphs, we found that short rod-shaped bacteria of the strain Janthinobacterium sp. SLB01 grew quickly and parasitized sponge cells and their symbiotic microalgae. We detected the death of the symbiotic microalgae (Chlorophyta) and the sponge cells in the infected primmorphs, as well as increased bacteria counts. The bacteria Janthinobacterium sp. was found in the mesohyl of cell cultures of primmorphs 24 h after infection and was able to lyse the primmorph cells. The characteristic features of the structure of Janthinobacterium sp. during the development of the infectious process included the presence of a folding outer membrane, an increase in the periplasm, and an electron-transparent zone of lysis around the bacterial cells. It is known that the outer cell membrane and periplasm of Gram-negative bacteria serve as the compartments responsible for the production of secondary metabolites, including proteolytic enzymes and other factors of bacterial cell virulence [39,40]. An increase in the surface area of the outer membrane and the volume of the periplasm in Janthinobacterium sp. over the course of infection indicates the activation of processes aimed at realizing their pathogenic potential. We observed that the Janthinobacterium sp. penetrated the cytoplasm of microalgae and lysed their contents, using nutrients for growth, division, and the formation of colonies of the bacteria. The infection process progressed in the sponge cells of the primmorphs and the microalgae and reached the terminal stage on the day 7 of infection, thus indicating a rapid course of the pathogenic process. We observed the destruction of the photosynthetic apparatus, the loss of chlorophyll autofluorescence, and the death of symbiotic microalgae in all the infected primmorphs. Earlier, we showed that the cell culture of the primmorphs of healthy sponge L. baikalensis is identical to that of sponges, and can be used as a model system for studying the diseases of Baikal sponges [18]. Here, we showed that during the experimental infection of the cell culture of primmorphs with the strain Janthinobacterium sp. SLB01, the bacteria attacked eukaryotic cells of the microalgae and then acquired the released nutrients after cell lysis ( Figure 3F,G). A comparison of the two genomes from Janthinobacterium sp. SLB01 and Janthinobacterium sp. PLB02 isolated from the diseased sponge and infected cell cultures of the primmorphs showed that genomes of these bacteria have identical genomic content. The genome sizes, gene counts, and G+C content were very close. The genome size of Janthinobacterium strains slightly differed due to the number of Ns (unknown nucleotides) after the scaffolding procedure. Moreover, we found that these species are rod-shaped Gramnegative bacteria that produce violacein, a compound with antimicrobial and antiviral properties that is toxic to eukaryotic cells [41]. The isolated bacteria Janthinobacterium sp. PLB02 can colonize the space and possibly suppress the grown microalgae with the pigment violacein. 
This pigment production was observed in the infected primmorphs, and all the genes (operon vioABCDE) were present in its genome. We identified five genes encoding VioA, VioB, VioC, VioD, and VioE proteins related to violacein biosynthesis similar to those identified in published Janthinobacterium sp. SLB01. Earlier, we observed that one essential strategy of the Jantinobacterium sp. strain SLB01 is the secretion of virulence factors through the cell membranes of the victim to achieve a potential target [21,22]. In addition, an identical T6SS secretion system of the strain Jantinobacterium sp. SLB01 was found in the isolated Janthinobacterium sp. PLB02. Both strains' genomes contained all three categories of genes required for the function of type T6SS [42,43]. Bacterial strains Janthinobacterium spp. SLB01 and PLB02, based on a comparison of complete genomes, showed similarity with the strain J. lividum MTR. Interestingly, J. lividum either caused necrosis on mushroom tissue blocks or colonized the skin of some amphibians, conferring protection against fungal pathogens [27,44]. In addition, isolated bacteria also produced floc formation and strong biofilm in the stationary phase. When cultivating the strains Janthinobacterium sp. SLB01 and Janthinobacterium sp. PLB02, we observed biofilm and floc formation in the diseased sponges and the infected cell cultures of primmorphs of L. baikalensis. A genomic analysis of the two strains found RpoN, PepA, XrtA, PrsK, and PrsR gene clusters present in the formation of floc and 100% similarity between the strains ( Table 2). Using an ultrastructural analysis, we found that the symbiotic microalgae were completely enclosed in a thick microbial biofilm during the infection of primmorphs with the strain Jantinobacterium sp. SLB01 ( Figure 2B). Moreover, on day 7 after infection, it was discovered that the formation of bacterial colonies was accompanied by utilization of the components of the microalgal cytoplasm; there remained only a polysaccharide shell with bacteria enclosed in it ( Figure 3G). Thus, floc formation and biofilm can negatively affect the physiology of the life of the host (sponge L. baikalensis) due to clogging of the pores. These negative effects of biofouling on the functioning of the filter-feeding marine sponge Halisarca caerulea were previously reported [45]. Exopolysaccharides (EPS) are known to be the main component of the biofilm produced by the species of Oxalobacteraceae [46]. It is known that the family Oxalobacteraceae is characterized by the presence of extremely ecologically diverse species of microorganisms and contains environmental saprophytic organisms, phytopathogens, and opportunistic pathogens, including those common in freshwater ecosystems [47]. The genomes of many environmental isolates of Janthinobacterium from ice, water, sediments, and soils were sequenced [25][26][27], but strains of Janthinobacterium sp. strain SLB01 and the new Janthinobacterium sp. strain PLB02 from the Baikal sponge and cell culture of primmorphs were isolated in this study for the first time. The disease and mass mortality of sponges and corals have been observed worldwide in the marine environment in recent years [48][49][50][51][52], and corresponding die-off events threaten overall sponge-associated biodiversity [53][54][55][56]. 
These changes in sponge-microbe interactions appear to be associated with climate change and the occurrence of opportunistic infections resulting from changes in water temperature caused by global warming, light intensity, and salinity [57][58][59][60][61][62]. Previously, Webster et al. described the pathogenic bacterial strain NW4327 isolated from an infected marine sponge, Rhopaloeides odorabile, in the Great Barrier Reef [63]. Choudhury et al. reported the isolation of a pathogenic Pseudoalteromonas agarivorans strain, carrying pathogenicity genes, from diseased sea sponges [64]. Thus, in this study, we sought to reproduce Koch's postulates with a cell culture of primmorphs. The present study is the first of its kind. We were able to isolate the new strain Janthinobacterium sp. PLB02 after infecting a cell culture of primmorphs with the strain Janthinobacterium sp. SLB01 isolated from a diseased sponge L. baikalensis. We found that the strains are the same and carry virulence factors in their genomes. We documented the interactions of the Janthinobacterium sp., marking this species as a potential pathogen for cell cultures of primmorphs of the Baikal sponge L. baikalensis. The results of this study will help expand our understanding of microbial interactions in the development of disease and the death of Baikal sponges.
6,889.4
2022-12-21T00:00:00.000
[ "Biology" ]
Impacts of Chromatin States and Long-Range Genomic Segments on Aging and DNA Methylation Understanding the fundamental dynamics of epigenome variation during normal aging is critical for elucidating key epigenetic alterations that affect development, cell differentiation and diseases. Advances in the field of aging and DNA methylation strongly support the aging epigenetic drift model. Although this model aligns with previous studies, the role of other epigenetic marks, such as histone modification, as well as the impact of sampling specific CpGs, must be evaluated. Ultimately, it is crucial to investigate how all CpGs in the human genome change their methylation with aging in their specific genomic and epigenomic contexts. Here, we analyze whole genome bisulfite sequencing DNA methylation maps of brain frontal cortex from individuals of diverse ages. Comparisons with blood data reveal tissue-specific patterns of epigenetic drift. By integrating chromatin state information, divergent degrees and directions of aging-associated methylation in different genomic regions are revealed. Whole genome bisulfite sequencing data also open a new door to investigate whether adjacent CpG sites exhibit coordinated DNA methylation changes with aging. We identified significant ‘aging-segments’, which are clusters of nearby CpGs that respond to aging by similar DNA methylation changes. These segments not only capture previously identified aging-CpGs but also include specific functional categories of genes with implications on epigenetic regulation of aging. For example, genes associated with development are highly enriched in positive aging segments, which are gradually hyper-methylated with aging. On the other hand, regions that are gradually hypo-methylated with aging (‘negative aging segments’) in the brain harbor genes involved in metabolism and protein ubiquitination. Given the importance of protein ubiquitination in proteome homeostasis of aging brains and neurodegenerative disorders, our finding suggests the significance of epigenetic regulation of this posttranslational modification pathway in the aging brain. Utilizing aging segments rather than individual CpGs will provide more comprehensive genomic and epigenomic contexts to understand the intricate associations between genomic neighborhoods and developmental and aging processes. These results complement the aging epigenetic drift model and provide new insights. Despite these advancements, several fundamental questions remain. A prominent issue is the potential bias introduced by non-random sampling of CpGs. Most previous studies employed a sampling strategy to reduce the number of CpGs from~30 million (the total number of CpGs in the human genome) to statistically manageable numbers. For instance, the widely used Illumina 27K Chip analyzes approximately 0.1% of total CpGs in the human genome. These 'selected' CpGs, especially those used in commercially developed methylation arrays, are often biased toward promoters and CpG islands. However, DNA methylation is also highly prevalent in gene bodies and distal intergenic regions, with significant functional consequences (e.g., [23][24][25][26]). Moreover, most CpGs that exhibit variation of DNA methylation are located in gene bodies and intergenic regions [27]. The next-generation methylation chip (e.g., Illumina 450K Chip) examines~1.5% of total CpGs in the human genome, with similarly biased distributions favoring promoters and CpG islands [27]. 
Thus, sampling strategies could have significant consequences on the inference of DNA methylation changes with aging. Another important potential factor involves the variability of aging-associated DNA methylation changes across cell types and tissues. Although common aging modules may exist across different tissues (e.g., [5,11,15,28,29]), the extent to which tissue- or cell type-specific processes drive aging-associated DNA methylation changes remains unknown [29]. To shed light on these questions, it is necessary to compare patterns of aging-associated DNA methylation among different tissues. Moreover, performing such analyses using data from whole genome bisulfite sequencing, thus in principle examining all CpGs in the human genome, will yield unbiased genome-wide patterns. In addition, given that previous studies on the association between aging CpGs and chromatin states typically relied on limited numbers of a priori selected CpGs, it is useful to re-evaluate the relationship between chromatin states and CpG methylation using bisulfite-sequencing data. Here, we perform a comprehensive analysis of DNA methylation variation with aging using recently generated whole genome bisulfite sequencing DNA methylation data from the frontal cortex brain region of eight individuals [25,30]. We also generate a chromatin state map of the frontal cortex utilizing extensive histone modification data from the NIH RoadMap Epigenomics Project [31]. Although various patterns that are consistent with the random 'aging epigenetic drift' model are identified, we also observe that specific genomic regions follow distinctive aging patterns of DNA methylation. Notably, the integration of DNA methylation data sets and chromatin state maps reveals extensive co-variation of these two epigenetic marks. By comparing these results with previously reported genome-wide DNA methylation variation from CD4+ T lymphocytes [2], we can begin to address the heterogeneity of aging patterns among tissues. Furthermore, we introduce a new method to identify 'aging segments'. Aging segments are genomic regions with consecutive CpGs whose methylation changes in a concerted fashion with aging. Analyses of aging segments provide insights into the co-variation between DNA methylation and chromatin states as well as differences in the molecular mechanisms of aging-associated hyper- and hypo-DNA methylation. Results Tissue-divergent patterns of epigenetic drift based on nucleotide resolution whole-genome methylation maps We first describe global patterns of DNA methylation with respect to aging using whole-genome bisulfite sequencing data. Of the total 26.8 × 10⁶ autosomal CpG sites (in the human genome hg19 / GRCh37 build), we examine 25.4 × 10⁶ sites in frontal cortex samples from eight individuals ranging in age from newborn to 82 years and 9.0 × 10⁶ sites in CD4+ T-cell (blood) samples from three individuals (newborn and 26 and 103 years old). These comprehensive data confirm that the whole genome is heavily methylated: the average fractional methylation levels are 0.7976 (± 0.0093)/CpG in brain and 0.7756 (± 0.0097)/CpG in blood. Patterns of DNA methylation variation across different functional regions are generally consistent with previous findings: promoters, gene bodies and repetitive regions exhibit low, medium and high methylation, respectively (Fig 1A and 1B). CpG islands and Alu elements exhibit the lowest and highest levels of DNA methylation, respectively, in both data sets (Fig 1A and 1B). 
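To make the per-CpG quantities used throughout this section concrete, the following is a minimal, hypothetical sketch of how fractional methylation is derived from bisulfite-sequencing read counts and then averaged genome-wide or per annotation class. The column names and the toy counts are assumptions for illustration, not the study's actual data files or pipeline.

import pandas as pd

# Toy per-CpG table; real WGBS calls would cover ~25 million autosomal CpGs.
cpgs = pd.DataFrame({
    "chrom":       ["chr1", "chr1", "chr1", "chr2"],
    "pos":         [10_468, 10_470, 28_931, 5_021],
    "meth_reads":  [18, 3, 0, 25],      # reads supporting methylation
    "total_reads": [20, 21, 15, 26],    # total reads covering the CpG
    "region":      ["Alu", "Alu", "CpG island", "gene body"],
})

# Fractional methylation level per CpG site.
cpgs["frac_meth"] = cpgs["meth_reads"] / cpgs["total_reads"]

# Genome-wide mean per-CpG methylation (the study reports roughly 0.78-0.80).
print("genome-wide mean:", round(cpgs["frac_meth"].mean(), 3))

# Mean methylation stratified by annotation class
# (CpG islands low, gene bodies intermediate, repeats high).
print(cpgs.groupby("region")["frac_meth"].mean().round(3))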
With respect to genome-wide patterns of aging-associated DNA methylation changes, the blood data unequivocally indicate global hypo-methylation with aging accompanied by local hyper-methylation with aging at CpG islands (Fig 1A, S1A Fig). These patterns are hallmarks of the global aging epigenetic drift model [1-3,9,10,14,21,29,32,33]. Interestingly, brain methylation maps reveal a very different picture. Compared with the blood data set, the brain samples exhibit much less variation of DNA methylation with aging (Fig 1B, S1B Fig). Moreover, contrary to the aging epigenetic drift model, a slight increase in DNA methylation is noted in most genomic regions of brain. However, although the global pattern in the brain data does not conform to the previous model, CpG sites with extreme initial DNA methylation follow the expected trend. We examined extremely hyper-methylated (fractional methylation level > 0.8) and hypo-methylated (fractional methylation level < 0.2) CpGs. A significant increase in DNA methylation with age in hypo-methylated CpGs and a significant decrease in DNA methylation in hyper-methylated CpGs are detected regardless of the genomic context and tissue type. These patterns are especially pronounced during the time period up to young adulthood (ages 26 and 25 in blood and brain data, respectively) (Fig 1C, S2 Fig, paired t-test, one-tailed). Notably, no statistically significant differences in brain data were observed in samples from individuals between the ages of 25 and 82, whereas the blood data exhibit a significant decrease in DNA methylation. Nevertheless, mean DNA methylation levels of hyper-methylated and hypo-methylated CpGs are highly negatively correlated in brain data, and this correlation is most pronounced in CpG islands (Fig 1D). These analyses complement the aging epigenetic drift model by supporting its predictions from extremely hyper- and hypo-methylated CpGs in brain. Fig 1 (legend). Aging-associated changes in DNA methylation based on whole-genome bisulfite sequencing data. To represent aging patterns more clearly and in a comparable manner between the two data sets, only brain data from three individuals (with comparable ages to those in the blood data set) are presented; patterns from all eight individuals are highly similar to these simplified pictures. Data from 10,000 randomly selected CpGs are presented. (A) Comparisons of mean fractional methylation levels among 3 individuals (with 95% confidence intervals) across different genomic regions in blood. (B) Comparisons of mean fractional methylation levels (with 95% confidence intervals) among 3 individuals across different genomic regions in brain. (C) Data from extremely hypo-methylated (fractional methylation levels < 0.2) CpGs (upper panel) and extremely hyper-methylated (fractional methylation levels > 0.8) CpGs from the two data sets. (D) Methylation levels of extremely hyper- and hypo-methylated CpGs are strongly negatively correlated in brain. Distinctive patterns of aging DNA methylation across chromatin states in brain To examine the chromatin context of these methylation changes, we first generated a brain chromatin state map. From the 6 different histone modification profiles (H3K9me3, H3K27me3, H3K27Ac, H3K4me1, H3K4me3 and H3K36me3) generated by ChIP-Seq of brain tissue from the NIH Roadmap Epigenomics database, we inferred 14 chromatin states using ChromHMM [34,35] (Materials and Methods, S1 Text, S3 Fig). These include the following defined states (Fig 2A): active transcription (TxS), weak transcription (TxWk), active enhancers in transcribed regions (TxEnhAc), poised/weak enhancers or low signal (EnhP/low), active intergenic enhancers (EnhAc), weak intergenic enhancers (EnhWk), active 5' flanking promoters/enhancers (TssFAc), weak 5' flanking promoters/enhancers (TssFWk), weak promoters (TssWk), active promoters (TssAc), poised promoters (TssP), polycomb-repressed regions (PcRepr), heterochromatin or low signal (Heter/low), and constitutive heterochromatin (ConHeter). The proportion of sites assigned to different chromatin states is presented in S4 Fig. We then examined variations in DNA methylation across different chromatin states. DNA methylation levels of different chromatin states are highly and significantly different from each other (Fig 2B). Promoters, in particular active promoters (TssAc), exhibit the lowest methylation (0.0408 ± 0.0041). 
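The comparison of methylation levels across chromatin states amounts to assigning each CpG to the state segment that contains it and averaging within each state. Below is a minimal sketch of that join-and-aggregate step; the coordinates, state labels, and methylation values are invented for illustration, and a production analysis would use an interval tree or bedtools-style intersection rather than this simple per-row lookup.

import pandas as pd

# Hypothetical ChromHMM-style segmentation (one state per genomic interval).
states = pd.DataFrame({
    "chrom": ["chr1", "chr1", "chr1"],
    "start": [0, 10_000, 30_000],
    "end":   [10_000, 30_000, 60_000],
    "state": ["TssAc", "TxWk", "Heter/low"],
})

# Hypothetical per-CpG fractional methylation calls.
cpgs = pd.DataFrame({
    "chrom":     ["chr1", "chr1", "chr1", "chr1"],
    "pos":       [5_000, 12_000, 15_000, 45_000],
    "frac_meth": [0.05, 0.80, 0.75, 0.90],
})

def state_of(row):
    # Assign the CpG to the chromatin-state interval containing it.
    hit = states[(states["chrom"] == row["chrom"]) &
                 (states["start"] <= row["pos"]) &
                 (row["pos"] < states["end"])]
    return hit["state"].iloc[0] if len(hit) else "unassigned"

cpgs["state"] = cpgs.apply(state_of, axis=1)

# Mean DNA methylation per chromatin state (TssAc lowest in the study).
print(cpgs.groupby("state")["frac_meth"].agg(["mean", "count"]))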
On the other hand, transcription (TxS, TxWk) and heterochromatin or low signal (Heter/low) states exhibit the highest mean methylation levels (Fig 2B). These results are consistent with strong gene body methylation (TxS and TxWk) and methylation of heterochromatic regions (Heter/low). Interestingly, active and weak enhancers that are distally located in intergenic regions (EnhAc, EnhWk) are also highly methylated (65-70% methylated), which is in contrast to previous results reporting that enhancers are generally hypo-methylated (e.g., [36]). However, the DNA methylation levels of these regions are highly significantly reduced compared with flanking intergenic regions (Fig 2C), which is concordant with the relative hypo-methylation of enhancers [36,37]. On the other hand, states harboring strong enhancer chromatin signatures (including H3K4me1 and H3K27Ac) but located near transcription start sites (states TssFAc, TssFWk, annotated as flanking promoters/enhancers) are strongly hypo-methylated. Moreover, different chromatin states exhibit different degrees of aging-associated methylation changes (Fig 3A). A linear regression model was employed using ages as predictors and DNA methylation levels as response (Materials and Methods, S5 Fig). Combining chromatin states and DNA methylation changes with aging, we demonstrate that CpGs located in active promoters (TssAc) tend to remain stably hypo-methylated throughout the aging process and exhibit the least amount of variation (Fig 3A and 3B). On the other hand, active enhancers located in intergenic regions and gene bodies (TxEnhAc and EnhAc) exhibit the most dynamic patterns of DNA methylation during aging, undergoing significant hypo-methylation with aging. Interestingly, chromatin states harboring enhancer signals yet residing near TSSs (such as TssFWk and TssFAc, Fig 2A) exhibit less variability with aging (Fig 3A and 3B). In contrast, CpGs located in the 'poised promoters' state (TssP) and polycomb-repressed regions (PcRepr) exhibit substantial hyper-methylation with aging (Fig 3B). Overall, regions corresponding to chromatin states associated with active histone marks tend to exhibit negative DNA methylation changes with age (Fig 3C). In contrast, DNA methylation of chromatin states that harbor repressive or poised histone marks is positively correlated with age (Fig 3C). One caveat of the present analysis is that it utilized the chromatin state map generated from data from a specific individual. Consequently, the observed pattern may reflect individual-specific patterns. 
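To illustrate the regression step referred to above, here is a minimal sketch in which fractional methylation at a CpG is regressed on donor age, so that the slope (regression coefficient) summarizes the direction and magnitude of aging-associated change. The ages and methylation values are invented for illustration, and the study's own model may include additional terms or weighting.

import numpy as np

ages = np.array([0, 5, 16, 25, 35, 50, 64, 82], dtype=float)  # donor ages (toy)

# Toy methylation trajectories for two CpGs across the same eight donors.
cpg_meth = {
    "hypo_CpG_gaining": np.array([0.10, 0.14, 0.18, 0.22, 0.25, 0.30, 0.33, 0.38]),
    "hyper_CpG_losing": np.array([0.95, 0.93, 0.90, 0.88, 0.87, 0.84, 0.82, 0.80]),
}

for name, meth in cpg_meth.items():
    # Ordinary least squares: slope is the regression coefficient on age,
    # intercept is the estimated methylation at age 0.
    slope, intercept = np.polyfit(ages, meth, deg=1)
    direction = "hyper-methylation with age" if slope > 0 else "hypo-methylation with age"
    print(f"{name}: slope = {slope:+.4f} per year ({direction})")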
To take into account this possible individual-to-individual variability in chromatin states, we first extracted genomic regions that exhibit consistent chromatin states in 9 different cell lines (including embryonic stem cells, erythrocytic leukemia cells, B-lymphoblastoid cells, hepatocellular carcinoma cells, umbilical vein endothelial cells, skeletal muscle myoblasts, normal lung fibroblasts, normal epidermal keratinocytes and mammary epithelial cells) [35]. We subsequently re-examined the relationship between chromatin states and DNA methylation in this 'consistent' chromatin state map. Interestingly, intergenic distal enhancers (states 5 and 7 in ref [35]) tend to be variable across the 9 cell lines. This observation is potentially explained by the highly variable epigenetic nature of enhancers across various biological processes [30,[38][39][40][41]. Genomic regions corresponding to other chromatin states across the 9 cell lines reveal aging DNA methylation dynamics similar to those shown above (S6 Fig). Novel aging segments define distinctive epigenomic and functional neighborhoods Neighboring positions in the genome may exhibit similar epigenetic profiles and facilitate complex regulation [42,43]. For example, DNA methylation levels of nearby sites are highly correlated in diverse genomes [44][45][46]. Consequently, it is of great interest to investigate whether explicit genomic neighborhoods exist that epigenetically respond to aging in a similar manner. Aging methylation maps of approximately all CpGs in the human genome provide an exciting opportunity to explore this question. Specifically, we examine clusters of adjacent CpGs that exhibit similar patterns of methylation changes with aging using the maximal scoring subsequence algorithm [47]. This approach aims to identify all non-overlapping and continuous subsequences with the highest local scores and is used in a variety of genomic analyses (e.g., [48][49][50]). In our approach, subsequences correspond to clusters of adjacent CpGs that exhibit similar positive or negative correlations with age (Materials and Methods). We subsequently identified 133,650 positive aging segments (length: 140.2 × 10⁶ bp) and 7,661 negative aging segments (length: 31.9 × 10⁶ bp) from the brain data (S7 Fig, S1 Table). This novel analysis reveals several intriguing aspects of the epigenomic response to aging. First, genomic regions covered by the positive and negative aging segments exhibit variable lengths in the brain data (S7 Fig). Although the number of positive individual CpGs (those that increase methylation with aging) is highly similar to the number of negative CpGs based on the regression analysis (50.9% versus 45.7%, not including zeros; S5 Fig), the total length of negative aging segments is considerably shorter than that of positive segments. This observation, at least on the surface level, is consistent with the notion that increases in DNA methylation reflect regulation, whereas decreases in DNA methylation are caused by stochastic processes (e.g., [17,29]). 
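To illustrate the segment-finding idea, the following is a minimal sketch of one simplified, Kadane-style scan over per-CpG scores derived from the regression coefficients. It is a stand-in for the maximal scoring subsequence approach cited above, not the study's exact implementation: published algorithms such as Ruzzo-Tompa return all maximal non-overlapping subsequences in one pass, and the offset, score threshold and coefficient values below are assumptions chosen only for the example.

def scored_segments(scores, min_score=0.0):
    """Scan left to right; grow a candidate segment while its cumulative
    score stays positive, and report the best-scoring prefix of each
    candidate. A simplified stand-in for maximal scoring subsequences."""
    segments = []
    i, n = 0, len(scores)
    while i < n:
        if scores[i] <= 0:
            i += 1
            continue
        start, running = i, 0.0
        best, best_end = 0.0, i
        j = i
        while j < n and running + scores[j] > 0:
            running += scores[j]
            if running > best:
                best, best_end = running, j
            j += 1
        if best > min_score:
            segments.append((start, best_end, best))
        i = max(j, i + 1)
    return segments

# Per-CpG scores: regression coefficient minus a small offset so that long
# runs of near-zero coefficients do not accumulate (offset value assumed).
coefs = [0.03, 0.05, -0.01, 0.04, -0.20, -0.03, 0.06, 0.07]
offset = 0.005
scores = [c - offset for c in coefs]

for start, end, score in scored_segments(scores, min_score=0.05):
    print(f"positive aging segment: CpG indices {start}..{end}, score {score:.3f}")
# Hypo-methylating ('negative') segments can be found by negating the scores.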
Figure legend (in part): To facilitate the visualization of aging-associated DNA methylation changes, CpGs were divided into five classes based on their regression coefficients from the linear model: CpGs that rarely change methylation (absolute regression coefficient < 0.0002, bottom 5% quantile), slightly ([0.0002, 0.0034), 5-20% quantiles), medially ([0.0034, 0.0272), 20-80% quantiles), strongly ([0.0272, 0.0591), 80-95% quantiles), or very strongly (>= 0.0591, top 5%), referred to as never, slight, medium, strong and very strong, respectively. CpGs with fractional methylation levels of 0-0.05 and 0.05-0.2 are defined as unmethylated and hypo-methylated, respectively. We next examine the overlaps between the aging segments defined in this study and the aging CpGs identified in previous studies [11,15,51]. A critical difference between these previous studies and the current study is that the previous studies utilized methylation arrays, which include CpGs that were pre-selected (Illumina Infinium 27K chip, approximately 27,000 CpG sites). On average, the Infinium 27K BeadChip covers 2 CpG sites per RefSeq gene, several orders of magnitude fewer CpG sites than analyzed in the current study. On the other hand, these studies have greater statistical power than our study because they focus on fewer positions (reducing the burden of the multiple testing problem) in a considerably larger number of samples. Despite such differences, we observe that previously reported aging CpGs are highly enriched in aging segments (Fig 4A). In other words, our aging segments effectively capture previously identified aging patterns. Interestingly, positive aging segments exhibit enhanced enrichment for previously identified aging CpGs compared with negative segments (S8A and S8B Fig); the only exceptions involve fetal and child brain samples [51]. This is again consistent with the idea that positive aging segments more strongly reflect regulatory processes than negative aging segments. Positive and negative aging segments also exhibit distinctive patterns of enrichment and depletion of specific chromatin states. Negative aging segments are enriched for strong transcription, active intergenic enhancers and active 5' flanking regions (TxS, TxEnhAc, EnhAc, TssFAc). In addition, these segments are significantly devoid of poised promoters, polycomb-repressed regions and repetitive regions (ConHeter, Heter/low, PcRepr, TssP) (Fig 4B). In contrast, positive aging segments are enriched for poised promoters and polycomb-repressed regions but lack transcribed regions and active enhancers (Fig 4B). These observations are consistent with results from the previous section, indicating coordinated epigenetic responses between DNA methylation and chromatin states. Furthermore, positive and negative aging segments are enriched in distinctive gene functional categories (Table 1). Positive aging segments are involved in cell adhesion and development. In contrast, negative aging segments harbor many genes involved in RNA processing, metabolic processes, and protein ubiquitination (Table 1). Common and divergent patterns between blood and brain based on whole genome data To identify common aging patterns across tissues, we analyzed blood data (CD4+ T cells from three individuals, Heyn et al. [2]). Based on the extensive 51-chromatin-state map of CD4+ T cells from Ernst and Kellis [53], we determined whether different chromatin states exhibit differential patterns of DNA methylation with aging. 
Unlike the brain data set (S9A Fig), most CpGs in the blood data set exhibit hypo-methylation with aging across different chromatin states (S9B Fig), consistent with the observed genome-wide hypo-methylation with aging (Fig 1A). Of note, the weak/repressed promoters exhibit hyper-methylation with aging (positive regression coefficients with age) in the blood data as well (S9B Fig). When we applied the maximal segment algorithm, the number of resulting negative aging segments was significantly greater than that of positive aging segments (36,294 negative segments vs. 2,845 positive segments, S7 Fig, S1 Table). This finding is also consistent with the pervasive hypo-methylation with aging observed in the blood data set (Fig 1). However, these results should be taken with caution, as they are exclusively based on three individuals. A direct comparison of the brain and blood aging segments is technically not feasible due to the differences in the sample size (eight versus three individuals) as well as the total numbers of CpGs analyzed (three-fold difference between the two data sets, Materials and Methods). Nevertheless, 1,174 and 1,512 genes are included in the positive and negative segments, respectively, in both samples (S3 Table). Interestingly, genes appearing in the positive segments in both samples are highly over-represented in gene ontology terms associated with development (Table 2). For example, genes corresponding to the term 'embryonic morphogenesis' are 5-fold enriched in the positive aging segments. This term includes genes such as HOX genes, which are epigenetically suppressed post-embryonically. Common positive aging segments thus capture genomic neighborhoods of genes that are epigenetically silenced via gradual increases in DNA methylation. Genes located in the negative aging segments in both samples are enriched in GO terms related to metabolic process and phosphorylation (Table 2). Discussion Our study demonstrates widespread aging-associated variations in DNA methylation in the brain by examining approximately all CpGs in the human genome. Understanding the molecular mechanisms of such epigenetic drift will advance our knowledge on aging and aid in elucidating the dynamics of DNA methylation turnover at individual CpG sites. One previously observed pattern in the aging process and cancer involves global hypo-methylation coupled with promoter hyper-methylation [1,2,29,54]. After re-examining this hypothesis using whole-genome methylation maps, the expected pattern is observed for blood but not for brain (Fig 1A and 1B). In light of these results, we propose that CpGs with extreme initial methylation levels are more prone to DNA methylation drift regardless of genomic context (Fig 1C and 1D). Age-associated dysregulation of DNA methylation maintenance may cause these extreme states of hypo- and hyper-methylation to revert to intermediate methylation levels. However, for the remaining genomic regions, the direction of methylation changes with aging cannot be exclusively explained by the initial or mean methylation levels. For example, many hypo-methylated promoters exhibit minimal epigenetic drift with aging. Instead, we demonstrate that integrating histone modification data with aging DNA methylation maps provides the specific chromatin context for the observed patterns in brain (Fig 2 and Fig 3). 
Among the 14 chromatin states defined, active intergenic enhancers exhibit the most dynamic variation in DNA methylation with aging (Fig 3). This finding is consistent with previous studies indicating that enhancer hypo-methylation occurs with aging [11,[13][14][15][16][17], thus emphasizing the co-variation of histone modification and DNA methylation marks in aging. Furthermore, given that we could examine a much larger number of CpGs compared with previous studies, we demonstrate that originally hyper-methylated intergenic and intragenic enhancers are subject to strong hypo-methylation, which differs from the patterns for proximal enhancers. This observation may provide clues to identifying the underlying mechanisms of the co-variation between chromatin states and DNA methylation changes with aging. For example, many intergenic and intragenic enhancers are located in low CpG density genomic regions with high initial DNA methylation, whereas proximal enhancers and promoters are typically located in regions of high CpG density with low initial DNA methylation. These genomic differences and the initial epigenetic signals may affect how DNA methylation levels of specific CpGs change with aging. Future studies are necessary to refine the co-variation between chromatin states and DNA methylation changes, and to elucidate the underlying mechanisms. We also demonstrate that poised promoters and polycomb-repressed regions continue to increase DNA methylation with aging. A prominent example of this phenomenon is observed in the HOX clusters, which are epigenetically suppressed cooperatively via both DNA methylation and histone modifications (Fig 4C). Hyper-methylation of poised promoters could also be related to the initiation of de novo DNA methylation following H3K27me3 modification induced by polycomb complexes [55,56]. Comparing aging whole-genome methylation maps of brains to those of blood reveals intriguing similarities and differences between the two tissues. In both data sets, age-associated changes in DNA methylation are concentrated in intragenic and intergenic regions instead of promoters or exons (Fig 1A and 1B). Unfortunately, commercially available methylation chips tend to target genic and promoter regions, thus potentially limiting our ability to grasp the full extent of DNA methylation changes with aging. Numerous previous analyses of DNA methylation and aging focused on CpG islands and consequently observed aging-associated hypermethylation [57][58][59][60]. We also identify some common features of co-variation between chromatin states and DNA methylation variation. Specifically, aging-associated hyper-methylation of poised promoters and hypo-methylation of distal/intergenic enhancers are apparent in both data sets. Interestingly, blood samples exhibit a pronounced global hypo-methylation with aging [2], whereas brain samples do not exhibit an obvious pattern at the global level (Fig 1). Although caution should be used when interpreting this difference given the small number of samples, blood samples notably exhibited consistent hypo-methylation across many different types of CpGs compared with other tissues in a previous study [5]. Additional aging whole-genome methylation maps from diverse tissues will elucidate the details of the long-observed tissue differences in aging patterns noted among tissues. 
For example, epidermal whole-genome methylation maps exhibit minimal differences between young versus old populations; however, only two whole-genome methylation maps were compared [16]. The causes of among tissue differences in aging patterns are unclear. Some suggest the differences in proliferative potential across tissues as a factor in these differences [61]. This hypothesis may be worth re-visiting in light of a recent suggestion that a similar factor underlie differential cancer susceptibility across tissues [62]. Although analysis of whole-genome nucleotide maps provides an unbiased representation of aging and DNA methylation, it also offers a significant challenge to commonly used linear model methods given the extremely large number of CpGs in the entire genome, thus posing a tremendous burden of multiple testing corrections. Additionally, a strong spatial correlation of DNA methylation of nearby CpGs has been observed across diverse species [44][45][46]. To overcome such statistical limitations and efficiently utilize the spatial correlation, here we investigated clusters of CpGs that respond to aging in a similar manner. Our method offers a statistically robust framework to analyze aging whole-genome methylation maps. Notably, aging CpGs identified in previous studies using Illumina 27K data [15,51,52] are highly significantly enriched in our aging segments, suggesting that the aging segments capture biologically meaningful genomic neighborhoods. Moreover, aging segments could be more robust in the presence of SNPs. Aging CpGs are highly affected by individual SNPs occurring at each CpGs, whereas aging segments harbor a large number of CpGs. We also observe intriguing functional ontology associations with positive and negative aging segments. Positive aging segments, which exhibit gradual hyper-methylation with aging, are highly enriched with genes associated with developmental ontology terms in the brain (Table 1). This finding is consistent with the notion that DNA methylation down-regulates neurodevelopmental genes [25]. Notably, positive aging segments are highly enriched for the Homeobox domain in the DAVID INTERPRO database (q < 10 −13 after Bonferroni correction). In particular, three HOX gene clusters (A, B and D) reside in positive aging segments (Fig 4C) occupied by poised promoters (TssP) and polycomb-repressed regions (PcRepr). These results indicate that DNA methylation and histone modification synergistically suppress HOX expression in adult brains [25,63]. Negative aging segments in brain are enriched with genes associated with metabolism, RNA processing, and protein ubiquitination ( Table 1). Enrichment of these gene ontology terms in the negative aging segments may indicate epigenetic up-regulations of these genes. For example, protein ubiquitination is an essential posttranslational modification for the removal of damaged or misfolded proteins [64]. Thus, ubiquitin plays a critical role in proteome homeostasis during aging [65,66]. Impairment of ubiquitination pathways leads to the accumulation of damaged and aggregated proteins, which are associated with aging as well as neurodegenerative disorders, such as Alzheimer's [67,68]. Gradual hypo-methylation of genes in the protein ubiquitination pathways may indicate an epigenetic up-regulation of this pathway during aging. Our study has several potential caveats. The brain data used only included 8 individuals, and the blood data were derived from 3 individuals. 
In addition, the fact that the brain and blood data were obtained from different sets of individuals should be taken into consideration when comparing the extent of epigenetic drift between the two tissues. The histone modification data were obtained from a single individual; however, these data were complemented with data from multiple cell types. In addition, the brain methylation maps were generated from cortex samples and thus could be affected by cellular heterogeneity [69]. A recent study suggests that cellular heterogeneity may have only negligible effects on aging-associated DNA methylation changes [15]. Nevertheless, our analyses provide a good comparison with previously identified aging patterns from similar cortex samples [11,15,28,51,57]. Analyses of DNA methylation and chromatin modification data from a larger number of biological replicates obtained from cell-sorted samples will allow researchers to avoid the aforementioned potential biases. Such data will almost certainly become available in the near future. Our methods will be fully applicable to such data and will help reveal the details of genome-wide differences across tissues and cell types and ultimately elucidate the molecular mechanisms underlying such differences and similarities between tissues.

Materials and Methods

DNA methylation and gene expression data

We analyzed DNA methylation maps generated by whole-genome bisulfite sequencing of frontal cortex samples [25,30] from eight individuals spanning a diverse spectrum of ages (a 35-day-old male; 2-, 5-, 12-, 16- and 25-year-old males; 81- and 82-year-old females). We also analyzed whole-genome methylation maps of CD4+ T cells from a male newborn, a 26-year-old individual of unknown sex, and a 103-year-old male [2]. In total, 25.4 × 10^6 CpGs from frontal cortex and 9.0 × 10^6 CpGs from CD4+ T cells were analyzed. To eliminate confounding effects of gender [51,70], data from sex chromosomes were excluded. The fractional methylation level of each CpG was calculated as "the number of methylated reads / total number of reads (= number of methylated reads + number of unmethylated reads)" [37,70]. Our main focus was the first data set (the 'brain' data set). We limited our interpretation of the second data set (the 'blood' data set), as it contained only three samples and fewer mapped CpGs. Gene expression data were obtained from the BrainSpan Atlas of the Developing Human Brain [63,71].

Age-based methylation modeling

Age-associated DNA methylation changes at individual sites were assessed using a linear model [11,15,51]. We used ln(age + 1) as the predictor and the fractional methylation level as the response variable to account for the rapid changes in DNA methylation that occur during early development [15,51]. Regression coefficients from this model indicate the strength and direction of age-associated DNA methylation changes.

Chromatin state map of brain

ChIP-seq data for six chromatin modifications (H3K9me3, H3K27me3, H3K27ac, H3K4me1, H3K4me3 and H3K36me3) from the prefrontal cortex of a 75-year-old female were downloaded from NIH Roadmap Epigenomics (www.roadmapepigenomics.org/). The GSM numbers for these data sets are GSM772833 for H3K27me3, GSM772834 for H3K9me3, GSM773012 for H3K4me3, GSM773013 for H3K36me3, GSM773014 for H3K4me1, GSM773015 for H3K27ac, and GSM773010 for the ChIP-seq input. We used ChromHMM [34] to train a multivariate hidden Markov model. First, histone modification reads were transformed into binary values using the default 200-bp bin size.
Second, the LearnModel function was used to learn the models. A transition probability matrix and an emission probability matrix were generated, and based on these each bin was assigned posterior probabilities for each state; the state with the highest probability was used to label that bin. To define the possible chromatin states with six histone modifications as fully as possible, models with 7 to 15 states were trained. We selected the 14-state model because it captured the key interactions amongst the chromatin marks without incurring unnecessary redundancy.

Identifying aging segments

We used a maximal scoring subsequence algorithm [47] to define aging segments. This approach aims to identify all non-overlapping, contiguous subsequences with maximal local scores [47,48]. For all mapped CpGs, the t-statistics from the linear regression model were used as pre-scores. It is advantageous to use t-statistics from the regression because these values reflect both the strength of the association (P-value) and the degree of change with aging (regression coefficient). After excluding outliers, the pre-scores were normalized to a [-1, 1] scale. Each mapped CpG was given a positive score if its DNA methylation increased with aging or a negative score if it decreased with aging. The outliers are CpG sites with strong positive or negative values and were therefore given 1 or -1, respectively. Unmapped CpGs were coded as 0. All other nucleotides were given -0.00257 to ensure a maximum distance of 250 bp between any two CpGs within a segment (see below). The maximum distance between two adjacent CpGs was determined based on the pattern of spatial correlation of DNA methylation in the human genome: the correlation rapidly decreases to the baseline near or before ~500 bp (S10 Fig). Consequently, we tried 100, 250 and 500 bp and found that the results are highly similar (e.g., S8C Fig). We present results using 250 bp as the maximal distance between two adjacent CpGs (more details are provided in S1 Text). Under this scheme, CpGs with a stronger increase or decrease of DNA methylation are given higher absolute scores. We then used the calculated scores as templates for the maximal segment algorithm using a custom in-house script (available upon request). Among the initially identified segments, only those subsequences ('aging segments') that exhibited statistically significant associations with age in a linear model (FDR-corrected q-value < 0.05) were retained. Consequently, we identified maximal clusters of adjacent CpGs that increase DNA methylation with aging (positive segments) and those that decrease DNA methylation with aging (negative segments).

Gene ontology, permutation test, and visualization

The DAVID 6.7 functional annotation tools [72,73] were used to examine enrichment of specific gene ontology (GO) terms; P-values were adjusted using the Bonferroni correction. Enrichment of aging CpGs from other data sets in our segments was assessed with a permutation test as follows: if our aging segments have n CpGs overlapping with another data set, we randomly chose n CpGs from the Illumina 27K chip and counted the number of overlaps with the other data set, designated m. This procedure was repeated T (= 100,000) times, and the empirical P-value was calculated as P = #{m > n} / T, that is, the fraction of the T randomizations in which the random overlap m exceeded the observed overlap n. We used the R base [74] and ggplot2 [75] plotting systems to generate figures, and the Gviz package was used to visualize and annotate UCSC tracks [76].

Supporting information

S1 Table. Positive and negative aging segments in blood and brain data.
(XLSX)

S2 Table. Gene ontology enrichment of aging segments in different chromatin states in brain. (PDF)

S3 Table. Genes located in common aging segments. (XLSX)

S1 Text. Details on defining chromatin states and identifying maximal segments. (PDF)

Acknowledgments

We thank members of our laboratory and Hema Nagrajan, Iksoo Huh, Yuehui Zhao, Ying Sha, and members of the Jung-Kyoon Choi laboratory at KAIST for discussion and suggestions on the manuscript.

Author Contributions

Conceived and designed the experiments: DS SVY. Performed the experiments: DS SVY. Analyzed the data: DS. Contributed reagents/materials/analysis tools: SVY. Wrote the paper: DS SVY.
7,744.8
2015-06-19T00:00:00.000
[ "Biology", "Medicine" ]
Linear Correction and Matching Method for 3D Line Structure Reconstruction

Introduction

Using the camera imaging model to recover the 3D structure of an object from an acquired 2D image sequence is one of the classic problems in the field of computer vision. 3D reconstruction refers to the establishment of mathematical models suitable for computer representation and processing of 3D objects. It is the basis for processing, manipulating, and analyzing the properties of 3D objects in a computer environment, and it is also a key technology for establishing virtual reality that expresses the objective world in a computer. How to make computers perceive 3D environmental information has always been one of the goals in the field of computer vision. The development of computer vision and deep learning has provided significant enhancements in fields such as autonomous driving, biometrics, video recognition, and drones. However, if these areas are to improve further, 3D reconstruction may be a good breakthrough. Existing 3D reconstruction technology generally uses structure-from-motion (SfM) [1] approaches and multiview stereo (MVS) [2] pipelines (e.g., PMVS [3] or SURE [4]). The former obtains a sparse point cloud model of the scene and the camera pose information and applies them to MVS to develop a dense 3D point cloud model. However, because the feature point dataset is very large, the MVS algorithm has a slow processing speed and often requires a large amount of time and computing memory. In addition, viewing the result in a point cloud viewer becomes extremely difficult. Moreover, image-based 3D reconstruction is affected by factors such as lighting and occlusion when extracting feature points. At the same time, it is sensitive to the accuracy of feature point matching and of camera calibration when calculating the camera projection matrix and solving for space points. Correctly extracting and matching feature points and accurately solving the 3D geometric relationships have always been difficult problems in the field of computer vision. Therefore, more complex geometric primitives can be selected as the data representation, such as planes (e.g., [5][6][7]) or lines (e.g., [8][9][10]). By analyzing the pinhole camera model, epipolar geometry, and various line segment detection algorithms, it is found that 3D reconstruction based on line matching is feasible. In addition, the surrounding man-made buildings have prominent line segment features; if the relevant 3D information is extracted and matched, the 3D reconstruction efficiency can be enhanced. The distinctive geometric features of line segments can thus be used to extract and match the related 3D information. These line segments can be obtained by any line segment detector. The two most commonly used line detection algorithms are LSD [11] and EDL [12]; both provide accurate detections with very few anomalies in a very efficient way. Figure 1 shows an example image with line segments obtained using the LSD algorithm. The literature shows that 3D reconstruction technology has evolved from point-based structure-from-motion algorithms to line-based multiview stereo vision algorithms, but each algorithm has its own advantages and disadvantages. Currently, how to obtain a high-precision 3D scene model remains a focus of 3D reconstruction research.
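Since the pipeline discussed below builds on LSD detections, a minimal sketch of that first step is shown here using OpenCV. This is not the authors' C++/CUDA implementation; it assumes an OpenCV build in which createLineSegmentDetector is available (the detector was missing from some 4.x releases) and a hypothetical input image building.jpg.

```python
import cv2

# Load a grayscale image; "building.jpg" is a placeholder file name.
img = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("building.jpg not found")

# LSD works directly on the grayscale image without parameter tuning.
lsd = cv2.createLineSegmentDetector()
lines, widths, precisions, nfa = lsd.detect(img)

print(f"detected {0 if lines is None else len(lines)} line segments")

# Draw the segments for visual inspection, similar in spirit to Figure 1.
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(vis, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 1)
cv2.imwrite("building_lsd.png", vis)
```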
Since the structure of the scene is usually complicated and of large scale, obtaining a high-precision 3D model is still a problem that deserves attention and requires a significant amount of resources, energy, and technical research.

Related Work

The point-based structure-from-motion algorithm relies heavily on distinctive textures in the scene and appears weak when facing monotonous environments. Although the SfM algorithm strives to create sufficient feature matches in order to successfully calculate the correct camera pose, the 3D models generated are usually very sparse. Since the linear characteristics of man-made building environments are very obvious and the line segment is the most common geometric feature in such environments, it is a good choice to perform feature extraction and matching using straight-line segment features. Bay et al. [13] used line segments from two uncalibrated images to determine the relative camera poses and to compute a piecewise planar 3D model. However, this method is not suitable for processing more than two images, is not robust when dealing with unstable lighting conditions, and is unable to handle some outdoor scenes. Further, Schindler et al. [14] incorporated the Manhattan-world assumption into the reconstruction procedure to decrease the computational complexity and to reconstruct buildings from two views. In 2010, Jain et al. [15] proposed a method for reconstructing lines from multiple different stereo images. The method does not require correspondences of line segments across images; it independently reconstructs the line segments using connectivity constraints and then calculates the final 3D model by merging. Although this method achieves good visualization, it is not suitable for large-scale datasets. In 2014, Micusik and Wildenauer [16] proposed a SLAM-like system with line matching across narrow baselines and showed impressive results, especially for indoor scenes. However, this method mainly addresses camera pose estimation, and 3D reconstruction from line segments alone remains extremely difficult. Hofer et al. [17] proposed a public line-based 3D reconstruction tool called Line3D++. The method first establishes a large set of potential line correspondences between images through weak epipolar constraints and uses a scoring formulation based on mutual support to separate correct matches from incorrect matches for each segment. The final line-based 3D model is obtained by clustering the 2D segments from different views using an effective graph-clustering formulation. However, the 3D reconstruction results of the Line3D++ method lose part of the line structure, mainly because the line detection result used in the line matching step is not located at the true edge of the image and there is no consistency check of the matching line pairs. In order to solve this problem, this paper first corrects the LSD line detection results produced by Line3D++ and then uses the epipolar constraint principle to eliminate the mismatched lines. Finally, an accurate and complete 3D reconstruction result is obtained.

Correct Line Position

Let the image be I. Use the Canny operator [18] to perform edge detection on I and obtain the edge map E. The gradient map G is then computed from E using first-order differences, where (i, j) are the coordinates of a pixel and d_x(i, j) and d_y(i, j) are the first-order partial derivatives in the x and y directions, respectively.
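The display equation for the gradient map did not survive extraction, so the sketch below uses the standard first-order difference form as an assumption rather than the authors' exact formula; the Canny thresholds and the input file name are likewise illustrative.

```python
import cv2
import numpy as np

# Placeholder input image; Canny thresholds are illustrative choices.
img = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "building.jpg not found"
E = cv2.Canny(img, 50, 150).astype(np.float32)   # edge map E

# First-order differences d_x and d_y of the edge map
# (assumed form: d_x(i, j) = E(i, j+1) - E(i, j), d_y(i, j) = E(i+1, j) - E(i, j)).
d_x = np.zeros_like(E)
d_y = np.zeros_like(E)
d_x[:, :-1] = E[:, 1:] - E[:, :-1]
d_y[:-1, :] = E[1:, :] - E[:-1, :]

# Gradient magnitude and direction; the paper's G is assumed to combine both.
G_mag = np.hypot(d_x, d_y)
G_dir = np.degrees(np.arctan2(d_y, d_x))

print("edge pixels:", int((E > 0).sum()), "max gradient:", float(G_mag.max()))
```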
Figure 2 shows the construction process of the extended gradient map. The gradient direction of the pixels surrounding the edge pixels is cleared, and all of the surrounding pixels are then regarded as edge pixels. At this point the gradient is calculated again, as shown in Figure 2(c). The above process is repeated until all pixels have been traversed, and the final result is obtained as shown in Figure 2.

(1) The first problem exists in the local area of a corner. If D(p1, e1) < D(p1, e2), the correction result is p1 ∈ e1 and p2 ∈ e2. In this case, the gradient gravitational map correction obtains an incorrect result; in fact, p1 should satisfy p1 ∈ e2. (2) The second problem occurs at abutting edges. If D(p3, e2) < D(p3, e3), the correction result is p3 ∈ e2 and p4 ∈ e3. In this case as well, the gradient gravitational map correction obtains an incorrect result; in fact, p3 should satisfy p3 ∈ e3.

In order to address the aforementioned problems, this paper proposes the following straight-line correction method. In Figure 5, the blue line is the correct edge position. Take the yellow straight-line segment l as an example; its two endpoints are P and Q. The yellow line is divided into n equal parts, and the division points are recorded as E1, E2, . . ., E(n-1). The corrected positions P' and Q' of the endpoints P and Q, and the corrected positions E1', E2', . . ., E(n-1)' of the division points, are calculated from the gradient gravitational map. Let K denote the slope of a line segment, and calculate the slopes K(P'E1'), . . ., K(E(n-1)'Q') of each small line segment after correction. Then sort this sequence in ascending order to obtain K1 < K2 < ... < Kn. Take the middle w consecutive slopes K[(n-w)/2]+1, K[(n-w)/2]+2, . . ., K[(n-w)/2]+w and calculate their standard deviation, sigma = sqrt((1/w) * sum_{i=1..w} (K[(n-w)/2]+i - Kbar)^2), where Kbar = (1/w) * sum_{i=1..w} K[(n-w)/2]+i. Set a threshold epsilon. If sigma < epsilon, then add K[(n-w)/2], . . ., K1, in that order, to the aforementioned w consecutive slopes, calculate a new standard deviation sigma' after each addition, and check the condition sigma' < epsilon. If the condition holds, continue to join the next slope and repeat the operation; otherwise, stop joining and remove the slope that was just added. Perform the same operations on K[(n-w)/2]+w+1, . . ., Kn. The sub-segments whose slopes are retained in this way are saved; let their total number be s. The total number of endpoints is then 2s, and connectivity is determined by the number of occurrences of each endpoint. Let N(x) denote the number of occurrences of endpoint x. If there are two endpoints satisfying N(x) = 1 and s - 1 endpoints satisfying N(x) = 2, then all the saved small line segments are connected, and the two points satisfying N(x) = 1 are taken as the new endpoint pair. The connected segment is then extended to maintain the same length as the original straight-line segment. The extension method is as follows. Assume that the distances from the two ends p and q of the line segment l in Figure 6 are shortened by d1 and d2, respectively, and that p11 and q11 are used as the starting nodes. The extended line vectors are then calculated with a vector function Vt, and V(p11) and V(q11) are determined to obtain a new pair of endpoints for the corrected straight-line segment.
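A minimal sketch of the slope-consistency filter described above is given below. The helper name, the toy coordinates, and the handling of vertical sub-segments (a tiny dx is substituted to avoid division by zero) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def consistent_slopes(points, w=4, eps=0.05):
    """Keep the sub-segments whose slopes agree, starting from the middle
    w slopes and growing outwards while the standard deviation stays < eps.
    `points` are the gradient-corrected positions P', E1', ..., Q' in order."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)
    slopes = d[:, 1] / np.where(d[:, 0] == 0, 1e-9, d[:, 0])  # avoid division by zero
    order = np.argsort(slopes)
    k = np.sort(slopes)
    n = len(k)

    start = (n - w) // 2
    kept = list(range(start, start + w))
    if np.std(k[kept]) >= eps:
        return []  # even the middle slopes disagree

    # Grow to the left, then to the right, while the spread stays below eps.
    for i in range(start - 1, -1, -1):
        if np.std(k[kept + [i]]) < eps:
            kept.append(i)
        else:
            break
    for i in range(start + w, n):
        if np.std(k[kept + [i]]) < eps:
            kept.append(i)
        else:
            break

    # Map back to the original sub-segment indices.
    return sorted(order[i] for i in kept)

# Toy usage: a segment sampled at 8 points, with two corrected points off the line.
toy = [(0, 0), (1, 1.0), (2, 2.1), (3, 2.9), (4, 4.0), (5, 5.1), (6, 9.0), (7, 7.0)]
print(consistent_slopes(toy, w=3, eps=0.2))
```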
The line matching results are then refined. The polar (epipolar) lines in the adjacent three images are calculated using the point feature matching results. Then, the matching results are combined to determine the final verified local feature area, and random sampling is used to verify the feature similarity in the small neighborhoods; in this way, incorrectly matched line features are eliminated. If a matching line exists in only two adjacent images, the above method is performed in those two images only. Figure 7 shows the process of determining small neighborhoods by combining epipolar lines. In Figure 7, take three adjacent images I1, I2, and I3 as examples. The blue lines are the polar lines, while the yellow and black lines represent one of the polar lines and the corresponding matching line, respectively. In this paper, the polar line and the corresponding matching line are used as references to obtain the small neighborhoods A, B, C, A', B', C', A'', B'', C'' around the line, that is, the areas surrounded by the green lines. The specific solution method is as follows. Firstly, the bisection points a', b', c', . . . are obtained by bisecting the line segment L', and, according to the epipolar constraint, the corresponding polar lines of a', b', c', . . . in the adjacent images are obtained. In order to speed up the calculation, the regional similarity can be computed by random sampling. Secondly, the size of each small neighborhood is determined. In the experiment, let the radius threshold be R, and save the pixels closer than the radius threshold in a new image Im. Finally, the gradient direction determines whether the neighborhood to which a pixel belonging to Im points is the left or the right neighborhood. In order to maintain a consistent orientation, the area pointed to by the straight line's gradient represents the right neighborhood, and the other side is the left neighborhood. The similarity of the line neighborhoods is determined by calculating the similarity of the pixel colors within the regions. Let the matching areas corresponding to L, L', and L'' be NE, NE', and NE'', respectively, and let the numbers of pixels in NE, NE', and NE'' be m, m', and m'', respectively. The neighborhood similarities of NE'-NE, NE'-NE'', and NE-NE'' are then calculated: the similarity between NE' and NE is given by equation (4), and the similarities between NE'-NE'' and NE-NE'' are calculated in the same manner. If the corresponding regions randomly sampled in the three images have high similarity, the matching straight line is judged to be correct. In this way, the straight lines are refined, and all triples of adjacent images in the dataset are processed in order to obtain the final line matching results. The construction of the gradient gravitational map is summarized in Algorithm 1, and the method for refining the line matching results is summarized in Algorithm 2.

Experiments

For the experiments, a 3.2 GHz CPU, 16 GB RAM, and an Nvidia GeForce GTX 1060 6 GB GPU were used. The proposed algorithm was implemented in C++ and (optionally) also in CUDA. In the experiments, the local neighborhood radius R was set to 5 pixels, the number of samples K was 30% of all small neighborhoods, the similarity threshold T was set to 10, and the proportional threshold P was set to 90%. The effectiveness of the proposed method is illustrated by three sets of comparative experiments. The first shows the results of line matching and purification. The second compares the sparse 3D point cloud model with the experimental results of the proposed method.
The third set compares the results of Line3D++ with the experimental results of the proposed method. Figures 8-13 show the experimental results.

Figures 8 and 9 show the results of line matching and purification. Figures 8(a) and 9(a) show the direct matching results with LSD, Figures 8(b) and 9(b) show the results of the straight-line correction, and Figures 8(c) and 9(c) show the results of the purification matching. In Figure 8, the situation shown in Figure 8(a) can easily lead to incorrect matching results; the main reason is that the line detected by the LSD at the edge position is offset toward the middle of the side plane. The proposed method can solve this problem.

ALGORITHM 1: Steps of constructing the gradient gravitational map. Fill in GM with E and EGM according to the above method. (4) Calculate the gravitational value at each location: if the gradient direction is 45°, draw a circle centered on the current position and fill GM(1, i + kx, j + ky) and GM(2, i + kx, j + ky) according to the edge pixels that intersect first; otherwise, fill according to EGM(i + kx, j + ky). (5) Return GM.

In Figure 9, Figure 9(b) eliminates lines 23, 5, 4, 3, 1, 8, and 26 from the matching result compared with Figure 9(a). Compared with Figure 9(b), Figure 9(c) removes line 1. Among all the removed matched line pairs, the pairs with obvious errors are lines 2 and 26; since they are not located at the true edges of the image, they cause matching errors. When the corrected result is used as the input for matching, a matching result with higher accuracy can be obtained. In addition, lines 1 and 5 in Figure 9(a) are obviously located at different edges; after being processed by the method proposed in this paper, this kind of error can be effectively reduced.

Figures 10 and 11 show a comparison between the sparse and the line-based 3D models: they show the sparse 3D point cloud models of the scene obtained by processing the image set with the SfM algorithm and the line-based 3D reconstruction results, respectively. It can be seen from the comparison that the sparse 3D point cloud model is able to represent the characteristics of the building, but the overall structure is not very obvious. On the other hand, although the 3D line segment model is rather vague at some curved features, the lines of the building are very obvious. The 3D line segment model can represent the geometric structure of the building more prominently, especially for buildings with many straight lines and few curves. In addition, the 3D point cloud model has poor reconstruction accuracy in the absence of texture, and the obtained 3D models often appear hollow. Compared with the 3D point cloud model, the 3D line segment model provides more complete structural information and better reflects the geometric topology of the scene; thus, the 3D line segment model provides highly meaningful semantic 3D information for reconstruction.

Figures 12 and 13 compare the Line3D++ results with the experimental results of the proposed method. It can be seen from the figures that the 3D reconstruction results of Line3D++ lack many lines. Compared with Line3D++, the proposed improved method recovers more lines and more local structures, which significantly improves the completeness of the object. The experimental results can be examined from different viewing angles and for different structures. For example, Figure 12(c) shows the position of the door, where the reconstructed line segments are very sparse.
The line segments reconstructed in Figure 12(d) are relatively dense, and many important lines are restored, making the structure of the door clearly visible. Figure 12(e) shows the position of the brick wall, but most of the reconstructed lines are vertical; there are only a few horizontal lines, and it is impossible to see what structure has been reconstructed. In contrast, Figure 12(f) restores a relatively large number of horizontal lines, making the line segments and the outline of the bricks clearer. Figure 13(c) shows a partial enlargement of the stepped portion; it can be seen that only a few lines have been reconstructed, while Figure 13(d) shows that the steps are reconstructed properly owing to the very high reconstruction completeness of the proposed method. Figure 13(e) shows a partial area of the wall; obviously, Figure 13(f) has more lines and better results.

Table 1 shows the number of feature lines of Line3D++ and of the method proposed in this paper for the two datasets. For the Castle dataset, the numbers of feature lines for Line3D++ and the proposed method are 1,590 and 1,676, respectively. For the Herz-Jesu dataset, the numbers of feature lines for Line3D++ and the proposed method are 1,704 and 2,394, respectively. The number of feature lines increased significantly for both datasets. Table 2 shows the RMSE of Line3D++ and of the method proposed in this paper for the two datasets; as can be seen, the proposed method has slightly higher accuracy. Reconstruction of the 3D line segment model was conducted for two classical datasets. By comparing the experimental results and the data in Tables 1 and 2 before and after using the proposed method, it can be seen that the proposed method effectively addresses the shortcomings in accuracy and visual quality, for example, too many stray lines and areas in which the characteristics of the buildings cannot be restored. Moreover, the proposed method improves the matching accuracy, produces a detailed model outline, and provides high 3D reconstruction efficiency.

Conclusion

This paper presents a linear correction and matching method for 3D reconstruction of target line structures and resolves the mismatching problem in the line matching step of the Line3D++ algorithm. The gradient map is extended to construct the gradient gravitational map in order to correct the positions of the straight-line segments detected by the straight-line extraction method, and the epipolar constraint is used to eliminate mismatched straight lines in order to improve the quality of the 3D reconstruction. The experimental results demonstrate and validate that the 3D reconstruction results obtained by the proposed method are more accurate and complete than those of Line3D++.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
4,901.4
2020-05-20T00:00:00.000
[ "Computer Science" ]
Queens stay, workers leave: caste-specific responses to fatal infections in an ant. BACKGROUND The intense interactions among closely related individuals in animal societies provide perfect conditions for the spread of pathogens. Social insects have therefore evolved counter-measures on the cellular, individual, and social level to reduce the infection risk. One striking example is altruistic self-removal, i.e., lethally infected workers leave the nest and die in isolation to prevent the spread of a contagious disease to their nestmates. Because reproductive queens and egg-laying workers behave less altruistically than non-laying workers, e.g., when it comes to colony defense, we wondered whether moribund egg-layers would show the same self-removal as non-reproductive workers. Furthermore, we investigated how a lethal infection affects reproduction and studied if queens and egg-laying workers intensify their reproductive efforts when their residual reproductive value decreases ("terminal investment"). RESULTS We treated queens, egg-laying workers from queenless colonies, and non-laying workers from queenright colonies of the monogynous (single-queened) ant Temnothorax crassispinus either with a control solution or a solution containing spores of the entomopathogenic fungus Metarhizium brunneum. Lethally infected workers left the nest and died away from it, regardless of their reproductive status. In contrast, infected queens never left the nest and were removed by workers only after they had died. The reproductive investment of queens strongly decreased after the treatment with both, the control solution and the Metarhizium brunneum suspension. The egg laying rate in queenless colonies was initially reduced in infected colonies but not in control colonies. Egg number increased again with decreasing number of infected workers. CONCLUSIONS Queens and workers of the ant Temnothorax crassispinus differ in their reaction to an infection risk and a reduced life expectancy. Workers isolate themselves to prevent contagion inside the colony, whereas queens stay in the nest. We did not find terminal investment; instead it appeared that egg-layers completely shut down egg production in response to the lethal infection. Workers in queenless colonies resumed reproduction only after all infected individuals had died, probably again to minimize the risk of infecting the offspring. Background Social insects, such as honeybees, ants, and termites, provide the prime examples of altruistic, self-sacrificing behavior in nature [1], but not all members of the insect society are equally prone to sacrifice themselves. Workers typically do not produce offspring and increase their fitness indirectly through increasing the reproductive output of the queen. They readily engage in costly or dangerous tasks, such as foraging and defense, while the queens focus on laying eggs in the safety of their nests [2][3][4]. In terms of inclusive fitness [5], the life of an individual worker therefore counts less for the colony than that of the egg-laying queen. This is illustrated, for example, by honey bee workers killing themselves when defending the hive, because their barbed stingers cannot be withdrawn from mammal skin. In contrast, honey bee queens have a smoother stinger with smaller barbs and in principle can use it repeatedly [6]. 
Similarly, soldiers of Globitermes termites and workers of Colobopsis saundersi ants "explode" and cover attackers with sticky secretions [7,8], but such "autothysis" has never been observed in queens. Finally, individuals likely to take over reproduction and gain direct fitness in the future avoid risky tasks, such as colony defense [9][10][11][12]. Here, we examine whether queens and workers also differ in the readiness for altruism in their response to lethal infection. Insect societies, in which closely related individuals constantly interact with one another in the confined space of a shared nest, provide ideal conditions for the transmission of pathogens [13]. Therefore, social insects have evolved powerful mechanisms to counteract the transmission of pathogens, including the destruction of infected brood, intense allogrooming, avoidance, killing, and even the walling-in of diseased nestmates [14][15][16]. One particularly striking behavior is altruistic self-removal: moribund workers of several social insects leave their nests to die outside in isolation, probably preventing the spread of a potentially dangerous pathogen to their nestmates [17][18][19]. In contrast, dying queens have been observed to stay in the nest, where their corpses may be groomed by workers for days or even weeks (e.g., Atta mexicana [20], Solenopsis invicta [21], Pogonomyrmex badius [22]). As the survival of the colony depends on the queen, it might more strongly invest into its immune defense than workers [23][24][25] and try to overcome an infection. Alternatively, fatally infected queens might turn all available resources into reproduction in the nest, resulting in terminal investment [26,27]. Indeed, Cardiocondyla obscurior ant queens dying from an infection increased their egg laying before imminent death [28], whereas a non-lethal injury led to the upregulation of immune genes and a temporary decline of egg laying rate [25]. Unfortunately, the behavior of dying queens and workers has never been studied in the same species, which makes it difficult to determine whether the response to pathogen stress and impending death is caste-specific or merely varies among species. Here, we infected workers and queens of the monogynous (single-queened) ant Temnothorax crassispinus with spores of the entomopathogenic fungus Metarhizium brunneum. We investigated the behavior of conspecific dying queens and workers in queenright colonies and also queenless colony fragments. This set up enabled us to examine the specific effects of both caste (queens vs. workers) and reproductive status (reproductive vs. non-reproductive individuals), as in queenless colonies a small number of socially dominant T. crassispinus workers may lay eggs [29,30]. From the above-cited observations [17][18][19][20][21][22] we expected fatally infected, non-reproductive workers from both queenless and queenright colonies to die in isolation outside the nest but queens and egg-laying workers to die inside. The results of our study indicate caste-specific behavior of queens and workers dying from an infection: all moribund queens remained in the nest and all infected workers, both reproductive and non-reproductive, left the nest and died in isolation. 
Furthermore, in contrast to our expectation that egg-laying rates increase after exposure to a lethal pathogen (see Cardiocondyla obscurior queens [28]), egg-laying by queens and workers stopped after Metarhizium treatment and egg numbers slowly began to increase again in queenless colonies only after all or at least most of the infected nestmates had died. Methods Temnothorax crassispinus is a small monogynous, monandrous ant species (a single, singly-mated queen per colony), which lives in small colonies of up to 300 individuals in hollow acorns or twigs throughout Eastern Central Europe [30][31][32][33]. Colonies were collected in July 2017 in deciduous forests around Regensburg, Germany. We split 19 colonies into queenright (QR) and queenless (QL) parts consisting each of 18 old (darkly colored) and 18 young (lightly colored) workers or soon to emerge worker pupae and, in QR colonies, one queen. After queen loss, young workers engage in dominance interactions and one or a few dominant workers begin to produce male offspring from unfertilized eggs after one to 4 weeks [29,34]. Old workers are needed for the successful establishment of split colonies in the laboratory as they conduct non-reproductive tasks before young workers are old enough to take over. To distinguish old workers from aging young workers, we marked all old workers by clipping the tarsae of the middle leg. Young workers remained unharmed. Colonies were reared in small plastic boxes (10x10x3 cm 3 ) with a moistened plaster floor in incubators under 23°C / 15°C day / night cycles. They were fed twice per week with cockroaches and sugar solution. Dead individuals were removed but not replaced as this would have disturbed the established dominance hierarchies [34]. Hence, worker numbers varied slightly among colonies. Four weeks after the experimental colonies were set up we counted and removed all brood. We then noted the egg number daily for 10 days to estimate the reproductive rate of each colony before the treatment. Thereafter, all individuals were censused again and all brood was removed. To investigate whether queens and / or young, egg-laying workers adjust their behavior and reproductive efforts to pathogen stress, we treated the queen / all young workers with either a control solution (0.05% Triton X) or a Metarhizium brunneum spores suspension (1 × 10^8 spores / ml 0.05% Triton X). Preliminary tests had shown that 71% of T. crassispinus workers develop a lethal infection within 10 days after being dipped for 1 s into 500 μl of a spore suspension of this concentration. M. brunneum is an obligate-killing entomopathogenic fungus that penetrates the host cuticle within 48 h after exposure [35,36]. Subsequent hyphal growth and the release of toxins result in the death of the host. The fungus completes its life cycle by producing infectious conidiospores on the host surface [35][36][37][38]. We treated all young workers and the queen if present with either the control or spore solution. Note that in most colonies a few young workers disappeared from the nest or died between set-up and treatment so that the number of these individuals is typically lower than 18 per colony (see results). The marked old workers remained completely untreated. After exposure all individuals were placed on sterilized filter papers to remove surplus fluid and thereafter were kept in groups isolated from their colonies for 36 to 40 h to inhibit immediate transmission of spores to their uninfected and untreated nestmates. 
This short period of isolation is not long enough to change colony odor or the dominance hierarchy in the colony, and none of the returned workers or queens were attacked after being placed back into the nest. After return to their colonies, we noted the position and condition of queens and workers every morning and counted the numbers of eggs and dead individuals once every 24 h for the first 10 days and subsequently three times a week. Sampling was conducted blindly regarding the control and experimental groups. All dead individuals were removed. To verify that the ants had died of an infection with M. brunneum their corpses were immersed for 5 s in 70% EtOH, rinsed with distilled water, and surface-sterilized for 1 min in 1% NaClO. Subsequently, they were again cleaned with distilled water and dried on filter paper. The gaster was removed with sterile forceps and frozen at − 20°C for ovary dissections (see below). The head and thorax were placed into a sterilized Petri dish containing a moist cotton ball and lined with moist filter paper. After covering the Petri dish with a lid, the dish was sealed with Parafilm to prevent the loss of humidity. Samples were checked regularly until spore growth was visible on the ant surface or for a maximum of 3 weeks. The reproductive status of workers from both queenless and queenright colonies was determined by dissecting the ovaries of 70 workers that had died outside the nest and whose corpses had been frozen within ≤24 h after death (41 infected and 9 uninfected young workers, 20 uninfected old workers). As only few eggs were laid by the colonies during the first weeks after treatment, we increased the temperature in the incubators to 26°C/22°C (day/night) on day 39 to accelerate egg production. Individuals that died later were similarly sterilized and checked for spore growth. Four control workers could not be used for surface sterilization due to an advanced decay. Hence, the sample size differs in this case from that of the survival analysis. The final analysis of death rate was conducted after 75 days if not stated otherwise. For egg number comparisons day zero was defined as the day of the return of the treated individuals to their colonies, while for survival analysis day zero was the day of the treatment. Data (Additional file 1) were analyzed with R 3.2.3 software (R Development Core Team) using packages "vegan" for PerMANOVA [39] and "survival" [40] for the Kaplan-Meier survival analysis and graph. Pairwise survival comparisons were conducted using the package "survminer" [41]. In addition to the treatment group as predictor we included the colony as a random effect ("frailty") in the Cox survival analysis model [40] of the young workers to control for survival differences between colonies. Data from surviving individuals were included as censored. Kruskal-Wallis tests were used for group comparisons, Mann-Whitney U-test (unpaired) and Wilcoxon signed-rank test (paired) for two-sample comparisons. All pairwise tests were corrected for multiple testing according to a false discovery rate (p adjust method: "fdr") [42]. Survival rate of treated workers and queens At the day of the treatment each colony fragment contained 12.5 ± 2.7 (mean ± sd) young workers and 10.5 ± 3.3 (mean ± sd) old workers. 
Direct spore contact strongly reduced the lifespan of both queens and workers: 158 of 196 young workers (median percentage of dead workers per colony 85%; Q1 67%; Q3 96%) and six of eight queens died within 75 days after treatment with the spore solution, in contrast to 33 of 266 young control workers (median percentage of dead workers per colony 9%; Q1 7%; Q3 14%; Mann-Whitney U-test: U = 330, p < 0.0001) and one of 11 queens (Fisher's exact test, p < 0.0063) exposed to Triton X, and 91 of 398 old workers (median percentage of dead workers per colony 18%; Q1 10%; Q3 33%) that had not been treated at all. Of the 158 dead spore-treated workers, 23 (14.5%, eight QL and one QR colony) did not produce any M. brunneum spores after surface sterilization. Four of them did not show any pathogen load and 19 produced spores of other, unidentified pathogens. One of the 33 (3%, one QR colony) young control workers that had died during the experiment was infected with M. brunneum. Of the 91 dead, untreated old workers, 11 (12%, two QL and 5 QR colonies) produced M. brunneum spores, 70 (77%) produced spores of unidentified pathogens, and 10 (11%, 4 QL and 4 QR colonies) did not show any pathogenic growth. Old workers infected with M. brunneum were excluded from the survival analysis, as these cases resulted from an uncontrolled infection by nestmates.

Whereas untreated old workers did not differ in lifespan across the different treatment groups (χ² = 8.7, df = 4, p = 0.068, Fig. 1), young spore-exposed workers showed strongly decreased survival compared to young workers treated with the control solution (χ² = 238, df = 4, p < 0.0001; for details see Additional file 2: Table S1). In addition to the treatment effect, we also observed that colonies were differently sensitive to pathogen exposure (treatment: χ² = 47.56, df = 1.0, p < 0.0001; colony: χ² = 27.02, df = 8.8, p = 0.0012). The percentage of infected workers still alive 10 days after exposure varied significantly among colonies (χ² = 41.8, df = 9, p < 0.0001; for details see Additional file 2: Table S2). There was no colony effect on survival in young untreated control workers (χ² = 6.7, df = 8, p = 0.57). The presence of an uninfected queen did not have any effect on worker lifespan (QRWInf vs. QLInf: p = 0.740, see Additional file 2: Table S1). The survival rate of spore-exposed queens was also strongly reduced (χ² = 8.6, df = 2, p = 0.013; survival (days): 5, 6, 6, 7, 10, 10, > 74, > 74).

Fig. 1 Survival of workers of the ant Temnothorax crassispinus was significantly decreased in young workers suffering an infection with the entomopathogenic fungus Metarhizium brunneum (QLInf, QRWInf) compared to young workers treated with a control solution (QLCo, QRCo, QRQInf) in queenless (QL) and queenright (QR) colonies. Old, untreated control workers did not show a reduction in lifespan between treatment groups. The experiment was terminated 75 days after the treatment, and the lifespans of surviving workers were included as censored data. A colony-wise comparison also shows significant differences in survival rates between infected and uninfected individuals (see main text).

Behavior of dying workers and queens

Metarhizium-treated individuals were not prevented from entering the nest or from approaching healthy individuals or brood; for example, infected and uninfected nestmates did not separate. One infected worker was observed carrying an egg.
During the first 10 days after treatment the proportion of workers observed outside the nest was significantly higher in colonies in which workers were infected (old workers, control colonies: median 0.017; Q1 0.013 Q3 0.022; old workers, infected colonies: median 0.060; Q1 0.041; Q3 0.079; Mann-Whitney U-test, U = 71, p = 0.0113, young workers, control colonies: median 0.008; Q1 0.000; Q3 0.011; young workers, infected colonies: median 0.034; Q1 0.021; Q3 0.042; U = 66, p = 0.026, Additional file 2: Figure S1). Additionally, in colonies with infected workers the number of dead young workers was significantly correlated with the number of young workers observed outside the nest on the previous day (Spearman rank tests, n = 11, 0.098 < r s < 1, mean 0.590, 0.0001 < p < 0.802, Fisher's combined probability, df = 22, χ 2 = 106.110, p < 0.0001) but not so in old workers (n = 9, − 0.188 < r s < 0.546, mean 0.183, 0.129 < p < 0.889, Fisher's combined probability, df = 18, χ 2 = 14.183, p = 0.717). Both observations suggest that dying workers leave the nest, as described previously in a related species [17]. Furthermore, we directly observed 38 M. brunneum-infected workers dying outsidethey were easily identified by their cramped body posture and stiff locomotion when touched with forceps. In four cases in three different Metarhizium-treated colonies we observed untreated, old workers outside the nest carrying the corpse of a young worker. Such behavior was never observed in control colonies. We could not systematically test the reproductive state of deceased workers because their ovaries were rapidly destroyed by intense hyphal growth. Nevertheless, dissections revealed that of all the examined workers that had died outside the nest, 27% had eggs in development (for details see below). Furthermore, direct observations of egg laying by four workers that died after spore exposure and the presence of an egg laid in isolation by one of 18 infected workers, which later all died, suggests that several of the workers that had died outside had been reproductive. In conclusion, both non-reproductive and reproductive workers left the nest to die outside. In contrast, queens were never observed alive outside the nest chamber and while all 158 lethally infected workers, including the egg layers, left the nest before death, the six moribund queens died in the nest (Fisher's exact test, p < 0.0001). Three of six infected dead queens, but none of the infected dead workers, showed mutilations of legs and / or antennae. One infected queen was observed lying motionlessly in a stiff posture inside the nest and 1 day later was found dead outside the nest. The corpse of another infected queen was carried outside the nest by an untreated old worker. One queen, treated with Triton X, was found decapitated outside the nest 4 days after treatment. Ovary dissection revealed that the ovaries of the latter queen were undeveloped and that its spermatheca was empty, indicating that it had not been reproductive. This colony was excluded from further analyses. Effect of pathogen-exposure on worker and queen fecundity Egg numbers produced before and after treatment were analyzed separately for queenless and queenright colonies, as infected queens died very rapidly while egg production in queenless colonies could be monitored over much longer periods. 
Workers typically do not become reproductive in the presence of the queen and both treatment and queen presence had a strong effect on eggs present 3 days before and 3 days after the treatment (PerMA-NOVA; treatment: F = 2.8, df = 2, p = 0.038; time: F = 7.6, df = 1, p = 0.0024; queen presence: F = 10.2, df = 1, p = 0.0007). All queenless control colonies had contained eggs before the treatment, but in only six of ten queenless infected colonies workers continued to lay eggs after the treatment (see Table 1). Furthermore, infected colonies did not only produce fewer eggs after the treatment, but as long as infected workers were present the eggs produced in the colonies frequently disappeared (Additional file 2: Figure S2). The presence of eggs was affected by the number of infected individuals and its interaction with the day of the experiment. The number of infected individuals decreased with time and there was also a colony effect (PerMANOVA; number of infected individuals: df = 1, F = 3.9, p = 0.035; day: df = 25, F = 3.08, p = 0.0001; colony: df = 9, F = 9.14, p = 0.0001; number of infected individuals*day: df = 25, F = 1.8, p = 0.004). Queens did not lay any eggs during isolation and egg production in queenright colonies could only be analyzed for the first 3 days after return to the colony, as the first queen had already died on the third day. The median egg number produced during 3 days did neither differ among queens before (χ 2 = 0.1, df = 2, p = 0.948) nor after the treatment (χ 2 = 0.08, df = 2, p = 0.959, Fig. 3). Queens appeared to sensitively react to the treatment with an almost complete reproductive shut-down regardless of whether they had been exposed to spores or only Triton-X (PerMANOVA; treatment: df = 1, F = 0.15, p = 0.957; number of young workers: df = 1, F = 0.8, p = 0.395; total number of workers: df = 1, F = 0.03, p = 0.956; treatment * number of young workers: df = 4, F = 0.62, p = 0.556). Although all queenright colonies had contained eggs before the experiment started, no new eggs had appeared in four of 18 colonies even before the treatment. After the treatment, no eggs were laid in seven of eight colonies with an infected queen and four of five colonies each with an uninfected queen and either infected or Triton X treated workers (analyzed per colony until the death of the queen; Fisher's exact test, p = 0.0006). Although queens reacted sensitively to the treatment itself, they also appeared to be capable of adjusting their reproductive rate to the presence of infected workers, as queens of the control group produced more eggs after treatment than control queens with infected workers (Per-MANOVA: treatment: df = 1, F = 7.2, p = 0.0063, day of the experiment: df = 23, F = 1.6, p = 0.03, colony: df = 8, F = 7.4, p = 0.0001, see Additional file 2: Figure S3). Interestingly, workers did not start to reproduce in the presence of the queen even when the latter refrained from laying eggs. Six of eight queens died within 10 days after spore contact but workers began to continuously produce eggs in the formerly queenright nests only 37 to 59 days after the queen's death (n = 4, median 40.5). In two additional colonies eggs were sporadically produced but vanished even after 78 days. The initial presence of an infected queen appeared to suppress and delay worker reproduction compared to the removal of a Fig. 2 Mean number of eggs produced in queenless colonies of Temnothorax crassispinus during 7 days before and after the treatment. 
Table 1 Number of queenless (QL) and queenright (QR) colonies of the ant Temnothorax crassispinus in the different treatment groups containing no eggs during the time before treatment (BT) and after the treatment (AT) (control workers: QLCo, QRCo, QRQInf; infected workers: QLInf, QRWInf; infected queen: QRQInf). In one QRCo colony all workers except one died during the experimental period, and this colony was excluded from the 10-weeks-after-treatment analysis.
At the end of the experiment, still no eggs were present in the nests of the two surviving infected queens. Colonies in which the queen had died appeared to produce more eggs than control queenright colonies (Mann-Whitney U-test: U = 4, p = 0.053; QRQInf n = 6, median 19, Q1 4.75, Q3 44.5; QRCo n = 5, median 0, Q1 0, Q3 4; one colony with an uninfected but deceased queen in QRCo and two colonies with a still living, infected queen in QRQInf were excluded from the analysis). The preparation of the ovaries of dead workers showed intense hyphal growth in the gasters of all workers with spore growth after surface sterilization (see Fig. 4). In 27 of 41 infected workers, internal organs, especially the ovaries, were no longer visible, and it was impossible to determine their reproductive status. The ovaries of nine of the remaining workers (64%; four QL and three QR colonies) had at least one egg in development, while the ovaries of five workers appeared to be undeveloped. In contrast, the ovaries of nine uninfected young workers and 20 uninfected old workers were clearly visible and differed in developmental status. One of nine uninfected young workers (11%, one QR colony) and nine of 20 untreated old workers (45%; one worker each in four QR colonies and three QL colonies, and two workers in one QR colony) had one or two eggs in development. Traces of previous egg laying (e.g., corpora lutea and/or developing eggs) were found in one infected, dead worker each from two colonies with an uninfected queen. Dissections of the queens revealed hyphal growth throughout the gasters of five of six infected queens. Ovaries and corpora lutea could be detected in only one of the five queens; in the sixth queen, ovaries and corpora lutea were still visible, but no maturing eggs were present (see Additional file 2: Figure S4).
Fig. 3 Mean number of eggs produced in queenright colonies of Temnothorax crassispinus during 3 days before (left) and after (right) the treatment. Both groups decreased their egg laying rate. The reproductive rate of queens infected with Metarhizium brunneum (QRQInf) differed from that of control queens (QRCo, QRWInf) neither before nor after the treatment. The treatment itself seems to result in a reproductive shutdown in queens of all treatment groups. Boxplots show medians, 25th and 75th quartiles, and 95% percentiles.
Fig. 4 Ovaries of control (left, death 2 days after treatment of colony members) and Metarhizium-infected (right, death 4 days after treatment) Temnothorax crassispinus workers. The gaster contents of infected workers are unidentifiable, as fungal hyphae have spread throughout the gaster (see microscope picture at the bottom right; magnification 40x), whereas the ovaries of the uninfected workers are clearly visible.
We here report that queens and workers of the ant Temnothorax crassispinus react differently to an infection with the obligatorily killing pathogen Metarhizium brunneum: while moribund workers, regardless of ovarian status, left the nest to die away from it, queens stayed and were carried out of the nest only after their death. Our study shows that altruistic self-removal is a caste-specific behavior and does not vary with the reproductive status of the workers. Furthermore, the presence of infected nestmates was associated with a strong decrease in egg laying, even by uninfected nestmates. Through altruistic self-removal [17,18] and the withdrawal from social interactions [19], infected workers certainly minimize the risk of transmitting a pathogen to brood and adult nestmates. Altruistic self-removal is not caused by a specific manipulation by the pathogen but is initiated by the workers themselves, who isolate themselves from other colony members to prevent possible risks for nestmates, as previously shown for moribund workers by Heinze and Walter [17]. Horizontal transmission of spores has been documented in several insects (e.g., [60-62]), and cross-infection may also have been the cause of old, untreated workers dying from an infection with this fungus, in particular as old workers were seen handling corpses. It is therefore easy to see why infected workers leave the nest to die isolated from their nestmates. Considering the social withdrawal and pathogen control in workers, it is more difficult to understand why infected queens stayed and were removed by workers only after they had died. Even a single-queened colony should benefit from the self-removal or the early expulsion of a fatally infected queen. However, compared to workers, most social insect queens have a longer lifespan and survive better under stressful conditions [63-65]. They might also have a higher chance of surviving an infection than workers by investing more in immune defense ([23,24]; see also below). Queens therefore might remain in the nest as long as there is a chance to recover. Furthermore, Rueppell and colleagues showed that while founding queens of Temnothorax are capable of conducting worker tasks and leave the nest to forage, they lose this behavioral plasticity in established colonies and remain in the nest even when workers are removed [66], whereas reproductive workers may still be capable of conducting non-reproductive tasks and leaving the nest (e.g., [67]). Hence, fatally infected queens might simply not have been capable of leaving the nest independently. Workers apparently do not discriminate against infected nestmates before the fungus has begun to produce spores on the cadaver of its host (e.g., [68]) and therefore could reduce the contagion risk for the colony only after the queen's death, by removing her corpse [20,69,70]. Both control and spore-treated queens refrained from reproduction for several weeks after treatment, and even 10 weeks later only half of them had recommenced laying eggs.
A reduction of reproductive effort has previously been observed in honey bee queens infected with the fungus-related pathogen Nosema apis [57] and in Metarhizium-infected queens of the ant Lasius niger [69]. In the latter, this was suggested to result from increased investment in the immune system, similar to the temporary drop in egg laying rates following an injury in the ant Cardiocondyla obscurior [25]. Since Temnothorax queens can live for several years [71], after a potentially dangerous treatment they might invest more strongly in pathogen defense and the restoration of their body condition than in reproduction. Even contact with the solvent, handling stress, or the absence of allogrooming and trophallaxis during the isolation phase appears to have affected the physiology of queens in such a way that they stopped egg laying. In addition, queens appeared to react sensitively to the presence of infected workers by reducing their reproductive effort. While a few Temnothorax workers quickly begin to lay eggs when the queen is removed from the colony [29,30,34,72], no worker egg laying was observed in the presence of a non-laying, infected queen. This supports the view that the cessation of egg laying does not necessarily mean the loss of queen control (see also [73] for social dominance of ovariectomized wasp queens). Similarly, only a few eggs were laid in queenless colonies after workers had been infected. Freshly laid eggs quickly disappeared, and egg numbers increased only after most of the infected nestmates had died, probably preventing the cross-infection of newly produced brood. Although Metarhizium infection strongly decreased the survival of queens and laying workers, we could not observe terminal investment [26,27], in contrast to what we have previously reported for Cardiocondyla ant queens [28]. This discrepancy might reflect the different life histories of the two species. The single queens of many Temnothorax colonies may live for 10 years and longer [71], while queens in the multi-queen colonies of C. obscurior are short-lived (mean: 26 weeks [74]) and can quickly be replaced by female sexuals, which, after eclosing from the brood, mate with their brothers in the natal nest [75]. Temnothorax queens might therefore preferentially invest in individual and colony immunity, as their future reproductive success depends more strongly on their survival than in Cardiocondyla. Further studies are needed to investigate how the life span of ant queens is associated with their immune investment.
Conclusions
Our data show that workers and queens of the ant Temnothorax crassispinus react differently to infection with the entomopathogenic fungus Metarhizium brunneum. Infected queens stayed inside the nest but refrained from reproduction. Workers, independent of their reproductive state, left the nest and died in social isolation. Both queens and workers reduced reproductive investment after the treatment with M. brunneum. Egg numbers increased with the decreasing number of infected individuals in queenless colonies, but workers did not lay eggs in the presence of the queen, even when the queen was sick and did not reproduce. Our study reports for the first time a caste-specific behavior in response to lethal infections in the same species.
Additional files
Additional file 1: Datasets supporting the article. (XLSX 61 kb)
Additional file 2: Table S1.
Pairwise survival comparison of young Temnothorax crassispinus workers in queenless (QL) and queenright (QR) colonies, treated with a control solution (QLCo, QRCo, QRQInf) or infected with Metarhizium brunneum (QLInf, QRWInf). Significant p-values (corrected for a false discovery rate, "fdr") are marked in bold. Table S2. Pairwise survival comparison of young Temnothorax crassispinus workers of queenless colonies infected with Metarhizium brunneum. Significant p-values (corrected for a false discovery rate, "fdr") are marked in bold. Figure S1. Proportion of old (left) and young (right) Temnothorax crassispinus workers leaving the nest in colonies with young workers either infected with Metarhizium brunneum (infected colonies) or treated with a control solution (control colonies), independent of queen presence. Both young and old workers leave the nest more often when they themselves or nestmates are infected. Boxplots show medians, 25th and 75th quartiles, and 95% percentiles (* 0.01 < p < 0.05; corrected for a false discovery rate, "fdr"). Figure S2. Reproductive rate of queenless Temnothorax crassispinus colonies during the first 25 days after the treatment. Eggs in infected colonies (top) vanish frequently, and the colonies produce fewer eggs than control colonies (bottom). Figure S3. Number of eggs produced in queenright T. crassispinus control colonies (QRCo, left) or colonies with a control queen and M. brunneum-infected workers (QRWInf, right). Whereas control queens increase their egg laying rate with time, the small numbers of eggs produced in colonies with infected workers vanish repeatedly. Figure S4. Ovaries of Temnothorax crassispinus queens infected with Metarhizium brunneum. The developmental status of the ovaries cannot be analyzed in five out of six queens, as the gasters show excessive spore growth. (DOCX 2791 kb)
Abbreviations: QL: queenless; QLCo: queenless control; QLInf: queenless infected; QR: queenright; QRCo: queenright control; QRQInf: queenright, queen infected; QRWInf: queenright, worker infected
7,978.6
2018-12-01T00:00:00.000
[ "Biology" ]
Intensify3D: Normalizing signal intensity in large heterogenic image stacks Three-dimensional structures in biological systems are routinely evaluated using large image stacks acquired from fluorescence microscopy; however, analysis of such data is muddled by variability in the signal across and between samples. Here, we present Intensify3D: a user-guided normalization algorithm tailored for overcoming common heterogeneities in large image stacks. We demonstrate the use of Intensify3D for analyzing cholinergic interneurons of adult murine brains in 2-Photon and Light-Sheet fluorescence microscopy, as well as of mammary gland and heart tissues. Beyond enhancement of 3D visualization in all samples tested, in 2-Photon in vivo images this tool corrected errors in feature extraction of cortical interneurons, and in Light-Sheet microscopy it enabled identification of individual cortical barrel fields and quantification of somata in cleared adult brains. Furthermore, Intensify3D enhanced the ability to separate signal from noise. Overall, the universal applicability of our method can facilitate detection and quantification of 3D structures and may add value to a wide range of imaging experiments. Fluorescence microscopy once relied on single-plane images from relatively small areas and yielded limited amounts of quantitative data 1. Nowadays, many imaging experiments encompass some form of depth or a Z-stack of images, often from distinct regions in the sample. Hence, much like for biochemical and molecular experimental datasets 2,3, accurate normalization of imaging signals, beyond background subtraction 4, could reduce tissue-derived and/or technical variation. Signal heterogeneity often arises from sample-specific factors (e.g. excessive blood vessel absorbance in live imaging, or non-uniform tissue clearing/antibody penetration in fixed tissues). These elements, combined with imaging distortions and illumination gradients, contribute to non-uniformity both within and across image stacks and may lead to erroneous conclusions. Such heterogeneity is exacerbated the larger the imaged structure is, and it often limits the ability to perform downstream applications such as feature extraction, threshold-based detection, co-localization, three-dimensional (3D) rendering, and image stitching. Standard filtering as well as total image correction tools that construct a mathematical model based on multiple single-plane images 5-7 may excel at improving specific types of shading or microscopy distortions. However, they do not account for differences that arise from sample-specific factors and are sub-optimal when signal-to-noise ratios, imaging conditions, and pixel distributions vary in a location-dependent manner, a typical property of 3D imaging. Specialized image processing tools for brain datasets have been designed to improve signal homogeneity but are limited to a specific use (e.g. somata detection) 8,9. Moreover, modern 3D image datasets are acquired using advanced imaging modalities 10-12 and are based on novel sample preparation techniques 13-18, some leaning on open source analysis tools 19,20. Specifically, 2-Photon (2P) and Light-Sheet (LS) microscopes enable the acquisition of images from both deep and wide tissue dimensions (Fig. 1a,c, left panel). However, every biological sample and imaging technique introduces its own acquisition aberrations: beyond mirror and lens distortions 21, the imaged preparations combine different characteristics (e.g.
cell density and lipid composition) that affect the optical penetration and light scattering at diverse tissue depths. Experimental limitations (antibody penetration, clearing efficiency) also constrain the ability to extract information from imaging experiments. Taken together, these difficulties call for the development of universal post-acquisition image normalization tools.
Figure 1. Intensify3D processing pipeline for 2-Photon and Light-Sheet image stacks. The latter requires an additional step to account only for tissue pixels in the image. The images in the stack are normalized one by one (XY normalization); after all the images are corrected, the entire stack is corrected (Z normalization) by semi-quantile normalization (other options exist). (c) Left panel: Light-Sheet imaging setup where the excitation light is orthogonal to the imaged surface. Red frame, middle panel: iDISCO immunostaining and clearing of CChIs as well as striatal cholinergic interneurons; the original image suffers from fluorescence decay at increasing tissue depth. Green frame, right panel: Intensify3D-normalized image stack. Images before and after normalization are presented at the same brightness and contrast levels.
A key assumption of the method is that the number of signal-carrying pixels is significantly smaller than that of the background pixels. Moreover, signal-portraying pixels are often sparse and variable across the imaged region, while some images in an image stack might not contain a signal at all. On the other hand, background pixels are (by the assumption above) numerous and exhibit a continuous pixel histogram (often following a Poisson distribution 22), allowing accurate assessment of quantiles. Leaning on these features, Intensify3D aims to detect and use the background for correct normalization of the signal. Consequently, our normalization algorithm initiates with an estimation of the background by removing as much as possible of the imaged signal. Then, the background intensity gradients are used for correction by local transformation (correction by division) of both signal and background, without compromising the signal-to-noise ratio (Fig. 1a,b).
Intensify3D stack normalization: methodological outline
We selected 2P in vivo brain images harboring fluorescently labeled cortical cholinergic interneurons 23 (CChIs), which present with challenging complexity and diversity, to demonstrate our capacity to reach enhanced signal uniformity across the entire 3D space. Our correction process employs two input parameters that are determined by the user and represent the imaged signal: (1) Maximum background intensity (MBI), which stands for the highest pixel value of the background in a selected image stack. (2) Spatial filter size (SFS), which should be determined based on the largest element in the signal and preferably be at least twice the size of a typical imaged structure (Supplementary Figs S1 and S2a). Based on the MBI, Intensify3D automatically assigns a matching value to the entire image stack (Supplementary Fig. S2b). Initially, each image in the stack is normalized separately across the XY dimensions. To generate an accurate representation of the image background, the signal-carrying pixels are deleted by applying a threshold (MBI) and replaced by values presenting similar distributions to those observed in the rest of the image (Supplementary Fig. S2c).
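To make this background-estimation step concrete, the following is a minimal sketch of the idea in Python (the released tool is implemented in MATLAB, so the function name and the use of NumPy here are illustrative assumptions, not the authors' code). Pixels above the user-chosen MBI are treated as signal and replaced by values resampled from the remaining, below-threshold pixels of the same image.

```python
import numpy as np

def estimate_background(image: np.ndarray, mbi: float, rng=None) -> np.ndarray:
    """Replace putative signal pixels (> MBI) with background-like values."""
    rng = np.random.default_rng() if rng is None else rng
    background = image.astype(float).copy()
    signal_mask = background > mbi                 # pixels assumed to carry signal
    bg_values = background[~signal_mask]           # empirical background distribution
    # redraw the signal pixels from the background pixel distribution of the same image
    background[signal_mask] = rng.choice(bg_values, size=int(signal_mask.sum()))
    return background
```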
Next, a background mask image is created by a Savitzky-Golay spatial filter (SFS), further removing features of the signal from the mask image while preserving general intensity gradients in the background (Supplementary Fig. S2d, middle panel). Note that larger values of SFS will result in normalization of larger-scale gradients in background intensity while ignoring smaller spatial changes. After the mask image is generated, it is used for normalization by division: the value of each pixel I(x) in the original image, I, is divided by the value of the corresponding pixel in the mask image, M, to produce a corrected image, N (for every pixel x, N(x) = I(x)/M(x)) (Supplementary Fig. S2d). The corrected image is then standardized to avoid artificial "overexposure" due to normalization. Finally, for normalization across the imaged stack, Intensify3D offers 3 types of Z normalization: (1) Upper quantile normalization, which shifts the intensity histogram of each image so that the upper quantile (based on MBI) matches across the entire stack (Supplementary Fig. S3a). (2) Contrast stretch normalization, which fits the intensity histogram to two intensity quantiles (tenth percentile and upper quantile) through linear interpolation (Supplementary Fig. S3b). (3) Semi-quantile normalization, which matches all image quantiles up to the upper quantile across the stack; based on the transformation of the quantiles, pixels higher than the upper quantile are corrected through contrast stretch (Supplementary Fig. S3c). Semi-quantile normalization achieved the best results in terms of homogeneity of both background and signal throughout the stack (Figure 1a,c, green frames).
Addressing background complexity
For cases where the imaged sample does not occupy the entire image (Fig. 1c), Intensify3D includes an option to automatically detect the area of the tissue (Supplementary Fig. S4) and thus avoid normalization of irrelevant areas of the image (e.g. imaging media). This feature is especially important when the relative size of the tissue section changes dramatically across the stack, as is often the case with Selective Plane Illumination Microscopy (SPIM) of large tissue samples (e.g. brain, heart). The automated detection option is based on principal component analysis (PCA) followed by either the application of a Gaussian mixture Expectation Maximization (E.M.) algorithm or K-means clustering to detect pixels that belong to the tissue. This step minimizes possible normalization artifacts due to media/tissue borders across the stack and accounts for changes in tissue size across the stack. Supplementary Fig. S1 presents a MATLAB graphical user interface (GUI) manual for using Intensify3D.
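Continuing the sketch from above, the per-image division by a smoothed background mask and a simple upper-quantile Z normalization might look roughly as follows. This is a hedged approximation, not the MATLAB implementation: a separable Savitzky-Golay filter stands in for the 2D spatial filter (SFS), `estimate_background` refers to the hypothetical helper sketched earlier, and the final rescaling is only one way to avoid artificial "overexposure".

```python
import numpy as np
from scipy.signal import savgol_filter

def normalize_xy(image, mbi, sfs=51):
    bg = estimate_background(image, mbi)                # hypothetical helper from above
    # smooth the background estimate along each axis to obtain the mask image M
    mask = savgol_filter(bg, window_length=sfs, polyorder=2, axis=0)
    mask = savgol_filter(mask, window_length=sfs, polyorder=2, axis=1)
    mask = np.clip(mask, 1e-6, None)                    # guard against division by zero
    corrected = image / mask                            # N(x) = I(x) / M(x)
    return corrected * np.median(mask)                  # rescale to a comparable range

def normalize_z_upper_quantile(stack, q=0.99):
    # scale each plane so that its upper quantile matches a stack-wide reference value
    ref = np.quantile(stack, q)
    return np.stack([img * (ref / np.quantile(img, q)) for img in stack])
```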
Results
To challenge the value of the Intensify3D normalization algorithm, we used in-house data from 2P and LS brain, mammary gland, and heart image stacks as well as simulated data. These represent distinct types of modern imaging platforms that are used for both visualization and quantification analyses and produce vast 3D data that can gain substantial additional value when normalized. Notably, each of these techniques introduces its own constraints both at the step of sample preparation and during the imaging process (detailed below), and our tool comes to correct both of these aspects.
Correction of in vivo 2-Photon imaging data facilitates accurate neurite detection and measurements
In a typical 2P brain imaging experiment, a cranial window is opened in the mouse skull and the gap between the objective lens and the surface of the brain is filled with a water-based medium (external buffer or gel). 2P excitation is achieved through a tunable near infra-red pulsed laser, and the emitted fluorescence is split by filtering the image through green and red light filters, yielding split signals that are detected by photomultiplier tubes (PMTs) 24. However, both the excitation light and the emitted light are subject to depth-dependent scattering; therefore, the excitation gradually becomes less efficient, which limits the power of detection with increasing depth despite the same amount of power being used. In addition, the detected photon emission is diminished accordingly, which leads to signal decreases both in intensity and in resolution (Fig. 1a). Moreover, blood vessels absorb red light more than the surrounding tissue 24. Thus, both the tissue and the imaging technology cause distinct difficulties, each of which needs correction to achieve appropriate normalization. The membrane composition, dendritic and axonal dimensions, and the morphology of neurons together determine their function 25-27, making accurate assessment of a neuron's structure crucial to understanding the scope of its performance. Cortical cholinergic interneurons (CChIs) 23 provide an intriguing example of a neuronal population with functional complexity 28. To access this specific neuronal population, we used mice that endogenously express a red fluorescent protein (ChAT_Cre X loxp_stop_loxp_tdTomato) in all cholinergic neurons. We then acquired 2P image stacks through a cranial window in an anesthetized mouse, with the same laser intensity across all depths (30 to 300 µm) (see Methods). Applying the Intensify3D normalization algorithm on this image stack added ample details to the observed structures without compromising their basic features. This is demonstrated by homogeneous image statistics, represented by the median and mean values across stack depths (Fig. 2a and Movie 1). To estimate the difference between pre- and post-normalization images for feature extraction capabilities, neurons were reconstructed from the original and corrected image stacks by a "blind" experimenter using a semi-automated reconstruction tool (Vaa3D, Allen Institute) 29 (Fig. 2b). This reconstruction highlighted considerable increases in the numbers and complexities of deep neurites (Fig. 2c). It further presented superior uniformity of dendritic diameters (automatically assigned by the reconstruction software) between deep and superficial dendrites (Fig. 2d), compatible with the known features of this class of bipolar cortical interneurons 23. The apparent depth-dependent variability of dendritic diameters in the original 2P Z-stack is therefore misleading; Intensify3D corrects this erroneous depth-dependent profile, and the normalized stack is consistent with the actual situation 30 (Supplementary Fig. S5). Our algorithm thus expanded the capacity to detect and reconstruct deep neurites while maintaining their spatial characteristics and correcting 3D microscopy errors.
Normalized Light Sheet microscopy images enable precise identification of anatomical macro- and microstructures
Aside from the difficulties of imaging deep details of cellular features, normalizing microscopy image stacks is often confronted with large-scale imaging variability.
We addressed this issue using Light Sheet (LS), or Selective Plane Illumination Microscopy 31 (SPIM). LS microscopy differs fundamentally from confocal and 2P imaging in that the excitation involves a single sheet-like beam that is projected orthogonally to the acquisition objective, and in that the image is captured by a CMOS camera instead of the scanning laser in 2P 12. This offers a powerful capacity for preparing multiple micrographs from vast areas of transparent tissue samples in a short time, while avoiding damage to tissue preparations. However, this technology also involves a major challenge in achieving equal penetration efficacy of the light beam through the specimen, as well as of antibody penetration if used in combination with immunostaining. Reflections, deflections, and diffractions caused by differences in the intrinsic characteristics of the tissue (e.g. white vs. grey brain matter, cavities, etc.), as well as by the angle at which the light enters the tissue, may additionally distort the signal in a plane-specific manner and result in non-homogeneous excitation.
Extraction of accurate barrel field anatomy from auto-fluorescent LS scans
To test the capacity of Intensify3D to overcome difficulties at the macro-scale level, we selected the cortical barrel fields, which may be visualized in the auto-fluorescent channel of cleared hemi-brain iDISCO preparations 20. Barrel fields present an intriguing example of a spatially defined cortical processing unit capable of experience-dependent rewiring 32. Recent studies have shown the importance of precise mapping of neuronal types in a single barrel column 33 and the effect of this anatomical diversity on network activity patterns 34. Thus, the identification of individual barrel fields is crucial for studies focused on this region. Figure 3 presents an LS scan in the auto-fluorescent blue/green excitation emission spectrum of cleared mouse hemisphere samples prepared with the iDISCO+ method. Such scans may provide ample information regarding diverse neuroanatomical macrostructures 13 (e.g. white and grey matter, barrel cortex composition, hippocampus areas, blood vessels, etc.) without external fluorescent labeling. However, this type of signal is prone to photobleaching and suffers from massive changes in intensity along the path of the LS beam through the tissue (Fig. 3a,b, top panel). This poses a challenge when attempting to select a threshold to separately identify elements within the tissue or between the tissue and the imaging media. At the single-image level, Intensify3D corrected for intensity differences in the XY dimensions (Fig. 3b). Such corrections resulted in a shift in the intensity of tissue pixels (Fig. 3c, black curved arrow) but not of the media background, owing to automated tissue detection (Supplementary Fig. S4). In post-normalization images, a simple threshold could then differentiate between the distinct anatomical features within the tissue (Fig. 3c, Movie 2). However, in addition to the X and Y dimensions, the original image stack showed substantial differences in intensity between different scans along the Z-axis. In our example, this was probably due to grooves in the surface of the tissue (Fig. 3d, arrows, Movie 2), which became more apparent after applying a threshold in an attempt to separate distinct barrel structures (Fig. 3d, orange box, green region).
Consequently, our correction contributed to improved homogeneity also along the Z-axis, allowing the selection of a single threshold by which each of the barrel structures could be effectively separated from the background around them. After 3D rendering (ImageJ, 3D viewer) 35, all of the principal barrels 36 were clearly identified and could be numbered (Fig. 3e, Movie 3), further offering the option of testing and comparing their structural features individually for comparative analyses of different experimental samples.
Correction of antibody and light penetration with Intensify3D facilitates accurate soma detection and quantification
The power of the LS microscope effectively comes into play when combined with tissue clearing techniques. The ability to acquire microscale morphologies and cellular distributions in a preserved macroscale tissue within a short time is unique to this technique. The iDISCO technique offers superb clearing power and the ability to immuno-stain desired targets and use far-red fluorophores, which suffer less interference from auto-fluorescence (Fig. 4a). Nevertheless, variabilities in LS laser efficiency and antibody penetration efficacy both contribute to heterogeneities in the signal (Fig. 1c, left panel).
Figure 3. (a) Imaging setup of cleared brain samples with a LS microscope. The LS blue excitation illumination plane is perpendicular to the filtered CMOS camera. (b) A single representative sagittal scan. Shown is the blue/green excitation emission spectrum of cleared mouse hemisphere samples before (red frame) and after normalization (green frame). (c) Relative (matched minimum/maximum values) pixel intensity histograms of images before and after correction. Note the post-normalization shift of pixels (black curved arrow) that corresponds to tissue and not to background pixels. (d) Pre- and post-normalized image stacks perpendicular to the imaging plane (see orientation illustration). The pre-normalization image stack shows decreased intensity due to grooves in the tissue surface (white arrows) as well as along the path of illumination (down). The orange rectangle emphasizes the barrel cortex region. After applying a threshold (pixels below threshold removed), a 3D region around the barrels was selected (green region) for 3D rendering (FIJI). (e) 3D rendering of barrel fields before and after image normalization. Annotation for barrels marked in red; the green mesh labels the region of interest. Scale bars are 1 mm.
Thus, cholinergic interneurons that are sparsely distributed within cortical layer 2/3 are easily visualized in this technique, but assessing their numbers, locations, and morphologies within cleared brain samples is confronted with in-depth limitations (Fig. 4b, top panel). To test the capacity of Intensify3D to correct LS images for microstructure analysis, we used the iDISCO method on cortical tissues of mice where all cholinergic neurons express a red fluorophore and stained these cells with a far-red dye. We imaged a 0.8 × 1 mm area of the cortex and applied Intensify3D with automated tissue detection. Post-normalization images showed superior uniformity of imaged neurons, enhancing neuronal morphologies (Fig. 4b, bottom panel). Finally, we applied an open-source analysis tool (Fiji, 3D object counter) 37 to detect the somata of the CChIs and measured the distance of each soma to the cortical surface.
Detected somata from the original image stacks showed declining soma intensities as a function of cortical depth, most likely as an effect of decreased penetration of light and/or staining antibody. This reduction was corrected in images normalized by Intensify3D (Fig. 4c,d). Specifically, the somata intensities in corrected image stacks showed no correlation with cortical depth (Pearson correlation R = 0.063, P = 0.42) and a narrower distribution (Fig. 4d). Our analysis tool thus enabled correct assessment of both the site and density of these neuronal populations at variable tissue depths.
Intensify3D restores distorted artificial 3D data and facilitates quantification of detected spheres
To supply controlled estimates of the performance of Intensify3D, we created an artificial image of randomly scattered 3D spheres. The artificial data are composed of ~500 Gaussian spheres with an artificial point spread function and an added background and Gaussian noise (Fig. 5a). We then applied the following intensity gradients to the 3D image: (1) linear along the X axis; (2) linear along both X and Y; (3) logarithmic along the Z axis; and finally (4) combined linear along X and Y together with a logarithmic gradient along the depth axis, Z. We corrected each distortion with either Intensify3D or CIDRE 6 (Fig. 5b). Intensify3D managed to restore the shape and pixel proportion in all cases without showing any visible artifacts in the corrected data (Fig. 5b, red histograms). In comparison, images corrected with CIDRE displayed "black spots" in the background, probably due to interference from the signal. Finally, we estimated the correction by applying the 3D object counter function (FIJI) 19 to detect and measure the spheres compared to the original undistorted data (Fig. 5b, blue frame). Intensify3D performed better than CIDRE in all cases. Predictably, CIDRE did not account for changes in depth (Z gradients), since it is not designed for 3D analysis (Fig. 5c). Also, we selected the true positive spheres from uncorrected images and from those corrected by Intensify3D or CIDRE, and estimated the difference between original and corrected data (mean absolute error). Again, correction with Intensify3D produced the lowest scores in all conditions (Fig. 5d).
Intensify3D is applicable to a large range of biological tissues
To test the ability of Intensify3D to normalize a variety of biological imaging datasets, we chose two well-described complex structures: (1) the mouse mammary milk ducts and terminal end buds 38 and (2) the mouse heart 39. Both samples were cleared with the iDISCO technique and imaged with a LS microscope in the auto-fluorescent channel as described above. The heart sample showed impressive uniformity across the imaged tissue after correction, allowing classification of the major heart arteries and ventricles (Fig. 6a, upper panel). Notice the correction of dark frames along the imaging path (Fig. 6a, middle panel, red arrows). Likewise, mammary milk ducts post-correction presented enhanced features, enabling detection of distal ducts and buds with the same threshold (Fig. 6b).
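As a rough illustration of the synthetic benchmark described above (not the authors' actual test data or scripts), the snippet below scatters a handful of Gaussian spheres in a small toy volume, adds background and noise, and imposes a linear X gradient together with a logarithmic Z gradient; all sizes, counts, and amplitudes are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (60, 120, 120)                               # (Z, Y, X) toy volume
volume = np.full(shape, 10.0)                        # constant background level

zz, yy, xx = np.indices(shape)
for _ in range(50):                                  # far fewer than the ~500 in the paper
    cz, cy, cx = (rng.integers(0, s) for s in shape)
    r2 = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
    volume += 100.0 * np.exp(-r2 / (2 * 2.5 ** 2))   # Gaussian "sphere", sigma = 2.5 px

volume += rng.normal(0, 2.0, shape)                  # additive Gaussian noise

x_gradient = np.linspace(1.0, 0.3, shape[2])[None, None, :]          # linear along X
z_gradient = (1.0 / np.log2(np.arange(shape[0]) + 2))[:, None, None] # logarithmic along Z
distorted = volume * x_gradient * z_gradient          # distorted stack to be corrected
```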
Discussion
When the neuroscience pioneers (Santiago Ramón y Cajal, Camillo Golgi, and Alois Alzheimer, to name a few) drew beautiful neuronal structures based on their basic microscopes, they likely overcame image inhomogeneity and imaging limitations with the help of a keen eye and much experience. Today, manual drawings and descriptive microscopy have been replaced by high-resolution, large-scale data which call for accurate quantification; moreover, signals that seem clear by eye do not always translate well to the downstream computerized tools. To address these difficulties, we developed and tested a post-imaging normalization tool in two state-of-the-art imaging platforms, and demonstrated that it can overcome common sample heterogeneity in large image stacks using both of these technologies and correct significant dataset errors. Specific advantages of our algorithm include its capacity to distinguish between signal and background with minimal parameters defined by the experimenter, avoiding distorting one at the expense of the other, as well as its applicability to various imaging platforms. The resulting avoidance of imaging errors and improvements in signal homogeneity are therefore an important asset for fluorescence microscopy imaging studies of all cells and tissues, especially in the brain.
2-photon imaging. Numerous microscopy studies require viewing large fields while maintaining high resolution and keeping the accuracy of microstructures. Furthermore, enabling accurate semi- or fully-automated reconstruction of microstructures from large image stacks is a prerequisite for a number of ambitious research efforts, including the Blue Brain project 40 and the BigNeuron initiative 41. In this context, we challenged the use of our Intensify3D tool by analyzing 2P microscopy image stacks of adult mouse brains with fluorescently labeled cortical cholinergic interneurons 23. Intensify3D normalization enabled homogeneous representation across the entire image stack. Additionally, Intensify3D corrected significant errors in the estimation of deep dendrite diameters. Thus, normalized images offer a better representation of both imaged cell bodies and their thin extensions and serve as a superior platform for reconstructions and possibly modeling of the electrical properties of these neurons. Hence, this algorithm may offer a special added value to world-wide leading brain research projects.
Light-Sheet imaging. Large-scale imaging of cleared tissues with a Light-Sheet microscope is a rapidly expanding field 13,15,17,42. The shapes, dimensions, and locations of cortical barrel fields are critical for studies in the mouse somatosensory cortex, as well as for neurodevelopmental studies.
Figure 5. Intensify3D restores distorted artificial 3D data and facilitates quantification. (a) 3D rendering of the undistorted artificial 3D image stack of ~500 Gaussian spheres. Image stack dimensions (X Y Z): 600 × 600 × 500 pixels. (b) Representative image (Z = 50) of the undistorted image stack (left bottom) and of the distorted image stacks before and after correction with Intensify3D or CIDRE. All images are presented at the same minimum/maximum brightness levels. The outline of the intensity histogram for the undistorted image (red curve) is overlaid on top of black filled intensity histograms for each of the individual images (left bottom corner). (c) ROC space for true/false positive detection rates by the 3D object counter plugin (FIJI) on distorted data (crosses) and on data corrected with Intensify3D (filled circles) or CIDRE (empty circles). Performance is in comparison to undistorted data. (d) Relative mean absolute error for measured statistics of true positive spheres for distorted data and for data corrected with Intensify3D or CIDRE. Performance is in comparison to undistorted data. Each row was corrected so that the minimal error is 1.
For example, the barrels are notably altered following sensory deprivation during adolescence 36, but the scope and significance of these changes in individual barrels remain largely unknown. Applying Intensify3D on LS data obtained from cleared adult mouse brains dramatically improved the detection and visualization of the barrel fields, indicating its applicability for such studies. At the microscale, we demonstrated that post-normalized scans of detected CChIs somata represent their real-life density, distribution, and composition more faithfully than the original scans, highlighting the importance of image normalization. Finally, to test the applicability of Intensify3D to diverse tissues, we selected the mammary gland and heart, both of which present considerable challenges. We showed that with normalization we could extract the morphology of the milk ducts and buds by "simple" auto-fluorescence. The heart is a complex organ composed of spaces and cavities that challenge imaging with a LS microscope. While this tissue challenged our tissue detection algorithm, the heart post-normalization showed superior uniformity, further strengthening the claim that images post-normalization represent the "real situation" better than uncorrected images. This predicts future use of the Intensify3D algorithm also for comparative studies that pursue dynamic changes in micro- and macrostructures, both in the brain and in other organs.
Figure 6. Correction of SPIM imaging of mammary gland ducts and heart anatomy. (a) Correction of a cleared iDISCO heart with Intensify3D. Three views (top, middle and bottom panels) of 3D rendering (ImageJ) based on a whole-mount mouse heart before (red frames) and after (green frames) correction. Aorta (red star), pulmonary artery (PA), left atrium (LA), right atrium (RA), right ventricle (RV), left ventricle (LV) and ventricular septum (VS) are marked. Scale bar represents 1 mm in all views. (b) 3D rendering based on whole-mount imaging of cleared mouse mammary gland ducts before (red frame) and after (green frame) normalization with Intensify3D. Scale bar represents 150 µm. Left and right panels are matched in brightness and contrast.
Artificial data. Finally, to test the effects of normalization in a well-controlled milieu, we created an artificial data set of spheres in 3D and applied 4 types of distortions. Intensify3D managed not only to correct all of these distortions without adding any visible artifacts, but was also able to restore the data to the original intensity histogram in all cases (Fig. 5b). Moreover, correction empowered 3D object detection (Fig. 5c) and restored the basic statistics of the detected spheres (Fig. 5d). These results indicate that Intensify3D managed to correct both linear and logarithmic gradients across all 3 dimensions combined, and achieved this while preserving signal-to-noise ratios.
General considerations. Notably, the definition of a "signal" primarily depends on the research question, and is subjective. Hence, applying a different size of the spatial filter (SFS), or selecting different maximum background intensity (MBI) levels, will illuminate different structures in the resultant image; setting the MBI too low will result in background regions of the image that remain uncorrected, whereas the combination of a high MBI with a small SFS will likely "correct" signal pixels and result in loss of information.
In this context, any normalization process, if done carefully, can reduce signal variability. However, if the normalizing parameter (e.g., a "housekeeping" gene, total protein concentration, RNA-seq or image background) is selected based on erroneous predictions, the correction process itself might introduce artifacts and mask information. For example, in cases where significant regions of the image are occupied by signal pixels, the normalization process would be compromised, since the background in these regions will not be assessed correctly. For Intensify3D, errors might also occur if the background of the image is intrinsically different in intensity in one region of the stack as compared to another, for example between different tissue types. Thus, making the basic decisions and defining the intrinsic assumptions of this tool is critical for achieving accurate normalization of microscopy signals based on background features. Another limiting factor comes from the attempt to "clean out" the signal from the image. Adding a machine learning approach may provide a more sophisticated way to identify the signal-carrying pixels than the current selection, which is based on the definition of a basic threshold and size filters. Lastly, the issue of normalization between images (along the Z axis) is not trivial. Because of the intrinsic differences in resolution between XY and Z in both the SPIM 12 and 2P 11 point spread functions, and because the Z step size is arbitrary, we treat each image as a separate data sample. To best match these data samples, we offer 3 types of between-image normalization: (1) The upper quantile normalization option multiplies all of the pixels in an individual image by a constant (different for each image) so that the MBI value will match across the entire stack. This option will simply shift the intensity histogram but will not correct for any differences in the background histogram distribution (Fig. S3a). (2) Contrast stretch normalization linearly transforms each image in the stack so that the lower quantile (10th) and upper quantile values will match for the entire stack. This normalization will correct for differences in the "spread" of the intensity histogram (Fig. S3b). (3) Intensify3D records 10,000 quantiles from each image to precisely account for the intensity histogram of the background and of the signal, which often occupies a very small number of pixels (<1%). Semi-quantile normalization will fit all the image quantiles lower than the upper quantile to match across the stack. From the upper quantile and above, the pixels will undergo the contrast stretch correction, assuming that these are the main fractions of pixels belonging to the signal. This normalization assumes that the background "behaves" similarly throughout the stack and that the differences observed should be corrected (Fig. S3c). Finally, there is the option not to correct across the depth of the stack but only in the XY dimensions.
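A rough sketch of the semi-quantile idea described above is given below, with two explicit assumptions that are mine rather than the paper's: the shared reference curve is taken as the per-quantile median across planes, and far fewer than 10,000 quantiles are recorded.

```python
import numpy as np

def semi_quantile_normalize(stack, n_quantiles=1000, upper_q=0.99):
    qs = np.linspace(0.0, upper_q, n_quantiles)
    per_plane = np.array([np.quantile(img, qs) for img in stack])   # quantile curves
    reference = np.median(per_plane, axis=0)                        # shared target curve
    out = []
    for img, curve in zip(stack, per_plane):
        corrected = img.astype(float).copy()
        below = img <= curve[-1]
        # map pixels below the upper quantile onto the reference quantile curve
        corrected[below] = np.interp(img[below], curve, reference)
        # contrast-stretch the remaining (signal) pixels relative to the upper quantile
        corrected[~below] = img[~below] * (reference[-1] / curve[-1])
        out.append(corrected)
    return np.stack(out)
```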
Conclusions
Our current findings and analyses demonstrate that the Intensify3D tool may serve as a user-guided resource, correct sample- and technology-driven variations, improve reproducibility, and add extractable information to numerous imaging studies in neuroscience research as well as in the life sciences at large. Given these advantages, we hope that our work will open an active discussion on matters of image normalization. We believe that image normalization has an integral role in any imaging experiment where numerical data are extracted. As in other fields of the life sciences, normalization reduces variability between samples even when the experimental conditions are superb. Finally, Intensify3D might further be of value to time-lapse fluorescence imaging platforms, such as time-lapse structural imaging 43,44 or calcium imaging 45,46, in which the fluorescence of the imaged sample is often compromised during imaging.
Materials and Methods
Mice. Two
Microscopy. 2-Photon microscope: A custom-built 2-Photon microscope, with excitation of 1050 nm and a 25x lens, was used for in vivo imaging of CChIs. Imaging was driven by MScan software (Sutter Instruments, CA). Stacks of full-frame images (512 × 512 pixels) were acquired in Z steps of 1 µm. Each stack frame was an average of 5 images. The CChIs image stack is 271 µm in total depth (~30 µm from the surface to 300 µm). Cleared brains and tissues: Samples were attached with epoxy glue to the sample holder and placed in an imaging chamber made of 100% quartz (LaVision BioTec). The light sheet was generated by a SuperK supercontinuum white light laser (emission 460 nm-800 nm, 1 mW/nm (NKT Photonics)). Barrel cortex imaging was done at 2x magnification, 10 µm step size, with a blue excitation filter (peak 470 nm / width 40 nm) and a green emission filter (525/50). Mammary gland imaging was done at 2x magnification, 10 µm step size (150 images), with a blue excitation filter (peak 470 nm / width 40 nm) and a green emission filter (525/50). Heart tissue was imaged at 0.8x with a 5 µm step size (800 images). For CChIs, imaging was done at 5x magnification, 1 µm step size (later down-sampled to 4 µm per image with the ImageJ size adjust interpolation), and a far-red excitation (640/30) and emission filter (690/50).
Procedures. In vivo 2-Photon: For the in vivo 2-Photon experiments, we administered mice with Rimadyl analgesia (200 mg/kg body weight, 200 µl injection volume). Anaesthetized mice were put in a stereotactic frame (Narishige, Japan) and a small craniotomy (3 mm in diameter) was made over the right barrel cortex (2 mm caudal, 3 mm lateral to Bregma); the dura was not removed. A 3 mm glass window was implanted over the craniotomy and sealed with VetBond (3M). CChIs were imaged through the cranial window. ChAT-IRES-Cre X Ai14 mice were anaesthetized with isoflurane (1-2% by volume in O2, LEI Medical). Anaesthetized mice were euthanized by cervical dislocation.
iDISCO clearing and staining: For the iDISCO-cleared brain experiments, as well as the mammary gland and heart, ChAT-IRES-Cre X Ai14 mice were anaesthetized with isoflurane (1-2% by volume in O2, LEI Medical) and administered an intra-peritoneal injection of 200 mg/kg sodium pentobarbital. Following trans-cardial perfusion with 1x PBS solution and then 10% formaldehyde in 1x PBS solution, the mouse brains, mammary gland, and heart were collected and used for iDISCO clearing as described 13. For staining of the tdTomato-expressing cells we used an anti-RFP antibody (Rockland, 600-401-379) followed by an Alexa-647-conjugated donkey anti-rabbit secondary antibody (Jackson ImmunoResearch, 711-605-152), following the manufacturer's instructions.
Software. The normalization tool and graphical user interface were designed with MATLAB (Simulink). 3D image rendering was done using the FIJI (ImageJ) 3D viewer plugin. Neuronal reconstruction was performed in Vaa3D (Allen Institute) by N.V. in a "blind" manner. Neuronal diameter analysis was done with NEURON (Yale). External MATLAB and ImageJ scripts that were used in the algorithm are detailed in Supplementary Table 1.
Detailed instructions, source code and standalone files are accessible at the GitHub repository (https://github.com/nadavyayon/Intensify3D) and in Supplementary Fig. S1. Example data sets and movies are also available via a Google Drive link published in the GitHub repository.
System requirements. The normalization algorithm can potentially run on any operating system, since the use of memory or CPU power mainly depends on the size of the images and on parallel processing. In the user GUI, one can select the number of cores to use. Using more cores may enable simultaneous processing of more images, which will be faster but requires more memory. Users who do not possess an active MATLAB license can use the standalone version, which requires an additional download of free MATLAB library files (500-700 MB depending on the operating system). To conserve RAM, the basic statistics of each image (quantiles or mean intensity) are logged and then used for quantile- or mean-based normalization across the image stack. Normalized images are then saved in a separate folder as an image series. To cope with large data sets, the algorithm takes advantage of MATLAB parallel processing (controlled by the user) and simultaneously corrects multiple images in the stack, given that the RAM is sufficient. For memory conservation purposes, the estimated tissue region is saved in the form of a support image in a distinct folder. Using a standard PC with a Core i7-4930K and 64 GB of RAM, 7 images of size 512 × 512 may be corrected per 1 s. A typical Light-Sheet image also requires background estimation and will take 5 s per image. Naturally, this time estimation depends on the number/speed of processors and the available RAM of the PC.
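The memory-conserving, two-pass pattern described above (log one statistic per image first, then normalize and save each plane individually) can be mimicked in a few lines of Python; the folder names and the use of the tifffile package are assumptions made for this sketch, not part of the released MATLAB tool.

```python
import glob
import os
import numpy as np
import tifffile

paths = sorted(glob.glob("stack/*.tif"))     # hypothetical input folder of stack planes
os.makedirs("normalized", exist_ok=True)

# pass 1: log one statistic per image instead of holding the whole stack in RAM
upper = [np.quantile(tifffile.imread(p), 0.99) for p in paths]
reference = np.median(upper)

# pass 2: reload, correct, and save each plane individually
for path, u in zip(paths, upper):
    img = tifffile.imread(path).astype(float)
    out_path = os.path.join("normalized", os.path.basename(path))
    tifffile.imwrite(out_path, img * (reference / u))
```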
8,311.8
2018-03-09T00:00:00.000
[ "Biology" ]
Refraction Data Survey, Ensemble & Time Series Statistics during Adolescence
Purpose: The objective of this report is to quantify left eye to right eye refractive state differentials resulting from the accumulation of naturally occurring random fluctuations. Methods: Clinical SER data from adolescent emmetropic human subjects are measured and analyzed in terms of ensemble and time-series fluctuations of the left eye, right eye, and left-right differential measurement, N = 20 subj. Results: Results include random fluctuations for the left and right eyes of human subjects age 11 to 23 years. The ensemble R-L differential measurement for this group is +/- 0.21 diopters RMS. The Left-Right coupling ratio is CR = 0.88 for both the differential (L - R) and average (L + R)/2 control system input signals. Data from human subjects show individual fluctuations of +/- 0.15 D to +/- 0.35 D for the left and right eyes. Normal emmetropic eyes exhibit a slow and steady trend towards myopia, at the rate of -0.40 to -0.50 diopt./decade. Conclusions: Ensemble and time-series results show that 15.9% of emmetropic adolescents are expected to progress into myopia during ages 12 to 22 years. Substantial shifts of greater than +/- 0.4 diopters will produce a negative myopic exponential time-constant response of the focal status of the eye. The aim of this study is to analyze cross-sectional (ensemble) and longitudinal (time series) results, both in terms of the trend line and the fluctuations about said line.
Introduction
The equivalency of cross-sectional data (ensemble) and longitudinal data (time series) is a useful simplification, finding application in the study of emmetropization and myopia progression. In this report, RMS refractive state fluctuations, including <L^2>, <R^2>, <(L+R)^2> and <(L-R)^2>, are reported for human and laboratory subjects. The differential RMS left-right disparity, sqr[<(L - R)^2>], is of fundamental importance for laboratory and clinical experiments.
Myopia progression rates
Greene & Medina [5] show that cross-sectional data indicate myopia progression at a yearly rate of -0.49 diopters/year, consistent with the longitudinal data of Goss & Cox [3] showing a -0.40 diopters/year myopia progression rate. Zadnik, et al. [6] show that a single refraction data point R(t = t0) can determine the future likelihood of myopia progression. One of the best reports is by Fledelius & Christensen [7] (N = 126 subj.), where longitudinal ocular growth rates are mathematically determined from cross-sectional data. Oakley & Young [8] and Greene, Grill & Medina [9] report myopia rates of -0.50 diopters/year, using both cross-sectional (ensemble) and longitudinal (time series) data techniques. Ray & O'Day [10] discuss the important problem of the statistical independence and correlation of left and right eye experimental data. Hung & Ciuffreda [11] present a very detailed analysis and review of accommodation control systems and the stabilizing effects of plus lenses in the range +1.0 to +3.0 diopters. McBrien & Adams [12] investigate cross-sectional and longitudinal techniques for determining myopia onset and progression in adults (ages > 21 yrs.), finding a refractive state change of -0.58 diopters of myopia over 2 years for 39% of the population sampled (N = 37 subjects). Hooker, et al. [13] report refractive state fluctuations of +/- 0.30 diopters RMS. Hung & Smith [14] demonstrate that negative and positive lenses of strength -3.0 and +3.0 diopters can alter the refractive state of primates by the same amount as the strength of these applied lenses, on a time scale of 60 to 90 days.
Terminology
Medical problems similar to, but distinctly different from, the normal subject statistics presented here include amblyopia, anisometropia, and anisomyopia. It is emphasized that the data presented in Appendix Table 1 are from normal, uncorrected emmetropic subjects. Sometimes after correction, these types of problems may result in a permanent R-L differential.
Numerical measurement of the eye's accuracy
The normal human and primate eye maintains long-term focal accuracy in the presence of focal perturbations. Making reasonable assumptions based on a physiological model produces an accurate value for the eye's focal accuracy.
A focal control equation
Laboratory experiments demonstrate that the normal eye adjusts its long-term focus by a dynamic process [21]. The exponential time-constant response of the eye to a focal perturbation in the eye's optical system is given by [22,23]:
Eq. (1)  R(t) = Offset + Accom - Perturbation * EXP(-t/τ)
While the equation can account for the eye's response, it cannot yield a direct measure of the eye's focal accuracy. The eye must overcome continuous micro-perturbations while growing to maintain accurate focus.
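As a small numerical illustration of Eq. (1), the Python snippet below evaluates the exponential response using the nominal time constant given later in the text (τ of about 100 days); the offset, accommodation, and perturbation values are example numbers only, not fitted parameters from this study.

```python
import numpy as np

tau = 100.0                                     # days, nominal time constant
offset, accom, perturbation = 1.5, -1.5, 1.0    # diopters, illustrative values only
t = np.arange(0, 401, 50)                       # days after the perturbation is applied
R = offset + accom - perturbation * np.exp(-t / tau)   # Eq. (1)
for day, r in zip(t, R):
    print(f"t = {day:3.0f} d   R = {r:+.2f} D")
```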
Long-term dynamic system
There is experimental evidence which suggests that each eye sets its long-term focus independently of the other eye [24]. In this study of myopia development, accommodation in one eye was prevented with atropine while the other was not. The results show that the atropinized eye stabilized while the non-atropinized eye progressed into myopia. The model treats the left and right eye as two independently tracking mechanisms. Each eye uses its own accommodation signal to drive the long-term focal setting system [23]. Each eye has random noise in the actuator, i.e. perturbations in the focal status of the eye, Figure 1.
Noise response of the eye's servo
Sources of random noise in the optical system may include ordinary blinking, tear film variations, varying lighting conditions, contact lenses, so-called "spectacle blur" after removing contact lenses, variable intraocular pressure, seasonal variations, febrile disease, medical problems, the student's academic schedule, excessive use of alcohol, tobacco, marijuana, caffeine, drug side-effects, excessive close work, etc. The function 1/(τs + 1) for the eye's behavior has an exponential time-constant of ~100 days. The offset of the normal eye has a value of ~1.5 diopters. A Bode graph of this transfer function is shown in Figure 2. The high-frequency components of noise fluctuations are attenuated. The eye's focal status will change very slowly on a daily basis, -0.01 to -0.001 diopt/day.
Frequency response plot
The closed-loop frequency response shows a break point at 100 days and a frequency roll-off of -6 dB/octave. This transfer function can be modeled by an analog computer, Figure 2 and Figure 3.
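The attenuation of fast perturbations by the 1/(τs + 1) transfer function can be illustrated by evaluating its magnitude at a few fluctuation periods; this is an independent numerical check of the stated -6 dB/octave roll-off, not a reproduction of the paper's Bode plot.

```python
import numpy as np

tau = 100.0                                     # days
periods = np.array([1.0, 10.0, 100.0, 1000.0])  # perturbation periods, in days
omega = 2 * np.pi / periods                     # angular frequency, rad/day
gain_db = 20 * np.log10(1.0 / np.sqrt(1.0 + (tau * omega) ** 2))   # |1/(tau*s + 1)|
for T, g in zip(periods, gain_db):
    print(f"period = {T:6.0f} d   gain = {g:6.1f} dB")
```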
The objective of this study is to analyze cross-sectional (ensemble) and longitudinal (time series) results, both in terms of the <average> trend line and the <r.m.s.> fluctuations about said line. The significance of this work is that the Ergodic Theorem from statistics allows us to equate cross-sectional data (ensemble average) with longitudinal data (time series) over reasonable time intervals, both in terms of the regression trend line (average data rate) and the +/-RMS scatter about said line. This is important in terms of studying progressive myopia development with an age-matched group (for instance, high school, college, or graduate students) typically showing -0.4 to -0.6 diopters of additional myopia per year. A sequence of refraction measurements of the same eye, repeated over a period of months, produces a continuing account of the eye's tracking accuracy, even though the average visual environment is changing. Measurement of noise by this technique is called the longitudinal or time-series measurement of a stationary random process.

Differential measurement
The focal states in Appendix Table 1 show the focal status of 20 individuals selected at random from the age of 11 to 23. The RMS value is calculated from the differential measurement. Refractive state data in Appendix Table 1 are provided by the Berger Clinic [23], randomly selected from patient files. SER (spherical equivalent) refractions were subjective, non-cycloplegic, and accurate within +/-0.25 D. Patients with normal vision were chosen for the statistical sample. Subject I.D. is deleted from the data record to maintain patient confidentiality. I.R.B. approval was granted.

Perfect tracking accuracy
If the eye's control system were perfect, the focal setting of the left eye would be identical to the right. The extent to which this is not the case will give us a means to determine the eye's tracking accuracy.

Differential focal status
We can measure the differential focal status developed between the left and right eye. This technique is based on the statistical principle that the squares of noise sources may be added algebraically:

Eq. (2)  Differential^2 = Left Eye^2 + Right Eye^2

The same factors that produce perturbations in the left and right eye are equivalent for both eyes, assuming the underlying noise process is ergodic, which is highly probable for all normal eyes. Therefore, the noise statistics of the left and the right eyes may be combined.

Representative noise programs
In order to demonstrate how the changing focal status of the left and right eye generates a third differential noise statistic, a QBasic random number generator is programmed to simulate the noise in the left and right eyes. The individual noise values in the program that produce the differential noise statistic are:

Eq. (6)  Left = +/-0.124 diopt., Right = +/-0.174 diopt.

The average of these two numbers is 0.149 diopters RMS. Our estimate, calculated from the differential value, is +/-0.15 to +/-0.35 diopt. RMS.
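The same differential statistic can be generated in a few lines of Python. The sketch below is an independent re-implementation of the idea behind the QBasic noise program, not a copy of it; Gaussian noise is an assumption, and the two amplitudes are taken from Eq. (6).

```python
import numpy as np

# Two independent noise sources (left and right eye) combine into a larger
# differential statistic, as stated by Eq. (2). Gaussian noise is an assumption.
rng = np.random.default_rng(1)
n = 100_000

left = rng.normal(0.0, 0.124, n)     # diopters RMS, from Eq. (6)
right = rng.normal(0.0, 0.174, n)    # diopters RMS, from Eq. (6)

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"RMS left        : {rms(left):.3f} D")
print(f"RMS right       : {rms(right):.3f} D")
print(f"RMS differential: {rms(left - right):.3f} D")               # simulated L-R noise
print(f"sqrt(L^2 + R^2) : {np.hypot(0.124, 0.174):.3f} D (Eq. 2)")  # predicted combination
```

With these amplitudes the predicted differential is about 0.21 diopters RMS, which is the order of the ensemble value reported below.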
Electronics experiments
To simulate time-series random fluctuations in the refractive state of the left and right eyes, randomly oscillating voltage signals are created using standard chipsets and operational amplifiers available from Radio Shack (Archer Electronic Components). A 2-MHz digital storage oscilloscope was used, model DSO 112, manufactured in China. The quad op amps were #324, with a bandwidth of 10 kHz. Initial voltages supplied to the chips are +10 and -10 volts DC, yielding randomly fluctuating signals in the range Vpp = 150 millivolts. Simulated signals are shown in Figure 4. AC voltage measurements of RMS confirm the basic equation for sums and differences, sqrt[<(R - L)^2>] and sqrt[<(R + L)^2>].

Ensemble differential measurement
In dealing with stationary random processes, time averages are equivalent to ensemble averages. This is the classical Ergodic Theorem of statistics [25][26][27][28][29][30]. The ensemble differential measurement for this group of individuals is +/-0.213 diopters RMS. The differential equation allows the calculation of the noise level in each individual eye:
Eq. (7)  DIFFERENTIAL * 0.707 = INDIVIDUAL EYE NOISE, i.e., (0.213) * 0.707 = +/-0.151 diopters RMS.

Table 1 displays 4 comparable studies from the literature, showing that L-R fluctuations of up to +/-1.6 diopt. RMS are a normal part of the development process.

Figure 6, correlating left and right refractions, deserves some additional explanation. For this N = 20 subj. data set, it is only a co-incidence that both the line slope b and the regression r have values of about ~0.8. Strictly speaking, the regression line slope (b = 0.80 for Figure 6) is an indicator of the independence of the left and right refractions, whereas the correlation coefficient (r = +0.82 for Figure 6) is an indication of the quality, accuracy, or distribution of the data about said line. Note that for larger and larger data sets, the significance value (p < 0.05) can improve to p < 0.0001.

Study limitations, future work
One limitation of the Ergodic Theorem from statistics, as applied here to the time-course fluctuations of human emmetropization data, is that while these statistical techniques accurately determine the average time series and RMS fluctuations about the trend line, they cannot predict the frequency or time-scale of these fluctuations (see Fig. 4). The reason for this is simple. With a single "snap-shot" of the cross-sectional data, the time scale is simply not available. However, our preliminary numerical work to date, using an FFT program, indicates that Fourier analysis of the cross-sectional data does indeed reveal all the fundamental harmonic time components, for frequencies less rapid than the sampling frequency. In other words, in terms of retrieving maximum information from the data, the situation is better than expected, improving with large sample size, at smaller time intervals. Future work may try to address this challenging problem, in terms of extracting the Fourier component spectrum.

Previous experiments show the normal eye sets its long-term focus by a dynamic process. Physiological systems are complex and contain parameters which are not always clearly defined nor easily measured. Therefore, assumptions and simplifications are necessary in order to understand the long-term behavior of the normal eye. This requires insight into the fundamental behavior of optical systems in the presence of perturbations. Advanced statistical techniques establish the tracking accuracy of the normal eye (Appendix 1).

Figure 1: First-order closed-loop control systems are used to predict the behavior of the left and right eyes. Actuator noise perturbs the eye's focal state.
Figure 2: Bode plot showing the frequency response characteristics of a first-order control system with time constant τ = 100 days.
Figure 3: Analog computer representation of a 1st-order control system with R-C feedback network.
Figure 6a: Refractive status for N = 40 student eyes. The regression trend line indicates the refractive state proceeds as R(t) = -0.045t + 1.29 diopters. Statistics (+/-σ) indicate 15.9% are expected to progress into myopia during the period 12 to 22 years. An additional 34.1% will likely trend into myopia from 22 to 27 years.
Effects of Acute and Developmental Exposure to Bisphenol S on Chinese Medaka (Oryzias sinensis) Bisphenol S (BPS), one of the substitutes for bisphenol A (BPA), is widely used in various commodities. The BPS concentrations in surface water have gradually increased in recent years, making it a predominant bisphenol analogue in the aquatic environment and raising concerns about its health and ecological effects on aquatic organisms. For this study, we conducted a 96 h acute toxicity test and a 15-day developmental exposure test to assess the adverse effects of BPS exposure in Chinese medaka (Oryzias sinensis), a new local aquatic animal model. The results indicate that the acute exposure of Chinese medaka embryos to BPS led to relatively low toxicity. However, developmental exposure to BPS was found to cause developmental abnormalities, such as decreased hatching rate and body length, at 15 dpf. A transcriptome analysis showed that exposure to different concentrations of bisphenol S often induced different reactions. In summary, environmental concentrations of BPS can have adverse effects on the hatching and physical development of Chinese medaka, and further attention needs to be paid to the potential toxicity of environmental BPS. Introduction Bisphenol A (BPA) is an essential synthetic chemical used in the production of polycarbonate plastics and epoxy resins [1,2].Measurable concentrations of BPA have been detected in the environmental media around the world [3][4][5][6][7], and exposure to BPA is almost unavoidable [8,9].BPA is a classic endocrine-disrupting chemical (EDC), which has been demonstrated to have adverse impacts on male reproduction in vertebrates [10,11], prostate development in mammals [12], osmoregulation in fish [13] and may induce obesity [14], dysplasia [15], and cardiovascular diseases [16].Due to its serious adverse effects, the use of BPA has been banned in many countries and regions.Bisphenol S (BPS) has been gradually developed as a substitute for BPA in polycarbonate plastics and epoxy resins [17,18].With its growing usage, BPS has been detected in human biological, food, and environmental samples [3,[19][20][21][22]. There have been reports of the growing occurrence of BPS in surface waters around the world.Yamazaki et al. reported that the concentrations of BPS in surface water were up to 8.7 ng/L in Japan, 42 ng/L in Korea, 135 ng/L in China, and 7200 ng/L in India [23].Jin et al. measured the BPS concentration in surface water samples and found that it ranged from 0.22 to 52 ng/L in the Liaohe River, 0.61 to 46 ng/L in the Hunhe River, and 0.28 to 67 ng/L in Taihu Lake [24].In 2022, the concentrations of BPS in the San Francisco Bay were reported to be up to 120 ng/L [25].In Europe, the concentrations of BPS were reported to be up to 35.2 ng/L in Slovenia and Croatia [26], 8.23 ng/L in Romania [27], 1584 ng/L in Poland [28], and 306 ng/L in England [29].In 2017, two studies reported the BPS concentrations in water samples collected from Taihu Lake, which ranged from 4.1 to 160 ng/L in samples collected in November 2016 [30] and 4.5 to 1600 ng/L in samples collected in April 2016 [31].These studies, especially those in the same location, have shown dramatic increases in BPS concentrations in aquatic environments, drawing attention to the health and ecological effects of environmental BPS exposure on aquatic species [32]. 
In previous studies, researchers found that BPS has the same order of magnitude of endocrine-disrupting effects as BPA [33].In addition, studies have shown that BPS exposure may induce obesity [34] and has anti-androgenic properties [35].In zebrafish embryos and larvae, BPS impacts the reproductive neuroendocrine system during development [36].Compared with BPA, the current research on the health hazards or effects of BPS on aquatic animals remains limited.It is necessary to determine the adverse impacts of BPS exposure on aquatic animals and its underlying mechanisms. Small teleost fish are often used as animal models for aquatic toxicology, especially the Japanese medaka (Oryzias latipes) and zebrafish (Danio rerio) [37][38][39].These fish have multiple advantages, such as a small size, short generation time, frequent spawning characteristics, complete genome sequences [40,41], and epigenetic reprogramming information [42,43].However, the responses of zebrafish and Japanese medaka exposed to pollutants differ significantly.For instance, the LC-96 of zebrafish embryos exposed to bisphenol F (BPF)-another substitute for BPA-was 7.40 mg/L [44].In contrast, the LC-96 of medaka embryos exposed to BPF was approximately 120 mg/L (unpublished data from our lab), approximately 16 times higher than that of zebrafish.Furthermore, the inbred laboratory strains of zebrafish and Japanese medaka do not inhabit wild surface waters, and therefore, it is necessary to include more local aquatic animal models to assess the risk of exposure to environmental pollutants. Chinese medaka (Oryzias sinensis) is related to the Japanese medaka and is found in most parts of East Asia [45].It has the similar advantages of small individuals, easy feeding and management, a short generation time, and frequent spawning.Previous studies have reported that the Chinese medaka's toxicological responses to pollutants are similar to that of the Japanese medaka and zebrafish [46,47] but often produces intermediate responses between zebrafish and Japanese medaka [46].In our previous study, the LC-96 of Chinese medaka embryos exposed to BPF was 87.90 mg/L, which is between those of zebrafish and Japanese medaka [48].Therefore, the Chinese medaka is an excellent aquatic animal model to fill the gap between zebrafish and Japanese medaka. In the present study, the hypothesis that exposure to BPS can induce adverse effects on aquatic animals was tested using the Chinese medaka as an animal model.To explore the toxicity of BPS, a 96 h acute exposure and a 15-day developmental exposure to BPS were conducted on Chinese medaka embryos and larvae.The developmental abnormalities were observed, and the transcriptome was analyzed to speculate on the impact of BPS at environmental concentrations on Chinese medaka. Fish Husbandry The Chinese medaka were obtained from a local aquarium store and cultured in the laboratory as previously described [48].All applicable institutional and/or national guidelines for the care and use of animals were followed. Acute Exposure to BPS A series of BPS concentrations (250, 500, 750, and 1000 mg/L) were prepared for the acute toxic exposure following the equal difference interval method to investigate the acute exposure toxicity of BPS toward Chinese medaka embryos.The highest concentration is close to the reported water solubility (1100 mg/L) [49].The concentration of DMSO in each group was controlled at 0.1% (v/v).Each treatment was performed in triplicate. 
Newly produced embryos were randomly distributed into each treatment group.For each treatment, 10 embryos were kept in each well of a 6-well culture plate.The exposure solution was renewed daily.The survival rates of the embryos in each treatment group were recorded at 24, 48, 72, and 96 hpf (hours post fertilization), and any dead embryos were removed.The experiment was conducted following the guiding principles of OECD (No. 212) [50]. Developmental Exposure to BPS Previous research has shown that BPS is ubiquitous in aquatic environments [51].To explore whether the environmental concentrations of BPS will produce toxic effects on aquatic organisms, a series of BPS concentrations (20,200, and 2000 ng/L), which cover the environmental concentrations, were used in this study.The concentration of the DMSO in each group was controlled at 0.0001% (v/v), and a blank control group with water was included.As there is no significant difference and only one differentially expressed genes (DEGs) was found between the solvent and blank control groups in a parallel study [48], only the blank control group was used in the following analysis.Each treatment group had 6 replicates.The embryos were randomly divided into the wells of 6-well culture plates and exposed to different BPS concentrations from the blastula stage until 15 dpf (days post fertilization).Each group in a single well contained 10 embryos.The exposure solution was replaced every two days.The plates were all placed in an environmental chamber at 26.8 • C with a 14/10 h light/dark cycle.The experiment was conducted following the guiding principles of OECD (No. 210) [52]. To measure the concentrations of BPS, 500 mL of the exposure solution was collected for each sample.The pH of the BPS sample was adjusted to 5 ± 0.02 using 0.1 mol/L diluted hydrochloric acid.Subsequently, solid-phase extraction was employed to extract and purify the treated BPS samples.The quantitative analysis phase utilized high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS).The chromatographic conditions involved mobile phase A (1 mmol/L ammonium fluoride solution), mobile phase B as pure acetonitrile, and gradient elution for effective separation.In negative ion source mode, mass spectrometry conditions specified a precursor ion (m/z) of 249/155.5 for monitoring BPS, and a product ion (m/z) of 107.9 for quantitative analysis.The cone voltage was set at 24, and the collision energy was set at 16/16, generating fragment ions to enhance quantitative accuracy.Ensuring a linear relationship, a standard curve was established for quality control and assurance.A correlation coefficient R 2 ≥ 0.997 indicates an excellent fit of the standard curve, meeting the requirements for a high-quality analysis and ensuring the reliable and accurate quantitative analysis of BPS. Morphology Observation and Sample Collection During the developmental exposure test, the hatching rate, survival rate, heartbeat rate, and blood circulation of larvae were examined under a microscope and recorded daily.At 15 dpf, the body length, heartbeats, and various abnormal phenotypes, including pericardial edema (ce), spinal curvature, enlarged yolk sac (cv), and decreased head-trunk angle (HTA↓) were recorded.After the measurements were completed, whole larvae samples were transferred into the Trizol reagent (Invitrogen, Waltham, MA, USA) and then stored at −80 • C for the following analysis. Transcriptomic Analysis 2.6.1. 
Library Construction and Sequencing The libraries were prepared as previously described [48].In brief, the total RNA was extracted using Trizol reagent according to the manufacturer's instructions.RNA quality was assessed with an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and agarose gel electrophoresis.mRNA was enriched using Oligo(dT) beads, followed by fragmentation and reverse transcription with random primers.Then, the cDNA fragments were purified and end-repaired.The A base was added to the end of the fragments.Then, the fragments were ligated to Illumina sequencing adapters.After size selection through agarose gel electrophoresis, the ligation products were PCR amplified and sequenced using an Illumina Novaseq 6000. GO and KEGG Enrichment Analyses To perform the gene ontology (GO) analysis [58], the DEGs were mapped to GO terms in the gene ontology database (http://www.geneontology.org;accessed on 3 February 2023), and the significantly enriched GO terms were defined through the hypergeometric test and an adjusted p < 0.05.KEGG (Kyoto Encyclopedia of Genes and Genomes) [59] enrichment analysis was performed to further understand the biological functions of the DEGs and their interaction with each other in certain biological functions.Pathways with an adjusted p < 0.05 were defined as significantly enriched. Statistical Analysis The data are presented as the mean ± SEM (standard error of the mean).The differences among treatment groups were determined with one-way ANOVA, and multiple comparisons were performed using Tukey's test.The t-test was also performed, if necessary.p < 0.05 was considered statistically significant. Acute BPS Toxicity Test on Embryos In the 96 h acute toxicity test, embryonic lethality rarely occurred.As the exposure concentration of BPS increased and approached its solubility limit in water, the survival rate only slightly decreased (Figure S1).There were no significant differences among the groups using one-way ANOVA, indicating that high concentrations of BPS have a relatively low toxicity in Chinese medaka. Developmental BPS Exposure Test In the 15-day developmental exposure experiment, the actual mean measured concentrations of BPS in the 20, 200, and 2000 ng/L treatment groups were 16.6 ± 0.3, 167.3 ± 11.2, and 1735.3 ± 54.2 ng/L, respectively.In the following, these three BPS treatment groups are expressed as nominal concentrations and are referred to as S20, S200, and S2000, respectively.All embryos survived during the developmental exposure, indicating that the environmental concentration of BPS (20, 200, and 2000 ng/L) generally does not cause fatal effects in the early stages of Chinese medaka.We recorded and analyzed the growth parameters of juvenile fish, including hatching rate, heartbeat, and body length.The hatch-ing rate of the S20 group significantly decreased (Figure 1A), and the body length of the S2000 group significantly decreased compared to the control (Figure 1B).There were no significant differences in heartbeat (Figure S2D).cause fatal effects in the early stages of Chinese medaka.We recorded and analyzed the growth parameters of juvenile fish, including hatching rate, heartbeat, and body length.The hatching rate of the S20 group significantly decreased (Figure 1A), and the body length of the S2000 group significantly decreased compared to the control (Figure 1B).There were no significant differences in heartbeat (Figure S2D). 
Multiple developmental abnormalities were examined (Figure 1C).No significant differences were observed in pericardial edema (Figure S2A), enlarged yolk sac (Figure S2B), and decreased head-trunk angle (Figure S2C).Different degrees of increases were found for several abnormalities, demonstrating that BPS at environmental concentrations may cause health defects in Chinese medaka larvae (Figure 1D). RNA Sequencing and Transcriptome Assembly To further illustrate the underlying mechanisms through which the BPS exposure caused adverse impacts, a transcriptome analysis was performed.After performing quality control, 57,363,422 to 75,195,674 high-quality clean reads were generated with RNA- Multiple developmental abnormalities were examined (Figure 1C).No significant differences were observed in pericardial edema (Figure S2A), enlarged yolk sac (Figure S2B), and decreased head-trunk angle (Figure S2C).Different degrees of increases were found for several abnormalities, demonstrating that BPS at environmental concentrations may cause health defects in Chinese medaka larvae (Figure 1D). RNA Sequencing and Transcriptome Assembly To further illustrate the underlying mechanisms through which the BPS exposure caused adverse impacts, a transcriptome analysis was performed.After performing quality control, 57,363,422 to 75,195,674 high-quality clean reads were generated with RNA-seq for each sample, and the unique mapping ratio ranged from 74.34% to 75.93% (Table S1). The transcriptomes were assembled, and the DEGs were identified.The results showed that 22, 156, and 109 DEGs were identified between the control versus the S20, S200, and S2000 groups, respectively (Table S2).Briefly, 3 genes were up-regulated, and 19 genes were down-regulated in the control vs. S20; a total of 26 genes were up-regulated, and 130 genes were down-regulated in the control vs. S200; and 68 genes were up-regulated, and 41 genes were down-regulated in the control vs. S2000 (Figure 2A). As shown in Figure 2B,C, one gene was up-regulated (Figure 2B), and nine genes were down-regulated (Figure 2C) in both the control vs. S20 and S200 comparisons.Two genes were up-regulated (Figure 2B), and six genes were down-regulated (Figure 2C) in both the control vs. S20 and S2000 comparisons.A total of 13 genes were up-regulated, (Figure 2B) and 16 genes were down-regulated (Figure 2C) in both the control vs. S200 and S2000 comparisons. Gene Ontology Analysis To analyze the molecular level harmful effects of BPS on Chinese medaka embryos, gene ontology enrichment was performed (Table S3). The most enriched GO terms in the control vs. S20 comparison are shown in Figure 3.In the molecular function class, the up-regulated DEGs were enriched in "L-tyrosine transmembrane transporter activity" and "arsenite transmembrane transporter activity", while the down-regulated DEGs were enriched in "phosphorylase kinase activity", "calmodulin-dependent protein kinase activity", "tau-protein kinase activity", "protein serine/threonine kinase activity", "calmodulin binding", "vitamin D 24-hydroxylase activity", and "squalene monooxygenase activity" items (Figure 3A,B).In the cellular component class, only the down-regulated DEGs were enriched in "phosphorylase kinase com- As shown in Figure 2B,C, one gene was up-regulated (Figure 2B), and nine genes were down-regulated (Figure 2C) in both the control vs. S20 and S200 comparisons.Two genes were up-regulated (Figure 2B), and six genes were down-regulated (Figure 2C) in both the control vs. 
S20 and S2000 comparisons.A total of 13 genes were up-regulated, (Figure 2B) and 16 genes were down-regulated (Figure 2C) in both the control vs. S200 and S2000 comparisons. Gene Ontology Analysis To analyze the molecular level harmful effects of BPS on Chinese medaka embryos, gene ontology enrichment was performed (Table S3). The most enriched GO terms in the control vs. S20 comparison are shown in Figure 3.In the molecular function class, the up-regulated DEGs were enriched in "L-tyrosine transmembrane transporter activity" and "arsenite transmembrane transporter activity", while the down-regulated DEGs were enriched in "phosphorylase kinase activity", "calmodulin-dependent protein kinase activity", "tau-protein kinase activity", "protein serine/threonine kinase activity", "calmodulin binding", "vitamin D 24-hydroxylase activity", and "squalene monooxygenase activity" items (Figure 3A,B).In the cellular component class, only the down-regulated DEGs were enriched in "phosphorylase kinase complex", "serine/threonine protein kinase complex", "protein kinase complex", and "bacterial-type flagellum basal body, C ring" items (Figure 3B).In the biological process class, only the down-regulated DEGs were enriched in "glucan metabolic process", "glycogen metabolic process", "cellular glucan metabolic process", and "energy reserve metabolic process" items (Figure 3B). plex", "serine/threonine protein kinase complex", "protein kinase complex", and "bacterial-type flagellum basal body, C ring" items (Figure 3B).In the biological process class, only the down-regulated DEGs were enriched in "glucan metabolic process", "glycogen metabolic process", "cellular glucan metabolic process", and "energy reserve metabolic process" items (Figure 3B).The top 10 enriched GO terms in the control vs. S200 comparison are shown in Figure 4.In the molecular function class, both up-regulated and down-regulated DEGs were enriched in "organic acid transmembrane transporter activity", "phosphotransferase activity, alcohol group as acceptor", "carboxylic acid transmembrane transporter activity", "kinase activity", "carbohydrate transmembrane transporter activity", and "amino acid transmembrane transporter activity" items (Figure 4A,B).In the cellular component class, only down-regulate DEGs were enriched in the "phosphorylase kinase complex" item (Figure 4B).In the biological process class, both up-and down-regulated DEGs were enriched in "anion transport", "carboxylic acid transport", "inorganic anion transport" (Figure 4A,B).The up-regulated DEGs were also enriched in the "organic acid transmembrane transport" and "carboxylic acid transmembrane transport" (Figure 4A).While the downregulated DEGs were enriched in the "skeletal myofibril assembly", "response to muscle stretch", "detection of muscle stretch", and "sarcomerogenesis" items, which are related to movement (Figure 4B).The top 10 enriched GO terms in the control vs. 
S2000 comparison are shown in Figure 5. In the molecular function class, both up- and down-regulated DEGs were enriched in "peptidase regulator activity", "peptidase inhibitor activity", "heparin binding", "glycosaminoglycan binding", "polysaccharide binding", and "sulfur compound binding" items (Figure 5A,B). The up-regulated DEGs were also enriched in the "endopeptidase regulator activity" and "endopeptidase inhibitor activity" items (Figure 5A). In the cellular component class, the DEGs were enriched in "extracellular region", "extracellular space", "pigment granule membrane", and "external encapsulating structure" items (Figure 5A,B). The up-regulated DEGs were also enriched in the "chitosome" and "melanosome membrane" items (Figure 5A). In the biological process class, the up-regulated DEGs were enriched in "protein activation cascade", "regulation of protein activation cascade", "blood coagulation, fibrin clot formation", "detection of external stimulus", "detection of abiotic stimulus", and "detection of visible light" (Figure 5A), while the down-regulated DEGs were enriched in "response to muscle stretch", "detection of abiotic stimulus", "detection of external stimulus", and "skeletal myofibril assembly" items (Figure 5B).

KEGG Analysis
In the comparison of the S20 and control groups, the DEGs were annotated into 12 functional categories and were significantly enriched in "Steroid biosynthesis", "Glucagon signaling pathway", and "Insulin signaling pathway" (Figure 6A and Table S4).
In the comparison of the S200 and control groups, the DEGs were annotated into 34 functional categories, but no pathway was statistically significantly enriched (Figure 6B and Table S4).

Discussion
With the increasing use of BPS, more research has been performed, which has shown that BPS has various potential toxicities in various animal models [60]. Using Chinese medaka as an animal model, the present study assessed the acute toxicity of BPS exposure on embryos and the developmental toxicity of BPS at environmental concentrations on embryos and larvae. The 96 h acute exposure showed that BPS was not lethal up to 1000 mg/L for Chinese medaka embryos. In a previous study, for a 96 h acute exposure, the LC50 of bisphenol F (BPF) was 87.90 mg/L for Chinese medaka embryos [48], suggesting that BPS is less toxic than BPF. In cell models, researchers found that the LC50 of BPS on fish primary macrophages was 39.1 mg/L and 29.7 mg/L after a 6 h or 12 h exposure, respectively [61]. In zebrafish, the 96 h LC50 of BPS was reported to be 199 mg/L [62]. Compared with other experimental animals, Chinese medaka seems less sensitive to BPS in the acute toxicity tests.

In the developmental exposure experiment, the physical features of the larvae in the S2000 group showed that the body length of larvae was significantly decreased compared to the control group. In zebrafish, developmental exposure to 100 µg/L BPS significantly reduced the body length at 5 dpf [63], indicating the conserved toxicological effects of BPS exposure on growth in teleosts. The defect rates of all embryos in the S200 and S2000 groups increased to different extents, which corresponds to the number of enriched DEGs in these two comparisons. These results indicate that the concentration of BPS in the developmental exposure dose was harmful to Chinese medaka embryos.
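The growth-parameter comparisons above (hatching rate, body length, heartbeat) rest on the procedure described in the Statistical Analysis section, a one-way ANOVA followed by Tukey's multiple comparisons. A minimal sketch of that procedure is given below; the body-length values are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One-way ANOVA followed by Tukey's test, as described in the Methods.
# Group means and spreads below are hypothetical placeholders.
rng = np.random.default_rng(0)
means = {"control": 5.8, "S20": 5.7, "S200": 5.6, "S2000": 5.3}      # assumed body lengths, mm
data = {g: rng.normal(mu, 0.2, 10) for g, mu in means.items()}       # 10 larvae per group

f_stat, p_val = stats.f_oneway(*data.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))                 # pairwise group comparisons
```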
Among the research on the environmental concentration of BPS, it was found to reach 7.2 × 10 3 ng/L in the Adyar River in India [23], much higher than the concentrations used for the developmental exposure in this study.In the Taihu Lake in China, the BPS concentration also reached as high as 1.6 × 10 3 ng/L, which was close to the highest concentration used in this study, suggesting a high ecotoxicological risk.Furthermore, BPS was frequently detected with other bisphenol analogues in almost all mediums [21,[24][25][26]30,32].It is crucial to examine the adverse effects induced by joint exposure to multiple bisphenol analogues in the future. Environmental disturbances during embryogenesis can cause subtle functional changes, altering gene expression, physiology, and metabolism [30].Due to its endocrine-disrupting characteristics, the effects induced by exposure to bisphenol analogues are usually not monotonous.In the present study, the comparison of the control vs. S200 group had the most DEGs.According to the GO analysis, the DEGs associated with the enriched GO terms in the control vs. S20 comparison were mostly down-regulated, while those in the control vs. S2000 comparison were mostly up-regulated.These results indicate that BPS employs various molecular mechanisms in its toxicity at different concentrations. Intriguingly, the "regulation of complement activation" and "humoral immune response" terms were highly enriched in the control vs. S2000 comparison.The S2000 group also had DEGs enriched in the immune-related pathways, such as "Complement and coagulation cascades" and "Neutrophil extracellular trap formation".Both results indicate that developmental exposure to 2000 ng/L BPS significantly affected the gene expression in the immune system.Similarly, exposure to 10 µg/L BPS significantly altered the expression of genes involved in the innate immune system in zebrafish, including the tnf-α and ifn genes [64].The immune system can actively respond to various environmental stresses, which is crucial to organisms' survival.More investigations are needed to uncover the mechanisms involved in the immunotoxicity of BPS. Moreover, the GO terms involving glycogen and lipid metabolism and transformation processes were frequently enriched, suggesting that BPS could hinder the energy conversion and absorption processes in organisms.This analysis was consistent with the results of the embryos with enlarged yolk sacs in the BPS developmental exposure experiment.This phenomenon is also reflected in other instances where BPS was demonstrated to interfere with yolk lipid consumption in zebrafish embryos [65].This result indicates that BPS exposure may cause harmful effects on energy metabolism. Recently, a cross-sectional study in the USA, which involved 1521 participants aged 20 years or older, found that urinary BPS concentrations were higher in obese participants [66].Other studies have demonstrated that BPS can bind to nuclear receptors in fatty tissues, contributing to the development of obesity [67].We also found a significant enrichment of the "Kaposi sarcoma-associated herpesvirus infection", "Chagas disease", "Alcoholic liver disease", "Systemic lupus erythematosus", "Staphylococcus aureus infection", "Leishmaniasis", "Pertussis", and "Legionellosis" KEGG pathways in Chinese medaka embryos exposed to environmental BPS.These pathways are associated with disease.The enrichment of these pathways maybe because BPS increases the incidence of related diseases. 
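The GO and KEGG enrichment statements above rest on the hypergeometric test with an adjusted p < 0.05, as described in the Methods. The sketch below shows the core calculation for a single term; all counts are hypothetical placeholders, and in practice the p-values are corrected for multiple testing (for example with the Benjamini-Hochberg procedure) before a term is called significantly enriched.

```python
from scipy.stats import hypergeom

# Hypergeometric enrichment test for one GO term (all counts are hypothetical).
M = 20000   # annotated background genes
n = 150     # background genes carrying the GO term of interest
N = 156     # DEGs in a comparison (e.g., control vs. S200)
k = 8       # DEGs carrying that GO term

p = hypergeom.sf(k - 1, M, n, N)   # P(X >= k): chance of seeing at least k term-bearing DEGs
print(f"enrichment p-value = {p:.3e}")
```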
Besides the potential adverse health effects on humans, the pollution of aquatic ecosystems with EDCs has multiple impacts on non-human animals. Aquatic organisms are also an essential food source for humans, creating complex relationships between EDCs and people, animals, plants, and the environment. Hence, it is necessary to consider the consequences of EDCs, such as BPS, from multiple dimensions with One Health thinking [68,69]. Furthermore, there are no two identical environments in the world, and so, it is critical to include more local animal models besides laboratory animal models to investigate the impacts of pollutants. In the present and previous studies, Chinese medaka has been proven to be an ideal fish model to fill the gap between laboratory and field work with its multiple advantages [45,47,48,[70][71][72].

Conclusions
This study revealed that BPS had low acute toxicity on Chinese medaka embryos during a 96 h acute toxic exposure. However, the developmental toxicity of BPS on Chinese medaka embryos and larvae induced several developmental abnormalities, indicating that environmental concentrations of BPS may pose an ecological risk for aquatic organisms. GO and KEGG pathway analyses suggested that BPS exposure at environmental concentrations can affect movement-, metabolism-, immune system-, and disease-related genes.

Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/jox14020027/s1: Table S1: Data statistics of clean data; Table S2: DEGs between every two different groups; Table S3: Enriched GO terms; Table S4: The top 20 KEGG pathways; Figure S1: Results of the acute toxic experiment. Survival rates of Chinese medaka embryos exposed to 0 (control), 250, 500, 750, and 1000 mg/L BPS for 96 h.

Author Contributions: B.L.: conceptualization, methodology, investigation, formal analysis, visualization, data curation, writing-original draft, supervision, and funding acquisition. Y.H.: methodology, investigation, formal analysis, visualization, data curation, and writing-original draft. D.P.: methodology, investigation, formal analysis, and funding acquisition. X.L.: methodology, investigation, and formal analysis. Y.G.: methodology, investigation, and data analysis. Z.L.: methodology, investigation, and funding acquisition. X.S.: conceptualization, methodology, writing-review and editing, and funding acquisition. J.W.: conceptualization, methodology, resources, writing-review and editing, supervision, and funding acquisition. X.W.: conceptualization, methodology, formal analysis, visualization, data curation, writing-review and editing, supervision, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Informed Consent Statement: Not applicable.

Data Availability Statement: The raw sequence data reported in this paper have been deposited in the Genome Sequence Archive (GSA) in the National Genomics Data Center under accession number CRA012060, which is publicly accessible at https://ngdc.cncb.ac.cn/gsa (accessed on 4 December 2023).

Figure 2. Statistics for DEGs. (A) The numbers of up-regulated and down-regulated DEGs in the BPS treatment groups; (B) Venn diagram showing the shared up-regulated DEGs among the groups; (C) Venn diagram showing the shared down-regulated DEGs among the groups (S20, S200, and S2000 represent the 20 ng/L, 200 ng/L, and 2000 ng/L BPS treatments, respectively).
Figure 3. Gene ontology term enrichment in the control vs. S20 comparison. (A) Top enriched GO terms among up-regulated DEGs; (B) Top 10 enriched GO terms among down-regulated DEGs.
Figure 4. Gene ontology enrichment in the control vs. S200 comparison. (A) Top 10 enriched GO terms among up-regulated DEGs; (B) Top 10 enriched GO terms among down-regulated DEGs.
Figure 5. Gene ontology enrichment in the control vs. S2000 comparison. (A) Top 10 enriched GO terms among up-regulated DEGs; (B) Top 10 enriched GO terms among down-regulated DEGs.
Figure 6. KEGG enrichment. (A) Top 20 enriched pathways of the DEGs in the control vs. S20 comparison; (B) Top 20 enriched pathways of the DEGs in the control vs. S200 comparison; (C) Top 20 enriched pathways of the DEGs in the control vs. S2000 comparison.
Figure S2: The growth parameters of the 15 dpf larvae after BPS exposure. (A): The rate of Chinese medaka with pericardial edema (ce); (B): The rate of Chinese medaka with enlarged yolk sac (cv); (C): The rate of Chinese medaka with decreased head-trunk angle (HTA↓); (D): Heart beats (bpm, beats per minute) of the larvae.
Funding: This work was supported by the National Natural Science Foundation of China [No.22176066]; the research funds of Research on Marine Biological Resources and Environmental Monitoring in Dongguan Bahaba Taipingensis Municipal Nature Reserve and Domestication, Artificial Propagation, and Field Release Techniques of Bahaba Taipingensis from the Dongguan Forestry Affairs Center [No.441901202109379]; the College Students' Innovation and Entrepreneurship Training Program from South China Normal University [No.202225047, S202310574109]; and the research funds of the Guangxi Key Laboratory of Environmental Pollution Control Theory and Technology [No.2201K002].Institutional Review Board Statement: The animal protocols in this study were approved by the Animal Care and Use Committee of the South China Normal University (No. SCNU-SLS-2022-025).
Stability and Activity of Zn/MCM-41 Materials in Toluene Alkylation: Microwave Irradiation vs Continuous Flow Zn/MCM-41 mesoporous materials have been prepared via classic wet impregnation, employing zinc nitrate as precursor and tested for activity and stability in the Friedel-Crafts alkylation of toluene with benzyl chloride under microwave irradiation and continuous flow. The modified materials were characterized by means of a number of analytical techniques, and surface and textural properties were thoroughly checked. Materials containing the highest Zn loading (15 wt %) provided full conversion after 5 minutes reaction under microwave irradiation (300 W, 120 °C). Materials were proved to be stable and reusable for several cycles with an optimum performance under continuous flow conditions. Introduction The design of cost-competitive, highly active, and stable catalytic systems constitutes a significant challenge in the field of materials engineering for the 21st century [1]. Much effort has been made in recent years to investigate the use of solid acid systems, including porous aluminosilicates as alternative catalysts to homogeneous systems for acid catalyzed processes [2][3][4]. The use of benzyl chloride as an alkylating agent for Friedel-Crafts alkylation can provide access to substituted diphenyl derivatives-relevant intermediates in the synthesis of high added value products. Lewis acids, including AlCl 3 , FeCl 3 , HF, BF 3 , and ZnCl 2 , have been typically used in Friedel-Crafts alkylation [5,6]. However, recovery, separation, and disposal of homogeneous Lewis acids and their originated waste has several environmental, health, and safety issues. The utilization of solid acids, such as nanostructured materials, can provide alternative recycling and separation possibilities, minimizing pollution and waste. Alkylation reactions have been extensively catalyzed by a large variety of solid acids, such as: β-zeolites [7]; ZSM-5 modified with Ga, Zn, In, and Fe [8][9][10][11]; Fe, Ga, and Al-SBA-15 Physicochemical Characterization Textural and surface properties of synthesized materials have been presented in Table 1. In addition, zinc quantities in the catalysts, calculated by chemical analysis and expressed as Zn wt %, are also shown. The results obtained from low-angle X-ray powder diffraction (XRD) measurements for the synthesized catalysts and the support are shown in Figure 1. MCM-41 materials exhibited all characteristics resolved diffraction lines of the hexagonal ordering in MCM-41, indexed as (1 0 0), (1 1 0), and (2 0 0) reflections [17,20]. Accordingly, Zn/MCM-41 catalysts showed a similar low-angle XRD pattern as compared to the parent material. However, an intensity decrease accompanied by diffraction line widening could be observed at increased Zn content, pointing out a deterioration in long-range ordering. This last fact could reflect the partial structure collapse, as well as mesopore blocking upon Zn incorporation. FWHM (full width at half maximum) corresponding to the main (100) peak of the XRD patterns indeed provided hints of the widening of the peaks, which increased with the Zn loading in the support. The (100) reflection also changed slightly to higher angle, with the lattice parameter (a 0 ) concurrently decreasing upon long-range structural order diminishment [21]. 
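For reference, the d-spacing and lattice parameter referred to above follow from Bragg's law and the 2-D hexagonal geometry of MCM-41 (a0 = 2 d100 / sqrt(3)). The sketch below shows the arithmetic; the (100) peak position used is a hypothetical example, not a value read from Figure 1.

```python
import math

# d(100) from Bragg's law and hexagonal lattice parameter a0 = 2*d100/sqrt(3).
# The 2-theta value is a hypothetical example for illustration only.
wavelength = 1.5418        # angstrom, Cu K-alpha (radiation stated in the Characterization section)
two_theta_deg = 2.3        # assumed position of the (100) reflection

theta = math.radians(two_theta_deg / 2.0)
d100 = wavelength / (2.0 * math.sin(theta))
a0 = 2.0 * d100 / math.sqrt(3.0)
print(f"d(100) = {d100:.1f} A, a0 = {a0:.1f} A")
```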
On the other hand, no ZnO diffraction lines could be observed in high-angle XRD patterns ( Figure S2, Supplementary Materials), which suggested that Zn species were amorphous or their crystal domain size was below the detection limit for XRD (<4-6 nm). These are likely to be either finely dispersed in the framework or located at the external surface of the support [22][23][24]. The observed long range hexagonal ordering of Zn/MCM-41 materials and support was also corroborated by Transmission Electron Microscopy (TEM) images (Figure 2a,b), which revealed ordered hexagonal arrays of mesopores with uniform pore sizes, characteristic of MCM-41 materials [25]. The morphology of synthesized materials was examined by Scanning Electron Microscopy (SEM). Figure 2c,d show that these materials do not show any specific arrangement. Furthermore, it is possible observe aggregates with irregular shapes and an average size of 2-4 µm, supporting previous results [26]. Table 1. All catalysts display Type IV isotherms of IUPAC classification [27]. It is possible a decrease in the volume of gas adsorbed and in the pore volume regarding to the MCM-41 sample was observed, which could be due the formation of zinc species into the channels and onto mesoporous surface. In addition, the inflection region between p/p 0 range = 0.1-0.25 was assigned to the condensation phenomenon. The inflection is less pronounced with the increase in Zn loading, which indicates a broader pore size distribution. In the pore size distributions (PSD) plot (Figure 3 inset), the average pore diameter of support and Zn/MCM-41(1) sample was approximately 3.5 nm, with narrow distribution. However, by increasing the Zn amount in the samples, the PSD broadens, displaying a bimodal PSD, although the desorption branch does not display two distinct steps. These samples present a dual mesoporous distribution, with a well-defined peak at 2.6 nm and additional wide peak around of 4-6 nm. This may be due to the partial blockage of the primary mesopores at increased zinc oxide amount, the subsequent deterioration of the structure, and the appearance of larger size pores with irregular distribution [28,29]. In addition, as is shown in Table 1, Zn/MCM-41 samples present a diminution in the textural parameters (S BET and V TP values). The decrease in these values with increasing Zn content implies a blockage of pore channels besides structural deterioration, due to depositing Zn species on the support. UV-Vis spectra of Zn/MCM-41 samples and bulk ZnO are presented in Figure 4. A strong band at 280-380 nm for Zn/MCM-41 catalysts is associated with the presence of ZnO particles. It is well-known that ZnO powders present an intense absorption peak between 370 and 330 nm [30,31]. However, in semiconductors, the blue shift of absorption band edge towards lower wavelength indicates a quantum size effect. Accordingly, this absorption band increases in intensity and displacement towards higher wavenumber with metallic content growth, accounting for larger ZnO species. Additionally, X-ray photoelectron spectroscopy (XPS) is a useful technique to confirm presence of ZnO. XPS of Zn 2p ( Figure 5) revealed the binding energies (BE) at approximately 1022.8 eV and 1045.7 eV, corresponding to Zn 2p 3/2 and Zn 2p 1/2 , respectively, giving a spin-orbit splitting (SOS) of ∼22.9 eV. This corroborates that Zn is in the Zn +2 chemical state [32]. In addition, spin-orbit splitting is well known to be 22.2 eV in pure ZnO. 
These findings pointed out that ZnO would not be predominantly "bulk" ZnO, but probably as ZnO species highly dispersed and interacting with the support surface, according to Carlson [33]. The Si 2p signal in was approximately 104.4 eV, in good agreement with SiO 2 -type material. Likewise, oxygen contribution in Zn/MCM-41 samples showed a symmetric peak centered at 533.3 eV, mainly due to the O 2− in SiO 2 [34]. Other oxygen peaks attributable to ZnO oxide and to weakly adsorbed OH − were not detected, probably for the oxygen signal intensity of the siliceous material [35]. The binding energies, Zn/Si (surface), and Zn/Si (bulk) values, as well as Zn surface atomic and bulk concentrations for all samples, are summarized in Table 2. The metal/Si (surface) ratio corresponds to the dispersion of metal on the mesoporous support. In addition, the Zn/Si (surface) ratios are comparably reduced regarding Zn/Si (bulk) ratios, due to the fact Zn species would be incorporated mostly within the mesopores as clusters or very small nanoparticles. However, the zinc species would not be homogeneously dispersed on the siliceous mesoporous framework. The spectroscopic study of the chemisorption of pyridine (Py) is usually a useful method that allows an evaluation of the amount and strength of the acid centers on a catalyst surface [36]. To investigate the acidic strengths of Lewis (L) and Brønsted (B) sites, the pyridine thermodesorption was carried out. IR spectra at different temperatures are displayed in Figure 6. SiOH groups interact with pyridine through hydrogen bonding. All samples presented bands at 1446 and 1596 cm −1 due to hydrogen-bonded pyridine (H-Py) [37,38]. These band are not present after the evacuation at 200 • C due to the weakness of the site [33,34]. In addition, the Zn/MCM-41 samples also show the bands of H-Py (1446 and 1596 cm −1 ), Lewis-bonded pyridine (L-Py: 1451, 1576 and 1610 cm −1 ), and a band at 1491 cm −1 due to interaction of pyridine over (L + B) acid sites [39]. As observed in Figure 6, the intensity of bands at 1610 and 1451 cm −1 , corresponding to Py interaction with Lewis acid sites, increases with the Zn loading on the support. The band at 1576 cm −1 corresponds to the vibration of adsorbed pyridine to weak surface Lewis acid sites. Moreover, bands corresponding to Brønsted acid sites at 1540 and 1636 cm −1 were not detected [40]. Therefore, the band at 1491 cm −1 is only associated with Lewis acid sites, which increase with the amount of zinc in the material. Accordingly, it is possible to observe that these bands assigned to Lewis sites are maintained until 200 • C, indicating their acidic strength. Thus, these sites can mainly be associated with a strong electron-donor-acceptor adduct of the probe molecule with Lewis-type sites, attributed to isolated zinc species coordinated to framework oxygen atoms (Zn unoccupied molecular orbital) interacting with pyridine. Figure 7 displays the Fourier transform infrared (FT-IR) region of hydroxyl of the catalyst after Py adsorption followed by desorption at 200 • C. MCM-41 spectrum exhibit a band at approximately 3740 cm −1 assigned to isolated terminal O-H group [41,42]. In addition, a decrease of intensity and broadening of this band is evidenced for Zn/MCM-41 samples. The widening of this band is assigned to Zn-OH surface species, whereby an interaction would occur between the vicinal OH groups, such as Si-OH and Zn-OH groups. 
However, a decrease of this band could be due to a partial blocking of Si-OH groups from the coordination of the Si-OH groups with Zn ions, leading to the bonds of Si-O-Zn. This last has been already reported by us [21,37]. The molar extinction coefficient of the Lewis acid site-adsorbed pyridine band was employed to estimate concentration of the acid sites of the samples [43]. The acidity concentrations (µmol pyridine × g −1 ) are given in Table 1. Thus, the acid sites increase with the Zn amount in the MCM-41 support. Alkylation Reaction Under Microwave Irradiation Initially, the alkylation reaction of toluene (Tol) with benzyl chloride (BC) was conducted under microwave irradiation (120 • C, 300 W), with the goal of finding optimal conditions and the best catalyst (Scheme 1). Data are summarized in Table 3. This alkylation was first carried out using MCM-41 (25 mg), Tol (2.0 mL), and BC (0.1 mL). After 5 and 10 min, the products formation was not detected (Table 3, Entry 1). Similarly, if the reaction is carried out without catalyst, the formation of products is not observed ( Table 3, Entry 2). The initial use of Zn/MCM-41(1) indicates that when 12.5 mg catalyst, 1.0 mL toluene, and 0.1 mL benzyl chloride are used, a better conversion is observed after 10 min of reaction (Table 3, Entry 5). Interestingly, a higher conversion at approximately 80% could be obtained when the zinc loading on MCM-41 was increased for catalysts Zn/MCM-41(2.5) and Zn/MCM-41(10) ( Table 3, Entries 6 and 7). Observed products were primarily the ortho and para isomers, and a small amount of the meta isomer, as expected for an electrophilic aromatic substitution of toluene. All catalysts exhibited selectivity higher than 90% to the desired mono alkylated products, with a near 1/1 (ortho/para) ratio. In addition, the molecular sizes were calculated by HyperChem v5.0 (Hypercube, Inc., Gainesville, FL, USA) (see Supplementary Materials) in order to confirm there were no diffusion-limiting problems. On the other hand, the maximum conversion in the process was obtained for Zn/MCM-41(15), leading to quantitative product yields in only 5 min of reaction (Table 3, Entry 8). In search of the ideal amount of catalyst, the reaction was tested using 6.25 mg of Zn/MCM-41(15), but unfortunately there was a decrease in product conversion (Table 3, Entry 9). For a long time, Lewis acids have been reported to promote this type of reaction, being the determining factor [5]. As has already been observed, the increase of zinc loading is responsible for the acidity increase in the catalyst. Zn/MCM-41 (15) catalyst, exhibiting the highest conversion, has the highest zinc loading and highest acidity. However, the acidity of the materials is not the only important factor in the catalytic activity of the systems, since all materials exhibited relatively similar surface acid properties, particularly at moderate to high temperatures (Py titration data, Figure 6). The highest activity measured for Zn/MCM-41(15) could be associated not only with a slightly increased acidity but also with its textural properties. This material exhibited a marked dual mesoporous distribution, which was related to the structural deterioration and appearance of larger size pores with irregular distribution in the defective structure. This fact would probably contribute to enhancing the interaction between the reactant and catalyst, besides the accessibility to active sites, favoring the higher catalytic activity observed for this sample. 
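The acid-site concentrations in Table 1 come from the integrated absorbance of the Lewis-bound pyridine band scaled by a molar extinction coefficient, as noted above. The sketch below illustrates that estimate; the extinction coefficient (a value of about 2.22 cm/µmol is commonly used in the literature for the ~1450 cm-1 band) and the wafer data are assumptions, not the values used by the authors.

```python
import math

# Lewis acidity (micromol pyridine per gram) from the integrated absorbance of the
# ~1450 cm-1 band. Extinction coefficient and wafer data are assumed values.
epsilon_L = 2.22         # cm/umol, assumed integrated molar extinction coefficient
wafer_radius = 0.65      # cm, hypothetical self-supported wafer radius
wafer_mass = 0.015       # g, hypothetical wafer mass
integrated_abs = 1.2     # cm-1, hypothetical integrated absorbance of the Lewis band

wafer_area = math.pi * wafer_radius ** 2              # cm2
n_lewis_umol = integrated_abs * wafer_area / epsilon_L
print(f"Lewis acidity ~ {n_lewis_umol / wafer_mass:.0f} umol/g")
```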
The mesoporous surface area effect, on the alkylation reaction, has been reported by Milina et al. [44]. Catalyst Reuse Studies Under Microwave Irradiation The reusability of Zn/MCM-41(15) was subsequently investigated under microwave irradiation conditions with results included in Figure 8. The odd reuse numbers (1st, 3rd, and 5th) correspond to the reaction with regenerated material (calcinated at 400 • C in air, 2 h) and the pair reuses (2nd and 4th) are those without reactivations treatment. This test was carried out under the best conditions found (1 mL toluene, 0.1 mL benzyl chloride and 12.5 mg catalyst at 120 • C, 5 min reaction), for which reactions were stopped after a few minutes halfway through the reaction, and then the mixture was filtered off in order to separate the catalysts, and this was washed with toluene and kept in an oven at 100 • C for 1 h prior to its use in the next alkylation reaction (1st reuse), for which quantitative conversion was found. The second reuse of the catalyst, under identical reaction conditions and without reactivation, provided a much reduced conversion (56%) with similar selectivity (2nd reuse). The decrease of the total conversion could be attributed to the presence of organic species adsorbed on the catalytic material surface. For the 3rd reuse, the catalyst was calcined in air at 400 • C for 2 h, again providing quantitative conversion after 5 min of reaction. The 4th reuse, without reactivation, provided a reduced conversion with similar selectivity as the 2nd reuse. Thus, the regenerated materials could again provide quantitative conversion after 5 min of reaction (3rd and 5th reuses). Therefore, the catalyst was found to be stable, and its deactivation was not due to Zn leaching in the first few reuses or regenerations. Nevertheless, these results clearly demonstrated a high stability of the catalytic material under the investigated reaction conditions, preserving almost unchanged initial activity after several cycles. Continuous Flow Alkylation Reaction: Activity and Stability Additionally, the reaction was performed under continuous flow conditions in order to test the activity, but most importantly the stability of the synthesized materials, focusing on optimum Zn/MCM-41 (15). Reaction conditions (120 • C, 0.4 mL/min, 110 mg of catalysts) were translated from microwave to continuous flow, taking advantage of the moderate temperatures and pressures in the microwave reactor that could be mimicked in a continuous flow system. [45]. Results are summarized in Table 4. Initially, Zn/MCM-41 (15) catalyst exhibited a >99% conversion to alkylation products in the first 15 min of reaction (equivalent to a residence time of 0.6 min, see Table 4). After 165 min of reaction in continuous flow at 100 • C, we can observe a conversion decrease from >99 to 80%, while the selectivity remained practically constant throughout the process (Table 4). Most importantly, a long reaction run (20 h, Table 4 last entry) also indicated that the catalyst was rather stable with time on stream under continuous flow (72% conversion) after the observed initial activity drop due to (1) a minor leaching (<20 ppm) observed, probably of weakly coordinated Zn species in the materials and the leaching effect of generated HCl, and (2) the presence of strongly adsorbed aromatics that can be eliminated upon regeneration (i.e., calcination). 
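As a rough consistency check on the flow conditions quoted above (0.4 mL/min feed and a residence time of about 0.6 min), note that the residence time is simply the liquid hold-up volume of the catalyst bed divided by the volumetric flow rate. The hold-up volume in the minimal sketch below is an assumed illustrative value, not a figure reported in the paper.

```python
# Rough consistency check (assumed hold-up volume, not a value from the paper):
# residence time = liquid hold-up volume of the packed catalyst bed / flow rate.
flow_rate_ml_per_min = 0.4      # reported feed flow rate
bed_holdup_ml = 0.24            # assumed hold-up volume of the 110 mg catalyst bed
residence_time_min = bed_holdup_ml / flow_rate_ml_per_min
print(f"residence time ~ {residence_time_min:.1f} min")   # ~0.6 min, as quoted in the text
```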
Particularly, the stability of the materials under flow conditions was remarkable and further supported the reusability results under microwave irradiation, probably improved due to the low residence time of byproduct HCl in the catalyst under flow conditions that decreased Zn leaching. Table 4. Activities and selectivity of Zn/MCM-41 in the alkylation of toluene with benzyl chloride in flow chemistry (a) . Catalyst Preparation The pure siliceous mesoporous material (MCM-41) was prepared following the pathway reported by Elías et al. [17]. Zn/MCM-41 catalysts were prepared by the wet impregnation method using zinc nitrate salt as precursor. For additional experimental details, see Supplementary Materials. Characterization Zn content was quantified using inductively coupled plasma-atomic emission spectroscopy (ICP-AES) using a spectrophotometer VISTA-MPX CCD Simultaneous ICP-OES-VARIAN (Varian, Inc., Palo Alto, CA, USA). The samples were previously digested with HF and HNO3. Sample characterization was conducted using X-ray powder diffraction (XRD), Cu Kα radiation (λ = 1.5418 Å) and measured with a PANalytical X'Pert PRO diffractometer (Philips, Almelo, The Netherlands) in the range of 2θ from 1.5 to 7 • and from 10 to 80 • . N 2 adsorption-desorption isotherms were recorded at −196 • C (N 2 with 99.999% purity) in a Micromeritics ASAP 2000 instrument (Norcross, GA, USA), in order to provide information on textural properties. Scanning Electron Microscopy (SEM) was also employed to visualize the catalyst morphology, for which a JEOL JSM-6380 LV (Tokyo, Japan) (20kV acceleration voltage) and gold sputtering was utilized to coat the samples in order to maximize their beam stability. Fourier transform infrared (FT-IR) data was performed on a Nicolet iS10 FTIR spectrometer (Thermo Scientific, Waltham, MA, USA). For full experimental details, see Supplementary Materials. Catalytic Experiments Toluene alkylation with benzyl chloride and the reusability experiments were carried out using a CEM-DISCOVER microwave synthesizer (Matthews, NC, USA), in an open vessel under continuous stirring. Continuous flow experiments were conducted in a high-temperature high-pressure Phoenix Flow Reactor (ThalesNanoTM, Budapest, Hungary) connected to a HPLC pump. For experimental details, see Supplementary Materials. Conclusions Zn-containing MCM-41 catalysts was prepared with varying zinc contents and evaluated in the alkylation reaction of toluene using microwaves and flow chemistry, as well as benzyl chloride as alkylating agent. Zn/MCM-41 featured typical high surface areas and hexagonal arrangements. However, higher Zn loading originated a partial collapse in structure. The acidity of the catalysts increased with the highest Zn loading, which was reflected in the increase of the catalytic activity. The large mesopores of the catalysts did not pose any constraints on the reaction intermediates and products, as opposed to microporous materials. Zn/MCM-41(15) exhibited the best activity and selectivity to the desired products (o-and p-methyl diphenylmethane) in the selected reaction. In addition, the mesoporous catalysts were highly stable and reusable after several reuses or regeneration cycles, both employing microwave-assisted irradiation as well as, most importantly, in a flow reactor. Based on these findings, Zn-modified MCM-41 type molecular sieves are highly suitable solid acid candidates for Friedel-Crafts alkylation. Conflicts of Interest: The authors declare no conflict of interest.
4,488.4
2019-02-01T00:00:00.000
[ "Materials Science" ]
Pressure and compressibility of conformal field theories from the AdS/CFT correspondence The equation of state associated with ${\cal N}=4$ supersymmetric Yang-Mills in 4 dimensions, for $SU(N)$ in the large $N$ limit, is investigated using the AdS/CFT correspondence. An asymptotically AdS black-hole on the gravity side provides a thermal background for the Yang-Mills theory on the boundary in which the cosmological constant is equivalent to a volume. The thermodynamic variable conjugate to the cosmological constant is a pressure and the $P-V$ diagram is studied. It is known that there is a critical point where the heat capacity diverges and this is reflected in the isothermal compressibility. Critical exponents are derived and found to be mean field in the large $N$ limit. The same analysis applied to 3 and 6 dimensional conformal field theories again yields mean field exponents associated with the compressibility at the critical point. Introduction Ever since Bekenstein's discovery of the relation between entropy and the area of a black hole's event horizon [1] and Hawking's subsequent observation that black holes have an intrinsic temperature associated with them [2], the subject of black hole thermodynamics has been a fascinating area of research. Indeed black hole thermodynamics now has potentially great usefulness in providing insights into the thermodynamics of conformal theories via the AdS/CFT correspondence [3]. An interesting aspect of black hole thermodynamics is the role of the cosmological constant, Λ, as a thermodynamic variable: an idea originally considered in [4] and re-visited in [5]. Since Λ has the natural physical interpretation of pressure in the bulk, the conjugate variable is a volume and this has led to some intriguing recent results for the pressure and volume of black holes, with phase transitions and critical points showing very similar properties to liquid-gas phase transitions [6,7]. For Einstein gravity in the bulk, the critical exponents are mean field for all black hole solutions studied to date, leading to an analogy with the liquid-gas transition of a van der Waals fluid. In the context of AdS/CFT it was suggested in [5,8] that varying the cosmological constant in the bulk should be associated with varying the number of colors in the boundary CFT and this proposal was investigated in more detail in [9], where the thermodynamically conjugate variable to Λ was interpreted as a kind of chemical potential for color. An alternative approach is followed here based on a different interpretation of the thermodynamic significance of Λ which provides a length scale in the CFT. When global AdS co-ordinates are used the CFT on the boundary has a finite volume, determined by Λ, hence varying Λ in the bulk corresponds to varying the volume of the CFT on the boundary. If Λ gives the volume of the CFT then the thermodynamically conjugate variable is the pressure and one is led to the construction of P − V diagrams for the CFT, in which the pressure and volume are interchanged relative to their roles in the bulk. This can be achieved while keeping the number of colors fixed provided the higher dimensional Newton's constant is adjusted as Λ is varied, as suggested in [10]. In 10-dimensional string theory, for example, Newton's constant is related to the string coupling constant and tension by G 10 = 8π 6 g 2 s (α ′ ) 4 so varying G 10 can be thought of as varying g s with the tension fixed. 
In this paper the P − V diagram associated with 4-dimensional N = 4 SUSY SU(N) Yang-Mills theory at large N is investigated using the AdS/CFT correspondence. When the bulk contains a black hole there is a first order phase transition [11] which is associated with the deconfining transition for the quark-gluon plasma on the boundary [12]. When the black hole carries a U(1) charge, corresponding to R-charge in the Yang-Mills theory, this phase transition is one end of a line of first order phase transitions while the other end terminates at a second order transition where the heat capacity at constant volume for the Yang-Mills theory diverges [13]. The critical exponents in the bulk gravitational theory, with charge as the order parameter and temperature as the control parameter, are known to be mean field, [14], and so C V is finite at the critical point for the black hole in the bulk. At first sight this is at odds with the statement in [13] that there is a critical point in the CFT at which C V diverges. We shall show that there is no contradiction here and, when interpreted correctly, the boundary CFT has mean field exponents. In terms of pressure and volume there is a number of aspects of the phase transition that make it different from more usual cases. Firstly there is a single phase at low temperatures and the two phase regime exists for temperatures above the critical point [13]; secondly, above the critical point, along the line of first order phase transitions, it is the pressure that jumps across the phase transition, not the volume, so it is more appropriate to use the pressure as an order parameter rather than the more usual volume; thirdly the conformal symmetry dictates that volume and temperature are not fully independent and it is better to use charge as the control parameter rather than the temperature. This last point of view is more in keeping with the notion of the phase transition being a quantum phase transition rather than a thermal phase transition. With this interpretation the critical exponents for pressure and volume of the Yang-Mills theory are calculated in the large N limit and shown to be mean field. The general structure is the same for the 3-dimensional and 6-dimensional CFT's considered in [15], in particular the critical exponents at large N are mean field in all three conformal field theories. In section 2 the black hole thermodynamics of the relevant bulk solution is summarized and is related to that of the boundary CFT in section 3. Section 4 analyzes the case of N = 4 SUSY Yang-Mills in detail, the P − V diagram is constructed and critical exponents for the deconfining phase transition in the large N limit are calculated. Finally section 5 summarizes the results. Some technical details are in two appendices. The bulk In D space-time dimensions the Lagrangian for Einstein gravity, with a cosmological constant Λ, coupled to a U(1) gauge field, is The normalization of F is the same as in [13], it has dimensions of inverse length. A solution of the equations of motion arising from (1), corresponding a spherical charged asymptotically AdS black hole in D space-time dimensions is easily written down. In global co-coordinates the gauge potential is where is the volume of a unit (D − 2)-sphere. The constant c 0 accommodates some freedom in the choice of gauge. The normalization of the U(1) charge Q here is determined by requiring that the gauge field F = dA satisfies Gauss law where S D−2 is a sphere containing the charge. 
The line element is where and d 2 Ω D−2 is the line element on a unit (D − 2) sphere. There is an event horizon and the largest root of f (r) = 0 will be denoted by r h . It is natural to chose a gauge in which the potential above vanishes at the outer horizon, c 0 = 1 The parameters q and µ are then related to Q and the mass M of the black hole by and (we use units with c = 1, but keep the D-dimensional Newton's constant explicit). From (5), with f (r h ) = 0, and (7) The area of the event horizon is Ω D−2 r D−2 h and the Bekenstein-Hawking entropy is From the point of view of black hole thermodynamics, the mass is interpreted as the internal energy of the system, M = U(S, Q), and the Bekenstein-Hawking temperature is It will prove useful to define dimensionless variables, x := r h L and y := q L D−3 which can be used in lieu of S and the charge. In terms of these variables and In the context of superstring theory, the D-dimensional Newton's constant G D is descended from Newton's constant in the full D-dimensional theory (D = 10 or 11). Compactifying the higher dimensional theory on a (D − D)-dimensional compact space K D−D , with size L , allows for a direct where C is a dimensionless number determined by the metric on K D−D . The AdS length scale L is not intrinsic to the D-dimensional action, but is merely a parameter in a classical solution of the D-dimensional theory and it is perfectly reasonable to consider varying L in the solution. Thermodynamics of the boundary field theory In the AdS/CFT correspondence the boundary CFT theory has d = D − 2 space dimensions. With the global co-oordinates used in the previous section, the (d + 1)-dimensional space-time metric at fixed r is which is conformal to a (d + 1)-dimensional space-time with constant time spatial volume and this can be interpreted as the spatial volume of the boundary conformal field theory. The dimensionless ratio of the AdS length scale L to the D-dimensional Planck length in the bulk determines the number of degrees of freedom in the boundary field theory [15]. The most studied case is D = 10 compactified on K 5 = S 5 , with D = 5. the boundary CFT is then N = 4 SUSY SU(N) Yang-Mills theory, when the classical solution in the bulk (4) is relevant for large N. This has 16N 2 degrees of freedom (8N 2 bosonic and 8N 2 fermionic) and the large N limit is the weak gravity limit: hG 10 is of course related to the 10-dimensional Planck length,hG 10 = l 8 P l and, as usual, large N is related to the classical limit of the bulk theory, in which L >> l P l . The CFT is on a 3-sphere with volume and the inverse of the 5-dimensional Planck length (16) is related to the number of degrees of freedom per unit volume of the CFT so varying the volume of the CFT, keeping N 2 fixed, is completely equivalent to varying the 5-dimensional Planck length. The entropy is proportional to N 2 (with x := r h L as before) as is the dimensionless charge where y := q L 2 . On dimensional grounds the internal energy U(S, V, Q) = M is of the form with u(S, Q) a dimensionless function of dimensionless variables. The thermodynamics of the CFT is determined by the functional dependence of the mass on S, Q and V (or equivalently, x, y and L), explicitly Two other interesting cases are the 3-dimensional CFT obtained from M-theory in D = 11 compactified on AdS 4 × S 7 (more generally S 7 /k for integral k, [3,13]) and the 6-dimensional CFT obtained from M-theory in D = 11 compactified on AdS 7 × S 4 . 
For these two cases the N 2 behavior of the "extensive" quantities M, S and Q is replaced with N 3/2 and N 3 respectively and the powers of x change as in equations (11) and (12) but otherwise the analysis of the thermodynamics is similar to the case D = 10 and D = 5 studied in §4 below. The thermodynamics of N = SUSY Yang-Mills To explore the thermodynamics of the CFT fully we wish to fix N and allow S, Q and V to vary. This means we must vary L, but at the same time vary the D-dimensional Newton constant in such a way that N 2 in is fixed in the first equation of (16), [10]. For concreteness we shall focus on D = D = 5, the general structure is the same the other two cases, in particular the critical exponents are the same. Thermodynamically we interpret M = U(S, Q, V ) in (22) as the internal energy of the large N Yang-Mills theory at strong coupling, Note that, while the mass in the bulk is classical, the internal energy of the CFT is quantum mechanical and vanishes ash → 0 with x and y fixed. The temperature is then in the deconfined phase this is interpreted as the temperature of the quarkgluon plasma [12,13]. The pressure is [10] where ǫ = U V is the energy density. Hence and the speed of sound is given by simple consequences of the fact that dimensionally, Lastly the chemical potential is These expressions simplify at high T (large x) when the event horizon curvature and the charge are negligible. This is deep into the de-confined phase of the quark-gluon plasma and in this limit Since T ∝h we see that the pressure and the chemical potential here are of quantum mechanical origin. At fixed charge and temperature there is a phase transition [13], an extension of the Q = 0 Hawking-Page phase transition in the bulk [11], which is interpreted as the deconfining phase transition in the boundary Yang-Mills theory [12]. The heat capacity diverges when ∂T ∂S V,Q = ∂ 2 U ∂S 2 V,Q vanishes and there is a critical point when = 0. This happens at a critical point that was first found in [13]. The critical temperature is given by . Conformal invariance dictates that only the combination V 1 3 T is determined at critical point, V and T are not fixed separately. 2 2 Finite temperature field theory is associated with periodicity in Euclidean time with period 1/T and Euclidean time parameterizes a circle S 1 of radius 1 2πT . The spatial geometry is S 3 with radius L and, because the theory is conformal, the physics can only depend on the ratio of the radii of these two spheres, namely 2πT L. The physics depends on V 1 3 T , not on V and T separately, [12]. The specific heat critical exponent α can easily be determined by fixing y = y * and expanding 1/C V,Q around x * . With α can be extracted by similarly expanding the reduced temperature at constant volume, The critical exponent follows from This singularity in the heat capacity was not found in the bulk thermodynamics of higher dimensional asymptotically AdS charged black holes studied in [14,16], these authors found a critical point in the bulk but the exponents were mean field and C V,Q is finite there. To understand this apparent contradiction we write the full equation of state in terms of Q and Φ. 
Using (20) and (28) to eliminate x in favor of Q and Φ in (24) gives the equation of state Expanding about the critical point (31) with y = y * (1 + η), T = T * (1 + t) and Φ = Φ * (1 + ϕ), where LΦ * := 1 2 √ 5 , leads to Aside from a change in the sign of t this has the same analytic structure as the equation of state for a van der Waals gas [14] which, in reduced variables p = P −P * P * The van der Waals equation of state has mean field exponents, so one expects α = 0. The previous calculation gave α = 2 3 because Q was held fixed (fixed η) but the order parameter for the liquid-gas phase transition in the van der Waals gas is v, which would be analogous to ϕ in (38). Indeed C V,Φ is perfectly finite, [13], so α is indeed zero for fixed Φ, which is the analogue of the van der Waals case. If Q is the order parameter the usual exponents are defined by and the response function, behaves as . The equation of state (38) results in While these exponents, together with α = 2 3 , do satisfy the Rushbrooke and Widom scaling relations there is a difficulty in that γ is negative so the response function vanishes at critically rather than diverging. As indicated in (42) the response function χ T can be negative near the critical point, and this is the instability found in [17]. 3 It was also shown in [17] that Φ jumps across the phase transition in the two phase regime, suggesting that Φ is the order parameter for this transition. If Q is just fixed from the start and not varied then there is no problem, C V,Q diverges with exponent 2 3 and β, γ and δ are not relevant. But if we wish to probe the system by adjusting Q then Φ should be viewed as the order parameter, not Q. In that case C V,Q diverges as |t| −1 rather than |t| − 2 3 because it should be calculated for ϕ = 0 and not η = 0. The other exponents, ϕ ∼ |t| β and ϕ ∼ |η| δ , are easily obtained from (38) in the usual way and are found to be mean field In particular the heat capacity C V,Φ is finite (and negative) at the critical point [13]. Mean field behavior with Φ as the order parameter for the black hole in the bulk was first found in [14]. Alternatively the equation of state can be written in terms of the pressure rather than the chemical potential to study the compressibility. The adiabatic compressibility of the plasma follows easily from (25) and is but the isothermal properties are more interesting. It is shown in appendix A that the isothermal compressibility, κ T,Q , is given by Hence κ T,Q is vanishes at the critical point and is negative along Q = Q * close to the critical point when C V,Q * > 0 diverges. Thus the P -V response function, κ T,Q , behaves the same way as the Q-Φ response function χ T . This suggests that the order parameter here is P rather than V . If this is a valid point of view we should focus on the heat capacity at constant pressure, C P,Q , rather than C V,Q . Using the standard relation equations (47) and (48) give which is finite and negative when C V,Q diverges, just as C V,Φ is. In particular at the critical point Thus, with P as the order parameter, α = 0 is mean field. The negative value of C P at the critical point is a reflection of the instability found above in χ T . There is a subtlety in trying to extract β and δ as there is no independent definition of a critical volume, V and T are linked due to conformal invariance. In particular it would be wrong to assume the usual scaling relations to derive β and δ from α and γ above as conformal invariance imposes an extra constraint. 
A reduced volume v can be defined, but this is not a new variable since conformal invariance relates it directly to t. As a consequence the usual definitions of the exponents β and δ do not apply. Since v and t are not independent it is better, when discussing pressure and volume, to use η to probe the physics near the critical point rather than t. Lines of equal charge in the P-V plane are plotted in figure 1, where P and V are rendered dimensionless by multiplying by appropriate powers of T. The isothermal compressibility is positive when the slope of the plotted curves is negative, and there are two regions in the P-V plane where this is the case; the critical point lies on the boundary of the rightmost of these regions, which corresponds to Q < Q* and v > 0 (hence t > 0, i.e. the high temperature regime). The pressure of the system jumps across the phase transition. To calculate critical exponents, first define a reduced pressure p at fixed volume. (One can also use the temperature to render P dimensionless and define the reduced pressure as p = (P/T⁴ − (P/T⁴)*)/(P/T⁴)*; this does not change the critical behavior and, in particular, gives the same critical exponents.) Then, with p as the order parameter and Q as the control variable, β is defined in analogy with its usual definition, but with t replaced by η and with p as the order parameter (so p and v are interchanged); δ is defined similarly, and the relevant response function involves the isothermal compressibility κ T,Q. We leave the details to an appendix and here quote the result that β, γ and δ are indeed mean field. We conclude that, with the proper identification of the order parameter as being the thermodynamic variable that jumps across the line of first order phase transformations, the exponents are always mean field in the large N limit, both in the Φ-Q plane and the P-V plane. Summary The critical point of N = 4 SUSY SU(N) Yang-Mills in the large N limit, first found in [13,17], is also visible in the P − V diagram of the quark-gluon plasma. The volume here is provided by the cosmological constant in the 5-D bulk, Λ = −6/L², which is related to the volume of 3-space, S³, in the boundary Yang-Mills theory by V = 2π²L³. The pressure is then defined through the thermodynamic relation P = −∂U/∂V, where U is the internal energy of the thermodynamic system, identified with the mass of the black hole in the bulk. This is in contrast to the thermodynamics of the black hole itself, where the roles of pressure and volume are reversed so that, on the gravity side, the cosmological constant provides the pressure, the thermodynamically conjugate variable is a volume, and the black hole mass is the enthalpy of the system [5]. There is a constraint on the thermodynamic variables, due to the fact that the boundary field theory is conformal, and only the combination V T³ is relevant to the phase structure, not V and T separately. As a consequence it seems better to use the charge rather than the temperature as the control parameter in discussing the thermodynamics. Then the pressure jumps across the phase transition in the two phase regime.
In the approximation in which the fluid is treated as a gas of non-interacting particles -quarks and gluons in the plasma and non-interacting hadrons in the confined phase -this jump may be viewed as simply due to a change in the number of degrees of freedom: each degree of freedom contributes equally to the pressure any change across the phase transition in the number of effective degrees of freedom contributes to a change in pressure. With pressure as the order parameter in the P -V plane the exponents are mean field and the phase transition is in the same universality class as the van der Waals gas. Mean field exponents also characterize the phase transition in the Φ-Q plane. An unsatisfactory feature of the analysis is the instability associated with the negative value of C P at the critical point, implying that the system is unstable there when the pressure is fixed. Although κ T is positive in the region enclosed by the black curve to the right of the critical point in figure 1 both C P and C V are negative there. The only region of the P − V plane that has all three of C P , C V and κ T positive is the region to the left of the left-hand black curve in the figure, the single phase region. C P < 0 to the right of this curve. Instability in black hole thermodynamics is not new and indeed lies behind the phenomenon of black hole evaporation -an asymptotically flat Schwarzschild black hole also has negative C P . Whether or not negative C P is a problem is a question of time-scales, for Hawking radiation the time-scale for black hole evaporation can be very large, so large that the system is essentially quasi-static equilibrium and thermodynamic principles can still be applied, at least at early times before the evaporation process has gone too far. It is not clear whether the same thing can be said for the quark-gluon plasma, or what the physical interpretation of the instability should be in this case. Although the analysis here has focused on the case of 4-dimensional N = 4 SUSY SU(N) Yang-Mills, associated with string theory in D = 10 compactified on AdS 5 × S 5 , the thermodynamics of the 3-dimensional and 6-dimensional CFTs, obtained from M-theory in D = 11 compactified on AdS 4 × S 7 /k and on AdS 7 × S 4 respectively, is essentially similar -the critical exponents are mean field in all three cases. It would be of great interest to evaluate 1/N corrections to the exponents but the method employed here relies on a representation of the thermodynamic potentials arising from an exact solution of the Einstein equations in the bulk. The paucity of known exact solutions relevant to the problem is an obstacle to realizing 1/N corrections with this method. However it may be possible to make some progress in studying 1/N corrections by adding a Gauss-Bonnet term to the 5-D Einstein action [20]. Acknowledgment: This article is based upon work from COST Action MP1405 QSPACE, supported by COST (European Cooperation in Science and Technology). A Free energy calculations To study the theory at fixed T in more detail consider the free energy which is of the form where and F is a dimensionless function of dimensionless variables. The free energy is most easily expressed in parametric form using (17) and (24), and τ = 1 (4π) In terms of the functions F (x, y) and τ (x, y) in equations (62) and (63), whereḞ = ∂F ∂τ , and Note also that ∂S From this follows Now, since P = U 3V , we have Thus the isothermal compressibility, κ T,Q vanishes at the critical point. 
Expanding this around the critical point, one finds that κ T,Q vanishes, where (69) has been used to eliminate η in the second equation. Now we can set p = 0 in (70); equation (69) then implies ε ≈ t/2 and η ∼ −24ε, hence (κ T,Q)^−1 ∼ 1/|η| ∼ 1/t at the critical pressure and, with pressure as the order parameter, γ = 1. In the two-phase regime (η < 0) the isothermal compressibility is positive, but it is negative for η = 0. Lastly, setting η = 0 in the parametric equations of state (72) and (73) implies that p ≈ (16/7)ε and v ≈ (5/2)ε³, and hence δ = 3. Figure 2: Maxwell construction in the P-V plane along a curve of constant charge with P as the order parameter (P and V are rendered dimensionless using appropriate powers of T as in figure 1).
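Since the displayed equations in this appendix did not survive extraction, the short reconstruction below spells out how δ = 3 and γ = 1 follow from the expansions quoted above. The exponent definitions shown are the usual ones with p as the order parameter and are an assumption about the notation, not the paper's own displayed formulas.

```latex
% Hedged reconstruction (assumed notation): with p as the order parameter,
%   p \sim |\eta|^{\beta}, \qquad (\kappa_{T,Q})^{-1} \sim |\eta|^{-\gamma}, \qquad p \sim v^{1/\delta}\ \text{at}\ \eta = 0 .
% Eliminating the expansion parameter \varepsilon at \eta = 0:
p \approx \tfrac{16}{7}\,\varepsilon , \qquad v \approx \tfrac{5}{2}\,\varepsilon^{3}
\quad\Longrightarrow\quad
\varepsilon = \Bigl(\tfrac{2v}{5}\Bigr)^{1/3}, \qquad
p \approx \tfrac{16}{7}\Bigl(\tfrac{2v}{5}\Bigr)^{1/3} \propto v^{1/3},
% i.e. \delta = 3, while (\kappa_{T,Q})^{-1} \sim 1/|\eta| \sim 1/t gives \gamma = 1.
```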
6,358.8
2016-03-20T00:00:00.000
[ "Physics" ]
Wavelet scattering networks in deep learning for discovering protein markers in a cohort of Swedish rectal cancer patients Abstract Background Cancer biomarkers play a pivotal role in the diagnosis, prognosis, and treatment response prediction of the disease. In this study, we analyzed the expression levels of RhoB and DNp73 proteins in rectal cancer, as captured in immunohistochemical images, to predict the 5-year survival time of two patient groups: one with preoperative radiotherapy and one without. Methods The utilization of deep convolutional neural networks in medical research, particularly in clinical cancer studies, has been gaining substantial attention. This success primarily stems from their ability to extract intricate image features that prove invaluable in machine learning. Another innovative method for extracting features at multiple levels is the wavelet-scattering network. Our study combines the strengths of these two convolution-based approaches to robustly extract image features related to protein expression. Results The efficacy of our approach was evaluated across various tissue types, including tumor, biopsy, metastasis, and adjacent normal tissue. Statistical assessments demonstrated exceptional performance across a range of metrics, including prediction accuracy, classification accuracy, precision, and the area under the receiver operating characteristic curve. Conclusion These results underscore the potential of dual convolutional learning to assist clinical researchers in the timely validation and discovery of cancer biomarkers. Cancers share many similar features, including symptoms, risk factors, and basic cell biology. Cancer is a condition of genetic variations, 1,2 in which mutations can cause genes to function abnormally and result in altered expression. Proteins, which are the final products of gene expression, influence phenotypes and various biological processes. Hence, the capacity to discern the levels of protein expression holds clinical significance in the domains of cancer diagnosis, prognosis, treatment, and the anticipation of cancer recurrence. 1,3 Radiotherapy (RT) is a treatment for cancer that applies radiation to eliminate cancerous cells and shrink tumors. While RT does not consider the genetic makeup of tumors, it plays a significant role in cancer treatment and cost reduction within the health care system. A primary challenge in the use of RT, which may lead to its inappropriate application, is the absence of biomarkers for identifying cancer patients who could benefit from RT and respond favorably to the treatment. 4 Because biomarkers serve as quantifiable biological indicators utilized in diagnosis, prognosis, and therapy, the exploration of cancer biomarkers holds immense importance, as they can be harnessed for precision or personalized cancer medicine. By offering insights into individual biological mechanisms and the progression of cancer, these biomarkers empower clinicians to select effective therapeutic strategies.
Numerous biomarkers have been investigated for colorectal cancer (CRC). 5,6,9-13 To cite a few examples: mutations in the BRAF, KRAS, and p53 genes have been implicated in CRC development 14; the protein guggulsterone, a plant phytosteroid, has shown connections to inhibiting the growth of human CRC cells 15; the expression of APRIL, BAFF, IL8, and MMP2 has been linked to the inflammatory microenvironment of CRC tumors 16; multiple biomarkers have been explored in young CRC patients 17; the evaluation of DNp73 18 and RhoB 19 expressions through immunohistochemistry (IHC) in rectal cancer tumor and biopsy samples; fecal microbial biomarkers have demonstrated potential for early CRC diagnosis 20; and the gut microbiome, encompassing stool, blood, tissue, and bowel fluid samples, has been extensively investigated as a primary source of biomarkers for CRC screening. 21-25 Furthermore, artificial intelligence (AI) has become a revolution in cancer research. For example, AI has been applied as a powerful tool for precision histology, 26 predicting unknown cancers, 27 diagnosis and therapy, 28 aiding histopathologists in improving clinical oncology, 29 and cancer research. 30 Among various convolutional network types, the wavelet-scattering transform 31-33 has appeared as a topic of growing interest within the signal processing and machine learning communities. This approach continues finding applications in diverse fields, including but not limited to neural disease classification, 34 authentication of artworks, 35 predictive indoor fingerprinting-based localization, 36 ECG beat classification, 37 classification of alcoholic EEG signals, 38 and magnetohydrodynamic simulations for pattern analysis. 39 Based on the successful applications of convolutional methods in CNNs and wavelet scattering, this paper attempts, for the first time, to combine these two types of convolution-based operations for extracting strongly differentiable features of protein-expressed IHC images. Following the extraction of these robust features, a support vector machine (SVM) model is deployed to learn and predict the survival time for two patient cohorts with rectal cancer. These cohorts comprise patients who either received preoperative RT or did not, with the prediction goal being to differentiate between survival times of less than or more than 5 years. | IHC imaging data The study utilized a dataset consisting of IHC images collected from 136 patients who were enrolled in the randomized Swedish Rectal Cancer Trial conducted between 1987 and 1990. 40 Of these 136 patients, 77 did not receive RT, while the remaining 59 patients underwent RT. All patients were followed up for a period exceeding 5 years. It has been reported that there were no statistically significant differences (p > 0.05) between the two survival groups with respect to clinical and pathological characteristics, including factors such as sex, age, TNM (tumor-node-metastasis) stage, and degree of differentiation (see Tables S1 and S2 in the Supporting Information of a previous study, 18 which presented the characteristics of patients and p-values using the same IHC data).
Tissue microarray (TMA) slides were prepared from rectal cancer-confirmed tissue samples, with validation by a pathologist.These samples were subsequently subjected to immuno-histochemical staining, either using RhoB or DNp73 markers.The tissue specimens were initially fixed with formalin, embedded in paraffin, and fashioned into TMAs.TMA sections measuring 4 μm in thickness underwent deparaffinization through a series of treatments involving 100% xylene and ethanol, followed by antigen retrieval.Subsequently, the sections were subjected to overnight incubation with anti-RhoB or anti-DNP73 mouse monoclonal antibodies, followed by a 25-minute incubation with a secondary antibody, Envision System Labelled Polymer-HRP Anti-Mouse.The sections were then exposed to Liquid DAB+ (Dako) and lightly counterstained with hematoxylin.Finally, the slides were digitally scanned using a Leica Aperio CS2 scanner. To avoid the problem of data imbalance in machine learning and to fairly demonstrate the discovery of protein expressions, the numbers of IHC images of RhoB and DNp73 expressions for both survival times are designed to be equal by reducing the number of the images of the majority class to that of the minority class.For RhoB expression with the RT case, total samples of < and >5 year survival times for tumor = 106 (average of 1.80 samples per patient), biopsy = 102 (average of 1.73 samples per patient), metastasis = 20 (average of 0.34 sample per patient), and normal tissue adjacent to tumor = 58 (average of 0.98 sample per patient).For RhoB expression with the non-RT case, total samples of < and >5-year survival times for tumor = 132 (average of 1.71 samples per patient), biopsy = 90 (average of 1.17 samples per patient), metastasis = 20 (average of 0.26 sample per patient), and normal tissue adjacent to tumor = 76 (average of 0.99 sample per patient).Only IHC images of tumor and biopsy tissues are available for DNp73 expression.For DNp73 expression with the RT case, total samples of < and >5-year survival times for tumor = 46 (average of 0.78 sample per patient), and biopsy = 22 (average of 0.37 sample per patient).For DNp73 expression with the non-RT case, total samples of < and >5-year survival times for tumor = 50 (average of 0.65 sample per patient), and biopsy = 28 (average of 0.36 sample per patient). Both RhoB and DNp73 datasets were previously studied in, 18,19 respectively.Ten-fold cross-validation was used in this study for training and testing various machine-learning models.This validation is particularly useful when the amount of data are limited.First, it divides the dataset into ten equal parts or "folds."Each fold should contain approximately the same distribution of data points as the entire dataset.The next step involves a loop that iterates ten times.In each iteration, one of the 10 folds is set aside as the test set, and the remaining nine folds are used as the training set.A machine-learning model is trained on the training set.The trained model is then used to make predictions on the test set. 
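A minimal sketch of the 10-fold splitting procedure described above is given below, with scikit-learn's StratifiedKFold used as an illustrative stand-in for whatever tooling was actually employed in the study; the feature matrix and labels are placeholders, not the study's data.

```python
# Minimal sketch of the 10-fold cross-validation loop described above
# (scikit-learn is an assumed stand-in; X and y are placeholder data).
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.random((106, 512))              # placeholder IHC feature vectors
y = np.array([0, 1] * 53)               # balanced labels: <5-year vs >5-year survival

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y), start=1):
    # a model would be trained on the nine training folds and evaluated on the held-out fold
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test samples")
```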
| Pretrained CNN models Pretrained CNNs refer to CNN models that have been trained on large datasets for the task of image classification or related computer vision tasks before being used for other tasks.These pretrained models are a subset of transfer learning techniques in deep learning.How pretrained CNNs work is described as follows.In this study, 10 popular pretrained CNN models are adopted as baseline deep networks for predicting survival time of the rectal patient cohort.These pretrained models are briefly described as follows. | GoogLeNet This CNN is a deep net, comprising 22 layers, and it operates on images with a resolution of 224 × 224 × 3 pixels. 42oogLeNet employs a distinctive approach by incorporating multiple convolutional filter sizes within each module of the network, occasionally incorporating max-pooling layers.The network was trained using a big dataset of over one million images sourced from the ImageNet database, 43 with the goal of classifying these images into 1000 distinct object categories.For the purpose of feature extraction from IHC images in this study, the layer known as "pool5-7x7_s1" (global average pooling) was specifically chosen. | AlexNet This is a CNN of eight layers deep, including five convolutional layers with occasional max-pooling layers and three fully connected layers with a final 1000-way softmax, and has an image input size of 227 × 227 × 3. 44 This net was trained with more than one million images of the ImageNet database 43 to classify 1.2 million images into 1000 object categories.Layer "pool5" was chosen for the deep-learning feature extraction of the IHC images. | ShuffleNet This particular network is a CNN characterized by its depth of 50 layers and tailored to accommodate an image input size of 224 × 224 × 3 pixels. 45It was conceptualized and structured for effective deployment on mobile devices, where computational resources tend to be constrained.The architecture of this network revolves around two distinctive operations: pointwise group convolution and channel shuffling, both were designed to optimize classification accuracy while aligning with the computational limitations of mobile devices.In this study, the layer denoted as "node 200" was chosen to facilitate the extraction of deep-learning features from IIHC images. | ResNet101 This deep neural network belongs to the residual networks (ResNet) family. 46It was pretrained and requires input images of 224 × 224 × 3 pixels.What sets this network apart is its ability to learn residual functions concerning the layer inputs, as opposed to the raw signals, and it achieves this by stacking residual blocks to create a network with a depth of 101 layers.This architectural innovation simplifies the optimization process, making it comparatively more straightforward to enhance the network performance by increasing its depth.In the context of this study, the "pool5" layer from ResNet101 was chosen to extract deep features from the IHC images. | DenseNet201 This network is a variation of the DenseNet model, 47 specifically known as DenseNet-201 due to its extensive depth of 201 layers.What distinguishes this architecture is its unique design, enabling the assimilation of collective information from preceding layers to reduce channel redundancy, resulting in a densely connected network.DenseNet-201 requires images sized at 224 × 224 × 3 pixels.In this study, the "avg_pool" layer, which is the global average pooling, was selected to facilitate the extraction of deep features from IHC images. 
| SqueezeNet SqueezeNet is a CNN characterized by its 18 layers, with an image input size of 227 × 227 × 3 pixels. 48This network underwent extensive training on a dataset exceeding one million images from the ImageNet database, 43 where its objective was to categorize images into 1000 distinct object categories.Consequently, this network possesses a wealth of knowledge derived from a diverse range of images.For the purpose of this study involving IHC data analysis, the "pool10" layer (global average pooling) was chosen to extract deep features. | Xception The nomenclature of this deep net signifies an advanced Inception model. 49Through enhancements to depth-wise separable convolution, this network surpasses the performance of the Inception model.With a depth of 71 layers, it requires input images sized at 299 × 299 × 3 pixels.Its training was conducted using an extensive dataset of over a million images sourced from the ImageNet database, 43 geared toward classifying these images into 1000 object categories.In this particular study focusing on IHC data analysis, the "avg_pool" layer was selected to facilitate the extraction of deep features. | InceptionResNetv2 This CNN has an impressive depth of 164 layers and requires input images sized at 299 × 299 × 3 pixels. 50nceptionResNetv2 is rooted in the Inception family, where it innovatively introduces residual connections by replacing the filter concatenation stage found in the Inception design.For the purposes of extracting deep features from IHC data in this study, the "avg_pool" layer was employed. | DarkNet53 This deep net 51 serves as the foundational CNN architecture for the YOLOv3 object detection methodology. 52This network spans 53 layers in depth and was designed to handle images with a size of 256 × 256 × 3 pixels.Having been pretrained on a diverse dataset comprising over a million images from ImageNet, 43 this CNN inherits a wealth of feature learning.In the context of this study involving IHC data analysis, the "avg1" layer was chosen for the extraction of deep features. | NASNetLarge This network is a specialized iteration of the Neural Architecture Search Network (NASNet) models. 53It was crafted to incorporate both normal and reduction cells, facilitating the exploration of search space, search strategy, and performance evaluation to attain optimal performance.NASNet-Large expects input images sized at 331 × 331 × 3 pixels.For the purpose of extracting deep features from IHC images in this study, the "global_av-erage_pooling2d_2" layer of the NASNetLarge was employed. | Wavelet scattering network analysis of deep-learning features The similarity between a signal and wavelets of varying frequency and scale at a time point can be measured with the continuous wavelet transform (CWT).The wavelet scattering network 54 makes use of the CWT to perform three basic operations for the decomposition of a signal (see Figure 1): convolution, nonlinearity, and average pooling.These operations alter the original signal across a series of layers organized in a tree-like structure, where the input for each subsequent layer stems from the output of its predecessor.The scattering process generates wavelet scattering coefficients at each layer of the network. In this study, input signals for the wavelet scattering decomposition are the flattened feature vectors of the IHC images extracted by one of the described pretrained CNNs.The process of computing wavelet scattering coefficients iteratively across multiple layers is outlined as follows. 
Let ψ(t) be a band-pass filter, known as a mother wavelet, and let ψ_j(t) denote a wavelet filter bank, which can be formed by dilating the mother wavelet with scale factors λ_j = 2^(j/Q), j ∈ ℤ, 1 ≤ j ≤ J, where J is the maximum level of layers and Q is the number of wavelets per octave. Let f be an input signal, which is a (flattened) deep-learning feature vector of an input IHC image. The wavelet scattering coefficients at layer zero, also called the zeroth-order scattering coefficients, are computed by taking the average of the feature vector as L_0 f = f ∗ φ (1), where L_0 denotes the zeroth-order scattering, φ is a low-pass filter, and ∗ denotes the convolution operator. Figure 1: Three basic operations of a wavelet scattering process at each network layer. The wavelet scattering coefficients at layer 1, or the first-order scattering coefficients, are determined by averaging the modulus of the wavelet coefficients at layer 1, L_1 f = |f ∗ ψ_{j1}| ∗ φ. The second-order wavelet scattering coefficients are calculated as L_2 f = ||f ∗ ψ_{j1}| ∗ ψ_{j2}| ∗ φ, and likewise the third-order wavelet scattering coefficients are L_3 f = |||f ∗ ψ_{j1}| ∗ ψ_{j2}| ∗ ψ_{j3}| ∗ φ. Typically, the wavelet scattering coefficients at layer j, where j = 1, ..., J, are computed through iterated convolution, modulus, and average-pooling operations, L_j f = ||...|f ∗ ψ_{j1}| ∗ ψ_{j2}| ... ∗ ψ_{jj}| ∗ φ. The Morlet wavelet, which is widely used for wavelet scattering, is adopted as the mother wavelet and is defined as ψ(t) = c exp(−t²/(2σ²)) exp(i2πft). 55,56 Here, in this study, c = 1, σ = 1, i is the imaginary unit, f is the center frequency with 2πf = 5, and the real part of exp(i2πft), namely cos(5t), is used, yielding ψ(t) = exp(−t²/2) cos(5t). Other parameters for wavelet scattering in this study were specified as follows: scale of time invariance = half of the signal length; a wavelet scattering network with two filter banks, where the Q factor of filter bank 1 = 8 wavelets per octave and the Q factor of filter bank 2 = 1 wavelet per octave; and sampling frequency = 1 Hz. The mathematical operations underpinning wavelet scattering, as explained above, prove valuable in the context of texture feature extraction from IHC images within the domain of pattern classification, and this transform presents several advantages for such an application. | Support vector machines The model of support vector machines (SVMs) 60,61 adopted in this study is the linear SVM. It is a supervised machine-learning algorithm used for binary classification tasks. Its primary objective is to find a hyperplane that best separates two classes of data points in a feature space. The key characteristics and components of a linear SVM are described as follows.
• Hyperplane: in a two-dimensional feature space, a hyperplane is a straight line that separates two classes of data points. In feature spaces with more than two dimensions, a hyperplane can be described as a flat affine subspace whose dimension is one less than that of the surrounding space. For binary classification, the hyperplane is used to distinguish between the two classes.
• Margin: the margin is the distance between the hyperplane and the nearest data point from either class. In a linear SVM, the goal is to find the hyperplane that maximizes this margin. A larger margin generally indicates better generalization performance.
• Support vectors: support vectors are the data points that are closest to the hyperplane and play a crucial role in defining the margin. These are the points that, if moved or altered, would affect the position of the hyperplane.
• Linear separability: a linear SVM works well when the data are linearly separable, meaning that a hyperplane can completely separate the two classes without any misclassifications.
• Soft margin: in situations where the data are not perfectly separable by a hyperplane, a linear SVM can be adapted to allow for some misclassifications. This is done by introducing a soft margin that allows some data points to be on the wrong side of the hyperplane while still aiming to minimize misclassifications and maximize the margin.
• Kernel trick: a linear SVM can be extended to handle nonlinearly separable data by using a technique called the kernel trick. This involves mapping the original feature space into a higher-dimensional space, where the data may become linearly separable.
Hereafter, the term SVM shall denote the linear SVM. SVM-based predictor Having described the selected pretrained CNN models for extracting deep features from the protein-expressed IHC images obtained from the cohorts with > and <5-year survival times, and the wavelet scattering network analysis as a second convolutional feature extractor applied to the deep-learning features, the training and testing phases of the proposed predictor are designed as follows. For training a classifier, 10-fold cross-validation was used, where nine randomly selected folds of the IHC images of each of the two survival classes were input into a pretrained CNN for transfer learning. The flattened learned feature vectors were extracted at the selected deep layer of the network, as previously specified in the descriptions of the 10 pretrained CNN models. These convolutional feature vectors were then used as input signals for the wavelet scattering transform, resulting in the extraction of wavelet scattering coefficients, or second convolutional features, from the training IHC images. Finally, the extracted wavelet scattering features were used for training an SVM-based classifier for predicting the survival time. In this study, the parameters specified for the SVM model are: kernel function = polynomial of order 2, kernel scale = 1, solver = sequential minimal optimization, and data = standardized. For testing the trained classifier, the remaining 10th fold of the IHC images of the two survival classes was used for extracting the wavelet scattering coefficients, following the same procedure applied in the training phase. The wavelet scattering coefficients extracted from each image of the test data were then fed into the trained SVM-based classifier for the survival time prediction.
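As a rough illustration of the pipeline just described (pretrained-CNN features, wavelet scattering, then a degree-2 polynomial SVM), the sketch below uses torchvision, kymatio, and scikit-learn as assumed stand-ins for the tooling actually used in the study; the layer choice, scattering parameters, file names, and labels are illustrative assumptions, not the authors' settings or data.

```python
# Minimal sketch of the dual convolutional learning pipeline described above.
# torchvision / kymatio / scikit-learn are assumed substitutes for the original
# tooling; file names, J/Q settings, and labels are illustrative placeholders.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from kymatio.numpy import Scattering1D
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# 1) Deep features from a pretrained CNN at its global pooling layer.
cnn = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
cnn.fc = torch.nn.Identity()            # keep the 2048-d pooled feature vector
cnn.eval()
prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def deep_features(path: str) -> np.ndarray:
    img = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return cnn(img).squeeze(0).numpy()          # flattened deep-feature vector

# 2) Wavelet scattering of the flattened feature vector (second convolutional stage).
scatter = Scattering1D(J=6, shape=2048, Q=8)

def scattering_features(path: str) -> np.ndarray:
    return scatter(deep_features(path).astype(np.float32)).ravel()

# 3) Degree-2 polynomial SVM on standardized scattering features, 10-fold cross-validation.
paths = [f"ihc_{k}.png" for k in range(106)]         # placeholder image list
y = np.array([0, 1] * 53)                            # <5-year vs >5-year labels
X = np.stack([scattering_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0))
print(cross_val_score(clf, X, y, cv=10).mean())
```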
The above processes of training and testing were repeated 10 times to ensure each of the 10 folds was tested. Thus, the validation resulted in a final confusion matrix accumulated from the 10 runs of training and testing over the whole protein-expressed IHC data for each of the tissue types. Figure 2 graphically describes the training and testing phases of the proposed approach. During the training phase, multiple IHC images (either RhoB or DNp73) with known survival times are used to extract wavelet-scattering features, facilitating the training of the SVM model. This results in an SVM model that has been trained to predict two classes: survival times greater than 5 years and survival times less than 5 years. In the subsequent testing phase, an IHC image of either RhoB or DNp73 expression, for which the survival time is unknown, is initially processed through a pretrained CNN for deep feature extraction. These features are then converted into wavelet scattering coefficients, which serve as input data for the trained SVM model. This trained SVM model is then employed to predict the survival time of the patient with rectal cancer. | Measures of prediction performance The following variables are defined:
• P = the count of samples of less than 5-year survival
• N = the count of samples of greater than 5-year survival
• TP = the count of correctly predicted samples of less than 5-year survival
• TN = the count of correctly predicted samples of greater than 5-year survival
• FP = the count of wrongly predicted samples of less than 5-year survival
• FN = the count of wrongly predicted samples of greater than 5-year survival
Prediction accuracy (ACC), correct detection for <5-year survival time (D(<5)), correct detection for >5-year survival time (D(>5)), precision (PRC), and the F1 score, which are used as statistical measures of predictor performance, are defined as follows: ACC = (TP + TN)/(P + N) (8), D(<5) = TP/P (9), D(>5) = TN/N (10), PRC = TP/(TP + FP) (11), and F1 = 2 × PRC × D(<5)/(PRC + D(<5)) (12). Another performance measure is the area under the receiver operating characteristic (ROC) curve. The ROC curve is constructed by plotting the TP rate against the FP rate at various thresholds. In other contexts, the TP rate is called the sensitivity or the probability of detection, and the FP rate is also known as the probability of false alarm. Thus, the area under the ROC curve (AUC) measures the quality of a predictor regardless of which classification threshold is chosen. Higher values of the measures expressed in Equations (8-12) and of the AUC indicate better performance of the predictor. RhoB expression The RhoB-expressed IHC images of tumors obtained from patients with RT were used to illustrate the robustness and the significant reduction in computing time of the proposed combination approach. Table 1 shows the performance measures for the 10 pretrained CNNs, which extracted deep-learning features of the IHC images and predicted the survival time, and for the 10 proposed methods, which used the pretrained CNNs as the first convolutional learning for extracting deep-learning features of the IHC images, WS as the second convolutional learning for extracting multilevel-filtered features from the deep-learning features, and SVM for predicting the survival time. Results obtained from the 10-fold cross-validation include accuracy, predictions for < and >5 years, precision, F1 score, AUC, and total computing time using a single GPU (Quadro RTX 6000). Figure 2: Training (A) and testing (B) phases of the proposed approach.
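As a quick numerical illustration of the measures in Equations (8)-(12), the sketch below computes them from a set of made-up confusion-matrix counts; the counts are illustrative only, not results from the study.

```python
# Illustrative computation of the performance measures in Equations (8)-(12)
# from hypothetical confusion-matrix counts accumulated over the 10 folds.
TP, TN, FP, FN = 50, 48, 3, 5          # made-up counts, not results from the paper
P, N = TP + FN, TN + FP                # totals of <5-year and >5-year samples

ACC   = (TP + TN) / (P + N)            # Equation (8)
D_lt5 = TP / P                         # Equation (9): correct detection of <5-year survival
D_gt5 = TN / N                         # Equation (10): correct detection of >5-year survival
PRC   = TP / (TP + FP)                 # Equation (11)
F1    = 2 * PRC * D_lt5 / (PRC + D_lt5)   # Equation (12)

print(f"ACC={ACC:.2f}, D(<5)={D_lt5:.2f}, D(>5)={D_gt5:.2f}, PRC={PRC:.2f}, F1={F1:.2f}")
```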
Table 2 shows the 10-fold cross-validation classification results obtained from the top three models found in Table 1, which are ResNet101-WS-SVM, NASNetLarge-WS-SVM, and AlexNet-WS-SVM, for the RhoB-expressed IHC samples of tumor, biopsy, metastatic, and adjacent normal tissues with RT and non-RT. Figure 3 presents the RhoB-expressed IHC images that were incorrectly classified or tied by the predictors listed in Table 2.
DNp73 expression
Table 3 shows the 10-fold cross-validation classification results obtained from the top five models found in Table 1, which are ShuffleNet-WS-SVM, Xception-WS-SVM, ResNet101-WS-SVM, NASNetLarge-WS-SVM, and AlexNet-WS-SVM, for the DNp73-expressed IHC samples of tumor tissue with RT and non-RT. Figure 4 presents the DNp73-expressed IHC images that were incorrectly classified or tied by the predictors listed in Table 3.
DISCUSSION
For the prediction of survival time using RhoB expression on the tumor tissue of RT patients, the accuracy rates obtained from the 10 individual CNNs are between 58% (InceptionResNetv2) and 67% (SqueezeNet), with GPU times of 9417 and 7915 seconds, respectively. Using the same data, the lowest and highest accuracies obtained from the proposed approach are 79% (DenseNet201-WS-SVM) and 97% (AlexNet-WS-SVM), with GPU times of 21 and 18 seconds. Comparing the two best results obtained from the two approaches, the proposed approach is 30 percentage points more accurate (97% vs. 67%) and about 440 times faster (7915 s vs. 18 s) than the baseline approach. Values of the AUC obtained from the 10 baseline methods are between 0.57 (InceptionResNetv2) and 0.70 (DenseNet201). Values of the AUC obtained from the 10 proposed methods are between 0.81 (DenseNet201-WS-SVM) and 0.97 (NASNetLarge-WS-SVM). For the two best proposed models, NASNetLarge-WS-SVM predicted >5-year survival (100%) better than <5-year survival (96%), and the other way around for AlexNet-WS-SVM.
Table 1. Ten-fold cross-validations of 5-year survival prediction using RhoB expression in rectal cancer tumor tissue with RT: comparative analysis of the proposed models and the baseline CNNs in terms of prediction accuracy and computational time.
The top three proposed models identified from Table 1 were selected for the survival time prediction of RT and non-RT patients using other types of tissue captured with RhoB-expressed IHC images; the results are presented in Table 2. For the RT case, both AlexNet-WS-SVM and NASNetLarge-WS-SVM perfectly predicted the survival time (100%) using either the biopsy or metastatic samples, while Xception-WS-SVM perfectly predicted the survival time (100%) using samples of normal tissue adjacent to tumors. For the non-RT case, AlexNet-WS-SVM appears to be the best prediction model using samples of tumor (98%), biopsy (100%), metastatic (100%), and normal (100%) tissues. For both RT and non-RT cases, the double convolutional learning of metastatic samples appears to be the most favorable for the survival prediction task (100% accuracy obtained from all three predictors).
The top five proposed models found in Table 1 were selected for the survival time prediction of RT and non-RT patients using DNp73-expressed IHC images of tumor and biopsy tissues; the results are shown in Table 3.
Similar to the prediction using RhoB expression, the two most robust models in terms of accuracy and AUC are AlexNet-WS-SVM and NASNetLarge-WS-SVM. All five proposed models perfectly predicted the survival time using the IHC images of biopsy (100%). Three models that perfectly predicted the survival time using the IHC images of tumor with RT (100%) are Xception-WS-SVM, NASNetLarge-WS-SVM, and AlexNet-WS-SVM. Two models that perfectly predicted the survival time using the IHC images of tumor without RT (100%) are ShuffleNet-WS-SVM and ResNet101-WS-SVM. Taking tie votes into account, predictions for either < or >5 years, including with and without RT, were robustly performed by all five models (96%-100%), except for ShuffleNet-WS-SVM, which yielded 87% for the >5-year survival using the tumor samples with RT. All five models achieved high precision rates (96%-100%), high F1 scores (0.93-1), and high AUC values (0.93-1). Once again, the validation results in terms of accuracy, specific survival time predictions, precision, F1 score, and AUC demonstrate the potential application of the double convolutional learning for discovering the predictive power of protein DNp73 in rectal cancer.
Figure 3. RhoB-expressed samples of survival time of less (<5-y) or more (>5-y) than 5 years: falsely predicted by AlexNet-WS-SVM (A-F); falsely predicted by NASNetLarge-WS-SVM (G-K); and decided as tie votes by NASNetLarge-WS-SVM (L-P). The survival times of these images also could not be determined by the two pathologists.
The robustness of the proposed approach is further demonstrated by its learning on very limited data in the case of DNp73 expression, where there is only a small number of images for each survival group. Using the 10-fold cross-validation, for the RT case, the accuracies are very high (93%-100%) for the survival time classification using the tumor tissue (100%) and biopsy tissue (100%). Likewise, using the 10-fold cross-validation for the non-RT case, the accuracies are even higher (98%-100%) for the survival time classification using the tumor tissue (100%) and biopsy tissue (100%).
Figures 3 and 4 show IHC images of RhoB and DNp73 expression, respectively, that were misclassified or tied by certain classifiers of the proposed approach.
Table 3. Ten-fold cross-validations of survival prediction using DNp73 expression in rectal cancer, using the top five proposed models.
As indicated in Tables 1-3, most images that failed the prediction are of the tumor tissue, and in particular, RT tumor samples (4 for < and 3 for >5 years, i.e., 7 out of 12 for RhoB expression, and 1 for < and 3 for >5 years, i.e., 4 out of 7 for DNp73 expression). The two pathologists at Linkoping University also could not determine the survival times from these images; such images can be set aside for further examination to gain insight into assessing, labeling, and visualizing the protein levels of gene expression of the targeted detection reagents. Additionally, Table 4 presents a comparison of the top three prediction accuracies achieved using RhoB and DNp73 expressions. This analysis indicates that DNp73 expression has the potential to offer more favorable prognostic insights than RhoB when employing AI for the analysis of both tumor and biopsy tissues, with or without RT. Among the models evaluated, Xception-WS-SVM emerges as the preferred choice for learning from DNp73 expression data, while AlexNet-WS-SVM exhibits superior performance when learning from RhoB expression data.
The choice of specific protein markers for cancer prognosis in research or clinical practice can depend on several factors. This study focused on the two protein markers RhoB 10,62 and DNp73 63,64 because of their relevance to this cancer type (different cancers can have distinct molecular profiles, and specific protein markers may be more relevant to one type of cancer than another), their biological significance (RhoB and DNp73 have known roles in colorectal cancer progression, making them of interest for prognosis studies), and hypothesis testing (RhoB and DNp73 are hypothesized to carry valuable prognostic information for colorectal cancer). Conducting experiments or assays to measure protein markers can be resource-intensive and time-consuming; therefore, this report limits the number of markers to ensure the feasibility of the study.
Cancer is a complex disease with numerous variables that can affect survival time. While protein markers can be important biomarkers for predicting outcomes, they are just one part of a comprehensive assessment that considers the patient's overall health, the characteristics of the cancer itself, and the effectiveness of treatment. Nonetheless, there are instances where patients' profiles do not yield valuable insights for cancer prognosis. In particular, the clinical characteristics of these rectal cancer patients do not reveal useful information for prognosis. This can be due to several possible reasons, including (1) sample size: a small sample size may not be representative of the broader population of rectal cancer patients, making it challenging to draw statistically significant conclusions about prognosis; (2) heterogeneity: rectal cancer is a heterogeneous disease, and patients can present with various clinical and pathological features; and (3) variability in disease progression: the course of rectal cancer can vary widely from one patient to another, and clinical and pathological characteristics alone may not encompass all the relevant factors influencing prognosis.
Overfitting is a common problem in machine learning, particularly in the context of supervised learning, where a model is trained to predict a target variable based on input data. Overfitting occurs when a machine-learning model learns the training data too well, to the extent that it starts to memorize the noise and random fluctuations in the training data rather than learning the underlying patterns. As a result, an overfit model performs very well on the training data but poorly on unseen or new data, that is, data it has not encountered during training.
Although the 10-fold cross-validation is a commonly used and effective method to mitigate overfitting and assess the performance of a machine-learning model, it is worth noting that there are variations of cross-validation, such as stratified k-fold cross-validation, which ensures that each fold preserves the class distribution of the original dataset.Stratified k-fold is particularly useful when dealing with imbalanced datasets.In this study, as mentioned earlier, to address the issue of data imbalance in machine learning and ensure an equitable representation of protein expression discoveries, the quantities of IHC images for RhoB and DNp73 expressions, for both survival time categories, were adjusted to achieve equal distributions.This adjustment involves reducing the number of images in the majority class to match that of the minority class.In spite of this strategy for achieving data balance, to confirm the robustness of the proposed approach, specific patient-independent data were used as test sets to predict survival times, utilizing samples of RhoB-expressed metastatic tissue with and without RT, as well as DNp73expressed biopsy tissues with and without RT.The use of patient-independent data achieved the same accuracies (100%) as those reported in Tables 2 and 3. As another issue, ensemble methods in machine learning combine predictions from multiple data types or models to improve overall performance and robustness.While it is possible to use different tissue types (tumor, biopsy, metastasis, and adjacent normal tissue) together in an ensemble method to predict survival time, there are several challenges and considerations that may make it more common to analyze them separately.Some reasons for this separation to be preferred include: (1) biological differences (different tissue types may have distinct biological characteristics, gene expressions, and molecular profiles.Combining them in a single model might not capture these variations effectively), (2) data availability (tissue samples can vary widely in terms of data availability.For example, in this study, samples of tumor tissue may have more than biopsy, metastasis, and adjacent normal tissue.Combining data from various tissue types might not be compatible or introduce bias into the model), (3) heterogeneity (tumors are known for their intratumor heterogeneity, which means that different parts of the tumor may exhibit different genetic and molecular characteristics.Combining tumor, biopsy, metastasis, and normal tissue data can further increase this heterogeneity and complicate the feature extraction process), and (4) interpretability (combining different tissue types in a single model can make it challenging to interpret the contributions of each tissue type to the prediction.Understanding which tissue type is more informative for survival prediction can be essential for medical research). 
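As a concrete illustration of the class-balancing and stratified cross-validation strategy discussed earlier in this section, the following Python sketch undersamples the majority class to the size of the minority class and then builds stratified folds. The array shapes and random seed are arbitrary placeholders, not values from the study.

```python
# Sketch of majority-class undersampling followed by stratified k-fold splitting.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def balance_by_undersampling(X, y, seed=0):
    """Reduce every class to the size of the smallest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

X = np.random.rand(120, 64)              # placeholder feature vectors
y = np.array([0] * 80 + [1] * 40)        # imbalanced survival labels (placeholder)
Xb, yb = balance_by_undersampling(X, y)

# Stratified folds preserve the (now equal) class proportions in every fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(Xb, yb):
    pass  # train and evaluate the WS-SVM predictor on each fold here
```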
In a previous study 19, various combinations of pretrained CNNs and SVM were explored for prognostic purposes in rectal cancer patients who had not received RT. These combinations included ResNet18-SVM, SqueezeNet-SVM, DenseNet201-SVM, AlexNet-SVM, Xception-SVM, and NASNetLarge-SVM, all of which utilized the same IHC data of RhoB-expressed biopsy tissues. Among these combined models, NASNetLarge-SVM exhibited the most promising results, achieving an accuracy of 85% in a 10-fold cross-validation setting. The combined models incorporating pretrained CNNs, WS features, and SVM reported in the present study surpassed this accuracy level.
Another previous study 41 focused on predicting the survival time of rectal-cancer patients who had received RT. In that study, NASNetLarge-SVM, DenseNet201-SVM, and ResNet101-SVM were applied using the same IHC data of normal tissues adjacent to tumors. Among these models, DenseNet201-SVM achieved the highest 10-fold cross-validation accuracy of 75%. The combination of pretrained CNNs, WS features, and SVM again yielded superior results compared with these models.
Furthermore, the study reported in 41 introduced an alternative approach involving pretrained CNNs, fuzzy recurrence plots (FRPs) 65, and long short-term memory (LSTM) networks 66. This combination demonstrated improved performance over the pretrained CNN-SVM models. Nevertheless, even the best-performing pretrained CNN-FRP-LSTM model, with ResNet101, achieved an accuracy of 88%, which is still lower than that of the CNN-WS-SVM models presented in our study.
In comparison with the studies reported in 18,41, the AI-based approach developed in the current work was not only tested with many types of cancer tissue (biopsy, tumor, metastasis, adjacent normal), with RT and non-RT, but was also able to achieve consistently much higher prediction rates in all cases. Comparing the pretrained CNN and SVM models for classification, training the SVM model on the double-convolution-based extracted features provides a much faster computation than the pretrained CNNs. Such a saving in computing time makes the proposed approach economical and feasible for learning on big data, as expected in our future acquisitions or other applications.
CONCLUSION
An approach for extracting strongly differentiable protein-expressed features from IHC images in rectal cancer by applying two different convolution-based methods has been presented, validated, and discussed. The proposed approach not only achieves very high prediction rates but also a relatively fast computing time. Furthermore, the proposed algorithm is effective for learning on small datasets, offering advantages over more data-intensive methods where limited data exist, such as predicting treatment outcome or disease risk for a population whose electronic health records are not available 67. The results reported in this study are very encouraging for adopting and further exploring the combined AI algorithms of pretrained deep networks, multilevel analysis of wavelet scattering, and SVM-based classification for discovering and validating rectal or other cancer biomarkers using IHC data.
Figure 4. DNp73-expressed tumor samples of survival time of less (<5-y) or more (>5-y) than 5 years: falsely predicted by ShuffleNet-WS-SVM (A-C); falsely predicted by ResNet101-WS-SVM (D); falsely predicted by AlexNet-WS-SVM (E); decided as tie votes by Xception-WS-SVM (F) and NASNetLarge-WS-SVM (G). The survival times of these samples also could not be determined by the two pathologists.
Table 4. Top three accuracies (%) obtained from 10-fold cross-validations of 5-year survival prediction using RhoB and DNp73 expressions in rectal cancer.
8,961.4
2023-11-28T00:00:00.000
[ "Medicine", "Computer Science" ]
An Investigation of Negative DC Partial Discharge Decomposition of SF6 Under Different Metal Materials
To determine the influence of metal materials on the negative DC partial discharge (PD) decomposition of SF6, and to support a method for SF6 DC gas-insulated equipment condition monitoring and fault diagnosis based on decomposed component analysis, this study investigates the decomposition characteristics of SF6 under three metal materials commonly used in gas-insulated equipment (Al, 304 stainless steel, and Cu), using SO2, SOF2, and SO2F2 as characteristic component gases under negative DC PD. The results show that the type of metal material has a substantial effect on the negative DC PD decomposition characteristics of SF6. The first-principles simulation results confirm the phenomena observed in the experiment: the amounts and generation rates of SOF2 were the largest, followed by SO2F2 and SO2. A calculation based on the free electron cloud and finite-depth particle-in-a-box model shows that the electron current density of field emission is related to the metal work function γ and the electric field E by j(0, E) ≈ (BE^2/γ) exp(−Dγ^{3/2}/E), where B and D are constants. With a constant electric field, the electron current density produced by field emission is inversely proportional to the work function of the metal. A metal material with a smaller work function therefore gives a greater electron current density, further promoting the decomposition of SF6 and generating more characteristic component gases. These results explain well the phenomena observed in the experiment, that is, the decomposition of SF6 is the most serious under the Al electrode with the lowest work function, followed by 304 stainless steel and Cu.
Previous studies have shown that metal materials directly influence the PD decomposition characteristics of SF6 [13], [14]. Consequently, it is a priority to study the DC PD decomposition characteristics of SF6 under different metal materials and to obtain the influence law of metal materials on these characteristics, in order to eliminate the adverse effects of metal material differences on insulation condition monitoring and fault diagnosis methods and technologies based on DCA. Many scholars have studied the influence of metal materials on the decomposition characteristics of SF6. Belmadan et al. [15], Vijh [16], and Hirooka et al. [17] studied the influence of metal materials on SF6 decomposition under arcing and found that the generation rate of metal fluorides is proportional to the electrode potential of the metal, with the metal playing a catalytic role. Tang et al. [19] studied the influence of metal materials on SF6 decomposition under corona discharge and reported that the type of metal material affects the decomposition characteristics of sulfur-containing decomposition gases, such as S2F10, SOF2, and SO2F2; however, almost no correlation was observed with carbon-containing characteristic decomposition gases, such as CF4. Tang et al.
[19], [20] studied the influence of metal materials on SF6 under positive DC PD; the differences in surface passivation films and metal halogenation resistance among metal materials resulted in different SF6 decomposition characteristics. Few studies on the effect of metal materials on the SF6 decomposition characteristics under negative DC PD are currently available, even though most unipolar HVDC transmission systems use negative DC. Therefore, this study investigates the influence of three common metal materials (Al, 304 stainless steel, and Cu) used inside gas-insulated equipment on SF6 negative DC PD decomposition under a metal protrusion defect. Moreover, this study provides an experimental and theoretical foundation for a method of SF6 gas-insulated equipment condition monitoring and fault diagnosis based on DCA.
II. PD DECOMPOSITION PROCESS OF SF6
According to the plasma chemical model proposed by Van Brunt and Herron [21], [22], SF6 decomposes into low-fluorinated sulfides, such as SF5 and SF4, and F atoms in the nearby glow area because of high-energy electrons, as indicated by Reaction (1):
e + SF6 → SF(6−x) + xF + e, x = 1, ..., 5 (1)
However, the vast majority of the low-fluorinated sulfides and F species recombine to form SF6 quickly. Only low levels of the primary decomposition products react with the generated metal vapor (represented by M in Reactions (2)-(7)), trace amounts of H2O, and O2, as shown in Reactions (2)-(14). In addition, because organic solid insulation materials, such as epoxy, are present in gas-insulated equipment, carbon-containing gases are generated in the PD process, as shown in Reactions (15) and (16). Reactions (2)-(14) show that, under the action of PD, metal materials play a catalytic role in the generation of the primary decomposition products by SF6 decomposition; on the other hand, the primary decomposition products react with metal vapor to generate metal fluorides. Differences in metallic materials therefore lead to varying amounts and rates of the primary decomposition products of SF6, and such variation further affects the generation of SF6 decomposition components.
A. EXPERIMENTAL PLATFORM
The setup for SF6 decomposition under negative DC PD used in this study is shown in Figure 1. A needle-plate electrode model is used to simulate typical metal protrusion insulation defects in gas-insulated equipment; a diagram is shown in Figure 2. The spacing between the needle and plate was 2 mm, the curvature radius of the needle was approximately 0.25 mm, and the cone angle of the needle was approximately 30°. The ground electrode was 100 mm in diameter and 10 mm in thickness. The needle and plate electrodes were made of Cu, Al, and 304 stainless steel, which are common in DC gas-insulated equipment. Because the pressure in laboratory equipment generally ranges from 0.10 to 0.40 MPa [23], and to ensure the comparability of experimental data, the gas chamber pressure (absolute pressure), ambient temperature, and relative humidity were kept constant (0.35 MPa, 25 °C, and 50%, respectively). The water vapor content was less than 10 µL/L, and the oxygen content was less than 100 µL/L. The SF6 decomposition products were measured with a Shimadzu GC-2010 Pro gas chromatograph (GC) (99.999% purity helium was used as the carrier gas, with a flow rate of 2 mL/min, a column temperature of 40 °C, an injection volume of 1 mL, and a diversion ratio of 10:1).
B. EXPERIMENTAL PROCEDURES
(1) The electrode and the gas chamber wall were wiped with absolute alcohol to remove residual impurities.
(2) After electrode installation, the circuit was connected as shown in Figure 1. The gas chamber was pumped to vacuum using a vacuum pump, filled with fresh SF6 (99.999% purity) to atmospheric pressure, and then pumped to vacuum again. This process was repeated twice to fully flush gaseous impurities from the gas chamber. Afterward, the gas chamber was evacuated and set aside for 12 h to check its gas tightness.
(3) The gas chamber was filled with SF6 at an absolute pressure of 0.35 MPa. The amounts of H2O and O2 in the gas chamber were measured using a GE600 precision dew point meter and an HY-YF oxygen analyzer, respectively, to ensure that they were in line with IEC 60480-2004 [24]. The chamber was then set aside for 12 h.
(4) A voltage U = 25.00 kV was applied to the electrode for 96 h. About 20 mL of sample gas was drawn every 12 h, and the volume fraction of each decomposition component in the gas sample was measured.
(5) The power was turned off, the metal protrusion defect was replaced with one made of another metal material, and the above steps were repeated.
IV. INFLUENCE OF METAL MATERIALS ON THE PD DECOMPOSITION OF SF6
Numerous studies have indicated that SF6 decomposes under PD through a series of chemical reactions in which the low-fluorinated sulfur species react with metal vapor, trace H2O, and O2. The reaction products include SO2F2, SOF2, CF4, SO2, SOF4, HF, CS2, and SF4 [25]-[27]. However, HF is a corrosive acid that can react with metal and organic solid insulating materials. SOF4 is labile toward hydrolysis [28] and easily influenced by trace H2O in the equipment. The chemical properties of H2S are stable, but minimal H2S is produced under PD. On the other hand, the three decomposition components SO2, SOF2, and SO2F2 are relatively inert, and the proportions of their contents can determine the discharge intensity [14]. Hence, this study focuses on the role of these three decomposition products of SF6, namely SO2F2, SOF2, and SO2.
A. GENERATION LAW OF THE THREE DECOMPOSITION COMPONENTS UNDER DIFFERENT METAL MATERIALS
The variation of the amount of SOF2 with time under different metal materials is shown in Figure 3. Specifically, the amount of SOF2 under the Al protrusion model was the largest, followed by 304 stainless steel, and the amount of SOF2 under the Cu protrusion model was the smallest. In addition, the formation rate of SOF2 decreased with time under the Al condition.
The variation of the amount of SO2F2 with time under different metal materials is shown in Figure 4. Overall, the amounts and generation rates of SO2F2 under the three kinds of metal protrusions were less than those of SOF2 under the action of negative DC PD. Similar to SOF2, the amount of SO2F2 under the Al protrusion was the most abundant, followed by the 304 stainless steel protrusion, and the amount of SO2F2 under the copper protrusion was the smallest.
The variation of the amount of SO2 with time under different metal materials is shown in Figure 5. Similarly, the amount of SO2 under the Al protrusion was the most abundant, followed by 304 stainless steel and Cu.
This phenomenon can be attributed to the following: the reaction rates of the main generation reactions of SOF2 were much higher than those of SO2F2 and SO2. The main generation reactions of SOF2 were Reactions (10) and (11), and the main generation reactions of SO2F2 were Reactions (12) and (13).
SF 6 only needs to break two S-F bonds to generate SF 4 , the reactant in Reaction (10), but SF 6 needs to break four S-F bonds to generate SF 2 , the reactant in Reaction (12). Therefore, SF 4 is more easily produced. In addition, the reaction rate of Reaction (10) and (11) was approximately two orders of magnitude of that of Reaction (13) [12]. Consequently, the amounts and generation rates of SOF 2 were greater than those of SO 2 F 2 . SO 2 was mainly hydrolyzed by SOF 2 , which was constrained by the amounts of SOF 2 , as shown in Reaction n (14). The chemical properties of SOF 2 were relatively inactive, and its hydrolysis reaction rate was only1.2±10 −23 cm 3 s −1 [28]; the reaction rates of Reaction (10) - (13) were an order of magnitude greater than that of Reaction (14) [22]. Therefore, the amounts and generation rates of SO 2 were the smallest among the three kinds of characteristic decomposition components. B. GENERATION LAW OF THE TOTAL AMOUNTS OF CHARACTERISTIC DECOMPOSITION COMPONENTS The S atoms in SOF 2 , SO 2 F 2 , and SO 2 may only come from SF 6 . Therefore, the total amounts of the three kinds of characteristic components could characterize the discharge intensity and the insulation deterioration of SF 6 . The total amounts of SOF 2 , SO 2 F 2 , and SO 2 varies with time under different metal materials is shown in Figure 6. The total amounts of the three characteristic components followed the order Al > 304 stainless steel > Cu. Therefore, the deterioration of SF 6 by Al was the most serious under PD, followed by 304 stainless steel and Cu. C. SIMULATION ANALYSIS OF CHARACTERISTIC DECOMPOSITION COMPONENT GENERATION REACTIONS The main generation reactions of SOF 2 , SO 2 F 2 , and SO 2 were simulated by Materials Studios 7.0 software based on density functional theory (DFT) to analyze the generation mechanism of SF 6 PD characteristic decomposition components. Structural optimization and energy analysis were performed using the Dmol 3 module. DFT is a theory that describes the ground state energy of the system as a function of electron density to solve the problem of multiparticle system in the first principle calculation. The basic premise is that the ground state property of the system is only determined by the distribution of electron density [29]. The material microstructure can be obtained by solving the Schrodinger equation via DFT, and the data of the material microstructure can be obtained without empirical calculation, which can provide the basis for the judgment of its physical and chemical properties. During simulation, all structures were optimized in Geometry Optimization task of Dmol 3 by using the generalized gradient approximation (GGA) of the Perdew-burke-ErnZerhof (PBE) functional was utilized to deal with the electron exchange and correlation [30]. GGA is an empirical description function, which can well describe the thermodynamic properties of molecular system. The double numerical plus polarization (DNP) was as the atomic orbital basis set, that is, all non-hydrogen atoms are polarized by d orbital function, while all hydrogen atoms are polarized by p orbital function during simulation, which makes the results of simulation more accurate. Otherwise, select electron spin was unrestricted. The Brillouin zone was sampled by 1×1×1 k-points via the Monkhorst-Pack method; precision choose ''fine''. 
In addition, to ensure the high accuracy of convergence calculation in all structural optimization processes, the gradient, convergence threshold on energy, and displacement were 4×10 −3 Ha/Å, 2×10 −5 Ha, and 5×10 −3 Å, respectively [31]. On the basis, the length and bond angle of each optimized molecule and reaction heat were determined. Literature [22] showed that the main formation reactions of SOF 2 were Reaction (10) and (11), those of SO 2 F 2 were Reaction (12) and (13), and that of SO 2 was Reaction (14). The geometrically optimized stable molecular structure model, including the bond length and bond angle for the reactants and products involved in the above five reactions, is shown in Figure 7. The unit of bond length is Å. The simulation results of this study are similar to those reported in literature [32]- [34]. On this basis, the reaction heat of the main reaction of three decomposition component gases were calculated as shown in Table 1. If the reaction heat is negative, then the reaction is an exothermic reaction; otherwise, it is an endothermic reaction. In addition, if the reaction is spontaneous, the smaller the reaction heat is, the more easily the corresponding reaction occurs [33]. Five reactions in Table 1 can be spontaneously achieved [35] because the reaction heat of SOF 2 and SO 2 F 2 are far less than that of SO 2 . Therefore, SOF 2 and SO 2 F 2 are easier to generate compared with SO 2 ; this finding is consistent with the experimental results in this study. Compared with SOF 2 and SO 2 F 2 , the reaction heat of generation reaction 1 of SOF 2 was much less than that of generation reactions 3 and 4 of SO 2 F 2 . From this reaction pathway, SOF 2 was easier to generate, but the reaction heat of generation reaction 2 of SOF 2 was greater than that of generation reactions 3 and 4 of SO 2 F 2 , it seems that SOF 2 is more difficult to generate from this reaction pathway, however, The reaction heat of SF 6 generated SF 4 , SF 2 and SOF 4 were endothermic, and that of SF 6 to SF 4 is much less than that of SF 2 and SOF 4 . For example, the reaction heat of SF 6 to SF 4 is 128.71 kcal/mol, while that of SF 6 to SF 2 is 364.2 kcal/mol [36]. Consequently, in terms of the total reaction process of SF 6 generating to SOF 2 or SO 2 F 2 , the reaction heat of SOF 2 is lower and easier to generate than that of SO 2 F 2 , which is also consistent with the experimental results in this study. V. COMPUTATIONAL STUDY AND THEORETICAL ANALYSIS OF THE EFFECT OF METAL MATERIALS ON THE NEGATIVE DC PD DECOMPOSITION OF SF 6 If the external environment provides a certain amount of energy to the electrons inside the metal, then the electrons may break away from the metal bond and enter the gas phase, namely, the field emission. In theory, more electrons and kinetic energy facilitate the decomposition of more SF 6 and further generate more stable characteristic decomposition components. Based on the free electron cloud and finite depth particle in a box model [37], the field emission of different metal materials was described in this study. Assuming the work function of electrons γ = W ∞ −W FM , W ∞ is the potential energy of electrons at an infinite distance outside the particle on a box, W FM is the Fermi energy of electrons, and W 0 is the potential energy of electrons inside the metal. 
When the temperature is T , electrons can leave the metal surface and enter the gas phase only when the velocity of electrons v x along the vertical direction of the metal surface (set as the x-axis direction) is greater than the threshold velocity v 0 . At this point, the electron energy is zero. Thus, the number of states in the region d − → A = dA x dA y dA z of the unit crystal wave vector space can be obtained. The number of electrons dn among − → v + d − → v can be expressed as In Formula (17), (18), and (19), m is the electronic mass, K is the Boltzmann constant, T is the temperature, h is the Planck constant,h = h/2π , and n(v x , v y , v z ) is the distribution of electron velocity. According to quantum mechanics theory, electrons do not necessarily detach the metal when they reach the metal surface, but need to be multiplied by the transmissivity δ(v x ) of the electron wave penetrating the metal surface barrier to calculate the probability of the electron leaving the metal [38]. Consequence, at temperature T , the value of current density J (T ) per unit time due to electrons detaching from the metal surface barrier can be expressed as where the e is electric charge. Then, the polar coordinate system is used to make ρ 2 = v 2 y +v 2 z , dv y dv z = ρdρdϕ, where the integral limit of ϕ is 0∼2π . At the same time, make: where the integral limit of σ is 0 ∼ ∞; the integral limit of θ is γ ∼ ∞. As a result: (23) Formula (23) is a strict Sommerfeld model [38], which can characterize and quantitatively explain the thermal emission of electrons. However, it remains unable to explain how the electrons detach from the metal surface and enter the gas phase when field emission. According to quantum mechanics, electrons can penetrate the metal surface barrier and leave the metal under the action of applied electric field even if the energy of the electrons inside the metal is below the maximum potential energy. In other words, if the velocity of electron along the x-axis v x > 0, field emission can be generated under the action of applied electric field [38]. In addition, the field emission current density of electrons can also be estimated theoretically by thermal emission current, but the lower limit of integration should be determined by the condition of v x = 0, as a result, (24) In addition, when the applied electric field is E, the potential energy W (x) of the electron outside the metal is defined as Under the action of applied electric field, the electron transmissivity is closely related to energy. Therefore, WKB (G. Wentzel, H. A. Kramers, L. Brillouin, WKB) method is used to estimate approximately the electron transmissivity [39]. (26) where the kinetic energy of the electrons perpendicular to the metal surface is W e = v 2 x /2m. The upper limit of integral x a in Formula (26) is determined by W (x) = W e , so Formula (27) can be obtained by calculating Formula (26). (27) Because the calculation of field emission made here, considering the situation T ≈ 0 K to simplify the model and facilitate calculation. At this point, θ = v 2 x /2m − W FM < 0, namely, there is no electrons would occupy the state in which θ is greater than 0. The integral limit of θ is −∞ ∼ 0. In addition, ln[exp(−θ/kT ) + 1] ≈ −θ/kT when T ≈ 0 and θ < 0. Therefore, Formula (24) can be simplified at T ≈ 0 K as follows: When θ ≈ 0, namely, W FM = W e , the probability of electrons penetrating the metal barrier is the greatest. 
In other words, the states near θ = 0 make the most important contribution to the field-emission electron flow. Consequently, a Taylor expansion of δ(θ) is performed at θ = 0 to obtain Formula (29), and Formula (29) is then substituted into Formula (24), giving j(0, E) ≈ (BE^2/γ) exp(−Dγ^{3/2}/E) (30), where B = e^3/8πh and D = 8m/3eh are constants. Formula (30) gives the electron current density of field emission, which is determined by the applied electric field and the metal work function. When the applied electric field is constant, the electron current density is inversely proportional to the metal work function. The size parameters and applied voltage of the three kinds of metal protrusion defects are the same; thus, the electric fields E generated under the different metal material protrusions are the same. Al has the lowest work function, followed by 304 stainless steel and Cu (the Fe content in 304 stainless steel is much higher than that of the other metal elements; thus, Fe plays a decisive role in the field-emission current density of 304 stainless steel). The work functions of the three metals are shown in Table 2 [37]. Therefore, some electrons in the conduction band of Al can ''pass through'' the barrier and leave the metal surface more easily than those in 304 stainless steel and Cu. Consequently, the electron current density of Al is higher than that of the other two metals: via Formula (30), the electron current densities of Al, 304 stainless steel, and Cu are 5.787E^2 exp(−1.261/E), 5.707E^2 exp(−1.288/E), and 5.59E^2 exp(−1.329/E), respectively. This promotes the decomposition of SF6 under the impact of high-energy electrons by Reaction (1) and generates more characteristic component gases under the Al protrusion.
VI. CONCLUSION
In this study, the influence of three common metal materials on the decomposition of SF6 in gas-insulated equipment under negative DC PD was investigated, which improves the understanding of the SF6 decomposition mechanism and provides a scientific and theoretical basis for the method of SF6 DC gas-insulated equipment condition monitoring and fault diagnosis based on DCA. The following conclusions were drawn:
(1) There is a significant correlation between the type of metal material and the negative DC PD decomposition of SF6. The amounts of SOF2, SO2F2, and SO2 generated by the decomposition of SF6 were the most abundant under Al, followed by 304 stainless steel and Cu. In addition, under the same metal, the amounts and generation rates of SOF2 are the largest, followed by SO2F2 and SO2.
(2) The reaction heats of the five main formation reactions of the decomposition components (SOF2, SO2F2, and SO2) were calculated by DFT simulation. The smaller the reaction heat is, the more easily the corresponding reaction occurs. The generation rates of the decomposition components obtained from the simulation are consistent with the actual measurement results.
(3) Based on the free electron cloud and finite-depth particle-in-a-box model, the electron current density of field emission is related to the metal work function γ and the electric field E by j(0, E) ≈ (BE^2/γ) exp(−Dγ^{3/2}/E). With a constant electric field, the field-emission electron current density is inversely proportional to the metal work function. Consequently, more SF6 decomposition component gases are generated via Reactions (1)-(16) under a metal electrode with a lower work function. These results explain well the phenomena observed in the experiment.
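To make the comparison in conclusion (3) concrete, the short Python sketch below evaluates the three current-density expressions quoted above for Al, 304 stainless steel, and Cu at a few field strengths. The prefactors and exponents are taken directly from the text; the field values are arbitrary illustration points in whatever normalized units the original expressions assume.

```python
# Relative field-emission current densities from the fitted expressions in the text.
import numpy as np

def j_al(E): return 5.787 * E**2 * np.exp(-1.261 / E)   # aluminium
def j_ss(E): return 5.707 * E**2 * np.exp(-1.288 / E)   # 304 stainless steel
def j_cu(E): return 5.590 * E**2 * np.exp(-1.329 / E)   # copper

for E in (0.5, 1.0, 2.0):                 # arbitrary illustrative field values
    print(f"E = {E}: j_Al/j_Cu = {j_al(E)/j_cu(E):.3f}, "
          f"j_SS/j_Cu = {j_ss(E)/j_cu(E):.3f}")
# For any fixed E > 0, j_Al > j_SS > j_Cu: the lower the work function,
# the larger the field-emission current density.
```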
The influence of metal materials on the decomposition of SF6 needs to be fully considered when putting forward methods for SF6 DC gas-insulated equipment condition monitoring and fault diagnosis based on DCA.
ZHENGQIN CAO received the Ph.D. degree from the State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, China. He is currently involved in high-voltage electrical equipment insulation online monitoring and fault diagnosis and in gas sensors.
YICHUN BAI is currently pursuing the master's degree with the Chongqing University of Science and Technology. She mainly engages in text analysis applied to power systems.
YONGJIAN ZHOU is currently involved in high-voltage electrical equipment insulation online monitoring and fault diagnosis.
JIA WANG was born in Chongqing, China. She mainly focuses on data analysis, psychology, and education.
6,015.8
2020-02-17T00:00:00.000
[ "Physics" ]
Preparation, Characterization and Photostability of Nanocomposite Films Based on Poly(acrylic acid) and Montmorillonite
Introduction
In recent years, polymer layered silicate nanocomposites have attracted attention in many fields, e.g., industrial exploitation and fundamental research 1,2. These materials exhibit excellent mechanical and thermal properties, decreased gas/vapor permeability, and reduced flammability in the presence of a small amount of the silicate, as compared with the polymer itself 2-4.
Clay minerals are materials of natural origin, with particles smaller than 2 µm. Among clays, montmorillonite (Mt) is one of the clay minerals most commonly used and most extensively studied in clay mineral polymer nanocomposites 5,6. It belongs to the general family of 2:1 layered silicates, made up of two silica tetrahedral sheets fused to an edge-shared octahedral sheet of magnesia or alumina. The stacking of these layers occurs in a parallel way, leading to a regular van der Waals gap between the layers called the interlayer region 7,8.
Complementary to the aforementioned properties, Mt is also a widely available and inexpensive material. It presents a large swelling behavior due to its expanded surface area, leading to potentially strong interactions between the polymer matrix and the clay mineral. The combination of all these characteristics results in intercalated and exfoliated structures of the clay mineral polymer nanocomposites 9,10.
Poly(acrylic acid) (PAA) is a water-soluble polymer belonging to a class of commercial polymers produced on a large scale; it has been used in applications such as Cu(II) removal from aqueous solutions 11,12, drug delivery 13, cleaning agents 14, and medicine 15,16.
The dispersion of the Mt in the polymeric matrix depends on the process used in the preparation of the nanocomposites, on the clay and polymer type, and on the interaction between both (clay mineral and polymer) 17.
The photodegradation of polymer materials is a concern in their processing and usage, even in polymer layered silicate nanocomposites, and as a consequence it has been a subject of studies 18,19. Thus, it is necessary to gain a better understanding of degradation and stabilization to ensure a long life for the product, since this process is poorly understood for this nanocomposite.
Herein, PAA nanocomposites with clay mineral were prepared by intercalation in solution and characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM) and thermogravimetric analysis (TGA). Furthermore, the photostability of the clay mineral polymer nanocomposite films was studied, because it is known that loading of the polymer with a layered filler changes the interactions within the polymeric matrix, which can affect the nanocomposite resistance to UV radiation 8.
Materials
Poly(acrylic acid) was purchased from Sigma Aldrich. The SWy-1 montmorillonite was kindly supplied by the Source Clays Repository of the Clay Minerals Society, University of Missouri, Columbia, MO. The Mt was purified as described earlier 20.
Nanocomposite films preparation
A poly(acrylic acid) aqueous solution of 1.0% (w/v) was prepared by dissolving 1.0 g of PAA powder in 100 mL of distilled water. SWy-1 Mt (SWy-1) dispersions were prepared by dispersing appropriate amounts of Mt into distilled water, and both were vigorously stirred for 24 h. The mixtures were stirred continuously and then cast onto glass Petri dishes. The films were dried in an oven at 37 °C for 24 h and peeled from the glass Petri dishes. For comparison, pure PAA films were prepared in the same way.
Characterization
X-ray diffraction (XRD) was used to determine the structure of the nanocomposite films using a Rigaku Rotaflex RU-200B diffractometer (Cu radiation, λ = 0.154 nm) at 50 kV and 100 mA. The interlayer space of the samples was calculated using Bragg's equation 21.
The surface morphology of the nanocomposite films was examined by scanning electron microscopy (SEM) using a ZEISS LEO 440 instrument at 20 keV with an OXFORD 20 kV detector at 2.7 x 10-6 torr.
The TGA curves for PAA and nanocomposite films were obtained in an SDT-Q 600 TG/DTA simultaneous module (TA Instruments, Shimadzu). The curves were obtained under a dynamic air atmosphere, heating from 25 °C to 1000 °C at a rate of 10 °C/min. The thermal measurements were carried out under an air atmosphere with a flow rate of 60 mL min-1.
Photodegradation of nanocomposite films by UV Irradiation Nanocomposite films were exposed at 40 ºC in an irradiation chamber containing 16 UV germicidal lamps, emitting light predominantly at 254 nm.The SPR-01 Spectroradiometer (Luzchem) was used to measure the emission of the lamp.The emission spectrum of the lamps is presented in Figure 1, along with the UV-vis spectra of SWy-1, PAA and nanocomposite films.The absorption of SWy-1 at 241 nm is attributed to a charge transfer transition from (Fe +3 ) located in the Mt layers 22 .Photodegradation was monitored by Fourier transform infrared spectroscopy, FTIR spectra were recorded using a Perkin-Elmer Frontier spectrometer using a universal attenuated total reflectance (ATR) sampling accessory.All spectra were recorded in absorption mode at 1 cm -1 intervals and 32 scans. Size exclusion chromatography (SEC) was performed at 35 °C on a Shimadzu LC-20 AD chromatographic system with a Shimadzu RID-10A refractive index detector, using three OHpak KB-806M columns, narrow-distribution PEO standards were used for calibration.Irradiated film samples were dissolved in sodium nitrate solution 0.1 mol L -1 and aliquots (2 mL) were filtered through 0.45 µm membranes before analysis.The same solution was used as the eluent at a flow rate of 1 mL min -1 . Characterization of nanocomposite films The XRD patterns were used to calculate the d001value and to study the structure of the nanocomposite with different amounts of SWy-1. Figure 2 shows the results of XRD for PAA, SWy-1 and nanocomposite films.The polymer incorporation in the SWy-1 dispersion shifted the 001 reflection peak from 7.5° to 4.3 -4.6°, suggesting the occurrence of intercalation (PAA molecules may enter between silicate layers) for all concentrations of SWy-1.And this main peak in the XRD pattern of SWy-1 (2θ ~ 7.5 °) corresponds to an interlayer distance of 11.8 Å and it is attributed to the formation of interlayer spaces by regular stacking of the silicate layers along the [001] direction 12,23,24 . For the nanocomposite films containing 5, 10 and 20 wt % in SWy-1 mass, there is a lower and wider 001 reflection peak than pure SWy-1, indicating the occurrence of intercalation along with some clay exfoliation 18,25 On the other hand, the 001 reflection peak for PAA/30%SWy-1 has been shifted toward lower angles, which is an indication of the lower regularity and large basal spacing as a result of the intercalation of PAA chains into the clay galleries 26 . 
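The interlayer spacings quoted above follow directly from Bragg's equation mentioned in the characterization section. As a quick check, the sketch below computes the basal spacing d001 from a 2θ position; with λ = 0.154 nm, the SWy-1 reflection at 2θ ≈ 7.5° gives roughly 1.18 nm (11.8 Å), matching the value reported in the text.

```python
# Basal spacing from Bragg's law: n * lambda = 2 * d * sin(theta)
import numpy as np

def d_spacing(two_theta_deg, wavelength_nm=0.154, n=1):
    theta = np.radians(two_theta_deg / 2.0)
    return n * wavelength_nm / (2.0 * np.sin(theta))

print(f"SWy-1 (2-theta = 7.5 deg): d001 = {d_spacing(7.5):.2f} nm")   # ~1.18 nm
for two_theta in (4.3, 4.6):   # nanocomposite reflections reported in the text
    print(f"2-theta = {two_theta} deg: d001 = {d_spacing(two_theta):.2f} nm")
```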
Figure 3 presents FTIR-ATR spectra of PAA, SWy-1 and the PAA/5%SWy-1 nanocomposite film. The SWy-1 spectrum shows a characteristic band at 3626 cm-1 due to hydroxyl groups bound to Al3+ cations 27. The stretching and bending vibrations of O-H of water molecules present in the clay are observed at 3422 and 1638 cm-1, respectively. The most intense and narrow band is noticed at 996 cm-1 and is attributed to Si-O stretching vibrations. The three bands below 996 cm-1 originate from bending vibrations of AlAlOH (914 cm-1), AlFeOH (884 cm-1) and AlMgOH (798 cm-1) 27,28. The typical structure of PAA exhibits a broad absorption band at 3056 cm-1 due to the -OH stretching vibration and a characteristic band at 1700 cm-1 attributed to C=O stretching. The band at 1200-1315 cm-1 is related to the C-O stretch. The band at 1395-1450 cm-1 is assigned to the C-O-H deformation vibration. The absorption band at 792 cm-1 is due to out-of-plane OH...O deformation, indicating the existence of strong inter-chain hydrogen bonds 29,30. The FTIR-ATR spectrum of PAA/5%SWy-1 is a combination of the characteristic bands of the spectra of both components; the band at 1700 cm-1 is connected with the stretching vibrations of C=O in the carboxyl group. Absorption maxima at 1450 and 1412 cm-1 are characteristic of asymmetric and symmetric stretching vibrations of C-O bonds in carboxyl groups.
Figure 4 shows SEM images of the films (PAA/10%SWy-1 and PAA/30%SWy-1), and these images suggest that the nanocomposite films are homogeneous, indicating efficient dispersion of the SWy-1 clay in the polymer matrix. Most of the separate, flat clay platelets are dispersed uniformly on the surface of the film. The amount of SWy-1 particles on the surface of PAA/30%SWy-1 is larger than on the PAA/10%SWy-1 nanocomposite film. In addition, the image of the cross section (Figure 4c) shows that the thickness of the films was 65 to 70 µm.
For estimation of thermal stability, all samples were subjected to thermogravimetric analysis in an air atmosphere. The results obtained for PAA and the nanocomposite films are presented in Figure 5. The PAA thermal degradation process is divided into several stages 28,29. The first stage (70-173 °C) is the removal of physically absorbed water. In the second stage (173-310 °C), the carboxyl side groups undergo decomposition. The third stage (310-410 °C) corresponds to oxidation of the carbon backbone chains (depolymerization). With a further increase in temperature, only H2O and CO2 are released, indicating complete oxidation. The thermal degradation temperatures in the third stage for the nanocomposite films were higher (362-370 °C) than for pure PAA (361 °C), which means that the presence of SWy-1 clay shifted the depolymerization process (third step) toward a higher temperature, indicating that the nanocomposites are more thermally stable than PAA itself.
Photodegradation of nanocomposite films
PAA and nanocomposite films were irradiated with UV light for up to 192 h at 40 °C. UV irradiation of PAA/SWy-1 films causes significant changes in the FTIR spectra, which are presented in Figure 6. For the qualitative evaluation of nanocomposite photodegradation, the carbonyl band (1600-1800 cm-1) was used. This band broadens during irradiation, but the absorbance maximum (1698 cm-1) decreases. This is evidence of the occurrence of two opposing reactions: abstraction (or destruction) of carboxylic groups and macrochain oxidation leading to the formation of a new type of carbonyl group in the film 26,30. The general mechanism leading to the destruction of this group during the photodegradation of PAA is shown schematically in Figure 7 31. The photooxidative degradation of pure PAA and nanocomposite films was followed using SEC.
According to Kaczmarek et al., the radicals formed during UV irradiation of PAA may interact with the polysaccharide chain, resulting in its rupture and COOH abstraction, and may interact with other radicals to form crosslinks between chains 30. The molecular weight (Mw) of the nanocomposite films decreased after 12 h of irradiation (Figure 8). The SEC results suggested that PAA and the nanocomposite films degrade by random chain scissions 32. Marimuthu and Madras (2007) proposed a model for polymer degradation from which it is possible to determine the degradation rate constant kd 33. This constant is obtained using the variation of the number-average molecular weight (Mn) with time (Equation 1).
The behaviour of Mn for all samples as a function of irradiation time is presented in Figure 9; from the initial slopes of the curves (Figure 9b) the degradation rate constants, kd, were determined, and the values are shown in Table 1. The degradation rate constant for pure PAA was up to 22 times higher than that of the nanocomposite with 30 wt % SWy-1 content; therefore, increasing the SWy-1 concentration retarded the degradation of PAA. Owing to this reduction of the degradation rate, SWy-1 might be considered a stabilizer against UV irradiation. This stabilization can be explained by the ability of SWy-1 to disperse the incident light, in addition to absorbing part of the UV light instead of PAA, hence minimizing the degradation rate 18,34. Some studies in the literature describe the stabilization of polymer degradation promoted by clay minerals; this behavior was observed for the nanocomposite degradation of PEO/clay, PVC/laponite and chitosan/montmorillonite 8,18,35. The SEC results indicated that the amount of Mt in the nanocomposite influenced the material's photostability.
Conclusions
PAA/clay nanocomposites present exfoliated and/or intercalated structures depending on the amount of clay used. The nanocomposite films presented a homogeneous surface, as shown by the SEM images. Moreover, the nanocomposite films demonstrated enhanced thermal stability compared with the pure polymer.
The presence of clay retards the fast photo-oxidation of the nanocomposites and decreases the main chain scission process. The degradation rate constant for pure PAA was up to 22 times larger than that of the nanocomposite with the highest amount of SWy-1. Thus, the presence of SWy-1 clay contributes to the photostabilization of the material. SWy-1 has the ability to disperse the incident light as well as to absorb part of the UV light instead of PAA. Such evidence bolsters the effect of SWy-1 in the stabilization against UV irradiation.
Figure 1. Emission spectra of the irradiating lamp and absorption spectra of SWy-1, PAA and nanocomposite films.
Figure 4. SEM images of (a) PAA/10%SWy-1 and (b) PAA/30%SWy-1; (c) transversal section of the PAA/30%SWy-1 nanocomposite film.
Figure 5. TGA and DTG curves for PAA and nanocomposite films.
Figure 6. FTIR-ATR spectra of the PAA/30%SWy-1 nanocomposite film as a function of irradiation time.
Figure 7. Schematic showing the photodegradation of pure PAA 31.
Figure 8. Decrease in molecular weight (Mw) during photodegradation of PAA and nanocomposite films.
Figure 9. (a) Variation of [Mn(0)/Mn(t)]-1 as a function of irradiation time for PAA and nanocomposite films; (b) blow-up of the initial times.
Table 1. Initial number-average molecular weights and photodegradation rates of PAA and nanocomposite films.
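As described for Figure 9, the degradation rate constant is taken from the initial slope of [Mn(0)/Mn(t)] - 1 versus irradiation time. The following Python sketch illustrates that fitting procedure on placeholder data; the numbers are invented for illustration, and identifying the slope with kd assumes the random-scission form of Equation 1 proposed by Marimuthu and Madras.

```python
# Estimating the photodegradation rate constant from the initial slope of
# [Mn(0)/Mn(t)] - 1 versus irradiation time (placeholder data, see note above).
import numpy as np

t_hours = np.array([0.0, 12.0, 24.0, 48.0])      # early irradiation times
Mn = np.array([1.00e5, 8.3e4, 7.1e4, 5.6e4])      # invented Mn values (g/mol)

y = Mn[0] / Mn - 1.0                               # [Mn(0)/Mn(t)] - 1
slope, intercept = np.polyfit(t_hours, y, 1)       # initial-slope linear fit
print(f"initial slope = {slope:.4f} per hour (proportional to k_d)")
```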
3,406.2
2018-05-07T00:00:00.000
[ "Materials Science" ]
Integrated balanced homodyne photonic–electronic detector for beyond 20  GHz shot-noise-limited measurements Optical homodyne detection is used in numerous quantum and classical applications that demand high levels of sensitivity. However, performance is typically limited due to the use of bulk optics and discrete receiver electronics. To address these performance issues, in this work we present a co-integrated balanced homodyne detector consisting of a silicon photonics optical front end and a custom integrated transimpedance amplifier designed in a 100 nm GaAs pHEMT technology. The high level of co-design and integration provides enhanced levels of stability, bandwidth, and noise performance. The presented detector shows a linear operation up to 28 dB quantum shot noise clearance and a high degree of common-mode rejection, at the same time achieving a shot-noise-limited bandwidth of more than 20 GHz. The high performance of the developed devices provide enhanced operation to many sensitive quantum applications such as continuous variable quantum key distribution, quantum random number generation, or high-speed quantum tomography. INTRODUCTION High-speed balanced homodyne detectors receivers have become essential building blocks in a multitude of applications dealing with sensitive measurements. In the quantum field, some common applications using balanced detection are: quantum random number generation (QRNG) [1][2][3][4][5], continuous variable quantum key distribution (CV-QKD) [6,7], characterization of quantum states [8][9][10], photonic quantum sensing [11,12], quantum computing [13][14][15][16] and coherent Ising machines [17]. The utility for balanced detection is not only limited to quantum applications, because it is also used in many other classical applications where sensitive optical measurements are critical, such as optical coherence tomography [18], coherent lidar [19], gas sensing [20], or (long-distance) coherent optical communications [21]. The fidelity, stability, and speed of these measurements are determined in large part by the balanced receiver. For instance, in CV-QKD, one of the largest contributors to the excess noise has been shown to originate from the electronic detector [22] or, in QRNG, the generation rate scales naturally with the noise and bandwidth performance of the detector [1][2][3][4][5]. A lot of progress has been made recently to increase the performance by monolithically integrating the optical front end [9,10,23,24]. The result of this progress is not only a reduction of the physical size of the devices, but it also allows for more interferometric stability, reduced insertion loss, and the use of high-bandwidth components that have been present in traditional telecom applications for many years. Silicon photonics has been expressed as a very suitable platform for the integration of quantum photonics because it is able to obtain high integration density, low losses, good passives [25], and high-speed photodetectors (PDs) with bandwidths up to 40 GHz [26]. All of these properties are beneficial to the design of balanced homodyne receivers. Even though these integrated optical front ends make use of high-bandwidth components [10,23], the overall system speed is heavily constrained by the electrical bandwidth imposed by the readout electronics. The readout electronics in balanced homodyne detectors have typically been composed of discrete offthe-shelf packaged operational amplifiers and passives [10,[27][28][29][30]. 
The packaging parasitics in these components impose an inherent limitation in noise performance and bandwidth. Only recently has a commercial integrated telecom amplifier been interfaced with integrated quantum photonics [9]. This resulted in a convincing increase in bandwidth while maintaining similar noise levels to discrete off-the-shelf approaches. Although using commercial telecom transimpedance amplifiers (TIAs) provides a convenient way to fully integrate the balanced receiver, the amplifiers are not designed with highly sensitive analog applications in mind. This typically results in suboptimal noise performance since it does not make sense for these amplifiers to operate at noise levels well below the shot noise limit, because having ultralow noise performance does not yield significant improvements toward the bit error rates in digital communications systems. However, in many applications such as CV-QKD, random number generation, or quantum tomography, it is imperative that the electronic noise is many times lower than the shot noise. By custom design, integrated TIA circuits with lower noise can be explored. In this work, an optical front end is designed on imec's iSiPP50G silicon photonic platform [31]. The TIA, which converts the current produced by the optical front end, is designed in a 100 nm GaAs pseudomorphic high electron mobility transistor (pHEMT) technology. A framework is set out to map the different noise contributors and techniques are explored on how these contributors could be minimized. A. Photonic Integrated Circuit The schematic of the photonic integrated circuit (PIC) is shown in Fig. 1. Light is coupled into the chip via two grating couplers. In balanced homodyne detection, one grating coupler will receive the LO while the other receives the signal to be measured. In this implementation, one input contains a thermo-optical phase shifter, which is not used in this work, but could be used to implement CV-QKD with homodyne detection [22] or to perform phase scanning while measuring quantum states [9]. The two arms are optically mixed using a 2 × 2 multimode interferometer (MMI). Due to manufacturing tolerances, the power splitting ratio of the MMI can deviate slightly from the ideal 50:50 ratio. Likewise, the responsivity of the upper and lower photodiode can also exhibit some variation. These imperfections cause some common mode current to flow to the TIA. The rejection of this current is characterized by the common mode rejection ratio (CMRR). A poor CMRR is problematic for two separate reasons. First, because the TIA is DC coupled to the photodiodes, any DC current flowing into the TIA would cause a shift in the operating point of the transistors. To achieve low-noise performance, the TIA will require a large transimpedance gain, which means that a small amount of DC current can induce a large shift in the operating point. This results in a reduction of the dynamic range of the TIA. Second, a high CMRR cancels the classical noise such as the relative intensity noise (RIN) present in the LO [32]. This is crucial to maintain low-noise operation over the complete frequency range. To improve the CMRR, two Mach-Zehnder modulators (MZMs) are added to the output arms of the MMI. The MZMs are biased in such a way that equal amounts of current are produced by each photodiode. This is monitored via two pins on the TIA that measure the differential DC input current. Alternatively, a Mach-Zehnder interferometer could also be used to improve CMRR [9,23]. 
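To make the link between component imperfections and CMRR concrete, the short sketch below estimates the rejection from an assumed MMI splitting ratio and photodiode responsivity mismatch. The numbers and the CMRR definition used here (sum of the two photocurrents over their residual difference for a common-mode input) are illustrative assumptions, not values or definitions taken from this work.

```python
import math

def cmrr_db(split_ratio, resp_top, resp_bot):
    """Estimate the CMRR (dB) of a balanced receiver for a common-mode optical input.

    split_ratio        -- fraction of the input power routed to the top photodiode
    resp_top, resp_bot -- photodiode responsivities in A/W
    The common-mode photocurrent is taken as the sum of the two branch currents and
    the unrejected residual as their difference (perfect balance -> infinite CMRR).
    """
    i_top = split_ratio * resp_top
    i_bot = (1.0 - split_ratio) * resp_bot
    return 20.0 * math.log10((i_top + i_bot) / abs(i_top - i_bot))

# Hypothetical example: a 50.5/49.5 MMI split and a ~1% responsivity mismatch
print(f"CMRR ≈ {cmrr_db(0.505, 1.10, 1.09):.1f} dB")
# Trimming the on-chip MZMs until the two DC photocurrents match (as monitored via the
# TIA input-current pins) shrinks the residual difference and therefore raises the CMRR.
```

This kind of static estimate only captures the low-frequency balance; the frequency-dependent part of the CMRR is set by mismatches in photodiode capacitance, contact resistance and transit time, as discussed in the characterization section.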
Eventually the light reaches two lateral germanium photodiodes that exhibit very low junction capacitances (<10 fF). The photodiodes have a nominal responsivity of 1.1 A/W at a wavelength of 1550 nm and a 3 dB opto-electrical bandwidth in excess of 10 GHz. The currents produced by the photodiodes are subtracted via a common connection and the pads are connected to the TIA via bond wires.

Fig. 1. Schematic of the photonic integrated circuit. Two grating couplers, a thermo-optical phase shifter, a 2 × 2 MMI, two Mach-Zehnder modulators and two photodiodes are depicted. The anode of the top photodiode and the cathode of the bottom photodiode are common such that the differential current flows to the subsequent TIA.

B. Transimpedance Amplifier

The low-noise transimpedance amplifier is designed to convert the weak differential current produced by the balanced photodetectors to a sufficiently strong voltage, which can be easily processed by an analog-to-digital converter (ADC), without distorting the signal or adding much noise. TIAs in earlier balanced detectors have usually been constructed by assembling discrete off-the-shelf components [27,33], or have used a commercial bare-die TIA intended for telecom applications [9]. The issue with discrete components is that the overall bandwidth of the system is limited by packaging parasitics, which makes high-speed (GHz+) operation at low noise practically unachievable. Commercial telecom TIAs are often designed with simple digital modulation schemes in mind (e.g., on-off keying) and single-photodiode operation. This results in the TIA having poor linearity when either sinking or sourcing current, causing severe distortion for the analog applications targeted in this work. Linear commercial TIAs do exist, but they are usually intended for high baud rate, long-reach coherent applications and would be too noisy for highly sensitive applications. In this work, a TIA with a bandwidth in excess of 1 GHz, linear operation, and ultralow noise performance is targeted.

A common figure-of-merit for balanced receivers is the quantum shot noise to classical noise ratio, measured with a vacuum input applied to the optical input port [9,10,22,34], commonly referred to as clearance. A large amount of clearance enables longer-reach communication in CV-QKD and guarantees low overhead in QRNG, increasing the maximal random number generation rate. The clearance can be written in terms of the shot noise density and the classical noise density. The shot noise density I²_n,shot is equal to 2q I_PD,bot + 2q I_PD,top, with I_PD,bot the average current flowing through the bottom photodetector and I_PD,top the average current flowing through the top photodetector. If the optical power is properly balanced, the average currents flowing through the top and bottom photodetectors are identical. The classical noise (I²_n,clas) is mainly introduced by the TIA. To realize high levels of clearance it is essential that the TIA has a low input-referred current noise density (IRND, I²_n,TIA), which depends greatly on the topology of the TIA and the transistor technology. In this work, field-effect transistor (FET) type shunt-feedback TIAs are considered (Fig. 2), and the input-referred current noise density is approximated following [35,36], where C_in = 2C_in^PD + C_in^TIA is the total input capacitance of the TIA, g_m is the transconductance of the input FET (Q1), I_G is the gate current, Γ is Ogawa's excess noise factor [37,38], k is the Boltzmann constant, and T is the absolute temperature.
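The two expressions referenced above are not reproduced in the text, so the following is a hedged reconstruction using standard forms; the symbols follow the definitions given in the paragraph above, but the paper's exact equations may differ. The clearance is written here as total noise over electronic noise, a convention consistent with the later statement that shot noise twice the electronic noise corresponds to about 4.8 dB.

```latex
\[
  \mathrm{Clearance} \;=\; \frac{I^{2}_{n,\mathrm{shot}} + I^{2}_{n,\mathrm{clas}}}{I^{2}_{n,\mathrm{clas}}},
  \qquad
  I^{2}_{n,\mathrm{shot}} \;=\; 2q\,\bar I_{\mathrm{PD,bot}} + 2q\,\bar I_{\mathrm{PD,top}} .
\]
% Input-referred current noise density of a FET shunt-feedback TIA (standard approximation):
\[
  I^{2}_{n,\mathrm{TIA}}(f) \;\approx\; \frac{4kT}{R_F} \;+\; 2q\,I_G
  \;+\; \frac{4kT\,\Gamma}{g_m}\,\bigl(2\pi f\,C_{\mathrm{in}}\bigr)^{2},
  \qquad
  C_{\mathrm{in}} = 2C^{\mathrm{PD}}_{\mathrm{in}} + C^{\mathrm{TIA}}_{\mathrm{in}} .
\]
```

Equating the resistor term with the frequency-dependent drain-noise term locates the corner frequency f_c discussed next: below it the feedback resistor sets the noise floor, above it the input FET dominates.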
The noise contributed by the feedback resistor R F has a white spectrum and is the dominant source of noise at low frequencies. At the corner frequency f c ≈ g m /( R F (2πC in ) 2 ), the transistor drain noise becomes the dominant noise source (Fig. 3). The 1/ f noise is not taken into account because this should have a minimal effect, considering the frequency range of interest. Noise sources such as thermal noise generated by the substrate of the photodetectors or other secondary noise sources are not considered in this noise model. The clearance will be largest at low frequencies, and will drop rapidly above f c . To have wideband low-noise performance, the goal is twofold: Maximize the feedback resistance (i.e., transimpedance) and increase the corner frequency at which the transistor noise becomes dominant. The design of a TIA starts with the selection of an appropriate transistor technology. This selection will have a large impact on the achievable bandwidth, the maximum transimpedance, and the corner frequency f c . For high-bandwidth applications it is important that the selected technology has a high transition frequency f t , which allows for the transistors to still have gain at high frequencies. The maximal obtainable transimpedance gain R T depends on the 3 dB bandwidth (B W 3 dB ), the gain-bandwidth product (A 0 f A ), the input capacitance (C in ), the phase margin of the TIA (φ m ), and the number of gain stages in the voltage amplifier (n), as given by the transimpedance limit [39]: For a single-stage amplifier design the transimpedance limit simplifies to This equation implies that advanced technology nodes with a high f t (∼A 0 f A ) should allow for higher transimpedance values and hence improved low-noise performance at low frequencies. It also demonstrates the difficulty to manufacture high-bandwidth, low-noise TIAs. Considering single-stage amplifiers, if one would want to double the bandwidth for a given technology node (A 0 f 0 remains constant) and a given photodiode (C in constant), one would need to reduce the feedback resistor by a factor of four. This causes the low-frequency noise to increase fourfold. For a threestage amplifier, this becomes even worse, as the transimpedance would drop by a factor of 16. Even so, it doesn't mean that multistage amplifiers necessarily yield lower transimpedance values. It can be shown that multistage amplifiers can outperform singlestage amplifiers when the factor A 0 f A /B W 3 dB is large [39]. This reaffirms the preference for a fast technology node with a high gain-bandwidth product. Additionally, the corner frequency at which the noise of the input FET transistor becomes the dominant noise source, f c ≈ g m /( R F (2πC in ) 2 ), must be placed at a high frequency. This is achieved primarily by having a low input capacitance, which is comprised of a contribution by the photodiodes and by the TIA. Lateral waveguide photodiodes available in imec's iSiPP50G silicon photonics platform are used. These photodiodes have a very small junction capacitance (<10 fF). The capacitance contribution of the TIA can once again be reduced by selecting a fast technology node with a high f t . Secondarily, the excess noise factor can be lowered by selecting an appropriate technology. In this case, choosing a smaller technology node is considered to have adverse consequences, because short channel effects such as velocity saturation, carrier heating, vertical field mobility reduction, and channel length modulation impact [40]. 
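For reference, the single-stage form of the transimpedance limit discussed above is commonly quoted as follows (cf. [39]); the full multistage, phase-margin-dependent expression used in the paper is not reproduced here, but this simplified form is consistent with the factor-of-four argument in the text.

```latex
\[
  R_T \;\lesssim\; \frac{A_0 f_A}{2\pi\,C_{\mathrm{in}}\,BW_{3\,\mathrm{dB}}^{\,2}}
  \qquad \text{(single gain stage)} .
\]
% Doubling BW at fixed A_0 f_A and C_in therefore forces R_T (and R_F) down by a factor
% of four; for an n-stage voltage amplifier the limit scales roughly as BW^{-(n+1)},
% reproducing the factor of 16 quoted for the three-stage case.
```

This makes the preference for a fast technology node explicit: a larger gain-bandwidth product A_0 f_A buys back feedback resistance, and hence a lower white-noise floor, at any target bandwidth.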
However, different FET technologies offer different noise factors. While silicon metal-oxide-semiconductor field-effect transistor (MOSFET) technologies exist in very small nodes with a high f t , they have been shown to suffer from poor noise performance for small channel lengths [38,41,42]. High electron mobility transistors (HEMTs) are another type of FET that use III-V materials such as GaAs, GaN, or InP. Compared to silicon, these III-V materials achieve improved electron mobility and a higher saturation velocity, which yields high speed and low-noise devices [43]. For these reasons HEMTs have been used extensively in the design of low-noise amplifier monolithic microwave integrated circuits (MMICs) [44][45][46]. In this work, a 100 nm GaAs pseudomorphic HEMT (pHEMT) technology is used with a typical f t of 130 GHz. The transimpedance limit Eq. (3) also shows that amplifiers with a low phase margin φ m are able to achieve higher transimpedance values or a higher bandwidth. However, low phase margin values also result in higher overshoot in the time domain response and reduced phase linearity. Considering the modulation schemes employed in telecom, these disadvantages can be tolerated as digital signals are being transmitted. Commercial amplifiers strive for a phase margin of 63 • (i.e., a Butterworth response), which yields a good trade-off between bandwidth, ringing, and jitter [47]. This is not preferred as typical use cases for balanced homodyne receivers employ analog signaling. Therefore, a much For 100 nm pHEMT technology, it was found that a three-stage amplifier yields the highest transimpedance gain because the f t is significantly higher than the targeted bandwidth. The schematic of the amplifier can be seen in Fig. 4(a). Each stage consists of a common source amplifier followed by a source follower with a level-shifting Schottky diode and current source. A 50 buffer is added to isolate the TIA core from any outside loading and to provide 50 matching toward measurement equipment. To measure the DC current flowing into the TIA, the voltage before and after the feedback resistor is monitored. The voltages are sensed via two large resistors (R sense ) so they do not significantly influence the high-frequency behavior of the circuit. The voltage difference between these nodes is equal to I DC in R F . To obtain proper balancing, this voltage difference is used to tune the MZM on the PIC. Figure 4(b) shows the manufactured devices, with the PIC on the left and the TIA on the right. The output bonding pads of the PIC are placed close to the input bonding pads of the TIA. This allows for the use of short bondwires, preventing high frequency resonances. Wirebond capacitors, in addition to the onchip decoupling capacitors, are placed close to the TIA to provide increased power supply decoupling. The TIA has a physical dimension of 2.4 × 2.4 mm and a power consumption of approximately 850 mW. CHARACTERIZATION OF THE INTEGRATED CIRCUITS This section discusses the performance of the balanced homodyne receiver using metrics such as the CMRR, a 3 dB bandwidth, output matching, and noise. At the end of this section, a comparison is made between this detector and the state-of-the-art in literature. To characterize the CMRR, the PIC was connected to the TIA via wirebonds [ Fig. 4(b)] and the output of the TIA was probed. A 1550 nm CW laser (Koheras Basiks E15 source, NKT Photonics), amplitude modulated with a sine wave, is supplied to one of the optical inputs. 
A polarization controller is added to optimize power coupling to the chip, minimizing polarization-dependent losses. For both the balanced and unbalanced measurements, the photodiodes were biased identically with a reverse bias of 1.5 V. For the unbalanced case, a slight imbalance was introduced by the on-chip MZMs. The optical power going into the chip was kept low as to avoid nonlinear distortion in the TIA. For the balanced case, the input current monitoring pins on the TIA were used to bias the MZM structures such that the voltage drop across the feedback resistor was zero. The resulting CMRR measured at several frequency points between 10 MHz and 20 GHz is shown in Fig. 5. At 10 MHz, the CMRR is 80 dB and decreases to 26 dB and 27 dB at 10 GHz and 20 GHz, respectively (Fig. 5). The degradation of the CMRR is attributed to differences between the individual photodiodes such as deviations in junction capacitance, contact resistance, substrate parasitics, and differences in the transit-time-limited bandwidth. This high level of CMRR is obtained partially thanks to the MZM structures but also due to the high levels of precision in matching the path lengths that are achieved in integrated photonics. Solutions using external variable optical attenuators and optical delay lines have reported a CMRR ranging between 29 dB at 1 GHz to 23 dB at 20 GHz [48]. A more in-depth explanation of how the CMRR was measured can be found in Supplement 1. Next, the frequency response of the system was measured. To this end, a laser was modulated using a Fujitsu external 40 Gb/s LiNbO 3 MZM. The input ports of the MZM and the output of the TIA were connected to an Agilent N5247B PNA-X network analyzer. Using the on-chip MZM, a slight power imbalance is implemented. This is required so that the modulated signal raised above the noise floor and hence can be measured. The full twoport S-parameters are measured. The bandwidth of the external MZM was calibrated separately by connecting the modulated laser directly to a 70 GHz Finisar XPDV3120 photodetector and measuring the S-parameters of the MZM. The S 21 transmission coefficient of the calibrated S-parameters is used to measure the transimpedance gain, which can be seen in Fig. 6. A 3-dB bandwidth of 1.5 GHz is obtained. The output matching parameter S 22 is also shown and is less than −10 dB below 10 GHz. This guarantees very little reflections in the frequency band of operation when connecting the TIA with other 50 devices. The noise performance of the balanced receiver is obtained by measuring the output noise power spectral density (PSD) with a vacuum input applied to one input grating coupler while the other grating coupler is supplied with the LO. To measure the PSD, an Agilent N9020A MXA signal analyzer is connected to the output of the TIA. Figure 7 shows the PSD for different photodetector currents. The photocurrents were measured using a Keithley 2400 source meter. A translation to optical power can be obtained by multiplying with the responsivity (R = 1.1 A/W). As expected, the noise power increases when the current increases. When plotting the PSD for a single frequency (f = 100 MHz) versus the current, Fig. 8(a) is obtained. For low currents (i.e., low levels of shot noise), the noise is dominated by the electrical background noise. This background noise is obtained separately by blocking both optical input ports. As the current increases the shot noise becomes the dominant source of noise. 
A maximum shot noise to electrical noise clearance of 28 dB is measured at 100 MHz for a current of 3.14 mA. We also observed a deviation of the ideal linear shot noise behavior at high optical powers. We believe this deviation could be caused by carrier recombination [49]. Figure 8(b) plots the normalized PSD, in which the PSD for photocurrents ranging between 207 µA and 3.14 mA are normalized with respect to the PSD at 65.3 µA of photocurrent and with the electronic noise removed. In an ideal linear detector, the normalized PSD should increase equally over the complete frequency range for increasing current; e.g., for the PSD corresponding to 207 µA of photocurrent, the increase is 10 log(207 µA/65.3 µA) = 5.01 dB. However, as was already clear from Fig. 8(a), at high optical power the detector saturates. This can be observed on the normalized PSD corresponding to 3.14 mA of photocurrent, where the normalized noise density dips below the expected value of 16.82 dB at higher frequencies. Taking a closer look at the clearance over the full frequency band [ Fig. 8(c)], the clearance is high at low frequencies and decreases significantly at higher frequencies. Using Eq. (1), the IRND of the TIA can be calculated [solid red line in Fig. 8(c)]. When comparing the measured curve with the theoretical curve [dashed red line in Fig. 8(c)], a good correspondence between both can be observed. At low frequencies, the noise is dominated by the resistor while at high frequencies the noise is dominated by the input transistor drain noise. As the model in Eq. (1) is not all-encompassing, some deviation is expected, but the majority of the noise is characterized properly. The clearance was measured up to 20 GHz and remained shot noise limited. At 20 GHz, the shot noise is still twice as large as the electronic noise with a clearance of 4.8 dB. The reference makes use of an integrated TIA. Table 1 shows how the balanced detector in this work compares to the state-of-the-art in literature. It is clear that detectors that use discrete TIAs [10,[27][28][29][30] cannot reach bandwidths above 1 GHz. This is due to large parasitic capacitances that limit the obtainable bandwidth. The detector in [9] uses a commercial TIA in die form, so it is therefore able to reach much higher bandwidths and maintain noise levels comparable to the other references. The detector in this work is also able to achieve a high bandwidth while at the same time improving significantly in terms of noise. This is particularly clear in the shot-noise-limited frequency range (BW shot ). Only the work in [30] is able to reach comparably high levels of clearance, but requires a high optical power of 54 mW and is limited to 2 MHz bandwidth. CONCLUSION In this work a co-integrated balanced homodyne detector is reported. By designing a silicon photonics optical front end and a custom integrated TIA, a high bandwidth of 1.5 GHz and a reduction in noise with up to 28 dB clearance is achieved, which is significantly better compared to previous designs. A framework is used to model the noise generated by the TIA and provides useful insight in the trade-offs and optimization present in the TIA design. The high-bandwidth and low-noise performance translates to a large shot-noise-limited frequency range of 20 GHz. 
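The decibel bookkeeping quoted above can be verified in a couple of lines; the photocurrent ratios are the ones given in the text, and the last line assumes the clearance is defined as total (shot plus electronic) noise over electronic noise.

```python
import math

db = lambda ratio: 10 * math.log10(ratio)

# Expected rise of the shot-noise PSD relative to the 65.3 uA reference photocurrent
print(f"{db(207 / 65.3):.2f} dB")   # ~5.01 dB for 207 uA
print(f"{db(3140 / 65.3):.2f} dB")  # ~16.82 dB for 3.14 mA

# Clearance when the shot noise is twice the electronic noise (total = 3x electronic)
print(f"{db(3):.1f} dB")            # ~4.8 dB, as quoted at 20 GHz
```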
We believe that these integrated devices could provide significant enhancements to several noise-sensitive applications such as fast and long range CV-QKD systems, high-speed QRNG, optical coherence tomography, and accurate characterization of quantum states. Disclosures. The authors declare no conflicts of interest. Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Supplemental document. See Supplement 1 for supporting content.
5,594.4
2021-08-26T00:00:00.000
[ "Physics" ]
Steady state and mean recurrence time for random walks on stochastic temporal networks Random walks are basic diffusion processes on networks and have applications in, for example, searching, navigation, ranking, and community detection. Recent recognition of the importance of temporal aspects on networks spurred studies of random walks on temporal networks. Here we theoretically study two types of event-driven random walks on a stochastic temporal network model that produces arbitrary distributions of interevent-times. In the so-called active random walk, the interevent-time is reinitialized on all links upon each movement of the walker. In the so-called passive random walk, the interevent-time is only reinitialized on the link that has been used last time, and it is a type of correlated random walk. We find that the steady state is always the uniform density for the passive random walk. In contrast, for the active random walk, it increases or decreases with the node's degree depending on the distribution of interevent-times. The mean recurrence time of a node is inversely proportional to the degree for both active and passive random walks. Furthermore, the mean recurrence time does or does not depend on the distribution of interevent-times for the active and passive random walks, respectively. I. INTRODUCTION A broad range of diffusive processes on networks, from consensus formation [1,2] to current flows on electric circuits [3], can be modeled by random walks or, equivalently, by Markov chains. Their unbiased exploration of the underlying structure also makes them popular tools for designing algorithms for, e.g., navigation and search on networks [4][5][6], defining central nodes in a given network [7][8][9], community detection [10], and respondentdriven sampling [11,12]. In tandem with these applications, the impact of network structure on dynamics of random walks, including the hitting time, mixing time, and stationary density has been extensively studied. However, recent studies identified limitations of the classical network paradigm, where dynamics is modeled by a dynamical process on a static underlying structure. In a broad range of empirical systems, evidence suggests instead that dynamics presents non-trivial correlations between events and long-tailed interevent time distributions [13][14][15]. These observations are incompatible with the Poissonian statistics implicitly assumed in stochastic models, therefore calling for richer models for temporal networks [16]. An important practical question concerns the impact of the temporality of a network on diffusion. In order to address this question, a first approach consists in simulating random walks on real or synthesized data of time-stamped event sequences, and to compare their dynamical properties to those of properly defined null models [17][18][19][20]. A second approach consists in studying analytically the properties of random walks on specific models of temporal networks. In most studies, however, network structure changes at regular time intervals, and transitions between networks at different times are independent [18,21,22] or Markovian [23,24]. In the present work, we follow the latter path to analytically model diffusion under the non-Poissonian nature of interevent-times. Previous studies proposed to model temporal networks as stochastic sequences of events obeying a prescribed distribution of intereventtimes attached on each link [25][26][27][28]. 
A random walk process, called the active random walk, was then defined as a renewal process, i.e., after a walker arrives at a node, the intereventtimes attached to all links incident to the node are reinitialized [25,26,28]. Despite its non-Markovianity owing to the fact that the rate at which an event takes place depends on the time of the previous event, this stochastic process can be described by a generalized master equation, and some of its properties, such as the stationary density [25] and the relaxation time [28], were analytically solved. In this work, we will first derive an analytical expression for the mean recurrence time for the active random walk. Then, we consider a different non-renewal process, called the passive random walk, in which interevent-times are reset only on the links traversed at each jump. The passive random walk is considered to be a more natural model for diffusion on networks than the active random walk, because in reality a diffusing entity, e.g., a virus, does not typically initiate interactions between agents (i.e., nodes) upon a jump. We will use the fact that the passive random walk shows a stronger non-Markovianity than the active random walk does because passive random walkers remember their past trajectories to some extent. Finally, we will perform numerical simulations to test our analytical predictions and compare the stationary density and mean recurrence time between the two types of walks. II. MODEL We consider an undirected network on N nodes. We denote the set of nodes by V = {1, 2, . . . , N} and the set of links by E. We often refer to this network as the aggregate network because it is considered as the aggregation of the temporal network, which we introduce in the following, across time. Stochastic temporal networks [25,26] add a time dimension to the aggregated networks by assigning a random interevent-time τ e to each link e ∈ E in a renewal manner. The interevent-time is the interval between two consecutive activation events of the link. We denote the probability density function (PDF) of τ e by ψ e (t). We assume that the mean of τ e is finite. By construction, the random walker jumps from the current node to a neighboring node at the instant when a link appears between the two nodes. We analyze two versions of random walks on stochastic temporal networks [26]. The first version is the so-called active random walk. In the active random walk, when the random walker arrives at a node, it reinitializes the interevent-times on all links, which makes the process renewal. In this case, the waiting-time, i.e., the time for which a walker waits on a node before the link appears, is equivalent to the interevent-time [29,30] The second version is the so-called passive random walk, which does not assume the reinitialization at all links. When the random walker moves to a neighbor through link e, a new interevent-time is drawn only for e. The complexity of the passive random walk lies in its non-renewal nature. In other words, transition rates of the passive random walk depend on the trajectory that the walker has taken such that we have to account for the entire trajectory of the random walker to accurately evaluate its behavior. It should be noted that for exponentially distributed interevent-times the active and passive random walks are identical and reduce to the usual continuous-time random walk on the aggregate (i.e., static) network. III. 
ACTIVE RANDOM WALK The steady state of the active random walk was derived through a master equation approach in Ref. [26]. In this section, we first review these results (Secs. III A and III B). Then, we derive the mean recurrence time for the active random walk (Sec. III C). A. Probability flows We denote the probability that the random walker is located at node i (1 ≤ i ≤ N) at time t by p i (t). The normalization is given by N i=1 p i (t) = 1. The rate at which the walker arrives at node j from node i at time t is denoted by q j←i (t). The transition rate for a single walker to move from i to j at time t is given by r j←i (t) := q j←i (t)/p i (t). The master equation that governs the random walk is given by If the underlying static network is connected, which we assume in the following, the random walk is mixing and the stationary density, denoted by p * i := lim t→∞ p i (t), is obtained if we set lim t→∞ dp i /dt = 0 for all nodes i. For exponentially distributed interevent-times, the transition rate is given by where τ (ij) denotes the mean of τ (ij) , the interevent-time on link (ij). By combining Eq. (1) and r j←i (t) = r i←j (t), we conclude that the steady state is the uniform distribution [31]. For arbitrary interevent-time distributions, we cannot usually calculate r j←i (t). In addition, r j←i (t) may not be symmetric with respect to i and j so that the steady state may deviate from the uniform distribution. To calculate the steady state in this case, we define f (t; j ← i) as the rate at which the walker transits from i to j after time t has elapsed since the walker arrived at i. This event happens when, link (ij) is activated at time t and any other link (ik), where k = j, has not been activated by t. Because all τ (ik) 's, with the case k = j included, are reinitialized at the arrival of the walker at node i, we obtain We use Eq. (3) to derive the master equation for the active random walk. The rate at which the random walker reaches node j from an adjacent node i at time t satisfies where is the rate at which the walker arrives at node i at time t from an arbitrary neighbor, p j←i (0) are initially chosen weights on the links satisfying and δ(t) is Dirac's delta function. A detailed proof for Eq. (4) is found in Ref. [26]. By substituting Eq. (4) in Eq. (1), we obtain for any t > 0. We define i.e., the PDF of the time to transit from node i to somewhere. In other words, f (t; i) is the PDF of min j;(ij)∈E τ (ij) , which is the first time at which a link incident to i is activated. By integrating Eq. (7) and abbreviating which is the probability to remain at i for a time longer than t, we obtain for any t > 0. The derivation of Eq. (10) is shown in Appendix A. B. Steady state General case The steady state of the active random walk is evaluated via the Laplace transform of Eqs. (4) and (10), expansion of the exponential, and application of the final value theorem [26]. Here we briefly present a slightly modified derivation of the steady state. We take the Laplace transform of Eq. (10) to obtain Here,p i (s) = According to the final value theorem, the steady state probability for node i, denoted by p * i , is given by p * i = lim s→0 sp i (s). By combining Eqs. (11) and (12), we obtain where q * i := lim t→∞ q i (t) is the rate at which the random walker arrives at node j in the steady state. and F a :=F a (0), whereF a (s) = ∞ 0 F a (t)e −st dt is the Laplace transform of matrix F a (t). We transform Eq. (4) into the Laplace space to obtain whereq(s) = (q 1 (s), . . . 
,q N (s)) ⊤ and p(0) = (p 1 (0), . . . , p N (0)) ⊤ . We multiply both sides of Eq. (15) with s and obtain By taking the limit s → 0 on both sides of Eq. (16), we find that vector q * := (q * 1 , . . . , q * N ) ⊤ is the dominant eigenvector of F a , i.e., Suppose that the random walker just arrived at a node. F a contains the probabilities to make a transition from one node to another in one step, F 2 a contains the probabilities to do so in two steps, and so on. Equation (17) implies that q * is proportional to the steady state of the discrete-time random walk on the aggregate network with transition probabilities defined by F a . It should be noted that q * is unique up to the scaling factor because the aggregate network has been assumed to be connected. To obtain p * in Eq. (13), we weight the steady state vector of the discrete-time random walk with the mean time for which the walker stays at the node. Identical distributions When interevent-times for different links are identically distributed according to ψ(t), where d i is the degree of node i. By combining Eqs. (17) and (18), we obtain which is consistent with the fact that the steady state of the simple random walk on an arbitrary static undirected network is proportional to the degree [3,32]. By substituting Eq. (19) in Eq. (13) and using N i=1 p * i = 1, we obtain where τ ℓ 's are i.i.d. copies of the interevent-time. It should be noted that depends solely on the degree of a node. Therefore, the steady state depends only on the node's degree. If the interevent-time is exponentially distributed, we obtain min ℓ=1,...,d i τ ℓ ∝ 1/d i so that the steady state is the uniform distribution. This is consistent with our previous argument in Section III A, where we derived this fact directly from the master equation [see Eq. (1)]. Otherwise, min ℓ=1,...,d i τ ℓ is not necessarily proportional to 1/d i such that the steady state may not be the uniform distribution. C. Mean recurrence time Let T i|i be the first-passage time, i.e., the time at which a random walker starting at i returns to i for the first time. We denote the PDF of T i|i by g(t; i|i). Our goal in this section is to determine the mean recurrence time given by The hopping rate of the walker generally depends on the time already spent at a node. Therefore, we confine ourselves to the first-passage time since the walker has just arrived at node i. We will adapt the derivation of the mean first-passage time for discrete random walks on static undirected networks [8] to the case of the active random walk. Denote by p i|i (t) the probability that the random walker is located at node i at time t given it started at node i. We obtain The first term on the right-hand side of Eq. (23) governs the case in which the random walker has not left i until time t. The second term accounts for the walker that has left i at least once. By transforming Eq. (23) into the Laplace space, we obtain Equation (24) impliesĝ where The mean recurrence time of node i satisfies By substituting Eq. (25) in Eq. (27), we obtain Application of the final value theorem yields By using the rule of L'Hospital, we obtain Furthermore, we recall thatφ i (0) = min ℓ =i;(iℓ)∈E τ (iℓ) < ∞ [see Eq. (12)], so that the tail By substituting Eqs. (12), (13), (29), (30), and (31) in Eq. (28), we obtain In particular, if interevent-times on different links are identically distributed, the combination of Eqs. 
(19) and (32) yields It should be noted that for discrete random walks on undirected static networks the mean recurrence time is also inversely proportional to the node's degree [8]. IV. PASSIVE RANDOM WALK In this section, we evaluate the steady state and mean first-passage time of the passive random walk. The difference from the active random walk is that the move of a walker does not reinitialize interevent-times except on the link used for that jump. When the passive random walker arrives at a node, links incident to the node have thus already been inactive for some random time. Therefore, in contrast to the case of the active random walk, When the arrival of a walker on a node and the activation of an link are independent processes, it is known that the waiting-time distribution ρ e (t) is given in terms of the intereventtime distribution ψ e (t) by and that the average waiting-time depends on the variance of the interevent-time, a property called the waiting-time paradox or bus paradox in queuing theory [29]. In general, the time at which the random walker jumps to an adjacent node depends on the previous trajectory of the walker. However, we assume that Eq. (34) holds true in the following analysis. A. Probability flows In the following, we perform an approximation in order to analytically evaluate the Eq. (34). The PDF of the time when the walker transits from node i to node j given that it arrived at node i from node k is approximated by Equation (35) suggests that the trajectory of the random walker impacts the order of link activation. In particular, when the interevent-time obeys a long-tailed distribution, a transition from node i back to node k is more likely than that from i to other nodes. It should be noted that the probability of transition does not depend on the destination node in the case of the active random walk. It should be also noted that Eq. (35) is exact for exponentially distributed interevent-times. Similarly to Eq. (4), we obtain where the initial condition satisfies Eq. (6). Similarly to Eq. (9), we define which is the approximate probability that the walker stays at node j for time longer than t given that it arrived from node i. By substituting Eq. (36) in Eq. (1), using Eq. (37), and performing a calculation similar to Appendix A for the derivation of Eq. (10), we obtain for any t > 0. Equations (36) and (38) govern the dynamics of the passive random walk. B. Steady state General case To calculate the approximate steady state distribution, we proceed similarly to the case of the active random walk. By taking the Laplace transform of Eq. (36), we obtain In terms of the vectors, Eq. (39) is written aŝ whereF p (s) is the Laplace transform of the 2|E| × 2|E| matrix (|E| is the number of links in the aggregate network) given by q p (s) is the Laplace transform of the 2|E|-dimensional column vector q p (t) := (q i←j (t)) (ij)∈E , and p p (0) is the Laplace transform of the 2|E|-dimensional column vector p p (0) := (p i←j (0)) (ij)∈E . In Eq. (41), we define f (t; j ← i|ℓ ← k) ≡ 0 for i = ℓ because such a transition is impossible. By multiplying both sides of Eq. (40) by s and letting s → 0, we obtain where q * p := (q * i←j ) (ij)∈E = (lim t→∞ q j←i (t)) (ij)∈E and F p =F p (0). To determine the steady state p * i , we transform Eq. (38) to the Laplace space and multiply both sides by s to obtain By setting s → 0 in Eq. (43), we obtain Finally, Eq. 
(37) implieŝ Therefore,φ i←j (0) is the (approximate) mean time for which a walker arriving at i from node j waits before moving to a neighbor. In summary, the steady state is approximately given by where q * i←j is the solution of Eq. (42), and the normalization is given by N i=1 p * i = 1. Identical distributions Denote the components of F p by (F p ) (ji),(ℓk) for (ij), (kℓ) ∈ E. When interevent-times for different links are identically distributed according to ψ(t), which we assume in this section, we obtain The first equality in Eq. (47) follows from the fact that (F p ) (ℓk),(ji) > 0, indicating the walker moved from j to i and then from ℓ to k, if and only if ℓ = i. The second equality follows from the assumption that ψ e (t) = ψ(t) for any e ∈ E. Equation (47) indicates that F p is a doubly stochastic matrix. Therefore, the solution of Eq. (42) is given by q * p ∝ 1, where 1 represents the 2|E|-dimensional column vector whose all elements are equal to unity. By using Eq. (5), we obtain This result is the same as that for the active random walk with identical interevent-time distributions [see Eq. (19)]. To evaluate the right-hand side of Eq. (46), we use whereτ k 's are i.i.d. copies of the waiting-time distributed according to a common PDF ρ(t). By substituting In the last equality in Eq. (51), we used integration by parts. Equation (51) implies Therefore, the mean time that the passive random walker spends at a node before transiting to a neighboring node does not depend on ψ(t) except for the dependence on τ . By combining Eqs. (5), (46), (48), and (52) and using N i=1 p * i = 1, we obtain Unlike for the active random walk [see Eq. (20)], the (approximated) steady state of the passive random walk is the uniform distribution for any ψ(t) and network structure. This is also consistent with the case when interevent-times obey the exponential distribution, for which we derived the steady state in Section III A directly from the master equation. C. Mean recurrence time To evaluate the mean recurrence time for the passive random walk, we denote by T i←j|i the time at which a random walker leaving node i returns to i through link (ji) for the first time. The PDF of T i←j|i is denoted by g(t; i ← j|i). We also define p i|i←j (t) as the probability that the random walker is located at node i at time t, given that it arrived at i from j at time 0. It should be noted that Bayes' rule results in The PDF of the first recurrence time satisfies Here, φ i (t) denotes the probability that the walker resides at node i for time longer than t and is given by By following the same steps as in Eqs. (25)-(32), we obtain By substituting Eqs. (52) and (53) in Eq. (59), we obtain It should be noted that the mean recurrence time is also inversely proportional to the degree for the active random walk [see Eq. (32)]. It should be also noted that T i|i is independent of ψ(t) for the passive random walk except for the factor τ , but not for the active random walk. V. EXAMPLES To illustrate the theoretical results derived in Sections III and IV, we analyze examples in this section. We assume that the interevent-times are identically distributed for all links according to ψ(t), which is either a power-law or Weibull distribution. A. Power-law distributed interevent-times Consider the case in which all interevent-times follow a power-law distribution given by where α > 2, corresponding to the assumption τ < ∞. We plot the PDFs of the powerlaw by the dotted line in Fig. 1. 
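Before working through the examples, two steps in the passive-walk analysis (Eqs. (34) and (49)-(53)) can be written out explicitly. The following is a reconstruction from the verbal description above, assuming identically distributed interevent-times; the auxiliary notation (Ψ for the survival function, G for its integral) is introduced here and may differ from the paper's.

```latex
% Waiting-time (inspection) paradox, cf. Eq. (34):
\[
  \rho(t) \;=\; \frac{\Psi(t)}{\langle\tau\rangle},
  \qquad \Psi(t) = 1 - \int_0^{t}\psi(t')\,dt',
  \qquad \langle t_{\mathrm w}\rangle = \frac{\langle\tau^{2}\rangle}{2\,\langle\tau\rangle}.
\]
% Mean residence time at a node of degree d_i: the minimum of one fresh interevent-time
% (on the link just traversed) and d_i - 1 residual waiting-times (on the other links),
% cf. Eqs. (49)-(52); integration by parts with G'(t) = -\Psi(t) gives
\[
  \langle t_{\mathrm{stay}}\rangle
  \;\approx\; \int_0^{\infty}\Psi(t)\,\Bigl(\frac{G(t)}{\langle\tau\rangle}\Bigr)^{d_i-1}dt
  \;=\; \frac{1}{\langle\tau\rangle^{d_i-1}}\Bigl[-\frac{G(t)^{d_i}}{d_i}\Bigr]_0^{\infty}
  \;=\; \frac{\langle\tau\rangle}{d_i},
  \qquad G(t) \equiv \int_t^{\infty}\Psi(u)\,du .
\]
```

Because q*_{i←j} is uniform over directed links (F_p is doubly stochastic), the passive steady state obeys p*_i ∝ d_i × ⟨τ⟩/d_i, which is independent of i; this is the uniform distribution of Eq. (53), valid for any ψ(t).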
In fact, many real data show α ≈ 1 or 1.5 [13][14][15], which apparently contradicts our choice α > 2. Here, for simplicity we assume α > 2 to investigate the effect of long-tailed distributions on the random walk on temporal networks. For the active random walk, the PDF of the transition time is calculated from the substitution of Eq. (61) in Eq. (3) as follows: The probability to make a transition from node i to an adjacent node j [see Eq. (14)] is given by The mean time for which the random walker stays at node i before moving to a neighbor [see Eq. (12)] is given by For the passive random walk, we substitute Eq. (61) in Eq. (34) to obtain the PDF of the waiting-time as follows: By substituting Eq (65) in Eq. (35), we obtain The approximate probability to make a transition from node i to a neighbor j conditioned that the random walker reached node i through link (ik) [see Eq. (41)] is given by To illustrate the difference between the active and passive random walks, we consider the network composed of three nodes shown in Fig. 2. For the active random walk, the mean time to stay at node i, i.e., Eq. (64), is reduced to min ℓ=1,...,d i where τ ℓ 's are i.i.d. copies of the interevent-time. By substituting Eq. (72) in Eq. (20), we obtain p * a,i ∝ where subscript "a" here and in the following corresponds to the active random walk. Equation (73) implies that the steady state is not uniform, which is in contrast to the case of exponentially distributed interevent-times. In particular, p * a,i is large for nodes with small degrees, which is opposite to the case of the discrete-time simple random walk in undirected networks for which the steady state is proportional to the node's degree [3,32]. In contrast, for the passive random walk, the steady state is the uniform distribution [see Eq. (53)]. To test our theory, we carried out numerical simulations. We set α = 3 and used the Barabási-Albert scale-free network [33] with N = 50 nodes. The two parameters in the Barabási-Albert model were set to m 0 = m = 2. We used the same single realization of the aggregate network for both active and passive random walks. We calculated the steady state as the average time for which the walker spent on each node between t = 10 3 and t = 10 8 . The choice t = 10 3 is to exclude the transient. We initially placed the walker at one of the two nodes initially created in the Barabási-Albert algorithm, each with probability 1/2. The numerically obtained steady state probability of each node is shown in Fig. 3(a) for the active (circles) and passive (triangles) random walks. The nodes on the horizontal axis are shown in the ascending order of the degree. The numerical results are accurately predicted by the theory (lines). In particular, the approximations made for analyzing the passive random walk do not cause a notable discrepancy between the numerical and theoretical results. Next, we examine the mean recurrence time. For the active random walk, we substitute Eqs. (20) and (64) in Eq. (32) to obtain For the passive random walk, by substituting τ = (α − 2) −1 in Eq. (60), we obtain where subscript "p" corresponds to the passive random walk. In the numerical simulations of the passive random walk, we assume that the walker arrived at the starting node i from each neighbor of i with the same probability at t = 0. To mimic the steady state of the stochastic temporal network, we assumed that the initial is drawn from ψ(t). 
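The event-driven simulations described here can be sketched in a few dozen lines. The code below is an illustrative implementation, not the one used for Fig. 3: it assumes identically distributed interevent-times of the Pareto-like form ψ(t) = (α−1)(1+t)^(−α) (unit mean for α = 3; the mean matches the text but the functional form is an assumption), uses networkx for the Barabási-Albert graph, and estimates steady-state occupancies. The recurrence-time measurements discussed next follow from the same event machinery by recording return times instead of occupancies.

```python
import random
import networkx as nx

def draw_tau(alpha=3.0):
    """Assumed Pareto-like interevent-time with survival (1+t)^-(alpha-1); mean 1/(alpha-2)."""
    u = 1.0 - random.random()                 # u in (0, 1]
    return u ** (-1.0 / (alpha - 1.0)) - 1.0

def active_occupancy(G, t_max, start=0):
    """Active walk: fresh interevent-times are drawn on all incident links at each arrival."""
    t, node = 0.0, start
    occ = {v: 0.0 for v in G}
    while t < t_max:
        waits = {nbr: draw_tau() for nbr in G[node]}
        nxt = min(waits, key=waits.get)       # first incident link to fire
        occ[node] += min(waits[nxt], t_max - t)
        t += waits[nxt]
        node = nxt
    return occ

def passive_occupancy(G, t_max, start=0):
    """Passive walk: every link fires as its own renewal process (only the firing link is
    rescheduled); the walker jumps whenever a link incident to its current node fires."""
    t, node = 0.0, start
    next_fire = {tuple(sorted(e)): draw_tau() for e in G.edges()}
    occ = {v: 0.0 for v in G}
    while t < t_max:
        e = min(next_fire, key=next_fire.get) # earliest activation among all links
        t_e = next_fire[e]
        next_fire[e] = t_e + draw_tau()       # renew that link only
        occ[node] += min(t_e, t_max) - t
        t = t_e
        if node in e:                         # the firing link touches the walker
            node = e[0] if node == e[1] else e[1]
    return occ

if __name__ == "__main__":
    random.seed(1)
    G = nx.barabasi_albert_graph(50, 2, seed=1)
    t_max = 5e3                               # short illustrative run
    degs = dict(G.degree)
    lo, hi = min(degs, key=degs.get), max(degs, key=degs.get)
    for name, walker in (("active", active_occupancy), ("passive", passive_occupancy)):
        occ = walker(G, t_max)
        print(f"{name:7s}: p*(deg {degs[lo]}) ≈ {occ[lo]/t_max:.4f}, "
              f"p*(deg {degs[hi]}) ≈ {occ[hi]/t_max:.4f}  (uniform = {1/len(G):.4f})")
```

Consistent with Fig. 3(a), for this long-tailed ψ the active walk should over-occupy low-degree nodes, while the passive walk should remain close to the uniform value 1/N.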
For each starting node i, we averaged the recurrence time over 10 5 realizations of the random walk. Numerically obtained mean recurrence times on the same scale-free network as that used in Fig. 3(a) is shown in Fig. 3(b). The theory (lines) accurately matches the numerical results (symbols). We also confirmed that the numerical results when ψ(t) is the exponential distribution with the same mean (i.e., τ = 1; shown in Fig. 1 by the solid line) completely overlap with those for the passive random walk when ψ(t) is the power-law distribution [hence not shown in Fig. 3(b)]. Figure 3(b) also indicates that, for each node, the active random walk realizes a smaller mean recurrence time than the passive random walk does. B. Weibull distributed interevent-times For the power-law distribution of interevent-times, the steady state probability of a node decreases with the degree for the active random walk, and the mean recurrence time at each node is larger for the passive than active random walk (Sec. V A). However, these results are not universal. To show this, we consider the case in which the interevent-time obeys the Weibull distribution given by where m (0 < m < ∞) and λ(> 0) are parameters. The Weibull distribution with m = 2 and λ = √ π/2, which yields τ = 1, is shown by the dashed line in Fig. 1. It should be noted that the tail of the distribution is shorter than that of the exponential distribution with the same mean (solid line). We start by illustrating the dynamics of the passive random walk on the three-node network shown in Fig. 2. By combining ρ(t) = e −(λt) m with Eq. (35), we obtain independent of the λ value. For m = 1, the interevent-times are exponentially distributed such that (F p ) (21), (12) = (F p ) (23),(32) = 1/2. For 0 < m < 1, we obtain 1/2 < (F p ) (21),(12) = (F p ) (23),(32) < 1 such that the random walker tends to alternate between two nodes, which is similar to the dynamics when ψ(t) is the power-law distribution (Sec. V A). For m > 1, we obtain 0 < (F p ) (21),(12) = (F p ) (23),(32) < 1/2 such that the random walker tends to avoid The numerical results for the Weibull distribution with m = 2, λ = √ π/2, and the same scale-free network as that used in the previous section are shown in Fig. 4. The steady state probability for the active random walk increases with the degree [circles in Fig. 4(a)]. In fact, a direct calculation of Eq. (20) for the Weibull distribution yields p * i ∝ √ d i , as shown by the lines overlapping with the circles in Fig. 4(a). This result is in contrast to the case of the power-law distribution of interevent-times, for which p * i decreases with d i [see Eq. (73)]. For the passive random walk, the steady state obeys the uniform distribution (triangles), which is consistent with the theory. The mean recurrence time under the Weibull distribution of interevent-times is shown in Fig. 4(b). The results for the passive random walk (triangles) are indistinguishable from those for the power-law and exponential distributions, which is consistent with the theory. We also find that, for any node, the mean recurrence time is larger for the active than passive random walk. This result is opposite to that for the power-law distribution of intereventtimes. VI. CONCLUSIONS We studied two models of random walks on stochastic temporal networks. Our main findings are summarized as follows. 
First, the steady state for the passive random walk with identically distributed interevent-times on links is uniform for any network and distribution of interevent-times. Second, for the active random walk, the steady state probability decreases and increases with the degree for the power-law and Weibull distribution of interevent-times, respectively. Third, the mean recurrence time for both types of walks is inversely proportional to the node's degree. Fourth, the mean recurrence time for the passive random walk does not depend on the distribution of interevent-times. Fifth, the active random walk produces smaller and larger mean recurrence times for each node than the passive random walk does when the interevent-time obeys the power-law and Weibull distributions, respectively. The present result that the mean recurrence time is inversely proportional to the node's degree is consistent with that in Ref. [17]. In particular, both studies conclude that the distribution of interevent-times does not affect the mean recurrence time (squares and diamonds in Fig. 7 in Ref. [17]). We reached this conclusion by explicit derivation of the mean recurrence time. In contrast, we consider that the strength of Ref. [17] in this respect lies in numerically showing the universality of this result across different data sets. It should be noted that a discrete-time simple random walk on a different temporal network model yields different results; the mean recurrence time decreases but is not inversely proportional to the degree [21]. The passive random model induces a correlated random walk. Interesting connections of the present study may be made to seminal work on correlated random walk on lattices [34][35][36] and to recent work modeling empirical pathways on networks by second-order Markov processes [19,37,38]. Pursuing connection to anomalous diffusion on lattices [39] may be also interesting. acknowledges the support provided through CREST, JST.
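For reference, the degree dependences summarized in the conclusions can be collected in one place. This is a paraphrase under the identical-distribution assumption, not a verbatim copy of Eqs. (20), (53) or the Sec. V B example; the Weibull line follows from the closed form of the minimum of i.i.d. Weibull variables.

```latex
\[
  p^{*}_{\mathrm{active},i} \;\propto\; d_i\,\Bigl\langle \min_{\ell=1,\dots,d_i}\tau_\ell\Bigr\rangle,
  \qquad
  p^{*}_{\mathrm{passive},i} \;=\; \frac{1}{N},
  \qquad
  \langle T_{i|i}\rangle \;\propto\; \frac{1}{d_i}\ \text{(both walks)}.
\]
% Weibull interevent-times with shape m and scale parameter \lambda:
\[
  \Pr\Bigl[\min_{\ell\le d_i}\tau_\ell > t\Bigr] = e^{-d_i(\lambda t)^{m}}
  \;\Rightarrow\;
  \Bigl\langle\min_{\ell\le d_i}\tau_\ell\Bigr\rangle = \frac{\Gamma(1+1/m)}{\lambda\,d_i^{1/m}}
  \;\Rightarrow\;
  p^{*}_{\mathrm{active},i} \;\propto\; d_i^{\,1-1/m},
\]
% which gives the \sqrt{d_i} scaling of Fig. 4(a) for m = 2 and reduces to the uniform
% distribution for m = 1 (exponential interevent-times).
```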
7,359.8
2014-07-17T00:00:00.000
[ "Computer Science", "Mathematics", "Physics" ]
ERA-MIN: A Decade since the Inception of the EU Led Effort to Support the International Raw Materials Research Community "2279 ERA-NET Cofund on Raw Materials (ERA-MIN) is a global, innovative and flexible panEuropean network of research funding organisations, supported by EU Horizon 2020, that counts now with its third edition. ERA-MIN3 (2020–2025) builds on the experience of the FP7 ERA-NET ERA-MIN (2011–2015) and the still running H2020 ERA-MIN 2 (2016–2022). ERA-MIN aims to support the European Innovation Partnership on Raw Materials (EIP RM), the EU Raw Materials Initiative, the Circular Economy Action Plan and further develop the raw materials (RM) sector in Europe through funding of transnational research and innovation (R&I) activities. This is achieved through calls designed and developed specifically for the non-fuel, non-food raw materials sector. Introduction ERA-MIN is one of the many Public-Public Partnerships (P2Ps) in research and innovation that aims to establish networks of public research funding organizations (ministries, funding agencies and programme managers) from EU Member States and other countries, that come together to efficiently design and support a common vision of research and innovation activities. The main goal of P2Ps is to align national strategies to come with a more efficient approach and avoid the fragmentation of public research effort. P2Ps include partnerships supported by the European Commission (EC) such as ERA-NETs and Art 185s and also Member State-led initiatives, known as Joint Programming Initiatives. ERA-MIN is one of the close to 200 ERA-NETs, which are networks of national and regional research funding organizations and ministries that join forces to design and implement thematic research and innovation programmes to fund transnational projects through joint, open and competitive calls. The first ERA-NETs, created during the 6th Framework Programme of the EC, helped to coordinate national/regional research strategies by supporting the implementation of joint calls for transnational proposals. In the 7th Framework Programme, when the first project of ERA-MIN started, the ERA-NET scheme was reinforced by ERA-NET Plus, which in some specific priority sectors allowed the topping-up of joint trans-national funding for calls with European Union funding. After this successful pilot, in Horizon 2020 the ERA-NET Cofund instrument merged the former ERA-NET and ERA-NET Plus into a single instrument that required implementing one substantial call with top-up funding from the EC on each of its initiatives. Thus, the focus of ERA-NETs shifted from the funding of networks to the top-up funding of joint calls. Currently there are 66 active ERA-NET Cofund networks amongst a total of 265 P2Ps created since 2002 [1]. Both ERA-MIN 2 and ERA-MIN3 correspond to the ERA-NET Cofund scheme type and are coordinated by FCT-Foundation for Science and Technology in Portugal. The EU Raw Materials Policy Context The importance of raw materials (RM) and its related key enabling technologies are essential for Europe's and the World's industrial future. Several strategic documents have been developed in the past decade, since 2008, when the Commission adopted the medium-to long-term integrated EU strategy covering non-energy and non-agricultural raw materials: the "Raw Materials Initiative built in 3 pillars: international, domestic and recycling [2]. 
All RM policies aim to guide the raw materials sector towards more sustainable, efficient and cleaner technological innovations that allow the raw material cycle to keep driving the world's economy for many more years to come. At a global level, the 2030 Agenda for Sustainable Development [3] and its 17 Sustainable Development Goals incorporate some of the key objectives that the raw materials community looks to support. Specifically, technological innovation is at the foundation of the efforts undertaken to achieve the environmental and growth goals set by the General Assembly of the United Nations and is also an integral part of the ERA-MIN objectives (mainly SDG 6 (Ensure Availability and Sustainable Management of Water and Sanitation for All), SDG 7 (Ensure Access to Affordable, Reliable, Sustainable and Modern Energy for All), SDG 9 (Build Resilient Infrastructure, Promote Inclusive and Sustainable Industrialization and Foster Innovation), SDG 17 (Strengthen the Means of Implementation and Revitalize the Global Partnership for Sustainable Development) and particularly SDG 12 (Ensure Sustainable Consumption and Production Patterns)). In that respect, importance is attached to energy- and resource-efficient technologies for mining and mineral processing, as well as to secondary raw material supply via recycling. Additionally, end-of-life product re-use and recycling, and the development of innovative processes for production and remanufacturing, in the context of life-cycle analysis and new business models, are some of the objectives that many of the ERA-MIN projects have at their core. At a European level, the Circular Economy Action Plan sets out measures that cover the whole life cycle: from production and consumption to waste management and the market for secondary raw materials. The Report on the implementation of the Circular Economy Action Plan (SWD(2019)90) [4], supporting the transition towards a "circular economy and a zero waste programme for Europe" (COM(2014)398) [5], is also key in guiding the future of the raw materials sector. More specifically, the "Report on Critical Raw Materials and the Circular Economy" [6] highlights the importance of adapting to changes brought on by the transition to a low-carbon and more circular economy, as well as the strategic importance of raw materials for the EU manufacturing industry. In this sense, ERA-MIN addresses all parts of the raw materials value chain defined in the EC Communication on the Circular Economy [7] and in the Circular Economy package "Closing the loop: an ambitious EU Circular Economy Package" [8], as shown in Figure 1. In addition, the European Commission, in the communication "A clean planet for all" [10], has also defined a set of seven main strategic priorities considering new and improved materials for buildings, reduction of materials through re-use and recycling, substitution of carbon-intensive materials and biogenic materials, as well as more efficient and sustainable batteries, one of the key sectors clearly dependent on a sustained supply of raw materials.
Furthermore, the "European Green Deal" [11] designs a set of deeply transformative policies, which include new forms of collaboration with industry and investments in strategic value chains aiming to mobilise industry towards a clean and circular economy. Finally, it is worth highlighting two specific initiatives which are closely interrelated with the ERA-MIN goals: the European Innovation Partnership on Raw Materials (EIP RM) [12] and the European Institute of Innovation and Technology for Raw Materials (EIT RM) [13]. The EIP RM is a multi-stakeholder platform launched by the EC whose main objective is to help raise industry's contribution to the EU's GDP to around 20% by 2020 by securing its access to raw materials. Since its launch, the EIP has held annual high-level conferences where the calls for Raw Materials Commitments, including ERA-MIN calls were announced. ERA-MIN 2 was a member of the High-level Steering Group of the EIP RM for the period 2017-2020 and takes very careful consideration of the EIP RM recommendations for the establishment of the priorities of its calls for proposals. ERA-MIN 2 was also a member of the EU-Canada Raw Materials Stakeholders Forum Steering Committee to inform the governmental EU-Canada Bilateral Dialogue on Raw Materials under the CETA agreement. The EIT RM is a body of the EU which brings together 115 research performing organisations from 22 EU MS, making it the biggest consortium of this kind in the world. The EIT RM addresses challenges in the area of RM (sustainable exploration, extraction, processing, recycling, and substitution) and integrates multiple disciplines, diversity and complementarity along the three sides of the knowledge triangle (business, education and research) and across the whole RM value chain. ERA-MIN will cooperate with EIT RM to foster possible synergies and complementarities. FP7 ERA-NET ERA-MIN ERA-MIN (2011-2015) was an ERA-NET on the "Industrial Handling of Raw Materials for European industries" that was launched in November 2011 coordinated by CNRS (France). It was supported by the FP7 and aimed at setting up networks and mechanisms to foster coordinated research in the field of industrial production and supply of non-energy and non-agricultural raw materials, in line with the "EU Raw Materials Initiative". To achieve this objective, ERA-MIN conducted three main tasks: 1. Mapping and networking of the European non-energy mineral raw materials research community; 2. Implementing the research actions and European research programs financed by the fifteen national and public funding agencies involved in ERA-MIN. ERA-MIN consortium was enlarged to fifteen EU countries and two non-EU countries (Argentina and South Africa). In 2013 the ERA-MIN Research Agenda [14] Thirteen projects on primary resources addressed the first part of the loop "From exploration to mine closure and rehabilitation", seven projects on secondary resources and two projects on substitution of CRM were funded under the ERA-MIN JTCs. The ERA-MIN funded R&I projects have successfully contributed to develop the raw materials community in Europe and beyond, in line with the objectives of the SIP of EIP RM. 
Moreover, the analysis of the Calls 2014 and 2015 showed significant cooperation within EU countries and also with Argentina and South Africa in the non-energy and non-agricultural raw materials sectors, from exploration, extraction, mineral processing and metallurgy to recycling of mining and smelting residues and substitution of critical materials for green energy technologies. Moreover, enterprises in the mining sector and in the recycling sector are cooperating mainly through academia-industry partnerships [15]. The main objectives of ERA-MIN 2 are to: 1. Support and promote R&I cooperation in Europe; 2. Reduce fragmentation of R&I funding in the area of non-energy non-agricultural raw materials across Europe and globally; 3. Provide a pan-European support network and financial resources to improve synergies, coordination and collaboration; 4. Improve the efficiency and impact of human and financial investment in R&I activities in the area of Raw Materials. The ERA-MIN 2 consortium (Figure 2) consists of 21 funding organisations from 18 countries/regions (13 EU MS countries/regions, one Associated country, and four non-EU countries). Three Joint Transnational Calls have been launched under ERA-NET ERA-MIN 2: one co-funded call in 2017 and two additional calls in 2018 and 2019 with national/regional funds only. ERA-MIN 2 has built a strong global network with the European raw materials players, as well as with non-European stakeholders. Seven funding organisations from EU MS regions (Brussels, Wallonia, Calabria), EU MS countries (Czech Republic, Greece, Slovakia) and the non-EU Québec province have joined one or two of the calls. A total of 40 multidisciplinary projects were funded under the ERA-MIN 2 calls, with €29 million of public funding (€41 million of total project costs). All the thematic areas, based on challenges and priorities identified in the ERA-MIN Research Agenda as well as in the SIP of EIP RM, were covered (Figure 3). Topic 1 (Exploration and mining) is addressed by 10 projects, Topic 2 (Design) by 17 projects, Topic 3 (Processing, production and remanufacturing) by 26 projects, Topic 4 (Recycling and re-use of End-of-Life products) by 24 projects, and Topic 5 (Cross-cutting issues) by 23 of the 40 projects funded in the 2017, 2018 and 2019 calls; note that projects may address more than one subtopic.
Moreover, up to now, a decade since the first ERA-MIN, a total of 57 R&I projects have been supported with €42 million of public funding (€60 million of total project costs), with 34% (on average) coming from the private sector [16]; these projects cover the whole raw materials innovation chain. H2020 ERA-NET ERA-MIN3 ERA-MIN3 (2020-2025), started officially on 1 December 2020, is a global, innovative and flexible pan-European network of 24 research funding organisations, supported by EU Horizon 2020 under the ERA-NET Cofund scheme. It aims to continue strengthening the mineral raw materials community through the coordination of research and innovation programmes on non-fuel and non-food raw materials (metallic, construction, and industrial minerals). Its five key objectives are: 1. Contribute to the objectives and the implementation of both the RMI and the EIP RM Strategies, particularly in the priority area of RM R&I coordination, maximising the impact of other actions in the Technology Pillar of the SIP; 2. Reduce fragmentation of RM R&I funding in the area of non-energy, non-agricultural raw materials across Europe and globally; 3. Improve synergy, coordination and coherence between regional, national and EU funding in the non-energy, non-agricultural RM research fields through transnational and international collaboration; 4. Improve use of human and financial resources in the area of non-energy, non-agricultural RM research and innovation; 5. Improve the competitiveness and the environmental, health and safety performance of non-energy, non-agricultural RM operations. ERA-MIN3 will launch at least two JTCs for transnational R&I proposals to support peer-reviewed excellent transnational R&I projects on non-fuel and non-food raw materials (metallic, construction, and industrial minerals). The EU Co-funded ERA-MIN Joint Call 2021 [17] is the first joint call of the ERA-NET Cofund ERA-MIN3, officially launched on 15 January 2021, with an indicative budget of €19.5 million. In terms of pre-proposal submission, the EU co-funded ERA-MIN Joint Call 2021 has been the most successful call of ERA-MIN to date, with 146 pre-proposals submitted by the deadline and 892 applicants participating (of which 32% are enterprises), requesting funding from the 24 participating countries/regions of the call [18]. Moreover, nine associated partners in pre-proposals, including four enterprises, were from other EU and non-EU countries, namely the Alberta province in Canada, Austria, Greece, Morocco, Norway, Peru, the United Kingdom and the USA.
When comparing with the EU co-funded Call 2017 of ERA-MIN 2, the thematic area most addressed in the ERA-MIN3 Call 2021 was Recycling, whereas in 2017 it was Production, which shows an increased interest of the RM research community in Recycling topics, in line with the current EU Raw Materials Policies. A second joint call for transnational R&I proposals is planned for 2023, supported with national and regional funds only; funding organisations from other EU and non-EU countries or regions are invited to associate and commit national/regional funds to support their institutions (universities and/or enterprises) in international consortia, thus promoting access to new knowledge as well as to new markets world-wide. ERA-MIN3: the Continuation towards a European Partnership In parallel to the conclusion of the ERA-MIN 2 project, ERA-MIN3 aims to sustain, for the following five years, the efforts and structures built on the back of the network and to bridge what is one of the last ERA-NETs with the upcoming European Partnerships. Continuing the Commission's plan for a new ERA based on excellence, the EC communication "A new ERA for Research and Innovation" [19] defines several strategic objectives for the last ERA-NET Cofund to which ERA-MIN3 will contribute: 1. prioritise investments and reforms in research and innovation, to support the digital and green transition and Europe's recovery; 2. improve access to excellent R&I for researchers across the EU; 3. translate results into the economy to ensure market uptake of research output and Europe's competitive leadership in technology; 4. make progress on the free circulation of knowledge, researchers and technology through stronger cooperation with EU countries. To make the European Research Area stronger, national research and innovation policies will continue to be strengthened too, and in this sense ERA-MIN3 will continue to organise, besides the two JTCs for R&I projects in the raw materials sector, additional activities to build further bridges with other funding organisations, EU funded projects and other stakeholders in the non-fuel, non-food raw materials sector and across the world.
As a continuation of the ERA-NETs, the new policy approach on European Partnerships and the rationalisation and reform of the partnership landscape that started in 2019 mark the change from Horizon 2020 to Horizon Europe. Since ERA-MIN3 started in 2020, the transition to the European Partnerships scheme in the raw materials sector will take place from 2025 onwards. Under the first co-funded call, ERA-MIN3's funded R&I projects are planned to start in May 2022 and will run until mid-2025, right before the end of the ERA-MIN3 project. The second call, expected in 2023, will bridge the end of the ERA-MIN network as we know it and its new embodiment in the European Partnerships, which will continue the goal of supporting the circularity, sustainability and leadership of the European and worldwide non-fuel, non-food raw materials sector.
4,450.6
2022-01-28T00:00:00.000
[ "Environmental Science", "Engineering", "Business", "Materials Science" ]
Multiple Hypothesis Testing in Proteomics: A Strategy for Experimental Work* In quantitative proteomics work, the differences in expression of many separate proteins are routinely examined to test for significant differences between treatments. This leads to the multiple hypothesis testing problem: when many separate tests are performed many will be significant by chance and be false positive results. Statistical methods such as the false discovery rate method that deal with this problem have been disseminated for more than one decade. However a survey of proteomics journals shows that such tests are not widely implemented in one commonly used technique, quantitative proteomics using two-dimensional electrophoresis. We outline a selection of multiple hypothesis testing methods, including some that are well known and some lesser known, and present a simple strategy for their use by the experimental scientist in quantitative proteomics work generally. The strategy focuses on the desirability of simultaneous use of several different methods, the choice and emphasis dependent on research priorities and the results in hand. This approach is demonstrated using case scenarios with experimental and simulated model data. With the advent of high throughput genomics approaches, researchers need appropriate bioinformatic and statistical tools to deal with the large amounts of data generated. In quantitative proteomics work, differences in expression of many individual proteins between treatments or samples might need to be tested. Researchers must then address what has come to be known as the multiple hypothesis testing problem. Suppose 500 features such as protein spots in a two-dimensional electrophoresis (2-DE) 1 experiment, or mass spectrum features relating to protein or peptide abundance, are each compared between treatments using a t test. If the conventional a priori significance level of ␣ ϭ 0.05 is used, then 5% or about 25 significant features are expected to occur just by chance even if the null hypothesis of no treatment effect is true for all 500 features. Thus it is easier to make a false positive error when picking out significant results in an experiment with multiple features, than when considering one feature in isolation. A variety of statistical methods have been devised to deal with the multiple hypothesis testing problem. These are applicable in quantitative proteomics. In this paper we use examples from 2-DE proteomics to demonstrate these methods. In this technique, the intensity of signal from protein spots on 2-DE gels is measured and compared between gels. Use of the word "spot" is obviously not synonymous with use of the word "protein" in that it does not encompass all forms of a given protein such as alternatively spliced variants and posttranslational modification variants that might form spots in different positions on the gel. The multiple testing approach is introduced with the following example. Table I shows simulated data for a model of a 2-DE proteomics experiment in which 500 spots have been compared between two treatments using the t test. The third column gives p values significant at ␣ ϭ 0.05 sorted from low to high. A threshold line is shown drawn under spot 70. This has been selected arbitrarily for illustration of some properties of a threshold. The p values for the spots above the threshold are all less than ␣ ϭ 0.05 but we cannot declare them to be significant at the ␣ ϭ 0.05 level because of the multiple hypothesis testing problem. 
In reality, spots above the threshold will be a mixture of true positives (with treatment effect) and false positives (null hypothesis true); spots below the threshold will be a mixture of true negatives (null hypothesis true) and false negatives (with treatment effect). Multiple hypothesis testing correction methods are used to help position a threshold on the list, different methods placing the threshold in different positions. Spots above the threshold can be declared significant, the null hypothesis is rejected and the alternative hypothesis of treatment effect accepted. Spots below the threshold can be declared nonsignificant and the null hypothesis is accepted. This is in accord with the Neyman-Pearson decision rule method of statistical inference (see 1). As the threshold is moved up the table of sorted p values there should be a lower proportion of false positives left above the line. This is useful if our main focus is on being confident that significant results reveal treatment effects worthy of further investigation. However, there will be an increasing proportion of false negatives left below the threshold, and we will be failing to recognize treatment effects. Optimal positioning of the threshold for the results in hand is a balancing act, influenced by our perception of whether it is more important to avoid false positive or false negative errors. It has been suggested that there is traditionally too great an emphasis on avoiding false positives (type I errors), and that greater attention should be given to avoiding false negatives (type II errors) (2). False positives can be corrected by further investigation, whereas an experiment with a false negative result might never be repeated, and possible true treatment effects missed. The Fisher view of statistical inference (see 1) could be further applied to spots above the threshold. This is that the lower the p value, the greater the strength of the evidence against the null hypothesis, and the more confident we can be that further investigation will confirm a treatment effect. There are many statistical problems, relevant to genomics work, that are still under active debate. Examples include the possible arbitrariness of the α = 0.05 critical significance level (e.g. 3, 4), the doubt about whether significance testing is even useful as compared with estimation of parameters and confidence intervals (e.g. 4, 5), and the interpretation of the concept of probability itself (6). [Table I caption: The standard deviation (biological error) within each treatment group was set to 1 for all spots. Treatment effect sizes are: 50 spots with effect = 2, 100 spots with effect = 1, and 350 spots with effect = 0 (see text for further explanation). Column headings are: Spot, label for spot; effect, as defined immediately above; p-value, from the t test between treatments; SB(0.05), probability for sequential Bonferroni at FWER = 0.05; BH 5%, critical value for a false discovery rate of 5%; BH 20%, critical value for a false discovery rate of 20%; SGoF(0.05), probability for the sequential goodness-of-fit test at α = 0.05; SFisher(0.05), probability for the sequential combined probabilities test at α = 0.05; FDR adj., FDR-adjusted probability; q-value. The table has two breaks, omitting spots 15-51 and 73-86 respectively to save space. See text for further details. Tabulated values are rounded to six decimal places, although spreadsheet calculations were carried out to more than six decimal places.]
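To make the construction of Table I concrete, the following minimal Python sketch generates data under the same model (five biological replicates per treatment, a within-group standard deviation of 1, and effect sizes of 2, 1 and 0 for 50, 100 and 350 spots) and computes one t test per spot. It is an assumption-laden illustration: the random seed, variable names and the use of numpy/scipy are ours rather than the Excel/Poptools workflow described in the text, so the exact counts will differ from Table I.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)                      # illustrative seed
n_rep = 5                                           # biological replicates per treatment
effects = np.r_[np.full(50, 2.0), np.full(100, 1.0), np.full(350, 0.0)]

# Group A has mean 0, group B has mean equal to the effect size; within-group SD = 1.
group_a = rng.normal(0.0, 1.0, size=(effects.size, n_rep))
group_b = rng.normal(effects[:, None], 1.0, size=(effects.size, n_rep))

p = stats.ttest_ind(group_a, group_b, axis=1).pvalue    # one p value per spot
print("spots with p < 0.05:", int((p < 0.05).sum()), "out of", effects.size)

The array p produced here is the kind of sorted p value list that the multiple hypothesis testing methods below operate on.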
In the particular case of the multiple hypothesis testing problem, many methods and refinements have been proposed (e.g. see 7 for review). The experimental scientist has the problem of evaluating these methods and assessing the debate on their relative merits and validity. In this circumstance we feel that particular methods ought not to be strongly prescribed to the experimental scientist working in proteomics. There should be freedom to apply and choose from the variety of methods available. Our main aim here is to promote the strategy that simultaneous consideration of several multiple hypothesis testing methods is useful, and that particular emphasis on one method rather than another might differ depending on the scientific question and priorities under consideration. We first present a selection of different multiple hypothesis testing approaches that can be applied in proteomics work. We then present a survey analysis which suggests that the use of such methods in the proteomics literature is as yet rather limited. We then demonstrate and discuss the proposed strategy using some example datasets. Multiple Hypothesis Testing Methods-We describe next some of the most widely used multiple hypothesis testing methods, using as illustration the simulated data of Table I. The data is based on a model with five biological replicates for each of two treatment groups each compared at 500 protein spots. The standard deviation (biological error) within each treatment group was set to one for all spots. Values were chosen at random, assuming normality, using the Excel add-in Poptools (8). For one of the treatment groups, the mean value was set equal to zero for all five biological replicates for all spots. For the second treatment group, for all biological replicates, the spot means were set to 2, 1, and 0 for 50, 100, and 350 spots respectively. Where both treatments have a mean value of zero, the null hypothesis is true, otherwise there is a treatment effect with a mean difference between treatment groups of two (for 50 spots) and one (for 100 spots). The table shows an arbitrarily selected replicate of the model for which the number of spots significant at ␣ ϭ 0.05 is close to the mode for the model, assessed in 100 Monte-Carlo replicates made using Poptools. We emphasize that it is not our intention to investigate the properties of the model in depth, rather to use it to provide some representative data to illustrate multiple hypotheses testing methods. Most traditional multiple hypothesis testing methods aim to control the number of false positives. An example is the Bonferroni correction (9). This controls the family-wise error rate (FWER) which is the probability of making one or more false positive errors, in a set of tests. For example if FWER ϭ 0.05, the traditional significance level, then each test in a set of m tests needs to have an a priori p value less than or equal to ␣ ϭ 0.05/m to be declared significant a posteriori. Alternatively the p value can be multiplied by m, as in Table I. If the product is less than or equal to 0.05 then the test can be declared significant. If the lowest p value is declared significant, the second lowest p value is assessed using ␣ ϭ 0.05/ (m-1) and so on. This is known as the sequential Bonferroni method (SB). It has been criticized as being too conservative. The threshold usually comes near the top of the table, and although the spots above it are unlikely to be false positives, there are many false negatives below the threshold. 
Thus the method has low power to detect many true treatment differences. In Table I only three spots remain significant after applying the SB method. An alternative to the SB method was proposed by Benjamini and Hochberg (10). This is the false discovery rate (FDR) method. This aims to determine a threshold such that a proportion or percent of p values above the threshold are false positives, the remainder true positives. FDR is said to be controlled at this percent level. There are different methods to control FDR and we use here the acronym BH to refer to the specific procedure of Benjamini and Hochberg (10). Critical values for control at the BH 5% level are shown in Table I. The critical values for a spot are calculated as ␣ϫ (i/500) where i is spot number from 1 to 500, after ranking by increasing p value. The BH method is implemented by what is called a step-up procedure. Starting from the bottom of the table, the p values are checked until one value is less than or equal to the critical value in the same row. This p value and all those above it in the table are then declared significant at the BH 5% level and included above this threshold. In Table I, 12 spots are above the threshold and thus declared significant at this level. From the practical viewpoint this means that a proportion 0.05 of these 12 are expected to be false positives, and in FDR terminology these would be false discoveries. The remainder, a proportion 0.95 are expected to be true discoveries, where the null hypothesis is false and the alternative hypothesis true. The BH 5% threshold is lower down the table than the SB threshold. Theoretical studies have indicated that the BH method has greater power to detect true positives than the SB method, assuming of course that there really are some spots with treatment effects. The cost is that shifting the threshold down the table results in some spots that were true negatives now moving above the threshold and being converted into false positives. The BH method can set control at different levels, for example Verhoeven et al. (11) illustrate graphically the different threshold effects of control at 5%, 10%, and 20%. A column for less stringent control, at BH 20% is also given in Table I. At this control level, a proportion 0.2 of spots above the threshold are expected to be false positives. An alternative to controlling FDR at a specific level is to select a region within the list of p values and work out the FDR for it. This is the basis of FDR adjusted probability (12,13). For example, suppose the chosen region were that cut off above the arbitrary threshold in Table I. The FDR adjusted probability of spot 70, just above this threshold is defined as (p value ϫ 500)/i ϭ 0.212 or 21.2%, where i ϭ 70 in this case. Thus for all spots above the threshold a proportion 0.212 are expected to be false positives. It should be noted that FDR adjusted probability can have the same value for all spots within a set of spots, for example spots 63-70 in Table I. This is imposed for a technical reason that demands that no spot can have a value that is higher than that of a spot further down the table. The positive false discovery rate (14) is a modified version of the FDR that takes into account that in practice we are only interested in analyzing datasets in which at least one feature has been declared to have a significant p value at the chosen ␣ level. 
The positive false discovery rate cannot be controlled at specific levels as can FDR, but allows calculation of the q-value (15,16), which is analogous to FDR adjusted probability. Thus whereas FDR can be used to control at a specific level such as BH 5% or BH 20% (when we use the Benjamini and Hochberg FDR method) both FDR adjusted probability and q-value can be applied to a specific region of p values from the top of the list down to some point in the list. The q-value of spot 70, just above the arbitrary threshold in Table I, is 0.171. This means that the expected proportion of false positives occurring in the set of spots above the arbitrary threshold is 0.171. There is a symmetry between the false positive rate (p) and the false discovery rate (q) (15): p is the probability of getting a significant result at level ␣ given that the null hypothesis is true: q is the probability that the null hypothesis is true given a significant result at level ␣. Next we present two approaches that are less widely used than SB and the false discovery method. In the combined probability test of Fisher (17,18) the aim is to combine the a-priori p values of all the spots into a single p value for the data as a whole, which is then compared with the chosen ␣ value. To do this, the natural logarithm of the k listed p values are summed. This gives a test statistic distributed as a chisquare with 2k degrees of freedom. The null hypothesis for this combined "meta" test statistic is that all the individual spots in the list show no difference between treatments. The probability value for the combined test statistic can be called the meta p value. If the meta p value is significant at level ␣ then it can be concluded that at least one of the spots in the list has a null hypothesis that is false. The best candidate to declare as having a false null hypothesis is the spot with the lowest p value, the one at the top of the list, and the meta p value is thus placed next to it in the same row at the top of the column headed SFisher for sequential combined probability test of Fisher in Table I. The procedure continues by repeating the combined probability test but with the exclusion of the p value at the top of the list. The resulting new meta p value is then placed in the second row in the SFisher column. This sequential procedure continues until the meta p value is no longer significant at level ␣. At this point it can be concluded that there is no evidence that any of the remaining spots have false null hypotheses, and thus the number of spots with treatment effects is equal to the number of significant meta tests. In Table I the top 59 spots are significant using a meta test ␣ ϭ 0.05. Alternative methods for combining probabilities are available, for example the unweighted Z-test (the standard normal deviate) called Stouffer's test and a weighted version of this test (see 18). Stouffer's test produces similarly positioned thresholds to Fisher's test in the case scenarios considered here and not discussed further. Application of the weighted test is rather complex, depending for example on the kind of test used and effect size (e.g. see 18,19) and thus cannot be applied to lists of p values without additional information. The final method is based on the exact binomial test (20,21), and is explained with reference to the Table I example. If all null hypotheses are true then 500 ϫ 0.05 ϭ 25 spots are expected by chance to have p values significant at ␣ ϭ 0.05. 
Suppose however that 33 spots, more than expected, are significant at this level. The probability of obtaining a ratio that is 467:33, or more highly skewed toward an excess of spots that are individually significant at ␣ ϭ 0.05, is calculated using the binomial theorem assuming a null expectation of 0.95: 0.05. This meta test probability is 0.045, and if we also use the significance level ␣ ϭ 0.05 for the meta test, the meta test is significant at this ␣ level. It can therefore be concluded that at least one of the 500 null hypotheses is false. As in the sequential Fisher test, the best candidate to declare as significant is the spot with the lowest p value at the top of the list. In Table I, 88 spots are individually significant at ␣ ϭ 0.05 and the meta test p value for 412:88 is highly significant. By analogy with the Fisher test, the exact binomial meta test can be applied sequentially, by testing ratios progressively closer to the expected 475:25 until a nonsignificant meta p value is obtained. For each meta test in this sequential procedure, the meta p value is partnered with the spot with lowest individual p value (column 3) at that point. This is the basis of the SGoF (Sequential Goodness of Fit) test (20). The meta p values deriving from this test are given in Table I in the column headed SGoF(0.05) for sequential goodness of fit. Consider the meta p value associated with a particular spot, say 0.034 for spot 53. Because it is less than the meta test ␣ ϭ 0.05, it can be concluded that there is at least one false null hypothesis in the collection of 448 spots incorporated in the meta test at that point. In the next sequential application of the meta test, the meta p value ϭ 0.052 for spot 54, so at this point it can be concluded that there is no evidence to reject the null hypothesis for any of the remaining 447 spots. The sequential procedure stops. In general, the sequential methods SGoF and SFisher have greater power than the SB and BH methods. In the latter, the p value needed for a spot to be declared significant decreases as the number of spots in the dataset increases, and thus the power to detect true effects also decreases (22). In the former it is the opposite because a larger dataset results in more power in the meta test (20). Survey of Multiple Testing in Proteomics Journals-Multiple hypothesis testing methods such as FDR have been dissem-inated for more than one decade since the publication of Benjamini and Hochberg (10). We present now the results of a survey undertaken to gauge the current use of multiple hypothesis testing methods in proteomics journals. Issues of the three journals Molecular and Cellular Proteomics, Journal of Proteome Research, and Proteomics were examined for the year 2009 and papers in which authors had presented lists of protein spots comparing different treatments using the quantitative proteomics 2-DE technique were identified and examined. The studies in these papers were thus candidates for the application of multiple hypothesis testing methods. Much smaller samples of papers were taken for 2010, mainly to confirm that there has not been any recent substantial change in behavior in relation to use of multiple hypothesis testing methods. The results of the survey are presented in Table II. A large majority of the papers (89.2%) did not use multiple hypothesis testing methods. This pattern is consistent across journals and years. Those that did use multiple hypothesis testing, mainly used FDR or q-value methods. 
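The procedures described over the preceding passages (the sequential Bonferroni, SB; the BH step-up control of FDR together with the FDR-adjusted probability; the sequential Fisher combined-probability test, SFisher; and the sequential exact binomial test underlying SGoF) can be sketched in a few lines of Python. These functions are our reading of the verbal descriptions given above, not the published SGoF, Multitest or QVALUE software; the function names are illustrative, and p is assumed to be an array of raw p values such as the one produced by the simulation sketch earlier.

import numpy as np
from scipy import stats

def sequential_bonferroni(p, alpha=0.05):
    # Step-down SB: compare the sorted p values with alpha/m, alpha/(m-1), ... and stop
    # at the first failure; returns a boolean mask of spots declared significant.
    p = np.asarray(p, dtype=float)
    m = p.size
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject

def benjamini_hochberg(p, q=0.05):
    # BH step-up: find the largest rank i with p_(i) <= q*i/m and reject that spot and
    # all spots with smaller p values.
    p = np.asarray(p, dtype=float)
    m = p.size
    order = np.argsort(p)
    crit = q * np.arange(1, m + 1) / m
    hits = np.nonzero(p[order] <= crit)[0]
    reject = np.zeros(m, dtype=bool)
    if hits.size:
        reject[order[: hits[-1] + 1]] = True
    return reject

def sequential_fisher(p, alpha=0.05):
    # SFisher: combine the remaining p values with Fisher's method; while the meta p value
    # is significant, declare the smallest remaining p value and repeat.
    p = np.sort(np.asarray(p, dtype=float))
    declared = 0
    while declared < p.size:
        remaining = p[declared:]
        meta_p = stats.chi2.sf(-2.0 * np.log(remaining).sum(), df=2 * remaining.size)
        if meta_p > alpha:
            break
        declared += 1
    return declared

def sgof_like(p, alpha=0.05):
    # SGoF-style sequential binomial test: compare the number of p values below alpha with
    # the binomial expectation, reducing the observed count by one at each step.
    p = np.sort(np.asarray(p, dtype=float))
    m = p.size
    k = int((p <= alpha).sum())
    declared = 0
    while k - declared > 0:
        meta_p = stats.binomtest(k - declared, m, alpha, alternative="greater").pvalue
        if meta_p > alpha:
            break
        declared += 1
    return declared

The first two functions return a boolean mask of spots declared significant, and the two sequential meta-test functions return the number of spots above the threshold; running all four on the same list of p values supports the kind of method-by-method comparison used in the case scenarios below.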
The low implementation of multiple hypothesis testing methods was not through lack of use of statistical methods generally, for the majority of papers (92.8%) used ␣ ϭ 0.05 or ␣ ϭ 0.01 criteria for declaring spots as having significant expression differences between treatments. In about half of these, fold change, the ratio of the expression between treatments, was used as an associated criterion, although fold change was seldom used alone. It would be difficult to argue from the results of the survey that it is superfluous to further promote the application of multiple hypothesis testing methods on the basis that these methods are currently widely used in proteomics work. Case Scenarios-To demonstrate how different thresholds might be implemented depending on scientific priorities we consider next some examples of scenarios that might be faced in proteomics studies. Use of model data has the advantage that we can see the true effect sizes though this would not of course be known for experimental data. Suppose that a proteomics experiment is set up to identify a protein marker that can separate cancer and normal cells with high confidence. Investigation of targets might be expensive in resources, which would be wasted investigating false positives. In this situation the threshold could be drawn near the top of the list of ordered p values, and SB might be fine for positioning it. Consider Table I as example. The three spots above the threshold all have the maximum treatment effect size of two, thus the strategy would be effective in this case. If we wished to be a little less conservative, the BH 5% threshold could be chosen, which identifies 12 significant spots. It is reassuring that the SFisher and SGoF methods give low meta p values for these spots. The computed q-value is even more reassuring than the BH 5% with values of 0.036 or less, suggesting that only a proportion 0.036 of the 12 spots are false positives. An intermediate position might be to choose say the top five spots which have even lower q-values of 0.017 or less. For the simulated data of Table I, 9 of the 12 spots above the BH 5% threshold have the largest treatment effect size in the model and only one, spot 11, is a false positive. A contrasting scenario could be that of an exploratory study to identify biochemical pathways involved in adaptation of molluscs to an environmental stress, comparing with a control environment. Here we might want a liberal threshold placed further down the table and thus declare many more spots as significant targets for further investigation. Subsequent protein identification of only a subset of these might give clues about potential pathways of interest. One possibility would be to set the threshold at BH 20% which includes 62 spots in Table I. Of these, 11 have zero effect in the model and thus are false positives, a number roughly in line with expectation for BH 20%. An alternative could be to use the approach of selecting a threshold and calculating FDR adjusted probability for the region above it. For example, suppose the arbitrary threshold in Table I marked off a convenient region because resources were available for picking and identifying roughly 70 protein spots. As explained above, a proportion 0.212 of false positives are expected in this region above the threshold using FDR adjusted probability. This is potentially useful, for although the level of FDR control has only changed from 20% to 21.2%, eight more spots are included about the threshold. 
Targeting a few positives that unbeknown to the investigator are false might not be a problem if this minority are irrelevant to an emerging picture of important biochemical pathways. If we accept ␣ ϭ 0.05 as the appropriate significance level for a single spot then a very liberal position would be to draw the threshold below all spots having a p value of less than 0.05, a total of 88 in Table I. The FDR adjusted probability and q-value at this threshold increase to 0.278 and 0.224 respectively, the difference between the two values arising from the different assumptions of these methods. As these values increase as we move down the table, the increasing number of false positives included above the threshold gives the potential for confusing any emerging pathway picture, though in the simulated data the number of false positives included happens to be only 14 out of the 88. Given that q-values are available to aid decision we could consider being even more liberal. For example, although not shown in Table I, there are 120 spots with a q-value of 0.325 or less, but we might reflect at this point on whether too many false positives are being included above the threshold. The next scenario considers a model with a smaller average difference between treatment groups. In this model, the standard deviation within treatment groups was set to 1, as before. For one of the treatment groups, the mean value was set to 0 for all biological replicates. For the second treatment group, for all biological replicates, the spot means were set to 1 and 0 for 100 and 400 spots respectively. An example of simulated data from this model is shown in Table III. A total of 48 spots are significant at the level ␣ ϭ 0.05, an excess of 23 over the 25 expected for null data with all spots having effect size ϭ 0. The p values are too high to draw thresholds for SB, BH 5%, and BH 20%. Thus these methods are not useful in providing evidence of a treatment effect. However the SGoF(0.05) and SFisher give 13 and 20 p values, respectively, less than ␣ ϭ 0.05. Thus these spots can be declared significant at this meta test ␣ level for these methods. The q-value in the range 0.325-0.428 suggests that roughly one third are false positives, compared with a proportion of true nulls of 400/500 ϭ 0.8 in the dataset as a whole. Thus picking a selection of spots because they are above these thresholds is certainly better than picking spots at random from the 500. In order to decide which of the meta test results to give priority to, the Fisher approach to statistical inference, which uses the magnitude of the p value as a measure of the strength of the evidence against the null hypothesis (1), could be implemented. Doing this, the 13 p values above the SGoF(0.05) threshold would be favored as candidates for further work. In reality, the proportion of spots with effect size of 1 above the SGoF(0.05) and SFisher(0.05) thresholds in Table III is 6/13 ϭ 0.46 and 12/20 ϭ 0.60, respectively. Thus unfortunately, in these data, applying the Fisher approach and focusing on the 13 spots above the SGoF(0.05) threshold, because they have lower p values, would actually be less fruitful in identifying spots with treatment effects than using the SFisher(0.05) threshold. 
For the analysis as a whole, we can conclude that treatment effects have been demonstrated that might guide further studies, but progress to advance biochemical or physiological understanding through spot picking and mass spectrometric analysis might be hindered by presence of the false positives. We conclude this section with an example of real experimental data, from a study of a gastropod mollusc exposed to two different environments (treatment and control) with three biological replicates per treatment (Diz A. P. et al. unpublished data). A total of 549 spots were analyzed using the Progenesis SameSpots 2-DE image analysis software from Nonlinear Dynamics Ltd., and of these, 125 were significant at ␣ ϭ 0.05 using the t test, many more than the null expectation of 549 ϫ 0.05 ϭ 27. After applying SB (0.05), BH 5%, and SGoF(0.05) multiple hypothesis testing methods, respectively 1, 1, and 87 spots were declared significant. The first two methods have clearly eliminated many spots for which the null hypothesis is false, and are thus not very useful. SGoF(0.05) is clearly better but eliminates 125Ϫ87 ϭ 38 of those spots initially significant at the ␣ ϭ 0.05 level. A montage from Progenesis SameSpots of the six individuals for each of four of these 38 spots is shown in Fig. 1 with p values indicated. Visually all these spots seem to be very different on average between the two treatments, especially when we weigh in our minds the biological variation, which appears relatively small. Psychologically it seems hard to accept that the above three multiple hypothesis testing methods are providing a useful service by excluding these 38 spots. However these spots are found to be significant with SFisher(0.05). Given our prior intention to make use of this multiple hypothesis testing method, we have a justification for including the 38 spots as targets for further work. This decision receives support from consideration of the q-value, which is 0.089 for the 125 spots and 0.075 for the 87 spots, that are significant with SGoF(0.05). Both values could be considered fairly similar in value and satisfactorily low, justifying selection of the larger set of 125 spots for further work. Software for Application of Methods-The multiple hypothesis testing methods described in this paper can be carried out using the SGoF software (20). The QVALUE software (14,15,23) provides many alternative options for computing q-values. The Multitest V1.2 software (21) performs the exact binomial test on lists of p values. A useful table that illustrates application of Bonferroni and FDR methods, and that can be adapted in a spreadsheet is given in Fig. 1 of Verhoeven et al. (11). The application of the step-up approach can be seen in Tables I and III by comparing values in the BH 5% and BH 20% columns just above and below the respective thresholds with the p value column. A Supplemental Table in an Excel spreadsheet table, inspired by the Verhoeven et al. (11) table, has been provided which implements methods discussed in this paper. The spreadsheet cells show the formulas and functions needed. The table can be modified and other p value lists pasted in for analysis. We think the table has some didactic value even if in normal practice the methods could be executed by the dedicated software described above. The q-value is excluded from the spreadsheet as the computation is more complicated and this method is best executed using the dedicated software. 
The Supplemental Table also includes a glossary of some of the terms and methods discussed in this paper. DISCUSSION AND CONCLUSIONS This paper presents a selection of multiple hypothesis testing methods and reviews how they might be applied using case scenarios. We hope that this work will help to promote the use of multiple hypothesis testing methods among those researchers who do not currently use them. It has not been our intention to be prescriptive about precisely which methods should be used. Rather we emphasize the strategy of applying several different multiple testing methods to the data in hand. We have suggested appropriate software to apply these methods. A useful strategy might thus be to prepare a table similar to Table I. This could be included as supplementary information to a publication if appropriate. Then, summary tables could be given in the paper with the numbers of features such as protein spots declared significant by the different methods (e.g. 24, 25). The next stage would be to provide a discussion along the lines of that used here for the case scenarios, giving emphasis to particular methods depending on research priorities and the position of thresholds. [Table III caption: Simulated results of a model of a proteomics experiment with two treatments and 500 protein spots. The standard deviation (biological error) within each treatment group was set to 1 for all spots. Treatment effect sizes are: 100 spots with effect = 1, 400 spots with effect = 0. Column headings are as in Table I. The table has a break omitting spots 30-46 to save space.] For example, where some surety is needed that significant features do reflect treatment effects, the SB method might be augmented by a consideration of q-values. Where it is possible to be more liberal, the sequential methods SGoF and SFisher might be used, or FDR control might be set at a level such as BH 5%. In exploratory studies, FDR could be set at a more liberal level such as BH 20%, or FDR-adjusted probabilities or q-values could be determined for a specific region such as the threshold marking off all p values significant at α = 0.05. The availability of a table similar to Table I would also allow the readers of the paper to easily carry out their own assessment of the significance of the results. Other important statistical considerations relating to the use of lists of p values in proteomics research should be mentioned. In the datasets of Table I and Table III, the number of spots above the different thresholds falls far short of the actual number of spots with an effect in the models used. There are many false negatives below the thresholds and the number of true null hypotheses is being overestimated. [Fig. 1 caption: There were three biological replicates for each of two treatments. Significant p values at α = 0.05 or lower for comparison of treatments with a t test are shown. These spots were not declared significant after applying SB(0.05), BH 5%, and SGoF(0.05). Image analysis and montage preparation were carried out with Progenesis SameSpots version 4.0 software.] Several estimation methods are available for experimental data for the proportion of true null hypotheses in a list of p values (e.g. 15, 26, 27). For the 500 spots of the Table I model, the proportion estimated by the Storey and Tibshirani (15) method is 0.79, somewhat greater than the actual 350/500 = 0.7. For the data of the Table III model the estimate is 0.92, also greater than the actual 400/500 = 0.8.
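A rough version of this estimate of the proportion of true null hypotheses can be sketched as follows. It uses a single tuning value lam rather than the smoothing over a grid of values used by Storey and Tibshirani, so it is an illustrative simplification with a function name of our choosing, not the QVALUE implementation.

import numpy as np

def pi0_estimate(p, lam=0.5):
    # p values from true null hypotheses are uniform on (0, 1), so the density of
    # p values above `lam` estimates the proportion of true nulls (pi0).
    p = np.asarray(p, dtype=float)
    return min(1.0, float((p > lam).mean()) / (1.0 - lam))

For p values simulated under the Table I model, pi0_estimate(p) should come out close to 350/500 = 0.7, typically a little above it because some spots with real effects also yield large p values.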
An estimated excess of the proportion of true null hypotheses must be because of the use of only five biological replicates per treatment, there is a lack of power. In the design of quantitative proteomics experiments it is important to distinguish between biological and technical replication. Biological replicates would be different individual or pooled organisms allocated among different treatment groups (see 28). Technical replicates occur when the same biological replicate is repeated on different gels. Variation between biological replicates should be tested for significance against variation between technical replicates: variation between treatments should be tested against variation between biological replicates. Thus when testing for treatment differences, priority should be given to maximizing the number of biological replicates to optimize power (29). A further consideration in this context is that a limited number of biological replicates will also affect the precision of p values. This has most severe consequences for the SB where the p values required for FWER ϭ 0.05 for individual spots need to have precision to many decimal places in large datasets (20). If resources permitted, one approach to increasing power would be to repeat the entire experiment, and attempt to confirm or eliminate positive and negative results as true or false. The use of the q-value is dependent on the assumptions made regarding the distribution of p values. Under the null hypothesis of no treatment effect, a uniform distribution is expected (15,30). However aspects of experimental design or inappropriate statistics for estimating p values can result in nonuniform statistics that should be taken into consideration (e.g. 7, 31, 32). Many variant tests for combining probabilities or applying the FDR approach have been examined and compared in relation to underlying assumptions and statistical properties (see 7,18,33). An important consideration is whether the p values under study are correlated or are independent. Fortunately, many FDR approaches are robust to deviations from the assumption of independence (e.g. see 7). The results of the survey reported here indicate that use of fold change is a popular criterion for assessing spots. Fold change provides some information on effect size but cannot be used as a criterion for determining significance according to defined ␣ levels. However given that p values confound effect size and precision (1,4), fold change might provide useful supplementary information. For example, in Table III, spots 4 -20 above the SFisher (0.05) threshold have the same q-value. If only a few of these can be selected for further investigation, then fold change might be used as an indicator of effect size as an alternative to using the p value as an indicator of the amount of evidence against the null hypothesis. This approach might be particularly appealing in circumstances where visual evidence strongly suggests a treatment effect as in the results shown in Fig. 1. Finally, further experimental work could also be used to confirm significance of spots above a liberal threshold. This could be done simply by repeating the experiment. However this method of updating the p values simply applies greater power, it does not replace the need for multiple hypothesis testing methods. Another attempt to bypass multiple hypothesis testing would be the in-depth investigation of individual proteins using techniques such as Western blotting (e.g. 34,35). 
This would be analogous to confirming candidate transcripts using qRT-PCR in microarray work. But as Pan et al. (36) point out, this might involve a heroic effort if there are many target features, and initial use of multiple hypothesis testing methods would be sensible to narrow down candidates for further experimental work. This paper has as its main focus the application of a multiple testing strategy to the list of p values obtained in quantitative 2-DE work. However multiple testing is also relevant to gel-free proteomics methods (see 37), and also of course in transcriptomics work where treatment differences in mRNA abundance are considered. Thus the strategy we outline is a generic one. For example, the strategy could be considered for use in any of those bottom-up or top-down quantitative proteomics methods in which p values are obtained for differences in abundance for mass spectrum features determined across samples or treatments. The strategy might also be assessed in protein identification with mass spectrometric methods where each p value in the list corresponds to a different target protein or peptide, and where FDR methods are feasible (see 37). New experimental proteomics methods continue to be developed. Multiple testing strategies could be pertinent to any of these provided that the method generates lists of p values for the features under study.
9,238.6
2010-12-07T00:00:00.000
[ "Biology", "Chemistry" ]
Operation reliability analysis of independent power plants at remote production facilities of a gas-transmission system A new approach was developed to analyze the causes of failures of the independent power supply sources (mini-CHP plants) that serve linear facilities of the gas-transmission system in the eastern part of Russia. Using mathematical simulation of the unsteady heat and mass transfer processes in the condenser of a mini-CHP plant, the conditions were determined under which the working-substance temperature at the condenser outlet reaches its ceiling value; under these conditions the probability of failure of the independent power supply source increases. The influence of environmental factors (in particular, ambient temperature) and of the plant's electrical output on the operational reliability of the mini-CHP plant was analyzed. Values of the mean time to failure and of the failure density for operation in different regions of Eastern Siberia and the Russian Far East were obtained from the numerical simulation of heat and mass transfer during condensation of the working substance. Introduction Power supply is an important aspect of any production activity. A stable supply of the energy needed to run technological processes determines the stability of an enterprise. Questions of the reliability and efficiency of the electrical supply are particularly important for enterprises that operate facilities far from the centralized power grid, such as gas and oil pipelines of great length. The operation of such pipelines typically requires powering small linear loads, such as electrochemical protection devices, crane assemblies, and telemetry stations. When a gas or oil pipeline runs through areas remote from the centralized grid, the only practical way to organize the power supply is to use independent power sources. For powering gas pipelines in the Russian Far East, independent power sources with a closed thermodynamic cycle and an output of up to 4 kW are used (Fig. 1). Such sources run on natural gas and are characterized by long periods of automatic operation and a high thermal efficiency. Nevertheless, failures of this equipment occur and adversely affect the functioning of the gas-transmission system. Analysis of statistical data [1] has shown that one of the main causes of malfunction of such a power source is an excessively high working-substance temperature at the condenser outlet. The purpose of this article is to analyze the operating modes of independent power installations that lead to failures caused by excessively high temperature in the condenser, and to determine the time between failures with allowance for the climatic conditions of the Russian Far East. Problem statement The power installation (Fig. 1) operates as follows. The burner 3, fueled by natural gas, heats and then evaporates the organic working substance (dichlorobenzene) in the steam generator 6. The vaporized working substance flows to the turbine 8 and drives the shaft of the alternating-current turbogenerator 9.
From the turbine, the working substance enters the air-cooled condenser 12, which consists of two rows of finned tubes of internal diameter D_in = 38 mm joined by headers. The working substance condenses and is returned by pump 10 to the steam generator 6, closing the cycle. It should be noted that, besides circulating in the working cycle of the power installation, the dichlorobenzene also lubricates the sliding bearings of the turbogenerator shaft. It is assumed that dichlorobenzene vapor at the saturation temperature T_S enters the condenser tubes (Fig. 2). A liquid film forms on the tube surfaces, and its thickness grows as the vapor-liquid mixture flows along the channel. Condensation is considered complete when the vapor fraction of the vapor-liquid mixture reaches the rated value declared by the manufacturer [2]. Mathematical model The non-stationary differential equations describing condensation of the working substance in the condenser tubes (Fig. 2), formulated for the corresponding physical problem, are as follows: the heat conduction equation for the working substance in the condensation zone (T_1 = T_S, 0 < x < x_1, 0 < y < y_1); the continuity equation for the working-substance vapor (0 < x < x_1, 0 < y < y_1); and the equation of state for the working-substance vapor (0 < x < x_1, 0 < y < y_1). Notation: T – temperature (K); t – time (s); x, y – Cartesian coordinates (mm); U, V – velocity components along the x and y axes (m/s); a – thermal diffusivity (m²/s); C_v – molar concentration (mol/m³); D – diffusion coefficient (m²/s); ρ – density (kg/m³); P – pressure (N/m²); μ – dynamic viscosity (kg/(m·s)); M – molar mass (kg/mol); R_t – gas constant (J/(mol·K)); T_0 – reference temperature (K); λ – thermal conductivity (W/(m·K)); Q_c – latent heat of the phase transition (J/kg); W_c – condensation rate (kg/(m²·s)); α – heat-transfer coefficient (W/(m²·K)); T_out – ambient temperature (K); T_input – vapor temperature at the channel inlet (K); U_0, V_0 – initial velocity distributions (m/s); C_0 – vapor concentration at the channel inlet, C_0 = ρ_1,input/M_1 (mol/m³); ρ_1,input – vapor density at the channel inlet (kg/m³); β – dimensionless condensation coefficient (β = 0.1); k – correction coefficient equal to 0.4; P_n – saturated vapor pressure (N/m²); the subscripts 1, 2 and 3 refer to the dichlorobenzene vapor, its liquid phase, and the material of the condenser tubes, respectively. The boundary conditions corresponding to the chosen formulation complete the system (1)–(8). The system of non-stationary differential equations with the corresponding boundary conditions (1)–(8) was solved by the finite-difference method [3,4]. The difference analogs of equations (1)–(8) were solved by a locally one-dimensional method. The sweep (Thomas) method with an implicit four-point scheme was applied to solve the one-dimensional difference equations, and the method of simple iteration was used for the nonlinear equations. The reliability of the numerical results was assessed by checking the conservativeness of the applied difference scheme.
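The locally one-dimensional splitting reduces each time step to tridiagonal systems, which the sweep (Thomas) method solves directly. The sketch below shows a single implicit step of one-dimensional heat conduction solved in this way; it is only a generic illustration of the numerical scheme named above, with made-up grid and material parameters, not the authors' full two-dimensional condensation model.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system (sweep method): a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_step(T, alpha, dx, dt):
    """One fully implicit step of dT/dt = alpha * d2T/dx2 with fixed end temperatures."""
    n = len(T)
    r = alpha * dt / dx**2
    a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
    d = T.copy()
    # Dirichlet boundaries: keep the end-node temperatures fixed
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    b[0] = b[-1] = 1.0
    return thomas_solve(a, b, c, d)

# Illustrative grid only: vapor near 403 K at one end, a cooler 303 K boundary at the other
T = np.full(151, 403.0); T[-1] = 303.0
T = implicit_heat_step(T, alpha=1e-5, dx=0.01, dt=1.0)
print(T[:5].round(2))
```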
Results and discussion The numerical studies were performed for typical parameter values of the condenser installations of independent power supply sources (Fig. 2): initial temperature of the working substance at the condenser inlet T_1 = 403 K; latent heat of condensation Q_c = 311.7 kJ/kg; dimensions of the solution domain H_x = 400 mm, H_y = 1500 mm; molar mass of dichlorobenzene M = 147 kg/kmol; dimensionless condensation (evaporation) coefficient β = 0.1; velocity of the working substance in the condenser V_1 = 0.01 m/s; heat-transfer coefficient for condensation of dichlorobenzene vapor in the condenser channel α_1-2 = 650 W/(m²·K). The critical value of the dichlorobenzene temperature at the condenser outlet, above which the risk of failure of the independent power supply source increases considerably, is T_output ≈ 340 K. On the basis of the numerical modeling it can be concluded that, at an ambient air temperature T_out = 303 K, this critical outlet temperature is already reached at an inlet temperature T_input ≈ 420 K, which corresponds to an output power P = 1500 W (less than 40% of the installation's capacity). The dependence of the working-substance temperature at the condenser outlet on the temperature of the air surrounding the condenser was determined for various operating modes (Table 1). The resulting dependences agree well with the data presented in Fig. 3. The numerical results obtained with the developed mathematical model make it possible to determine the time between failures of the independent power supply sources under specific climatic operating conditions. As an example, consider the Amur region of Russia, where the main gas pipelines extend over more than 1200 km. The data presented in Fig. 3 and in Table 1 lead to the conclusion that, when the independent power installation operates at its rated power (P = 2000 W, T_input = 426 K), the working-substance temperature at the condenser outlet exceeds the critical value once the ambient temperature reaches T_out = 305 K. According to long-term climatic observations [8], the air temperature in the Amur region is 305 K or higher for 840 h per year. Hence the time to failure at a power of P = 2000 W under the climatic conditions of the Amur region is T_0 = 7920 h; a short worked check of this figure is given below. It should be noted that the climatic conditions taken here as the example for the time-to-failure calculation are characterized by long periods of low air temperature. If similar independent power installations are used in the central part of Russia or in European countries, with a warmer climate and longer periods of elevated air temperature, the reliability indices of the independent power source will be considerably lower.
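A minimal check of the time-to-failure figure quoted above: with 8760 hours in a year and 840 hours during which the ambient temperature is at or above the 305 K threshold, the remaining hours reproduce the quoted value. The threshold and the annual exposure hours are taken from the text; reading T_0 as the annual hours outside the failure-prone regime is our interpretation of the arithmetic.

```python
# Time to failure estimated as the annual hours below the critical ambient temperature.
hours_per_year = 8760          # 365 days * 24 h
hours_above_threshold = 840    # Amur region, T_out >= 305 K (long-term climate data [8])

t0_hours = hours_per_year - hours_above_threshold
print(t0_hours)                # 7920 h, matching the value quoted for P = 2000 W
```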
Conclusions A mathematical model has been developed that makes it possible to predict failures of an independent power supply source operating on a closed thermodynamic cycle, over a wide range of output power and under various climatic conditions. As a result of the numerical study of the condensation of the working substance (dichlorobenzene) in the air-cooled condenser of the independent power installation operating on a closed thermodynamic cycle, the dependences of the working-substance outlet temperature on its inlet temperature and on the ambient air temperature were obtained. Emergency operating conditions, under which failures of the power installation due to excessively high temperature at the condenser outlet are possible, were identified. Figure 2. Schematic representation of the solution region: 1 – vapor of the working substance; 2 – condensed liquid; 3 – wall of the condenser tube; 4 – surrounding air. Figure 3. Working-substance outlet temperature T_output at various dichlorobenzene inlet temperatures T_input: 1 – T_out = 303 K; 2 – T_out = 273 K; 3 – T_out = 253 K. Table 1. Dependence of the dichlorobenzene temperature at the condenser outlet on the ambient air temperature.
2,461.2
2015-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
NO ESCAPE FROM CATEGORIZATION: AN INSIDER’S VIEW OF COMPOUNDS There has been a surge of syntactic research on compounding, joining a large literature on the nature of roots and phase theory. In an attempt to probe into the syntactic domain for idiosyncratic interpretation and to account for categorial exocentricity, disappearance of subcategorization, and lexical integrity effects, some recent studies on compounding have argued that root compounds are made up of two free acategorial roots directly merged in syntax, without undergoing categorization. The main goal of such an approach is to extend the phase domain in order to maintain two uncategorized roots awaiting further Merge operations. When a category head is merged on the top of this structure, it will trigger its Spell-Out, and as a result, both roots will (i) receive a single category status, (ii) be identified as a single syntactic object for the purposes of extraction and binding, and (iii) be assigned a non-compositional interpretation. In this article, we argue that root categorization should not be analyzed as an optional derivational step. By exploring compounding in Brazilian Portuguese, we identify a handful of phenomena that challenge the assumption that root compounds are made up of two bare roots. We propose that categorial exocentricity, subcategorization, and lexical integrity effects can be straightforwardly accounted if we assume that the unifying characteristic of compounds is the presence of a category head merged on the top of two categorized roots. We claim that non-compositional domains are not determined by categorization. Following Harley (2014), we admit that non-compositionality is assigned at LF through a set of LF instructions associated with roots in a particular syntactic environment. Introduction There has been a surge of syntactic research on compounding joining a large literature on the nature of roots and phase theory. In an attempt to probe into the syntactic domain for idiosyncratic interpretation, as exemplified in (1), and to account for categorial exocentricity (2), disappearance of subcategorization (3), and lexical integrity effects (4), some recent studies have argued that root compounds (RootCs) are made up of two free acategorial roots directly merged in syntax without undergoing categorization (Zhang, 2007;Zwitserlood, 2008;Bauke, 2013Bauke, , 2014Bauke, , 2016Borer, 2013;De Belder, 2017, a.o.). B. Categorial exocentricity (i.e., the compound's overall category differs from those of its constituents). C. Disappearance of subcategorization (i.e., the selection properties of a predicate within a compound are not satisfied). ( D. Lexical integrity effects (i.e., the impossibility of moving (4) or pronominalizing only one of the compound's members). In syntactic theories of word formation like Distributed Morphology (DM), lexical roots -i.e., primitives bearing conceptual content-are essentially category neutral (Marantz, 1995(Marantz, , 1997. They are categorized by combining with a category-assigning head (viz., n[oun], v [erb], a [djective]), as illustrated in (5): (5) n, v, a 3 n, v, a √ROOT Following a phase-based approach to interpretation, most works in DM admit that roots are not interpreted independently, since they never constitute a syntactic phase (Marantz, 2001(Marantz, , 2008Arad, 2003Arad, , 2005. 
Thus, once a root is categorized -i.e., once a root is merged with a category head-, it is dispatched to the phonological (PF) and semantic (LF) interfaces -as described in (6)-, and receives an interpretation, which can be idiosyncratic. This interpretation is then carried along throughout the derivation. (6) n, v, a 3 LF n, v, a √ROOT PF In root compounding, the merger of two uncategorized roots would extend the phase domain, and consequently it would maintain two uncategorized roots awaiting further Merge operations in the derivation. When a category head is merged on the top of this "compound structure" (√P), both roots are shipped together to LF, as depicted in (7). In this context, independent interpretation is not expected; hence both roots can be assigned a non-compositional meaning, as is the case with the nominal RootC in (1). Additionally, the structure in (7) would bear a single categorial status and serve as a single syntactic object (SO), thus neither of the two roots, √α and √β, could be independently manipulated. This would account for the phenomena exemplified in (2), (3), and (4). In this article, we question whether the merger of two or more bare roots can give rise to well-formed SOs. We depart from the assumption that acategorial roots are defective syntactic primitives, since they are feature-less items and lack a pre-specified content (Arad, 2003(Arad, , 2005Acquaviva & Panagiotidis, 2012;Harley, 2014;Panagiotidis, 2011Panagiotidis, , 2014Panagiotidis, , 2015Panagiotidis, , 2020. Each root must therefore be independently merged with a category head before being sent to the interpretive interfaces. By reviewing Bauke's (2013Bauke's ( , 2014Bauke's ( , 2016 contrast between Germanic and Romance nominal RootCs, we show that the generic structure in (7) does not account for a handful of morphosyntactic and morpho-phonological phenomena in BP nominal RootCs. We also revisit Zhang's (2007) discussions on categorial exocentricity, subcategorization, and lexical integrity, arguing that these phenomena can be straightforwardly accounted if we assume that the unifying characteristic of compounds is the presence of a category head merged on the top of two categorized roots (Nóbrega, 2014(Nóbrega, , 2015Nóbrega & Miyagawa, 2015;Nóbrega & Panagiotidis, 2020). We also indicate that (7) cannot explain a set of interpretive effects observed in compounding more generally. The article is laid out as follows. In Section 2, we discuss the identity of categorization in root compounding. By evaluating the set of phenomena listed above, we provide a body of evidence that root categorization should not be considered an optional derivational step. Romance RootCs, exemplified with data from BP, indicate that both roots must be categorized individually. We primarily review Bauke's (2013Bauke's ( , 2014Bauke's ( , 2016 interpretive distinctions between Romance and Germanic nominal RootCs, pointing out where the author's structural account fails. Subsequently, we explore an alternative solution to explain the three phenomena examined by Zhang (2007). In Section 3, we expand our analysis to account for the assignment of a non-compositional interpretation to RootCs. We argue that non-compositionality in compounding is dissociated from categorization. Following Harley (2014), we assume that any type of idiosyncratic interpretation is assigned at LF through a set of LF instructions associated with roots and the overall syntactic environment they are in. 
Finally, in Section 4, we present our final remarks. The assignment of non-compositional meanings to complex structures A parametric distinction is commonly made between compounding in Germanic and Romance languages (see Di Sciullo & Williams, 1987;Snyder, 1995Snyder, , 2001Roeper, Snyder, & Hiramatsu, 2002;Roeper & Snyder, 2005;Di Sciullo, 2005;Delfitto, Fábregas, & Melloni, 2011;a.o.). In recent works, Bauke (2013Bauke ( , 2014Bauke ( , 2016 re-assesses this distinction, reconsidering the two tendencies frequently pointed out about nominal root compounding in these language groups, namely: (i) Germanic languages tend to produce nominal RootCs that are compositional, productive, and recursive, while (ii) Romance languages hardly ever create nominal RootCs productively, and the existing forms tend to display a fixed interpretation, which can quite often be expressed by simple nouns, as illustrated with the examples in (8) (Bauke, 2014 p. 22) Focusing on German nominal RootCs, Bauke identifies an intra-language variation, showing that these two tendencies in fact can co-exist in a single system. German displays two patterns of nominal root compounding, which can be distinguished by the following morphological and interpretive properties: (9) Patterns of nominal root compounding in German a. Pattern #1: • Nominal RootCs made up of two bare lexical items combined without any intervening inflectional material; • They are non-recursive, non-compositional, and non-productive. b. Pattern #2: • Nominal RootCs displaying inflectional material in compound internal positions, which is attached to the compound's first constituent member; • Whenever these inflectional markers occur, the resulting compound is compositional and productive, and allows for a range of alternative interpretations -the so-called 'weak compositionality' (see Pirrelli, 2002). c. Landeskirche country.GEN+church 'national church' or 'church that is associated with the country' , or 'church that shows the country's typical architecture, ' etc. d. Landerspiel country.PL+match 'match between two national teams' or 'game that involves knowledge about certain countries' , or 'game that is typically played in certain countries' , or 'game that is characterized by customs of a certain country, ' etc. The compound in (10a) consists of the combination of two roots: Land 'country' and straße 'street' . Since this compound does not involve any internal inflectional material, it can only display -according to Bauke-, one fixed interpretation, thus exemplifying the pattern in (9a). This pattern parallels the Romance RootCs in (8). As Bauke (2014, p. 28) points out, Landstraße is "a very specific type of road" and it cannot be interpreted as "any kind of road that runs through the countryside." This lexicalized type of interpretation is quite different from that of the compounds in (10b) -(10d). Although the examples in (10b) -(10d) display a preferred reading, a number of alternative interpretations exist alongside. For instance, although the preferred reading of Landsman, in (10b), is 'compatriot' , it does not exclude the emergence of a set of additional interpretations, such as 'man who loves the countryside' , and 'man who advocates for the conservation of the countryside' . This interpretational flexibility is assumed to be due to the presence of inflectional material attached to the first constituent, generally a plural or genitive marker. This latter type illustrates the pattern in (9b). 
Bauke (2014Bauke ( , 2016 indicates that the assignment of alternative interpretations in German RootCs can be attested even in cases where the preferred interpretation has a strong tendency for an idiosyncratic meaning. (11a), for example, has no alternative interpretation available. (11b), on the other hand, despite being more drifted, can also refer to (i) 'a castle that is built in the shape of a bed' , and (ii) 'an arrangement of several beds that resemble a castle' . (11) German (Bauke, 2014, p. 28) a. Bettlaken bed+sheet 'bedsheet' b. Bettenburg bed.PL+castle 'big ugly hotel with lots of rooms' Based on these empirical observations, Bauke puts forth a morpho-semantic generalization about the interplay between the morphological and interpretative properties of German nominal RootCs, which can be synthetized as in (12): (12) Bauke's generalization on German nominal root compounding "As long as an inflectional marker is available [in compound internal positions], a compositional reading can still be retrieved; once the inflectional marker is lost, the interpretation of the compound is fixed and does not allow for productive alternatives" (2014, p. 30). In an attempt to account for the inter-and intra-language variation observed in root compounding, and for the morpho-semantic generalization in (12), Bauke proposes that compositional, recursive, and productive RootCs result from the merger of two independently categorized roots. Following Marantz (2001Marantz ( , 2008, Bauke argues that the inflectional marker attached to the first constituent member is a categorizing head that has the properties of a phase head. Consequently, inflectional markers coincide with a phase that triggers cyclic Spell-Out, as depicted in (6). Thus, the element to which the inflectional marker is attached undergoes independent interpretation at LF and allows for a compositional reading. On the other hand, compounds with a non-compositional interpretation, such as Romance and the Germanic pattern in (9a), would have two roots merged without undergoing categorization. In this case, independent interpretation is not expected -as pointed out earlier-, which would explain their idiosyncratic and non-productive character. In Table 1, below, we summarize Bauke's observations and structural distinctions to each compound type: With respect to Romance languages, Bauke recovers the recurrent assumption that "novel compounds are hardly ever formed productively" and "when speakers of a Romance language form a novel endocentric nominal RootC, the result requires an explanation of the meaning of this compound" (2014, p. 22). This claim, however, is not entirely correct. Although Romance nominal RootCs are not recursive (as opposed to English and German RootCs; e.g., restaurant coffee cup), they are not peripheral as the literature often suggests, and most newly coined forms do not necessarily display a fixed, lexicalized interpretation. BP speakers, for example, easily coin novel RootCs with compositional and straightforward readings, such as the ones listed in (14): judge+star 'a judge who seeks fame; who got famous' All nominal RootCs in (14) have at least two interpretations. Alongside their attributive reading -i.e., a reading involving a modification relation-, they can also display a coordination reading, as illustrated in (15): (15) BP pastor-deputado lit. pastor-congressman i. Attributive reading: 'a pastor who, in addition to being a pastor, has another parallel occupation, congressman'; ii. 
Coordinate reading: 'pastor and congressman ' . 1 This interpretational flexibility indicates that their internal structure is not entirely opaque. Furthermore, it is relevant to highlight that Romance non-compositional RootCs may also display inflectional material occurring in compound internal positions, similarly to what has been observed in German pattern #2. Examples are listed in (16). Oppositely to the German pattern #2, the inflectional material attached to the compound's first member does not trigger a whole range of alternative interpretations, as noticed particularly with the German RootCs in (11). The examples in (16) contradict Bauke's structural account for the German pattern #2 in (13b). The RootCs in (16) evidence that (i) BP nominal RootCs cannot be the result of the merger of two uncategorized roots, and that (ii) non-compositional RootCs must also have their roots independently categorized in order to allow for the insertion of inflectional markers in their first constituent members. To further verify the plausibility of Bauke's account, let us explore four different contexts. As the first context, let us consider Romance non-compositional RootCs, such as the ones in (16), admitting -as Bauke suggests-that Romance RootCs are made up of two uncategorized roots. Since LF assigns a fixed meaning to this type of SO, we can set aside any discussion on the establishment of different grammatical relations between its constituent members, such as attribution or coordination. Bauke's account, however, is not able to explain the following PF facts: first, it would preclude the insertion of inflectional markers in the first constituent member of BP non-compositional RootCs (e.g., cara-s metade-(s) lit. face-PL+half-PL 'soul mates'). Second, if RootCs bear a single category head, they should display a single primary stress (Marvin, 2002(Marvin, , 2013, a theoretical expectation that conflicts with the phrasal stress pattern of Romance RootCs (Nespor, 1999) As a second context, let us explore BP compositional RootCs. If Romance RootCs were made up of two uncategorized roots, then we would find additional problems to account for some interpretive effects at LF, along with the PF issues identified for the first context. First, this analysis would not be able to explain why we find different grammatical relations holding between the constituent members of nominal RootCs, such as attributive (e.g., BP. trem-bala lit. train+bullet 'bullet train') and coordination relations (e.g., BP. sofá-cama lit. sofa+bed 'daybed'), see also (15). Second, this analysis does not elucidate, in a principled way, why in some nominal RootCs only one root (viz., the non-head noun) is interpreted idiosyncratically, as in (19) Now let us consider non-compositionality in other compound types. If noncompositionality in the two-root domain is associated with the absence of category heads, as (13a) implies, how then could we explain the correct distribution of nominal class markers and verbal theme vowels in non-compositional V-N compounds, such as those in (20)? Finally, let us keep assuming that Romance RootCs are made up of two bare roots merged without undergoing categorization. Then, let us admit that the nominal category head (n) is visible at both interfaces. In this scenario, we would be able to account for the correct distribution of class markers in Romance N-N RootCs (e.g., peix-e espad-a lit. fish+sword 'sword fish'). 
However, in the case of nominal N-A RootCs, it is hard to determine to which root a nominal class marker will be attached, and to which root gender agreement will be attached, as illustrated with the examples in (21). These four contexts suggest that an analysis for Romance RootCs along the lines of Bauke's (2013Bauke's ( , 2014Bauke's ( , 2016 proposal cannot account for a set of morphological and morpho-phonological facts. Bearing this in mind, we claim that non-compositionality in compounding should be dissociated from categorization. We suggest that even non-compositional RootCs must have their roots independently categorized. In the next sub-section, we will point out that lexical integrity effects provide evidence for postulating a third category head in compounding, which turns both categorized roots into a single SO for the purposes of movement and binding, and elucidates cases of categorial exocentricity. Zhang (2007) claims that (i) categorial exocentricity -i.e., compounds where the constituent in the head position does not impose its categorial features on the whole construction (Scalise, Fábregas, & Forza, 2009, p. 58)-, illustrated in (22), (ii) the disappearance of subcategorization -i.e., cases where the subcategorization of a verb has not been satisfied-, in (23), and the impossibility of movement (24) and pronominalization (25) of a single constituent member, serve as evidence that Chinese RootCs result from the merger of two uncategorized roots. Lexical integrity effects in compounding (22) Chinese (Zhang, 2007, p. 172) a. zhe zhang zhuozi de da-xiao (A+A  N) this CL (Zhang, 2007, p. 174) a. Ta mai-le shu/*mai he buy-PRF book/sell 'he bought books' b. yi zhuang mai-mai one CL buy-sell 'a transaction of trade' (24) Chinese (Zhang, 2007, p. 176) a. Tamen yixiang fu-ze they always carry-duty 'they are always responsible' b. *Tamen yixiang lian ze dou fu they always even duty also carry Intended: 'they are always even responsible' (25) Chinese (Zhang, 2007, p. 177) *Ta xian na-le yi ba cha i -hu ranhou ba ta i dao-ru beizi-li he first take-PRF one CL tea-pot then BA it pour-in cup-in Intended: 'he first took a tea-pot, and then poured the tea into a cup' As an alternative to Zhang's (2007) proposal, we suggest that these phenomena may in fact be indicating that the unifying characteristic of compounds is the presence of a category domain on the top of two or more categorized roots, which is responsible for turning one or more syntactic elements (e.g., categorized roots, phrasal constituents) into a single SO (see Nóbrega, 2014Nóbrega, , 2015. Following Nóbrega & Panagiotidis (2020, p. 230), we incorporate this assumption in the syntactic definition of compounds presented in (26). (26) Compounds within syntax Compounds are phrasal structures with two or more categorized roots combined in a specific grammatical relation -viz. subordination, attribution, or coordination-, which are further categorized by a category head, n, v or a. According to (26), compounds should be analyzed as the by-product of the recategorization of an endocentric syntactic structure. As a consequence, the overall category of the compound may -in some cases, such as those in (22)differ from the category of its internal constituent members. Furthermore, the nominal status of deverbal compounds would inhibit the subcategorization frames of their internal verbs, which would explain (23). 
Finally, the category domain on the top of this compound structure would require both categorized roots to be moved together, thus no element can be moved out of the compound in isolation, as observed in (24). It also inhibits reference to some of the compound's roots by using an anaphoric device, as in (25). Other empirical facts motivating the assumption of a category domain in compounding are: (i) the addition of a subcategorization frame in verbal N-V and synthetic compounds, as in (27) and (28); (ii) parasynthetic compounds (i.e., when two roots form a (non-existent) compound with a derivational suffix; see Bisetto & Melloni, 2008, and (iii) the addition of inflectional markers distinct from those expected for the compound members when used as an autonomous word, as observed in Modern Greek and Slavic compounds (Nespor & Ralli, 1996;Ralli, 2008Ralli, , 2009Ralli & Karasimos, 2009); see Nóbrega (2014) The generic structure for a RootC, including the categorial domain alluded in (26), is thus as follows: Based on the empirical facts discussed so far, we may admit that categorization is not an optional derivational step, especially in non-compositional domains. In addition to the empirical issues explored, merging two uncategorized roots leads to a set of theoretical drawbacks, most of them associated with their feature-less nature (Panagiotidis, 2011(Panagiotidis, , 2014(Panagiotidis, , 2015(Panagiotidis, , 2020. First, by postulating that two free acategorial roots create a well-formed SO, we would necessarily have to admit that feature-less items are able to project (Acquaviva, 2009). Second, compounds made up of two free acategorial roots would be inherently headless. Roots are 'weak' syntactic elements, since they remain feature-less due to the 'No Tampering Condition' (Chomsky, 2008(Chomsky, , 2015; thus, the output of two uncategorized roots would in principle induce formal crashing at the interfaces, since no immediate head can be identified. With this in mind, we will now discuss how the derivation of RootCs in BP is structured, and how a non-compositional meaning can be assigned to both roots. Deriving and interpreting BP nominal RootCs We have concluded thus far that roots are never interpreted independently. Their meaning is negotiated when they are dispatched to the interfaces, right after being categorized. Second, roots are defective syntactic objects, since they are content-less and feature-less primitives (Arad, 2003(Arad, , 2005Acquaviva & Panagiotidis, 2012;Harley, 2014;Panagiotidis, 2011Panagiotidis, , 2014Panagiotidis, , 2015Panagiotidis, , 2020. Therefore, roots must be merged with a category head; otherwise they cannot be semantically and phonologically interpreted (Panagiotidis, 2014(Panagiotidis, , 2015. Third, non-compositional interpretation in compounding emerges when both roots are shipped together to the interpretive interfaces, allowing this combination to be In which α, β, γ are category-defining heads and ℜ stands for the grammatical relations of subordination, attribution and coordination. assigned an idiosyncratic meaning (Nóbrega & Panagiotidis, 2020, p. 229). In the compound structure in (29), both roots are merged to a categorizing head as their complements, as shown in (30). Nevertheless, as indicated by Nóbrega and Panagiotidis (2020, p. 231-233), once category heads are phase heads, both roots will be spelled-out separately, precluding the assignment of a single idiosyncratic meaning to the compound. 
It is not consensual, however, that roots are complements of category heads. Marantz (2013), in particular, shows that contextual allomorphy requires a root to be concatenated as an adjunct to a category head (specifically v). Irregular pasttense morphology in English, for instance, is sensitive to the past-tense feature of T. The root √TEACH, in (31), has to be realized as /t / in the environment of √TEACH, and the past-tense feature has to be realized as /t/ in the environment of √TEACH. (31) English past tense √TEACH + v(Voice) + past = taught (Marantz, 2013, p. 98) (31) thus describes a locality issue: if v is a phase head, then the root and T are on different sides of a phase boundary. In order to allow for a root to be in the same Spell-Out domain as v (or Voice), Marantz argues that each root is adjoined to the verbal category head. Consequently, the root will not be in the Spell-Out domain of v. As a result, v(+Voice) does not interfere with Tense serving as the context for the vocabulary item (VI) at the root, since all of these heads will be spelledout at the same time, in the complement domain of C. Additional evidence and possible extensions are cases of contextual allosemy in Murinypata, a language of North-West Australia. In languages with noun classifiers, distinct classifiers can be used with the same noun to specify its meaning. In Murinypata, the choice of the noun classifier will govern the meaning assigned to the root, as illustrated with the noun kamarl 'eye' in (33): 3 3 DP 2 2 (33) Murinypata (Walsh, 1976, p. 275) a. These examples instantiate a case of contextual allosemy, where the meaning of the root is sensitive to the noun classifier attached above its nominal category head. Thus, for LF to assign a meaning to the root √KAMARL 'eye' , both the root and the classifier node (CL) have to be in the same Spell-Out domain, otherwise such an outer morphology cannot influence the root's meaning, as schematically illustrated in (34). If we admit that the root is adjoined to its category head, then n will not interfere with CL serving as the context for the VI at the root, since all of these heads will be spelled-out at the same time, in the complement domain of D. (2020) developed an analysis along these lines to explain root categorization in compounding. Expanding Marantz's (2013) adjunction proposal, the authors suggest that roots are externally pair-merged with category heads, as in (35). This assumption would assure that: (i) each root will be in a local domain with a category head, and that (ii) both roots will be in the same Spell-Out domain, allowing the assignment of an idiosyncratic meaning to the compound. 4/5 (35) 3 <√ROOT, x> <√ROOT, x> in which x stands for n, v or a. Following Nóbrega and Panagiotidis (2020, p. 232), we also admit that the grammatical relations connecting the members of a compound ( ) are derived by the nature of the operation Merge applying to combine their categorized roots, Furthermore, we admit that the syntactic derivation of a compound would follow the same derivational steps of complex specifiers or complex adjuncts (following the technical implementation envisaged in Nunes & Uriagereka, 2000;Nunes, 2012;Piggott & Travis, 2013). Thus, considering the Numeration in (37a), the computational system would independently select a nominal category head (n) and a root (37b-c), and subsequently it would externally pair-merge them together (37d). 
A second root would be externally pair-merged with a nominal category head as a second root syntactic object (37e-g). Finally, both SOs would be merged to each other following one of the specifications in (36). The derivational step in (37h) indicates that both roots were concatenated in an attributive relation. After being concatenated, both categorized roots (i.e., roots in a strict local domain with a category head) have a third category head set-merged on the top of them. At this moment, this category head determines a phase head and triggers the Spell-Out of its complement -since it was set-merged to the structure-, allowing both categorized roots to be dispatched to the interfaces together. In (38), we describe the generic structure of a nominal RootC with an attributive interpretation (e.g., trem-bala lit. train+bullet 'bullet train'; aula-debate class+debate 'debate class' , and the compounds listed in (14)), and indicate the Spell-Out domain. Now to account for the assignment of a single idiosyncratic reading to two categorized roots, we claim that the interpretation of a nominal RootC is determined at LF, through instructions associated with roots and the syntactic structure they are in, following Harley's (2014) proposal. LF instructions assign interpretations to roots, taking into account the overall syntactic environment in which they are inserted. Thus, the syntactic context -and, more importantly, the categorial environment-is extremely relevant to determine the roots' meaning. Harley (2014), following Acquaviva (2009), assumes that roots are phonologically abstract and semantically vacuous linguistic primitives, and that a root terminal node is solely individuated by an alphanumeric index. For example, an arbitrary English root, such as √ 77 -associated to the phonological matrix /θrow/-, can be assigned multiple interpretations at LF, depending on its syntactic environment, as illustrated with the LF instructions in (39). (39) LF instructions (Harley, 2014, p Following this rationale, a non-compositional RootC, such as (BP) sambacanção lit. samba+song 'boxers' , will have its idiosyncratic meaning listed as part of 3 3 n C <nP> g g 3 the LF instructions of both roots. For instance, √ 243 "SAMB-", in the context of root √ 38 "CANÇ-", will be assigned the meaning 'boxers' . The same meaning is codified as part of the LF instructions of the root √ 38 "CANÇ-" in the context of root √ 243 "SAMB-", as described on the right-hand portion of the LF instructions in (40) In these compounds, only the non-head noun -viz., the second memberis assigned a drifted interpretation, while the compound's head generally receives an elsewhere interpretation. This is illustrated with the compound bolsasanduíche in (42) To conclude, a short note on weak compositionality. Based on what has been discussed, we hypothesize that weak compositionality, as observed in (10b) -(10d), is an interpretive effect that arises when the compound is interpreted compositionally. This suggests that weak-compositionality is not necessarily dependent of a lexicalized meaning codified as part of an LF instruction, as is the case of the non-compositional RootCs in (40) and (42). Furthermore, weak compositionality seems to be restricted to nominal attributive compounds, and although their meaning may vary substantially, such variation is restricted by the attributive reading (Scalise & Vogel, 2010). We assume for now that weak compositionality is context-dependent, and may be regulated by pragmatic factors. 
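Purely as an illustration of the derivational steps in (37)–(38), the toy sketch below builds the compound structure by pair-merging each root with its category head and then set-merging a third nominal head on top. The nested-tuple representation is ours, introduced only to make the order of operations concrete; it is not part of the theoretical proposal.

```python
# Toy rendering of the derivation in (37): pair-merge each root with a category
# head n, then set-merge a third n on top of the attributive structure.
def pair_merge(root, head):
    # adjunction of a root to a category head, e.g. <ROOT, n>
    return ("<pair>", root, head)

def set_merge(a, b):
    # ordinary (set-)Merge of two syntactic objects
    return ("{set}", a, b)

root_trem = "ROOT_TREM"   # 'train'
root_bala = "ROOT_BALA"   # 'bullet'

np1 = pair_merge(root_trem, "n")          # step (37d)
np2 = pair_merge(root_bala, "n")          # step (37g)
attributive = set_merge(np1, np2)         # step (37h): attributive relation
compound = set_merge("n", attributive)    # third n head; its complement is spelled out

print(compound)
```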
Final remarks In this article, we argued that categorization is not an optional derivation step. Contrarily to approaches that resort to a delayed categorization to account for non-compositionality and lexical integrity effects in root compounding, we argued that non-compositionality is dissociated from categorization, and that lexical integrity effects are a reflex of a categorial domain established on the top of a complex structure (which generally comprises two categorized roots). We also argued that non-compositionality and weak compositionality should be analyzed as different LF phenomena. The former arises when LF instructions determine particular interpretations to a syntactic structure, while the latter is contextdependent, presumably pragmatically determined. Notes 1. In the coordination reading, order variation is possible, pastor-deputado and deputadopastor (Bauer & Tarasova, 2013;Arcodia, Grandi, & Wälchli, 2010), while in the attributive reading -when the speaker assigns prominence to one of the compound members-order variation is not allowed; for this reason, the interpretation in (15i) cannot be extracted from the compound deputado-pastor. 2. The vowel -o-connecting both roots is a linking element (LE), which is inserted for phonotactic reasons when the first root ends in a consonant, and the second root begins with a consonant (Nóbrega, 2013(Nóbrega, , 2014Scher & Nóbrega, 2014). This LE does not need to be inserted in cases where this consonant cluster is not formed, e.g. psic-análise lit. psych+analysis 'psychotherapy' and hidr-elétrica 'lit. hidr+electric 'hydroelectric' . 3. In fact, (13a) is not capable of differentiating word-based from stem-based nominal compounds, commonly found in some Romance languages. Since Romance nouns are generally linked to a class marker, we can admit that stem-based and word-based compounds are, in these languages, two different modes of externalizing RootCs. 4. One consequence of assuming that roots are externally pair-merged with category heads is that they may not be in fact "categorized" (Alexiadou & Lohndal, 2017, p. 220). In light of this, we restate the canonical notion of categorization (Categorization Assumption; see Embick & Marantz, 2008) as follows: "A root must be merged in a strict local domain with a category-assigning head. Categorially non-individuated roots are not legitimate LF and PF objects, inducing formal crashing at the interfaces. " 5. Since roots are adjoined to category heads, they could be seen as "optional. " Thus, it would be expected to find syntactic structures with grammatical features, but no roots. Possible examples are "this is here", "it is here" (Emonds, 1985apud Panagiotidis, 2011. 6. Borer (2014, p. 355-356) points out that this type of double marking, in cases of noncompositionality, would be a drawback. We do not see, however, how listing the same meaning as part of the LF instructions of both roots would interfere in a significant way in the compound's interpretation.
7,308
2020-10-22T00:00:00.000
[ "Linguistics" ]
Development of Simple Green Spectrophotometric and Conductometric Methods for Determination of Cephalosporins in Pure, Pharmaceutical Dosage forms and Human Urine Five Simple, accurate and rapid spectrophotometric and conductometric methods were developed for the determination of four third generation cephalosporins, namely, cefotaxime sodium (I) , cefoperazone sodium (II), ceftazidime pentahydrate (III) and cefdinir (IV) in pure active ingredient, pharmaceutical dosage forms and human urine. Method A: is based on the reaction of the sulphide ions produced from the alkaline hydrolysis of the cited four drugs with Paminophenol (PAP). This reaction results in a thionine dye (phenothiazine derivative) formation which exhibits maximum absorbance at 545 nm. Method B: is based on oxidation of drug (I and III) with a known excess of n-bromosuccinimide (NBS) in acidic medium followed by the determination of unreacted amount of n-bromosuccinimide with metol and sulphanilic acid. The purple-red reaction product exhibits maximum absorbance at 520 nm. Method C: is based on the formation of yellow chelate between drug (IV) and palladium (II) chloride in buffered medium (pH 3.5) with an absorption maximum at 314 nm. Method D: is based on the reaction of drug (IV) with aqueous ninhydrin to give yellow colored product in the presence of bicarbonate with an absorption maximum at 433 nm . Method E: A conductometric method is based on the reaction of the four cited drugs with phosphotungstic acid (PTA) forming an ion associate in aqueous medium. Validation of the proposed methods was carried out. All proposed methods were successfully applied for the commercial dosage forms of the cited drugs. Method C was successfully applied for the determination of cefdinir in human urine. Instruments Shimadzu recording spectrophotometer UV 1201 equipped with 10 mm matched quartz cells and conductometer model 470 portable conductivity / TDS meter, 25 DEG.C-C10 dip-type cell with a cell constant, K cell of 1.09 were used. Digital analyzer pH meter (USA) was employed for pH measurment. Reagents and materials All chemicals and materials were of high analytical grade, and double distilled water was used through the work. 9-Metol (Sigma Aldrich ,Germany) (0.2 g%, w/v) aqueous solution. 11-Palladium(II) chloride (Sigma, Milwukee, WI,USA) was prepared as 2×10 -3 M by dissolving 35.5 mg of palladium(II) chloride in 1 mL of concentrated hydrochloric acid and diluting to 50 mL with distilled water, with the aid of heat and then the solution was cooled and diluted to 100 mL with distilled water. 12-Ninhydrin (Sigma, Milwukee, WI,USA) was prepared as 0.5% aqueous solution. 13-Phosphotungstic acid (Winlab , UK) was prepared as 10 -2 M aqueous solution. 14-Sulphuric acid , hydrochloric acid and sodium hydroxide were obtained from El-Nasr Chemical Company, (I, II, III and IV) respectively were transferred and the solutions were completed with 1M NaOH to the mark . The flasks were heated in a boiling water bath for 50 minute for drug (I and II) and 60 minute for drug (III and IV) and then cooled to room temperature. One mL of each of these solutions was transferred into another three sets of 10 mL volumetric flasks. Two milliliters of zinc acetate solution then the specified volume of paminophenol and Ammonium iron (III) sulphate solutions were added to each flask. The flasks were shaken for 30 seconds and allowed to stand for the specified time. The volumes were completed to 10 mL with distilled water. 
Absorbance was measured at 545 nm against blank solution ( Figure 1). Method B Into two sets of 10 mL volumetric flasks, accurate volumes of the standard solution of each drug containing (0.02-0.32) and (0.04-0.32) mg of drug (I and III) respectively were transferred . To each flask 0.8 and 2 mL of NBS were added for drug (I and III) respectively. The content was mixed well . The flasks were kept for 15 and 10 min for drug (I and III) respectively with intermittent shaking. Then, the specified volume of metol was added . After 1 minute, the specified volume of sulphanilic acid was added to each flask, The flasks were kept aside for 3 minutes. Then, the volume was diluted to the mark with bidistilled water and mixed well. The absorbances were measured at 520 nm ( Figure 2). Method C To a set of 10 mL volumetric flasks, accurately measured aliquots of standard drug (IV) solution in the range of (0.03 -0.26) mg were transferred. 2 mL buffer solution of pH 3.5 were then added, then 0.5 mL of 2 M potassium chloride and 1 mL Pd(II) chloride solution. The solutions were allowed to stand at room temperature (25 ᵒC) for 10 minutes, then diluted to volume with bidistilled water, absorbance was measured at 314 nm against blank solution ( Figure 3). Method D To a set of 10 mL volumetric flasks, accurately measured aliquots of standard drug (IV) solution in the range of (0.04 -0.3) mg were transferred and completed to 5 mL with bidistilled water. 1 mL of ninhydrin solution followed by 1 mL of saturated solution of sodium bicarbonate were added to each flask. The flasks were heated in a boiling water bath for 15 minutes, cooled and then diluted to volume with bidistilled water. Absorbance was measured at 433 nm versus blank solution ( Figure 3). Method E Aliquots of drug solution containing (3 -30 mg) were transferred to a 50 mL calibrated flasks. Volumes were made up to the mark using bidistilled water and transferred to a beaker. Titration with 10 -2 M phosphotungstic acid was performed. The conductance was measured subsequent to each addition of phosphotungstic acid solution and after stirring for 2 minutes, the conductance was corrected for dilution [7] using the following equation. Where Ω -1 obs is the observed electrolytic conductivity, v1 is the initial volume and v2 is the volume of phosphotungstic acid solution added. A graph of corrected conductivity versus the volume of added phosphotungstic acid solution was constructed and end-point was estimated (Figure 4). .Method A and E : The conent of one vial was transferred into 100 mL volumetric flask and the volume was completed with distilled water .Accurate volume of vial equivalent to 300 and 150 mg for method A and C respectively was transferred to 50 mL volumetric flask .The volume was completed with double distilled water and the procedure was completed as under general procedure . For Method B: The conent of one vial was transferred into 100 mL volumetric flask and the volume was completed with 0.2 M and 0.05 M HCl for drug (I and III) respectively. Accurate volume of vial equivalent to 10 mg for drug (I and III) was transferred to 50 mL volumetric flask .The volume was completed with 0.2 M and 0.05M HCl for drug (I and III) respectively and the procedure was completed as under general procedure . For cefdinir capsules: The contents of 10 capsules were removed and their weight was determined accurately. 
The combined contents were mixed and a quantity equivalent to 10 mg for method C and D and equivalent to 150 mg for method A and E was transferred into 50 mL volumetric flasks, extracted with 1 mL 1 M NaOH and shaken with 10 mL bidistilled water, then filtered and diluted to 50 mL with bidistilled water. The assay was completed as under general procedure. For cefdinir suspension: An accurately measured volume of the freshly reconstituted oral suspension equivalent to 10 mg for method C and D and equivalent to 150 mg for method A and E was transferred into 50 mL volumetric flasks, extracted with 1 mL 1 M NaOH and shaken with 10 mL distilled water, then filtered and diluted to 50 mL with distilled water. The assay was completed as under general procedure. O c t o b e r 1 8 , 2 0 1 3 Prepartion and analysis of human urine samples ( for method C) Human urine samples were collected freshly from healthy volunteers. Blank urine pool was diluted 1:1 with double distilled water, then spiked with the appropriate amounts of stock solution to prepare samples. The assay was completed as described above. RESULTS AND DISCUSSION Many of the reported methods suffered from poor sensitivity, use of expensive organic solvents and extraction step. The use of organic solvent as the reaction medium is undesirable. Green or environmentally-friendly analytical methods are promising in recent years. Modern analytical methods need to be green without losing accuracy and sensitivity.The aim of the present work was to develop five new sensitive, cost effective methods for the determination of cephalosporins in pure drug , in pharmaceutical preparations and human urine. Method (A) is based on the use of PAP in sulphuric acid and aqueous medium. Method (B) is based on redox reaction in acidic and aqueous medium . Method (C) is based on use of the palladium (II) chloride in walpole acetate buffer pH 3.5, unlike other methods which are based on the use of palladium (II) chloride in DMF [6]. No spectrophotometric method was reported for cefdinir determination in human urine, so method (C) is advantageous in its determining in human urine. Method (D) is based on use of aqueous ninhydrin in bicarbonate medium, unlike other methods which are based on the use of methanolic or ethanolic ninhydrin [8] . Method (E) is based on direct titration of the cited drugs with PTA in aqueous medium. So, all proposed methods are free from usage of hazardous and expensive chemicals. Since inexpensive and easily available chemicals are used, the developed methods are green low cost analytical methods for cefdinir. Method development (Method A) Para amino phenol (PAP) was widely used in many analytical methods for pharmaceutical compounds determination. It was used for determination of cimetidine, famotidine , nizatidine , ranitidine hydrochloride [9] and cephalexin [10]. Method A is based on the alkaline hydrolysis of the cited drugs producing the sulphide ions which react with PAP and ferric ions by ring closure redox reaction giving red thionine dye (phenothiazine derivative) . Optimization of the reaction conditions The effect of hydrolysis time, effect of PAP volume, ammonium iron (III) sulphate volume and effect of reaction time were studied. 
It was found that 50 minutes hydrolysis time for (I and II) and 60 minutes for (III and IV), 4.5 mL of PAP for (I and II) , 3 mL for (III) and 4 mL for (IV) (Figure 5), 1.5 mL of ammonium iron (III) sulphate for (I,II and III) and 2 mL for (IV) and 1 minute for (I and II) and 3 minutes for (III and IV) were sufficient to give maximum absorbance. The color produced was stable for 1 hour. Method developemt (Method B) NBS-metol-primary arylamine combination was used for the determination of the oxidant, thereby permitting the indirect assay of many oxidisable substances including drugs in which the drug is oxidized with a known excess of NBS and, after the reaction, the unreacted NBS reacts with metol and primary arylamine and the purple color formed is measured and correlated to drug concentration. Many pharmaceuticals have been estimated by this approach using NBS as oxidant and sulphanilic acid as primary arylamine .e.g. aspartame [11] and pioglitazone hydrochloride [12]. In this method, the application of NBS-metol-primary amine combination to the determination of (I and III) is described. The method is based on the oxidation of the drug by a known excess of NBS in acidic medium and subsequent determination of the unreacted NBS by interacting with metol and the primary aromatic amine, sulphanilic acid. The studied drugs when added in increasing amounts to a fixed amount of NBS, consume NBS and consequently, there will be a concomitant fall in the NBS concentration. This is observed as a proportional decrease in the absorbance of the reaction mixture on increasing the concentration of drugs .The following scheme illustrates the proposed reaction mechanism. Scheme (3): Proposed reaction mechanism of the studied drugs and NBS-metol-primary amine combination. Optimization of the reaction conditions The effect of acid type, molarity of hydrochloric acid, NBS volume, metol volume, sulphanilic acid volume, reaction time between drug and NBS, waiting time after addition of sulphanilic acid and time after dilution were studied. It was found that 0.2 M and 0.05 M hydrochloric acid, 0.8 and 2 mL of NBS solution, 0.5 and 1 mL of metol solution (Figure 6), 0.5 and 1 mL of sulphanilic acid solution, 15 and 10 minutes reaction time with NBS and 3 minutes waiting time after addition of sulphanilic acid were sufficient to give maximum absorbance. Solutions of ceftazidime can be measured immediately after dilution with water giving stable absorbances. For cefotaxime sodium, the solution should be stand for 5 minutes after dilution then measured due to increasing the absorbances in the first five minutes after dilution. All absorbances for both drugs are stable for 30 minutes. O c t o b e r 1 8 , 2 0 1 3 Method development (Method C) The use of palladium (II) chloride as a complexing agent for drugs quantitation is very wide. Palladium(II) chloride was found to form complexes of square or 5 -co-ordinate shape [13]. The chelate complex of palladium (II) ions is watersoluble and does not need extraction procedure. Several drugs were determined spectrophotometrically by measuring the color intensity of their complexes with palladium (II) ions, e.g. metoclopramide, promazine [14] and cefotaxime , cefuroxime and cefazolin [6]. Reaction of Pd(II) chloride with drug (IV) produced yellow complex which was soluble in walpole acetate buffer pH 3.5. The absorption spectra showed a maximum absorbance at 314 nm . 
3.3.1. Optimization of the reaction conditions The effects of pH, KCl, reagent volume, reaction time, temperature and order of addition were studied. It was found that 2 mL of Walpole acetate buffer pH 3.5, 0.5 mL of 2 M KCl, 1 mL of 2 × 10⁻³ M Pd(II) chloride (Figure 7) and a reaction time of 10 minutes were sufficient to give maximum absorbance with cefdinir. Temperatures higher than room temperature caused a decrease in absorbance. The most suitable order of addition was found to be drug, buffer, KCl and Pd(II) chloride. The color produced was stable for 30 minutes.

3.3.2. Composition of the complex The stoichiometry of the complex between cefdinir and Pd(II) chloride was studied by applying Job's method of continuous variation [15] using equimolar (5 × 10⁻⁴ M) solutions of cefdinir and Pd(II) chloride. The total volume of drug and Pd(II) chloride was kept at 2 mL, and the procedure was then completed as described above. The results showed that the stoichiometric ratio of the complex was 1:1 (reagent:drug) (Figure 8). Vd is the volume taken from the drug molar solution and Vr is the volume taken from the Pd(II) chloride molar solution.

Formation Constant of the Reaction Product The formation constant (Kf) of the complex was calculated from the continuous-variation data [16]. The Gibbs free energy change of the reaction (ΔG) was found to be −2.6 × 10⁴ kJ/mol. Its negative value indicates the spontaneous nature of the reaction [16].

Method development (Method D) Ninhydrin (triketohydrindane hydrate) is a carbonyl reagent that forms a purple condensation product which can be measured spectrophotometrically, so it has been applied in the pharmaceutical assay of different nitrogenous compounds such as some penicillins [17] and tranexamic acid [8]. A modified, green approach to the use of ninhydrin was developed for lisinopril determination, based on the formation of a yellow product with aqueous ninhydrin in the presence of bicarbonate, with an absorption maximum at 420 nm [18]. The aim of this work is therefore to develop a simple, green method for cefdinir determination through its reaction with aqueous ninhydrin in bicarbonate medium, giving a yellow color measured at 433 nm. The green use of ninhydrin allows a short heating time (15 minutes) and avoids the use of organic solvents, thereby reducing cost.

Optimization of the reaction conditions The effects of pH, NaHCO3 volume, ninhydrin concentration and heating time were studied. Different molarities of NaOH were used to study the effect of pH on the reaction. No colored product was formed in NaOH medium, so the reaction is specific to bicarbonate medium. 1 mL of NaHCO3, 1 mL of ninhydrin (Figure 9) and heating for 15 minutes were sufficient to give maximum absorbance and a stable yellow product. The developed yellow color was stable for 1 hour.

Method E (conductometric method) Conductometric titration is one of the simplest analytical techniques used in drug standardization laboratories. Precipitimetric conductometric titrations using phosphotungstic acid as titrant are commonly used for the quantitative determination of different compounds, e.g., reproterol HCl, pipazethate HCl and salbutamol sulphate [19].
The present method aims to introduce a new conductometric method for the determination of the cited drugs that is very simple to apply and inexpensive while at the same time offering a high degree of accuracy and precision compared with the reported methods. The conductance measured before any addition of the titrant (volume of phosphotungstic acid equal to zero) is due to the formation of RNHx+ and OH− by hydrolysis. During the titration, the RNHx+ ions are replaced by the more mobile H+, an ion associate is formed, and the conductivity increases. The conductivity continues to increase rapidly after the endpoint. A curve break is observed at a drug-to-reagent molar ratio of 3:2 for all drugs except ceftazidime pentahydrate, which shows a curve break at a drug-to-reagent molar ratio of 1:1. A representative titration curve is shown in Figure 4; it consists of two straight lines intersecting at the end point, after which a sudden change in slope occurs.

Optimization of the reaction conditions The optimum conditions for performing the titration in a quantitative manner were elucidated as described below. 3.5.1.1. Titration medium. Preliminary experiments were carried out with aqueous solutions of both drug and reagent, drug and reagent solutions in an ethanol-water (50%, v/v) mixture, methanolic solutions of both drug and reagent, drug and reagent solutions in a methanol-water (50%, v/v) mixture, and drug and reagent solutions in an acetone-water (50%, v/v) mixture. The aqueous medium gave the highest conductance and the sharpest end point for all drugs. Reagent concentration. The optimum concentration of phosphotungstic acid was found to be 10⁻² M, giving a constant and stable conductance for all drugs. Concentrations lower than 10⁻² M gave unstable readings.

3.5.2. Composition of the complex: In a 50 mL volumetric flask, 6 millilitres of 10⁻² M drug solution were transferred and completed to 50 mL with bidistilled water, then transferred to a beaker and titrated with 10⁻² M PTA. The conductance was measured after 2 minutes of stirring following each addition of reagent solution. A graph of corrected conductivity versus PTA volume was constructed, indicating a curve break at a molar ratio of 3:2 (drug:reagent) except for ceftazidime pentahydrate, which showed a curve break at a drug-to-reagent molar ratio of 1:1.

3.6.1. For methods (A, B, C and D) Standard calibration curves for the cited drugs were constructed by plotting absorbance against concentration. Beer's law limits, linear regression equations, molar absorptivity and Sandell sensitivity were estimated for each method (Table 1). The correlation coefficients were found to be 0.9999, indicating excellent linearity over the Beer's law limits for all methods (Table 1). The limits of detection (LOD) and quantitation (LOQ) for the proposed methods were estimated according to ICH guidelines [20] and are listed in Table 1. Their values indicate the high sensitivity of the proposed methods. Accuracy and precision were determined by analyzing one concentration of each drug in seven replicates. The relative standard deviation (RSD%) and percentage relative error (Er%) were estimated at the 95% confidence level (Table 2). The results showed that the proposed methods have good reproducibility. For Method E: a recovery study of the cited drugs in their commercial preparations was performed using the proposed method. Table 3 shows that the proposed method is accurate and reproducible over a concentration range of 3-30 mg.
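As a worked illustration of how the calibration statistics and the ICH-based LOD/LOQ can be obtained, a minimal Python sketch is given below. The function name and the example concentration/absorbance values are ours for illustration and are not the paper's actual data; using the regression residual standard deviation as the sigma in the ICH formulas is one of the accepted choices and is assumed here.

```python
import numpy as np

def calibration_stats(conc, absorbance):
    """Least-squares calibration line plus ICH-style LOD/LOQ estimates."""
    conc = np.asarray(conc, dtype=float)
    absorbance = np.asarray(absorbance, dtype=float)

    # y = slope * x + intercept by ordinary least squares
    slope, intercept = np.polyfit(conc, absorbance, 1)
    predicted = slope * conc + intercept

    # correlation coefficient of the calibration line
    r = np.corrcoef(conc, absorbance)[0, 1]

    # residual standard deviation, used as sigma in the ICH formulas
    residual_sd = np.sqrt(np.sum((absorbance - predicted) ** 2) / (len(conc) - 2))

    lod = 3.3 * residual_sd / slope   # ICH: LOD = 3.3 * sigma / slope
    loq = 10.0 * residual_sd / slope  # ICH: LOQ = 10  * sigma / slope
    return slope, intercept, r, lod, loq

# Illustrative usage with hypothetical standards (ug/mL vs. absorbance)
conc = [4, 8, 12, 16, 20, 26]
absorbance = [0.21, 0.35, 0.49, 0.63, 0.77, 0.97]
slope, intercept, r, lod, loq = calibration_stats(conc, absorbance)
print(f"y = {slope:.4f}x + {intercept:.4f}, r = {r:.4f}")
print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```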
Table 4 shows a comparison of the results obtained using method D with those of the reported method [21] using the Student t-test and the variance ratio F-test at the 95% confidence level.

Analytical applications The proposed methods were successfully applied to determine the cited drugs in their commercial dosage forms. Recovery studies were performed (Tables 3 and 5). The results were validated by comparison with the reported method [21] using the Student t-test and the variance ratio F-test at the 95% confidence level (Table 4); no significant differences were found between the proposed methods and the reported method [21].

Human urine (for method C) Method C was successfully applied to determine cefdinir in spiked human urine samples with excellent precision and accuracy (Table 6). No interference was found from the biological urine matrix. Moreover, the linearity of the proposed method was checked over a concentration range of 4-26 μg/mL, and excellent linearity was observed. The regression equation was found to be y = 0.0347x + 0.0726, with a correlation coefficient (r²) of 0.9993. The detection (LOD) and quantitation (LOQ) limits were found to be 1.64 and 4.97 μg/mL, respectively. These values confirm the sensitivity of the proposed method in human urine. The proposed methods are therefore simple, green, accurate and precise for determining the cited drugs in their pharmaceutical formulations and in human urine without interference from common excipients or the biological matrix. The proposed methods have higher sensitivity than many of the reported methods. All of them are green analytical methods, so they are inexpensive and eco-friendly. Moreover, they are less time-consuming and do not require difficult extraction procedures. The proposed methods are thus suitable for the routine analysis of the cited drugs in control laboratories.
5,149.2
2008-12-12T00:00:00.000
[ "Medicine", "Chemistry" ]
Classifying breast cancer using multi-view graph neural network based on multi-omics data Introduction: As the evaluation indices, cancer grading and subtyping have diverse clinical, pathological, and molecular characteristics with prognostic and therapeutic implications. Although researchers have begun to study cancer differentiation and subtype prediction, most of relevant methods are based on traditional machine learning and rely on single omics data. It is necessary to explore a deep learning algorithm that integrates multi-omics data to achieve classification prediction of cancer differentiation and subtypes. Methods: This paper proposes a multi-omics data fusion algorithm based on a multi-view graph neural network (MVGNN) for predicting cancer differentiation and subtype classification. The model framework consists of a graph convolutional network (GCN) module for learning features from different omics data and an attention module for integrating multi-omics data. Three different types of omics data are used. For each type of omics data, feature selection is performed using methods such as the chi-square test and minimum redundancy maximum relevance (mRMR). Weighted patient similarity networks are constructed based on the selected omics features, and GCN is trained using omics features and corresponding similarity networks. Finally, an attention module integrates different types of omics features and performs the final cancer classification prediction. Results: To validate the cancer classification predictive performance of the MVGNN model, we conducted experimental comparisons with traditional machine learning models and currently popular methods based on integrating multi-omics data using 5-fold cross-validation. Additionally, we performed comparative experiments on cancer differentiation and its subtypes based on single omics data, two omics data, and three omics data. Discussion: This paper proposed the MVGNN model and it performed well in cancer classification prediction based on multiple omics data. 
Introduction Cancer is one of the leading causes of death in the world today. According to the global cancer statistics report for 2020, there were nearly 19.3 million new cases of cancer and 10 million cancer-related deaths worldwide (Bray et al., 2018). Due to factors such as globalization and economic growth, the number of new cancer cases is expected to continue to rise. Cancer is a disease characterized by the uncontrolled growth and spread of specific cells in the body to other parts of the body. These cells can also transfer to distant body parts, forming new tumors through metastasis (Hanahan and Weinberg, 2011). Tumors can be classified into different grades, known as tumor grading, by examining tumor cells under a microscope. Tumor grading compares the degree of cellular and tissue morphological changes between cancer cells and normal cells, indicating the tumor's differentiation. Generally, based on the abnormality of tumor cells observed under a microscope, tumors are classified into grades 1, 2, or 3 (sometimes also 4), called G1, G2, G3, and G4, respectively (Sobin and Fleming, 1997). These represent well-differentiated, moderately differentiated, poorly differentiated, and undifferentiated tumors. Cancer is also a heterogeneous disease that encompasses various subtypes. The same type of cancer can be divided into subtypes based on different mechanisms of occurrence. Different subtypes of the same cancer reflect distinct molecular carcinogenesis processes and clinical outcomes. With the advent of precision medicine, cancer classification has gradually become one of the fundamental goals of cancer informatics. Heterogeneous cancer populations are grouped into clinically meaningful subtypes based on the similarity of molecular spectra.

Breast cancer is the most common cancer worldwide (Loibl et al., 2021). The number of breast cancer patients is increasing year by year, and the proportion of women under the age of 40 with breast cancer has reached 6.6% (Assi et al., 2013). Breast cancer incidence rates have risen over most of the past four decades; during the most recent data years (2010-2019), the rate increased by 0.5% annually (Giaquinto et al., 2022). Breast cancer, as a highly heterogeneous disease, is composed of different biological subtypes, which possess distinct clinical, pathological, and molecular characteristics, as well as prognostic and therapeutic significance (Reis-Filho and Pusztai, 2011). Therefore, studying breast cancer subtypes is of great significance for precision medicine and prognosis prediction (Waks and Winer, 2019). In the year 2000, Perou et al. first proposed the molecular subtyping of breast cancer, concluding that breast cancer can be divided into four subtypes: the Luminal A, Basal-like, HER2-enriched, and Normal-like subtypes (Perou et al., 2000). Sorlie et al. subdivided the luminal subtype into luminal A and B subtypes (Sorlie et al., 2003). Waks et al. categorized breast cancer into three major subtypes based on the presence or absence of molecular markers, including estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). These subtypes are ER+/PR+/HER2- (luminal A), HER2-positive, and triple-negative breast cancer (TNBC), in which all three of these molecular markers are negative (Yersal and Barutca, 2014). The HER2-positive subtype can be further divided into ER+/PR+/HER2+ (luminal B) and ER-/PR-/HER2+. Tao et al.
categorized breast cancer into five subtypes based on immunohistochemistry (IHC) markers, including ER, PR, and HER2 (Tao et al., 2019).These subtypes include luminal A, B, HER2-positive, TNBC, and unclassified. With the advancement of sequencing technologies, various types of omics data in the biosphere, including transcriptomics data [RNA expression data (Wang et al., 2009;Ozsolak and Milos, 2011)], metabolomics (Shulaev, 2006) data, proteomics (Altelaar et al., 2013) data, methylation patterns (Laird, 2010) data, as well as genomics data [DNA sequence data (Metzker, 2010)], have experienced rapid growth and accumulation.Many researchers have developed corresponding tools to handle this large-scale omics data.Another issue gradually gaining attention from researchers is whether there is interaction between complex traits and omics data.Previous studies mainly focused on the relationship between individual omics data and biological processes.Due to the reliance on a single type of omics data in analyzing the causes of complex traits, there have been few research results in this area until now.Through many existing experimental studies, it is known that there is a specific connection between different omics data, and they can complement each other's missing information.This is crucial for researchers to discover the relationship between complex traits and different omics data (Reif et al., 2004;Sieberts and Schadt, 2007;Hamid et al., 2009;Hawkins et al., 2010;Holzinger and Ritchie, 2012).Integrating different types of omics data and designing reasonable and adequate multi-omics data integration methods to accurately predict cancer differentiation and subtype classification have become hot topics in cancer research. Deep learning, as an emerging and efficient method in the field of machine learning, is more capable of capturing non-linear complex relationships in complex models.It has been widely used in the research of multi-omics data fusion methods (Cai et al., 2022).Mohammed et al. proposed a LASSO based 1D-CNN method and compared it with SVM, ANN, KNN, and bagging tree methods, the results indicating that the classification performance of the deep stacking method was superior to the traditional machine learning method (Mohammed et al., 2021).Li et al. proposed the MoGCN method by integrating multi-omics data based on a graph Convolutional network (GCN).Autoencoders and similarity network fusion methods are used to reduce and construct a patient similarity network (PSN) respectively to capture complex nonlinear relationships among multi-omics data (Li et al., 2022).Xing et al.Proposed the MLE-GAT method, namely multi-layer embedded graph attention method, uses WGCNA method to format each patient's omics data into a co-expression network and uses the full gradient map significance mechanism to identify disease-related genes (Xing et al., 2021).Blanco et al. points out the need to maintain a certain balance between biology and computer technology, and to integrate biological knowledge into modeling methods (Linares-Blanco et al., 2021).Leng et al. suggests that the best foundational model for predicting the fusion of multiple omics data is the GNN model (Leng et al., 2022). 
This paper considers the relations between feature nodes in the aggregation of GCN model, which are constructed based on multiple sets of omics data to form a similarity network.The correlation between samples can be captured through this similarity network, effectively preserving the biological semantic and geometric structures of the data.While for the GAT model, the relations between nodes are learned through network training.However, especially when the sample size is small, the training effect may not be satisfactory.Therefore, this paper adopts the GCN model instead of the GAT model in the design, and subsequent experiments have also validated this design. Data collection The breast cancer data used in this study were obtained from The Cancer Genome Atlas (TCGA) database (Weinstein et al., 2013), which contains various cancer types and their corresponding omics data.A total of 606 breast cancer cases were carefully selected, which included gene expression data, DNA methylation data, copy number variation (CNV) data, differentiation annotation, and subtype annotation.The specific statistical information of the mRNA, DNA methylation, and CNV data for the collected breast cancer cases is shown in Table 1.Among the breast cancer cases with differentiation annotation, there were 245 samples labeled as low differentiation (G3), 286 samples labeled as medium differentiation (G2), and 75 samples labeled as high differentiation (G1).The detailed information is presented in Table 2. In this article, Tao et al. classified breast cancer into four subtypes using immunohistochemistry (IHC) labeling: luminal A, luminal B, HER2-positive, and triple-negative breast cancer (TNBC).The luminal A subtype is the most common, accounting for 60% of all breast cancer subtypes (Malhotra et al., 2010).The majority of patients with the luminal B subtype are elderly.Approximately 25% of breast cancer patients are HER2-positive, which is associated with a poorer prognosis.Most patients with HER2-positive advanced breast cancer are likely to have lymph node metastasis in the axillary region.The TNBC subtype is characterized by the absence of estrogen receptor (ER), progesterone receptor (PR), and HER2 (Tao et al., 2019).Compared to other subtypes of breast cancer, TNBC tends to rapidly deteriorate and metastasize. In the breast cancer cases with subtype annotation, there were a total of 398 cases.Out of these, 277 cases were annotated as Luminal A, 40 were annotated as Luminal B, 11 were annotated as HER2(+), and 70 were annotated as TNBC.Table 3 provides detailed information on these cases.The above three omics data and two annotation files are provided in the Supplementary Material. Data preprocessing Generally, deep learning models do not require separate feature selection, as they can achieve this through the neural network's weights.However, due to the "large p small n" dimensionality catastrophe problem in omics data, training the network weights of omics data using the deep learning model is not adequate.In deep neural networks, fewer features often mean better interpretability and higher training speed.In this study, the collected breast cancer case sample data underwent preprocessing operations using three feature selection algorithms: chi-square test, linear normalization, and minimum redundancy maximum relevance (mRMR) (Yiming, 1997;Peng et al., 2005;Forman, 2008).The specific data preprocessing workflow is shown in Figure 1. 
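The preprocessing pipeline just outlined (chi-square filtering, linear normalization, and mRMR, described in more detail in the next paragraph) can be sketched in Python as follows. This is a simplified illustration under assumptions: the function name is ours, the inputs are scaled before the chi-square step because scikit-learn's chi2 requires non-negative values, and the mRMR stage is approximated by a greedy relevance-minus-redundancy pass using mutual information and absolute correlation; the authors' exact implementation may differ.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

def preprocess_omics(X, y, k_chi2=5000, k_mrmr=500):
    """Chi-square top-k filtering, [0, 1] scaling, then a greedy mRMR-style selection."""
    # 1) scale to [0, 1] (chi2 needs non-negative inputs), then keep the
    #    k_chi2 features most associated with the class labels
    X = MinMaxScaler().fit_transform(X)
    chi2_selector = SelectKBest(chi2, k=min(k_chi2, X.shape[1])).fit(X, y)
    X = chi2_selector.transform(X)

    # 2) relevance of each remaining feature to the labels
    relevance = mutual_info_classif(X, y, random_state=0)

    # 3) greedy mRMR-style pass: pick the feature whose relevance minus its
    #    mean redundancy to the already selected set is largest
    corr = np.nan_to_num(np.abs(np.corrcoef(X, rowvar=False)))  # redundancy proxy
    selected = [int(np.argmax(relevance))]
    while len(selected) < min(k_mrmr, X.shape[1]):
        candidates = [i for i in range(X.shape[1]) if i not in selected]
        redundancy = corr[np.ix_(candidates, selected)].mean(axis=1)
        scores = relevance[candidates] - redundancy
        selected.append(candidates[int(np.argmax(scores))])
    return X[:, selected], selected
```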
This paper uses the chi-square test to select features for each omics type. For each classification task, the features are ranked by their chi-square statistics computed from the corresponding samples, and the top-k features are selected for each omics data type; in this study, k is set to 5000. Normalization is performed using linear scaling, transforming the data values to fit within the range [0, 1]. The paper also employs the minimum redundancy maximum relevance (mRMR) feature selection algorithm. The difference between each feature's maximum relevance value and its minimum redundancy value is used as the feature score. The features are then sorted in descending order of score, and the top 500 features are retained. These selected features are favorable for cancer differentiation and subtype prediction.

Graph construction A graph is a data structure consisting of nodes and edges. Many real-life scenarios can be represented as graphs or networks. For example, resources and users in recommendation systems can be considered as nodes in a graph, and the relationships between users and items can be considered as edges. Complex objects such as chemical molecules can also be abstracted as graphs (Zhou et al., 2020). Most deep learning algorithms operate on data such as speech, images, and text, which have tidy and regular structures; conventional deep learning algorithms have difficulty handling irregular and complex network structures. The Graph Convolutional Network (GCN) (Kipf and Welling, 2016) model can process such graph structures.

In this paper, patient similarity networks are constructed using cosine similarity for the three kinds of omics data, namely mRNA, DNA methylation, and CNV data, respectively (Pai and Bader, 2018). The cosine similarity is given by Eq. 1:

s(A, B) = (Σᵢ Aᵢ Bᵢ) / ( sqrt(Σᵢ Aᵢ²) · sqrt(Σᵢ Bᵢ²) )  (1)

where A and B are two attribute vectors and Aᵢ and Bᵢ are their components. Each patient sample is a node in the patient similarity network, and the goal of each GCN in the model is to learn feature aggregation from the graph-structured data by leveraging the features of each node and the relationships between nodes. Therefore, the input of the GCN module consists of two parts: the feature matrix and the graph structure description. The feature matrix is represented as X ∈ R^(n×d), where n is the number of nodes and d is the number of input features. The graph structure description is an adjacency matrix A ∈ R^(n×n), constructed by computing the cosine similarity between node pairs, as in Eq. 2:

A_ij = s(x_i, x_j) if i ≠ j and s(x_i, x_j) ≥ ϵ, and A_ij = 0 otherwise  (2)

where A_ij represents the adjacency relationship between node i and node j, x_i and x_j are the feature vectors of nodes i and j, and s(x_i, x_j) is their cosine similarity. ϵ is a threshold determined by k, where k represents the average number of edges preserved for each node, computed as in Eq. 3:

k = (1/n) Σ_{i≠j} I( s(x_i, x_j) ≥ ϵ )  (3)

where I(·) is the indicator function and n is the number of nodes. With the similarity network, a GCN can be trained using the omics features and the corresponding similarity network to learn from each specific omics data type.
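As an illustration of Eqs. 1-3, the following Python sketch builds one weighted patient similarity network from a single omics feature matrix. The function name and the way the threshold ϵ is chosen (sorting the pairwise similarities and keeping roughly k edges per node on average) are assumptions made for illustration, not the authors' released code.

```python
import numpy as np

def build_patient_similarity_graph(X, avg_edges_per_node=10):
    """Weighted adjacency from pairwise cosine similarity, thresholded so that
    each node keeps roughly `avg_edges_per_node` neighbours on average."""
    # pairwise cosine similarity between patient feature vectors
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    sim = (X @ X.T) / (norms * norms.T + 1e-12)
    np.fill_diagonal(sim, 0.0)                       # no self-similarity edges

    # pick eps so that the average number of retained edges per node is about k
    n = X.shape[0]
    flat = np.sort(sim[np.triu_indices(n, k=1)])[::-1]
    n_keep = min(len(flat) - 1, avg_edges_per_node * n // 2)
    eps = flat[n_keep]

    # weighted adjacency: keep the similarity value where it exceeds eps
    A = np.where(sim >= eps, sim, 0.0)
    return A, eps
```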
Model design The proposed model consists mainly of a graph convolutional network (GCN) module and an attention (Veličković et al., 2017) module. The GCN module learns the feature aggregation of each specific omics data type, while the attention module fuses the multi-omics features produced by the GCN modules for the different omics data. The attention module can assign a different attention weight to each neighbor of a node, thus identifying the more important neighbors for better classification of breast cancer differentiation and its subtypes. The detailed architecture of the model for predicting the differentiation degree and subtypes of breast cancer is shown in Figure 2.

In this paper, the GCN is constructed by stacking multiple graph convolutional layers. Specifically, each layer is defined as Eq. 4:

H^(l+1) = σ( Â H^(l) W^(l) )  (4)

where l is the index of the graph convolutional layer, H^(l) is the input of the l-th layer, W^(l) is the weight matrix of the l-th layer, σ(·) represents a non-linear activation function, and H^(l+1) is the output of the l-th layer. When the number of graph convolutional layers is too large, the resulting node feature vectors become overly smooth, meaning that the features of different nodes become very similar. This is mainly because each GCN layer integrates information from a node and its neighbors; as the layers deepen, each node incorporates information from more neighbors, including some unrelated nodes, which ultimately leads to similar feature vectors for different types of nodes. It was observed that when the number of graph convolutional layers exceeded three, there was no significant improvement in the experimental results; instead, the computational time increased and overfitting occurred on some datasets. Therefore, the GCN module in this paper's model consists of three graph convolutional layers. To train the GCN effectively, this paper extends the approach of Kipf et al. (Kipf and Welling, 2016) by further modifying the adjacency matrix A as in Eq. 5:

Â = D^(-1/2) Ã D^(-1/2), with Ã = A + I  (5)

where D is the diagonal degree matrix of Ã and I is the identity matrix.

The attention model was introduced by Veličković et al. (2017). It incorporates a self-attention mechanism during propagation through the network. Unlike the GCN, which treats all neighbors of a node equally, the attention model assigns a different attention score to each neighbor; a higher score indicates that the neighbor is more important. The attention network is implemented by stacking multiple graph attention layers. The input to a single graph attention layer is a set of node feature vectors, as in Eq. 6:

h = {h_1, h_2, ..., h_N}, h_i ∈ R^F  (6)

where N is the number of nodes in the node set and F is the dimension of the corresponding feature vectors. The output of each layer is a new set of node feature vectors, as in Eq. 7:

h′ = {h′_1, h′_2, ..., h′_N}, h′_i ∈ R^(F′)  (7)

where F′ is the dimension of the new node feature vectors.
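Before turning to the attention-based fusion, the three-layer per-omics GCN channel described above can be sketched in PyTorch. This is a minimal illustration under assumptions: the class and function names, the layer widths, and the leaky-ReLU activation are ours and are not taken from the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used in Kipf-style GCNs."""
    A_tilde = A + torch.eye(A.shape[0])
    deg = A_tilde.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

class OmicsGCN(nn.Module):
    """Three stacked graph-convolution layers for one omics channel."""
    def __init__(self, in_dim, hidden_dim, out_dim, dropout=0.5):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w3 = nn.Linear(hidden_dim, out_dim, bias=False)
        self.dropout = nn.Dropout(dropout)

    def forward(self, X, A_hat):
        # H^{(l+1)} = sigma(A_hat H^{(l)} W^{(l)}), repeated three times
        h = F.leaky_relu(A_hat @ self.w1(X))
        h = self.dropout(h)
        h = F.leaky_relu(A_hat @ self.w2(h))
        h = self.dropout(h)
        return A_hat @ self.w3(h)
```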
In order to obtain sufficient expressive power to transform the input features into higher-level features, the graph attention layer first applies self-attention to the set of input node feature vectors, as in Eq. 8:

e_ij = a( W h_i, W h_j )  (8)

where the shared attention mechanism a is a mapping R^(F′) × R^(F′) → R, W ∈ R^(F′×F) is a weight matrix shared by all nodes, and e_ij represents the importance of the features of node j to node i. In this study, the attention module is used to compute the attention coefficients for each omics feature matrix. The attention mechanism is then applied to aggregate the different types of omics features, resulting in the final fused omics feature matrix, which is further processed with a softmax function for the final label prediction.

Performance metrics For binary classification tasks, samples are generally divided into positive and negative classes, so a classifier has four possible outcomes: TP, TN, FP, and FN. TP refers to correctly classifying positive samples as positive; TN refers to correctly classifying negative samples as negative; FP refers to incorrectly classifying negative samples as positive; FN refers to incorrectly classifying positive samples as negative. To evaluate the model's predictive performance, we mainly used three evaluation metrics: accuracy, F1 score, and the area under the receiver operating characteristic curve (AUC-ROC); the corresponding formulas are given in Eqs 9-14. Accuracy refers to the proportion of correctly predicted results among all samples. F1 is the harmonic mean of precision and recall; it is worst at a value of 0 and best at a value of 1. The receiver operating characteristic curve is known as the ROC curve, and the area under the curve (AUC) is computed as the integral of the ROC curve; a higher AUC indicates better classification results. For multi-class tasks we adopt two evaluation indexes, F1 macro and F1 weighted (Leng et al., 2022), whose formulas are given in Eqs 15-17. F1 macro takes values between 0 and 1 and is unaffected by class imbalance. F1 weighted is the weighted average of the per-class F1 scores, with weights given by the proportion of each class. The difference between them is that F1 macro assigns the same weight to every class, while F1 weighted assigns weights according to the class proportions.

The proposed model and the comparison models were executed on a workstation running Ubuntu 18.04.5 LTS with PyTorch v1.7.0. The workstation configuration was: CPU AMD Ryzen 7 3700X (8 cores, 16 threads), 64 GB memory, GPU GeForce GTX 1080 Ti (11 GB).
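The evaluation metrics above map directly onto standard scikit-learn calls. The following sketch (the function name is ours) shows how one cross-validation fold could be scored, assuming that positive-class probability scores are available for the AUC in the binary tasks.

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate_fold(y_true, y_pred, y_score=None):
    """Accuracy, macro/weighted F1 and (for binary tasks) AUC for one test fold.

    y_true  : ground-truth labels
    y_pred  : predicted class labels
    y_score : predicted probability of the positive class, binary tasks only
    """
    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "f1_weighted": f1_score(y_true, y_pred, average="weighted"),
    }
    if y_score is not None:
        metrics["auc"] = roc_auc_score(y_true, y_score)
    return metrics
```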
Implementation details In deep learning, networks with many parameters are very powerful (Srivastava et al., 2014); however, dealing with overfitting is a key issue. This paper adopts two approaches to address it. The first is to add dropout layers to the model, which randomly drop elements of the neural network during training and prevent overfitting caused by excessive training. In our model, each sub-network channel consists of three sequential graph convolution layers with two dropout layers, and the channels are then weighted using the attention mechanism. The second approach is early stopping during training: if the validation loss does not show a significant decrease within 100 epochs, training is stopped (Prechelt et al., 2012).

This paper computes the cross-entropy between the actual distribution and the predicted distribution of breast cancer differentiation and its subtypes (Tabor and Spurek, 2014), and the loss is minimized during training. The loss function used in this paper's model is shown in Eq. 18:

L = − Σ_{l ∈ Y_L} ln( softmax(C Z_l)_{Y_l} )  (18)

where L is the loss function, Y_L is the set of labeled node indexes, Y_l is the label of node l (that is, the breast cancer differentiation degree or subtype), C is the parameter of the classifier, and Z_l is the final embedding of the labeled node. The entire model is optimized through end-to-end backpropagation.

The performance of binary classification 3.3.1 Analysis of experimental results of binary classification in differentiation degree In order to comprehensively evaluate the performance of our MVGNN model against traditional machine learning methods and recent supervised multi-omics data integration methods, 5-fold cross-validation is employed for all models, and the average accuracy, average AUC, and average F1 obtained on the test dataset are used as evaluation metrics. The compared models include Support Vector Machine (SVM), Random Forest (RF), Neural Network (NN), GCN, GAT, and Multi-Omics Graph Convolutional Networks (MOGONET). MOGONET is a recent multi-omics data integration method published by Wang et al. (2021); its View Correlation Discovery Network (VCDN) is used to explore cross-omics correlations in the feature space, enabling effective multi-omics integration. Three pairwise breast cancer differentiation classifications are considered: well-differentiated vs. moderately differentiated (G1 vs. G2), well-differentiated vs. poorly differentiated (G1 vs. G3), and moderately differentiated vs. poorly differentiated (G2 vs. G3). The same dataset split is used, and the average accuracy, AUC, and F1 from 5-fold cross-validation are used as evaluation metrics. The experimental results of all models in predicting any two grades of breast cancer differentiation are shown in Table 4.
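A compressed sketch of the dropout-plus-early-stopping training regime described in the implementation details is given below. The function name, the improvement tolerance, and the exact interpretation of the patience threshold are our assumptions for illustration and are not the authors' released training code.

```python
import copy
import torch
import torch.nn.functional as F

def train_with_early_stopping(model, optimizer, forward_args, y, train_idx, val_idx,
                              max_epochs=1000, patience=100):
    """Train on labelled nodes with cross-entropy loss; stop when the
    validation loss has not improved for `patience` epochs."""
    best_val, best_state, epochs_no_improve = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        logits = model(*forward_args)
        loss = F.cross_entropy(logits[train_idx], y[train_idx])
        loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = F.cross_entropy(model(*forward_args)[val_idx], y[val_idx]).item()
        if val_loss < best_val - 1e-4:
            best_val = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_no_improve = 0
        else:
            epochs_no_improve += 1
            if epochs_no_improve >= patience:
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```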
In the experimental process, SVM, RF, NN, GCN, and GAT were trained using the preprocessed multi-omics data directly concatenated as input, and all methods were trained on the same preprocessed data. According to Table 4, the proposed MVGNN model for integrating multi-omics data achieved the highest accuracy, AUC, and F1 compared to traditional machine learning methods, graph convolutional network models, and recent multi-omics integration methods in classifying any two grades of breast cancer differentiation. The values are: accuracy 0.778, AUC 0.745, F1 0.809. It can be concluded that the proposed model outperforms the compared methods in the pairwise classification of breast cancer differentiation.

Analysis of experimental results of binary classification on subtypes This article adopts five-fold cross-validation to train all models, and all methods use the same training, validation, and test sets. The evaluation metrics are average accuracy (ACC), average area under the curve (AUC), and average F1 score. The pairwise classifications of breast cancer subtypes are (1) luminal A vs. luminal B, (2) luminal A vs. HER2(+), (3) luminal A vs. TNBC, (4) luminal B vs. HER2(+), (5) luminal B vs. TNBC, and (6) HER2(+) vs. TNBC. The experimental results of predicting any two subtypes of breast cancer with each model are shown in Table 5. Based on the data in Table 5, this paper's model achieved the highest accuracy, AUC, and F1 score compared to traditional machine learning methods, graph convolutional network models, and recent integrated multi-omics data methods for any pairwise classification of breast cancer subtypes. The values are: accuracy 0.9180, AUC 0.9530, and F1 score 0.7155. It can be concluded that this paper's model outperforms traditional machine learning methods and the latest multi-omics data integration methods in the overall pairwise classification of breast cancer subtypes.

Analysis of the results of multi-classification experiments on differentiation degree To better evaluate the performance of the MVGNN model, this paper uses the model to predict the differentiation degree and subtypes of breast cancer in a multi-class setting. Specifically, based on the same dataset partitioning, the average accuracy, average F1_weighted, and average F1_macro calculated through 5-fold cross-validation are used as evaluation metrics. The multi-class task for breast cancer differentiation degree is G1 vs. G2 vs. G3. The experimental results of the MVGNN model and the other methods in the multi-classification of breast cancer differentiation degree are shown in Table 6. According to Table 6, the MVGNN model proposed in this paper achieves the highest ACC (0.621), the highest F1_weighted (0.597), and the highest F1_macro (0.541) compared to traditional machine learning methods, graph convolutional network models, and recent integrated multi-omics data methods in the multi-classification of breast cancer differentiation degree. It can be concluded that the model proposed in this paper outperforms traditional machine learning methods and the latest multi-omics data integration methods in the multi-classification problem of breast cancer differentiation degree.

Analysis of experimental results of multiple classifications on subtypes In the same way, the experimental details in Section 3.4.1 are utilized in this study. The multi-class task for breast cancer subtypes is luminal A vs. luminal B vs. HER2(+) vs.
TNBC.The specific experimental results of the MVGNN model compared with other methods on multi-classification of breast cancer subtypes are presented in Table 7.According to Table 7, it can be observed that the MVGNN model proposed in this paper, as compared to traditional machine learning methods, graph convolutional network models, and the latest integrated multi-omics data approaches, achieves the best performance in the multi-classification of breast cancer subtypes.The corresponding performance measures are the accuracy (ACC) value of 0.735, the weighted F1 score (F1_weighted) value of 0.725, and the macro F1 score (F1_macro) value of 0.636.Hence, these results are sufficient to demonstrate the effectiveness of the proposed model in this study. The performance of different network module • Analysis of experimental results on differentiation classification To select the module most beneficial for breast cancer differentiation and subtype classification in the model, this study employed a five-fold cross-validation approach to assess the performance of different modules on the same test dataset.For all models, the same training and validation sets were utilized. Specifically, this study performed 5-fold cross-validation on the training dataset, with all modules utilizing the same training, validation, and test sets.Mean accuracy, AUC value and mean F1 value were used as measurement metrics.The detailed experimental results of different modules on two types of breast cancer differentiations are presented in Table 8; Figure 3.By comparing the experimental results of GCN + VCDN and GAT + VCDN, as well as GAT + Attention and GCN + Attention, in predicting any two types of breast cancer differentiations, it can be observed that there exists a specific correlation between biological genomic data.The GAT module did not utilize this correlated information, while the GCN module was able to fully exploit the correlations between biological data, resulting in better differentiation prediction outcomes.Similarly, by comparing the experimental results of GCN + VCDN and GCN + Attention, as well as GAT + VCDN and GAT + Attention, it was found that introducing the attention module improved the performance of predicting breast cancer differentiation.This is because the attention mechanism in the attention module can identify more important neighbors, enabling better classification of breast cancer differentiation.Therefore, this study chose the GCN + Attention model, the MVGNN model, as the final model for predicting breast cancer differentiation. • Analysis of experimental results on subtype classification Similarly, the experimental setup for predicting breast cancer differentiation was used.The specific experimental results of different modules on any two breast cancer subtypes are shown in Table 9; Figure 4. By comparing the experimental results of GCN + VCDN and GAT + VCDN, as well as GAT + Attention and GCN + Attention in predicting two different subtypes of breast cancer, it can be observed that the introduction of the GCN module can improve the accuracy of breast cancer subtype prediction to a certain extent.This is because GCN can effectively utilize the correlation in the biological data.Similarly, by comparing the experimental results Results of any two classifications of different modules in breast cancer differentiation. 
of GCN + VCDN and GCN + Attention, as well as GAT + VCDN and GAT + Attention, it can be concluded that the introduction of the attention module increases the precision of predicting breast cancer differentiation.This also indicates that introducing an attention mechanism can improve the model's performance. The performance of multi-omics data fusion • Analysis of experimental results on differentiation classification Specifically, for different types of omics data combinations, the same data set partitioning was adopted in this study, and the average accuracy, average AUC value, and average F1 value of 5-fold crossvalidation were used as metrics.Figure 5 shows the average accuracy, AUC value, and F1 value of the classification results for different degrees of breast cancer differentiation using different types of omics data.DNA_methylation, mRNA, and CNV in the figure represent the single omics data classification experiments using the MvGNN model with mRNA expression, DNA methylation, and CNV data, respectively.mRNA + DNA_methylation, mRNA + CNV, and DNA_methylation + CNV refer to the classification experiments using two types of omics data simultaneously.mRNA + DNA_ methylation + CNV refers to the classification experiments simultaneously using all three types of omics data.The specific experimental results are shown in Table 10; Figure 5. From Table 10; Figure 5, it can be observed that compared to using a single type of omics data or combining two types of omics data, the model integrating three types of omics data achieved the highest accuracy AUC, and F1 scores in predicting any two subtypes of breast cancer differentiation.The scores were 0.778, 0.803, and 0.809, respectively.This indicates that the model in this study successfully extracted useful information for classification from different omics data. • Analysis of experimental results on subtype classification Similarly, this paper uses the dataset partitioning described in Section 3.5.1 and utilizes the average accuracy, average AUC, and average F1 values from 5-fold cross-validation as performance metrics.Experiments were conducted on the classification of any two subtypes of breast cancer using different types of omics data.The integrated model of three omics data achieved the highest accuracy in classifying any two subtypes of breast cancer, with values of 0.921 (luminal A vs. luminal B), 0.968 (luminal A vs. HER2+), 0.91 (luminal A vs. TNBC), 0.82 (luminal B vs. HER2+), 0.964 (luminal B vs. TNBC), and 0.925 (HER2+ vs. TNBC).This indicates that the model proposed in this paper can extract useful information for classification from different omics data.Furthermore, regarding AUC, the integrated model based on three omics data achieved the highest values in classifying any two subtypes of breast cancer, except for the luminal A vs. HER2+ and luminal A vs. TNBC classifications.The respective AUC values were 0.881 (luminal A vs. luminal B), 0.925 (luminal B vs. HER2+), 0.997 (luminal B vs. TNBC), and 0.979 (HER2+ vs. TNBC).Although the model based on three omics data for the luminal A vs. HER2+ classification was 0.6% lower and for the luminal A vs. TNBC classification was 1.2% lower compared to the models integrating mRNA expression data and CNV data or DNA methylation data, respectively, this still demonstrates the robustness of the proposed model in handling Results of any two classifications of different modules in breast cancer subtypes. 
Conclusion The grading and subtyping of cancer, as complex traits with distinct molecular features, have significant prognostic and therapeutic implications. Therefore, research on cancer grading and subtyping is essential for precision medicine and prognostic cancer prediction. In recent years, numerous supervised multi-omics data integration methods have emerged domestically and internationally. However, these methods do not consider the interrelationships between different types of omics data, which may bias the final prediction results towards a specific type of omics data. It is crucial to explore how to improve the predictive performance of models by utilizing the interrelationships between different types of omics data.

This study proposes a multi-omics data fusion algorithm based on a heterogeneous graph neural network. The algorithm combines graph convolutional networks and graph attention networks to predict the differentiation and subtypes of cancer. The breast cancer data from TCGA are used in this study, including gene expression data, DNA methylation data, copy number variation (CNV) data, differentiation level annotations, and subtype annotations for each breast cancer sample.

First, preprocessing operations, including the chi-square test, normalization, and minimum redundancy maximum relevance (mRMR), are performed on the three types of omics data for breast cancer. Then, we conduct experiments using the MVGNN model, traditional machine learning algorithms, and popular multi-omics data integration methods separately for binary and multi-class classification of breast cancer differentiation and subtypes using 5-fold cross-validation. According to the experimental results, our model achieves the best performance in both the binary and the multi-class classification of breast cancer differentiation and subtypes.

Furthermore, to select the modules in the model that are more conducive to predicting breast cancer differentiation and subtypes, we also perform 5-fold cross-validation to test the performance of different modules on the test set. Finally, to further test the classification prediction performance of the model, we compare the differentiation and subtype experiments using only one type of omics data, two types of omics data, and all three types of omics data. Based on the experimental results, the breast cancer classification predictions made by the MVGNN model with all three types of omics data are better than those using two or just one type of omics data.

Discussion The MVGNN model proposed in this paper has achieved good results in predicting breast cancer differentiation and subtypes, but some work remains for the future. For example: the overall classification performance of the proposed MVGNN model is satisfactory; however, from the experimental results in Section 3.5.2, it can be observed that
our model needs improvement in differentiating between the luminal A and HER2(+) subtypes, as well as between the luminal A and TNBC subtypes of breast cancer. This also indicates that our gene expression, DNA methylation, and CNV data are insufficient to distinguish the boundaries between luminal A and HER2(+) and between luminal A and TNBC. Therefore, these breast cancer subtypes may differ in other types of omics data. In future work, we aim to integrate additional omics data, such as metabolomics and mutation data, to enhance our breast cancer subtype classification model. This paper primarily trains the MVGNN model on the breast cancer dataset from TCGA. To further demonstrate the performance of the MVGNN model in cancer classification and diagnosis, future studies can include additional datasets of different cancers, such as lung cancer, liver cancer, gastric cancer, and colon cancer, which have high mortality rates.
8,131.2
2024-02-20T00:00:00.000
[ "Medicine", "Computer Science" ]
A novel textile-based UWB patch antenna for breast cancer imaging Breast cancer is the second leading cause of death for women worldwide, and detecting cancer at an early stage increases the survival rate by 97%. In this study, a novel textile-based ultrawideband (UWB) microstrip patch antenna was designed and modeled to work in the 2-11.6 GHz frequency range, and a simulation was used to test its performance in early breast cancer detection. The antenna was designed with an overall size of 31 × 31 mm² using a denim substrate and a 100% polyamide-based fabric metalized with copper, silver, and nickel to provide comfort for the wearer. The designed antenna was tested on four numerical breast models, ranging from simple tumor-free models to complex models with small tumors. The size, structure, and position of the tumor were modified to test the ability of the antenna to detect cancers with different shapes, sizes, and positions. The specific absorption rate (SAR), return loss (S11), and voltage standing wave ratio (VSWR) were calculated for each model to measure the antenna performance. The simulation results showed that SAR values were between 1.6 and 2 W/g (10 g SAR) and were within the allowed range for medical applications. Additionally, the VSWR remained in an acceptable range from 1.15 to 2. Depending on the size and location of the tumor, the antenna return losses of the four models ranged from -36 to -18.5 dB. The effect of bending was tested to determine the flexibility. The antenna proved to be highly effective and capable of detecting small tumors with diameters of up to 2 mm.

Introduction Breast cancer is a condition in which breast cells start growing and dividing uncontrollably and invade surrounding healthy cells or metastasize to several other body parts [1]. Breast cancer was identified as the second leading cause of death for women worldwide, and there were approximately 685,000 deaths in 2020 [2]. According to the World Health Organization, there were more than 2.26 million new cases of breast cancer identified worldwide in 2020; it is the most commonly diagnosed cancer, representing 11.7% of the overall 19.3 million new cancer cases [3]. The American Cancer Society estimates that approximately 297,790 new cases of breast cancer will be diagnosed in women and approximately 43,700 women will die in 2023 in the United States alone [4]. Researchers have emphasized that accurate and highly efficient approaches are needed for breast cancer detection, since detecting cancer at an early stage raises the survival rate by up to 97% [5].

Among the various diagnosis platforms, imaging modalities represent the most important part of cancer diagnosis and treatment, since they offer a wide range of otherwise invisible information and interior body pictures [6]. Breast imaging systems, in general, refer to the commonly used diagnostic methods for detecting breast tumors, such as ultrasonic (U.S.)
imaging, mammography, positron emission tomography (PET), and magnetic resonance imaging (MRI) [7,8].The drawbacks and limitations of the existing breast imaging approaches inspired researchers to create and develop novel microwave-based techniques [9].The nonionizing radiation and noninvasive characteristics of microwaves make them a capable option in the field of breast cancer detection [10,11].The most commonly utilized methods that expose the body to microwaves and analyze the transmitted and received signals are microwave tomography and UWB radar imaging [12,13]. Active microwave imaging is based on electromagnetic scattering due to the dielectric contrast between the different objects under investigation [14].In the situation of breast imaging, the dielectric contrast between different tissue types has been used in active microwave imaging to generate a 2D or 3D image of the breast [15].Over the last few years, several active microwave breast imaging systems have been developed [16,17].Unlike the image reconstruction goal of microwave tomography, UWB's radar-based imaging system addresses a specific computational challenge, focusing on determining the location of large scattering obstacles such as malignant tumors [18]. The importance of antennas in UWB imaging cannot be overstated, as they are the interface between the electromagnetic waves and the objects being imaged.Some transducers (microwave antennas) that operate in the medical microwave frequency range of 300 MHz to 20 GHz are used to illuminate the breast [19,20].These short pulses contain valuable information about the object under investigation.When a tumor is present, it creates backscattered signals at microwave frequencies, which depend on the contrast between healthy and malignant tissues [21].The system's resolution capabilities are influenced by UWB antenna properties, enabling the differentiation of small features or anomalies within the object [22]. In breast imaging, it's essential that the antennas are comfortable and flexible [23].Patient comfort and compliance are critical factors, and antennas that can conform to the shape of the breast while maintaining performance are crucial for practical and effective imaging.Also, the sensitivity of tumor detection depends on the interspace between the antenna and the breast, and decreasing this distance leads to an increase in detection sensitivity [19].On the other hand, recent global health crises, such as the COVID-19 pandemic that emerged in 2019, have underscored the importance of seeking alternatives to conventional breast imaging devices [24].The anticipated progress in the medical device industry is expected to pave the way for out-of-hospital screening, thereby addressing diagnostic delays that may arise from health crises [25].There has been growing interest in developing flexible and textile-based wearable microwave antennas.Wearable prototypes seem to be more affordable and have considerably smaller overall sizes than table-based devices.Textile substrates are increasingly preferred as the primary materials for wearable biomedical antennas due to their flexibility and comfort for the wearer [26]. Mahmood et al. 
[27] proposed a fully grounded UWB textile antenna operating in the 7-28 GHz frequency band for early breast cancer detection.They used denim as the substrate and a shield conductive textile for patch and ground with a total size of 60*50*0.7 mm 3 .Bahrami et al [28] introduced a small, flexible monopole antenna for breast imaging that operated within the 2-5 GHz frequency band.Although their antenna was small at 20*20 mm 2 , the narrow band affected the image resolution.Srinivasan et al. [29] presented a novel antenna that utilized a jean substrate and copper conductive material to detect breast cancer, and it operated within the 2.4 GHz frequency band.Other researchers have worked on designing textile-based wearable antennas for on-body applications [30][31][32][33][34][35]. Wearable antennas have the potential to revolutionize breast cancer screening by providing real-time, portable, and patientfriendly solutions.However, the integration of UWB antennas into wearable textiles presents several technical challenges, including the design of compact and efficient antennas that can maintain their performance when integrated into fabrics, ensuring flexibility and durability, addressing biocompatibility concerns, and ensuring radiation safety [36]. This article addresses these challenges by presenting a novel textile-based UWB patch antenna designed specifically for breast cancer imaging applications.The development of such antennas represents a significant advancement in the field of medical imaging and has the potential to transform the way breast cancer is diagnosed and monitored. The primary objective of the research is to design a UWB patch antenna using textile materials for both the substrate and conductor.In contrast to conventional studies that typically employ fabric as the substrate and metal for the conductor, this study addresses biocompatibility concerns by utilizing a full textile-based antenna.The main objectives of this antenna design are to achieve compactness and simplicity while maintaining operation within the frequency range of 2-11.6 GHz.Achieving this wide frequency coverage is particularly challenging when using fabric materials in small antennas.This extensive frequency range plays a crucial role in facilitating high penetration and resolution in breast imaging.These properties enable the antenna to detect tumors of various shapes and sizes in different areas of the breast.Unlike many existing models that simplify tumor shapes by assuming they are spherical, this research employs more realistic models and acknowledges the diversity of tumor shapes and sizes encountered in clinical scenarios.To assess the performance of each antenna model, various critical parameters, including specific absorption rate (SAR) for radiation safety, return losses, voltage standing wave ratio (VSWR) to ensure antenna efficiency, and the impact of bending to determine flexibility, were examined. 
Antenna design Antenna selection is very important in microwave imaging, and the antenna must be able to transmit signals as accurately and efficiently as possible.In previous research, wearable UWB antennas have shown limitations in resolution, low bandwidth, high SAR values, and larger dimensions.In order to efficiently detect tumors, a wearable antenna designed for breast tumor detection should have a wide bandwidth, low SAR, compact design, and high degree of adaptability.Due to their low-profile conformal designs, low costs, simple manufacturing processes, and adaptabilities in terms of implementation, microstrip patch antennas have been utilized.The patch antenna consists of a conductive ground layer, a dielectric layer above it, and a conducting patch over the substrate, as shown in Fig. 1a.Patch antennas are typically designed to operate at a specific frequency or within a narrow frequency band.However, there are several methods to increase the bandwidth of patch antennas, including increasing the thickness of the substrate, cutting slots, cutting notches, and using a partial ground plane.In this work, bandwidth has been increased by cutting notches, in addition to using a partial ground plan.The proposed rectangular microstrip patch antenna was conceived for a UWB breast cancer imaging application, and it operated in the frequency of range 2-11.6 GHz.The antenna was developed with a jeans substrate with a dielectric ∈ r equal to 1.7 and a height (h) of 0.7 mm.A 100% polyamidebased fabric metalized with copper, silver, and nickel was used as the conductive material for the patch and ground plane with a thickness of 0.11 mm.The dimensions of a patch antenna play a fundamental role in determining its characteristics and performance.The length and width of the patch determine the resonant frequency of the antenna.Generally, a longer patch corresponds to a lower resonant frequency, while a wider patch results in a higher resonant frequency.Adjusting L and W allows the antenna to operate at a specific frequency or within a desired frequency band.Figure 1c shows how the return loss of an antenna changes as the length of the partial ground plane is varied.The most commonly used equations for calculating L and W are based on the fundamental resonant mode of the patch, which is the half-wavelength mode.Equations 1 through 6 indicated the primary antenna dimensions [37].Once the desired resonant frequency and dielectric constant are known, Equation ( 1) is used to determine the patch width (W). (1) where C is the velocity of light, 3 * 10 8 m∕s , ∈ r is the substrate dielectric constant, and fr the antenna resonant frequency. After determining the width, there are additional parameters that need to be considered to calculate the length of the patch accurately.These parameters include the effective dielectric constant, the effective length, and the antenna length extension.These are necessary for more precise design and impedance matching.The effective dielectric constant ( ∈ e ff ) was calculated with Eq. ( 2). 
The effective length (Leff) was calculated as The antenna length extension ( ΔL ) was calculated as To determine the antenna length (L), Antennas are designed to be used in free space; therefore, antenna theory does not work correctly with human tissues, which requires optimizing the antenna design to make it compatible with human tissues.The return loss changed when the antenna was inserted into a breast (high permittivity medium).Miniaturization techniques are used for minimum return loss, especially with the antenna ground, and slot length.The antenna optimization process was performed to achieve the desired bandwidth.The trust region framework (TRF) algorithm was used with the electromagnetic (EM) Computer Simulation Technology (CST) software to optimize the ground plane length to achieve the desired bandwidth range of 2-11.6 GHz.The designed and optimized antenna dimensions are shown in Fig. 1b and Table 1, respectively.The microstrip line feeding method was used to feed the antenna. The SAR, VSWR, and return loss were analyzed for each model to assess its effectiveness.Return loss and VSWR are dependent on the reflection coefficient Γ .The reflec- tion coefficient ( Γ ) indicated the reflected power from the antenna. (2) where V − 0 is the reflected wave, V + 0 is the incident wave, and z 0 andz L are the transmission line impedance and the load impedance respectively. The standing wave ratio is a numerical representation of impedance matching between the antenna and the transmission line.It measures the ratio of the amplitude of the maximum standing wave v max to the minimum standing wave v min as shown in Equation (7).If the VSWR is under 2, an antenna match is typically considered satisfactory. The specific absorption rate is the most suitable metric used in assessing the impact of EM field exposure in the very near field of a Radio frequency (RF) source [38].The following equation was used to determine the local SAR measured in W/kg at any location in the human tissue: where E is the amplitude of the electrical field in human tissue expressed in volts per meter (V/m), is the conductivity of the tissue (in Siemens per meter, S/m), and is the density of the tissue (measured in kilograms per cubic meter).( 7) Breast phantom design A range of breast phantoms was created to evaluate the practicality of the proposed antenna for detecting breast cancer.The Cole-Cole and Debye models are commonly employed to characterize the dielectric properties of biological tissues [39,40].Both models contribute to understanding how biological tissues interact with electromagnetic fields.The IT'IS material parameter database [41] was used with our model to determine the dielectric characteristics of the breast skin, fat, and glandular tissues in the desired frequency range, as shown in Table 1.The tumor properties were assumed according to the literature [42].Computer Simulation Technology (CST) as a well-known software was used for electromagnetic simulation and analysis (Table 2).Figure 2 illustrates that the antenna design was initially tested on a breast model without tumors (Model A) and then on a breast model with a 5 mm radius tumor (Model B).Additionally, one more tumor with a 2 mm radius was placed in a different location in Model B to create Model C to evaluate the ability of the antenna to detect tumors in diverse areas.Finally, a square tumor was introduced to observe how it affected the antenna performance (Model D). 
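As a quick numerical illustration of the patch-sizing equations above (Eqs. 1–6, whose display forms are only partially reproduced in this text), the short Python sketch below evaluates the standard textbook formulas for the stated substrate values (∈r = 1.7, h = 0.7 mm). The 2 GHz lower band edge is used here only as an assumed example design frequency; the dimensions reported in Table 1 were obtained by CST optimization with the TRF algorithm, so this sketch gives a rough starting-point estimate rather than the published design.

```python
import math

C = 3e8  # speed of light (m/s)

def patch_dimensions(fr, eps_r, h):
    """Approximate rectangular-patch dimensions from the standard
    half-wavelength design equations (textbook forms of Eqs. 1-6)."""
    # Eq. (1): patch width
    W = C / (2 * fr) * math.sqrt(2 / (eps_r + 1))
    # Eq. (2): effective dielectric constant
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    # effective length and fringing-field length extension
    L_eff = C / (2 * fr * math.sqrt(eps_eff))
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))
    # physical patch length
    L = L_eff - 2 * dL
    return W, eps_eff, L

# Paper values: jeans substrate with eps_r = 1.7 and h = 0.7 mm;
# 2 GHz is an illustrative starting frequency, not the final design point.
W, eps_eff, L = patch_dimensions(fr=2e9, eps_r=1.7, h=0.7e-3)
print(f"W = {W*1e3:.1f} mm, eps_eff = {eps_eff:.2f}, L = {L*1e3:.1f} mm")
```

In practice such a closed-form estimate only fixes the order of magnitude; the notches and partial ground plane that broaden the bandwidth are then tuned numerically, as described above.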
Results
The return loss, the SAR for radiation safety, the voltage standing wave ratio for antenna efficiency, and the impact of bending for flexibility were examined and calculated for each model to measure the antenna performance. As shown in Fig. 3a, the model without a tumor gave a return loss of −18.5 dB. Figure 3b shows that the return loss increased in magnitude from −18.5 dB to −24 dB due to the presence of a tumor. Figure 3c shows a similar change in S11, demonstrating that tumors of similar size and location produced nearly identical changes in the return loss regardless of the tumor's form. Despite the smaller size of the second tumor in Fig. 3d, which shows two tumors of different sizes, there was a considerable change in the S11 value compared with Model A, indicating that the number of reflecting targets inside the breast directly affects the return loss. Figure 4 shows that the VSWR values varied from 1.1 to 2 throughout the 2–11.6 GHz frequency range, indicating a satisfactory impedance match between the antenna and the transmission line. The specific absorption rate (SAR) values, which quantify the impact of electromagnetic field exposure, remained within the range allowed for medical applications. The SAR varied between 1.6 and 2 W/g (10 g SAR) for the tested models, indicating that the antenna configuration can be used safely in a breast cancer detection system. This is an important finding, as SAR values above the allowed range can harm human tissue, a common problem in much previous research. Figure 5 shows the SAR value at 10.6 GHz, calculated for Model B. The proposed antenna was subjected to bending tests to assess its flexibility, since the human body is not flat. Figure 6 shows the shifts in resonance frequency arising from cylindrical bending; the antenna still operated within the desired frequency range. This showed that the antenna maintained its performance even after bending, indicating its suitability for wearable applications. This is significant because the antenna must conform to the body's curvature for accurate detection.
Discussion
In this study, a novel fully textile-based ultrawideband (UWB) microstrip patch antenna was designed, modeled, and simulated for early breast cancer detection. The antenna was tested with four numerical breast models with different tumor sizes, shapes, and locations to evaluate its performance. The results showed that the antenna was effective in detecting breast tumors. The return loss, which indicates the power reflected from the antenna, increased in magnitude in the presence of a tumor. This change was consistent regardless of tumor shape, indicating that the antenna can detect tumors of different forms. The VSWR values remained within an acceptable range throughout the 2–11.6 GHz frequency range. The flexibility of the antenna was also tested by bending it. The antenna's performance with respect to return loss, VSWR, SAR, and flexibility met the requirements for breast cancer detection.
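As a concrete reading of the return-loss figures just discussed, the sketch below converts the reported S11 values for Models A and B into the reflection coefficient, VSWR, and reflected-power fraction, using the relations given in the Methods (Γ from S11 in dB, and Eq. 7 for the VSWR). Treating S11 at resonance as a scalar magnitude is a simplification of this sketch, not a step from the published analysis.

```python
def gamma_from_s11_db(s11_db):
    """Convert S11 reported in dB (a negative number) to |Gamma|."""
    return 10 ** (s11_db / 20.0)

def vswr(gamma_mag):
    """Eq. (7): VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    return (1 + gamma_mag) / (1 - gamma_mag)

# S11 values at resonance reported for the breast models (dB)
for model, s11 in {"A (no tumor)": -18.5, "B (5 mm tumor)": -24.0}.items():
    g = gamma_from_s11_db(s11)
    print(f"Model {model}: |Gamma| = {g:.3f}, VSWR = {vswr(g):.2f}, "
          f"reflected power = {100 * g**2:.1f}%")
```

Both values fall well inside the VSWR < 2 criterion quoted above, consistent with Fig. 4.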
An analysis presented in Table 3 reveals that although fabric materials can facilitate the attainment of a wide frequency range when employed as substrate and conductor as in [27]and [43], they invariably result in significantly larger antenna sizes.Notably, one of the primary challenges in Textile-based antenna design stems from the dielectric properties of these fabrics.It is essential to acknowledge that the antenna's size plays a pivotal role in its suitability for integration into wearable imaging devices, particularly given the constraints imposed by the limited dimensions of the human breast.Smaller antenna dimensions facilitate their deployment within arrays.The primary achievement of this research lies in the successful development of a compact and lightweight antenna, capable of operating across a UWB frequency range utilizing fully textile-based materials.The designed antenna is small in size compared to the other antennas.On the other hand, as shown in Table 3, antennas with relatively small dimensions [32,35] use metal-based conductive materials as patch and ground.Unfortunately, this design compromises flexibility and adaptability to conform to the body's shape, making them less comfortable and potentially unsafe compared to fabric-based alternatives.Additionally, the antennas exhibit a narrower frequency Fig. 5 Obtained SAR value at 10.6 GHz Fig. 6 Bending effect on the antenna return loss range, which adversely impacts image resolution.It is worth noting that [28] succeeded in fabricating a smaller 20 x 20 antenna but suffered from a limited frequency range.Most previous research has used breast models composed of multiple layers that approximate the shape of the real breast.While these studies have proven successful in detecting spherical-shaped tumors, the current study goes beyond this by performing antenna tests to detect tumors of different shapes and sizes.According to that, this novel approach addresses a critical gap in existing literature.An important outcome of this research is the demonstrated capability to image tumors as small as 2 mm in diameter.Overall, the study demonstrated that the designed textile-based UWB microstrip patch antenna was effective in detecting breast tumors of various sizes, shapes, and locations.The use of a textile-based substrate provided comfort for the wearer, making it suitable for wearable applications.However, it is important to note that this study was based on numerical breast models, and further experimental validation is needed to confirm the antenna's performance in real-world scenarios.The effects of other factors, such as different breast shapes, varying tissue properties, and the presence of other objects or clothing, must be investigated. Conclusion This research was designed to develop a novel, inexpensive, comfortable, fully textile-based wearable UWB microstrip patch antenna capable of detecting breast cancer in its early stages.The antenna operated in the 2-11.6GHz frequency range, which made it possible to attain good resolution at high frequencies above 5 GHz and good penetration at low frequencies.The proposed antenna comprised a jean substrate sandwiched between a patch layer and a ground plane made of a 100% polyamide-based fabric that had been metalized with copper, silver, and nickel. 
The designed antenna was tested with four numerical breast models, and the tumor sizes, shapes, and locations were changed to test the ability of the antenna to detect the tumor.The use of tumors with different forms filled most of the gaps in the previous research, which considered the tumors to be spherical.The specific absorption rate (SAR), return loss (S11), and voltage standing wave ratio (VSWR) were calculated for each model to measure the antenna performance.The simulated SAR values remained between 1.6 and 2 W/g (10 g SAR), which were in the accepted range for medical applications.The VSWR remained within the acceptable range of 1.15-2.The return loss of the antenna varied from −36 to − 18.5 dB for the four models and showed noticeable changes due to changes in the tumor size or location.The bending effect was examined to assess the antenna's flexibility, and it showed good performance after bending.The provided antenna also did not require an immersion medium.As a result, the impacts of immersion-related imaging errors were eliminated, and the uncertainty in tumor localization was reduced.Fabrication and experimental measurement will be performed to validate the results of the designed antenna. Funding Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK).The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. Declarations Conflict of interest The authors have not disclosed any competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Fig. 1 a Fig. 1 a Basic rectangular patch antenna, b proposed antenna after the optimization process c Impact of the partial ground plane in the return loss Fig. 3 a Fig. 3 a S11 for Model A, b S11 for Model B, c S11 for Model C and d S11 for Model D Table 1 Optimized antenna dimensions Table 2 Breast phantom tissue Table 3 Comparison between the proposed work and some important research in the literature
5,287.8
2024-03-26T00:00:00.000
[ "Medicine", "Engineering" ]
Tubular cell loss in early inv/nphp2 mutant kidneys represents a possible homeostatic mechanism in cortical tubular formation Inversion of embryonic turning (inv) cystic mice develop multiple renal cysts and are a model for type II nephronophthisis (NPHP2). The defect of planar cell polarity (PCP) by oriented cell division was proposed as the underlying cellular phenotype, while abnormal cell proliferation and apoptosis occur in some polycystic kidney disease models. However, how these cystogenic phenotypes are linked and what is most critical for cystogenesis remain largely unknown. In particular, in early cortical cytogenesis in the inv mutant cystic model, it remains uncertain whether the increased proliferation index results from changes in cell cycle length or cell fate determination. To address tubular cell kinetics, doubling time and total number of tubular cells, as well as amount of genomic DNA (gDNA), were measured in mutant and normal control kidneys. Despite a significantly higher bromodeoxyuridine (BrdU)-proliferation index in the mutant, total tubular cell number and doubling time were unaffected. Unexpectedly, the mutant had tubular cell loss, characterized by a temporal decrease in tubular cells incorporating 5-ethynyl-2´-deoxyuridine (EdU) and significantly increased nuclear debris. Based on current data we established a new multi-population shift model in postnatal renal development, indicating that a few restricted tubular cell populations contribute to cortical tubular formation. As in the inv mutant phenotype, the model simulation revealed a large population of tubular cells with rapid cell cycling and tubular cell loss. The proposed cellular kinetics suggest not only the underlying mechanism of the inv mutant phenotype but also a possible renal homeostatic mechanism for tubule formation. Introduction Inversion of embryonic turning (inv) mutant mice develop situs inversus, jaundice and polycystic kidneys, with most mutants dying before postnatal day (P) 7 [1,2]. Subsequently, mutations in inv were identified as being responsible for human type II nephronophthisis (NPHP2), an infantile autosomal recessive renal disorder [3]. The cystic phenotype, including tubular dilatation, was shown to be similar in mice and humans [4]. The C-terminal domain of inv is poorly conserved in mice and humans, while the N-terminal domain with ankyrin a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 repeats is highly conserved [5]. The C-terminal domain of the mouse inv was reported to be important for its interaction with the serine-threonine kinase Akt, which plays important roles in cell survival [6]. Introduction of a modified inv gene, lacking the C-terminus (invDC), rescued all inv phenotypes except for cystic kidneys [7,8]. Because the invDC mutant survives longer than inv mutant, the invDC model is useful for investigating renal cystogenesis, excluding its other associated abnormalities. Cystogenesis in polycystic kidney disease (PKD) is the most typical phenotype in ciliopathy [9] because the responsible proteins, such as the polycystin-1 and polycystin-2, are localized in cilia of renal epithelia [10] [11]. In models for autosomal dominant polycystic kidney disease (ADPKD), abnormal proliferation was considered to be one major phenotype, along with epithelial simplification and other phenotypic changes, such as apoptosis, that are likely to be important [12] [13] [14]. 
We previously showed that invDC cystic kidneys had significantly increased cell proliferation and apoptosis in the later, severe stages of cystogenesis [8]. However, the tubular cell kinetics in postnatal early cystogenesis with the inv mutation remains unknown. In addition, it is generally unclear how the abnormal proliferation is linked to cell death, such as apoptosis, in PKD and its biological significance has not been well addressed. Although pathogenic cellular phenotypes, such as oriented mitotic defect, are associated with the collecting duct in the renal medulla [15] [16], the underlying cellular kinetics in the cortical cystogenesis observed in NPHP models such as in invDC mutant remains unexplained. Notably, most cortical tubular cells in these models appeared to have a slow cell cycle with final cell division occurring postnatally, an effect characterized by rapid growth decline from P15 to P30 [17]. This finding was different from those in in vitro cell lines, because these continued to proliferate. Studies using conditional knockout mice showed that severity or onset of the polycystic phenotype occurred within the developmental window of up to P14 [18,19]. These observations raised intriguing questions about the role of increased proliferation in early cortical cystogenesis with the inv mutation, that is, it is still uncertain whether the abnormal cell proliferation was caused by changes in cell cycle length or a defect in growth control in the kidneys in vivo. The purpose of our study was to investigate the significance of proliferation in early cortical cystogenesis and to clarify the complicated cellular kinetics using mathematical modeling. We showed that total tubular cell numbers were unaffected in early cortical cystogenesis, despite a significant increase in the proliferation index, measured by bromodeoxyuridine (BrdU) labeling. Unexpectedly, tubular cell loss with abnormal nuclear protrusion and debris was identified as an early cystogenic phenotype in the invDC mutant although no typical apoptotic cell death was detected. Based on current experimental data, we established a new multi-population shift model involving a large population of tubular cells with rapid cell cycling, along with increased tubular cell loss in a population with slow cell cycling. The proposed tubular cell kinetics model demonstrated the critical role of tubular cell loss in both renal tubular cystogenesis and homeostasis. Tubular cell number was unaffected in early invDC kidneys despite an elevated BrdU proliferation index InvDC kidneys show normal morphology until P15, while early cystic tubules begin to emerge spontaneously within the cortex beginning from P7 [8]. Therefore, we characterized early cystogenesis from P9 to P15 to determine the cellular basis for the increased cell proliferation index we observed. Although the proliferation index was significantly increased in the invDC cystic model (Table 1 and S1 Fig), overall tubular growth in the postnatal cortex showed a similar pattern to that of kidneys in control mice. This implied that a homeostatic mechanism maintaining (or controlling) cell number was active, even in the invDC mutant cystic model. EdU was intraperitoneally injected into control and invDC mutant mice at 24 h before kidney sampling at the indicated postnatal days (P), followed by the intraperitoneal administration of BrdU after 21 h. Kidney sections were processed for BrdU-and EdU-staining as described in Materials and Methods. 
The proliferation index (%) was respectively calculated by dividing the number of BrdU-or EdU-positive tubular cells by the total tubular cell number which is accumulated from 3 or 4 mice in each group (n ! 8 imaging fields per mice), and columns indicate the groups of mice the sections were from. The differences in both BrdU-and EdU-based proliferation rates were statistically significant between control and invDC mutant cells (P < 0.05; Fisher's Chi-squared test). Note that the double-labeling ratio, which indicates the proportion of cells re-entering S phase, was barely detectable in both control and mutant cells with this time lag. The scarce amount of BrdU labeling in tubular cells at P4 was consistent with data shown in S1 Fig. To understand if the proliferation index, based on BrdU-labeling, was linked to the resulting cell number increases in invDC kidneys, we compared the weight, genomic DNA (gDNA) levels and tubular cell number per renal cortical area in mutant and control kidneys. Kidney weights or kidney/body weight ratio in mutant mice were not significantly increased at P4 or P9, compared with in controls (Fig 1a and 1b). Although kidney weights in mutant mice began to increase significantly from P15, these values were potentially overestimated because of increased cystic fluid in the spontaneous cystic tubules. Consistent with this, dissected kidneys had lower weights. There were also no significant differences between mutant and control mice in levels of gDNA in the whole kidneys (Fig 1c). The gDNA content was significantly decreased in the later, severely cystic kidneys (at P30), compared with in controls (S2 Fig). This was consistent with previous reports that apoptosis was significantly elevated at the later stage of kidney damage [8] [20]. Furthermore, average tubular cell number per cortical area was not significantly different in the two mouse strains (Fig 1d). Taken together, these results indicated that the elevated proliferation index was not accompanied by an increase in total tubular cell number in the early invDC kidneys. Although cystic tubule is morphologically characterized by the increased cell number and/ or tubular dilation, these phenotypes are not fully quantified in early invDC cystogenesis. To evaluate the relationship between tubular cell number and diameter, we first performed the histogram analysis of the data in Fig 1d focusing on cell counts per tubular cross section. The histogram analysis showed that the percentage of tubules having from 2 to 4 cells was higher in control mice than in mutants, while the percentage of tubules having more than five cells was higher in mutants than in the controls (Fig 2a). It is noteworthy that the percentages of cystic tubules were increased in mutant, while the total tubule number was decreased (n = 137) relative to control (n = 196) with the same cortical area. The average cell numbers accumulated from all tubules per area (n = 20) were also not statistically significant (43.7 ± SEM = 4.1 in control; 38.0 ± SEM = 3.3 in mutant) even when selecting the cross-sectioned tubules, which was compatible with the result of Fig 1d. Interestingly, we found that the tubular diameter in Tubular cell kinetics in early cortical cystogenesis was unveiled by a new multi-population shift model inv mutants was larger than in the controls when tubules with the same numbers of cells (< 10 cells per tubule) were compared (Fig 2b). 
This implies that tubular dilatation occurred without an increase in cell numbers per tubular cross-section as the initial process during early cytogenesis and, subsequently, the enlarged cystic tubules were linked to the apparent cell number increase. Temporal doubling time analysis of tubular cells during early cortical cystogenesis Increases in the BrdU-based proliferation index can reflect changes in one entire cell cycle length. Therefore, we next compared the doubling times of tubular cells in the controls and mutants. This was evaluated by determining entry into the second S phase (re-entry into next cell cycle) based on the principle that both 5-ethynyl-2´-deoxyuridine (EdU) and BrdU can be incorporated the cells in S phase. Thus, the double labeling experiment was performed, referred to previous report using kidneys [17] and the percentages of double positive cells per total tubular cells were then compared at the indicated time points (Fig 3a). Unexpectedly, there was no change in doubling times in mutant cells at P11 and P14, although the population doubling time at P17 was significantly higher in the mutants than in the controls (Fig 3b). This may have indicated either a minor population of cells re-entering the cell cycle or a changed S phase length. Because the length of the S phase in the invDC mutant was comparable to that in the controls at P15 (S3 Fig), we concluded that the doubling times were comparable in early polycystic kidneys of the mutants and in normal kidneys of control mice, at least until P15. Taken together, considering that total cell numbers were no different in control and mutant kidneys (Fig 1), these results suggested that tubular cell loss likely occurred in renal cystogenesis, preventing an overall increase in cell number. This was demonstrated by temporally tracing the percentages of EdU-labeled tubular cells per total tubular cells (Fig 3c). The elevated EdU-labeling ratio in P8 mutant mice was decreased over time to the same extent as in controls, until P17. Establishment of a multi-population shift model for tubular cell kinetics in renal tubule formation To analyze the cellular kinetics in early cystogenesis, we established a mathematical model for tubular development on the basis of current data and other observations. First, we found that BrdU-positive tubular cells (3 h before labeling) were always detectable in an isolated pattern in each cross-sectioned tubule and that double positive cells were never detected in the same tubule in both controls and invDC mutants (Fig 4a). In contrast, the EdU-labeling experiment (24 h before labeling) exhibited paired EdU-positive cells with adjacent labeled cells first appearing within 24 h. This indicated that the EdU-incorporating cells each divided into paired daughter cells (refer to Fig 5a) because renal tubular cells are thought not to migrate far. Collectively, these observations suggested that each adjacent tubular cell did not divide at the same time, as supported by the fact that cells positive for EdU (24 h before labeling) and BrdU (3 h before labeling) resided next to one another in identical tubules (Fig 4a). We reasoned that heterogeneous tubular cell populations, as suggested in Fig 3b, were derived from an original population dividing in an asymmetrical mitotic manner. 
Based on this concept, we designed a multi-population model (Fig 4b), utilizing a shift probability (p) from a rapid cell cycling population (RP) to a slow cell cycling population (SP) as incidence rate, giving rise to asymmetrical mitosis. The doubling time of each population was defined as T1 = 2 d for SP and T2 = 7 d for RP, based on the following considerations: (1) there are peak population doubling times (3 and 6 d) in postnatal renal development (Fig 3b) and (2) this doubling time pattern was similar to that observed in a previous study, using double labeling with the BrdU and 3 H-thymidine, showing that there were two major peak population doubling times in cortical tubular cells (2-3 and 6-7 d) [17]. Considering that there was tubular cell loss the in invDC mutants (Fig 3c), the probability of cellular loss q1 in RP and q2 in SP was introduced into the model. For the mathematical model fitting, values for average growth rate (P15/P9) and (P30/ P15), calculated as the rates of increase in whole kidney gDNA, were utilized (Fig 1b and S2 Fig). This was based on the criterion that total tubular cell number was comparable in controls and mutants until P15 (Fig 1). In the simulations, we found that cellular loss in SP, but not RP, was a better fitting model for the invDC mutant phenotype and this finding was consistent with the observation that the increased tubular cell numbers were beginning to decrease from P11 (Fig 3c). However, the model still did not sufficiently explain the growth decline from P15 to P30 in later cystic kidneys, accompanied by a significant decrease in whole kidney gDNA (S2 Fig). Importantly, we noticed that it was necessary to also introduce the probability of reentering cell cycle (r) at 168 h (T2) in SP for the model to be consistent with a significantly decreased gDNA at P30. In this context, the probability, r, must be 0.60 in the controls (Fig 3c, blue line: p = 0.95, q2 = 0.00, r = 0.60) to fit the experimental values for change in average growth ratios from P9 to P30. The non-dividing cell population with "1-r" (NDP) biologically indicated both a quiescent population and a slower cell cycling one at > T2, two possibilities that would be difficult to distinguish experimentally. Among the various simulation values, we identified the best fitting model for the invDC mutant phenotype (Fig 4c, red line), demonstrating that a large population of tubular cells with rapid cell cycling (p = 0.95 ! 0.80) and tubular cell loss (q2 = 0.00 ! 0.12) were accompanied by another population that had exited the cell cycle (r = 0.60 ! 0.55). The proposed model showed that both cell proliferation and cell loss occurred, explaining why cell numbers appeared to be unaffected in invDC mutant mice until early cystogenesis at P15 (Fig 1). That is, under conditions of cell loss, the shift probability (p, 0.95 ! 0.80) can decrease if the whole population size is equal to that in a model with no cell loss. Therefore, the increased BrdU or EdU proliferation indexes can be interpreted as an increase in the RP by the decreased shift to the SP, suggesting a defect in cell fate decision (control of the probability shift) rather than in cell cycle machinery. Taken together, the experimental findings and the mathematical model demonstrated a renal homeostatic mechanism in which proliferating tubular cells in RP of the invDC mutant are cleared by cell loss in SP. 
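The shift model can also be sketched numerically. The code below is a minimal, deterministic (expected-value) reading of the model rather than the authors' simulation: each RP division is asymmetric (one RP and one SP daughter) with probability p, SP cells are lost with probability q2 at each SP cycle, and surviving SP cells either divide again (probability r) or join the non-dividing pool (1 − r). Following the Methods, the RP cycle is taken as 48 h and the SP cycle as 168 h (the paragraph above assigns T1 and T2 in the opposite order). Cohorts are treated as synchronized, so the sketch is qualitative and is not expected to reproduce the fitted growth ratios exactly; the parameter sets are the quoted best fits.

```python
def simulate(p, q2, r, days=30, n0=100.0):
    """Expected-value sketch of the multi-population shift model.
    RP divides every 2 days; SP divides (or exits the cycle) every 7 days.
    All cells are treated as synchronized cohorts -- a simplification."""
    RP, SP, NDP, lost = n0, 0.0, 0.0, 0.0
    history = []
    for day in range(1, days + 1):
        if day % 2 == 0:                      # RP division (T = 48 h)
            # prob p: one RP + one SP daughter; prob (1-p): two RP daughters
            SP += RP * p
            RP *= (2 - p)
        if day % 7 == 0:                      # SP checkpoint (T = 168 h)
            lost += SP * q2                   # tubular cell loss in SP
            survivors = SP * (1 - q2)
            NDP += survivors * (1 - r)        # exit to the non-dividing pool
            SP = survivors * r * 2            # re-entering cells divide once
        history.append((day, RP + SP + NDP))
    return dict(history)

for label, pars in {"control": (0.95, 0.00, 0.60),
                    "invDC mutant": (0.80, 0.12, 0.55)}.items():
    traj = simulate(*pars)
    print(f"{label}: P15/P9 = {traj[15]/traj[9]:.2f}, "
          f"P30/P15 = {traj[30]/traj[15]:.2f}")
```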
Although the model also implied a population re-entering cell cycle at T2 in both controls (r, 0.60) and mutants (r, 0.55), the ratio of change in the whole population was approximately 2.0% (Fig 4c right graph). Therefore, we concluded that tubular cell loss following uncontrolled proliferation (q2, ctrl: 0.00 ! mutant: 0.12) was more critical in early invDC cystogenesis. Significant tubular cell loss and γH2AX signaling in early cortical cystogenesis To investigate apoptosis as a possible cause of cellular loss, we performed terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL). TUNEL-positive apoptotic tubular cells were not detected in early cystogenesis at P15 in the renal cortex (Fig 5a), while a TUNEL signal was clearly detected in controls after DNase I treatment or in cisplatininduced apoptosis of the renal cortex (S4a-S4d Fig). However, we observed increased luminal nuclear debris of various sizes and with irregular shapes in mutant mice (Fig 5a and 5b). Striking nuclear protrusions or sloughing cells were also observed in the tubular lumens and some of the cells had brush border loss (Fig 5b-5d). These cellular phenotypes closely resembled those of acute tubular necrosis (ATN) [21], but appeared to be more cell cycle-dependent because most of the protruding cells expressed the G2/M phase marker, phosphohistone H3 (pH3) (Fig 6a). Although the sloughing tubular cells appeared to be dropped into the tubular lumens (a classical observation in ATN), we noticed that they were bound to the luminal side. Thus, this tubular phenotype suggested cell loss in early cystogenesis and was Tubular cell kinetics in early cortical cystogenesis was unveiled by a new multi-population shift model likely to involve renal phagocytic clearance by adjacent tubular cells, as recently described [22]. In support of this, we observed luminal binding of isolated luminal debris, even in precystic tubules (Fig 6a inset). The DNA damage response (DDR) was recently analyzed in models of PKD [23,24], while γH2AX, a molecular marker for the DDR, was implicated as a non-apoptotic marker of mitotic cell death [25]. Thus, γH2AX expression is considered by some to indicate a specific form of cell death [26,27]. Interestingly, a recent in vivo study using the gut, a tubular organ, also showed mitotic catastrophe with γH2AX-positive but TUNEL-negative cells involved in gut homeostasis [28]. We observed γH2AX expression in the nuclei of luminal sloughing cells in the invDC mutants (Fig 6b) in addition to the BrdU-positive cells in both mutants and controls (Fig 6c, left), the latter regarded as a cell cycle marker from the middle stage of S phase to G2 or M [29] [30]. We found that total γH2AX expression in the invDC mutant (8.9%) was higher than in controls (4.6%), while the BrdU-negative and γH2AX-positive cell populations were also significantly elevated (control, 1.4%; invDC, 4.9%) (Fig 6c, middle and right). In the controls, the BrdU-negative and γH2AX-positive cell population was likely to represent those retained in the G2 phase because BrdU was injected at 3 h before harvesting of the kidneys. If the average BrdU-labeling ratio (mutants/controls = 1.5) from P9 to P15 (Table 1) was proportionally applied to a possible ratio estimation of the G2 phase population, the single γH2AXpositive pool (4.9%) must have contained both the additional cells (2.8%), possibly with damaged DNA, and G2-retained cells (2.1%). 
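For clarity, the partition of the γH2AX-positive pool estimated above can be written out as explicit arithmetic. The sketch below simply restates the numbers quoted in the text, under the stated assumption that the G2-retained fraction in the mutant scales with the average BrdU-labeling ratio of 1.5.

```python
# Percentages of total tubular cells quoted in the text
ctrl_gH2AX_only = 1.4   # control: BrdU-negative, gammaH2AX-positive cells
mut_gH2AX_only  = 4.9   # invDC mutant: BrdU-negative, gammaH2AX-positive cells
brdu_ratio      = 1.5   # average mutant/control BrdU-labeling ratio (P9-P15)

# Assumption from the text: the G2-retained fraction scales with the BrdU ratio
g2_retained   = ctrl_gH2AX_only * brdu_ratio     # -> 2.1 %
extra_damaged = mut_gH2AX_only - g2_retained     # -> 2.8 %
print(f"estimated G2-retained: {g2_retained:.1f}%; "
      f"additional, possibly DNA-damaged cells: {extra_damaged:.1f}%")
```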
Overall, elevated γH2AX-expression in the invDC mutants suggested signs of a possible mitosis-linked cell death and DNA-damaged cells exited from the cell cycle, in addition to an increased BrdU-labeling index. Active role of tubular cell loss in renal tubular hemostasis Control of organ size is mediated by the balance of cell proliferation, differentiation and apoptosis in multicellular organisms [31]. Some evidence in PKD models suggested that both increased cell proliferation and apoptosis occurred in polycystic kidney tubules [12] [13] [14]. Our previous study using the invDC model showed significant apoptosis in the later, severe stage of cystogenesis, while TUNEL-positive cells were detected during early cystogenesis, at P7 [8]. However, the cell number-based, not tubule-based, ratios were very low (0.010% in controls; 0.016% in mutants). These previous results could also have been explained by accidental cell death with an allowable limit of error and, therefore, did not seem to be a significant phenotype, at least in early cystogenesis. In the present study, we focused on tubular cell kinetics during early cystogenesis, up to P15, in an NPHP model and re-evaluated renal tubular proliferation dynamics. Unexpectedly we found that tubular cell loss, possibly non-apoptotic cell death, as indicated by γH2AX expression, was a critical phenotype in early invDC cystogenesis. We could not identify the type of cell death because there were no distinct markers for necrosis (or controlled necrosis, known as necroptosis) or apoptosis. However, we did observe nuclear debris in the renal tubules in the invDC mutants, connected to adjacent tubular cells. This suggested that the dying tubular cells were rapidly cleared, through phagocytosis by adjacent tubular cells [22]. Thus, proliferating cells were likely cleared by phagocytosis to maintain both cell number and tubular monolayer integrity. To precisely quantify this type of cell death, further experiments using the invDC mutant mated with a multicolor and multiclonal rainbow mice [32] might be appropriate, enabling phagocytosis-mediated clearance to be quantified by overlapping staining colors. Deletion of the anti-apoptosis gene, Bcl-2, resulted in polycystic kidney formation (similar to PKD) [33], demonstrating that controlled tubular cell death may play an active role in physiological renal tubular formation. In the Bcl-2 deficient kidney, significant numbers of pyknotic nuclei (an indicator of cell death) were observed in the interstitium, rather than in tubular cells [34]. This suggested that non-apoptotic cell death can be occurred in cortical tubular cells lacking Bcl-2, because Bcl-2 is also associated with necrosis and autophagy [33]. Given that phagocytosis-mediated clearance of tubular cells occurred during tubular formation in early invDC cystogenesis, apoptotic signaling might be abolished or masked by unknown phagocytotic process. Then, at the later stage of cystogenesis, tubular phagocytotic capacity might decline or be fully inhibited and, thereby, striking elevations of apoptosis might be detectable. The gDNA content in the kidneys of dying invDC mutants, at P30, was significantly lower than in the controls. This was consistent with an increase in apoptosis at the later stage, as we previously described [8]. Similarly, late stage apoptosis was also reported during cyst regression in a PKD model [20]. 
In these types of cystic kidney models, the balance between cell proliferation and cell loss can differ depending on the stage of disease progression. A significant aspect of our study is that the postulated cellular kinetics behavior was demonstrated by mathematical modeling, enabling a clear understanding of the relationship between abnormal cell proliferation and cell loss during cystogenesis. Biological significance of γH2AX signaling in the invDC polycystic kidney model The DDR signaling marker, γH2AX, was markedly increased in some polycystic kidney models [23,24] and one of the genes involved in DDR, NEK8, appeared to work through downstream signaling in the inv mutant model [35]. Consistent with this, we detected increased γH2AX expression in the invDC mutant renal cortex, with most expression in the S phase cell population in the controls. It is noteworthy that γH2AX expression was detected in abnormal protruding cells in invDC tubular lumens. Because apical mitosis occurred with interkinetic nuclear migration (INM) in renal ureteric tubules, abnormal tubular protrusion may imply a process predicting mitotic cell death [25]. Importantly, previous studies showed that loss of inv/nphp2 cause tubular cell hypertrophy and increased bi and multinucleated cells [4,36]. These phenotypes might be derived from mitotic catastrophe marked with γH2AX, which we speculate in invDC mutant, because it also can exhibit abnormal mitosis resulting in aneuploidy or multinucleation in addition to mitotic cell death [37]. Interestingly, γH2AX can be detected as a ring-like staining pattern around the nucleus in early apoptotic cells, a pattern distinguishable from the dotted staining pattern observed during the DDR [38]. We observed only the dotted immunostaining pattern. The presence of γH2AX can critically distinguish between TUNEL-positive apoptotic DNA ladders and chromosomal breaks [27], supporting that tubular cell death associated with an γH2AX-positive signal occurred in early inv DC cystogenesis. This type of cell death mechanism might be fundamentally conserved for epithelial homeostasis as recently reported in gut tissue [28]. On the other hand, γH2AX was reportedly involved in homeostatic suppression of proliferation by restricting stem cell proliferation without obvious DNA damage [39,40]. The role of such events in our cystic model was not addressed in the context of DNA damage. Further investigation is needed to elucidate whether the biological roles of γH2AX in tubular cell death are protective or progressive in the invDC cystic model. How tubular cell loss contributed to renal cytogenesis in the invDC model? Planar cell polarity (PCP) is the coordinated orientation of tissue cells in an orthogonal direction to apico-basal polarity [41]. Renal tubular cells have PCP, controlled by oriented cell division (OCD), within the closed epithelial tubules and a defect in PCP was commonly proposed as a cellular phenotype in PKD [15] [16]. On the other hand, it is controversial whether loss of OCD occurs before or after renal cystogenesis [42] [43]. In addition, it has been shown that core PCP signaling regulated renal tubular diameter but loss of this control did not induce renal cysts [44]. Although, the inv mutation was also associated with PCP-associated genes such as disheveled [45], their relationship to renal cystogenesis was uncleared in inv mutant model. 
Remarkably, we observed early invDC tubular phenotype was very similar to the morphological features of acute tubular necrosis in the kidney, with a flattened tubular morphology, sloughing cells and luminal necrotic debris [46]. We hypothesized that tubular loss of proliferating cells by renal phagocytosis triggered a defective tubular reorganization, including a flattened tubular morphology. Thus, the tubular dilatation proceeded in precystic tubules. This effect accumulated over time, causing a mitotic angle defect and leading to renal cyst expansion in the invDC mutants. The concept that tubular cell loss is critical in renal cystogenesis is compatible, at a molecular level, with a recent report that the C-terminus of the inv protein interacted with Akt signaling, an important process for cell survival [6]. To investigate our hypothesis, a live imaging approach would be required although it would be technically quite difficult to observe kidney development under proper conditions for the long time periods needed. Recent technical breakthroughs, enabling production and maintenance of nephrogenic progenitor cells like the ES cell line [47], might be useful to observe in vitro tubular formation in future research. Renal tubular cell kinetics elucidated in a multi-population shift model In multicellular organisms, several cell populations become specialized within organs and tissues to perform functions and specific roles, as in the hematopoietic stem cell system. However, in some other organs and tissues, including the kidney, it is not clear how many cell populations contribute to formation of tubular structure leading to terminal differentiation. Our model revealed that a restricted tubular cell population contributed to cortical tubular formation and identified a progenitor-like rapid cell cycling population. This helped increase understanding of the complex cellular phenotypes in invDC cystogenesis, when fitted with the experimental data from P9 to P30. However, our model could did not predict the higher proliferation rates from P4 to P9 (average ratio at P9/P4 = 3.4). Because nephrogenesis by non-tubular cells is believed to continue at P4, the shorter doubling time or loss of a slow cell cycling population (SP) would be assumed. Further double labeling experiments and model customization in future studies will be required to more fully understand the cellular kinetics. Although many cell proliferation models have been described, to our knowledge all were consistent with a single cell population and most were focused on single cancer cell lines, not multiple cell populations. In addition, it is difficult to understand proliferation dynamics using only an experimental approach when considering multiple cell populations and the mechanical principles of proliferation control in multiple cell populations are not understood. In our modeling study, we observed some interesting examples of simulation of proliferation dynamics. From these simulations, we also found that general declines in cell proliferation curve could be explained by increased a non-dividing cell population (NDP) in addition to cellular loss, that is, our model predicted that the NDP (38%) emerged under normal cortical tubular development. This numerically confirmed the biological significance of both a quiescent cell population and a longer slow cell cycling one in size control of kidney, potentially because quiescent cells may reenter into the cell cycling at certain times. 
Although significant increase of a third SP (doubling time = 9 d) in mutants might be explained by the decreased probability (r, 0.60 ! 0.55) of cell division in SP, the difference in ratios of NDP (1-r) between controls (r = 0.60) and mutants (r = 0.55) was approximately 2.0% (Fig 3c right graph). This indicated that the quiescent cell population size was not dramatically changed in the invDC mutant, compared with in controls. Considering this, tubular cell loss (q2 = 0.12: 9.6% in total) could be identified as a critical phenotype in early invDC cystogenesis. Given that tubular cell loss is linked to mitotic cell death and then rapidly cleared by renal phagocytosis, an indicative value of cell death as shown in Fig 5c must be underestimated. Future studies using live imaging analysis of renal tubular development, overcoming the technical hurdles, might further elucidate the involvement of cell death and the associated mechanisms. Conclusions In conclusion, tubular cell loss was identified as an early cystogenic phenotype in the invDC kidney model. We concluded that total tubular cell numbers were unaffected in early cystogenesis in the invDC kidney because of tubular loss of the proliferating cells, suggesting that tubular cell loss is critical in both early renal cystogenesis and tubular homeostasis. The underlying cellular kinetics in early cortical cystogenesis were unveiled by a new multi-population shift model, not only providing new insights into the invDC mutant phenotype but also suggesting that a few restricted tubular cell populations contribute to postnatal cortical tubular formation. Animals FVB/N and transgenic inv/inv mice carrying invΔC:: GFP (invΔC) were maintained as previously descried [7] [8] in a pathogen-free state at the animal facility of Kyoto Prefectural University of Medicine, Japan. All experimental procedures were approved by the Committee for Animal Research, Kyoto Prefectural University of Medicine. We used +/inv or +/+ mice carrying the invΔC transgene as wild-type controls for all experiments. Quantification of whole kidney genomic DNA Whole kidney gDNA was quantified using DNA zol Genomic DNA isolation reagent (Molecular Research). Whole kidney weight at each developmental stage was measured, and gDNA was isolated according to the manufacturer's instructions. Over six kidneys were evaluated from mice in each group (control and invDC mutant). The concentration of gDNA was measured with a BioPhotometer plus (Eppendorf), and the total amount of DNA was calculated. Immunohistochemistry Isolated kidneys were fixed in 4% paraformaldehyde in phosphate-buffered saline (PBS) according to standard procedures. Kidneys were cryoprotected with 30% sucrose in PBS and embedded in optimal cutting temperature compound (Tissue-Tek, Miles; Elkhart, IN, USA), and then frozen in liquid nitrogen and stored at −80˚C until sectioning. Kidney sections (10μm thick) were immunostained according to a standard protocol. Antibodies and chemicals used for immunohistochemistry were as follows: mouse BrdU (Sigma), rabbit polyclonal phosphor-histone H3 (Ser10) (Millipore), rabbit polyclonal γH2AX Detection of apoptosis Apoptosis was measured using a terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) assay with an In situ Apoptosis Detection Kit (TAKARA, JAPAN) according to the manufacturer's instructions, followed by DAPI staining. 
Renal cortical sections from control and invDC mutant mice at P9 and P15 were used for the TUNEL assay (n = 10 fields with 40× objective/age). The staining reaction without the transferase was used as the negative control, and DNase I-treated slides were used as positive controls. For evaluation of renal apoptosis as another positive control, mice were sacrificed at 2 and 3 days after a single intraperitoneal injection of PBS or the nephrotoxic drug, cisplatin, at 20 mg/ml/kg (Wako, JAPAN). The stained slides were analyzed by LSM510 confocal microscopy (Zeiss, Germany) and TUNEL-positive tubular cell number was counted per area. EdU-BrdU double labeling Tubular cell proliferation kinetics were evaluated by both bromodeoxyuridine (BrdU) and 5-ethynyl-2'-deoxyuridine (EdU) incorporation. First, mice were injected intraperitoneally with a solution of PBS containing EdU (Click-iT EdU Alexa Fluor 647 Imaging kit, Thermo Fisher scientific) according to the manufacturer's instructions at 24 h before the isolation of kidneys. Next, a BrdU solution (10 mg/kg) was injected at 3 h before the isolation of kidneys. Kidneys were fixed and visualized as described above. For detection of BrdU signals, sections were treated with 1.5 M hydrochloric acid at 37˚C for 20 min before other immunostaining procedures. Microscopic fields (40×) of renal sections were randomly selected and processed for immunostaining as mentioned above. After counterstaining with DAPI, all specimens were imaged by LSM510 confocal microscopy (Zeiss, Germany). The number of EdU-or BrdU-positive nuclei and the total tubular cells were counted and then accumulated from all images with a Zeiss LSM Image Browser (Zeiss, Germany). The percentages of EdU-or BrdUpositive tubular cells were calculated by dividing the cell numbers by total tubular cell numbers and then represented as double proliferation indexes. The doubling time of cortical tubular cells was evaluated based on the percentages of the EdU/BrdU-double positive cells referring to previous report [17]. Briefly, EdU was intraperitoneally injected into the mice at P8, followed by multiple BrdU injections at 6 h and 12 h before harvesting kidneys at P11, P14, and P17. The percentages of EdU/BrdU-double positive or EdU-positive tubular cells per total tubular cells was respectively calculated as described above and compared between control and invDC mutant mice at the indicated postnatal mice age (n = 3). Renal tubular characterization For the comparison of average tubular cell number per area of renal cortex in P15 mice, confocal imaging was performed in randomly selected fields of renal cortex (20 fields from two mice in each group) by using an LSM510 confocal microscope (Zeiss, Germany). We first identified tubules which were mostly observed as the circular or elliptical morphology with F-actin staining in renal cortex. All tubules identified on acquired images were numbered with tubular diameters (both major and minor axis) which were manually measured using the LSM Image Browser. The number of tubular cells per each imaging fields were counted based on the nuclear staining as mentioned above. The average tubular cell number per cortical area (150 μm 2 with a 40× objective) was calculated by dividing the accumulated tubular cell number by the total field number (n = 20). For evaluation of the relationship between tubular cell number and diameter, the cross-sectional tubule was defined with a major/longer to minor/shorter axis ratio 1.5 as previously described [48]. 
Those tubules that appeared to be morphologically almost round shape were selected for the counting. Then, the numbers of epithelial cells per tubular cross-section were counted and the percentages of tubules having the given numbers of cells were calculated by dividing the tubule numbers by total tubule numbers. The diameter per tubular cross-section was defined by selecting the minor diameter and then compared between control and mutant. For TUNEL-negative cell death evaluation, luminal debris was counted as DAPI-nuclear staining with small-irregular or dot-like debris in tubular lumens or on the luminal side of tubular cells. LTL-staining was performed for clear identification of luminal nuclear debris or morphology of cells with nuclear protrusions. Mathematical stochastic model for proliferation Classically, stochastic models for proliferation are used to analyze cancer initiation or expansion, based on a single population [49] [50]. However, we found that at least two types of doubling time populations were maintained during tubular development (Fig 3b), as supported by previous observations [17]. Therefore, we used a multipopulation shift model (Fig 4b) consisting of a rapid cell cycling population (RP) and a slow cell cycling population (SP). The SP was most likely to be derived from the RP during tubular development. This can be explained as follows: a population shift from a RP to a SP is a change in cell fate resulting in asymmetrical mitosis. Furthermore, the cellular loss probability (q) was combined as implied in Fig 3c, although tubular cell loss is considered mostly to not occur during normal tubular development. A non-dividing cell population (NDP), as a quiescent population or a longer slow cell cycling population, is generally believed to be emerged during a biological development. For this concept, the probability of cell division (r) was added to the model. Therefore, the mixed growth dynamics of the two populations over time (t) can mostly be represented by the following equations: where N1(t) or N2 (t) and N0 (t) are the cell number of the RP or SP at time t and time zero, respectively; p is the probability of shift from a RP to a SP; and q denotes the probability of cellular loss. T1 and T2 are the cell cycle length of the RP and SP, respectively, defined as 48 h and 168 h in the current study based on the present data and the related reference [17]. All other parameters are defined in Table 2. The model underlying equations were written with text editor, ATOM (https://atom.io/) and we simulated the stochastic models by using the Python 3.6.3 software program (https://www.python.org/downloads/) coupled with ATOM. We also utilized the Python packages, the numerical package Numpy-1.7.0 (https://pypi.python.org/ pypi/numpy/1.7.0) and plotting package, Matplotlib-2.1.0 (https://pypi.python.org/pypi/ matplotlib) for the graph presentation of simulation results. Statistical analyses We evaluated differences between the experimental groups with the Kolmogorov-Smirnov test, unpaired t-test, Fisher's chi-squared test, and Mann-Whitney U test as indicated in each figure legend. Data are presented as mean ± standard error of the mean (SEM). A value of P < 0.05 was considered statistically significant. All statistical analyses were performed using Graph Pad Prism 7 (Graph Pad Software Inc., La Jolla, CA, USA).
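The statistical comparisons listed above were run in GraphPad Prism; for readers working in the same Python environment used for the model simulations, an equivalent open-source sketch is shown below. The counts and diameters are hypothetical placeholder numbers, not the study's raw data; only the choice of tests (chi-squared on pooled proliferation counts, Mann-Whitney U on tubular diameters) follows the text.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table of pooled tubular-cell counts
# (BrdU-positive vs BrdU-negative), compared as in Table 1.
counts = np.array([[120, 2880],    # control: BrdU+, BrdU-
                   [180, 2820]])   # invDC:   BrdU+, BrdU-
chi2, p_val, dof, expected = stats.chi2_contingency(counts)
print(f"proliferation index: chi2 = {chi2:.2f}, p = {p_val:.3g}")

# Hypothetical minor-axis tubular diameters (um) for tubules with equal
# cell counts, compared with a Mann-Whitney U test.
ctrl_diam = np.array([18.2, 19.5, 17.8, 20.1, 18.9, 19.2])
mut_diam  = np.array([22.4, 23.1, 21.8, 24.0, 22.9, 23.5])
u_stat, p_val = stats.mannwhitneyu(ctrl_diam, mut_diam, alternative="two-sided")
print(f"tubular diameter: U = {u_stat:.1f}, p = {p_val:.3g}")
```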
8,849.4
2018-06-11T00:00:00.000
[ "Biology", "Chemistry" ]
New characterizations of weights on dynamic inequalities involving a Hardy operator In this paper, we establish some new characterizations of weighted functions of dynamic inequalities containing a Hardy operator on time scales. These inequalities contain the characterization of Ariňo and Muckenhoupt when $\mathbb{T}=\mathbb{R}$ T = R , whereas they contain the characterizations of Bennett–Erdmann and Gao when $\mathbb{T}=\mathbb{N}$ T = N . Introduction In [10], Muckenhoupt characterized the weights such that the inequality 1/k holds for all measurable ≥ 0 and the constant C is independent of (here 1 < k < ∞). The characterization reduces to the condition that the nonnegative functions and ω satisfy and K ≤ C ≤ Kk 1/k (k * ) 1/k * . In [7], Bradley gave new characterizations of weights such that the general inequality holds for all measurable ≥ 0 and the constant C is independent of (here 1 ≤ k ≤ q ≤ ∞). The characterization reduces to the condition that the nonnegative functions and ω satisfy and K ≤ C ≤ Kk 1/q (k * ) 1/k * for 1 < k < q < ∞ and K = C if k = 1 and q = ∞. In [3], Ariňo and Muckenhoupt characterized the weight function such that the inequality holds for all nonnegative nonincreasing measurable function on (0, ∞) with a constant C > 0 independent on (here 1 ≤ k < ∞). The characterization reduces to the condition that the function ϕ satisfies ϕ(x) dx for all ς ∈ (0, ∞) and B > 0. In [12], Sinnamon characterized the weights such that the inequality holds for all measurable ≥ 0 and the constant C is independent of (here 0 < q < 1 < k and 1 r = 1 q -1 k ). The characterization reduces to the condition that the nonnegative functions and ω satisfy b For the discrete case, however, Bennett and Erdmann [4] characterized the weights such that the inequality holds for all nonnegative nonincreasing sequence z n . The characterization reduces to the condition that the nonnegative sequence ϕ n satisfies ϕ k for all n ∈ N and B > 0. In [8], Gao extended the results of Bennett and Erdmann and characterized the weights such that the inequality holds for all nonnegative and nonincreasing sequences z n and a n with a 1 > 0, where the constant C is independent of a n and z n . The characterization reduces to the condition that the nonnegative sequences a n and ϕ n satisfy ϕ k for all n ∈ N and B > 0, where A n = n k=1 a k . In this paper, we are concerned with proving some dynamic inequalities on time scales; see [1,2]. The general idea is to prove our results where the domain of the unknown function is a so-called time scale T, which is an arbitrary nonempty closed subset of the real numbers R. In [11], the authors characterized the weights such that the dynamic inequality on a time scale T holds for all nonnegative rd-continuous function on The characterization reduces to the condition that the nonnegative functions and υ satisfy Moreover, the estimate for the constant C in (4) is given by As a particular case of (4) if k = q, (ς) = ϕ(ς)(σ (ς)ς 0 ) -k and υ = ϕ, then we get the inequality and the characterization reduces to the condition that the nonnegative function ϕ satisfies Our aim in this paper is to establish some new characterizations of the weights for the dynamic inequalities of the form (5) and for the general inequalities of the form where ϒ(ς) = ς ς 0 ψ(x) x, 1 < k < ∞, and c > 0. The paper is organized as follows. In Sect. 2, we present some definitions and basic concepts of time scales and prove essential lemmas needed in Sect. 3 where the main results are proved. 
Our findings significantly recover particular cases. Indeed, the proposed theorems contain the characterizations of inequalities (2) and (3) proved by Bennett and Erdmann and Gao when T = N, whereas they give the characterizations of inequality (1) proved by Ariňo and Muckenhoupt when T = R. Preliminaries and basic lemmas For completeness, we recall the following concepts related to the notion of time scales. We refer the reader to the two books by Bohner and Peterson [5,6]. A time scale T is an arbitrary nonempty closed subset of the real numbers R. We assume throughout that T has the topology that it inherits from the standard topology on the real numbers R. The forward jump operator and the backward jump operator are defined by: σ (ς) := inf{s ∈ T : s > ς} and ρ(ς) := sup{s ∈ T : s < ς}, respectively. A point ς ∈ T is said to be left-dense if ρ(ς) = ς , right-dense if σ (ς) = ς , left-scattered if ρ(ς) < ς , and right-scattered if σ (ς) > ς . A function z : T → R is said to be right-dense continuous (rd-continuous) provided z is continuous at right-dense points and at left-dense points in T, left-hand limits exist and are finite. The set of all such rd-continuous functions is denoted by C rd (T, R). and the integration by parts formula on time scales is given by The time scales chain rule (see [5,Theorem 1.87]) is given by where it is assumed that z : R → R is continuously differentiable and δ : T → R is delta differentiable. A simple consequence of Keller's chain rule [5, Theorem 1.90] is given by The Hölder inequality, see [5,Theorem 6.13], on time scales is given by where ς 0 , b ∈ T, , z ∈ C rd (I, R), γ > 1, and 1 γ + 1 ν = 1. The special case γ = ν = 2 in (11) yields the time scales Cauchy-Schwarz inequality. Throughout the paper, we assume (without mentioning) that the functions are nonnegative rd-continuous functions on [ς 0 , ∞) T and the integrals considered are assumed to exist (finite i.e. convergent). We define The following lemma is adopted from [11]. The following lemmas are needed in Sect. 3. Lemma 2.2 Assume that ϕ, ψ are nonnegative rd-continuous functions defined on Using ω(ς 0 ) = ϒ(∞) = 0 (recall all integrals are assumed to be convergent), we obtain that The proof is complete. where z is a nonincreasing function. Remark 2.1 Note in the above lemma that Main results This section is devoted to state and prove our main results. Furthermore, assume that is nonincreasing and Proof Suppose that (18) holds. Apply Lemma 2.1 with z = , and we have that Substituting (20) into the left-hand side of (19), we get that Applying Lemma 2.2, with on the right hand side of (21), we see that Using the additive property of integrals [5, Theorem 1.77(iv)] on time scales, we have for Substituting (18) into the right-hand side of (23), we see that Applying integration by parts formula (8) on the term where υ(s) = s ς 0 (σ (x)ς 0 ) k-1 x. Using ϒ σ (ς) = 0 and υ(ς 0 ) = 0, we have that Substituting (25) into the right-hand side of (24), we get that Since inequality (26) becomes Applying Lemma 2.2 on the term From (27) and (29), we have Putting ψ(x) = 1 in Lemma 2.4, we see that the function is nonincreasing. Now, by applying Lemma 2.3, with we have from (28) and (30) that Substituting (31) into the right-hand side of (22), we see that Applying Hölder's inequality (11) on the term with indices k and k/(k -1), we get that Finally substituting (33) into (32), we have that This implies that which is (19). The proof is complete. 
Thus we get the inequality
Remark 3.6 When T = R and ς0 = 0, then K = 1 in the previous remark, and from (38) we obtain a Hardy-type inequality with constant (k^2/(k - 1))^k (see [9]).
Remark 3.7 In Theorem 3.2 the assumption that the function is rd-continuous could be replaced by the assumption that it is integrable.
Remark 3.8 Suppose that the function is integrable, and also assume that
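For orientation, the classical continuous-case results referred to in the introduction can be restated explicitly. The LaTeX block below gives the standard Hardy inequality and the Ariño–Muckenhoupt B_k-type characterization in their commonly cited forms; it is a summary of well-known results written in the paper's notation, not a reproduction of the time-scale statements proved above.

% Classical Hardy inequality (k > 1), for nonnegative measurable f:
\int_0^{\infty}\Bigl(\frac{1}{x}\int_0^{x} f(t)\,dt\Bigr)^{k}\,dx
  \le \Bigl(\frac{k}{k-1}\Bigr)^{k}\int_0^{\infty} f^{k}(x)\,dx .

% Arino--Muckenhoupt characterization (1 <= k < infinity): for nonnegative
% nonincreasing f, the weighted inequality
\int_0^{\infty}\Bigl(\frac{1}{x}\int_0^{x} f(t)\,dt\Bigr)^{k}\varphi(x)\,dx
  \le C \int_0^{\infty} f^{k}(x)\,\varphi(x)\,dx
% holds if and only if the weight \varphi satisfies the B_k condition
\int_{\varsigma}^{\infty}\Bigl(\frac{\varsigma}{x}\Bigr)^{k}\varphi(x)\,dx
  \le B \int_0^{\varsigma}\varphi(x)\,dx
  \qquad\text{for all } \varsigma\in(0,\infty).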
1,943.2
2021-04-23T00:00:00.000
[ "Mathematics" ]
SACRED: Software Approach for Collaboration and Research Dissemination Collaborative research is an opportunity to bring creative minds together and blend multiple disciplines to churn out innovative solutions. In this era of massive social media and information overload, a streamlined process framework with best practices and procedures is a requirement for genuine scientific collaboration. The main aim of this work is to bring forth a software-centric framework for harmonizing research. SACRED is the outcome of experiences gained during the development of ‘ARMS’-An Analysis Framework for Mixed Criticality Systems. ARMS is a collaborative platform to disseminate research and literature in real-time mixed criticality systems. ARMS is hosted on Amazon Amplify with the user interface implemented using the ReactJS framework. SACRED summarizes the software-centric process, practices and procedures followed, and renders it for similar collaborations in the future. INTRoDUCTIoN "None of us is as smart as all of us" -Kenneth Blanchard The opportunity of connecting scientific and engineering minds in a collaborative fashion is a boon that yields stupendous results.In this era of information technology, collaborative research is feasible to a great extent as it brings together academicians and industrial engineers in smaller constellations.This provides multi-fold benefits to society and works as a stepping stone for budding researchers.The main challenges faced by a collaborative platform are: availability/approachability of domain expertise, proper division of labor among contributors, proper utilization of available skills, setting common goals among contributors, a common understanding among stakeholders, uniformity in work product, focused plan and regular updates to the community without losing zeal.In an effort to overcome these challenges, this paper presents a software approach that will harmonize research in a wider context.In other words, the work aims to bring the wealth of research contributions generated by diverse research groups under a common umbrella. As part of this work, a group of mixed criticality systems (MCS) researchers launched a novel initiative platform -An Analysis Framework for MCS (ARMS), to build a large-scale research repository for enabling higher visibility of research results.ARMS aims at providing a framework to encourage collaboration between industry and academia thus enabling higher acceptance of research results by considering MCS as the test domain.It is a cloud-based platform designed for archiving, updating and reporting existing tasks models in MCS.The associated cloud-based database (DB) gets updated with on-going and new task models to keep up with recent and ever evolving research works.The ARMS platform compares and contrasts domain needs vis-a-vis with task models, attributes and presents a comprehensive landscape of the existing task model parameters in the mixed criticality domain.It also provides academicians and engineers the opportunity to choose task models appropriate to the specifics of the problem under study and provides a holistic view of chosen algorithms and models by analyzing, scheduling and generating statistics.The software-centric framework used in the development of ARMS is named -Software Approach for Collaboration and REsearch Dissemination (SACRED). 
SACRED provides a detailed step-by-step software-centric approach that consists of three phases namely, setup, rollout and collaboration.The setup phase includes a systematic literature review, that results in identification, grouping and clustering of task parameters based on usage scenarios and functional attributes.It includes a multi-vocal literature review consisting of published works, grey literature and industrial contributions.The rollout phase launches the ARMS framework for stakeholder's use.It is a three-stage process, consisting of platform finalization, design & development and validation & deployment of the cloud-based framework.The collaboration phase provides facility for pursuits and distribution.It further helps in establishing partnerships that allow comparison of existing research results and facilitates in performing regular updates.The software-centric processes, practices and procedures followed in the development of ARMS is extensible to other domains/ disciplines wherever collaborative research yields benefits. The paper is organized as follows: Section 2 presents related literature on collaborative platforms.Section 3 provides the software process followed by SACRED.Section 4 presents a case study of the SACRED process applied to the cloud-based aggregator platform ARMS.Section 5 lists the challenges faced/best practices followed in SACRED and a comparative analysis with other collaborative platforms and Section 6 concludes the work. RELATED woRKS Software development requires innovation, collective intelligence and collaboration of many minds.Collaborative platforms encourage association between industry and academia thus enabling dissemination of research results.A large number of collaborative platforms (Beck et al., 2022;Borgho & Teege, 1993;Brownson et al., 2021;Chorfi et al., 2022;Gesing et al., 2019;Khan et al., 2021;Lautamäki et al., 2012;McLennan & Kennell, 2010;Monnard et al., 2021;Stodden et al., 2012) have been presented in literature over the years.HUBzero (Gesing et al., 2019;McLennan & Kennell, 2010) is one such platform that allows researchers to collaborate and network to develop simulation/modeling tools.These tools are made accessible to the community through the web browser and launched in a grid infrastructure.Borgho and Teege (1993) presented a collaborative editor for managing software engineering projects.The editor provides facility for distributed editing, notifications and imparts consistency through dynamic voting.A web based java editor was proposed by Lautamäki et al. (2012) to support collaboration for development of java applications.The online editor provided support for error detection, automatic code generation and social media features.Another collaborative platform to enhance programming skills was presented by Chorfi et al. (2022).Geographically distributed learners were able to interact with one another and with mentors to solve problems and develop shared programs.Beck et al. (2022) proposed an interdisciplinary collaborative research framework to disseminate innovations in the field of science.An approach to disseminate research on public health among the general and research community was proposed by Monnard et al. (2021).Mentored training for distributing cancer research is discussed in Brownson et al. (2021).Khan et al. 
(2021) studied the adoption of collaborative learning through social media during the Covid-19 pandemic.The work aimed at understanding the correlation between student performance and social media use, to comprehend the impact of social media on students during the pandemic.Stodden et al. (2012) pointed out that the research results presented in various works are unavailable for use by the research community.Their work presented a cloud based infrastructure that provided support and access to both the code and results from various published articles.Users were also able to experiment using their own data.The work by Peng et al. (2014) summarizes and compares software collaborative platforms in terms collaboration, co-ordination, awareness, communication and value transfer. Based on the literature survey, it is observed that there are limited collaborative research dissemination platforms for the real-time systems community.The scope of ARMS and the corresponding software process -SACRED is to build a software-centric design level task modelling and scheduling framework of real-time MCS.It also assists the research community to understand the state-of-the-art research in the domain and evaluates by comparing with existing solutions thus allowing to propose and publish new ideas. SACRED oVERVIEw Collaborative research requires a framework as well as a process that describes how things are executed with a focused approach towards improvement.It is also important that this process provides a starting platform for budding researchers.Situations like the pandemic or other calamities hamper personal interactions and collaborations.But technology helps to overcome such situations by bringing about a holistic platform where researchers with a common mindset can collaborate and excel together.SACRED provides a detailed step-by-step approach to make a simple collaborative research framework and provides a cloud-based platform for researchers' use.This platform development requires a distinct mechanism rather than following well-known methodologies like verification and validation cycle, waterfall or agile model as it focuses on software research, collaboration, coordination, awareness, communication and value transfer. SACRED consists of a three-phase software centric approach that includes setup, rollout and collaboration phase as depicted in Figure 1 .As part of the setup phase, the researcher starts with a systematic literature review, that extends to aid in identification of parameters and subsequently helps to list out a uniform nomenclature by assimilating the commonalities among these parameters.Further, these parameters are grouped based on domain specific criteria.The rollout phase includes platform finalization and the design and development of the cloud-based framework for easier access to the research data.The next phase -collaboration -provides facility to compare existing research results and makes it feasible to compare one's own results with the results available in the research fraternity.It also allows industrial and academic contributions by researchers.This platform serves as a stepping stone as well as a decision support system for existing researchers, designers and budding engineers.The following subsections describe the phases with emphasis on the expected outcomes of each phase and the best practices used. 
Setup Phase The purpose of this initial phase is to form a collaborative working group consisting of researchers, developers and collaborators and to prepare guidelines for regular communication and day-to-day activities.This step also elaborates on activities like literature review, preparation of common guidelines, work instructions, checklist and templates.Based on the domain of study, it is feasible to come up with a common framework or grouping of state-of-the-art research results.Researchers and developers have to also factor in the available time constraints and decide on the resource requirements. The expected outcomes after the successful completion of this phase include: • Formation of an active core team -Prior to starting the literature review, a team comprising of researchers, developers and collaborators who will aid in the review, development and updating needs to be put in place.All this is achieved through brain storming and targeted discussions with like-minded people which will provide the needed momentum for subsequent stages of research.• Guidelines, checklists, templates, communication framework and systematic literature review -This step aids in preparing a common process and communication framework for working together.It includes guidelines for information exchange, regular connects and meeting agenda.It also includes preparation of checklist and templates to gather information and review of contents.An important engineering step that forms the basis of any research is a systematic literature survey.This includes collecting existing literature in a specific area and analyzing it to identify gaps and feasible improvements required in the domain of study.• Parameter Identification with Uniform Nomenclature -Based on the literature review, high level parameters are extracted and classified.The parameter extraction process involves the identification and extraction of relevant parameters from the selected papers.These parameters may be represented using different notations by various researchers.There is a need to consolidate the parameters and provide a uniform nomenclature (including the notations) to make it usable by researchers in the relevant domain.• Grouping/clustering state-of-the-art research results-Classification of research results into broad groups based on clusters in the landscape or based on industrial needs is an essential step.The parameters identified during the identification phase are grouped based on different criteria. The best practices established during this phase include preparation of work instructions by each and every team member, establishment of the frequency and mode of communication, weekly report submission on progress and risks, template preparation and guidelines for capturing information, regular presentations and demos among team members. 
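One concrete artifact of this phase is the parameter registry itself. The sketch below shows, in hypothetical Python, how the uniform nomenclature and the grouping by hardware configuration and usage scenario could be recorded; the group names follow the case study later in the paper, while the individual entries shown are only examples.

# Hypothetical registry produced by the setup phase; entries are illustrative only.
PARAMETER_GROUPS = {
    "hardware_configuration": ["uni-core", "multi-core"],
    "usage_scenarios": {
        "resources": ["shared resource usage", "resource synchronization",
                      "communication (message passing)"],
        "quality_of_service": ["importance", "tolerance limit"],
        "context_switching": ["priority", "address space", "context switching times"],
        "energy_efficiency": ["energy/power consumption vector", "processor frequency levels"],
        "fault_tolerance": ["failure probability", "WCET of backups"],
        "parallel_processing": ["number of threads", "number of cores"],
    },
}

def parameters_for(scenario):
    """Return the functional-attribute parameters recorded for one usage scenario."""
    return PARAMETER_GROUPS["usage_scenarios"].get(scenario, [])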
Rollout Phase The steps in the rollout phase aim to provide a dissemination framework based on the respective domain.It includes selection of the rollout platform, design, development and validation of tools.In this framework, codes of algorithms and models of the existing works are tested.The authors of the publication are contacted for the code base.If it is provided by them, the development team incorporates the same in the framework.If the authors provide only the execution permission to access their servers with the help of an interface, then the framework incorporates the work with the help of foreign servers.In the worst-case, the developer team codes the algorithms and sends them to the authors for correctness test before incorporating the same in the framework. The expected outcomes after the successful completion of this phase are: • Platform finalization -Platform finalization is the key to providing a user friendly and multiplatform experience.For the MCS field of research, this work finalized a web/mobile based application where the MCS research community can interact with the system through an app or web browser.The platform is deployed on a cloud server (Amazon Web Services (AWS)) for seamless accessibility and security.• Framework design and development -Different methods can be used for aggregating and summarizing findings of various works.For the MCS task model work, a cloud-based aggregator is designed and developed.This aggregator is a client-server platform which will help researchers in identifying the most appropriate task models for their work.This framework archives the state-of-the-art task models and task parameters in MCS and provides various options for the researchers to choose task parameters in terms of various usage scenarios and functional attributes.• Framework validation and deployment -A deployed framework needs to be tested with published data and validated with the authors wherever possible before rolling out to collaborators.In this step, help can be taken from volunteers in the research community to support the process. The best practices implemented at this stage include object oriented modularized design with usage of formal modelling languages like Unified Modeling Language (UML).Collaborative testing by peers, like-minded researchers and authors improves the quality and user friendliness of the framework. 
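The three integration paths described above (author-provided code, execution-only access to the authors' servers, and re-implementation verified by the authors) can be captured as a small plugin interface. The following is a hypothetical Python sketch of that idea, not ARMS code; all class and method names are invented for illustration.

from abc import ABC, abstractmethod

class SchedulerIntegration(ABC):
    """Hypothetical wrapper around the three ways of incorporating an existing algorithm."""
    @abstractmethod
    def schedule(self, task_set):
        ...

class AuthorProvidedCode(SchedulerIntegration):
    """Path 1: the authors share their code base and it is embedded in the framework."""
    def __init__(self, module):
        self.module = module
    def schedule(self, task_set):
        return self.module.run(task_set)

class RemoteExecution(SchedulerIntegration):
    """Path 2: the authors grant execution-only access, so calls go to their server."""
    def __init__(self, client, endpoint):
        self.client, self.endpoint = client, endpoint
    def schedule(self, task_set):
        return self.client.post(self.endpoint, json={"tasks": task_set})

class Reimplementation(SchedulerIntegration):
    """Path 3: the team re-codes the algorithm and the authors verify its correctness."""
    def __init__(self, algorithm, author_verified=False):
        self.algorithm, self.author_verified = algorithm, author_verified
    def schedule(self, task_set):
        assert self.author_verified, "correctness check by the original authors is pending"
        return self.algorithm(task_set)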
Collaboration Phase The purpose of this step is to collaborate with an extended groups of researchers and yield results as an authentic freeware platform.The expected outcomes after the successful completion of this phase are: • Distribution -Distribution, deployment and maintenance is carried out in this phase.The freeware version of the framework should be made available to users to decide and finalize on appropriate choices for their work.For example, ARMS framework will be administrated and maintained by a group formed by MCS researchers.Controlled access will be provided to users upon on-demand request.• Research assimilation, decision support system & partnership -Decision support systems aid in qualitative analysis.In this work, to facilitate decision making, the task model framework is extended with scheduling algorithms to perform schedulability analysis.This will aid designers in performing comparative analysis and identifying apt task models for their study.This step also allows integration of author's code base / execution on their server / review of newly developed code base by the authors.• Updates -Regular and streamlined updates are vital in order to add new features or improve existing ones.Regular updates with industrial and academic contributions on task models and schedulers makes the tool a robust guide to researchers and engineers thereby facilitating mutual benefits. Swift and speedy responses among collaborators and their active participation to contribute latest research and artifacts make the framework up-to-date, relevant and purposeful. AN ANALySIS FRAMEwoRK FoR MCS (ARMS) IMPLEMENTATIoN -A CASE STUDy This section describes a factual experience that triggered the SACRED process methodology.It is the development of a platform for mixed criticality researchers.This platform follows a cloud-based client-server architecture and helps researchers in identifying the most appropriate task models for their applications along with detailed analysis.In this platform, the state-of-the-art task models and task parameters in MCS are archived.The aggregator tool platform generates task sets based on the chosen task model parameters and performs schedulability analysis.It facilitates researchers and engineers to add or enhance task models and scheduling algorithms.Regular bi-annual updates with industrial and academic contributions make this platform a robust guide to researchers and engineers thereby facilitating mutual benefits. The typical architecture of the task model aggregator platform is depicted in Figure 2 .The app server consists of seven software modules -Task Model Updater, Task Model Enhancer, Task Model Generator, Task Set Generator, Analyzer, DB Process and Scheduler.The web client/app at the user end consists of a User Process, Admin Process and a Display Subsystem.Major functionalities of this aggregator platform include task model generation, task set generation with selected task parameters, scheduling of task sets with mixed criticality schedulers and detailed analysis of results.The list of features supported are: • Administered addition of task parameters and task models based on recent publications of repute. • Collaborative maintenance framework for addition/deletion/enhancement of task models, parameters and usage scenarios.• Selection of task models and task parameters based on nature of study and usage scenarios. • Selection of scheduler from the standard set and feasible enhancement with customized or thirdparty plug-in schedulers. 
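To make the data handled by these modules concrete, a task-model record and the kind of sample task set the Task Set Generator might produce are sketched below in Python. The parameter names echo those used later in the case study (period, deadline, criticality, WCET vector), while the value ranges and generation rules are assumptions for illustration only.

from dataclasses import dataclass, field
import random

@dataclass
class Task:
    # Field names follow the uniform nomenclature used in ARMS; defaults are illustrative.
    period: int
    deadline: int
    criticality: str                           # e.g. "LO" or "HI"
    wcet: dict = field(default_factory=dict)   # WCET vector, one budget per criticality mode

def generate_task_set(n_tasks, seed=0):
    """Generate a sample task set in the spirit of the Task Set Generator module."""
    rng = random.Random(seed)
    tasks = []
    for _ in range(n_tasks):
        period = rng.choice([10, 20, 50, 100])
        criticality = rng.choice(["LO", "HI"])
        wcet_lo = rng.randint(1, period // 4)
        wcet = {"LO": wcet_lo}
        if criticality == "HI":
            wcet["HI"] = min(period, 2 * wcet_lo)   # assumed pessimistic HI-mode budget
        tasks.append(Task(period=period, deadline=period, criticality=criticality, wcet=wcet))
    return tasks

sample = generate_task_set(n_tasks=8)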
Setup Phase As the first step, a team was setup for literature review and development.The team comprised of a team lead, researchers and developers.The team was augmented with a sustenance consortium to nurture and maintain the aggregator platform.The group of researchers conducted a systematic literature review on task models in the MCS domain.The review covered 15 years of MCS research which started with the pioneering work by Vestal (2007) in 2007.After elaborate discussions, brainstorming sessions and several iterations more than seventy task model parameters were identified.Few of these parameters are listed with uniform nomenclature in Table 1.Further, these parameters were analyzed and grouped based on hardware configurations, usage scenarios and functional attributes. The hardware configurations include uni-core and multi-core processor types.The usage scenarios are resources, quality of service (QoS), context switching, energy efficiency, fault tolerance and parallel processing.The functional attributes are defined with respect to each usage scenario.As an example, task parameters required for modelling MCS with the resource usage scenario for both unicore and multi-core systems can be categorized into three main functional attributes namely, shared resource usage, resource synchronization and communication (message passing).When considering shared resources such as memory, the basic task model (Vestal, 2007) is extended with parameters like minimum/maximum number of memory accesses (Pellizzoni et al., 2010), worst case number of cache misses (Yun et al., 2012), worst case memory access time (Li & Wang, 2016), worst case number of L1/LL cache misses (Nair et al., 2019), number of memory accesses (Awan et al., 2018) and intra/inter-core blocking times (Burns, 2013;Nair et al., 2019).Resource synchronization includes parameters like blocking times (Burns, 2013), priority (Zhao et al., 2014), active criticality (Zhao et al., 2014) and preemption level (Zhao et al., 2015) to deal with issues of priority and criticality inversion.The parameters considered in communication are message size (Tamas-Selicean & Pop, 2011), release time jitter (Burns & Davis, 2013) and worst case communication time (WCCT) (Dridi et al., 2019). 
Operating system (OS) overheads due to context switching has substantial impact on the schedulability of the system.The task parameters that contribute to context switching include priority, address space and context switching times before and after execution of the task in each mode of operation (Davis et al., 2018;Evripidou, 2016).In MCS, increasing energy requirements demand efficient energy optimization techniques.This necessitates amendments in task modelling related to energy parameters like energy/power consumption vector (Awan et al., 2016), processor frequency levels (Huang et al., 2014), voltage and frequency scaling factors (Taherin et al., 2018).In order to achieve fault tolerance, parameters like failure probability (Guo et al., 2015), worst case execution time (WCET) of backups (Pathan, 2014), replication/distribution (Thekkilakattil et al., 2014), reliability constraints, dropping factor and various overheads (Choi et al., 2018) are considered.QoS is a mechanism to improve the schedulability of low criticality tasks.It is a feature that needs to be incorporated while considering all other aspects such as energy, resources, fault tolerance, OS overheads etc.The parameters that contribute to this feature include importance (Fleming & Burns, 2014), tolerance limit (Gu et al., 2015), WCET in degraded mode (Giannopoulou et al., 2013), QoS values (Pathan, 2017), etc. The introduction of multi-core architectures brings ample opportunities to implement parallel processing.Parallel processing systems consider parameters like number of threads (Liu et al., 2014), number of cores (Gill et al., 2018) and work and span for parallel tasks (Gill et al., 2018).Graph based task models are well suited to illustrate inter/intra task dependencies.Mixed criticality applications represented as graphs have vertices and edges.Edges can be classified as control flow edges and mode switching edges (Ekberg et al., 2013).Some of the parameters associated with graphs include mode of the job (Ekberg et al., 2013) and a function that defines the interference among tasks (Huang et al., 2013).Probabilistic task models consider parameters of execution time probability mass function (B.Alahmad et al., 2011), execution demand random variables and measure of probability (B.N. Alahmad & Gopalakrishnan, 2018). The research team leader reviewed and accepted the task parameter grouping and confirmed the authenticity of the task models.The swimlane for this process is shown in Figure 3. Rollout Phase ARMS is a client-server web/mobile app that aims to provide an easy-to-use decision support system based on contemporary research results in MCS.The ARMS aggregator tool was hosted on Amazon Amplify (Version 4.43.0).The user interface was implemented using ReactJS 17.0.1.The DB is managed using Amazon DynamoDB and communication with the DB is handled using GraphQL application programming interfaces (APIs).Amazon S3 storage bucket and Amazon Cognito are used for storage and controlled access respectively. Based on user preferences all task models that match the user selection criteria are displayed.The user selects one of the displayed task models for further analysis.The swimlane for task model generation, scheduling and analysis is shown in Figure 4. 
Upon selection of a model, the user can choose task parameters and retrieve task sets.The user is provided with facility to customize task parameter values before invoking the scheduler.The scheduler service, standard or custom, is used for scheduling and analyzing task parameters.The user is allowed to choose from the standard list of existing schedulers for analysis.They also have options of using customized schedulers or third-party solutions for scheduling task sets.The results of the scheduler are used for detailed analysis.The sequence diagrams for these processes are devised.Figure 5 shows the task model and task set generator sequence diagram.Here, the user selects attributes and usage scenarios.It views all the appropriate task models based on its selection.The user then selects a task model and chooses parameters.These values are used for the generation of sample task sets.The user is allowed to customize these task sets on a limited scale.These task sets are sent to the scheduler that generates schedules and statistics.Figure 6 shows the sequence diagram for scheduling and statistics generation.The task set customized by the user is used for scheduling.The user selects the appropriate scheduling mechanism.If the user and statistics generated by this scheduler are sent to the user.ARMS provides an option to keep this scheduler in DB with visibility only to that user.Figure 7 and Figure 8 show the usability of the ARMS and analyze the feasibility of priority ceiling protocol with Earliest-Deadline-First-with-Virtual-Deadlines (EDF-VD) (Zhao et al., 2014) scheduling algorithm for the MCS domain.A user can view task models based on processor configuration and usage scenarios as shown in Figure 7(a) and Figure 8.In the current version, the processor configurations supported are uni-core processor and multi-core processor.The usage scenarios supported are basic, Multi-core processor model has parallel processing as an additional usage scenario.There are usage scenarios which require selection of functional attributes to generate task models.For instance, resource usage scenario has resource synchronization, message passing and shared memory as functional attributes.Multiple usage scenarios can also be selected to display task models based on user requirement.The generate option displays the available task models with respect to the chosen attributes with its description and related works in the MCS domain.Figure 7(b) displays the resource-based task models on selection of uni-core processor configuration, resource usage scenario and resource synchronization functional attribute.Based on research requirements, it is possible to choose and customize tasks models to suit the needs of research/usage scenarios.For each of the displayed/customized task models, sample task sets can be generated.As an example, Figure 7(c) displays sample task sets for the chosen resourcebased task model with parameters period, deadline, criticality, (WCET) vector, active criticality, active priority, nominal priority and semaphores.Further, the framework assists to choose algorithms for scheduling and profiling.The task sets generated in Figure 7(c) are scheduled using the standard EDF-VD scheduling algorithm as shown in Figure 7(d) The displayed schedule using chosen task sets for a hyper-period is depicted in Figure 7(e)and the Gantt chart of the resultant schedule is shown in Figure 9.The detailed analysis with statistics such as number of preemptions, number of decision points, min/ max/average response 
time of each task are shown in Figure 7(f). These results explore the usability of ARMS framework for the MCS domain and assist in viewing existing literature, state-of-art research results and choosing the right resource parameters for priority ceiling protocol with resource constraints.It also confirms the usability of priority ceiling with EDF-VD as the protocol for schedulability analysis of resource based MCS. Collaboration Phase The aggregator tool is a freeware web/mobile app that will be made available to users who need to view and decide on appropriate task models for their work.The users will also be able to perform schedulability and comparative analysis.A sustenance team of the working group will play the role of aggregator tool admin.The aggregator platform aims to provide up-to-date information on task models in mixed criticality literature.To facilitate this, the admin performs periodic updates and updates based on literature review conducted by the sustenance consortium.Figure 11 shows the sequence of events when the user sends a request to update/enhance the task model.The user sends a request to admin for updating/ enhancing the task model.The admin process reviews the request and checks its authenticity.It responds with a reject message when the request is not authentic.The admin performs further processing and acknowledges the user accordingly.Figure 12 shows the sequence of events when the admin performs either a period update or an update based on user request.The admin checks if the task model is already available in the DB.If the literature corresponding to the task model is not part of the DB, then the record in the DB is updated with new literature.If task model is not present in the DB, the DB is updated with the new task model along with newly generated statistics and results.If the request originated from a regular user, the Admin process sends an acknowledgement (as shown in Figure 11). SACRED -Challenges and Best Practices This section summarizes the challenges faced while developing this software centric collaborative framework.It also proposes the best practices followed to overcome these challenges.The challenges and best practices are listed in Table 2. HUBZero is a web-based platform that develops simulation and modeling tools through collaboration.Tools hosted on HUBZero use the standard X11 windowing system.Each user is provided their own personal home directory with ownership and required access rights and tools are executed based on specific user's rights.This enables HUBZero to carefully control and track the resource consumption for each tool.HUBZero is deployed using Rappture.PBPCLG provides a collaborative platform for problem solving, learning programming and mentoring thus behaving like an educator tech app.It mainly focuses on java programming and problem solving.PBPCLG identifies four types of interactions.The first -individual responsibility, accurately depicts the writer's individual duty towards coding.The second -alternative works is when various developers create distinct variants of the same code snippet and later put them together as one fragment.The third, exchange of dynamic tasks allows developers to share code and ideas.And lastly, collective responsibility allows numerous participants to come to a consensus and create a single code.PBPCLG is deployed using java script. 
OIS provides accessibility, clarity, dimension and encourages collaboration and dissemination in the field of science. It aims to transform multidisciplinary knowledge gained through collaboration into innovations through an iterative approach. Experts from several disciplines collaborated to design and develop this framework, emphasizing the differences and similarities across systems. The OIS methodology aspires to influence policy debates and offers direction to researchers and practitioners, thereby creating a foundation for ongoing research. MT-DIRC provides mentored coaching and dissemination of cancer research. Associates were mentored by the MT-DIRC programme for a period of two years. They developed distinct innovation and dissemination skills in multiple domains. Annual summer courses were conducted where participants received didactic, small-group, and one-on-one training. Additionally, participants engaged in national conferences and were funded for their research work. Each participant was also allocated a mentor for specialized guidance to achieve the required skill sets. Online and in-person training sessions were also conducted to train mentors. This work, SACRED, presents the software framework used in the development of a collaborative platform specifically for the real-time MCS research community. The collaborative platform ARMS, deployed as a cloud-based app/web client, provides a research repository and a holistic view to encourage collaboration between industry and academia. It is designed for archiving, updating and reporting existing task models in MCS along with their analysis and statistics. ARMS is deployed using ReactJS. Table 3 compares these collaborative frameworks based on the domain of study, purpose, associated tools and deployment.
Figure 2. Block schematic of the aggregator platform
Figure 4. Task model generator and analyzer swimlane
Figure 7. ARMS - (a) Attributes Selection (b) Attributes Output (c) Task Set (d) Algorithm (e) Schedule (f) Statistics for Resource Task model
Figure 8. ARMS Attributes Selection on web browser
6,601
2023-01-06T00:00:00.000
[ "Computer Science" ]
An Improved Focused Crawler: Using Web Page Classification and Link Priority Evaluation , Introduction With the rapid growth of network information, the Internet has become the greatest information base. How to get the knowledge of interest from massive information has become a hot topic in current research. But the first important task of those researches is to collect relevant information from the Internet, namely, crawling web pages. Therefore, in order to crawl web pages effectively, researchers proposed web crawlers. Web crawlers are programs that collect information from the Internet. It can be divided into general-purpose web crawlers and special-purpose web crawlers [1,2]. Generalpurpose web crawlers retrieve enormous numbers of web pages in all fields from the huge Internet. To find and store these web pages, general-purpose web crawlers must have long running times and immense hard-disk space. However, special-purpose web crawlers, known as focused crawlers, yield good recall as well as good precision by restricting themselves to a limited domain [3][4][5]. Compared with generalpurpose web crawlers, focused crawlers obviously need a smaller amount of runtime and hardware resources. Therefore, focused crawlers have become increasingly important in gathering information from web pages for finite resources and have been used in a variety of applications such as search engines, information extraction, digital libraries, and text classification. Classifying the web pages and selecting the URLs are two most important steps of the focused crawler. Hence, the primary task of the effective focused crawler is to build a good web page classifier to filter irrelevant web pages of a given topic and guide the search. It is generally known that Term Frequency Inverse Document Frequency (TFIDF) [6,7] is the most common approach of term weighting in text classification problem. However, TFIDF does not take into account the difference of expression ability in the different page position and the proportion of feature distribution when computing weights. Therefore, our paper presents a TFIDF-improved approach, ITFIDF, to make up for the defect of TFIDF in web page classification. According to ITFIDF, the page content is classified into four sections: headline, keywords, anchor text, and body. Then we set different weights to different sections based on their expression ability for page content. That means, the stronger expression ability of page content is, the higher weight would be obtained. In addition, ITFIDF develops a new weighting equation to improve the convergence of the algorithm by introducing the information gain of the term. The approach of selecting the URLs has also another direct impact on the performance of focused crawling. The approach ensures that the crawler acquires more web pages that are relevant to a given topic. The URLs are selected from the unvisited list, where the URLs are ranked in descending order based on weights that are relevant to the given topic. At present, most of the weighting methods are based on link features [8,9] that include current page, anchor text, linkcontext, and URL string. In particular, current page is the most frequently used link feature. For example, Chakrabarti et al. [10] suggested a new approach to topic-specific Web resource discovery and Michelangelo et al. [11] suggested focused crawling using context graphs. Motivated by this, we propose link priority evaluation (LPE) algorithm. 
In LPE, web pages are partitioned into some smaller content blocks by content block partition (CBP) algorithm. After partitioning the web page, we take a content block as a unit to evaluate each content block, respectively. If relevant, all unvisited URLs are extracted and added into frontier, and the relevance is treated as priority weight. Otherwise, discard all links in the content block. The rest of this paper is organized as follows: Section 2 briefly introduces the related work. In Section 3, the approach of web page classification based on ITFIDF is proposed. Section 4 illustrates how to use LPE algorithm to extract the URLs and calculate the relevance. The whole crawling architecture is proposed in Section 5. Several relevant experiments are performed to evaluate the effectiveness of our method in Section 6. Finally, Section 7 draws a conclusion of the whole paper. Related Work Since the birth of the WWW, researchers have explored different methods of Internet information collection. Focused crawlers are commonly used instruments for information collector. The focused crawlers are affected by the method of selecting the URLs. In what follows, we briefly review some work on selecting the URLs. Focused crawlers must calculate the priorities for unvisited links to guide themselves to retrieve web pages that are related to a given topic from the internet. The priorities for the links are affected by topical similarities of the full texts and the features (anchor texts, link-context) of those hyperlinks [12]. The formula is defined as where Priority( ) is the priority of the link (1 ≤ ≤ ) and is the number of links. is the number of retrieved web pages including the link . Sim( , ) is the similarity between the topic and the full text , which corresponds to web page including the link . Sim( , ) is the similarity between the topic and the anchor text corresponding to anchor texts including the link . In the above formula, many variants have been proposed to improve the efficiency of predicting the priorities for links. Earlier, researchers took the topical similarities of the full texts of those links as the strategy for prioritizing links, such as Fish Search [13], Shark Search algorithm [14], and other focused crawlers including [8,10,15,16]. Due to the features provided by link, the anchor texts and link-context in web pages are utilized by many researchers to search the web [17]. Eiron and McCurley [18] put forward a statistical study of the nature of anchor text and real user queries on a large corpus of corporate intranet documents. Li et al. [19] presented a focused crawler guided by anchor texts using a decision tree. Chen and Zhang [20] proposed HAWK, which is simply a combination of some well-known content-based and link-based crawling approaches. Peng and Liu [3] suggested an improved focused crawler combining full texts content and features of unvisited hyperlink. Du et al. [2] proposed an improved focused crawler based on semantic similarity vector space model. This model combines cosine similarity and semantic similarity and uses the full text and anchor text of a link as its documents. Web Page Classification The purpose of focused crawling is to achieve relevant web pages of a given topical and discard irrelevant web pages. It can be regarded as the problem of binary classification. Therefore, we will build a web page classifier by Naive Bayes, the most common algorithm used for text classification [21]. 
Constructing our classifier adopts three steps: first, pruning the feature space, then term weighting, and finally building the web page classifier. 3.1. Pruning the Feature Space. Web page classifier embeds the documents into some feature space, which may be extremely large, especially for very large vocabularies. And, the size of feature space affects the efficiency and effectiveness of page classifier. Therefore, pruning the feature space is necessary and significant. In this paper, we adopt the method of mutual information (MI) [22] to prune the feature space. MI is an approach of measuring information in information theory. It has been used to represent correlation of two events. That is, the greater the MI is, the more the correlation between two events is. In this paper, MI has been used to measure the relationship between feature and class . Calculating MI has two steps: first, calculating MI between feature in current page and each class and selecting the biggest value as the MI of feature . Then, the features are ranked in descending order based on MI and maintain features which have higher value better than threshold. The formula is represented as follows: where MI( , ) denote the MI between the feature and the class ; ( ) denote the probability that a document arbitrarily selected from the corpus contains the feature ; ( ) denote the probability that a document arbitrarily selected from the corpus belongs to the class ; ( , ) denote the joint probability that this arbitrarily selected document belongs to the class as well as containing the feature at the same time. Term Weighting. After pruning the feature space, the document is represented as = ( 1 , . . . , , . . . , ). Then, we need to calculate weight of terms by weighting method. In this paper, we adopt ITFIDF to calculate weight of terms. Compared with TFIDF, the improvements of the ITFIDF are as follows. In ITFIDF, the web page is classified into four sections: headline, keywords, anchor text, and body, and we set the different weights to different sections based on their express ability for page content. The frequency of term in document is computed as follows: where 1 , 2 , 3 , and 4 represent occurrence frequency of term in the headline, keywords, anchor text, and content of the document , respectively; , , , and are weight coefficients, and > > > ≥ 1. Further analysis found that TFIDF method is not considering the proportion of feature distribution. We also develop a new term weighting equation by introducing the information gain of the term. The new weights calculate formula as follows: where is the weight of term in document ; and are, respectively, the term frequency and inverse document frequency of term in document ; is the total number of documents in sets; IG is the information gain of term and might be obtained by ( ) is the information entropy of document set and could be obtained by ( | ) is the conditional entropy of term and could be obtained by ( ) is the probability of document . In this paper, we compute ( ) based on [23], and the formula is defined as where |wordset( )| refers to the sum of feature frequencies of all the terms in the document . Building Web Page Classifier. After pruning feature space and term weighting, we build the web page classifier by the Naïve Bayesian algorithm. In order to reduce the complexity of the calculation, we fail to consider the relevance and order between terms in web page. Assume that is the number of web pages in set ; is the number of web pages in the class . 
According to Bayes theorem, the probability of web page that belongs to class is represented as follows: where ( ) = / and the value is constant; ( ) is constant too; is the term of web page for the document ; and can be represented as eigenvector of , that is, ( 1 , 2 , . . . , , . . . , ). Therefore, ( | ) is mostly impacted by ( | ). According to independence assumption above, ( | ) are computed as follows: where ( , ) is the number of terms in the document ; is vocabulary of class . Link Priority Evaluation In many irrelevant web pages, there may be some regions that are relevant to a given topic. Therefore, in order to more fully select the URLs that are relevant to the given topic, we propose the algorithm of link priority evaluation (LPE). In LPE algorithm, web pages are partitioned into some smaller content blocks by content block partition (CBP) [3,24,25]. After partitioning the web page, we take a content block as a unit of relevance calculating to evaluate each content block, respectively. A highly relevant region in a low overall relevance web page will not be obscured, but the method omits the links in the irrelevant content blocks, in which there may be some anchors linking the relevant web pages. Hence, in order to solve this problem, we develop the strategy of JFE, which is the relevance evaluate method between link and the content block. If a content block is relevant, all unvisited URLs are extracted and added into frontier, and the content block relevance is treated as priority weight. Otherwise, LPE will adopt JFE to evaluate the links in the block. JFE Strategy. Researchers often adopt anchor text or linkcontext feature to calculate relevance between the link and topic, in order to achieve the goal of extracting relevant links from irrelevant content block. However, some web page designers do not summarize the destination web pages in the anchor text. Instead, they use words such as "Click here," "here," "Read more," "more," and "next" to describe the texts around them in anchor text. If we calculate relevance between anchor text and topic, we may omit some destination link. Similarly, if we calculate relevance between link-context and topic, we may also omit some links or extract some irrelevant links. 
In view of this, we propose JFE strategy to reduce Input: current web page, eigenvector k of a given topic, threshold Output: url queue (1) procedure LPE (2) block list ← CBP(web page) (3) for each block in block list (4) extract features from block and compute weights, and generate eigenvector u of block if Sim 1 > threshold then (7) link list ← extract each link of block (8) for each link in link list (9) P r i o r i t y ( l i n k ) ← Sim 1 (10) enqueue its unvisited urls into url queue based on priorities (11) end for (12) else (13) t e m pqueue ← extract all anchor texts and link contexts (14) for each link in temp queue (15) extract features from anchor text and compute weights, and generate eigenvector u 1 of anchor text (16) e x t r a c tf e a t u r e sf r o ml i n kcontexts and compute weights, and generate eigenvector u 2 of link contexts text (17) S i m 2 ← Sim JFE ( , V) (18) if Sim 2 > threshold then (19) P r i o r i t y ( l i n k ) ← Sim 2 (20) enqueue its unvisited urls into url queue based on priorities (21) end if (22) dequeue url in temp queue (23) end for (24) where Sim JFE ( , V) is the similarity between the link and topic V; Sim anchor ( , V) is the similarity between the link and topic V when only adopting anchor text feature to calculate relevance; Sim context ( , V) is the similarity between the link and topic V when only adopting link-context feature to calculate relevance; (0 < < 1) is an impact factor, which is used to adjust weighting between Sim anchor ( , V) and Sim context ( , V). If > 0.5, then the anchor text is more important than link-context feature in the JFE strategy; if < 0.5, then the link-context feature is more important than anchor text in the JFE strategy; if = 0.5, then the anchor text and link-context feature are equally important in the JFE strategy. In this paper, is assigned a constant 0.5. LPE Algorithm. LPE is uses to calculate similarity between links of current web page and a given topic. It can be described specifically as follows. First, the current web page is partitioned into many content blocks based on CBP. Then, we compute the relevance of content blocks with the topic using the method of similarity measure. If a content block is relevant, all unvisited URLs are extracted and added into frontier, and the content block similarity is treated as priority, if the content block is not relevant, in which JFE is used to calculate the similarity, and the similarity is treated as priority weight. Algorithm 1 describes the process of LPE. LPE compute the weight of each term based on TFC weighting scheme [26] after preprocessing. The TFC weighting equation is as follows: where , is the frequency of term in the unit (content block, anchor text, or link-context); is the number of feature units in the collection; is the number of all the terms; is the number of units where word occurs. Then, we are use the method of cosine measure to compute the similarity between link feature and topic. The formula is shown as follows: where u is eigenvector of a unit, that is, u = { 1, , 2, , . . . , , }; v is eigenvector of a given topic, that is, v = { 1,V , 2,V , . . . , ,V }; , and ,V are the weight of u and v, respectively. Hence, when u is eigenvector of the content block, we can use the above formula to compute Sim CBP ( , V). In the same way, we can use the above formula to compute Sim anchor ( , V) and Sim context ( , V) too. 
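Putting the pieces of this section together, the LPE flow of Algorithm 1 with the JFE fallback can be rendered in a few lines of code. The following Python sketch is illustrative only: cbp_partition, tfc_vector, extract_links and the frontier object stand in for components whose details are given elsewhere or not specified here, and the impact factor (written lam below) is set to 0.5 as in the paper.

import math

def cosine(u, v):
    """Cosine similarity between two {term: weight} vectors."""
    num = sum(u[t] * v[t] for t in set(u) & set(v))
    den = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return num / den if den else 0.0

def jfe_similarity(anchor_vec, context_vec, topic_vec, lam=0.5):
    """JFE: weighted combination of anchor-text and link-context similarity."""
    return lam * cosine(anchor_vec, topic_vec) + (1.0 - lam) * cosine(context_vec, topic_vec)

def lpe(page, topic_vec, threshold, frontier):
    """Link priority evaluation over the content blocks of one web page."""
    for block in cbp_partition(page):                # CBP partition (assumed helper)
        block_vec = tfc_vector(block.text)           # TFC term weighting (assumed helper)
        sim_block = cosine(block_vec, topic_vec)
        if sim_block > threshold:
            # relevant block: enqueue all unvisited links with the block similarity as priority
            for link in extract_links(block):
                frontier.push(link, priority=sim_block)
        else:
            # irrelevant block: fall back to JFE on each link's anchor text and link-context
            for link in extract_links(block):
                sim_link = jfe_similarity(tfc_vector(link.anchor_text),
                                          tfc_vector(link.context),
                                          topic_vec)
                if sim_link > threshold:
                    frontier.push(link, priority=sim_link)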
Improved Focused Crawler

In this section, we provide the architecture of the focused crawler enhanced by web page classification and link priority evaluation. Figure 1 shows the architecture of our focused crawler. The crawling process is divided into the following steps:

(1) The crawler component dequeues a URL from the url queue (frontier), which is a priority queue. Initially, the seed URLs are inserted into the url queue with the highest priority score. Afterwards, the items are dequeued on a highest-priority-first basis.

(2) The crawler locates the web page pointed to by the currently fetched URL and attempts to download its actual HTML data.

(3) For each downloaded web page, the crawler applies the web page classifier. The relevant web pages are added into the relevant web page set.

(4) The web page is then parsed into its DOM tree and partitioned into content blocks according to HTML content block tags based on the CBP algorithm, and the relevance between each content block and the topic is calculated using the similarity measure. If a content block is relevant, all unvisited URLs are extracted and added into the frontier, and the content block relevance is treated as the priority weight.

(5) If the content block is not relevant, all anchors and link-contexts are extracted and the JFE strategy is used to obtain each link's relevance. If a link is relevant, it is also added into the frontier, and its relevance is treated as the priority weight; otherwise, the link is given up.

(6) The focused crawler continuously downloads web pages for the given topic until the frontier becomes empty or the number of relevant web pages reaches a preset limit.

Experimental Results and Discussion

In order to verify the effectiveness of the proposed focused crawler, several experiments were carried out. The tests are Java applications running on a Quad Core Processor machine. Let R denote the set of truly relevant web pages in a dataset and R' the set of web pages assigned as relevant by the classifier. Therefore, we define Precision [3,27] and Recall [3,27] as follows (a small sketch of these computations is given at the end of this subsection):

Precision = |R ∩ R'| / |R'|,   Recall = |R ∩ R'| / |R|.

Recall and Precision play a very important role in the performance evaluation of a classifier. However, they have certain defects; for example, improving one of them tends to make the other decline [27]. To mediate the relationship between Recall and Precision, Lewis [28,29] proposed the F-Measure, which is used to evaluate the performance of a classifier. The F-Measure is also used to measure the performance of our web page classifier in this paper. It is defined as follows:

F_beta = (beta^2 + 1) * Precision * Recall / (beta^2 * Precision + Recall),

where beta is a weight reflecting the relative importance of Precision and Recall. Obviously, if beta > 1, Recall is more important than Precision; if 0 < beta < 1, Precision is more important than Recall; if beta = 1, Recall and Precision are equally important. In this paper, beta is assigned the constant 1.

Evaluating the Performance of the Web Page Classifier

In order to test the performance of ITFIDF, we run the classifier using different term weighting methods. For a fair comparison, we use the same method of pruning the feature space and the same classification model in the experiment. Figure 2 compares the F-Measure achieved by our classification method using ITFIDF and TFIDF weighting for each topic on the four datasets. As can be seen from Figure 2, the performance of the classification method using ITFIDF weighting is better than that using TFIDF on each dataset. In Figure 2, the average F-Measure of ITFIDF exceeds that of TFIDF by 5.3, 2.0, 5.6, and 1.1 percentage points on the four datasets, respectively.
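The following is a minimal sketch of the Precision, Recall, and F-Measure computations defined above. The set contents and class name are illustrative; in the experiments the sets come from the classifier output and the labelled datasets.

```java
import java.util.*;

public class ClassifierMetricsSketch {
    /** Precision = |R ∩ R'| / |R'|, where R' is the set the classifier marked as relevant. */
    static double precision(Set<String> trueRelevant, Set<String> predictedRelevant) {
        Set<String> hit = new HashSet<>(predictedRelevant);
        hit.retainAll(trueRelevant);
        return predictedRelevant.isEmpty() ? 0 : (double) hit.size() / predictedRelevant.size();
    }

    /** Recall = |R ∩ R'| / |R|, where R is the set of truly relevant pages. */
    static double recall(Set<String> trueRelevant, Set<String> predictedRelevant) {
        Set<String> hit = new HashSet<>(predictedRelevant);
        hit.retainAll(trueRelevant);
        return trueRelevant.isEmpty() ? 0 : (double) hit.size() / trueRelevant.size();
    }

    /** F_beta = (beta^2 + 1) * P * R / (beta^2 * P + R); beta = 1 in the paper. */
    static double fMeasure(double p, double r, double beta) {
        double b2 = beta * beta;
        return (b2 * p + r) == 0 ? 0 : (b2 + 1) * p * r / (b2 * p + r);
    }

    public static void main(String[] args) {
        Set<String> truth = Set.of("p1", "p2", "p3", "p4");
        Set<String> predicted = Set.of("p1", "p2", "p5");
        double p = precision(truth, predicted), r = recall(truth, predicted);
        System.out.printf("P=%.2f R=%.2f F1=%.2f%n", p, r, fMeasure(p, r, 1.0));
    }
}
```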
Overall, these experimental results show that our classification method is effective in solving classification problems and that the proposed ITFIDF term weighting is significant and effective for web page classification.

Experimental Data

In this experiment, we selected the relevant web pages and the seed URLs for the above 10 topics as input data of our crawler. These topics are basketball, military, football, big data, glasses, web games, cloud computing, digital camera, mobile phone, and robot. The relevant web pages for each topic accurately describe the corresponding topic. The relevant web pages for all of the topics were selected manually, and the number of those web pages for each topic was set to 30. At the same time, the seed URLs for each topic were also selected manually; they are shown in Table 1.

Performance Metrics

The performance of the focused crawler directly reflects the usefulness of the crawling. Perhaps the most crucial evaluation of a focused crawler is to measure the rate at which relevant web pages are acquired and how effectively irrelevant web pages are filtered out. With this knowledge, we could estimate the precision and recall of the focused crawler after crawling web pages. The precision would be the fraction of crawled pages that are relevant to the topic, and the recall would be the fraction of relevant pages crawled. However, the relevant set for any given topic is unknown in the web, so the true recall is hard to measure. Therefore, we adopt the harvest rate and the target recall to evaluate the performance of our focused crawler. They are defined as follows (a small sketch of both metrics is given at the end of this subsection):

(1) The harvest rate [30,31] is the fraction of crawled web pages that are relevant to the given topic, which measures how well the crawler rejects irrelevant web pages. The expression is given by

HR = (1/N) * sum_{i=1}^{N} r_i,

where N is the number of web pages crawled by the focused crawler so far, and r_i is the relevance between web page p_i and the given topic; r_i can only be 0 or 1: if the page is relevant, r_i = 1; otherwise, r_i = 0.

(2) The target recall [30,31] is the fraction of relevant pages crawled, which measures how well the crawler finds all the relevant web pages. However, the relevant set for any given topic is unknown in the Web, so the true target recall is hard to measure. In view of this situation, we delineate a specific network, which is regarded as a virtual WWW in the experiment. Given a set of seed URLs and a certain depth, the range reached by a crawler using a breadth-first crawling strategy is the virtual Web. We assume that the target set T is the relevant set in the virtual Web and that C_t is the set of the first t pages crawled. The expression is given by

TR_t = |T ∩ C_t| / |T|.

Evaluating the Performance of the Focused Crawler

An experiment was designed to show that the proposed method of web page classification and the LPE algorithm can improve the performance of focused crawlers. In this experiment, we built crawlers that used different techniques (breadth-first, best-first, anchor text only, link-context only, and CBP), described in the following, to crawl the web pages. Different web page content block partition methods have different impacts on focused crawling performance. According to the experimental results in [25], alpha in the CBP algorithm is assigned the constant 0.5 in this paper. The threshold in the LPE algorithm is a very important parameter.
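A small sketch of the two evaluation metrics defined above, under the assumption that crawled pages are tracked in order and that a relevance oracle (or the target set) is available; all names are illustrative.

```java
import java.util.*;

public class CrawlMetricsSketch {
    /** Harvest rate = (1/N) * sum_i r_i, with r_i in {0, 1}. */
    static double harvestRate(List<Boolean> relevanceOfCrawled) {
        long relevant = relevanceOfCrawled.stream().filter(r -> r).count();
        return relevanceOfCrawled.isEmpty() ? 0 : (double) relevant / relevanceOfCrawled.size();
    }

    /** Target recall = |T ∩ C_t| / |T|, where T is the target set and C_t the first t crawled pages. */
    static double targetRecall(Set<String> targetSet, List<String> crawledUrls, int t) {
        Set<String> firstT = new HashSet<>(crawledUrls.subList(0, Math.min(t, crawledUrls.size())));
        firstT.retainAll(targetSet);
        return targetSet.isEmpty() ? 0 : (double) firstT.size() / targetSet.size();
    }

    public static void main(String[] args) {
        System.out.println(harvestRate(List.of(true, false, true, true)));                  // 0.75
        System.out.println(targetRecall(Set.of("a", "b", "c"), List.of("a", "x", "b"), 3)); // 0.666...
    }
}
```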
As for the threshold, experiments show that if it is too large, the focused crawler finds it hard to collect web pages. Conversely, if the threshold is too small, the average harvest rate of the focused crawler is low. Therefore, according to the actual situation, the threshold is assigned the constant 0.5 in the rest of the experiments. In order to reflect the comprehensiveness of our method, Figures 3 and 4 show the average harvest rate and the average target recall over the ten topics for each crawling strategy, respectively.

Figure 3 shows a performance comparison of the average harvest rates of the six crawling methods over the ten topics. In Figure 3, the x-axis represents the number of crawled web pages, and the y-axis represents the average harvest rate at that number of crawled pages. As can be seen from Figure 3, as the number of crawled web pages increases, the average harvest rates of all six crawling methods fall. This occurs because the number of crawled web pages and the number of relevant web pages grow at different rates, the increment of the former being larger than that of the latter. From Figure 3, we can also see that the average harvest rate of the LPE crawler is higher than those of the other five crawlers. In addition, the harvest rates of the breadth-first crawler, best-first crawler, anchor-text-only crawler, link-context-only crawler, CBP crawler, and LPE crawler are, respectively, 0.16, 0.28, 0.39, 0.48, 0.61, and 0.80 at the point corresponding to 10000 crawled web pages in Figure 3. These values indicate that the harvest rate of the LPE crawler is 5.0, 2.9, 2.0, 1.7, and 1.3 times as large as those of the breadth-first crawler, best-first crawler, anchor-text-only crawler, link-context-only crawler, and CBP crawler, respectively. Therefore, the figure indicates that the LPE crawler has the ability to collect more topical web pages than the other five crawlers.

Figure 4 shows a performance comparison of the average target recall of the six crawling methods over the ten topics. In Figure 4, the x-axis represents the number of crawled web pages, and the y-axis represents the average target recall at that number of crawled pages. As can be seen from Figure 4, as the number of crawled web pages increases, the average target recall of all six crawling methods rises. This occurs because the number of crawled web pages is increasing while the target set is unchanged. The average target recall of the LPE crawler is higher than that of the other five crawlers across the numbers of crawled web pages. In addition, the target recall values of the breadth-first crawler, best-first crawler, anchor-text-only crawler, link-context-only crawler, CBP crawler, and LPE crawler are, respectively, 0.10, 0.15, 0.19, 0.21, 0.27, and 0.33 at the point corresponding to 10000 crawled web pages in Figure 4. These values indicate that the target recall of the LPE crawler is 3.3, 2.2, 1.7, 1.6, and 1.2 times as large as those of the breadth-first crawler, best-first crawler, anchor-text-only crawler, link-context-only crawler, and CBP crawler, respectively. Therefore, the figure indicates that the LPE crawler has the ability to collect greater quantities of topical web pages than the other five crawlers.

It can be concluded that the LPE crawler has a higher performance than the other five focused crawlers. For the 10 topics, the LPE crawler has the ability to crawl greater quantities of topical web pages than the other five crawlers.
In addition, the LPE crawler has the ability to predict more accurate topical priorities of links than the other crawlers. In short, LPE, through the CBP algorithm and the JFE strategy, improves the performance of focused crawlers.

Conclusions

In this paper, we presented a novel focused crawler which improves collection performance by using a web page classifier and a link priority evaluation algorithm. The proposed approaches and the experimental results lead to the following conclusions. TFIDF does not take into account the difference in expressive power between different page positions or the proportion of the feature distribution when building a web page classifier. Therefore, ITFIDF can be considered to make up for these defects of TFIDF in web page classification. The performance of the classifier using ITFIDF was compared with that of the classifier using TFIDF on four datasets. Results show that the ITFIDF classifier outperforms TFIDF on each dataset. In addition, in order to obtain a better selection of relevant URLs, we propose the link priority evaluation algorithm. The algorithm proceeds in two stages. First, the web pages are partitioned into smaller blocks by the CBP algorithm. Second, the relevance between the links of the blocks and the given topic is calculated by the LPE algorithm. The LPE crawler was compared with other crawlers on 10 topics and is superior to the other techniques in terms of average harvest rate and target recall. In conclusion, the web page classifier and the LPE algorithm are significant and effective for focused crawlers.