Tribological Characteristics of Heat-Resistant Alloys under Dynamic Pin Loading in a Variable Temperature Field
An analysis of the operating conditions of gas turbine engines, their components, and the causes of their failure was carried out. The design problems of tribo-joints operating under severe force and temperature loads are identified. The study aimed to obtain comparable quantitative dependences of blade-material wear, taking into account both cyclic changes in the gas-flow temperature under near-operational conditions and the frictional characteristics of the materials. Wrought heat-resistant nickel alloys and cast heat-resistant nickel alloys, from which T-shaped samples were made, were chosen for the research. The tests were carried out on a purpose-built gas-dynamic test rig that simulates the working conditions of the shroud (bandage) joints of bladed turbomachines in gas turbine installations. The wear intensity was determined as the ratio of the worn material volume to the number of load cycles under different temperature conditions. The wear resistance of tribo-joints operating under non-stationary thermal loads and contact oscillations was considered. It was shown that thermal cycling reduces the wear resistance of heat-resistant nickel alloys by a factor of 2–3, depending on the average cycle temperature. It was found that wear resistance, as well as the character of the change in the coefficient of friction, is mainly determined by the conditions of formation and destruction of the protective surface layer. The principal factors governing tribological processes in the contact zone were determined.
INTRODUCTION
Gas turbine engines are used in various spheres of activity: air, sea and pipeline transport, energy, several critically important industries, etc. [1, 2]. In particular, gas turbine installations are an important part of the energy system of many countries. They are especially useful in situations where a quick start-up or shutdown of the power system is required, as well as during peak loads. Gas turbines can work on different types of fuel, which makes them very flexible in use, and they can also be adapted for the production of electricity for a city or a region [3–5]. Gas turbine plants are successfully used in the chemical, oil and gas, and metallurgical industries, as they are characterized by high fuel efficiency compared to other types of power plants. Most commercial aircraft use turbojet or turbofan engines that provide the necessary power for flight and can operate stably at high altitudes. Gas turbine engines are used on large sea vessels (tankers, cargo and military ships, passenger liners, etc.) [6, 7]. The gas turbine drive is also used in modern tanks. Therefore, improving the operational characteristics of elements of turbojet installations is an urgent task.
During gas turbine operation, operators face several problems: at low loads, efficiency can decrease significantly; slight deviations in the operation of cooling systems can lead to overheating of engine parts and damage or loss of efficiency; leaks of liquid fuel or flammable gases can create a fire hazard. At the same time, as a result of high temperatures and mechanical loads, the turbine blades as well as their shroud and lock joints can undergo significant wear and damage [8]. Failures of gas turbine parts, their possible causes, and methods of elimination are discussed in papers [9, 10].
The study [11] investigated the parameters of interatomic interaction in alloys containing nickel, which can serve as a theoretical basis for the development of new compositions of heat-resistant alloys. Researchers [12] developed the concept of software for the rational selection of materials. To improve the quality of products, optimization of the technological processes of manufacturing parts is also used [13].
The works [14, 15] provide an overview of the main methods of applying thermal barrier coatings and evaluating the efficiency of blade cooling systems for gas turbines. Particulate erosion is a common phenomenon observed in gas turbine engines. Composites with a ceramic matrix are the main candidates for protecting components of hot sections of gas turbine engines [16, 17]. Calculation of the thickness of protective coatings is usually a complex multi-criteria optimization problem due to the contradictions that arise between the goals of the task: achieving high thermal insulation characteristics, ensuring long-term operation, manufacturability, and low manufacturing cost [18–20].
The studies [21–23] introduced models for studying the effect of abrasive on coated parts. The impact and abrasive wear resistance of protective coatings were studied in papers [24–26]. Researchers [27–29], based on the model of contact along a line, considered the mechanics of the fracture of plate parts, taking into account the partial closure of crack-like defects during bending. Similar problems about the behaviour of contact cracks in plates under simultaneous stretching and bending were studied in publications [30–32].
Some authors studied contact phenomena in lock joints [33, 34], including taking into account energy dissipation during mutual sliding on contact surfaces [35]. Analytical-numerical [36, 37], experimental [38, 39] and technological [40, 41] approaches to the study of stresses in composite structures also deserve attention. The temperature distribution in layered elements was also studied [42, 43]. However, such studies are usually performed for bodies of simple shape. It should be noted that the qualitative properties of analytical solutions of a much more general system were studied in [44, 45].
When creating tribo-joints (e.g., shroud and lock joints of bladed turbomachines) that operate under rough conditions, i.e. high power loads and high temperatures, the designer faces a number of challenges and difficulties. Among the design problems, two are most important:
• the limited amount of available data on the influence of loading on the properties not only of particular joints, but also of the materials of the parts included in the joint;
• the results obtained in laboratory conditions by different researchers are not comparable [46, 47], as they were obtained with different methodologies and under different loading modes.
In this article, an attempt has been made to address these problems. The ultimate goal of the research is to obtain comparable quantitative dependences of the wear of blade materials, taking into account both cyclic changes in the gas-flow temperature under near-operational conditions and the frictional characteristics of the materials. However, since the wear mechanism under these conditions is very complicated and the conditions of its modification are difficult to determine, limitations were imposed in the studies, primarily on the upper bound of the thermocycle relative to the operating temperature of the unit, in order to eliminate possible irreversible structural changes in the material [48, 49].
MATERIALS AND METHODS
Table 1 shows the chemical composition of the tested wrought heat-resistant nickel alloys, i.e. KHN62MVKYU and KHN77TYR (GOST 5632-72 Standard. High-alloy steel and corrosion-proof, heat-resisting and heat-treated alloys. Grades), and cast heat-resistant nickel alloys: ZhS6U, ZhS6K and VZhL2 (OST 1 90126-85 Industry standard. Heat-resistant casting alloys of vacuum melting).
To carry out wear tests, alloys were used (Table 1), from which T-shaped samples were made (Figure 1).
During hard engine starting modes, the temperature rises rapidly from ambient to maximum.
The material of tribo-joint parts under these conditions is in a rather complex stressed state arising from the action of cyclic mechanical and thermal loads, which causes thermal-fatigue failure. Cyclic temperature changes cause crack formation and propagation. Moreover, the service life of the KHN77TYR alloy decreases almost 2–3 times in comparison with stationary tests at elevated temperature. During engine warm-up, both long-term and short-term strength decrease, i.e. cracks propagate more intensively owing to the appearance of thermal strain. Therefore, it is expedient to determine the influence of the thermocycle parameters (t_c - temperature change range, °C; τ_h - heating time, s; τ_cool - cooling time, s) on the wear resistance of heat-resistant materials.
The samples made of heat-resistant nickel alloys were tested on a gas-dynamic test rig specially developed at the National University "Zaporizhzhia Polytechnic", Ukraine [50, 51], which allows simulation of the working conditions of the shroud joints of bladed turbomachines of a gas turbine installation. The wear intensity was determined as the ratio of the volume of worn material to the number of load cycles. The volume of the worn alloy was determined from the size of the friction zone and the magnitude of linear wear. The result was taken as the arithmetic mean of three wear tests. The standard deviation of the measurements did not exceed 5% of the mean value. Results obtained at different loads, amplitudes, and temperatures were compared using the coefficient of wear, where: I_v - the wear intensity, mm³/cycle; p - the specific contact load, kg/mm²; A - the amplitude of mutual displacement of the samples, mm; F_n - the nominal contact area, mm²; t - the test temperature.
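As a minimal numerical sketch of the procedure described above, the following Python fragment computes the wear intensity as the ratio of worn volume to the number of load cycles. The normalization used for the wear coefficient is our assumption for illustration only, since the exact form of the equation is not given here; the worn volume and nominal contact area values are likewise illustrative.

```python
# Minimal sketch of the wear-intensity calculation described in the text.
# The normalization used for K_w is an assumption for illustration; the
# exact form of the paper's wear-coefficient equation is not given here.

def wear_intensity(worn_volume_mm3: float, n_cycles: int) -> float:
    """Wear intensity I_v, mm^3 per load cycle (as defined in the text)."""
    return worn_volume_mm3 / n_cycles

def wear_coefficient(i_v: float, p: float, amplitude: float, f_n: float) -> float:
    """Hypothetical normalized wear coefficient, K_w = I_v / (p * A * F_n)."""
    return i_v / (p * amplitude * f_n)

# Example with the loading modes quoted in the text (0.5e6 cycles, 0.1 mm
# amplitude); worn volume 0.9 mm^3 and contact area 12 mm^2 are illustrative.
i_v = wear_intensity(worn_volume_mm3=0.9, n_cycles=500_000)
k_w = wear_coefficient(i_v, p=27.0, amplitude=0.1, f_n=12.0)
print(f"I_v = {i_v:.2e} mm^3/cycle, K_w = {k_w:.2e}")
```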
Mechanical loads were realized in the following modes:
• amplitude of mutual displacement of the samples - 0.1 mm;
• specific contact pressure - 27 MPa;
• vibration frequency of the samples - 33 Hz;
• test base - 0.5·10⁶ cycles.
The comparative wear tests of heat-resistant nickel alloys under thermocycling were conducted at the most severe loading, with temperature ranges and heating and cooling rates close to thermal shock (t_c = 20 ↔ 700 °C, τ_h = 7 s, τ_cool = 11 s; t_c = 20 ↔ 900 °C, τ_h = 12 s, τ_cool = 18 s).
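For orientation, the sketch below is plain arithmetic on the numbers quoted above (no assumptions beyond them): it makes explicit the very high average heating and cooling rates behind the "close to thermal shock" characterization, and the total duration of one test run at the stated test base and frequency.

```python
# Average heating/cooling rates for the two thermocycling regimes quoted
# above, and the duration of one test run at the stated test base.
regimes = [
    {"t_min": 20, "t_max": 700, "tau_h": 7.0, "tau_cool": 11.0},
    {"t_min": 20, "t_max": 900, "tau_h": 12.0, "tau_cool": 18.0},
]
for r in regimes:
    dt = r["t_max"] - r["t_min"]
    print(f"{r['t_min']}-{r['t_max']} C: "
          f"heating ~{dt / r['tau_h']:.0f} C/s, "
          f"cooling ~{dt / r['tau_cool']:.0f} C/s, "
          f"mean cycle temperature ~{(r['t_min'] + r['t_max']) / 2:.0f} C")

cycles, freq_hz = 0.5e6, 33.0   # test base and vibration frequency from the text
print(f"test base duration ~{cycles / freq_hz / 3600:.1f} h")
```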
RESULTS AND DISCUSSION
Under the test conditions, the absolute value of wear of the investigated alloys (see Figure 2) exceeds almost tenfold the wear at a constant temperature equal to the maximum cycle temperature (the mechanical loading conditions being the same). Therefore, the considerable wear under thermocycling is the result of two basic processes within a single thermocycle:
• fatigue - at temperatures below the transition temperature from high wear to stable wear in isothermal tests;
• oxidative - at temperatures above the transition temperature.
These two processes are accompanied by phenomena inherent to thermal fatigue, which promote the destruction of local volumes of material.
Comparison of the wear coefficients obtained under various conditions shows that at a stationary temperature and under thermocycling the behavior of the curves is practically the same (see Figure 2) and is well described by dependences of the following form:
• for the alloy KHN77TYR - Equation (2);
• for the alloy ZhS6K - Equation (3);
where: K_w - the coefficient of wear; t - the test temperature; α, α₁, β, β₁, γ, γ₁ - coefficients determined by the contact load, speed, and amplitude of mutual displacement of the working surfaces.
Empirical dependences (2) and (3) were obtained by mathematical processing of the experimental results.
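A minimal sketch of how such empirical dependences can be obtained by least-squares fitting is given below. The functional form (an exponential decay of K_w with temperature) and all data values are assumptions for illustration; the actual forms of Equations (2) and (3) and the measured data are not reproduced here.

```python
# Sketch of fitting an empirical K_w(t) dependence to wear measurements.
# The model form and the data are illustrative assumptions, not the
# paper's actual Equations (2)/(3) or measurements.
import numpy as np
from scipy.optimize import curve_fit

def k_w_model(t, alpha, beta, gamma):
    # Assumed form: exponential decay of the wear coefficient with temperature.
    return alpha * np.exp(-beta * t) + gamma

t_data = np.array([20, 200, 400, 600, 800], dtype=float)   # test temperatures, C
k_data = np.array([9.0, 5.1, 3.0, 1.9, 1.4]) * 1e-9        # wear coefficients

popt, _ = curve_fit(k_w_model, t_data, k_data, p0=(1e-8, 1e-3, 1e-9))
alpha, beta, gamma = popt
print(f"alpha = {alpha:.3e}, beta = {beta:.3e}, gamma = {gamma:.3e}")
```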
The absolute value of the wear coefficient under thermocycling is higher (approximately 4.7 times for the KHN77TYR alloy and 6.6 times for the ZhS6K alloy) than in tests at t = const. The decrease in the rate of reduction of K_w with increasing temperature is significant (see Figure 3); however, the K_w values at lower temperatures remain higher than at high temperatures.

Fig. 2. Wear of heat-resistant alloys under thermocycling and at stationary temperature
The same decrease in the rate of decline of K_w under thermocycling is explained by damage to the material (higher wear intensity). The similarity of the behavior of the K_w = f(t) curves (see Figure 4) testifies to the regular processes occurring in the contact. In the case of t = const at room temperature, wear is a result of fatigue processes (of the pin-fatigue type) [52] and may overlap with oxidation, which is the predominant factor in this case.
From the analysis of the wear of the materials over the temperature range from room temperature to the maximum value (see Figure 4), it follows that the basic contribution to the overall wear is the wear at temperatures below the transition temperature, since about half of the time of pin interaction occurs at these temperatures. In this case, oxides appear in very small quantities, insufficient for the formation of a protective layer. Therefore, contact occurs mainly between fresh surfaces. As in the case of wear at ambient temperature, adhesion processes are possible. In this temperature range under thermocycling there is also transfer of material from one surface to another.
The impact of some thermocycle parameters on the wear resistance of the heat-resistant alloys ZhS6K and KHN77TYR is shown in Figure 5 and Figure 6.
With an increase in both the maximum and the mean temperature of the cycle (see Figure 6), the wear of all investigated alloys tends to decrease. This decline is conditioned by the properties of the protective oxide layer that forms under thermocycling, though less intensively than at t = const. The rate of wear decline of the ZhS6K alloy is somewhat lower in the temperature range 20 ↔ 700 °C and even reverses its trend at 20 ↔ 900 °C. Such behavior is conditioned by the propensity of this alloy to embrittlement in the temperature region of 700–800 °C. Under dynamic pin loads, the complete exhaustion of the plasticity reserve promotes fatigue failure.
Under thermocycling, as in isothermal tests, oxidation processes play a substantial role in the decrease of durability. The proof is an experiment on the wear resistance of the KHN77TYR and ZhS6K alloys under thermocycling with argon supplied to the friction zone. According to the charts in Figure 6, in an air environment the wear of the KHN77TYR alloy drops with increasing average cycle temperature (curve 1) because of the intensification of oxidation processes with increasing temperature. In this case, different cycles were used by varying the initial and final temperatures, which made it possible to analyze the influence of a large number of different average cycle temperatures on the wear resistance of the alloys under study.
However, introducing argon into the friction zone changes the character of the wear curves as a function of temperature (curves 2 and 3). With an increase in t_max, wear does not drop but increases, because oxidation processes are suppressed in this case, the protective function of the oxides is lost, and thermal fatigue processes begin to develop. It should be noted that in a neutral environment the wear of the ZhS6K alloy increases with t_max more quickly than that of KHN77TYR, as it retains a smaller (almost 5 times) reserve of plasticity.
The impact of the thermocycle parameters on the wear resistance of materials shows up mainly through stresses arising both as a result of thermal cycling and through stresses formed during oxidation. The variable thermal action, owing to the different rates of heating and cooling, causes compressive thermal stresses whose absolute value considerably exceeds the tensile stresses. Consequently, the surface layers are practically fully in the conditions of an asymmetric compression cycle described by a condition relating σ_cool and σ_h, where: σ_cool - stresses arising at cooling; σ_h - stresses arising at heating.
Thermal cyclic stresses are determined by the temperature difference in the cycle (t_max and t_min).
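For orientation, the magnitude of such cycle stresses can be estimated with a standard relation for a fully constrained surface layer; this is a textbook estimate added here for context, not a formula taken from the paper:

```latex
% First-order estimate of the thermal stress in a fully constrained surface
% layer (a textbook relation, not taken from the paper under discussion):
\sigma_{\mathrm{th}} \approx \frac{E\,\alpha\,(t_{\max}-t_{\min})}{1-\nu}
% E      -- Young's modulus of the alloy,
% \alpha -- coefficient of linear thermal expansion,
% \nu    -- Poisson's ratio.
```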
Increasing temperatures in the process of thermocycling cause oxidation of the surface layers, accompanied by the penetration into the depth of the layer of a metal-oxide mixture of the spinel type, because oxidation occurs not only on the surface but also at the grain boundaries, as well as in pores and nearby areas. The metal-oxide mixture in the surface layer has lower values of the coefficient of thermal expansion than the base metal (see Table 2). Also, because the volume of the oxides exceeds by several times the volume of the metal from which they formed, stresses arise under thermocycling. These stresses are compressive if the temperature of the sample is below the temperature of formation of the oxide layer, and tensile if the temperature of the sample exceeds the temperature at which the oxides appear. Residual stresses arising at oxidation in the temperature interval 20 ↔ 800 °C (at cooling) reach considerable values, on the order of 26 MPa [53]. The variable cyclic stresses formed both as a result of thermal cycling and as a result of oxidation may not coincide in phase with the stresses arising at the application of the dynamic pin loads. However, in any case damage accumulates, which considerably accelerates the process of microcracking in the damaged surface layer and the process of cracking and layer-by-layer removal of the oxides.
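The statement that the oxide volume exceeds that of the parent metal is commonly quantified by the Pilling-Bedworth ratio; the definition below is a standard textbook relation included for orientation, not a formula from the paper:

```latex
% Standard definition of the Pilling-Bedworth ratio (textbook relation,
% included for orientation; not taken from the paper itself):
\mathrm{PBR} = \frac{V_{\mathrm{oxide}}}{V_{\mathrm{metal}}}
             = \frac{M_{\mathrm{oxide}}\,\rho_{\mathrm{metal}}}{n\,M_{\mathrm{metal}}\,\rho_{\mathrm{oxide}}}
% M -- molar mass, \rho -- density, n -- number of metal atoms per oxide
% formula unit. PBR > 1 implies compressive growth stresses in the oxide,
% consistent with the compressive stresses discussed above.
```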
During thermocycling, there is a decrease in the hardness of the carbides, mainly at grain boundaries, and precipitation of alloying elements, which changes the lattice parameter of the alloy. With an increasing number of thermal cycles, the carbide phase appears as point inclusions. They are located away from the boundaries, deep within the grains, and can then become locked into carbide layers. The development of such carbide precipitates causes embrittlement of the alloy, which contributes to a shortening of the "strengthening – unstrengthening" cycle and the subsequent separation of wear particles. Similarly to the case of isothermal tests, growth of the γ′-phase is observed. Therefore, coagulation takes place under thermocycling, which results in a general coarsening of the structure of the ZhS6K alloy. The growth of the γ′-phase is observed both with a growing number of thermocycles and with an increase of the cycle interval. In both cases there is an increase in the stresses of the γ′-phase and a reduction of its degree of ordering. An increase in the size of the intermetallic phase under thermal cycling assists dislocation mobility, which ultimately affects friction fatigue.
In the process of thermocycling wear, a poorly etched zone of small depth, depleted of alloying elements, appears on the surface. This zone has high hardness (17–19 GPa), and in the process of pin interaction it undergoes permanent crushing and displacement. On some alloys, shear formations and Carlsbad-type twins appear in the subsurface zones. Presumably, thermal cycling at a high t_max of the cycle causes rapid ageing of the material.
Under thermocycling, the structural and phase changes in the surface layers of the metal are practically the same as in wear-resistance tests under isothermal conditions. However, these changes occur more quickly.
Along with wear resistance, another important tribological characteristic of structural materials is the coefficient of friction, which under dynamic pin loads allows a wear-resistant material to be selected precisely. For designers it will be helpful in making decisions regarding the material, for example, for the shroud shelves of the working blades of turbine engines. Experimental research on the selected heat-resistant alloys was performed using the methodology of [50]. Based on the tests of the alloys ZhS6U and VZhL2, it is concluded that under dynamic application of loads in the contact the coefficient of friction of the alloys decreases, according to the formulas:
• for the alloy ZhS6U: f_f = 29.346·n^(−0.30939) (5)
• for the alloy VZhL2: f_f = 0.95889·n^(7.72127) (6)
where n is the number of load cycles.
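The short sketch below evaluates the reported power-law fits with the constants exactly as printed. Note that the exponent of Equation (6) as printed would make the coefficient of friction grow with n, which contradicts the stated decreasing trend; a lost minus sign in the source seems likely, so the VZhL2 function is shown but treated with caution.

```python
# Evaluation of the reported friction-coefficient fits (5) and (6).
# Constants are reproduced exactly as printed in the text.
def f_zhs6u(n: float) -> float:
    """Eq. (5), alloy ZhS6U: f_f = 29.346 * n^(-0.30939)."""
    return 29.346 * n ** -0.30939

def f_vzhl2(n: float) -> float:
    """Eq. (6), alloy VZhL2, as printed: f_f = 0.95889 * n^(7.72127).
    A positive exponent contradicts the stated decreasing trend, so the
    printed sign of the exponent may be an extraction artifact."""
    return 0.95889 * n ** 7.72127

for n in (1e3, 1e4, 1e5):
    print(f"n = {n:.0e}: ZhS6U f_f = {f_zhs6u(n):.3f}")
```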
In the initial period, the change of the coefficient of friction is uneven, and these oscillations, for example, for the ZhS6U alloy amount to 20–25% of the nominal value. The instability of the coefficient of friction in the initial period results from the character of the interaction of the contacting surfaces and strongly affects the final amount of wear. Probably, the maximum values correspond to the period of mechanical interaction of asperities, accompanied by adhesion processes in the contact zone. The minimum values of the coefficient of friction occur in the period of destruction of individual asperities on the pin surfaces, including the metal-transfer products appearing in the initial period.
The uneven change of the coefficient of friction is also conditioned by the fact that in every contact cycle the sliding speed becomes equal to zero; at the moment when movement begins the friction force is consequently the largest and the coefficient of friction is maximal. In addition, the uneven changes of the coefficient of friction cause additional oscillations (taking place under dynamic pin loads) of the normal load, whose frequency is determined by the stiffness of the tribological system. In the case of dynamic normal loading, these oscillations can increase (at resonance) or decrease (at antiphase oscillations). The duration of the period of uneven change of the coefficient of friction (expressed in this case by the number of load cycles N_cr) is determined, in the final analysis, by the temperature dependence of the wear of the contacting materials (see Figure 7). The ZhS6U alloy exhibits a rather smooth change, while for VZhL2 there is a maximum of N_cr at a temperature of about 400 °C. As in the case of the dependence of wear on temperature [52], the duration of this period defines the time during which a protective layer, consisting mainly of oxidation products of the material, appears on the surface. The higher the temperature in the contact zone, the shorter this period. The necessary condition for the transition to the steady period of friction is the cessation of contact between fresh surfaces due to the formation of a sufficient amount of oxides and conglomerates making up a protective layer. At the end of this period, separate shiny, almost mirror-like contact spots, which carry the load, appear on the friction surfaces. With an increase of temperature above 600 °C, such spots generally occupy more than 50% of the contour contact area. It should be noted that, unlike reversible or unidirectional sliding friction, under dynamic pin loading at temperatures below the transition temperature such contact spots still appear on the surface, even though the rate of their appearance is considerably lower than the rate of their destruction. The formation and subsequent destruction of such contact spots is presumably determined by the rate of formation of the protective layer, which is influenced by the oxidizability of the alloys, the properties of the oxides, and their propensity to dispersion and deformation under the action of the external mechanical (amplitude, force, frequency) and thermal load parameters. It should be noted that micro- and nanoparticles of metal oxides have different functional properties [54]; therefore, different effects and wear mechanisms of parts can be expected. In addition, the durability (in particular, the fatigue durability) of individual asperities, their cyclic toughness, and the dependence of adhesion processes on temperature affect the longevity of such layers. All the indicated factors are in permanent interaction and must be examined comprehensively, considering the external loading conditions.
Despite the significant progress achieved in tribology, many problems related to the improvement of wear resistance and the reduction of friction losses are still not fully understood. This is due to the wide range of mechanical and physicochemical phenomena that occur in the contact zone. Simultaneous analysis of all such phenomena is nearly impossible. It is advisable to consider a limited set of informative parameters which may be sufficient to comprehensively characterize a tribosystem.
Moreover, in testing on a friction machine, the tribocontact loading conditions should correspond as closely as possible to the real conditions of tribo-joint operation. It is a matter of general experience that a variety of wear mechanisms exist. Variation in any given factor, or the appearance of a new one, can result in changes in the wear mechanism. Numerous studies into many load factors have been conducted and friction and wear regularities have been determined under specified conditions. The solution to the problem of the surface strength of friction pairs under normal vibrations is only possible after finding the main mechanisms and features of the contact destruction of two solids that are nominally stationary relative to each other and subjected to simultaneous vibration as well as variable temperature.
According to the obtained results (Figure 2), cyclic temperature change contributes to a decrease in the wear resistance of heat-resistant nickel alloys by 2–3 times compared to tests at a constant elevated temperature during the operation of tribo-joints of various machines and mechanisms under conditions of complex dynamic loading and non-stationary temperatures. This is important, since most tribo-joints, in particular in aircraft gas turbine engines, are operated under conditions of complex dynamic loads. At the same time, there is a combined effect of high temperatures, the properties of an aggressive gas environment, and the mutual movement of parts with vibrations acting in different directions, including shock loads. Without considering the entire complex of load factors, the research results are distorted and a picture of the wear process is created which does not match the real one.
The complex non-stationary nature of the load leads to a specific stress state of the surface layers of tribo-joint materials, which significantly affects their wear resistance. This is accompanied by a change in the coefficients of friction and wear, as can be seen from the analysis of the obtained results (Figures 3, 4), which is determined by the conditions of formation and destruction of the protective surface layer. Moreover, as follows from the obtained results (Figures 5, 6), the wear resistance of heat-resistant nickel alloys also depends on the temperature in the thermocycle. With an increase in both the maximum temperature and the average temperature of the cycle (Figure 6), the wear of all studied alloys tends to decrease. The corresponding mechanism of wear of heat-resistant nickel alloys under the considered contact conditions is described.
This explains the limited applicability of the general provisions of friction theories, as well as of most results of experimental studies. In addition, traditional research methods are based on the separate study of the influence of one or an extremely limited number of factors without considering their interaction, as well as without considering the dynamics of the tribosystem as a whole. It is known that the wear of heat-resistant alloys can occur by several different mechanisms. A different nature of wear depending on temperature occurs both for heat-resistant alloys based on Ni and Co and for mild steels based on Fe. A change in one or another load factor, or the appearance of a new factor, leads to a change in the wear mechanism and its physical picture.
Previous studies [48, 50, 51, 52, 55] established the need for a comprehensive study of the wear resistance of tribo-joints, taking into account the contact conditions, especially temperature and its changes during operation. At the same time, the plastic deformation and fracture of the metal during friction should be considered a physicochemical process, that is, a process accompanied by a complex of structural, physical, and physicochemical changes in the state of the surface layer of the deformed alloy. However, the influence of non-stationary temperature on the wear resistance of tribo-joints under conditions of complex dynamic loading is practically not covered in publications.
The results presented here represent the first stage of the research. In further research, it is necessary to consider and analyze in more depth the actual state of the surface layer of the alloys after tribological tests and the corresponding wear traces on the friction surface, and to determine the influence of the chemical composition and the physical and mechanical properties of the alloys on wear resistance under the considered friction conditions. The given materials present the results of the study of the properties of heat-resistant nickel alloys under tribo-joint testing conditions maximally close to the operating conditions of aircraft gas turbine engine assemblies. However, it is also necessary to consider the wear resistance of other alloyed steels that are used for the manufacture of tribo-joint parts operated at non-stationary lower temperatures, including cyclic sub-zero temperatures.
In world practice, a trend toward the development of functionally oriented research methods and their correlation with data obtained during field tests can be observed. This is quite natural, since the design of tribo-units based on traditional design solutions, without taking into account the specificity of their operating conditions (first of all, changes in load parameters over time), often leads to such tribo-joint designs turning out to be insufficiently reliable. It is appropriate to note that the reliability of the results obtained during full-scale tests is very low due to the large spread of the controlled values, which is a consequence of the nature of the contact interaction, which changes over time. This character is specific to each machine and depends both on the structural features of the parts and components and their manufacturing technology, and on the operating conditions. Therefore, there is an urgent need to determine the nature of the loading of tribo-units, the ranges of the load parameters, and their evolution during operation, determined on the basis of statistical data of a typical set of load modes and their changes over a set period.
After studying the influence of each of the load parameters, separately or in combination, on the tribological characteristics of the unit and its parts, it is possible to determine the equivalent state of the contacting surfaces and then simulate these states in laboratory conditions. Such simulation makes it possible to increase the reliability of the obtained results and significantly reduce the duration of tests. On the other hand, the study of the mechanisms of damage to materials and the creation of wear models of the contact surfaces of parts that work in extreme conditions allow one to purposefully create (or choose from among existing) wear-resistant materials, and to develop design and technological measures aimed at increasing the durability of parts subject to wear. The considered regularities of changes in the wear resistance of tribo-joints depending on the operating conditions can be useful to specialists during the design of gas turbine engine assemblies.
Research results [55] show that an increase in the wear resistance of alloys is achieved by optimizing the structural state of the surface layer. The obtained results of tests of nickel-based alloys under different temperature regimes are consistent with the results of research [10], where the influence of the stress-strain state of metal parts on the operational characteristics of gas turbine engines was studied.
Research results [56] showed that, to extend the life cycle of gas turbine engines, it is advisable to use surfacing (weld deposition) to restore worn parts made of heat-resistant nickel alloys. By rationally selecting heat-resistant alloys for the manufacture of parts and choosing operating modes, it is possible to increase the service life of gas turbine engines [57] and their environmental friendliness [58].
In further studies, it is planned to investigate the wear of heat-resistant alloys during an asymmetric loading cycle, to examine images of wear marks, and to analyze the materials after testing.
CONCLUSIONS
This article investigated the effect of thermal cycling on the tribological characteristics of heat-resistant nickel alloys. Based on the conducted experiments, it was established that:
• The wear resistance and tribological characteristics of heat-resistant alloys depend significantly on the conditions of dynamic and temperature contact of the tribo-joint.
• Cyclic temperature changes contribute to a decrease in the wear resistance of heat-resistant nickel alloys by 2 to 3 times compared to studies at a constant elevated temperature.
• The absolute value of the wear coefficient during thermocycling is 5 to 7 times higher than during tests at a constant temperature, which is determined by the conditions of formation and destruction of the protective surface layer.
• The coefficient of friction is characterized by a stepwise change and depends on the sliding oscillations in the contact zone and the thermocycling temperature.
The analysis of the results of research on the tribological characteristics of heat-resistant alloys shows the impact of different combinations of dynamic pin load factors and cyclic thermal factors. The greatest contribution to the damage of the materials comes from seizure processes and pin thermo-fatigue phenomena.
In conclusion, the research allows the principal reasons for the decline in the wear resistance of heat-resistant alloys to be found. In addition, the investigations make it possible to define the degree of impact of the various factors on the pin surface. This allows, in the final analysis, the wear process parameters necessary for the proper design of tribo-couplings to be established.
Fig. 7. Change of the duration of the transition to the steady value of the friction force depending on temperature: 1 – ZhS6U; 2 – VZhL2
Table 1. Chemical composition of heat-resistant nickel alloys
"Engineering",
"Materials Science"
] |
Asp179 in the class A β‐lactamase from Mycobacterium tuberculosis is a conserved yet not essential residue due to epistasis
Conserved residues are often considered essential for function, and substitutions in such residues are expected to have a negative influence on the properties of a protein. However, mutations in a few highly conserved residues of the β‐lactamase from Mycobacterium tuberculosis, BlaC, were shown to have no or only limited negative effect on the enzyme. One such mutant, D179N, even conveyed increased ceftazidime resistance upon bacterial cells, while displaying good activity against penicillins. The crystal structures of BlaC D179N in resting state and in complex with sulbactam reveal subtle structural changes in the Ω‐loop as compared to the structure of wild‐type BlaC. Introducing this mutation in four other β‐lactamases, CTX‐M‐14, KPC‐2, NMC‐A and TEM‐1, resulted in decreased antibiotic resistance for penicillins and meropenem. The results demonstrate that the Asp in position 179 is generally essential for class A β‐lactamases but not for BlaC, which can be explained by the importance of the interaction with the side chain of Arg164 that is absent in BlaC. It is concluded that Asp179 though conserved is not essential in BlaC, as a consequence of epistasis.
Introduction
In protein families, sets of highly conserved amino acid residues can be identified. Conservation is used as a proxy for essentiality because mutations in these residues are evidently not tolerated due to loss of function. The equation of conservation and essentiality is valid as an argument to explain conservation, but it does not mean that a conserved residue is essential in every member of the protein family. Epistatic effects influence the role of residues, and the essential nature of a residue may be lost without the residue being mutated. An example of such a case is described here. In a recent study from our laboratory, all the second and third-shell conserved residues in the class A β-lactamase BlaC from Mycobacterium tuberculosis were mutated [1]. In line with expectation, mutation of most conserved residues resulted in non-functional enzyme. The distance from the active site was correlated with the function of the residues. Third-shell conserved residues, far from the active site, were shown to be essential for solubility and folding. Second-shell residues, around the active site, contribute to stabilizing the single catalytically most active conformation [1]. Interestingly, this broad mutagenesis study also revealed that for some conserved residues certain mutations functioned equally well as or better than the wild-type enzyme in activity against ampicillin, raising the question whether these conserved residues were essential.
Here, we focus on residue Asp179 (Ambler numbering [2]), which is present in 99.8% of the enzymes in this family [1]. Asp179 is located in the Ω-loop of the protein, a loop that also contains Glu166, the general acid/base involved in deacylation of the enzyme, which is the second step in the catalytic mechanism [3–5]. In BlaC, the mutation D179N led to an increase in resistance against penicillins of Escherichia coli cells, together with slightly increased thermostability of the purified enzyme [1]. However, substitution of any other residue for Asp179 in TEM-1 and KPC-2 has been reported to result in less fitness, except when using ceftazidime as a substrate [6–11]. These results lead to the hypothesis that Asp179 is essential in other β-lactamases but not in BlaC, despite being conserved. To test this idea, BlaC D179N was characterized in more detail and its fitness was compared with the same variant constructed in several other β-lactamases, representative of the class A β-lactamase family. Our results show that the mutation D179N indeed decreases the general fitness of other β-lactamases, contrary to BlaC. The crystal structure of BlaC D179N offers an explanation for the loss of the essentiality of Asp179.
BlaC D179 variants show different substrate specificity
To investigate the importance of the side chain in position 179 of BlaC, the Asp was mutated to Ala, Glu, Gly, Leu, Asn, and Gln. E. coli cultures producing Asp179 variants in the periplasm were tested for resistance against various antibiotic compounds. The minimum concentration at which the cells could not grow was determined by applying drops of cell culture of OD600 0.3–0.0003 on agar plates containing the penicillins ampicillin, carbenicillin or penicillin G, or the third-generation cephalosporin ceftazidime (Fig. 1, Figs S1 and S2). In our experience, this approach allows for subtler differences in antibiotic resistance to be detected than standard MIC determination. For the evaluation of β-lactamase inhibitor susceptibility, 100 μg·mL⁻¹ carbenicillin was used in combination with the β-lactam inhibitor sulbactam and the non-β-lactam inhibitor avibactam. The results indicate that almost all tested Asp179 mutants outperformed wild-type against ceftazidime. This antibiotic is a poor substrate for BlaC. In the used assay, cells producing wild-type BlaC do not grow on plates with 0.8 μg·mL⁻¹ ceftazidime, compared with 0.2 μg·mL⁻¹ for the negative control, BlaC S70A. For comparison, for ampicillin and carbenicillin, cells stop growing at 120 μg·mL⁻¹ and >1000 μg·mL⁻¹, respectively (Table 1). Of the D179 variants, BlaC D179G shows the largest increase, a more than six-fold increase in the minimum concentration that inhibited growth (Fig. 1A, Table 1). However, only cells producing BlaC D179N displayed increased growth in the presence of other β-lactam antibiotics (Fig. 1B, Figs S1 and S2, Table 1), and they also showed somewhat higher resistance to avibactam, which can probably be attributed to an increased conversion of the carbenicillin that was used in combination with the inhibitors (Fig. 1C). It is concluded that mutation of Asp179 in BlaC shifts the substrate specificity from penicillins to ceftazidime, except for BlaC D179N, which outperforms wild-type BlaC on both types of substrates.
BlaC D179N has a higher melting temperature than wild-type
To characterize the BlaC variants further, the enzymes were overproduced in the cytoplasm of E. coli. The yield of soluble BlaC D179E from 1 L of cell culture was lower than for the other BlaC variants and most protein was insoluble (Fig. 2A). Other BlaC variants were produced in quantities similar to BlaC wild-type, except for BlaC D179N, for which production was slightly increased. All mutants, except for D179E, exhibited the same secondary structure content as wild-type BlaC, as judged by CD spectroscopy (Fig. 2B). Denaturation experiments were performed to establish the thermal stability of the BlaC variants. Both tryptophan fluorescence and a thermal shift assay with a hydrophobic dye were used for melting temperature assessment, as these methods might yield structure-specific results. Substitution of Asp179 with Asn resulted in a 1.5 °C increase in melting temperature, whereas substitutions to Gly, Gln and Ala significantly lowered the melting temperature (Table 2, Fig. 2B). For BlaC D179E and D179L, the unfolding curves did not show a clear melting point.
BlaC D179G and D179N exhibit different kinetic profiles
The kinetic parameters of nitrocefin hydrolysis of BlaC D179N, BlaC D179G and wild-type BlaC were measured using purified enzymes. BlaC D179N displayed a nitrocefin activity very similar to that of the wild-type BlaC, with catalytic efficiencies of 4.0 ± 0.3 × 10⁵ M⁻¹·s⁻¹ and 3.3 ± 0.3 × 10⁵ M⁻¹·s⁻¹ for wild-type BlaC and BlaC D179N, respectively (Table 3, Fig. 3A). An in vitro inhibition assay with avibactam indicated no reduced sensitivity of this mutant for the inhibitor (Fig. 3B). Therefore, we attribute the slightly higher resistance of cells producing BlaC D179N as compared to wild-type BlaC to an elevated level of active enzyme, and the increased resistance of the cells to avibactam to concomitant faster degradation of the antibiotic. The overexpression results (Fig. 2A) and somewhat higher melting temperature suggest a higher stability of the soluble enzyme, and thus the level of active BlaC D179N in the cell assay may well be higher than that of the wild-type enzyme. Purified BlaC D179G does not display any activity against nitrocefin (Fig. 3C). However, activity against the poor substrate ceftazidime benefits from both the Asp-to-Asn and the Asp-to-Gly substitution (Table 3), with 2.5- and >20-fold increases of the k_cat/K_M parameter, respectively, in line with the findings of the cellular assay. Ceftazidime degradation by BlaC D179G clearly displays two phases (Fig. 3D), so the standard steady-state model is not applicable. At lower enzyme concentration, two phases can be distinguished also for wild-type BlaC and BlaC D179N (Fig. 3E). We used the second, linear phase to calculate the velocity of the reaction. Two-phase ceftazidime hydrolysis was observed and explained before for the KPC-2 β-lactamase, with the burst phase caused by rapid acylation and the following linear phase by slow deacylation [12]. This explanation does not apply to BlaC, as the amplitude of the burst phase indicates that the enzyme molecules perform more than a single turnover and the amount of product formed in the first phase is dependent on the substrate concentration. The substrate dependence of the second phase of BlaC D179G indicates a low apparent K_M (Table 3, Fig. 3F), suggesting a rate constant of acylation that is much larger than that of deacylation. For wild-type and D179N BlaC, the apparent K_M is high and only the k_cat/K_M can be determined (Fig. 3F). It is probable that BlaC D179G (and to a small extent also wild-type and D179N BlaC) exists in two conformations that react differently with ceftazidime. Such two-phase kinetics for ceftazidime has previously been published for TEM-1 W165Y/E166Y/P167G and PenI C69F and suggests a branched pathway for substrate hydrolysis [13,14]. Such kinetics have also been observed for other BlaC variants [15] and will be described in more detail elsewhere.

Table 2. Melting temperatures for BlaC variants (columns: tryptophan fluorescence; hydrophobic dye). SD represents the standard deviation of three measurements. The two methods do not necessarily report the same conformational change, making the melting temperature method-dependent.
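One common way to extract the steady-state velocity from such a biphasic progress curve, as described above, is to fit a straight line to the late, linear portion of the trace. The sketch below illustrates this on synthetic data with purely illustrative parameters; it is not the authors' analysis code.

```python
# Sketch: estimating the second (linear) phase velocity of a biphasic
# progress curve, as used above for ceftazidime hydrolysis.
# All data and parameters are synthetic and purely illustrative.
import numpy as np

t = np.linspace(0, 300, 301)                 # time, s
burst = 4e-6 * (1 - np.exp(-t / 20))         # fast initial (burst) phase, M
steady = 2e-8 * t                            # slow linear phase, slope in M/s
product = burst + steady + np.random.normal(0, 1e-8, t.size)

late = t > 150                               # use only the late, linear part
slope, intercept = np.polyfit(t[late], product[late], 1)
print(f"second-phase velocity ~ {slope:.2e} M/s")
```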
BlaC D179N shows subtle conformational changes in comparison to wild-type BlaC
Structural characterization focused on BlaC D179N. NMR spectroscopy of BlaC D179N confirmed that this variant is well-folded (Fig. S3). ¹H and ¹⁵N chemical shifts of backbone amide resonances of the mutant protein were assigned and compared to those of wild-type BlaC (Fig. 4). The largest chemical shift perturbations (CSPs) due to the mutation are observed for amides in the Ω-loop, but smaller CSPs spread out over other parts of the structure. The crystal structure of BlaC D179N solved at 1.8 Å resolution reveals the nature of the structural changes (Table S1, Figs 5 and 6). Overall, the structure of the mutant resembles the BlaC wild-type structure (Cα RMSD 0.31 Å, Fig. 5). Surprisingly, the newly introduced asparagine occupies the same location as the side chain of the aspartate (Fig. 5B) and the interactions of D179 are conserved.
In the structure of wild-type BlaC, the carboxy-carboxylate interaction requires a shared proton between Asp172 and Asp179 [16,17]; in BlaC D179N this interaction likely is an ordinary hydrogen bond between the γ-carboxy group of Asp172 and the NH₂ of the amide group of Asn179. Despite the conserved interactions, some changes are observed in the Ω-loop of the mutant. Two peptide bonds, involving Pro174-Gly175 and Arg178-Asn179, are flipped in BlaC D179N (Fig. 6A,B). The flipped bond involving Arg178 is accompanied by the loss of the salt bridges between Arg178 and Asp172 and between Asp163 and Arg161. Asp163 was found in two conformations, one of which is rotated toward the solvent, with the created space occupied by the backbone carbonyl of Arg178. The second conformation is still able to form a salt bridge with Arg161, but both Asp163 and Arg161 are pushed away from the loop containing Arg178. Previously, our group solved the structures of wild-type BlaC with the inhibitors clavulanic acid, sulbactam, tazobactam, and avibactam, as well as of BlaC G132S with sulbactam [18,19]. Here, we solved the structures of BlaC D179N with the inhibitors sulbactam and vaborbactam at 1.9 Å, as well as the structure of BlaC wild-type with vaborbactam (Fig. 5C,D), as no structures of BlaC with this transition-state inhibitor were available. Both inhibitors occupied the binding pocket of BlaC D179N in the same way as in the wild-type enzyme (Fig. 5E,F). In the structure of BlaC D179N with sulbactam, however, only the Pro174-Gly175 peptide bond was found flipped, while in the structure of BlaC D179N with vaborbactam both Pro174-Gly175 and Arg178-Asn179 were found in the same conformation as in the wild-type BlaC (Fig. 6C). These observations suggest an increased conformational freedom of this region of the Ω-loop in BlaC D179N, compared to the wild-type enzyme. However, the NMR spectra did not provide evidence for millisecond dynamics (no line broadening) and normalized B-factor analysis did not show any significant difference between the Ω-loop in the wild-type structures and the D179N variant structures (Fig. 6E). Combined, these findings do not suggest that BlaC D179N has strongly increased dynamics in the Ω-loop.
The comparison of the B-factors shows a few differences in the solvent-exposed loops distant from the mutation site that might be explained by the crystal packing or pH differences in the crystallization. The structure of wild-type BlaC bound to the trans-enamine adduct of sulbactam displays increased normalized B-factors of residues 99-107, located in the loop on top of the binding pocket, compared to those of the BlaC D179N sulbactam adduct, which might indicate the importance of the Asp179 residue for the selectivity of substrate binding, as the residues within this loop were shown to be involved in substrate recognition, or be the result of a slightly different position of the adduct in the active site (Fig. 5E).

The D179N mutation is detrimental in other class A β-lactamases

To investigate why residue 179 does not appear as an asparagine in class A β-lactamases, the D179N mutation was introduced in four BlaC orthologues. These β-lactamases were selected based on sequence identity (Table 4, Fig. 7), structural root-mean-square deviation (RMSD), and available information from previous research. CTX-M-14, KPC-2, NMC-A, and TEM-1 share on average 42% of their sequences with BlaC, which is less than the average sequence identity of 50% found for 497 class A β-lactamase sequences. The average RMSD for the Cα atoms of CTX-M-14, KPC-2, NMC-A, and TEM-1 compared to BlaC is 0.79 Å (Table 4). The genes coding for the soluble parts of these β-lactamases and their D179N variants were cloned in the same expression vector as blaC, behind a signal peptide for TAT-based translocation [19]. The ability to convey resistance to E. coli against antibiotics was tested as described for the BlaC variants. For both ampicillin and meropenem, the cells expressing the wild-type variants of CTX-M-14, KPC-2, NMC-A and TEM-1 grow better than the cells expressing the D179N variants, whereas this is not the case for BlaC (Fig. 8, Figs S4-S7). This trend was also observed for carbenicillin, although NMC-A did not confer any resistance at the concentrations used for this assay (Fig. 8, Fig. S5). Interestingly, cells expressing the D179N variants of the carbapenemases NMC-A and KPC-2 grow better on ceftazidime than cells expressing the wild-type variant, whereas cells expressing TEM-1 did not grow at all at the concentrations tested. These differences in growth were also observed when growing the cells in liquid cultures (Figs S8-S11). Cells producing TEM-1 grow worse in the presence of ampicillin than ones making BlaC, which is surprising because TEM-1 is known to be quite active against ampicillin. Perhaps the expression system used in this study is less suitable for TEM-1 than for BlaC. So, in summary, and contrary to what was observed for BlaC, the D179N mutation negatively affects the ability of E. coli cells to grow in the presence of penicillins or meropenem for all four β-lactamases, CTX-M-14, KPC-2, NMC-A and TEM-1, either because the mutation changes the catalytic properties or because it leads to reduced levels of active enzyme.
The in vitro activity of the β-lactamases was tested using the soluble cell fractions and nitrocefin as substrate, following an approach described before [1]. For these experiments, the genes were cloned in overexpression vectors with a T7 promotor and cytoplasmic expression, as used for the BlaC production. Wild-type BlaC and BlaC D179N exhibit comparable protein levels and nitrocefin activity. However, large differences were observed for the other β-lactamases (Fig. 9). Wild-type CTX-M-14 and TEM-1 were better at nitrocefin hydrolysis than BlaC, while their D179N variants were insoluble and no activity could be detected in the soluble cell fraction. Similarly, for KPC-2 and NMC-A the SDS-PAGE analysis shows the presence of more wild-type than mutant protein in the soluble cell fraction (Fig. 9A), indicating that mutation D179N influences the levels of soluble enzyme.
Discussion
The most common side chain interactions observed in the Ω-loop of class A β-lactamases are Arg161-Asp163, Glu166-Asn170, and Arg164-Asp179. BlaC does not carry an Arg at position 164, thus the salt bridge to Asp179 is missing. The importance of this salt bridge was discussed in multiple studies on various class A β-lactamases. It was shown that mutations in both Arg164 and Asp179 increase resistance against ceftazidime, while decreasing the resistance to other β-lactam antibiotics [8,12,20-23]. The same effect is observed in our study on BlaC, because almost all mutants of Asp179 cause increased resistance against ceftazidime in E. coli and decreased resistance to other compounds. Increased flexibility of the Ω-loop has been suggested as an explanation for the shift in substrate profile. A higher ceftazidime minimal inhibitory concentration was also reported for the mutant P167S of CTX-M-14, and it was shown that this mutation causes conformational flexibility in the Ω-loop and a large rearrangement of the loop in the acyl-enzyme complex [24]. In the resting-state enzyme the loop was structurally similar to that in the wild-type protein, but with the acylated adduct it assumed a different conformation, changing the position of the adduct compared to the adduct in the structure of the wild-type enzyme and so influencing the positions of the active site residues. We observed a similar behaviour for the same mutation in BlaC, showing that indeed the conformational freedom introduced by this mutation is the reason for its changed substrate profile [15]. The two-phased product formation curve and low melting temperature observed for BlaC D179G (Table 2, Fig. 2) resemble the behaviour of BlaC P167S and suggest that also this variant exists in more than one conformation, either in the resting state or during the reaction (branched kinetics). Also for other β-lactamases, studies indicate that mutation of Asp179 enhances the flexibility of the Ω-loop. Barnes and colleagues [11] used modelling to predict the changes in the Ω-loop occurring upon mutation of Asp179 in KPC-2, indicating loss of the interaction with Arg164 and increased flexibility of the Ω-loop. This mobility led to changes in the position of the catalytic residues Ser70 and Glu166. The crystal structure of KPC-2 D179N showed that the disruption of the 179-164 interaction results in a displacement of the active site residue Asn170 [6]. This change was accompanied by a drastic decrease in the stability of the protein. The studies on the KPC D179Y variant showed that this mutation leads to a disordering of the Ω-loop, which was linked to an improved ceftazidime degradation [6,25]. Increased flexibility of the Ω-loop was also observed in the crystal structure of PC1 D179N from Staphylococcus aureus, where the Ω-loop was found to be disordered [26]. In the case of the BlaC Asp179 variants, the increased conformational freedom of this part of the Ω-loop can be explained by the change in the interaction between Asp172 and the residue at position 179. The crystal structures of BlaC D179N indicated the possibility for the peptide bonds between these two residues to flip, which is likely caused by a new hydrogen bond interaction between Asn179 and Asp172, which may be less rigid than the carboxyl-carboxylate interaction in the wild-type enzyme. The absence of this hydrogen bond in BlaC D179G might lead to even more conformational freedom in the Ω-loop. Although the NMR data and the normalized B-factor analysis of the crystal structures provide no indication of increased flexibility of the Ω-loop in the D179N variant, the changes in the peptide bonds observed in the crystal structures indicate the ability of this variant to adapt the conformation of the loop to the specific substrate or inhibitor. Such a slight enhancement of conformational freedom may enable the enhanced ceftazidime activity while maintaining the activity against other substrates.

Fig. 5 (caption fragment). The 2mF0-DFc electron density map for BlaC D179N is centered on the labelled residues and is shown in purple chicken wire, with contour level 1 σ and extent radius 5 Å; (C) Crystal structure of BlaC D179N with sulbactam (PDB entry 8BTV, orange) overlaid with the wild-type structure in free form (grey) and with sulbactam (PDB entry 6H2K [18], turquoise); (D) Crystal structure of BlaC D179N with vaborbactam (PDB entry 8BTW, light blue) overlaid with the wild-type structure, free form (grey) and with vaborbactam (PDB entry 8BV4, dark blue); (E,F) Position of the inhibitors sulbactam (E) and vaborbactam (F), respectively, in BlaC wild-type (turquoise and dark blue, respectively) and BlaC D179N (orange and light blue, respectively). The 2mF0-DFc electron density maps with contour level 1 σ and extent radius 5 Å are centered on the inhibitor structures and are shown in blue chicken wire for the BlaC D179N structures or black chicken wire for BlaC wild-type with vaborbactam. The figures were generated using CCP4mg [45].

Fig. 6 (caption fragment). In BlaC the side chain of Asp172 fills part of the space taken by the side chain of Arg164 in TEM; (E) The difference between the normalized B-factors (B′) of BlaC D179N and wild-type BlaC in free form (in purple, wild-type structure 5OYO [27] and mutant structure 8BTU), with the vaborbactam adduct (in blue, wild-type structure 8BV4 and mutant structure 8BTW), and with the sulbactam adduct (in turquoise, wild-type structure 6H2K [18] and mutant structure 8BTV). The cutoff for a significant ΔB′ was set to two standard deviations of the mean. Residues 160-180 in the Ω-loop are highlighted in grey. The figures were generated using CCP4mg [45].
Here, we tested the effects of D179N in five β-lactamases that are representative of the class A β-lactamases. E. coli producing this variant of KPC-2, NMC-A and TEM-1 are more sensitive to penicillins and meropenem and less to ceftazidime, in line with fitness studies on TEM-1 and KPC-2 [8,11]. In CTX-M-14, the D179N substitution impacts the ability of the cells producing this variant to grow in the presence of all tested antibiotics, probably because the solubility of the enzyme is compromised by the mutation. These findings, together with the previous findings described in the literature, suggest that in all cases the Ω-loop is more mobile, enhancing activity against ceftazidime or, in the case of CTX-M-14, to such an extent that the protein is no longer stable and becomes insoluble. These observations make BlaC D179N an interesting exception. Mutation of Asp179 in BlaC to other residues also leads to a reduced melting temperature and a shift in the substrate spectrum toward ceftazidime activity. BlaC D179N, however, is produced as a soluble protein with high yield, shows a single conformation in the NMR spectrum, with no evidence of extensive line broadening, and yields a crystal structure with an ordered Ω-loop. At the same time, it shows somewhat enhanced activity against ceftazidime as well as good activity against penicillins and nitrocefin. Position 164 is occupied predominantly by Arg in class A β-lactamases, while BlaC carries Ala at this position. The H-bond between Asp172 and Asp179 in BlaC is only possible due to the absence of Arg at position 164, because the side chains of Arg164 and Asp172 would clash (Fig. 6D). This subtle change could be responsible for the differences observed upon mutation of Asp179 in BlaC vs. other β-lactamases. The D179N mutation in BlaC appears to strike a balance that enhances stability and yet slightly increases Ω-loop flexibility. In other β-lactamases the mutation introduces flexibility and reduced stability due to the lost salt bridge between Asp179 and Arg164. Asp179 is highly conserved in class A β-lactamases, and so is Arg164; thus it is possible that being able to cope with the loss of this Arg opened a new evolutionary pathway for BlaC, introducing the D179N mutation to stabilize the enzyme and at the same time enhance its substrate spectrum.
β-Lactamase activity in bacterial cells
Resistance assays were performed with E. coli KA797 cells transformed with pUK21-based plasmids carrying a TAT signal sequence [19] and containing the blaC, blaCTX-M-14, kpc, nmcA or bla wild-type or mutant genes, coding for the soluble parts of BlaC, CTX-M-14, KPC-2, NMC-A and TEM-1, respectively (Fig. 7). For the on-plate assay, cells were applied to agar plates with various β-lactam antibiotics as 10 μL drops with OD600 values of 0.3, 0.03, 0.003 and 0.0003. All plates contained 50 μg mL⁻¹ kanamycin and 1 mM IPTG and were incubated for 16 h at 37 °C. In pUK21 the bla genes are under the control of the lac promoter, but the strain used did not overproduce the LacI repressor, so production of the β-lactamases was semiconstitutive owing to the high copy number of the plasmid. IPTG was added to ensure complete release of inhibition. For the assay with bacterial suspensions, cells with an OD600 of 0.3 were diluted 100-fold in LB medium and incubated overnight at 37 °C with constant shaking. Measurements were performed with a Bioscreen C plate reader.
Table 4. Sequence identity as determined by Clustal Omega and RMSD as determined with PyMOL when aligning by structure (UniProt entries P9WKD3-1, Q9L5C7, Q9F663, P52663, P62593, and PDB entries 2GDN [3], 1YLT [41], 2OV5 [42], 1BUE [43], and 1ZG4 [44]).
Sequence identity (%) Structural RMSD (Å) [Table 4 column headings; entries compare BlaC, CTX-M-14, KPC-2, NMC-A and TEM-1]
Protein production and purification
β-Lactamases were produced using E. coli BL21 (DE3) pLysS cells transformed with pET28a plasmids containing the T7 promoter. The same genes as used for the bacterial cell assays (Fig. 7), but without the signal sequences and with an N-terminal His-tag and TEV cleavage site, were used for this cytoplasmic overexpression system [19]. BlaC was produced and purified as described previously [27]. For experiments comparing wild-type and D179N variants of various β-lactamases, the pellets from 10 mL overnight cultures were lysed in 200 μL of B-PER (Thermo Scientific, Rockford, IL, USA) for 30 min. After centrifugation, the soluble fraction was diluted 50-fold in 100 mM sodium phosphate buffer pH 6.4 and used for circular dichroism spectroscopy and kinetic experiments. Protein solubility was determined by running samples of the whole lysate and the soluble fraction on a 4-15% Mini-PROTEAN TGX Stain-Free Protein Gel (Bio-Rad, Hercules, CA, USA).
Circular dichroism spectroscopy
Circular dichroism spectra were recorded in a 1 mm quartz cuvette at 25 °C with a Jasco J-815 spectropolarimeter.Samples contained 100 mM sodium phosphate buffer (pH 6.4).The curves represent the average of five transients.The alignment is generated using Clustal Omega [40] and visualized using Jalview [46].
Melting temperature
Thermostability of BlaC variants was determined with the use of the hydrophobic dye SYPRO® Orange (Sigma-Aldrich, St. Louis, MO, USA) or using tryptophan fluorescence changes. The commercially available stock of SYPRO® Orange dye has a 5000× concentration, but a 4× concentration was used in the measurements. Tryptophan fluorescence was measured as a function of temperature using a Tycho NT.6 (NanoTemper Technologies, München, Germany) at 330 nm and 350 nm, and the 330 nm/350 nm ratio was used to evaluate the melting temperature. All measurements were done in triplicate in 100 mM sodium phosphate buffer (pH 6.4).
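The Tycho instrument reports the transition midpoint directly; purely as an illustration of the underlying arithmetic, the sketch below estimates a melting temperature as the steepest point of the 330 nm/350 nm fluorescence ratio curve. The temperature grid and the sigmoidal fluorescence traces are synthetic placeholders, not data from this study.

```python
import numpy as np

def melting_temperature(temps_c, f330, f350):
    """Estimate Tm as the temperature where the 330/350 nm fluorescence
    ratio changes fastest (extremum of the first derivative)."""
    ratio = np.asarray(f330, float) / np.asarray(f350, float)
    d_ratio = np.gradient(ratio, temps_c)            # d(ratio)/dT
    return temps_c[np.argmax(np.abs(d_ratio))]

# synthetic unfolding curve with a transition centred near 65 °C
temps = np.linspace(25.0, 95.0, 141)
f350 = 1.0 + 1.0 / (1.0 + np.exp(-(temps - 65.0) / 2.0))  # rises on unfolding
f330 = np.full_like(temps, 1.5)                            # roughly constant here
print(f"Tm ≈ {melting_temperature(temps, f330, f350):.1f} °C")
```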
Kinetics
Kinetic experiments for BlaC D179 variants were performed using a Lambda 800 UV-vis spectrometer (PerkinElmer, Waltham, MA, USA) at 25 °C in 100 mM sodium phosphate buffer (pH 6.4). For nitrocefin kinetics, 5 nM of enzyme was used with 0, 10, 25, 50, 100, 200, 300, and 400 μM of nitrocefin (Δε486 = 18 × 10³ M⁻¹ cm⁻¹). The reactions were followed at 486 nm for 90 s in triplicate. The initial velocities were fitted to the Michaelis-Menten equation, Eqn (1): υ0 = Vmax[S]0/(KM + [S]0), where υ0 is the initial reaction rate, [S]0 the initial substrate concentration, Vmax the maximum reaction rate and KM the Michaelis constant. υ0 and [S]0 are the dependent and independent variables, respectively, and KM and Vmax are the fitted parameters. Vmax is equal to the product of the specific rate constant (kcat) and the enzyme concentration. Note that due to the two-step reaction of antibiotic hydrolysis, the KM must be considered an apparent value that cannot be directly compared with the KM from the Michaelis-Menten derivation. To measure the hydrolysis of ceftazidime for Fig. 3D, 1 μM of BlaC was mixed with 20 μM ceftazidime. Substrate degradation was followed at 260 nm for 7 min in duplicate, using an extinction coefficient difference (Δε260) of 6.8 ± 0.9 × 10³ M⁻¹ cm⁻¹ [15]. The kinetic parameters of the ceftazidime hydrolysis reaction were determined with 100 nM of BlaC and 10, 25, 50 and 100 μM of ceftazidime. The reactions were followed at 260 nm for 5 min and performed in duplicate. In case of biphasic behaviour of the reaction (Fig. 3E), the velocities of the second phase (steady state condition) were calculated and plotted against substrate concentration. For wild-type BlaC and BlaC D179N, the determination of the KM and Vmax values was not possible because KM >> [S], so kcat/KM values were determined from υ0/[S]. Activity in lysates was determined by mixing the 50× diluted soluble fraction of cell lysates with 0, 10, 20, 50, 100, 200, 300 and 400 μM of nitrocefin at 25 °C.
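As a rough illustration of the fitting step described above, the sketch below fits initial velocities to the Michaelis-Menten equation with SciPy. The substrate concentrations and velocities are hypothetical placeholders rather than measured values, and the 5 nM enzyme concentration is only reused to show the kcat conversion.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# hypothetical nitrocefin data: [S]0 in µM, v0 in µM/s
s0 = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 300.0, 400.0])
v0 = np.array([0.9, 1.8, 2.9, 4.0, 4.9, 5.3, 5.5])

(vmax, km), _ = curve_fit(michaelis_menten, s0, v0, p0=[v0.max(), 50.0])
enzyme_uM = 5e-3                 # 5 nM enzyme expressed in µM
kcat = vmax / enzyme_uM          # (µM/s) / µM -> s^-1
print(f"KM ≈ {km:.0f} µM, kcat ≈ {kcat:.0f} s^-1, "
      f"kcat/KM ≈ {kcat / km * 1e6:.2e} M^-1 s^-1")
```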
Inhibition assay
To measure the BlaC inhibition by avibactam, 2.5 nM of BlaC was used with 100 μM nitrocefin in the presence of increasing amounts of avibactam, 0, 10, 100 and 500 μM.The reactions were followed at 486 nm for 20 min and the experiments were performed in duplicate.
Crystallization
Crystallization conditions for BlaC D179N at a concentration of 10 mg mL⁻¹ were screened by the sitting-drop method using the JCSG+, BCS and Morpheus (Molecular Dimensions, Catcliffe, UK) screens at 20 °C with 200 nL drops at a 1:1 protein-to-screening-condition ratio [32]. Crystal growth became visible within 4 days in various conditions specified in Table S1. After 1 month the crystals were mounted on cryoloops in mother liquor and vitrified by plunging into liquid nitrogen. The crystals of BlaC bound to the inhibitors were obtained by soaking in the corresponding mother liquor with 10 mM sulbactam or vaborbactam for 20-40 min.
X-ray data collection, processing and structure solving
Diffraction data were collected at the Diamond Light Source (DLS, Oxford, England). Diffraction data were recorded on a Pilatus detector. The resolution cutoff was determined based on completeness and CC1/2 values. The data were integrated using DIALS [33] and scaled using Aimless [34]. The structures were solved by molecular replacement using MOLREP from the CCP4 suite [35], using PDB entry 2GDN [3] as a search model for all structures except for BlaC D179N with sulbactam, for which 6H2K [18] was used as a search model. Subsequently, building and refinement were performed using Coot and REFMAC [35]. Waters were added in REFMAC during refinement. The following residues were modelled in two conformations: Asp163 for BlaC D179N; Lys230 for BlaC D179N with sulbactam; and Asn197 for BlaC D179N with vaborbactam. The final models fall in the 98th-99th percentile of MolProbity [36]. The models were further optimized using the PDB-REDO webserver [37,38]. Structure validation showed a RamaZ score [38] of −0.15, −0.27, −0.44, and −1.82 for D179N, D179N with sulbactam, D179N with vaborbactam, and wild-type BlaC with vaborbactam, respectively; 97%-99% of all residues are within the Ramachandran plot favoured regions, with two outliers for all structures, namely Cys69 and Arg220. Data collection and refinement statistics can be found in Table S1.
The normalized B-factor analysis was performed using the BANΔIT server [39]. The Gly145A/B/C/D insertion in one of the loops of BlaC was excluded from the analysis for all structures as it differs considerably even between the wild-type BlaC structures available.
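In outline, the comparison behind ΔB' can be reproduced as below. This is only a minimal stand-in for the BANΔIT analysis: per-residue B-factors are z-score normalized, subtracted, and flagged when the difference exceeds two standard deviations; the arrays are randomly generated placeholders, not values from the deposited structures.

```python
import numpy as np

def normalized_b(b):
    """Z-score normalization of per-residue B-factors (B')."""
    b = np.asarray(b, dtype=float)
    return (b - b.mean()) / b.std()

def significant_delta_b(b_mutant, b_wildtype, n_sigma=2.0):
    """Return ΔB' and a mask of residues whose |ΔB'| exceeds n_sigma·SD."""
    delta = normalized_b(b_mutant) - normalized_b(b_wildtype)
    return delta, np.abs(delta) > n_sigma * delta.std()

# placeholder B-factors for two aligned structures of ~265 residues
rng = np.random.default_rng(0)
b_wt = rng.normal(30.0, 5.0, 265)
b_mut = b_wt + rng.normal(0.0, 1.0, 265)
b_mut[160:180] += 8.0            # pretend the Ω-loop region is more mobile
delta, flagged = significant_delta_b(b_mut, b_wt)
print("residues above the 2·SD cutoff:", np.flatnonzero(flagged))
```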
Fig. 1 .
Fig. 1. Cell growth assays.Drops of increasing dilutions of E. coli cultures were spotted on LB-agar plates containing the indicated antibiotics and inhibitors, as well as kanamycin (50 μg/mL) to ensure plasmid stability and 1 mM IPTG to induce gene expression.The plates were incubated at 37 °C for 16 h.(A) Ceftazidime.Wild-type BlaC panels are from the same LB-agar plates as BlaC variants (Fig. S1 shows complete photos); (B) Carbenicillin, ampicillin, and penicillin G (Figs S1 and S2 show complete data set).(C) Inhibitors sulbactam and avibactam in the presence of 100 μg/mL carbenicillin.BlaC S70A is catalytically inactive and functions as negative control.
Fig. 2 .
Fig. 2. Production, folding and stability of BlaC variants.(A) SDS-PAGE analysis shows the whole lysate (L) and soluble fraction (S) after production of wild-type BlaC and BlaC D179 mutants using a cytoplasmic overexpression system.(B, left) CD spectra of BlaC Asp179 mutants.(B, right) Negative derivative of signal from thermal shift assay with the hydrophobic dye SYPRO ® Orange of BlaC Asp179 mutants.The melting temperatures are listed in Table2.
Fig. 3 .
Fig. 3. Kinetic analysis (A) Michaelis-Menten curves for reaction with nitrocefin of BlaC wild-type and D179N.Error bars represent standard deviations of triplicates and curves represent the fit to the Michaelis-Menten equation (eq.1); (B) Relative activity in the absence or presence of avibactam and BlaC measured as amount of hydrolyzed nitrocefin after 20 min at 25 °C.Measurements were performed in duplicate in the presence of 100 μM nitrocefin and 2.5 nM BlaC.The error bars represent one standard deviation.(C) Product formation curves with 400 μM nitrocefin and 5 nM BlaC as a function of time; (D) Product formation curves with 20 μM ceftazidime and 1 μM BlaC.(E) Product formation curves with 50 μM ceftazidime and 100 nM BlaC as a function of time.Data were obtained in 100 mM sodium phosphate buffer (pH 6.4) at 25 °C by measuring a change in absorbance at 486 and 260 nm for nitrocefin and ceftazidime, respectively; (F) The initial velocities of the second phase of the ceftazidime degradation reaction as a function of initial ceftazidime concentration.Error bars represent standard deviations of duplicate experiments.
Fig. 4 .
Fig. 4. Average chemical shift differences (CSP) between the amide resonances of BlaC D179N and wild-type BlaC mapped on the structure of BlaC D179N (PDB entry 8BTU). Residues are coloured green for CSP < 0.025 ppm, yellow for CSP > 0.025 ppm, orange for CSP > 0.05 ppm, red for CSP > 0.1 ppm and grey for no data; backbone amide nitrogen atoms are represented as spheres. Side chains of active site residues and N179 are represented as blue sticks; the entrance to the active site is at the back of the protein in this representation. The figure is generated using The PyMOL Molecular Graphics System, Version 2.5.0 (Schrödinger, LLC, New York, NY, USA).
Fig. 5 .
Fig. 5. (A) Crystal structure of BlaC D179N (PDB entry 8BTU, lilac) overlaid with the wild-type structure (PDB entry 2GDN [3], grey); (B) Detail of the region around the mutation. The 2mFo-DFc electron density map for BlaC D179N is centered on the labelled residues and is shown in purple chicken wire, with contour level 1 σ and extent radius 5 Å; (C) Crystal structure of BlaC D179N with sulbactam (PDB entry 8BTV, orange) overlaid with the wild-type structure in free form (grey) and with sulbactam (PDB entry 6H2K [18], turquoise); (D) Crystal structure of BlaC D179N with vaborbactam (PDB entry 8BTW, light blue) overlaid with the wild-type structure, free form (grey) and with vaborbactam (PDB entry 8BV4, dark blue); (E,F) Position of the inhibitors sulbactam (E) and vaborbactam (F), respectively, in BlaC wild-type (turquoise and dark blue, respectively) and BlaC D179N (orange and light blue, respectively). The 2mFo-DFc electron density maps with contour level 1 σ and extent radius 5 Å are centered on the inhibitor structures and are shown in blue chicken wire for the BlaC D179N structures or black chicken wire for BlaC wild-type with vaborbactam. The figures are generated using CCP4mg [45].
Fig. 6 .
Fig. 6. (A,B) Crystal structure of BlaC D179N (PDB entry 8BTU, lilac) overlaid with the wild-type structure (PDB entry 2GDN [3], grey), showing the flipped peptide bonds between residues 174 and 175 (A) and 178 and 179 (B). The side chain of Asp163 in BlaC D179N is present in two conformations. The 2mFo-DFc electron density of the flipped peptide bonds is represented in purple chicken wire, with contour level 1 σ and extent radius 5 Å; (C) Crystal structure of BlaC wild-type (grey) overlaid with BlaC D179N in free form (lilac), with the sulbactam adduct (PDB entry 8BTV, orange) and with vaborbactam (PDB entry 8BTW, blue). The backbone of residues 174-179 is shown in sticks. The black arrows indicate the flipped peptide bonds. (D) Overlay of BlaC wild-type (grey) and TEM-1 (PDB entry 1ZG4 [44], yellow) showing the position of residues 164, 172 and 179. In BlaC the side chain of Asp172 fills part of the space taken by the side chain of Arg164 in TEM; (E) The difference between the normalized B-factors (B') of BlaC D179N and wild-type BlaC in free form (in purple, wild-type structure 5OYO [27] and mutant structure 8BTU), with the vaborbactam adduct (in blue, wild-type structure 8BV4 and mutant structure 8BTW), and with the sulbactam adduct (in turquoise, wild-type structure 6H2K [18] and mutant structure 8BTV). The cutoff for a significant ΔB' was set to two standard deviations of the mean. Residues 160-180 in the Ω-loop are highlighted in grey. The figures are generated using CCP4mg [45].
Fig. 8 .
Fig. 8. Activity against antibiotics of five class A β-lactamases. Cultures of E. coli expressing genes of the wild-type or D179N variants of the β-lactamases BlaC, CTX-M-14, KPC-2, NMC-A or TEM-1 were spotted in increasing dilution on plates containing ampicillin, carbenicillin, meropenem or ceftazidime. Different panels within a black border originate from different parts of the same LB-agar plates (Figs S4-S7).
Fig. 9 .
Fig. 9. Effects of D179N mutation on several β-lactamases.(A) SDS-PAGE analysis shows the whole lysate (L) and soluble fractions (S) of negative control (NC), BlaC (31.5 kDa), CTX-M-14 (31.3 kDa), KPC-2 (31.8 kDa), NMC-A (32.4 kDa), and TEM-1 (32.2 kDa), indicated by an arrow.Samples were corrected for cell density; (B) Activity in nitrocefin conversion using soluble cell fractions of cultures overproducing the indicated β-lactamases was measured in 96-well plates containing the indicated nitrocefin (N) concentration.Upon ring opening, nitrocefin turns from yellow to red.The picture was taken 30 min after the start of the reactions.
Table 1 .
Concentrations of various β-lactams and β-lactamase inhibitors at which growth of E. coli producing BlaC variants is no longer observed, determined with the droplet test (Fig. 1, Figs S1 and S2). All values are in μg mL⁻¹. Catalytically inactive BlaC S70A was used as a negative control.
Table 3 .
Apparent Michaelis-Menten kinetic parameters for nitrocefin and ceftazidime hydrolysis. Reactions were carried out in 100 mM sodium phosphate buffer (pH 6.4) at 25 °C. Standard deviations (SD) are calculated from triplicate measurements for nitrocefin and duplicate measurements for ceftazidime. ND, not determined. Columns: KM ± SD (μM), kcat ± SD (s⁻¹) and kcat/KM ± SD (10⁵ M⁻¹ s⁻¹) for nitrocefin; KM (μM), kcat ± SD (s⁻¹) and kcat/KM ± SD (10³ M⁻¹ s⁻¹) for ceftazidime. | 8,829.8 | 2023-06-19T00:00:00.000 | [
"Biology",
"Medicine",
"Chemistry"
] |
Multiple Myeloma-Derived Exosomes Regulate the Functions of Mesenchymal Stem Cells Partially via Modulating miR-21 and miR-146a
Exosomes derived from cancer cells can affect various functions of mesenchymal stem cells (MSCs) via conveying microRNAs (miRs). miR-21 and miR-146a have been demonstrated to regulate MSC proliferation and transformation. Interleukin-6 (IL-6) secreted from transformed MSCs in turn favors the survival of multiple myeloma (MM) cells. However, the effects of MM exosomes on MSC functions remain largely unclear. In this study, we investigated the effects of OPM2 (a MM cell line) exosomes (OPM2-exo) on regulating the proliferation, cancer-associated fibroblast (CAF) transformation, and IL-6 secretion of MSCs and determined the role of miR-21 and miR-146a in these effects. We found that OPM2-exo harbored high levels of miR-21 and miR-146a and that OPM2-exo coculture significantly increased MSC proliferation with upregulation of miR-21 and miR-146a. Moreover, OPM2-exo induced CAF transformation of MSCs, which was evidenced by increased fibroblast-activated protein (FAP), α-smooth muscle actin (α-SMA), and stromal-derived factor 1 (SDF-1) expressions and IL-6 secretion. Inhibition of miR-21 or miR-146a reduced these effects of OPM2-exo on MSCs. In conclusion, MM exosomes could promote the proliferation, CAF transformation, and IL-6 secretion of MSCs, partially through regulating miR-21 and miR-146a.
Introduction
Multiple myeloma (MM) is the second most common hematological malignancy and is characterized by clonal proliferation of malignant plasma cells in the bone marrow (BM) [1]. Accumulating evidence indicates that MM cells can affect the function and phenotype of mesenchymal stem cells (MSCs), osteoclasts, and endothelial cells by releasing soluble factors such as cytokines/proteins [2] and extracellular particles [3], which in turn favor the progression of MM cells [4,5]. For instance, MM cells can educate MSCs to acquire a tumor-like phenotype with the ability to secrete interleukin-6 (IL-6), IL-8, and TNF-β, which further promote MM survival [6,7]. It has also been shown that cancer cells can affect the function and phenotype of MSCs through secreting soluble factors [8,9].
Exosomes, through delivering biological molecules such as proteins and microRNAs (miRs), represent a novel component of the tumor microenvironment and play an important role in the communication between cancer cells and MSCs [10]. Previous studies have demonstrated that exosomes released by cancer cells could be incorporated by MSCs and result in the cancer-associated fibroblast (CAF) transformation of MSCs [11][12][13][14][15][16]. These studies have shown that CAFs transformed from MSCs express fibroblast-activated protein (FAP), α-smooth muscle actin (α-SMA), and stromal-derived factor 1 (SDF-1) and display enhanced proliferation and secretion of cytokines including IL-6 and TGF-β which could contribute to a tumor-supportive microenvironment. Exosomes released by acute myeloid leukemia cells have been shown to promote MSC proliferation [12]. It has also been suggested that chronic lymphocytic leukemia-derived exosomes could induce CAF transformation and IL-6 secretion of MSCs through transferring exosomal miR-150 and miR-146a [14]. However, whether MM exosomes can regulate MSC transformation remains unclear.
Emerging evidence indicates that miRs could be responsible for the proliferation, CAF transformation, and cytokine secretion of MSCs [13,15]. miR-21 is a well-known oncogenic miRNA during MM proliferation and invasion and also a critical regulator in CAF transformation of breast cancer [17,18]. It has been reported that exosomes of leukemia cells carry high levels of miR-21 and regulate MSC functions [12]. miR-146a expression has been demonstrated to be associated with levels of IL-6 secretion in breast cancer [19]. Moreover, MSCs overexpressing miR-146a showed an increased secretion of IL-6, which further supports MM survival [20]. However, the role of miR-21 and miR-146a in regulating MSC proliferation and transformation has not been fully understood.
In this study, we examined the effects of MM-derived exosomes on MSC proliferation, CAF transformation, and IL-6 secretion, as well as the role of miR-21 and miR-146a in these effects.
Exosome Extraction and Purification.
The extraction and purification procedures were performed according to the previous study with some modifications [21]. Briefly, OPM2 cells were conditioned in RPMI 1640 medium without FBS. When the OPM2 cells reached 80%-90% confluence, the supernatants containing exosomes were harvested. The exosomes were purified by differential centrifugation. In brief, the supernatants were centrifuged for 20 min at 2,000 g to remove cellular debris. The cell-free culture medium was centrifuged at 20,000 g for 70 min and ultracentrifuged at 170,000 g for 1.5 h to pellet exosomes. Exosome pellets were collected and diluted in filtered PBS. The collected exosomes were stored at −80°C and used for the following experiments. The size and concentration of exosomes were analyzed by nanoparticle tracking analysis (NTA 300, UK).
PKH26 Stain of OPM2 Exosomes.
For the exosome-uptake experiment, purified exosomes derived from OPM2 (OPM2-exo) were stained using PKH26 membrane dye (Sigma, USA). Stained exosomes were washed in 2 ml of PBS, collected by ultracentrifugation as described above, and resuspended in filtered PBS. 10 μg of the PKH26-stained exosomes or the same volume of the PKH26-PBS control was added to MSCs and incubated for 24 h. The binding of OPM2-exo to the MSCs was observed with a fluorescence microscope (Germany). The cells were then washed twice with PBS, stained with Hoechst 33342 for 5 min, and washed twice with PBS before being photographed.
2.4. Cell Proliferation Assay. Proliferation of MSCs was determined by various methods including the MTT assay (Sigma, USA), the Cell Counting Kit-8 (CCK-8, Dojindo, Japan) assay, and direct cell counting. For the MTT assay, MSCs were seeded at 1 × 10³ cells/well in a 96-well plate and cocultured with 0 (PBS, vehicle control), 5, 10, 20, 40, or 80 μg/ml OPM2-exo. After 4 days, the cells were incubated with 20 μl of 5 mg/ml MTT solution for 4 h at 37°C. After removing the medium containing MTT, 150 μl dimethyl sulfoxide (DMSO) was added to each well to dissolve the formazan. The optical density (OD) was measured at 490 nm using a microplate reader (BioTek, USA). We also conducted a more sensitive assay to evaluate MSC proliferation, since the detection sensitivity of CCK-8 has been shown to be higher than that of other tetrazolium salts such as MTT, XTT, or MTS. Briefly, cells (1 × 10³ per well) were plated in 96-well plates in triplicate for culture (37°C and 5% CO2). On the following day, the cells were cocultured with the same concentrations of OPM2-exo in a final volume of 90 μl for 4 days. After incubation, 10 μl CCK-8 solution was added to each well and incubated for 2 h. Then, the absorbance at 450 nm was measured with the microplate reader (BioTek, USA). To directly count the cell number, MSCs were seeded at low density in 6-well plates (5 × 10³ cells/plate). After 24 h, the cells were washed 3 times with PBS and switched to serum-free medium, and OPM2-exo (80 μg/ml) was added. The medium was changed every 3 days with fresh OPM2-exo added. The cell number was counted at days 1, 4, and 10 with an automated cell counter (Beckman, USA) after trypan blue staining.
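To make the readout explicit, the sketch below converts plate-reader optical densities into proliferation relative to the vehicle control; the OD values and replicate counts are invented for illustration and do not correspond to the measurements reported here.

```python
import numpy as np

# hypothetical OD490 readings (MTT assay), triplicate wells per OPM2-exo dose
od_readings = {
    0:  [0.41, 0.39, 0.43],   # vehicle (PBS) control
    5:  [0.45, 0.47, 0.44],
    10: [0.52, 0.50, 0.54],
    20: [0.58, 0.61, 0.57],
    40: [0.66, 0.64, 0.69],
    80: [0.78, 0.75, 0.80],
}

baseline = np.mean(od_readings[0])
for dose, values in od_readings.items():
    relative = np.mean(values) / baseline * 100.0
    sem = np.std(values, ddof=1) / np.sqrt(len(values)) / baseline * 100.0
    print(f"{dose:>2} µg/mL OPM2-exo: {relative:.0f}% of control (SEM {sem:.1f}%)")
```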
CAF Transformation Assay.
MSCs were seeded at 5 × 10³ cells/plate in 6-well plates. 12 h after seeding, MSCs were treated with OPM2-exo (80 μg/ml) to trigger CAF transformation. The medium was changed every 3 days with fresh OPM2-exo added. After 10 days, the cells and conditioned medium were collected and prepared for the following analyses.
Quantitative Real-Time PCR Analysis (qRT-PCR).
Total RNA was extracted with TRIzol (Invitrogen, USA). One microgram of RNA was reverse transcribed to cDNA using the TransScript cDNA Synthesis Kit (Takara, Japan), and qRT-PCR was performed using a Bio-Rad 96 System (Bio-Rad, USA) with SYBR Green II qPCR Premix (Takara, Japan). The primers are listed in Supplementary Material Table S1. The PCR was conducted at 95°C for 10 minutes, followed by 50 cycles of 95°C for 30 seconds, 60°C for 30 seconds, and 72°C for 1 minute. GAPDH (for FAP, α-SMA, and SDF-1) and U6 (for miR-21 and miR-146a) were used as internal controls for normalization, and the relative expression was calculated by the 2^−ΔΔCt method.
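For readers unfamiliar with the 2^−ΔΔCt calculation, the sketch below shows the arithmetic on hypothetical Ct values; the gene, sample names, and numbers are placeholders, not data from this study.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene by the 2^-ΔΔCt method."""
    delta_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control    # normalize to control sample
    return 2 ** (-delta_delta_ct)

# hypothetical Ct values: FAP vs GAPDH, OPM2-exo-treated vs vehicle control
fold = relative_expression(ct_target_treated=24.1, ct_ref_treated=18.0,
                           ct_target_control=26.3, ct_ref_control=18.1)
print(f"FAP fold change ≈ {fold:.2f}")   # > 1 indicates upregulation
```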
2.7. IL-6 ELISA Assay. The MSC-conditioned medium was centrifuged to remove cellular debris, and then, IL-6 protein concentrations were quantified by using the ELISA kit (Invitrogen, USA) according to the manufacturer's protocol. In brief, the conditioned medium of MSCs was harvested, and standard and sample extracts were added to the microplate precoated with an antibody specific for IL-6. HRP substrate was added to each well. The level of IL-6 was measured at 450 nm.
2.9. Statistical Analysis. Data were expressed as means ± SEM of three independent experiments. Statistical analysis was performed using one- or two-way analysis of variance (ANOVA) (SPSS version 17.0, SPSS, USA). Differences were considered significant when p values were less than 0.05.
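For the simplest case (one factor, three groups), the ANOVA step can be sketched with SciPy as below; the IL-6 readings and group names are hypothetical, and a two-way ANOVA or post-hoc comparisons would require statsmodels or similar rather than this minimal call.

```python
import numpy as np
from scipy import stats

# hypothetical IL-6 ELISA readings (pg/mL) from three independent experiments
vehicle      = np.array([102.0, 98.5, 110.2])
opm2_exo     = np.array([185.3, 201.7, 192.4])
exo_anti146a = np.array([140.1, 133.8, 151.0])   # OPM2-exo + miR-146a inhibitor

f_stat, p_value = stats.f_oneway(vehicle, opm2_exo, exo_anti146a)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print("significant at p < 0.05" if p_value < 0.05 else "not significant")
```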
miR-21 and miR-146a Were Rich in MM-Derived Exosomes and Their Levels in MSCs Were Increased after Coculture with OPM2-exo
As shown in Figure 1(a), NTA showed that the diameter of the isolated OPM2-exo was around 100 nm. qRT-PCR results demonstrated that exosomes derived from three MM cell lines (OPM2, RPMI 8226, and U266) contained higher levels of miR-21 and miR-146a than the parent MM cells (Figure 1(b)). OPM2-exo was taken up by MSCs after incubation for 24 hours, as shown by PKH26 staining (Figure 1(c)). With the treatment of OPM2-exo, we also observed increased expression of miR-21 and miR-146a in MSCs (Figure 1(d)).
OPM2-exo Promoted the Proliferation of MSCs in Dose- and Time-Dependent Manners
We performed the proliferation assay using different concentrations of OPM2-exo (0, 5, 10, 20, 40, and 80 μg/ml) in coculture with MSCs. Both MTT (Figure 2(a)) and CCK-8 (Figure 2(b)) results showed that OPM2-exo promoted proliferation in a dose-dependent manner. The optimal concentration for the OPM2-exo effect was considered to be 80 μg/ml. Microscopy pictures showed that MSCs displayed a clear increase in cell density in a time-dependent manner, which was further enhanced by coculture with OPM2-exo (Figure 2(c)). We next examined the effect of OPM2-exo on the growth of MSCs following treatment with OPM2-exo (80 μg/ml) by directly counting the cell number. According to the cell count analysis, the MSC number increased about 2-fold after incubation with OPM2-exo at day 4 and more than 2-fold at day 10 (Figure 2(d)).
OPM2-exo Induced the Transformation of MSCs into CAFs with Increased IL-6 Secretion
As noted in Figure 2(c), MSCs displayed a different phenotype after culture with OPM2-exo for 10 days, implicating MSC transformation. We also examined the mRNA expression of CAF transformation markers including FAP, α-SMA, and SDF-1. Results showed that OPM2-exo (80 μg/ml) significantly induced the expression of CAF transformation markers after coculture (Figure 3(a)). As shown in Figure 3(b), IL-6 mRNA was examined using qRT-PCR, and the IL-6 level in the conditioned medium was measured using ELISA after coculture with OPM2-exo (80 μg/ml) for 10 days. The results showed an increase in IL-6 mRNA expression as well as in IL-6 secretion by MSCs at day 10, which was significantly enhanced by treatment with OPM2-exo (80 μg/ml). Collectively, these results indicated that MSCs undergo CAF transformation in response to tumor exosome exposure.
Inhibition of miR-21 in MSCs Was Able to Inhibit the OPM2-exo-Induced MSC Proliferation and CAF Transformation
To elucidate the role of miR-21 in the proliferation and CAF transformation of MSCs, MSCs were transfected with miR-21 inhibitor and incubated with OPM2-exo (80 μg/ml) for 10 days. The transfection efficiency of the miR-21 inhibitor in MSCs was evaluated by qRT-PCR (Figure 4(a)). As expected, the level of miR-21 in MSCs was decreased by about 60% after transfection compared with that of the vehicle or miR-Ctrl groups. Results showed that transfection with the miR-21 inhibitor significantly decreased the proliferation of MSCs cultured with OPM2-exo for 4 days (Figure 4(b)). Additionally, inhibition of miR-21 decreased the expression of CAF markers including FAP, α-SMA, and SDF-1 in OPM2-exo-treated MSCs at day 10 (Figure 4(c)).
Inhibition of miR-146a Could Reduce the IL-6 Expression and Secretion of OPM2-exo-Treated MSCs
To further elucidate the role of miR-146a in the IL-6 expression and secretion of transformed MSCs, MSCs were transfected with miR-146a inhibitor and cultured with OPM2-exo for 10 days. The transfection efficiency of the miR-146a inhibitor was evaluated by qPCR (Figure 5(a)), and the results showed that miR-146a expression was inhibited by about 60%. Inhibition of miR-146a was able to decrease the IL-6 expression and secretion of OPM2-exo-treated MSCs (Figures 5(b) and 5(c)).
Discussion
In the present study, we identified the effects of MM-derived exosomes on the proliferation, CAF transformation, and IL-6 secretion of MSCs, and defined the role of miR-21 and miR-146a in these effects. Increasing evidence indicates that cancer exosomes can regulate the functions of MSCs, probably through delivering the miRs they carry [12,14]. It has been reported that miR-21 and miR-146a play an important role in regulating MSC transformation and cytokine secretion [22,23]. In this study, we analyzed the levels of miR-21 and miR-146a in OPM2-exo and in MSCs after coincubation with OPM2-exo. We found that miR-21 and miR-146a were enriched in OPM2-exo, which enhanced the levels of these two miRs in coincubated MSCs. We also performed qPCR analysis and found that miR-21 and miR-146a were significantly increased in exosomes from two other human MM cell lines (RPMI 8226 and U266). Our findings are consistent with previous reports showing that cancer exosomes can selectively package miRs which are delivered into target cells, where they function [14,24,25]. For instance, exosomes of chronic lymphocytic leukemia have been shown to selectively deliver miR-21, miR-146a, miR-155, miR-148a, and let-7g to MSCs [14]. Our data suggest that miR-21 and miR-146a might be involved in mediating the effects of OPM2-exo on MSCs.
Previous studies have reported that exosomes derived from cancer cells could promote MSC proliferation [11,12].
For instance, exosomes of T-cell leukemia/lymphoma cells are able to induce MSC proliferation, which is associated with miR-21 expression [12]. Since miR-21 is selectively packaged in OPM2-exo, we further determined the effect of OPM2-exo on MSC proliferation and clarified whether miR-21 underlies this effect. We found that MM exosomes were able to promote MSC proliferation in time- and dose-dependent manners. Moreover, we applied a miR-21 inhibitor to further explore the role of miR-21 in OPM2-exo-induced MSC proliferation. Our results showed that the miR-21 inhibitor significantly reduced the proliferation of MSCs induced by OPM2-exo. These data indicate that MM exosomes promote the proliferation of MSCs at least partly via miR-21, although the detailed downstream pathway remains to be determined.
Exosomes derived from colorectal cancer, lung tumor, and leukemia have been shown to induce CAF transformation of MSCs [11][12][13][14][15]. CAFs are characterized by the expression of several markers including FAP, α-SMA, and SDF-1, as well as increased secretion of cytokines [16,26,27]. The precise cellular origins of CAFs remain largely unclear; CAFs are reported to originate from various cell types such as resident fibroblasts [27], epithelial cells [28], and MSCs [8]. However, the role of MM exosomes in CAF transformation of MSCs had not been determined. Our results showed that MSCs could be transformed to CAFs by OPM2-exo, partially through the delivery of miR-21 and miR-146a and the activation of their downstream genes including IL-6, SDF-1, FAP, and α-SMA. It has been illustrated that miR-21 is highly associated with CAFs in breast and ovarian cancer [29,30]. In this study, we also detected the association of an elevated level of miR-21 with CAF transformation in OPM2-exo-treated MSCs. To confirm the role of miR-21 in CAF transformation of MSCs induced by OPM2-exo, we downregulated the level of miR-21 in MSCs with a miR-21 inhibitor. Interestingly, we found that the miR-21 inhibitor significantly decreased the effect of OPM2-exo on the expression of CAF markers, suggesting the involvement of miR-21 in CAF transformation of MSCs. Previous studies indicate that several genes including PTEN and PDCD4 are downstream targets of miR-21 in MSCs. By inhibiting the expression of PTEN, miR-21 could increase the levels of α-SMA, FAP, and SDF-1 in breast cancer. In this study, we found that miR-21 upregulated the expression of α-SMA, FAP, and SDF-1 in the transformed MSCs. Based on these findings, we tentatively attribute SDF-1, FAP, and α-SMA to be target genes of miR-21. However, this hypothesis needs to be verified in the future. Another important characteristic of transformed CAFs is their ability to secrete proinflammatory cytokines [13]. Frassanito et al. have reported that the CAFs of MM express high levels of TGF-β and IL-6 [16]. The cytokines secreted by CAFs, especially IL-6, are believed to participate in the growth, angiogenesis, and metastasis of MM [26]. Moreover, it has been reported that IL-6 secretion of MSCs is regulated by miR-146a [20]. Since we detected that miR-146a was enriched in OPM2-exo and that the level of miR-146a in MSCs was increased after OPM2-exo coculture, we focused on IL-6 and miR-146a in this study. As expected, we found that IL-6 secretion was increased in transformed CAFs induced by OPM2-exo. Our data are supported by previous studies in gastric and lung cancer showing that cancer cell exosomes are able to elevate the IL-6 secretion of CAFs transformed from MSCs [13,15]. Moreover, we found that the miR-146a inhibitor could significantly reduce IL-6 expression as well as the IL-6 protein level in the conditioned medium of OPM2-exo-treated MSCs. Our findings indicate that miR-146a is responsible for the increased IL-6 secretion of CAFs transformed from MSCs by MM exosomes. Previous studies have confirmed that miR-146a upregulates the expression of the Notch/IL-6 proinflammatory pathway; we postulate that IL-6 is a downstream gene of miR-146a in MSCs. Nevertheless, these deductions remain to be further elucidated.
Conclusions
In conclusion, our data have demonstrated that MM exosomes are able to promote the proliferation, CAF transformation, and IL-6 secretion of MSCs. Our results also suggest that miR-21 and miR-146a are involved in regulating the functions of MSCs. Our study highlights the important roles of MM exosomes and miRs in regulating the MSC functions and MM survival, which may be potential targets for MM therapies in the future. | 4,162.2 | 2017-11-27T00:00:00.000 | [
"Biology"
] |
Pulse Oximeter Monitoring Bracelet for COVID-19 Patient Using Seeeduino
The increase in positive cases of COVID-19 makes it critical to monitor the level of oxygen saturation in the blood (SPO2) of COVID-19 patients. The purpose is to prevent silent hypoxia, which lowers oxygen levels in the blood without symptoms. In general, a conventional pulse oximeter is a clip that is clamped on a finger to measure the SPO2 level and heart rate per minute (HR). This research aims to design a compact pulse oximeter monitoring bracelet. The main components of the pulse oximeter monitoring bracelet are the Seeeduino XIAO microcontroller, the MAX30100 sensor, and an OLED display. Data were collected from ten people using a conventional pulse oximeter and the prototype device, with SPO2 and HR measured at 30-second intervals. The results show that the Pearson correlation values for SPO2 and HR are -0.73 and 0.98, respectively. These results demonstrate that there is a strong relationship between the variables and sufficient linearity. In addition, the pulse oximeter monitoring bracelet is easy to use and low cost, which makes it an attractive option for monitoring the SPO2 and HR of COVID-19 patients.
INTRODUCTION
Coronavirus disease 2019 (COVID-19), a severe acute respiratory syndrome, has spread around the world and is therefore regarded as a pandemic. As of April 18, 2021, over 140 million cases and 3.1 million deaths had been recorded since the pandemic began [1]. In accordance with the implementation of the COVID-19 prevention and control guidelines from the Indonesian Ministry of Health, self-quarantine is applied to people with no symptoms, people under management, and patients under surveillance. For patients who are positive for COVID-19, monitoring activities need to be carried out to avoid worsening symptoms [2].
Generally, human body temperature is used for monitoring COVID-19 patients [3] [4]. However, for long-period monitoring, body temperature is difficult to use because it is affected by the temperature of the environment and by patient activity. Another approach is to monitor the level of oxygen saturation in the blood (SPO2) and the heart rate (HR). SPO2 and HR are monitored in order to prevent silent hypoxia, which lowers oxygen levels in the blood without symptoms, causes tissue damage in the body, and can lead to complications such as respiratory failure or sudden death if left untreated.
A pulse oximeter is a tool used to monitor SPO2 and HR; a conventional pulse oximeter device is generally clamped on the finger [5]. The noninvasive optical technique of pulse oximeter measurement is based on the absorption of light at different wavelengths (660 nm and 940 nm) by oxyhemoglobin and deoxyhemoglobin [6] [7]. Basically, a pulse oximeter uses LED light to determine oxygen saturation by comparing how much red and infrared light is absorbed in blood [8]. Pulse oximeters have been developed in a variety of forms: clamped at the fingertip, in the form of a ring, and most recently in the form of a smartwatch. However, the conventional pulse oximeter models that are clamped at the fingertips may not be very convenient for continuous monitoring over long periods of time [8] [9]. This study aims to build a pulse oximeter monitoring bracelet prototype that is comfortable for long measurements and can ease the monitoring of SPO2 and HR in COVID-19 patients.
METHOD
2.1. Oxygen Saturation (SPO2)
Oxygen saturation is the percentage ratio of the amount of oxygenated hemoglobin in the arteries to the total amount of deoxyhemoglobin and oxyhemoglobin. Normal oxygen saturation values range from 95% to 100%, and saturation can be measured using a non-invasive method, specifically pulse oximetry [10][11].
Pulse Oximeter
Pulse oximetry uses a light-detection sensor to measure oxygen saturation and HR by combining two technologies, namely spectrophotometry and photoplethysmography (PPG) [10] [12]. Noninvasive pulse oximeters generally use PPG technology, which has two modes, transmission and reflectance, differing in the placement of the LED and the photodetector (PD). In transmission mode, the LED and PD are located on opposite sides: the LED light penetrates the tissue, and the transmitted light intensity is detected by the PD. In reflectance mode, the LED and PD are placed side by side on the same surface; the LED emits light at a given wavelength into the blood or tissue, and the PD detects the reflected light intensity [13] [14][15].
Prototype Design
In designing the prototype, the stages performed were the circuit design and PCB printing. Fig. 1 shows the schematic circuit of the pulse oximeter monitor. In this research, we use the Seeeduino XIAO development board, which offers good performance at low power using the ATSAMD21G18A-MU microcontroller. The board is about the size of a thumb, performs well in processing, requires little power, and can be used for wearable devices [16]. The MAX30100 sensor module, including the LEDs (red and infrared), drive circuit, and PD, converts the intensity of the transmitted light into SPO2 and HR values [17]. The OLED SSD1306, with a resolution of 128 × 64 pixels, then displays the SPO2 and HR results in real time [18]. Fig. 2 shows the 3-D PCB design and the implementation of the prototype pulse oximeter monitoring bracelet. The pulse oximeter monitor is designed as a bracelet so that it can be worn as a wearable device for long-period monitoring [19].
Calculation Pulse Oximeter
A pulse oximeter is a device that can measure the pulse and the oxygen saturation of the blood. Oxygen saturation is obtained by calculating the ratio between the light absorbed from the IR LED (940 nm) and from the red LED (660 nm) [20]. In short, SPO2 is defined as the ratio of the oxygenated hemoglobin level to the total hemoglobin level, as shown in (1): SPO2 = HbO2 / (HbO2 + Hb) × 100%. The ratio R between the two wavelengths is defined in (2) as R = (AC660/DC660) / (AC940/DC940), where the DC component of the transmitted light corresponds to static tissue such as skin, bone, and muscle, and the AC component corresponds to pulsatile arterial blood.
Fig. 2. Top (a) and back (b) 3-D PCB design view, (c) implementation design of the pulse oximeter monitoring bracelet.
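A minimal sketch of this ratio-of-ratios calculation is shown below. The waveforms are synthetic sine waves standing in for raw red/IR readings, and the linear calibration SpO2 ≈ 110 − 25·R is only a commonly quoted approximation assumed here for illustration, not the calibration used by the MAX30100 or by this prototype.

```python
import numpy as np

def ratio_of_ratios(red, ir):
    """Compute R from raw red/IR PPG waveforms (Eqn 2).
    AC is taken as the peak-to-peak amplitude, DC as the mean level."""
    red, ir = np.asarray(red, float), np.asarray(ir, float)
    ac_red, dc_red = red.max() - red.min(), red.mean()
    ac_ir, dc_ir = ir.max() - ir.min(), ir.mean()
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_from_r(r, a=110.0, b=25.0):
    """Empirical linear calibration SpO2 ≈ a - b*R (assumed constants)."""
    return max(0.0, min(100.0, a - b * r))

# synthetic raw samples standing in for sensor output
red = 1000 + 12 * np.sin(np.linspace(0, 6 * np.pi, 300))
ir = 1500 + 30 * np.sin(np.linspace(0, 6 * np.pi, 300))
r = ratio_of_ratios(red, ir)
print(f"R = {r:.2f}, SpO2 ≈ {spo2_from_r(r):.1f}%")
```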
Pearson Correlation Coefficient
Correlation is a statistical method used to assess the strength of a linear relationship between two continuous variables x and y [4]. The Pearson correlation coefficient is calculated as a sample statistic [21]. The correlation coefficient lies in the range −1 to 1. If the absolute value is close to 1, then there is a strong relationship, and if the value is close to 0, then the relationship between the variables is weak [22]. Equation (3) gives the sample coefficient: r = Σ(xi − x̄)(yi − ȳ) / √(Σ(xi − x̄)² · Σ(yi − ȳ)²).
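A minimal sketch of this calculation in Python is shown below; the SPO2 readings are hypothetical stand-ins for the per-subject averages in Tables 1 and 2, not the measured values reported in this paper.

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation coefficient (Eqn 3)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())

# hypothetical per-subject averages: prototype bracelet vs reference oximeter
spo2_prototype = [96.1, 97.4, 95.2, 98.0, 96.8, 97.1, 95.9, 96.4, 97.7, 96.0]
spo2_reference = [97.0, 96.2, 98.1, 95.5, 96.4, 96.0, 97.3, 96.9, 95.8, 97.1]
print(f"r = {pearson_r(spo2_prototype, spo2_reference):.2f}")
```

The same function applied to the HR averages gives the second coefficient; np.corrcoef would return an equivalent value.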
RESULTS AND DISCUSSION
The data collection using the compact bracelet pulse oximeter monitoring prototype is shown in Fig. 3. At the time of collecting data, the prototype device was worn on the wrist of the subject, and a conventional pulse oximeter was clipped to the fingertip. The conventional pulse oximeter (Yobekan, calibrated standard) was used as the reference for the SPO2 and HR values. Pulse oximeter measurements were taken on ten subjects at sampling intervals of 30 seconds. The subjects ranged from a ten-year-old child to adults aged 20 to 64 years. Before monitoring, the subjects had just finished working from home, taking a nap, or studying online at home. Fig. 4 and Fig. 5 show the linear relationships between the prototype and the reference pulse oximeter. Table 1 shows the average SPO2 measurements. The Pearson correlation calculated for the average SPO2 values is -0.73, indicating a strong relationship between the prototype device and the conventional reference pulse oximeter. The calculation follows Eqn (3).
Table 2 shows the average HR measurements. The Pearson correlation calculated for the average HR values is 0.98, indicating a strong relationship between the prototype device and the conventional reference pulse oximeter. The calculation follows Eqn (3).
CONCLUSION
The pulse oximeter monitoring bracelet has been successfully designed. The results of the pulse oximeter monitoring bracelet measurements in 10 people of different ages show that the Pearson correlation values for SPO2 and HR are -0.73 and 0.98, respectively. These results demonstrate that there is a strong relationship between the variables and sufficient linearity. In addition, the pulse oximeter monitoring bracelet is easy to use and low cost, which makes it an attractive option for monitoring the SPO2 and HR of COVID-19 patients. | 1,978.2 | 2021-04-21T00:00:00.000 | [
"Computer Science"
] |
Optimizing the Cargo Location Assignment of Retail E-Commerce Based on an Artificial Fish Swarm Algorithm
An efficient storage strategy for retail e-commerce warehousing is important for minimizing the order retrieval time to improve the warehouse-output efficiency. In this paper, we consider a model and algorithm to solve the cargo location problem in a retail e-commerce warehouse. The problem is abstracted into storing cargo on three-dimensional shelves, and the mathematical model is built considering three objectives: efficiency, stability, and classification. An artificial fish swarm algorithm is designed to solve the proposed models. Computational experiments performed on a warehouse show that the proposed approach is effective at solving the cargo location assignment problem and is significant for the operation and organization of a retail e-commerce warehouse.
Introduction
Under the new retail model, customers have increasingly high requirements for the timeliness of online shopping delivery with the rapid popularization of online shopping. E-commerce warehouse managers are interested in finding the most economical way to operate, minimizing the costs involved in terms of energy consumption, distance traveled, and time spent. As one of the important subsystems of the logistics system, the sorting system plays an important role in picking orders accurately and in a timely manner.
Electronic retail pursues a demand-driven organization with high product variety, small order sizes, and reliably short response times. An order lists the items and quantities requested by a customer from a distribution centre or a warehouse. The number of daily orders reaches 20,000 to 30,000. Customer satisfaction is one of the key performance indicators of the retail e-commerce warehousing centre, and it mainly depends on the accuracy and timeliness of orders. It is reported that order picking time accounts for about 50% of the order response time on average [1], which is the largest proportion among operations such as warehousing, loading and unloading, and information processing. In addition, travel time accounts for about 50% of the order picking time (which also includes starting, searching, sorting, and other picking activities), making it the most time-consuming part of the work with the largest labor consumption. As a consequence, minimizing the order retrieval time plays a critical role in improving the warehouse-output efficiency for any logistics system. There are four methods to reduce travel times or distances by means of more efficient control mechanisms in warehouses [2]: (1) determining a product item order for picking routes, (2) zoning the warehouse, (3) assigning orders to batches, and (4) assigning products to the correct cargo location. Cargo location optimization refers to storing a reasonable type and quantity of items in the corresponding cargo location, which minimizes the costs involved in terms of distance traveled and/or time spent.
One of the most important concerns of warehouse managers is finding the most cost- and time-efficient way to pick orders placed by customers, which would allow the company to be seen as a reliable company that satisfies its customers [3]. Cargo location assignment requires assigning a position to each cargo. An appropriate position for each cargo is important and influences the operational efficiency of warehouses [4]. Therefore, cargo location assignment plays a critical role in improving customer satisfaction and minimizing warehouse operational costs. Storage assignment is an important decision problem in warehouse operation management. It involves the placement of a set of items in a warehouse in such a way that some performance measure is optimal. The main purpose of using a storage location assignment system is to establish the parameters for ease of identification and location of items in warehouses.
Literature Review
The optimization problem of cargo location assignment has received a significant amount of attention. Many scholars have studied cargo location assignment primarily from the viewpoints of cargo turnover efficiency, shelf stability, and warehouse storage strategy to minimize the total order picking distance. Jiao et al. considered the working performance and security requirements of an automatic warehouse. A simple weighted genetic algorithm was used to solve the weighted and normalized multiobjective models [5]. Zhang et al. studied the metrological centres of cargo locations with many constraint rules in warehouses and proposed a simulated annealing algorithm to reassign cargo locations based on a prepartitioning strategy [6]. Li et al. proposed to separately use the traditional genetic algorithm and a virus coevolutionary genetic algorithm to solve the cargo location optimization problem [7]. Tang et al. put forward a storage strategy to optimize the cargo location using multilane shelves according to the material characteristics of a large amount of warehousing, multiple varieties, and large volume differences for typical shipping enterprises [8]. Xie et al. proposed a novel bilevel grouping optimization model for solving the storage location assignment problem with grouping constraint. Sophisticated fitness evaluation and search operators were designed for both upper and lower level optimization [9]. Yang et al. discussed the Container Stacking Position Determination Problem, specifically focusing on the storage space allocation problem in container terminals [10]. Xie et al. developed an efficient Restricted Neighbourhood Tabu Search algorithm to solve the storage location assignment problem with grouping constraints [11]. Flamand et al. investigated retail assortment planning along with store-wide shelf space allocation in a manner that maximizes the overall store profit [12].
Many authors have studied some optimization problems, including picking routes, location assignment, and picking order distance to minimize operational cost. Battini et al. presented the storage assignment and travel distance estimation joint method, a new approach useful to design and evaluate a manual picker-to-parts picking system, focusing on goods allocation and distances estimation [13]. Guo et al. suggest that using head-up displays like Google Glass to support parts picking for distribution results in fewer errors than current processes [14]. Adasme et al. proposed four compact polynomial formulations based on classical and set covering p-median formulations and proposed Kruskal-based heuristics and metaheuristics based on guided local search [15]. Zhou et al. calculated the sum of the expected picking distance in the main channel and the expected picking distance of the subchannel, and a mathematical model for return-shape picking paths of the V-type layout was established [16]. Duan et al. constructed a Stackelberg model in which one retailer sells a national brand (NB) and its store brands (SB) and maximized the category profit by allocating shelf space and determining the prices for the SB and NB products [17]. Luan et al. presented a Location-Routing Problem model to assist decision makers in emergency logistics. The model attempted to consider the relationship between the location of warehouses and the delivery routes to maximize the rescue efficiency [18]. Tian et al. presented new energy-efficient models of its sustainable location with carbon constraints. An artificial fish swarm algorithm (AFSA) was proposed to solve the proposed models [19]. Bortolini et al. faced the so-called unit-load assignment problem for industrial warehouses located in seismic areas presenting an innovative integer linear programming model [20]. Tian et al. studied the optimal location of a transportation facility and automotive service enterprise issue and presented a novel stochastic multiobjective optimization to address it [21].
Some of these studies also considered the inbound and outbound warehouse times, the stability of shelves, and the classification of cargo, as we do in this paper. However, when establishing the target model, these studies did not consider the actual layout of shelves. Our work in this paper is distinct in that we consider the influence of parity on the driving distance of a forklift. In this study, to solve the assignment problem using a genetic algorithm and an AFSA, the ideal point method is proposed for transforming multiple objectives into a single objective. Computational experiments show that the optimization effect of the AFSA is superior to that of the classical genetic algorithm. To sum up, the paper makes the following main contributions: (1) The paper takes multirow fixed shelves as the research object. According to the actual layout of warehouse shelves, the influence of the parity of x on the inbound-outbound efficiency in the x-axis direction is considered on the basis of the existing cargo location optimization models. (2) The paper uses the modified ideal point method to construct the evaluation function and applies the AFSA to the optimal cargo location assignment of retail e-commerce. The remainder of the paper is organized as follows. Section 3 describes the problem. Section 4 describes the assumptions and constructs the mathematical model. The solving algorithm is proposed in Section 5. Section 6 reports the numerical experiments and analysis of results. Section 7 presents the conclusions and future work.
Problem Description
Generally, retail e-commerce warehouses are mainly composed of a temporary storage area and a shelf area. The warehouse plan is shown in Figure 1. The warehouse consists of multiple aisles, each of which is relatively independent with separate shelves, and the inbound and outbound (I/O) points are unique. The process of goods entering the warehouse includes carrying the goods to be stored from the storage area to the I/O points, followed by a forklift carrying the goods from the I/O points of the shelves to the cargo space. The shelf stereogram is shown in Figure 2. At present, the storage strategy in the supermarket warehouse is random storage, which means that warehouse personnel use forklifts to randomly assign goods to the nearest idle shelves in the process of warehousing goods and putting them on shelves. The warehousing of retail e-commerce enterprises is a special category of warehousing. Compared with traditional enterprise warehousing, retail e-commerce warehousing has the characteristics of using personnel as service objects and includes more kinds, shapes, and qualities of goods. At present, the following problems exist in the storage of goods and the allocation of cargo locations. First, due to the variety of goods in the supermarket warehouse and the random storage mode, goods are stored in a disorderly manner. It is easy for goods from different categories to be mislabelled and for goods in adjacent positions to shift. Second, customers' demands for goods and their demand times are random, which requires the supermarket warehouse to respond to orders quickly and efficiently. However, random storage makes the distribution of relevant goods scattered, and it takes more time to find similar goods when they are not in the warehouse, which leads to a low operational efficiency of the supermarket. Third, the appearances and weights of all kinds of goods that are stored in the supermarket warehouse are quite different. If the warehoused goods are randomly stored in idle positions, there may be low shelf stability and hidden safety risks.
In view of the abovementioned problems in the warehouse, this paper proposes the following optimization strategies for the assignment of storage space: (1) ree types of cargo location assignment strategies are used, including dedicated storage, randomized storage, and class-based storage [22]. A dedicated storage policy prescribes a particular location for the storage of each product, and no other item can be stored at that location, even if the space is empty. Random storage is used because of the necessity of optimizing the storage area, and materials are placed in existing idle positions. Randomized and dedicated storages are extreme cases of class-based storage policies; that is, randomized storage considers a Mathematical Problems in Engineering single class, and dedicated storage considers one class for each item. In addition, there is an increase in the costs of using space when the space is poorly used in dedicated storage, while when using random storage, much effort is placed on the order picking system. Class-based storage combines the features of the other two systems and can be a good alternative for making a warehouse more efficient in terms of the space that is used, the order picking operation, and the warehouse costs. (2) Higher frequency goods should be stored closer to the I/O points. e optimization goal of the warehouse is to reduce the total time of inbound and outbound goods over a certain period to the shortest time by optimizing the cargo location assignment. e most important performance measures in a warehouse are generally related to the time or effort required for cargo to enter and leave the warehouse, i.e., the storage and retrieval of items from the temporary storage area and their delivery to the point where they will be picked up by the appropriate forklift. After determining the storage of high-frequency outbound goods, the remaining cargo locations are arranged to store other goods. e total forklift travel time and the positions of higher frequency goods are strongly correlated, which has a strong impact on the warehouse operating efficiency. e closer a frequent item is to the I/O point, the lower is its total forklift travel time [23].
(3) Generally, heavy cargo should be kept on the ground or at a lower position on the shelves to maximize shelf stability and improve safety, and light cargo should be placed at higher positions on the shelves, which can also reduce the height of the whole shelving unit [4]. (4) A previous study presented a detailed analysis of the calculation of the similarity coefficient and used the Rogers-Tanimoto similarity coefficient to measure the correlation between two goods [24]; a small sketch of this coefficient is given below. Similar goods are more likely to occur in the same order; therefore, similar goods should be placed in a concentrated manner, which can significantly reduce the total order picking distance and time.
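The Rogers-Tanimoto coefficient mentioned in strategy (4) can be illustrated with a short sketch. It assumes that each good is encoded as a binary vector over a set of historical orders (1 when the good appears in that order); the function and variable names are illustrative and are not taken from the paper.

import numpy as np

def rogers_tanimoto(u, v):
    """Rogers-Tanimoto similarity between two binary order-occurrence vectors.

    u[k] = 1 if the good appeared in historical order k, else 0.
    Returns matches / (matches + 2 * mismatches), a value in [0, 1].
    """
    u = np.asarray(u, dtype=bool)
    v = np.asarray(v, dtype=bool)
    matches = np.sum(u == v)        # orders where both goods agree (both present or both absent)
    mismatches = np.sum(u != v)     # orders containing exactly one of the two goods
    return matches / (matches + 2 * mismatches)

# Example: goods A and B co-occur in 2 of 5 historical orders.
print(rogers_tanimoto([1, 1, 0, 0, 1], [1, 1, 0, 1, 0]))   # ~0.43

Goods whose pairwise similarity exceeds a chosen threshold would then be grouped into the same class for the class-based storage strategy.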
4.1. Assumptions.
The following reasonable assumptions are put forward to simplify the model: (1) a good only has one cargo location, and each cargo location can only store one product; (2) the cargo box for each stored good is a rectangular parallelepiped; (3) goods are stored on shelves in full boxes, and the goods on one shelf position are regarded as a whole; (4) the volume of each good during each inbound and outbound delivery is less than the maximum storage capacity of the cargo location.

In this paper, the first objective is to improve the inbound and outbound efficiencies of the warehouse by minimizing the total forklift travel time. This objective can be optimized by placing higher frequency goods nearer to the entrance of the warehouse. The total forklift travel distance needs to be analysed and calculated, including the distance from the temporary storage area to the assigned location, which is described as follows. When the distance traveled by the forklift to place all goods into storage is calculated, the first row and the second row have the same distance in the x-axis direction, the third row and the fourth row have the same distance in the x-axis direction, and so on. Therefore, the driving distance of the forklift along the x axis is

D_x = x · (L/2), if x is an odd number,
D_x = (x − 1) · (L/2), if x is an even number,

where L is the distance between the centre lines of the adjacent shelf passages.
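The parity rule above can be made concrete with a short sketch. The function name, the row numbering starting from 1, and the example value of L are illustrative assumptions, and the full travel-time objective (which also involves the y and z coordinates and the forklift speed) is not reproduced here.

def x_axis_distance(x, L):
    """Distance along the x axis from the I/O point to shelf row x (rows numbered from 1).

    Rows are arranged in pairs that share one passage, so an even row has the same
    x-axis distance as the odd row before it. L is the distance between the centre
    lines of adjacent shelf passages.
    """
    if x % 2 == 1:                    # odd row: D_x = x * (L / 2)
        return x * (L / 2)
    return (x - 1) * (L / 2)          # even row: D_x = (x - 1) * (L / 2)

# Rows 1 and 2 share one passage, rows 3 and 4 the next, and so on.
for x in range(1, 7):
    print(x, x_axis_distance(x, L=3.0))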
Stability Model.
Cargo locations are assigned so as to lower the overall centre of gravity of the shelves, which is a basic requirement of warehouse operations. Only under the premise of ensuring shelf stability is the realization of the other optimization objectives meaningful.
To meet the requirement for shelf stability, this paper focuses on the vertical direction; only the z-axis direction of the shelf centre of gravity is considered. The sum of the coordinates of all goods in the z direction can be used to determine whether the shelf is sufficiently stable. This evaluation function is therefore calculated as the sum of the z coordinates of all stored goods, which is to be minimized.
Classification Model.
When cargo locations are allocated, the relevance of the goods needs to be considered, and strongly similar cargoes should be assigned to the same shelf area. Therefore, an objective function is built as follows. First, the central coordinates of each group of similar goods are calculated. Then, the distance between each good and the central coordinates of its group is computed, and these distances are summed; the resulting sum is the classification objective to be minimized. In summary, the multiobjective mathematical model of cargo location optimization combines the travel time, stability, and classification objectives above, where x = 1, 2, . . . , a, y = 1, 2, . . . , b, and z = 1, 2, . . . , c; D_x = x · (L/2) when x is an odd number, and D_x = (x − 1) · (L/2) otherwise.
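A sketch of this classification objective is given below. It assumes Euclidean distances between integer shelf coordinates and a precomputed grouping of similar goods; both choices are illustrative rather than details fixed by the paper.

import numpy as np

def classification_objective(positions, class_labels):
    """Sum of distances from each good to the central coordinates of its similarity class.

    positions    : (n_goods, 3) array of assigned (x, y, z) locations
    class_labels : length n_goods array of class indices from the similarity grouping
    Smaller values mean that goods of the same class are stored closer together.
    """
    positions = np.asarray(positions, dtype=float)
    class_labels = np.asarray(class_labels)
    total = 0.0
    for c in np.unique(class_labels):
        members = positions[class_labels == c]
        centroid = members.mean(axis=0)              # central coordinates of the class
        total += np.linalg.norm(members - centroid, axis=1).sum()
    return total

# Goods 0 and 1 belong to one class, good 2 to another.
print(classification_objective([[1, 1, 1], [1, 3, 1], [5, 2, 4]], [0, 0, 1]))   # -> 2.0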
Transformation of the Target Function.
In practical applications, minimizing the total forklift travel distance, maximizing shelf stability, and minimizing the distance between similar cargoes are conflicting criteria. Therefore, taking these three objectives into account, a balance should be struck to adjust the cargo location assignment. To evaluate the three criteria simultaneously, this paper proposes using the ideal point method to construct an evaluation function. This method allows different objectives to be evaluated and assigns different weights to them in the final calculation. First, the algorithms are used to find the optimal solution (i.e., the ideal point) of each target. Then, the distance between the actual point and the ideal point of each objective function is calculated. Finally, the difference between the actual point and the ideal point of each objective function is weighted accordingly. The objective functions can thus be transformed into a single evaluation function, where x = 1, 2, . . . , a, y = 1, 2, . . . , b, and z = 1, 2, . . . , c; D_x = x · (L/2) when x is an odd number, and D_x = (x − 1) · (L/2) otherwise.
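The transformation can be illustrated as follows. The sketch assumes that each objective contributes its weighted relative deviation from the ideal point, which is one common ideal-point construction; the weights are placeholders and are not values from the paper.

def ideal_point_evaluation(f_values, f_ideal, weights):
    """Combine several minimization objectives into one score by the ideal point method.

    f_values : current values of the objectives (f1, f2, f3, ...)
    f_ideal  : best (ideal) value of each objective, found by optimizing it alone
    weights  : relative importance of each objective
    Each objective contributes its weighted relative deviation from the ideal point,
    so the different physical dimensions of the objectives cancel out.
    """
    return sum(w * (f - f_star) / f_star
               for f, f_star, w in zip(f_values, f_ideal, weights))

# Ideal points reported later for the three single-objective runs.
f_ideal = (617.6429, 1.6, 31.5563)
print(ideal_point_evaluation((650.0, 1.9, 40.0), f_ideal, (0.4, 0.3, 0.3)))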
Algorithms
The artificial fish swarm algorithm (AFSA) [19], presented by Li in 2002, is a swarm intelligence optimization method that simulates fish swarm behaviour. It is an effective method for solving optimization problems, e.g., facility location allocation [19], the traveling salesman problem [25], and sorting activities [26]. Moreover, reference [25] concludes that the AFSA has a strong global search ability and a fast convergence rate and obtains better solutions. Its advantages can be summarized as follows: (1) it converges quickly and can be applied to practical problems; (2) when very high precision is not required, it can obtain an acceptable result quickly; and (3) it does not require a strict mechanistic model or an accurate mathematical description of the problem, which widens its range of application. Thus, we adopt the AFSA to solve the problem of cargo location assignment.
Principle of the AFSA.
Assume an n-dimensional search space and a population of N artificial fish. The present state of artificial fish i can be expressed as X i = (x i1 , x i2 , . . . , x in ), where i = 1, 2, . . . , N and the components (x i1 , x i2 , . . . , x in ) are the variables to be optimized. The food concentration at the present location of the artificial fish is expressed as Y = f(X). The variable visual refers to the range of vision of the artificial fish, and step represents its maximum step length. |X j − X i | is the distance between artificial fish X i and X j , δ is the crowding factor, and try_number is the maximum number of trials that an artificial fish makes in each preying attempt.
Preying Behaviour.
There is an assumption that the present state of an artificial fish is X i . Then, the artificial fish randomly selects a state X j within its visual range, which means that |X j − X i | ≤ visual. If f(X i ) < f(X j ), then the artificial fish selects X j as the current state. If not, the artificial fish selects a new state again and compares it with the current state. After try_number unsuccessful attempts, if no state satisfies the advancement condition, artificial fish X i performs a random behaviour. This process is expressed as a mathematical formula as follows:
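As an illustration of this preying step, the following sketch implements it for a generic continuous state and a food-concentration function to be maximized; the paper applies the behaviour to discrete cargo location assignments, and all names here are illustrative.

import numpy as np

def prey(x_i, f, visual, step, try_number, rng):
    """AFSA preying behaviour (illustrative sketch).

    x_i        : current state of the artificial fish (1-D array)
    f          : food concentration (fitness) function, higher is better
    visual     : perception range of the fish
    step       : maximum move length
    try_number : maximum number of random trials before a random behaviour is used
    """
    for _ in range(try_number):
        # Randomly pick a candidate state X_j within the visual range (componentwise here).
        x_j = x_i + visual * rng.uniform(-1.0, 1.0, size=x_i.shape)
        if f(x_i) < f(x_j):
            # Advance toward the better state by at most one step length.
            direction = (x_j - x_i) / (np.linalg.norm(x_j - x_i) + 1e-12)
            return x_i + step * rng.random() * direction
    # All trials failed: perform a random behaviour instead.
    return x_i + step * rng.uniform(-1.0, 1.0, size=x_i.shape)

rng = np.random.default_rng(0)
f = lambda x: -np.sum(x**2)                   # toy food concentration, peak at the origin
print(prey(np.ones(3), f, visual=1.0, step=0.3, try_number=5, rng=rng))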
Swarming Behaviour.
The current position of an artificial fish is X i , and the distance between the artificial fish and another artificial fish at any position in its visual field is |X j − X i |. The variable nf is the number of partners within the visual range of the artificial fish, and X c is the centre position of the surrounding fish swarm. If (Y c /n f ) > δY i , the food concentration at X c is high and the area is not crowded, and the fish swims toward the centre. If not, artificial fish X i conducts preying. The introduction of a crowding factor largely prevents the artificial fish from being trapped in local minima due to a high density of fish at a certain location. The behaviour is described mathematically as follows:
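A matching sketch of the swarming step is given below. It reuses a preying function as the fallback and assumes a positive fitness so that the crowding test (Y c /n f ) > δY i is meaningful; both are assumptions of this illustration rather than of the paper.

import numpy as np

def swarm(x_i, partners, f, step, delta, rng, prey_fn):
    """AFSA swarming behaviour (illustrative): move toward the centre of the visible
    partners if the food there is plentiful and the spot is not crowded."""
    nf = len(partners)
    if nf == 0:
        return prey_fn(x_i)
    x_c = np.mean(partners, axis=0)            # centre position of the visible swarm
    if f(x_c) / nf > delta * f(x_i):           # (Y_c / n_f) > delta * Y_i
        direction = (x_c - x_i) / (np.linalg.norm(x_c - x_i) + 1e-12)
        return x_i + step * rng.random() * direction
    return prey_fn(x_i)                        # centre crowded or poor: fall back to preying

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + np.sum(x**2))       # positive food concentration, peak at the origin
partners = [np.array([0.2, 0.1]), np.array([-0.1, 0.3])]
print(swarm(np.array([1.0, 1.0]), partners, f, step=0.5, delta=0.3, rng=rng,
            prey_fn=lambda x: x))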
Following Behaviour.
Following behaviour is similar to swarming behaviour. Using X i as the current state of an artificial fish, the fish searches for the optimum companion X max within its perceptual area. If (Y max /n f ) > δY i , there is much food around X max and the artificial fish are not crowded; otherwise, the artificial fish has to prey. The move follows the rule below:
Chromosome Coding Design.
When the AFSA is used to solve the cargo location assignment problem, its coding can be expressed via two methods: an expression based on the cargo location and an expression based on the goods. The chromosome coding based on the cargo location method is as follows.
Each artificial fish represents one way of allocating the goods to be stored. To construct an artificial fish, as many non-repeating values as there are goods to be stored are randomly selected from the set of cargo location numbers. The index of each component of the artificial fish represents a good's number, and the value of that component represents the assigned location number. The one-dimensional vector is then converted into a three-dimensional vector of x, y, and z.
Assuming that there are 5 goods to be put into storage, a possible chromosome is, for example, (5 4 2 1 3). This artificial fish indicates that good no. 1 is stored in the fifth location, good no. 2 is stored in the fourth location, good no. 3 is stored in the second location, and so on. In addition, the artificial fish also indicates that good no. 1 is stored in the (1, 1, 5) section, good no. 2 is stored in the (1, 1, 4) section, and so on.
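The mapping from a one-dimensional location number to (x, y, z) is not written out in the text; the sketch below assumes that locations are numbered with z varying fastest, then y, then x, which reproduces the example above (location 5 decodes to (1, 1, 5) when each shelf column holds at least five levels). The names and shelf sizes are illustrative.

import numpy as np

def decode_fish(fish, b, c):
    """Decode an artificial fish (one 1-based location number per good) into (x, y, z).

    fish : sequence whose i-th entry is the location number assigned to good i+1
    b, c : number of positions along the y and z axes of one shelf row
    """
    coords = []
    for loc in fish:
        n = loc - 1
        z = n % c + 1
        y = (n // c) % b + 1
        x = n // (b * c) + 1
        coords.append((x, y, z))
    return coords

def random_fish(n_goods, n_locations, rng):
    """One candidate assignment: n_goods distinct location numbers out of n_locations."""
    return list(rng.choice(np.arange(1, n_locations + 1), size=n_goods, replace=False))

rng = np.random.default_rng(1)
fish = [5, 4, 2, 1, 3]                     # the example chromosome from the text
print(decode_fish(fish, b=6, c=6))         # good 1 -> (1, 1, 5), good 2 -> (1, 1, 4), ...
print(random_fish(5, 36, rng))             # a randomly generated artificial fish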
Procedure of the AFSA.
The procedure of the AFSA is shown in Figure 3.
Step 2: set up the bulletin board to record the current state of each artificial fish and the optimal value found so far.
Step 3: update the state of every artificial fish. The states are dynamically updated as follows. Suppose that the current state of an artificial fish is X i . First, the artificial fish tries to follow. If that fails, the artificial fish swarms. If swarming fails, the artificial fish preys. Finally, if preying fails, the artificial fish enacts a random behaviour, and max_gen = max_gen − 1.
Step 4: evaluate the fitness value of each artificial fish using formula (6). The steps are repeated until the termination condition is met.
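A skeleton of Steps 2-4 is sketched below. The behaviour functions are passed in as callables that return None on failure, and the random-move fallback and termination rule are simplified relative to the flowchart in Figure 3, which is not reproduced here; all names are illustrative.

import random

def afsa(fitness, init_population, follow, swarm, prey, random_move, max_gen):
    """Skeleton of the AFSA loop (illustrative, not the authors' code).

    fitness         : evaluation function of one artificial fish (smaller is better,
                      matching the minimization form of formula (6))
    init_population : list of initial artificial fish
    follow, swarm, prey : behaviour functions taking (fish, population) and
                      returning an improved fish, or None on failure
    random_move     : fallback behaviour when all three behaviours fail
    """
    population = list(init_population)
    bulletin = min(population, key=fitness)        # Step 2: bulletin board with the best state
    for _ in range(max_gen):
        for i, fish in enumerate(population):
            # Step 3: try following, then swarming, then preying, then a random move.
            candidate = follow(fish, population)
            if candidate is None:
                candidate = swarm(fish, population)
            if candidate is None:
                candidate = prey(fish, population)
            if candidate is None:
                candidate = random_move(fish)
            population[i] = candidate
            # Step 4: evaluate the fitness and update the bulletin board.
            if fitness(candidate) < fitness(bulletin):
                bulletin = candidate
    return bulletin

# Toy usage with no-op behaviours, only to show the control flow.
rng = random.Random(0)
pop = [[rng.random() for _ in range(3)] for _ in range(5)]
best = afsa(fitness=sum, init_population=pop,
            follow=lambda fish, pop: None, swarm=lambda fish, pop: None,
            prey=lambda fish, pop: None,
            random_move=lambda fish: [v * rng.random() for v in fish],
            max_gen=50)
print(best, sum(best))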
Data Preparation.
In this section, we first obtain data about the product characteristics, which include the type (i.e., how many different goods there are), each good's weight and inbound/outbound frequency, and each good's number and original coordinates in the warehouse. Moreover, the warehouse characteristics are required, including its dimensions (i.e., length and width), layout, forklift speed, and distances (i.e., the distance between the temporary storage area and the inbound and outbound points and the distance between the centre lines of the adjacent shelf passages). Finally, the parameters of the AFSA are set.
In addition, to facilitate the calculation, each position's height (h) is set to 1.
6.2. Contrast Experiment.
MATLAB (2015 version, MathWorks Inc.) is used, and the tests are conducted on a 32-bit Windows operating system. In this section, we evaluate the proposed AFSA approach and compare it with the classical genetic algorithm (GA) and particle swarm optimization (PSO) from previous work [4,27]. According to the evaluation function based on the ideal point method, the experiments are arranged as follows. First, each single objective is optimized by the three algorithms to find its optimal value. Then, the optimal values of the single objectives are substituted into the evaluation function, and the three single objectives are integrated into a multiobjective function. Next, the multiobjective simulation experiment is carried out with the three algorithms, and their optimization results are compared. Finally, a scale simulation experiment is conducted to verify the universality of the model and of the proposed AFSA.
Simulation and Comparison Experiment for the Inbound Efficiency.
To evaluate the first objective, we conducted 20 simulations using the GA, PSO, and AFSA. After the three algorithms are each iterated 200 times, the optimized result diagrams that are obtained are shown in Figures 4 and 5. Figure 4 illustrates that the optimized result of the AFSA is superior to those of the PSO and GA in terms of the forklift operating time. The functional values using the PSO and GA are reduced from the initial value of 983.2857 to 787.3571 and 672.5714, reductions of 19.9259% and 31.5996%, respectively. When using the AFSA, the functional value decreases to 617.6429, which is 37.1858% lower than the initial value.
(4) Comparing the optimized positions in Figure 5 obtained with the AFSA with the positions in Figure 1 before optimization, it can be seen that most goods are allocated to positions near the I/O points.
The results indicate that the three algorithms can improve the warehouse-input efficiency to some extent, but some goods are placed on higher levels and similar goods are scattered. Figure 6 illustrates that the optimized result of the AFSA is superior to those of the PSO and GA in terms of the shelf stability objective. The functional values using the PSO and GA are reduced from the initial value of 4.5874 to 3.2517 and 1.6296, reductions of 29.1167% and 64.4766%, respectively. When using the AFSA, the functional value decreases to 1.6, which is 65.1219% lower than the initial value.
(4) Comparing the optimized position in Figure 7 from the AFSA with the position in Figure 1 before optimization, it can be seen that most goods are placed on the bottom shelves.
The results indicate that the three algorithms can improve shelf stability to some extent, but some goods are allocated to cargo spaces far from the I/O points and similar goods are scattered. Figure 8 illustrates that the optimized result of the AFSA is superior to those of the PSO and GA in terms of the cargo classification objective. The functional values using the PSO and GA are reduced from the initial value of 104.622 to 55.3044 and 37.6389, reductions of 47.1388% and 64.0239%, respectively. When using the AFSA, the functional value decreases to 31.5563, which is 69.8378% lower than the initial value.
Simulation and Comparison Experiment for Cargo Classification.
(4) Comparing the optimized position in Figure 9 from the AFSA with the position in Figure 1 before optimization, it can be seen that similar goods are assigned near the central cargo spaces, which greatly increased the concentration of related goods.
The results indicate that the three algorithms can improve the concentration of similar goods to some extent, but some goods are allocated to cargo spaces far away from the I/O points and some goods are placed at higher levels.
Simulation and Comparison Experiment for Multiobjectives.
In the multiobjective experiment, the goods are mixed, and the inbound and outbound times, the shelf stability, and the cargo classification are optimized together to maximize the warehouse operating efficiency and minimize the warehouse operating costs.
When only a single objective is considered, the PSO, GA, and AFSA are used to solve the problem of cargo location optimization. The repeated runs of the PSO, GA, and AFSA show that each program can converge after a limited number of iterations and obtain optimized results.
Comparing the results of the PSO, GA, and AFSA, we find that the AFSA obtains better results. Therefore, the results of the AFSA are chosen as the ideal points: f*_1 (x, y, z) is 617.6429, f*_2 (x, y, z) is 1.6, and f*_3 (x, y, z) is 31.5563. These three ideal points are input into formula (6) to convert the multiobjective function into a single objective function that serves as the evaluation function of the algorithms. The transformed evaluation function eliminates the influence of the different dimensions through the ideal point method.
We use the abovementioned three algorithms to optimize the comprehensive objective f(x, y, z), and formula (10) is used as the fitness function for the three algorithms. The three algorithms are each used to conduct 20 simulations and generate a comparison diagram of the iterations. After 500 iterations, the optimized result diagrams shown in Figures 10 and 11 are obtained. The optimized results obtained with the three algorithms reveal the following: (1) The results of the PSO, GA, and AFSA show that each program can converge after a limited number of iterations and obtain optimized results. (2) Figure 10 illustrates that the optimized result of the AFSA is superior to those of the PSO and GA in terms of the comprehensive objective. The functional values using the PSO and GA are reduced from the initial value of 1.1683 to 0.6945 and 0.17568, reductions of 40.5499% and 84.962%, respectively. When using the AFSA, the functional value decreases to 0.1455, which is 87.5484% lower than the initial value. (3) Comparing the optimized positions in Figure 11 obtained with the AFSA with the positions in Figures 3-8, it can be seen that similar goods are assigned nearer to their central cargo spaces and most goods are placed on the bottom shelves.
When the results of the PSO, GA, and AFSA are compared based on the data presented in Table 2, the AFSA is better than the PSO and GA with regard to both single objective optimization and multiobjective optimization. The PSO and GA are superior to the AFSA in terms of convergence speed. However, the AFSA has a strong global search ability and can obtain a better solution. The abovementioned results indicate that combining the model with the AFSA can significantly shorten the inbound and outbound working times for the warehouse, maximize the shelf stability, and increase the concentration of related goods. The final cargo position coordinates obtained after comprehensively considering the optimization of the three objectives based on the AFSA are shown in Table 3.
Scale Experiment.
To demonstrate the effectiveness of the proposed AFSA for the cargo location optimization problem, we conducted 20 simulation experiments on 150 goods (instead of the 30 goods considered above) with the PSO, GA, and AFSA, and obtained a comparison diagram of the iterations. The algorithm iteration diagram in Figure 12 shows that the three algorithms can converge stably within 500 iterations. Additionally, it demonstrates that the AFSA has a higher convergence speed than the PSO and GA. The optimized results of the PSO, GA, and AFSA in Table 4 indicate that the AFSA has a better optimization effect than the PSO and GA. The total optimized values using the PSO and GA are reduced from the initial value of 10.778 to 10.2551 and 10.1958, reductions of 4.8514% and 5.402%, respectively. When using the AFSA, the functional value decreases to 8.5114, which is 21.0295% lower than the initial value.
The abovementioned results of the scale experiment indicate the universality of the proposed model and of the AFSA for solving the cargo location assignment problem. The approach can significantly shorten the inbound and outbound working times for the warehouse, maximize the shelf stability, and increase the concentration of related goods.
Conclusion
The paper takes multirow fixed shelves as the research object and considers the influence of row parity on the inbound-outbound efficiency along the x axis.
This paper constructs a multiobjective mathematical model in which the objectives are efficiency, stability, and classification; the multiobjective model is then converted into a single objective model. Finally, this paper uses the PSO, GA, and AFSA to solve the problem separately. The optimized results show that the AFSA is significantly more efficient than the PSO and GA, and the model presented in the paper can achieve better retail enterprise warehouse slotting optimization using the AFSA, thus greatly reducing the operating costs. In the future, the algorithm will be further refined to increase the solution efficiency, and more objectives will be considered.
Data Availability
The "original cargo location information" data used to support the findings of this study are included within the article, and the simulation experiment code is included within the supplementary materials.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 7,617.4 | 2020-05-05T00:00:00.000 | [
"Computer Science",
"Business",
"Engineering"
] |
Suppression of viscosity enhancement around a Brownian particle in a near-critical binary fluid mixture
Abstract We consider the Brownian motion of a rigid spherical particle in a binary fluid mixture, which lies in the homogeneous phase near the demixing critical point, assuming that neither component is more attracted by the particle surface. In a recent study, it was experimentally shown that the self-diffusion coefficient first decreases and then reaches a plateau as the critical point is approached. The decrease should reflect the critical enhancement of the viscosity, while the plateau was interpreted as representing the suppression of the enhancement due to the shear around the particle. To examine this interpretation, we take into account the local shear rate to calculate the dependence of the drag coefficient on the particle speed, and then utilize a Langevin equation to calculate the self-diffusion coefficient.
Introduction
We consider the Brownian motion of a colloidal rigid particle in a binary fluid mixture lying in the homogeneous phase near the demixing critical point. In some combinations of the mixture and particle material, one of the components is preferentially attracted by the particle surface and the preferred component is remarkably adsorbed near the particle surface because of the near-criticality (Beysens & Leibler 1982;Beysens & Estève 1985). The particle motion deforms the adsorption layer, which affects the force exerted on the particle (Lee 1976;Omari, Grabowski & Mukhopadhyay 2009;Okamoto, Fujitani & Komura 2013;Fujitani 2018;Tani & Fujitani 2018;Yabunaka & Fujitani 2020). In other combinations exhibiting negligible preferential adsorption, the particle motion remains still influenced by the near-criticality because of the critical enhancement of the viscosity (Ohta 1975;Ohta & Kawasaki 1976). This enhancement can also be influenced by the particle motion, as is pointed out in a recent experimental work (Beysens 2019). We briefly mention its background in some paragraphs below.
Let us first assume there are no particles in an equilibrium near-critical binary fluid mixture. The composition can be represented by the difference between (or the ratio of) the mass densities of the two components. The order parameter, which we can take to be proportional to the deviation of the local composition from the critical one, fluctuates about the equilibrium value on length scales smaller than the correlation length, ξ. Correlated clusters, where the order parameter keeps the same sign on average, range over these scales, and are convected to enhance the interdiffusion of the components on larger length scales (Kawasaki 1970; Onuki 2002). Thus, ξ affects how the two-time correlation function of the order-parameter fluctuation decays. Writing Γ_k for the relaxation coefficient of its spatial Fourier transform, with k denoting the magnitude of the wavenumber vector, we have

Γ_k = Ω(kξ) × k^z, (1.1)

for small k with kξ being finite. Here, z denotes the dynamic critical exponent for the order-parameter fluctuation and Ω represents a scaling function, which approaches a constant multiplied by (kξ)^(2−z) as kξ becomes much smaller than unity (Siggia, Hohenberg & Halperin 1976; Hohenberg & Halperin 1977). This leads to Γ_k ∝ k^2 for sufficiently small k, which is expected for the hydrodynamic mode of a conserved quantity. We write k_B for the Boltzmann constant and T for the temperature of the mixture. The mode-coupling theory for a three-dimensional mixture gives

Γ_k = [k_B T/(6π η̃ ξ^3)] K(kξ), (1.2)

where K denotes the Kawasaki function with K(x) ≈ 3πx^3/8 for x ≫ 1 and K(x) ≈ x^2 for x ≪ 1, and η̃ represents the shear viscosity (Kawasaki 1970; Onuki 2002). In this theory, the weak critical singularity of the viscosity is neglected, and the dynamic critical exponent is found to be three. This theoretical result turns out to be in good agreement with the experimental results (Swinney & Henry 1973). In the refined calculation of the dynamic renormalization group, the critical enhancement of η̃ is considered, and the value of z is found to be slightly larger than three (Folk & Moser 2006).
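For reference, (1.2) can be evaluated numerically with the standard closed form of the Kawasaki function, K(x) = (3/4)[1 + x^2 + (x^3 − 1/x) arctan x], which reduces to the two limits quoted above; the closed form and the short script below are supplied here for illustration only and are not taken from the paper.

import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K

def kawasaki_K(x):
    """Kawasaki scaling function (standard mode-coupling form, assumed here):
    K(x) = (3/4) * [1 + x^2 + (x^3 - 1/x) * arctan(x)],
    with K(x) ~ x^2 for x << 1 and K(x) ~ 3*pi*x^3/8 for x >> 1."""
    x = np.asarray(x, dtype=float)
    return 0.75 * (1.0 + x**2 + (x**3 - 1.0 / x) * np.arctan(x))

def gamma_k(k, T, eta, xi):
    """Relaxation rate of the order-parameter fluctuation at wavenumber k,
    Gamma_k = kB*T/(6*pi*eta*xi^3) * K(k*xi), as in (1.2)."""
    return kB * T / (6.0 * np.pi * eta * xi**3) * kawasaki_K(k * xi)

# Check the quoted limits: K ~ x^2 for small x, K ~ 3*pi*x^3/8 for large x.
print(kawasaki_K(1e-3) / 1e-6)                   # ~1
print(kawasaki_K(1e3) / (3 * np.pi * 1e9 / 8))   # ~1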
The mixture is assumed to be at equilibrium in the preceding paragraph. The critical enhancement of the transport coefficients, i.e. the Onsager coefficient for the interdiffusion and the shear viscosity, can be suppressed when a shear is imposed on the mixture. Influences of a simple shear flow are studied theoretically (Onuki & Kawasaki 1979;Onuki, Yamazaki & Kawasaki 1981) and experimentally (Beysens, Gbadamassi & Boyer 1979;Beysens & Gbadamassi 1980). In an example of this flow, the x component of the velocity is y multiplied by the constant shear rate s (> 0), with (x, y) denoting two of the three-dimensional Cartesian coordinates. A correlated cluster of the order-parameter fluctuation would be deformed by the shear when its lifetime is longer than a typical time scale of the shear, 1/s. The lifetime for a cluster with the size of 1/k is evaluated to be 1/Γ k , while the cluster size ranges up to ξ . Hence, if the shear is strong enough to satisfy the enhancement is suppressed. This condition of strong shear is also derived in terms of the renormalization group, as mentioned in appendix A. A simple shear flow is a kind of linear shear flow, where the velocity V at a position is proportional to the positional vector, and can be regarded as a linear combination of a stagnation-point flow and a purely rotational flow (Rallison 1984). These linear shear flows are two-dimensional. A pure-extension flow, being a three-dimensional linear shear flow, and a stagnation-point flow are referred to as elongational flows in Onuki & Kawasaki (1980a,c), where the suppression is studied for some linear shear flows. In a linear shear flow, the time derivative of a directed line segment X linking two fluid particles is equal to (∇V ) T · X , where the matrix ∇V represents the homogeneous velocity gradient and superscript T indicates the transposition. Thus, the exponential of the product of (∇V ) T and the time t determines how X is stretched and shrunk with time. In the elongational flow, the shear rate s in (1.3) is given by the largest stretching rate, i.e. the largest eigenvalue of ∇V ; some details are mentioned in the penultimate paragraph of appendix A.
The mean square displacement of a Brownian particle becomes proportional to the time interval as it becomes sufficiently long. The self-diffusion coefficient of the particle is defined as the constant of proportionality divided by twice the spatial dimension. In the study mentioned at the end of the first paragraph (Beysens 2019), it is shown that the self-diffusion coefficient of a Brownian particle in a near-critical binary fluid mixture first decreases and then reaches a plateau as T approaches the critical temperature T c along the critical isochore in the homogeneous phase. The first decrease should reflect the critical enhancement ofη, while the plateau can be regarded as representing the suppression of the enhancement due to the shear caused by the particle motion. Using (1.2) and replacing s in (1.3) with the average particle speed divided by the particle radius, Beysens (2019) estimates the temperature range exhibiting the suppression; the estimated range appears consistent with the observed one. In the present study, we calculate the self-diffusion coefficient for direct comparison with the experimental results.
In the first three subsections of § 2, we calculate the hydrodynamic force exerted on a rigid spherical particle moving translationally in a fluid mixture quiescent far from the particle. Assuming a typical length scale of the flow to be much larger than ξ , we need not consider dynamics of the order-parameter fluctuation, which is significant only on length scales smaller than ξ (Furukawa et al. 2013;Okamoto et al. 2013). The mixture is assumed to be incompressible, as in the previous studies mentioned above (Folk & Moser 1998;Onuki 2002). This assumption usually works well in a near-critical mixture prepared experimentally (Anisimov et al. 1995;Onuki 2002;Pérez-Sanchez et al. 2010). When the viscosity is homogeneous, the magnitude of the force is proportional to the particle speed. The constant of proportionality (the drag coefficient) is given by Stokes' law (Stokes 1851) and is linked with the self-diffusion coefficient of its Brownian motion through the Sutherland-Einstein relation (Einstein 1905;Sutherland 1905), although the Brownian motion is not always translational. This relation can be derived from the Langevin equation for the particle velocity (Bian, Kim & Karniadakis 2016), and is further founded on the fluctuating hydrodynamics (Bedeaux & Mazur 1974), even near the critical point (Mazur & van der Zwan 1978). In our problem, the suppression of the viscosity enhancement is locally determined by the inhomogeneous shear around the particle, and the drag coefficient can depend on the particle speed in its range to be considered in the Brownian motion. Neither Stokes' law nor the Sutherland-Einstein relation is applicable when the suppression occurs. Assuming the suppression to remain weak even if it is brought about by the local strong shear, we calculate the drag coefficient. In § 2.4, we use a one-dimensional Langevin equation to link the drag coefficient with the self-diffusion coefficient. Our results are shown and discussed in § 3, and summarized in § 4.
Formulation and calculation
We write d for the spatial dimension; our calculations in the text are limited to the case of d = 3. The values of the static critical exponents are shown in Pelisetto & Vicari (2002); we use ν ≈ 0.630 and η ≈ 0.0364. The exponent η represents the deviation from the straightforward dimensional analysis of the static, or equal time, correlation function of the order-parameter fluctuation at the critical point. When the shear is not so strong as to suppress the critical enhancement, the correlation length ξ is homogeneously given by ξ = ξ_0 τ^(−ν) on the critical isochore, where τ is defined as |T − T_c|/T_c and ξ_0 is a non-universal constant. Then, the singular part of the shear viscosity is proportional to τ^(ν(d−z)) in a flow whose typical length is much larger than ξ, as described at (A 2). This exponent is measured to be around −0.042 (Berg & Moldover 1989, 1990), which leads to z = 3.067. Because |ν(d − z)| is small, the viscosity exhibits a very weak critical singularity. Thus, for the viscosity, the dependence of the regular part on τ is also significant unless the mixture is very close to the critical point, unlike for the Onsager coefficient of the interdiffusion. As in Beysens (2019), we use

η̃^(0) = η̃_B τ^(ν(d−z)) (2.1)

as the viscosity free from the shear effects. In this form of multiplicative anomaly, the regular part η̃_B is defined as

η̃_B = η̃_0 exp[E_a/(k_B T)], (2.2)

where η̃_0 is a non-universal constant and E_a denotes the activation energy (Sengers 1985; Mehrotra, Monnery & Svrcek 1996). Molecules would be required to overcome some energy barrier to shift their locations in a dense liquid. Equation (2.1) supposes τ < 1, and the singular part represents the enhancement. We define τ_s so that a given shear rate s affects the critical enhancement for τ < τ_s, and define s* so that

s = s* τ_s^(νz) (2.3)

holds. Because of (1.1) and (1.3), s* is independent of the imposed shear. We will later apply our results to a mixture of isobutyric acid and water. For this mixture under no shear, measured values of Γ_k/k^2 for small k in the neighbourhood of the critical point are shown in figure 10 of Chu, Schoenes & Kao (1968). These values and ξ_0 = 0.3625 nm (Beysens, Bourgou & Calmettes 1985) give s* = 3.7 × 10^8 s^−1 with the aid of (A 4) and (A 6). From § 2.1 to § 2.3, we calculate the drag coefficient of a spherical rigid particle with radius r_0, by assuming it to move translationally with the velocity Ue_z in a binary fluid mixture in the absence of the preferential adsorption (figure 1). Here, e_z denotes a unit vector. The mixture is on the critical isochore in the homogeneous phase with T being close to T_c, and is quiescent far from the particle. Assuming ξ to be much smaller than a typical length scale over which the flow changes, we regard the local velocity field as a linear shear flow having the same velocity gradient to determine the local viscosity.
FIGURE 1. A drawing of a situation for our calculation of the drag coefficient from § 2.1 to § 2.3. A particle with the radius r_0 moves translationally with the velocity Ue_z in a mixture fluid quiescent far from the particle. A part of a cross-section containing the z axis is shown; the dashed curve represents half of the cross-section of the particle surface. The velocity field, represented by arrows outside the particle, is calculated with a homogeneous viscosity, although the viscosity becomes inhomogeneous when the suppression of the critical enhancement occurs somewhere. A magnified view of the smaller rectangular region is given in the larger one, where clusters are schematically drawn in black and white with some being deformed by the local shear. The correlation length ξ is assumed to be sufficiently small as compared with a typical length of the flow.
where Θ denotes the step function; Θ(x) vanishes for x < 0 and equals unity for x > 0. The shear rate is inhomogeneous, as shown later. Thus, the suppression makes the viscosity inhomogeneous. Subtracting the homogeneous part η̃^(0) from the whole viscosity η̃ gives η̃^(1), which is non-positive because of d = 3 < z.
The velocity and pressure fields, v and p, satisfy the incompressibility condition and Stokes' equation, i.e.,

∇ · v = 0 and ∇ · (2η̃E) = ∇p, (2.6a,b)
where E is the rate-of-strain tensor. Here, a low Reynolds number is assumed, as discussed in § 2 of Yabunaka & Fujitani (2020). The no-slip boundary condition is imposed at the particle surface, while v tends to zero and p approaches a constant, denoted by p ∞ , far from the particle.
Approximation for a weak suppression
We consider a particular time and take the spherical coordinates (r, θ, φ) so that the origin is at the particle's centre and that the polar axis (z axis) is along e z ; the coordinate z should not be confused with the dynamic critical exponent. The unit vectors in the directions of r and θ are denoted by e r and e θ , respectively. The no-slip condition gives v = Ue z at ρ = 1, where ρ denotes a dimensionless radial distance, r/r 0 . We can assume v φ = 0. The drag force is along the z axis; its z component, denoted by F z , is given by the surface integral of (2ηE · e r − pe r ) · e z over the particle surface. The drag coefficient is given by −F z /U, and can depend on U in our problem. Thus, we write γ (U) for the drag coefficient.
We, respectively, write v (0) and p (0) for the velocity and pressure fields obtained when the viscosity is forced to beη (0) homogeneously. Equation (2.6a,b) yields The solution is well known (Stokes 1851) and is given by and v (0) φ = 0. The arrows outside the particle in figure 1 represent v (0) for U > 0. The superscript (0) is also added to a quantity calculated from v (0) and p (0) . The drag coefficient calculated from (2.8a-c) is given by which is independent of U and represents Stokes' law. We define v (1) and The boundary conditions are v (1) = 0 at ρ = 1 and v (1) → 0 and p (1) → 0 as ρ → ∞. (2.11) In the flow field of v and p, we define κ as the maximum value of a dimensionless ratio |η (1) /η (0) |. Equation (2.5) is proportional to κ. From (2.10) and (2.11), v (1) is also proportional to κ. The particle speed supposed here lies in the range involved in the Brownian motion. We assume that τ is not so small as to cause strong suppression, and assume κ to be so small that the calculation up to the order of κ makes sense. At this order, (2.10) becomes (2.12) Here, s contained inη (1) is replaced by s (0) , which is the shear rate calculated from v (0) . Likewise, we can evaluate κ by using over the particle surface, where E (1) is the rate-of-strain tensor for v (1) and p (1) . On the z axis, the components of ∇v (0) with respect to the Cartesian coordinates (x, y, z) is expressed by ∂ z v (0) z multiplied by a traceless diagonal matrix, whose diagonal elements are −1/2, −1/2 and 1 from the top. Here, ∂ z indicates the partial derivative with respect to z. Thus, noting the description at the end of the preface of § 2, we can regard v (0) on the z axis as a pure-extension flow locally. In particular, at a point with θ = π for U > 0, the largest stretching rate occurs in the z direction, i.e. the radial direction, and is given by ∂ r v (0) r . As θ approaches π/2, periodic motion becomes predominant over elongational motion in v (0) , as is suggested by figure 1 and is explicitly shown in appendix B. A rotational flow is found to be weak in suppressing the critical enhancement (Onuki & Kawasaki 1980c). Thus, considering the discussion on the elongational flow in the fourth paragraph of § 1, we assume that s (0) is given by the largest real-part of the eigenvalues of ∇v (0) . Calculating them directly from (2.8a-c), we find s (0) to be given by where a positive factor c, depending on θ , ranges from 1/2 to 2. Equation (2.14) with c = 2 equals |∂ r v (0) r |. We proceed below with calculations by regarding c as a constant, in spite of the actual dependence of c on θ . It will be shown later that our result of the self-diffusion coefficient is rather insensitive to the value of c.
Expansions with respect to the spherical harmonics
The flow we consider here is symmetric with respect to the polar axis, and thus the angular part of v (1) can be expanded in terms of the vector spherical harmonics, for J = 1, 2, . . . (Morse & Feshbach 1953;Barrera, Estévez & Giraldo 1985;Fujitani 2007). Here, Y J0 (θ ) is one of the spherical harmonics, √ (2J + 1)/(4π)P J (cos θ), with whereby F J and H J are defined. They are obtained with the aid of the orthogonality of the vector spherical harmonics. The incompressibility condition gives We use (2.18) to delete T J from the r and θ components of (2.12), which are combined to give Here, X J is defined as (2.20) Similar calculations can be found in deriving (2.17) of Fujitani (2007) and in deriving (3.20) of Yabunaka & Fujitani (2020). Equations (2.11) and (2.18) give Applying the method of variation of parameters, we can rewrite (2.19) as where the kernel Γ J is given in appendix C. The first term of (2.13) does not contribute to F (1) z because (2.14), and thus (2.5), vanish at the particle surface. With the aid of (2.18), we use the θ component of (2.12) to delete Π J and T J from the last two terms of (2.13). These terms are thus rewritten as the sum of terms involving R J and H J over J. Only the terms involving R 1 are left after the surface integration of (2.13), as shown by (C 4) and described below (C 5). Substituting (2.22) into the result of the surface integration, we use The right-hand side above is related to the fraction appearing in (2.8b) because of the Lorentz reciprocal theorem (Lorentz 1896), as shown in appendix B of Fujitani (2018) and mentioned at (A2) of Yabunaka & Fujitani (2020). We thus arrive at whereX is defined asX (2.25) As shown by (C 7),X is given in terms of the integral with respect to θ because (2.20) involves F 1 and H 1 , which are calculated from the right-hand side of (2.12) with the aid of the orthogonality of the vector spherical harmonics. Thus, (2.24) contains a double integral with respect to ρ and θ . We have analytical results for the integrals with respect to ρ, as described at the end of appendix C, and thus what to calculate numerically is the integration with respect to θ . We findX to depend on U only through τ s (0) . With U * denoting s * r 0 , the ratio is a function of a dimensionless speed |U|/U * because of (2.3) and (2.14), and is denoted byγ (U/U * ). Using the value of the critical exponents stated in the preface of § 2, we numerically calculate the integral of (2.24) to obtainγ (u). In figure 2,γ (u) decreases as u increases, which represents that the critical enhancement ofη is suppressed more as the faster particle causes stronger shear. At the smaller value of τ , the suppression is shown to be stronger, which can be explained by the existence of larger clusters deformable for smaller u.
Description of the Brownian motion
FIGURE 2. We plot γ̃(u) for τ = 1.008 × 10^−3 (•) and 1.26 × 10^−4 (+). As mentioned in the text, γ̃ represents the drag coefficient non-dimensionalized by (2.9), while u represents the particle speed non-dimensionalized by U* ≡ s*r_0.

When the viscosity is homogeneous in the absence of the suppression, as mentioned in § 1, a simple description of the Brownian motion is given by the Langevin equation for the particle velocity, where the force exerted on the particle is separated into the thermal noise and the instantaneous friction force (Bian et al. 2016). The former represents the force varying much more rapidly than the latter and vanishes after being averaged over a macroscopic time interval (Sekimoto 2010), while the friction coefficient in the latter equals the drag coefficient given by Stokes' law. This is founded in terms of the fluctuating hydrodynamics (Bedeaux & Mazur 1974; Mazur & van der Zwan 1978) and is numerically verified (Keblinski & Thomin 2006). The components of the thermal noise in the three orthogonal directions are statistically independent, and thus the self-diffusion coefficient can be calculated in one dimension. To calculate the self-diffusion coefficient in our problem, we still use the drag coefficient γ(U) as the friction coefficient in the Langevin equation, considering that the viscosity can be only weakly inhomogeneous depending on the particle speed. This amounts to assuming γ(U) to be the most probable friction coefficient when the particle velocity is U in the Brownian motion at the time resolution of the Langevin equation.
The effective mass, denoted by m, is the sum of the particle mass and half the mass of the displaced fluid (Lamb 1932; Bian et al. 2016; Fujitani 2018). Here, unlike in the preceding subsections, U is a stochastic variable depending on the time t. The Langevin equation is

m dU = −γ(U)U dt + b(U) • dW, (2.26)

where W represents the Wiener process and the symbol • indicates that (2.26) should be interpreted in the Stratonovich sense (Risken 2002; Sekimoto 2010). The positive function b(U) is fixed so that (2.26) is consistent with the Boltzmann distribution, as shown in appendix D. The self-diffusion coefficient of the particle D is given by (Bian et al. 2016)

D = ∫_0^∞ dt ⟨U(t)U(0)⟩, (2.27)

where ⟨· · ·⟩ means the equilibrium average. Defining M as mU*^2/(k_B T), we utilize the Laplace transformation to obtain the expression (2.28) for D, as shown in appendix D. This equation can be also derived from (2.26) by using not the Laplace transformation but some of the equations in § S.9 of Risken (2002). Converting the integration variables u and u_1 to u√M and u_1√M, respectively, we find that D depends on M only through the variable of γ̃.
Equation (2.28) involves γ̃(u) for infinitely large u because (2.26) formally supposes any particle speed, including the particle speed larger than assumed in the hydrodynamic formulation. This is also the case with the Langevin equation supposing a constant drag coefficient, where Stokes' law is assumed even for particle speeds so large as to break the validity of the Stokes approximation. In either case, we can avoid this inconvenience in computing the self-diffusion coefficient, to which such large speed never contributes. An effective cutoff speed is implicitly imposed on the Langevin equation. This point is discussed in the next section.
Results and discussion
Latex beads with radius 80 nm are put in a mixture of isobutyric acid and water in the experiment of Beysens (2019), where we have m = 3.32 × 10^−18 kg and U* = 29.6 m s^−1. The mixture can be regarded as incompressible near the demixing critical point (Clerke & Sengers 1983; Onuki 2002). The thermal average of U is 3.53 × 10^−2 m s^−1, which is denoted by Ū. The improper integrals in (2.28) can be replaced by definite integrals involving only particle speeds smaller than approximately 4Ū, as described in the latter half of appendix D. Because the viscosity of the mixture is around 2.5 × 10^−3 kg m^−1 s^−1 (Allegra, Stein & Allen 1971), the Reynolds number is 1.4 × 10^−2 for U = Ū, and remains sufficiently small as compared with unity even if multiplied by four. This is consistent with our hydrodynamic formulation. The variable u in figure 2 represents U/U*, which equals 1.19 × 10^−3 for U = Ū. Thus, the range of the horizontal axis in figure 2 approximately coincides with the integration interval of the definite integrals used in our numerical calculation.
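The quoted quantities can be checked with a few lines of arithmetic. The sketch below reads the "thermal average of U" as the one-dimensional root-mean-square velocity sqrt(k_B T/m), which reproduces the quoted value; this reading and the use of T ≈ T_c are assumptions of the illustration.

import math

kB = 1.380649e-23        # Boltzmann constant, J/K
T = 300.1                # K, taken close to T_c of the mixture (an assumption)
m = 3.32e-18             # kg, effective particle mass quoted in the text
r0 = 80e-9               # m, particle radius
s_star = 3.7e8           # 1/s, value of s* quoted in section 2

U_bar = math.sqrt(kB * T / m)    # assumed reading of the thermal average of U
U_star = s_star * r0             # U* = s* r0
u_bar = U_bar / U_star           # dimensionless speed appearing on the axis of figure 2

print(f"U_bar  ~ {U_bar:.3e} m/s")    # ~3.53e-2 m/s, matching the quoted value
print(f"U_star ~ {U_star:.3g} m/s")   # 29.6 m/s
print(f"u      ~ {u_bar:.3e}")        # ~1.19e-3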
The data of the self-diffusion coefficient in Beysens (2019), ranging over 6.31 × 10 −5 ≤ τ ≤ 6.81 × 10 −2 , are replotted with open circles in figure 3. The viscosity of the near-critical mixture of isobutyric acid and water, containing no particles, is measured in Allegra et al. (1971). From the data in their table 2, with the ones for four values of τ from the smallest being excluded according to Oxtoby (1975), we calculate the self-diffusion coefficient by applying Stokes' law and the Sutherland-Einstein relation, i.e. by dividing k B T/(6πr 0 ) by the viscosity, and plot the results with crosses in figure 3. The crosses, ranging over 1.14 × 10 −4 ≤ τ ≤ 2.78 × 10 −2 , agree with the open circles for τ > 7 × 10 −3 . These open circles and the crosses should be explained by using (2.1), i.e. η (0) , which is free from the suppression due to the shear.
Conversely, we can calculate the viscosity from the open circles for τ > 7 × 10^−3 by applying Stokes' law and the Sutherland-Einstein relation. In a graph (not shown here) where these results and the data of Allegra et al. (1971) with the exclusion above are linearly plotted against τ, we perform the curve-fit to η̃^(0) with the aid of 'NonlinearModelFit' of Mathematica (Wolfram Research) by using T_c = 300.1 K (Toumi & Bouanz 2008) and the values of the critical exponents stated in the preface of § 2. Estimated values are η̃_0 = 3.38 × 10^−6 kg m^−1 s^−1 and E_a/(k_B T_c) = 6.35, with the standard deviations being 5.11 × 10^−7 kg m^−1 s^−1 and 1.53 × 10^−1, respectively. We use the estimated values to calculate γ^(0) = 6πη̃^(0) r_0 and plot k_B T/γ^(0) with the dashed curve in figure 3; the curve agrees well with the data applied to the curve-fit. Employing γ^(0) thus obtained, we calculate the prefactor of (2.28), whose integrals are numerically calculated after being replaced by the definite integrals mentioned above. Our results of D for c = 1 are plotted with solid circles in figure 3. It appears that the open circles saturate to reach a plateau as τ decreases below 3.2 × 10^−3, although they are distributed rather widely in the direction of the vertical axis. Our results pass through the middle of the distribution. This strongly suggests that the saturation should represent the suppression of the critical enhancement of η̃ due to the local shear caused by the particle motion, as claimed by Beysens (2019). Figure 4 shows that our calculation results of D increase as c increases, which can be expected because the shear is then evaluated to be larger. In this figure, the ratios of the change in D lie within 5 % when c changes from unity to 0.1 or to 10; dependences of the ratio on c are almost the same for the two values of τ. For comparison with the data of Beysens (2019), we plot (2.28) for c = 0.25 and 4 with red short bars in figure 3. It is clear for τ > 10^−3 that the two bars at the same τ are closer to each other as τ is larger; (2.28) depends on c only when the suppression occurs. The range of c is considered as 1/2 ≤ c ≤ 2 in § 2.2. For c = 1/2 (2), each of the results indicated by solid circles for τ ≤ 1.61 × 10^−2 is shifted to the middle between the solid circle and the lower (upper) short bar. These slight shifts show that our results for any value of c in the interval of 1/2 ≤ c ≤ 2 explain the experimental data well. It is also suggested that, if we take into account the dependence of c on θ in this interval, the calculation results should remain in good agreement with the experimental data.

FIGURE 3. Open circles come from Beysens (2019), where T_c is estimated to be 301.1 K. Crosses come from Allegra et al. (1971), with T_c being estimated to be 299.4 K. The dashed curve represents k_B T/(6πη̃^(0) r_0), which we calculate by using the parameter values estimated from the open circles for τ > 7 × 10^−3 and the crosses. Solid circles represent our results of (2.28) for c = 1. The red short bar above (below) each of the solid circles for τ ≤ 1.61 × 10^−2 represents (2.28) for c = 4 (0.25).
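As a worked illustration of the fitted background curve, the following sketch evaluates η̃^(0)(τ) from the multiplicative-anomaly form (2.1)-(2.2) with the estimated parameters quoted above, and then the Stokes-Sutherland-Einstein value k_B T/(6π η̃^(0) r_0) plotted as the dashed curve. Treating T as T_c(1 + τ) and the parameter names below are assumptions of the illustration, not a verbatim reproduction of the authors' fit.

import math

kB = 1.380649e-23        # J/K
Tc = 300.1               # K, critical temperature used in the curve fit
r0 = 80e-9               # m, particle radius
nu, d, z = 0.630, 3.0, 3.067
eta0 = 3.38e-6           # kg m^-1 s^-1, fitted non-universal amplitude
Ea_over_kBTc = 6.35      # fitted activation energy in units of kB*Tc

def eta_no_shear(tau):
    """Viscosity free from shear effects, eta^(0) = eta_B * tau**(nu*(d - z)),
    with the regular part eta_B = eta0 * exp(Ea/(kB*T)) evaluated at T = Tc*(1 + tau)."""
    T = Tc * (1.0 + tau)
    eta_B = eta0 * math.exp(Ea_over_kBTc * Tc / T)
    return eta_B * tau ** (nu * (d - z))

def D_reference(tau):
    """Dashed curve of figure 3: kB*T/(6*pi*eta^(0)*r0), i.e. Stokes' law combined with
    the Sutherland-Einstein relation, with no suppression of the critical enhancement."""
    T = Tc * (1.0 + tau)
    return kB * T / (6.0 * math.pi * eta_no_shear(tau) * r0)

for tau in (1e-2, 1e-3, 1e-4):
    print(f"tau = {tau:.0e}:  D ~ {D_reference(tau):.2e} m^2/s")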
Let us examine where the suppression occurs around a particle moving in the way supposed in figure 1. In the approximation mentioned below (2.12), we use s^(0) instead of s in (2.5) to calculate |η̃^(1)/η̃^(0)| for c = 1 and U = Ū, and show the results in figure 5, where the suppression occurs in the coloured regions. The maximum of (2.14) is taken at (ρ, θ) = (√2, 0) or (√2, π). The value of |η̃^(1)/η̃^(0)| equals κ at these points, and becomes smaller at a point more distant from these points, as shown in figure 5. The maximum of τ s (0) is 0.0129 × c^0.518 for U = Ū, and is 0.0264 × c^0.518 for U = 4Ū, which is approximately the effective cutoff in our numerical integration. The latter yields κ ≈ 0.13 at τ = 10^−3 and ≈ 0.21 at τ = 10^−4 for c = 1 and U = 4Ū. Thus, κ is adequately small as compared with unity in the range of τ examined in figure 3, which would make our formulation globally meaningful.

FIGURE 5. Plots of |η̃^(1)/η̃^(0)| for c = 1 and U = Ū at each of τ = 1.26 × 10^−2 (a) and 8.064 × 10^−3 (b). The particle is assumed to move translationally with the velocity Ūe_z. The region shown in each figure is the same as in figure 1; the dotted curve represents the half of the cross-section of the particle surface. The curves outside the particle are the stream lines of v^(0), which are represented by arrows in figure 1.
From the maximum of τ s (0) for U =Ū mentioned above, κ is found to become non-zero when τ is smaller than 1.29 × 10 −2 for c = 1 and U =Ū. This value of τ approximately agrees with the onset temperature of the suppression estimated in Beysens (2019), where the shear rate, with its inhomogeneity and dependence on U being neglected, is evaluated to be a typical shear rate,Ū/r 0 . The value of τ is slightly smaller than 1.29 × 10 −2 in figure 5(a), where a very weak suppression occurs in narrow regions around (ρ, θ ) = ( √ 2, 0) and ( √ 2, π) for c = 1 and U =Ū, as expected. However, at this temperature, the suppression cannot be read in figure 3. In figure 5(b) with smaller τ , |η (1) /η (0) | becomes larger in wider regions, which means that the suppression occurs more strongly and extensively, although the suppression can be read only slightly from the solid circle at this temperature and cannot be read from the open circle closest to this temperature in figure 3. It is for τ < 3.2 × 10 −3 in figure 3 that the suppression can be read explicitly from the experimental data (•); the suppression for c = 1 and U =Ū should occur more strongly and extensively in this range of τ than in figure 5(b). Thus, we overestimate the onset temperature of the suppression in the data of the self-diffusion coefficient if we evaluate the shear rate to beŪ/r 0 .
For U = 4Ū, as mentioned above, the maximum of τ s (0) is 0.0264 × c 0.518 . The maximum is smaller than unity in the range of c examined in figure 4, as supposed in our formulation. This also shows that our results are free from the details of a formal rule for large particle speed in appendix D. In the absence of the suppression, we have ξ = ξ 0 τ −ν = 28 nm and 120 nm for τ = 10 −3 and 10 −4 , respectively. The correlation length should be reduced by the strong shear, which suppresses the order-parameter fluctuation with small wavenumber. The correlation length under the shear effect is dependent on the direction, and proportional to τ −0.5 at the largest in the stagnation-point flow (Onuki & Kawasaki 1980c). This exponent is the same as in the mean-field approximation. The curvature of the stream line in figure 3 suggests that a typical length of the flow is several times larger than the particle diameter. Thus, a typical length of the flow is sufficiently large as compared with ξ in the range of τ examined in figure 3, as supposed in our formulation.
It is assumed in (2.4) that the suppression is perfect if it occurs. In terms of the renormalization-group calculation, the singular part of the viscosity changes in the coarse-graining procedure, which makes sense until the resolution reaches ξ , and the way of the change is altered when the resolution exceeds a threshold determined by the shear rate. Equation (1.3) can be derived from the condition of whether the threshold comes before ξ or not, as mentioned in the latter half of appendix A. In the procedure after the alteration, the singular part of the viscosity continues to be changed and becomes anisotropic. Thus, the assumption in (2.4) does not hold exactly. The ratio of the correction in the later stage, i.e. in the stage after the alteration, to the one in the earlier stage is evaluated by averaging the former correction over the directions in appendix E. If d = 3 is substituted into these results valid up to the linear order with respect to 4 − d, the evaluated ratio is smaller than 4 (7) per cent for a pure-extension flow (a simple shear flow). These small values would support the appropriateness of (2.4), which can explain the data for the three-dimensional mixture of Beysens (2019) in figure 3.
In deriving (1.3), the lifetime of a correlated cluster and the range of the cluster size are evaluated. However, some deviations are possible in these evaluations, and may be required to compensate for the approximation in (2.4) mentioned above. For example, let us consider replacing ξ with 1.5ξ in (1.3). This replacement is equivalent with putting c equal to 1.5 z ≈ 3.5. A change of (1.3) to this extent cannot be denied from the data of Beysens (2019) in figure 3, considering that the red short bars above the solid circles still lie in the middle of the distribution of the data.
We simply link the drag coefficient with the self-diffusion coefficient by means of the one-dimensional Langevin equation for the particle velocity. Similar nonlinear Langevin equations are used for different problems in Klimontovich (1994) and Lindner (2007). Validity of the Langevin equation with the Stratonovich interpretation in our problem, where the viscosity can be inhomogeneous and dependent on the particle speed, remains to be founded on the fluctuating hydrodynamics, unlike in the cases studied by Bedeaux & Mazur (1974) and Mazur & van der Zwan (1978). It still appears that the Langevin equation can describe the data of Beysens (2019) well in figure 3, where a rather large distribution of the data for small τ in the direction of the vertical axis may come from the properties of the viscosity in our problem.
Summary and concluding remarks
Correlated clusters of the order-parameter fluctuation are generated in a near-critical binary fluid mixture lying in the homogeneous phase near the demixing critical point. The upper size of clusters, the correlation length, becomes larger as the critical point is approached. Then, as is well known, the convection of large and long-lived clusters enhances the transport coefficients in the coarse-grained dynamics (Kawasaki 1970). It is also well known that a sufficiently strong shear, if imposed, can deform long-lived clusters to suppress the critical enhancement (Onuki & Kawasaki 1979). In a recent experiment (Beysens 2019), shear around a Brownian particle in a near-critical mixture on the critical isochore was suggested to cause this suppression to influence the motion. Deviation of the self-diffusion coefficient from the Stokes-Sutherland-Einstein formula was observed in the temperature range where the suppression is judged to occur from a typical shear rate around a particle moving with a typical speed.
How the deviation depends on the temperature is calculated in the present study. We first calculate the drag coefficient of a particle moving translationally in a mixture which is quiescent far from the particle. The suppression is simply assumed to occur perfectly when the cluster with the size of the correlation length becomes so long-lived as to be deformed by the shear. The shear rate is inhomogeneous and depends on the particle speed. Hence, the suppression makes the viscosity inhomogeneous and dependent on the particle speed. The calculation supposes a low Reynolds-number and a sufficiently weak influence of the shear on the viscosity, which are realized in the experiment. We next employ the drag coefficient thus calculated, dependent on the particle speed, as the frictional coefficient in a one-dimensional Langevin equation of the Stratonovich type to calculate the self-diffusion coefficient. The calculation results agree well with the experimental data, which is rather robust to changes in the threshold for the occurrence of the suppression.
Acknowledgements
The author thanks S. Yabunaka for helpful discussions.
Declaration of interests
The author reports no conflict of interest.
Appendix A. Previous results of the renormalization-group calculations
For an equilibrium binary fluid mixture in the homogeneous phase on the critical isochore near the critical point, as the renormalization steps are iterated, the Onsager coefficient for the interdiffusion approaches a constant, denoted by λ*. Rewinding the rescaling procedure to decrease the cutoff wavenumber k at each iteration, we obtain the coefficient coarse-grained up to k. Writing λ for it, we have (A 1) if kξ is larger than or comparable to unity. Here, k_o (> k) is the cutoff wavenumber before coarse graining, and thus 1/k_o is a microscopic length. If kξ is much smaller than unity, 1/k in the parentheses of (A 1) should be replaced by ξ; rewinding the rescaling procedure makes sense up to the length scale of ξ. Likewise, the singular part of η̄ is given by (A 2), where η̄* denotes a constant for η̄ corresponding with λ* (Onuki & Kawasaki 1979). These exponents can also be derived from the dynamic scaling assumption (Folk & Moser 2006). We define the order parameter so that the coefficient of the square-gradient term in the dimensionless effective Hamiltonian is one half, as in Siggia et al. (1976), Onuki & Kawasaki (1979) and Onuki (2002). In the Fourier transform mentioned above (1.1), we put the two times equal to each other and write χ_k for the result, which is the static correlation function, or static susceptibility. We have χ_k ≈ ξ²(k/k_o)^η for kξ ≲ 1. As k → 0 and ξ → ∞ with kξ being an arbitrary positive number, the renormalization-group calculation gives (A 3) (Siggia et al. 1976), where the Kawasaki amplitude R is a universal constant approximately equal to 1/(6π). The dimensionless scaling function, Ω, tends to unity as its variable approaches zero.
Considering that λη̄ξ^(d−2) divided by χ_k equals Rk_BT_c for kξ ≪ 1, we have (A 4) with kξ being fixed to be small (Siggia et al. 1976).
According to the renormalization-group calculation up to the order of ε ≡ 4 − d in the presence of a simple shear flow (Onuki & Kawasaki 1979), (A 1) is found to break down for k ≲ k_s, where k_s is defined so that (A 5) holds. The shear is strong enough to suppress the critical enhancement if k_sξ ≳ 1 holds; k_s then comes before 1/ξ in the coarse-graining process of decreasing k (figure 6). Otherwise, λ for kξ ≪ 1 remains given by (A 1) with 1/k being replaced by ξ. With the aid of (A 3)-(A 5), we find that this condition of strong shear approximately agrees with (1.3). From (2.3) and (A 5), k_s^(−1) can be evaluated. Ambiguity in the condition of strong shear is discussed in § 3. The above-mentioned breakdown of (A 1) occurs when the static susceptibility deviates from the Ornstein-Zernike form in the mean-field approximation for an equilibrium fluid mixture. Then, the susceptibility should be written with the wavenumber vector, rather than its magnitude k, as the subscript of χ, because of its anisotropy. The method of characteristics is used to calculate this susceptibility for a simple shear flow in § 3 of Onuki & Kawasaki (1979), and for other kinds of linear shear flow in § 4 of Onuki & Kawasaki (1980a) and § 3 of Onuki & Kawasaki (1980c). In this method, a wavenumber vector dependent on a parameter with the dimension of time is introduced, and the 'time' derivative of the vector equals the product of the matrix of the velocity gradient and the vector. Thus, s in (1.3) can be determined in the way described in § 1.
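The strong-shear condition can be illustrated numerically. The sketch below assumes, following the description of (1.3) above, that a cluster of size ξ is deformed when the shear rate exceeds its inverse lifetime, with the lifetime taken as the order-of-magnitude Kawasaki time 6πη̄ξ³/(k_BT); the threshold constant c and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Sketch of the strong-shear criterion described around (1.3) and (A 5):
# a cluster of size xi is deformed if the local shear rate s exceeds its
# inverse lifetime, taken here as the Kawasaki time
# tau_xi = 6*pi*eta*xi**3/(kB*T).  The threshold c and the parameter
# values below are illustrative assumptions.

kB = 1.380649e-23  # J/K

def tau_xi(eta, xi, T):
    """Order-of-magnitude lifetime of a cluster of size xi."""
    return 6.0 * np.pi * eta * xi**3 / (kB * T)

def shear_suppresses(s, eta, xi, T, c=1.0):
    """True if the shear rate s is strong enough to deform the cluster."""
    return s * tau_xi(eta, xi, T) > c

eta, T, s = 2e-3, 300.0, 1e3   # Pa s, K, 1/s (illustrative values)
for xi in (1e-8, 5e-8, 1e-7):  # correlation length in metres
    print(f"xi = {xi:.0e} m: tau_xi = {tau_xi(eta, xi, T):.2e} s, "
          f"suppression: {shear_suppresses(s, eta, xi, T)}")
```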
The renormalization correction of λ in the later stage with k < k_s in a simple shear flow becomes dependent not only on k but also on the direction of the wavenumber vector, and is much smaller than the one in the earlier stage with k > k_s even when ε is put equal to unity, as shown by (4.62) and (4.85) of Onuki & Kawasaki (1979). We draw figure 6 from figure 2 of Onuki & Kawasaki (1979), where it is stated that the singular part of the viscosity behaves in the same fashion, although the viscosity for k < k_s is not only dependent on k but also expressed in terms of an anisotropic tensor (Onuki & Kawasaki 1980b). Within the simplified picture of figure 6, assuming (2.4) amounts to drawing the curve of the singular part of η̄ as if the curve for k ≥ k_s were perfectly free from the shear effect and linked to a horizontal line in k < k_s.

Figure 6 caption: we define k so that the cutoff wavenumber viewed on the original lattice equals k at the iteration number, l, in the renormalization-group calculation. We have k/k_o = b̂^(−l), where b̂ is the length rescaling factor. The curve follows (A 1) for k_o ≫ k ≫ k_s, but deviates from (A 1) for k ≪ k_s.
Appendix B. Local shear rate
We here examine how ∇v^(0) changes around a particle, which is briefly mentioned above (2.14). From (2.8a-c), we can calculate the components of ∇v^(0) with respect to the three-dimensional Cartesian coordinates (x, y, z) at a point on the xz plane (φ = 0). There, the (x, y), (y, x), (y, z) and (z, y) components vanish. The components can be expressed in terms of a 3 × 3 matrix, which we rewrite as the sum of two matrices, one of which is diagonal. We rotate the (x, z) coordinates to (x′, z′) coordinates so that the diagonal elements of the other matrix vanish at the point; the diagonal matrix mentioned above is not changed by this rotation. The components of the velocity gradient with respect to the coordinates (x′, y, z′) are given by (B 2), where we use A ≡ 3 sin θ/(4ρ²). We define C as ∇_y v^(0)_y = 3(ρ² − 1) cos θ/(4ρ⁴), and define B as

B = [3/(8ρ⁴)] √{[−2 + 2(5 − 3ρ²) cos²θ]² sin²θ + [3(ρ² − 1) + 2(5 − 3ρ²) sin²θ]² cos²θ}.   (B 3)
Thus, A and B are non-negative. The first term in the square brackets of (B 2) represents a pure-extension flow. In evaluating the resulting deformation, we use ť ≡ tU/r_0 and note that the matrices in the square brackets of (B 2) commute. In figure 7, with ρ = 1.5, we have A > B, i.e. purely imaginary Ω_e, for 0.74 < θ < 2.4 approximately, and C² > Ω_e² for 0.6 < θ < 2.5 approximately. Although data are not shown, as ρ increases above unity, the θ region with A > B and the one with C² > Ω_e² become narrower. We find that A > B holds at θ = π/2 for any ρ larger than unity and that Ω_e = 1.5|C| holds at θ = 0 and π. As θ changes from 0 or π to π/2, Ω_e² decreases more rapidly than C², as shown in figure 7. The two-dimensional flow with C = 0 is considered in Onuki & Kawasaki (1980c), where a rotational flow with A > B is shown to be weak in suppressing the critical enhancement. For simplicity, we thus neglect the periodic deformation of a cluster in determining the local shear rate.
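Because A, B and C are given explicitly above, the θ regions quoted for ρ = 1.5 can be checked directly. The short script below evaluates the three components and locates the interval with A > B; the formulas are those of this appendix, while the grid resolution is an arbitrary choice.

```python
import numpy as np

# Components of the velocity gradient around the particle (appendix B):
#   A = 3 sin(th) / (4 rho^2)
#   C = 3 (rho^2 - 1) cos(th) / (4 rho^4)
#   B = 3/(8 rho^4) * sqrt( [-2 + 2(5-3 rho^2) cos^2 th]^2 sin^2 th
#                          + [3(rho^2-1) + 2(5-3 rho^2) sin^2 th]^2 cos^2 th )
# rho is the distance from the particle centre in units of its radius.

def A(rho, th):
    return 3.0 * np.sin(th) / (4.0 * rho**2)

def C(rho, th):
    return 3.0 * (rho**2 - 1.0) * np.cos(th) / (4.0 * rho**4)

def B(rho, th):
    t1 = (-2.0 + 2.0 * (5.0 - 3.0 * rho**2) * np.cos(th)**2)**2 * np.sin(th)**2
    t2 = (3.0 * (rho**2 - 1.0)
          + 2.0 * (5.0 - 3.0 * rho**2) * np.sin(th)**2)**2 * np.cos(th)**2
    return 3.0 / (8.0 * rho**4) * np.sqrt(t1 + t2)

# Reproduce the theta interval with A > B quoted for rho = 1.5
rho = 1.5
th = np.linspace(0.0, np.pi, 10_001)
mask = A(rho, th) > B(rho, th)
print(f"A > B for theta in [{th[mask][0]:.2f}, {th[mask][-1]:.2f}]")
```

Running this reproduces the quoted interval, approximately 0.75 < θ < 2.39.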
Appendix C. Some details in calculating the drag force

We introduce the functions defined in (C 1), and the kernel appearing in (2.22) is given by (C 2), together with Γ_J(ρ, σ) = Γ_J(σ, ρ) for ρ < σ. The kernel above for J = 1 is equal to 1/30 multiplied by Γ_R of Okamoto et al. (2013). Similar calculations are found in Fujitani (2018) and Yabunaka & Fujitani (2020). As mentioned above (2.23), we can delete Π_J and T_J from the last two terms of (2.13). A similar procedure can be found in deriving (4.19) of Okamoto et al. (2013). By using (C 3), where δ_ij denotes the Kronecker delta, we find the surface integral of the last two terms of (2.13) to be given by (C 4). Differentiating (2.5) with respect to ρ or θ yields a term having the derivative of the step function, i.e. the delta function. Noting that this term vanishes because of its prefactor, we find that the negative of the right-hand side of (2.12) can be rewritten as (C 5). Here, the vector h dependent on (ρ, θ) can be calculated by using (2.8a-c). Because (2.17) is equal to (C 5), we use the orthogonality of the vector spherical harmonics to calculate F_1 and H_1. For example, H_1(ρ) is the integral of the inner product of the vectors of (C 5) and B_10 over the surface of the unit sphere. We thus find that H_1(ρ) vanishes at ρ = 1, where (2.14), and thus τ_s^(0), vanish. Substituting (2.8a-c) into the right-hand side of (2.12) and defining ζ as (d/z) − 1, we find that the components of h are given by (C 6) and h_φ = 0, where we write s for s^(0) for conciseness. We have h_r(ρ, θ) = −h_r(ρ, π − θ) and h_θ(ρ, θ) = h_θ(ρ, π − θ). From (2.20) for J = 1, we obtain

X̌(ρ) = −(ρ²/2) ∫₀^(π/2) dθ [2 sin θ cos θ h_r(ρ, θ) + sin²θ ∂_ρ(ρ h_θ(ρ, θ))] Θ(τ_s^(0) − 1),   (C 7)

where ∂_ρ operates on all the following terms, including the step function.
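As a usage sketch, the quadrature in (C 7) can be carried out numerically once h_r, h_θ and the local ratio entering the step function are known. The profiles below are placeholders standing in for (C 6) and (2.14), which are not reproduced here, and the sketch ignores the boundary term generated when ∂_ρ acts on the step function.

```python
import numpy as np
from scipy.integrate import quad

# Numerical sketch of the quadrature in (C 7); h_r, h_theta and tau_s
# are illustrative placeholders for (C 6) and (2.14).  The rho-derivative
# is taken by central differences, and the boundary term from the
# derivative of the step function is ignored in this simplified sketch.

def h_r(rho, th):      # placeholder, odd about theta = pi/2 as stated
    return np.cos(th) * np.sin(th) / rho**3

def h_theta(rho, th):  # placeholder, even about theta = pi/2 as stated
    return np.sin(th)**2 / rho**3

def tau_s(rho, th):    # placeholder ratio; > 1 only close to the particle
    return 2.0 / rho

def X_check(rho, drho=1e-4):
    def integrand(th):
        step = 1.0 if tau_s(rho, th) > 1.0 else 0.0
        d_rho_h = ((rho + drho) * h_theta(rho + drho, th)
                   - (rho - drho) * h_theta(rho - drho, th)) / (2.0 * drho)
        return (2.0 * np.sin(th) * np.cos(th) * h_r(rho, th)
                + np.sin(th)**2 * d_rho_h) * step
    val, _ = quad(integrand, 0.0, np.pi / 2.0)
    return -0.5 * rho**2 * val

for rho in (1.2, 1.5, 1.9):
    print(f"X({rho}) = {X_check(rho):+.4e}")
```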
Appendix D. Self-diffusion coefficient
The Fokker-Planck equation corresponding to (2.26) is (D 1), where P(U, t; U_0) represents the probability density of U at a time t on condition that U = U_0 at t = 0. The variables of the functions are dropped on the right-hand side for conciseness. The stationary solution, denoted by P_eq(U), is proportional to (D 2) (Risken 2002), which should be proportional to the Boltzmann distribution (Zwanzig & Bixon 1975), (D 3). We can rewrite the right-hand side by changing the integration interval to the one from U to −∞, because the integral from −∞ to ∞ vanishes. Then, using integration by parts, we find that |b(U)² − 2k_BT γ(U)| equals

2k_BT e^(mU²/(k_BT)) ∫_(−U)^∞ dU₁ γ′(U₁) e^(−mU₁²/(k_BT)),   (D 5)

where the prime indicates the derivative. We can assume that γ̌(u) decreases to a positive constant as |u| increases to ∞, for the reason mentioned in the next paragraph. Thus, for U < 0, (D 5) becomes larger if we replace γ′(U₁) with a suitable positive constant. Using an asymptotic form of the complementary error function, we find that (D 5) vanishes, and thus b(U)² remains finite, in the limit of U → −∞. This result is used later.

We can make a convenient formal rule for particle speeds larger than assumed in the text, because we later find its influence on (2.28) negligible. In other words, making such a rule is a substitute for introducing an upper cutoff of the particle speed into the model in the text. For definiteness, we here specify the formal rule: in (2.1), we replace τ with unity for τ > 1, and accordingly (2.4) is supplemented with the rule η̄ = η̄_B(T) for τ > 1 if τ_s < 1, and for any τ otherwise. As the particle speed becomes larger, because the region with τ_s > 1 approaches the whole mixture region, γ(U) approaches Stokes' law of 6πη̄_B(T)r_0 from the larger side. This means that γ̌(∞) is not smaller than τ^(ν(z−d)), which is 0.75 and 0.68 for τ = 10⁻³ and 10⁻⁴, respectively. Thus, the assumption mentioned in the preceding paragraph holds in the range of τ of figure 3.

Appendix E. Correction in the later stage averaged over the directions

… which is larger if we delete the last two terms in the square brackets, and

(π/4)[e^(2w/3) q₁² q_⊥² + e^(−2w/3)(q₂² q₃² + q₂² q₄² + q₃² q₄²)],   (E 7)

which is larger if we replace e^(−2w/3) with e^(2w/3) in the second term in the square brackets. This expression shows δ_e > 0. Using a new variable ζ₁ ≡ e^(2w/3), and performing the integration with respect to the angular components of q other than θ₁, we have

δ_e < [4/(19π²)] × [3π²/10] ∫₀¹ dq q⁷ ∫₀^π dθ₁ I₃(q, θ₁)(5 − 4 sin²θ₁) sin⁴θ₁.
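The asymptotic claim below (D 5) can also be checked numerically: with a model drag that decreases to a positive constant, the correction term vanishes as U → −∞. The functional form of the drag below is an illustrative assumption, not the calculated γ of the paper.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the claim below (D 5): the correction
#   I(U) = 2 kB T exp(m U^2/(kB T)) * Int_{-U}^{inf} dU1 gamma'(U1)
#          * exp(-m U1^2/(kB T))
# vanishes as U -> -infinity.  The drag model is an illustrative choice
# decreasing to a positive constant, as assumed in the appendix.

kB_T, m = 1.0, 1.0

def dgamma(u, amp=0.5, u0=1.0):
    """Derivative of gamma(u) = g_inf + amp*exp(-(u/u0)**2)."""
    return -2.0 * amp * u / u0**2 * np.exp(-(u / u0) ** 2)

def correction(U):
    integrand = lambda u1: dgamma(u1) * np.exp(-m * u1**2 / kB_T)
    val, _ = quad(integrand, -U, np.inf)
    return 2.0 * kB_T * np.exp(m * U**2 / kB_T) * val

for U in (-1.0, -2.0, -3.0):
    print(f"U = {U}: |correction| = {abs(correction(U)):.3e}")
```

The printed magnitudes decrease rapidly with |U|, consistent with b(U)² remaining finite in the limit.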
"Physics"
] |
cGAS/STING Pathway in Cancer: Jekyll and Hyde Story of Cancer Immune Response
The last two decades have witnessed enormous growth in the field of cancer immunity. Mechanistic insights into cancer immunoediting have not only enhanced our understanding but also paved the way to target and/or harness the innate immune system to combat cancer, an approach called cancer immunotherapy. The cyclic GMP-AMP synthase (cGAS)/Stimulator of interferon genes (STING) pathway has recently emerged as a nodal player in cancer immunity and is currently being explored as a potential therapeutic target. Although therapeutic activation of this pathway has shown promising anti-tumor effects in vivo, evidence also indicates a role for this pathway in inflammation-mediated carcinogenesis. This review highlights our current understanding of the cGAS/STING pathway in cancer, its therapeutic targeting and potential alternate approaches to target this pathway. Optimal therapeutic targeting and artificial tunability of this pathway still demand an in-depth understanding of cGAS/STING pathway regulation and homeostasis.
Introduction
The cross-talk between cancer and the immune system was first reported in the 1960s [1,2], but the relative inefficacy of naturally occurring immune responses, coupled with a lack of understanding of the underlying molecular mechanisms, posed a major challenge in this field. The immune system can selectively recognize and kill cancer cells, a process called tumor immunosurveillance. As a counter-strategy, cancer cells have evolved to bypass this process and use the immune system to promote tumorigenesis. This dual role of the immune system in both suppressing and promoting cancer, called cancer immunoediting, poses a challenge from a therapeutic perspective [1]. With the advent of recent technologies, accompanied by a better understanding of the molecular mechanisms involved, the last two decades have seen tremendous growth in the field of cancer immunotherapy.
The cGAS/STING pathway was initially described to play a crucial role in the antimicrobial immune response. Following activation by aberrant cytosolic DNA, the enzyme cGAS produces the mammalian 2′,3′-cGAMP, which in turn activates the STING protein and thereby leads to production of Type I interferon (IFN) and other pro-inflammatory cytokines that boost the immune response. Presence of microbial DNA in the cytosol seemed to be the major activator of cGAS [3]. Cyclic di-nucleotides (CDNs) produced by certain bacteria were also shown to activate some isoforms of human and mouse STING by direct binding [4][5][6][7]. According to recent studies, self-DNA leaked from the nucleus or mitochondria, probably following cell division, DNA damage and/or autophagy, can also activate this pathway, leading to pathophysiological outcomes [8,9].
Apart from its role in protecting the host from a variety of pathogenic attacks, the cGAS/STING pathway also plays a crucial role in cancer. While activation of the cGAS/STING pathway has mostly been reported to produce anticancer effects [10][11][12][13], evidence also suggests that it is implicated in carcinogenesis via self-DNA-induced autoinflammation [14][15][16][17][18]. Here, recent understanding of the role of the cGAS/STING pathway in cancer and its therapeutic modulation is reviewed, with an aim to emphasize that a better understanding and artificial tunability are required for optimal targeting of this pathway as a potential cancer immunotherapy. Potential strategies to discover small-molecule modulators of the cGAS/STING pathway from a therapeutic perspective are also discussed.
Type I IFN, Immune Response and Cancer
Cancer-specific antigens and endogenous expansion of CD8+ T cells have been discovered in many cancers. Clinical studies indicate spontaneous T cell priming, immune infiltration of T cell-recruiting cytokines, and a Type I IFN response at tumor sites, a phenomenon called the T cell-inflamed microenvironment [19]. Harlin et al. reported a strong correlation between tumor-infiltrating CD8+ T cells and chemokine expression in metastatic melanomas. In a subset of melanoma metastases, it was suggested that reduced expression of critical chemokines is a key factor limiting activated T cell migration and, thereby, an effective anti-cancer response [20]. In the anti-cancer immune response, the maturation and activation of antigen-presenting dendritic cells (DCs) is a critical step for activating the T cell response to kill cancer cells. This step is blocked by tumor cell-derived cytokines such as IL-10 and TGFβ. Thus, co-stimulatory inflammatory signals are immensely important for T cell activation. In the context of microbial infection, this is often mediated by Toll-like receptor (TLR) stimulation. Very little was known about the mechanism in the sterile state until recent gene expression profiling data indicated the Type I IFN response to be a key player in activating the T cell response. Ablation of the Type I IFN response in vivo, either by IFN receptor (IFNAR) deletion or treatment with antibody, was shown to enhance chemically induced tumor formation. These mice also showed weaker reduction of transplanted immunogenic tumors than wild-type (WT) mice, highlighting the importance of Type I IFN in spontaneous tumor rejection [21]. Treatment with the carcinogen methylcholanthrene produced more tumors in IFNAR−/− mice [21,22]. DCs were shown to be the main players in stimulating Type I IFN signaling [23,24]. The induction of tumor-specific CD8+ T cells, leading to immune rejection of tumors, was predominantly mediated by Type I IFN production in DCs [25]. In summary, the initial activation of the anti-tumor innate immune response depends on Type I IFN production by DCs, which eventually helps in CD8+ T cell cross-priming, followed by tumor cell killing. Upon radiotherapy, increased intratumoral Type I IFN production was found; this was associated with increased cross-priming potential of tumor-infiltrating DCs and could be eliminated by removing IFNAR [26]. A similar dependency on Type I IFN in anti-tumor CD8+ T cell induction was also reported in the context of tumor cell therapy. The Type I IFN response was also shown to have an anti-angiogenic effect [27].
Therefore, Type I IFN seems to be a nodal player in eliciting effective anti-tumor immunity by acting as a bridge between innate and adaptive immunity. A better molecular understanding of Type I IFN production and regulation will be helpful from both basic and therapeutic perspectives.
cGAS/STING Pathway Induced Type I IFN Production and Cancer Immunity
Given the role of Type I IFN in optimal T cell priming against tumors, the obvious mechanistic question was to identify the molecular pathway(s) that trigger IFN production in DCs in cancerous conditions. Recent evidence suggests cGAS/STING signaling as one of the key pathways in this context. A transplantable mouse tumor model study showed defective T cell priming against tumor antigens in STING−/− and IRF3−/− mice [12]. In a colon cancer model, increased phosphorylation of NF-κB and STAT3, leading to transcriptional suppression of the pro-inflammatory cytokines IL-6 and keratinocyte chemoattractant (KC), was observed in STING−/− mice [28]. Increasing evidence indicates the presence of cytoplasmic DNA in cancer cells that can induce an IFN response via the cGAS/STING pathway [13,29]. Damaged genomic DNA caused by carcinogens like DMBA, cisplatin, etoposide or radiation [13,30,31] and mitochondrial DNA leakage [32] have been shown to be the primary sources of the cytoplasmic DNA in cancer cells that can potentially activate the cGAS/STING-mediated immune response. DNA damage-induced micronuclei in cancer cells also stimulate the cGAS/STING pathway following membrane damage [33]. These findings clearly indicate a strong link between the DNA damage response and the cGAS/STING pathway-mediated cancer immune response. DNA fragments derived from cancer cells, present in the tumor microenvironment, were shown to be taken up by DCs. These DNA fragments activated the cGAS/STING pathway, inducing a Type I IFN response and thereby activating DC maturation; matured DCs in turn stimulated CD8+ T cell priming (Figure 1) [13]. Gliomas, when induced de novo in STING−/− mice, showed shorter survival associated with increased immune-suppressor cells and decreased IFN-producing CD8+ T cells in the tumor microenvironment compared to wild-type mice. Lower expression of Type I IFN and the Type I IFN-inducible Interferon-stimulated gene 54 (ISG54) was also reported in STING−/− mice compared to wild-type mice. Intratumoral delivery of the STING agonist cyclic di-GMP improved survival via enhanced Type I IFN production [34,35]. STING signaling is severely suppressed in colorectal carcinoma, associated with obstructed anti-tumor T cell priming and Type I IFNs; a strategy of tumor cells to evade the immune surveillance pathway [36]. In a colitis-associated colorectal cancer model, STING was shown to have a protective role via regulation of intestinal inflammation: STING−/− mice were prone to tumor formation due to excessive colon inflammation [28]. STING−/− mice were also shown to be prone to colitis-associated cancer induced by DNA-damaging agents. STING constitutes a crucial response to intestinal damage and is essential for stimulating tissue repair pathways to prevent tumorigenesis [28]. A crucial role of the STING pathway in the response to cryoablation was shown in an OVA model: the resulting Type I IFN expression enhanced DC functionality, resulting in clonal expansion, polyfunctionality and memory formation of tumor-specific CD8+ T cells [37].
In line with previous knowledge of radiation-induced cellular stress and excessive DNA damage, this report also indicated cGAS-mediated sensing of irradiated tumor cells by DCs. The cGAS/STING-dependent cytosolic DNA sensing pathway in DCs was shown to be essential for Type I IFN induction after radiation, which determines the radiation-mediated adaptive immune response [12]. Macrophages were also found to be key players in cGAS/STING-mediated cancer immunity [28], indicating the involvement of other antigen-presenting cells (APCs) beyond the DC-T cell axis. Furthermore, recent evidence suggests tight regulation of cGAS/STING signaling in T cells [38,39]. T cells from patients carrying a constitutively active STING mutant (TMEM173) showed decreased cell proliferation and IL-2 production, cell cycle arrest and increased apoptosis [38]. In a murine model, sustained activation of STING via STING agonists showed enhanced ER stress and activation of cell death pathways [39]. Therefore, STING activation and homeostasis seem to be tightly regulated in a cell type-specific manner. Hence, further investigation in this context may indicate the predisposition of different cell or cancer types to cGAS/STING-mediated anti-tumor immunity.
Not surprisingly, cancer cells have evolved defense mechanisms. In a human colon cancer model, cGAS and STING promoter hypermethylation was shown to reduce the expression of cGAS and STING. This, in turn, inhibited DNA damage-dependent cytokine production, allowing the cancer cells to bypass immune surveillance [36]. Similar epigenetic silencing of cGAS and STING has also enabled melanoma cells to escape immune surveillance upon DNA damage [40].
Therapeutic Targeting of cGAS/STING Pathway in Cancer
Given the suppressed Type I IFN response in non-T cell-inflamed tumors, boosting robust immune signaling in the tumor microenvironment has the potential to enhance cross-priming of tumor-specific CD8+ T cells [41]. With the growing evidence and understanding of the role of the cGAS/STING pathway in facilitating anti-tumor immunity, recent efforts aim to modulate this pathway in the context of cancer immunotherapy.
Targeting with Small Molecules and CDNs
The small molecule 5,6-dimethylxanthenone-4-acetic acid (DMXAA) initially showed potent anti-tumor activity in various mouse models [42,43], but it completely failed in a phase III clinical trial in non-small cell lung cancer when combined with chemotherapy. Recent studies have shown DMXAA to be a direct mouse STING activator, and the fact that it cannot interact with human STING explains the lack of clinical activity in humans [44]. This has also necessitated screening for small-molecule activators of human STING. CDNs, having the potential to directly activate STING, have thus been explored for their anti-cancer immune potential. Cyclic di-GMP (c-di-GMP) was shown to enhance the immunogenicity and anti-tumor effect of a peptide vaccine, TriVax, in the mouse B16 melanoma model [45]. c-di-GMP was shown to improve vaccination against metastatic breast cancer [45]. While low doses of c-di-GMP provided strong adjuvant effects in vaccinations, high c-di-GMP doses activated caspase-3, causing direct tumor cell killing [46]. Enhanced survival of glioma-bearing mice, associated with an enhanced Type I IFN response, was found after intratumoral administration of c-di-GMP. As an adjuvant, c-di-GMP also boosted OVA peptide vaccination [35]. The intravenous administration of c-di-GMP encapsulated in YSK05 liposomes into mice significantly induced Type I IFN production and activation of natural killer (NK) cells, resulting in a strong anti-tumor effect in a lung metastasis mouse model [47]. STING activation by 3′,3′-cGAMP in a chronic lymphocytic leukemia model caused apoptosis induction and tumor regression. A similar effect was also seen in syngeneic or immunodeficient mice grafted with multiple myeloma. This report indicated the potential of CDNs in the direct eradication of malignant B cells [48].
Targeting Non-Canonical Mammalian CDN and Analogs
A recent report showing particular human STING variants refractory to bacterial CDNs (with canonical 3′-5′ linkages) suggests that non-canonical mammalian CDNs (2′-5′ linkages) are the preferred set of compounds for advancement to clinical trials [49]. In melanoma and colon cancer models, intratumoral injection of 2′,3′-cGAMP stimulated the CD8+ T cell response, delayed injected tumor growth and induced a systemic anti-tumor immune response. The anti-tumor potential of 2′,3′-cGAMP was further enhanced by the blockade of both PD1 and CTLA4 [50]. A TLR9 agonist and 2′,3′-cGAMP were also shown to synergistically induce innate and adaptive immune responses. The combination of a TLR9 agonist and 2′,3′-cGAMP induced strong Th1-type responses and cytotoxic CD8+ T cell responses, and intratumoral injection of this combination reduced tumor size in a mouse melanoma model [51]. 2′,3′-cGAMP was also shown to have a significant anti-tumor effect in an adenocarcinoma model, where intratumoral injection of this STING agonist enhanced cytokine production, triggered dendritic cell activation and selectively activated apoptosis in tumor cells. It was also shown that 2′,3′-cGAMP treatment enhanced the expression of STING and IRF3; this amplification loop seems to have positively influenced the anti-tumor effect. Combination therapy of 2′,3′-cGAMP and the DNA-damaging chemotherapeutic drug 5-fluorouracil (5-FU) not only showed synergistic anti-tumor effects, but the combination of 2′,3′-cGAMP with low doses of 5-FU also reduced the adverse toxicity of 5-FU chemotherapy [11]. Exogenous 2′,3′-cGAMP treatment and radiation were also shown to synergistically amplify the anti-tumor immune response [12].
The successful application of non-canonical 2′,3′-cGAMP in eliciting anti-tumor immunity paved the way for exploring the anti-cancer immunostimulatory potential of chemically synthesized non-canonical CDN analogs. Rationally designed synthetic dithio mixed-linkage CDNs were shown to potently activate the five known human STING variants. ML RR-S2 CDA, the lead compound, having enhanced stability and lipophilicity, showed improved STING activation and anti-cancer potential both in vitro and in vivo [42]. Strong, significant tumor regression was seen in the B16 melanoma, CT26 colon cancer and 4T1 breast cancer models after intratumoral injection of the synthetic CDN. A robust systemic antigen-specific CD8+ T cell response was also induced, causing rejection of distant, non-injected tumors. Around 50% of treated animals were tumor-free with more than 150 days of survival after intratumoral injection. The synthetic CDN also conferred absolute protection against tumor re-challenge by providing long-lived immunologic memory. Though the need for intratumoral injection of synthetic CDNs can limit their application, the abscopal response can potentially activate a strong systemic immune response [40].
The promising synergistic anti-tumor effect of 2′,3′-cGAMP and radiation [12] has also motivated researchers to optimize the therapeutic potential of synthetic CDN and radiation combination therapy. R_P,R_P-dithio 2′,3′-CDN molecules combined with CT-guided radiotherapy showed a synergistic anti-cancer immune response in local and distal tumors in a murine pancreatic cancer model. This synergistic effect produces a two-phase response, where an initial TNFα secretion-driven, T cell-independent hemorrhagic necrosis is followed by CD8+ T cell-dependent recurrence control [52]. Synthetic 2′,3′-cGAMP analogs are also being explored as vaccine adjuvants in cancer immunotherapy. A cellular cancer vaccine, STINGVAX, was synthesized by combining synthetic 2′,3′-CDNs with granulocyte-macrophage colony-stimulating factor (GM-CSF). STINGVAX showed strong in vivo anti-tumor efficacy in multiple cancer therapeutic models. Rationally designed synthetic 2′,3′-CDNs, e.g., one with an R_P,R_P-dithio diastereomer and another with a non-canonical 2′,3′ mixed linkage (c[A(2′,5′)pA(3′,5′)p]), boosted the anti-tumor efficacy of STINGVAX in multiple aggressive cancer therapeutic models. Interestingly, in comparison to murine cells, where R_P,R_P-dithio 2′,3′-CDN molecules were shown to be the most potent STING stimulators in vivo, synthetic CDNs containing the 2′,3′ mixed-linkage phosphate bridge seemed to be more potent activators of human APCs. Significant PD-L1 upregulation associated with tumor-infiltrating CD8+ T cells was found in tumors from STINGVAX-treated mice. The combination of STINGVAX and PD-1 blockade could target poorly immunogenic tumors that were totally unresponsive to PD-1 blockade alone [53].
The successful application of synthetic CDN analogs in eliciting an anti-tumor immune response via STING activation allows us to explore additional avenues to therapeutically target this pathway. Small-molecule and/or modified nucleic acid activators of cGAS can be an alternative along this line. One advantage of activating the enzyme cGAS over STING is that this upstream enzyme can produce an amplified signal relative to the STING receptor-small molecule interaction. In addition, the structural similarity between human and mouse cGAS would be advantageous in extrapolating mouse model results to clinical trials. Along this line, Hall et al. have very recently reported a high-affinity cGAS inhibitor which can be tested for its therapeutic application in the context of cancer or autoimmune disorders [54].
cGAS/STING Pathway in Carcinogenesis
Although the cGAS/STING pathway is being targeted for potential cancer immunotherapy, evidence indicates that it contributes to inflammatory carcinogenesis as well. Severe side effects, including autoimmune and inflammatory responses and direct tissue toxicity, limit the anti-tumor potential of Type I IFN therapy and of cGAS/STING signaling modulation therapy.
STING was shown to enhance inflammatory cytokine levels in infiltrating phagocytes and thereby promote inflammation-driven carcinogenesis. STING−/− mice were resistant to mutagen (DMBA)-induced skin carcinoma compared to wild-type mice [30]. DNA sensing via the cGAS/STING pathway also induces a tolerogenic response in mice [55] by activating immune regulatory mechanisms. Indoleamine 2,3-dioxygenase (IDO) is an important immune checkpoint, and STING-induced IDO activation in the tumor microenvironment was shown to promote the growth of Lewis lung carcinoma (LLC). STING ablation enhanced CD8+ T cell infiltration and tumor cell killing and decreased suppressor cell infiltration and IL-10 production in the tumor microenvironment in the LLC mouse model, indicating a role of STING signaling in the attenuation of CD8+ T cell functions during tumorigenesis [56]. Molecular insight into virus-induced carcinogenesis has also shed light on the importance of the cGAS/STING pathway in this context. Expression of STING was shown to be upregulated and activated in HPV+ tongue squamous cell carcinoma (TSCC) samples, and activated STING promoted the induction of several immunosuppressive cytokines and chemokines, e.g., IL-10, CCL22, etc., that facilitated regulatory T cell (Treg) infiltration and thereby helped in carcinogenesis by blunting the anti-cancer immune response [57].
Recent evidence suggests that self-DNA-induced activation of the cGAS/STING pathway is responsible for autoimmune and inflammatory disorders [8,15,16]. Given the genetic stress, DNA damage and nuclear DNA leakage in cancer, cGAS/STING activation can potentially contribute to inflammation-induced carcinogenesis.
It should also be mentioned that the carcinogenic effect of cGAS/STING signaling may be cancer type-specific. While STING−/− mice were resistant to DMBA-induced skin carcinoma [54], STING−/− mice developed colonic tumors at an enhanced frequency compared to WT mice [58]. Thus, tumor type, location and the tumor microenvironment may play a significant role in dictating the anti-cancer or carcinogenic role of the cGAS/STING pathway.
Concluding Remarks
The cGAS/STING pathway seems to be a double-edged sword in cancer, and hence it is important to understand the molecular details and spatio-temporal regulation of this pathway in the context of cancer. Understanding how to shift this balance towards anti-cancer immune activation represents an attractive therapeutic strategy to combat cancer. It is logical to assume that the concentration of STING activators, such as 2′,3′-cGAMP and other synthetic CDN analogs, has a key role to play in this context. While an optimal CDN concentration helps in eliciting an anti-tumor immune response, a high CDN level, followed by uncontrolled STING activation, may lead to sustained inflammation and carcinogenesis. Thus, determining the optimal 2′,3′-cGAMP level is of extreme importance in the context of cancer therapeutics. We have reported an RNA-based fluorescent biosensor for 2′,3′-cGAMP [59]. The ability to measure 2′,3′-cGAMP levels in cells will impact our understanding of the optimal activation of cGAS that can elicit a robust anti-cancer immune response while still not activating inflammation-induced carcinogenesis. The biosensor can also be used in a high-throughput format to screen for small-molecule modulators of cGAS activity from a cancer therapeutic perspective. Measuring cGAS activity will also help us identify DCs in tumor microenvironments that are CDN-sensitive and may therefore help in effective therapeutics. The cGAS/STING pathway has the potential to be targeted for effective cancer therapeutics, and a deeper knowledge of the pathway and its regulation could point towards successful therapeutic targeting.
Conflicts of Interest:
The author declares no conflict of interest. | 5,289.2 | 2017-11-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
ON THE ELIMINATION OF NONLINEAR PHENOMENA IN DC / DC CONVERTERS USING TYPE-2 FUZZY LOGIC CONTROLLER
DC/DC converters are rich in nonlinear phenomena that appear when the converter parameters are subject to perturbation or variation. The converter may exhibit bifurcation from one behavior to another, quasi-periodic and chaotic responses. In such cases, it is difficult and even impossible to analyze, predict and control the converter behavior. This paper gives a description of a DC/DC converter and shows its desirable and undesirable behaviors; then a solution, based on a type-2 fuzzy logic controller, is proposed to eliminate the undesirable behaviors and to enhance the converter dynamics.
INTRODUCTION
Power converters are used in many fields to adapt the electrical power to the consumer's needs with minimum loss of energy. DC/DC converters ensure this task in many fields such as renewable energies, electronic circuits, medical equipment, satellites, etc. However, in some circumstances the variation of the circuit parameters, or perturbations of the load or the power supply, can lead to the appearance of nonlinear phenomena such as double periodicity, bifurcation and chaos that can destroy the control performance.
Hence, many approaches have been developed in the literature to shift or to suppress these nonlinear phenomena [1][2][3][4]. Nevertheless, most of these approaches focus on removing abnormal behaviors at a given operating point without any guarantee about the control performance.
In [5], a fuzzy PID controller is synthesized, by analogy with a conventional PID, for the regulation of the DC/DC converter output voltage. The idea is to determine the fuzzy controller parameters based on an established analogy with a conventional PID. This study is extended in [6] into a systematic approach for synthesizing fuzzy PID controllers, which gives the possibility of finding or locating the different stability zones of the closed-loop system. Other studies focused on finding an optimal choice of the fuzzy controller parameters within the stability zones [7], [8].
The study in [7] proposes a new fuzzy logic controller optimized by the LMI (Linear Matrix Inequality) approach. However, this method is complex and needs considerable computation time and memory space for data processing and storage.
In [9], the authors proposed a new approach based on the analytical and systematic calculation of the various fuzzy controller parameters to ensure the stability of the converter. The developed controller allows the shifting of the nonlinear phenomena and forces the converter to operate in the simplest behavior over a wide range of variation of the operating point.
To the same end, the authors in [10] proposed enhancing the converter behavior by taking into account the effect of perturbations and parameter variations. Instead of a constant reference, they used a dynamical ramp to damp the effect of perturbations and to keep the simplest behavior of the converter despite the variation of the system parameters.
However, the aforementioned works are based on analytical solutions; they are somewhat complex, and their implementation in a real plant is questionable. In addition, they do not take into consideration the different uncertainties in the converter model and in the definition of the control strategy.
Furthermore, the use of type-1 fuzzy logic for controlling dynamical electrical systems needs accurate knowledge of the system to determine the membership functions and to express the control strategy in an optimal number of fuzzy rules.
Moreover, the membership grade in a type-1 fuzzy system is a crisp number and cannot handle the different uncertainties. Indeed, uncertainties have multiple sources, and words in fuzzy rules can mean different things to different people. To tackle this problem, type-2 fuzzy logic has been proposed as an extension of type-1 fuzzy systems. It is characterized by a set of membership functions, instead of a single one, to describe each situation.
In this study, the Boost converter is selected, current-controlled and operating in continuous conduction mode. This choice is motivated by the fact that the converter, under these conditions, exhibits a large spectrum of nonlinear phenomena. First, a description of this converter is presented. Then, the proposed method based on type-2 fuzzy logic is described with the aim of suppressing the nonlinear phenomena. The approach is validated through simulation results, and its performance is evaluated through a comparative study.
BOOST CONVERTER
The simplified version of the current-controlled Boost converter is given in figure 1. In this control mode, we are able to control both the slow and the fast dynamics of the system [12,13]. The converter elements are chosen such that the inductor current never drops to zero, ensuring that the converter operates in continuous conduction mode (CCM) [12]. In this case, we have only two configurations, determined by the position of the switch sw. The system state in each configuration is expressed by equation (2), with A_i the state matrix in the i-th configuration and the state vector composed of the inductor current and the capacitor voltage. Using the control scheme of figure 1, it is well known that the converter is rich in nonlinear phenomena and exhibits complex and undesirable behaviors [11,12]. Our goal is to suppress these nonlinear phenomena and to keep the converter operating in its simplest behavior without neglecting the traditional goals of the regulation problem.
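As an illustration of this two-configuration model, the following is a minimal simulation sketch of the current-mode-controlled Boost converter in CCM; the component values, clock period and integration scheme are assumptions chosen for illustration and are not the parameters used in this paper.

```python
import numpy as np

# Sketch of the current-mode-controlled Boost converter in CCM:
# state x = [iL, vC]; configuration 1 has the switch closed,
# configuration 2 has it open.  Component values are assumed.

L_IND, C_CAP, R_LOAD = 1.5e-3, 50e-6, 20.0   # H, F, ohm
VIN, T_CLK = 10.0, 100e-6                    # V, s (clock period)

def deriv(x, switch_on):
    """Right-hand side dx/dt for the active configuration."""
    iL, vC = x
    if switch_on:  # switch closed: source charges L, RC discharges alone
        return np.array([VIN / L_IND, -vC / (R_LOAD * C_CAP)])
    # switch open: the inductor feeds the output RC network via the diode
    return np.array([(VIN - vC) / L_IND, iL / C_CAP - vC / (R_LOAD * C_CAP)])

def run(i_ref=3.0, n_cycles=200, steps=400):
    """Return stroboscopic samples of the state at each clock instant."""
    x = np.array([0.9 * i_ref, VIN])
    dt = T_CLK / steps
    samples = np.empty((n_cycles, 2))
    for n in range(n_cycles):
        switch_on = True                 # switch closes at the clock pulse
        for _ in range(steps):
            if switch_on and x[0] >= i_ref:
                switch_on = False        # peak-current threshold reached
            x = x + dt * deriv(x, switch_on)   # forward Euler (sketch)
        samples[n] = x
    return samples

print("stroboscopic iL over the last cycles:", np.round(run()[-4:, 0], 3))
```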
The type-1 fuzzy logic controller has been used successfully to this end in several works of our team [5,6,7,8]; however, this kind of controller cannot efficiently handle the uncertainties of both the system and the control strategy. The uncertainty of the latter is an inherent characteristic, especially in the fuzzification part (choice of the membership functions and their distribution) and in the inference part, due to its linguistic nature (the same word has different meanings for different people). To solve this problem, we propose in this work to take advantage of the ability of type-2 fuzzy set theory to include the different uncertainties in the control strategy.
TYPE-2 FUZZY LOGIC CONTROLLER
The architecture of a type-2 fuzzy system is similar to that of a type-1 system, with an additional block called the type reducer, which reduces conclusions from type-2 to type-1.
For each linguistic value, the type-2 fuzzy system is characterized by a set of membership functions instead of the single one used in classical type-1 fuzzy systems. Hence, the membership grade of every element is a fuzzy set in [0, 1], unlike in a type-1 fuzzy system, where the membership grade is a crisp number [16]. Thus, a type-2 fuzzy system is very useful in circumstances where we need to take into consideration the different uncertainties in the converter parameters and in the control strategy.
The core of a type-2 fuzzy controller is the type-2 fuzzy system given in figure 2 [14,16]. It consists of four blocks: the three blocks of a type-1 fuzzy system and a fourth block that performs the reduction from a type-2 fuzzy system to a type-1 system.
TYPE-2 FUZZY CONTROL OF DC-DC CONVERTER
The significant parameters in current-mode control of the Boost converter are the error e between the inductor current and its reference and its time derivative de. These parameters are the inputs of the fuzzy controller, whose output is the increment of the control action. The type-2 fuzzy controller scheme is shown in figure 3.
Fuzzification
For fuzzification, we use triangular and trapezoidal membership functions for the inputs and singletons for the output (see figure 4). A sketch of such interval membership functions is given below.
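The following is a minimal sketch of an interval type-2 triangular membership function; the footprint-of-uncertainty spread fou and the breakpoints are illustrative design choices, not the ones used in this paper, and trapezoidal input sets can be handled analogously.

```python
import numpy as np

# Sketch of an interval type-2 triangular membership function: each
# linguistic value carries an upper and a lower membership function that
# bound the footprint of uncertainty (FOU).  The spread `fou` is an
# illustrative design parameter.

def tri(x, a, b, c):
    """Ordinary (type-1) triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def it2_tri(x, a, b, c, fou=0.2):
    """Interval type-2 grade: (lower, upper) bounds of the membership."""
    upper = tri(x, a - fou, b, c + fou)
    lower = (1.0 - fou) * tri(x, a + fou, b, c - fou)
    return lower, upper

# Membership interval of the error e = 0.3 in a 'positive small' set
lo, up = it2_tri(0.3, 0.0, 0.5, 1.0)
print(f"membership of e = 0.3: [{lo:.3f}, {up:.3f}]")
```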
Inference
The input variables of the type-2 fuzzy controller each have five fuzzy sets, which gives twenty-five rules for the upper membership functions and twenty-five rules for the lower membership functions. The following table summarizes the rules of the control strategy obtained from the interconnection of the input variables.
The output membership grades are determined by a weighted-average expression over the active rules and the output fuzzy sets (singletons).
The inference mechanism of type-2 fuzzy logic is explained in figure 5.
Defuzzification
The final output of the type-2 fuzzy controller is equal to the average of the two decisions obtained from the upper and lower limits of the membership functions, as illustrated in the sketch below.
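The following sketch strings together the rule firing, the simplified type reduction and this defuzzification step for a two-rule fragment; the grades and singleton values are illustrative, not the twenty-five-rule table used in this paper.

```python
import numpy as np

# Sketch of inference and simplified type reduction: each rule fires
# with an interval [w_lo, w_up] from the lower and upper grades of its
# two antecedents (min t-norm); the singleton consequents are combined
# by two weighted averages, and the crisp output is their mean.

def fire(e_grades, de_grades):
    """Firing interval of a rule from interval grades of e and de."""
    (e_lo, e_up), (de_lo, de_up) = e_grades, de_grades
    return min(e_lo, de_lo), min(e_up, de_up)

def crisp_output(rules):
    """rules: list of ((w_lo, w_up), singleton) pairs."""
    w_lo = np.array([r[0][0] for r in rules])
    w_up = np.array([r[0][1] for r in rules])
    y = np.array([r[1] for r in rules])
    y_low = np.dot(w_lo, y) / np.sum(w_lo)   # decision from lower MFs
    y_up = np.dot(w_up, y) / np.sum(w_up)    # decision from upper MFs
    return 0.5 * (y_low + y_up)              # average of both decisions

rules = [
    (fire((0.3, 0.7), (0.5, 0.9)), -1.0),    # e.g. "negative small" action
    (fire((0.1, 0.4), (0.2, 0.6)),  0.5),    # e.g. "positive small" action
]
print(f"control increment = {crisp_output(rules):+.4f}")
```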
SIMULATION RESULTS
To obtain a global picture of the different behaviors of the Boost converter and to validate the proposed type-2 fuzzy controller, we used the bifurcation diagram tool. In this context, we build the bifurcation diagrams for input voltage variation, load variation and reference current variation. We then evaluate the performance of the proposed approach through a comparative study with the results obtained in previous works.
The Boost converter parameters are fixed at the chosen nominal values. Figure 6a illustrates the original behavior of the Boost converter and shows the different operating zones P1, P2, P4, P8 and chaos. Figure 6b shows the enhancement obtained by the proposed type-2 fuzzy controller. We note that the proposed controller eliminates the undesirable phenomena and widens the desired period-1 area. Indeed, the simplest behavior, period 1, is enlarged from [1.4-3.5] A in the original behavior to [1.4-16] A under the proposed type-2 fuzzy controller.
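Such diagrams can be produced with a short sweep over the reference current, reusing the run() function from the converter sketch above; the sweep range follows the interval discussed here, while the sampling choices are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

# Bifurcation-diagram sketch, reusing run() from the converter sketch
# above: sweep the reference current, drop the transient, and plot the
# stroboscopic inductor-current samples.  One branch indicates period-1,
# two branches period-2, and a filled band quasi-periodicity or chaos.

refs = np.linspace(1.4, 16.0, 80)
for ref in refs:
    tail = run(i_ref=ref, n_cycles=150, steps=200)[-40:, 0]
    plt.plot([ref] * len(tail), tail, "k.", markersize=0.5)
plt.xlabel("reference current (A)")
plt.ylabel("stroboscopic iL (A)")
plt.show()
```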
Bifurcation diagram with input voltage variation
In the case of the input voltage variation, the obtained bifurcation diagrams are presented in figure 7. Comparing the two bifurcation diagrams of figure 7, we can see the enhancement introduced by the type-2 fuzzy controller: it eliminates the complex undesirable behaviors and ensures a wide region of period-1 operation. Indeed, the desired zone of period one is extended from the range [35-50] V in the original behavior to the interval [10-50] V using the type-2 fuzzy controller.
Bifurcation diagram with load variation
In the case of load variation, the Boost converter behaviors with and without the proposed controller are depicted in figure 8. Figure 8a shows that the converter exhibits multiple behaviors P1, P2, ..., quasi-periodicity and chaos. The simple and desired behavior is ensured only in a restricted region from 8 Ω to 13 Ω, whereas figure 8b shows that the type-2 fuzzy controller ensures the period-1 (desired) behavior over the whole range and even extends this simple behavior up to R = 100 Ω.
EVALUATION OF THE PROPOSED APPROACH
To evaluate the enhancement introduced by the proposed type-2 fuzzy controller, we present in the following a comparative study with recent results from the literature. We first compared our results with those obtained by the type-1 fuzzy controller presented in [5]. From figure 9, we note that the proposed type-2 fuzzy controller ensures a larger period-1 zone than the one obtained by the type-1 fuzzy logic controller, which demonstrates the superiority of the proposed controller in handling system parameter variations and keeping the system in its simplest behavior. Furthermore, if we compare the results obtained by the proposed type-2 fuzzy controller (figure 10a) with those obtained in [7] using the type-1 fuzzy controller optimized by the LMI method (figure 10b), we note that not only is the performance of the converter enhanced, but the proposed approach is also simpler. Indeed, without any complicated optimization task, we obtain better performance.
CONCLUSION
In this work, a type-2 fuzzy logic controller is proposed for suppressing the nonlinear phenomena exhibited by the Boost converter. The latter is rich in nonlinear phenomena that complicate the system behaviour and make its analysis and control an arduous task. Based on type-2 fuzzy logic, the proposed controller efficiently handles the system parameter variations and ensures converter operation in its simplest and most predictable behavior. The simulation results confirm this fact and show the enhancements obtained by the proposed controller in terms of eliminating the undesired complex phenomena. The obtained performances are evaluated through a comparison with those obtained in the literature. These comparisons show the superiority of the proposed type-2 fuzzy controller over optimized and non-optimized type-1 fuzzy logic controllers in terms of suppressing nonlinear phenomena and widening the desired behavior region.
Fig. 1. Boost converter under current-mode control (simplified version). If T is the clock cycle, the dwell times in the two configurations are dT and (1 − d)T, respectively, where d is the duty cycle.
Fig. 3. Scheme of the type-2 fuzzy controller of the Boost converter.
Fig. 4. Membership functions of the inputs and the output.
Fig. 7. Bifurcation diagrams with input voltage variation: a) original behavior; b) behavior with the type-2 fuzzy controller.
"Engineering",
"Computer Science"
] |
Research on composite dynamic disaster prevention and control system of mine earthquake and shock in thick and hard rock mines
As coal mines in China rapidly enter deep mining, the composite dynamic disaster of mine earthquakes and rockbursts in thick hard-stratum mines has increasingly become a major hazard that threatens the safe and efficient mining of deep coal. Studying its occurrence mechanism and building a disaster prevention and control system have become a new major scientific issue in the field of coal mine safety. This paper proposes a framework for a composite dynamic disaster prevention and control system for thick and hard rock mines. First, the gravity forms, extents and deformation characteristics of the different rock layers of the structural model are analysed, and expressions for the concentrated force and the periodic breaking step of rock beams in thick and hard rock layers at the fixed support end are deduced. Then, the causes of the shock composite dynamic disaster are analysed, and finally the specific testing and calculation methods for mine earthquakes in thick hard rock mines are designed. Regional and local measures to manage compound dynamic disasters are put forward. Experiments show that the system was successfully applied in the mining practice of a working face, and the results of hydraulic and stress monitoring support the rationality of the system. Through the implementation of impact prevention and control measures, safe mining and disaster control of the working face were finally realised.
Introduction
China has abundant reserves of ultra-thick coal seams, and fully mechanised caving-face mining of ultra-thick coal seams has the characteristics of high output, fast efficiency and significant economic benefits [1,2]. The mining depth of most underground coal mines in our country does not exceed 400-500 m, and the movement range of the roof is mainly 6-8 times the mining height above the coal seam, which is the scope of the traditional theory of rock pressure [3,4]. Key parameters such as the mining scale and intensity of longwall panels are constantly being refreshed. The movement range of the overlying rock at the working face increases dramatically, and the evolution of the mining stress field becomes complex [5,6]. In the height direction, it has far exceeded the 'basic roof' range, and in the horizontal direction, it has also exceeded the range of the upper and lower roadways of the working face.
In recent years, domestic scholars have achieved many research results on the characteristics of rock pressure in fully mechanised caving faces of extra-thick coal seams. For research on the failure height of the top coal body in fully mechanised caving mining of extra-thick coal seams, a 'three-zone' structural model of the top coal body was proposed, consisting, from bottom to top, of the 'scattered body zone', the 'block body zone' and the 'cracked beam belt' [7][8][9]. Based on the cantilever beam-masonry beam mechanical structure model of extra-thick coal seams, the mechanical model of the fracture of the cantilever beam structure with a central inclined crack was established, and the expression of the support load was given [10]. The 'face contact block arch' model of the fully mechanised caving face of extra-thick coal seams was proposed, the mechanism of top coal arching was studied, and the arching phenomenon by which the top coal easily forms blockages during the caving process was revealed [11]. Based on the BBR research system, the coal caving method for fully mechanised caving mining in extra-thick coal seams was optimised, and the segmented, large-interval coal caving method was proposed [12]. Research on the mechanism of shock composite dynamic disasters should be strengthened in the following aspects: 1. make a more scientific classification of the combined dynamic disaster of mine earthquakes and shocks in thick and hard rock mines [13]; 2. study in depth the quantitative mechanical model of shock composite dynamic disasters [14,15]; 3. consider the disaster mechanism of the mine-seismic composite system in mines with thick hard rock layers under stress paths that better match actual site conditions [16]; 4. strengthen the role of deep learning and big data technology in the study of the nonlinear mechanism of disasters [17]; 5. continue to increase scientific and technological research efforts [18].
The movement state and stress distribution of the overlying hard rock layers in the stope are the main factors controlling the occurrence of dynamic disasters such as rockbursts. Scholars have carried out much research in related fields [19,20]. With the continuous depletion of shallow resources in our country and the continuous increase in the depth and intensity of coal mining [21], rockburst disasters have occurred frequently in recent years, and rockburst has become a typical form of dynamic disaster that affects the safety of coal mine production in China and restricts the harmonious development of mining areas. Rockburst is an induced disaster [22]. Its main cause is that the stratum structure is damaged or the stress is in an abnormal state within the mining influence range [23]. The motion state of the thick and hard rock layer determines the dynamic behaviour and influence scope of the working face, and its buckling motion is the main cause of strong rockbursts [24,25]. Rockburst is usually regarded as a problem of structural instability of the coal and rock mass.
In summary, this paper proposes a study of the composite dynamic disaster prevention and control system for mine earthquakes and shocks in thick hard rock mines. The main contributions are as follows: 1. a detailed study and discussion of the analysis of the seismic and impact composite dynamics of thick and hard rock layers; 2. a complete disaster prevention and control system proposed under the combined influence of mine earthquakes and shocks in thick and hard rock layers; 3. an experimental comparison carried out under different vibration degrees. The experimental results show that the proposed system supports pre-analysis for disaster prevention and control.
2 Mine earthquake and shock composite dynamic disaster in thick and hard rock mine
Mine earthquake with thick and hard rock layers
During the fracture movement of the thick and hard rock layer, the bearing stress of the coal body in the working face undergoes periodic changes. The periodic breaking motion of the overlying thick hard rock layer in the stope is the main reason for the concentration and transfer of the advanced bearing stress in the working face. According to the width of the coal pillar and its stress state, the fully elastic state mainly experienced by the coal pillar of the re-mined face during continuous mining is obtained [26]. Based on the spatial structure of the thick and hard rock layers and their boundary characteristics, the overhanging thick and hard rock layers are regarded as elastic rock beams, and the coal-rock mass at the stope boundary is regarded as the fixed support end of the stope. The basic morphological characteristics that may appear after the key layer is broken are as follows (a sketch of the corresponding structural-form check is given after this list): 1. the key-layer cantilever beam structure and the masonry beam structure (or hinged structure) are the two types of key-layer structures that determine the pressure of the working face and control the type and danger of disasters; 2. the main factors determining the form of the key layer in the stope are the mining height and the height of the key layer above the coal seam, that is, whether, after the key layer is broken, the rotation amount of the broken rock block exceeds the maximum rotation amount for which a stable structure can be maintained; 3. after the coal seam is mined, the low-level roof directly collapses layer by layer and fills the goaf, according to Qian Ming et al. Here, the rotation amount Δ_J of the broken rock block in the thick and hard rock layer is compared with the maximum rotation amount Δ_max required to form the cantilever beam structure; m is the coal seam thickness or mining height; K_s characterises the low-level caving rock layers between the top coal (slab) and the bottom of the thick hard rock layer; h_L is the height of the thick hard rock layer above the coal seam; L is the breaking step of the thick hard rock layer; and q_0 is the overburden load on the thick hard rock beam. By analysing the thick hard rock layer-coal pillar structural system and its stress state, the mechanical model of the thick hard rock layer-coal pillar structure under static conditions is further simplified. To facilitate the analysis and calculation, the support force on the fixed end of the cantilever rock beam is approximated as the concentrated force F_2. The transfer-body structure composed of the horizontal cantilever rock beam and the broken rock block deforms mainly in flexure, while the support structure composed of the coal pillar and rock pillar in the height direction deforms mainly in compression: 1. the overhang length of the cantilevered rock beam at the fixed end does not exceed the ultimate breaking step under the equilibrium condition of the hinged structure; 2. the stress on the coal pillar at the bottom of the fixed support end is lower than its comprehensive support strength.
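As flagged in the list above, a minimal sketch of the structural-form check follows. The free-space estimate m − (K_p − 1)·h_caved, with bulking factor K_p and caved-zone thickness h_caved, and all numerical values are illustrative assumptions, not quantities taken from this paper.

```python
# Sketch of the structural-form check for the broken key stratum: the
# rotation available to a broken block is compared with the maximum
# rotation that still permits a stable hinged (masonry-beam) structure.
# The free-space formula and all numbers below are illustrative.

def rotation_available(m, h_caved, Kp):
    """Free space left under the key stratum after low roof caving (m)."""
    return m - (Kp - 1.0) * h_caved

def key_stratum_form(delta, delta_max):
    """Cantilever beam if the block over-rotates, masonry beam otherwise."""
    return "cantilever beam" if delta > delta_max else "masonry beam"

m, h_caved, Kp = 8.0, 12.0, 1.3   # mining height, caved zone, bulking factor
delta = rotation_available(m, h_caved, Kp)
delta_max = 3.0                   # illustrative stability limit (m)
print(f"rotation = {delta:.1f} m -> {key_stratum_form(delta, delta_max)}")
```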
Mechanical model building
The model is squeezed by the rock masses on both sides, forming horizontal thrusts T1 and T2. The supporting action of the contact points and fixed ends in the vertical direction is simplified to the concentrated forces F1 and F2 (the corresponding equations are omitted here). Here, w is the deflection of the cantilever rock beam, i.e., the flexural deformation of the fractured rock block structure. The principal vector and principal moment of the force system at any point in the structural system are both zero. The following section gives a quantitative analysis of this mechanical model and solves for its key parameters.
In the horizontal direction, the broken rock block squeezes against the adjacent rock beams during its rotational deformation, forming a horizontal thrust. It follows that T1 and T2 are a pair of action and reaction forces that satisfy the corresponding equilibrium condition, and the cantilever rock beam and the hinged structure of the broken rock blocks also maintain equilibrium under gravity. Here, q0 is the overlying rock load borne by the rock beam, and γ and h are the bulk density and thickness of the rock beam, respectively. The hinged rock mass is subjected to horizontal thrust during its slow deformation, and the hinged contact points generate internal friction forces F1x in the vertical direction; F2x is equal in magnitude and opposite in direction. From these conditions, the expression for the concentrated force F2 at the boundary of the fixed support end of the rock beam is obtained (equation omitted here). F2 is the concentrated force transmitted by the spatial hinged structure to the coal-rock mass in front of the working face in the thick and hard rock-coal pillar model; it is a spatially transferred (additional) load created by the mining process.
Periodic motion step of thick and hard rock layers
From the basic theory of flexural deformation of beams in elastic mechanics, the differential equation of flexural deformation of the rock beam is EI·d²w/dx² = M_x, where M_x is the bending moment of the rock beam at any cross-section along the working-face strike, E is the elastic modulus of the rock beam, and I is the moment of inertia of the section. First, from the moment balance condition, the bending moment M_x of any section of the rock beam is obtained, where x is the horizontal distance between the section and the boundary of the fixed support end. Then the maximum bending moment M_0 of the rock beam at the fixed-end boundary and the tensile stress σ_0 at that boundary are derived. Finally, setting the ultimate tensile strength of the thick hard rock layer σ_t = σ_0 gives the periodic breaking step of the thick and hard rock layer. The magnitude of the additional concentrated force transmitted from the spatial hinge to the coal pillar in the thick and hard rock layer-coal pillar model can then be used to analyse the instability criterion of the model.
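The closed-form expression for the breaking step is not reproduced in the extracted text, so the sketch below uses the classical cantilever-beam result L = h·sqrt(σ_t/(3·q_0)), obtained by setting the fixed-end bending stress of a uniformly loaded cantilever of unit width equal to the tensile strength; it is an assumption, and all input values are hypothetical.

```python
import math

def periodic_breaking_step(h, sigma_t, q0):
    """Breaking step of a uniformly loaded cantilever rock beam (unit width).

    Fixed-end moment M0 = q0 * L**2 / 2, section modulus W = h**2 / 6,
    so sigma_0 = 3 * q0 * L**2 / h**2; setting sigma_0 = sigma_t gives L.
    h       -- rock layer thickness, m
    sigma_t -- ultimate tensile strength, Pa
    q0      -- overburden load treated as a pressure, Pa
    """
    return h * math.sqrt(sigma_t / (3.0 * q0))

# Hypothetical values: 110 m thick hard layer, 6 MPa tensile strength, 5 MPa overburden load.
print(f"L = {periodic_breaking_step(110.0, 6.0e6, 5.0e6):.1f} m")
```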
Evaluation basis and evaluation index of mine earthquake
The shock wave generated by a mine earthquake is similar to a general shock wave. As it propagates through the medium, it can produce strain effects (deformation and vibration) and inertia effects (pressure and tension) on the surrounding medium and on surface buildings. The judgement expressions for the safe state Z and the unsafe state (omitted here) are built from logical operators, where '−', '+' and juxtaposition represent the logical operators 'NOT', 'OR' and 'AND', respectively. The specific discrimination results are shown in Table 1.
Table 1 Vibration damage judgement table
Table 1 lists, for each category, the displacement, speed, acceleration and the resulting discrimination (the table body is not reproduced here). From elastic-mechanics reasoning it can be seen that the additional force caused by the mine earthquake is related to the particle vibration velocity. In the equation of motion of an isotropic ideal elastic body (omitted here), σ is the additional stress caused by particle vibration, c is the velocity of the shock wave propagating in the medium, ϕ is the displacement function of the medium, φ is the volume deformation function of the medium and G is the shear modulus of the medium. From these relations the final expression for the additional stress is obtained.
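The final expression is not reproduced in the extracted text; the sketch below therefore uses the widely used plane-wave relation σ = ρ·c·v between additional dynamic stress and peak particle velocity, which is consistent with the statement above but is an assumption about the exact form, and all numbers are hypothetical.

```python
def dynamic_stress(rho, c, v):
    """Plane-wave estimate of the additional stress induced by particle vibration.

    rho -- medium density, kg/m^3
    c   -- wave propagation velocity in the medium, m/s
    v   -- peak particle vibration velocity, m/s
    Returns stress in Pa.
    """
    return rho * c * v

# Hypothetical values: 2500 kg/m^3 rock, 3500 m/s wave speed, 0.5 cm/s particle velocity.
sigma = dynamic_stress(2500.0, 3500.0, 0.005)
print(f"additional dynamic stress ~ {sigma / 1e6:.3f} MPa")
```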
Shock composite dynamic disaster
Regarding the mechanism of shock composite dynamic disasters, the existing research can be summarised as qualitative and preliminary quantitative studies. The first category combines typical case studies and theoretical analysis to discuss the conditions and characteristics of disaster occurrence and then proposes a qualitative explanation of the disaster mechanism. The second category uses experimental platforms to study the damage- and instability-induced disasters of gas-bearing coal and rock masses in order to grasp the occurrence mechanism of compound disasters. The process of a composite dynamic disaster is governed by the damage and failure of the coal and rock medium and by its coupling with the mechanical and seepage behaviour of the coal.
Under deep high-stress conditions, rockburst disasters occur mainly in coal seams with a strong burst tendency and then extend to seams with a weak burst tendency. Similarly, increases in stress, gas content and gas pressure cause coal and gas outbursts to extend from soft coal to medium-hard coal; this trend is shown in Figure 1. Many deep mines therefore face the threat of rockburst and gas-outburst dynamic disasters at the same time, and some mines even experience composite dynamic disasters in which bursts and outbursts induce each other. Using microseismic monitoring technology to study composite dynamic disasters is an interdisciplinary subject involving geology, rock (damage, fracture) mechanics, and dynamic signal testing and analysis. A microseismic event is in essence the manifestation of a series of dynamic evolution processes such as stress, strain, deformation, cracking, instability and failure of the surrounding rock. Because microseismic monitoring can describe the movement and failure of rock strata comprehensively in space, including the stress drop, failure size and failure mode, it has unique advantages over traditional methods.
At present, the main technical measures for pressure relief and outburst elimination at coal-seam roadway excavation faces include protective-layer mining, regional gas pre-drainage, advanced drilling, deep-hole water injection, hydraulic punching, hydraulic slotting, deep-hole loosening blasting and deep-hole controlled blasting.
3 Prevention and control system for mine earthquake and shock composite dynamic disasters in thick and hard rock mines
The rupture of thick and hard rock layers can induce strong mine earthquakes and cause different degrees of vibration damage at the surface. Based on the conditions under which strong mine shocks are induced by the rupture of thick and hard rock layers, the idea of a shock composite dynamic disaster prevention and control system is proposed, whose aim is to change the conditions that produce mine shocks and to reduce the energy they release, as shown in Figure 2. Changing the conditions of mine shock relies on the mining technology, the purpose being to reduce the fracture height of the thick hard rock layer and maintain its structural stability. Reducing the energy released by mine shocks mainly involves controlling the splitting scale and the movement scale of the thick and hard rock layers. Protective-layer mining and regional gas pre-drainage have been widely used in China and abroad as regional technologies for preventing and controlling mine gas dynamic disasters. In view of the poor gas permeability and soft coal quality of outburst-prone coal seams in China, enhanced drainage technologies such as hydraulic hole reaming and pneumatic slag discharge along the coal seam have been studied and have achieved good application results. The short-range lower protective-layer mining technology and regional gas pre-extraction affect the outburst prevention and evaluation system as follows: 1. Mining of protective layers between coal groups. At present, most mines adopt a downward mining sequence between coal groups, but mining practice in some mines has shown that upward mining between groups provides a good protective effect for the coal of the upper group.
2. Mining of protective layers within coal seam groups. When mining within a group, priority is given to mining the protective layer within the group so as to liberate the other coal seams.
3. Preventive measures at the excavation face. Where no protective layer is mined, gob-side roadway technology is used preferentially for coal roadway driving faces at risk of composite dynamic disasters in each mine. Coal seam gas extraction measures are implemented from the gob-side roadway to protect the roadway excavation of the adjacent outburst-prone coal seam.
4. Outburst prevention measures at the coal mining face. The coal mining face generally adopts measures such as long bedding drainage boreholes drilled from the haulage and return-air roadways, shallow drainage boreholes in the working face, and cross-measure boreholes from the high (or low) level roadway. Where the effectiveness check of these measures still exceeds the standard, supplementary shallow-hole pressure-relief and drainage measures are taken in the non-compliant area.
4 Experimental results and analysis
Experimental data
In the north-western part of a mining area, the main mined coal seam is overlain by thick hard rock layers, with an average coal thickness of 9.43 m and a burial depth of 800-1000 m. Since the coal mine was opened in July 2019, impact composite dynamic phenomena have occurred many times during the excavation of the main roadways and after roadway formation beneath the thick and hard rock layers. During this period, shock composite dynamic phenomena induced by coal-rock mass fracture appeared in the coal pillar area of this typical deeply buried mine. This section selects 8 dynamic manifestation events and 58 mine earthquake events that occurred in the Northwest Mine coal mine from August to December 2019 as the research objects and explores the mechanisms underlying the mine earthquake and shock composite dynamic disaster prevention and control system for deep, hard rock mines.
Experimental methods and evaluation criteria
4.2.1 Calculation of fracture parameters of high-level thick hard rock
Based on actual mining experience in the western area, the working face width is first set to 175 m. The three mining schemes are as follows: 1. when this working face is mined first, the overlying thick hard rock can break, and the combined width of the working face and the adjacent goaf is about 545 m; 2. when the adjacent face is mined first, the combined goaf width is about 535 m; 3. the combined width of the joint working face and the adjacent goaf is about 715 m.
Here, the ratio of goaf width to rock thickness for the different schemes is (535-715)/110 = 4.86-6.45. Therefore, referring to the results of thin-plate theory, the rock fracture law is analysed as follows (equations omitted): a1 and a2 are the first breaking step distances of the rock under the conditions of three edges clamped with one edge simply supported and of four edges clamped, respectively; h is the rock thickness; and lm is the limit span of the rock under the infinite-length condition as the working face advances, where σt is the ultimate tensile strength of the rock, q is the load on the thick hard rock, u is Poisson's ratio and b is the overhang width at the rock bottom. For the case of narrow coal pillars, n = 2 is taken for the four-edge clamped condition, and n = 1 for the condition of three edges clamped with one edge simply supported or for the adjacent goaf condition, where b0 is the goaf width.
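As a quick check of the figures quoted above, the following sketch computes the goaf-width-to-rock-thickness ratio for the three schemes (widths of 545 m, 535 m and 715 m, rock thickness 110 m); the scheme labels are only illustrative.

```python
# Goaf-width-to-thickness ratio used to choose the plate-theory breaking formula.
rock_thickness = 110.0  # m, thickness of the thick hard rock layer
scheme_widths = {"scheme 1": 545.0, "scheme 2": 535.0, "scheme 3": 715.0}  # m

for name, width in scheme_widths.items():
    ratio = width / rock_thickness
    print(f"{name}: width / thickness = {ratio:.2f}")
```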
Overall stability analysis of working face
According to the mining geological data, the goaf width on one side of the incompletely mined area is 360 m. The average bulk density of the overlying rock is taken as γ = 25 kN/m³, and the uniaxial compressive strength of the medium is 18.5 MPa. Considering the complex and variable spatial structure of the overlying hard rock above the mining face and the violent movement of the roof, the dynamic load effect varies greatly under different mining intensities. The dynamic load factor is therefore taken as K = 0 (fully static stress state), 0.5 and 0.8.
Experimental results and analysis
Figure 3 shows that the average required support strength of the coal increases with the working face width and then tends to stabilise. When the working face width is less than the limit width (about 80 m), 'static instability' is possible. The theoretical estimate of the limit width of the working face is about 145 m. To meet the high safety requirements of a rockburst-prone working face, and taking into account factors such as roadway and section coal pillar layout and the safety factor, the actual working face width should be greater than 175 m. According to this analysis, Scheme 3 is best for the prevention and control of rockburst. In addition to the overall stability of the working face, there are important water conservancy facilities on the ground surface above the stope, so the vibration damage that mine earthquakes may cause to these facilities must be assessed and a mining scheme that reduces vibration damage must be found. For the three schemes, the total lengths L_G of the clamped edges at the initial fracture of the rock are 1370 m, 1790 m and 1838 m, respectively. Without considering other factors, the total elastic energy released by the first rock fracture for the different schemes is estimated from the evaluation criteria as U ≈ 1.6 × 10^10 J, 1.3 × 10^11 J and 1.6 × 10^11 J. Using the research method described above, the vibration velocity at the surface water conservancy facilities caused by the mine earthquake induced by this energy release is estimated; the calculated results are shown in Table 2. Table 2 shows that the vibration velocity v at the surface water conservancy facilities differs between mining schemes and vibration efficiencies, so mine earthquakes affect the facilities to different degrees. In Scheme 1, the breaking of the thick hard rock induced by mining produces particle vibration velocities at the water conservancy facilities in the range of 0.03-0.17 cm/s. Compared with Scheme 1, the particle vibration velocities caused by mining-induced earthquakes in Schemes 2 and 3 are significantly larger, while the safe allowable particle vibration velocity is 0.5-0.9 cm/s. The results also show that, for the same vibration efficiency, the particle vibration velocity caused by the mining earthquakes of Scheme 1 is generally well below the national safety standard, whereas the velocities caused by Schemes 2 and 3 are relatively large, with maximum values close to the safety standard. However, before the high-level thick and hard rock layers break, the micro-cracks need time to develop, merge and gradually form macro-cracks; under normal circumstances the overall structure of the thick hard rock roof does not fail instantaneously. Moreover, for the sake of safety, the parameters selected for the calculation are conservative. Therefore, the possibility of damage to the surface water conservancy facilities from mine shocks induced by the rock strata is extremely small. This analysis of the experimental results can provide important support for the optimisation and selection of the mining plan.
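A minimal sketch of the safety check described above: each scheme's estimated range of particle vibration velocity at the water conservancy facilities is compared with the allowable range of 0.5-0.9 cm/s quoted in the text. Only Scheme 1's velocity range (0.03-0.17 cm/s) is given explicitly, so the ranges for Schemes 2 and 3 are hypothetical placeholders.

```python
# Allowable particle vibration velocity for the surface facilities (cm/s), from the text.
V_ALLOW_MIN, V_ALLOW_MAX = 0.5, 0.9

# Estimated velocity ranges per scheme (cm/s). Scheme 1 is from the text;
# Schemes 2 and 3 are placeholders for illustration only.
velocity_ranges = {
    "Scheme 1": (0.03, 0.17),
    "Scheme 2": (0.20, 0.45),  # placeholder
    "Scheme 3": (0.25, 0.48),  # placeholder
}

for scheme, (v_min, v_max) in velocity_ranges.items():
    status = "below the allowable range" if v_max < V_ALLOW_MIN else "close to the allowable range"
    print(f"{scheme}: v = {v_min:.2f}-{v_max:.2f} cm/s -> {status}")
```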
Conclusion
The motion state of the thick and hard rock strata determines the degree and extent of dynamic manifestations at the working face, and its unstable motion is the main cause of strong rockburst disasters. Rockburst is usually regarded as a structural-instability problem of the coal and rock mass. This paper proposes a framework for a composite dynamic disaster prevention and control system for thick and hard rock mines. First, the loading forms, extents and deformation characteristics of the different rock layers in the structural model are analysed, and the expressions for the concentrated force at the fixed support end and the periodic breaking step of the rock beam in the thick and hard rock layer are derived. Then, based on the causes of shock composite dynamic disasters, specific testing and calculation methods for mine earthquakes in thick hard rock mines are designed, and regional and local measures for managing compound dynamic disasters are put forward. Experiments show that the system was successfully applied in the mining practice of a working face, and the results of water conservancy and stress monitoring support the rationality of the system. However, the mines discussed are still relatively simple, and no comparative experiments across multiple mines were carried out. In future work, multidimensional mine seismic data will be compared horizontally across mines.
Fig. 1 Changing trend of the dangerous range of shock and outburst dynamic disasters with mining depth
Fig. 2 Countermeasures for reducing vibration damage at the mining surface
Fig. 3 Disaster risk of working face under different widths | 6,127.8 | 2022-12-23T00:00:00.000 | [
"Computer Science"
] |
A tree based eXtreme Gradient Boosting (XGBoost) machine learning model to forecast the annual rice production in Bangladesh
In this study, we attempt to forecast annual rice production in Bangladesh (1961–2020) using both the Autoregressive Integrated Moving Average (ARIMA) and the eXtreme Gradient Boosting (XGBoost) methods and compare their respective performances. On the basis of the lowest corrected Akaike Information Criterion (AICc) value, a significant ARIMA (0, 1, 1) model with drift was selected. The drift parameter value shows that rice production trends upward. The XGBoost model for the time series data was developed by repeatedly tuning its hyperparameters to obtain the best result. Four prominent error measures, the mean absolute error (MAE), mean percentage error (MPE), root mean square error (RMSE), and mean absolute percentage error (MAPE), were used to assess the predictive performance of each model. We found that the error measures of the XGBoost model on the test set were lower than those of the ARIMA model; in particular, the test-set MAPE of the XGBoost model (5.38%) was lower than that of the ARIMA model (7.23%), indicating that XGBoost outperforms ARIMA in predicting the annual rice production in Bangladesh. Based on this better performance, the study forecasted the annual rice production for the next 10 years using the XGBoost model. According to our predictions, the annual rice production in Bangladesh will vary from 57,850,318 tons in 2021 to 82,256,944 tons in 2030, indicating that the amount of rice produced annually in Bangladesh will increase in the years to come.
Introduction
There has been a fast expansion in the world population, which has put a strain on the agricultural sector [1]. Rice is considered the world's third most common major crop, with more than 50% of the world's population eating it as a staple diet [2,3]. As one of the most nutrient-dense grains, rice is an excellent source of carbohydrate as well as vitamins (B, E, thiamine) and minerals (Ca, Mg, Fe) [4]. About 160 million Bangladeshis rely on rice as a basic meal for their daily diets and survival [5]. Bangladesh's economy is heavily dependent on rice production, which means that the price of rice has a considerable impact on GDP growth, inflation, wages, employment, food security, and poverty [6]. The rice industry employs over 48% of the rural population, provides two-thirds of all caloric intake, and accounts for half of the average person's protein intake [7]. For agricultural GDP and national income, the rice subsector alone contributes about 4.5% to the GDP [8]. Nearly all farming households in Bangladesh cultivate rice. It is produced on about 10.5 million hectares of land, which occupies about 75 and 80% of the total cropped and irrigated areas, respectively [9]. Accurate and timely estimates of crop production before harvest are essential for food security and administrative planning, especially in the current, ever-changing global environment and international scenario [10,11]. Rice yield forecasting has been extensively examined using various methods all around the world. In order to forecast rice yield, Kumar and Kumar (2012) added fuzzy values to the time series [12]. Alam et al. (2018) applied two hybrid approaches including ARIMAX-ANN and ARIMAX-SVM for estimating rice yield in India [13]. Jing-feng (2011) used NOAA/AVHRR data to predict rice production in Zhejiang Province through ratio models and regression models [14]. Using a crop growth model, Yun (2003) forecasted regional rice production in South Korea [15]. Koide et al. (2013) employed precipitation hindcasts from one uncoupled general circulation model (GCM) and two coupled GCMs to examine the predictive abilities of retrospective seasonal climate forecasts (hindcasts) customized to Philippine rice production data [16]. A satellite remote sensing technique was used by Noureldin et al. (2013) to forecast the production of rice in Egypt [17]. However, to reveal the growth pattern and make the most accurate prediction of rice production in Bangladesh, it is necessary to use a suitable approach that can successfully describe the observed data. Different techniques have been taken to accurately estimate yield, and each method has its own strengths and limitations [18]. For example, Rahman (2010), Mahmud (2018), Rahman et al. (2016), and Sulatana and Khanam 2020 applied the autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) for predicting rice production in Bangladesh [19][20][21][22].
Sensor technologies, big data, the Internet of Things, artificial intelligence (AI), and machine learning approaches have recently shown great potential to advance precision agriculture and obtain accurate predictions [23]. According to the aforementioned literature and to the best of the author's knowledge, XGBoost is a machine learning algorithm that has not been widely deployed. The eXtreme Gradient Boosting (XGBoost) model is a supervised machine learning technique and an emerging machine learning method for time series forecasting in recent years [24,25]. It is a novel gradient tree-boosting algorithm that offers efficient out-of-core learning and sparsity awareness. XGBoost is a supervised learning technique that ought to be particularly good for the problem of claim prediction with both big training data and missing values, even if the commonly used methods such as random forest and neural networks can handle missing values [26,27]. The robustness of XGBoost results in increased usage of the method in many other applications. As an example, Aler et al. utilize XGBoost in the field of direct-diffuse solar radiation separation by creating two models [28]. Moreover, in infectious disease prediction such as COVID-19, the XGBoost achieved greater prediction accuracy [29,30].
In contrast, the Autoregressive Integrated Moving Average (ARIMA) model developed by Box and Jenkins (1990) is most widely used for forecasting time series data because of its capacity to handle non-stationary data [31]. The ARIMA model is a suitable forecasting method in agriculture for different crops and has been extensively used in the fields of economics and finance [31][32][33]. Therefore, this study aimed to (a) compare the predictive accuracy of the autoregressive integrated moving average (ARIMA) and eXtreme gradient boosting (XGBoost) for accurate modeling the annual rice production data in Bangladesh; and (b) carry out the best model to forecast rice production for the next 10 years (Fig 1). Finally, the findings of this study will help government officials and development practitioners make more accurate short-term predictions of future rice production to boost administrative planning and ensure food security.
Data source
The annual rice production data from 1961 to 2020 (60 years) used in this study were collected from the website of FAOSTAT [34]. The data were divided into training and test sets. The proportion of training and testing data was 90% and 10%, respectively. The ARIMA and XGBoost models were built using the training data sets. The test data were used to evaluate the predictive ability of the developed models. The data set does not contain any missing values.
ARIMA model
The autoregressive integrated moving average (ARIMA) is a technique for analyzing and predicting time series data that was initially introduced by Box and Jenkins in 1976 [35]. An ARIMA (p, d, q) time series model consists of three components: p denotes the autoregressive (AR) order, d denotes the differencing order, and q denotes the moving average (MA) order [36,37]. The autoregressive part AR(p) describes a linear combination of the p previous observations together with a random shock term and can be written as Y_t = c + φ_1·Y_(t-1) + φ_2·Y_(t-2) + ... + φ_p·Y_(t-p) + ε_t, where Y_t and ε_t represent the observed value and the random shock term at time t, φ_i (i = 1, 2, ..., p) are the model parameters, and c is a constant term. On the other hand, the moving average part MA(q) explains the dependent variable in terms of previous random shock terms and can be written as Y_t = μ + ε_t + θ_1·ε_(t-1) + θ_2·ε_(t-2) + ... + θ_q·ε_(t-q), where μ represents the mean of the series, θ_j (j = 1, 2, ..., q) denote the model parameters,
and q indicates the model's order [38]. Combining the two parts, the ARMA (p, q) model can be written as Y_t = c + φ_1·Y_(t-1) + ... + φ_p·Y_(t-p) + ε_t + θ_1·ε_(t-1) + ... + θ_q·ε_(t-q). The general form of the ARIMA (p, d, q) model applied to the differenced series can be written as y′_t = c + φ_1·y′_(t-1) + ... + φ_p·y′_(t-p) + ε_t + θ_1·ε_(t-1) + ... + θ_q·ε_(t-q), where y′_t is the differenced series (the number of differences can be greater than 1), φ_1, φ_2, ..., φ_p are the coefficients of the AR(p) terms and θ_1, θ_2, ..., θ_q are the coefficients of the moving average MA(q) terms. More information regarding the ARIMA model can be found in the literature [30,39].
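The authors fit their models in R with the forecast package; the sketch below is an illustrative Python equivalent (not the authors' code) that fits an ARIMA (0,1,1) model with drift to an annual production series using statsmodels, with a hypothetical file name and split point.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical CSV with columns "year" and "production" (tons), 1961-2020.
series = pd.read_csv("rice_production.csv", index_col="year")["production"]

train, test = series.iloc[:54], series.iloc[54:]  # roughly 90% / 10% split

# ARIMA(0,1,1) with drift: trend="t" adds a linear drift term when d = 1.
model = ARIMA(train, order=(0, 1, 1), trend="t").fit()
print(model.summary())

forecast = model.forecast(steps=len(test))
mape = (abs(test.values - forecast.values) / test.values).mean() * 100
print(f"test MAPE: {mape:.2f}%")
```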
XGBoost model
The eXtreme Gradient Boosting (XGBoost) is a boosting method that combines several weak learners to produce higher prediction accuracy than any of the individual learners and has been used in several fields [24]. It is a decision-tree-based ensemble machine learning approach that is frequently employed in data science. Precise forecasts are obtained by internally aggregating the outcomes of many individual trees [29]. XGBoost was first introduced by Chen Tianqi and Carlos in 2011, and since then several researchers have refined and enhanced it in follow-up studies [40]. The XGBoost model executes a gradient descent optimization procedure so that the loss function can be reduced [41]. Boosting is an ensemble technique that can assemble thousands of lower-performing forecasting models into a strong, high-performance model by repeatedly merging the models within permissible parameter values [40,42]. The objective function can be written as Obj = Σ_i L(y_i, ŷ_i) + Σ_k Ω(f_k) (5). As mentioned above, the objective function (5) consists of a loss function denoted by L and a regularization term Ω(f_k) that reduces the variation of each new tree's output; ŷ_i denotes the predicted value and y_i represents the observed value. Detailed information regarding the XGBoost model can be found in the literature [24,39].
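The authors built their XGBoost model in R with the forecastxgb package; the following is an illustrative Python sketch (not the authors' code) that turns the annual series into lagged features, as described later in the model-comparison section, and fits an XGBRegressor. The lag count of 8 follows the text; the file name and hyperparameters are placeholders.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def make_lag_features(values, n_lags=8):
    """Build a supervised dataset where each row uses the previous n_lags values."""
    df = pd.DataFrame({"y": values})
    for lag in range(1, n_lags + 1):
        df[f"lag_{lag}"] = df["y"].shift(lag)
    df = df.dropna()
    return df.drop(columns="y").values, df["y"].values

series = pd.read_csv("rice_production.csv", index_col="year")["production"]  # hypothetical file
X, y = make_lag_features(series.values, n_lags=8)

split = int(len(X) * 0.9)
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Placeholder hyperparameters; the paper tunes these repeatedly.
model = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
mape = np.mean(np.abs(y_test - pred) / y_test) * 100
print(f"test MAPE: {mape:.2f}%")
```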
Evaluation parameter of models
One of the major criteria of model evaluation is the calculation of model accuracy, which describes how close the predicted values are to the actual values. Model accuracy can be calculated using several measures [43]. This study used four widely used accuracy measures: the mean absolute percentage error (MAPE), mean percentage error (MPE), mean absolute error (MAE), and root mean square error (RMSE). These measures are defined as
MAE = (1/n) Σ_i |ŷ_i − y_i|,
MPE = (100%/n) Σ_i (y_i − ŷ_i)/y_i,
RMSE = sqrt( (1/n) Σ_i (ŷ_i − y_i)² ),
MAPE = (100%/n) Σ_i |y_i − ŷ_i|/y_i,
where n indicates the number of samples, ŷ_i denotes the predicted value, y_i represents the observed value, and ŷ_i − y_i is the error. The MAPE measurement expresses the errors as a percentage; smaller errors indicate better fitting results [41].
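A small sketch computing the four error measures defined above; the sign convention for MPE follows the formula given (an assumption), and the arrays are placeholders.

```python
import numpy as np

def error_measures(y_true, y_pred):
    """Return MAE, MPE (%), RMSE and MAPE (%) for observed vs. predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mpe = np.mean((y_true - y_pred) / y_true) * 100
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err) / y_true) * 100
    return {"MAE": mae, "MPE": mpe, "RMSE": rmse, "MAPE": mape}

# Placeholder values for illustration only.
print(error_measures([100, 120, 130], [95, 125, 128]))
```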
Statistical analyses
ARIMA and XGBoost predictive models and several statistical analyses were carried out using RStudio (Version 4.2.1) [44]. The ARIMA model was fitted using the "forecast" package [45]. The XGBoost model was constructed with the "forecastxgb" package. The "ggplot2" package was used for graphical visualization. All necessary codes and data are available at https://github.com/Arman-Hossain-Chowdhury/Rice-production.
Results
The highest amount of rice produced in Bangladesh was 54,905,891 tons in 2020, and the lowest was 13,304,520 tons in 1962. The average amount of rice produced annually in Bangladesh is 29,960,847.08 tons. And the boxplot indicates that the data have no outliers (Fig 2).
We plotted the time series of the annual rice production data from 1961 to 2020 in Bangladesh. The data vary considerably and show a linear upward pattern. The Augmented Dickey-Fuller (ADF) test confirmed that the data are not stationary (Fig 3).
To reduce variation and stabilize the actual data, Box & Cox (1964) presented a parametric power transformation technique [46]. We applied this technique to make the data stable and exhibit less variation (Fig 4) [47].
We performed the ADF test to see the stationarity of the data and found the data non-stationary (p-value = 0.57) at level. To compensate for the trend shift observed in (Fig 4), we used first-order differencing of the transformed sequence (Fig 5). The differenced time series was found stationary using the ADF test (p-value = 0.01). So, the parameter (d) of the ARIMA model was 1.
In the ACF diagram, there was an evident spike at lag 1, indicating that the MA order may be 1, while the PACF diagram showed no significant spikes beyond lag 0, suggesting that the AR order may be 0 (Fig 6). Therefore, the maximum p and q values are 0 and 1, respectively.
The ARIMA models were built with the "auto.arima" function, which lists all candidate models, and the ARIMA (0,1,1) model with drift was then selected on the basis of the lowest corrected Akaike Information Criterion (AICc) value. The drift parameter value indicates that rice production drifts upward (Table 1).
After that, the residual plot, the ACF plot of the residuals, and the residual histogram were drawn, indicating a normal distribution (Fig 7). Hence, the ARIMA (0,1,1) model with drift proved significant.
The XGBoost model was developed after adjusting several parameters; the adjusted parameters are shown in S4 Table in S1 File. A feature is considered important if replacing it with random noise significantly affects the predictive performance. The feature importance of the XGBoost model was computed to see how each feature contributed to the prediction accuracy on the training set, and it was found that lag 5 of the training data contributes greatly to the model (Fig 8).
The curve of actual, fitted, and forecast values of the annual rice production in Bangladesh by ARIMA (0,1,1) with drift and the XGBoost model has been illustrated in Fig 9. The forecasted values of the XGBoost model were quite close to the actual values.
Model comparison
The ARIMA (0,1,1) model with drift was built on the differenced time series data. As a result, one value in the training set was lost, so the remaining 53 values were compared. For XGBoost, we used a maximum of eight time-lagged variables as input features, because lags of up to 8 of the rice production data contribute most to its prediction accuracy; hence, the remaining 46 values were compared for the XGBoost model. The prediction accuracy of the ARIMA and XGBoost models is shown in Table 2.
The MAPE value of the test set of the XGBoost model was comparatively lower than the ARIMA model, which indicates that XGBoost performs better than ARIMA in predicting the annual rice production in Bangladesh. The detailed information regarding XGBoost model fitting can be found in S1 File.
Finally, based on our preferred XGBoost model, we predicted the annual rice production for the next 10 years (S1 File). According to our forecasts, during the next 10 years the amount of rice produced annually in Bangladesh will continue to increase.
Discussion
In our study, we found a linear upward pattern in the annual rice production data in Bangladesh. The primary goal of this study was to compare and contrast the predictive accuracy of the ARIMA and XGBoost forecasting models and make a short-term prediction with the best model. In this research, we examined the annual rice production in Bangladesh as a whole from 1961 to 2020. It is commonly known that Bangladesh has a subtropical tropical monsoon, which is distinguished by significant seasonal changes in precipitation, high temperatures, and humidity. In Bangladesh, there are three different seasons: a warm, humid summer from March to June; a chilly, wet monsoon season from June to October; and a cool, dry winter from October to March. In the past, temperatures in Bangladesh have ranged from 15˚C to
34˚C annually, with an average temperature of roughly 26˚C [48,49]. Food production (e.g., rice and wheat) is particularly vulnerable to climate change because agricultural production is severely affected by climate patterns. Several previous studies found that mean temperature can negatively impact rice production [50,51], while precipitation has a positive impact on rice production, as also determined by a previous study [52]. Time series modeling is crucial for understanding the actual pattern of annual rice production in Bangladesh and forecasting it accurately [53]. The ARIMA model for the annual rice production data was established on the concept of linear regression to forecast future data points. Without using any other explanatory variable, the ARIMA model is capable of capturing the pattern of the historical data and making accurate forecasts, so it is simple to establish [24]. Since ARIMA is a well-known and widely used time series forecasting model, this study compared it with the robust XGBoost machine learning model. The ARIMA model can be fitted well to non-stationary data after the Box-Cox transformation and differencing of the original data [39], but differencing causes data loss; in this study, one year of data was lost to differencing. We built the ARIMA models using the auto.arima function, adjusting the power transformation parameter (lambda), and selected the appropriate model based on the lowest AICc value, finally choosing the optimal ARIMA (0,1,1) model with drift.
On the other hand, we used the tree-based ensemble XGBoost supervised machine learning technique on our data. Several previous studies used several machine learning models, such as the artificial neural network [22], the random forest [26,54,55], and the support vector machine [56,57] to predict rice production and obtained effective predicting results. The eXtreme gradient boosting is a robust machine learning technique for precisely modeling,
analyzing, and forecasting time series data [25]. The XGBoost model provides a variety of advantages for model forecasting. For example, it does not require preprocessing of the data, it has a rapid processing speed, robust feature selection, good fitting ability and greater predictive performance, and its regularization penalty helps keep the model from overfitting, unlike a typical gradient-boosting decision tree [25,58]. As a result, we compared the predictive performance of the ARIMA model with the XGBoost model. From the results, it is clear that XGBoost performs better than the ARIMA model. The XGBoost model can also be used for cross-validation and can automatically identify significant feature vectors. The MAPE value of the XGBoost model on the test set is lower than that of the ARIMA model, which indicates that XGBoost performs better in predicting the annual rice production in Bangladesh. Therefore, we used the XGBoost model to make a short-term prediction for the next 10 years. The prediction reveals that the amount of rice produced annually in Bangladesh will grow in the following years.
According to our study, the fitting and forecasting accuracy of the XGBoost model is much better than the traditional time-series ARIMA model. Without requiring any influencing factor, our proposed model can feasibly predict the annual rice production in Bangladesh.
Limitations
In this study, we identified a model by comparing the ARIMA and XGBoost models that could accurately predict the annual rice production in Bangladesh. There are several machine learning models such as Decision Tree, LightGBM, and so on that are more robust and might have greater prediction accuracy. These models need to be applied in the future to find the best one. We mainly concentrated on the effect of time on rice production, which made it simpler to develop and predict our model. As a result, one of the limitations is that some climatic and econometric factors like temperature, rainfall, consumption, and so on, which are well known to affect rice production, were not taken into account in this study. These should be investigated further in light of the data's availability.
Conclusion
We built an ARIMA and XGBoost model for forecasting the annual rice production in Bangladesh. These models were applied to generate a short-term prediction in this study. The XGBoost model performed better than the ARIMA model in predicting the annual rice production in Bangladesh. Finally, the government and development practitioners can employ XGBoost models over ARIMA to make more accurate short-term predictions of future crop production. | 4,744.6 | 2023-03-27T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science"
] |
Deep Learning Regression Approaches Applied to Estimate Tillering in Tropical Forages Using Mobile Phone Images
We assessed the performance of Convolutional Neural Network (CNN)-based approaches using mobile phone images to estimate regrowth density in tropical forages. We generated a dataset composed of 1124 labeled images with 2 mobile phones 7 days after the harvest of the forage plants. Six architectures were evaluated, including AlexNet, ResNet (18, 34, and 50 layers), ResNeXt101, and DarkNet. The best regression model showed a mean absolute error of 7.70 and a correlation of 0.89. Our findings suggest that our proposal using deep learning on mobile phone images can successfully be used to estimate regrowth density in forages.
Introduction
Pasture areas cover 21% of the territory (170 million hectares) in Brazil; however, a large part of these pastures are degraded [1], leading to lower livestock productivity. The current average Brazilian productivity (73.5 kg CWE·ha−1·yr−1) is lower than the potential productivity of 294 kg CWE·ha−1·yr−1 [2]. This production gap represents a great challenge to be surpassed by the livestock producing countries. On one hand, the increase in the world population leads to increased demand for protein. On the other hand, policies to combat climate change require more natural environment conservation, thus demanding less area for animal protein production. In this scenario, increasing the productivity of areas already used for animal protein production is essential to meet the growing demand and to attend to the policies for reducing greenhouse gas emissions, without increasing pasture area. To achieve this goal, the development of more productive cultivars by efficient forage breeding methodologies can help reduce the productivity gap [3].
Tillers are small units of forage grass plants responsible for pasture production. After defoliation of the pasture (e.g., grazing by animals) the regrowth of tillers is crucial to maintain pasture stability and productivity [4,5]. The tillers that effectively contribute to productivity are those that regrow up to eight days after mechanical defoliation or grazing by animals [6]. Thus, one way to measure productivity is to estimate regrowth seven days after defoliation [7]. However, in situ measurements of this trait can be time-consuming, labor-intensive, and is a subjective task. Thus, the development of low-cost technologies for automated plant phenotyping could help scientists and professionals in forage breeding programs. Machine and deep learning combined with mobile devices, such as smartphones, are powerful and low-cost tools for this purpose. The development of such tools could induce less labor and time and more accuracy in the phenotyping process in forage breeding programs, leveraging the efficiency of these programs and contributing to the release of improved cultivars used to reduce the productivity gap.
Many machine learning methods, such as Support Vector Machine (SVM) and Knearest neighbors (KNN), have been employed and show outstanding results, indicating their potential role in the future of High-Throughput Phenotyping (HTP) [8,9]. Deep Learning is a subset of machine learning techniques known as a versatile tool capable of automatically extracting features and assimilating complex data using a deep neural network. Convolutional Neural Networks (CNNs) have made remarkable achievements in Computer-vision-related tasks [10]. CNN-based approaches have been widely applied to plant phenotyping because of their ability to create robust models that can be embedded in remote sensors [11,12]. The literature often neglects the use of simpler and faster digital image processing approaches. However, in the problem tackled in this study, several research papers have already compared digital image processing and deep learning to grass-like plants, especially between 2018 and 2019, where, in most cases, deep learning showed better performance [13][14][15][16].
Regarding tiller estimation, Zhifeng et al. [17] showed that Magnetic Resonance Imaging (MRI) could be used to measure rice tillers, as well as the conventional X-ray computed tomography system. Yet, an image processing procedure is still necessary. Fang et al. [18] proposed an automatic wheat tiller counting method under field conditions with terrestrial Light Detection and Ranging (LiDAR) using an adaptive layering and hierarchical clustering. Boyle et al. [19] conducted experiments using RGB images of wheat on different days and at three different angles and used a computer vision algorithm based on the Frangi filter. Deng et al. [20] trained a Faster R-CNN on three different backbones (ZFNet, VGGNet16, and VGG-CNN-M-1024) and evaluated productive rice tillers detection using mobile images. They achieved good accuracy compared to manual counting. Kristsis et al. [21] present a plant identification dataset with 125 classes of vascular plants in Greece, which include leaf, flower, fruit, stem in a tree, herb, and fern-like form. They focused the proposal on finding deep learning architectures to deploy on mobile devices. This problem has a different goal from our study. We are not concerned with finding a lightweight architecture. Our proposal aims to help HTP find the best genetic material using mobile images, where computational cost is significant but not a critical factor in our application purposes. In addition, they report their results using validation sets and not as test set [22]. Another interesting result from a grass-like image input can be found in Fujiwara et al. [23]. The authors use a CNN to estimate legume coverage with Unmanned Aerial Vehicle (UAV) imageries. This study samples image patches and estimates the coverage of timothy, white clover, and background using a fine-tuned model for each patch. They evaluate only on GoogLeNet [24].
Although we can find a rich literature in grass-like deep learning literature, to the best of our knowledge, no studies were found that investigate deep-learning-based methods to estimate the regrowth density of tillers in tropical forages using mobile phone images. Mobile phones are more accessible to most researchers than sources used in previous works (e.g., MRI and LiDAR). Furthermore, while other studies count the number of tillers [17][18][19][20], we use a score between 10 and 100 to represent a percentage of regrown tillers to select the top-k best genetic material.
The selection of top-k genotypes requires a scoring function to define the total order. Therefore the natural choice to perform this task is to treat it as a regression problem. If we train the model as a classification problem as classes of 10, 20, 30, all the way to 100, we tie the scores between these ranges, and therefore we lose the fine grain that is very important to select the top-k plants. Treating this problem as a classification problem instead of a regression problem would throw away all the potential of the total ordering possible using scores as the main output of deep learning models. Furthermore, evaluating the use of mobile phones involves two problems: (1) mobile images and (2) small models. The first problem can greatly vary when considering image quality, light, and resolution. The latter considers small models that often compromise accuracy to obtain a lighter model.
We compared small models with bigger models to verify whether the accuracy loss of the smaller models is acceptable in these applications.
Our objective is to explore deep learning regression-based methods on mobile phone images to assess the regrowth of tillers. Furthermore, different from other studies that directly count the number of tillers, we propose a methodology to assess the percentage of regrown tillers using scores from 10 to 100. We collected 1124 images with two distinct mobile phones and labeled them manually. Six different architectures were evaluated using 10-fold cross-validation with and without transfer learning. We presented a quantitative and qualitative analysis for regression. Thus, our work indicates the potential of the proposed methodology for the tiller regrowth estimation, which will be useful in increasing the efficiency of the breeding program. Our work can be used to build powerful tools for scientists and researchers to evaluate and select the best cultivar candidates in forage breeding programs and contribute to increasing animal protein productivity.
The rest of this paper is organized as follows. Section 2 presents the materials and methods adopted in this study. Section 3 presents the results obtained in the experimental analysis. Section 4 discusses our achievements. Finally, Section 5 summarizes the main conclusions and points to future works.
Materials and Methods
We adopt a standard workflow (see Figure 1) of data collection, preprocessing, and training procedures.
Study Area and Dataset
The study was developed in the field at Embrapa Beef Cattle, Campo Grande, Mato Grosso do Sul, Brazil, in the Cerrado Biome ( Figure 2). Embrapa Beef Cattle holds the main Panicum maximum germplasm bank in the country and is responsible for its breeding program [3]. Panicum maximum (Guinea grass) is one of the most important tropical forage grasses because of its high production potential, nutritive value, adaptation ability to different soils and climates, and potential as an alternative source of energy [25][26][27]. Our experiments were conducted in two trials (P7 and P8) of a biparental population of Guinea grass with 210 genotypes showing a high genetic diversity.
The dataset was generated with images obtained with two mobile phones-a Redmi Note 8 Pro and a Moto G4 Play-using the Field Book app [28] that organizes the images and their traits in a CSV file. Our dataset is composed of 1124 labeled images. Tables 1 and 2 show the number of images collected by date, mobile phone, and experimental area. Each acquisition was close to 1 hour, with a variation of 10 min. The P8 trial was imaged in just one day with a single cell phone, while the P7 trial was imaged on three different days, one day with two cell phones. Considering the different dates (four different days from two seasons-spring and summer) and times (10 a.m. to 11 a.m. and 1 p.m. to 2 p.m.) the images were taken, an attempt was made to generate a dataset with high luminosity variability, making the model more generic and robust. All assessments were made seven days after harvest. Images taken with Redmi Note 8 Pro are 3264 × 1504 pixels in dimension, and Moto G4 Play's images are 3264 × 2448 in dimension. They were taken at 1.05m approximately. Figure 3 shows the in situ data collection, while Figure 4 shows samples of different regrowth density from our dataset. The regrowth density was evaluated in each plot seven days after the mechanical harvest when the regrowth density shows a higher correlation with the next harvest production. To achieve high reliability, the regrowth density measurements must be performed by the same expert (researcher or technical staff) repeatedly after a series of harvests in a year and in different years. For this study, the ground truth data were collected in the field by an Embrapa Beef Cattle researcher ( Figure 3). The regrowth was annotated as an integer score dividable by 10, varying from 10 to 100 (included). A score of 10 corresponds to a tiller regrowth of 0% to 10%, and 100 corresponds to a tiller regrowth of 90% to 100%. The literature usually uses a coarser scoring range from 1 to 5, where 1 represents a regrowth of 0% to 20% of tillers, 2 a regrowth from 20% to 40%, 3 a regrowth from 40% to 60%, 4 a regrowth from 60% to 80%, and 5 the regrowth from 80% to 100% [26]. However, we used a more refined scale to have more robustness in our work .
Deep Learning Approach
After the labeled data were organized, we approached the problem using regression with the FastAi library [29]. The experiments were evaluated with six architectures: AlexNet [30], ResNet [31] (18, 34 and 50 layers), ResNeXt101 [32] and DarkNet [33]. We used the AlexNet, ResNet, and ResNeXt implementations from PyTorch [34]. For DarkNet, a repository implementation was used [35]. When using ResNeXt with FastAi, a pre-trained model library was used [36]. In addition, all the architectures were also evaluated with models pre-trained on ImageNet [30] in order to assess the influence of fine-tuning.
We performed all experiments using 10-fold cross-validation with an internal hold-out procedure to create training, validation, and test sets. Each fold was divided considering 81% for training, 9% for validation, and 10% for testing. All results presented in this paper were evaluated on the test set. We trained our models on Tesla K80 GPU.
Experimental Setup
We resized the images to 224 × 224 pixels, applying random horizontal flip and max rotation of 20 degrees; both with a probability of 0.75. We trained for 65 epochs (see Figure 5). The training epochs were split into four stages of 10, 10, 5, and 40 epochs. For the first three stages, we used One Cycle Policy [37], and for the last stage, we used the standard training policy. The learning rate was chosen empirically, using the learning rate finder implemented on the FastAi library.
In pre-trained models, after the first stage, we unfroze the third-to-last layer, and after the second stage, we unfroze the whole model. The loss function used was mean square error flat.
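A minimal FastAi sketch of the training setup described above, assuming a label table with columns `image` and `score` (hypothetical names); the four-stage schedule is abbreviated to two stages, and this is not the authors' code.

```python
import pandas as pd
from fastai.vision.all import *

df = pd.read_csv("labels.csv")  # hypothetical file: columns "image" (path) and "score" (10-100)

dls = ImageDataLoaders.from_df(
    df, path="data", fn_col="image", label_col="score",
    y_block=RegressionBlock(), valid_pct=0.1,
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(max_rotate=20.0, p_affine=0.75),
    bs=32,
)

# ImageNet pre-trained ResNet-50 backbone, single regression output, flat MSE loss.
learn = vision_learner(dls, resnet50, n_out=1,
                       loss_func=MSELossFlat(), metrics=mae)

learn.fit_one_cycle(10)   # stage 1: only the head is trained
learn.unfreeze()          # later stages train the whole network
learn.fit_one_cycle(10)
```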
At the inference, the predictions were rounded to the closest multiple of 10 between 10 and 100 (included). Table 3 shows how we divided our experiments regarding its architecture, pre-training status, and batch size. The hashtag (#) indicates the experiment number.
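A small helper illustrating the inference-time rounding just described: raw regression outputs are snapped to the nearest multiple of 10 and clamped to the 10-100 range (a sketch, not the authors' code).

```python
import numpy as np

def round_to_score(preds):
    """Round raw regression outputs to the closest multiple of 10 in [10, 100]."""
    snapped = np.round(np.asarray(preds, float) / 10.0) * 10.0
    return np.clip(snapped, 10, 100).astype(int)

print(round_to_score([4.2, 37.8, 73.1, 112.6]))  # -> [ 10  40  70 100]
```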
Approach Evaluation and Statistical Analysis
We evaluated all of our experiments with the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Pearson Correlation (R), and plotted the confusion matrix of each experiment. The metrics were calculated with the following equations:
MAE = (1/n) Σ_i |y_i − ŷ_i|,
RMSE = sqrt( (1/n) Σ_i (y_i − ŷ_i)² ),
MAPE = (100%/n) Σ_i |y_i − ŷ_i| / y_i,
R = Σ_i (y_i − m_y)(ŷ_i − m_ŷ) / sqrt( Σ_i (y_i − m_y)² · Σ_i (ŷ_i − m_ŷ)² ),
where n is the number of test samples, y represents the true value, ŷ the predicted value, and m_y and m_ŷ the averages of the true and predicted values, respectively. Nonetheless, these metrics do not indicate whether predictions tend to be lower or higher than the ground truth, so we were also motivated to use the Regression Receiver Operating Characteristic (RROC) space [38], which plots the total under-estimation against the total over-estimation (elaborated in the RROC Space subsection of the Results). We also used a histogram to evaluate how well the model learned the label distribution by comparing it with the true distribution. Finally, we applied the Grad-CAM [39] visual approach.
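A sketch of the evaluation described above, computing the Pearson correlation together with the RROC coordinates (total over-estimation and under-estimation); the score arrays are placeholders.

```python
import numpy as np

def rroc_point(y_true, y_pred):
    """Return Pearson R and the RROC coordinates (OVER, UNDER) of a model."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    r = np.corrcoef(y_true, y_pred)[0, 1]
    over = err[err > 0].sum()    # total over-estimation (always positive)
    under = err[err < 0].sum()   # total under-estimation (always negative)
    return r, over, under

# Placeholder scores; a point below the line OVER + UNDER = 0 means the model
# tends to predict lower values than the ground truth.
print(rroc_point([70, 80, 50, 100], [60, 80, 60, 90]))
```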
Results
Initially, we plotted the loss curve of the models in the validation set in Figure 5. The plots show the validation loss versus the number of epochs. A point is calculated as the average loss over the folds in each epoch. The loss curve gives an overview of the training behavior of the models, and it is possible to check whether an incorrect setting of epochs affected a model result. We can see that all models converged and reached a stable line in the validation set after iteration 30. Another important observation is that no model had potholes in its loss curve, suggesting that early termination might affect the results. Table 4 shows the mean and standard deviation of the mean absolute error, root mean square error, mean absolute percentage error, and Pearson correlation over the 10-fold cross-validation of each attempt. The experiment number refers to Table 3. Regarding the standard evaluation, the top result, seen in experiment resnet50-pret, has an average MAE of 7.70 and an average RMSE of 10.97; however, the non-pre-trained counterpart did not have such good results. The best couple was ResNeXt101, which achieved an average MAE of 7.72 and 7.81 and an average RMSE of 11.02 and 11.04 with and without fine-tuning, respectively. All experiments showed a correlation higher than 0.81.
Standard Metrics: MAE, RMSE, MAPE, Pearson Correlation, and Confusion Matrix
The predictions used to plot Figures 5-8 are computed by concatenating all 10 test-set results from the cross-validation procedure. In this way, the predictions do not overlap, representing the entire dataset as a test set without leaking data from the training set to the test set. Figure 6 shows the confusion matrices of the experiments listed in Table 3. For experiment (7), among 209 examples, 86 were predicted correctly as 70, and the remaining examples were close to the correct prediction. The area of the matrices below 60 represents forages with low regrowth. The goal of the breeding program is to select plants with the best regrowth, i.e., the ones with higher scores for the trait; therefore, due to the selection applied in past generations, we expect fewer samples with scores below 60. When we look at the prediction quality in this region for the top two best-performing models, resnet50-pret and resnext101-pret (Figure 6g,i, respectively), we can observe that resnext101-pret shows a slightly blueish color pattern closer to the main descending diagonal than resnet50-pret. This pattern indicates that resnext101-pret performs better for lower scores than resnet50-pret. When we look at lower-performing models, such as alexnet-nopret (Figure 6b), the results are spread over all scores lower than 60, and the model only starts to hit the main diagonal above 60.
The confusion matrix plot shows some values below and above the descending diagonal. However, it is hard to evaluate whether the algorithms had any tendency to predict higher or lower values than the ground truth. One way to assess the tendency to higher or lower values is using RROC [38].
RROC Space
RROC space is a plot that depicts the total under-estimation (always negative) against the total over-estimation (always positive). Thus, the closer the point is to (0, 0), called RROC heaven, the better the model is. A diagonal dashed line, UNDER + OVER = 0, represents the points where the under-estimation matches the over-estimation, making the model unbiased. Figure 7 shows the RROC plot of the trained models. We can observe that all of them lie under the dashed line, which indicates that the models tend to predict lower values than the ground truth. This result corroborates the confusion matrices, where the counts below the descending diagonal, especially for scores 80, 90, and 100, are usually higher than the counts above it.
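The coordinates of each model's point in RROC space follow directly from its residuals; a small sketch of that computation (names are illustrative):

```python
import numpy as np

def rroc_point(y_true, y_pred):
    """Return (OVER, UNDER): total over- and under-estimation of a model."""
    residuals = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    over = residuals[residuals > 0].sum()    # always >= 0
    under = residuals[residuals < 0].sum()   # always <= 0
    return over, under

# A model is unbiased when OVER + UNDER = 0 (it lies on the dashed diagonal);
# points below that diagonal indicate a tendency to under-estimate.
```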
The experiments resnet50-pret and resnext101-pret are the closest to RROC heaven. The least biased model is that of experiment darknet-nopret. Comparing with Table 4, we observe that experiments resnet50-pret and resnext101-pret show good results; however, the RROC space analysis shows that they are biased. This highlights the importance of this analysis, since the standard metrics do not reveal such a bias. Figure 8 shows the intersection (greenish color) of the Probability Density Function (PDF) of the ground truth data distribution and the prediction distribution of each experiment. The number of bins is fixed at 10, representing the multiples of 10 between 10 and 100 (inclusive). The y distribution is shown in Figure 9.
Histogram Analysis
The intersection area between the distributions of each experiment, shown in Table 5, is a numerical representation of the graphs and allows us to compare the experiments using a single score. All the models learned the overall distribution well; however, they showed difficulty in predicting the scores at the ends of the range well. The best histograms are from the experiments with alexnet-pret, resnext101-pret, and darknet-pret, which achieved an intersection area of 0.93 between the two distributions. We also used the Kullback-Leibler divergence (KL divergence) to measure the distance between the two probability distributions. The distributions most similar to the ground truth data are from the experiments with alexnet-pret, resnext50-pret, and darknet-pret.
Table 5. Intersection areas of the histograms shown in Figure 8. Best results are presented in bold.
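Both the intersection area and the KL divergence reported here can be computed from the two normalized histograms; a sketch assuming 10 equal-width bins centred on the multiples of 10 (the exact binning used in the paper may differ):

```python
import numpy as np
from scipy.stats import entropy

def histogram_scores(y_true, y_pred, bins=10, value_range=(5, 105)):
    """Intersection area and KL divergence between prediction and ground-truth histograms."""
    p, _ = np.histogram(y_true, bins=bins, range=value_range)
    q, _ = np.histogram(y_pred, bins=bins, range=value_range)
    p = p / p.sum()                          # normalize to probability mass
    q = q / q.sum()
    intersection = np.minimum(p, q).sum()    # 1.0 means identical distributions
    eps = 1e-12                              # avoid zero counts in empty bins
    kl = entropy(p + eps, q + eps)           # KL(p || q)
    return intersection, kl
```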
Visual Inspection
Experiment resnet50-pret shows the best MAE, RMSE, and correlation among the algorithms tested. We analyzed the image regions that this model considers most discriminative for defining the regrowth areas, i.e., where the model looks in the image to predict regrowth. For this, we inspected the last activation map of the model using Grad-CAM. Figures 10 and 11 show the Grad-CAM heatmaps of experiment resnet50-pret for the best and worst predictions at ground truth values of 10, 50, and 100, respectively. Warmer colors indicate areas that played the most important role in the model's decision, while colder colors mean the opposite.
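A minimal sketch of how such Grad-CAM heatmaps can be produced for a single-output regression network (the choice of the last convolutional block as target layer and the regression head shown here are our assumptions, not details reported by the authors):

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)   # single-output regression head
model.eval()

activations, gradients = {}, {}
target_layer = model.layer4                            # last convolutional block

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image):
    """image: float tensor of shape (1, 3, H, W); returns a heatmap in [0, 1] of shape (H, W)."""
    model.zero_grad()
    score = model(image).squeeze()                     # predicted regrowth score
    score.backward()
    acts, grads = activations["value"], gradients["value"]
    weights = grads.mean(dim=(2, 3), keepdim=True)     # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().cpu()
```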
The heatmap in Figure 10 shows a pattern where, at low density (regrowth score 10), the model avoids the center of the plot and focuses on its border, leaving a circle in the middle that the model does not analyze. At high density (regrowth score 100), the pattern is reversed, with the heatmap concentrating on the center of the plot. This result agrees with the intuition that a high-density plot has more leaves in the center, which is where the model looks. Looking at Figure 11a, the image seems to be mislabeled as 10. We can see from the image that the plot presented relatively acceptable regrowth, much better than in Figure 10a, and we believe that the model predicted a more appropriate score than the ground truth. The same occurs in the other images, where the prediction seems better than the ground truth. The pattern for higher regrowth is similar to Figure 10: the higher the regrowth, the more important the center of the plot becomes.
Efficiency Analysis
We analyzed the efficiency of the experiments by comparing the average time a model takes to compute a single example. Table 6 shows the number of parameters for each experiment and the average inference time on GPU (tested on Tesla M4) and CPU. We picked 112 examples of our dataset for this analysis. As expected, the models are much faster on GPU than on CPU. Therefore, GPU is preferable to CPU. However, in our case, the time spent by inference is not an issue because we do not need the prediction in real-time, and even the slowest model (resnext101-pret) is already quite fast.
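Average per-example inference time can be measured along the following lines (an illustrative sketch; the exact timing protocol of the paper is not specified):

```python
import time
import torch

@torch.no_grad()
def average_inference_time(model, images, device):
    """Mean seconds per example for a list of preprocessed (3, H, W) image tensors."""
    model = model.to(device).eval()
    model(images[0].unsqueeze(0).to(device))     # warm-up pass (lazy CUDA init not counted)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for img in images:
        model(img.unsqueeze(0).to(device))
    if device.type == "cuda":
        torch.cuda.synchronize()                 # wait for queued GPU work to finish
    return (time.perf_counter() - start) / len(images)
```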
Discussion
This study estimates the regrowth density of tropical forages using mobile phone images. To achieve this goal, we evaluated a series of standard and state-of-the-art deep learning methods, ranging from a simpler model such as AlexNet, with only five convolutional layers, to a more complex model such as ResNeXt101, with 101 layers. These models were adapted to tackle the task as a regression problem.
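Adapting an ImageNet classification backbone to this kind of regression task essentially means replacing the classification head with a single linear output; a sketch using ResNet50 (illustrative, since the exact head used in the study is not described):

```python
import torch.nn as nn
from torchvision import models

def make_regressor(pretrained: bool = True) -> nn.Module:
    """ResNet50 backbone with a single-output regression head (newer torchvision weights API)."""
    weights = models.ResNet50_Weights.IMAGENET1K_V1 if pretrained else None
    net = models.resnet50(weights=weights)
    net.fc = nn.Linear(net.fc.in_features, 1)   # predict one continuous regrowth score
    return net

# Training then simply minimizes a regression loss, e.g. nn.MSELoss() or nn.L1Loss(),
# between the predicted and the annotated scores.
```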
For the first time, we report that deep learning methods can deliver correlations from 0.81 to 0.89 in estimating the regrowth density using mobile phone images. We believe that this result is very acceptable and has the potential to speed up data collection of regrowth density and consequently increase the efficiency of forage breeding programs. The closest approach found in the literature was the study conducted by Deng et al. [20] for rice tillers. The authors used a completely different approach. Their approach required harvesting the rice and evaluating the cross-sections of rice tillers. Using object detection, they estimated the number of productive tillers. Our approach requires just a plot image obtained from a mobile phone without harvesting or other labor-intensive intervention.
Deeper neural nets perform better than shallower versions of the same architecture in most problems [31]. In HTP, however, we found some controversy, as the deeper model did not always produce the best result. The study conducted by Oliveira et al. [40] using aerial images taken by an Unmanned Aerial Vehicle (UAV) showed results where the best-performing model among AlexNet, ResNeXt50, MaCNN, LF-CNN, and DarkNet53 was the simple AlexNet. Intrigued by these results, we evaluated a broader range of deep learning architectures with a more diverse number of layers. Interestingly, a 50-layer network (ResNet50) achieved our best-performing result. In a traditional computer vision task, we would have expected the 101-layer network to give the best result, which did not occur here.
The analysis using RROC indicated that all models were below the dashed diagonal, suggesting that deep learning models tend to under-estimate the predictions in the problem setting of this paper. Castro et al. [41] also plotted the RROC space in a biomass prediction problem using deep learning and aerial images, and in their results this tendency did not exist. We believe that this tendency arises from the data distribution being skewed toward higher values (Figure 9).
The heatmap results shed light on where the network is "looking" to predict the regrowth density. To the best of our knowledge, this is the first study to address the interpretability of deep learning models on regrowth. The results indicate that the circular border region of the plot is the main area used to identify low regrowth, while the center of the plot is the most characteristic area for high-regrowth images.
Compared to similar works, ours differs in not using any complex sensor technology, such as MRI or LiDAR, which is expensive and excessive compared to a mobile phone. In addition, there is no need for a scheme of taking pictures on different days and at different rotations, nor for handcrafted features. Furthermore, the main distinction from other works is the estimated trait: we calculated a score representing the regrowth percentage of the tillers instead of counting the number of tillers.
Machine learning must be used with care. Although the proposed approach can give valuable estimates of tiller regrowth, it is not advisable to completely substitute it for manual field labeling of regrowth density. It is always good to collect smaller validation sets to evaluate whether the learned models still give good estimates. Therefore, the proposed approach was never intended to completely replace the manual labeling of fields, but rather to allow HTP research to multiply the number of plots while reducing the need for manual label collection.
Conclusions
To the best of our knowledge, this is the first study to evaluate CNN-based architectures for estimating regrowth density using RGB images collected by mobile phones. From our perspective, this study also presents the following contributions according to our results: (1) deep learning can deliver correlations from 0.81 to 0.89 in estimating the regrowth density using mobile phone images; (2) the best-performing architecture is not always the deeper model for this problem; (3) the deep learning models tend to under-estimate the predictions in our problem setting; and (4) the heatmaps indicate the patterns that deep learning models use to predict regrowth density.
Previous works focus on estimating the tiller number. We used a score that represents the percentage of regrown tillers, and we collected a dataset with images of forages taken on different days, locations, phones, and genotypes, promoting more generalized models.
Our results indicate that we might succeed in using our methods for prediction on new data. To develop new cultivars, researchers need to evaluate and select for multiple traits in the breeding program. The phenotyping step therefore consumes considerable time and cost, sometimes with low accuracy. Thus, training new algorithms to estimate traits such as disease and insect damage, mineral deficiencies, seed number, and others is the next step of this work using deep learning associated with low-cost mobile devices.
In future work, we will evaluate the problem by employing lightweight deep learning architectures to deploy the model inside the mobile phone. In this way, the annotators can speed up their labeling process, and their task is more related to validating the predictions and collecting images than labeling the plot. We also plan to evaluate the problem using the Learning-To-Rank algorithm and evaluate the use of UAV-based images.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author with the permission of Embrapa.
Xanthene and Xanthone Derivatives as G-Quadruplex Stabilizing Ligands
Following previous studies on anthraquinone and acridine-based G-quadruplex ligands, here we present a study of similar aromatic cores, with the specific aim of increasing G-quadruplex binding and selectivity with respect to duplex DNA. Synthesized compounds include two and three-side chain xanthone and xanthene derivatives, as well as a dimeric “bridged” form. ESI and FRET measurements suggest that all the studied molecules are good G-quadruplex ligands, both at telomeres and on G-quadruplex forming sequences of oncogene promoters. The dimeric compound and the three-side chain xanthone derivative have been shown to represent the best compounds emerging from the different series of ligands presented here, having also high selectivity for G-quadruplex structures with respect to duplex DNA. Molecular modeling simulations are in broad agreement with the experimental data.
Introduction
The first G-quadruplex binding ligand having an effect on telomerase activity was discovered in 1997 as a result of the collaboration between the Neidle and Hurley groups [1,2]. Particular attention has subsequently been devoted to the stabilization of these structures involved in key biological processes by small organic molecules, [3][4][5] especially in telomere regions [6][7][8]. However, the first anthraquinone ligands show high levels of non-specific cytotoxicity, possibly due to redox cycling [9,10]. Subsequently, in order to solve this biological problem, a new ligand core was studied: the acridine moiety [11]. The acridine core was chosen in part for its similarity with anthraquinones [12]. A small library of 3,6-disubstituted acridines was synthesised: this showed substantial improvement with respect to the previous series of anthraquinones, i.e., micromolar telomerase inhibition and lower cytotoxicity. Neidle and co-workers also increased the number of side chains on the acridine core, synthesising a library of 3,6,9-trisubstituted acridines [13]. The lead compound, BRACO-19 ( Figure 1) is one of the most studied G-quadruplex binding ligands to date, showing significant telomerase inhibitory activity (telEC 50 = 6.3 μM) [11]. In this context we wished to study aromatic cores similar to those previously described so as to induce quadruplex structures, and also possibly reducing the problems related to their unspecific cytotoxicity. In particular, our efforts have been directed towards synthetic targets represented by 3,6 and 2,7-disubstituted xanthene and xanthone derivatives ( Figure 2). This core, widely found in Nature, represents a unique class of biologically active compounds possessing numerous bioactive capabilities such as antioxidant properties [14]. These molecules constitute a restricted group of plant polyphenols, biosynthetically related to flavonoids [15]. We developed this core with the specific aim of increasing telomerase activity by rational design [16][17][18]. Molecular modeling studies predict that the xanthene moiety is at least comparable to the anthraquinone and acridine moiety in terms of G-quadruplex binding affinity [19][20][21]. It is an inherently planar chromophore and also contains a heterocyclic oxygen atom which could confer water solubility compared to the analogous anthraquinone core, which is completely insoluble in water.
Design and Molecular Docking
We used a virtual screening computer-based technique for identifying promising compounds to bind to a known G-quadruplex structure. Here we have adopted a docking screening method available in the AutoDock suite of programs. This is a software suite for predicting the optimal bound conformations of ligands to macromolecules [22][23][24]. Its use has been supported by a review by Trent and co-workers [25] showing that AutoDock optimally balances docking accuracy and ranking. This application and also the more general problem of modelling individual quadruplex structures have some similarities to those of other nucleic acid modelling. There is also the added complexity of the central ion channel and the intrinsic flexibility of the telomeric quadruplex itself. More recently, however, enhancements in the performance of AutoDock combined with the increased availability of high speed computers and computer clusters have allowed much larger computational experiments to be undertaken, where entire compound libraries are screened against pharmaceutically-relevant targets. We obtained the initial coordinates for the docking from the Protein Data Bank coordinates of the crystal structure of the parallel 22-mer telomeric G-quadruplex (PDB ID: 1KF1) which shows a single topology, the parallel fold [26]. The corresponding intermolecular energy values were used to calculate the average binding energies (and the relative standard deviations), reported in Table 1. Repeated experiments show good reproducibility, suggesting that the number of structures generated was sufficient to be statistically significant. The binding poses calculated for these compounds were then visually inspected to discard all the ligands which were not able to form hydrogen bonds with any of the guanine bases and/or to establish an electrostatic interaction with the backbone phosphate groups. There are several limitations to this methodology, mainly related to the inability of AUTODOCK and other docking programs to fully account for the non-rigidity of the quadruplex DNA structure and the likely significant polarisation effect of positive charges on the ligand molecule. Several attempts have been made to try to solve these problems, and it is clear that the calculated binding energies must not be considered as absolute values, but rather as indicative of relative ranking, which can be most useful with a series of homologous molecules having similar chemical structure [27]. However, these docking studies are in good qualitative agreement with the "threading intercalation" model proposed for this type of molecule by Hurley and coworkers, in which the drug is stacked on the terminal G-tetrad, stabilized by π-π interactions with the central aromatic core, while the side chains interact with the G-quadruplex grooves [28].
The first series of xanthene-based G-quadruplex binding ligands XA2 (dma, pip and mpz; 3a,b and c) and XO2 (dma, pip and mpz, 4a,b and c) have been the starting point of this work. Docking experiments were performed with the xanthene core on a G-quadruplex monomeric structure. These studies have shown good superimposition of the designed ligands with the terminal G-tetrad of the G-quadruplex (Figure 3), similar to that shown by anthraquinone and acridine derivatives. The binding energies calculated for molecules with a xanthenic core are in overall accord with those calculated for known ligands (Table 1). The second class of derivatives HXO2 dma, pip and mpz (8a,b and c) has been designed with the aim of improving the binding ability of previous models. We wished to obtain a compound with a greater number of oxygen atoms necessary to ensure good aqueous solubility. In view of the results from the docking simulations, we considered it appropriate to functionalize the planar structure of the xanthone in positions 2 and 7 so that the side-chains are oriented in the correct direction to be able to interact with two of the four adjacent quadruplex grooves generated by one of the loops. Moreover the carbonyl oxygen atom of the xanthone is directed away from the loop, preventing the xanthone core position becoming more decentralized and near the loop. The xanthone core appears displaced toward the central ion channel and this causes a further distancing of the side-chains from the grooves. Most of the structures obtained for the compounds of the first series on the 3' G-tetrad face show the ligand molecule in a position analogous to the one represented in Figure 3(B): one side-chain is well fitted into one of the four grooves (regardless of which one), while the other is more flexible, since it cannot reach another groove. Molecular docking studies show how the second generation of ligands has an inverted behaviour compared to the earlier compounds. It is notable that the carbonyl oxygen atom of the xanthone is oriented towards the centre of the structure, where it is stabilized by a K + ion situated between the tetrads [29]. Therefore, the side chains are closer to the grooves and they have to be shorter than those linked in position 3 and 9: only four carbon atoms compared to six for the first series are necessary.
Small aromatic cores such as xanthene, when functionalized with hydrophilic and positively charged chains, can intercalate and interact with double-stranded DNA, as demonstrated in the following sections. For this reason, we attempted to design more selective quadruplex ligands. As a first approach, we expanded the surface of the aromatic core, creating a bridge between two xanthene groups. Since the first series of compounds are predicted to be able to interact with two of the four quadruplex grooves through their two side-chains, our aim was to add a small bridge that would enable the new dimeric bridged molecule (11) to interact with all four quadruplex grooves, with a consequent improvement in selectivity and stability. The two monomer units can rotate independently and adapt to the terminal tetrad binding site with distinct orientations: a possible model of interaction is shown in Figure 4. In view of the findings from earlier studies that the introduction of an appropriate third substituent arm enhances selectivity (for instance trisubstituted triazines, porphyrins, and acridines such as BRACO-19), we considered it appropriate to introduce a third chain on the xanthone core, using the knowledge already acquired from the two-chain templates [13,30]. The calculated binding energy of compound XA3c (14) looks promising. This compound is predicted to interact with the telomeric quadruplex with the positively charged side chains in positions 3 and 6 each extending into a wide groove, while the side chain in position 9 inserts into a narrow hydrophobic pocket (Figure 5: XA3c (14), yellow atom-type, and the monomeric G-quadruplex, light blue).
Synthesis
Following the molecular modelling studies, the aim of our work became the synthesis of xanthene and xanthone derivatives appropriately modified in order to improve aqueous solubility and ability to interact with the target quadruplex. We started from xanthene itself, which is commercially available. The first step in the strategy was to introduce the side-chains by Friedel-Crafts acylation. In this way, the 2,7-positions were functionalized. We employed chains of suitable length which are also very flexible, to be in optimal contact with the quadruplex backbones and grooves [31]. Since these chains terminate with a bromine atom suitable for the subsequent nucleophilic substitution, using different amines similar to those used for this purpose in the literature we obtained a small library of compounds such as XA2 dma, pip and mpz (3a,b and c, Scheme 1) [32,33]. The secondary amines we used become tertiary after substitution and are charged under physiological conditions. To increase the planarity of the molecule, we decided to oxidize the methylene group of the xanthene to a carbonyl group with Jones reagent, obtaining the corresponding xanthone derivatives XO2 dma, pip and mpz (4a,b and c, Scheme 1). Regarding the synthesis of the second generation of derivatives (HXO2 dma, pip and mpz, 8a,b and c), we had to synthesize the xanthone core, because the 3 and 6 positions of this core are deactivated. The starting point was 2,2',4,4'-tetrahydroxybenzophenone, which was dehydrated, generating a xanthone core with two hydroxyl groups in the desired 2 and 7 positions. This reaction is quantitative and occurs only under high temperature and pressure conditions. To carry out this dehydration, the starting material was suspended in a sealed steel tube half-filled with water. At 250 °C, the water evaporation generates the necessary pressure (about 50 atm) to promote the reaction [34]. Then, side chains of suitable length were introduced using 1,4-diiodobutane. As in the previous series of compounds, the second halogen was replaced with the desired amine. Piperidine, methylpiperazine and dimethylamine were used as amines to obtain a small library of derivatives (Scheme 2). The first step in the synthesis of the bridge dimer 11 was the creation of the bridge between the two xanthene moieties. This reaction first requires generation in situ of the corresponding carbanion, and then the addition of a stoichiometric amount of 1,3-diiodopropane. At this point, the synthesis was performed as for the first generation of xanthenes. Therefore, a Friedel-Crafts acylation was carried out at the 2- and 7-positions, followed by the nucleophilic substitution with the amine (Scheme 3). Finally, a similar procedure was applied for the synthesis of the final compound, XA3c (14). Using an excess of 1,3-diiodopropane, it was possible to introduce only one chain on the xanthene core. The acylation and the following replacement were performed as already described. In the final step of this synthetic procedure, the secondary amine simultaneously displaces both bromine and iodine atoms of the intermediate compound, to give the final desired compound XA3c (14, Scheme 4).
Studies of Ligands Interactions with G-Quadruplex and Duplex DNA by ESI-MS Experiments
Electrospray ionization mass spectrometry (ESI-MS) is a powerful tool for studying biomolecular structures and non-covalent interactions. This technique allows the transfer of non-covalently bound complexes into the gas phase without the disruption of the complex itself and therefore the determination of the stoichiometry and, in particularly favourable cases, modes and energies of interaction. For the analysis of noncovalent complexes between nucleic acids and small molecules, De Pauw and co-workers used ESI-MS to study the interaction of double-stranded DNA with two classes of antitumor drugs: intercalators (ethidium bromide, amsacrine and ascididermin) and minor groove binders (distamycin A, Hoechst 33258, netropsin, berenil and DAPI) [35,36]. They also evaluated the DNA affinities of the minor groove binders, by quantifying the equilibrium association constants of the observed complexes and they demonstrated consistency of these values with those obtained from other traditional techniques. In the past few years, ESI-MS has been used in the study of nonconventional DNA structures, including DNA triplexes and especially G-quadruplexes [37]. It has been successfully applied to the study of the binding of G-quadruplex ligands to their target sequences in order to determine stoichiometry and relative binding affinities of such complexes [38]. Quantitative analysis of binding affinities with quadruplex DNA structures is possible, because the association constants can be calculated directly from the relative intensities of the corresponding peaks found in the mass spectra, with the assumption that the relative intensities in the spectrum are proportional to the relative concentrations in the injected solution.
For this study, we have chosen two oligonucleotides that can form different G-quadruplex and duplex structures: HTelo21 (5'-GGGTTAGGGTTAGGGTTAGGG-3') which comprises human telomeric repeats and is able to fold into a monomeric G-quadruplex structure, as characterized by X-ray crystallography and NMR. The association constants have been calculated directly from the corresponding peaks found in the mass spectra, since the relative intensities in the spectrum are proportional to the relative concentrations in the injected solution, as previously reported [39,40].
In order to evaluate ligand selectivity for quadruplex over duplex DNA we have studied their affinity for a self-complementary dodecamer: DK66 (5'-CGCGAATTCGCG-3'), one of the duplex DNA models most reported in the literature. The evaluation of the binding constants coming from the collected data demonstrates that all the molecules examined are good telomeric G-quadruplex ligands, able to form both 1:1 and 2:1 drug-DNA complexes. This is in agreement with the terminal stacking mode of interaction between ligands and G-quadruplex DNA, confirmed by the molecular modelling studies reported above (Figure 3), and with the existence of two binding sites, corresponding to the two terminal tetrads, resulting in two different binding constants (K 1 and K 2 ). The synthesized ligands show similar K 1 values, with log K 1 = 4.0-4.7 for the xanthene derivatives, XA2pip (3b) showing the highest value with log K 1 = 4.7. In the examination of K 2 values, XA2dma (3a), XA2pip (3b), and XA2mpz (3c) show similar values with log K 2 = 4.2, 4.6, and 3.9, respectively (Table 2). Contrary to what was expected, the oxidized compounds XO2pip (4b), XO2dma (4a) and XO2mpz (4c) have quadruplex association constants in line with those of the respective xanthene derivatives. Although their greater planarity should make their capacity to interact with the terminal tetrads superior, this same characteristic promotes self-aggregation, which limits their ability to interact with the desired target. This was also confirmed by their lower solubility both in common organic solvents, where they are in the form of neutral amines, and in water, where they are in the form of the corresponding hydrochlorides.
We therefore considered it appropriate to continue developing this core, in order to improve the ability to interact with the terminal G-tetrad. Thus we designed and subsequently synthesized the second series of xanthenes, as described above. The data show that, also in this instance, the ligand functionalized with piperidine has optimal properties. The dimethylamino derivative has similar values of K 1 , although the K 2 value is 10-fold lower. The results show that all xanthene and xanthone compounds do bind G-quadruplex structures. They are also able to form complexes with duplex DNA, although with about 10-fold lower association constants. We believe that the low selectivity for quadruplex structures is due to the extreme flexibility of the side chains, which are able to interact with the grooves of both quadruplex and duplex. Another consideration is that the small size of the central core does not optimize the interactions with the aromatic surface of the terminal G-tetrad.
The results obtained for the bridged dimer are particularly interesting. Besides the expected peak of the 1:1 DNA-ligand complex, we observed a peak of equal intensity but less well defined, which we believe corresponds to a 2:1 DNA-ligand complex. The mass/charge (m/z) value provides only the stoichiometry. This complex is found in the same m/z range as the previous one because, although its mass is almost doubled, the charge carried by the two oligonucleotides also appears to be doubled. Peaks are normally observed with a formal charge of 5 or 4, while in this case we see a complex with double the mass but, at the same time, presumably double the charge. Unfortunately, the resolution of the ESI-MS could not provide any more detailed information. It is evident from the data reported that there is an improvement in overall binding to quadruplex DNA, while binding to duplex DNA remains constant, thus improving selectivity.
In the case of the three-side-chain compound XA3c (14), the observed binding constants demonstrate that this is an effective G-quadruplex ligand able to form both 1:1 and 2:1 drug-DNA complexes. This is in agreement with the general model for G-quadruplex-ligand interactions characterized by the presence of two binding sites on the external tetrad surfaces of the G-quadruplex structure, even though other explanations are possible for the 2:1 stoichiometry, since the external tetrads of the quadruplex are not identical. The logarithmic value of K 1 is above 5.8 for HTelo21, while that of K 2 is 5.6. Comparing K 1 and K 2 , we can assume that no cooperativity is involved in the binding mechanism, since K 2 values are always lower than those of K 1 . In order to evaluate the selectivity of XA3c (14) for quadruplex over duplex DNA we have undertaken a preliminary study of its affinity for the self-complementary dodecamer DK66. In this case, the spectra acquired at a 1:1 ratio do not show any traces of the 1:1 or 2:1 complex peaks, suggesting a weaker interaction compared to that with G-quadruplexes. In order to detect a significant peak of the 1:1 drug-DNA complex, this molecule must be present in the sample at a 2:1 ratio, but even at this concentration there is no evidence of the peak relative to the 2:1 complex; this peak can be appreciated only at higher concentrations and at ratios greater than 1:4.
A more reliable analysis of quadruplex vs. duplex selectivity has been undertaken by performing competition experiments in the simultaneous presence of the G-quadruplex-forming oligonucleotide HTelo21 and fragments of double-stranded genomic (calf thymus) DNA [40,41]. We chose only one compound (XA3c, 14), as the best compound emerging from the previous series of measurements, for these competition experiments (see Table 3). The choice of calculating the "amount bound" is because this parameter is relevant to the specific biological activity of these compounds, since it can be considered representative of their capability to sequester telomeric DNA and so inhibit telomerase activity [42,43]. ESI-MS experiments were performed on the most promising and representative compounds with two known promoter G-quadruplex forming sequences, of the oncogenes: bcl2 and c-myc [43].
Also in this instance, on the basis of the data reported in Table 4, there is clearly higher activity for dimeric and three side chains derivatives compared to other series of xanthene and xanthone derivatives.
FRET Assays of Xanthene and Xanthone Derivatives on Quadruplex and Duplex DNA
Fluorescence resonance energy transfer (FRET) measurements were performed to confirm quadruplex binding activity and selectivity with respect to duplex DNA: two suitable fluorescent probes were attached to different quadruplex- and duplex-forming sequences, resulting in an efficient system to quickly study the thermal stabilization of both quadruplex and duplex by different ligands [44]. This technique has been applied to a human telomeric sequence and to two known promoter G-quadruplex-forming sequences from the oncogenes bcl2 and c-kit1, as well as to duplex DNA (T-loop). Table 5 shows that the new hydrophilic three-side-chain xanthene derivative is a potent G-quadruplex ligand. In particular, all the xanthene derivatives show good selectivity with respect to duplex DNA, in accordance with the previous mass spectrometry experiments. Nevertheless, the duplex model used here may be too simplified, and further investigations are required in this regard.
General
All commercial reagents and solvents were purchased from Fluka (Milano, Italy) and Sigma-Aldrich (Milano, Italy), and used without further purification. TLC glass plates (silica gel 60 F254) and silica gel 60 (0.040-0.063 mm) were purchased from Merck. 1 H and 13 C-NMR spectra were obtained with Varian Mercury 300 instruments. ESI-MS spectra were recorded on a Micromass Q-TOF MICRO spectrometer.
Molecular Modelling
The crystal structure used was that of the parallel 22-mer telomeric G-quadruplex (PDB ID: 1KF1). Ligand structures were constructed using Avogadro 1.0.3 for force-field optimization with the MMFF94 steepest descent algorithm. Docking studies were performed with the AutoDock 4.2 program [22,24]. Water molecules were removed from the PDB file, nonpolar hydrogen atoms of the telomeric G-quadruplex were merged with their corresponding carbon atoms, and partial atomic charges were assigned using ADT [45]. The Lamarckian genetic algorithm (LGA) was used to perform docking calculations. A population of random individuals was initially used (population size: 150), with a maximum number of 25,000,000 energy evaluations, a maximum number of generations of 27,000, and a mutation rate of 0.02. 100 independent docking runs were carried out for each ligand. The resulting positions were clustered according to a root-mean-square criterion of 0.5 Å. The docking module was used to calculate the intermolecular (binding) energy, obtained as a sum of electrostatic and van der Waals contributions, between ligand and DNA. The corresponding intermolecular energy values were used to calculate the average binding energies (and the relative standard deviations).
(2): In a two-necked flask xanthene (1, 200 mg, 1.10 mmol) and AlCl 3 (190 mg, 1.41 mmol) were added to anhydrous DCM (2 mL) at 0 °C under Ar. To the solution 6-bromohexanoyl chloride (0.43 mL, 2.80 mmol) in anhydrous DCM (2 mL) was added dropwise. The reaction was then allowed to warm up to room temperature. After 5 h (TLC 30% hexane-CHCl 3 6:4), the solution was neutralized at 4 °C with a saturated solution of sodium bicarbonate. The crude product was then extracted with DCM (3 × 50 mL), dried over Na 2 SO 4 and taken to dryness in vacuo. The crude product was purified by flash column chromatography (30%-60% CHCl 3 in hexane). Compound 2 was obtained as a yellow oil (509 mg, 0.95 mmol, 86%).
General Procedure for Nucleophilic Substitution
Product 2 was dissolved in THF (5-15 mL) and treated with an excess of the amine (5 mmol) at 0 °C, then the solution was stirred overnight at room temperature. After completion (TLC 10% MeOH in DCM), the solvent was evaporated in vacuo. The crude product was dissolved in DCM (75 mL), washed three times with saturated aqueous NaHCO 3 solution (50 mL), dried over Na 2 SO 4 , filtered and taken to dryness in vacuo. The crude product obtained was purified by column chromatography (5%-40% MeOH in DCM).
General Procedure for Xanthene Oxidation
(A) Jones' reagent preparation: in a 50 mL beaker, CrO 3 (7.0 g) was dissolved at 0 °C in water (10 mL) and H 2 SO 4 (6.1 mL), then additional water (20 mL) was added. (B) The product 3 was dissolved in acetone (5-15 mL) in a two-necked flask at 0 °C. An excess of Jones' reagent (approximately 1 mL per 10 mmol, diluted in acetone at a 1:5 ratio) was added slowly over 20 min. The reaction mixture was then allowed to warm to room temperature. After 5 h the solution was concentrated and the excess of Jones' reagent was destroyed with a 5% solution of thiosulfate. The product was extracted three times with DCM; finally, the organic layers were washed with saturated aqueous NaCl solution, dried over Na 2 SO 4 (TLC 10% MeOH in CHCl 3 ), filtered and taken to dryness in vacuo. The crude product obtained was purified by flash column chromatography (5%-40% MeOH in CHCl 3 ).
(6): In a steel-box tube, a 50 mL flask containing 2,2',4,4'-tetrahydroxybenzophenone (5, 509 mg, 2.07 mmol) suspended in water (15 mL) was placed. The box was heated under magnetic stirring up to 210 °C, so that after 30 min the pressure inside was 35 atm. After 4 h the solution was brought to room temperature and then the mixture was filtered under vacuum with a Gooch funnel. The product was dried under vacuum to give compound 6 as a brown powder (468 mg, 2.05 mmol, 99%). 1 H-NMR (300 MHz, NaOD) δ: 7.49 (2H, d, J 0 = 9.0 Hz, aromatic); 7.49 (2H, dd, J 0 = 9.0 Hz, J 1 = 1.9 Hz, aromatic); 6.12 (2H, d, J 1 = 1.9 Hz, aromatic).
(7): In a 50 mL flask compound 6 (100 mg, 0.43 mmol) and dry K 2 CO 3 (80 mg, 0.50 mmol) were added to anhydrous DMF (3 mL). When the product was completely dissolved, an excess of 1,4-diiodobutane (0.5 mL) was added at room temperature. To the crude product, distilled water (10 mL) was added and the solution was extracted three times with diethyl ether (30 mL). Then, the organic phase was washed three times with saturated aqueous NaCl solution (10 mL) and finally dried over Na 2 SO 4 . The product was purified by column chromatography (30%-60% CHCl 3 in hexane) to give compound 7 as a white solid (237 mg, 0.39 mmol, 91%).
Analysis of the DNA-Drug Interactions by ESI-MS
Instrumentation: All the experiments were performed on a Q-TOF MICRO spectrometer (Micromass, now Waters, Manchester, UK) equipped with an ESI source, in the negative ionization mode. The rate of sample infusion into the mass spectrometer was 5 or 10 μL/min and the capillary voltage was set to −2.6 kV. The source temperature was adjusted to 70 °C and the source pressure was set at 1.30 mbar. The cone voltage was set to 30 V and the collision energy to 10 V. Full scan MS spectra were recorded in the m/z range between 800 and 2,500, with 100 acquisitions per spectrum. Data were analyzed using the MassLynx software developed by Waters.
Sample preparation protocol: Oligonucleotides were dissolved in bi-distilled water to obtain the starting stock solutions and were annealed in 150 mM ammonium acetate buffer by heating at 90 °C for 10 min and then cooling slowly to room temperature. The final concentration of oligonucleotides stocks was 50 μM in either duplex or quadruplex units. Ammonium acetate was chosen as the buffer main component for its good compatibility with ESI-MS. Calf thymus DNA (CT) was dissolved in bi-distilled water. Since its average chain length is 13 kb, it was subjected to sonication (Sonyprep 150 sonicator) for 8 min to obtain an average length of 500 bp (according to gel electrophoresis analysis with Mass Ruler DNA ladder mix-low range). Drug stock solutions were prepared by dissolving in bi-distilled water the desired amount of drug hydrochlorides to obtain a final concentration of 100 μM.
Samples were prepared by mixing appropriate volumes of 150 mM ammonium acetate buffer, 50 μM annealed oligonucleotide stock solution, xanthene or xanthone derivatives 100 μM stock solutions and methanol. The final concentration of DNA in each sample was 5 μM (in duplex or quadruplex unit) and the final volume of the sample was 50 μL. Drugs were added at different drug/DNA ratios, ranging between 0.5 and 4. Methanol was added to the mixture just before injection (in a percentage of 15% vol.) after the complexation equilibrium in ammonium acetate was established, in order to obtain a stable electrospray signal. As a reference, samples containing only 5 μM DNA with no drug were prepared in each series.
Samples for competition experiments were prepared following the procedure described above, adding an appropriate volume of CT solution. Final concentrations of quadruplex DNA and drug solutions were always 5 μM and CT was added at two different duplex/quadruplex ratios (1 and 5), calculated on the basis of the phosphate group concentrations. In order to minimize casual errors each experiment has been repeated at least three times, in the same experimental conditions, and data were processed and averaged with the SIGMA-PLOT software.
Binding constants (K 1 and K 2 ) and the percentage of bound DNA have been calculated according to previously reported formulae [39]. Considering drug-DNA complexes of 1:1 and 2:1 stoichiometry, which have been proven to be the main species present in solution in all the experiments, the formation of such complexes can be represented by two distinct equilibria:

DNA + drug ⇌ (1:1)
(1:1) + drug ⇌ (2:1)

which are in turn described by the following two equations:

K 1 = [1:1]/([DNA] • [drug]) (1)
K 2 = [2:1]/([1:1] • [drug]) (2)

These constants (defined by Equations (1) and (2), respectively) can be calculated directly from the relative intensities of the corresponding peaks found in the mass spectra, with the assumption that the response factors of the oligonucleotides alone and of the drug-DNA complexes are the same, so that the relative intensities in the spectrum are proportional to the relative concentrations in the injected solution. In this way, since the initial concentrations of DNA and drug (C0 and C0', respectively) are known, it is possible to obtain the concentration of each species appearing in (1) and (2):

[j] = C0 • I(j)/(I(DNA) + I(1:1) + I(2:1)) (4)
[drug] = C0' - [1:1] - 2 • [2:1] (5)

The constants were determined at different drug/DNA ratios (2.5:5, 5:5, 7.5:5, 10:5 and 20:5 micromolar concentration ratios), depending on the intensity of the signals. A further manipulation of the data leads to the calculation of the amount of ligand bound, according to an equation developed by De Pauw and his group [24] derived from Equation (4):

[ligand bound] = C 0 (I(1:1) + 2I(2:1))/(I(DNA) + I(1:1) + I(2:1)) (6)

This parameter, representing the total amount of the drug bound to DNA, is useful to compare the efficiency of different ligands in DNA binding when they interact as single molecules. Since planar aromatic derivatives are known to also interact with DNA in self-aggregated forms, we decided to adopt a slightly different approach, according to another equation, described by Brodbelt and co-workers [41], which has been demonstrated to be more correct in such cases and was specifically applied to the study of the interactions between DNA and the derivatives:

% bound DNA (%b) = 100 • (I(1:1) + I(2:1))/(I(DNA) + I(1:1) + I(2:1))

This parameter (%b) represents the percentage of DNA bound to the ligand.
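A small numerical sketch of Equations (1)-(6) and of the %b formula as described above (variable names are ours; the intensities are the relative peak intensities read from a spectrum):

```python
def esi_ms_binding(i_dna, i_11, i_21, c_dna0, c_drug0):
    """K1, K2, ligand bound and % bound DNA from relative ESI-MS peak intensities."""
    total = i_dna + i_11 + i_21
    # Equation (4): species concentrations proportional to relative intensities
    dna_free = c_dna0 * i_dna / total
    c_11 = c_dna0 * i_11 / total
    c_21 = c_dna0 * i_21 / total
    # Equation (5): free drug concentration by mass balance
    drug_free = c_drug0 - c_11 - 2.0 * c_21
    # Equations (1)-(2): stepwise association constants
    k1 = c_11 / (dna_free * drug_free)
    k2 = c_21 / (c_11 * drug_free)
    # Equation (6): total amount of ligand bound
    ligand_bound = c_dna0 * (i_11 + 2.0 * i_21) / total
    # %b: percentage of DNA engaged in a complex
    pct_bound_dna = 100.0 * (i_11 + i_21) / total
    return k1, k2, ligand_bound, pct_bound_dna
```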
Sequence Preparation
All oligonucleotides were purchased from Eurogentec (Seraing, Belgium) and used without any further purification. Analyses were performed on the following oligonucleotides: where HEG is a hexaethyleneglycol linker [(-CH2-CH2-O-) 6 ] to make the hairpin loop, TAMRA (6-carboxytetramethylrhodamine) is the acceptor fluorophore, FAM (6-carboxyfluorescein) is the donor. The sequences were stored at −20 °C as 20 μM stock solutions in water. The oligonucleotides were annealed as 400 nM (2×) stock solutions in FRET buffer (potassium cacodylate 60 mM, pH = 7.4; cacodylic acid purchased from Sigma, Gillingham, UK), heating at 85 °C for 10 min, then slowly cooling to room temperature. The final concentration of the DNA in the FRET plate was 200 nM.
Sample Preparation and Measurement
The drugs were stored at 4 °C as 10 mM stock solutions in DMSO. The original stocks were first diluted with a 1 mM aqueous solution of HCl to 1 mM concentration, and the further dilutions were performed in FRET buffer, in order to obtain 2× stock solutions of the final concentrations. The experiments were performed in 96-well plates (MJ Research, Waltham, MA, USA). Briefly, each well was loaded with 50 μL of the 400 nM DNA solution, together with either 50 μL of the 2× stock solutions of the drug in FRET buffer (final volume per well = 100 μL), or 50 μL of FRET buffer for the blank. All the measurements were taken on a DNA Engine Opticon (MJ Research) with excitation at 450-495 nm and detection at 515-545 nm. Fluorescence readings were taken at 0.5 °C intervals over the range 30-100 °C; a constant temperature was maintained for 30 s prior to each reading. All the experiments were performed in triplicate [44].
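Melting temperatures, and hence the ligand-induced stabilization ΔTm, are typically extracted from such FRET melting curves by normalizing the donor emission and locating the temperature of half-transition; a sketch of this common procedure (the exact fitting used by the authors is not stated):

```python
import numpy as np

def melting_temperature(temps, fluorescence):
    """Tm as the temperature at which the normalized melting curve crosses 0.5."""
    f = np.asarray(fluorescence, dtype=float)
    f = (f - f.min()) / (f.max() - f.min())      # normalize donor (FAM) emission to [0, 1]
    return float(np.interp(0.5, f, temps))       # assumes emission rises with temperature

def delta_tm(temps, curve_with_ligand, curve_blank):
    """Ligand-induced stabilization relative to the DNA-only (blank) curve."""
    return melting_temperature(temps, curve_with_ligand) - melting_temperature(temps, curve_blank)
```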
Conclusions
Four different xanthene and xanthone series are introduced here as new G-quadruplex interactive compounds. Molecular modeling studies especially docking, have elucidated what may be the characteristics needed to improve each series of new compounds. The different series of G-quadruplex binding ligands has been synthesized in a small number of steps, each using a different synthetic approach. All the compounds synthesized contain an aromatic core and several side chains, in order to establish the best parameters for interaction with G-quadruplex. ESI-MS assays appear to be promising as a rapid method to evaluate G-quadruplex binding and selectivity with respect to duplex DNA.
The results show that different molecular features contribute to the efficiency of G-quadruplex interactive compounds in binding and stabilizing different G-quadruplex structures. In particular, when the π-π interactions are the same in a series of homologous compounds, such as the xanthene or the xanthone derivatives of the first three series, the length and basicity of the side chains play a major role in modulating the behavior of the different compounds, as previously reported for several other classes of compounds [4,5,10].
The most interesting ligand for targeting telomeric quadruplex DNA is the trisubstituted compound XA3c (14). This ligand showed high binding affinity by mass spectral assays to the telomeric G-quadruplex in potassium solution, as confirmed by FRET experiments. This compound merits further biological studies as a potential anti-telomerase agent. Moreover, the same molecule could be exploited as a stabilizing agent of G-quadruplex-based aptamers, in line with previous studies carried out by our and other groups [46][47][48].
iPSC-Derived Pancreatic Progenitors Lacking FOXA2 Reveal Alterations in miRNA Expression Targeting Key Pancreatic Genes
Recently, we reported that forkhead box A2 (FOXA2) is required for the development of human pancreatic α- and β-cells. However, whether miRNAs play a role in regulating pancreatic genes during pancreatic development in the absence of FOXA2 expression is largely unknown. Here, we aimed to capture the dysregulated miRNAs and to identify their pancreatic-specific gene targets in pancreatic progenitors (PPs) derived from wild-type induced pluripotent stem cells (WT-iPSCs) and from iPSCs lacking FOXA2 (FOXA2–/–iPSCs). To identify differentially expressed miRNAs (DEmiRs), and genes (DEGs), two different FOXA2–/–iPSC lines were differentiated into PPs. FOXA2–/– PPs showed a significant reduction in the expression of the main PP transcription factors (TFs) in comparison to WT-PPs. RNA sequencing analysis demonstrated significant reduction in the mRNA expression of genes involved in the development and function of exocrine and endocrine pancreas. Furthermore, miRNA profiling identified 107 downregulated and 111 upregulated DEmiRs in FOXA2–/– PPs compared to WT-PPs. Target prediction analysis between DEmiRs and DEGs identified 92 upregulated miRNAs, predicted to target 1498 downregulated genes in FOXA2–/– PPs. Several important pancreatic TFs essential for pancreatic development were targeted by multiple DEmiRs. Selected DEmiRs and DEGs were further validated using RT-qPCR. Our findings revealed that FOXA2 expression is crucial for pancreatic development through regulating the expression of pancreatic endocrine and exocrine genes targeted by a set of miRNAs at the pancreatic progenitor stage. These data provide novel insights of the effect of FOXA2 deficiency on miRNA-mRNA regulatory networks controlling pancreatic development and differentiation. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1007/s12015-023-10515-3.
Introduction
Forkhead Box A2 (FOXA2) is one of the earliest transcription factors (TFs) expressed during pancreatic development and remains expressed in all types of pancreatic cells [1]. During human pancreatic organogenesis, FOXA2 starts to be expressed at 4 weeks of gestation and continues onwards [2][3][4]. Previous studies demonstrated that FOXA2 controls the expression of several TFs and genes involved in pancreatic endocrine cell fate and β-cell functionality [5,6]. Using human pluripotent stem cells (hPSCs), we and others reported that FOXA2 plays very important roles during human pancreatic and hepatic development [7][8][9]. A recent study reported that heterozygous missense variants in FOXA2 can lead to monogenic diabetes [10]. Another study showed that in humans, risk alleles of type 2 diabetes (T2D) are associated with FOXA2-bound enhancers [11]. These findings indicate the contribution of FOXA2 defects to diabetes development and its important role during pancreatic endocrine differentiation.
Recent progress in human induced PSC (hiPSC) technology has paved the way for many essential applications that could be used for disease modeling, targeted therapy, drug screening, and precision medicine. Therefore, here, we take advantage of our recently established FOXA2 knockout hiPSC (FOXA2 -/-iPSC) model to identify the alterations in the miRNA and mRNA profiles in PPs lacking FOXA2 to understand the miRNA-mRNA regulatory networks regulating pancreatic development. Our results showed that loss of FOXA2 leads to the upregulation of numerous miRNAs targeting key PP genes involved in pancreatic exocrine and endocrine development.
Immunocytochemistry
Immunostaining was performed on differentiated iPSCs as previously reported [32,36]. Cells were washed once with PBS, then 4% paraformaldehyde (PFA) was added to the cells for 20 min on a shaker at room temperature. The cells were then washed with tris-buffered saline + 0.5% Tween 20 (TBST) three times at 10-minute intervals on a shaker. Cells were then permeabilized for 15 min at room temperature using phosphate buffered saline (PBS) + 0.5% Triton X-100 (PBST) twice, and later blocked overnight with 6% Bovine Serum Albumin (BSA) in PBST at 4 °C. Afterwards, guinea pig anti-PDX1 (1:500, ab47308, Abcam) and mouse anti-NKX6.1 (1:2000, F55A12-C, DSHB) primary antibodies diluted in 3% BSA in PBST were added to the cells and incubated overnight at 4 °C. Cells were washed three times with TBST and then Alexa Fluor secondary antibodies (ThermoFisher Scientific) diluted in PBS (1:500) were added for 1 h at room temperature, then the cells were washed again three times using TBST. Cell nuclei were stained for two minutes with Hoechst 33258 (DAPI) diluted 1:5000 in PBS (Life Technologies, USA). After washing three times with PBS, images were captured using an inverted fluorescence microscope (Olympus).
Western Blotting
Total protein was extracted from one well of a 6-well plate using RIPA lysis buffer with protease inhibitor (ThermoFisher Scientific). Measurement of protein concentration was done using the Pierce BCA kit (ThermoFisher Scientific). 20 µg of total protein were separated by SDS-PAGE and transferred onto PVDF membranes. Membranes were blocked with 10% skimmed milk in TBST then incubated with rabbit anti-FOXA2 (1:4000, #3143, Cell Signaling) and mouse anti-β-actin (1:10,000, sc-47778, Santa Cruz) primary antibodies overnight at 4 °C. Membranes were washed with TBST, then horseradish peroxidase-conjugated secondary antibodies (Jackson Immunoresearch) diluted in TBST (1:10,000) were added for 1 h at room temperature, then washed again using TBST. Membranes were developed using SuperSignal West Pico Chemiluminescent substrate (Pierce, Loughborough, UK) then visualized using the iBright™ CL 1000 Imaging System (Invitrogen).
RNA Extraction and RT-qPCR Analysis
1 × 10 6 cells were collected using 700 µL of TRIzol Reagent (Life Technologies), then total RNA extraction was performed using Direct-zol™ RNA Miniprep (Zymo Research, USA). For mRNA, cDNA was synthesized from 1 µg of RNA using the SuperScript™ IV First-Strand Synthesis System following the manufacturer's protocol (ThermoFisher Scientific, USA). RT-qPCR was performed using GoTaq qPCR SYBR Green Master Mix (Promega, USA) and run in triplicate. Average Ct values were normalized to the WT samples for each gene tested. GAPDH was used as an endogenous control (primer details are listed in Supplementary Table 2).
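The normalization described above corresponds to the widely used 2^-ΔΔCt method (assumed here); a minimal sketch with GAPDH as the endogenous control and WT as the calibrator:

```python
def relative_expression(ct_gene, ct_gapdh, ct_gene_wt, ct_gapdh_wt):
    """Fold change of a gene in a sample relative to WT by the 2^-ddCt method."""
    d_ct_sample = ct_gene - ct_gapdh          # normalize to the endogenous control
    d_ct_wt = ct_gene_wt - ct_gapdh_wt
    dd_ct = d_ct_sample - d_ct_wt             # normalize to the WT calibrator
    return 2.0 ** (-dd_ct)

# Illustrative numbers only: a gene with Ct 26.5 (KO) vs 24.0 (WT) and GAPDH Ct 18.0
# in both samples gives 2^-(8.5 - 6.0) ≈ 0.18, i.e. ~5.7-fold lower expression in the KO.
```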
Differential Gene Expression Analysis
Following the manufacturer's protocol, the NEBNext Poly(A) mRNA Magnetic Isolation Kit (NEB, E7490) was used for capturing mRNA from 1 µg of total RNA. Generation of RNA-sequencing (RNA-seq) libraries was done using the NEBNext Ultra Directional RNA Library Prep Kit (NEB, E7420L), and libraries were sequenced using the Illumina HiSeq 4000 system. Raw data were converted to FASTQ files using Illumina BCL2Fastq Conversion Software v2.20 while running quality controls in parallel. Paired-end FASTQ files were subsequently aligned to the GRCh38 reference genome using the built-in module and default settings in CLC Genomics Workbench v21.0.5. Normalized expression data (TPM, transcripts per million) were subsequently imported into the AltAnalyze v.2.1.3 software for differential expression analysis as we described before (https://doi.org/10.3390/cancers13215350). For identifying DEGs, genes with log2 fold change (FC) > 1 or < -1 and a P-value < 0.05 were considered. Gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses were performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID) [37].
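The DEG cutoff described above can be applied directly to a table of per-gene fold changes and P-values; a pandas sketch (column names are assumptions, not the actual output format of AltAnalyze):

```python
import numpy as np
import pandas as pd

def filter_degs(df: pd.DataFrame, lfc_col="log2FC", p_col="pval",
                lfc_cutoff=1.0, p_cutoff=0.05) -> pd.DataFrame:
    """Keep genes with |log2 fold change| > 1 and P-value < 0.05."""
    mask = (df[p_col] < p_cutoff) & (df[lfc_col].abs() > lfc_cutoff)
    out = df.loc[mask].copy()
    out["direction"] = np.where(out[lfc_col] > 0, "up", "down")
    return out.sort_values(lfc_col)
```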
Differential miRNA Expression and Potential Target Analysis
miRNA expression profiling was conducted on differentiated and collected PP total RNA samples from WT and FOXA2 -/-iPSCs. From the extracted total RNA, ~ 100 ng was used for miRNA library preparation following the manufacturer's instructions of the library kit (E7560S, New England BioLabs Inc., USA). The amplified cDNA constructs were purified using the Monarch PCR purification kit (Biolabs, New England). MicroRNA analysis was carried out in CLC genomics workbench 20.0 using built-in small RNA analysis workflow. miRNA count reads were normalized using the TMM (trimmed mean of M values) normalization method and log2 CPM (Counts per Million) values were subsequently subjected to differential analysis. A log2 FC > 1 with a P-value < 0.05 was used as a cutoff to determine the differentially expressed miRNA in FOXA2 -/-iPSCs versus WT-iPSCs. Pathway analysis and the microRNA target filter were employed to identify potential miRNA-mRNA networks using Ingenuity Pathway Analysis (IPA) software (QIAGEN, Germany).
Statistical Analysis
At least three biological replicates were used in most of the experiments, otherwise technical replicates were used for statistical analyses. Statistical analysis was performed using unpaired two-tailed student's t-test by Prism 8 software. Data are represented as mean ± standard deviation (SD).
Identification of Differentially Expressed Genes in iPSC-Derived Pancreatic Progenitors Lacking FOXA2
To investigate the effects of FOXA2 loss on mRNA and miRNA expression in PPs, we used two CRISPR/Cas9generated FOXA2 -/-iPSC lines with their isogenic controls (WT-iPSCs) as we recently reported [7]. Generated iPSCs were differentiated into PPs that co-express PDX1 and NKX6.1 using our established differentiation protocol ( Fig. 1A) [33]. The expression levels of FOXA2 in WT-and FOXA2 -/-PPs were validated at protein level using Western blotting where there was a clear absence of FOXA2 band in FOXA2 -/-PPs ( Fig. 1B). At PP stage, FOXA2 -/-PPs showed a significant decrease in the expression levels of the two key progenitor TFs, PDX1 and NKX6.1, as indicated by immunocytochemistry and RT-qPCR (Fig. 1C, D) which is concordant with our previously reported data [7].
Discussion
FOXA2 is an important TF that starts to be expressed at a very early stage of pancreatic development, where the first expression is detected at the definitive endoderm stage and continues in all stages. Our recent study showed that loss of FOXA2 during pancreatic differentiation of iPSCs prevents the formation of α- and β-cells [7]. However, there are currently no data available on the effects of FOXA2 deficiency on the expression pattern of miRNAs and their specific targets in PPs. Here, we provide evidence that FOXA2 deficiency is associated with significant alterations in the expression levels of miRNAs targeting key pancreatic genes at the PP stage. The alterations in miRNA expression in PPs derived from iPSCs lacking FOXA2 may reflect an impairment in pancreatic differentiation. A direct role for FOXA2 in regulating the expression of selected miRNAs warrants further investigation. PPs are characterized by the expression of several TFs and genes involved in directing the PPs into different types of pancreatic cells (endocrine, exocrine, and ductal cells). Our recent report showed that loss of one FOXA2 allele in iPSCs generated from a patient with FOXA2 haploinsufficiency significantly reduced the expression of pancreatic TFs involved in the development of the endocrine pancreas [7]. In agreement with these findings, our RNA-Seq and RT-qPCR results showed that loss of FOXA2 significantly downregulates the expression of key endocrine-associated genes, such as PDX1, NKX6.1, NEUROG3, NEUROD1, NKX2.2, RFX6, GLIS3, HES6, ARX, PAX4, PAX6, MNX1, GATA6, FEV, INSM1, TCF7L2, GP2, and CHGA, which were targeted by several upregulated miRNAs in PPs lacking FOXA2. ARX and PAX4 are known to be essential for the formation of pancreatic α-cells and β-cells, respectively [38,39]. Furthermore, the downregulated genes associated with exocrine and ductal cell specification, such as PTF1A, CPA1, CPA2, SOX9, GATA4, and ONECUT1, were also targeted by several upregulated miRNAs, indicating that FOXA2 is not only essential for pancreatic endocrine development, but also plays an important role in pancreatic exocrine and ductal development. Many of those downregulated genes are associated with diabetes and pancreatic development. These findings indicate that lack of FOXA2 negatively impacted the iPSC differentiation into exocrine and endocrine pancreas through downregulating the expression of essential pancreatic developmental genes.
miRNAs are known to play essential roles in post-transcriptional regulation through targeting mRNAs [40,41]. The role of miRNAs in regulating pancreatic β-cell development and function has been previously reported [12]. However, limited studies have tackled the role of miRNAs in regulating PP development. In the current study, we noticed that most DEmiRs had several predicted targets in PPs. On the other hand, most of the key pancreatic targets were predicted targets for at least two DEmiRs. For example, we found that the miR-184 expression level was upregulated, and among its predicted targets is NKX6.1, which is a key TF in pancreatic endocrine development and later becomes restricted to pancreatic β-cells [42,43]. It has been reported that miR-184 participates in regulating β-cell expansion and negatively correlates with insulin biosynthesis and secretion [44][45][46].
We have also identified an upregulation of miR-9-5p in our FOXA2-/- PPs, targeting PDX1, ONECUT1, and ARX, which were significantly downregulated. Previous studies have linked the upregulation of the miR-9 cluster of miRNAs with glucose-stimulated insulin secretion impairments [26]. This cluster has also been identified as a regulator of insulin exocytosis and secretion machinery through modulating Sirt1 expression [52]. A previous study showed that miR-9 targets Onecut2 and decreases its mRNA expression in pancreatic β-cells, which subsequently leads to an increase in the Onecut2 downstream target, granuphilin (a negative regulator of insulin exocytosis) [26]. miR-124a (i.e., a precursor for miR-124-3p/5p) is expressed in human islets and has been reported to be associated with T2D. It has been found that miR-124a represses important target genes involved in pancreatic β-cell function and insulin secretion [53], including Foxa2 and Pdx1 [54]. Although our analysis did not show NKX6.1 as a predicted target for miR-124-5p, it has recently been reported that miR-124-5p induces pancreatic β-cell differentiation by regulating NKX6.1 expression [55]. In this study, miR-124-3p, from the same miRNA cluster, was significantly increased in FOXA2-/- PPs and its predicted targets were the downregulated pancreatic TFs NEUROG3, PROX1, RFX6, and GATA6, as well as NEUROD1, SOX9, and FOXA2, which were experimentally validated in previous research [53,56]. Our results showed miR-92a-2-5p among the upregulated DEmiRs that targets eight downregulated key pancreatic TFs, including FOXA2, NKX6-1, FEV, NKX2-2, ONECUT1, SOX9, HNF1B, and TCF7L2. A recent study found that miR-92a-2-5p regulates insulin production and pancreatic β-cell apoptosis [57,58]. Our results showed increased expression of miR-577 upon FOXA2 loss. Previous studies showed that miR-577 inhibits pancreatic β-cell activity and survival by targeting FGF21, which promotes β-cell function and survival through the AKT signaling pathway [59,60]. miR-204 was found to be associated with the endocrine part of pancreatic islets and insulin regulation [61,62]. miR-15a-5p was also found to regulate insulin production by suppressing expression of the UCP-2 gene (a mitochondrial anion carrier that reduces oxidative stress) [63], resulting in more insulin biosynthesis [64]. On the other hand, upregulation of miR-146a/b has been found to increase cytokine-induced β-cell apoptosis [65]. From these results, we speculate that the lack of FOXA2 at the PP stage can cause alterations of several miRNAs important for pancreatic β-cell development and function from an early stage of pancreas development, before reaching the mature β-cell stage, causing the cells to follow a different trajectory. miRNAs have also been identified as epigenetic modifiers that regulate gene expression levels without targeting the mRNA sequence itself but by targeting important enzymes including DNA methyltransferases (DNMTs), histone methyltransferases (HMTs), and histone deacetylases (HDACs) [66,67]. In addition, miRNAs are themselves subject to epigenetic modification and regulation, such as DNA methylation and RNA/histone modifications. The interchangeable relationship between miRNAs and epigenetic modifications forms the basis of a miRNA-epigenetic feedback loop that can affect cellular processes [68], physiological functions, and disease conditions [69].
Recently, a study has discovered that FOXA2 physically interacts with ten-eleven translocation methylcytosine dioxygenase 1 (TET1), in which β-cell specification is significantly hindered upon TET1 loss [70]. This is a good example of TF crosstalk with epigenetic regulators in regulating pancreatic β-cell differentiation and specification. Our miRNA-seq data identified several DEmiRs which have been previously associated with epigenetic modifications in different tissue samples [71,72]. We predict that the lack of FOXA2 does not only affect miRNAs regulating other genes, but also affects miRNAs regulating epigenetic modifications that can directly affect histone modification for accessing DNA for transcription. It was previously found that alterations in circulating miRNA expression occur in diabetic patients, in which they can even be used as biomarkers for diabetes prediction and progression [73,74]. miRNAs can also be used as biomarkers for pancreatic cancer progression and prognosis [75]. Furthermore, alterations in gene regulation by miRNAs can be the cause of some forms of pancreatic cancers, as some miRNAs can act as oncogenes and are associated with poor disease prognosis [76]. Another important aspect of miRNAs is that they can serve as potential therapeutic agents for regenerative medicine [77,78]. Recent advances in research have led to the development of miRNA delivery systems to regulate gene expression [77]. Therefore, our identified DEmiRs may serve as potential novel biomarkers or therapeutic modulators for diabetes or pancreatic cancer diseases. However, further functional validation is required to provide a proof-of-concept for the link between identified miRNAs and diseases.
In conclusion, we showed that FOXA2 loss led to dysregulation of several miRNAs and mRNAs expressed in iPSC-derived PPs. Our findings demonstrated that FOXA2 is not only crucial for endocrine islet development, but is also essential for exocrine pancreas development. Integrating miRNA and mRNA profiling results revealed that the potential targets of DEmiRs identified in this study are known to play an essential role in pancreatic development and function. These data provide proof of the regulatory relationship between pancreatic TFs and miRNAs in controlling the expression of main pancreatic differentiation drivers during pancreatic islet differentiation. Also, the data presented here will serve as a platform for future studies focusing on understanding the function of the identified DEmiRs. In addition, further understanding of miRNA-mRNA and miRNA-epigenetic feedback loops would help in identifying potential novel therapeutic strategies and targets that are not limited to FOXA2 mutations but include cancer and regenerative medicine.
Funding Open Access funding provided by the Qatar National Library. This work was funded by grants from Qatar Biomedical Research Institute (QBRI) (Grant No. IGP3 and QBRI-HSCI Project 1).
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Declarations
Ethical Approval Not applicable.
Competing Interests
The authors declare no competing interests in this manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 4,216.8 | 2023-02-07T00:00:00.000 | [
"Medicine",
"Biology"
] |
Allocating health care resources: a questionnaire experiment on the predictive success of rules
Background The topic of this paper is related to equity in health within a country. In public health care sectors of many countries decisions on priority setting with respect to treatment of different types of diseases or patient groups are implicitly or explicitly made. Priorities are realized by allocation decisions for medical resources where moral judgments play an important role with respect to goals and measures that should be applied. The aim of this study is to explore the moral intuitions held in the German society related to priorities in medical treatment. Methods We use an experimental questionnaire method established in the Empirical Social Choice literature. Participants are asked to make decisions in a sequence of distributive problems where a limited amount of treatment time has to be allocated to hypothetically described patients. The decision problems serve as an intuition pump. Situations are systematically varied with respect to patients’ initial health levels, their ability to benefit from treatment time, and the amount of treatment time available. Subjects are also asked to describe their deliberations. We focus on the acceptance of different allocation principles including equity concepts and utilitarian properties. We investigate rule characteristics like order preservation or monotonicity with respect to resources, severity, or effectiveness. We check the consistency of individual choices with stated reasoning. Results The goals and allocation principles revealed show that the moral intuitions held by our experimental subjects are much more complex than the principles commonly applied in health economic theory. Especially, cost-utility principles are rarely applied, whereas the goal of equality of health gain is observed more often. The principle not to leave any patient untreated is very dominant. We also observe the degree to which certain monotonicity principles, known from welfare economics, are followed. Subjects were able to describe their moral judgments in written statements. We also find evidence that they followed their respective intuitions very consistently in their decisions. Conclusions Findings of the kind presented in this paper may serve as an important input for the public and political discussion when decisions on priorities in the public health care sector are formed. Electronic supplementary material The online version of this article (doi:10.1186/s12939-017-0611-1) contains supplementary material, which is available to authorized users.
Background
When it comes to the allocation of scarce healthcare resources, decision makers are found to consider a plethora of factors and to apply several often opposing decision criteria, of which equity, fairness and effectiveness are the most prominent ones [1]. Both empirically and normatively oriented researchers from various fields identify and characterize outcome-based allocation rules which should or do in fact underlie allocative decisions. In the health economic literature, a growing, concurrent concern for efficiency and equity in deciding on the allocation of healthcare resources has arisen [2], but many more distributive norms are vividly discussed.
In our exploratory study, we shed more light on the acceptance of different allocation principles and typically assumed characteristics of allocation rules applied by laypersons. We distinguish between principles that are related to goals that should be reached by distributing [3,4], such as equity principles or the utilitarian principle, and properties of sequences of allocations chosen in sets of problems, such as different monotonicity properties. More precisely, we used an established questionnaire design [5,6] to present a sequence of abstract hypothetical allocation problems of scarce medical resources to participants. Student respondents in the role of a physician had to solve a fixed set of 16 allocation problems. In each situation participants had to distribute a given budget of treatment time among two hypothetical patients who differed with respect to initial health level and ability to benefit from treatment per unit time input, which can be interpreted as time-effectiveness. Subjects were also asked to note their deliberations.
The design of the study enabled us to focus our research on three levels. On the first level, we looked at situations separately and evaluated the predictive power of different allocation principles traditionally analysed and investigated by health economists who usually focus on the trade-off between equity and efficiency [2,[7][8][9]. One distinctive feature of relevant principles is the "good" to be distributed, i.e. the distribuendum [10]. Efficiency concerns usually correspond to the maximization of health gains, but most empirical studies find only weak support for such maximisation behaviour [6,[11][12][13][14]. In contrast, equity, and in particular equality principles, may concern several spheres including health gains, outcomes, and medical resources. While support for different egalitarian notions in survey experiments depends on the context [15], equality of health gains is often found to dominate other notions [6,14,16]. Additionally, proportionality concepts constitute another alternative [13,17]. In our study proportionality is related to different abilities to benefit from treatment per unit time input and may focus either on the allocation of resources or gains. Finally, it is regularly observed that participants in experiments trade off allocation principles, identify compromises and often apply conditional rules [18][19][20]. We have also investigated this phenomenon, which becomes particularly apparent in a content analysis of the subjects' deliberations.
The application of allocation principles may be accompanied by various additional considerations [21]. First of all, survey respondents often reject the complete exclusion of patients from treatment in micro-level, but not in macro-level contexts [6,13,22]. We checked the relevance of the underlying non-zero principle for our particular setting. Second, ranking individuals according to the size of the distribuendum is generally found to be an often-applied consideration when allocating scarce resources [23,24]. We investigated the fulfilment of order preservation with respect to health levels in the sense that the better-off patient should remain better off after the allocation of treatment time.
Concerning the second level of our analysis, the set of situations contained pairs specifically constructed to test the fulfilment of three monotonicity axioms under ceteris paribus (c.p.) conditions. First, it has been suggested that variations of the amount of resources available for distribution should influence all individuals in the same direction [25,26]. Therefore, we considered two hypotheses assuming unchanged initial health levels and time-effectiveness:
Strong resource monotonicity: If the available amount increases (decreases), both patients c.p. receive more (less) treatment time.
Weak resource monotonicity: If the available amount increases (decreases), both patients c.p. receive at least (at most) the same amount of treatment time as before.
Second, severity of illness may be used as an additional criterion for setting priorities in health care [13,27]. In our experiments, severity was changed by varying initial health levels while keeping time-effectiveness and the amounts of time available constant:
Strong severity monotonicity: If the initial health level increases (decreases), a patient c.p. receives less (more) treatment time.
Weak severity monotonicity: If the initial health level increases (decreases), a patient c.p. receives at most (at least) as much treatment time as before.
Contextual irrelevance of severity: The allocation of treatment time does not change if a health level changes, c.p.
In general, we use the expression "contextual irrelevance" to indicate that decisions do not change under variations of one specific dimension of the decision problem, c.p.
Third, respondents may react to a change in the ability to benefit from treatment per unit time in situations where initial health levels and amounts of time available are constant. How respondents solve the trade-off between efficiency and equity may depend on the contextual parameters. Participants who in a certain context aspire to an efficient allocation should allocate less to a patient if the time-effectiveness of her treatment decreases. In contrast, individuals who follow the goal of equality of gains should compensate patients for their decreasing effectiveness. Thus, opposing hypotheses emerged, assuming constant initial health levels and amounts of time:
Higher effectiveness monotonicity: If the time-effectiveness of their treatment increases (decreases), patients c.p. receive more (less) treatment time.
Lower effectiveness monotonicity: If the time-effectiveness of their treatment increases (decreases), patients c.p. receive less (more) treatment time.
Contextual irrelevance of effectiveness: The allocation of treatment time does not change if the effectiveness factor changes, c.p.
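A minimal sketch of how a pair of allocations can be classified under the resource monotonicity notions defined above; the severity and effectiveness checks follow the same pattern. The function and numbers are hypothetical illustrations, not the authors' evaluation code.

```python
def resource_monotonicity(before, after, budget_increased=True):
    """Classify a pair of allocations (t1, t2) chosen under two budgets."""
    d1 = after[0] - before[0]
    d2 = after[1] - before[1]
    if budget_increased:
        strong = d1 > 0 and d2 > 0    # both patients receive strictly more time
        weak = d1 >= 0 and d2 >= 0    # neither patient receives less time
    else:
        strong = d1 < 0 and d2 < 0
        weak = d1 <= 0 and d2 <= 0
    return {"strong": strong, "weak": weak}

# Hypothetical example: the budget rises and both patients gain treatment time
print(resource_monotonicity(before=(10, 20), after=(15, 25)))
```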
At the third level of our analysis, the focus was on individual decision-making regarding the entire sequence of decisions each respondent had to make. Since the order of situations was the same for all respondents, we could compare the development of decision behaviour. Furthermore, experimental studies tend to lack reliable insights into the "real" intentions and motivations of participants. In a somewhat arbitrary "revealed motive" ascription of cognitive processes, distributive choices are often interpreted ad hoc "as if" respondents applied certain distributive principles. This gap can at least partly be closed by incorporating qualitative elements and self-reports. Regardless of some well-known shortcomings, such as the fallacy of interpreting the absence of reported motives as the absence of such motives [28,29], corresponding techniques have proven to be an important tool when investigating distributive preferences regarding health care resources [30][31][32][33]. Hence, we also asked respondents to verbally describe how they proceeded when making their choices and applied a content analysis. A comparison of the sequence of individual choices and verbal statements facilitated connecting quantitative and qualitative findings and evaluating the consistency of answers.
The following sections describe the methodological steps applied, present results of all steps, and discuss them, respectively.
Experiment
After a pre-test with 17 professional health economists, our study was conducted in winter 2012 with 166 German university students attending either their first lecture on health economics in a Master course or a general Bachelor lecture at the law department. The entire questionnaire study was conducted during lecture time. Before answering the questions, respondents were informed that there was no time limit and that participation was entirely voluntary and anonymous. During the experiment, one of the authors, three student assistants, and, in the law lecture, also the lecturer were present. We created an "exam atmosphere" in the sense that students were not allowed to talk to each other or to look at the sheets of their neighbours. It took respondents up to 25 min to complete the questionnaire. In each lecture, only two individuals declined to participate, while in total 162 students agreed.
The questionnaire
The questionnaire is structured so as to facilitate investigating the validity of the different behavioural hypotheses. In total, 16 different allocation problems were presented to each respondent. All hypothetical situations contained information on the amount of treatment time available (q) and individual characteristics of two different patients (i = 1, 2) who might benefit from the units of time received (t_i). In the introduction of the questionnaire (see Additional file 1) participants were informed that patients differed, first, with respect to their current health state (S_i), which was measured on a scale reaching from zero (i.e. "death") to one hundred (i.e. "perfect health"), and, second, with regard to an effectiveness factor (e_i), which described their (constant) ability to benefit from each unit of treatment time. Based on these factors a linear "health production function" was assumed, of the form H_i = S_i + e_i · t_i, where H_i denotes the health state reached after treatment. The simple functional form should make it easier for respondents to understand the implications of different allocations and, in case they could not agree to any of the allocations offered, to make individual proposals. Since the present study intends to consider only a specific set of patients' attributes, it is explicitly stated that nothing is known about the causes of ill health and that patients are of the same age and have the same life expectancy. Furthermore, nothing is said about previous health levels, but it is pointed out that patients remain in the health status reached after treatment for the rest of their lives.
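A minimal sketch of the health production function as written above, computing the health gain and the final health state for a hypothetical split of the treatment-time budget between two patients; the numbers are illustrative only.

```python
def outcome(s, e, t):
    """Return (health gain, final health state) for one patient, capped at 100 points."""
    gain = e * t
    return gain, min(s + gain, 100)

# Hypothetical situation: budget q = 30 units split as (10, 20) between two patients
patients = [{"s": 70, "e": 1.0}, {"s": 30, "e": 2.0}]
for p, t in zip(patients, (10, 20)):
    print(outcome(p["s"], p["e"], t))
```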
In each situation in the questionnaire (Additional file 1), the problem-specific characteristics are stated at the top of the corresponding table. Below, different allocations of treatment time, resulting health gains and achievable health states are presented. This information is given line-by-line to make it easier for participants to focus on their preferred distribuendum. Table 1 provides an overview of characteristics for all 16 situations, allocations offered and possible principles. Several allocations offered in each situation are based on theoretical considerations, of course without assuming that respondents would be aware of these foundations. Furthermore, all situations contain proposals which are not theoretically grounded. Additionally, due to the explorative character of the study, participants also had the option to make individual proposals and, thereby, apply "non-standard" allocation rules. By systematically varying available units of treatment time, current health states or effectiveness factors, we can determine distinct monotonicity conceptions. Compared to the baseline case in situation 1, situations 10 to 16 assume a higher ratio of effectiveness factors. Here, situation 10 serves as a further baseline. In addition, available units of time are increased in six consecutive pairs of situations. Severity differences are varied by changing either one initial health level (situations 5 to 7) or both (situations 8, 9, 15, and 16). In other cases, the difference is left unchanged but levels are either varied by the same extent (situations 13 and 14) or switched (situations 3, 4, and 12).
These systematic variations between situations were subject to two feasibility constraints: First, resulting health levels could not exceed a value of 100 points. Second, to ease the computation for respondents all points should be a multiple of 5 or even 10 so that choices do not depend on the degree of calculative simplicity of solutions. From the content point of view, discussions during the pre-test highlighted the importance of illuminating the entire domain of possible health levels including the boundary areas. Furthermore, it was suggested to offer "intermediate" allocations between specific principles to allow for compromises. The construction of situations accommodated these constraints and suggestions.
Afterwards, participants were asked to give written accounts of their deliberations. Our aim was to stimulate respondents to think about the decisions and to encourage them to express their thoughts in a generalized self-characterization of their decision. The fact that all respondents went through the same sequence of situations enabled us to compare the development of their choices and statements. Finally, participants were asked to provide socio-demographic information including sex, age, field of study, perceived family income ten years ago, expected future income, political orientation, and whether the respondent has already completed a professional training.
Analysis
Answers are analysed in four major steps. We start by looking at aggregate results. First, the investigation of the proportion of individuals answering in accordance with different allocation principles offers preliminary insights into the general acceptance of competing notions and the effects of systematic variations. We apply Selten's measure of predictive success, which he developed to evaluate area predictions [34]. The order preservation hypothesis can be interpreted as such an area prediction, since in each situation there are several allocations in accordance with it. For each situation, we calculate the area a as the share of allocations fulfilling the property relative to all options offered. The hit rate r is defined by the frequency of individual answers in accordance with the property relative to the number of all answers given. The measure of predictive success, m = r − a, is an indicator for the quality of the prediction in each situation. One-tailed Binomial tests are applied to evaluate whether m is significantly positive.
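A minimal sketch of this computation with hypothetical counts; SciPy's `binomtest` is used here to stand in for the one-tailed Binomial test of whether the hit rate exceeds the area.

```python
from scipy.stats import binomtest

def predictive_success(hits, n_answers, n_predicted, n_options):
    """m = hit rate - area; one-tailed test that the hit rate exceeds the area."""
    r = hits / n_answers          # hit rate
    a = n_predicted / n_options   # area of prediction
    test = binomtest(hits, n_answers, p=a, alternative="greater")
    return r - a, test.pvalue

# Hypothetical situation: 5 of 8 offered allocations satisfy the property,
# and 140 of 162 answers fall inside that area
m, p = predictive_success(hits=140, n_answers=162, n_predicted=5, n_options=8)
print(f"m = {m:.3f}, p = {p:.4f}")
```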
Second, we turn to individual decisions and pairwise comparisons of selected situations. The fulfilment of the resource, severity, and effectiveness monotonicity hypotheses is investigated by six, four, and two comparisons, respectively. (Note: in situations 5, 9, 11, and 13 one answer is missing. In situation 14, the questionnaires differed between the two samples, viz. the health economics and the law lecture: the former did not contain the proposal (15,25); in the health economics lecture, four respondents stated this allocation as a personal proposal.) Since the monotonicity hypotheses do not predict unique combinations of choices, but areas, we evaluate the quality of our hypotheses again by using Selten's measure of predictive success [34]. Here the measure is applied to pairs of choices in specific pairs of situations that can be compared with respect to the monotonicity property under consideration. If for a given context there are competing area theories, as there are two or three in our analysis of each monotonicity concept, according to Selten's analysis the one with the higher m is the better theory. Again, one-tailed Binomial tests are utilized to assess whether m is significantly different from zero. In Additional file 2 (Tables S3 to S7), hit rates, areas of prediction and measures of predictive success for each pair of situations are presented and calculations are explained in detail. In Additional file 3, the areas of prediction used in Tables S4 to S6 of Additional file 2 are constructed in detail.
As a third step, we analyse the data on an individual level and conduct a content analysis of verbal answers of all respondents. First, we developed categories driven by classical theoretical allocation rules. Afterwards, both authors separately organised verbal statements into categories, compared their classifications and discussed potential disagreements. Post-hoc, content not assigned to any category was used to identify new categories with supplementary characteristics of rules. A second round of classifications followed. Finally, a student assistant not yet involved in the process organised all comments into theory-driven and post-hoc categories. Her resulting classifications were compared with those of the authors. Differences were discussed until final agreement was reached.
In a fourth step, we run compatibility tests. For each principle classified in the content analysis, we identify all allocations in each situation which are in accordance with it. Some of them imply one single allocation in each scenario; for others, the corresponding areas of prediction are larger. We then count how often choices in accordance with each principle can actually be observed and calculate actual average hit rates for the total sample. In Table 2, these values are then compared with the average hit rates of the subsamples of those respondents who have verbally described the corresponding principle.
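A minimal sketch of this compatibility check: for each respondent and principle, count how many of the chosen allocations fall inside the principle's compatible set and average over situations. The data structures and numbers are hypothetical.

```python
def average_hit_rate(choices, compatible):
    """choices: situation -> chosen allocation; compatible: situation -> set of allowed allocations."""
    hits = sum(1 for s, alloc in choices.items() if alloc in compatible[s])
    return hits / len(choices)

# Hypothetical respondent over three situations, checked against one principle
choices = {1: (15, 15), 2: (10, 20), 3: (5, 25)}
compatible = {1: {(15, 15)}, 2: {(10, 20), (15, 15)}, 3: {(10, 20)}}
print(average_hit_rate(choices, compatible))  # 2/3 of choices fit the principle
```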
Results
About 56% of the participants in our experiment were studying at the law department, while 44% were enrolled in Economics, Business Administration, or Business Informatics. Comparing answering patterns of different socio-demographic groups in each situation, we cannot detect any comprehensive influences from individuals' sex, age, past or future income, professional training, or field of study. With respect to political orientation, we find that in six out of sixteen situations left-wing respondents more often supported solutions in accordance with equality of gains, while right-wing participants more frequently selected solutions leading to equality of resources.
Accordance with classical allocation principles and order preservation
For each situation, Table 1 reports frequencies of answers for all allocations offered. We focus on major results. Very few individual proposals occurred. (Notes to Table 2: the term "average hit rate" denotes the average fraction of actual choices in all situations fitting to the corresponding principle; see Additional file 2: Table S7 for details on the calculation of areas of prediction and actual average hit rates. Conditional rules and rules utilising threshold values are very diverse and do not always and in all situations result in clear allocation proposals.) First, we look at the frequencies of choices in accordance with classical theoretical conceptions. In 11 out of 16 situations, allocations in accordance with equality of health gains (EG) received the highest support. In particular, it is preferred more often than the utilitarian principle (U). Nevertheless, for example, the comparison between situations 1 and 8 reveals that equality of resources (ER) is more attractive if initial health levels are more equal. In contrast, equality of health states (EH) seems to be especially unattractive in situations where it corresponds to allocations which leave the better-off patient completely untreated.
Probably this concern also made many respondents choose a compromise like (5,25) in situation 7 or (5,35) in situation 16 rather than allocations in accordance with EG or EH.
In scenarios 3, 4 and 12, where EG seems to be less attractive, the worse-off patient 1 is also characterised by a higher effectiveness factor. Consequently, some respondents opted for proposals giving more to patient 1 than EG would do. One possible explanation is that participants balanced different arguments in favour of each patient rather than adopting one single principle. This may also explain stronger support for allocations such as (15,45) in situation 2 or (15,25) in situation 11, which do not result from classical theoretical concepts.
Finally, allocations in accordance with proportionality to effectiveness factors rarely gained support if they focussed on resources (PR), but seemed to be more attractive with respect to health gains (PG), which coincides with ER due to the linear structure of the health production function.
Additional file 2: Table S3 summarises results on the measures of predictive success of order preservation. In 11 out of 14 situations, where order reversals of initial health levels were possible, a great majority kept the original hierarchy and the measure is significantly positive. In contrast, in situations 3, 4 and 12, the measure is significantly negative. We have already elaborated that respondents may be more in favour of EH here, which is not in accordance with order preservation in the strict sense.
Monotonicity properties
Turning to pairwise comparisons of situations, Additional file 2: Table S4 presents results on the measures of predictive success of weak and strong resource monotonicity. All measures of predictive success are positive and significant. Hence, there is strong evidence that participants pursued the goal not to reduce the health level of any patient if more treatment time is available. In three of six comparisons, weak monotonicity turns out to be the better area theory, under the other parameter constellations the theory that every patient should gain from higher amounts of treatment time has the highest predictive success. Weak monotonicity seems to yield better predictions in situations where patients are rather asymmetric in terms of current health state and of effectiveness of treatment (see situations 15 and 16).
For the three competing hypotheses, weak and strong severity monotonicity and contextual irrelevance of severity, Additional file 2: Table S5 reports the measures of predictive success for four pairwise comparisons of situations. In the first two cases, contextual irrelevance has the highest measure of predictive success, while the measure for strong severity monotonicity is even negative and significant. In contrast, in the latter two pairwise comparisons weak severity monotonicity has the highest predictive success, whereas the measure is even lowest for contextual irrelevance in the very last case. A closer look at the different contexts described by the situations reveals that in the first two pairs of situations the differences between the health states of both patients are reasonably small, whereas the latter two pairs of situations both contain situation 1 where the health state of patient 1 compared to patient 2 is remarkably higher. Hence, we are prepared to say that many respondents do not differentiate between states of severity if these are rather low and (thus) of similar size, whereas larger severity differences lead to some concern for severity. This finding will also be confirmed in our content analysis.
With respect to changes of effectiveness factors, the results summarised in Additional file 2: Table S6 reveal that all three hypotheses are of low quality in the two pairs of situations compared. Higher effectiveness monotonicity turns out to have negative measures of predictive success and therefore has to be rejected in both cases; lower effectiveness monotonicity has in both cases a predictive success close to zero. The best theory here seems to be contextual irrelevance of effectiveness.
Content analysis
One hundred and fifty-five out of 162 participants (96%) also provided verbal statements on the allocation rules applied. In Table 2 we distinguish between classical theory-driven principles, that is, different equality notions or utilitarianism, and further principles identified in the explorative part of the analysis. The second column reports how frequently each principle was mentioned. Note that very few respondents used proportionality arguments. Furthermore, those people had difficulty clearly signalling what it was that they wanted to allocate proportionally, so that the corresponding principles are omitted.
As expected from the quantitative results, EG has been mentioned more often than other ideas, while the maximisation of sums of health points received least support. Some respondents even mentioned utilitarian concerns but also explained why they departed from the related rule. Here is a typical example for a corresponding statement: "In general, it was important for me that the cumulative health state is rather high than low. However, I have never chosen the maximum to avoid that one person is particularly worse off compared to the other person." In total, these classical concepts appear in less than half of all verbal statements.
Regarding non-classical categories, more than 60% of all comments stated that no one should be completely excluded from treatment and, thus, supported the non-zero principle. Typical qualitative terms include "treat everybody", "both patients", and "no exclusion".
Another 28% of the respondents expressed a preference for the worse-off patient without necessarily demanding equality: "In general, I prefer equality of time. But if a person is in good health, the other person should receive more." This statement also belongs to the group of conditional rules. Every third respondent combined at least two principles and defined conditions for a switch from one principle to another by using terms such as "but", "if", or "however". Finally, several participants specified threshold values either to identify the aforementioned switching point or to develop a separate rule: "Then the aim was to reach a health level of 50. […] By this, life would be reasonably liveable." The results in Table 2 show that in general non-classical principles were mentioned more frequently than the classical concepts of equality or utilitarianism. Although we only asked respondents to describe the rules they developed, some stated their main motives. At least 28 respondents emphasised that they wanted to find a "fair" or "just" solution, while other motives were rarely mentioned. Hence, in our study justice concerns seem to be a prominent motivation.
Compatibility checks
A common characteristic of the four classical categories EH, EG, ER, and U is that in each situation considered each of them determines a single solution. Consequently, areas of prediction are identical as stated in the third column of Table 2 (see Additional file 2: Table S7, for details). In contrast, the concepts "no exclusion" and "preference for the sicker patient" constrain the list of compatible allocations rather than identifying a specific answer. Thus, it can be expected that more choices will fulfil these notions. Finally, several conditional rules and threshold values have been proposed, but they differ remarkably, so that joint statements about the rate of fulfilment in all situations are hardly possible.
The results reported in columns three to five of Table 2 facilitate evaluating the compatibility of verbal statements and choices in all situations. Some of the actual average hit rates for the entire sample are higher than the values of the corresponding areas of prediction. Especially the strong support for the "no exclusion" idea is visible from the actual average hit rates of about 95%. However, the differences also concern EG and, to a lesser degree, ER and "preference for the sicker patient". Hence, these considerations had a visible influence on some respondents' choices. If we focus only on subjects who explicitly described the corresponding principle, their actual average hit rates in column 5 are even higher for all principles compared to the total sample, and also to the corresponding area of prediction. The only exception is sum-maximisation. In summary, respondents have followed their verbally reported rules rather consistently.
Discussion
From our point of view, empirical work such as the present questionnaire study can be used to elicit the variety of different allocation intuitions and to identify characteristics of feasible and acceptable solutions for distributive problems in health care provision. The results of the study corroborate the pluralism and heterogeneity in basic conceptions of resource allocation in particular in medical resource allocation.
A unique principle applied by all respondents and in all situations does not exist. Instead, we observe a variety of different allocation principles. Nevertheless, aggregate frequencies of choices and especially verbal statements suggest that for many respondents health gain egalitarianism was the most important classical principle in several situations, while equality of health received more support if the worse-off patient also had a higher ability to benefit. These results confirm observations from several previous studies [6,14,16]. In contrast, health maximisation has regularly been rejected, which is also in line with previous results cited in the introduction. In our contexts, this is especially due to the fact that many respondents wanted to avoid any complete exclusion of patients from treatment.
Furthermore, we identified two specific compensation motives. On the one hand, several respondents withdrew from focussing on higher effectiveness and compensated for lower ability to benefit. On the other hand, many participants explained that they were prepared to give more to a patient if this person was clearly worse off.
Our results with respect to severity monotonicity endorse this effect. Hence, we conclude that different notions of effectiveness monotonicity are moderated by severity differences.
However, compensation motives also seem to have their limits. Despite stronger support for the worse-off patient, many respondents abstained from allocations in accordance with health egalitarianism in most situations. Consequently, order preservation with respect to health status before and after treatment has been fulfilled by an overwhelming majority of participants in almost all situations. This concept has already been identified as an important characteristic of allocation rules in different contexts [23,24], but it is remarkable that it is also relevant in health care allocation problems.
As regularly observed in empirical studies [18][19][20], many people report allocation rules that express compromises between competing allocation principles. The specific construction and systematic variation of situations allowed for a greater variety of different concerns and intermediate positions. In line with some earlier findings [21], respondents preferred stronger support for worse-off recipients of care but did not try to equalise health levels. They applied conditional rules, defined threshold values, or violated order preservation only if several arguments spoke in favour of supporting the worse-off patient. Future theoretical models should take hierarchies of principles and conditional rules into account and will have to deal with more sophisticated requirements for their application as revealed by our participants.
The content analysis forms an important complementary element to the decisions in the single situations. The high proportion of respondents who gave answers, often with long and detailed elaborations, together with the astonishing consistency between their described allocation rules and previous choices, makes us confident that participants took their tasks seriously.
The present study is subject to potential limitations. First, due to the simple linear structure of the health production function, distinct principles led to identical allocations in some of our situations. Simplified allocation problems only allow for a certain set of allocation principles, so that other prominent principles might be ignored by design [9,21]. However, to keep the calculations manageable for respondents this seems a price worth paying. Furthermore, remaining variations between solutions and among situations across the entire domain of possible health levels seem to be sufficient to differentiate among principles, to allow for compromises, and to examine the relevance of allocation principles also for very high or low levels. Second, all respondents answered decision problems in the same order. Obviously, there may be ordering effects in that previous answers influenced later responses. However, with regard to our aim to interpersonally compare consistency of sequences of decisions to verbal statements it was important to let all respondents work through the same series of problems in exactly the same order. Third, the sample consisted only of students who, moreover, came from just two different fields of study. In general, experts may be biased by prejudices or conflicts of interest, while representative samples of the general public may be more well-meaning but less able to state their intuitions coherently [35,36]. With respect to the allocation of health care resources, members of the general public often tend to think about trade-offs between abstract alternatives in terms of concrete examples, thereby relying solely on intuitions rather than well-defined abstract principles [37]. Therefore, students are often chosen as a compromise, as they are regularly seen as better able to investigate numerical decision problems analytically and less error-prone than members of the general public, while their intuitions are less biased compared to experts.
Fourth, we have presented micro-justice contexts, in which a decision maker was asked to distribute a resource between two single patients. Since these patients are described in a very abstract and non-personal manner, they could also be regarded as representatives of larger groups. Nevertheless, the general question arises as to whether results of micro-justice investigations are relevant for large-scale problems. Clearly, consistency of decisions between the micro and the macro level is an important requirement for health-care rationing [38,39]. This is especially the case in a statutory health insurance system, where each patient is eligible to receive the same treatment as other patients with the same diagnosis. In practice, medical guidelines are a response to this demand.
The position of the decision maker might be a further matter of concern [40,41]. Impartiality and sympathy are preconditions for normative judgements, whereas personal involvement is likely to trigger material or immaterial self-interest. From our point of view, the position of the physician in the questionnaire is in between these two pure positions. On the one hand, despite the hypothetical character of the situations, respondents may have felt obliged to help both patients due to professional ethics or because they imagine standing at the bedside of the patients. On the other hand, patients are described in a very abstract way. The questionnaire only states numerical information relevant for the application of the different allocation principles considered. Hence, at least there is no direct real or hypothetical partiality and, indeed, many respondents mentioned 'fairness' and 'justice' as their main motives.
Conclusions
The topic of this paper is related to equity in health within a country. Health policy decision makers in almost all developed countries must cope with the fact that the growing usefulness of healthcare technologies increases the demand for healthcare services such that scarcity becomes tangible. Criteria for priority setting and rationing of healthcare resources with respect to the treatment of different types of diseases or patient groups are implicitly or explicitly applied. This implies that priorities are realized by allocation decisions in which medical resources are distributed. Independently of which institution makes these decisions in a publicly financed healthcare system, be it a group of medical doctors or a political body, these criteria should be chosen transparently and discussed in society. Thus, public preferences play an important role in such a discourse.
The aim of our study is to explore the moral intuitions held by non-expert participants related to priorities in medical treatment. To observe the goals and moral attitudes when allocating scarce medical resources, we use an experimental questionnaire method established in the Empirical Social Choice literature in which the hypothetical decision problems presented serve as an intuition pump [36]. The goals and allocation principles revealed show that the moral intuitions held by our experimental subjects are much more complex than the principles commonly applied in health economic theory. Especially, cost-utility principles are rarely applied, whereas the goal of equality of health gain is observed more often. The principle not to leave any patient untreated is very dominant. We also observe the degree to which certain monotonicity principles, known from welfare economics, are followed. We find evidence that subjects followed their respective intuitions very consistently in their decisions and were able to verbally specify the allocation rules applied.
Thus, overall our exploratory experimental findings reveal insights into which allocation principles may be accepted in an abstract context. Results of that kind may then serve as an important input for the public and political discussion when decisions on priorities in the public health care sector are formed [42].
Additional files
Additional file 1: The Questionnaire. (PDF 532 kb) Additional file 2: Table S3. Order preservation. Table S4. Weak and strong resource monotonicity. Table S5. Weak and strong severity monotonicity. | 8,374.8 | 2017-06-26T00:00:00.000 | [
"Economics",
"Medicine"
] |
C. Brandon Ogbunu(gafor)
© The Author(s) 2023. Published by Oxford University Press on behalf of Society for Molecular Biology and Evolution. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. We continue our biography section, featuring Dr C. Brandon Ogbunu(gafor). The following is based on a December 2022 interview with Brandon.
How Did You Become a Scientist?
Like in evolution, Brandon says becoming a scientist was due to a combination of forces that were small and seemingly random. From an early age, Brandon had a proclivity toward understanding the natural world, which came directly from his mother who was a school teacher. She had a love for science and more broadly a love of ideas, creativity, nature, and science fiction that provided an important framework for Brandon. She also had an enormous respect for researchers and scientists, embraced diverse perspectives, and questioned the natural world around her. Growing up in this environment, Brandon gravitated toward a future studying biology.
On a path to pursue biology, Brandon's childhood presented difficulties, including seeing the socioeconomic challenges his mother faced as a single parent of three in New York City. As a result of frequent moves, Brandon was not able to get into a rhythm at school and was never a particularly committed student despite his interests in reading and conceptualizing big ideas. Brandon finished high school as a mediocre student who had invested little effort, but transitioning to college sparked a desire to apply himself to follow his passion in the sciences. Howard University, a historically Black college and university, is where Brandon caught fire academically for the first time, learning how to navigate success in the classroom while gaining a group of close supportive friends.
After exploring majors such as computer science and math, Brandon decided to focus on chemistry after being inspired by the amazing set of instructors in the department. Brandon chose to further refine his area of study in biological chemistry, inspired by growing up in the 1990s in an urban setting during the ongoing HIV/AIDS epidemic, having always been curious about disease and microbes. Brandon was excited by the way scientists work to solve complicated problems, especially those with public health implications. These interests led him to study the herpes virus during an exchange at the University of California, Berkeley. Following this program, Brandon became an undergraduate researcher in Dr. Susan Gottesman's lab at the National Cancer Institute. Here, Brandon worked on bacterial genetics and discovered just how foundational biology would be to his future reasoning and developing career.
Following his undergraduate graduation, Brandon was a Fulbright Fellow in Kenya where he studied malaria. Though he had no prior experience in the field, he knew malaria was similar to HIV in that they were both infectious diseases with huge social consequences. From this point, choosing problems with a large societal component became a central part of Brandon's identity as a scientist. With these interests further developed, Brandon had a multitude of options to choose from as he thought about further education. He applied broadly and was admitted to medical and graduate school programs ranging from biophysics to bioengineering to combined degree programs. Brandon ultimately enrolled in an MD-PhD program at Yale University as it allowed for the most flexibility and exposure to diverse fields in science.
Starting his graduate program, Brandon immersed himself in the different scientific and medical environments offered and found that evolution was where he wanted to focus his energy. At the time, an evolutionary lens on medical problems was underappreciated and relatively new. Brandon joined Dr. Paul Turner's evolutionary virology lab at Yale where he studied issues such as disease emergence and drug resistance. Since that experience with Dr. Turner, Brandon never looked back. Following graduate school, Brandon focused on population genetics working as a postdoc with Dr. Daniel L. Hartl at Harvard University and the Broad Institute. Today, Brandon is an Assistant Professor in the Department of Ecology and Evolutionary Biology at Yale University where his lab aims to illuminate the ecological, evolutionary, and societal underpinnings of infectious disease.
Who Has Been Your Biggest Mentor or Influence on Your Career?
Brandon highlights two groups of people that had the biggest impact on him: those that had a very direct influence and those that had a symbolic influence. In the first category is the foundational start at home with his mother. Brandon states that he is a version of her but with more opportunities and that she's the reason for his good qualities scientifically. In the second category, since Brandon didn't know any scientists or have any scientific role models, his heroes were hip-hop artists. Brandon found, and still believes, that hip-hop artists are some of the most expansive and creatively dynamic people he's ever come across and they are the people that taught Brandon through their music that anything is possible. Brandon applies the same mentality in his life, the notion that there are no limits, that creativity is essential, and that barriers can be broken down.
As Brandon moved on in his career, he met more people in the sciences that had a profound impact on him, one of them being Dr. Vernon R. Morris. Dr. Morris was Brandon's undergraduate research advisor, a young African American physical chemist who coauthored Brandon's first publication. Brandon fondly remembers Dr. Morris as a kind, brilliant, generous person who listened to the same music and talked the same way Brandon talked. Having this example of a scientist at the age of 18 for Brandon was immensely influential and had the biggest impact on his trajectory, so much so that Brandon aspired to be like him when he got older.
What Are Some Challenges You've Faced in Your Career?
One of the main challenges Brandon actively works against in his career is being tethered to the arbitrary rules and standards ingrained in the current academic system, and especially in science. Brandon explains to students and colleagues that what makes science fun is creating your own set of rules to study the who, what, when, where, and why of the natural world, rather than fitting into the mold that has become the expectation. It took Brandon some time to figure this out for himself early on, but today he prides himself, and his research, on actively avoiding such limitations on creativity and innovation.
What Is Your Favorite Contribution to the Literature?
In this current stage of Brandon's career, his favorite thing to do is peer into the world to find places that offer unique perspectives and to leverage cultural relics to tell new stories. For example, Brandon does this often in his daily science; he'll take a single data set and approach it from multiple angles to say something interesting about how proteins evolve or how the mutation rate influences evolution, but he's also able to do this with books, film, and popular culture. This approach led to one of Brandon's favorite manuscripts, coauthored with Michael D. Edge, entitled "Gattaca as a lens on contemporary genetics: marking 25 years into the film's 'not-too-distant' future," an article published in Genetics just in time for the 25th anniversary of the film's debut. Brandon was in his first year of college when the film was released, and 25 years later, he was able to mine it for technical ideas relevant to present-day genetic and genomic technology. Brandon's favorite aspect of this paper was the process of turning on the film, pressing pause, reading the literature, returning to the film, and watching the story unfold. For Brandon, this publication marked a triumph in his approach to science and in the space of creative collaboration where he thrives.
What Do You Do for Fun Outside of Science?
Brandon's hobbies include reading fiction and comics, and watching films based in science, all things that have been historically labeled as "classic geek culture." As a former boxer, Brandon also has a passion for sports, not just playing but also observing and studying them as well. Brandon enjoys engaging in the scholarly discourse of sports and even keeps up with the literature and writes about sports himself. Wherever he can, Brandon likes to expose himself to innovative and positive spaces, like the theater, for inspiration. In addition, Brandon is passionate about social issues and stays on top of these conversations through public speaking and social media. Lastly, one of Brandon's chief hobbies is writing, an unquestionably important piece of who he is.
What's Some Advice for People Entering the Field of Science?
First and foremost, Brandon emphasizes that the rules in place within science and academia are simply goalposts and that they shouldn't constrain your possibilities. Second, aside from peer review, grant proposals, and science engagement, Brandon doesn't put effort into worrying about what other people think of him or the way he approaches his science. Brandon feels that as long as he is successful in publishing and writing grants, he has autonomy over the questions he pursues and the avenues in which those questions are answered. Brandon warns that unfortunately this internal voice of inspiration can get suppressed as early as high school and he encourages everyone to listen to that voice rather than work toward pleasing other people. He says that if this is possible, you're able to dictate long-term happiness in your career. | 2,237.2 | 2023-04-01T00:00:00.000 | [
"Computer Science"
] |
Generation of pseudonondiffracting optical beams with superlattice structures
We demonstrate an approach to generate a class of pseudonondiffracting optical beams with transverse shapes related to superlattice structures. For constructing the superlattice waves, we consider a coherent superposition of two identical lattice waves with a specific relative angle in the azimuthal direction. We theoretically derive the general conditions of the relative angles for superlattice waves. In the experiment, a mask with multiple apertures which fulfill the conditions for superlattice structures is utilized to generate the pseudonondiffracting superlattice beams. With the analytical wave functions and experimental patterns, pseudonondiffracting optical beams with a variety of structures can be generated systematically. ©2013 Optical Society of America
OCIS codes: (050.4865) Optical vortices; (050.5982) Photonic crystals; (070.3185) Invariant optical fields.
References and links
1. J. Durnin, "Exact solutions for nondiffracting beams. I. The scalar theory," J. Opt. Soc. Am. A 4(4), 651–654 (1987).
2. J. Durnin, J. J. Miceli, Jr., and J. H. Eberly, "Diffraction-free beams," Phys. Rev. Lett. 58(15), 1499–1501 (1987).
3. V. Garcés-Chávez, D. McGloin, H. Melville, W. Sibbett, and K. Dholakia, "Simultaneous micromanipulation in multiple planes using a self-reconstructing light beam," Nature 419(6903), 145–147 (2002).
4. D. McGloin, V. Garcés-Chávez, and K. Dholakia, "Interfering Bessel beams for optical micromanipulation," Opt. Lett. 28(8), 657–659 (2003).
5. J. Arlt, V. Garces-Chavez, W. Sibbett, and K. Dholakia, "Optical micromanipulation using a Bessel light beam," Opt. Commun. 197(4-6), 239–245 (2001).
6. Z. Ding, H. Ren, Y. Zhao, J. S. Nelson, and Z. Chen, "High-resolution optical coherence tomography over a large depth range with an axicon lens," Opt. Lett. 27(4), 243–245 (2002).
7. C. Yu, M. R. Wang, A. J. Varela, and B. Chen, "High-density non-diffracting beam array for optical interconnection," Opt. Commun. 177(1-6), 369–376 (2000).
8. Z. Bouchal, "Nondiffracting optical beams - physical properties, experiments, and applications," Czech. J. Phys. 53(7), 537–578 (2003).
9. M. Boguslawski, P. Rose, and C. Denz, "Increasing the structural variety of discrete nondiffracting wave fields," Phys. Rev. A 84(1), 013832 (2011).
10. M. Boguslawski, P. Rose, and C. Denz, "Nondiffracting kagome lattice," Appl. Phys. Lett. 98(6), 061111 (2011).
11. P. Rose, M. Boguslawski, and C. Denz, "Nonlinear lattice structures based on families of complex nondiffracting beams," New J. Phys. 14(3), 033018 (2012).
12. Y. F. Chen, H. C. Liang, Y. C. Lin, Y. S. Tzeng, K. W. Su, and K. F. Huang, "Generation of optical crystals and quasicrystal beams: Kaleidoscopic patterns and phase singularity," Phys. Rev. A 83(5), 053813 (2011).
13. A. Kudrolli, B. Pier, and J. P. Gollub, "Superlattice patterns in surface waves," Physica D 123(1-4), 99–111 (1998).
14. M. Silber and M. R. E. Proctor, "Nonlinear competition between small and large hexagonal patterns," Phys. Rev. Lett. 81(12), 2450–2453 (1998).
15. H. Arbell and J. Fineberg, "Spatial and temporal dynamics of two interacting modes in parametrically driven surface waves," Phys. Rev. Lett. 81(20), 4384–4387 (1998).
16. H. J. Pi, S. Park, J. Lee, and K. J. Lee, "Superlattice, rhombus, square, and hexagonal standing waves in magnetically driven ferrofluid surface," Phys. Rev. Lett. 84(23), 5316–5319 (2000).
17. J. F. Nye and M. V. Berry, "Dislocations in wave trains," Proc. R. Soc. Lond. A Math. Phys. Sci. 336(1605), 165–190 (1974).
18. M. S. Soskin and M. V. Vasnetsov, "Singular optics," Prog. Opt. 42, 219–276 (2001).
Introduction
A nondiffracting wave field is understood as a monochromatic optical field whose transverse shape remains invariant under free-space propagation. In 1987, Durnin showed that such nondiffracting wave fields are exact solutions of the homogeneous Helmholtz equation [1]. These particular solutions can be described by Bessel functions and are called nondiffracting Bessel beams. Realizable beams propagate with a relatively small divergence angle only up to a certain range; they carry finite energy and are known as pseudonondiffracting optical beams. In the same year, Durnin et al. [2] first experimentally realized a pseudonondiffracting Bessel beam in a cylindrical coordinate system. Since the breakthrough research by Durnin, nondiffracting Bessel beams have been extensively studied and applied in diverse fields, such as optical manipulation [3–5], optical coherence tomography [6], and optical interconnects [7]. In recent years, scientists, mathematicians, and artists have been fascinated with two-dimensional (2D) kaleidoscopic nondiffracting optical patterns [8]. More recently, realizing nondiffracting optical patterns related to crystalline, quasicrystalline, and other ordered structures has become an intriguing issue [9–12].
A 2D superlattice pattern is a spatially periodic structure composed of two or more simple planforms. Since Kudrolli et al. [13] first observed superlattice patterns in a two-frequency forcing Faraday experiment, superlattice patterns have been widely studied in experiments on parametrically driven surface waves [14–16]. The superlattice patterns observed by Kudrolli et al. [13] are formed by the coherent superposition of two hexagonal lattice waves with a specific relative angle. Mathematically, there are numerous relative angles satisfying the condition for generating superlattice waves from the superposition of two identical lattice waves. Even so, how to determine the specific relative angles for constructing the superlattice waves has not been explored in detail. Therefore, the determination of the relative angles is the first issue for generating pseudonondiffracting optical beams with superlattice structures.
In this paper, we theoretically derive a general condition on the relative angles for constructing superlattice waves from the superposition of two identical lattice waves. With the derived formulas for the relative angles, numerous superlattice patterns are demonstrated numerically. To realize the pseudonondiffracting optical beams with superlattice structures, we generate quasi-plane waves by employing a collimated coherent laser to illuminate a mask with multiple tiny apertures. The positions of the apertures are precisely manufactured with a stencil laser cutting machine to fulfill the condition for generating the superlattice patterns. We also analyze the influence of the aperture size on the formation of the transverse unit cell in the pseudonondiffracting superlattice beam. The experimental results are found to be in good agreement with the numerical calculations. Furthermore, we manifest the structures of phase singularities for the superlattice patterns. Optical fields with phase singularities, also known as optical vortices, have been studied widely and have attracted considerable interest in recent years [17,18]. We expect that pseudonondiffracting superlattice beams with phase singularities can be beneficial to future applications of optical vortex beams.
Theoretical analysis for forming superlattice waves
A 2D lattice wave in polar coordinates (ρ, φ), formed by the superposition of three, four, or six plane waves, can be expressed as [12]

ψ_q(ρ, φ; K) = Σ_{s=1}^{q} exp[ i K ρ cos(φ − 2πs/q) ],

where K_s = K (cos 2πs/q, sin 2πs/q) is the s-th wave vector and q is equal to 3, 4, or 6. Considering the coherent superposition of two identical lattice waves with a relative angle Δ_q in the azimuthal direction, we can obtain the superposed wave as

Ψ_q(ρ, φ; Δ_q) = ψ_q(ρ, φ; K) + ψ_q(ρ, φ − Δ_q; K).
The superposed waves are spatially periodic when the wave vectors are located on the reciprocal lattice points of the superposed wave fields. In other words, all wave vectors can be expressed as linear combinations of the reciprocal primitive translation vectors. For instance, since the reciprocal primitive translation vectors are orthogonal for q = 4, the wave vectors shown in Fig. 1(b) must satisfy the condition that every rotated wave vector K′_s has integer coefficients (n′_s, m′_s) in the basis of the reciprocal primitive translation vectors. Combining the conditions of all the wave vectors, the most general solutions of n′_s and m′_s are obtained in terms of a single pair of integers (n_0, m_0).
Therefore, K′_s can be rewritten in terms of n_s and m_s as a linear combination of the reciprocal primitive translation vectors b_1 and b_2.
There are some accidental solutions of n′_s and m′_s which are not included in Eq. (4). Because these cannot be expressed in an analytic form, we focus on the most general solutions given by Eq. (4). As a result, the specific relative angles Δ_q for spatially periodic waves are subject to the condition given in Eq. (6). By utilizing this condition on the relative angles, we can generate a class of superposed waves for q = 4 with spatial periodicity. With the wave vectors written in terms of the reciprocal primitive translation vectors, the reciprocal lattice constant can be obtained; Equation (7) indicates that the spatial period becomes longer as the value of n_0² + m_0² gets larger. Following an analogous derivation, the criteria for spatial periodicity of superposed waves with q = 3 and 6 can be obtained. Since the scalar product of the reciprocal primitive translation vectors is −1/2 in the cases of q = 3 and 6, the condition on the relative angles leads to Eq. (8), and the corresponding reciprocal lattice constant is given by Eq. (9). Consequently, the superlattice waves can be constructed from the superposed waves Ψ_q(ρ, φ; Δ_q) with these specific relative angles. In the following section we present an approach to realize the pseudonondiffracting optical superlattice beams.
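To make the commensurability statement above concrete, the following minimal sketch numerically tests whether the superposition of two identical lattice waves with a given relative angle is spatially periodic, by checking whether all 2q wave vectors lie on a common (finer) reciprocal lattice. The test angle arctan(3/4) and the denominator bound are illustrative choices of ours, not values taken from the derivation above.

```python
import numpy as np

def lattice_wave_vectors(q, K=1.0, delta=0.0):
    """Wave vectors of a q-fold lattice wave, optionally rotated by `delta` (radians)."""
    angles = 2 * np.pi * np.arange(q) / q + delta
    return K * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def is_periodic_superposition(q, delta, K=1.0, max_denominator=50, tol=1e-9):
    """Return (True, d) if all 2q wave vectors lie on a common reciprocal lattice
    whose primitive vectors are 1/d times those of the first lattice wave."""
    b = lattice_wave_vectors(q, K)[:2].T          # 2x2 basis built from the first two wave vectors
    all_k = np.vstack([lattice_wave_vectors(q, K),
                       lattice_wave_vectors(q, K, delta)])
    coeffs = np.linalg.solve(b, all_k.T).T        # coordinates of every wave vector in that basis
    for d in range(1, max_denominator + 1):
        scaled = coeffs * d
        if np.allclose(scaled, np.round(scaled), atol=tol * d):
            return True, d                        # reciprocal lattice constant shrinks to K/d
    return False, None

if __name__ == "__main__":
    # A commensurate relative angle for q = 4 (cos = 4/5, sin = 3/5) gives a periodic superlattice...
    print(is_periodic_superposition(4, np.arctan2(3, 4)))   # -> (True, 5)
    # ...whereas a generic angle gives a non-periodic (quasiperiodic) pattern.
    print(is_periodic_superposition(4, 0.5))                # -> (False, None)
```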
Generation of the pseudonondiffracting optical superlattice beams
A pseudonondiffracting Bessel beam can be generated by an annular slit illuminated with collimated light and placed in the focal plane in front of a lens [2]. Based on Fourier optics, the output field behind the lens is related to the input field in the front focal plane through a Fourier-transform relation scaled by the wavelength λ of the coherent light source and the focal length f of the lens. For a pseudonondiffracting Bessel beam, the input field is determined by an infinitesimally thin annulus at ρ′ = R, i.e. it is proportional to δ(ρ′ − R), where δ(·) is the Dirac delta function. A finite-energy pseudonondiffracting Bessel beam requires an annular ring of finite thickness at the input plane. Pseudonondiffracting beams with crystalline and quasicrystalline structures can be generated with collimated light illuminating a mask with multiple apertures regularly distributed on a ring [12]. The input field just after such a mask can therefore be approximated by point-like sources located on a ring of radius R at the azimuthal angles 2πs/q, which implies that the aperture size must be infinitesimal for generating ideal nondiffracting beams with crystalline structures. For generating nondiffracting superlattice beams, the input field additionally contains apertures at the angles 2πs/q + Δ_q, where Δ_q satisfies the criteria for superlattice patterns in Eq. (6) or (8). However, an infinitesimal aperture is not realistic; since the aperture sizes cannot be infinitesimal, the generated beams are called pseudonondiffracting beams. Furthermore, the selection of the aperture size determines how many spatial periods can be included in the pseudonondiffracting superlattice beam, so the analysis for determining the aperture size is of crucial importance. To account for the effect of a finite aperture size with finite energy, we model the input field just after the apertures as a superposition of Gaussian beams centered at the aperture positions, with the beam waist of each Gaussian set to the radius a of the aperture. Since the output field in the focal plane behind the lens is the Fourier transform of the input field, substituting this Gaussian model into the Fourier-transform relation and setting z = f leads to an analytic expression for the output field.
By a transformation from polar to Cartesian coordinates, Eq. (15) can be integrated analytically by utilizing the Gaussian integral ∫_{−∞}^{∞} e^{−x²} dx = √π. After some algebraic manipulation, the output field in polar coordinates can be derived in closed form. It can be seen that more spatial periods can be observed with a smaller aperture size, but it is difficult to generate visible patterns in experiments when the apertures are too small. Thus, the aperture radius is selected as 85 μm for generating clear pseudonondiffracting optical superlattice patterns. As shown in Fig. 7(f), the optical superlattice pattern with q = 6 displays an exotic kaleidoscopic structure. The excellent agreement validates the theoretical analysis of the superlattice waves and confirms the experimental approach. The experimental patterns also confirm our analysis of the influence of the aperture size on the transverse unit cell.
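A minimal numerical sketch of this Gaussian-aperture model is given below: the input field is built as a sum of Gaussian spots on a ring, and the focal-plane intensity is obtained from a discrete 2D Fourier transform. The wavelength, ring radius, aperture radius, and focal length are the values quoted in the experimental section; the grid size, window, and relative angle are illustrative choices of ours.

```python
import numpy as np

wavelength = 632.8e-9          # He-Ne wavelength (m)
R, a, f = 3e-3, 85e-6, 1.0     # ring radius, aperture radius, focal length (m)
q, delta = 4, np.arctan2(3, 4) # lattice order and an illustrative relative angle

N, L = 1024, 12e-3             # samples and physical width of the input plane (assumed)
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)

# Input field: Gaussian spots (waist = aperture radius) at 2q positions on the ring.
angles = np.concatenate([2 * np.pi * np.arange(q) / q,
                         2 * np.pi * np.arange(q) / q + delta])
E_in = np.zeros((N, N), dtype=complex)
for th in angles:
    xc, yc = R * np.cos(th), R * np.sin(th)
    E_in += np.exp(-((X - xc) ** 2 + (Y - yc) ** 2) / a ** 2)

# Focal-plane field is (up to scaling) the 2D Fourier transform of the input field.
E_out = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E_in)))
freq = np.fft.fftshift(np.fft.fftfreq(N, d=L / N))   # spatial frequencies (1/m)
x_out = wavelength * f * freq                        # focal-plane coordinates (m)
intensity = np.abs(E_out) ** 2
print("focal-plane window: +/- %.2f mm" % (1e3 * x_out.max()))
```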
Fig. 6. Experimental patterns observed for pseudonondiffracting optical superlattice beams with q = 4 under the optimal alignment.
The reliable generation of optical beams with complex structures has become increasingly important in studies of optical manipulation. Complex optical fields with phase singularities, the so-called optical vortex beams, have been extensively employed. For the superlattice waves, the phase singularities are the locations where the phase angle field Θ = tan⁻¹[Im(Ψ)/Re(Ψ)] is undefined, where Im(Ψ) and Re(Ψ) are the imaginary and real parts of the superlattice wave, respectively. Figures 8(a)–8(c) illustrate the contour plots of the phase fields corresponding to Figs. 3(a)–3(c) to display the feature of the phase singularities. The experimental results verify that various vortex-lattice structures can be generated by the pseudonondiffracting optical superlattice patterns with q = 3.
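As a small illustration of how such phase singularities can be located numerically, the sketch below computes the phase of a sampled complex field and flags grid plaquettes around which the phase winds by ±2π. This is a generic vortex-detection recipe under our own assumptions, not a procedure described in the paper.

```python
import numpy as np

def phase_and_vortices(field):
    """Return the phase of a complex 2D field and an integer map of phase winding
    around each elementary 2x2 plaquette of the sampling grid."""
    phase = np.angle(field)

    def wrap(a):
        # Wrap phase differences into (-pi, pi].
        return (a + np.pi) % (2 * np.pi) - np.pi

    # Sum of wrapped phase differences around each plaquette (counter-clockwise).
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])
    winding = np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)
    return phase, winding   # winding = +/-1 at phase singularities, 0 elsewhere
```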
Conclusions
In conclusion, the general conditions on the relative angles for constructing superlattice waves have been theoretically derived from the superposition of two identical lattice waves. With the derived formulas, we have numerically presented a variety of superlattice patterns. In order to realize pseudonondiffracting optical beams related to the superlattice structures, we have employed a collimated coherent laser to illuminate a mask with multiple tiny apertures. We have used a stencil laser-cutting machine to fabricate these apertures precisely and to make the positions of the apertures fulfill the conditions for generating superlattice patterns.
Considering the realistic patterns, the influence of the aperture size on the number of transverse unit cells has also been analyzed. The excellent agreement corroborates the theoretical analysis of the superlattice waves and supports the experimental configuration for generating pseudonondiffracting optical superlattice beams. Furthermore, the structures of phase singularities for some superlattice patterns have been manifested.
Fig. 2. Numerical patterns for the intensity of the superlattice waves.
The larger the aperture size, the smaller the visible region, and hence the less energy in the final beam. Figures 4(a)–4(c) illustrate the numerical patterns for the intensity of the output fields for different aperture radii.
Fig. 4. (a)–(c) Numerical patterns for the output intensity profiles of pseudonondiffracting optical superlattice patterns with different radii of apertures.
Based on the theoretical analysis, an optical configuration was set up to realize pseudonondiffracting optical superlattice patterns, as shown in Fig. 5. The light source was a linearly polarized 20-mW He-Ne laser with a central wavelength of 632.8 nm. A beam expander was employed to generate a collimated beam and to reduce the beam divergence to less than 0.1 mrad. The steel masks were fabricated with high precision using a laser stencil-cutting machine. The radius of the ring and the radius of the apertures are 3 mm and 85 μm, respectively. The focal length of the lens is 1000 mm. Interference patterns formed in the region behind the focusing lens were imaged by a CCD camera.
"Physics"
] |
Computerized Medical Imaging and Graphics
Automated analysis of structural imaging such as lung Computed Tomography (CT) plays an increasingly important role in medical imaging applications. Despite significant progress in the development of image registration and segmentation methods, lung registration and segmentation remain challenging tasks. In this paper, we present a novel image registration and segmentation approach, for which we develop a new mathematical formulation to jointly segment and register three-dimensional lung CT volumes. The new algorithm is based on a level-set formulation, which merges a classic Chan–Vese segmentation with active dense displacement field estimation. Combining registration with segmentation has two key advantages: it eliminates the problem of initializing surface-based segmentation methods, and it incorporates prior knowledge into the registration in a mathematically justified manner, while remaining computationally attractive. We evaluate our framework on a publicly available lung CT data set to demonstrate the properties of the new formulation. The presented results show improved accuracy for our joint segmentation and registration algorithm when compared to registration and segmentation performed separately. © 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction
Image registration and segmentation techniques are fundamental components of medical image analysis, as they form the basis for many advanced frameworks for computerized understanding of medical imaging. For example, registration and segmentation of X-ray Computed Tomography (CT) can be used for a vast range of emerging pulmonary applications (Schnabel et al., 2016). Such applications in current and developing medical practice include: personalized adjustment of image-guided radiation therapy (IGRT) (Xing et al., 2006), and assessment of disease and treatment progression, e.g. measuring temporal changes of tumor volume (Weiss et al., 2007) or diagnosis of primary pulmonary functions such as assessment of regional ventilation (Guerrero et al., 2005).
Several techniques approaching an automated partitioning of the lungs from CT have been extensively studied for a wide range of clinical pathologies and imaging protocols; a recent review can be found in Doel et al. (2015). Volumetric CT scans can be acquired either at different phases of the respiratory cycle (four-dimensional CT) or at different distinctive time points of treatment. Therefore, accurate lung image registration has to be applied in order to provide a common reference space to extract meaningful results for quantitative image analysis. Various registration methods for lung imaging have recently been proposed (Papież et al., 2014; Castillo et al., 2010). In spite of that, registration and segmentation of lung volumes are often inherently linked together: segmentation of the organ of interest can be followed by registration either to find correspondences between consecutive medical volumes acquired during treatment (longitudinal studies) or to compensate for the motion caused by, e.g., breathing or the heart. Whereas segmentation and registration, when performed as separate elements of the processing pipeline, are usually more susceptible to image noise or algorithm initialization, joint segmentation and registration approaches have been shown to be a more appropriate choice when complex medical applications are considered (Yezzi et al., 2001; Gorthi et al., 2011). One of the first attempts to segment the same object in two images, where one image is warped by a deformation, was introduced in Yezzi et al. (2001). Paragios et al. (2003) proposed a joint registration and segmentation model using an active contours framework. While in Yezzi et al. (2001) the deformation model was restricted to be rigid, the motion model considered in Paragios et al. (2003) was also able to capture local non-linear deformations. In Vemuri et al. (2003), segmentation-based registration using a level-set approach was proposed. This was extended in Gorthi et al. (2011) to a generalized registration framework: an active deformation field, which merges particularly well different approaches for non-linear contour matching.
In this paper we present a novel approach for joint segmentation and registration using popular level-set algorithms (Tsai et al., 2003;Cremers et al., 2007), which have been specifically adapted to address the issues of separate lung segmentation and registration.The level-set, which is driving the non-linear image registration, is tracked by the dense displacement field similarly as in Vemuri et al. (2003), where the process of matching surfaces was estimated on a voxel-based level.Additionally, the propagation of the surface is extended by a term describing regional properties of the objects of interest similar to classic Chan-Vese segmentation (Chan et al., 2001).Through this, we can include prior information from both dense image intensity features (i.e.intensity values from the CT volumes), and local statistics of the objects of interest to obtain the spatial transformation between images and segmentation.In contrast to similar work on registration and segmentation of lung radiotherapy data (Xue et al., 2010), where a two-step segmentation and registration are iteratively repeated, our method is designed to perform truly joint registration and segmentation by treating both terms within each iteration step.The presented work can also handle segmentation of several 3D objects (in our case left and right lung are segmented separately), what extends some earlier research on joint binary (two objects) segmentation and registration (Unal and Slabaugh, 2008;Le Guyader and Vese, 2011).
The paper is organized as follows: In Section 2, the background on level-set algorithms is briefly presented.Later in this section we also describe the classic Chan-Vese algorithm (Chan et al., 2001) for segmentation of images based on intensities (Section 2.2).Next, we present the state-of-the-art level-set registration algorithm proposed in Vemuri et al. (2003) (Section 2.3) together with its extension to a generalized framework for non-linear level-set registration developed in Gorthi et al. (2011) (Section 2.4).In Section 3 the aforementioned level-set registration and segmentation algorithms are then coupled and form a novel joint registration and segmentation method, merging algorithms previously proposed separately for segmentation (Chan et al., 2001) and for registration (Vemuri et al., 2003).We also describe the details of the numerical implementation of our method in Section 4. Our new algorithm is compared against the state-of-the-art algorithms presented in this paper (Vemuri et al., 2003;Chan et al., 2001), and the results of this evaluation are presented in Section 5.The evaluation is performed using a publicly available lung CT data set (Dir-Lab) (Castillo et al., 2009) and assessed using the Dice overlap measure and the Target Registration Error (TRE) as segmentation and registration accuracy estimates, respectively.Finally, the paper is concluded in Section 6.
Level-set methods
Level-set methods, originally introduced by Osher and Sethian (1988), provide a very effective framework for numerical description of curves and surfaces and therefore are widely applicable in many areas including computational fluid dynamics problems (Sussman and Fatemi, 1999;Tryggvason et al., 2001), as well as image processing and computer vision applications (Chan et al., 2000;Vese and Chan, 2002;Zhao et al., 2000).
The principal idea behind level-set methods is to avoid explicit parametric representation of geometrical objects such as curves or surfaces, and instead represent these objects implicitly in terms of a function defined on a fixed computational grid.Contrary to explicit contour representations, level-set methods are also successful in capturing topological changes of objects.For example, level-set can easily handle splitting of a connected region into two or more disjoint parts (Osher and Fedkiw, 2006).
Curves and surfaces can be described implicitly as the zero level-sets of some sufficiently smooth function φ: Ω → ℝ, i.e. Γ = {x ∈ Ω : φ(x) = 0}. Here x denotes a point in a region Ω.
Level-sets for segmentation
Chan et al. (2001) proposed an algorithm which has since been widely used for different image segmentation tasks (Osher and Fedkiw, 2006; Cremers et al., 2007), including medical images (see e.g. Tsai et al., 2003; Paragios, 2003). It is a special case of the Mumford–Shah optimal partition and approximation problem (Mumford and Shah, 1989) designed for binary images. However, it also gives very good results for gray-scale and vector-valued (e.g. RGB) images (Chan et al., 2000). Next, we briefly review the method. Suppose we are given a domain Ω divided by a contour Γ = {φ(x) = 0} into two (possibly unconnected) subregions Ω_in = {φ(x) > 0} and Ω_out = {φ(x) < 0}. The function φ is a level-set function defining the segmenting contour. Let I(x) be an image defined on the region Ω. The method relies on the minimization of an intensity-based energy functional of the form

E(c_1, c_2, Γ) = μ |Γ| + ∫_{Ω_in} |I(x) − c_1|² dx + ∫_{Ω_out} |I(x) − c_2|² dx,   (2)

where |Γ| is the length of the segmenting contour, and c_1 and c_2 denote the average intensities inside and outside of the segmenting contour Γ, respectively:

c_1 = (1/|Ω_in|) ∫_{Ω_in} I(x) dx,   c_2 = (1/|Ω_out|) ∫_{Ω_out} I(x) dx.   (3)

Hence, the functional E given in (2) penalizes local discrepancy from the average intensity in the segmented regions. Using the gradient flow method (Ambrosio et al., 2008), the regularized minimization problem can be turned into an evolutionary partial differential equation (PDE) for the function φ (Chan et al., 2001). The Chan–Vese algorithm is robust with respect to noise (Chan et al., 2001) and as such it can be applied to medical images containing inevitable acquisition artefacts. Additionally, it can successfully segment images without large intensity gradients, i.e. without sharp edges. It is worth noting that the Chan–Vese algorithm can be extended to vector-valued images (Chan et al., 2000) and to finding several disjoint regions at the same time (Vese and Chan, 2002).
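To make the regional terms concrete, here is a minimal numerical sketch of a two-phase Chan–Vese-style segmentation. It keeps only the intensity terms of the functional and approximates the length penalty by Gaussian smoothing of the level-set function, so it is a simplification of the algorithm reviewed above rather than a faithful reimplementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def chan_vese_sketch(image, n_iter=200, dt=0.5, eps=1.0, smooth_sigma=1.0):
    """Simplified two-phase Chan-Vese segmentation (regional terms only;
    the curvature/length term is replaced by Gaussian smoothing of phi)."""
    image = image.astype(float)
    # Initialise the level-set function as a centred blob (positive inside).
    grid = np.indices(image.shape).astype(float)
    centre = (np.array(image.shape, dtype=float) - 1).reshape((-1,) + (1,) * image.ndim) / 2.0
    phi = 0.25 * min(image.shape) - np.sqrt(((grid - centre) ** 2).sum(0))
    for _ in range(n_iter):
        inside = phi > 0
        c1 = image[inside].mean() if inside.any() else 0.0     # mean intensity inside
        c2 = image[~inside].mean() if (~inside).any() else 0.0 # mean intensity outside
        delta = eps / (np.pi * (eps ** 2 + phi ** 2))          # smoothed Dirac delta
        force = (image - c2) ** 2 - (image - c1) ** 2          # regional Chan-Vese force
        phi = gaussian_filter(phi + dt * delta * force, smooth_sigma)
    return phi > 0   # binary segmentation mask
```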
Level-sets for registration
Suppose we are given two images, the source I_S and the target (reference) I_T, defined on a rectangular domain Ω ⊂ ℝⁿ, where n is either 2 or 3. The level-set based image registration algorithm proposed in Vemuri et al. (2003) (referred to later as Vemuri's algorithm) was designed as a minimization of the difference between the input images I_S and I_T, measured in the L²-norm. This is achieved by transforming the source image I_S directly onto I_T by evolving it according to a level-set equation (4) in which the intensity difference (I_T − I_S) acts as a velocity driving the iso-intensity contours along the gradient ∇J of the evolving image J. This registration algorithm can be understood in terms of a level-set framework as matching the intensity contours of the images I_S and I_T. However, the final result of the registration that we are looking for is a plausible displacement field u(x): ℝⁿ → ℝⁿ such that the images I_T(x) and I_S(x + u(x)) are similar in some sense. Supposing that u depends on an artificial iteration time t and evolves according to (4), we obtain the displacement vector field u(x) as the limit, for t → ∞, of u(x, t), where u is the solution of the evolution equation

∂u/∂t = (I_T(x) − I_S(U(x, t))) ∇I_S(U(x, t)) / |∇I_S(U(x, t))|,   (5)

with U(x, t) = x + u(x, t). We use u(x, 0) = 0 as the initial condition for this problem. Since the gradient calculation in Eq. (5) is sensitive to noise, the input images are usually smoothed with a Gaussian kernel of variance 1 as a preprocessing step.
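A minimal sketch of one explicit update of Eq. (5) is shown below. The interpolation scheme, the small regularization added to the gradient norm, and the use of the gradient of the resampled source as an approximation of ∇I_S(x + u) are implementation choices of ours rather than details of the original algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def vemuri_step(I_T, I_S, u, dt=0.5, alpha=1e-4):
    """One explicit Euler update of a Vemuri-style intensity-driven displacement field.
    `u` has shape (ndim, *I_T.shape) and is measured in voxels."""
    grid = np.indices(I_T.shape).astype(float)
    I_S_warped = map_coordinates(I_S, grid + u, order=1, mode='nearest')  # I_S(x + u)
    grads = np.array(np.gradient(I_S_warped))          # approximates grad I_S at warped positions
    norm = np.sqrt((grads ** 2).sum(0) + alpha ** 2)    # regularised gradient norm
    residual = I_T - I_S_warped
    return u + dt * residual * grads / norm

# Typical use (images pre-smoothed with a Gaussian of variance 1, as in the text):
# I_T_s, I_S_s = gaussian_filter(I_T, 1.0), gaussian_filter(I_S, 1.0)
# u = np.zeros((I_T.ndim,) + I_T.shape)
# for _ in range(50):
#     u = vemuri_step(I_T_s, I_S_s, u)
```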
Joint registration and segmentation framework
Vemuri's level-set registration algorithm (Vemuri et al., 2003) is a purely intensity-based method. It does not take into consideration any prior knowledge about anatomical structures or physiological features apparent in the registered images. However, incorporating such prior knowledge has been shown to improve the accuracy of the registration (Yezzi et al., 2001). Gorthi et al. (2011) generalized the approach used in Vemuri's algorithm. Let φ_G be a level-set function. Reasoning similar to that used for Vemuri's algorithm leads to an evolutionary equation defining a way of finding the displacement field u(x, t),

∂u/∂t = β(φ_G) ∇φ_G(U(x, t)) / |∇φ_G(U(x, t))|,   (6)

subject to u(x, 0) = 0. Here β(φ_G) is a velocity function characterizing the model. The final displacement field u(x) is then obtained as the limit u(x) = lim_{t→∞} u(x, t). Notice that Eq. (5) is a special case of Eq. (6), obtained when the intensity function is chosen as the level-set function, subject to a suitable choice of the velocity β. This approach allows for various choices of the function φ_G, and hence a wide scope of prior knowledge about geometrical (and anatomical) features can easily be incorporated into the model. Moreover, there is freedom in the choice of the velocity function β. Finally, several methods can be combined, and different types of forces, even acting on different level-sets, can be taken into consideration at the same time by adding them to the right-hand side of Eq. (6). Gorthi et al. (2011) also proposed a particular choice of the level-set function φ_G and velocity β, resulting in an algorithm that is an example of atlas-based registration. It assumes that the target image I_T can be initially segmented (or that an atlas is available) and that distinct regions are labeled. This initial segmentation can be done either manually by an expert or using an automated segmentation algorithm such as the one proposed in Section 2.2. Then, the spatial transformation between the registered image I_S and the target image I_T is estimated by exploiting this prior segmentation as additional information to drive the registration. Finally, the labels from the initially segmented regions of the target image are propagated onto the source image using the estimated displacement, and the segmentation of the source image is thereby obtained. This method provides not only a spatial correspondence between two images but also allows the segmentation of several objects in a given image at the same time. The accuracy of the method depends on the accuracy of both the prior segmentation and the registration algorithm. This approach has been successfully applied in medical imaging due to its ability to exploit prior knowledge about anatomical structures. Suppose that a segmentation of the region Ω is given and that Ω can be written as a union of k non-overlapping subregions, i.e. Ω = ∪_k Ω_k. Consider a level-set function defining this segmentation,

φ_L(x) = 1 for x ∈ Ω_in,   φ_L(x) = 0 for x ∈ Ω_out.   (7)

Because in this work we focus on lung images, we shall assume that Ω is divided into Ω_in, representing the lungs, and Ω_out, representing the outer parts of the thoracic cage; see Fig. 1.
Although the function φ_L captures geometrical information about the shapes in the image, it cannot be used directly in Eq. (6): the jump discontinuity between regions means that the gradient of φ_L is not well defined. To avoid this problem, some regularization is added by convolving φ_L with a Gaussian kernel G_σ of variance σ. Notice that G_σ * φ_L ∈ C^∞(Ω). Moreover, the geometrical description of the boundaries of the sets Ω_k is preserved; however, these boundaries are no longer modeled by discontinuities in the level-set function but by local maxima of the magnitude of the gradient of the regularized level-set function. It is also worth noting that convolution with the Gaussian kernel has a denoising effect on the image I_S, as it does in Vemuri's force (given by Eq. (5)).
In principle, the direction of the evolution of the displacement field is controlled by the gradient of the regularized level-set function together with the velocity term β. However, following Gorthi et al. (2011), a modified sign function S(x): Ω → {−1, 0, 1} is used so that the vector S(x) ∇(G_σ * φ_L) always points from Ω_in to Ω_out. This function is well defined and nonzero in narrow bands around the region boundaries and is set to 0 outside those bands.
To complete the definition of the velocity β in Eq. (6), we set c_1 and c_2 to be the mean intensities of the image I_T in the regions Ω_in and Ω_out, respectively, as in Eq. (3). These values are kept constant throughout the registration process and are computed once at the beginning. The evolution velocity is then chosen as a regional term built from these mean intensities, as given in Eq. (8); this velocity term is inspired by the forces used in the Chan–Vese segmentation algorithm presented in Section 2.2.
Here we assume that the regions to be segmented in the target and source images have the same intensities. Differences between image intensities can be reduced by matching the histograms prior to running the registration algorithm.
Finally, the evolution of Eq. (6) with this velocity takes the form of Eq. (9), subject to u(x, 0) = 0, where the right-hand side is denoted by the operator GCV_1 defined in Eq. (10). Note that the forces in Eq. (9) have a very local behavior due to the choice of the special sign function S in Eq. (8), being zero far from the contours segmenting the image I_T. This means that points lying outside the narrow bands surrounding these contours do not undergo deformation over time. To propagate the information from the segmenting contour, we add an additional diffusion term to the evolution equation. Let ∇²u denote the spatial Laplace operator acting on each of the vector components of the displacement field separately. Letting ε be a small positive parameter, (9) can be modified to

∂u/∂t = GCV_1(u) + ε ∇²u,   (11)

subject to u(x, 0) = 0. Note that the Laplacian of the deformation field u(x, t) is taken component-wise. The primary role of the diffusion term ε∇²u is to propagate information coming from the narrow band over the whole domain (Eq. (11) otherwise refers to a voxel-wise representation of the contour only). Nevertheless, it also acts as an additional regularization, smoothing the evolving displacement field u (Modersitzki, 2009), which is the representation of the contour in our formulation of level-sets.
A wide variety of regularizing terms for deformable image registration has been considered in image processing applications, many of them being modifications of the heat-equation approach presented here. Among them are the anisotropic diffusion filtering methods described in Weickert (1997). Related to that is a bilateral filtering method that can be used to capture the sliding motion between organs, occurring for example between the lungs and the liver or between lung lobes (Papież et al., 2014). The framework described here is flexible enough to take these into account.
The new model description
The framework developed in Gorthi et al. (2011) enables the simultaneous use of several types of level-set functions.This can be achieved by modifying forces on the right hand side of Eq. ( 11).
So far we have considered the segmentation of the target image I_T as dividing it into two regions. However, this approach can be generalized to an arbitrary number of subregions of Ω, and Eq. (11) can be adjusted accordingly. Suppose now that the domain Ω is split into subregions Ω¹_in, Ω²_in and Ω¹_out, Ω²_out, respectively. Moreover, assume that the sets Ω¹_in and Ω²_in are strongly disjoint, by which we mean that dist(Ω¹_in, Ω²_in) > 0. The motivation for considering this case is the application to lung scans in which each lung is segmented separately. We can redefine the forces used for determining the displacement field u in a straightforward manner: we replace the averages c_in and c_out with the average intensities c¹_in, c²_in, c¹_out, c²_out taken over the newly segmented regions. By changing the velocity function β accordingly, we replace the operator GCV_1 with its extended version GCV_2 defined in Eq. (12). We propose one more extension of Gorthi's algorithm (Gorthi et al., 2011) by taking into account several regions segmented independently. This extension is kept in the spirit of Gorthi's algorithm, but more than one level-set function of the kind given in (7) is used: the level-set functions φ¹_L and φ²_L are defined as the characteristic functions of the regions Ω¹_in and Ω²_in, respectively, and a new vector field GCV_3 governing the evolution of the displacement field is defined in Eq. (13). Because the sets Ω¹_in and Ω²_in are strongly disjoint by assumption, and since φ_L is independent of time, we can always find a strictly positive parameter η such that dist(Ω¹_in, Ω²_in) > 2η. By setting the sign functions S_1 and S_2 to zero when x does not belong to a band of width η around ∂Ω¹_in and ∂Ω²_in, respectively, we ensure that the two components of the vector field given in Eq. (13) have no influence on each other. Moreover, S_1 and S_2 are equal to 1 otherwise, since the numerical values of the level-set functions φ¹_L, φ²_L are assigned suitably by construction.
The evolution problem for registration using a prior segmentation of the selected regions can then be defined by replacing GCV_1 with the operator GCV_2 or GCV_3 in Eq. (11).
As pointed out above, many evolution forces can be incorporated in the evolution equation using Gorthi's framework.Moreover, several different level-set functions can be used in Eq. ( 6) at the same time.In Section 2.3 we noticed that the image intensity function is a valid choice for the level-set function.We propose a novel combination of forces using modifications of the evolution equation proposed by Gorthi et al. (2011) together with Vemuri's registration method.
Let λ ∈ [0, 1] be a weighting parameter and consider the displacement field evolution problem

∂u/∂t = λ GCV_j(u) + (1 − λ) Vem(u) + ε ∇²u,   (14)

subject to u(x, 0) = 0 and with j ∈ {1, 2, 3}, where Vem denotes the intensity-driven force from Eq. (5). The evolution problem given in Eq. (14) defines a joint segmentation and registration method combining the approaches proposed by Vemuri et al. (2003) and Gorthi et al. (2011). Note that each of them can be recovered by choosing λ = 0 and λ = 1, respectively. When λ ∈ (0, 1), this method should bring the advantages of both together. It exploits the prior geometrical or anatomical knowledge about regions in the images. Moreover, it uses the intensity function for matching the regions, so that the registration acts on the entire image, rather than relying only on the level-set propagation by the additional diffusion filtering.
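The following sketch illustrates one update of such a combined evolution. The exact form of the segmentation-driven velocity is a plausible reading of the description above (a Chan–Vese-like regional term acting along the smoothed prior level set within a narrow band), not a verbatim reproduction of the operators GCV_j; the sign-band map, mean intensities, and smoothed prior are assumed to be precomputed.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def joint_step(I_T, I_S, phi_L_smooth, sign_band, c_in, c_out, u,
               lam=0.5, dt=0.5, alpha=1e-4):
    """One explicit update combining a segmentation-driven force (weight lam) with
    the intensity-driven Vemuri force (weight 1 - lam); diffusion is applied separately."""
    grid = np.indices(I_T.shape).astype(float)
    I_S_w = map_coordinates(I_S, grid + u, order=1, mode='nearest')   # I_S(x + u)

    # Intensity-driven (Vemuri-like) term.
    g_img = np.array(np.gradient(I_S_w))
    vem = (I_T - I_S_w) * g_img / np.sqrt((g_img ** 2).sum(0) + alpha ** 2)

    # Segmentation-driven (Chan-Vese-like) term along the smoothed prior level set.
    g_phi = np.array(np.gradient(phi_L_smooth))
    beta = (I_S_w - c_out) ** 2 - (I_S_w - c_in) ** 2                 # regional velocity (assumed form)
    gcv = sign_band * beta * g_phi / np.sqrt((g_phi ** 2).sum(0) + alpha ** 2)

    return u + dt * (lam * gcv + (1.0 - lam) * vem)
```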
To obtain a segmentation of the image I_S, we first label the regions Ω_in and Ω_out by assigning the values 1 and 0 to the points in these regions. These labels are transferred to the image I_S by adding the displacement field to the labeled points. A similar procedure is used when the image I_T is divided into more than two regions, by increasing the number of labels accordingly.
Numerical implementation
In this work we deal with images which are normalized (prior to segmentation and registration) to take values from the interval [0, 1].
The Chan–Vese segmentation algorithm presented in Section 2.2 was implemented as proposed in Chan et al. (2001). In the numerical implementation of the registration methods described in Section 2, we use a finite difference discretization for numerical differentiation and a forward Euler scheme for numerical time integration. Since this time-stepping scheme may lead to instabilities in the solution, a time step of size Δt = 0.5 was chosen empirically. In the numerical implementation of the algorithms presented in this article, we use the grid naturally defined by the image voxels. That is, we choose constant step-sizes h_x = h_y = h_z = 1 in each direction, so that the mesh is given by x_{i,j,k} = (x_i, y_j, z_k) ∈ Ω, with 1 ≤ i ≤ M_x, 1 ≤ j ≤ M_y and 1 ≤ k ≤ M_z. The region Ω is assumed to be a cuboid with edges of lengths M_x, M_y and M_z, respectively.
We define the finite differences of a function f on this mesh by

D⁻_x f_{i,j,k} = f_{i,j,k} − f_{i−1,j,k},   D⁺_x f_{i,j,k} = f_{i+1,j,k} − f_{i,j,k},

and analogously in the y and z directions; we call D⁻ and D⁺ the backward and forward differences, respectively.
To improve the numerical stability of this scheme, we replace the Euclidean norm |v| in Eq. (14) with |v|_α = √(|v|² + α²), where α is a small parameter (α ≈ 10⁻⁴). We set u^n_{i,j,k} = u(x_i, y_j, z_k, nΔt). Let us also define φ_{L;i,j,k} = φ_L(x_{i,j,k}), I^n_{S;i,j,k} = I_S(x_{i,j,k} + u^n_{i,j,k}) and I_{T;i,j,k} = I_T(x_{i,j,k}) to be the images discretized on the given mesh, and let C denote a smoothed general level-set function. Note that replacing the general level-set function by φ_L we obtain C_L = G_σ * φ_{L;i,j,k}, which is independent of time, whereas C_S = G_σ * I^n_{S;i,j,k} evolves in time. In evaluating I^n_{S;i,j,k}, linear interpolation at the neighbouring points of x_{i,j,k} + u^n_{i,j,k} is used. The gradient of the level-set function and its norm are approximated, following the implementation presented in Vemuri et al. (2003), with the minmod finite difference scheme (Osher and Sethian, 1988). With these ingredients, a discretization Vem^n_{i,j,k} of Vemuri's force term (5) is obtained by combining the intensity residual (I_{T;i,j,k} − I^n_{S;i,j,k}) with the minmod approximation of the normalized gradient. The mean intensities in (3) of the target image I_T, used in the definitions of the operators GCV_j, are approximated by averaging I_{T;i,j,k} over the voxels of the respective regions, where #X denotes the number of elements of the set X. Hence, the operator GCV_1 defined in Eq. (10) is discretized following the discretization introduced in Gorthi et al. (2011); the operators GCV_2 and GCV_3 are approximated in a similar way. Moreover, in the definitions (12) and (13) we choose Ω¹_out = Ω²_out to represent the part of the image not occupied by the lungs.
In the numerical implementation we split each time step into two stages. The first stage neglects the diffusion term: combining Eq. (23), Eq. (21) and Eq. (14) with ε = 0 and discretizing in time with the forward Euler scheme, we obtain

ũ^{n+1}_{i,j,k} = u^n_{i,j,k} + Δt [ λ GCV^n_{m;i,j,k} + (1 − λ) Vem^n_{i,j,k} ],   (24)

with u^0_{i,j,k} = 0 and m ∈ {1, 2, 3}. The second stage solves the diffusion equation ∂u/∂t = ε∇²u for a small time by convolving the intermediate solution ũ^{n+1}_{i,j,k} with a Gaussian kernel G_{σ₃} of variance σ₃. Note that the choice of the parameter σ₃ depends on the values of ε, λ and Δt. In the numerical tests, in which we use Δt = 0.5, ε = 1 and λ = 0.5 or λ = 1, a variance σ₃ equal to the size of two voxels appears to be a good choice. Note that when we take λ = 0 (and so recover Vemuri's algorithm from Eq. (14)), there is no need to include the extra diffusion term and we can skip the second stage.
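A minimal sketch of this operator-splitting step is given below; the force array is assumed to be the bracketed combination from the first stage, already evaluated on the voxel grid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_stage_step(force, u, dt=0.5, sigma3=2.0):
    """Two-stage update: (i) explicit Euler step with the registration force,
    (ii) diffusion of the displacement field, realised as a component-wise
    Gaussian convolution (sigma3 of about two voxels)."""
    u_tilde = u + dt * force                     # stage 1: force update
    smoothed = np.empty_like(u_tilde)            # stage 2: diffusion / regularisation
    for c in range(u_tilde.shape[0]):            # each displacement component separately
        smoothed[c] = gaussian_filter(u_tilde[c], sigma3)
    return smoothed
```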
To obtain a segmentation of the image I S , we first label the regions in and out by assigning the values 1 and 0 to the voxels in these regions.These labels are then transferred to the image I S .A similar procedure is used when the image I T is divided into more than two regions by increasing the number of labels accordingly.
The 3D algorithm was applied to real human lung CT images with a resolution of at least 256 × 256 × 108, i.e. more than 7 million voxels. To reduce the computational cost of the method, we use a multi-resolution approach: the algorithm is first performed on an image of quarter resolution in each dimension (thus 64 × 64 × 27 instead of 256 × 256 × 108), obtained by downsampling the original volumes, and the result is then propagated to the finer resolution levels. Numerical tests confirm that the multi-resolution approach gives results comparable to a direct one, but uses fewer iterations overall and significantly decreases the computational cost.
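A coarse-to-fine driver of this kind could look as follows; the pyramid factors, the interpolation order, and the way the displacement field is upsampled and rescaled between levels are our own assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def run_multiresolution(I_T, I_S, register, factors=(0.25, 0.5, 1.0)):
    """Coarse-to-fine registration: run `register(I_T, I_S, u0) -> u` on downsampled
    volumes and carry the displacement field (in voxel units) up to the next level."""
    u = None
    prev = None
    for fct in factors:
        T = zoom(I_T, fct, order=1)
        S = zoom(I_S, fct, order=1)
        if u is None:
            u = np.zeros((T.ndim,) + T.shape)
        else:
            scale = fct / prev   # voxel displacements grow with the resolution
            u = np.stack([zoom(comp, np.array(T.shape) / np.array(comp.shape), order=1)
                          for comp in u]) * scale
        u = register(T, S, u)
        prev = fct
    return u
```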
The segmentations (labels) in the data need to be initialized by the user before optimization, and these segmentations can be generated either manually or from human body imaging atlases.
Experiments and results
In this chapter, we present a comparison of the various methods described in the previous sections and assess them for joint segmentation and registration of lung CT images. For the evaluation of each method's accuracy we use the publicly available Dir-Lab set of CT data described in Castillo et al. (2009). This data set contains 10 pairs of complete 4D CT lung scans of patients suffering from lung or esophageal cancer. The spatial resolution of the data is known, and one voxel in the image corresponds to a cuboid of size varying from 0.97 mm × 0.97 mm × 2.5 mm to 1.16 mm × 1.16 mm × 2.5 mm depending on the case. Moreover, each pair of CT images is accompanied by a set of 300 well-distributed landmarks manually identified by experts with an intra-observer error of approximately 1.0 mm (Castillo et al., 2009). These landmarks are used to measure the distance between images before and after the registration. Additionally, we used the expert lung segmentations, which we consider to be the gold standard for segmentation assessment.
Segmentation evaluation
To evaluate the accuracy of the segmentation method, we follow a standard approach used in biomedical imaging applications by computing the Dice measure (Zijdenbos et al., 1994) between the segmentation result produced by the assessed algorithms and the expert segmentation. Suppose that the domain Ω is divided into two regions Ω_in and Ω_out, and that these sets are approximated by a segmentation algorithm as Ω̃_in and Ω̃_out. Assuming that the set Ω_in denotes the region of interest, for example the lungs, we define the Dice coefficient as

Dice = 2 |Ω_in ∩ Ω̃_in| / ( |Ω_in| + |Ω̃_in| ),

where the volumes of all regions are approximated by the number of corresponding unit voxels in the images. In our experiments we use the expert segmentation of the lungs in the inhale stage of the Dir-Lab database for the sets Ω_in. The sets Ω̃_in were approximated using the segmentation methods presented before: the Chan–Vese algorithm (CV), Gorthi's algorithm and its modifications using the vector fields of Eqs. (10), (12) and (13) (GCV_1, GCV_2 and GCV_3, respectively), and the joint Gorthi–Vemuri algorithm with λ = 0.5 and the same modifications (GCV_1 + Vem, GCV_2 + Vem and GCV_3 + Vem, respectively). The complete comparison is summarized in Fig. 2. The segmentation of the lungs in the exhale stage at the low resolution of 64 × 64 × 27 was used as the initial contour for the Chan–Vese algorithm. The Dice coefficient computed for the initial contour is shown on the left-hand side of Fig. 2. Even though the initial condition used for segmentation overlaps strongly with the region to be segmented, the Chan–Vese algorithm barely improves the results. Notice that the results obtained using registration-based segmentation are better in all of the studied cases. Moreover, as we shall see later, in these cases the accuracy of the segmentation depends on the accuracy of the registration. The best results are obtained using the GCV_1 + Vem algorithm; however, the results of GCV_2 + Vem and GCV_3 + Vem are also comparable. A detailed summary of the Dice coefficient for each considered algorithm is shown in Table 1 in Appendix A.
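For reference, the Dice overlap used throughout this evaluation is a direct implementation of the definition above:

```python
import numpy as np

def dice_coefficient(mask_ref, mask_est):
    """Dice overlap between a reference segmentation and an estimated one,
    with region volumes measured in voxels."""
    mask_ref = np.asarray(mask_ref, dtype=bool)
    mask_est = np.asarray(mask_est, dtype=bool)
    intersection = np.logical_and(mask_ref, mask_est).sum()
    return 2.0 * intersection / (mask_ref.sum() + mask_est.sum())
```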
The results produced by the registration method presented in Papie ż et al. ( 2013) show average Dice coefficients varying between 0.86 and 0.92.The joint segmentation and registration methods based on level-set registration algorithms presented in this article give comparable results with the best of 0.96 for GCV 1 + Vem beating others by around 0.03 on average.Moreover, GCV 1 + Vem algorithm achieves the highest average Dice measure among all algorithms which do not explicitly account for the sliding discontinuous motion between anatomical structures such as lungs, liver and pleura.
In Fig. 3 we present examples of lung segmentations obtained using the algorithms discussed above. In the algorithms incorporating Vemuri's vector field, Eq. (5), into Gorthi's framework, we chose λ = 0.5 in the vector field of Eq. (14). The CT scans used in the simulations come from the Dir-Lab database (Castillo et al., 2009). Black regions represent slices through the lung segmentations done by experts; red contours are the respective segmentations obtained using the considered algorithms. Notice that Fig. 3a–g visually confirm the results summarized in Table 2.
Registration accuracy
In the Dir-Lab data set each image has been labeled with a number of landmarks annotating anatomical features (Castillo et al., 2009). A registration algorithm, in order to be useful from the perspective of medical applications, needs to minimize the distance between points denoting the same location in the patient's body before and after the registration procedure. Therefore, a commonly used measure for registration evaluation (see Modersitzki, 2009) is the Target Registration Error (TRE), defined as the average Euclidean distance between corresponding points. This means that a set of points y_1, ..., y_M ∈ Ω in the target image I_T is specified together with corresponding points x_1, ..., x_M ∈ Ω chosen in the source image I_S, and

TRE = (1/M) Σ_{i=1}^{M} ‖ x_i − ( y_i + u(y_i) ) ‖,

where M is the number of landmarks in the images.
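The sketch below evaluates this measure for a displacement field defined on the target grid (so that I_S(x + u(x)) is compared with I_T(x)); the default voxel size is only illustrative, since the Dir-Lab in-plane spacing varies between 0.97 mm and 1.16 mm per case.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def target_registration_error(landmarks_T, landmarks_S, u, voxel_size=(1.0, 1.0, 2.5)):
    """Mean Euclidean distance (in mm) between the source landmarks and the target
    landmarks mapped through the displacement field u (shape (3, *volume_shape),
    in voxel units).  Landmarks are given in voxel coordinates."""
    landmarks_T = np.asarray(landmarks_T, dtype=float)   # shape (M, 3)
    landmarks_S = np.asarray(landmarks_S, dtype=float)
    # Sample each displacement component at the target landmark positions.
    disp = np.stack([map_coordinates(u[c], landmarks_T.T, order=1)
                     for c in range(u.shape[0])], axis=1)
    mapped = landmarks_T + disp                          # estimated source positions
    diff_mm = (mapped - landmarks_S) * np.asarray(voxel_size)
    return np.linalg.norm(diff_mm, axis=1).mean()
```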
The TRE measure for each registration algorithm presented in this paper is shown in Fig. 4. On the left-hand side of the figure we present the initial error computed before any registration was performed. The methods evaluated in this test can be divided into three groups: intensity-based registration (Vemuri's algorithm, Vemuri et al., 2003), denoted Vem; segmentation-driven registrations (GCV_1, GCV_2, GCV_3) based on the Chan–Vese segmentation (Chan et al., 2001); and the proposed joint segmentation and registration methods (GCV_1 + Vem, GCV_2 + Vem, and GCV_3 + Vem). As we can see, the TRE measure decreases for each registration. The highest accuracy in terms of the TRE is achieved by the joint segmentation and registration methods (GCV_1 + Vem, GCV_2 + Vem, and GCV_3 + Vem), with the best TRE = 3.40 mm for GCV_2 + Vem. The registration algorithms based on segmentation, exploiting only the prior knowledge about the position of the lungs in the target image, yield worse results than the proposed method, but perform well when compared to the classic intensity-based level-set registration Vem. Moreover, a smaller TRE is achieved by the level-set segmentation-based registration when more regions are selected to drive the registration; this can be explained by the fact that considering more regions provides more local information to the registration. Our extended algorithms coupling Gorthi's and Vemuri's methods (GCV_1 + Vem, GCV_2 + Vem and GCV_3 + Vem) not only give the most accurate results in terms of TRE but also provide the most accurate segmentations. Since the landmarks are not used in the algorithm and serve only for evaluation purposes, we consider the joint Gorthi–Vemuri algorithm to be the most accurate of all methods presented here. A complete summary of the TRE results is presented in Table 2 in Appendix A.
The Dir-Lab data set is widely used for registration accuracy evaluation, and the TRE computed on these data is known for many methods (see http://www.dir-lab.com/Results.html). The state-of-the-art registration algorithms involving sliding motion achieve average TRE results varying from 2.76 mm in Delmon et al. (2011), through around 1.5 mm in Papież et al. (2014), to the best known method with a TRE below 1 mm in Rühaak et al. (2013). All the mentioned algorithms report better results than the TRE = 3.40 mm achieved by GCV_2 + Vem. However, the results given by the algorithms presented in this article are comparable with the results obtained by the state-of-the-art demons algorithm not modeling sliding motion (Papież et al., 2014). The difference between the input images was used here as a similarity measure to drive the registration; however, CT volume intensities may change due to lung compression, and so a more sophisticated image representation (e.g. multiscale image normals, Droske and Rumpf, 2007) could further improve the overall accuracy.
In Fig. 5, examples of registration errors are shown. In the experiments we used lung CT scans from the Dir-Lab data set (Castillo et al., 2009). The source image I_S in Fig. 5a was registered to the target image I_T in Fig. 5b. The initial difference |I_T − I_S| is shown in Fig. 5c; the image is dark where the error is large and bright where it is small. We applied three registration algorithms: Vemuri's, the modified Gorthi's with the velocity vector field of Eq. (13), and their combination with a coupling parameter λ = 0.5. The respective errors, normalized by the same factor so that they take values between 0 (white) and 1 (black), are presented in Fig. 5d–f. As we can see, the difference between the source and target images decreases in the registration process. Notice that Vemuri's algorithm results in the smallest final difference. However, due to the prior segmentation used in Gorthi's method, the errors shown in Fig. 5e–f are visibly small in the regions occupied by the lungs. It is also worth noting that the anatomical shape of the lungs is preserved in these images. Note also that all algorithms presented here are less accurate in lung regions close to the ribs. This is because the sliding motion occurring there is not accounted for in the presented methods.
The number of iterations is tuned to achieve convergence in terms of TRE. Therefore, the runtimes for the presented methods vary remarkably and are as follows. The intensity-based registration (Vemuri's algorithm) takes 170 s to reach the stopping criterion (with fewer than 50 iterations performed). The segmentation-driven algorithms (GCV 1, GCV 2, GCV 3) require significantly more iterations (about 250) to reach the stopping criterion, and thus their runtime increases to 380 s. The higher number of iterations required to achieve convergence is expected during segmentation-driven registration, since only the points lying on contours contribute to the algorithm. The displacement field is diffused to the inside of the segmented structure by the regularization model. The proposed joint segmentation and registration algorithms (GCV 1 + Vem, GCV 2 + Vem and GCV 3 + Vem) require about 160 iterations to converge, with a runtime of 248 s. All methods used in this comparison were initialized with the identity transformation, as the inhale and exhale volumes included in the Dir-Lab data set come from the same acquisition (there is no need to compensate for patient positioning error using rigid registration).
All algorithms were implemented in MATLAB on a Mac OS machine with 8 GB of memory and a 1.6 GHz Intel Core i5 processor.
Conclusions
In this article we presented a novel joint segmentation and registration method using the level-set framework. In this framework, we combined the classic Chan-Vese segmentation algorithm with a non-linear intensity-based registration algorithm (Vemuri et al., 2003) using a generalized level-set formulation (Gorthi et al., 2011). The method was then extended to use several driving forces and was applied to lung CT scans. Compared to standard registration approaches, our proposed method is able to incorporate a segmentation prior into the cost function with a small computational effort. Furthermore, the accuracy was compared with state-of-the-art methods for segmentation and registration using the publicly available Dir-Lab data set (Castillo et al., 2009). The algorithm presented in this article produces very good segmentation results together with a satisfactory registration accuracy in terms of the TRE. However, the results of our joint segmentation and registration method are still inferior to those achieved by the current state-of-the-art lung registration methods when applied to the Dir-Lab data set. This may be due to the discontinuous motion between anatomical structures sliding at the chest boundary interfaces, which is not modeled in the presented framework (Papież et al., 2014). Our future work is focused on explicitly incorporating this discontinuous motion at the sliding interfaces of the lungs into the level-set propagation of the registration algorithm proposed here, to improve the overall accuracy. Another direction that could also be investigated is a joint partitioning and registration of lung lobes to provide a more realistic description of lung motion (Schmidt-Richberg et al., 2012).
Fig. 1. Example of superimposed lung segmentations in inhale state (red) and exhale state (green). Segmentation was performed by experts. The image comes from the publicly available Dir-Lab set of CT data described in Castillo et al. (2009). Function L introduced in Eq. (7) takes value 1 inside the red contour and value 0 outside of it. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 2. Comparison of Dice coefficients for the segmentation algorithms. CV denotes the Chan-Vese algorithm, GCV i represents consecutive modifications of Gorthi's algorithm and GCV i + Vem are our coupled Gorthi-Vemuri methods. Box edges represent 25th and 75th percentiles, the central mark represents the median, and whiskers extend between maximum and minimum values. Our proposed joint registration and segmentation algorithms are more accurate. Gorthi's algorithm gives better results when coupled with Vemuri's method.
Fig. 3. Examples of segmentation results for CT scans of lungs obtained using the Chan-Vese algorithm and the joint registration-segmentation algorithms introduced in Sections 2 and 3. Images present an axial view through segmentations of lung CT images coming from the Dir-Lab data set. Black regions are segmentations done by experts and the red contour surrounds the region segmented with the chosen algorithm. Algorithms GCV i + Vem, i ∈ {1, 2, 3}, are considered with the coupling parameter set to 0.5. The Chan-Vese segmentation algorithm appears to be the least accurate and is surpassed by the other algorithms for joint registration and segmentation. Incorporating Vemuri's algorithm into Gorthi's framework visibly improves segmentation accuracy (see e-g). These figures confirm the results presented in Fig. 2. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 4. Comparison of the Target Registration Error (in mm) for the registration algorithms. Box edges are 25th and 75th percentiles, the central mark represents the median and whiskers extend between maximum and minimum values. Gorthi's algorithm based on prior segmentation gives better results than Vemuri's method. Our method incorporating Vemuri's forces in Gorthi's algorithm improves the accuracy further.
Fig. 5. Example coronal view of 3D registration for CT scans of lungs. The presented method yields a noticeable improvement in the alignment, especially in the regions closer to the lung boundaries.
Table 1. Comparison of the Dice measure for the segmentation algorithms. CV denotes the Chan-Vese algorithm, GCV i represents consecutive modifications of Gorthi's algorithm and GCV i + Vem are our coupled Gorthi-Vemuri methods. Our proposed joint registration and segmentation algorithms are more accurate. Gorthi's algorithm gives better results when coupled with Vemuri's method.
Table 2. Comparison of the Target Registration Error for the registration algorithms. Gorthi's algorithm based on prior segmentation gives better results than Vemuri's method. Incorporating Vemuri's forces in Gorthi's algorithm improved the accuracy of the methods. | 9,627.8 | 2017-06-15T00:00:00.000 | [
"Medicine",
"Computer Science",
"Engineering"
] |
FINITE-SAMPLE SIZE CONTROL OF IVX-BASED TESTS IN PREDICTIVE REGRESSIONS
In predictive regressions with variables of unknown persistence, the use of extended IV (IVX) instruments leads to asymptotically valid inference. Under highly persistent regressors, the standard normal or chi-squared limiting distributions for the usual t and Wald statistics may, however, differ markedly from the actual finite-sample distributions which exhibit in particular noncentrality. Convergence to the limiting distributions is shown to occur at a rate depending on the choice of the IVX tuning parameters and can be very slow in practice. A characterization of the leading higher-order terms of the t statistic is provided for the simple regression case, which motivates finite-sample corrections. Monte Carlo simulations confirm the usefulness of the proposed methods.
INTRODUCTION
A common inferential task of practical relevance is to decide whether a potential predictor variable does indeed forecast another variable of interest. In the simplest setup, practitioners thus test the null hypothesis of no predictability in the model y_t = µ + βx_{t−1} + u_t, t = 2, ..., T, where the regressor is usually assumed to have an autoregressive structure, x_t = ρx_{t−1} + v_t, with initial condition bounded in probability, x_1 = O_p(1). With financial data, predictors such as dividend yields or earnings-price ratios are often quite persistent, even if still mean-reverting (typically captured by a value of ρ close to unity), and their shocks are contemporaneously correlated with the variable to be predicted (see Phillips, 2015, for a recent review). This biases the OLS estimator of the slope parameter and induces heavy non-normality of t statistics (Elliott and Stock, 1994; Stambaugh, 1999), such that tests for predictability are size-distorted. Near-to-unity asymptotics, obtained by letting ρ = 1 − c/T, offer a better approximation of the actual distribution of the OLS t statistic than the standard normal in this situation; cf. Elliott and Stock (1994). The limiting distribution of the OLS estimator and test is explicitly non-normal under near integration and depends on the mean-reversion parameter c and the correlation between u_t and v_t. Since consistent estimation of c is not possible in such highly persistent cases (Phillips, 1987), the literature has suggested several different ways of circumventing the lack of knowledge about ρ. See, among others, Campbell and Yogo (2006), Jansson and Moreira (2006), Maynard and Shimotsu (2009), Camponovo (2015), Phillips (2015), and Breitung and Demetrescu (2015).
Building on the work of Phillips and Magdalinos (2007) and Magdalinos and Phillips (2009), the extended IV (IVX) estimation and testing approach introduced by Phillips and Magdalinos (2009) is gaining momentum for predictive regressions; see, for example, Gonzalo and Pitarakis (2012), Phillips and Lee (2013), Kostakis, Magdalinos, and Stamatogiannis (2015), Demetrescu and Rodrigues (2016), Demetrescu et al. (2020), or Yang et al. (2020). In the IVX framework, x_{t−1} is instrumented by the specifically constructed instrumental variable $z_{t-1} = (1-\varrho L)^{-1}_{+} x_{t-1} = \sum_{j=0}^{t-2} \varrho^{j} x_{t-1-j}$, with initial condition z_1 = 0 and $\varrho = 1 - a/T^{\eta}$, where a > 0 and η ∈ (0,1). This "endogenous instrumentation" method has convenient properties: the persistence of z_t is under control, and lies below that of the near-integrated x_{t−1}. Under regularity conditions, the resulting IV estimator follows a mixed Gaussian distribution in the limit, and the limiting null distribution of the corresponding t ratio is standard normal.
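A minimal sketch of the instrument construction (a Python illustration under our own conventions, not the authors' code) is given below; it follows the level-based definition quoted above, with z_1 = 0 and the user-chosen tuning parameters a and η.

```python
import numpy as np

def ivx_instrument(x, a=1.0, eta=0.95):
    """Construct the IVX instrument z_{t-1} with z_1 = 0 and rho_z = 1 - a / T**eta.

    Uses the recursion z_t = rho_z * z_{t-1} + x_t implied by the level-based
    definition in the text; note that other IVX papers (e.g. Kostakis et al., 2015)
    filter first differences of x instead, so this convention is an assumption.
    """
    x = np.asarray(x, dtype=float)
    T = x.shape[0]
    rho_z = 1.0 - a / T**eta
    z = np.zeros(T)
    for t in range(1, T):
        z[t] = rho_z * z[t - 1] + x[t]
    return z, rho_z
```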
Should the regressor x_t be highly persistent with localization parameter c close to zero, the IVX-based test of no predictability may still be seriously distorted in finite samples, even if less so than the OLS-based test. This is clearly the case when choosing η too close to unity or a too close to zero, so that the difference in persistence between the instrument z_t and the regressor x_t becomes small: for example, the rule of thumb proposed by Kostakis et al. (2015), which sets $\varrho = 1 - 1/T^{0.95}$, is actually equivalent to a near unit root with localizing coefficient c between 1 and 2 for sample sizes between T = 100 and T = 10,000. Kostakis et al. (2015), therefore, recommend the use of a finite-sample correction leading to reliable size control for two-sided tests. We find, however, that the correction is not equally effective for tests against one-sided alternatives. This is relevant in practice, as economic theory often predicts a certain sign of the slope coefficient β.
We, therefore, examine in Section 2 the behavior of components of the t statistic that vanish in the limit but still have an effect in finite samples. We do so in a setup allowing for deterministically varying variances and correlations of the errors u_t and v_t. Since the main source of distortions in finite samples appears to be the fact that the finite-sample distribution is not centered at zero (see also Stambaugh, 1999), we focus on correcting for the noncentrality of the t ratio. One way of doing so is to resort to backward and forward demeaning of the involved variables. In time series analysis, backward (or recursive, or adaptive) demeaning can be traced back at least to the work of So and Shin (1999), where recursive demeaning is shown to reduce bias in estimators of large autoregressive roots. Specifically for (panel) predictive regressions, Westerlund, Karabiyik, and Narayan (2017) resort to forward and backward demeaning to reduce endogeneity bias. While this is shown to stabilize size, we also find that it has the side effect of reducing power in a nontrivial manner. This is a specific effect of forward demeaning in the context of persistent predictors, and not of IVX. Therefore, we discuss the use of direct approximations of the higher-order terms affecting the finite-sample behavior of the t statistic. Some depend on the localizing coefficient c, which cannot be consistently estimated, so we provide a method of side-stepping this issue. In extensive Monte Carlo experiments (see Section 3), we find it to work reasonably well under various patterns of changing error variances.
The technical details of the proofs can be found in the Appendix and in an Online Supplement, which also contains additional simulation results pertaining to conditional heteroskedasticity.
Preliminaries
Let us first specify the details of the predictive regression model we work with. Assumption 1. The data {y_t, x_t}, t = 2, ..., T, are generated from (1) and (2). To keep a realistic setup, we allow for error heterogeneity in the form of time-varying variances and correlations, as well as short-run dynamics. Specifically, we work under the following assumptions.
This would be a typical structure in predictive regressions for stock returns, where the disturbance u_t is not predictable using the past of v_t. We do not assume a particular distribution for the errors but only require finite fourth-order moments. Although daily returns may exhibit fat tails, standard predictive regression models are used in conjunction with monthly, quarterly, or even annual data, where infinite kurtosis is not an issue. For the same reason, the serial independence assumption we make on the innovations is justifiable. The 1-summability condition placed on the coefficients of the filter is standard in the literature involving integrated and near-integrated variables. Let Σ(t/T) denote the covariance matrix of the innovations and notice that we have time-varying variances, covariances, and correlations of the errors, as Cov((u_t, ν_t)′) = Σ_t, which is not restricted beyond piecewise smoothness. The off-diagonal elements of Σ_t are not required to be zero, thereby allowing for predictive regression endogeneity. The assumption on H(s) allows for a wide range of covariance matrices of the innovations, including, for example, single or multiple (co-)variance shifts, smooth transition (co-)variance shifts, or even trending variances.
With W a vector of two independent standard Wiener processes and "⇒" denoting weak convergence of probability measures on the space of càdlàg real functions on [0,1] equipped with the Skorokhod topology, we have (see Cavaliere, Rahbek, and Taylor, 2010) that the normalized levels of x_t converge weakly to a heteroskedastic Ornstein-Uhlenbeck type process. IVX estimation relies on using the instrument together with Eicker-White standard errors to account for the heteroskedasticity. The residuals û_t are computed using the OLS estimator of β, as is common in the predictive IVX regression literature.
(Figure 1 caption: data generated from (1) with (2), ρ = 1, (u_t, v_t)′ ∼ iid N(0, ((1,δ); (δ,1))), 25,000 replications, different correlations δ, and sample sizes.)
What makes the IVX approach interesting for practitioners is that the terms involving c vanish as T → ∞ and pivotal inference on β can be obtained asymptotically. See Kostakis et al. (2015) for details on IVX-based predictive regression under strict stationarity of errors, and Demetrescu and Rodrigues (2016) for a case with time-varying variances with some (nontrivial) restrictions on the correlations. In finite samples, however, the actual distribution is not centered at zero because numerator and denominator correlate, and has a variance somewhat smaller than 1, as can be seen in Figure 1. Notice also the slow convergence to the standard normal.
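For concreteness, a simple-regression sketch of the IVX t statistic with Eicker-White standard errors is given below. The demeaning and standard-error conventions (demeaned y_t and x_{t-1}, undemeaned instrument, residuals from the OLS slope) are our assumptions and may differ in detail from the implementation behind Figure 1.

```python
import numpy as np

def ivx_t_stat(y, x, a=1.0, eta=0.95):
    """IVX t statistic for H0: beta = 0 in y_t = mu + beta * x_{t-1} + u_t (sketch)."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    T = y.shape[0]
    rho_z = 1.0 - a / T**eta
    z = np.zeros(T)
    for t in range(1, T):                      # level-based instrument with z_1 = 0
        z[t] = rho_z * z[t - 1] + x[t]
    xlag, zlag, yt = x[:-1], z[:-1], y[1:]     # align x_{t-1} and z_{t-1} with y_t
    xd, yd = xlag - xlag.mean(), yt - yt.mean()
    beta_ivx = np.sum(zlag * yd) / np.sum(zlag * xd)
    beta_ols = np.sum(xd * yd) / np.sum(xd * xd)
    u_hat = yd - beta_ols * xd                 # residuals via OLS, as stated in the text
    se = np.sqrt(np.sum((zlag * u_hat) ** 2)) / abs(np.sum(zlag * xd))
    return beta_ivx / se
```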
Higher-Order Terms
We, therefore, study corrections that make IVX-based inference in predictive regressions even more reliable. To this end, we first characterize the leading terms of the IVX t statistic.
Proposition 1. Under Assumptions 1-3 and any η ∈ (1/2, 1), it holds as T → ∞ that $t_{vx} = Z_T + B_T + C_T + o_p(T^{\eta/2-1/2})$.
Proposition 1 provides an explanation for the finite-sample behavior of t_vx as observed in Figure 1 for c = 0. For instance, the direction and magnitude of the noncentrality depend on the average sign and magnitude of the correlation between the errors u_t and ν_t via the two terms B_T and C_T. It can be seen from the discussion below that, under constant correlation δ, the magnitude of the noncentrality is in fact proportional to δ. Moreover, the slow convergence of t_vx to the standard normal seen in Figure 1 can also be explained by the behavior of the two terms B_T and C_T: although they do vanish, they do so at rate $T^{\eta/2-1/2}$, which is slow whenever η is close to unity.
The noncentrality is mainly driven by the two terms B_T and C_T. The first depends on the user-chosen parameters a and η (with the additional restriction η > 1/2 required for the calculation of E(B_T)), as well as on a particular form of average correlation. In fact, under homoskedasticity (H = const.), the expectation of $T^{1/2-\eta/2} B_T$ is asymptotically equivalent to $-\delta/\sqrt{2a}$, with δ the constant correlation of u_t and ν_t. If $\sigma_{u\nu}(s) = 0$ for all s ∈ [0,1], B_T does not affect the centering of t_vx.
The same holds for the second component, C_T: if $\sigma_{u\nu}(s) = 0$ for all s ∈ [0,1], then U_H and B_{c,H} are independent; therefore, the expectation of the limit of the normalized C_T is zero as well. Should there, however, be contemporaneous correlation, the behavior of C_T, in particular its expectation, does depend on c. Moreover, the dependence is nonlinear, as the relevant expectation involves the functional $\int_0^1 e^{-c(1-s)}\sigma_{u\nu}(s)\,ds$ discussed below. As expected, the expectation decreases in magnitude as c increases. This expression simplifies under homoskedasticity, where $T^{1/2-\eta/2} C_T$ has an asymptotic expectation depending on the (constant) correlation δ, namely $-\delta\sqrt{2/a}\,\frac{1-e^{-c}}{c}$. For c = 0, the case with the largest distortions, this expectation is twice as large as that of the normalized B_T, with the relative importance of C_T diminishing as c increases. This component depends, however, on the localizing coefficient c, which cannot be consistently estimated, unlike the expectation of B_T. Figure 2 plots the contribution of both B_T and C_T to the noncentrality of the t statistic t_vx.
(Figure 2 caption fragment: $\varrho = 1 - 1/T^{0.95}$ (see Section 3 for more details); right panel: Σ_t = ((1, δ(t/T)); (δ(t/T), 1)), δ(·) switching between −0.5 and −0.95.)
We note that heteroskedasticity only has a secondary influence
compared to the localizing coefficient c, and that most (but not all) of the finite-sample noncentrality seen in Figure 1 for c = 0 is accounted for by the two terms.
Corrections
The term C_T from Proposition 1 appears because of the full-sample demeaning of the dependent variable (see the proof of Proposition 1 for details). To deal with this, we first discuss recursive demeaning as a possible correction. In particular, we use backward recursive demeaning for the regressor and forward demeaning for the dependent variable. The motivation for such demeaning schemes is that the recursively demeaned regressor and the forward demeaned disturbance are orthogonal irrespective of the correlation between u_t and ν_t, which is not the case with usual demeaning. Such orthogonal schemes of mean adjustment have been used before in predictive regressions: for example, Westerlund et al. (2017) use such a scheme to develop a predictability test in panel predictive regression. In fact, in the panel literature, forward and backward demeaning have a much longer history in dealing with the Nickell bias (Nickell, 1981); see Everaert (2013) for a recent contribution.
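The exact demeaning formulas are not reproduced above; the sketch below uses one common form of backward recursive and forward demeaning, which may differ in detail from the scheme applied in the paper.

```python
import numpy as np

def backward_recursive_demean(x):
    """Backward (recursive) demeaning: subtract from x_t the mean of x_1, ..., x_t."""
    x = np.asarray(x, float)
    return x - np.cumsum(x) / np.arange(1, x.size + 1)

def forward_demean(y):
    """Forward demeaning: subtract from y_t the mean of y_t, ..., y_T."""
    y = np.asarray(y, float)
    remaining = np.arange(y.size, 0, -1)            # T - t + 1 observations remain at time t
    return y - np.cumsum(y[::-1])[::-1] / remaining
```

The point of combining the two is that the forward demeaned disturbance at time t involves only current and future errors, so it is orthogonal to the backward recursively demeaned regressor, which is built from the past.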
This effect of recursive adjustment is very much in the spirit of the proposal of Kostakis et al. (2015), who point out that not demeaning the instrument z_{t−1} (while still demeaning the dependent variable and the predictor itself to account for a nonzero intercept in the predictive regression) reduces the finite-sample correlation between the numerator and the denominator of the IVX t statistic. We shall examine the corrections of Kostakis et al. (2015) more closely after analyzing the effect of the orthogonal mean adjustment scheme in Proposition 2, which holds under the assumptions of Proposition 1 as T → ∞. Proposition 2 shows that the dependence on c of the leading higher-order terms may in fact be eliminated. Our Monte Carlo study (see Section 3) shows that t^rec_vx performs quite well in terms of size in spite of the remaining term B_T, so the first correction we suggest is orthogonal mean adjustment.
The Monte Carlo study also shows that the local power of t^rec_vx is low. To see why, examine the corresponding numerator, with ẍ_t denoting forward demeaned quantities. Under the alternative β ≠ 0, power is driven by the cross-product (z_{t−1} − z̄_{t−1})(x_{t−1} − ẍ_{t−1}). But the forward demeaned x_{t−1} may be rewritten in a form that reveals a weakened covariation with the instrument, so the effect on t^rec_vx under the alternative is similar to that of a weak instrument. This phenomenon is caused by the correction itself and not by the IVX instrumentation.
Turning our attention to the corrections proposed by Kostakis et al. (2015), they result in a modified statistic t^W_vx, with ω̂²_u and ω̂²_v estimators of the long-run variances of u_t and v_t, and λ̂_uv an estimator of the long-run covariance of u_t and v_t. The behavior of t^W_vx is discussed in Proposition 3. Under the assumptions of Proposition 1, it holds as T → ∞ that $t^{W}_{vx} = Z_T + B_T + C_T + o_p(T^{\eta/2-1/2})$, with Z_T, B_T, and C_T from Proposition 1.
Proof: See Appendix B.
Although t^W_vx has the same leading terms as t_vx, we give in Section II of the Online Supplement some of the higher-order terms of t_vx which are of order $O_p(T^{\eta-1})$ and missing from t^W_vx. Since they contribute to the noncentrality (cf. Kostakis et al., 2015, p. 1516, and also the differences seen by comparing Figures 1 and 2), their absence likely improves the finite-sample behavior of t^W_vx and thus explains our findings in the Monte Carlo section that the two-sided t^W_vx statistic performs remarkably well. The impact of the terms B_T and in particular C_T on the one-sided versions of t^W_vx is, however, not negligible and we, therefore, move on to propose explicit corrections for B_T and C_T.
The quantities involved in the expectation of B_T may for instance be estimated using smoothed residuals, delivering an estimate of the required average correlation. In dealing with C_T, it may be tempting to proceed analogously. Yet, with c unknown and no consistent estimator available, this approach seems of limited applicability in general. Alternatively, one may try to match the functional $\int_0^1 e^{-c(1-s)}\sigma_{u\nu}(s)\,ds$ using the expectation of another functional depending on c. We illustrate this idea for the case of homoskedasticity, where $2\,\operatorname{Var}\!\big(J_{c,H}(1/2)\big) = \frac{1-e^{-c}}{c}$, which suggests employing a quantity with this expectation to accommodate the noncentrality induced by C_T.
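The variance identity used above can be checked numerically; the following sketch simulates $J_{c}(1/2)$ for the homoskedastic case on a discrete grid and compares $2\,\mathrm{Var}(J_{c}(1/2))$ with $(1-e^{-c})/c$.

```python
import numpy as np

def check_ou_variance(c=5.0, n_steps=2000, n_rep=20000, seed=0):
    """Simulate J_c(1/2) = int_0^{1/2} exp(-c*(1/2 - s)) dW(s) and compare
    2*Var(J_c(1/2)) with the closed form (1 - exp(-c)) / c."""
    rng = np.random.default_rng(seed)
    ds = 0.5 / n_steps
    s = (np.arange(n_steps) + 0.5) * ds                    # grid midpoints on [0, 1/2]
    weights = np.exp(-c * (0.5 - s))
    dW = rng.normal(0.0, np.sqrt(ds), size=(n_rep, n_steps))
    j_half = dW @ weights                                  # Monte Carlo draws of J_c(1/2)
    return 2.0 * j_half.var(), (1.0 - np.exp(-c)) / c

print(check_ou_variance())  # the two numbers should be close
```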
While it is likely possible to modify this approach in certain particular cases (say for breaks in variances and covariances at suitable times), the general case seems out of reach. We, therefore, propose the use of the correction for the homoskedastic case (and also point to Figure 2 as additional motivation for this proposal). In fact, our simulations in Section 3 and in Section III of the Online Supplement show this to work reasonably well under heteroskedasticity too.
The resulting correction term for the expectation of B_T is then $\hat b_T = -\hat\delta\big/\sqrt{2T(1-\varrho)}$, where δ may be estimated as the correlation of ν_t and u_t based on û_t and ν̂_t from an AR(p) approximation of x_t, with p selected via an information criterion (we resort to the Akaike IC).
For C_T, the natural choice following from the property of the Ornstein-Uhlenbeck process discussed above is then $\frac{1}{\psi^{2}\sigma^{2}_{\nu}}\,T^{-1}x^{2}_{[T/2]}$, leading to a corresponding correction term ĉ_T, since $\omega^{2} = \psi^{2}\sigma^{2}_{\nu}$ is simply the (stationary) long-run variance of v_t (which may be estimated either based on x_t, or, as we proceed in our simulations, on the residuals of a first-order autoregression of x_t).
It should be noted, however, that this delivers a noisy proxy for the mean of C_T: while it will remove the noncentrality due to C_T (at least under homoskedasticity), it will at the same time marginally inflate the variance of the corrected t statistic. The presence of the estimator ω̂² in the denominator further inflates the variance: since we employ a nonparametric estimator, its variability in finite samples is large enough to offset part of the positive effect of the correction. Concretely, it induces outliers in the distribution of the correction and inflates the variance of the corrected statistic. To deal with these issues, we add finite-sample modifications which do not affect the asymptotics.
Finally, should x_t be stationary instead of near-integrated, this bias correction may overcorrect, since, for ρ away from unity, the standard normal asymptotics do relatively well even when ̺ is close to unity; see Kostakis et al. (2015). A practical adjustment of the correction is to restrict ̺ in t*_vx to be smaller than an estimate of ρ. In particular, we suggest using min{̺, ρ̂}, where ρ̂ is the OLS estimator in a first-order autoregression of x_t. Asymptotically, this restriction makes no difference under near-integration, but it prevents the bias correction from "overshooting." To sum up, we suggest to use, with $\hat b_T = -\hat\delta\big/\sqrt{2T\,(1-\min\{\varrho,\hat\rho\})}$, the bias-corrected statistic t*_vx, in which t_vx is recentered by the correction terms; a factor δ̂/3 is intended to capture the finite-sample correlation of ĉ_T and t_vx and is tuned to homoskedasticity. Our Monte Carlo study in Section 3 and in Section III of the Online Supplement shows that t*_vx works well under heteroskedasticity too.
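A rough sketch of the B_T-type centering term, based on the expressions reconstructed above, is given below. The full definition of t*_vx (including the C_T correction built from $x^2_{[T/2]}$ and the δ̂/3 adjustment) is not reproduced in the text, so the sketch stops at b̂_T; the AR(p) order selection and the AR(1)-based ρ̂ follow the description in the last two paragraphs, while the implementation details are our own.

```python
import numpy as np

def b_correction(x, u_hat, varrho, p_max=8):
    """Estimate b_hat = -delta_hat / sqrt(2*T*(1 - min(varrho, rho_hat))).

    delta_hat: correlation of the predictive-regression residuals u_hat with the
               innovations of an AR(p) approximation of x (p chosen by a rough AIC).
    rho_hat:   OLS slope of a first-order autoregression of x, used to keep the
               correction from overshooting when x is stationary.
    """
    x = np.asarray(x, float)
    u_hat = np.asarray(u_hat, float)
    T = x.size
    rho_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)   # AR(1) slope, no intercept
    best_aic, nu_hat = np.inf, x[1:] - rho_hat * x[:-1]
    for p in range(1, p_max + 1):                            # AR(p) residuals of x via AIC
        Y = x[p:]
        X = np.column_stack([x[p - j:T - j] for j in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        res = Y - X @ coef
        aic = (T - p) * np.log(np.mean(res ** 2)) + 2 * p
        if aic < best_aic:
            best_aic, nu_hat = aic, res
    n = min(u_hat.size, nu_hat.size)
    delta_hat = np.corrcoef(u_hat[-n:], nu_hat[-n:])[0, 1]
    return -delta_hat / np.sqrt(2.0 * T * (1.0 - min(varrho, rho_hat)))
```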
FINITE SAMPLE EVIDENCE
In this section, we provide finite sample evidence on the merits of the remedies proposed in this paper. We use a data generating process (DGP) as outlined under equations (1) and (2), with an independent innovation process governed by a bivariate normal distribution with a correlation coefficient of δ = −0.95 (which is typical for predictive regressions with stock returns; see, e.g., Phillips, 2015), as well as time-varying volatility. The size study results are generated using 10,000 replications and considering c ∈ {0,1,5,10,30,50} together with β = 0 for T = 250 and 500. To analyze the behavior of the corrected tests under the alternative, we consider a sequence of local alternatives characterized by $\beta = \frac{b}{T}\sqrt{1-\delta^{2}}$, for b ∈ {−26, −24, ..., −2, 0, 2, ..., 26}. Note that under b = 0 the size properties of the test are recovered. Since the sign of β might be known in practice (as is often the case when the choice of the predictor is motivated by economic theory), we consider local alternatives covering both situations, β < 0 and β > 0, alongside cases where two-sided testing is of interest. Throughout this section, we fix a = 1 and η = 0.95, following the recommendation of Kostakis et al. (2015).
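The DGP can be sketched as follows. The particular variance pattern is not spelled out above (the details appear in displays omitted here), so the single upward break in the volatility of u_t at mid-sample used below is only an illustrative choice, as is the local-alternative scaling reconstructed from the text.

```python
import numpy as np

def simulate_predictive_dgp(T=250, c=0.0, b=0.0, delta=-0.95, phi=0.5, seed=None):
    """Simulate y_t = beta*x_{t-1} + u_t and x_t = rho*x_{t-1} + v_t with rho = 1 - c/T.

    beta follows the local alternative beta = (b/T)*sqrt(1 - delta**2); (u_t, nu_t) are
    bivariate normal with correlation delta, v_t = phi*v_{t-1} + nu_t adds short-run
    dynamics, and sigma_u doubles after t = T/2 as an illustrative volatility break.
    """
    rng = np.random.default_rng(seed)
    rho = 1.0 - c / T
    beta = (b / T) * np.sqrt(1.0 - delta**2)
    sigma_u = np.where(np.arange(T) < T // 2, 1.0, 2.0)
    cov = np.array([[1.0, delta], [delta, 1.0]])
    eps = rng.multivariate_normal(np.zeros(2), cov, size=T)
    u, nu = sigma_u * eps[:, 0], eps[:, 1]
    v = np.zeros(T)
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        v[t] = phi * v[t - 1] + nu[t]
        x[t] = rho * x[t - 1] + v[t]
        y[t] = beta * x[t - 1] + u[t]          # intercept mu set to zero for simplicity
    return y, x
```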
We compare four versions of the IVX statistic testing the null β = 0: the original IVX t statistic (t_vx), the finite-sample adjusted version of Kostakis et al. (2015) (t^W_vx; Kostakis et al. (2015) consider a Wald statistic W for which (t^W_vx)² = W), the IVX t statistic computed with orthogonal mean adjustment (t^rec_vx), as well as our bias-corrected proposal (t*_vx). Table 1 shows the finite-sample rejection frequencies at the 5% nominal level for strong negative contemporaneous correlation δ = −0.95 (the findings are symmetric in the sign of δ; moreover, size behavior improves uniformly for decreasing magnitude of δ, so we do not include the exact figures to save space). The finite-sample noncentrality of the standard IVX t statistic, t_vx, leads as expected to huge size distortions that only drop to reasonable levels for c = 10 if not c = 30. The time variation of the variance influences these distortions, but not by much. Also, they do not drop with increasing T, as predicted by the slow rates in Proposition 1. The statistic t^W_vx on the other hand shows that the finite-sample corrections introduced in Kostakis et al. (2015) work excellently in the two-sided case. Only for c = 50 can one observe a very slight tendency to overreject (with rejection frequencies closer to 6% than to 5% for T = 250). However, the t^W_vx statistic does not behave too well in each tail taken alone, as it tends to overreject to the right (one sees large rejection frequencies for small c, and even for c = 50 we note rejection frequencies above 8%) and to underreject to the left (this is most visible for small c, where the rejection frequencies are below 1%). This does not improve significantly for larger T = 500, and exhibits little variation across the different variance patterns. The statistic with backward and forward recursive demeaning, t^rec_vx, has very good size control (with some exceptions for c = 0, where rejection frequencies of 7% may be observed for the test against right-sided alternatives, and some cases of under-rejection: for left-sided testing under downward breaks and c = 0,1 we observe rejection frequencies of 2 or 3%). Finally, the t*_vx statistic has the best size control of all four tests: while it sometimes underrejects for left-sided testing (in the same situations where the t^rec_vx statistic was undersized), most rejection frequencies lie between 4% and 6%, with only a handful of cases where the 6% threshold is exceeded, and no rejection frequency above 7%.
Summing up, all three modified statistics may be used in a two-sided testing situation as far as size control is concerned. For one-sided testing situations, the use of t^W_vx is not recommended, as it overrejects to the right and severely underrejects to the left, which has a dampening effect on rejection frequencies under the alternative; see below. The simulations will also confirm the power-reducing effect of the orthogonal mean adjustment scheme mentioned after Proposition 2.
We present in Figures 3-5 plots of rejection frequencies of the four statistics compared for c = 0,10,30 and all variance patterns and test variants (left-, right-, and two-sided).
For left-sided testing, it is t*_vx that has the best rejection rates in all cases. Compared to t_vx and t^W_vx, this is because t*_vx is centered correctly and therefore not undersized. Here, t_vx seems to perform a bit better than t^W_vx. The test based on backward and forward adjustment has poor rejection properties under the alternative; the gap to the other tests decreases as c increases, but rejection rates drop anyway with increasing c.
(Caption for Figures 3-5: data generated from (1) and (2) with v_t = φv_{t−1} + ν_t for φ = 0.5, where (u_t, ν_t)′ ∼ iid N(0, Σ_t) and Σ_t exhibits constant correlation δ = −0.95 and time-varying variances. We set ρ = 1 − c/T for various c and ̺ = 1 − 1/T^{0.95} and use standard normal critical values. See the text for details.)
For right-sided testing, t_vx rejects very often, but this is of course due to its extreme liberality compared to the other tests. The test based on t^rec_vx performs, like before, worst (again, with differences decreasing as c increases). To the right, t^W_vx typically rejects more often than t*_vx, but keep in mind that it is also quite oversized, even if not as oversized as the uncorrected t_vx.
Finally, examining the two-sided tests, we observe as expected a combination of the findings for the left- and right-sided tests, with the difference that the t^W_vx test is now correctly sized and the corresponding test decisions are now reliable. The test based on t*_vx is also correctly sized, and the power ranking of the two depends on the sign of β under the alternative. While t^W_vx is more powerful against right-sided alternatives, but less powerful against left-sided ones, t*_vx exhibits a more balanced behavior. Again, the larger c, the closer the rejection frequencies of the three corrected tests.
Summing up, when theory provides clear justification for using a one-sided test, we can safely recommend the use of t*_vx. For two-sided testing, one has the choice between t^W_vx and t*_vx, with the symmetry of the rejection frequencies under the alternative being an argument in favor of t*_vx, and the higher power against right-sided alternatives (or left-sided, should the correlation δ be positive) being an argument in favor of t^W_vx. Altogether, as our Monte Carlo results show, we would like to stress that t^W_vx has very good size control for two-sided testing. Allowing for conditional heteroskedasticity does not alter this general recommendation (see Section III of the Online Supplement for further Monte Carlo simulation results supporting this claim).
CONCLUDING REMARKS
A convenient approach in the context of predictive regressions where the persistence of the endogenous forecasting variable is unknown, is to turn to IV regressions where a so-called extended instrumental variable with a controlled level of persistence is constructed. The resulting IVX estimator is asymptotically mixed Gaussian and makes for standard asymptotic inference. Finite-sample deviations from the asymptotic limit can, however, be quite serious. Typically manifested in the form of noncentrality, they depend heavily on how the IV estimator is constructed.
In this paper, we provide a structured approach to control the small sample noncentrality of the IVX t statistic for a given instrumental variable, and as a result control the size distortions. First, we develop a higher-order expansion of the corresponding IVX t statistic and as such provide a theoretical understanding of the small sample deviations of the t statistic from its limit. This in turn suggests ways to center the t statistic at the origin under the null. Combining forward and recursive demeaning does account for most leading terms of the bias at the cost of some loss of power. An explicit correction for the noncentrality achieves similar size control but without the power reduction. These proposals do not assume any parametric restriction on the persistence of the extended instrumental variable, and rather provide, for any given parameterization thereof, a corresponding way of reducing noncentrality.
Our recommendations do not concern Wald tests of the null of no predictability in multiple regressions, the main reason being that the corrections already proposed by Kostakis et al. (2015) are quite effective for the Wald statistic. We leave a full analysis of higher-order terms of the involved quadratic forms to future work.
Our Monte Carlo study shows that all of these proposals provide substantial remedies to small sample size distortions to the IVX t statistic while maintaining relatively good properties under the alternative. Further, when the effect of a forecasting variable is negative, we suggest using a left-sided t statistic with one of the corrections we provided in this paper, since our Monte Carlo study provides evidence that such a strategy is associated with a better statistical power compared to using a two-sided test. For two-sided alternatives, the Wald test of Kostakis et al. (2015) offers the better balance between size and power.
SUPPLEMENTARY MATERIAL
Proof of Proposition 1
Begin by applying Lemma A.2; since $T^{-\eta/2} = o(T^{\eta/2-1/2})$, we may focus on the leading term. For the second term, we have (using the arguments in the proof of Lemma A.1 of Demetrescu et al., 2020) | 7,168 | 2020-08-10T00:00:00.000 | [
"Economics"
] |
Effect of PCE on Properties of MMA-Based Repair Material for Concrete
Methyl methacrylate (MMA)-based repair material for concrete has the characteristics of low viscosity, excellent mechanical properties, and good durability. However, its application is limited due to its large shrinkage. Existing studies have shown that adding perchloroethylene can reduce the shrinkage. On this basis, other properties of modified MMA-based repair materials were tested and analyzed in the present study. The results revealed that the addition of perchloroethylene (PCE) can hinder the polymerization reaction of the system. When CaCO3 with a mass fraction of 30% was added, the viscosity of the material was within the range of 450–500 mPa·s, and the shrinkage decreased to approximately 10%. The bending strength of MMA, and MMA modified by PCE, repair materials at 28 days could reach up to 28.38 MPa and 29.15 MPa, respectively. After the addition of HS-770 light stabilizer with a mass fraction of 0.4%, the retention ratios of the bending strength of materials with ratios of P0 and P3 could reach 91.11% and 89.94%, respectively, after 1440 h of ultraviolet radiation. The retention ratio of the bending strength of the material could reach more than 95% after immersion in different ionic solutions for 90 days.
It was found that when the width of a crack is less than 0.2 mm, the repair effect of using a methyl methacrylate (MMA)-based repair material is ideal. The viscosity of MMA can reach as low as 0.8 mPa·s [18]. Furthermore, MMA has good heat resistance, wear resistance, chemical corrosion resistance, and impermeability [19][20][21], and has a strong bond with cement concrete [22]. However, the volume shrinkage of MMA during polymerization is approximately 21%. This large shrinkage affects the bonding effect. Therefore, it is necessary to improve this shrinkage [23,24].
There are two main ways to reduce shrinkage. One approach is to change the structure of the polymer itself, while the other approach is to add inorganic fillers or low profile additives [25,26]. Li, W.-G. [27] studied the effect of adding alumina and aluminum hydroxide to a MMA-based repair material on the shrinkage ratio, and found that the volume shrinkage of the sample decreased. Ou, Y.-G. [28] studied a system of polyvinyl acetate and styrene to improve the shrinkage of MMA. The results revealed that the shrinkage decreased, but the bonding strength also decreased. Su, Z.-Y. [29] preliminarily studied the modification of MMA material by adding epoxy resin, and found that the volume shrinkage significantly decreased. Mun, K.-J. and Choi, N.-W. [30] used different contents of unsaturated polyester resins as adhesives to study the effect on the properties of expanded polystyrene-based polymethyl methacrylate mortar. The experimental results revealed that a low shrinkage can be achieved. Li, X. [31] mixed MMA with sodium acetate to improve the shrinkage. The results revealed that the shrinkage ratio decreased. However, the viscosity of the repair material increased, and the fluidity became worse at the same time. Yang, Z.-W. [32] added perchloroethylene to a MMA system, and the shrinkage ratio decreased to a certain extent. In addition, experiments on bond strength were carried out. It was found that the adhesiveness was good enough to meet the requirements for repair engineering, but other properties were not studied. In addition to the consideration of shrinkage, other properties of modified MMA-based repair materials should also meet the requirements of repair engineering [33]. Therefore, other properties of modified MMA-based repair materials, such as viscosity, bending strength, ultraviolet aging resistance, and chemical erosion resistance, were tested and analyzed in the present study.
The inorganic filler was mainly 1500 mesh heavy calcium carbonate (CaCO 3 ). The relevant information about the calcium carbonate is shown in Table 1. Table 2 shows the ratio of raw materials used in the preparation of the MMA-based repair material.
Preparation of the MMA-Based Repair Material
During storage of MMA, a small amount of an inhibitor is added to the MMA to prevent slow self-polymerization. In general, the content is less than 0.001% by weight, but even this would affect the polymerization reaction. As a result, the polymerization inhibitor had to be removed first. For this, the raw MMA was distilled for 10 min at 50 °C in a water bath. Then, after removing the polymerization inhibitor, the MMA was added to a three-mouth flask together with the PCE, initiator, plasticizer, and other additives, and the polymerization reaction was carried out under stirring at 80 °C in a water bath [34]. The sample preparation device is shown in Figure 1. During the reaction process, it was necessary to monitor the viscosity of the system at all times, in order to prevent explosive polymerization. After the reaction had proceeded for 35-50 min, the viscosity was relatively high, and the heating was stopped in time. Then, the three-mouth flask was taken out and cooled in cold water, in order to obtain the prepolymer of the repair material.
Viscosity
The viscosity of the repair material was measured using an NDJ-1 rotary viscometer (Shenzhen, China). The viscometer equipment is shown in Figure 2. First, the sample was placed in the container. Then, the appropriate rotor and rotational speed were selected. Finally, the motor was started, and the data were read after the pointer was stable. After two parallel tests were carried out, the average value was taken as the viscosity. When the sample with the ratio of P0 in Table 2 was prepared, the viscosity was measured at different time points, in order to determine the effect of the reaction time on the viscosity of the repair material. Samples with ratios of P0, P1, P2, P3, P4, and P5 were prepared by adding different proportions of PCE. Then, the viscosity of these different systems was measured at the same time, and compared with each other.
Shrinkage
A small test-tube was weighed and measured to obtain its mass m_01 and volume V_2. Then, the prepared sample was poured into the test-tube, and the total mass was measured to obtain the mass m_02. As shown in Figure 3, the test-tube was placed in an oven at 60 °C for 4 h. Then, it was taken out and placed indoors. After five days, the test-tube was broken, and the sample was taken out. The mass m_1 was obtained by weighing, the volume V_1 was measured using the drainage method, and the shrinkage ratio S was calculated according to Formula (1). Two samples of each ratio were taken for the measurement, and their average value was taken as the result. Here: S: the shrinkage of the sample, %; m_1: the mass of the sample after curing, g; V_1: the volume of the sample after curing, mL; m_01: the mass of the small test tube, g; m_02: the mass of the small test tube + the sample, g; V_2: the volume of the small test tube, mL.
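Formula (1) itself is not reproduced above; the small sketch below assumes a purely volume-based definition of shrinkage, with the initial sample volume taken as the test-tube volume and the cured volume from the drainage measurement, so it is only one plausible reading of the procedure.

```python
def shrinkage_percent(v_tube_ml, v_cured_ml):
    """Volume shrinkage S in percent, assuming the fresh sample fills the test tube:
    S = (V_initial - V_cured) / V_initial * 100 (an interpretation, not a quote of Formula (1))."""
    return (v_tube_ml - v_cured_ml) / v_tube_ml * 100.0

# Hypothetical example: a 10 mL tube and a cured sample volume of 9.0 mL gives S = 10 %.
print(shrinkage_percent(10.0, 9.0))
```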
Bending Strength
As a material for repairing cracks, the most important mechanical property is the bending strength [35]. After the accelerator with a mass fraction of 0.5% was added, the prepolymer was evenly stirred, poured into a mold with a size of 100 × 15 × 5 mm³, and cured at 60 °C. Then, the strength test was carried out on a CSS-44020 universal testing machine (Changchun, China), and the maximum load at bending failure of the material was recorded. The bending strength of the repair material was calculated according to Formula (2). The samples of each ratio were measured three times, and the average value was calculated.
Here: f_m: the ultimate bending strength of the material, MPa; F_max: the maximum load at bending failure of the material, N; L: the distance between the two loading points (span), m; b: the width of the section of the material specimen, m; h: the height of the section of the material specimen, m.
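Formula (2) is not written out above, but the quantities listed are those of the standard three-point bending relation f_m = 3 F_max L / (2 b h²); the sketch below assumes that relation, which should be checked against the original Formula (2), and the usage values are hypothetical.

```python
def bending_strength_mpa(f_max_n, span_m, width_m, height_m):
    """Ultimate bending strength f_m in MPa, assuming the standard three-point
    bending formula f_m = 3 * F_max * L / (2 * b * h**2)."""
    f_m_pa = 3.0 * f_max_n * span_m / (2.0 * width_m * height_m ** 2)
    return f_m_pa / 1.0e6

# Hypothetical example: 100 N failure load, 80 mm span, 15 mm x 5 mm cross-section -> ~32 MPa.
print(bending_strength_mpa(100.0, 0.080, 0.015, 0.005))
```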
Durability
The durability of the repair material included the ultraviolet aging resistance, chemical erosion resistance, thermal shock aging resistance, and frost resistance. The first study was the ultraviolet aging resistance. Different proportions of bis(2,2,6,6-tetramethyl-4-piperidyl) sebacate (HS-770) light stabilizer were added into the samples of P0 and P3. Then, the samples were uniformly stirred, poured into a 100 × 15 × 5 mm³ mold, and cured at 60 °C. Afterwards, these were placed in an ultraviolet box for 1440 h, then taken out and tested using the CSS-44020 universal testing machine, and the bending strength was calculated according to Formula (2). Next, these were compared with the blank specimens to obtain the retention ratios of the bending strength. Each group of samples was measured three times and the average value was calculated. The position of the ultraviolet lamp and sample specimen in the ultraviolet box is shown in Figure 4.
Good chemical corrosion resistance enables the repair material to bond firmly with old concrete in a humid environment, and also prevents water and various ions from causing further erosion of the repair material [36]. In the experiment, a standard cement mortar with a water-cement ratio of 0.5 and a cement-to-sand ratio of 1:3 was cast in 20 × 20 × 20 mm³ molds. Then, the P0 and P3 repair materials were applied to the six surfaces of the mortar, with a thickness of approximately 1 mm. Afterwards, the samples were placed into water, a NaCl solution with a mass fraction of 5%, and a MgSO4 solution with a mass fraction of 5%. Next, these were taken out and weighed after a certain period of time to calculate the change in mass before and after immersion. The chemical corrosion resistance of the MMA-based repair material was evaluated by comparison with the blank samples. Each group of samples was measured twice, and the average value was calculated.
The samples of P0 and P3 were prepared and poured into a mold of 100 × 15 × 5 mm³. Then, a thermal shock cycle experiment was carried out. The thermal shock cycle experiment designed for the present study was carried out according to the following steps. After heating at 105 °C for 30 min, the repair materials were immediately taken out and placed in a −20 °C freezer for 30 min as one thermal shock cycle [37]. Then, the specimens were tested using the CSS-44020 universal testing machine, and the bending strength was calculated according to Formula (2). Afterwards, these were compared with the blank specimens to obtain the retention ratios of the bending strength after 300 cycles. Each group of samples was measured three times, and the average value was calculated.
Concrete matrices and repair materials are often damaged by freezing and thawing. Hence, resistance to freeze-thaw cycles should be taken into account in the preparation of the repair material [38]. The freeze-thaw cycle experiment designed for the present study was carried out under the following conditions. The freezing-thawing temperature ranged from −20 °C to 20 °C, and the freezing-thawing time was 4 h each time. The specimens were tested using the CSS-44020 universal testing machine, and the bending strength was calculated according to Formula (2), after 50 cycles and 200 cycles, in order to determine the frost resistance. Each group of samples was measured three times, and the average value was calculated.
Viscosity
From the relationship between viscosity and time in Figure 5, the reaction can be roughly divided into three stages: the first stage is before 20 min, the second stage is between 20 min and 35 min, and the third stage is after 35 min. The main processes in the first stage were the decomposition of the initiator into free radicals and the initiation of the MMA monomer by these free radicals. Then, the main chain of the polymer slowly grew. Since the molecular weight was relatively small, the viscosity changed slowly. In the second stage, with the increase in free radicals and chain activation centers, the main chain grew significantly, the molecular weight increased, and the viscosity of the system began to increase. In the third stage, the reaction was violent and gave off a lot of heat, which further promoted the reaction, thereby inducing the viscosity to increase sharply with time. It is noteworthy that the reaction time must be controlled well, in order to prevent explosive polymerization. In addition, it can be observed by comparing the different curves in Figure 5 that the conversion times of P1, P3, and P5 after the addition of PCE were delayed in the three stages. The time point for the transition from the first stage to the second stage was 25 min for P1, 29 min for P3, and 33 min for P5. The time point when the second stage transitioned to the third stage was 40 min for P1, 42 min for P3, and 43 min for P5. It can be observed that the addition of PCE hindered the polymerization reaction of the system, to some extent. Since there are four chlorine-substituted atoms in the PCE molecule, and the chlorine atom is much larger than the hydrogen atom, a relatively obvious steric hindrance effect occurs when PCE is linked to the main chain of MMA. This reduces the curing reaction rate and the reaction exotherm, and makes the curing process more stable, thereby causing the reaction process to become slower [39,40].
Shrinkage
Yang, Z.-W. [32] prepared a repair material by adding PCE to a MMA system, and tested its shrinkage ratio. The experimental results revealed that the shrinkage ratio of the MMA-based repair material decreased, to a certain extent, after the addition of PCE. When the mass of PCE was 10% of the mass of MMA, the shrinkage ratio, which was only 15.25%, was the lowest among all the test results. However, the shrinkage was still relatively high for a repair material, and still needed to be reduced. In addition to changing the structure of the repair material, an often-used method to reduce shrinkage is the addition of inorganic fillers. The present experiment investigated the influence of CaCO3 on the shrinkage of MMA repair materials, and the results are shown in Figure 6. It can be observed from the figure that the shrinkage of the samples with ratios of P0 and P3 had an obvious decreasing trend with the increase in the proportion of CaCO3. The shrinkage ratio of the repair material can be reduced to approximately 10% when CaCO3 with a mass fraction of 30% of the mass of MMA is added. Li, X. [31] added sodium acetate to MMA for shrinkage modification. When the mass fraction of sodium acetate was 3% of the MMA mass, the shrinkage rate was 18%. Compared with that, the effect of shrinkage reduction in the present experimental study was more obvious. However, the addition of CaCO3 would inevitably change the viscosity of the system [41]. The results for the viscosity of the system after the addition of CaCO3 are shown in Table 3. It can be observed from Table 3 that the viscosity of the system increased with the increase in the CaCO3 ratio. The viscosity of the repair material was within the range of 450-500 mPa·s when CaCO3 with a mass fraction of 30% was added. The viscosity increased sharply when the mass fraction of CaCO3 was more than 20%. Hence, an excessive viscosity does not meet the requirements of construction. This requires the consideration of both the shrinkage and viscosity of the repair material.
Bending Strength
Samples with ratios of P0, P1, P2, P3, P4, and P5 were tested for bending strength. The experimental results are shown in Figure 7. It can be observed from Figure 7 that the prepared repair material had a relatively high bending strength, particularly at early ages: at three days, the bending strength reached 19~20 MPa. The bending strength developed slowly at later ages, reaching 27~30 MPa at 28 days. The flexural strength of the PMMA mortar prepared by Mun, K.-J. and Choi, N.-W. was within 25-35 MPa [30], which is similar to the flexural strength of the repair material studied in the present experiment. The addition of PCE had no adverse effect on the bending strength of the repair material; the strength was even enhanced to a certain extent, although the increase was not obvious. It should be noted that the size of the specimens formed in the present study was 100 × 15 × 5 mm3. If the specimen size changes, the size effect must be considered: the bending strength of the specimen may decrease as the section size of the specimen increases.
CaCO3 treated with a coupling agent was added as a filler to the MMA-based repair material. This can significantly reduce the shrinkage of the material; however, it also affects the mechanical properties of the repair material. CaCO3 with a mass fraction of 10%, 20%, 30%, 40%, and 50% of the mass of MMA was added into the samples with the ratios of P0 and P3, respectively, and the bending strength of the samples was tested. The bending strength at 28 days was selected, and the relationship between the bending strength and the proportion of CaCO3 was determined, as shown in Figure 8.
It can be observed from the figure that the addition of CaCO3 affected the bending strength of the repair material. As the proportion of CaCO3 increased, the bending strength of the repair materials obviously decreased; the largest decrease, approximately 25%, occurred when CaCO3 with a mass fraction of 50% was added. This was mainly because the added CaCO3 increased the viscosity of the samples, and a relatively large viscosity is not conducive to the dispersion of CaCO3 in the system, so the CaCO3 tended to agglomerate. As the proportion of CaCO3 increased, the dispersion became worse and there was more agglomeration in the system; as a result, the bending properties of the resin decreased further [42].
Ultraviolet Aging Resistance

Figure 9 presents the bending strength of the samples after irradiation in the ultraviolet box for 1440 h, following the addition of the light stabilizer at different mass fractions of the mass of MMA, and Figure 10 presents the corresponding retention ratios for the bending strength. It can be observed that the retention ratios for the bending strength significantly increased after the addition of the light stabilizer; when the mass fraction of the light stabilizer was 0.4%, the retention ratio of the bending strength was above 90%. The reason is that HS-770 is a hindered amine light stabilizer, which can be partially transformed into nitroxyl radicals under photooxidation conditions. These nitroxyl radicals can capture the active radicals generated in the polymer, thereby inhibiting the photooxidation reaction and playing a stabilizing role [43]. The strength retention rate of the epoxy mortar repair materials studied by Liu, F. can reach approximately 86% after 1000 h of ultraviolet irradiation [44]. The two repair materials are therefore similar in terms of anti-ultraviolet aging performance, and the performance was excellent.

In addition, the addition of PCE had a certain degree of influence on the ultraviolet aging resistance of the repair material. Samples with the ratios of P0 and P3 were prepared, and the HS-770 light stabilizer with a mass fraction of 0.4% was added. The four groups of samples were labeled P0, P0 + HS-770, P3, and P3 + HS-770, and placed into the ultraviolet box. The bending strength was tested at different time points, and the experimental results are shown in Figure 11. It can be observed from the curves in Figure 11 that the bending strength of the repair material significantly decreased with time when the light stabilizer HS-770 was not added; among these, the ratio of P3 decreased more significantly. The main reason is that PCE contains four chlorine atoms, which results in a relatively large steric hindrance in the system and poor stability of the generated polymer when PCE is linked to the main chain of PMMA.

Chemical Erosion Resistance

Table 4 shows the mass changes of mortar blocks coated and not coated with the MMA-based repair material, before and after soaking in different solutions. It can be observed from Table 4 that the impermeability of the ordinary mortar not coated with the MMA-based repair material was very poor: the water content reached as high as 12.79% after soaking in clean water for 90 days. The impermeability improved greatly after the application of the MMA-based repair material, and the water content was only approximately 3% after 90 days. The MMA-based repair material forms a very thin resin film after curing on the surface of the mortar blocks, and this film can effectively prevent the flow of water and various ions, thereby achieving a sealing effect.
The samples with two ratios of P0 and P3 were poured into a mold with a size of 100 × 15 × 5 mm3. After five days, the samples were immersed in clean water, a NaCl solution with a mass fraction of 5%, and a MgSO4 solution with a mass fraction of 5%, respectively. The bending strength after 90 days was tested and compared with the blank samples, and the retention ratios of the bending strength were calculated. Each group of samples was measured three times, and the average value was calculated. The results are shown in Table 5. It can be observed from the table that the samples with the ratios of P0 and P3 had a strong resistance to chemical erosion after soaking for 90 days: the retention ratios of the bending strength reached over 95% in both the NaCl solution and the MgSO4 solution. The reason is that the MMA-based repair material has a compact structure after curing, which can effectively prevent the penetration of liquid and ions. In addition, the MMA repair material is an organic material, which has little interaction with chloride ions, sulfate ions, and other ions. Hence, it had a high retention ratio for bending strength in the salt-solution environment. The water glass suspension double-liquid grouting material studied by Wang, H.-X. was soaked in a Na2SO4 solution for 180 days; the mass loss rate was 5%, while the strength loss rate was approximately 25% [45]. Comparison of these data shows that the MMA repair material studied in the present experiment had good chemical erosion resistance.

Thermal Shock Aging Resistance

Table 6 shows the retention ratios for the bending strength of the MMA-based repair material after 300 thermal shock cycles. It can be observed from the data in Table 6 that the bending strength of the two ratios, P0 and P3, of the repair materials significantly decreased: they fell from 28.38 MPa and 29.15 MPa to 18.55 MPa and 17.85 MPa, respectively, and the retention ratios for the bending strength were only 65.36% and 61.23%, respectively. The thermal shock resistance of the repair material with PCE was worse. Han, Y.-F. [14] conducted a thermal shock resistance cycle test on a modified epoxy resin, and the strength retention rate after 28 days of testing was 67.52%. Comparing these two materials shows that the thermal shock aging resistance of both was poor. Under high-temperature conditions, the thermal movement of the molecular chains of the MMA-based repair material is violent. When the temperature suddenly drops, part of the molecular chains cannot rearrange in time, which results in fracture, and the mechanical properties of the repair material decline under repeated cycles. The stability of the polymer with the ratio of P3 decreased due to the presence of PCE in the system, and part of the PCE broke from the main chain at high temperature, making the thermal shock cycle performance worse. In the present experiment, polypropylene fiber (PPF) with different mass fractions was added into the samples in order to improve the thermal shock performance.
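For clarity, the retention ratios used in this and the following durability tests are simply the post-test bending strength divided by the 28-day (blank) strength. A minimal Python check against the Table 6 values quoted above:

```python
# Sanity check of the Table 6 retention ratios quoted above: retention
# is (strength after cycling) / (28-day strength) * 100%.
strength_28d = {"P0": 28.38, "P3": 29.15}      # MPa
strength_after = {"P0": 18.55, "P3": 17.85}    # MPa, after 300 cycles

for mix in ("P0", "P3"):
    retention = 100.0 * strength_after[mix] / strength_28d[mix]
    print(f"{mix}: retention ratio = {retention:.2f}%")
# -> P0: 65.36%, P3: 61.23%, matching the values reported in the text.
```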
The bending strength after 300 thermal shocks is shown in Figure 12. The retention ratios for the bending strength were obtained by comparison with the blank sample, and the results are shown in Figure 13. It can be seen from the figures that the retention ratios of the bending strength of the repaired materials after the thermal shock cycle were significantly improved by the addition of PPF. The retention ratios of the ratios of P0 and P3 increased from 65.36% and 61.23% to 87.01% and 86.74%, respectively, after adding PPF with a mass fraction of 1.5%. The bending strength reached 24.69 MPa and 25.28 MPa, which is higher than the bending strength of ordinary concrete. It is generally believed that the fiber is completely surrounded by the matrix after being added into the material, and that the stress is transmitted to the fiber through the interface. The strength of the reinforcing fiber is much higher than that of the covering layer, thereby improving the crack resistance and strength of the material [46]. Table 7 shows the experimental results after the freeze-thaw cycle test. It can be observed from Table 7 that the repair materials with the ratios of P0 and P3 had good freeze-thaw cycle resistance: the retention ratios for the bending strength reached more than 98% after 50 freeze-thaw cycles and more than 95% after 200 freeze-thaw cycles. The reason is that MMA is relatively dense after complete curing, which prevents water ingress to a certain extent and thus prevents frost damage under low-temperature conditions. In addition, the properties of the polymer repair material are relatively stable at low temperatures. Han, Y.-F. [14] tested the frost resistance of a modified epoxy resin, and the strength retention rate was 60.68% after testing for 15 days. By comparison, the frost resistance of the MMA repair material was excellent.
Conclusions

1. The addition of PCE can hinder the polymerization reaction of the system to a certain extent and slow down the reaction rate.
2. The addition of CaCO3 can effectively reduce the shrinkage of the MMA-based repair material, although it also increases the viscosity of the repair material. When CaCO3 with a mass fraction of 30% is added, the viscosity of the material is approximately 500 mPa·s and the shrinkage ratio can be reduced to approximately 10%.
3. The MMA-based and PCE-modified MMA-based repair materials both have good mechanical properties, with bending strengths at 28 days of 28.38 MPa and 29.15 MPa, respectively. The bending strength decreased after the addition of CaCO3, with the largest decrease of approximately 25% occurring when CaCO3 with a mass fraction of 50% was added.
4. The present study revealed that the durability of the MMA-based and PCE-modified MMA-based repair materials is good. After the addition of the HS-770 light stabilizer with a mass fraction of 0.4%, the retention ratios for the bending strength of the materials with the ratios of P0 and P3 after 1440 h of ultraviolet irradiation reached 91.11% and 89.94%, respectively. The retention ratios for the bending strength of P0 and P3 reached above 95% after soaking in the 5% NaCl solution and the 5% MgSO4 solution for 90 days. After the addition of polypropylene fiber with a mass fraction of 1.5%, the retention ratios for the bending strength of P0 and P3 after 300 thermal shock cycles reached 87.01% and 86.74%, respectively. Furthermore, the retention ratios for the bending strength of P0 and P3 reached 97.29% and 95.13%, respectively, after 200 freeze-thaw cycles.

| 9,444.4 | 2021-02-01T00:00:00.000 | [ "Materials Science" ] |
"Materials Science"
] |
Graphene Oxide for Nonlinear Integrated Photonics
Integrated photonic devices operating via optical nonlinearities offer a powerful solution for all‐optical information processing, yielding processing speeds that are well beyond that of electronic processing as well as providing the added benefits of compact footprint, high stability, high scalability, and small power consumption. The increasing demand for high‐performance nonlinear integrated photonic devices has facilitated the hybrid integration of novel materials to address the limitations of existing integrated photonic platforms. Recently, graphene oxide (GO), with its large optical nonlinearity, high flexibility in altering its properties, and facile fabrication processes, has attracted significant attention, enabling many hybrid nonlinear integrated photonic devices with improved performance and novel capabilities. This paper reviews the applications of GO to nonlinear integrated photonics. First, an overview of GO's optical properties and the fabrication technologies needed for its on‐chip integration is provided. Next, the state‐of‐the‐art GO nonlinear integrated photonic devices are reviewed, followed by comparisons of the nonlinear optical performance of different integrated platforms incorporating GO as well as hybrid integrated devices including different kinds of 2D materials. Finally, the current challenges and future opportunities in this field are discussed.
Introduction
By avoiding inefficient optical-electrical-optical conversion, all-optical signal generation, amplification, and processing based on optical nonlinearities offer processing speeds that far exceed those of electrical devices, [1][2][3] underpinning a variety of applications in many fields such as optical communications, [4][5][6][7] photonic computing, [8,9] optical manipulation, [10,11] specialized optical sources, [12,13] microscopy, [14,15] metrology, [16,17] spectroscopy, [18,19] optical cloaking, [20,21] and quantum information processing. [22,23] Compared to bulky discrete off-chip devices, photonic integrated circuits fabricated by well-established complementary metal-oxide-semiconductor (CMOS) technologies provide an attractive solution for implementing compact nonlinear optical devices on a chip scale, bringing the benefits of high stability and scalability, low power consumption, and large-scale manufacturing. [24][25][26] Although silicon-on-insulator (SOI) has been the dominant platform for photonic integrated circuits, its indirect bandgap is a significant handicap for optical sources, and its centrosymmetric crystal structure poses an intrinsic limitation for second-order nonlinear optical applications. Furthermore, its strong two-photon absorption (TPA) at near-infrared wavelengths limits its third-order nonlinear optical response in the telecom band. [2,27] Other CMOS-compatible platforms such as silicon nitride [5,28,29] and doped silica [30,31] have much lower TPA, although they still face the limitation of having a much smaller third-order optical nonlinearity than silicon. To address these issues, the on-chip integration of novel materials has opened up promising avenues to overcome the limitations of these existing integrated platforms. Many hybrid nonlinear integrated photonic devices incorporating polymers, [32,33] carbon nanotubes, [34,35] and 2D materials [36][37][38] have been reported, showing significantly improved performance and offering new capabilities beyond those of conventional integrated photonic devices.
2D materials, such as graphene, black phosphorus (BP), transition metal dichalcogenides (TMDCs), hexagonal boron nitride (hBN), and graphene oxide (GO), have motivated a huge upsurge in activity since the discovery of graphene in 2004. [39] With atomically thin and layered structures, they exhibit many remarkable optical properties that are intrinsically different from those of conventional bulk materials. [40][41][42][43][44][45][46] Recently, there has been increasing interest in the nonlinear optical properties of 2D materials, which are not only fascinating for laboratory research but also intriguing for practical and industrial applications. [47][48][49][50][51][52][53][54] Among the different 2D materials, GO has shown many advantages for implementing hybrid integrated photonic devices with superior nonlinear optical performance. [41,55-61] It has been reported that GO has a third-order optical nonlinearity (n2) that is over 4 orders of magnitude higher than that of silicon [62,63] as well as a linear absorption that is over 2 orders of magnitude lower than that of graphene in the infrared region. [56,64] The former is critical for improving the efficiency of nonlinear wavelength conversion, whereas the latter allows for a low film loss, which is beneficial for enhancing the nonlinear optical response that scales nonlinearly with light power. In addition, GO has a heterogeneous atomic structure that exhibits noncentrosymmetry, yielding a large second-order optical nonlinearity that is absent in pristine graphene, which has a centrosymmetric structure. The bandgap and defects in GO can also be engineered to facilitate diverse linear and nonlinear optical processes. These material properties of GO, together with its facile synthesis processes and high compatibility with integrated platforms, [64,65] have enabled a series of high-performance nonlinear integrated photonic devices. Here, we provide a systematic review of these devices, highlighting their capabilities in a range of nonlinear optical processes (Figure 1a) as well as a comparison of different integrated platforms. Figure 1b summarizes the typical applications of the nonlinear optical processes in Figure 1a, which cover a broad scope including all-optical wavelength conversion, [33,66] all-optical switching and modulation, [10,11] all-optical sampling and characterization, [67,68] laser mode locking, [69,70] Kerr frequency combs, [71,72] broadband optical sources, [12,13] nonlinear optical imaging, [14,15] optical parametric amplifiers, [4,73] and quantum optics. [22,23] This review is organized as follows. In Section 2, the optical properties of GO, including both the linear and nonlinear properties, are introduced, particularly in the context of integrated photonic devices. Next, the fabrication technologies for integrating GO films on chips are summarized in Section 3, classified into GO synthesis, film coating on chips, and device patterning. In Section 4, we review the state-of-the-art nonlinear integrated photonic devices incorporating GO. In Section 5, a detailed comparison of the nonlinear optical performance of different integrated platforms incorporating GO is presented and discussed. A comparison of nonlinear integrated photonic devices incorporating different kinds of 2D materials is provided in Section 6. The current challenges and future perspectives are discussed in Section 7.
Finally, the conclusions are provided in Section 8.
Optical Properties of GO
GO, which contains various oxygen-containing functional groups (OCFGs) such as epoxide, hydroxyl, and carboxyl, all attached to a graphene-like carbon network, is one of the most common derivatives of graphene. [74][75][76][77] Its heterogeneous atomic structure, including both sp2 carbon sites with π-states and sp3-bonded carbons with σ-states, gives GO a series of distinctive material properties, particularly in its 2D form. In this section, we briefly introduce GO's optical properties, including both the linear and nonlinear properties, focusing on the near-infrared telecom band (around 1550 nm). Table 1 provides a comparison of the basic optical properties of GO with typical 2D materials such as graphene, TMDCs, and BP, as well as bulk materials such as silicon (Si), silica (SiO2), silicon nitride (Si3N4), and high-index doped silica glass (Hydex) used for implementing integrated photonic devices. In the following, we provide a detailed introduction to GO's optical properties based on Table 1.
Linear Optical Properties
In contrast to graphene, which has a bandgap of zero, [40] GO has a typical bandgap between 2.1 and 3.6 eV, [74,78] which yields low linear light absorption in the telecom band. Although in Table 1 the optical extinction coefficient k of GO (0.005-0.01) is not as low as for Si, Si3N4, and SiO2, it is nonetheless still much lower than for the other 2D materials, particularly graphene, whose k is over 100 times higher than that of GO. This property of GO is highly attractive for nonlinear optical applications such as self-phase modulation (SPM) and four-wave mixing (FWM) that require high power to drive the nonlinear processes. On the other hand, GO has a refractive index n of around 2 across a broad optical band from the near-infrared to mid-infrared regions. [57,62,77,79] This results in a low material dispersion, which is critical for implementing devices with broad operation bandwidths, e.g., broadband FWM or SPM devices based on phase matching. [2,4] The bandgap of GO can be engineered by using different reduction methods to change the ratio of the sp2 and sp3 fractions, [104,105] thus yielding a variation in its material properties. Figure 2a compares the atomic structures of graphene, GO, reduced GO (rGO), and totally reduced GO (trGO). As can be seen, with the continued removal of the OCFGs, GO gradually reduces and finally converts to trGO. Compared with graphene, trGO has a similar carbon network but with more defects. The differences in the properties of trGO and graphene mainly come from these defects, which can form not only during the reduction process but also during the oxidation process associated with the conversion from graphene to GO. [106,107] Figure 2b compares the measured n and k of GO, rGO, trGO, and graphene. [79,108] As the degree of reduction increases, both n and k of rGO increase and trend towards those of graphene, with the n and k of trGO being extremely close to those of graphene. In contrast to bulk materials that have limited tuning ranges for n and k (e.g., typically on the order of 10⁻⁴-10⁻³ for the n of Si [24]), GO has a very wide tuning range for both n (from ≈2 to ≈2.7) and k (from <0.01 to ≈2), which underpins many photonic devices with excellent phase and amplitude tuning capabilities. [79,108] Similar to graphene and TMDCs, [109][110][111] GO films exhibit strong anisotropy in their optical absorption over a broad band from the visible to the infrared regions. [64,112] This property is useful for implementing polarization-selective devices with wide operation bandwidths.
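To relate these k values to absorption in physical units, the extinction coefficient converts to a material absorption coefficient through the standard relation α = 4πk/λ. The sketch below is illustrative only: the GO values are those quoted above, while the graphene value is an assumed round number consistent with the statement that its k is over 100 times higher.

```python
import math

WAVELENGTH = 1550e-9  # m, telecom band

def absorption_coefficient(k: float, wavelength: float) -> float:
    """Material absorption coefficient alpha = 4*pi*k/lambda, in 1/m."""
    return 4 * math.pi * k / wavelength

# GO k values quoted in the text; the graphene k is an assumed illustrative value.
for name, k in [("GO (k = 0.005)", 0.005), ("GO (k = 0.01)", 0.01),
                ("graphene (assumed k = 1.3)", 1.3)]:
    alpha = absorption_coefficient(k, WAVELENGTH)
    # 10*log10(e) ~ 4.343 converts 1/m to dB/m
    print(f"{name}: alpha = {alpha:.2e} 1/m ({alpha * 4.343:.2e} dB/m)")
```

Note that this is the absorption of the film material itself; the propagation loss of a hybrid waveguide is far lower because only a small fraction of the guided mode overlaps with the film, as discussed below.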
1.60 μm × 0.66 μm, and 2.00 μm × 1.50 μm, respectively. Figure 3a shows schematics of the hybrid waveguides. For comparison, we choose integrated waveguides with planarized top surfaces, with each waveguide coated with 1 layer of GO film (≈2 nm in thickness [56,57]). Unless otherwise specified, the bare integrated waveguides in the following discussion are the same as those in Figure 3a. Figure 3b shows the transverse electric (TE) mode profiles for the hybrid waveguides, which were simulated via commercial mode-solving software using the measured n of 2D layered GO films at 1550 nm in refs. [79,108]. Due to the significant anisotropy of 2D materials, the in-plane light-matter interaction is normally much stronger than the out-of-plane interaction. [64,110] Therefore, in nonlinear integrated photonic devices incorporating 2D materials, the TE polarization is usually chosen to support the in-plane interaction between the 2D films and the evanescent field leaking from the waveguides. Figure 3c compares the refractive indices of GO, Si, Si3N4, Hydex, and SiO2 over a wavelength range of 1500-1600 nm measured by spectral ellipsometry. GO has a refractive index that is higher than those of Hydex and SiO2, but lower than those of Si3N4 and Si. Si has the highest refractive index among the three waveguide materials, which results in the tightest light confinement in the waveguide and hence the smallest waveguide geometry. It should also be noted that the refractive index n of GO in Figure 3c only shows results for a film with 5 GO layers. For practical GO films, both the linear and nonlinear optical properties change slightly with layer number (i.e., film thickness), mainly due to the scattering loss stemming from film unevenness and imperfect contact between adjacent layers, as well as the defects, impurities, and thermal dissipation in the multi-layered film structure. [41] The trends in the properties of layered GO films as they evolve from 2D monolayers to quasi-bulk-like behavior have been observed in refs. [58] and [64]. In the following theoretical analysis, we use the experimentally measured material property parameters and account for the dependence of the GO film's properties on the layer number. Figure 3d shows the dispersion of the bare Si, Si3N4, and Hydex waveguides without GO films. The dispersion of the GO-coated Si waveguide is also shown for comparison. All the dispersion curves were simulated using the refractive indices in Figure 3c. The bare Si waveguide has normal dispersion, whereas the bare Si3N4 and Hydex waveguides have slight anomalous dispersion. After coating with GO films, the GO-Si hybrid waveguide has a slightly reduced normal dispersion, while the GO-Si3N4 and GO-Hydex waveguides exhibit slightly enhanced anomalous dispersion (not shown in Figure 3d, since the curves for these waveguides almost overlap with those of the uncoated waveguides), indicating that incorporating GO films could benefit phase matching for FWM or SPM in these waveguides.
Figures 3e,f show the ratio of power in the GO film relative to the power in the waveguide core for different numbers of GO layers N, calculated from the simulated TE mode profiles of the hybrid waveguides. The thickness of the GO film was assumed to be proportional to N in the simulation. For hybrid waveguides with the same GO layer number, the GO-Si waveguide has the strongest evanescent field leakage and mode overlap with the GO film, mainly as a result of its smaller waveguide geometry. All the hybrid waveguides show an increased mode overlap with the GO films as N increases, reflecting the fact that increasing the GO film thickness enhances the interaction between the light and the GO.
In Figure 4a, we compare the linear propagation loss of the hybrid waveguides versus GO layer number N, which was calculated by commercial mode solving software using the measured k of 2D layered GO films at 1550 nm in refs. [79,108]. For practical GO films, the value of k slightly increases with N, which mainly results from the accumulated film imperfections induced by film unevenness, stacking of multiple layers, and localized defects. [57,64] As can be seen, the GO-Si waveguide has a much higher propagation loss than comparable GO-Si 3 N 4 and GO-Hydex waveguides, and all of these waveguides show an increased propagation loss with increasing N. This is similar to the results shown in Figure 3e, indicating that an enhanced GO mode overlap results in increased linear propagation loss. Mode overlap plays an important role in balancing the trade-off between enhancing the third-order optical nonlinearity while minimizing linear loss to achieve the optimized performance for the GO hybrid waveguides, which has been discussed in detail in refs. [113,114].
The linear propagation loss of practical GO hybrid waveguides exposed to air can change with input light power, especially at high average powers. [55,58] Such power-dependent linear loss (PDLL) results from power-sensitive photo-thermal changes in the GO films, including a range of effects such as photo-thermal reduction, thermal dissipation, and self-heating in the GO layers. [55,58,115] The photo-thermal changes arising from these sources show some interesting features. First, within a certain power range where the light power is not high enough to induce permanent changes in the films, the changes recover when the light power is turned off. Second, their time responses (typically on the order of 10⁻³ s [55]) are much slower than those of the ultrafast third-order nonlinear optical processes (typically on the order of 10⁻¹⁵ s [11,38]). Finally, these changes are sensitive to the average light power in the GO films, and so are easily triggered by continuous-wave (CW) light with high average power. In contrast, for optical pulses with a high peak power but a low average power, the PDLL induced by these changes is not obvious. [55,58] Figures 4b-d compare the excess linear propagation loss induced by the PDLL (ΔPL_PDLL, after excluding the corresponding linear propagation loss in Figure 4a) versus the average power of input CW light for the hybrid GO-Si, GO-Si3N4, and GO-Hydex waveguides, respectively. ΔPL_PDLL increases with both the GO film thickness and the average power for all the hybrid waveguides, reflecting the fact that photo-thermal changes are more significant in thicker GO films, particularly at higher average powers. The GO-Si waveguide shows a much higher ΔPL_PDLL than the GO-Si3N4 and GO-Hydex waveguides with the same GO layer number, a result that also arises from its stronger GO mode overlap, which allows for a higher power in the GO film.
Nonlinear Optical Properties
Upon interaction with an external optical electric field of high intensity, on the order of interatomic fields (i.e., 10⁵-10⁸ V m⁻¹ [49]), materials can exhibit nonlinear optical responses accompanied by novel phenomena such as the generation of new frequencies, or their linear optical parameters such as n and k becoming field-dependent. [116] In the past decade, the superior nonlinear optical properties of 2D materials have been widely investigated and recognized. [48,49,52,117] For GO, its heterogeneous structure and tunable bandgap enable distinctive nonlinear optical properties for a diverse range of nonlinear optical processes. [55,56,58,64] Generally, the nonlinear response of a material excited by an external optical field E(t) can be expressed as (in scalar form for brevity, and in the dipole approximation) [2,48]

P(t) = ε₀[χ⁽¹⁾E(t) + χ⁽²⁾E²(t) + χ⁽³⁾E³(t) + …]    (1)

where P(t) is the light-induced polarization, ε₀ is the vacuum permittivity, and χ⁽ⁱ⁾ (i = 1, 2, 3, …) are the i-th-order optical susceptibilities, which generally are tensors of rank (i + 1).
In Equation (1), χ⁽¹⁾ describes the linear optical properties such as n and k, while the material's nonlinear optical properties, including the second-order, third-order, and higher-order nonlinear responses, are described by χ⁽²⁾, χ⁽³⁾, and χ⁽ⁿ⁾ (n ≥ 4), respectively. Since the value of χ⁽ⁱ⁾ normally decreases rapidly with i, the efficiency of χ⁽ⁿ⁾ (n ≥ 4) processes is much lower than that of the χ⁽²⁾ and χ⁽³⁾ processes, which dominate applications based on materials' optical nonlinearities. Note that the susceptibilities in Equation (1) are complex, with the real and imaginary parts corresponding to changes in the refractive index and the optical absorption, respectively.
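For orientation, the real and imaginary parts of χ⁽³⁾ map onto the measurable Kerr coefficient n₂ and nonlinear absorption coefficient β through the standard textbook scalar relations below; these expressions are supplied here for context and are not quoted from the references cited in this review:

```latex
n_2 = \frac{3\,\mathrm{Re}\left(\chi^{(3)}\right)}{4\varepsilon_0 c\, n_0^{2}},
\qquad
\beta = \frac{3\omega\,\mathrm{Im}\left(\chi^{(3)}\right)}{2\varepsilon_0 c^{2} n_0^{2}},
```

where n₀ is the linear refractive index, c the speed of light in vacuum, and ω the optical angular frequency. These relations are why a large χ⁽³⁾ translates directly into the large n₂ values compared in Table 1.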
In this paper, we focus on GO's third-order optical nonlinearity, highlighting the on-chip integration of GO films for both enhanced Re(χ⁽³⁾) and Im(χ⁽³⁾) processes. For the second-order optical nonlinearity, we note that large χ⁽²⁾ values of GO, arising from its noncentrosymmetric atomic structure, have been reported recently, [118,119] but their application to chip-scale devices is still in its infancy. Therefore, we discuss the future perspectives for this in Section 7.
The Re(χ⁽³⁾) processes (also termed parametric processes [2,48]), represented by four-wave mixing (FWM), self-/cross-phase modulation (SPM/XPM), and third-harmonic generation (THG), play an integral role in all-optical signal generation and processing with an ultrafast time response on the order of femtoseconds. [6,120,121] In Table 1, the Kerr coefficients (n2) of the relevant materials are also compared. The absolute value of n2 for GO is about 10 times lower than that of graphene but still much higher than those of MoS2, WSe2, and BP. On the other hand, the n2 of GO is about 4-5 orders of magnitude higher than those of Si, Si3N4, and Hydex, which forms the motivation for the on-chip integration of GO to implement hybrid devices for third-order nonlinear optical applications. The performance of hybrid nonlinear optical devices is a combined result of several factors, including not only the materials' optical nonlinearity but also their loss, dispersion, and mode overlap. A detailed comparison of the nonlinear optical performance of the bare and GO-coated integrated waveguides is provided in Section 5.
For many nonlinear optical processes, the terms arising from Im(χ⁽³⁾) involve nonlinear optical absorption such as two-photon absorption (TPA), saturable absorption (SA), or multi-photon absorption (MPA). [2,48] The relatively large bandgap of GO results in low TPA in the telecom band, which is helpful for improving the efficiency of the Re(χ⁽³⁾) processes. [2,3] In contrast to the TPA process, where the absorption increases with light intensity, SA exhibits the opposite trend, owing to the saturation of excited electrons filling the conduction band and hence preventing further transitions due to Pauli blocking. [53,122] In Table 1, the negative nonlinear absorption coefficient β for GO is induced by SA, which originates from the ground-state bleaching of the sp2 domain. [63,123,124] The SA in GO is useful for applications such as mode-locked fiber lasers [125][126][127] and all-optical modulators. [11,128] The bleaching of light absorption at high intensities is also beneficial for boosting processes arising from Re(χ⁽³⁾). Compared to the photo-thermal changes mentioned in Section 2.1, SA is an ultrafast third-order nonlinear optical process determined by the peak input light power, and so it is more easily triggered by optical pulses with high peak powers. In contrast, the SA-induced loss change is not as observable for CW light with relatively low peak powers. In a passively mode-locked fiber laser, the saturable absorber in the fiber loop attenuates the low-intensity light but transmits the high-intensity light when the cavity oscillates, which allows for suppression of weaker pulses and of the continuous background light, together with selective amplification of the high-intensity spikes. GO and rGO, featuring broadband absorption and ultrafast recovery times, have been used as high-performance saturable absorbers in mode-locked fiber lasers. [127,129,130] As mentioned in Section 2.1, the bandgap of GO can be changed by using different reduction methods. [104,105] By increasing the degree of reduction, a switch in sign for both n2 and β of GO films has been observed during the transition from GO to trGO. [62,63] The large dynamic tuning ranges for n2 and β provide high flexibility in tailoring the performance of nonlinear integrated photonic devices incorporating GO.
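To make the SA trend concrete, the sketch below implements the generic two-level saturable absorber model, α(I) = α_ns + α₀/(1 + I/I_sat), commonly used for 2D-material absorbers. This is a textbook model rather than the specific SA theory of refs. [70,131], and all parameter values are assumptions chosen only to show the bleaching behavior.

```python
# Generic two-level saturable absorption model (illustrative sketch).
# All parameter values below are assumptions, not measured GO data.
ALPHA_SAT = 100.0   # saturable loss component, 1/m
ALPHA_NS = 20.0     # non-saturable loss component, 1/m
I_SAT = 1e12        # saturation intensity, W/m^2

def absorption(intensity: float) -> float:
    """Intensity-dependent absorption: the saturable part bleaches as I grows."""
    return ALPHA_NS + ALPHA_SAT / (1.0 + intensity / I_SAT)

for intensity in (1e9, 1e11, 1e12, 1e13):
    print(f"I = {intensity:.0e} W/m^2 -> alpha = {absorption(intensity):6.1f} 1/m")
# As I -> infinity, alpha -> ALPHA_NS: high-intensity light is transmitted
# while low-intensity light is attenuated, the behavior exploited for
# pulse selection in passively mode-locked fiber lasers.
```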
Figures 5a-c compare the excess propagation loss induced by GO's SA (ΔPL_SA, after excluding the corresponding linear propagation loss in Figure 4a) versus the peak input power of the optical pulses (P_pk) for the hybrid GO-Si, GO-Si3N4, and GO-Hydex waveguides, respectively. These results were calculated based on the SA theory in refs. [70,131] using the measured SA parameters for practical GO films in ref. [56]. For all the hybrid waveguides, ΔPL_SA becomes more significant with increasing layer number and input peak power, reflecting more significant SA in thicker GO films and at higher peak powers. Similar to the trend seen in Figure 4, the GO-Si waveguides, with their stronger mode overlap, show higher ΔPL_SA than the GO-Si3N4 and GO-Hydex waveguides for the same number of GO layers.
Figures 6a-c compare the overall excess insertion loss induced by SA (ΔSA, after excluding the corresponding linear insertion loss) as functions of P_pk and waveguide length L for the uniformly coated GO-Si, GO-Si3N4, and GO-Hydex waveguides, respectively. Each figure provides the results for five different GO layer numbers, calculated from the corresponding results in Figure 5. To highlight the differences, different ranges of waveguide length were chosen in Figures 6a-c. ΔSA increases with both GO layer number and input peak power, consistent with Figure 5. In addition, ΔSA also increases with waveguide length, reflecting a more significant SA-induced insertion loss difference for longer waveguides.
On-Chip Integration of GO Films
The distinctive material properties of GO have motivated its on-chip integration for implementing functional hybrid integrated devices. [64,132-134] The facile solution-based synthesis process of GO and its high compatibility with integrated device fabrication offer competitive advantages for industrial manufacturing beyond the laboratory, which has thus far been a challenge for the majority of 2D materials. In this section, we review the fabrication techniques for integrating GO films on chips, which are divided into GO synthesis, film coating on chips, and device patterning.
GO Synthesis
Material synthesis is the first step before integrating GO films onto chips. In contrast to graphene, which has very low solubility in water, GO can be dispersed in aqueous and polar solvents, thus allowing for solution-based material synthesis. The Brodie method [135] and the Hummers method [136] are the two basic GO synthesis approaches, both of which have long histories and have been modified from the initially proposed procedures. [65,137] Figures 7a,b show schematic illustrations of these two methods. In the Brodie method, graphite is treated with fuming nitric acid (HNO3) and potassium chlorate (KClO3) in order to attach the OCFGs (Figure 7a), whereas in the Hummers method, the oxidation of graphite is achieved via treatment with potassium permanganate (KMnO4) and sulfuric acid (H2SO4) (Figure 7b). Compared to the Brodie method, the Hummers method is more facile and shows better compatibility with CMOS fabrication technologies. Figures 7c,d show another two GO synthesis approaches that are well known in the GO community: the Staudenmaier method and the Hofmann method. Both of these are modifications of the Brodie method, with slight changes in the procedure intended to produce highly oxidized GO. [138,139] The former uses a mixture of concentrated fuming HNO3 and H2SO4 followed by the addition of KClO3, whereas the latter uses concentrated HNO3 in combination with concentrated H2SO4 and KClO3. Some modified Hummers methods have also been proposed, [65,140] in which the amounts of KMnO4 and H2SO4 were engineered to improve the oxidation efficiency and hence the oxidation degree.
The above methods can produce a large volume of exfoliated GO sheets with a high concentration of OCFGs, which are easily disintegrated into smaller flakes. The lateral size (typically varying from several tens of nanometers to several tens of microns) and thickness (typically on the order of nanometers) of the GO flakes can be controlled by varying the mixing or sonication parameters. GO films consisting of large-size (>10 μm) flakes show better performance in terms of electrical/thermal conductivity as well as mechanical/sieving capability, whereas GO films made from small-size flakes are advantageous in achieving conformal coating on substrates with complex structures, particularly for integrated devices having feature sizes on the micron or nanometer scale.
Film Coating on Chips
The second step is to coat GO films onto integrated chips. In contrast to the sophisticated film transfer processes used for the on-chip integration of graphene and TMDCs, the coating of GO films can be realized using solution-based methods without any transfer processes. Figure 8 shows schematic illustrations of two typical GO film coating strategies: solution dropping and self-assembly. Both are compatible with the Brodie and the Hummers methods and are suited to large-scale fabrication, but they also show differences, particularly with respect to film uniformity and thickness.
Solution dropping methods, mainly including drop casting [142] and spin or spray coating, [143,144] are simple and rapid means of directly coating GO films over large areas. The main steps in these methods include solution preparation, solution dropping, and drying (Figure 8a). The relatively low film uniformity and large film thicknesses are the main limitations of these methods, which make it challenging to achieve conformal film coating of integrated waveguides.
The typical film unevenness that they produce is >10 nm, and the typical film thicknesses are >100 nm. [112,145] In contrast to solution dropping methods, self-assembly methods can achieve both high film uniformity (<1 nm [79]) and low film thickness (down to the thickness of 2D monolayers [64]). Figure 8b shows the process flow for self-assembly. First, a GO solution composed of negatively charged 2D GO nanoflakes synthesized via the Brodie or the Hummers method is prepared. Second, the target integrated chip, with a negatively charged surface, is immersed in a solution of positively charged aqueous polymers to obtain a polymer-coated integrated chip with a positively charged surface. Finally, the polymer-coated integrated chip is immersed in the prepared GO solution, where a GO monolayer forms on the top surface through electrostatic forces. By repeating the above steps, layer-by-layer coating of GO films on integrated chips can be realized, with high scalability and accurate control of the layer number and hence the film thickness. The strong electrostatic forces also enable conformal film coating of complex structures (e.g., wire waveguides and gratings) with high film uniformity. In addition, unlike film transfer approaches, where the coating areas are limited by the lateral size of the exfoliated 2D films, [147,148] the film coating area for the self-assembly methods is limited only by the size of the substrate and the solution container, which makes them excel at large-area film coating. By using plasma oxidation, the removal of coated GO films from integrated devices can be easily achieved, allowing for the recycling of the integrated chips and the recoating of new GO films.
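Because each self-assembly cycle deposits roughly one monolayer (≈2 nm per GO layer, the film thickness quoted earlier in this review), the film thickness is effectively set by counting coating cycles. A minimal sketch of this thickness budgeting, treating 2 nm per cycle as a nominal assumed value:

```python
import math

# Layer-by-layer self-assembly gives near-digital thickness control: each
# coating cycle deposits roughly one GO monolayer. The ~2 nm/layer figure
# is the nominal value quoted in this review; practical layer thickness
# varies with flake size and coating conditions.
T_MONOLAYER_NM = 2.0

def cycles_for_thickness(target_nm: float) -> int:
    """Number of self-assembly cycles needed to reach a target film thickness."""
    return math.ceil(target_nm / T_MONOLAYER_NM)

for target in (2, 10, 50, 100):
    print(f"{target:>3} nm film -> {cycles_for_thickness(target)} cycles")
```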
Device Patterning
Device patterning is critical for engineering functionalities of advanced integrated devices. In Figure 9, we summarize the typical methods used to pattern GO films, including inkjet printing, laser writing, lithography followed by lift-off, pre-patterning, and nanoimprinting. All of these methods have strong potential for Laser Photonics Rev. 2023, 17, 2200512 Figure 9. Schematic illustration of typical methods for patterning GO films: a) inkjet printing, b) laser writing, c) lithography & lift-off, d) pre-patterning, e) nanoimprinting, and f) scan-probe lithography. In a-f), the figure in the right side of each row shows an image for as-fabricated samples. The sample images in a-f) are reproduced with permission. [164] Copyright 2011, Springer Nature, reproduced with permission. [132] Copyright 2019, Springer Nature, reproduced with permission. [58] Copyright 2020, Wiley-VCH, reproduced with permission. [152] Copyright 2020, Springer Nature, reproduced with permission. [165] Copyright 2011 American Vacuum Society, and reproduced with permission. [154] Copyright 2020, Springer Nature, respectively.
industrial manufacturing, and each of them has advantages for specific applications. In Table 2, we compare the different GO film patterning methods.
Inkjet printing is a simple and rapid GO film patterning method that can simultaneously achieve film coating and patterning. It is compatible with solution dropping coating methods, and is usually employed to fabricate patterns over large areas, with relatively low resolution on the order of microns. [149,150] Figure 9a shows the process flow, where specialized ink solutions need to be prepared before printing. The printing processes involve forming a jet of single droplets, drop casting, and droplet drying, similar to solution dropping methods, with the pattern shape and position normally controlled via programs.
Laser writing is a one-step, noncontact, and mask-free film patterning method that has been widely used for patterning polymers, [156,157] metal surfaces, [158,159] and 2D materials. [160,161] Figure 9b illustrates the process flow for patterning GO films using laser writing. The laser source can be either a CW or a pulsed laser, with an objective lens used to focus the laser beam. Laser writing involves complex processes such as photochemical reduction, thermal melting/sublimation, and structural reorganization, [162] ultimately resulting in localized thinning or ablation of the GO films depending on the laser power. Laser writing can be used to pattern both thick films deposited by solution dropping and thin films coated via self-assembly. The patterning resolution is mainly determined by the spot size of the focused laser beam, which typically ranges from several microns to hundreds of nanometers. [163] One of the largest advantages of laser patterning is its flexibility: because no mask is needed in the fabrication process, arbitrary patterns can be produced by simply changing the controlling computer program, making this method well suited to pattern design and prototyping. Lithography followed by lift-off is another widely used GO film patterning method. [58,64] Unlike laser writing, which performs patterning and etching simultaneously, in lithography the patterns are first formed on photoresist using techniques well developed in the integrated circuit industry, such as photolithography and electron beam lithography. The patterns on the photoresist are then transferred to GO films deposited on the photoresist via lift-off processes common to the fabrication of integrated metal electrodes (Figure 9c). Compared to GO films coated via solution dropping, films formed by self-assembly show a better lift-off outcome owing to their strong adhesion to the substrates enabled by the electrostatic forces. The patterning resolution is determined by both the lithography resolution and the film properties. For visible, ultraviolet, and deep ultraviolet (DUV) photolithography, the patterning resolution is mainly limited by the lithography resolution (typically >300 nm), which is much larger than the sizes of the exfoliated GO nanoflakes (typically ≈50 nm). For electron beam lithography with a higher patterning resolution (typically <100 nm), the influence of the GO film thickness and flake size becomes more prominent, especially when the minimum feature size is <150 nm. [64] Direct coating of GO films onto pre-patterned structures, or pre-patterning, is a simple method that can realize large-area GO film patterning. It relies on pre-fabrication to pattern the target substrates, followed by conformal coating of GO films (Figure 9d), and is thus well suited to self-assembled GO films. [152] Pre-patterning is normally used for mass-producing repetitive patterns. Similar to lithography followed by lift-off, the patterning resolution is mainly limited by the minimum gap width of the pre-patterned structure when it is >300 nm, and by the GO film thickness and flake size when it is <150 nm.
Nanoimprinting is a film patterning method that can achieve a very high patterning resolution (e.g., down to ≈10 nm [166]). Similar to lithography followed by lift-off, it also requires patterning photoresist before transferring the patterns onto the GO films. Instead of photolithography or electron beam lithography, prefabricated imprint molds are employed to pattern the photoresist (Figure 9e). Different molds are required to fabricate different patterns, so this approach is mainly used for fabricating relatively simple and repetitive patterns. [153] Scanning probe lithography (SPL) is another technique that has been employed to directly pattern GO films, [154,155] where the patterning is realized by using a scanning probe tip to induce localized reduction, thinning, or ablation of the GO films (Figure 9f). Similar to laser writing, the SPL process does not need any mask or photoresist, and the pattern can be controlled via a computer program. Unlike laser writing, which normally has a patterning resolution >300 nm, the patterning resolution of SPL can reach below 100 nm (e.g., ≈12 nm [154]), mainly limited by the size of the probe tip.
Finally, it is worth mentioning that the fabrication techniques used to incorporate GO films in Figures 7-9 are not limited to nonlinear integrated photonic devices. Rather, they are universal and can be used to fabricate other integrated photonic devices such as polarizers, [64,167] lenses, [108,168] and sensors, [169,170] as well as integrated electronic devices such as field-effect transistors, [169,171] supercapacitors, [172,173] and solar cells. [174,175] For nonlinear integrated photonic devices, self-assembly methods are more widely used than solution dropping methods, mainly due to the high film uniformity and low film thickness they can achieve, which result in the low film loss desirable for boosting nonlinear optical processes such as FWM and SPM. It should also be noted that the properties of GO and rGO films are affected by the fabrication methods. As a result, the quality and consistency of synthesized materials in practical settings vary widely. For practical GO and rGO films, the linear optical properties can be quantified by the refractive index n and extinction coefficient k measured via spectral ellipsometry, [79] and the nonlinear optical properties can be quantified by the Kerr coefficient n2 and the nonlinear absorption coefficient β characterized via Z-scan measurements. [63,176,177] The reduction degree of GO can be quantified by a few parameters such as the C-O ratio, the I_D/I_G ratio, and the I_2D/I_G ratio. [178,179] The first can be measured via X-ray photoelectron spectroscopy, whereas the last two can be obtained from Raman spectroscopy.
Enhanced Nonlinear Optics in GO Hybrid Integrated Devices
The large optical nonlinearity and low loss of GO, along with its facile fabrication processes for large-scale and highly precise on-chip integration, have enabled many hybrid integrated devices with superior nonlinear optical performance. [55-58,119,129] In this section, we summarize the state-of-the-art nonlinear integrated photonic devices incorporating GO.
As a typical third-order process, FWM has been widely exploited for all-optical signal generation, amplification, and processing. [4,28,33,67,180] Enhanced FWM in GO hybrid integrated devices was first demonstrated using Hydex waveguides, [57] where FWM measurements were performed for 1.5 cm long waveguides uniformly coated with 1-5 layers of GO. A maximum conversion efficiency (CE) of ≈-47.1 dB that corresponded to a net CE enhancement of ≈6.9 dB was achieved for the device with 2 layers of GO (Figure 10a).
Enhanced FWM in Hydex microring resonators (MRRs) with patterned GO films was subsequently demonstrated. [58] Benefitting from the resonant enhancement in the MRRs, a maximum CE of ≈-38.1 dB that corresponded to a CE enhancement of ≈10.3 dB was achieved for a MRR with a patterned film including 50 GO layers (Figure 10b). Based on the FWM measurements, the change in n 2 of GO films as a function of the layer number and light power was also analyzed, showing interesting trends in evolving from 2D materials to bulk-like behavior. Following the experimental demonstration, detailed theoretical analyses and optimization were performed in ref. [114] showing that CE enhancement up to ≈18.6 dB can be achieved by optimizing the GO coating length and coupling strength of the MRR.
Enhanced FWM in GO-Si 3 N 4 waveguides has also been demonstrated, [55] where FWM measurements were carried out for GO-coated planarized Si 3 N 4 waveguides having different GO film lengths and thicknesses, achieving a maximum CE of ≈-56.6 dB that corresponded to a CE improvement of ≈9.1 dB for a device with a 1.5 mm long patterned film including 5 GO layers (Figure 10c). The patterned device also showed a broadened conversion bandwidth compared to the uncoated and uniformly coated devices. A detailed analysis of the influence of the GO film parameters and the Si 3 N 4 waveguide geometry was provided in ref. [113] showing that the CE enhancement can be further increased to ≈20.7 dB and the conversion bandwidth can be improved by up to 4.4 times.
SPM is another fundamental third-order process that has wide applications in wideband optical sources, pulse compression, frequency metrology, and optical coherence tomography. [181,182] Enhanced SPM in GO-Si waveguides has been reported, [56] where SPM measurements were performed for Si wire waveguides conformally coated with GO films having different lengths and thicknesses. Significant spectral broadening of picosecond optical pulses after passing through these waveguides was observed, showing a maximum broadening factor (BF) of ≈4.34 for a device with 10 GO layers (Figure 10d). By coating GO films, the effective nonlinear figure of merit (FOM) of the hybrid waveguide was improved by up to 20 times compared to the bare Si waveguide. According to theoretical calculations based on the experimental results, a maximum BF of ≈27.8 can be achieved by optimizing the GO film parameters and Si waveguide geometry. In addition to enhanced SPM, strong SA in the GO-coated Si waveguide was also observed, as evidenced by a decrease in the measured excess insertion loss relative to the bare Si waveguide for an increased pulse energy (Figure 10e). It was also observed that the hybrid waveguides with thicker GO films showed a more prominent SA, although at the expense of higher linear loss.
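For readers who want a feel for the numbers, the link between accumulated nonlinear phase and spectral broadening can be sketched with a textbook estimate for an unchirped Gaussian pulse under pure SPM, neglecting dispersion, TPA, free carriers, and SA. This is only a back-of-the-envelope check and not the model used in ref. [56]; the parameter names and units are assumptions of this sketch.

```python
import numpy as np

def spm_rms_broadening_factor(gamma, peak_power, alpha, length):
    """RMS spectral broadening factor for an unchirped Gaussian pulse under
    pure SPM (dispersion and nonlinear loss neglected):
        BF_rms = sqrt(1 + (4 / (3*sqrt(3))) * phi_max**2),
    with phi_max = gamma * P0 * L_eff and L_eff = (1 - exp(-alpha*L)) / alpha.
    Units: gamma [1/(W m)], peak_power [W], alpha [1/m], length [m]."""
    l_eff = (1.0 - np.exp(-alpha * length)) / alpha
    phi_max = gamma * peak_power * l_eff
    return np.sqrt(1.0 + (4.0 / (3.0 * np.sqrt(3.0))) * phi_max ** 2)
```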
Comparison of Different Integrated Platforms Incorporating GO
As reviewed in Section 4, enhanced nonlinear optical responses have been achieved for integrated Si, Si 3 N 4 , and Hydex devices incorporating GO. In this section, we provide a detailed comparison of the nonlinear optical performance of these integrated platforms. We compare FWM using CW light and SPM-induced spectral broadening using optical pulses in GO-coated Si, Si 3 N 4 , and Hydex waveguides. We used the material parameters obtained from experimental measurements [55-57] to calculate the FWM and SPM performance parameters based on the theory in refs. [183-186] and accounted for the variation in loss arising from photo-thermal changes and SA in the GO films. Comparisons of the nonlinear figures of merit for different GO hybrid waveguides are also provided. Figure 11 shows the FWM CE as a function of waveguide length L and pump power P p for hybrid Si, Si 3 N 4 , and Hydex waveguides uniformly coated with GO films. Similar to Figure 6, we show the results for 5 different numbers of GO layers (i.e., N = 1, 2, 5, 10, 20). For each of the hybrid waveguides, the CE increases with P p , while as a function of L, it first increases and then decreases, reaching a maximum value at an intermediate waveguide length. As the layer number N increases, the L corresponding to the maximum CE becomes smaller. These trends reflect the trade-off between third-order nonlinearity improvement and propagation loss increase for the hybrid waveguides, with the former dominating for relatively small N and L, and the latter becoming more obvious as N and L increase. For the waveguides with the same N, the CE of the GO-Si waveguide is much higher than that of the GO-Si 3 N 4 and GO-Hydex waveguides, although its waveguide length is shorter. This can be attributed to the larger third-order optical nonlinearity of Si as well as the stronger GO mode overlap in the GO-Si waveguide. Figure 12 compares the CE enhancement (∆CE) of the hybrid waveguides relative to the uncoated waveguides. In Figure 12a, we show the results for the waveguides uniformly coated with GO films. For all of these hybrid waveguides, the CE enhancement decreases with waveguide length L, reflecting the fact that a shorter length yields better CE enhancement. For the waveguides coated with thicker GO films, although the initial CE enhancement (at very small L) is higher, it decreases more rapidly with L, thus resulting in a decreased range of L with positive CE enhancement. Figure 12b presents the corresponding results for the waveguides with patterned GO films, where the length of the uncoated waveguide is fixed at L, and the GO film coating length L c varies from 0 to L. Similar to the relation between CE and L in Figure 11, the CE enhancement reaches a maximum for an intermediate L c , and the L c corresponding to the highest ∆CE decreases with GO layer number N. This also results from the trade-off between the third-order nonlinearity and loss. The CE enhancement of GO-Si waveguides is lower than that of the GO-Si 3 N 4 and GO-Hydex waveguides, in contrast to the higher CE achieved for the GO-Si waveguides in Figure 11.
This reflects an interesting trade-off between achieving high relative CE enhancement versus high overall CE in these GO hybrid integrated waveguides. Figure 13 shows SPM-induced spectral evolution of optical pulses traveling along the hybrid Si, Si 3 N 4 , and Hydex waveguides uniformly coated with GO films. For comparison, we show the BFs at an intensity attenuation of -20 dB (i.e., BF -20 dB ) for different waveguides, together with the corresponding propagation lengths (L p ). For each of the hybrid waveguides, BF -20 dB first increases and then decreases with GO layer number N, achieving a maximum spectral broadening at an intermediate film thickness. As N increases, L p decreases and the optical pulses vanish at shorter propagation lengths, reflecting that the loss increase becomes dominant for the waveguides with longer lengths or thicker GO films. Similar to Figure 11, the larger n 2 of Si and stronger GO mode overlap result in the GO-Si waveguides showing a much more significant spectral broadening than comparable GO-Si 3 N 4 and GO-Hydex waveguides even for shorter lengths. Unlike the symmetric spectral evolution in the GO-Si 3 N 4 and GO-Hydex waveguides, the spectral evolution in the GO-Si waveguides exhibits a slight asymmetry due to free-carrier effects in Si. Figure 14a compares the relative BF (rBF) versus waveguide length (L) for hybrid Si, Si 3 N 4 , and Hydex waveguides uniformly coated with GO films, where the rBF is defined as the ratio of the BF of the hybrid waveguide to that of the uncoated waveguide. Note that the BF here corresponds to the value at the waveguide output, which is different from BF -20 dB in Figure 13. For the GO-Si and GO-Si 3 N 4 waveguides with thicker GO films (N ≥ 5), the maximum rBF is achieved for an intermediate L, whereas for these waveguides with thinner GO films and all the GO-Hydex waveguides, the rBF monotonically increases with L. This is consistent with the trade-off between the third-order nonlinearity and loss in Figure 13. Figure 14b compares the rBF versus GO coating length (L c ) for the waveguides with patterned GO films, where the length of the uncoated waveguide is fixed at L, with L c varying from 0 to L. Similar to the trend in Figure 14a, the maximum rBF is also achieved for an intermediate L c for the GO-Si and GO-Si 3 N 4 waveguides when N ≥ 5. In contrast to the higher BF achieved for the GO-Si waveguides in Figure 13, their rBF is lower than the GO-Si 3 N 4 waveguides, which is similar to the trade-off between achieving a high relative CE enhancement and a high overall CE in Figures 11 and 12.
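The trade-offs discussed in this section all stem from the competition between the effective interaction length and the propagation loss. A commonly used undepleted-pump, phase-matched estimate of the FWM CE makes this competition explicit; the expression below is a textbook approximation rather than the full coupled-mode model of refs. [183-186] used for the figures, and the numerical parameter values are illustrative placeholders only.

```python
import numpy as np

def fwm_ce_db(gamma, pump_power, alpha, length):
    """Idler-to-signal conversion efficiency for phase-matched FWM in the
    undepleted-pump, low-conversion limit:
        CE ~ (gamma * P_p * L_eff)^2 * exp(-alpha * L),
    with L_eff = (1 - exp(-alpha * L)) / alpha.
    Units: gamma [1/(W m)], pump_power [W], alpha [1/m], length [m]."""
    l_eff = (1.0 - np.exp(-alpha * length)) / alpha
    ce = (gamma * pump_power * l_eff) ** 2 * np.exp(-alpha * length)
    return 10.0 * np.log10(ce)

# Sweeping the length for illustrative parameters reproduces the qualitative
# trend of Figure 11: CE first grows with L_eff and then decays once the
# accumulated loss dominates.
lengths = np.linspace(0.5e-3, 20e-3, 100)          # waveguide lengths [m]
ce_curve = [fwm_ce_db(gamma=300.0, pump_power=0.1, alpha=230.0, length=L)
            for L in lengths]
```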
Figure 14. a) SPM-induced relative BF (rBF) versus waveguide length (L) for hybrid Si, Si 3 N 4 , and Hydex waveguides uniformly coated with different numbers of GO layers. b) rBF versus GO coating length (L c ) for hybrid Si, Si 3 N 4 , and Hydex waveguides patterned with different numbers of GO layers. The lengths of the uncoated Si, Si 3 N 4 , and Hydex waveguides are L = 10 mm, L = 40 mm, and L = 16 cm, respectively. The patterned GO films are assumed to be coated from the start of the waveguides. In a,b), the parameters of the input optical pulses are the same as those in Figure 13.
In Table 3 and Figure 15, we quantitatively compare the nonlinear optical performance of GO-Si, GO-Si 3 N 4 , and GO-Hydex waveguides, together with corresponding results for the bare integrated waveguides to highlight the benefit brought by the incorporation of GO films into these integrated waveguides. We calculated two figures of merit, FOM 1 and FOM 2 . The FOM 1 is defined in terms of nonlinear absorption, and it can be expressed as FOM 1 = n 2 / (λ β TPA ), [2,3] where n 2 and β TPA are the effective Kerr coefficient and TPA coefficient of the waveguides, respectively, and λ is the light wavelength. The results for FOM 1 are provided in Table 3. It increases with GO layer number N, and the FOM 1 of the GO-Si waveguide is lower than that of comparable GO-Si 3 N 4 and GO-Hydex waveguides. The former results from the increase in the third-order optical nonlinearity, and the latter is due to the strong TPA of Si. The FOM 2 is defined based on the trade-off between third-order optical nonlinearity and linear loss. [187] It is a function of waveguide length L, given by FOM 2 (L) = γ L eff (L) (Equation (3)), where γ is the waveguide nonlinear parameter and L eff (L) = [1 − exp(−αL)] / α is the effective interaction length, with α denoting the linear loss attenuation coefficient. Figures 15a,b show L eff and FOM 2 versus L for hybrid Si, Si 3 N 4 , and Hydex waveguides uniformly coated with 1 and 10 GO layers, respectively. Different ranges of L are chosen and the results for the bare waveguides (i.e., N = 0) are also shown. These results were calculated based on Equation (3) using the measured linear and nonlinear optical parameters of practical GO films in refs. [55,57,58]. For all of these hybrid waveguides, FOM 2 first increases rapidly with L and then increases more gradually as L becomes larger. For small L, the FOM 2 values of the hybrid waveguides are higher than that of the comparable uncoated waveguide, whereas when L becomes large enough, the FOM 2 of the uncoated waveguide gradually approaches and even surpasses those of the hybrid waveguides. This reflects that the negative influence induced by the increased linear loss becomes more dominant as L increases, which is consistent with the results in Figures 11-14. In contrast to FOM 1 , the FOM 2 of the GO-Si waveguide is higher than that of comparable GO-Si 3 N 4 and GO-Hydex waveguides, mainly due to the large n 2 of Si and its strong GO mode overlap.
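Both figures of merit follow directly from the definitions above. The short helper below assumes SI units and uses the reconstructed forms FOM 1 = n 2 /(λ β TPA ) and FOM 2 (L) = γ L eff (L); it is meant as an illustration of the definitions, not as a verbatim restatement of ref. [187].

```python
import numpy as np

def fom1(n2, beta_tpa, wavelength):
    """FOM_1 = n2 / (wavelength * beta_TPA): nonlinear-absorption figure of
    merit, with n2 [m^2/W], beta_TPA [m/W], and wavelength [m]."""
    return n2 / (wavelength * beta_tpa)

def fom2(gamma, alpha, length):
    """FOM_2(L) = gamma * L_eff(L), with L_eff = (1 - exp(-alpha*L)) / alpha.
    Units: gamma [1/(W m)], alpha [1/m], length [m]."""
    l_eff = (1.0 - np.exp(-alpha * length)) / alpha
    return gamma * l_eff
```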
Comparison of Nonlinear Integrated Devices Incorporating Different 2D Materials
In the past decade, many nonlinear integrated photonic devices incorporating different kinds of 2D materials have been demonstrated, showing improved performance over comparable devices without 2D materials. In Table 4, we provide a summary of these devices for applications based on a variety of nonlinear optical processes such as FWM, SPM, XPM, SHG, DFG, and SA. Compared to integrated waveguides incorporating other 2D materials, GO hybrid waveguides have lower linear propagation loss. Further, the highly precise control of the film size and thickness, along with the capability for conformal coating on complex structures, offers unique advantages in engineering and optimizing the performance of GO hybrid integrated devices. Given the high flexibility in changing the properties of GO, the potential for performance optimization is even larger, and the variety of GO-based nonlinear integrated photonic devices goes well beyond what is possible with a single, fixed 2D material.
In future research on GO nonlinear integrated photonic devices, promising directions include so-far unexplored third-order nonlinear processes such as THG and XPM, the various second-order nonlinear optical processes, and a comparison between the performance of nonlinear integrated photonic devices incorporating rGO and graphene. The related work on integrated photonic devices incorporating graphene and TMDCs in Table 4 could provide guidance for the design of experiments and for performance comparisons.
Challenges and Perspectives
As discussed above, GO has distinct nonlinear optical properties and excellent compatibility with different integrated platforms, yielding many high-performance hybrid nonlinear integrated photonic devices. Despite these successes, this is just the beginning of a much larger field: there is still considerable room for improvement in material properties and device fabrication, and for creating new applications. In this section, we discuss the challenges and perspectives for fully exploiting the significant potential of GO for nonlinear integrated photonics.
Most of the state-of-the-art GO nonlinear integrated photonic devices incorporate GO films with little modification or optimization of their properties. However, as discussed in Section 2, GO's properties can be significantly changed by manipulating the OCFGs. This offers a high degree of flexibility in engineering its capabilities for different nonlinear optical processes. For example, a large optical bandgap of GO could benefit the FWM, SPM, and XPM processes by reducing the linear loss and nonlinear loss such as TPA, whereas for SA a small optical bandgap is often needed to enhance the light absorption and improve the modulation depth. As shown in Figure 16, the methods for tuning GO's material properties can be classified into two categories: reduction and doping. The reduction methods mainly involve thermal, [200] laser, [108,201] chemical, [202,203] or microwave-based reduction. [178] Among them, thermal and laser reduction are simple and rapid but usually suffer from limitations in terms of residual OCFGs and generated defects, whereas chemical and microwave reduction are better at completely removing the OCFGs while preserving the carbon network without introducing many defects. [200,201] By using wet-chemistry and microwave reduction methods, [178,204] high-quality rGO with properties extremely close to those of graphene has been synthesized. The synthesis of graphene-like materials via GO reduction can exploit the advantages offered by GO fabrication processes, including a high production yield and high CMOS compatibility, providing a viable solution for the mass production of graphene-based devices.
In contrast to the removal of OCFGs that occurs during the reduction of GO, doping methods introduce foreign atoms such as nitrogen, boron, and sulfur into the chemical structure of GO, thus enabling new material properties. The doping methods mainly consist of laser, [105] chemical, [205] plasma, [206] and annealing based doping. [207] For laser doping, the doped area can be well controlled and patterned with a focused laser beam, but is often challenging for patterning large areas in a short time. In contrast, plasma, chemical, and annealing doping methods have shown strong ability to achieve highly efficient GO doping over large areas at the expense of a low patterning accuracy. [205][206][207] Although the linear propagation loss of the state-of-the-art integrated waveguides incorporating GO films is already over 100 times lower than comparable devices incorporating graphene, there is still significant room to reduce the loss even further. In principle, GO with a bandgap >2 eV has negligible linear absorption below its bandgap, e.g., at near-infrared wavelengths (with a photon energy of ≈0.8 eV at 1550 nm). The light absorption of practical GO films is mainly caused by defects as well as scattering loss stemming from imperfect layer contact and film unevenness. [57,64] The loss from these sources can be reduced further by modifying the GO synthesis and film coating processes, e.g., by using GO solutions with improved purity and optimized flake sizes. Reducing the linear loss of the GO films will not only enhance the performance of state-of-the-art GO FWM and SPM devices, but also facilitate many new nonlinear optical applications such as supercontinuum generation (SCG) [181,208] and optical micro-comb generation. [31,209] The current research on GO nonlinear integrated photonic devices mainly focuses on their large third-order optical nonlinearity. However, in addition to this, an excellent second-order optical nonlinearity of GO has been reported, [118,119] which will underpin future research on GO devices for many second-order optical nonlinear processes such as second-harmonic generation (SHG), sum/difference frequency generation (SFG/DFG), the Pockels effect, and optical rectification. Unlike the third-order optical nonlinearity that exists for all materials, the second-order optical nonlinearity can only occur in noncentrosymmetric materials or at the surface of centrosymmetric materials where the inversion symmetry is broken. [48,210,211] In contrast to graphene that has a centrosymmetric atomic structure, GO has a highly heterogenous atomic structure that yields a large second-order optical nonlinearity that can be tuned by changing the atomic structure of GO via reduction or doping methods. This, along with its high compatibility with integrated platforms, will enable functional second-order nonlinear integrated photonic devices with many applications, such as ultrafast signal processing and generation based on the SHG, [48] tunable terahertz plasmon generation based on the DFG, [194] high-speed electro-optic modulators based on the Pockels effect, [212] and broadband photodetectors based on the optical rectification. [213] The use of GO films as saturable absorbers in mode-locked fiber lasers has already been demonstrated, [47,48] and the SA in integrated waveguides incorporating GO has also been observed. 
[56] Although GO has a large optical bandgap with relatively low SA as compared with graphene, its SA capability can be improved by engineering the defect states in GO or by reducing it to obtain a graphene-like material. In the near future, integrated photonic devices incorporating GO or rGO with strong SA capability are expected to open new horizons for implementing on-chip mode-locked lasers, [214] broadband all-optical modulators, [128] pulse compression systems, [215] and photonic neural networks. [216] The overall nonlinear optical performance of GO hybrid integrated photonic devices depends on many factors related to GO's material properties, including not only nonlinear properties such as second-order or third-order optical nonlinearity and nonlinear light absorption, but also linear properties such as linear loss and dispersion. It is complicated by effects such as changes in GO's material properties with light power and film thickness. In the meantime, these extraordinary properties of layered GO films also yield a lot of new capabilities that cannot be achieved with conventional integrated devices made from only bulk materials, which allow more degrees of freedom to engineer the device performance and functionality.
As discussed in Section 2, there are photothermal changes in the GO films that result in the PDLL in practical GO films. In contrast to the reversible photothermal changes at low light powers, the loss increase induced by the photothermal changes can become permanent at high powers that exceed certain thresholds. The permanent loss increase of GO limits its use for high-power nonlinear optical applications. Recently, it was found that by using an electrochemical method to modify the degree of oxidation of GO, [217] the material can retain a high third-order optical nonlinearity with significantly improved (>100 times) stability under high-power laser illumination. In future work, the on-chip integration of this modified GO is expected to yield GO hybrid integrated photonic devices with superior power handling capability.
For practical GO films under light irradiation, different nonlinear optical processes coexist [218,219] and the interplay between these can result in complex behaviors. For example, the loss induced by TPA can deteriorate the FWM and SPM performance, whereas the reduced loss arising from SA could have a positive effect. In addition, different nonlinear optical processes may have different excitation conditions. For example, TPA occurs when the photon energy of incident light is larger than half of the material's bandgap, whereas SA can be efficiently excited when the single photon energy of incident light is just above the bandgap energy. In practical applications, the different nonlinear optical processes in GO need to be appropriately managed and balanced depending on the specific nonlinear optical application and the wavelength region of interest. For instance, the bandgap of GO can be engineered via reduction or doping to meet the requirements of specific nonlinear optical applications in specific wavelength regions.
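The excitation conditions just described reduce to a simple comparison between the photon energy and the material bandgap. The helper below encodes that coarse picture only; the three-way classification and the function names are illustrative simplifications, not a quantitative model of absorption in GO.

```python
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength in nm (about 0.8 eV at 1550 nm)."""
    return H_C_EV_NM / wavelength_nm

def nonlinear_absorption_regime(wavelength_nm, bandgap_ev):
    """Coarse classification following the conditions stated in the text:
    TPA becomes possible once the photon energy exceeds half the bandgap,
    while SA is efficiently excited when the photon energy is just above it."""
    e_ph = photon_energy_ev(wavelength_nm)
    if e_ph >= bandgap_ev:
        return "linear absorption / SA regime"
    if e_ph > bandgap_ev / 2.0:
        return "TPA possible"
    return "negligible nonlinear absorption (in this simple picture)"
```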
Assembling different 2D materials to construct van der Waals heterostructures has ushered in many significant breakthroughs in recent years. [220,221] Due to its ease of fabrication and the high flexibility in changing its properties, GO offers vast possibilities for implementing heterostructures based on different materials. Currently, some heterostructures including GO or rGO have been investigated, e.g., polymer/GO, [222] titanium carbide/rGO, [223] and vanadium pentoxide/rGO heterostructures. [224] However, the optical nonlinearity of GO or rGO heterostructures, particularly in the form of integrated devices, is yet to be investigated, hinting at more significant breakthroughs to come.
Phase matching is a prerequisite for achieving efficient nonlinear processes such as FWM, SPM, XPM, and THG. For GO-coated integrated waveguides, the waveguide dispersion can be engineered by reducing or patterning GO films to alleviate the phase mismatch. This would improve the FWM bandwidth and the SPM spectral broadening, and pave the way for broadband frequency comb generation [225,226] and SCG. [208,227] In materials with a positive Kerr coefficient n 2 (e.g., Si, Si 3 N 4 , and Hydex glass), phase matching occurs for anomalous dispersion. This requires growing thick films to achieve anomalous dispersion in the telecom band, which has been a major challenge for Si 3 N 4 films due to stress-induced cracking. [29] Recently, laser-reduced GO films with negative values of n 2 have been reported. [62,63] In future work, it is anticipated that the use of rGO with a negative n 2 can reduce the phase mismatch in Si 3 N 4 waveguides with normal dispersion, which would lower the requirements for achieving phase matching in normal-dispersion devices, thus rendering them capable of playing more important roles in nonlinear optical applications.
Slot waveguides, with enhanced light-matter interaction enabled by the strong light confinement in the subwavelength slot regions, provide a better structure to exploit the material properties of GO. [33,228] Although GO has shown advantages in conformally coating integrated wire waveguides, [79] this is still challenging for narrow slot regions with widths < 100 nm and heights > 200 nm. This is mainly limited by the size of GO flakes used for self-assembly, which is typically ≈50 nm. By modifying the GO synthesis methods and using more vigorous ultrasonication, GO flakes with smaller sizes can be obtained, which are expected to address this issue and enable the implementation of GO hybrid slot waveguides with significantly improved nonlinear optical performance. In addition to MRRs, [58] other resonant device structures can be employed to enhance the light-GO interaction based on the resonant enhancement effect, such as subwavelength gratings, [229] photonic crystal cavities, [36] and whispering-gallery-mode cavities. [230,231] Although there have been many works investigating the nonlinear optical performance of GO and rGO, many of these have been semi-empirical. More physical insights, such as the anisotropy of the optical nonlinearity, the dependence of the nonlinear optical properties on the reduction/doping degree, and the interplay between Re(χ (3) ) and Im(χ (3) ) processes, remain to be explored. Previously, the optical nonlinearity of thick GO films (>1 μm) was characterized via the widely used Z-scan method. [62,63] However, for extremely thin 2D films (<20 nm), it is very difficult to accurately distinguish the weak response induced by the 2D films from the background noise in the Z-scan measurements. Moreover, the ultrathin 2D films are easily damaged by the perpendicularly focused laser beam. The fabrication techniques for integrating GO films allow for precise control of their thicknesses and sizes, which yields new possibilities for investigating the fundamental physics of 2D GO films. This, in turn, will also facilitate the full exploitation of the great potential of GO in nonlinear integrated photonic devices. This synergy will have a long-lasting positive impact, which will be a strong driving force for the continuous improvement of device performance and broadening of applications.
Accompanying the continuous improvement in the knowledge and control of GO's material properties as well as the development of its fabrication techniques, it is expected that many new breakthroughs in GO nonlinear integrated photonics will happen. The delivery of mass-producible hybrid nonlinear integrated photonic devices with significantly improved performance serves the common interest of many photonic industries, which will accelerate the applications of 2D materials out of laboratory and assure that the research in this area will benefit the broader community.
Conclusion
The on-chip integration of GO with a large optical nonlinearity and a high degree of flexibility in changing its properties represents a promising frontier for implementing high-performance nonlinear integrated photonic devices for a wide range of applications. In this paper, we review the progress in GO nonlinear integrated photonics. We summarize the optical properties of GO and the fabrication technologies for its on-chip integration. We review a range of GO hybrid integrated devices for different nonlinear optical applications, and compare the nonlinear optical performance of different integrated platforms. We also discuss the challenges and perspectives of this nascent field. Accompanying the advances in this interdisciplinary field, we believe that GO-based nonlinear integrated photonics will become a new paradigm for both scientific research and industrial applications in exploiting the enormous opportunities arising from the merging of integrated devices and 2D materials.
Iterative parallel registration of strongly misaligned wavefront segments
The paper presents an algorithm for the precise registration of multiple wavefront segments containing large misalignment and phase differences. The measurement of a wavefront with very large dynamics or a large aperture size can be carried out in multiple Shack-Hartmann sensor measurements of segments of the wavefront. The registration algorithm is flexible with respect to the shape of the wavefront and can reconstruct plane as well as divergent wavefronts, making it suitable for freeform wavefronts. The algorithm enables parallel registration of the wavefront segments, which is carried out in an iterative manner to compensate for large misalignment errors. A simulation-based analysis of the proposed algorithm compares its performance to a fast parallel registration (FPR) algorithm and the established iterative closest point (ICP) algorithm. For a sensor misalignment of up to 100 µm and 3 mrad, the algorithm registers a plane and a divergent wavefront with a precision that is a factor of 4 and 12, respectively, better than the registration precision of the FPR and ICP algorithms.
Introduction
A Shack-Hartmann sensor (SHS) is well suited for the evaluation of optical systems, as it provides a vibration-insensitive and reference free measurement of the optical wavefront with a large dynamic range [1,2]. This makes the SHS a frequently used device in ophthalmology [3], adaptive optics [4,5], free-space optical communication [6], optical system alignment [7] and production of optical systems and components [2]. However, wavefronts exceeding the sensors dynamic range or aperture size can only be captured in one measurement by using additional supporting null optics [8]. The drawback of supporting optics is that they are additional sources of errors, limiting the measurement quality [9]. In an alternative concept the SHS is combined with a positioning system to enable a measurement of the wavefront beyond the dynamic range or aperture size of the sensor [10][11][12]. In particular, segments of the wavefront are measured by the SHS part by part. Adjacent wavefront segments are measured with a spatial overlap, enabling the reconstruction of the entire wavefront using registration algorithms [13,14] that are also capable of handling deviations of the sensor from the intended measurement positions. As registration errors limit the measurement quality of the wavefront, a precise registration algorithm is necessary for the assessment of high-end optical systems. In the last decades freeform optics grew in popularity because of their high optical performance [15]. The shape of wavefronts generated by freeform optics can be of any type, demanding algorithms for the registration of wavefronts beyond plane wavefronts. In [16] a fast and parallel registration (FPR) algorithm is reported, where wavefront segments are registered in parallel within less than a second. The algorithm is analysed with respect to sensor misalignment up to 5 µm and 200 µrad and shows high quality results. However, a poor calibration of the positioning system or application-specific requirements with respect to measurement time, travel range, etc. might entail even larger sensor misalignment. The registration performance of the FPR algorithm deteriorates in case of larger sensor misalignment, as the used local approximation of the global mismatch metric limits its applicability to measurements with only moderate sensor misalignment.
The contribution of this paper is the development and evaluation of an algorithm that enables high-quality registration of plane as well as divergent wavefronts also in the presence of large sensor misalignment of the order of 100 µm and 3 mrad. Section 2. introduces the algorithm and discusses its properties. Section 3. presents a simulative analysis of the algorithm and Section 4. concludes the paper.
Measurement concept
In the proposed measurement concept the SHS measures segments of the entire wavefront at specific sensor positions [11,12]. Typically, the sensor deviates from its nominal position and alignment, because of uncertainties and errors in the positioning system (see Fig. 1), resulting in misaligned wavefront segments in the global frame (FG). The size of a wavefront segment is limited to the size of the sensor aperture. If adjacent measurements overlap, the misaligned wavefront segments can be registered to reconstruct the entire wavefront. In particular, the wavefront segments are registered by minimizing their overlap mismatch using rigid body transformation and wavefront propagation as illustrated in Fig. 2. The latter one is used to minimize phase differences between the wavefront segments caused by the misalignment of the sensor or a scan trajectory deviating from the wavefront of a specific phase. The sensor aperture of an SHS consists of a lenslet array, where at each lenslet the local gradient of the incident wavefront is measured [17]. From the measured gradients the corresponding wavefront segment can be reconstructed in form of a point cloud, which is a set of threedimensional points contained in the wavefront segment. For wavefront segments with large dynamics the corresponding point clouds can be determined from the phase distribution on the lenslet array [16], as the phase gradients are directly determined from the SHS measurement [18]. There are several algorithms available for the reconstruction of a distribution from a discrete set of local gradients, which can be divided into zonal and modal reconstruction algorithms [19,20]. Zonal reconstruction is typically preferred, as it better preserves details of the wavefront [21], which are important for a successful registration of the segments. The normal vector of the wavefront segment at each point of the point cloud is directly determined from the gradient measurements and is necessary for the proposed registration algorithm.
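To make the reconstruction step concrete, a minimal zonal reconstruction from the measured lenslet gradients might look like the sketch below. It sets up a Southwell-style finite-difference system and solves it by least squares; this is not the spline-based zonal algorithm used in the paper, the dense matrix is chosen only for brevity, and the function name and grid assumptions are illustrative.

```python
import numpy as np

def zonal_reconstruct(gx, gy, pitch):
    """Least-squares zonal reconstruction of a wavefront from its sampled
    x/y gradients gx, gy (shape (ny, nx)) on a regular lenslet grid with
    spacing `pitch`.  Returns the wavefront up to an arbitrary piston.
    A real implementation would use sparse matrices and handle invalid
    lenslets."""
    ny, nx = gx.shape
    rows, cols, vals, rhs = [], [], [], []

    def idx(i, j):
        return i * nx + j

    eq = 0
    for i in range(ny):
        for j in range(nx):
            if j + 1 < nx:  # w[i, j+1] - w[i, j] ~ pitch * mean x-slope
                rows += [eq, eq]
                cols += [idx(i, j + 1), idx(i, j)]
                vals += [1.0, -1.0]
                rhs.append(0.5 * pitch * (gx[i, j] + gx[i, j + 1]))
                eq += 1
            if i + 1 < ny:  # w[i+1, j] - w[i, j] ~ pitch * mean y-slope
                rows += [eq, eq]
                cols += [idx(i + 1, j), idx(i, j)]
                vals += [1.0, -1.0]
                rhs.append(0.5 * pitch * (gy[i, j] + gy[i + 1, j]))
                eq += 1

    A = np.zeros((eq, ny * nx))
    A[rows, cols] = vals
    w, *_ = np.linalg.lstsq(A, np.asarray(rhs), rcond=None)
    w = w.reshape(ny, nx)
    return w - w.mean()  # remove the undetermined piston
```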
Iterative parallel registration algorithm
After the wavefront reconstruction the point cloud of each wavefront segment i = 1..U is determined in the local coordinate system of the SHS. For the reconstruction of the entire wavefront the sensor position and alignment in FG is necessary for each measurement. However, because of uncertainties in the positioning system, the exact position of the sensor is subject to uncertainty. The initial guess for the sensor position of the measurement of wavefront segment i is defined by the translation vector T 0i ∈ R 3 and the rotation matrix R 0i ∈ R 3×3 . Conveniently, the nominal position of the sensor is used for T 0i and R 0i . FSi is defined as the local coordinate system of the SHS positioned in FG with T 0i and R 0i . The reconstructed point cloud of segment i is directly represented in FSi, denoted as P i 0i with elements x i 0ij ∈ R 3 and normal vectors n i 0ij ∈ R 3 . The upper index defines the coordinate system in which an object is represented and j is an index to specify an individual point. Transformation from FSi into FG is given by the rigid body transformation x 0ij = R 0i x i 0ij + T 0i ∈ P 0i and n 0ij = R 0i n i 0ij , as illustrated in Fig. 3. The upper index is omitted for objects represented in FG. As the actual position deviates from the assumed nominal position, P 0i is not correctly positioned and the point clouds have an overlap mismatch. Additionally, the actual sensor position might be at different phases. To remove the phase differences, the wavefront segments have to be propagated and as an initial guess S 0i ∈ R can be defined for the propagation distance of segment i. To register the wavefront segments in parallel, each point cloud is transformed by where θ i ∈ R 3 defines a rotation with R(θ i ) as the corresponding rotation matrix. k i ∈ R 3 defines a translation and s i ∈ R the distance along which the wavefront segment is propagated additionally to S 0i . The parameters are with respect to FSi and collected in the vector a T i = (k T i , θ T i , s i ) ∈ R 7 . A metric for the overlap mismatch between two transformed segments (i,k) is where the functions W i i (·, a i , S 0k ) = W i i (a i , S 0k ) and W i k (·, a k , S 0i ) = W i k (a k , S 0i ) denote the transformed segments in FSi. q kin ∈ R 2 is a sampling point that belongs to the overlapping region of the segments and lies in the x-y plane of FSi. The squared differences between the segments are added up. Instead ofM ik (a i , a k ) the metric is considered, where both wavefront segments are back-propagated by S 0i providing the advantage that only one segment has to be propagated by presumed propagation data. Despite backpropagation, Eq. (4) can be used as an alternative to Eq. (3) for registration of the segments [14]. With Eq. (2) the following point clouds are contained in the segment functions of Eq. (4): transformed from FSk to FSi given by With Eq. (4) a global mismatch metric for the entire overlap mismatch in the set of wavefront segments is given by which is the sum over all overlapping wavefront segments and N denoting the total number of sampling points. A T = (a T 1 , .., a T U ) ∈ R 7 U are the transformation parameters of all segments. For the global mismatch metric of Eq. (7), the FPR algorithm considers for each overlapping segment pair (i,k) the point clouds of Eq. (5) with initial positioning data, i.e.
The FPR algorithm is carried out in three steps. In Eq. (8) the two point clouds are represented in the local coordinate system of one point cloud, i.e. FSi. In the first step of the FPR algorithm, the point cloud P i 0i of the local coordinate system and the corresponding normal vectors n i 0ij are interpolated for subpixel registration, leading to the interpolants F i (x, y) ∈ R for the point cloud and N i (x, y) ∈ R 3 for the normal vectors. As P i 0i denotes the unpropagated point cloud with respect to FSi, it has the same shape for any presumed registration data, defined by R 0i , T 0i and S 0i . Hence, for any change of the presumed registration data the same interpolants can be used meaning that they have to be determined only once which is the advantage of using the global mismatch metric of Eq. (7). χ i kin ∈ P i k (0, ∆S 0ki ) define the points that belong to the overlapping region with P i 0i . The x-y components of χ i kin define the sampling points q kin ∈ R 2 of the metric for the overlap mismatch in Eq. (4). In the second step of the FPR algorithm the corresponding point in The segment functions at the sampling points in Eq. (4) can then be approximated by where z ikn ,z ikn ∈ R are the z components of χ i kin andχ i kin . C ikn ,C ikn ∈ R 7 are determined by χ i kin andχ i kin and the corresponding normal vectors. With Eq. (10) the global mismatch metric of Eq. (7) can be approximated by with B ikn =z ikn − z ikn . In the third step of the FPR algorithm the parameters A * that register the point clouds are determined by minimizing the approximated global mismatch metric of Eq. (11). The minimization of this expression can be carried out by solving a matrix equation where Q ∈ R V×7 U and B ∈ R V are determined by the coefficients of Eq. (11) with V denoting the total number of squared terms. As registration determines the entire wavefront up to a rigid body transformation and a wavefront propagation, the seven transformation parameters of one point cloud are set to 0 to make Q T Q invertible. Conveniently, this is the point cloud of the wavefront segment in the center of the set of segments to keep registration parameters small. If the parameters A * are large, owing to large sensor misalignment, the quality of the approximation of the wavefront segments at the sampling points in Eq. (10) decreases and the FPR algorithm shows registration errors. The iterative fast parallel registration (IFPR) algorithm is based on the FPR algorithm but not limited by the approximation of Eq. (10).
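Both the FPR and the IFPR algorithm operate on wavefront segments represented as point clouds with per-point normal vectors, which are transformed according to Eq. (2) by a propagation distance s, a rotation θ, and a translation k. A minimal sketch of such a per-segment transform is given below; the composition order (propagation along the measured normals first, then the rigid-body transform) and the Euler-angle convention are assumptions made for illustration, not the exact definition used in the paper.

```python
import numpy as np

def rotation_matrix(theta):
    """Rotation matrix from an angle vector theta = (tx, ty, tz), composed
    as Rz @ Ry @ Rx (one possible convention, assumed here)."""
    tx, ty, tz = theta
    rx = np.array([[1, 0, 0],
                   [0, np.cos(tx), -np.sin(tx)],
                   [0, np.sin(tx), np.cos(tx)]])
    ry = np.array([[np.cos(ty), 0, np.sin(ty)],
                   [0, 1, 0],
                   [-np.sin(ty), 0, np.cos(ty)]])
    rz = np.array([[np.cos(tz), -np.sin(tz), 0],
                   [np.sin(tz), np.cos(tz), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

def transform_segment(points, normals, k, theta, s):
    """Propagate a segment by s along its local normals and apply a
    rigid-body transform (rotation theta, translation k).
    points, normals: (N, 3) arrays in the local sensor frame."""
    propagated = points + s * normals
    r = rotation_matrix(theta)
    return propagated @ r.T + k, normals @ r.T
```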
Similarly to the FPR algorithm, the IFPR algorithm is initialised with T 0i , R 0i and S 0i for the presumed sensor position and phase data of the measurements. The first three steps of the IFPR algorithm are the same as for the FPR algorithm. First, the interpolation of P i 0i and the corresponding normal vectors is carried out leading to F i (x, y) and N i (x, y). Second, for each overlap the corresponding points and normal vectors are determined based on Eq. (9). Third, the parameters that minimize Eq. (11), i.e. A * = (a * T 1 , .., a * T U ), are determined by solving Eq. (12).
Inserting Eq. (2) into Eq. (1) using A * leads to updated positioning data T 1i , R 1i , and S 1i , which can be considered as a better guess for the registration data. In the fourth step of the IFPR algorithm, the relative change of the global mismatch metric with respect to the improved registration data is considered to evaluate whether the registration is sufficient. The relative change of the global mismatch metric is given by Eq. (14), where M 0g and M 1g are M g (A = 0) (see Eq. (7)) evaluated for T 0i , R 0i , S 0i and T 1i , R 1i , S 1i , respectively. The approximation of the global mismatch metric in Eq. (11) is exact for A = 0, hence it can be used to compute Eq. (14). If there is a significant relative change of the global mismatch metric when using the improved registration data, there may be a chance for an additional improvement of the registration data by applying the FPR algorithm with T 1i , R 1i , S 1i as the presumed registration data. This is equivalent to repeating steps two and three of the IFPR algorithm.
Step one is not repeated, owing to the fact that the same interpolants are used to approximate Eq. (11) for any presumed registration data. The iteration including step two to step four is then repeated until the relative change of the global mismatch metric is smaller than a threshold denoted by ε indicating the convergence to the correct registration data T * i , R * i and S * i and serving as a stopping condition for the iteration. The steps and the iteration of the IFPR algorithm are illustrated in Fig. 4.
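The iteration just described can be condensed into a short control-flow sketch. The callables passed in stand for the FPR machinery of steps one to three (interpolation, correspondence search, and the linearized least-squares solve) and for the evaluation of the global mismatch metric; neither they nor the exact form of the relative-change test of Eq. (14) are reproduced here, so the sketch shows only the loop structure, with the ε = 1/3 default taken from the threshold quoted later in the text.

```python
def ifpr_register(segments, initial_guess, fpr_step, apply_update,
                  global_mismatch, eps=1/3, max_iter=20):
    """Outline of the IFPR iteration.  `fpr_step` returns the parameters A*
    that minimize the approximated global mismatch metric for the current
    registration guess, `apply_update` folds A* into the guess, and
    `global_mismatch` evaluates the metric at A = 0 for a given guess."""
    registration = initial_guess          # presumed T_0i, R_0i, S_0i
    m_prev = global_mismatch(segments, registration)
    for _ in range(max_iter):
        a_star = fpr_step(segments, registration)          # steps 2 and 3
        registration = apply_update(registration, a_star)  # improved guess
        m_curr = global_mismatch(segments, registration)   # step 4
        if m_prev > 0 and (m_prev - m_curr) / m_prev < eps:
            break                                          # converged
        m_prev = m_curr
    return registration
```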
The complexity of the first iteration in dependence of the number of segments (U) and the number of points per overlap (PPO) used for the registration can be divided into the following terms The first term describes the complexity of the interpolation (step 1), which is independent of the number of PPO. The second term describes the complexity of step 2 and 4, where α ∈ R with αU being the number of overlaps. The last two terms describe the complexity of step 3. In particular, they describe the complexity of the symmetric product Q T Q and the Cholesky decomposition used to solve Eq. (12) [22]. In conclusion, Eq. (15) shows that the computational effort of one iteration has a linear dependence on the number of PPO and a cubic dependence on the number of segments.
Algorithm analysis
The performance of a registration algorithm is influenced by several quantities, e.g. sensor misalignment, measurement noise, etc. The performance of the IFPR algorithm is evaluated with respect to these quantities in a simulation-based analysis. For a comparison of the IFPR algorithm to other algorithms, the FPR algorithm [16] and the established iterative closest point (ICP) algorithm [23] are considered.
Simulation setting
A plane and a divergent wavefront, depicted in Fig. 5(a) and 5(b) respectively, are considered for the evaluation of the registration performance of the algorithms. The plane wavefront is generated by collimating a spherical wavefront with a meniscus lens [14,24]. It has a diameter of 50 mm and a peak-to-valley (PV) of 11 µm, containing mainly spherical aberration (PV = 11 µm) and secondary spherical aberration (PV = 0.5 µm). The second wavefront is a spherical wavefront (diameter of 30 mm) with a divergence of 140° and contains mainly spherical aberration (PV = 6 µm), astigmatism (PV = 7 µm) and coma (PV = 6 µm), resulting in a total PV of 13 µm. The plane wavefront is measured at 25 sensor positions in the x-y plane arranged in a chessboard pattern, and the sensor aperture is a square with a side length of 13 mm. There are 43 sensor positions for the measurement of the divergent wavefront, where a circular sensor aperture with a diameter of 7 mm is used. The size of the lenslets, contained by the sensor aperture, is set to 130 × 130 µm 2 , meaning that the total number of lenslets per sensor aperture is 10,000 for the plane wavefront and 2,200 for the divergent wavefront. The wavefronts and the SHS measurements are simulated with a custom raytracing software implemented in MATLAB (The MathWorks Inc., Natick, MA, USA) and using OpticStudio (Zemax LLC, Kirkland, WA, USA). With this software, sensor misalignment, measurement noise, and systematic measurement errors are simulated. After the measurement, the wavefront segments are reconstructed by a spline-based zonal reconstruction algorithm [25]. Then the reconstructed wavefront segments are registered by the algorithms. The algorithms run on a personal computer with 6 cores and a processor frequency of 2.6 GHz.
With the ICP algorithm the wavefront segments are registered sequentially, meaning that, starting from the wavefront segment in the center, the wavefront segments are consecutively added in a spiral way [14]. For the IFPR and FPR algorithm, the point clouds are interpolated with cubic interpolation and the normal vectors with linear interpolation [16]. The threshold for the stopping condition of the IFPR algorithm (see Fig. 4) is set to ε = 1/3, which turns out to be a suitable value, as during convergence the metric is typically decreased by orders of magnitude within one iteration. The result of an algorithm is evaluated in three steps. First, the registered wavefront is fitted to the original wavefront and the difference between the wavefronts is computed. Second, measurement noise and systematic measurement errors are removed from the difference to determine only the registration errors. Third, the root mean square (RMS) and PV values of the registration errors are determined.
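For illustration, the last of the three evaluation steps can be condensed into a small helper that computes the RMS and PV of a residual map; the simple piston removal used here is a stand-in for the fitting and noise-removal steps described above, and the function name is arbitrary.

```python
import numpy as np

def registration_error_stats(registered, reference):
    """RMS and peak-to-valley (PV) of the residual between a registered
    wavefront and the reference, after removing the mean (piston)."""
    residual = np.asarray(registered) - np.asarray(reference)
    residual = residual - residual.mean()
    rms = np.sqrt(np.mean(residual ** 2))
    pv = residual.max() - residual.min()
    return rms, pv
```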
Reference configuration
Sensor misalignment defines a deviation of the actual sensor positioning data from the nominal positioning data and is reflected by k i ∈ R 3 and θ i ∈ R 3 of Eq. (2). In particular, k i reflects translational and θ i rotational misalignment of wavefront segment i. For the simulation of misalignment, the parameters are randomly distributed between predefined misalignment ranges, which is [−50, 50] µm for the components of k i and [−1500, 1500] µrad for the components of θ i . Measurement noise is simulated by overlaying the point clouds of the segments with a normal noise distribution with zero mean and a standard deviation of 10 nm. The overlap area of adjacent measurements is set to 20 % of the area of the sensor aperture. In the measurement of the divergent wavefront the overlap size is 20 % of the area of the sensor aperture or larger because of the complex shape of the wavefront. The average number of PPO used for registration is 928 for the plane wavefront and 315 for the divergent wavefront. In the following sections the algorithms are analysed with respect to misalignment ranges, noise standard deviation, overlap size and the number of PPO, where the values used for these quantities in this section define the reference configuration.
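To reproduce a setup like this reference configuration, the misalignment and noise models can be sampled as in the following sketch. The uniform ranges (±50 µm, ±1500 µrad) and the 10 nm noise standard deviation follow the text, while applying the noise only to the z component of the point cloud and fixing the random seed are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_misalignment(n_segments, k_range=50e-6, theta_range=1500e-6):
    """Draw per-segment translational (m) and rotational (rad) misalignment,
    uniformly distributed over the reference ranges given in the text."""
    k = rng.uniform(-k_range, k_range, size=(n_segments, 3))
    theta = rng.uniform(-theta_range, theta_range, size=(n_segments, 3))
    return k, theta

def add_measurement_noise(points, sigma=10e-9):
    """Overlay a segment point cloud with zero-mean normal noise
    (sigma = 10 nm), applied to the z component only."""
    noisy = np.asarray(points, dtype=float).copy()
    noisy[:, 2] += rng.normal(0.0, sigma, size=len(noisy))
    return noisy
```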
The registration errors of the IFPR, FPR and ICP algorithm for the reference configuration are depicted in Fig. 6 and Fig. 7 for the plane and the divergent wavefront, respectively. The IFPR algorithm attains high quality registration results of both wavefronts with an RMS registration error of 14 nm for the plane and an RMS registration error of 31 nm for the divergent wavefront. Registration of the divergent wavefront leads in general to larger errors, since a larger number of wavefront segments has to be registered and a smaller number of PPO is used as compared to the plane wavefront. The IFPR algorithm requires 3 to 4 iterations to register the wavefronts of the reference case as illustrated in Fig. 8 demonstrating its fast convergence. Depending on the initial guess for the registration data a certain amount of iterations is needed, as the approximation of transformed segments of the FPR algorithm (see Eq. (10)) is less qualitative the larger the necessary transformation parameters are. This explains the larger RMS registration errors of the FPR algorithm of 24 nm for the plane wavefront and 122 nm for the divergent wavefront depicted in Fig. 6(b), 7(b) and 7(c). The IFPR algorithm registers the plane wavefront at least about a factor 2 better than the other algorithms. For the divergent wavefront the improvement is a factor 4. The ICP algorithm has the largest registration errors, as the wavefront segments are sequentially registered, leading to an accumulation of the registration errors. Moreover, the ICP algorithm can not propagate the wavefront segments and does not compensate phase differences between them, leading to enlarged registration errors for the divergent wavefront. Analysis shows that the IFPR algorithm attains comparable results with respect to freeform wavefronts. Increasing the dynamics of the plane wavefront to a PV of 1 mm the RMS registration error is still around 10 − 20 nm.
Influence of misalignment
Sensor misalignment is caused by uncertainties and errors in the positioning system. The misalignment is divided into translational and rotational misalignment reflected by parameters k i and θ i for segment i. In the simulation the components of k i and θ i are randomly picked for a defined misalignment range.
The influence of the misalignment range on the registration quality of the algorithms with respect to the plane and the divergent wavefront is depicted in Fig. 9. Translational misalignment ranges are considered up to ±100 µm. For the rotational misalignment, two ranges, i.e., ±1.5 and ±3 mrad, are considered as examples. The values for the misalignment ranges are of a realistic order of magnitude for a multi-axis positioning system [26]. The plots show the RMS registration error on a logarithmic scale as a function of the misalignment range. The PV registration error is typically larger than the RMS registration error by a factor of 5 to 8. The IFPR algorithm has, for all considered misalignment ranges, an RMS registration error smaller than 25 nm for the plane wavefront and smaller than 50 nm for the divergent wavefront. For large misalignment of ±100 µm and ±3 mrad, the IFPR algorithm has registration errors a factor of 4 and a factor of 12 smaller than those of the other algorithms for the plane and the divergent wavefront, respectively.
In general, the IFPR algorithm attains better and more robust registration performance than the other algorithms. Only for the divergent wavefront and small misalignment does the FPR algorithm have slightly smaller registration errors than the IFPR algorithm. This is explained by the fact that, for wavefront segments with weak features, many iterations might shift the segments against each other more than necessary. Hence, there are two reasons for the stopping condition of the iteration. First, it avoids unnecessary iterations, saving computation time. Second, in the case of wavefront segments with weak features, it prevents the IFPR algorithm from applying too large in-plane shifts of the wavefront segments. For translational misalignment smaller than ±30 µm and rotational misalignment smaller than ±1.5 mrad, the FPR algorithm obtains results comparable to the IFPR algorithm, as the approximation of Eq. (10) is still sufficiently good.
Influence of noise and systematic error
Background light as well as readout and dark currents are sources of noise in the measurement with an SHS [27]. The total measurement noise is simulated by adding a normal noise distribution to the wavefront segments. Moreover, an SHS measurement might contain systematic errors. In this section a systematic measurement error is simulated by an additional error distribution added to the wavefront segments with a PV = 5 nm. The distribution of the systematic error is depicted in Fig. 10 and based on a typical systematic error distribution of a calibrated SHS [28]. The influence of measurement noise on the algorithms' performance in the presence of the systematic measurement error is shown in Fig. 11. For both wavefronts and the considered noise standard deviations the IFPR algorithm attains the best registration results of the algorithms with a minimum RMS registration error of 6 nm for the plane and 23 nm for the divergent wavefront. Compared to the results of the reference configuration where no systematic error is simulated, the IFPR and the FPR algorithm do not decrease in registration performance, while the ICP algorithm, especially for the divergent wavefront, has larger registration errors, explained by the reduced robustness of sequential registration to measurement errors as compared to parallel registration. Especially for wavefront segments with weak features and large misalignment, the wavefront segments might be shifted too far by the FPR algorithm, as Eq. (10) qualitatively describes the transformed segments only for sufficiently small transformation parameters. Basically, noise prevents the segments from getting shifted too far, as the overlap mismatch increases. Hence, the registration errors might increase with a smaller noise standard deviation.
Influence of points per overlap
The points in the overlap regions contain the surface information that enables registration of the segments. Typically, more PPO improve the registration result, because more information of the segments shape is available. Nevertheless, a smaller number of PPO has the benefit of a decreased computation time. In Fig. 12 the influence of the average number of PPO used for registration on the computation time of the algorithms and on their registration results is depicted with a uniform distribution of the selected points in the overlap region.
As expected, the computation time of the algorithms decreases with a smaller number of PPO. Only the computation time of the ICP algorithm for the registration of the plane wavefront increases despite a decrease of the number of PPO from 150 to 100, which is explained by more iterations used by the algorithm. The IFPR algorithm has the smallest registration errors for all considered numbers of PPO, demonstrating its robustness despite a small amount of surface information. For 100 PPO the IFPR algorithm has a computation time between 200 ms and 300 ms with an RMS registration error of 21 nm for the plane and 64 nm for the divergent wavefront. Increasing the number of PPO to 300 reduces the RMS registration error for the divergent wavefront to 32 nm while the computation time increases to 470 ms. For the plane wavefront the RMS registration error of the IFPR algorithm increases when the number of PPO is increased from 200 to 300 showing that more PPO might reduce the registration quality in some cases. With 600 PPO, the IFPR algorithm registers the plane wavefront in 410 ms with an RMS registration error of 15 nm. Despite requiring three iterations to register the wavefronts, the computation time of the IFPR algorithm is not three times the computation time of the FPR algorithm. The reason for this is that some parts of the FPR algorithm have to be carried out only in the first iteration of the IFPR algorithm, e.g. the interpolation of the point clouds, the determination of the points belonging to an overlap, and need not to be repeated in following iterations.
Influence of overlap size
Besides increasing the number of PPO, more surface information is obtained by increasing the overlap size. Additionally, the algorithms become more sensitive to out-of-plane angles between the wavefront segments. The drawback of an increased overlap size is that more wavefront segments have to be measured, leading to an increased measurement time. The RMS registration error as a function of the overlap size, in percent of the area of the sensor aperture, is shown in Table 1. For all overlap sizes the average number of PPO used for registration is set to a constant value of 300. By increasing the overlap size from 20 % to 40 % the RMS registration error of the IFPR algorithm decreases by a factor of 3 to 4. In particular, the RMS registration error becomes smaller than 10 nm for both wavefronts, making the IFPR algorithm applicable for the evaluation of high-end optical systems. The RMS registration error of the ICP algorithm decreases by a factor of 15 when the overlap size is increased from 20 % to 40 %, demonstrating the high sensitivity of sequential registration to available surface information. The registration quality of the FPR algorithm hardly improves with the considered increase of the overlap size. This is explained by the incorrect interpretation of the surface information, as the approximation of Eq. (10) is of lower quality for the considered large sensor misalignment. In summary, the high-quality registration performance of the IFPR algorithm with RMS registration errors down to 10 nm in the presence of large sensor misalignment up to 100 µm and 3 mrad is successfully demonstrated. With a computation time of less than 500 ms on a personal computer, the evaluation of high-end optical systems in time-critical applications is possible.
Conclusions
In this paper, an algorithm for the precise registration of SHS measurements is proposed. The benefit of the proposed algorithm is that it can cope with large sensor misalignment, where other algorithms lack performance. The wavefront to be registered can be a plane as well as a divergent wavefront, making the algorithm applicable to a wide range of tasks, including the evaluation of freeform optics. The algorithm registers the wavefronts in an iterative manner, and the underlying mathematics is discussed. A simulation-based analysis of the algorithm performance is carried out, analysing influencing factors such as sensor misalignment, measurement errors, and available surface information. In this analysis, the results of the IFPR algorithm are compared to those of the FPR algorithm and the established ICP algorithm. For translational misalignment of up to 100 µm and rotational misalignment of up to 3 mrad, the proposed algorithm reconstructs a plane wavefront with registration errors a factor of 4 smaller, and a divergent wavefront with registration errors a factor of 12 smaller, than those of the FPR and ICP algorithms. The considered wavefronts are reconstructed in a few iterations (3 to 4) by the proposed algorithm. The computation time of the algorithm on a personal computer is less than 500 ms, making it suitable for time-critical applications. For an overlap size between the measurements of 40 % of the sensor aperture area, the proposed algorithm achieves RMS registration errors smaller than 10 nm, enabling a qualitative assessment of high-end optical systems.
Future work concerns applications of the algorithm as well as further analysis of the algorithm with respect to the shape of the wavefront. | 7,267.4 | 2021-09-29T00:00:00.000 | ["Physics"] |
Profiling of subcellular EGFR interactome reveals hnRNP A3 modulates nuclear EGFR localization.
The aberrant subcellular translocation and distribution of epidermal growth factor receptor (EGFR) represent a major yet currently underappreciated cancer development mechanism in non-small cell lung cancer (NSCLC). In this study, we investigated the subcellular interactome of EGFR by using a spectral counting-based approach combined with liquid chromatography-tandem mass spectrometry to understand the associated protein networks involved in the tumorigenesis of NSCLC. A total of 54, 77, and 63 EGFR-interacting proteins were identified specifically in the cytosolic, mitochondrial, and nuclear fractions of an NSCLC cell line, respectively. Pathway analyses of these proteins using the KEGG database showed that the EGFR-interacting proteins of the cytosol and nucleus are involved in the ribosome and spliceosome pathways, respectively, while those of the mitochondria are involved in metabolizing propanoate, fatty acid, valine, leucine, and isoleucine. A selected nuclear EGFR-interacting protein, hnRNP A3, was found to modulate the accumulation of nuclear EGFR. Downregulation of hnRNP A3 reduced the nuclear accumulation of EGFR, and this was accompanied by reduced tumor growth ability in vitro and in vivo. These results indicate that variations in the subcellular translocation and distribution of EGFR within NSCLC cells could affect tumor progression.
Introduction
The epidermal growth factor receptor (EGFR) signaling pathway is one of the most commonly deregulated pathways in human tumors. Despite the firmly established significance of this pathway in tumor growth, targeted treatments aimed at disrupting EGFR have yielded only modest clinical success over the past two decades. An exception is non-small cell lung cancer (NSCLC) patients carrying EGFR-activating mutations: such patients initially showed very promising responses to treatment with an EGFR kinase inhibitor, but almost all treated patients eventually developed resistance to the inhibitor 1. These unsatisfactory effects are, in part, due to the high complexity of the EGFR network pathway.
Significant research efforts have sought a deeper understanding of EGFR signaling and EGFR-mediated oncogenesis in human cancer. Recent studies have shown that activated EGFR may escape lysosome-mediated degradation and recycle to the plasma membrane or undergo intracellular trafficking to subcellular organelles, such as the nucleus 2,3 and mitochondria 4,5. Within these organelles, EGFR may exert novel functions that differ from its typical function as a transmembrane receptor tyrosine kinase. In support of this view, the functionality of EGFR has been shown to depend on its subcellular location 6, and EGFR was shown to undergo shuttling into the cell nucleus and mitochondrion upon ligand binding, EGFR-targeted therapy, and other stimuli (e.g., radiation) 7. As the EGFR localized in these organelles can display novel functions and may regulate the response of a tumor to therapy, it is important to characterize the novel functions of EGFR in these organelles.
The presence of full-length EGFR in the nucleus has been recognized for over 20 years 8. Nuclear EGFR functions as a tyrosine kinase, transcriptional mediator, and regulator of other biological functions. Within the cell nucleus, EGFR acts as a transcriptional mediator through its specific transactivation domain 9 and through its connections with RNA helicase A 10 and/or DNA-binding transcription factors that are highly expressed in tumors, including STAT3 2, E2F1 11, and STAT5 12. Nuclear accumulation of EGFR has been associated with cancer malignancy, poor patient survival, and drug resistance 3,13,14. In line with these links, studies have shown that nuclear EGFR activates the expression of cyclin D1 9, inducible nitric oxide synthase 2, B-Myb 11, COX-2 15, aurora A 12, c-Myc 16, and breast cancer resistance protein 17. Nuclear EGFR also retains its tyrosine kinase activity and phosphorylates proliferating cell nuclear antigen to stimulate cell growth and DNA repair 18. Heterogeneous nuclear ribonucleoprotein A3 (hnRNP A3) has been reported to interact with nuclear EGFR and stabilize mRNAs involved in aerobic glycolysis in response to irradiation 19. Overall, the current evidence suggests that blocking the nuclear functions of EGFR may maximize the efficacy of EGFR-targeting agents and other anticancer therapies. However, the physiological and pathological significance of nuclear EGFR in cancer remains largely undefined. Efforts to map the relationships at work within the subcellular interactome of EGFR could help us fundamentally understand the mechanisms that govern tumor development and therapeutic resistance, leading to alternative treatment strategies. To address this issue, we employed a label-free, spectral counting-based proteomics approach to investigate the EGFR subcellular interactome in an NSCLC cell line. We further examined a selected nuclear EGFR-interacting protein, hnRNP A3, and found that it contributes to the nuclear accumulation of EGFR in NSCLC.
Profiling of the subcellular EGFR interactome in CL1-5 cells
The translocation of EGFR to non-canonical subcellular locations, including the nucleus and mitochondria, represents a major yet underappreciated mechanism of NSCLC development. To further understand the putative functions of subcellularly distributed EGFR and EGFR signaling at different subcellular locations, we investigated the EGFR interactome at three subcellular locations (cytosol, mitochondria, and nucleus) to characterize the proteins that interact with EGFR at these locations. As previous studies have shown that EGFR is internalized to the nucleus and mitochondria without EGF treatment [20][21][22], we investigated the subcellular interactome of EGFR under physiological conditions without stimulation. Figure 1a shows an overview of the strategy used to analyze the EGFR interactome at each subcellular location and functionally explore each identified interacting protein. We used the NSCLC cell line CL1-5, a highly invasive cell line derived from CL1-0 cells 23 that displays higher EGFR expression than the less-invasive CL1-0 cell line 24. Proteins were isolated from the mitochondria, nucleus, and cytoplasm and immunoprecipitated with an antibody against EGFR (Fig. 1b). The subcellular proteins in the EGFR immunoprecipitates were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE, Fig. 1c), extracted from the gel, and identified by a label-free approach combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS).
Spectral counting-based quantification of identified EGFRinteracting proteins
To further assess the proteins that appeared to interact with EGFR, the relative amounts of proteins identified in the immunoprecipitates were determined by spectral counting-based protein quantification. The fold change for each protein was determined from the ratio of the average spectral count (SC) of the protein in the anti-EGFR fraction to that in the control IgG fraction. Proteins with fold changes more than two standard deviations (SD) above the mean ratio (i.e., above 6.92, 7.92, and 17.84 for the cytosolic, mitochondrial, and nuclear fractions, respectively) were considered to be EGFR-interacting proteins. Based on this cutoff, 58, 79, and 67 EGFR-interacting proteins were observed in the cytosolic, mitochondrial, and nuclear fractions, respectively (Supplementary Table 1). Among them, 54, 77, and 63 proteins were specific to the cytosolic, mitochondrial, and nuclear groups, respectively (Supplementary Table 1). Of the shared proteins, glucose-6-phosphate isomerase was found in both the cytosolic and mitochondrial groups; 60S ribosomal protein L6 was detected in the mitochondrial and nuclear groups; and filaggrin, desmocollin-1, and suprabasin were identified in the cytosolic and nuclear groups.
Bioinformatics analysis of the EGFR interactome
To determine the biological processes that are most likely to be affected by the presence of the EGFR-associated complexes, we used DAVID to annotate the functions of the EGFR-interacting proteins in each subcellular fraction (Supplementary Table 1). The enriched biological processes were as follows: for the cytosolic fraction, translation, rRNA processing, protein complex biogenesis, regulation of apoptosis, and epidermis development; for the mitochondrial fraction, energy generation, oxidation/reduction, mitochondrion organization, membrane organization, and transmembrane transport; and for the nuclear fraction, RNA processing and ribonucleoprotein complex biogenesis (Table 1). Pathway analyses performed using the KEGG database revealed that the EGFR-interacting proteins of the cytosolic fraction were involved in ribosome-related pathways, those of the nuclear fraction were involved in spliceosome-related pathways, and those of the mitochondrial group were related to pathways involved in the metabolism of propanoate, fatty acid, valine, leucine, and isoleucine (Table 2).
We further used the STRING online database to establish a network of protein-protein interactions (PPIs) between the identified EGFR-interacting proteins (Supplementary Table 1). The analyses yielded 268, 24, and 204 strong interaction links between the EGFR-interacting proteins identified in the cytosolic, mitochondrial, and nuclear fractions, respectively (Fig. 2). In line with the results from our DAVID and KEGG analyses (Tables 1, 2), the STRING analysis generated a module that depicted interactions between EGFR and proteins grouped into the RNA processing/splicing and ribonucleoprotein complex biogenesis interaction networks (Fig. 2). The RNA processing/splicing group primarily included hnRNP family proteins, such as hnRNP A0, hnRNP A3, hnRNP DL, hnRNP M, and hnRNP UL1. The highest score was found for hnRNP A3, which is involved in RNA processing/splicing and the spliceosome (Tables 1, 2). Since hnRNP A3 is reportedly overexpressed in lung cancer 25 and has been shown to interact with nuclear EGFR in A549 cells 19, we selected hnRNP A3 for further study, and set out to examine the functional role of its interaction with nuclear EGFR.

Fig. 1 Identification of EGFR-interacting proteins in CL1-5 cells. a Illustration of the combined label-free proteomics and experimental approach used to investigate the subcellular interactome of EGFR in NSCLC. Mitochondrial, cytosolic, and nuclear proteins were isolated from CL1-5 cells and immunoprecipitated with anti-EGFR antibody for proteomic analysis. Receptor interactome identification was performed, and specific pairs with high likelihood of interaction were validated experimentally. b The proteins from whole cell lysates (WCL), mitochondrial fractions (Mit), cytosolic fractions (Cyt), and nuclear fractions were analyzed for selected markers by Western blotting (top panel). The utilized markers included mtHSP70 for mitochondria, Lamin B for nuclei, E-cadherin for plasma membrane, and ERK for cytoplasm. In the bottom panel, the proteins were immunoprecipitated with anti-EGFR antibody and Western blotting was used to detect EGFR in the immunoprecipitates. c The proteins in the immunoprecipitates were separated by SDS-PAGE and stained with Coomassie blue.
Expression levels of hnRNP A3 and EGFR in paired NSCLC tumor and adjacent normal tissues
To address the functional role of the putative EGFR-hnRNP A3 interaction in the nucleus, we used immunohistochemistry (IHC) to detect the expression levels of hnRNP A3 and EGFR in 15 NSCLC tumor tissues and paired adjacent normal sections. The clinical characteristics of the patients are summarized in Supplementary Table 2. Representative IHC results for hnRNP A3 and EGFR (brown staining) from an overall stage 1 patient are shown in Fig. 3a. The percentage of positive staining ranged from 0 to 100% in all samples. The clinical relevance of hnRNP A3 and EGFR expression in paired NSCLC tumor and adjacent normal tissue samples is summarized in Supplementary Table 3. Elevated expression of hnRNP A3 and EGFR was detected in the tumor sections compared with the adjacent normal sections. To determine whether hnRNP A3 showed nuclear colocalization with EGFR in NSCLC, we used immunofluorescence (IF) staining to examine the expression patterns of these proteins in the paired tumor and adjacent normal tissues of an overall stage 3 patient. As shown in Fig. 3b, c, hnRNP A3 and EGFR showed elevated colocalization in tumor sections compared with adjacent normal sections. The elevated colocalization of hnRNP A3 and EGFR in tumor sections was also examined by IHC double staining in an NSCLC patient. As shown in Fig. 3d, the colocalization of EGFR and hnRNP A3 was clearly much higher in the tumor sections than in the hyperplasia sections. In addition, a tissue extract was prepared from frozen NSCLC tissue and immunoprecipitated with anti-hnRNP A3 or anti-IgG. As shown in Fig. 3e, EGFR was readily detected in the immunoprecipitates pulled down by anti-hnRNP A3. These results suggest that nuclear hnRNP A3 and EGFR interact in NSCLC.

Table 1 note: The Database for Annotation, Visualization, and Integrated Discovery (DAVID, version 6.7) was applied to functionally annotate enriched proteins, using the annotation category GOTERM_BP_FAT. Processes with at least five protein members and p values less than 0.01 were considered significant.
hnRNP A3 and EGFR interact in the nuclei of NSCLC cells
To confirm that hnRNP A3 interacts with EGFR in the nucleus, we immunoprecipitated the nuclear proteins of CL1-5 and A549 cells using an antibody against EGFR. As shown in Fig. 4a, hnRNP A3 was detected in EGFR immunoprecipitates from both CL1-5 and A549 cells. To further evaluate the nuclear colocalization of EGFR with hnRNP A3, we used in situ IF staining to analyze the subcellular distributions of EGFR, hnRNP A3, and DAPI (a nuclear marker). As shown in Fig. 4b, hnRNP A3 and EGFR were highly colocalized in the nuclei of both cell lines. As shown in Fig. 4c, about 35% and 55% of EGFR was localized in the nucleus of CL1-5 and A549 cells, respectively. Three-channel colocalization analysis indicated approximately 20% and 30% colocalization of EGFR, hnRNP A3, and DAPI in CL1-5 and A549 cells, respectively. Taken together, these results, combined with those from our analyses of the interactome and clinical tissues, strongly indicate that hnRNP A3 and EGFR interact in vitro and in vivo.
hnRNP A3 is essential for the nuclear translocation of EGFR
As hnRNP A3 has been shown to shuttle cargo between the cytosol and nucleus [26][27][28], we postulated that it might perform this function for EGFR. To test this hypothesis, we used IF staining to examine the effects of hnRNP A3 knockdown on the cellular distribution of EGFR. As shown in Fig. 5a, b, the colocalization of EGFR with DAPI was decreased in CL1-5 and A549 cells depleted of hnRNP A3. To confirm these IF staining results, we employed nuclear fractionation and Western blot analysis to examine the expression levels of EGFR and hnRNP A3 in nuclear and whole-cell lysates (WCL). As shown in Fig. 5c, while the level of hnRNP A3 was greatly reduced in the hnRNP A3-depleted cells, the level of EGFR in the whole-cell lysates remained unchanged. In contrast, the level of nuclear EGFR was greatly reduced in hnRNP A3-depleted CL1-5 cells compared to the siN control (Fig. 5d). As the total level of EGFR was not affected by the depletion of hnRNP A3, it is likely that the reduced localization of EGFR in the nucleus of hnRNP A3-depleted cells was accompanied by the redistribution of EGFR to the membrane, cytoplasm, or cytoplasmic organelles. In addition, we examined how stable depletion of hnRNP A3 affected the nuclear localization of EGFR. CL1-5 cells stably depleted of hnRNP A3 by sh-hnRNP A3 (shA3-1 and shA3-2) were subjected to nuclear fractionation, and the levels of EGFR in nuclear and whole-cell lysates were assayed by Western blot analysis. As shown in Fig. 5e, the levels of total EGFR in the WCL of shA3-1- and shA3-2-depleted CL1-5 cells were similar to that of the sh-V control, whereas the levels of nuclear EGFR were greatly reduced compared to the sh-V control. Collectively, these results show that hnRNP A3 modulates the nuclear localization of EGFR in NSCLC.
Effects of hnRNP A3 depletion on cell proliferation, anchorage-independent growth, and in vivo tumor growth of NSCLC
As hnRNP A3 is reportedly overexpressed in lung cancer 25, we next examined its effects on cell proliferation and anchorage-independent growth in CL1-5 and A549 cells. As shown in Fig. 6a, transient depletion of hnRNP A3 inhibited cell proliferation in CL1-5 cells (left panel) and A549 cells (right panel). Similarly, stable depletion of hnRNP A3 by shRNA suppressed the anchorage-independent growth ability of CL1-5 cells (Fig. 6b). These results suggest that hnRNP A3 may be involved in the tumorigenesis of NSCLC. To determine if hnRNP A3 affects NSCLC tumorigenesis in vivo, we examined how stable depletion of hnRNP A3 affected tumor growth in a xenograft mouse model. As shown in Fig. 6c, the growth of shA3-1 cell-derived tumors was slower than that of sh-V control cell-derived tumors. Similarly, the excised shA3-1 tumors were considerably smaller than the sh-V control tumors. To evaluate if the depletion of hnRNP A3 also reduced the nuclear localization of EGFR, we subjected the excised tumors to IF staining of hnRNP A3 and EGFR. As shown in Fig. 6d, e, the levels of hnRNP A3, nuclear EGFR, c-Myc, cyclin D1, aurora A, and COX-2 were greatly decreased in shA3-1 tumors compared to sh-V tumors. These results indicate that the downregulation of hnRNP A3 reduced tumor growth in vivo, possibly by decreasing the levels of nuclear EGFR and its targets, including c-Myc, cyclin D1, aurora A, and COX-2.

Fig. 3 Immunohistochemical (IHC) and immunofluorescence (IF) staining of EGFR and hnRNP A3 in NSCLC tumor and adjacent normal tissues. a Tumor (T) and adjacent normal (N) sections from an overall stage 1 patient were examined by hematoxylin & eosin staining (H&E) and IHC staining (magnification, ×400) for the detection of hnRNP A3 and EGFR. The immunoreactivity of hnRNP A3 and EGFR in tumor (T) and adjacent normal (N) IHC staining was scored and is indicated in each panel. b IF staining was used to assess the expression levels of hnRNP A3 and EGFR from an overall stage 3 patient. c The colocalization of hnRNP A3, EGFR, and 4′,6-diamidino-2-phenylindole (DAPI) from (b) was analyzed using the MetaMorph software. d IHC double staining was used to detect the colocalization of hnRNP A3 and EGFR in the tumor (bottom) and hyperplasia (top) sections of an NSCLC patient. The green color represents the EGFR signal and the brown color represents the hnRNP A3 signal. The deep-blue color (green plus brown) indicates colocalization and is marked with arrows. Magnification: ×400. e Whole cell lysates were prepared from frozen NSCLC tissue and immunoprecipitated (IP) with anti-hnRNP A3 or anti-IgG as control. The immunoprecipitated proteins were then analyzed by Western blot.
Discussion
In this study, we examined the EGFR interactome at three subcellular locations (cytosol, mitochondria, and nucleus) in order to deduce the functionality of EGFR at these locations. Using label-free approaches combined with LC-MS/MS, we identified 58, 79, and 67 EGFR-interacting proteins in the cytosol, mitochondria, and nucleus, respectively (Supplementary Table 1). Our enrichment analysis of biological process categories revealed that the cytosolic EGFR-interacting proteins were associated with translation, rRNA processing, peptide cross-linking, protein complex biogenesis, regulation of apoptosis, and epidermis development (Table 1). Pathway analyses using the KEGG database revealed that the cytosolic EGFR-interacting proteins were involved in ribosome-related pathways (Table 2). Our findings suggested a previously unknown function of EGFR, namely that cytosolic EGFR may interact with ribosomal proteins to promote the translational program in NSCLC. In tumor cells, the activation of survival signaling pathways increases overall protein synthesis and enhances cellular metabolism, tumor growth, and metastasis [29][30][31], and the deregulation of translation can enable tumors to resist clinical treatment 32. In this regard, it is interesting to speculate that the deregulation of translation and ribosomes may contribute to drug resistance, especially in NSCLC cells harboring constitutively activating mutations of EGFR.

Fig. 4 hnRNP A3 interacts with EGFR in the nucleus. a Whole cell lysates from CL1-5 and A549 cells were further processed to obtain nuclear fractions. The proteins from each nuclear fraction (CN; 1 mg) were immunoprecipitated (IP) with anti-EGFR or anti-IgG (control), and the proteins in the immunoprecipitates were analyzed by Western blotting. The tested markers included Lamin B for nuclei, E-cadherin for plasma membrane, mtHSP70 for mitochondria, and HSP90 for cytoplasm. b IF staining was used to assess the subcellular distributions of hnRNP A3, EGFR, and DAPI (nuclei) in CL1-5 and A549 cells. Colocalization of hnRNP A3 and EGFR in the DAPI-stained nucleus is seen as a white-colored spot. c The colocalization of hnRNP A3, EGFR, and DAPI in ~100 cells was analyzed using the MetaMorph software.
The EGFR-interacting proteins in mitochondria were found to correlate with precursor metabolite generation, energy production, and mitochondrion organization (Table 1). The mitochondrial EGFR interactome included Dnaja3 (Table 1; also known as Tid-1); this protein has been shown to govern the mitochondrial localization of EGFR, and the mitochondrial accumulation of EGFR has been shown to promote metastasis in NSCLC 33. While the functions of mitochondrial EGFR remain poorly understood, our pathway analyses revealed that many of the mitochondrial EGFR-interacting proteins are involved in the metabolism of propanoate and fatty acids and the degradation of valine, leucine, and isoleucine (Table 2). The palmitoylation of mitochondrial EGFR has been shown to induce mitochondrial fusion and promote cell survival in prostate and breast cancer 34. The products of valine, leucine, and isoleucine catabolism may enter the citric acid cycle and link to the metabolism of propanoate and fatty acids 35. In the future, it could be interesting to investigate exactly how mitochondrial EGFR modulates these metabolic pathways to promote metastasis.

Fig. 5 (caption continued): The summary data shown in (b) indicate the means ± SD from three independent experiments; **p < 0.01 and ***p < 0.001, as assessed with the Student's t-test. c Western blot analysis of the expression levels of hnRNP A3 and EGFR. β-Actin was used as a loading control. d The lysates of CL1-5 cells transfected with si-hnRNP A3 were fractionated to isolate the nuclear fraction for Western blot analysis. e CL1-5 cells were infected with sh-hnRNP A3 (sh-A3-1 and sh-A3-2) or sh-V (control) and stable clones of hnRNP A3-knockdown cells were obtained. The levels of relevant proteins in nuclear fractions (CN, 10 μg) and WCL (20 μg) were analyzed by Western blot analysis. The cytoplasmic marker, HSP90, and nuclear marker, Lamin B, were included to validate the purity of the nuclear fraction.
The EGFR-interacting proteins in the nucleus were found to be involved in RNA processing/splicing, ribonucleoprotein complex biogenesis, and cellular macromolecular complex assembly (Table 1). Pathway analyses revealed that the nuclear EGFR-interacting proteins were involved in spliceosome-related pathways (Table 2). Consistent with this finding, our STRING-based EGFR interactome profiling generated an EGFR interaction module that grouped into the RNA processing/splicing and ribonucleoprotein complex biogenesis interaction networks (Fig. 2). The RNA processing/splicing group primarily included hnRNP family proteins, including hnRNP A0, hnRNP A3, hnRNP DL, hnRNP M, and hnRNP UL1. Several members of the hnRNP A/B family, which comprises A1, A2, A3, and A0 36, have been identified in the nuclear EGFR interactome (Supplementary Table 1). HnRNP A1, A2, and A3 have been shown to copurify with the splicing complexes 37,38. The functions of hnRNP A1 and A2 in the splicing of oncogenes and tumor-related genes may explain the frequent dysregulation of hnRNP A/Bs in different types of cancers 39,40. HnRNP A3 has been reported to interact with nuclear EGFR and stabilize the mRNAs involved in aerobic glycolysis in response to irradiation 19. However, compared to the well-studied hnRNP A1 and A2, the roles of hnRNP A0 and A3 in RNA processing/splicing are poorly understood. In this study, hnRNP A3 was detected as a spliceosome component that interacted with nuclear EGFR and is predicted to function in a manner similar to that of hnRNP A1 and A2. Nuclear EGFR has also been shown to regulate the stability of mRNAs related to the VEGF pathway in stress-exposed NSCLC and head and neck cancer cell lines 41. It is also quite possible that the ability of nuclear EGFR to activate the expression of cyclin D1 9, inducible nitric oxide synthase 2, B-Myb 11, COX-2 15, aurora A 12, c-Myc 16, and breast cancer resistance protein 17 may be related to this RNA processing function.

Fig. 6 Effect of hnRNP A3 downregulation on cell proliferation and tumorigenesis in vitro and in vivo. a CL1-5 and A549 cells were transfected with si-hnRNP A3 (siA3-6 and siA3-8) or siN (control). After being cultured for 48 h, the transfected cells were monitored for cell proliferation. b, c CL1-5 cells were infected with sh-hnRNP A3 (shA3-1 and shA3-2) or empty vector (sh-V) and stable clones of hnRNP A3-knockdown cells were obtained. The anchorage-independent growth of these cells in soft agar was examined, as shown in (b). The growth of these cells in subcutaneously implanted mice is shown in (c). d IF staining of EGFR, hnRNP A3, and DAPI in excised xenograft tissues. e IHC staining of cyclin D1, COX-2, aurora A, and c-Myc in excised xenograft tissues. Magnification: ×400. The data shown in (a-c) represent the means ± SD from three independent experiments; *p < 0.05, **p < 0.01, and ***p < 0.001, as assessed using the Student's t-test.
From among the nuclear EGFR-interacting proteins, we selected hnRNP A3 for further study. Our results suggest that hnRNP A3 is involved in the nuclear accumulation of EGFR in NSCLC. For example, IF staining of EGFR and hnRNP A3 in tumor and adjacent normal tissues obtained from NSCLC patients revealed that hnRNP A3 and EGFR exhibited elevated colocalization in tumor sections compared with adjacent normal sections (Fig. 3). Consistent with a previous study, hnRNP A3 was detected predominantly in the nucleus, with minor expression in the cytosol 42. Depletion of hnRNP A3 did not affect the total level of EGFR but reduced the nuclear accumulation of EGFR (Fig. 5). These data suggest that the reduced nuclear EGFR accumulation is accompanied by increased relocation of EGFR to the membrane, cytoplasm, or cytoplasmic organelles. Depletion of hnRNP A3 also reduced anchorage-independent growth and tumor cell growth both in vitro and in vivo (Fig. 6). These results suggest that nuclear EGFR plays an important role in the tumorigenesis of NSCLC. Since hnRNP A/Bs have been reported to be overexpressed in lung cancer 43, our identification of hnRNP A/Bs as nuclear EGFR-interacting proteins suggests that such an interaction may involve a yet-to-be-identified mechanism that facilitates the functions of hnRNP A/Bs in mRNA processing/trafficking. Nuclear EGFR may employ its kinase activity to phosphorylate hnRNP A/Bs, or it may function as a transcriptional regulator of hnRNP A/B expression. Thus, the phosphorylation status and expression levels of hnRNP A/Bs may be affected by nuclear EGFR.
In summary, we herein examined subcellular EGFR interactomes, analyzed the putative functions of EGFR at these subcellular locations, and report that a nuclear EGFR-interacting protein selected for further study, hnRNP A3, modulates nuclear EGFR accumulation and tumor growth in NSCLC.
Cell lines and culture media
The human NSCLC cell line CL1-5 was established from CL1-0 cells by selecting for increased invasive capability using a Transwell chamber assay 23. These cells were kindly provided by Dr. Pan-Chyr Yang (Department of Internal Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan). A549 cells were obtained from the American Type Culture Collection (Manassas, VA, USA). CL1-5 and A549 cells express wild-type EGFR 44. In the absence of ligand stimulation, EGFR was not phosphorylated in these cells. CL1-5 cells were cultivated in RPMI-1640 supplemented with 10% fetal bovine serum (FBS) and 100 U/ml penicillin and streptomycin. A549 cells were cultured in Dulbecco's modified Eagle's medium containing 10% FBS, 2.5 mM L-glutamine, 0.5 mM sodium pyruvate, and 100 U/ml penicillin and streptomycin. All cell lines were verified by short tandem repeat analysis and tested for mycoplasma by PCR. All culture media and FBS were purchased from Life Technologies (Grand Island, NY, USA).
Tumor specimens
The 15 paired tumor and adjacent normal tissues were obtained from NSCLC patients who underwent surgical resection at Chang Gung Memorial Hospital. This study was approved by the Ethics Committee of Chang Gung Memorial Hospital. Written informed consent was obtained from all patients.
Subcellular fractionation
Nuclear, cytoplasmic, and mitochondrial fractions were obtained from lung cancer cells using a Qproteome mitochondria isolation kit (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol 33. Cells were suspended in a lysis buffer that selectively disrupts the plasma membrane while leaving organelles intact, allowing the isolation of cytosolic proteins.
Western blotting, immunoprecipitation, and immunofluorescence staining
Western blotting, immunoprecipitation, and immunofluorescence (IF) staining were performed as described previously 24,33. The distributions of hnRNP A3 and EGFR in NSCLC tissues and cells were determined by IF staining as described previously 33. Cells or tissues were visualized under confocal microscopy (LSM 700; Carl Zeiss, Jena, Germany), and the MetaMorph software (MetaMorph Inc., Nashville, TN, USA) was used to examine colocalization. The colocalization analysis of EGFR (red) and hnRNP A3 (green) in the nucleus (DAPI, blue) using the MetaMorph software is illustrated in Supplementary Fig. S1.
Immunohistochemical assay
Immunohistochemical staining for EGFR and hnRNP A3 was performed as described previously 24. The tissues were examined for the extent of EGFR and hnRNP A3 staining by a pathologist (W. Y. Chuang) in a blinded manner 45. The immunoreactivity of hnRNP A3 and EGFR was semiquantitatively scored by the percentage of positively staining tumor cells in a representative large section of each tissue specimen. The immunoreactivity was classified into four groups according to the percentage of positive tumor cells: negative (0%), low (1-50%), medium (51-95%), and high (96-100%). IHC double staining was performed with a MultiView (mouse-HRP/rabbit-DAB) IHC kit (Enzo Life Sciences, Inc., NY, USA), following the protocols provided by the manufacturer.
In-gel digestion and mass spectrometric analysis
The immunoprecipitates were separated by 10% SDS-PAGE and stained with a Colloidal Blue Staining Kit (Thermo Fisher Scientific). Destaining was performed with 10% methanol and 5% acetic acid, each gel lane was cut into 10 pieces, and each piece was further separated into two replicates. The pieces were then dehydrated in acetonitrile (Mallinckrodt Baker) and dried in a SpeedVac. Proteins were reduced with 25 mM NH4HCO3 containing 10 mM dithiothreitol at 60°C for 30 min, alkylated with 55 mM iodoacetamide at room temperature for 30 min, and then digested with trypsin (20 μg/ml; Promega, Madison, WI, USA) overnight at 37°C. The digested peptides were extracted with acetonitrile and dried in a SpeedVac.
The extracted peptides were identified on an LTQ-Orbitrap Discovery (Thermo Fisher, Waltham, MA) coupled with high-performance liquid chromatography. Briefly, peptide extracts were reconstituted in solution A (0.1% formic acid), loaded across a trap column (Zorbax 300SB-C18, 0.3 × 5 mm; Agilent Technologies, Taiwan) at a flow rate of 0.2 μL/min in solution A, and separated on a resolving 100-mm analytical C18 column (inner diameter, 75 μm) with a 15-μm tip (New Objective, Woburn, MA, USA). The peptides were eluted with a 60-min gradient at a flow rate of 0.25 μL/min. The LTQ Orbitrap was operated using the Xcalibur 2.0 software (Thermo Fisher). The data were acquired in a data-dependent mode comprising one MS scan, using the Orbitrap at a resolution of 30,000, and 10 MS/MS scans (in the linear ion trap) for the 10 most abundant precursor ions. The m/z scan range for the MS scans was set to 350-2000 Da, and the ion signal of (Si(CH3)2O)6H+ at m/z 445.120025 was used as a lock mass for internal calibration. To increase identification coverage, the precursor ions selected for MS/MS analysis were dynamically excluded for 180 s [46][47][48].
Database searching and protein identification
The obtained spectra were searched with the Mascot algorithm (Version 2.1; Matrix Science, Boston, MA, USA) against the Swiss-Prot human sequence database (released March 2018, selected for Homo sapiens, 20,198 entries) of the European Bioinformatics Institute. The peak list was generated using the Thermo ExtractMSn software (Version 1.0.0.8, May 2012 release). The mass tolerances for parent and fragment ions were set to 10 ppm and 0.5 Da, respectively. Oxidation of methionine (+15.99 Da) and carbamidomethylation of cysteine (+57 Da) were set as variable and fixed modifications, respectively. The enzyme was set as trypsin, and up to one missed cleavage was allowed. A randomized sequence database was used to estimate the false positive rates for protein matches. The resulting files were further integrated using the Scaffold software (Version 4.2.1; Proteome Software, Portland, OR, USA), which included the PeptideProphet algorithm 49 for the assignment of peptide MS spectra and the ProteinProphet algorithm 50 for grouping peptides to a unique protein or protein family (if the peptides are shared among several isoforms). The probability thresholds for PeptideProphet and ProteinProphet were set to 0.95 to ensure an overall false discovery rate below 0.5%. Only proteins with at least two unique matching peptides were retained.
Spectral counting-based protein quantification and bioinformatics analysis
To identify the binding partners of EGFR, we compared the protein levels between the immunoprecipitation products of the control (IgG) and anti-EGFR groups using a previously described label-free, spectral counting-based quantification method 51. Briefly, the exclusive spectrum count for each identified protein was exported from the Scaffold software in Excel format (Microsoft, Redmond, WA, USA). To reduce differences between analyses, the normalized spectral count (NSC) of each protein was calculated as the spectral count (SC) of that protein divided by the total SC of the analysis. The fold change was estimated as the ratio of the average of the normalized SCs in the EGFR group to that in the control group. Because not all proteins were identified in all replicates, the SCs of unidentified proteins or missing values in a given sample were assigned a value of one; this enabled us to avoid overestimating the fold changes and dividing by zero.
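A minimal sketch of this quantification logic, using a hypothetical table of spectral counts (protein rows, replicate columns; all names and numbers are invented for illustration), might look as follows. The zero-to-one substitution and the mean-plus-2-SD cutoff mirror the procedure described above.

```python
import pandas as pd

# Hypothetical layout: rows = proteins, columns = replicate spectral counts
# for the anti-EGFR and control-IgG immunoprecipitates of one fraction.
sc = pd.DataFrame(
    {"egfr_1": [30, 0, 12], "egfr_2": [28, 1, 9],
     "ctrl_1": [2, 5, 0],   "ctrl_2": [1, 4, 1]},
    index=["hnRNP A3", "nonspecific", "RPL6"],
)

sc = sc.replace(0, 1)                  # missing counts set to 1 (avoids /0)
nsc = sc / sc.sum(axis=0)              # normalized SC within each analysis

egfr = nsc[["egfr_1", "egfr_2"]].mean(axis=1)
ctrl = nsc[["ctrl_1", "ctrl_2"]].mean(axis=1)
fold = egfr / ctrl                     # fold change, anti-EGFR vs control IgG

cutoff = fold.mean() + 2 * fold.std()  # mean ratio + 2 SD, per fraction
hits = fold[fold > cutoff]             # putative EGFR-interacting proteins
print(fold.round(2), "\ncutoff:", round(cutoff, 2), "\nhits:", list(hits.index))
```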
The biological pathways and processes involving the EGFR-interacting proteins were analyzed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (http://www.genome.jp/kegg/pathway.html) and the Database for Annotation, Visualization, and Integrated Discovery (DAVID, version 6.8, https://david.ncifcrf.gov/), respectively 52. The known and predicted associations between the EGFR-interacting partners were analyzed with the STRING online software (version 10.5, https://string-db.org/cgi/input.pl). A combined confidence score of ≥0.9 was used as the cutoff criterion 53.
Plasmids and establishment of stable knockdown subclones
Stable knockdown of hnRNP A3 was achieved using a short hairpin RNA (shRNA) approach. Packaged lentiviruses expressing shRNAs designed to knock down hnRNP A3 (TRCN0000245295 and TRCN0000074511) were obtained from The RNAi Consortium (National RNAi Core Facility, Academia Sinica, Taipei, Taiwan). Cells were infected with lentivirus and cultured in the presence of puromycin for 2 weeks. The stable knockdown CL1-5 cells were designated shA3-1 and shA3-2, while the control was designated sh-V.
Assays for cell viability and anchorage-independent growth
The trypan blue exclusion assay for viable cell counts and the soft agar colony formation assay for anchorage-independent growth were performed as described previously 55,56.
Subcutaneous xenografts
Each xenograft was established by subcutaneously injecting 2 × 10⁶ cells into the right flank of 6-week-old male Balb/c nude mice (n = 8 per group; National Laboratory Animal Center, Taipei, Taiwan). Tumor dimensions were measured twice a week using calipers, and tumor volumes were calculated according to the formula V = 0.5 × W² × L, where W is the smaller diameter and L is the larger diameter. Mice were sacrificed by CO2 asphyxiation on day 15 after inoculation. All animal experiments were performed according to the guidelines of the Animal Care Ethics Commission of Chang Gung Memorial Hospital.
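For clarity, the stated caliper formula can be expressed as a small helper function; the function name and example values are ours, not part of the original protocol.

```python
def tumor_volume(d1_mm: float, d2_mm: float) -> float:
    """Tumor volume in mm^3 from two caliper diameters, V = 0.5 * W^2 * L,
    where W is the smaller and L the larger diameter."""
    w, l = sorted((d1_mm, d2_mm))  # enforce W <= L regardless of input order
    return 0.5 * w ** 2 * l

print(tumor_volume(8.0, 12.0))  # 0.5 * 64 * 12 = 384.0 mm^3
```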
Statistical analysis
The presented results are representative of three independent experiments with similar results. Statistical differences were evaluated using Student's t-test and considered significant at p < 0.05. | 8,122.8 | 2020-04-01T00:00:00.000 | ["Medicine", "Biology"] |
Interpretable machine learning-based clinical prediction model for predicting lymph node metastasis in patients with intrahepatic cholangiocarcinoma
Objective Prediction of lymph node metastasis (LNM) in intrahepatic cholangiocarcinoma (ICC) is critical for the choice of treatment regimen and for prognosis. We aimed to develop and validate machine learning (ML)-based predictive models for LNM in patients with ICC. Methods A total of 345 patients with pathologically confirmed ICC treated from Jan 2007 to Jan 2019 were enrolled. The predictors of LNM were identified by the least absolute shrinkage and selection operator (LASSO) and logistic analysis. The selected variables were used to develop prediction models for LNM with six ML algorithms: logistic regression (LR), gradient boosting machine (GBM), extreme gradient boosting (XGB), random forest (RF), decision tree (DT), and multilayer perceptron (MLP). We applied 10-fold cross-validation as internal validation and calculated the average area under the receiver operating characteristic (ROC) curve to measure the performance of all models. A feature selection approach was applied to identify the importance of the predictors in each model. A heat map was used to investigate the correlations among features. Finally, we established a web calculator using the best-performing model. Results In multivariate logistic regression analysis, alcoholic liver disease (ALD), smoking, boundary, diameter, and white blood cell (WBC) count were identified as independent predictors of LNM in patients with ICC. In internal validation, the average AUC values of the six models ranged from 0.820 to 0.908. The XGB model was identified as the best model, with an average AUC of 0.908. Finally, we established a web calculator based on the XGB model, which is useful for clinicians to calculate the likelihood of LNM. Conclusion The proposed ML-based prediction models performed well in predicting LNM in patients with ICC, with XGB performing best. A web calculator based on the ML algorithm shows promise in assisting clinicians to predict LNM and develop individualized treatment plans.
Intrahepatic cholangiocarcinoma (ICC) is the second most common pathological type of primary liver cancer after hepatocellular carcinoma (HCC) [1], accounting for approximately 10%~20% of all cases [2,3]. The incidence rate of ICC has increased during the last several decades [1,4,5]. ICC is a highly invasive malignant tumor with an extremely poor prognosis [1,2]; the 5-year overall survival rate has been reported in the range of 22-44% [1,6]. During tumor invasion, lymph node metastasis (LNM) is commonly observed, with a reported rate of about 25%~50% [7]. The median survival time of ICC patients without lymph node metastasis is 19.0~37.6 months, whereas patients with LNM survive only 9.0~22.9 months [8]. Surgery is the major treatment for ICC [3], and lymphadenectomy is crucial to accurately stage the disease and guide decisions around adjuvant chemotherapy [9]. However, no international consensus has been reached on the management of the lymph nodes during the operation. Given the essential impact of lymph node metastasis on staging and treatment in ICC patients, identifying the probability of LNM has great clinical significance [10,11].
Radiological imaging is usually the main method for judging lymph node status; however, its limitations cannot be ignored. The sensitivity and specificity of CT diagnosis are 40%~50% and approximately 77%, respectively, and MRI performs worse than CT [12]. Although positron emission tomography (PET/CT) has higher accuracy in the assessment of LNM in patients with ICC [13], its high cost makes it impractical to routinely monitor all patients with this method. In clinical practice, pathology serves as the gold standard for LNM, but this detailed information is unknown until after surgery [10]. Thus, reliable prediction models of LNM based on clinical factors are urgently required. Various prediction models [3,7,[14][15][16][17][18] have been constructed to predict the prognosis of ICC patients. As for the prediction of LNM, although previous studies [7-9, 16, 18-20] have integrated potential risk factors to construct several predictive models, we have not found studies that developed and validated a model to predict LNM using ML algorithms.
Recently, machine learning (ML), an emerging and popular type of artificial intelligence (AI), has attracted increasing attention owing to its ability to predict event occurrence and outcomes. It has been widely applied to health-care data analysis and clinical decision support [21], especially in predicting the possibility of metastatic disease in patients with malignant tumors [22,23].
Herein, we developed and validated ML-based models using clinical characteristics to predict the probability of LNM in ICC patients. The ML algorithm with the strongest predictive power was visualized using a web calculator. This study should be helpful for surgical planning and clinical management.
Patient population
The Ethics Commission of the Fifth Medical Center of PLA General Hospital approved this retrospective study (2019002D). All patients signed informed consent before surgery. Between Jan 2007 and Jan 2019, 345 patients who underwent surgical resection and regional lymphadenectomy for ICC at the Fifth Medical Center of PLA General Hospital were enrolled in this study.
Included patients had ICC proven by histopathology. The exclusion criteria were as follows: (1) history of other malignant tumors; (2) anticancer therapy (radiotherapy or chemotherapy) for liver malignancy before surgery; (3) primary liver cancer of mixed type or metastatic liver tumors; (4) incomplete clinical records.
Feature selection for modeling
The collected clinical features were subjected to dimension reduction and screened by LASSO analysis, which was used to select optimal features with non-zero coefficients as risk factors from the development cohort and to minimize the risk of overfitting [24]. The optimal feature set from backward step-wise regression analysis was entered into univariate and multivariate logistic regression analyses. Clinical variables related to LNM in the univariate regression were further analyzed by multivariate regression, and the variables independently related to LNM with p-values < 0.05 in the multivariate analysis were used to generate predictive models for patients with ICC.
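As a rough illustration of this two-step screening: the study performed the regression analyses in R, so the following scikit-learn sketch on placeholder data is only a hedged analogue, and an L1-penalized logistic regression would be an equally valid way to run the LASSO step.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler

# X: clinical features, y: LNM status (0/1); placeholder data, 345 "patients"
rng = np.random.default_rng(42)
X = rng.normal(size=(345, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=345) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)

# Step 1: LASSO keeps features with non-zero coefficients
lasso = LassoCV(cv=10, random_state=0).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)
print("features retained by LASSO:", selected)

# Step 2: multivariate logistic regression on the retained features;
# in the study, predictors with p < 0.05 here entered the ML models
logit = LogisticRegression(max_iter=1000).fit(Xs[:, selected], y)
odds_ratios = np.exp(logit.coef_.ravel())
print("odds ratios:", np.round(odds_ratios, 2))
```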
Development of the predictive models
Machine learning algorithms outperform traditional regression methods when predicting the outcomes [25].
In this study, we implemented six ML algorithms to develop predictive models: Random Forest (RF), Logistic regression (LR), Extreme gradient boosting (XGB), Gradient boosting machine (GBM), Multilayer perceptron (MLP), and Decision tree (DT) [26,27]. Afterward, we employed 10-fold cross-validation in the model development and calculated the average AUC of the receiver operating characteristic curve to compare the prediction power of the models. Permutation importance analysis was used to assess the importance of the predictors in each ML-based model for predicting LNM. We calculated Pearson's correlation coefficients to assess collinearity among the variables and plotted a correlation heat map. Finally, based on the best-performing model, we designed a web calculator as a predictive tool that is easily and accurately accessible to clinicians, making it possible to quantitatively calculate the individual probability of LNM.
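A minimal sketch of this model comparison step, assuming the xgboost package is available and reusing the placeholder X and y from the previous sketch, could look like this; the hyperparameters are illustrative defaults, not the study's settings.

```python
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier  # assumes the xgboost package is installed

models = {
    "LR":  LogisticRegression(max_iter=1000),
    "GBM": GradientBoostingClassifier(),
    "XGB": XGBClassifier(eval_metric="logloss"),
    "RF":  RandomForestClassifier(n_estimators=300),
    "DT":  DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
}

# 10-fold cross-validation; mean ROC AUC summarizes each model's performance
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():          # X, y as in the previous sketch
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```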
Statistical analysis
Continuous variables were described as the mean ± standard deviation (SD) and compared using Student's t-test, while categorical variables were expressed as frequencies or percentages and compared using the chi-square test. Statistical analysis, including the logistic regression analyses and baseline tables, was performed with R software (version 4.0.5). The machine learning models and the web calculator were built using Python (version 3.8). The statistical significance level was set at p < 0.05.
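For illustration, the two baseline comparisons could be run with SciPy as follows; all values are placeholders, not study data.

```python
import numpy as np
from scipy import stats

# continuous variable (e.g., WBC) compared between LNM and non-LNM groups
wbc_lnm = np.array([7.9, 8.4, 7.2, 9.1, 8.8])   # placeholder values, 10^3/uL
wbc_no  = np.array([5.6, 6.1, 6.8, 5.9, 6.4])
t, p_t = stats.ttest_ind(wbc_lnm, wbc_no)
print(f"t-test: t = {t:.2f}, p = {p_t:.4f}")

# categorical variable (e.g., smoking) as a 2x2 contingency table
table = np.array([[40, 60],    # LNM group: smokers / non-smokers
                  [50, 195]])  # no-LNM group
chi2, p_c, dof, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, p = {p_c:.4f}")
```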
Baseline characteristics
The baseline characteristics of ICC patients with and without LNM are shown in detail in Table 1. According to the inclusion and exclusion criteria, a total of 345 ICC patients were enrolled. The median survival time was 20.49 months in patients without LNM, which was significantly different from that of patients with LNM (median survival time = 7.83 months). Patients with LNM had higher mortality and shorter survival time than those without LNM (p < 0.001), indicating that lymph node metastasis has a strong negative effect on the survival of ICC patients. Patients with a tumor diameter > 5 cm were more susceptible to lymph node metastasis. In addition, smoking, ALD (alcoholic liver disease), white blood cell (WBC) count, boundary, and diameter were all significantly associated with LNM (p < 0.05). However, there were no significant differences in NAFLD (non-alcoholic fatty liver disease), hyperlipidemia, image number, or Mg between the two groups (Table 1, p > 0.05).
Variable importance and Pearson correlation of variables
Permutation importance quantified the variable importance in each ML algorithm (Fig. 3); WBC ranked first in five algorithms, and the importance of the variables in the XGB model was ordered as follows: WBC, boundary, diameter G, smoking, ALD. In Fig. 4, we evaluated the correlations of the variables using Pearson's correlation coefficients and visualized their relationships in a heat map, which showed no significant correlation or collinearity among the variables for LNM, indicating that the variables are independent of each other. WBC, followed by boundary, was the most important feature in XGB; a significant negative correlation was found between them.
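A hedged sketch of these two analyses, assuming a fitted XGBClassifier xgb and held-out arrays X_test and y_test with the five predictors (the variable names and all other details here are ours, not the study's code):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.inspection import permutation_importance

features = ["WBC", "boundary", "diameter", "smoking", "ALD"]

# permutation importance: AUC drop when each predictor is shuffled
result = permutation_importance(xgb, X_test, y_test,
                                n_repeats=30, random_state=0,
                                scoring="roc_auc")
order = result.importances_mean.argsort()[::-1]
for i in order:
    print(f"{features[i]}: {result.importances_mean[i]:.3f}")

# Pearson correlation heat map to check collinearity among predictors
corr = pd.DataFrame(X_test, columns=features).corr(method="pearson")
plt.imshow(corr.values, vmin=-1, vmax=1, cmap="coolwarm")
plt.xticks(range(len(features)), features, rotation=45)
plt.yticks(range(len(features)), features)
plt.colorbar(label="Pearson r")
plt.tight_layout()
plt.show()
```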
Establishment of a web calculator
Based on the XGB model, we built an easy-to-use web calculator that allows clinicians to calculate the individualized likelihood of LNM in ICC patients with a simple input of easily accessible clinical variables (Fig. 5).
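The study does not describe its web stack, so the following Streamlit sketch is purely illustrative of how such a calculator could expose a trained XGB model; the file name, feature encoding, and input ranges are assumptions.

```python
# app.py -- illustrative web calculator; assumes a fitted XGB model saved
# as "xgb_lnm.json" and the (hypothetical) feature order used in training.
import numpy as np
import streamlit as st
from xgboost import XGBClassifier

model = XGBClassifier()
model.load_model("xgb_lnm.json")

st.title("LNM risk calculator for ICC (illustrative)")
wbc      = st.number_input("WBC (10^3/uL)", 1.0, 30.0, 6.0)
boundary = st.selectbox("Distinct tumor boundary", ["yes", "no"]) == "yes"
diameter = st.selectbox("Tumor diameter", ["<5 cm", "5-10 cm", ">10 cm"])
smoking  = st.checkbox("Smoking history")
ald      = st.checkbox("Alcoholic liver disease")

# encode inputs in the same order the model was (hypothetically) trained on
diam_code = ["<5 cm", "5-10 cm", ">10 cm"].index(diameter)
x = np.array([[wbc, int(boundary), diam_code, int(smoking), int(ald)]])

if st.button("Predict"):
    prob = model.predict_proba(x)[0, 1]
    st.write(f"Estimated probability of LNM: {prob:.1%}")
```

Such an app would be launched with `streamlit run app.py` and serves a form whose submitted values are fed directly to the model.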
Discussion
Intrahepatic cholangiocarcinoma originates from the malignant transformation of the bile duct epithelium and is more aggressive than HCC [1], with a 5-year overall survival ranging from 15% to 40% [1,6]. The incidence of LNM in ICC is much higher than that in HCC [29]. Indeed, lymph node status is critical for therapy selection and has been identified as one of the most important prognostic factors [6]. Several studies demonstrated that lymphadenectomy (LND) improved the long-term survival of ICC patients [30,31]; thus, LND should be a routine component of radical resection in ICC [32,33]. Other studies, however, reported that LND did not improve the survival of ICC patients and was associated with surgery-related complications [34,35]. It has been reported that approximately 50% of patients did not undergo lymph node dissection [36], which may result in mis- or under-staging and further compromise their outcomes [32,36]. For ICC patients, accurate prediction of LNM will facilitate clinical treatment decision-making, appropriate diagnosis, and surgical planning. Accordingly, we used a novel type of AI, machine learning, to predict LNM in ICC patients. Using ML algorithms, we developed and validated six models to predict LNM in 345 patients with ICC. We found that the XGB model (average AUC = 0.908) had the greatest predictive performance in internal validation. Unlike some nomogram models [14,19], we further provided a dynamic implementation: based on the XGB model, a web calculator was established to visually estimate the individual probability of LNM, improving the applicability of the model.
In our study, multivariate logistic regression analysis found that ALD, smoking, boundary, diameter, and WBC were independent predictive factors of LNM in patients with ICC (Table 2). The influence of WBC, as an independent risk factor, on prognosis has been reported. Shirono et al. [37] found that the serum WBC level was negatively associated with survival time in ICC patients and further showed that patients with a WBC level above 6800/µL had short survival times. In this study, we demonstrated that WBC was an independent predictor of LNM in ICC patients. We also revealed that the risk of LNM was significantly increased when the serum WBC level exceeded 7180/µL. According to the permutation importance of the variables in Fig. 3, WBC ranks first in five of the prediction models and deserves the most attention when predicting LNM. WBCs include monocytes, lymphocytes, and neutrophils. Monocytes have roles in promoting tumor invasion and angiogenesis [38]. In addition, tumor-associated macrophages, which develop from monocytes, can promote tumor lymphangiogenesis through the secretion of pro-lymphangiogenic factors and transdifferentiation into lymphatic endothelial cells [39]. Subimerb et al. reported that monocyte counts in patients with cholangiocarcinoma are correlated with poor prognosis [40]. On the other hand, lymphocytes play an essential role in the immune response, and low counts may result in an insufficient immunological reaction against tumor progression and metastasis [38]. Previous research has revealed that the lymphocyte-to-monocyte ratio (LMR) is associated with N stage and distant metastasis [41]. Peng et al. reported that the preoperative LMR served as a predictor of early recurrence of cholangiocarcinoma [42]. Meanwhile, a high neutrophil count was associated with poor prognosis and recurrence in ICC [43]. Stefan et al. reported that the neutrophil-to-lymphocyte ratio was independently associated with worse overall survival among ICC patients [44]. In the present study, a high WBC level may reflect an increase in monocytes or neutrophils. The effects of monocytes, lymphocytes, and neutrophils on lymph node metastasis should be further studied.
In addition, we concluded that tumors with a diameter of less than 5 cm were less likely to develop LNM, which is similar to a previous conclusion [20]. Moreover, we performed a more detailed analysis of tumors larger than 5 cm: according to the multivariate logistic regression results, larger tumors (diameter more than 10 cm) carried a higher risk of lymph node metastasis than tumors of 5-10 cm (OR: 5.89 vs. 3.14). Given the biological growth behavior of ICC, a larger tumor volume means that the tumor has had a longer growth period, which further increases the risk of lymph node invasion.
In addition, the present study found that the type of ICC boundary on radiological images was closely related to LNM: a distinct boundary played a protective role in reducing the likelihood of LNM occurrence, and a similar result has been reported previously [20]. Microinvasion may reveal a possible mechanism of tumor aggressiveness toward lymph nodes [45]. As shown in Fig. 4, boundary served as the second most important feature after WBC. The two other independent predictive factors were ALD and smoking. A meta-analysis of eight studies [46] reported that alcohol is a major risk factor for ICC. Drinking alcohol causes alcoholic liver disease, which is strongly associated with increased ICC risk [47], as does smoking [48]. Nonetheless, the relationship between ALD, smoking, and LNM in ICC patients is poorly understood. Interestingly, we found that ALD was a protective factor for LNM. This finding seems to contradict the existing literature identifying ALD as a risk factor for various cancers, including ICC [46,47]. To reconcile this apparent paradox, we propose several hypotheses. First, ALD-induced immunosuppression may alter the host's immune landscape, reducing the attack of immune cells on cancer cells and thus reducing lymphatic tumor spread (Gao & Bataller, 2011). Second, liver pathology associated with ALD, particularly cirrhosis, may adversely alter the hepatic microenvironment, impeding tumor cell migration and invasion due to tissue reorganization and vascular changes [49]. Third, there may be a potential survival selection bias, whereby ALD patients who die prematurely due to liver disease complications do not have sufficient time to develop LNM, leading to an underestimation of the risk factors associated with LNM in long-lived populations. Finally, the chronic inflammatory state associated with ALD may inhibit tumor spread, contrary to the generally accepted view that inflammation promotes cancer progression [50,51]. These considerations highlight the complexity and individual variability of tumor biology and underscore the need for further research to elucidate the mechanisms by which ALD affects ICC metastatic behavior, thereby providing new insights into therapeutic approaches and patient management. Smoking was significantly associated with LNM and was an independent risk factor for LNM. Therefore, in people with a preliminary diagnosis of ICC, we recommend smoking cessation. However, whether quitting smoking can reduce the risk of LNM in patients with a history of smoking needs to be further verified.
To our knowledge, this is the first study to develop and validate predictive models for LNM in ICC using machine learning algorithms. These models differ from the linear models adopted in previous studies and can make fuller use of clinical parameters, improving diagnostic accuracy.
The XGB model, initially proposed by Chen et al. in 2016 [22], had the best prediction performance; it offers high accuracy and fast processing and has been regarded as a reliable algorithm when the sample size is limited [52]. XGB is therefore well suited to our small, single-center sample.
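As a sketch only (the paper does not publish its training pipeline; the synthetic data, feature encoding, and hyperparameters below are assumptions), an XGBoost classifier with the permutation-importance ranking described above could be built as follows:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.inspection import permutation_importance
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
feature_names = ["ALD", "smoking", "boundary", "diameter", "WBC"]
X = rng.normal(size=(345, 5))              # stand-in for the encoded clinical features
y = rng.integers(0, 2, size=345)           # stand-in for the LNM labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Permutation importance, analogous to the variable ranking reported in Fig. 3
imp = permutation_importance(clf, X_te, y_te, n_repeats=30, random_state=42)
for name, score in sorted(zip(feature_names, imp.importances_mean), key=lambda p: -p[1]):
    print(name, round(score, 3))
```

On the real cohort, clf.predict_proba would supply the per-patient LNM probability of the kind reported by the online calculator described below.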
Finally, we built a concise, visual, and dynamic online application based on the XGB model; the real-time risk of LNM can be calculated, and more rational, patient-specific treatment regimens can be tailored from an individual's information. For example, consider an ICC patient with the following clinical characteristics: tumor diameter less than 5 cm, no boundary, non-smoker, ALD, and a serum WBC count of 5,000/µL. After these data were entered into the web calculator, the application combined each factor and automatically calculated the overall probability of LNM; the output was approximately 6.5% (Fig. 5), indicating a low risk of lymph node metastasis. For such a patient, we would not recommend further PET/CT monitoring or lymph node dissection.
Conclusions
To sum up, we constructed a machine learning-based predictive model with good performance for predicting LNM in patients with ICC based on independent factors, including ALD, smoking, boundary, tumor diameter, and WBC level. In addition, we attempted to translate these research outputs into clinical practice by building an online calculator; this real-time predictive tool may aid decision-making and the management of ICC patients.
Limitations
Some limitations of our study cannot be ignored. First, as a retrospective study, it was subject to unavoidable selection bias. Second, the sample was small and drawn from a single institution, and the model lacks validation in an external dataset; external validation and large-scale multicenter studies will be required to confirm our results. Third, the variables included may affect the accuracy of the prediction model, because judging the tumor boundary and measuring the diameter are highly subjective. Finally, LNM was not analyzed separately with respect to neutrophils, lymphocytes, and monocytes. According to previous studies [41], the preoperative lymphocyte/monocyte ratio is associated with metastasis, and studying WBC subsets may improve the accuracy of prediction. | 4,092.2 | 2024-04-19T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Effects of Sound-Pressure Change on the 40 Hz Auditory Steady-State Response and Change-Related Cerebral Response
The auditory steady-state response (ASSR) elicited by a periodic sound stimulus is a neural oscillation recorded by magnetoencephalography (MEG), which is phase-locked to the repeated sound stimuli. This ASSR phase alternates after an abrupt change in the feature of a periodic sound stimulus and returns to its steady-state value. An abrupt change also elicits a MEG component peaking at approximately 100–180 ms (called “Change-N1m”). We investigated whether both the ASSR phase deviation and Change-N1m were affected by the magnitude of change in sound pressure. The ASSR and Change-N1m to 40 Hz click-trains (1000 ms duration, 70 dB), with and without an abrupt change (± 5, ± 10, or ± 15 dB) were recorded in ten healthy subjects. We used the source strength waveforms obtained by a two-dipole model for measurement of the ASSR phase deviation and Change-N1m values (peak amplitude and latency). As the magnitude of change increased, Change-N1m increased in amplitude and decreased in latency. Similarly, ASSR phase deviation depended on the magnitude of sound-pressure change. Thus, we suspect that both Change-N1m and the ASSR phase deviation reflect the sensitivity of the brain’s neural change-detection system.
This cerebral response was based on comparisons between preceding and novel stimuli with some form of sensory memory [12,15]. The magnitude of this cerebral response depended on the degree of the sound-feature change [1,7,13]. Thus, the cerebral response seems to be a type of auditory change-related cerebral response (called "Change-N1", and its magnetic counterpart "Change-N1m"). Change-N1 is also evoked by an abrupt decrease (dec-Change-N1) [8,9,16], as well as an abrupt increase (inc-Change-N1) in sound pressure.
The auditory steady-state response (ASSR) is a neural oscillation that is phase-locked to a repeated sound stimulus. It can be recorded by EEG and MEG. The ASSR becomes stable at approx. 200 ms after onset. In humans, the ASSR can be recorded with maximum amplitudes when stimuli are presented at 40 Hz. The ASSR has been implicated in the functional integrity of the local neural network for auditory processing. Several studies reported that the 40 Hz ASSR phase varies after stimulus changes, such as a noise [17], interaural-phase difference [18,19], frequency [20], and a gap [21].
Ross suggested that ASSR phase deviation might be a type of auditory-change response [19]. However, it remains unclear whether ASSR phase deviation could be affected by the degree of the sound-feature change in the same way as the change-related cerebral response described in previous studies [1,7,13]. In the present study, using click-train sounds, we simultaneously recorded the 40 Hz ASSR and Change-N1m evoked by abrupt change in sound pressure, and we investigated the effect of the magnitude of the sound-pressure change on these two cerebral responses.
Subjects
The experiment was performed with 10 healthy volunteers (2 females and 8 males; mean age 35.6 years; range 22-54 years) with normal hearing. No subject had a history of substance abuse or of neurological, otolaryngologic, or psychiatric disease, and all were medication-free. The study was approved in advance by the Ethics Committee of the National Institute for Physiological Sciences, Okazaki, Japan (18A036). Written consent was obtained from all of the subjects.
Sound Stimuli
The subjects were instructed to watch a silent movie, ignoring the sound stimuli delivered through ear pieces (E-A-Rtone 3A, Aero, Indianapolis, IN, USA). The presented stimulus was a train of 1 ms clicks at 40 Hz. The control stimulus was 1000 ms in length and 70 dB in sound-pressure level. The change stimuli (deviants) were also 1000 ms long; the first 500 ms was identical to the control stimulus and was followed without a blank by a similar 500 ms click-train whose sound pressure differed. Therefore, the sound pressure of the deviants changed abruptly at the midpoint. The sound-pressure changes of the deviants were −5, −10, −15, 5, 10, or 15 dB; thus, there were 6 deviants. The inter-trial interval was 1500 ms. All sound stimuli were presented randomly at an even probability in a session. The time necessary for recording was 22-25 min.
MEG Recording
Magnetic responses were recorded with a helmet-shaped 306-channel MEG system (Vector-view; ELEKTA Neuromag, Helsinki, Finland) comprising 102 triple-sensor elements in a magnetically shielded room. Each sensor element consisted of two orthogonal planar gradiometers and one magnetometer coupled to a multi-superconducting quantum interference device (SQUID), providing 3 independent measurements of the magnetic field per sensor. In this study, we analyzed MEG signals recorded using the 204 planar-type gradiometers. Signals were recorded with a bandpass of 0.1-330 Hz and digitized at 2000 Hz. Trials with noise larger than 3000 fT/cm were excluded from averaging. For each stimulus, at least 120 trials were averaged.
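A minimal sketch of the rejection-and-averaging step described above (the epoch array, units, and synthetic data are placeholders; an actual pipeline would use the vendor software or a package such as MNE-Python):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical epoched gradiometer data: (n_trials, n_channels, n_samples) at 2000 Hz
epochs = rng.normal(scale=500e-15, size=(150, 204, 6000))   # amplitudes in T/cm

threshold = 3000e-15                                # 3000 fT/cm rejection criterion
keep = np.abs(epochs).max(axis=(1, 2)) < threshold  # drop trials exceeding the noise limit

if keep.sum() >= 120:                               # require at least 120 clean trials
    evoked = epochs[keep].mean(axis=0)              # average the accepted trials per channel
```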
Dipole Source Modeling
We analyzed the recorded MEG waveforms using brain electric-source analysis (BESA, version 6.0, GmbH, Munich, Germany). First, we estimated two-equivalent-current-dipole (ECD) models (one in each temporal region) for the cerebral responses. A spherical head model was used for the dipole source analysis. Figure 1 shows the dipole source-modeling procedure. For the 40 Hz ASSR, the bandpass filter was 35-45 Hz. The baseline was the 100 ms period before sound onset. The first 500 ms was identical among all stimuli. To create an ECD model, dipoles were estimated across the time window from 400 to 500 ms in the control condition because the ASSR was stable during this period. The obtained dipole model was applied to all ASSR waveforms in each subject, and the source-strength waveforms were used for analysis. For the Change-N1m, the bandpass filter was 1-35 Hz. The baseline was the 100 ms period before the sound change. We obtained the Change-N1m by subtracting the waveform for the control stimulus from those for the ±15 dB deviant-stimulus conditions. The measurement interval of the Change-N1m peak was approximately 100-180 ms after the change onset in a continuous sound. A 20 ms period around the Change-N1m peak was used for dipole analysis. As in the ASSR analysis, the dipole models estimated for the ±15 dB deviant-stimulus conditions were applied to the remaining increase (+10 and +5 dB) and decrease (−10 and −5 dB) deviant-stimulus conditions in each subject.
Data Analysis
In the ASSR, the peak latency to each click, which was stimulus-locked, showed a transient deviation in the deviant condition (Figure 1C). Although it is well known that ASSR amplitude changes after the change onset, the ASSR phase deviation was used as the index in the present study. We defined phase deviation as the time difference between the peak latencies for each click in the control- and deviant-stimulus conditions. When analyzing Change-N1m values, to minimize problems due to a baseline shift, we determined amplitude as the amplitude between the Change-N1m peak and the polarity-reversed peak at an earlier latency, as in our previous studies [1,8,10]. If a positive peak was not detected, the voltage at 40 ms after the change onset was measured. The head coordinate system was set by the nasion and two reference points anterior to the ear canals. The x-axis was fixed at the preauricular points and defined the right and left directions.
The y-axis was defined as the anterior-posterior direction through the nasion. The z-axis was defined as the superior-inferior direction.
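A sketch of the phase-deviation measure defined above, computed from source-strength waveforms (the synthetic 40 Hz signals and the peak-picking window are assumptions for illustration):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 2000                                                # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
control = np.sin(2 * np.pi * 40 * t)                     # stand-in control-condition ASSR
deviant = np.sin(2 * np.pi * 40 * t + 0.3 * (t > 0.5))   # phase shifts after the change at 500 ms

def peak_times(signal):
    """Latency of the stimulus-locked peak for each 40 Hz cycle (25 ms click interval)."""
    peaks, _ = find_peaks(signal, distance=int(0.02 * fs))
    return t[peaks]

# Phase deviation: per-click peak-latency difference, deviant minus control
n = min(len(peak_times(control)), len(peak_times(deviant)))
phase_deviation = peak_times(deviant)[:n] - peak_times(control)[:n]
```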
The statistical significance of the source location was assessed by discriminant analysis using the x, y, and z coordinates as variables. For analyses of the ASSR phase shift and Change-N1m values, we conducted multivariate analysis of variance (MANOVA) with repeated measures of within-subject factors (increase/decrease in sound pressure, left/right hemisphere, and degrees of change), and the Bonferroni-Dunn test as a post hoc comparison. P-values < 0.05 were considered significant.
Results
Reliable ECDs for dec-Change-N1m were not estimated in two subjects because of a low signal/noise (S/N) ratio. In one subject, the 40 Hz ASSR phase was not stable in the 100 ms duration before the change onset in the control stimulus. These three MEG responses were excluded from further analysis. The ECDs of the ASSR, inc-Change-N1ms, and dec-Change-N1ms were estimated to be located at the auditory cortex on both hemispheres. The location of ECDs responsible for Change-N1m and ASSR are shown in Figure 2. The results of discriminant analysis revealed that the ECD location did not significantly differ between the ASSR and Change-N1m for all conditions (p = 0.35-0.90).
ASSR Phase Deviation
In the deviant-stimulus conditions, peak-latency interval for each click became shorter after the change in sound pressure, and subsequently returned to the steady state. As shown in Figure 3, phase deviation in deviant-stimulus conditions reached a maximum at around 100-200 ms after the sound pressure's change onset. As the magnitude of change increased, deviation was prolonged.
Discussion
Our present findings clarified the effect of an abrupt change in sound pressure on the ASSR phase and the Change-N1m. The ASSR phase alternated after the change onset and subsequently returned to the steady state in all of the deviant conditions. The time course of the ASSR phase elicited by an abrupt change in this study is congruent with previous studies [17,19,21]. Regardless of increase/decrease in sound pressure, both the ASSR phase deviation and the Change-N1m values (peak amplitude and latency) depended on deviance in sound pressure from the preceding sound.
As in earlier MEG studies, ASSR dipoles were estimated to lie in the auditory cortex [17][18][19][20][21]. The dipoles for Change-N1m were estimated in similar areas (Figure 2). Change-N1m is elicited by any type of auditory change, including sound onset [7]. Previous MEG studies reported that all estimated dipoles were located in the lateral part of the transverse gyrus and that their locations did not differ [2,11,13]. A functional magnetic-resonance-imaging study revealed a lack of tonotopic organization in the lateral regions of the auditory cortex [22]. Combined with our MEG studies, we speculate that Change-N1m reflects processes for detecting changes of any type in the surrounding environment rather than the processing of basic sound features. In the present study, dipole locations did not differ between the ASSR and Change-N1m. This finding is incongruent with previous studies showing that ASSR sources were located more medially than N1m. However, comparison of the ECD locations between the ASSR and Change-N1m was not the main purpose of the present study.
An abrupt change in sound feature induces a cerebral response that is based on a comparison between preceding and current stimuli. This auditory-change-detection system was investigated by using Change-N1 responses. As the degree of the sound-pressure change from the baseline increased, Change-N1 amplitude increased and peak latency decreased [1,7]. The present results confirmed this. Similarly, we observed that, as the degree of sound-pressure decrease increased, dec-Change-N1m amplitude increased and its peak latency decreased. Interestingly, ASSR phase deviation showed similar behaviors against the magnitude of the sound-pressure change. One hypothesis for ASSR generation is that it is produced by the superimposition of a midlatency response (MLR) for each click sound [23]. The amplitude and latency of the MLR components are affected by sound intensity [24], but the transient ASSR phase deviation in the present result could not be explained by this superimposition hypothesis. Both ASSR phase deviation and Change-N1m values depended on the degree of change in sound pressure. Thus, we consider that both are transient cerebral responses to the event in the surrounding environment. Considering that the maximum phase deviation of the ASSR and the peak of Change-N1m were detected in the same time range (about 100-200 ms after the change onset), both responses could be produced by a similar neural circuit for auditory-change detection.
Although the stimulation paradigms differ, these findings are congruent with those in mismatch negativity (MMN) studies [25,26]. Thus, regardless of increase or decrease in sound pressure, both Change-N1 and MMN seemed to depend on deviance from preceding sound pressure. Traditional MMN is elicited by comparison between deviant stimuli and a repeated standard stimulus in an oddball paradigm. However, multifeature MMN paradigms have been reported [27][28][29], which has merit in that MMN to different deviant stimuli could be recorded in a single session and in a relatively short time, similar to the present stimulus paradigm. However, such an MMN paradigm with sound-to-sound intervals seems to not be suitable to record ASSR.
It is well known that each hemisphere is optimized for various tasks for processing sensory information, and both hemispheres have complementary roles. Although ASSR phase deviation did not show a significant difference between hemispheres in this study, Change-N1m amplitude was larger in the right hemisphere than in the left, which is congruent with a previous study [2]. In addition, Change-N1m peak latency was shorter in the right hemisphere. The present results confirmed that the right hemisphere plays an important role in detecting auditory environmental changes.
The 40 Hz ASSR is useful for assessing the ability to integrate sensory information with high test-retest reliability [30]. Kwon et al. first reported that subjects with schizophrenia showed diminished ASSR power and a delayed phase of the ASSR oscillation [31]. One of the possible underlying mechanisms is that gamma-amino butyric acid (GABA) inhibitory interneurons modulate the generation and synchronization of neural oscillation. Deficits in the ASSR linked to abnormal GABA transmission in schizophrenia have been reported [32,33]. We suggest that a 40 Hz click-train stimulus with an abrupt change in sound pressure could be useful to simultaneously assess the sensitivity of auditory-change detection in addition to the functional integrity of the local neural network. Change-N1m could also be measured by this method. The stimulus paradigm used in the present study makes it possible to multidimensionally assess cognitive deficits in psychiatric disorders.
Conclusions
We investigated the relationship between two automatic cerebral responses, ASSR and Change-N1m, and effects of the magnitude of sound-pressure change in a train of 40 Hz click sounds. The results show that both the ASSR phase deviation and Change-N1m reflect the automatic cerebral process of change detection. However, there are some limitations in the present study. Although we used dipoles in the auditory cortex for both ASSR and Change-N1m, it is known that subcortical or frontal regions contribute to ASSR and N1 as well, which might affect the results. Furthermore, in order to validate the present results, we need to adopt other methods, such as time-frequency analysis that addresses intertrial power and neural-phase-locking underlying evoked responses [34], as well as ASSR [17]. | 3,308.2 | 2019-08-01T00:00:00.000 | [
"Biology"
] |
Selective Inhibition of PKCβ2 Restores Ischemic Postconditioning-Mediated Cardioprotection by Modulating Autophagy in Diabetic Rats.
Diabetic hearts are more susceptible to myocardial ischemia/reperfusion (I/R) injury and less sensitive to ischemic postconditioning (IPostC), but the underlying mechanisms remain unclear. PKCβ2 is preferentially overactivated in diabetic myocardium, in which autophagy status is abnormal. This study determined whether hyperglycemia-induced PKCβ2 activation resulted in autophagy abnormality and compromised IPostC cardioprotection in diabetes. We found that diabetic rats showed higher cardiac PKCβ2 activation and lower autophagy than control at baseline. However, myocardial I/R further increased PKCβ2 activation and promoted autophagy status in diabetic rats. IPostC significantly attenuated postischemic infarct size and CK-MB, accompanied with decreased PKCβ2 activation and autophagy in control but not in diabetic rats. Pretreatment with CGP53353, a selective inhibitor of PKCβ2, attenuated myocardial I/R-induced infarction and autophagy and restored IPostC-mediated cardioprotection in diabetes. Similarly, CGP53353 could restore hypoxic postconditioning (HPostC) protection against hypoxia reoxygenation- (HR-) induced injury evidenced by decreased LDH release and JC-1 monomeric cells and increased cell viability. These beneficial effects of CGP53353 were reversed by autophagy inducer rapamycin, but could be mimicked by autophagy inhibitor 3-MA. It is concluded that selective inhibition of PKCβ2 could attenuate myocardial I/R injury and restore IPostC-mediated cardioprotection possibly through modulating autophagy in diabetes.
Introduction
Ischemic heart disease (IHD) is one of the most common perioperative complications, with high mortality and disability, particularly in patients with diabetes [1]. The most effective treatment for IHD is to restore blood perfusion of the ischemic myocardium, but paradoxically this may cause lethal heart injury, termed "myocardial ischemia-reperfusion (I/R) injury" [2]. Ischemic postconditioning (IPostC), achieved by transient brief interruptions of reperfusion with ischemic episodes, has been considered an effective maneuver to combat lethal reperfusion injury [3]. However, in the diabetic condition, the heart is more vulnerable to myocardial I/R injury and less responsive, or unresponsive, to IPostC [4], and the underlying mechanisms are still unclear.
Numerous studies suggest that activation of protein kinase C (PKC), a family of serine/threonine kinases with important physiological functions, is potentially responsible for the exacerbation of myocardial I/R injury in diabetes [5,6]. However, the role of PKC in myocardial I/R injury is complicated by its multiple isoforms, each with a distinct cellular distribution and at times opposing functions [7][8][9]. Of the various PKC isoforms, hyperglycemia-induced activation of PKCβ2 is most frequently implicated in the cardiovascular complications of diabetes [10,11]. Several studies have shown that PKCβ activation negatively modulates mitochondrial energy status and autophagy [12,13]. Autophagy is an important cellular self-protection mechanism that eliminates misfolded proteins and damaged organelles. However, autophagy is a double-edged sword: excessive or insufficient autophagy may have harmful or damaging effects [14]. Diabetes exhibits abnormal autophagy [15], and myocardial I/R injury exacerbates this dysfunctional autophagy activity [16,17]. Our previous study showed that selective inhibition of PKCβ2 ameliorates myocardial I/R injury in diabetic rats [6]. Moreover, regulation of autophagy improves cardiac function [18,19], ameliorates myocardial I/R injury, and restores IPostC cardioprotection in diabetes [20]. Furthermore, under LPS-induced oxidative stress and cellular injury in cultured cardiomyocytes, overactivation of PKCβ2 was associated with autophagy activation [21]. However, the role of PKCβ2 in autophagy status, and in particular the impact of their potential interaction on the loss of IPostC cardioprotection in diabetes, has not been elucidated. In the present study, we hypothesized that hyperglycemia-induced PKCβ2 activation involves autophagy abnormality. Our data suggest that selective inhibition of PKCβ2 attenuates myocardial I/R injury and restores IPostC cardioprotection by inhibiting autophagy in diabetes.
Experimental Animals and Induction of Type 1 Diabetes.
This study conformed to the regulations of the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health (NIH Publication No. 80-23) and was approved by the Institutional Animal Care and Use Committee of Wuhan University. Male Sprague-Dawley rats (230 ± 10 g, 7-8 weeks) were purchased from Beijing Vital River Laboratory Animal Technology Co. Ltd. All rats were housed in the Animal Centre of Renmin Hospital of Wuhan University with a 12-hour (h) light-dark cycle in a standard environment. After 3 days of adaptive feeding, the rats were fasted for 12 hours and then given a single intraperitoneal injection of streptozotocin (STZ) (60 mg/kg, Sigma-Aldrich, St. Louis, MO, USA) dissolved in citrate buffer to induce diabetes as we described previously [22]. The control rats were given a single intraperitoneal injection of the same volume of citrate buffer. Seventy-two hours after the injection, random blood glucose was measured, and rats with a glucose level >16.7 mmol/L were considered diabetic and used for the study.
2.2. Myocardial I/R Injury Models. The fourth intercostal space on the left side of the rat was opened to expose the heart. A white-lined blood vessel, the left anterior descending coronary artery (LAD), is visible at the lower edge of the left atrial appendage, and a 6-0 surgical needle was used for thread ligation. Blanching of the ventricle, immediate ST-segment elevation, and tall T waves on the electrocardiogram indicated myocardial ischemia. The I/R injury model consisted of 30 min of ischemia followed by 120 min of reperfusion. The sham rats underwent the same surgical procedures without ligation. IPostC was induced by cycles of 10 s of reperfusion and ischemia after the 30 min of ischemia, as we described previously [23]. CGP53353 [24] was dissolved in DMSO as a stock solution and then diluted with normal saline (for rats) or DMEM (for cell culture) at a dilution usually greater than 1:10,000 to obtain the working solution before injection. CGP53353 (10 μg/kg) was infused intravenously 10 minutes before ligation of the LAD.
Determination of Myocardial Infarct Size
At the end of 120 min of reperfusion, myocardial infarct size was measured by staining with 0.25% Evans Blue dye (Sigma-Aldrich) and 1% 2,3,5-triphenyltetrazolium chloride (TTC, Sigma-Aldrich) as we described previously [25]. The area unstained by Evans Blue dye was identified as the area at risk (AAR), and the area unstained by TTC was considered infarcted tissue. Myocardial infarct size was expressed as a percentage of the AAR (% of AAR).
Measurement of Creatine Kinase-MB (CK-MB) and Lactate Dehydrogenase (LDH)
Serum CK-MB levels are a specific indicator of the extent of myocardial damage. After 120 min of reperfusion, arterial blood was collected, and serum CK-MB was measured using a commercial ELISA assay kit according to the manufacturer's instructions (Jiancheng, Nanjing, China).
2.6. Cell Culture. The embryonic rat cardiomyocyte-derived H9C2 cell line was obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). Cells were cultured in DMEM (Gibco Laboratories) containing 10% (v/v) FBS (Gibco Laboratories) and 1% antibiotics in a humidified atmosphere of 5% CO2 and 95% air at 37°C, as we previously reported [21]. The cells were randomly divided into seven groups: (1) high glucose (30 mM, HG); (2) HG+HR; (3) HG+HR+CGP53353 (1 μM); (4) HG+HPostC; (5) HG+HPostC+CGP53353 (1 μM); (6) HG+HPostC+3-MA (10 mM); and (7) HG+HPostC+CGP53353+rapamycin (100 nM). HG was induced by adding 50% glucose to a final concentration of 30 mM for 48 h. All treatments were administered 1 h before HR. After HG exposure for 48 h, the HR procedure was performed. For HR exposure, cells were maintained under anoxic conditions in chambers gassed with a mixture of 94% N2, 5% CO2, and 1% O2 at 37°C for 4 h, followed by reoxygenation for 2 h. HPostC was applied after the 4 h of hypoxia and before the 2 h of reoxygenation and consisted of 3 cycles of 5 min of reoxygenation and hypoxia. 2.7. Determination of Mitochondrial Membrane Potential (MMP). MMP was measured by JC-1 staining (Beyotime, China) according to the manufacturer's protocol. H9C2 cells were first cultured in a 6-well plate with the corresponding treatment. The medium was then removed, and the cells were washed 3 times with PBS. The cells were then incubated with 1 ml of 10 μg/ml JC-1 dye for 25 min at 37°C, subsequently washed three times with JC-1 buffer, and finally scanned by epifluorescence microscopy. At a high MMP, JC-1 aggregates in the mitochondrial matrix to form a polymer (J-aggregates), which produces red fluorescence; conversely, at a low MMP, JC-1 cannot aggregate in the mitochondrial matrix, remains a monomer, and produces green fluorescence.
Western Blot
Heart tissue and cultured cells were homogenized in RIPA buffer containing phosphatase inhibitor. The homogenates were centrifuged to collect the supernatant as the total protein preparation, and the protein concentration of each sample was measured using a BCA kit (Beyotime). Equal amounts of protein were separated by SDS-PAGE and transferred to PVDF membranes for immunoblot analysis as described previously [21]. Primary antibodies against GAPDH (1:1000 dilution, Cell Signaling Technology, USA), phospho-PKCβ2 (Ser660) (1:1000 dilution, Cell Signaling Technology, USA), Beclin-1 (1:1000 dilution, Abcam, USA), P62 (1:1000 dilution, Abcam, USA), and LC3 (1:1000 dilution, Cell Signaling Technology, USA) were used in the present study. GAPDH was used as a loading control, and all results are presented as percent change relative to the control measurement.
2.9. Statistical Analysis. Data are expressed as mean ± SD. All statistical analyses were performed by one-way or two-way analysis of variance (ANOVA) using SPSS 24.0 software. A value of P < 0.05 was considered statistically significant.
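For readers reproducing this style of analysis outside SPSS, an equivalent one-way ANOVA with a post hoc Tukey test could be run as follows (the group labels and values are hypothetical):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical infarct-size measurements (% of AAR) for three treatment groups
ir = np.array([52.1, 48.7, 55.3, 50.2, 49.9, 53.4])
ipostc = np.array([35.2, 38.9, 33.1, 36.7, 34.5, 37.8])
ipostc_cgp = np.array([24.3, 27.1, 22.8, 25.6, 26.4, 23.9])

print(f_oneway(ir, ipostc, ipostc_cgp))          # overall one-way ANOVA

values = np.concatenate([ir, ipostc, ipostc_cgp])
groups = ["I/R"] * 6 + ["IPostC"] * 6 + ["IPostC+CGP"] * 6
print(pairwise_tukeyhsd(values, groups))         # pairwise post hoc comparisons
```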
Results
3.1. IPostC Reduced Myocardial I/R Injury in Control but Not in Diabetic Rats. In the present study, we first investigated the effects of IPostC on myocardial I/R injury in STZ-induced diabetic rats and age-matched control rats. Eight weeks after STZ induction, the diabetic rats showed higher water intake, food consumption, and plasma glucose, accompanied by lower body weight, than control rats (Table 1). When the rats underwent myocardial I/R, the infarct size (% AAR) and plasma CK-MB in diabetic rats were larger than those in the corresponding control rats (Figures 1(a) and 1(b)). IPostC significantly reduced the infarct size and CK-MB in age-matched control rats but failed to elicit protective effects in diabetic rats (Figures 1(a) and 1(b)), indicating that IPostC-mediated cardioprotection was compromised by diabetes.
IPostC Reduced Myocardial I/R-Induced PKCβ2 Activation and Autophagy in Control but Not in Diabetic Rats
We previously demonstrated that hyperglycemia-induced PKCβ2 activation [6] and downregulated autophagy [20] are involved in the decreased tolerance to myocardial I/R injury in diabetes. In the present study, we determined whether IPostC could affect PKCβ2 activation and autophagy status in diabetic rats and age-matched control rats. As shown in Figure 2(a), diabetes significantly increased PKCβ2 phosphorylation at Ser660 without influencing total PKCβ2 expression (indicating PKCβ2 activation), accompanied by a downregulated autophagy status detected as a decreased LC-3II/LC-3I ratio (Figure 2(b)). Myocardial I/R significantly increased PKCβ2 phosphorylation and the LC-3II/LC-3I ratio compared with the corresponding sham group in both control and diabetic rats, and these I/R-induced alterations were attenuated by IPostC in control but not in diabetic rats (Figure 2).
Selective Inhibition of PKCβ2 with CGP53353 Restores IPostC-Mediated Cardioprotection in Diabetic Rats
Based on the above findings, we next investigated the treatment effects of CGP53353 (a selective inhibitor of PKCβ2) on IPostC in diabetic rats. The CGP53353 vehicle had no effect on infarct size or CK-MB (data not shown). As shown in Figure 3(a), CGP53353 effectively inhibited PKCβ2 phosphorylation in both the I/R and IPostC groups, although there was no significant difference in PKCβ2 phosphorylation between them. As shown in Figures 3(b) and 3(c), IPostC had no significant effect on postischemic infarct size or plasma CK-MB in diabetic rats. However, with CGP53353 treatment, both infarct size and CK-MB in diabetic rats were significantly reduced in the I/R group and were further decreased by IPostC. These results suggest that CGP53353 treatment can restore IPostC-mediated cardioprotection in diabetic rats and confer synergistic or additive cardioprotection.
Effects of CGP53353 on Myocardial Autophagy Status in Diabetic Rats
We then evaluated the treatment effects of CGP53353 on autophagy status in diabetic rats subjected to myocardial I/R and IPostC. Myocardial I/R significantly increased the LC3II/LC3I ratio (Figure 4(a)) and Beclin-1 expression (Figure 4(b)), accompanied by decreased P62 expression (Figure 4(c)). IPostC alone did not affect these alterations. By contrast, all of these I/R-induced alterations were significantly attenuated by PKCβ2 inhibition with CGP53353 and were further attenuated by the combination of CGP53353 and IPostC (Figure 4). All results are expressed as mean ± SD, n = 8. Differences in general characteristics were determined by one-way analysis of variance (ANOVA) followed by Tukey's test. **P < 0.01 vs. the control group.
Selective Inhibition of PKCβ2 with CGP53353 Restored HPostC Protection against HR-Induced Cell Injury
We confirmed the treatment effects of CGP53353 on HPostC in H9C2 cells under HG conditions. As shown in Figures 5(a) and 5(b), HR stimulation significantly decreased cell viability and increased LDH release compared with the normoxic group, accompanied by increased activation of PKCβ2 (Figure 5(c)), and these alterations were not affected by HPostC. Selective inhibition of PKCβ2 with CGP53353 significantly attenuated the HR-induced decrease in cell viability and increase in LDH release, and these effects were further attenuated by the combination with HPostC (Figures 5(a) and 5(b)).
Role of Autophagy in CGP53353-Restored HPostC Protection in H9C2 Cells
To further investigate the role of autophagy in the beneficial effects of CGP53353, we also treated the cells with the autophagy inducer rapamycin and the inhibitor 3-MA. As shown in Figures 6(a) and 6(b), 3-MA had effects similar to CGP53353 in restoring HPostC protection from cell injury, detected as increased cell viability and decreased LDH release. However, after the addition of the autophagy inducer rapamycin, the beneficial effects of CGP53353 were abolished (Figures 6(a) and 6(b)). We then determined the treatment effects of CGP53353 on LC3-II and LC3-I expression. As shown in Figure 6(c), CGP53353 significantly attenuated the HR-induced increase in the LC3II/LC3I ratio, which was further attenuated by the addition of HPostC. Similar effects were produced by the autophagy inhibitor 3-MA, whereas the autophagy inducer rapamycin abolished the effects of CGP53353 and HPostC on the HR-induced LC3II/LC3I ratio.
Effects of CGP53353 on Mitochondrial Membrane Potential (MMP) in H9C2 Cells Exposed to HR and HPostC
We also determined the loss of MMP by detecting JC-1 monomeric cells to evaluate mitochondrial damage in H9C2 cells exposed to HG and HR. As shown in Figure 7, HR insult significantly increased the percentage of JC-1 monomeric cells, and this was not affected by HPostC alone. CGP53353 treatment significantly reduced the HR-induced percentage of JC-1 monomeric cells, which was further reduced by the combined use of HPostC. Similar effects were produced by the autophagy inhibitor 3-MA. In contrast, the autophagy inducer rapamycin abolished the effects of CGP53353 and HPostC on MMP.
Discussion
In the present study, we have demonstrated that the cardioprotection of IPostC is compromised in diabetes, which is associated with the excessive activation of PKCβ2 induced by hyperglycemia. Selective inhibition of PKCβ2 restored IPostC cardioprotection by modulating autophagy status. To our knowledge, this is the first study to investigate the relative roles of PKCβ2 and autophagy in IPostC in diabetes. IPostC is well demonstrated to be effective against myocardial I/R injury in nondiabetic conditions [26,27]. IPostC is achieved by transient brief interruptions of reperfusion with ischemic episodes, so it can be used for the treatment of unpredictable myocardial ischemia and has broad clinical application prospects, such as in cardiac interventional surgery [28][29][30]. However, the presence of diabetes or hyperglycemia renders the heart more resistant to the infarct size-limiting effects of IPostC [31][32][33]. In the present study, we found that IPostC significantly reduced myocardial I/R injury in age-matched control rats but not in diabetic rats.
Similar effects of HPostC on HR injury were found in H9C2 cells. These results indicate that diabetes may blunt IPostC cardioprotection.
It is well demonstrated that the activation of PKCβ plays a critical role in myocardial I/R injury in nondiabetic rodents [34]. In the diabetic condition, PKCβ is excessively activated by hyperglycemia in the vascular complications of diabetes [35]. We further found that PKCβ2, but not PKCβ1, is preferentially overactivated in the myocardium [10], which results in the increased vulnerability to myocardial I/R injury in diabetes [6]. Our results showed that diabetes significantly increased myocardial PKCβ2 activation, which was further increased by myocardial I/R insult. We speculated that this PKCβ2 activation was also attributable to the loss of IPostC cardioprotection in diabetes. Indeed, after treatment with CGP53353, a selective inhibitor of PKCβ2, myocardial I/R-induced postischemic infarct size and CK-MB were significantly attenuated and were further attenuated by IPostC. Similarly, HPostC significantly reduced HG- and HR-induced LDH release and JC-1 monomeric cells and increased cell viability in the presence of CGP53353. Thus, the loss of IPostC cardioprotection in diabetes may be explained in part by PKCβ2 activation, and selective inhibition of PKCβ2 may be a useful therapy to preserve IPostC cardioprotection.
Autophagy occurs at a basal level and is critical for maintaining cellular homeostasis by removing damaged organelles and misfolded proteins [36]. Autophagy is believed to play a "double-edged sword" role in the development of cardiovascular diseases, and excessive or low levels of autophagy may have a negative impact [14]. Diabetes exhibits abnormal autophagy [15], and myocardial I/R injury exacerbates this dysfunctional autophagy activity [16,17]. During autophagy, misfolded proteins and damaged organelles are captured in double-membraned vesicles (autophagosomes) and degraded through lysosomal fusion [37]. P62 delivers ubiquitinated cargoes for autophagic degradation, and activating autophagy reduces P62 expression [38]. Additionally, Beclin-1 and the LC3II/LC3I ratio are established autophagosomal markers in mammals [39]. The current study confirmed the impaired autophagy function in STZ-induced type-1 diabetes, and myocardial I/R excessively induced autophagy, as indicated by an increased LC3II/LC3I ratio and Beclin-1 expression with decreased P62. It has been shown that PKCβ negatively modulates mitochondrial energy status and autophagy, and inhibition of PKCβ with a pharmacological inhibitor increases autophagy in both in vitro and in vivo studies [12,13]. In the present study, we found that selective inhibition of PKCβ2 with CGP53353 attenuated the autophagy dysfunction induced by diabetes and myocardial I/R and restored IPostC cardioprotection in diabetic rats. Similar effects of CGP53353 on HPostC were mimicked by modulating autophagy with the autophagy inhibitor 3-MA in H9C2 cells exposed to HG and HPostC, whereas the beneficial effects of CGP53353 were reversed by the autophagy inducer rapamycin.
Our results indicate that selective inhibition of PKCβ2 activation may modulate autophagy to a moderate level to exert beneficial effects.
In summary, the results of the current study demonstrate that the compromised cardioprotection of IPostC is associated with excessive PKCβ2 activation, which contributes to autophagy dysfunction. Selective inhibition of PKCβ2 restores IPostC cardioprotection, possibly through modulating autophagy status. Therefore, PKCβ2 blockade may be a useful approach for attenuating myocardial I/R injury and preserving the effectiveness of IPostC.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Authors' Contributions
Yafeng Wang, Lu Zhou, and Fengnan Huang carried out the experiment, Wating Su and Yuan Zhang were in charge of data analysis, Shaoqing Lei and Wating Su conceived the study, Yafeng Wang drafted the manuscript, Zhong-yuan Xia and Zhengyuan Xia helped with data interpretation, and Shaoqing Lei revised the manuscript and approved the submission. Yafeng Wang and Lu Zhou contributed equally to this work. | 4,554.2 | 2020-04-03T00:00:00.000 | [
"Biology",
"Medicine",
"Chemistry"
] |
A Simplified Electrical-Based Model for Electroporation Dynamics
Calculating pulsed electric field (PEF)-induced pore formation using the Smoluchowski equation (SME) can be computationally expensive, even when reduced to the asymptotic SME (ASME). These issues are exacerbated when incorporating additional physical phenomena, such as membrane temperature gradients or shock waves, or incorporating pore formation into multiscale models starting from an external stimulus at the organism level. This study presents a rapid method for calculating the membrane-level effects of PEFs by incorporating a semi-empirical equation for transmembrane potential (TMP)-dependent membrane conductivity into a single-shell model for calculating the TMP. The TMP calculated using this approach and the ASME agreed well for a range of electric field strengths for various PEF durations and AC frequencies below and above the threshold for pore formation. These results demonstrate the feasibility of rapidly predicting TMP, which is easily measured, during pore formation strictly from electrical properties and dynamics without needing to explicitly calculate pore dynamics, as required when using the SME and ASME.
Electroporation, which occurs when the applied PEF sufficiently increases the transmembrane potential (TMP) to allow pore formation [3], [74], [75], is crucial for these applications. Upon removing the electrical stimulus, the resulting pores may either reseal (reversible electroporation), which may facilitate molecular or drug transport into the cells to modify cellular function, or continue to grow (irreversible electroporation), which leads to cell rupture and death, often through necrosis [75]. Generally, pulse durations shorter than the charging time of the membrane (typically hundreds of nanoseconds to 1 µs) do not sufficiently charge the membrane to induce conventional electroporation [76], [77]. Instead, they require stronger applied electric fields and often generate a larger number of smaller pores [3], [78], [79], [80], [81], which may still be sufficiently large to allow ions into the cell for applications such as blocking action potentials across nerves [82] or permitting calcium transport for platelet activation [31]. Moreover, even applying trains of such nanosecond PEFs (nsPEFs) only increases the number of pores, not their size [83]. While their duration may prevent fully charging the cell membrane, nsPEFs can charge intracellular structures to induce changes in cellular function, such as calcium release or apoptosis [72].
To properly assess the PEF parameters necessary for these applications, robust theoretical models have been developed to assess pore dynamics [3], [74], [84]. The most common approach is the Smoluchowski equation (SME) [74], which gives the probability density function n(r, t) denoting the number of pores per unit area with radii between r and r + dr at an instantaneous time t [85]. While the SME elucidates the dynamics of pore number and size following PEFs, it requires solving a partial differential equation, given by [86] ∂n/∂t = D ∂/∂r [ ∂n/∂r + (n/kT) ∂ϕ(r)/∂r ] + S(r), where D is the diffusion constant of pores, ϕ(r) is the pore energy, k is the Boltzmann constant, T is the absolute temperature, and S(r) is the source term that represents the creation and destruction of pores [86]. Neu and Krassowska [86] outlined several challenges with solving this equation, including its reliance on many parameters that cannot be measured directly, the difficulty of relating these constants to measured values, and the computational challenges introduced by the exponential terms in the pore creation. They performed an asymptotic analysis to reduce the SME to an ordinary differential equation with fewer parameters and improved computational efficiency. This is especially valuable when trying to link pore formation to other physical phenomena.
For instance, Joshi et al. [69] developed a self-consistent theory for action potential behavior in a neuron exposed to a PEF by incorporating the shunt conductance resulting from the pore dynamics determined using this asymptotic Smoluchowski equation (ASME) into the standard cable model for a neuron [87]. The ASME also allows assessing electroporation dynamics for more realistic multiscale systems using commercial software (e.g., COMSOL Multiphysics) by reducing the computational expense that would be required by the full SME [88]. While reducing the PDE to an ODE makes the ASME much less computationally expensive than the SME, the ASME still requires tracking pore number and pore size, which becomes prohibitive for shorter pulse durations due to the large number of small pores generated [80], [81]. Increasingly, PEF-induced bioeffects require accounting for multiphysics and multiscale phenomena. For instance, Goldberg et al. coupled Poisson's equation, the Nernst-Planck equations for ion motion, membrane deformation, and the SME to assess membrane permeabilization [89]. Multiphysics modeling coupling the SME with the Nernst-Planck model has also examined bioeffect cancellation due to bipolar PEFs [90]. More realistic multiphysics models have included three-dimensional models using the ASME to probe electroporation in irregularly shaped cells [91], arrays of multiple cells [92], and cells that undergo PEF-induced deformation [93], [94]. The computational expense increases with the incorporation of additional physical phenomena and increases further when phenomena are incorporated across multiple length scales. One example is assessing skin electroporation by modeling the electric field effects and diffusion from the skin down to an individual membrane using molecular dynamics [95]. Weinert et al. developed dynamic, multiphysics simulations of electroporation of various tissues in rabbits, although they did not assess the dynamics of electroporation at the pore level [96]. In addition to PEFs, RF and microwave radiation may also penetrate the body and induce multiphysics phenomena that may influence TMP and cellular function [97].
This study seeks to avoid this computational difficulty by developing a strictly electrical-based approach to modeling electroporation dynamics without directly calculating the pore dynamics.We accomplish this by combining a semi-empirical relationship between cell membrane conductivity and TMP [98] with a system of equations used to calculate TMP [99] to calculate TMP self-consistently during various electromagnetic waveforms (both PEFs and AC fields) without needing to directly calculate pore density.We achieve excellent agreement between this TMP-based approach and the ASME with minimal computational expense.This approach would allow for rapid coupling with other physical phenomena, such as membrane temperature gradients [100], or coupling with multiphysics software, such as Sim4Life or COMSOL Multiphysics [88], to assess multiscale effects from the organism to membrane level.
Section II presents the derivation of the simplified electrical-based electroporation model.We compare this simplified model to the ASME for various square PEFs, sinusoidal fields, and exponential fields in Section III.We discuss the differences in the models and remark on future analyses using the simple model in Section IV.Section V provides concluding remarks.
II. MODEL DERIVATION
In ASME models [86], the TMP is calculated explicitly at each timestep using a finite difference approach based on the conservation of energy, current density, and electric flux where applicable. Throughout this study, we follow the implementation of the ASME from Talele et al. [101]. Instead of the full SME, the ASME solves dN/dt = ψ exp[(Φ_m/V_ep)²] (1 − N/N_eq) (2), where ψ is the pore creation rate coefficient, Φ_m is the TMP evaluated at time t, V_ep is the characteristic voltage of electroporation, and N_eq is the equilibrium pore density, which is a function of Φ_m [101]. Using the TMP, (2) can be solved directly by linearization to provide the net change in pore density, from which individual pores are created. The radius r_q of a given pore is updated at each timestep by linearizing its evolution equation, in which D is the diffusion coefficient for pores and w_m is the lipid bilayer energy [101].
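A minimal sketch of the pore-density update at the core of the ASME, using forward Euler; the equilibrium-density form and all parameter values below are assumptions drawn from the standard asymptotic-model literature rather than this paper's Table 1:

```python
import numpy as np

# Assumed constants for a standard ASME-style pore-density model (illustrative only)
PSI = 1e9        # pore creation rate coefficient (1/(m^2 s))
V_EP = 0.258     # characteristic electroporation voltage (V)
N0 = 1.5e9       # equilibrium pore density at zero TMP (1/m^2)
Q = 2.46         # constant in the assumed N_eq(TMP) expression

def n_eq(vm):
    """Equilibrium pore density as a function of the TMP (assumed standard form)."""
    return N0 * np.exp(Q * (vm / V_EP) ** 2)

def step_pore_density(n, vm, dt):
    """Forward-Euler update of dN/dt = psi * exp((Vm/Vep)^2) * (1 - N/N_eq(Vm))."""
    dndt = PSI * np.exp((vm / V_EP) ** 2) * (1.0 - n / n_eq(vm))
    return n + dt * dndt
```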
Pores can drastically change the electrical properties of the membrane by changing the amount of water in the membrane (which changes the membrane permittivity) and by facilitating the passage of ions across the membrane (which alters the membrane conductivity). A useful measure of the overall degree of relative poration of a membrane is the fractional pore area (FPA). For q pores, the FPA is given by FPA = (Σ_q π r_q²)/A_m (4), where A_m is the area of the membrane, or of a segment of the membrane if radial effects are considered. Equation (4) can be used to update σ_m at each timestep by treating the fractional pore area as the weight in a weighted average of the conductivities of the suspension σ_s and the sealed plasma membrane σ_pm, σ_m = FPA σ_s + (1 − FPA) σ_pm (5). These models generally track and evolve pores on an individual basis. Thus, it is necessary to individually update and manage pore distributions containing approximately 10^3-10^8 pores at each of the 10^3-10^4 discrete time steps. The management of pores in these models generally consumes the majority of the model runtime, especially when many small pores are created, such as for nsPEFs with high electric field strengths. Several computational optimizations can be used, such as managing pores that form nearby in time and space as groups [101], which can reduce the number of pore variables that need to be managed. However, the choice of the size of these groups is somewhat arbitrary, making it difficult to select an optimal "group size" that effectively reduces the number of operations needed to maintain pores without significantly altering pore evolution dynamics. Additionally, pore management can be sped up by using process- and/or instruction-level parallelism, which can reduce the amount of time required to manage pores at the cost of additional model complexity. While both methods can dramatically speed up ASME models, neither can eliminate the computational load of managing pores individually.
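Continuing the sketch, the fractional pore area of (4) and the conductivity update of (5) reduce to a few lines (variable names are illustrative):

```python
import numpy as np

def fractional_pore_area(radii, membrane_area):
    """FPA: total pore area (sum of pi * r^2) divided by the membrane (segment) area, eq. (4)."""
    return np.sum(np.pi * np.asarray(radii) ** 2) / membrane_area

def membrane_conductivity(fpa, sigma_suspension, sigma_sealed):
    """FPA-weighted average of suspension and sealed-membrane conductivities, eq. (5)."""
    return fpa * sigma_suspension + (1.0 - fpa) * sigma_sealed
```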
While the ASME is more computationally efficient than the SME, which provides the highest fidelity for pore dynamics, it is still computationally expensive for parametric analyses, particularly when pore densities increase dramatically, as occurs for PEFs with short durations and strong electric fields. Moreover, from a practical perspective, experimentalists often do not measure pore dynamics directly, but instead use exclusion of dyes of various sizes to determine the pore size distribution [102]. The most direct measurement that is often made is the TMP [83], [103]. While the TMP may be extracted from either the SME or the ASME, particularly when it is obtained self-consistently by considering the membrane conductivity as a function of pore density [104], which is in turn a function of the applied PEF and the resulting induced TMP, this still requires directly calculating the pore density using either the ASME or the SME. Here, we derive a simple, computationally inexpensive electroporation model based strictly on electrical behavior. Rather than calculating pore dynamics directly, we incorporate a semi-empirical relationship between membrane conductivity and TMP [98] into an equation for the TMP based on solving Laplace's equation for a single-shell cell (i.e., a cell with no nucleus). From first principles, Kotnik et al. [99] derived the TMP as Φ_m(t) = F(t) E(t) R cos θ (6), where E(t) is the applied electric field, R is the cell radius, θ is the polar angle, and F(t) is a cell-property-dependent parameter defined by the ratio of admittivity-operator expressions in (7), which involves the membrane thickness d, with the admittivity operators given by Λ_x = λ_x + ε_x d/dt (8),
where λ_x and ε_x are the conductivity and permittivity of the extracellular medium (x = o), cytoplasm (x = i), and membrane (x = m). While we use a simple single-shell model here for proof of principle, other more complicated models of the TMP may ultimately be considered [105], [106], [107], [108]. Electroporation is incorporated into the membrane conductivity λ_m(t) through the semi-empirical relationship (9) of [98], in which λ_m0 is the initial membrane conductivity and β and K_1 are constants of the electroporation model. We consider λ_m(t) = λ_m0, which neglects dynamic changes due to electroporation, to benchmark against the results of Kotnik et al. [99], which consider a constant λ_m. Table 1 summarizes the parameters used in these calculations.
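For orientation before the full derivation, in the DC-step, constant-conductivity limit the single-shell model reduces to the familiar first-order charging expression Φ_m ≈ 1.5 E R cos θ (1 − e^(−t/τ_m)). A sketch follows, using the standard textbook charging-time approximation and illustrative parameter values rather than this paper's Table 1:

```python
import numpy as np

# Illustrative single-shell parameters (not the paper's Table 1)
R = 10e-6         # cell radius (m)
lam_i = 0.3       # cytoplasm conductivity (S/m)
lam_o = 1.2       # extracellular conductivity (S/m)
d = 5e-9          # membrane thickness (m)
eps_m = 4.4e-11   # membrane permittivity (F/m)

c_m = eps_m / d                                    # specific membrane capacitance (F/m^2)
tau_m = R * c_m * (1 / lam_i + 1 / (2 * lam_o))    # first-order membrane charging time (s)

def tmp_step(E0, t, theta=0.0):
    """First-order TMP response to a DC step field (Schwan-type limit, no electroporation)."""
    return 1.5 * E0 * R * np.cos(theta) * (1.0 - np.exp(-t / tau_m))
```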
We next move the denominator of (7) to the left-hand side of (6), apply the derivatives to E(t), λ_m(t), and Φ_m(t), and combine terms to obtain an equation for Φ_m(t) that incorporates electroporation effects, written in terms of a left-hand side in Φ_m(t) and its derivatives, a right-hand side in E(t), and a set of coefficients [equations (10)-(13)]. Equations (11)-(13) represent the system of equations we will numerically solve for a given E(t); since K_1 and β are assumed to be constants, Φ_m(t) remains the only unknown.
We consider the behavior during a square pulse of duration T (using the unit step u_z(t) = 1 for t > z) by modeling a unit step over 0 < t < T (i.e., neglecting the rise- and fall-times of a typical trapezoidal pulse). The TMP using this simple model is calculated from (14), in which x is the fraction of the initial pulse response subtracted due to electroporation (i.e., x accounts for the suppression of the TMP due to pore formation) and T_peak is the time at which the peak TMP occurs (corresponding to electroporation). Without electroporation, x = 1. With electroporation, x < 1 (generally between 0.7 and 0.9) and is fit in the interpolation table. Again, the ASME models the applied electric field using a unit step function [i.e., E(t) = E_0 u_0(t)], but only calculates through the pulse duration at t = T to simulate the square pulse, without explicitly accounting for the decay to E(T) = 0 at the end of the pulse. While the ASME yields accurate results and is more computationally efficient than the full SME, our aim is to further reduce the computational expense required over a broad range of potential electrical stimulation parameters. The long-term goal of the project is to assess the interactions of electrical stimulation on a multiscale level, including (a) the cell membrane, (b) the whole cell, (c) the surrounding tissue, (d) nerves (to assess action potential propagation and/or suppression), and (e) the full organism. The SME and ASME provide comprehensive analyses detailing pore dynamics and address (a) and (b). For (c)-(e), we aim to use tissue and/or animal models (such as a rat leg) from multiphysics software packages, such as Sim4Life (https://zmt.swiss/sim4life/) or COMSOL Multiphysics. The parametric analyses across various pulse parameters make the computational expense of the ASME (and SME) prohibitive.
For example, consider a 1 µs square pulse with an applied electric field amplitude of E 0 = 6.75 × 10 5 V/m, a time step of t = 0.5 ns, and a total simulation time of 3 µs to compare simplified model (running in MATLAB) and the ASME (running in Python).With a standard personal laptop (e.g., 16 GB installed RAM and an Intel® Core™ i7-6500 processor) the simplified model outputs the solution in ∼10 s and the ASME completes in ∼30 s.While the time difference for these single pulses does not make the ASME prohibitive, the difference in computational expense greatly increases once we incorporate pulse trains or for more intense pulses that produce more pores.At that point, the burden of tracking the pore dynamics vastly increases the ASME computation time (e.g., a single 60 kV/cm, 60 ns pulse runs in ∼1 min and a train of five pulses runs in ∼19 min), while the simplified model's computation time will simply be multiplied by the number of pulses (e.g., if a single pulse runs in 10 s, a train of five pulses will run in 50 s).Thus, while the ASME is a robust model that can handle arbitrary waveforms and provide detailed pore dynamics, our simplified model empirically outperforms the ASME model-particularly for intense pulses and pulse trains that generate many pores.This makes this model a valuable computational tool.However, the results of the simplified model must still be benchmarked to those from the ASME to determine the various fitting parameters.High frequency electroporation is becoming increasingly important for improving transfection, making this process potentially valuable for predicting TMPs phenomenologically under pulse trains [109], [110], [111], further highlighting the need for computational tools that can effectively handle high numbers of pulses.
We accomplish this by considering three different time regimes during the stimulus: Regime 1, from time t = 0 until the beginning of electroporation, which corresponds to the pore-induced arresting of the TMP increase at its peak [corresponding to the first term on the RHS of (14)]; Regime 2, from the peak TMP until the end of the applied pulse at t = T [corresponding to both terms on the RHS of (14)]; and Regime 3, from the end of the pulse until the end of the simulation time (corresponding to the exponential decay). We solve (10)-(13) numerically by fitting β for Regime 1, then subtract a portion of those results due to the enhanced membrane conductivity after the peak in accordance with (14) to fit x. In Regime 3, the TMP decays following PEF removal, so we assume exponential decay of the form given in (15), where a and τ are fit from the results of the ASME.
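As a sketch of the Regime 3 step, the snippet below fits a and τ of an assumed decay a·exp(−(t − T)/τ) to post-pulse data; the arrays t_asme and tmp_asme are hypothetical placeholders standing in for an actual ASME run.

```python
import numpy as np
from scipy.optimize import curve_fit

T = 1e-6  # pulse duration (s)

def regime3(t, a, tau):
    """Assumed Regime 3 decay after the pulse ends at t = T."""
    return a * np.exp(-(t - T) / tau)

# Hypothetical post-pulse ASME output (replace with real simulation data).
t_asme = np.linspace(T, 3e-6, 500)
tmp_asme = 1.1 * np.exp(-(t_asme - T) / 0.4e-6)

(a_fit, tau_fit), _ = curve_fit(regime3, t_asme, tmp_asme, p0=[1.0, 0.5e-6])
print(a_fit, tau_fit)  # values that would populate the interpolation table
```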
A parametric study between the ASME and (10)-(15) for various applied field strengths and pulse durations yielded an interpolation table for a, τ, and x. For all cases considered in this study, β = 7. Fig. 1 shows a, τ, and x for various pulse durations and electric fields. The ''goodness'' of the simplified model compared to the ASME is quantified using (16), where R² is the coefficient of determination computed from the TMP of the ASME at each time step, the TMP of the simplified model, and the time-averaged TMP of the ASME.
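A minimal implementation of this goodness-of-fit metric, assuming the standard coefficient-of-determination form built from the three quantities just named:

```python
import numpy as np

def r_squared(tmp_asme, tmp_simplified):
    """R^2 between an ASME TMP trace and the simplified-model trace:
    1 minus the residual sum of squares over the total sum of squares
    about the time-averaged ASME TMP."""
    tmp_asme = np.asarray(tmp_asme, float)
    tmp_simplified = np.asarray(tmp_simplified, float)
    ss_res = np.sum((tmp_asme - tmp_simplified) ** 2)
    ss_tot = np.sum((tmp_asme - tmp_asme.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```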
For RF fields and exponential pulses, given by E(t) = E_0 cos(2πft) and E(t) = E_0 exp(−t/τ_1), respectively, where f is the frequency and τ_1 is the exponential time constant, we calculated the TMP below and above the electroporation threshold. Below the electroporation threshold, we simultaneously solved (10)-(13) with (9). However, at the beginning of electroporation, we substitute λ_m0 = λ_m0,2 into (9) to account for the decrease in TMP due to the enhanced membrane conductivity caused by pore formation. The two solutions are then assigned to the relevant time matrices in accordance with Δφ_m(t) = Δφ_m(t) u_0(t) − Δφ_m2(t − T_peak) u_T_peak(t), where T_peak is the time at which Δφ_m(t) reaches its peak before electroporation arrests the TMP, which yields the full TMP solution. Fitted expressions were obtained for an RF field and for an exponential pulse, with coefficients defined such that f is in Hz and τ_1 is in s. Furthermore, for a cosine field, β = m_cos E_0 + b_cos, where m_cos = −7 × 10^−12 f + 6 × 10^−6 and b_cos = 3 × 10^−6 f + 5. For an exponential pulse, β = 6.5 below the electroporation threshold and β = 8 above the electroporation threshold. These semi-empirical fits came from parametric analyses and also recover the sub-electroporation behavior. Accounting for electroporation in this way allows us to incorporate its effects solely through the electrical properties (i.e., the membrane conductivity) without having to track pore dynamics, which is especially helpful in high-field, short-pulse-duration cases where there are many small pores.
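The β fits quoted above translate directly into code; the coefficients below are the ones given in the text (f in Hz, E_0 in V/m), and which side of the electroporation threshold a given exponential pulse falls on is assumed to be known by the caller:

```python
def beta_cosine(E0, f):
    """Fitted beta for an RF (cosine) field: beta = m_cos*E0 + b_cos."""
    m_cos = -7e-12 * f + 6e-6
    b_cos = 3e-6 * f + 5.0
    return m_cos * E0 + b_cos

def beta_exponential(above_threshold):
    """Fitted beta for an exponential pulse: 6.5 below the electroporation
    threshold, 8 above it."""
    return 8.0 if above_threshold else 6.5

print(beta_cosine(E0=2.35e5, f=5e5))  # example: 235 kV/m at 500 kHz
```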
III. RESULTS
We next compare the results from the simplified electroporation model to the ASME. We fit the simplified model to the ASME to determine τ and a for an applied pulse as a function of E_0 for T = 1, 2, and 4 µs to create an interpolation table for assessing other PEFs. For each T considered, both τ and a approach constants with increasing E_0: since pores eventually form, increasing E_0 further will not appreciably change the TMP.
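In practice the interpolation table can be a pair of 1-D lookups over E_0 per pulse duration. The grid values below are hypothetical placeholders (the real ones come from the fits in Fig. 1); linear interpolation with edge clamping reproduces the saturation of τ and a at high E_0 noted above.

```python
import numpy as np

# Hypothetical table for T = 1 us; real values come from fitting to the ASME.
E0_grid  = np.array([1e5, 3e5, 5e5, 7.5e5, 1e6])            # V/m
tau_grid = np.array([0.80, 0.55, 0.45, 0.42, 0.41]) * 1e-6  # s
a_grid   = np.array([0.60, 0.90, 1.00, 1.05, 1.06])         # V

def lookup_fit_params(E0):
    """np.interp clamps at the table edges, so tau and a flatten out
    (approach constants) for large E0, as observed in the fits."""
    return np.interp(E0, E0_grid, tau_grid), np.interp(E0, E0_grid, a_grid)

tau, a = lookup_fit_params(6.75e5)
```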
We then apply the interpolation table constructed from Fig. 1 to the simple model from (10)-(13) for an applied pulse to assess the quality of the fit to the ASME results. The comparison of the simplified model and the ASME is given by R², calculated using (16). Fig. 2 considers an applied PEF of E_0 = 5×10^4 V/m, which is below the electroporation threshold and allows us to obtain an accurate fit without using the fitting parameters, since no membrane pore formation occurs. Fig. 3 applies E_0 = 9.47 × 10^4 V/m to benchmark the simple model against previous ASME results [101] to validate both the ASME and simplified models. Figs. 4 and 5 consider E_0 = 5×10^5 V/m and E_0 = 1×10^6 V/m, respectively, which exceed the electroporation threshold. Figs. 2-5 consider pulse durations of (a) 1 µs, (b) 2 µs, and (c) 4 µs and report R² for each pulse duration.
Fig. 6 validates the simplified model by comparing the results using the simplified model with the fitting parameters to the ASME for E_0 = 6.75 × 10^5 V/m and T = 1 µs. We determined R² = 0.97 between the ASME and simplified models, indicating excellent agreement between the two solutions when using the fitting parameter values from the interpolation table. Fig. 7 demonstrates how we combine the solution responses for the exponential pulse (Fig. 7a) and the RF field (Fig. 7b). Figs. 8-10 compare the ASME to the simplified model for RF fields with frequencies of 250 kHz (Fig. 8), 500 kHz (Fig. 9), and 1 MHz (Fig. 10) and electric field amplitudes of (a) 10 kV/m, (b) 100 kV/m, (c) 235 kV/m, and (d) 500 kV/m. The average agreement between the two models for these cases is R² = 0.98. These results indicate that the electric field required to induce electroporation increases with increasing frequency, since the TMP for a given electric field amplitude decreases with increasing frequency. This behavior is consistent with prior calculations of TMP assuming constant membrane conductivity [99], which would correspond to sub-electroporation conditions.
Figs. 11 and 12 compare the ASME and simplified models for exponential pulses with time constants of τ_1 = 1 µs and τ_1 = 2 µs, respectively, for electric field amplitudes of (a) E_0 = 5 × 10^4 V/m, (b) E_0 = 10^5 V/m, (c) E_0 = 3 × 10^5 V/m, and (d) E_0 = 5 × 10^5 V/m. The average agreement between the two models for these cases is R² = 0.96. While these results agree well with the ASME, both models assume minimal variation in the parameters in Table 1. Understanding the simplified model's sensitivity to these parameters is important. Ideally, we would perform a formal error propagation analysis as outlined in [112]; however, the lack of a closed-form solution to the simplified model prevents us from directly applying this procedure. Instead, we can perform a parametric analysis by comparing the result obtained when we vary the cellular parameters from Table 1 to the original result obtained using Table 1. Considering the pulse parameters used for Fig. 6, we run the simplified model with each variable (i.e., ε_m, ε_0, ε_i, λ_0, λ_i, and λ_m0) scaled by a fraction δ_xn = 0.1, 0.3, 0.5, 0.7, or 0.9 of its nominal value (i.e., the parameter is set to δ_xn·x_n, where δ_xn = 1 indicates no variation from the nominal value) and calculate R² between the resulting TMP and the nominal result (i.e., δ_xn = 1 for all parameters). Fig. 13 demonstrates that variation in λ_m0 has no effect on the TMP, variation in ε_0 has minimal effect, and variations in λ_i and ε_m have the greatest effect.
We next expand the parametric analysis to examine how introducing variation in multiple parameters alters the TMP. Considering δ_xn = 0.2, 0.4, 0.6, 0.8, and 1 with x_n = ε_m, ε_i, λ_0, and λ_i gives a total of 625 simulations (and 625 R² values). Due to the difficulty in effectively reporting this large data space, we report the average R² over this range of data, R²_avg, for the various conditions. For the square waveform, we obtain R²_avg = 0.72. Repeating this analysis with the electrical parameters for the RF waveform from Fig. 9c and the exponential waveform from Fig. 11c yields R²_avg ≈ 0.5 and ≈ 0.6, respectively. Considering δ_xn = 0.5, 0.6, 0.7, 0.8, 0.9, and 1 instead gives R²_avg values of 0.9, 0.8, and 0.8 for the square, RF, and exponential waveforms, respectively. Thus, parameter variation is unlikely to dramatically influence calculations of TMP as long as δ_xn > 0.5 for each parameter. While performing a sensitivity analysis for the ASME would provide information that could improve the fidelity of the simplified model, the current model provides a reasonable first step for calculating TMP under various electrical waveforms.
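A sketch of this multi-parameter sweep follows; run_simplified_model is a hypothetical stand-in for the actual model, and the dummy response it returns exists only so the loop structure (5⁴ = 625 combinations) is runnable as shown.

```python
import itertools
import numpy as np

def r2(ref, test):
    return 1.0 - np.sum((ref - test) ** 2) / np.sum((ref - ref.mean()) ** 2)

def run_simplified_model(eps_m=1.0, eps_i=1.0, lambda_0=1.0, lambda_i=1.0):
    # Hypothetical wrapper: scale factors multiply the nominal Table 1 values;
    # the returned trace is a dummy response, not the real model output.
    t = np.linspace(0, 3e-6, 300)
    return (eps_m * lambda_i) * np.exp(-t / ((0.5 + 0.5 * eps_i * lambda_0) * 1e-6))

deltas = [0.2, 0.4, 0.6, 0.8, 1.0]
names = ["eps_m", "eps_i", "lambda_0", "lambda_i"]
nominal = run_simplified_model()

r2_vals = [r2(nominal, run_simplified_model(**dict(zip(names, combo))))
           for combo in itertools.product(deltas, repeat=len(names))]
print(len(r2_vals), np.mean(r2_vals))  # 625 combinations and their average R^2
```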
IV. DISCUSSION
The main goal of this study was to develop and demonstrate a rapid, semi-empirical method for determining the effects of electrical waveforms on membrane permeabilization in spherical cells from strictly electrical arguments without directly calculating pore dynamics.
We first considered a pulse and determined the fitting parameters necessary to achieve excellent agreement between the simple and ASME models over a wide range of pulse durations. Fig. 2 shows that the TMP is characterized by a smooth increase, a plateau, and a subsequent decrease when below the electroporation threshold. Conversely, Fig. 3 demonstrates that just above the electroporation threshold, the TMP increases rapidly to its peak (∼1.4 V) and then decreases sharply due to the onset of pore formation, before plateauing until the end of the pulse due to the presence of pores and decaying exponentially after the pulse ends. Figs. 3-6 demonstrate similar behavior, while also showing that the electroporation threshold and corresponding plateau occur more rapidly with increasing electric field. Of note, we used interpolated fitting parameters from the interpolation table to compare the simplified model to other ASME runs that were not used to obtain the table. Fig. 6 shows that the ASME and the simplified model agree well without having to directly fit the simplified model to the ASME. This indicates that the simplified model can recover the dynamics of the ASME for a range of PEF parameters using the regimes for the fits and the interpolation table developed here, and demonstrates the feasibility of using this simplified approach as a predictive tool for assessing multiscale and multiphysics phenomena rather than solely serving as a fit to the ASME.
Furthermore, this simplified model based strictly on electrical behavior, without needing to directly assess pore dynamics, also elucidates the behavior of the TMP during electroporation. For instance, we directly incorporate the TMP-dependent membrane conductivity for the full pulse duration, showing the smooth initial TMP increase pre-electroporation, the rapid increase at the onset of electroporation, the sharp initial decrease due to rapid pore formation, the plateau of the TMP as pores open and close, and the final exponential TMP decrease after the pulse is turned off.
In addition to traditional square-shaped PEFs, we also assessed RF fields and exponential pulses to demonstrate that the simplified approach works when the assumed temporal symmetry of the square pulse no longer exists. Figs. 8-10 show that a stronger electric field amplitude is necessary to induce electroporation at higher frequencies. For example, for f = 250 kHz, E_0 = 100 kV/m induces a slight degree of pore formation, as indicated through the TMP behavior, while E_0 = 235 kV/m induces stronger pore formation. On the other hand, for f = 1 MHz, E_0 = 235 kV/m induces a slight degree of electroporation, while E_0 = 500 kV/m induces significant electroporation. Much as shorter-duration PEFs fail to fully charge the membrane due to their high-frequency content, narrowband RF fields of high frequency will also fail to fully charge the membrane, resulting in a lower TMP than lower-frequency fields [99]. Thus, a higher amplitude is necessary at higher frequencies to achieve the same TMP, and similar pore dynamics, as lower-frequency fields. Figs. 11 and 12 demonstrate that increasing the time constant of an exponential pulse, which essentially keeps the electric field magnitude high for a longer duration, acts similarly to increasing the duration of a square pulse.
These results demonstrate the effectiveness of assessing electroporation dynamics across a range of electrical waveforms by using strictly electrical behavior, specifically the TMP, and assuming a TMP-dependent membrane conductivity during PEF application. By not calculating the distribution of pore sizes, this approach reduces computational requirements across a range of electrical waveforms, particularly for high-intensity, short-duration PEFs that generate a large density of small pores, which can be expensive even using the ASME rather than the full SME. While this method uses a semi-empirical approach to determine the fitting constant between the membrane conductivity and the TMP, we have demonstrated that it yields excellent agreement with the ASME even for cases that we did not use to develop the fitting constants.
While some degree of fidelity in the model may be lost by not accurately calculating pore dynamics, this sort of approach provides value by only requiring electrical properties, which may be measured for cells of interest [113] and may also change during or after exposure [114]. Moreover, as Neu and Krassowska [86] pointed out when deriving the ASME, many of the parameters used in the SME are not readily known or measured, while the ASME and the simple electrical-based model we present here avoid this issue. Although the ASME and our simplified approach use fitted parameters, these tend to be based on phenomenological behavior that can be readily measured, such as the TMP. However, with the simplified approach, one does not gain insight into the spatial distribution of pores or ion/molecular transport; thus, the SME or ASME is necessary when those characteristics are of interest.
The simplified model provides a straightforward method to input desired pulse parameters and obtain the TMP as a function of time (i.e., a single array that is designed for initial multiphysics analyses). Conversely, the ASME outputs the TMP, pore density, pore size, pore location, and several other specific parameters at every time step and every polar angle (e.g., with 25 angular segments, as considered here, the TMP result alone consists of 25 of the arrays that the simplified model outputs). While these results are desirable for certain analyses, the amount of data quickly becomes unnecessarily cumbersome when the TMP at a specific angle is the only parameter of interest. Furthermore, the computational expense of the ASME for high-intensity pulses and pulse trains drastically increases, which would make the simplified model a more attractive option.

FIGURE 7. Part 1 shows the numerical solution of (10)-(13) considering (9) as defined, and Part 2 shows the numerical solution of (10)-(13) considering (9) with λ_m0 = λ_m0,2.

FIGURE 13. Assessment of R² comparing the simplified model with no parameter variation to the simplified model with variation in a single parameter, represented by the fraction δ_x, where δ_x = 1 indicates no variation (i.e., δ_xn·x_n = x_n, where x_n = ε_m, ε_0, ε_i, λ_0, λ_i, and λ_m0). Variation in λ_m0 has no effect on the TMP, variation in ε_0 has minimal effect, and variations in λ_i and ε_m have the greatest effect.

Thus, the user can
choose the simplified model to rapidly estimate the TMP for a multiscale or broad parametric analysis, or the ASME (or full SME) to provide information regarding pore dynamics or the angular distribution of the TMP. Ultimately, the ASME and simplified models both serve as options for assessing PEF-induced biological effects. The simplified model will serve as a foundational tool in future work assessing multiscale behavior in the presence of an electrical waveform.
Furthermore, future work will couple these results to Sim4Life to assess a full-scale model to provide insight into how these modified TMPs and additional shunt conductivities will influence the action potential initiation and propagation when starting from exposure to an electromagnetic waveform at the organism level.This analysis will aid in characterizing the multiscale effects of the PEFs and provide guidance for more in-depth simulation and experimental work to assist in biological, medical, and defense applications.
A. MODEL LIMITATIONS
While this study considers λ_m(t), it assumes a constant ε_m. Extending this analysis to include a time-dependent membrane permittivity will elucidate additional behavior and provide a more comprehensive prediction of TMP behavior with electroporation. Additionally, while we consider microsecond-duration PEFs here for example cases, this approach could be applied to develop fitting functions for nano- or picosecond pulses and pulse trains. Furthermore, this additional analysis could aid in developing fitting functions for β, a, and τ for square pulses analogous to those derived for the RF and exponential pulses, which would mitigate the need for the interpolation table and improve the accuracy of the model.
V. CONCLUSION
We have developed a rapid method for calculating the TMP of cells permeabilized by exposure to various electrical waveforms by applying a semi-empirical relationship between the membrane conductivity and the TMP, without needing to directly calculate the pore dynamics. While the SME provides greater fidelity regarding pore dynamics, its computational expense makes it undesirable for rapidly assessing pore formation as additional physical phenomena (such as temperature gradients and shock waves) are incorporated into multiphysics, multiscale models. Moreover, although the ASME alleviates some of this computational burden, it still requires tracking pore growth, which becomes computationally expensive when many pores are formed at shorter PEF durations (on the order of nanoseconds) and strong electric field intensities. The simplified model presented here demonstrates the feasibility of considering electropermeabilization strictly from an electrical perspective. This rapid model for membrane dynamics may ultimately be linked to tissue- and organism-level multiscale models for assessing exposure to various electrical waveforms that may be relevant for occupational safety or therapies.
FIGURE 1. Fitting parameters as a function of applied electric field, with τ shown in (a), (c), and (e) and a shown in (b), (d), and (f), for pulse durations of 1 µs (a, b), 2 µs (c, d), and 4 µs (e, f); x is shown in (g).
FIGURE 2. Transmembrane potential as a function of time t in response to a unit step pulse for E_0 = 5×10^4 V/m for pulse durations of (a) 1 µs, (b) 2 µs, and (c) 4 µs using the asymptotic Smoluchowski equation (ASME) and the simplified model, with the goodness-of-fit shown by R² in (d).
FIGURE 3. Transmembrane potential as a function of time t in response to a unit step pulse for E_0 = 9.47×10^4 V/m for pulse durations of (a) 1 µs, (b) 2 µs, and (c) 4 µs using the asymptotic Smoluchowski equation (ASME) and the simplified model, with the goodness-of-fit shown by R² in (d).
FIGURE 4. Transmembrane potential as a function of time t in response to a unit step pulse for E_0 = 5×10^5 V/m for pulse durations of (a) 1 µs, (b) 2 µs, and (c) 4 µs using the asymptotic Smoluchowski equation (ASME) and the simplified model, with the goodness-of-fit shown by R² in (d).
FIGURE 5. Transmembrane potential as a function of time t in response to a unit step pulse for E_0 = 1×10^6 V/m for pulse durations of (a) 1 µs, (b) 2 µs, and (c) 4 µs using the asymptotic Smoluchowski equation (ASME) and the simplified model, with the goodness-of-fit shown by R² in (d).
FIGURE 6. Transmembrane potential as a function of time t in response to a unit step pulse for E_0 = 6.75×10^5 V/m for a pulse duration of 1 µs using a, τ, β, and x values from the interpolation table. The R² value comparing the asymptotic Smoluchowski equation (ASME) and the simplified model is 0.97, indicating excellent agreement between the two.
FIGURE 8. Transmembrane potential as a function of time t in response to a cosine waveform for f = 250 kHz for applied electric field amplitudes of E_0 = (a) 10 kV/m, (b) 100 kV/m, (c) 235 kV/m, and (d) 500 kV/m, comparing the results of the asymptotic Smoluchowski equation (ASME) and the simplified model.
FIGURE 9. Transmembrane potential as a function of time t in response to a cosine waveform for f = 500 kHz for applied electric field amplitudes of E_0 = (a) 10 kV/m, (b) 100 kV/m, (c) 235 kV/m, and (d) 500 kV/m, comparing the results of the asymptotic Smoluchowski equation (ASME) and the simplified model.
FIGURE 10. Transmembrane potential as a function of time in response to a cosine waveform for f = 1 MHz for applied electric field amplitudes of E_0 = (a) 10 kV/m, (b) 100 kV/m, (c) 235 kV/m, and (d) 500 kV/m, comparing the results of the asymptotic Smoluchowski equation (ASME) and the simplified model.
FIGURE 11. Transmembrane potential as a function of time in response to an exponential waveform for τ_1 = 1 µs for applied electric field amplitudes of E_0 = (a) 50 kV/m, (b) 100 kV/m, (c) 300 kV/m, and (d) 500 kV/m, comparing the results of the asymptotic Smoluchowski equation (ASME) and the simplified model.
FIGURE 12. Transmembrane potential as a function of time in response to an exponential waveform for τ_1 = 2 µs for applied electric field amplitudes of E_0 = (a) 50 kV/m, (b) 100 kV/m, (c) 300 kV/m, and (d) 500 kV/m, using the asymptotic Smoluchowski equation (ASME) and the simplified model.
TABLE 1. Parameters used in the mathematical models. | 8,646.4 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Automatic Smart Irrigation System Using IOT
- India's population has passed 1.2 billion and continues to grow, so within 25-30 years there will be a serious food-supply problem; the development of agriculture is therefore essential. Today, farmers face the problem of water scarcity due to lack of rain. The main objective of this project is to provide an automatic irrigation system that saves the farmer's time and money. Traditional farmland irrigation techniques require manual intervention. With automated irrigation technology, human intervention can be minimized. Whenever there is a change in the humidity of the soil, the sensor senses the change and irrigates the field automatically using a popular technology called the 'Internet of Things'. The project makes use of simple IoT technology and is economical, making it feasible even in economically disadvantaged areas.
I. INTRODUCTION
Imagine a world where machines or things communicate with each other: a network of physical objects (devices, vehicles, buildings, and other items) embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data. Machine to machine, machine to infrastructure, an internet of intelligent things and intelligent systems: that is the Internet of Things (IoT), and its potential is huge.
IoT describes a world where just about anything can be connected and can communicate in an intelligent fashion. In other words, with the Internet of Things, the physical world is becoming one big information system. IoT, one of the fastest-growing technologies in today's world, has various real-time applications that prove to be really useful, and its scope is vast, promising to be one of the defining technologies of recent times.
With the water requirements of irrigation being large, there is a need for a smart irrigation system that can save about 80% of the water. This prototype aims at saving time and avoiding the need for constant vigilance. It also helps in water conservation by automatically providing water to plants or gardens depending on their water requirements. It can also prove to be efficient in agricultural fields, lawns, and parks. As technology advances, there is always a chance of reducing risks and making work simpler. Embedded and microcontroller systems provide solutions for many problems. This application precisely controls the watering system for gardens by using a sensor-microcontroller system. It is achieved by installing sensors in the field to monitor the soil temperature and soil moisture.
Smart irrigation systems estimate and measure the depletion of available soil moisture in order to operate an irrigation system, restoring water as needed while minimizing excess water use.
An intelligent automatic plant irrigation system waters plants regularly without human monitoring, using a moisture sensor. The circuit is built around a comparator op-amp (operational amplifier) and a timer that drives a relay to switch on a motor. The system uses hardware components that are subject to variation with environmental conditions. One may wonder why smart irrigation is required. During manual irrigation, the water requirement of plants or crops is not monitored: even when the soil is moist enough, water is still provided. This water is not absorbed by the plants and is thus wasted. Hence, a system is used to monitor the water requirements.
This prototype monitors the soil moisture and temperature. A predefined range of soil moisture and temperature is set, which can be varied with the soil or crop type. If the moisture or temperature of the soil deviates from the specified range, the watering system is turned on or off: in the case of dry soil and high soil temperature, it activates the irrigation system, pumping water for the plants.
This technology is recommended for efficient automated irrigation systems, and it may provide a valuable tool for water-conservation planning and irrigation scheduling that is extendable to other similar agricultural crops. Maximum absorption of the water by the plant is ensured by spreading the water uniformly using a servo motor, so there is minimal wastage of water. This system also allows controlling the amount of water delivered to the plants when it is needed, based on the types of plants, by monitoring soil moisture. This project can be used in large agricultural areas where human effort needs to be minimized. Many aspects of the system can be customized and fine-tuned through software for a plant's requirements. The components used in the system are, moreover, easy to operate and use.
II. EXISTING SYSTEM
In most existing systems, the threshold value of moisture is not taken into consideration and the field is irrigated at random time intervals, leading to over-irrigation or under-irrigation of the field, which in turn affects crop productivity. There are cases where the threshold value of moisture is fixed, leading to another disadvantage: different crops need different environmental conditions to grow, and when the moisture threshold of the system is fixed, conditions may not be appropriate for a given crop's growth and yield. A method is proposed to monitor the soil moisture such that irrigation is done only when the moisture content goes below the threshold value.
SMART IRRIGATION
This prototype aims at saving time and avoiding the need for constant vigilance. It also helps in water conservation by automatically providing water to plants or gardens depending on their water requirements. As technology advances, there is always a chance of reducing risks and making work simpler. Embedded and microcontroller systems provide solutions for many problems. This application precisely controls the watering system for gardens by using a sensor-microcontroller system. It is achieved by installing sensors in the field to monitor the soil temperature and soil moisture, which transmit the data to the microcontroller for estimation of the water demands of the plants.
INTELLIGENT IRRIGATION SYSTEM IN SENSOR NETWORKS
Farmers need irrigation methods that conserve water, an essential resource that must be used in minimum quantity because it is neither free forever nor inexhaustible. In drip irrigation, water is delivered to the roots of plants, saving water and preventing land infertility and nutrient loss. Without automation, the farmer has to keep a timetable for irrigation, which changes with the crop, soil, and weather. A web-based intelligent drip irrigation system is a practical solution for water management and precision agriculture. In a web-based system, the water supply can be controlled using a solenoid valve. The whole system is microcontroller based and can be operated from a remote location through the web, so there is no need to worry about irrigation timing for each crop or soil condition. Sensors take readings of the soil, such as soil moisture, temperature, air humidity, and light, and the microcontroller makes decisions under the control of the user (the farmer). A web-based intelligent irrigation system helps a farmer make decisions on water management in the farm, with no need to maintain an irrigation timetable; the timetable can be fetched and mapped from agricultural university or government websites according to soil and crop type. It gives maximum profit at minimum cost.
AUTOMATED IRRIGATION SYSTEMS USING WIRELESS SENSOR NETWORKS
Irrigation is the artificial application of water to the soil. There have been various technological improvements in irrigation, including automation. Automated irrigation implies operation of the system without any manual intervention. An automated system utilizes technologies like timers, sensors, computers, mechanical appliances, etc. Here we present a comparative study of optimizing irrigation using a remotely monitored embedded system (ZigBee or hotspot based) with wireless sensor networks, particularly for drip irrigation, and a microcontroller-based optimization that uses a cellular internet interface, which allows data inspection and irrigation scheduling to be programmed through a web page. Implementation of these systems can be potentially useful in water-limited geographical areas.
III. PROPOSED SYSTEM
The soil moisture sensor senses the amount of moisture content in the soil, which is uploaded to the Arduino board. The Arduino board transfers control of the system to the relay module, which is responsible for the switching operations. The relay module ensures proper irrigation of the field, turning the supply on when the moisture value is below the threshold and off when the moisture content is sufficient for the crop or plant, thereby preventing under-irrigation or over-irrigation. The state of the relay module is indicated by the LED. A simple process-flow representation of the system is shown in Figure 1. From the representation, it is clearly understood that the working of the system is simple and can be controlled easily.
WORKING PRINCIPLE
The system consists of various hardware components that are put together to sense and irrigate the fields automatically. Each of these components has a unique function to perform, and the system attains full efficiency when each of these components works properly.
Construction
The soil moisture sensor is kept in the field in order to sense the moisture content in the soil regularly. The sensed information from the sensor can be in either analog or digital form, and the sensor is connected to the Arduino board accordingly. The voltage from the Arduino is supplied to the Analog-to-Digital Converter (ADC) and the relay module. The ground pin from the Arduino is connected to the ground of the relay module.
The output from the Arduino is given to the relay module. The other side of the relay module is connected to the water pump through a 12 V power supply. One end of the water pump is kept inside a water source, and the other side is kept in the soil. Each of the components in the system is connected properly to ensure proper working of the system and the best efficiency.
Work Flow
The soil moisture sensor kept in the soil senses the soil moisture regularly and sends the sensed data to the Arduino. The Embedded C program already uploaded onto the Arduino contains the threshold value of moisture. The moisture must always be within the range in order to protect the crop from damage. When the sensed value is found to be less than the value mentioned in the code, that is, if the moisture content in the soil is less than the threshold value, control of the system is transferred to the relay module, which performs the switching operation. The relay module then turns on the switch, which allows the flow of water.
Another condition that should be taken into consideration is the prevention of over-irrigation of the field. The soil moisture sensor continues to sense the amount of moisture, and when the field has been provided with sufficient water, the relay module turns off the switch, thereby preventing over-irrigation. The state of the relay module can be identified by the LED found on it.
Fig. 3. Circuit Connection
Water is supplied at regular intervals because the sensor measures the moisture regularly, allowing the system to attain its maximum efficiency.
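The workflow above reduces to a simple threshold loop. The sketch below expresses it in Python for readability; on the actual hardware this logic lives in the Arduino's Embedded C program, and the read_moisture()/set_relay() helpers, the 40% threshold, and the 5 s delay are illustrative assumptions (the tested moisture range reported later is 40%-100%).

```python
import time

MOISTURE_THRESHOLD = 40.0  # percent; assumed threshold for a garden plant

def read_moisture():
    """Placeholder for the soil-moisture reading delivered via the ADC (0-100%)."""
    return 35.0

def set_relay(on):
    """Placeholder for driving the relay that switches the 12 V pump supply."""
    print("pump ON" if on else "pump OFF")

pump_on = False
while True:
    moisture = read_moisture()
    if moisture < MOISTURE_THRESHOLD and not pump_on:
        set_relay(True)    # soil too dry: relay closes and irrigation starts
        pump_on = True
    elif moisture >= MOISTURE_THRESHOLD and pump_on:
        set_relay(False)   # enough moisture: relay opens, preventing over-irrigation
        pump_on = False
    time.sleep(5)          # delay between readings, as set in the source code
```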
Merits
This technology is recommended for efficient automated irrigation systems, and it may provide a valuable tool for water-conservation planning and irrigation scheduling that is extendable to other similar agricultural crops. Maximum absorption of the water by the plant is ensured by spreading the water uniformly using a servo motor, so there is minimal wastage of water.
This system also allows controlling the amount of water delivered to the plants when it is needed, based on the types of plants, by monitoring soil moisture and temperature. This project can be used in large agricultural areas where human effort needs to be minimized. Many aspects of the system can be customized and fine-tuned through software for a plant's requirements.
Demerits
The system fails in case of power failure, and an alternate arrangement for power has to be made so that the maximum efficiency of the system can be attained. The working condition of each component has to be checked periodically because component failure is a serious issue. Another issue is the farmer's lack of visibility into the state of the system: the farmer has to visit the field at regular intervals in order to check the condition of the system.
RESULT
The smart irrigation system was tested on a garden plant. In the Arduino code, the moisture range was set to 40%-100%, providing optimum conditions for plant growth. Moreover, this system proves to be cost-effective and proficient in conserving water and reducing its wastage. Figure 5 below shows the moisture content being displayed on the monitor. This display of moisture takes place at regular intervals depending on the delay time mentioned in the program source code.
Fig. 5. Moisture Display
Figure 6 below represents the watering of the soil when the moisture content is below the threshold value. This irrigation, or water flow, is controlled by the relay module, which performs the ON and OFF switching operations.
IV. CONCLUSION AND FUTURE SCOPE
In the present era, farmers use manually controlled irrigation, in which they irrigate the land at regular intervals. This process tends to consume more water and results in water wastage. Moreover, in dry areas with inadequate rainfall, irrigation becomes difficult. Hence, we require an automatic system that will precisely monitor and control the water requirements in the field. Installing a smart irrigation system saves time and ensures judicious usage of water. Moreover, this architecture uses Arduino, which promises an increase in system life by reducing power consumption. It also reduces human intervention, so less of the farmer's effort is required.
Our project can be improved by adding a web scraper that can predict the weather and water the plants or crops accordingly: if rain is forecast, less water is let out for the plants. Also, a GSM module can be included so that the user can control the system via a smartphone. A water meter can be installed to estimate the amount of water used for irrigation, thus giving a cost estimate. A solenoid valve can be used for varying the volume of water flow. Furthermore, wireless sensors can also be used. Monitoring of other growth or soil parameters can also be included simply by connecting the sensors and modifying the source code of the project. This integration can also reduce the number of other hardware components used in the system, thereby reducing its total cost. The system will continuously send the data to the cloud; these data can also be accessed over Bluetooth in an Android app. If no internet connection is present, the farmer can control the system through the app, i.e., as a semi-automatic system. The plants' growth can be assessed earlier by measuring the pH of the soil, which can help farmers in numerous ways: the farmer learns earlier which crops can be grown in the field.
"Engineering",
"Environmental Science",
"Agricultural and Food Sciences",
"Computer Science"
] |
Seven-parameter statistical model for BRDF in the UV band
A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter, and retroreflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. After fixing the seven parameters, the model can well describe scattering data in the UV band. ©2012 Optical Society of America. OCIS codes: (290.1483) BSDF, BRDF, and BTDF; (290.5820) Scattering measurements; (290.5880) Scattering, rough surfaces.

References and links
1. C. Lavigne, G. Durand, and A. Roblin, "Ultraviolet light propagation under low visibility atmospheric conditions and its application to aircraft landing aid," Appl. Opt. 45(36), 9140–9150 (2006).
2. M. A. Velazco-Roa and S. N. Thennadil, "Estimation of complex refractive index of polydisperse particulate systems from multiple-scattered ultraviolet-visible-near-infrared measurements," Appl. Opt. 46(18), 3730–3735 (2007).
3. M. Minnaert, "The reciprocity principle of linear photometry," Astrophys. J. 93, 403–410 (1941).
4. J. Stover, Optical Scattering, Measurement and Analysis (SPIE Press, 1995).
5. C. L. Walthall, J. M. Norman, J. M. Welles, G. Campbell, and B. L. Blad, "Simple equation to approximate the bidirectional reflectance from vegetative canopies and bare soil surfaces," Appl. Opt. 24(3), 383–387 (1985).
6. P. Beckman and A. Spizzichino, The Scattering of Electromagnetic Waves from Rough Surfaces (Pergamon, 1963).
7. J. L. Roujean, M. Leroy, and P. Y. Deschamps, "A bidirectional reflectance model of the earth's surface for the correction of remote sensing data," J. Geophys. Res. 97(20), 455–468 (1992).
8. X. Li and A. H. Strahler, "Geometric-optical bidirectional reflectance modeling of the discrete crown vegetation canopy: effect of crown shape and mutual shadowing," IEEE Trans. Geosci. Rem. Sens. 30(2), 276–292 (1992).
9. I. G. E. Renhorn and G. D. Boreman, "Analytical fitting model for rough-surface BRDF," Opt. Express 16(17), 12892–12898 (2008).
10. K. E. Torrance and E. M. Sparrow, "Theory for off-specular reflection from roughened surfaces," J. Opt. Soc. Am. 57(9), 1105–1114 (1967).
11. Z. S. Wu, D. H. Xie, P. H. Xie, and Q. N. Wei, "Modeling reflectance function from rough surface and algorithms," Acta Opt. Sin. 22, 897–901 (2002).
12. K. J. Dana, B. van Ginneken, S. K. Nayar, and J. J. Koenderink, "Reflectance and texture of real-world surfaces," ACM Trans. Graph. 18(1), 1–34 (1999).
13. Y. Barnes and J. J. Hsia, "UV bidirectional reflectance distribution function measurements for diffusers," Proc. SPIE 1764, 285–288 (1993).
14. M. P. Newell, L. A. Whitlock, and R. A. M. Keski-Kuha, "Extreme ultraviolet scatter from particulate contaminated mirrors," Proc. SPIE 2541, 174–185 (1995).
15. M. P. Newell and R. A. M. Keski-Kuha, "Bidirectional reflectance distribution function of diffuse extreme ultraviolet scatterers and extreme ultraviolet baffle materials," Appl. Opt. 36(22), 5471–5475 (1997).
16. T. Zurbuchen, P. Bochsler, and F. Scholze, "Reflection of ultraviolet light at 121.6 nm from rough surfaces," Opt. Eng. 34(5), 1303–1315 (1995).
17. M. P. Newell and R. A. M. Keski-Kuha, "Extreme ultraviolet BRDF measurements: instrumentation and results," Proc. SPIE 2864, 453–464 (1996).
18. C. Amra, "Light scattering from multilayer optics. I. Tools of investigation," J. Opt. Soc. Am. A 11(1), 197–210 (1994).
19. C. Amra, "Light scattering from multilayer optics. II. Application to experiment," J. Opt. Soc. Am. A 11(1), 211–226 (1994).
20. C. Amra, "From light scattering to the microstructure of thin-film multilayers," Appl. Opt. 32(28), 5481–5491 (1993).
21. J. M. Elson, J. P. Rahn, and J. M. Bennett, "Light scattering from multilayer optics: comparison of theory and experiment," Appl. Opt. 19(5), 669–679 (1980).
22. C. Amra, C. Grèzes-Besset, and L. Bruel, "Comparison of surface and bulk scattering in optical multilayers," Appl. Opt. 32(28), 5492–5503 (1993).
23. H. L. Zhang, Z. S. Wu, Y. H. Cao, and G. Zhang, "Measurement and statistical modeling of BRDF of various samples," Opt. Appl. 40, 197–208 (2010).
Introduction
Interest in the study of scattering and reflective properties in the UV band (200 nm to 400 nm) has recently increased. The applications of such properties include UV space-object detection and short-wavelength monitoring of subsurface defects in the semiconductor industry, among others [1,2].
The bidirectional reflectance distribution function (BRDF) is often used to describe the directional dependence of the scattering properties of a surface. The BRDF has been extensively studied and surveyed in various fields [3–10]. Many BRDF models have been developed, and they can be classified into two categories, namely, purely empirical and analytical models.
Both categories have their advantages and disadvantages. The purely empirical BRDF models are simple and useful but have no physical basis; examples are the Minnaert BRDF model [3] and the Walthall model [5]. The analytical BRDF models are derived from more complex physical theory through simplifying assumptions and approximations. They are needed in many applications, especially in simulation software, but they usually require the determination of many parameters. Examples include the Roujean [7], LiSparse-Dense BRDF [8], and Renhorn-Boreman fourteen-parameter [9] models.
Most BRDF models have been developed over the visible and near-infrared spectrum, and numerous papers have discussed measurement or modeling methods in the visible to infrared band [7–12]. Although there are some papers reporting UV scatter from rough surfaces [13–17], a suitable BRDF model in the UV band is still lacking. In the UV band, the scattered light spectrum is often not easy to measure, and a more complex model is needed to describe the measurement data. Bulk or volume scattering is usually caused by the inhomogeneities of materials, and studies of volume scattering provide a powerful tool to investigate the scattering properties of rough surfaces [18–22].
The present paper aims to provide a model of reflection by a rough surface that can successfully predict the experimental findings in the UV band. In section one, the schematic diagram of the instrument used to perform scatter measurements is presented. In section two, a novel seven-parameter model is developed. There are three terms in this model: surface scatter, bulk scatter, and retro-reflection. An optimizing modeling method is used to model the BRDF measurement data of typical samples in the UV band. In the last section, the calculation results using the new seven-parameter model are compared with those of the five-parameter model.
Measurement of BRDF in the UV band
A BRDF is actually an angle-resolved energy distribution. As shown in Fig. 1, an element of surface dA is illuminated by an incident wave with wave vector k_i. Letting k_r be the wave vector of the reflected direction, the symbol ẑ denotes the normal of the mean surface along the z axis and n denotes the normal direction of the micro-facet dA. Letting α be the angle between n and ẑ, and γ the angle between k_i and n, the angles α and γ satisfy geometric relationships in which θ and φ are the zenith and azimuthal angles, respectively, and the subscripts i and r denote the incident and reflected directions. The BRDF, f_r, is defined as the differential radiance dL_r scattered by a uniformly illuminated, homogeneous material per unit differential incident irradiance dE_i, i.e., f_r(θ_i, φ_i; θ_r, φ_r) = dL_r / dE_i. Another equivalent definition of the BRDF is f_r = P_s / (P_i Ω_s cos θ_r), where Ω_s is the solid angle through which the scattered power P_s is collected, normalized with respect to the total incident power P_i; the factor cos θ_r can be thought of as a correction that adjusts the illuminated area to its apparent area when viewed from the direction of the scatter. Figure 2 shows the schematic diagram of the instrument used to perform angle-resolved optical scatter measurements. This BRDF measurement system was designed and constructed by the Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences. It has three parts, i.e., the source, sample holder, and receiver. The design details of each part are described elsewhere [23].
The UV light source used in these investigations was a 150 mW deuterium lamp, which generates a continuous emission spectrum. After emerging from the exit slit, the beam is quasi-collimated by passing through a convex lens. After being reflected by two mirrors, the beam reaches the slit; after passing through the slit, the beam illuminates the sample and is then collected by the detector. The system can measure three-dimensional BRDF data by rotating the sample and light source using electromotors A, B, and C in different three-dimensional directions. The detector used was a spectrometer made by Ocean Optics Co. in the USA. The sample can be rotated about a vertical axis to allow different angles of incidence, and the detector can be rotated independently about the same axis to allow the measurement of the angular scatter distribution. The step of the zenith angle is 1°, and the step of the azimuth angle is 5°; near the specular direction, the sampling step is 1° to guarantee sufficient precision. The accuracy of the zenith and azimuth angles is 0.1°, and the relative error of the measured BRDF of this system is less than 5%. The incident UV wavelength varies from 250.45288 nm to 368.89652 nm. In total, 1024 output voltages V_s for the sample are measured within this wavelength range, and the voltages V_ref of the reflectance standard plate are measured at the corresponding wavelengths. The ratio of the two then satisfies f_s(θ_i, φ_i; θ_r, φ_r, λ) / f_ref(θ_i, φ_i; θ_r, φ_r, λ) = V_s / V_ref, where f_s and f_ref are the BRDFs of the sample and the reflectance standard plate at a given incident wavelength λ, respectively.
Notably, V_ref of the reference standard plate must be determined under the same measurement conditions used to detect the samples, and all measurements must be performed correspondingly. In our experiments, the reference standard plate is a pressed polytetrafluoroethylene plate, which can be considered a perfect Lambertian plate.
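A small sketch of this data reduction: for a perfect Lambertian reference of reflectance ρ_ref, f_ref = ρ_ref/π, so the sample BRDF follows directly from the voltage ratio above (ρ_ref ≈ 1 for pressed PTFE is an assumption here). The second helper applies the equivalent power-based definition f_r = P_s/(P_i Ω_s cos θ_r).

```python
import numpy as np

def brdf_from_voltages(V_s, V_ref, rho_ref=1.0):
    """Sample BRDF from the measured voltage ratio against a Lambertian
    reference plate, for which f_ref = rho_ref / pi (units: 1/sr)."""
    return (np.asarray(V_s, float) / np.asarray(V_ref, float)) * (rho_ref / np.pi)

def brdf_from_powers(P_s, P_i, omega_s, theta_r):
    """BRDF from collected scattered power P_s, incident power P_i, collection
    solid angle omega_s (sr), and scatter zenith angle theta_r (rad)."""
    return P_s / (P_i * omega_s * np.cos(theta_r))

f_sample = brdf_from_voltages(V_s=0.42, V_ref=0.95)  # illustrative voltages
```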
Development of a seven-parameter BRDF model
Torrance and Sparrow [10] deduced a useful light-scattering model, which assumes that the surface consists of small, randomly disposed mirror-like facets. An implied criterion of this model is that the root-mean-square (RMS) surface roughness is greater than the wavelength of the incident radiant energy. There are three parameters in the Torrance-Sparrow model, based on which Wu [11] proposed a five-parameter BRDF model, given by Eq. (6) (the full expression is not reproduced legibly in this excerpt). For isotropic materials at a given wavelength, the BRDF can be safely assumed to be invariant to azimuthal rotations of the incident beam. UV wavelengths are shorter than those of the visible and infrared bands; thus, a new model suited to the experimental measurement data in the UV band is developed. Our model partly adopts the model of Wu and partly adopts the Renhorn-Boreman fourteen-parameter analytical model. The corresponding bulk-scatter and retro-reflection terms are introduced to improve the accuracy of the simulation results of the five-parameter model.
According to the Renhorn-Boreman model [9], a Gaussian surface auto-covariance function g(x, y) is assumed, of the form g(x, y) = σ_g² exp[−ρ²(x² + y²)], where σ_g is the RMS surface roughness and ρ is the inverse of the surface correlation length. Its Fourier transform, corresponding to the surface power spectrum, is G(ξ, η), where the parameters ξ and η are direction cosines of the scattering geometry. An exponential auto-covariance function is also considered (Eq. (9)), and its Fourier transform can be obtained in closed form using Mathematica under the condition 0 < ξ < 1, 0 < η < 1. In the Renhorn-Boreman model, the exponential statistical distribution is combined with the Gaussian distribution. Taking the angle of incidence into account, the Renhorn-Boreman model yields a two-dimensional Lorentzian BRDF (Eq. (11)), where σ is the integrated reflectance.
The Renhorn-Boreman BRDF model, which considers surface scattering and includes bulk and retro-reflection scattering sections, is partly adopted here. The total BRDF is given by the sum of three separate parts. The surface-scattering part is described by Eq. (6), whereas the bulk-scattering and retro-reflection BRDFs are described by two-dimensional Lorentzian terms in which ρ_1 and ρ_2 are the parameters related to the bulk and retro-reflection scattering, respectively. Given that the proportions of the bulk and retro-reflection scattering sections are very small, the masking and shadowing effects are considered only in the surface-scattering section; for the bulk and retro-reflection sections, only the two-dimensional Lorentzian BRDF is considered.
Comparison of the seven-and five-parameter BRDF models
At this point, a new BRDF model has been developed. There are seven parameters to be determined in this model, namely {k_b, k_d, k_r, a, b, ρ_1, ρ_2}. The artificial immune network algorithm is used to fit the parameters of the model. The optimization determines the values of the fitting parameters of the BRDF that minimize the squared error between the measured data and the model.
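As an illustration of this fitting step, the sketch below minimizes the squared error with SciPy's least_squares standing in for the artificial immune network genetic algorithm; the three-parameter specular-plus-diffuse surrogate model and the synthetic data are assumptions for demonstration, not the paper's seven-parameter expression.

```python
import numpy as np
from scipy.optimize import least_squares

def brdf_model(theta_r, params, theta_i):
    """Surrogate BRDF: an exponential specular lobe plus a diffuse floor."""
    kb, kd, a = params
    return kb * np.exp(-a * (theta_r - theta_i) ** 2) + kd / np.pi

def residuals(params, theta_r, measured, theta_i):
    return brdf_model(theta_r, params, theta_i) - measured

theta_i = np.deg2rad(30.0)                       # incidence angle
theta_r = np.deg2rad(np.linspace(-70, 70, 141))  # scatter angles, as in the tests
measured = brdf_model(theta_r, [1.0, 0.3, 20.0], theta_i)  # synthetic "data"

fit = least_squares(residuals, x0=[0.5, 0.1, 10.0],
                    args=(theta_r, measured, theta_i))
rms_error = np.sqrt(np.mean(fit.fun ** 2))
print(fit.x, rms_error)  # recovered parameters and residual RMS error
```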
Figures 3 and 4 compare the measured BRDF data of sample #1 with the fitted BRDF derived from the seven-parameter (solid blue curves) and five-parameter (short-dash red curves) models. The sample used in this simulation is a fine-grinding rough aluminum plate. The zenith angles of incidence are 20°, 30°, 45°, and 60°, and the scattering angle ranges from -70° to 70°. At each given wavelength, six groups of measured data at various angles of incidence are used to fit the parameters to be determined (only four groups of measured data are shown in Figs. 3-6). For convenience, the azimuthal angles of incidence and scatter are both set to 0°. The incident wavelengths are 266 nm and 369 nm. Another example is shown in Figs. 5 and 6. Table 2 lists the model parameters, RMS error, and computing time for sample #2; the sample used in this case is a white painted surface.
The mirror-reflective section of sample #2 is more obvious than that of sample #1; thus, sample #2 is relatively smoother than sample #1. The seven-parameter model still has a smaller RMS error for the different incidence wavelengths in this case.
Conclusion
A new seven-parameter BRDF model of rough surfaces in the UV band is established. An angle-resolved BRDF measurement system was operated on some typical samples at different incident angles, and the measured data were compared with the results of the seven-parameter model. Both for different wavelengths of the same sample and for different samples, the new seven-parameter model fits the experimental data better than the five-parameter one. It can significantly improve the calculation accuracy, although it costs a little more calculation time.
Both the seven- and five-parameter models have their advantages and disadvantages. The optimization becomes more difficult when a model has many parameters, and the calculation time becomes longer. If the precision requirement is not high, the five-parameter model is good enough for most engineering-oriented application fields. But for some special uses, such as in the UV band where it is difficult to detect the scattering signal, a more precise model should be used when the calculation precision must be improved. This paper offers a newly developed BRDF model of rough surfaces in the UV band. The scattering properties of materials in the UV band are elucidated, and this kind of research is helpful for understanding the scattering properties of materials in the UV band.
There are two components in Eq. (6): one is the specular reflection from mirror-like surface facets, and the other is a diffuse component. In the modified Torrance-Sparrow model of Wu [11], there are five parameters a, b, k_b, k_d, and k_r to be determined. k_b is the mirror-direction component, k_d is related to the diffuse reflection component, and k_r is related to the distribution of the sub-surface facets dA (determined by the slope distribution of the surface).
An exponential function of (1 − cos γ) substitutes for the Fresnel reflectance function, and an additional factor accounts for the masking and shadowing effect. In this model, the exponential function in the Torrance-Sparrow model is replaced by an elliptical function to describe the distribution of the facet normals, and the two-parameter exponential function substituting for the Fresnel reflectance avoids the calculation of many trigonometric functions. The model can be used to describe isotropic surfaces under unpolarized incident light. The selection criterion for these five parameters is to minimize the RMS errors between the simulation and experimental data.
Fig. 4. Comparison of the seven- and five-parameter models of sample #1 for various angles of incidence at an incident wavelength of λ = 369 nm.
Fig. 5. Comparison of the seven- and five-parameter models of sample #2 for various angles of incidence at an incident wavelength of λ = 266 nm.
To ensure that the analytical two-dimensional integral of the BRDF is unitary, a normalization parameter N is introduced (its expression is not reproduced legibly in this excerpt).
Measured BRDF data of sample #1: fine-grinding rough aluminum plate.
"Physics"
] |
Stability Analysis of the Left Bank Slope of Baihetan Hydropower Station Based on the MF-DFA Method
Based on the left bank slope of the Baihetan hydropower station in Southwestern China, a high-precision microseismic monitoring system was established, and an early-warning model for surrounding rock mass deformation and failure based on MF-DFA was proposed. The results showed that the multifractal characteristics of the microseismic and blasting waveform time series in the left bank slope were obvious, and the multifractal spectrum width of the blasting waveform is much larger than that of the microseismic waveform. Before the slope cracks increased, the multifractal time-varying response characteristics of the microseismic waveform showed strong regularity, which could be regarded as a precursor of surrounding rock mass deformation. Before the deformation and failure of the surrounding rock mass, the multifractal spectrum width Δα showed an increasing trend while the multifractal spectrum difference of the microseismic waveforms Δf(α) presented a decreasing trend, which can be regarded as a precursor of surrounding rock mass deformation; when deformation and failure occurred, Δα showed a decreasing trend and Δf(α) showed an increasing trend, which can be regarded as the deformation-failure period; after the occurrence of deformation and failure, both Δα and Δf(α) showed a steady trend, and Δf(α) approached the zero line, which can be regarded as a stable period.
Introduction
The monitoring and early warning of high slope instability have always been a research hotspot and difficulty in the field of rock mechanics and engineering [1][2][3]. The analysis methods of slope stability mainly include the engineering geological analysis method, model testing, numerical simulation methods, limit equilibrium, limit analysis, and the reliability analysis method. These methods play an important role in solving the problem of rock slope stability research. Che et al. [4] conducted a series of shaking table physical tests to study the propagation of seismic waves in jointed rock mass and their influence on the stability of high and steep bedding rock slopes with discontinuous joints. Griffiths and Fenton [5,6] first applied the finite element method to the stability analysis of slopes, promoting the development of the finite element method in slope engineering. Baker and Garber [7] used the variational method to search for the minimum safety factor and its sliding surface. Hungr et al. [8] extended the Janbu method and Bishop method to three dimensions, which improved the calculation accuracy. Sutcliffe et al. [9] discussed the ultimate bearing capacity of jointed rock foundations based on limit analysis, presented an extensive parametric analysis, and researched the effect of strength properties and joint orientation on the bearing capacity of jointed rock. Xiao-Li and Liu et al. [10] analyzed the reliability of rock slope in the form of a stability factor and explored the relationship between probability of failure and mean safety factor.
Although some achievements have been made in slope stability analysis, the monitoring and early warning of instability have not been completely solved. The conventional monitoring adopts a 'point, line' layout, which has certain spatial limitations. In recent years, microseismic (MS) monitoring technology has been used as a high-precision rock mass fracture and deformation safety monitoring method. By pre-embedding MS sensors in the monitoring target area, the elastic waves released by microcracks inside the rock mass are picked up in real time, quantitative seismic parameters are obtained by automatic inversion calculation, the macroscopic deformation and failure of the rock mass can be predicted in advance, and the overall stability of the engineering rock mass can then be evaluated. Xu et al. [11,12] successfully carried out MS monitoring of the high and steep rock slope on the left bank of Jinping first-stage hydropower station and the right bank slope of Dagangshan hydropower station, evaluated the stability of the high rock slopes of these hydropower stations, and achieved many research results in the aspect of rock dynamic disaster. The MS signal collected by the MS monitoring system is a complex nonlinear and nonstationary time series. The fracture of the slope rock mass often has discontinuous multiscale characteristics. Compared with a simple fractal dimension, the multifractal method can describe the fluctuation of the rock fracture signal at different levels more accurately. Multifractals, also known as multiscale fractals, represent self-similar fractal systems with different local characteristics. From the perspective of statistical physics, a multifractal is an inhomogeneous set which consists of probability subsets with many different singular exponents. At present, the methods to estimate the multifractal spectrum mainly include the box counting method, histogram method, partition function method, Wavelet-Based Detrended Fluctuation Analysis method (WB-DFA), Wavelet Transform Modulus Maxima method (WTMM), and Multifractal Detrended Fluctuation Analysis method (MF-DFA). The box counting method adopts a regular gridding method, which does not reflect the distribution of the fractal body in the regional space, and the estimation of the fractal dimension is very unstable in some cases [13]. The histogram method converges very slowly. Although the partition function method is relatively simple, its calculation results cannot fully reflect the distribution of singularity. Manimaran et al. [14] first proposed the WB-DFA method, which mainly uses discrete wavelet transform to decompose signals and extract trends. It does not need to divide intervals in advance, and a wavelet of a certain shape is used to approximate trends. However, when the data length is short and the spectrum is narrow, the calculation error is larger. WTMM was proposed by Mallat and Zhong [15]; it mainly uses continuous wavelet transform for signal analysis, can process strongly unsteady time series, and can estimate the local Hölder exponent, but it requires high-quality data and more detailed parameters to be adjusted, and it cannot accurately distinguish between monofractal and multifractal series. MF-DFA was first proposed by Kantelhardt et al. [16]. Compared with the previous methods, its estimation result is better overall.
Based on the left bank slope of Baihetan hydropower station in Southwestern China, the MF-DFA method was adopted, the multifractal spectrum was estimated, the MF-DFA preset parameters were determined, the multifractal characteristics of the rock microcrack waveform and the blast vibration waveform were comparatively studied, and the nonlinear dynamic characteristics of the MS waveform were revealed. On this basis, the MS waveform multifractal time-varying response characteristics of rock slope deformation and failure process are discussed, and a rock slope deformation early warning model based on multifractal theory is established.
Engineering Background
The Baihetan hydropower station is located at the junction of Ningnan County in Sichuan Province and Qiaojia County in Yunnan Province, as shown in Figure 1. It is the second cascade hydropower station developed in the lower reaches of the Jinsha river. It is 182 km away from the upstream Wudongde hydropower station and 195 km away from the downstream Xiluodu hydropower station. The hydropower station adopts an all-underground powerhouse layout, and the left and right bank underground powerhouses adopt a symmetrical layout with a total installed capacity of 16000 MW. It is currently the hydropower station with the second largest installed capacity in the world. A picture of the slope excavation site is shown in Figure 2.
Geological Condition.
The two sides of the dam site are syncline geology, and the river valley has an asymmetric 'V' shape with the left bank low and the right bank high. The typical section of the left bank slope along the dam arch axis is shown in Figure 3(a). The direction of the abutment on the left bank is approximately north-south and inclines 60° to the east. The stratum lithology of the left dam foundation is mainly composed of laminar basalt (P2β) and a small amount of clastic rock (T3x) and limestone (P1m). The weak structural planes in the study area of the left bank slope mainly include faults F14 and F17, interlayer staggered zones C3-1 and C3 developed along the rock strata, intraformational disturbed zones LS331 and LS337, and many structural cracks, as shown in Figure 3(a). The geological engineering plane figure of the left bank slope is shown in Figure 3(b), and the strike rose diagram of dominant joints is shown in Figure 3(c). The mechanical properties of rock masses and weak structural planes were obtained by the experiments of Hydro-China Huadong Engineering Corporation, as shown in Table 1. The maximum principal stress is σ1 = 8.0-11.0 MPa with an orientation of N40W and a dip angle of 15°, the intermediate principal stress is σ2 = 7.0-9.0 MPa with an orientation of N12E and a dip angle of −48°, and the minimum principal stress is σ3 = 6.0-8.0 MPa with an orientation of N74E and a dip angle of −29° [18].
MS Monitoring of the Left Bank Slope
The left bank slope of Baihetan hydropower station adopted the MS monitoring system produced by the Canadian company ESG (Engineering Seismology Group). The MS monitoring system was successfully installed and operated on November 10, 2014. The network topology of the MS monitoring system is shown in Figure 4. The MS monitoring network consists of a Hyperion data processing system, 3 Paladin data acquisition substations, and 18 uniaxial acceleration sensors. The sensors are installed in the sidewalls of three tunnels (i.e., grouting tunnels and drainage tunnels) at different elevations (i.e., 610 m, 660 m, and 750 m). The Hyperion data processing system mainly includes the HANS signal real-time acquisition and recording software, the WaveVis waveform processing software, and the SeisVis 3-dimensional visualization software.
The HANS signal real-time acquisition and recording software can control parameters such as the sampling frequency, signal gain, and signal trigger threshold of the Paladin data acquisition substations. The sampling frequency of the MS monitoring system in the left bank slope of Baihetan is 20 kHz, triggering is based on the ratio of the short-time window average to the long-time window average with a threshold value of 3, and the response frequency range of the sensors is 50 Hz∼5 kHz. The WaveVis waveform processing and analysis software can automatically identify or manually process the collected waveform files and obtain the time, position, moment magnitude, energy release, and other parameters of MS events through inversion calculation.
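As an illustration of the window-ratio triggering just described, the sketch below implements a generic short-to-long window (STA/LTA) energy-ratio trigger in Python. Only the 20 kHz sampling rate and the trigger threshold of 3 are taken from the text; the window lengths, the function name, and the synthetic test signal are illustrative assumptions and not the ESG system's actual implementation.

```python
import numpy as np

def sta_lta_trigger(signal, fs, sta_win=0.01, lta_win=0.1, threshold=3.0):
    """Flag samples where the short-term/long-term average ratio exceeds the threshold.

    signal    : 1-D array of sensor samples
    fs        : sampling frequency in Hz (20 kHz for the system described above)
    sta_win   : short window length in seconds (assumed value)
    lta_win   : long window length in seconds (assumed value)
    threshold : trigger ratio (3, as stated in the text)
    """
    n_sta = max(int(sta_win * fs), 1)
    n_lta = max(int(lta_win * fs), 1)
    energy = signal.astype(float) ** 2

    # Moving averages of the signal energy over the short and long windows.
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")

    ratio = sta / np.maximum(lta, 1e-12)   # avoid division by zero
    return ratio > threshold

# Example: a short burst embedded in noise triggers, pure noise does not.
fs = 20_000                                 # 20 kHz sampling, as in the text
t = np.arange(0, 0.5, 1 / fs)
noise = 0.01 * np.random.randn(t.size)
burst = np.where((t > 0.25) & (t < 0.26), np.sin(2 * np.pi * 1000 * t), 0.0)
triggered = sta_lta_trigger(noise + burst, fs)
print("triggered samples:", int(triggered.sum()))
```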
The SeisVis 3-dimensional visualization software can display the processing results of MS events in real time and identify potential danger areas by analyzing the activity characteristics of MS events. The sensors receive the elastic waves and generate electrical signals, which are transmitted to the Paladin substations via cables. The Paladin data acquisition substations transmit the collected MS data to the Hyperion processing system through a communication cable (optical fiber cable). The Hyperion data processing system automatically filters background noise and completes record storage of MS events, providing users with complete waveform and spectrum information for analysis and research.
With comprehensive consideration of economic conditions, technical conditions, and engineering conditions, and aiming at the problems of excavation deformation and safe construction of the left bank slope of Baihetan hydropower station, the optimal arrangement plan of the sensors of the MS monitoring system is proposed as shown in Figure 5.
Basic Principle of the MF-DFA Method.
Fractals are generally divided into two categories: one is geometric self-similarity, or uniform fractals, and the other is statistical self-similarity, or nonuniform fractals, i.e., multifractals. Geometric self-similarity is usually described by a simple fractal dimension D, but fractals in nature are generally statistically self-similar, which need to be described by the multifractal spectrum f(α) − α. The multifractal spectrum f(α) − α, also known as the singular spectrum, is a commonly used parameter to describe multifractals. The segmented structure of a singular measure can be analyzed by the multifractal spectrum [19]. In multifractal calculation, the fractal body is divided into small regions for analysis.
Algorithm of the MF-DFA Method.
The MF-DFA calculation program consists of five steps, of which the first three steps are traditional DFA.
Let the time series of the MS waveform signal be {x(k), k = 1, 2, ..., N}, which is a nonlinear and nonstationary sequence.
Step 1: construct the signal profile Y(i) = Σ_{k=1}^{i} [x(k) − 〈x〉], i = 1, 2, ..., N, where 〈x〉 is the mean of the time series {x(k)}. Step 2: divide the signal profile Y(i) into N_s = int(N/s) non-overlapping intervals of equal time length s. Since N is not necessarily an integer multiple of s, the signal profile Y(i) will have a residual value during the division process. In order to make full use of the data and retain this part of the residual value, the above division process can be repeated from the tail of the signal profile Y(i); at this time, 2N_s equal-length intervals will be obtained.
Step 3: the least-squares method is used to fit the local trend y_v(i) of the data in each interval obtained in Step 2, and then the variance F²(s, v) = (1/s) Σ_{i=1}^{s} {Y[(v − 1)s + i] − y_v(i)}² is calculated for each interval v. This step is the most time-consuming part of the MF-DFA.
In the time series, the elimination of the 'trend' is completed by subtracting the fitting polynomial y_v(i) from the signal profile Y(i), so different fitting orders m reflect the degree of elimination of the 'trend.' Steps 1-3 are the traditional DFA method. Step 4: average over all 2N_s intervals to obtain the q-order fluctuation function F_q(s) = {(1/(2N_s)) Σ_{v=1}^{2N_s} [F²(s, v)]^{q/2}}^{1/q}.
Step 5: plot the q-order fluctuation function F_q(s) against s on a double-logarithmic graph.
If {x(k)} has self-similarity characteristics, i.e., {x(k)} is a multifractal time series, then there is a power-law relationship between the q-order fluctuation function F_q(s) and s: F_q(s) ∝ s^{h(q)}, where h(q) is the generalized Hurst exponent, which represents the correlation of the original sequence; the size of h(q) depends on the value of q. For stationary time series, when q = 2, h(2) is the same as the Hurst exponent. Normally, F_q(s) is an increasing function of s.
If {x(k)} is a single fractal time series, F²(s, v) has the same scale among all intervals, and h(q) is a constant, independent of the value of q.
In particular, when q = 0, formula (6) diverges, and h(0) is then determined by a logarithmic averaging process. For simplicity, assume that the length of {x(k)} is an integer multiple of the time length s. According to the definition of the profile Y(i) constructed in formula (1), the box probability measure P_s(v) can be obtained. The mass exponent τ(q) is determined by the partition function χ_q(s). By comparing formulas (12) and (13), the relation τ(q) = qh(q) − 1 can be obtained. The generalized multifractal dimension D(q) can be expressed as D(q) = τ(q)/(q − 1).
It is worth noting that, as mentioned in the previous section, for a single fractal time series, h(q) has nothing to do with the value of q, but the generalized multifractal dimension D(q) is still related to the value of q. The singularity exponent α and the multifractal spectral function f(α) can be obtained through the Legendre transform: α = h(q) + q h′(q) and f(α) = q[α − h(q)] + 1. According to the above algorithm, the calculation process is shown in Figure 6.
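To make the five steps concrete, the following Python sketch implements the MF-DFA procedure described above together with the Legendre transform to the multifractal spectrum. The scale range (2^8 to 2^12), the |q| limit of 20, and the fitting order m = 3 follow the parameters preset later in the text; the `mfdfa` function name, the q grid spacing, and the placeholder random waveform are assumptions for illustration only.

```python
import numpy as np

def mfdfa(x, scales, q_values, m=3):
    """Minimal MF-DFA sketch following the five steps described above.

    x        : 1-D signal (e.g., an MS waveform time series)
    scales   : iterable of interval lengths s
    q_values : iterable of weight factors q (q = 0 handled by log-averaging)
    m        : polynomial detrending order (m = 3 in the text)
    Returns h(q), the generalized Hurst exponents from the log-log fits.
    """
    x = np.asarray(x, dtype=float)
    # Step 1: signal profile Y(i) = cumulative sum of (x - <x>)
    profile = np.cumsum(x - x.mean())
    n = profile.size

    log_s, log_Fq = [], []
    for s in scales:
        n_seg = n // s
        if n_seg < 2:
            continue
        # Step 2: split the profile from the head and again from the tail (2*n_seg intervals)
        segments = np.concatenate([
            profile[: n_seg * s].reshape(n_seg, s),
            profile[n - n_seg * s:].reshape(n_seg, s),
        ])
        # Step 3: least-squares polynomial detrending and variance per interval
        t = np.arange(s)
        F2 = np.empty(2 * n_seg)
        for v, seg in enumerate(segments):
            coeffs = np.polyfit(t, seg, m)
            F2[v] = np.mean((seg - np.polyval(coeffs, t)) ** 2)
        # Step 4: q-order fluctuation function (log-averaging for q = 0)
        Fq_s = []
        for q in q_values:
            if abs(q) < 1e-8:
                Fq_s.append(np.exp(0.5 * np.mean(np.log(F2))))
            else:
                Fq_s.append(np.mean(F2 ** (q / 2.0)) ** (1.0 / q))
        log_s.append(np.log2(s))
        log_Fq.append(np.log2(Fq_s))

    # Step 5: slopes of log F_q(s) versus log s give h(q)
    log_s = np.array(log_s)
    log_Fq = np.array(log_Fq)                  # shape: (n_scales, n_q)
    return np.array([np.polyfit(log_s, log_Fq[:, j], 1)[0]
                     for j in range(log_Fq.shape[1])])

# Parameters preset in the text: s from 2^8 to 2^12, |q| up to 20, m = 3.
scales = [2 ** k for k in range(8, 13)]
q_values = np.arange(-20, 20.5, 0.5)
x = np.random.randn(20000)                     # placeholder waveform
h_q = mfdfa(x, scales, q_values, m=3)
tau_q = q_values * h_q - 1                     # mass exponent τ(q) = q·h(q) − 1
alpha = np.gradient(tau_q, q_values)           # singularity exponent (numerical Legendre transform)
f_alpha = q_values * alpha - tau_q             # multifractal spectrum f(α)
print("spectrum width Δα =", alpha.max() - alpha.min())
```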
Key Parameter Setting of the MF-DFA Method.
Multifractal theory has been widely applied in many fields such as physics, biomedicine, economics, materials science, and geology [20]. The characteristics of nonstationary time series obtained under different conditions are obviously different, such as the length of the signal time window and the fluctuation trend.
Therefore, it is necessary to perform trial calculations and preset the key parameters to obtain more reliable estimation results. The key parameters of the multifractal analysis mainly include the time length s, the weight factor q, and the fitting order m. The value of each parameter affects the calculation result from a different aspect, and, at the same time, we can understand the application of multifractal theory in engineering MS monitoring at a deeper level. The parameters of the MS waveform of the left bank slope of Baihetan are calculated and preset as follows. The value of q ranges from −20 to 20 with an equal increment Δq = 0.04, and a total of 101 curves were obtained. According to formula (7), the slope of the fitting curve is the generalized Hurst exponent, which is used for the estimation of the multifractal spectrum, as shown in Figure 7(b). It can be clearly seen from Figure 7 that when Log2(s) = 8∼12, the fitting effect is better. In particular, the local Hurst exponent h(q) at a large time length will show a smooth and slow change trend, which is related to a large calculation interval and a small number of intervals.
Based on the above considerations, the value of the multifractal time length s of the MS signal in the left bank slope of Baihetan is s_min = 2^8 = 256 and s_max = 2^12 = 4096.
Weight Factor q.
The value of the weight factor q should include positive and negative values in order to weight the fluctuation changes in the time series. The size of the q value represents the proportion of the RMS of local fluctuations in the whole. A large q value means that the big fluctuations occupy the dominant position in the whole time series, and h(q) mainly describes the scale behavior of the big fluctuations. On the contrary, for a small q value, small fluctuations are dominant, and h(q) mainly describes the scale behavior of the small fluctuations. Therefore, extreme values of q should be avoided to reduce the error they cause at the tails of the multifractal spectrum. In the actual calculation, when the value of q has no significant effect on the calculation result, the range of q can be cut off [22].
Four typical MS event waveform time series in the left bank slope of Baihetan were selected for MF-DFA calculation. The absolute value of q ranged from 0.5 to 50. The calculation results are shown in Table 2. It can be clearly seen from Table 2 that when |q| was 0.5∼20, the value of Δα varied greatly, indicating that the value of q has a great effect on the calculation result; therefore, the value range of q should be increased. However, when |q| was 20∼50, the variation error was controlled within 0.01, indicating that when |q| reached 20, the calculation result had tended to be stable, and the range of q can be cut off there.
The same result can also be obtained from the h(q) − q graph shown in Figure 8. The values of |q| are 0.5, 3, 20, and 50, respectively. When |q| was 0.5, the h(q) − q graph was approximately a straight line; when |q| was 3, the h(q) − q graph had a certain curvature; and when |q| was 20 and 50, the two curves had a certain similarity. From the perspective of 'global-local,' the h(q) − q graphs when |q| was 0.5 and 3 can be regarded as the 'local distribution graph' of the central part of h(q) − q when |q| was 20 and 50. Therefore, they cannot fully reflect the overall trend of the h(q) − q graph.
Fitting Order m.
In the MF-DFA calculation, a larger fitting order m can ensure that the multifractal spectrum is not affected by the nonstationary trend in the time series. However, a larger value of m may lead to overfitting of a small-sample time series, and the calculation time will increase. In the calculation process, in order to ensure the stability of F_q(s), m should also satisfy m + 2 ≤ s [23].
Taking the waveform of the MS event in the left bank slope of Baihetan at 13:30 on June 15, 2016, as an example, the F_q(s) − s relationships at different orders m were calculated, and the contour projection is shown in Figure 9. It can be seen from Figure 9 that when m = 1∼2, the F_q(s) − s relationship fluctuates greatly and the fitting effect is not good, while when m ≥ 3, the fitting effect is better. Considering the huge amount of data and the calculation time, the fitting order in the multifractal calculation of the MS waveform on the left bank slope of Baihetan is m = 3.
In summary, the preset parameters of the MS multifractal analysis of the left bank slope of Baihetan hydropower station are as follows: s_min = 2^8 = 256, s_max = 2^12 = 4096, |q| = 20, and m = 3.
Multifractal Characteristics of MS Signals
The signals collected by MS monitoring of the rock slope mainly include rock microfracture signals (MS signals), blasting vibration signals, mechanical vibration signals, current interference signals, car whistles, and unknown signals. The multifractal characteristics of the different signals are obviously different. The following mainly analyzes the multifractal characteristics of the rock microfracture and blasting vibration waveform time series by the MF-DFA method. Figure 10 shows a typical rock microfracture signal and a rock blasting vibration signal, and Figure 11 shows the multifractal spectra corresponding to the two types of typical signals.
Multifractal Spectrum of MS Signals.
In Figure 11, Δα is the width of the multifractal spectrum, which represents the multifractal strength of the waveform and the complexity of the fluctuation. The larger the Δα, the greater the multifractal strength of the waveform and the more intense and complex the fluctuation, and vice versa; the calculation can be expressed as Δα = α_max − α_min. It can be seen from Figure 11 that the multifractal spectrum width of the rock microfracture waveform, Δα1 = 0.99, is much smaller than the multifractal spectrum width of the blasting vibration waveform, Δα2 = 3.14, indicating that the multifractal strength of the blasting vibration waveform is larger and its fluctuation is more intense and complex. Δf(α) represents the proportion of large fluctuations and small fluctuations in the waveform. The larger the Δf(α), the larger the proportion of small fluctuations in the waveform, and vice versa. The calculation can be expressed as Δf(α) = f(α_max) − f(α_min). It can be seen from Figure 11 that the multifractal spectrum of the rock microfracture waveform, Δf(α1) = −0.06, is larger than that of the blasting waveform, Δf(α2) = −0.21, indicating that small fluctuations in the MS waveform account for a large proportion.
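Following these definitions, the spectrum width and the endpoint difference can be read off an estimated spectrum directly; the short snippet below assumes the `alpha` and `f_alpha` arrays produced by the MF-DFA sketch shown earlier and is only an illustration.

```python
import numpy as np

# Δα = α_max − α_min and Δf(α) = f(α_max) − f(α_min),
# using the alpha / f_alpha arrays from the earlier MF-DFA sketch.
i_min, i_max = int(np.argmin(alpha)), int(np.argmax(alpha))
delta_alpha = alpha[i_max] - alpha[i_min]
delta_f = f_alpha[i_max] - f_alpha[i_min]
print(f"Δα = {delta_alpha:.2f}, Δf(α) = {delta_f:.2f}")
```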
Multifractal Characteristics of Rock Microfracture
Waveform with Background Noise. In the previous section, the multifractal characteristics of typical MS waveforms were described. However, in actual MS monitoring of a rock slope, the collected MS signals are often mixed with various types of noise due to the complexity of the engineering field construction and of the rock mass itself. The following studies the effect of background noise on multifractal spectrum estimation by MF-DFA calculation. The MS waveforms A∼J with background noise collected by different channels at the same time were selected for MF-DFA calculation, and the waveforms are shown in Figure 12. It can be seen from Figure 12 that the amplitudes of the waveforms from A to J are in descending order, including A, D, H, and J with large background noise and B, C, E, and F with small background noise. The mass exponent τ(q) − q graph and the multifractal spectrum f(α) − α graph were made for A∼J, as shown in Figure 13. Figures 13(a) and 13(b) show the multifractal characteristics of the waveforms corresponding to a window size of 1500 ms, and Figures 13(c) and 13(d) show the multifractal characteristics of the waveforms corresponding to a window size of 750 ms (i.e., 375 ms to 1125 ms in Figure 12). There are two main advantages of focusing the multifractal analysis window on the effective waveform area: (1) speeding up calculation efficiency, which is suitable for a large number of waveform multifractal calculations, and (2) better reflecting the multifractal characteristics of the effective waveforms, which is beneficial to distinguishing the multifractal differences of waveforms produced by different inducements. In general, different waveforms used for multifractal difference analysis should have the same time window size.
According to the mass exponent τ(q) − q graph shown in Figure 13(a), the curves corresponding to the rock microfracture waveforms A, D, H, and J with background noise are located above the other curves at the left end and below them at the right end. Meanwhile, the waveforms A, D, H, and J have a smaller α_max in the multifractal spectrum f(α) − α diagram shown in Figure 13(b). The time window was then narrowed to make the rock microfracture waveform fill the window as much as possible, to improve the effectiveness of the waveform multifractal analysis, and the MF-DFA was calculated again. The results of the recalculation are shown in Figures 13(c) and 13(d). The calculation results are more obvious: the width of the multifractal spectrum Δα of the rock microfracture waveforms A, D, H, and J with large background noise is small, while the microfracture waveforms B and C with low background noise and large amplitude have a larger spectrum width Δα.
This result reflects the effectiveness of the MF-DFA method in removing nonstationary trends; that is, the MF-DFA method puts more emphasis on the multifractal characteristics of the rock microfracture waveform after removing noise. Therefore, when the MF-DFA is used to analyze MS waveform data collected at different channels without sufficient noise filtering, the waveform with small background noise and large amplitude should be selected for analysis as much as possible to improve the reliability of the multifractal spectrum estimation. It is therefore necessary to focus on the analysis of the time-varying multifractal characteristics of the rock microfracture waveform during the monitoring period. Combined with the field macrofailure law, a rock slope deformation early warning model based on the MS waveform multifractal time-varying response characteristics was established. Figure 15 shows the time-varying law of the multifractal spectrum parameters Δα and Δf(α) of the MS waveform of the rock mass near the crack T3301. In Figure 15, the upper limit of Δα and the lower limit of Δf(α) have very obvious time series characteristics, so the evolutionary trend of Δα takes the upper limit of the MS events, and the evolutionary trend of Δf(α) takes the lower limit. As can be seen from Figure 15, since June 1, the number of MS events had not increased rapidly, but the overall Δα showed a sharp increasing trend, reaching a maximum of 1.53, indicating that the multifractal strength of the rock microfracture waveform increased and the fluctuations became complex and intense. The corresponding Δf(α) showed a sharp decreasing trend, with the minimum reaching −0.58, indicating that the proportion of large fluctuations in the waveform time series increased. This was due to the strong unloading effect of blasting excavation, which hindered the further expansion of microcracks in the hard rock mass. The number of MS events induced was small, but the local stress was highly concentrated and the strain energy increased, that is, the 'quiet period' before deformation and failure [24]. After June 15, Δα briefly fell back to a valley and then increased to 1.3 again, and the corresponding Δf(α) increased from its valley. In this process, the local stress continued to increase until the rock bearing capacity was exceeded, resulting in the connection of microcracks at some locations; the stress concentration position began to shift, and the strain energy began to be released. When the crack expansion was blocked again, the local stress reconcentrated, and the strain energy accumulated again. After June 29, Δα decreased as a whole and was in a stable state, indicating that the fluctuation of the microfracture waveform time series was relatively smooth. The value of Δf(α) also stabilized near the zero line after a small increase. At this time, the microcracks were almost completely connected under the action of high stress, forming cracks and causing macroscopic failure, and the stress and strain energy were released.
Comparison with Conventional Monitoring.
The time-varying response characteristics of the multifractal spectrum parameters Δα and Δf(α) were closely related to the initiation, development, expansion, and penetration of rock microfractures. Before the deformation and failure of the rock mass, Δα showed an increasing trend and Δf(α) showed a decreasing trend, which can be regarded as a precursor signal for deformation warning; when deformation and failure occurred, Δα showed a decreasing trend and Δf(α) showed an increasing trend, which can be regarded as a deformation failure period; after deformation and failure, both Δα and Δf(α) showed a steady trend, and Δf(α) as a whole stayed near the zero line, which can be regarded as a stable period. The division result is shown in Figure 16. In particular, when Δα and Δf(α) increase and decrease several times, this indicates that the stress concentration degree is getting higher and higher and that the strain energy is accumulating more and more, which means that larger deformation and failure will occur; this is also regarded as the deformation period. Therefore, reinforcement measures should be taken immediately for the slope to timely control the continuous growth of cracks and prevent slope deformation and instability failure.
Compared with conventional monitoring, the monitoring results of the displacement meter CX04 (Figure 17(a)) installed at the height of 615 m are shown in Figure 17(b). From the installation of CX04 until June 17, the cumulative deformation of the T3301 crack was 1.35 mm, indicating that the deformation of T3301 was in a slow-growing state, that is, in the 'precursor period.' From June 17 to July 8, the cumulative deformation displacement of the T3301 crack was about 6.30 mm. It was the largest deformation region within the monitoring range of CX04 during the period, and the deformation rate reached about 0.3 mm/d; this period was the 'deformation period.' From July 8 to July 15, the cumulative deformation displacement of the T3301 crack was 0.87 mm, and the average deformation rate was reduced to 0.11 mm/d. After July 12, the rock mass had almost no deformation and reached a stable state, which corresponded to the 'stable period.' Combined with the field construction, at the end of June and early July, prestressed anchor cables were installed at the 605-600 m elevation of the dam foundation slope. This enhanced the integrity of the slope rock mass, increased the bearing capacity of the slope rock mass to a certain extent, increased the antisliding friction resistance of the unstable surface, improved the stress adjustment path, effectively controlled the further development of the crack, and made the slope rock mass temporarily stable. The early warning analysis of rock slope deformation and failure based on the multifractal time-varying response characteristics of MS signals had a good correspondence with conventional monitoring in time and space. It can accurately describe the mechanical response characteristics of the rock slope under the action of excavation unloading. This proves the feasibility of the early warning method for rock slope deformation and failure in this study, which can provide an important reference for rock slope design and safe construction.
Early Warning Model of Rock Slope Deformation
Based on the MF-DFA Method. In actual situations, it is impossible to know when Δα and Δf(α) reach a peak or valley before large deformation occurs, so the 'precursor period' and 'deformation period' should not be divided exactly at the peak or valley. The warning model is therefore optimized according to the actual situation, as shown in Figure 18. After a period of 'stable period,' when Δα shows an increasing trend for the first time and Δf(α) shows a decreasing trend for the first time, the 'precursor period' begins. When Δα reaches its peak and shows a decreasing trend and Δf(α) drops to its valley and shows an increasing trend, the 'precursor period' ends and the 'deformation period' starts. If Δα and Δf(α) increase and decrease several times afterwards, the slope is still considered to be in the 'deformation period' until the arrival of the next 'stable period.'
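As a rough illustration of the optimized warning logic above, the sketch below classifies the current period from the recent trends of Δα and Δf(α). The three-state scheme follows the text; the trend test, the window length, and the example values are simplifying assumptions.

```python
def classify_period(delta_alpha_history, delta_f_history, window=3):
    """Classify the monitoring state from recent Δα and Δf(α) values (illustrative only)."""
    def rising(series):
        recent = series[-window:]
        return recent[-1] > recent[0]

    a_up, f_up = rising(delta_alpha_history), rising(delta_f_history)
    if a_up and not f_up:
        return "precursor period"    # Δα increasing, Δf(α) decreasing
    if (not a_up) and f_up:
        return "deformation period"  # Δα falling from its peak, Δf(α) rising from its valley
    return "stable period"           # both roughly steady, Δf(α) near the zero line

# Hypothetical histories mimicking the precursor stage before failure
print(classify_period([0.8, 1.1, 1.5], [-0.2, -0.4, -0.6]))   # -> precursor period
```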
Conclusion
The MS monitoring technique was applied to the stability analysis of the left bank slope of Baihetan hydropower station. The multifractal characteristics of the MS waveforms were analyzed based on the MF-DFA method. The following conclusions were drawn.
Based on the MF-DFA method, the multifractal preset parameters of the MS waveform of the left bank slope of Baihetan hydropower station were determined as follows: s_min = 2^8 = 256, s_max = 2^12 = 4096, |q| = 20, and m = 3. The difference between the multifractal spectra of the typical MS waveform and the blasting waveform was obvious: the width of the multifractal spectrum Δα of the MS waveform was much smaller than that of the blasting waveform, indicating that the multifractal strength of the blasting waveform was larger and its fluctuation was more intense and complex. The multifractal spectrum Δf(α) of the MS waveform was larger than that of the blasting waveform, indicating that small fluctuations in the MS waveform account for a relatively large proportion. Furthermore, selecting a waveform with small background noise and large amplitude for analysis can improve the reliability of the multifractal spectrum estimation results. An early warning model of deformation and failure of the rock slope based on MF-DFA was proposed. Before the deformation and failure of the surrounding rock mass, Δα showed an increasing trend and Δf(α) showed a decreasing trend, which can be regarded as the 'precursor period;' when deformation and failure occurred, Δα showed a decreasing trend and Δf(α) showed an increasing trend, which can be regarded as the 'deformation period;' after deformation and failure, both Δα and Δf(α) showed a steady trend, and Δf(α) as a whole stayed near the zero line, which can be regarded as the 'stable period.' Compared to the conventional monitoring data, the early warning model was verified to be feasible. The MF-DFA-based early warning method for rock slope deformation and failure can accurately describe the mechanical response characteristics of the rock slope under excavation and unloading.
Data Availability
Some or all data and codes generated or used during the study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper. | 7,788.2 | 2020-07-31T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Geology"
] |
Joint Distraction Combined with Mesenchymal Stem Cell Intra-articular Injection Attenuates Osteoarthritis
Background: Conservative treatments of osteoarthritis (OA) are limited to symptom relief, and novel methods to attenuate OA progression are lacking. Objective: In this study, we investigated the effectiveness of knee joint distraction (KJD) combined with mesenchymal stem cell (MSC) intra-articular injection (KJD+MSCs) in an OA rat model. Methods: The OA rat model was established by anterior cruciate ligament transection plus medial meniscus resection in the right knee of SD rats. The KJD+MSCs treatment started 3 weeks after the OA surgery. There were two other groups: knee joint distraction only (KJD) and no treatment (OA). Three weeks after the treatment, the distraction external fixators were removed and the rats were kept for a further 3 weeks. The rats were then terminated, and samples were subjected to micro-CT and histology examinations to evaluate the changes of the articular cartilage tissues, the subchondral bone, and the secondary inflammation. Results: Safranin-O/fast green staining showed that articular cartilage injury was most obvious in the OA group, less in the KJD group, and least in the KJD+MSCs group. Immunohistochemistry examinations showed that the KJD+MSCs group had the lowest percentage of MMP13- or ColX-positive chondrocytes compared with the other groups. Micro-CT data indicated that the abnormal change in the subchondral region of the tibia in the KJD+MSCs group was significantly less than that in the KJD group or OA group. Finally, the immunohistochemistry result showed that the knee joint in the KJD+MSCs group had the least number of CD68-positive cells among all the groups. Conclusions: Joint distraction combined with mesenchymal stem cell injection alleviated cartilage degradation and reduced irregular ossification of subchondral bone and secondary inflammation, suggesting it could be a new method to halt OA progression. Abbreviations: Tb.Sp.: trabecular separation; SMI: structure model index; MMP13: matrix metallopeptidase 13; ColX: type X collagen.
Introduction
Osteoarthritis (OA) is a common degenerative disease of joints. Its pathological features are mainly cartilage degeneration, irregular subchondral ossification, and secondary inflammation of the synovial membrane [1][2][3]. The clinical treatment of OA is very limited. Drug therapy can only relieve the pain of the knee joint, but does not slow the progress of the disease. Many patients must undergo total arthroplasty to alleviate stiffness and improve the function of joint movement in the later stages of the disease. Epidemiological studies show that about 44% of patients receiving total arthroplasty are younger than 65 years old [4], which means they may have to face joint revision surgery in their later years.
Therefore, new therapies are urgently needed to slow the progress of OA.
Abnormal mechanical loading plays an important role in OA progression [5][6][7][8]. Joint distraction is a novel therapy that can modulate this overloading and enable intrinsic joint tissue regeneration, supposedly by restoring proper biochemical and biomechanical joint homeostasis [8,9]. Joint distraction is a surgical procedure in which the two bony ends of a joint are gradually separated to a certain extent and for a certain period of time by use of an external fixation frame. In the first report, in 1978, Judet and his team distracted an ankle joint with a large cartilage defect using a movable hinged external fixator, and the results showed that the movable joint distraction treatment enabled satisfactory repair of the defect in the load-bearing area [10]. Subsequently, the treatment of OA by joint distraction received more attention.
Following on from a series of animal and clinical experiments on joint distraction, satisfactory results have shown that joint distraction can promote cartilage repair, alleviate pain in patients with OA, and improve quality of life [7,11]. In a clinical study, Professor Lafeber's team evaluated the 9-year treatment outcome of knee joint distraction (KJD) and found that KJD promoted long-lasting clinical and structural improvement [11]. In our previous study, we found that KJD can attenuate OA progression by reducing cartilage damage, subchondral bone abnormalities, and secondary inflammation in a rat OA model [7]. Unfortunately, even though the experimental treatment of OA by joint distraction has made great progress, there are still some limitations. First, almost all researchers focus on the alleviation of cartilage degradation in OA by joint distraction, while there are few reports on the other two important pathological manifestations of OA (subchondral osteosclerosis and secondary inflammation). This underlies deficiencies in understanding the mechanism of the treatment of OA by joint distraction. Second, in clinical studies, the treatment of OA by joint distraction alone takes a relatively long period of time, usually 2-3 months, which affects the quality of daily life of patients. Also, there are complications such as nail tract infection [12,13], which can increase the risk of compromising the success of a future joint replacement. To shorten the treatment time of joint distraction and give patients a good treatment experience, a better treatment needs to be found.
Mesenchymal stem cells (MSCs) are a kind of stem cell with a wide range of sources; they are easy to collect and have multi-differentiation potential. They have long been considered an ideal cell source for stem cell therapy [14][15][16][17]. In animal and clinical experiments, there have been many reports of MSC intra-articular injection for the treatment of OA. Overall, these reports indicated that the use of MSCs in the treatment or prevention of OA can alleviate cartilage degradation and subchondral sclerosis [7,[18][19][20].
For example, in the articles published by our research group in 2017, MSCs pre-treated by chondrogenic induction and differentiation and then reverse-differentiated into stem cells had a better therapeutic effect than non-pre-treated MSCs or no MSC treatment in a rat OA model, and the underlying mechanism is mainly epigenetic modification [19,20]. In clinical research, Wakitani et al. [21] first reported the transplantation of MSCs mixed with type I collagen hydrogel into the cartilage defect of OA patients. Arthroscopic results showed that, 24 weeks later, the original cartilage defect had been repaired by white tissue whose morphology was similar to hyaline cartilage. The biopsy section analysis also confirmed that hyaline cartilage regeneration did exist at the transplantation site. In another study, published by our research team, we found that MSC injection treatment has a potential therapeutic effect for wrist OA, as shown by numerical improvement in performance and pain scores [22]. From these reports of preclinical and clinical studies, it has been shown that MSC injection can have a positive effect as an OA therapy. However, some researchers believe that MSC injection can only be used in the early stage of OA. In the advanced stage, the severe subchondral sclerosis of the joints and the increasing secondary inflammation form a hostile micro-environment for MSCs, which would hinder their therapeutic effect [23]. Therefore, finding a combination therapy which can modulate the micro-environment of the joint would greatly improve the treatment of OA with MSC transplantation. Joint distraction may allow that modulation.
In this study, we explore the feasibility of knee joint distraction combined with mesenchymal stem cell injection (KJD + MSCs) in the treatment of OA in a rat model. We focus on the effects of this treatment on cartilage degeneration, irregular subchondral bone remodeling, and secondary inflammation, which are known as the characteristic changes of OA.
Isolation and cultivation of MSCs
MSCs were isolated and cultivated from transgenic Sprague Dawley (SD) rats expressing green fluorescent protein (GFP) in Professor Gang Li's laboratory at the Prince of Wales Hospital, Chinese University of Hong Kong. First, one-week-old transgenic SD rats were killed and complete femurs were placed in DMEM culture medium with 10% fetal bovine serum (FBS). Second, marrow was removed from the marrow cavity with a one-milliliter syringe to get as much bone marrow as possible into the cell culture dish.
Third, the cell suspension was put into an incubator. The culture medium was exchanged to remove non-adherent cells after 48 hours. When the bone marrow cells reached about 80% confluence, the cells were digested with 0.25% trypsin (Amersco, Ohio, USA) containing 0.02% EDTA and passaged at 1:3. MSCs from the 3rd to 6th passages were used in the subsequent experiments.
Animals
All SD rats were 16 weeks old and weighed 450-500 g. All animal experiments were approved by the Animal Ethics Committee of Jinan University (Ethics Reference No.: 20180824-04). Only the right knee of each rat received the surgery. Each rat was injected with a solution of 0.2% (vol/vol) xylazine and 1% (vol/vol) ketamine in PBS for anesthesia. The right hind leg was disinfected with 70% alcohol after removing the hair, and the knee was then exposed through a medial parapatellar approach. The patella was dislocated to the lateral side and the knee joint was fixed in the full flexion position. Then the anterior cruciate ligament was transected and the medial meniscus was resected using micro-scissors (ACLT + MMx). After the surgery, the incisions were sutured in turn.
The rats were nursed normally for three weeks after surgery without any restriction on activity. Previous studies have shown that rats have persistent pathological changes of post-traumatic OA by this time [24].
The rats were randomly divided into three groups (n = 5 each): an OA group, a knee joint distraction group (KJD), and a knee joint distraction combined with mesenchymal stem cell (MSC) injection group (KJD + MSCs). The rats in the OA group were treated as controls. In the KJD group, the rats were fitted with an external fixator and treated with joint distraction for three weeks. The KJD + MSCs group rats received joint distraction therapy, and then MSCs were injected into the joint cavity (100 µl, 0.5 × 10^6 cells) three days after joint distraction. The cells were mixed with clinical sodium hyaluronic acid (HA) (Biochemical Industry Corporation; Imported Drug Registration Certificate No. H20140533). In the KJD group, HA without cells was injected as a control. In order to study whether this therapy has a relatively long-term therapeutic effect, we did not immediately sacrifice the rats after treatment, but dismantled the external fixator three weeks after the joint distraction treatment, without restricting the activity of the rats, and followed up for another three weeks. After that, the rats were killed with an excessive dose of anesthetics, and the samples were taken for further analysis (Fig. 1).
Joint distraction procedure
We designed a specific external fixator for this study. This external fixator consists of three nails (1.2 mm in diameter) that are surgically drilled into the medial side of the knee joint. With the customized three-point positioner, we fixed the uppermost nail to the medial epicondyle of the femur, and the other two nails to the upper segment of the tibia. To ensure joint mobility during distraction, we added a customized cannula (1.3 mm in diameter) to the nail at the medial epicondyle of the femur. Finally, the customized external fixator was fixed to the three nails. In rats receiving joint distraction, the joint space was stretched by one mm. This distance was measured by X-ray with reference to the normal joint space of the control side [7]. The maximum flexion and extension angle of the knee joint was observed after the operation. We used X-ray imaging to confirm the success of joint distraction before and after surgery. All animals were allowed to move freely without restriction post-operatively (Fig. 1).
Digital radiographs
Joint space width of the rat right knee was measured using the digital X-ray (MX-20, Faxitron X-Ray Corp., Wheeling, IL, US) with an exposure time of 6000 ms and a voltage of 32 kV.
Micro-CT analysis
Changes in the subchondral bone microstructure of the rats were detected and quantitatively analyzed by high-energy micro-CT (UCT40, Scanco Medical, Basserdorf, Switzerland). At the end of the study, the knee joints of the rats were separated with all corresponding soft tissues removed at the same time, fixed in 10% formalin for 24 hours, and then examined by µCT. A three-dimensional (3D) reconstruction image of mineralized tissue was made of the subchondral bone area of the tibia on one side of the rat. The threshold value was 160 mg hydroxyapatite/cm³, and a Gaussian filter (sigma = 0.8, support = 2) was used to suppress noise. Sagittal images of the tibial subchondral bone were used to perform 3D histomorphometric analysis. We defined the region of interest to cover the whole medial compartment of the subchondral bone and used a total of 100 consecutive images from the medial tibial plateau for 3D reconstruction and analysis.
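The noise-suppression and thresholding step can be sketched as follows. The sigma of 0.8, the 'support' of 2, and the 160 mg HA/cm³ threshold come from the text; mapping Scanco's 'support' onto scipy's `truncate` argument, the function name, and the synthetic volume are assumptions, so this is an illustration rather than the scanner software's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def segment_bone(volume_mgHA, sigma=0.8, support=2, threshold=160.0):
    """Smooth a micro-CT volume and threshold it to a binary bone mask.

    volume_mgHA : 3-D array of voxel densities in mg hydroxyapatite/cm^3
    sigma, support, threshold follow the values in the text; interpreting
    'support' as the scipy `truncate` radius (in sigmas) is an assumption.
    """
    smoothed = ndimage.gaussian_filter(volume_mgHA, sigma=sigma, truncate=support)
    return smoothed >= threshold   # binary mask of mineralized tissue

# Example on synthetic data: a denser block inside a noisy background.
vol = np.random.normal(100, 30, size=(64, 64, 100))
vol[20:40, 20:40, :] += 150
mask = segment_bone(vol)
bone_volume_fraction = mask.mean()   # BV/TV over the region of interest
print(f"BV/TV = {bone_volume_fraction:.3f}")
```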
Morphological and Immunohistochemical analysis
After removing the soft tissue from the knee joints of the rats, the samples were fixed in 10% formalin for 48 hours, then decalcified in 10% EDTA for 21 days, and finally embedded in paraffin. Sagittal sections 5 µm thick were cut through the whole right knee. The sections were mounted on slides and stained with safranin-O/fast green.
Statistical analysis
In accordance with the ARRIVE guidelines, we have reported measures of precision, confidence, and sample size to provide an indication of significance. All statistical analyses were performed using SPSS 15.0 software. The data were analyzed via one-way ANOVA. Assumptions of the ANOVA were assessed using the Shapiro-Wilk test of normality and Levene's test for homogeneity of variance. The result of Levene's test was used to determine the post hoc testing strategy. If not significant, the LSD-t post hoc test was employed. If Levene's test was significant, the ANOVA was followed by a Dunnett's T3 post hoc test for unequal variance. Data are reported as mean ± standard deviation, and values of p < 0.05 were considered significant. The graphs were generated in GraphPad Prism 6 (GraphPad Software, San Diego, CA, USA).
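A minimal sketch of this analysis pipeline, using SciPy instead of SPSS, is shown below. The test sequence (Shapiro-Wilk, Levene, one-way ANOVA, then the post hoc branch) follows the text; the group means, sample values, and function name are hypothetical, and the LSD-t / Dunnett's T3 follow-up is only indicated by a message since SciPy does not provide those tests directly.

```python
import numpy as np
from scipy import stats

def one_way_anova_with_checks(groups, alpha=0.05):
    """Run normality and variance checks, then one-way ANOVA, on a dict of group data."""
    data = list(groups.values())
    for name, values in groups.items():
        w, p_norm = stats.shapiro(values)                 # normality within each group
        print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p_norm:.3f}")

    levene_stat, p_levene = stats.levene(*data)           # homogeneity of variance
    f_stat, p_anova = stats.f_oneway(*data)               # one-way ANOVA
    print(f"Levene p={p_levene:.3f}, ANOVA F={f_stat:.2f}, p={p_anova:.3f}")

    if p_anova < alpha:
        if p_levene >= alpha:
            print("Equal variances assumed -> follow up with LSD-t pairwise comparisons")
        else:
            print("Unequal variances -> follow up with Dunnett's T3 comparisons")

# Hypothetical example data for the three groups (n = 5 each, as in the study design).
rng = np.random.default_rng(0)
groups = {
    "OA": rng.normal(10.0, 1.5, 5),
    "KJD": rng.normal(8.0, 1.5, 5),
    "KJD+MSCs": rng.normal(6.0, 1.5, 5),
}
one_way_anova_with_checks(groups)
```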
Results
Joint distraction combined with mesenchymal stem cell injection can alleviate cartilage degradation
X-rays of the knee joint indicated that KJD significantly enlarged the knee joint space in the distraction group compared with that in the OA group (Fig. 2a). Safranin-O/fast green staining showed that cartilage injury was most obvious in the OA group compared to the other two groups. KJD + MSCs had the smallest degree of cartilage damage among the three groups (Fig. 2b). More importantly, the number of MMP13 (an enzyme associated with cartilage degradation) positive chondrocytes was significantly lower in the KJD + MSCs group than in the KJD or OA groups. Similarly, the number of Col X (a marker of cartilage hypertrophy) positive chondrocytes in the KJD + MSCs group was significantly less than that in the KJD and OA groups (Fig. 3). These results suggest that joint distraction combined with mesenchymal stem cell injection can alleviate cartilage degradation and limit OA progression, and the therapeutic effect lasts longer than that of joint distraction alone.
Joint distraction combined with mesenchymal stem cell injection can reduce the secondary inflammation
To investigate whether this treatment would attenuate the secondary inflammation of the OA joint, CD68 (a marker of macrophages) positive cells were detected in the knee joint space of the affected joint in the different groups. Many cells infiltrated into the knee joint space in all three groups. We used immunohistochemical staining to count the CD68-positive cells in the knee joint space as a measure of the level of secondary inflammation. Several CD68-positive cells were found in the joint space in all three groups. Immunohistochemical staining showed that the level of secondary inflammation in the OA group was more severe than in the KJD and KJD + MSCs groups (Fig. 5). The fewest CD68-positive cells in the joint space were found in the KJD + MSCs group, indicating that the KJD + MSCs treatment has excellent therapeutic effects on reducing secondary inflammation in the OA joint (Fig. 5).
Discussion
In this study, we used an ACLT + MMx OA model to evaluate the effect of KJD + MSCs intra-articular injection on OA. We found that KJD + MSCs injection can effectively reduce secondary inflammation, cartilage degradation, and irregular ossification of the subchondral bone. Although in this experiment KJD alone showed improvement over the non-treated OA group, the combination therapy was significantly better than both. The mechanism is hypothesized to be that joint distraction relieves aberrant mechanical stress while MSCs regulate the inflammation, allowing intrinsic tissue repair of the joint. Therefore, it is indicated that KJD combined with MSC injection can achieve a mutually reinforcing therapeutic effect in OA treatment.
The therapy of young OA patients is a worldwide challenge. Younger patients have a high demand for joint functional activities, which will accelerate the wear of artificial joints. Second, most joint replacement patients face the problem of revision surgery due to the limited service life of current joint prostheses. Therefore, clinical and basic researchers are trying to find alternative treatments for this group of patients. Since the first use of joint distraction in the treatment of arthritis, it has been considered an alternative treatment for younger patients with OA [25], and clinical and animal experiments have shown the advantages of this treatment approach [26]. For example, Yang Xu et al. [27] used joint distraction to treat severe traumatic ankle arthritis in patients with an average age of 30.3 ± 14.3 years.
After treatment, the joint pain of the patients was relieved, and the joint space increased by three mm after one year. In our previous study of KJD in the rat OA model, we observed that the cartilage defect of the treated group was smaller than that of the control group, and the inflammatory factors and subchondral bone mineral density also were better than those of the control group. Therefore, we inferred that joint distraction could decelerate secondary inflammation, cartilage degeneration, and subchondral sclerosis [7]. However, our previous study did not include a follow-up period, so we did not know how long the therapeutic effect lasted after joint distraction. To answer this question, in this study we defined a three-week follow-up period after KJD treatment. Our results show that KJD + MSCs has a better therapeutic effect than KJD only or no treatment, as evidenced by the level of cartilage damage (Figs. 2-3), abnormal subchondral bone remodeling (Fig. 4), and secondary inflammation (Fig. 5). Our results are consistent with other research using KJD in showing that this therapy can delay OA progression. We also found that the therapeutic effect is better when combined with MSCs, indicating that MSCs play an important role in regulating the micro-environment of the OA joint (Figs. 2-5). MSCs are a kind of adult stem cell with multiple differentiation potential that can be isolated from various tissues such as bone marrow, adipose tissue, and synovium. MSC treatment of OA has been widely used in animal and clinical experiments. These experiments have demonstrated safety and are associated with cartilage regeneration, pain relief, and knee joint function improvement. Most of the experiments showed satisfactory results. According to previous experiments, it can be concluded that MSCs play a therapeutic role mainly through directional differentiation, regulation of immunity, and anti-inflammatory and exocrine effects [28][29][30][31]. For example, in a clinical study published by our research team, we found that MSC injection treatment has a potential therapeutic effect for wrist OA, as shown by numerical improvement in performance and pain scores [22]. In animal studies published by our research group, we found that MSCs that were pre-treated by chondrogenic induction and differentiation and then reverse-differentiated into stem cells had better therapeutic effects, relieving cartilage damage and subchondral sclerosis, than the OA group or the group treated with non-pre-treated MSCs [19,20]. The articular cavity environment of patients with OA is different from that of normal people. Inflammatory factors (such as IL-1 and TNF), the pericellular matrix (such as the hyaluronic acid concentration), and an extremely hypoxic environment affect the growth and function of MSCs [7,32]. In the present study, we observed that KJD + MSCs had better results than the other groups (Figs. 2-5). In particular, we found that the number of CD68+ (macrophage marker) cells in the affected joint decreased significantly more in the KJD + MSCs group than in the other groups (Fig. 5). In this study, we did not find that MSCs differentiated directly into chondrocytes (data not shown). Taken together, we speculate that in this study MSCs play a role in the regulation of immunity and have anti-inflammatory and exocrine effects, which, combined with joint distraction, regulate the micro-environment of the OA joint and facilitate intrinsic joint repair (Figs. 2-5).
There are several mechanisms to explain the treatment of OA by joint distraction. First, the alignment of the joint is corrected to avoid compression of the damaged joint, which is conducive to the repair of the articular surface [7]. Second, joint distraction can generate intermittent hydrostatic pressure, thus stimulating MSCs in the joint cavity to play a therapeutic role [13]. However, the number of MSCs in the articular cavity is relatively small, and it is difficult to stimulate MSCs through joint distraction alone. Therefore, in current reports of joint distraction treatment of OA, the treatment time is usually longer than two months [25,33]. In the natural progression of OA, a longer joint distraction cycle will shorten the interval between joint distraction and joint replacement. In order to shorten the joint distraction treatment cycle, in this experiment we injected MSCs into the articular cavity to increase the number of MSCs. Even though the treatment period of this experiment was only three weeks, which is shorter than that of other experiments, the results showed that the combination KJD + MSCs therapy is better than KJD alone in slowing down the processes of secondary inflammation, subchondral sclerosis, and so on (Figs. 2-5). This proves that intra-articular injection of MSCs enhances the therapeutic effect of KJD, and the combined treatment of MSC intra-articular injection and KJD has complementary effects.
Conclusion
In conclusion, in this study we demonstrated that knee joint distraction combined with mesenchymal stem cell injection can alleviate cartilage degradation and reduce irregular ossification of subchondral bone and secondary inflammation in the rat OA model. This is more effective at delaying OA progression than KJD alone. Immunohistochemical staining showed the expression of MMP13 (brown) and Col X (red fluorescence) positive cells in cartilage in the three experimental groups. The number of MMP13 and Col X positive cells in articular cartilage in the KJD+MSCs injection group was significantly less compared with the OA and KJD groups. The nuclei were stained with hematoxylin (black) and DAPI (blue fluorescence). The scale is | 5,226.2 | 2020-11-02T00:00:00.000 | [
"Medicine",
"Biology"
] |
Development of an Interdigitated Electrode-Based Disposable Enzyme Sensor Strip for Glycated Albumin Measurement
Glycated albumin (GA) is an important glycemic control marker for diabetes mellitus. This study aimed to develop a highly sensitive disposable enzyme sensor strip for GA measurement by using an interdigitated electrode (IDE) as an electrode platform. The superior characteristics of the IDE were demonstrated by using one microelectrode of the IDE pair as the working electrode (WE) and the other as the counter electrode, and by measuring the ferrocyanide/ferricyanide redox couple. The oxidation current immediately reached a steady state when the oxidation potential was applied to the WE. Then, an IDE enzyme sensor strip for GA measurement was prepared. The measurement of fructosyl lysine, the protease digestion product of GA, exhibited a high, steady current immediately after potential application, enabling highly reproducible measurement. The sensitivity (2.8 nA µM−1) and the limit of detection (1.2 µM) obtained with the IDE enzyme sensor strip were superior to those of our previously reported sensor using a screen-printed electrode. Two GA samples, 15 or 30% GA, corresponding to healthy and diabetic levels, respectively, were measured after protease digestion with high resolution. This study demonstrated that the application of an IDE will enable the development of highly sensitive disposable-type amperometric enzyme sensors with high reproducibility.
Introduction
Glycated proteins, such as hemoglobin A1c (HbA1c) or glycated albumin (GA), are important glycemic control markers for diabetes mellitus. These proteins are the product of one of two possible nonenzymatic reactions: either a reaction between glucose and the N-terminal valine residue of hemoglobin's β-chain or a reaction between glucose and lysine residues at the surface of human serum albumin. The level of HbA1c reflects the average blood glucose level over a period of 2-3 months, while the level of GA reflects the blood glucose level over a period of 2-3 weeks, depending on their life spans [1]. While HbA1c is currently the most widely used long-term glycemic control marker for diabetes, GA has advantages over HbA1c [2][3][4]. First, GA reflects the glycemic condition over a shorter period than HbA1c; thus, GA levels change rapidly according to the change in blood glucose level. Second, GA levels accurately reflect glycemic status under the condition of hematologic disorders such as anemia and variant hemoglobin, where abnormal HbA1c levels are observed. GA is expected to be utilized increasingly along with, or as an alternative to, HbA1c as a long-term glycemic control marker. Currently, in clinical applications, GA is measured with the enzymatic method at the central testing laboratory using the enzyme reagent for an autoanalyzer that has been commercialized by Asahi-Kasei Pharma (Tokyo, Japan) as Lucica® GA-L [5]. In this enzymatic analytical kit, fructosyl amino acid oxidase (FAOx) is employed. First, GA is digested with protease, and the released ε-fructosyl lysine (ε-FK) is oxidized by FAOx. Then, hydrogen peroxide produced through the enzymatic reaction is measured spectroscopically [6]. This enzyme-based GA measurement was first approved for the market mainly in Asian countries, such as Japan (from 2002), China (from 2003), Korea (from 2013), Indonesia (from 2013) and Taiwan (from 2015). Recently, enzyme-based GA measurement has also been approved in the EU (from 2015) and by the Food and Drug Administration (FDA) in the USA (from 2017). Therefore, the importance and usefulness of GA has been increasingly recognized worldwide. Although GA is a useful glycemic control marker, point-of-care testing (POCT) for GA has not yet been developed. Therefore, the development of a simple and rapid measurement method for GA suitable for POCT is required.
Electrochemical biosensors based on oxidoreductases have a wide application field in medical care, food, and environmental protection. These biosensors are cost effective and enable rapid measurement. The most representative and commercially available disposable electrochemical enzyme sensors are those for the self-monitoring of blood glucose (SMBG), which are dedicated to the glycemic control of diabetes mellitus. These biosensors are usually composed of enzymes, artificial electron acceptors (mediators) and disposable electrodes. SMBG is based on endpoint assays, where the sequential reaction of substrate oxidation by an enzyme and reduction of the mediator finishes in several seconds, and the produced reduced-form mediator is measured with the chronoamperometry method. Previously, our group reported an SMBG-type endpoint assay-based disposable electrochemical enzyme sensor for the measurement of GA using FAOx, which is the same enzyme as used in the current enzymatic assay [7]. First, GA is digested by protease, and ε-FK is released. Then, FAOx oxidizes ε-FK, and the oxidized-form mediator is reduced simultaneously. The amount of produced reduced-form mediator is measured with the chronoamperometric method. As the electron mediator, hexaammineruthenium (III) chloride (a Ru complex) was used along with a disposable screen-printed carbon electrode (SPCE). To obtain the chronoamperometric signal, FAOx, the Ru complex and the sample were reacted on the electrode area for 1 min to oxidize the existing substrate (ε-FK) and produce a reduced-form mediator. Then, the potential to reoxidize the mediator was applied, and the current was monitored. The current response depending on the GA concentration was observed successfully; however, the sensitivity and the period required for monitoring needed to be improved. In the chronoamperometric measurement, the observed current response follows the Cottrell equation; thus, generally, the current decreases over time until the steady-state current is reached. The current values obtained at a fixed time after the potential is applied (usually at several seconds) are used as the representative current values for each sample. Therefore, the sensitivity is crucially dependent on the current sampling time after the potential application. Additionally, there is the possibility of a time lag in the sampling time when different lots of electrochemical meters are used. Since the response current decreases with time, a sampling time lag caused by the meter results in poor reproducibility. Therefore, to achieve a higher sensitivity and a higher reproducibility of the measurement, the magnitude of the current values and the response time required to reach the steady-state current should be considered.
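Because the argument above hinges on the Cottrell relation, a minimal numeric sketch may help illustrate why a sampling-time lag degrades reproducibility for a transient current but not for a steady-state one. The electrode area, concentration, and diffusion coefficient below are illustrative assumptions, not values from this study:

```python
import numpy as np

# Cottrell current at a planar electrode: i(t) = n*F*A*C*sqrt(D/(pi*t))
n = 1          # electrons transferred per molecule
F = 96485.0    # Faraday constant, C/mol
A = 0.01       # electrode area, cm^2 (assumed)
C = 1e-6       # bulk concentration, mol/cm^3 (= 1 mM, assumed)
D = 6.7e-6     # diffusion coefficient, cm^2/s (literature ballpark)

def cottrell_current(t):
    """Transient diffusion-limited current (A) at time t (s)."""
    return n * F * A * C * np.sqrt(D / (np.pi * t))

for t in (5.0, 5.5, 10.0, 30.0):
    print(f"t = {t:4.1f} s -> i = {cottrell_current(t) * 1e6:.3f} uA")

# A 0.5 s meter-dependent lag at a nominal 5 s sampling time shifts the
# transient reading by ~5%; a true steady-state current would not move.
drift = 1 - cottrell_current(5.5) / cottrell_current(5.0)
print(f"relative change from 5.0 s to 5.5 s sampling: {drift:.1%}")
```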
Interdigitated electrodes (IDEs) are one of the known microelectrode geometries and have received increasing attention. IDEs consist of two individual arrays of microelectrodes in an interdigitated configuration. These electrodes have been reported in amperometric biosensors employing dual potentiometry, where the two IDEs of a pair are used individually as working electrodes (WEs) to achieve highly sensitive sensors based on redox mediator recycling-based signal amplification [8][9][10][11][12][13]. For instance, one working electrode (WE1) is held at an oxidative potential, which drives the oxidation of the electrochemically active species, while the other working electrode (WE2) is held at a reductive potential to drive the opposite reaction, the reduction. Thus, species produced at one electrode diffuse to the other electrode, where they are converted back to their previous form. This process is called redox cycling and was first demonstrated by Bard et al. [14]. In this mode, a uniform concentration gradient of redox species between the two WEs is formed immediately, and the reduced-form and oxidized-form species are continuously supplied to WE1 and WE2, respectively, by diffusion from each electrode. Therefore, a greatly amplified and steady current is obtained. The total amount of redox-coupled species can be measured with high sensitivity. In this dual mode, the collection efficiency, which is the ratio of the current values obtained with WE2 to those of WE1, or in other words, the ratio of oxidized-form species that reach WE2 from WE1, is an important parameter when evaluating the signal amplification. This parameter is dependent on the distance between WE1 and WE2 [15]. Although dual potentiometry has often been applied for the detection of small quantities of redox species in amperometric sensors, it is not applicable to measuring the concentration ratio of reduced-form and oxidized-form mediators in a mixture solution, which is the most common principle for disposable-type amperometric enzyme sensors employing electron mediators.
The superior characteristics of IDEs prompted us to use the IDE as an alternative platform technology for disposable enzyme sensors, especially for SMBG-type amperometric sensor strips, without employing conventional dual potentiometry. The most prominent property of the IDE is the distance (gap) between the two individual microelectrodes in the interdigitated configuration; namely, the gap between the two electrodes is smaller (~50 µm) than the diffusion layer created during the redox reaction on the anode and on the cathode. Therefore, if one individual microelectrode of an IDE is used as the working electrode and the other as the counter electrode, the reaction at the counter electrode will keep the reduced mediator concentration at the working electrode almost constant. Thereby, a steady-state current will immediately be achieved, and a high electrical current will be observed in the sensor.
In this study, we aimed to develop a highly sensitive disposable enzyme sensor strip for GA measurement by using an IDE as the electrode platform. The superior characteristics of IDEs were investigated using one microelectrode of the IDE pair as the WE and the other as the counter electrode (CE) and by measuring different concentrations of ferrocyanide in a mixture with ferricyanide. The oxidation current immediately reached the steady state when the oxidation potential was applied to the WE. Then, an IDE enzyme sensor strip for GA measurement was prepared using FAOx and an Ru complex as an electron mediator. The IDE enzyme sensor strips showed a high steady current immediately after potential application, with a higher sensitivity than that of the SPCE-based enzyme sensor strip that we previously reported. This study demonstrated that the application of an IDE as an alternative electrode platform will realize the development of highly sensitive disposable amperometric enzyme sensors with high reproducibility.
Electrode Characterization
The superior characteristics of IDEs were first investigated by cyclic voltammetry (CV) and chronoamperometry (CA) measurements of ferrocyanide in a mixture with ferricyanide. The IDE strip was configured in either IDE WE-IDE CE mode, where one microelectrode of the IDE pair was used as the WE and the other as the CE (Figure 1b), or IDE WE-plate CE mode, where one microelectrode of the IDE pair was used as the WE and the external plate electrode on the IDE strip was used as the CE (Figure 1c). The cyclic voltammograms are shown in Figure 2. A steady-state oxidation current without a peak, which was not diffusion limited, was observed in IDE WE-IDE CE mode (Figure 2, red line). In contrast, when one IDE was used as the WE and the plate electrode was used as the CE (IDE WE-plate CE mode), a peak current from the oxidation of ferrocyanide, which was diffusion limited, was observed (Figure 2, blue line), and the current value was smaller than that in IDE WE-IDE CE mode.

Figure 2. Cyclic voltammograms of the IDE strip. Each cyclic voltammogram was obtained in a solution containing 1 mM ferrocyanide and 9 mM ferricyanide (total ferrocyanide/ferricyanide concentration of 10 mM) in 100 mM KCl. The red voltammogram was obtained in IDE WE-IDE CE mode and the blue one in IDE WE-plate CE mode. The sweep rate was 10 mV/s.

Then, the CA measurement was performed with the IDE strip. To mimic SMBG-type measurements, which measure the small amount of reduced mediator produced by the enzyme reaction in a large amount of oxidized mediator, the total concentration of ferrocyanide and ferricyanide was kept constant (100 mM), and the concentration of ferrocyanide was changed from 0 to 10 mM.
The response curve and the correlations between current and ferrocyanide concentration measured in IDE WE-IDE CE mode are shown in Figure 3a,b. The response current reached a steady state immediately after the potential was applied and was dependent on the ferrocyanide concentration (Figure 3a). The ferrocyanide concentration-dependent current (Figure 3b) showed a good linear correlation, and, remarkably, the slope of the calibration curve was independent of the time (5, 10 and 30 s) after application of the oxidation potential. Figure 3c,d shows the response curve and the correlations between current and ferrocyanide in IDE WE-plate CE mode. The current gradually decreased until it reached a plateau after the application of the potential (Figure 3c). The ferrocyanide concentration-dependent current showed good linear correlations (Figure 3d); however, the slope of each correlation was dependent on the time since the application of the potential (5, 10 and 30 s). Furthermore, the observed current values in IDE WE-plate CE mode were smaller than those in IDE WE-IDE CE mode.
The slopes, y-intercepts and linear regression coefficients of the CA measurements of ferrocyanide mixed in ferricyanide with the IDE (Figure 3b,d) are summarized in Table S1. Since the slopes of the calibration curves are independent of the sampling time in IDE WE-IDE CE mode, the relative standard deviation (RSD) of the slopes is small (2.8%). In contrast, in IDE WE-plate CE mode, the slopes clearly depend on the sampling time; therefore, the RSD (20%) was large compared with IDE WE-IDE CE mode. Since the y-intercept values and their RSDs are almost the same between the two electrode configuration modes, the background current is not affected by the configuration mode, and only the sampling-time dependence of the slope differed between the two modes. The RSD values of the obtained currents (Table S2) also suggested that the dispersion of the current values is not affected by the electrode configuration mode.
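For readers reproducing this kind of table, the slope RSD across sampling times reduces to a few lines of analysis. The current readings below are hypothetical, chosen only to mimic the sampling-time-independent behavior described for IDE WE-IDE CE mode, and are not the Table S1 data:

```python
import numpy as np

# Hypothetical steady-state currents (nA) for ferrocyanide standards,
# sampled at 5, 10 and 30 s -- illustrative numbers only.
conc = np.array([0.0, 2.0, 5.0, 10.0])  # mM ferrocyanide
currents = {
    5:  np.array([12.0, 212.0, 515.0, 1010.0]),
    10: np.array([11.0, 209.0, 512.0, 1005.0]),
    30: np.array([11.5, 210.0, 509.0, 1001.0]),
}

slopes = []
for t, i in currents.items():
    slope, intercept = np.polyfit(conc, i, 1)  # linear calibration fit
    slopes.append(slope)
    print(f"{t:>2} s: slope = {slope:6.2f} nA/mM, intercept = {intercept:5.1f} nA")

rsd = np.std(slopes, ddof=1) / np.mean(slopes) * 100
print(f"RSD of slopes across sampling times: {rsd:.1f}%")
```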
These results indicated that the superior characteristics of IDEs were demonstrated in IDE WE-IDE CE mode. This was achieved when one microelectrode of the IDE pair was used as the WE and the other as the CE, where the WE and the CE are placed in close proximity (30 µm) to each other; thus, a non-diffusion-limited, steady-state current was observed in both CV and CA measurements. The potential of the CE versus the RE during the CA measurement was monitored (Figure S1). The results indicated that the potential of the CE was sufficient to reduce ferricyanide. This was consistent with the fact that the oxidation reaction (consumption of reduced species) occurred at the WE and the corresponding reduction reaction (production of reduced species) occurred at the CE. Consequently, a concentration gradient was formed between the WE and the CE, and the reduced-form mediator produced at the CE diffused to the WE. Thus, diffusion from the CE was dominant in the supply of the reduced-form mediator to the WE compared with diffusion from the bulk. Consequently, the reduced-form mediator was continuously supplied to the WE at a steady rate from the CE and was not diffusion limited; therefore, an immediate, steady-state and high electrical current was observed, as we expected. On the other hand, when the distance between the WE and the CE was large (IDE WE-plate CE mode), the supply of the reduced-form mediator to the WE was independent of the formation of the reduced form at the CE but dependent on diffusion from the bulk, and a diffusion-limited current was observed. The small distance between the WE and the CE is therefore the key feature of the IDE WE-IDE CE mode, with which a large steady-state current was obtained.
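A rough order-of-magnitude check supports this picture: the characteristic time for a species to diffuse across the WE-CE gap scales as L^2/(2D), so a gap of tens of micrometers equilibrates within about a second, while a CE hundreds of micrometers away stays diffusion limited on the measurement timescale. The diffusion coefficient below is a literature ballpark for ferro-/ferricyanide, used only for illustration:

```python
# Characteristic 1-D diffusion time across the WE-CE gap: t ~ L^2 / (2*D)
D = 7e-6  # cm^2/s, approximate for ferro-/ferricyanide (assumed)

for gap_um in (30, 50, 500):
    L = gap_um * 1e-4                 # convert micrometers to cm
    t = L ** 2 / (2 * D)              # diffusion time, s
    print(f"gap = {gap_um:>3} um -> diffusion time ~ {t:7.2f} s")
```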
Measurement of Fructosyl Lysine and GA with an IDE Enzyme Sensor Strip
IDE enzyme sensor strips for GA measurement were then constructed using FAOx as the enzyme and the Ru complex as the electron mediator.
First, Nα-Carbobenzyloxy-Nε-fructosyllysine (Z-FK), the synthetic analog of ε-FK, was used as the substrate to evaluate the operational conditions of the IDE strip as the platform of a disposable GA sensor. The response curve and calibration curve of the Z-FK measurement are shown in Figure 4. The response curve (Figure 4a) indicates that the current immediately reached a plateau after the oxidation potential was applied, as was observed with the ferrocyanide/ferricyanide measurement (Figure 3a). The current values increased depending on the Z-FK concentration. Figure 4b shows the calibration curves, i.e., the correlation between the observed current and the Z-FK concentration, at 5, 10 or 30 s after the potential was applied. A good linear response was observed over the entire Z-FK concentration range (0-500 µM), with a linear regression coefficient of R² = 0.999. The sensitivity and the slope of the calibration curve obtained at each period after potential application were identical and were not dependent on the sampling time. This was a significant difference and an advantage for the reproducibility of the sensor signal compared with the SPCE-based enzyme sensor we previously reported [7], whose calibration slopes differed depending on the sampling time. The limit of detection (LOD), defined as the Z-FK concentration corresponding to the mean background current + 3 standard deviations, was 1.2 µM. The LOD with the IDE was approximately 33 times lower than that with the SPCE, which showed an LOD of 40 µM Z-FK. The achieved sensitivity, 2.8 nA µM−1, was improved 5.7-fold by using an IDE as the electrode compared with our previous achievement using an SPCE, with a sensitivity of 0.49 nA µM−1. The GA level is expressed as the proportion of the GA concentration to the total albumin concentration (%). The standard level of GA is from 11 to 16% according to the Japan Diabetes Society. In current clinical practice, the GA level is used as a glycemic control marker for the assessment of treatment effectiveness, but not for the diagnosis of diabetes. Therefore, there is no prescribed cut-off value to diagnose diabetes. The tentative target level for the treatment of diabetes patients is suggested as a GA level < 20% by the Japanese Society for Dialysis Therapy. Considering that the average albumin concentration in serum is 5 g/dL and the molecular weight of human serum albumin is 66.5 kDa, the calculated ε-FK concentrations are 83-120 µM for 11-16% GA and 150 µM for 20% GA. Therefore, the LOD and sensitivity are sufficient to detect and distinguish GA concentrations between healthy individuals and patients with diabetes.
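The ε-FK concentrations quoted above follow from simple stoichiometry, and the figures can be checked directly, assuming one detectable fructosyl lysine per glycated albumin molecule:

```python
# Back-of-envelope check of the e-FK concentrations quoted in the text.
albumin_g_per_L = 50.0    # 5 g/dL serum albumin
mw_albumin = 66500.0      # g/mol (66.5 kDa)

albumin_uM = albumin_g_per_L / mw_albumin * 1e6   # ~752 uM total albumin
for ga_percent in (11, 16, 20):
    efk_uM = albumin_uM * ga_percent / 100
    print(f"GA {ga_percent:>2}% -> ~{efk_uM:.0f} uM e-FK")  # ~83, 120, 150 uM
```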
The slopes, y-intercepts and linear regression coefficients for the Z-FK measurement (Figure 4b) are summarized in Table S3. Since the slope of the calibration curve is independent of the sampling time, the RSD of the slope (2.1%) was small, the same as for the ferrocyanide measurement (Table S1). The RSD values of the obtained currents are summarized in Table S4. Compared with the ferrocyanide measurement (Table S2), the RSD values are larger and more dispersed for the Z-FK measurement with the IDE enzyme sensor strip (0.8-12%). These large RSD values might be due to lot-to-lot variations of the enzyme sensor strips and the measurement process. To prepare the enzyme sensor strip, we put the mixture containing enzyme and mediator on the IDE, dried it, and then applied the spacer and cover to form the capillary on the electrode. It is possible that there are lot-to-lot variations in the prepared sensor strips due to inconsistency in this preparation. In addition, in the measurement process, after the sample solution is injected into the capillary on the sensor strip, time is required to dissolve the dried enzyme and mediator on the enzyme sensor strip. This dissolving process might affect the homogeneity of the sequential reaction of substrate oxidation by the enzyme and reduction of the mediator, and might lead to dispersion of the sensor signal. These points might be the reason why the RSD values of the Z-FK measurement with enzyme sensor strips are larger than those of the ferrocyanide solution measurement with the bare IDE (Table S2). However, in practical applications, signals with good reproducibility are expected from manufactured enzyme sensor strips.
Then, protease-digested GA samples were measured with an IDE enzyme sensor strip. To obtain a calibration curve for the protease-digested GA sample, several concentrations of Z-FK were spiked into the protease-digested nonglycated albumin sample, and their responses were analyzed. The response curve is shown in Figure 5a. The calibration curve of the current at 5 s after application of the potential versus the Z-FK concentration is plotted in Figure 5b. The linear range was 0-500 µM Z-FK with a linear regression coefficient of R² = 0.996, and the LOD was 25 µM in the presence of protease digestion materials. The presence of several molecules derived from protease-digested materials, such as amino acids or peptides derived from albumin, may alter the pH, ionic strength, and/or viscosity of the sample solution, and affect the sensitivity and LOD of the sensor. These derivatives might have a negative impact on the enzyme activity and/or the electrochemical reaction by affecting the diffusion of the mediator. Consequently, the sensitivity and LOD of the sensor might change in the presence of protease-digested nonglycated albumin. In addition, the target detection range of the ε-FK concentration, calculated from the physiological level of GA, is around the hundreds of µM level. Considering this target range of GA measurement, the sensor is still sensitive enough to cover the physiological range of the ε-FK concentration, even in the protease-digested sample, where the LOD increases from 1.2 µM to 25 µM. Therefore, we concluded that the sensor can be used with protease-digested samples, even though the LOD is drastically increased in the presence of protease digestion derivatives. In this study, two different GA samples were used, 15 and 30%. The GA level is an indicator of glycemic control, and in current clinical practice the GA value is used for the assessment of treatment effectiveness, but not for the diagnosis of diabetes. Therefore, there is no prescribed cut-off value to diagnose diabetes. According to the Japan Diabetes Society, the GA level of a healthy subject is 11-16%. The Japanese Society for Dialysis Therapy suggested that the therapeutic target GA level of diabetic patients should be <20%. This indicates that when a diabetic patient under treatment shows a GA value higher than 20%, such as 30%, a revision of the treatment is required. Therefore, the samples with GA values of 15 and 30% represent a healthy subject and a diabetic patient whose glycemic level is not adequately controlled, respectively. Figure 5c shows the sensor responses toward the two concentrations (15 and 30%) of protease-digested GA samples. An immediately plateauing current was obtained, and the current value was dependent on the GA level. The RSD values of the obtained currents for the 15 and 30% GA samples were 6 and 10%, respectively. These values were in the same range as those for the Z-FK measurement (Table S4). Therefore, the reproducibility of the protease-digested GA sample measurement might also be affected by lot-to-lot variations in sensor strip preparation and by the process of dissolving the enzyme and mediator dried on the sensor strip in the sample solution during measurement. The ε-FK concentrations of the two protease-digested GA samples were determined to be 79 ± 4 µM and 136 ± 17 µM for 15 and 30% GA, respectively, based on the calibration curve obtained with Z-FK in the presence of protease-digested albumin (Figure 5b).
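Reading a concentration back off the linear calibration is a one-line inversion, and propagating the replicate scatter gives the quoted mean ± SD form. The slope, intercept, and replicate currents below are hypothetical placeholders, not the study's fitted values:

```python
import numpy as np

# Hypothetical linear calibration: current = slope * conc + intercept
slope, intercept = 2.8, 15.0                           # nA/uM and nA (assumed)
replicate_currents = np.array([232.0, 238.0, 225.0])   # nA, replicates (made up)

conc = (replicate_currents - intercept) / slope        # invert the calibration
print(f"e-FK = {conc.mean():.0f} +/- {conc.std(ddof=1):.0f} uM")
```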
The difference between the current values obtained with 15% GA and 30% GA was 104 nA, which was more than three-fold greater than the difference obtained with the SPCE-based enzyme sensor (34 nA). These results indicated that high-resolution GA measurement was successfully achieved when an IDE was employed as the electrode in IDE WE-IDE CE mode.
Several electrochemical biosensors for GA measurement have been reported, using enzymes, antibodies, aptamers or peptides as the biorecognition molecules and based on amperometry, electrochemiluminescence (ECL), impedance, voltammetry or field-effect transistors (FETs) as sensing principles [7,[16][17][18][19][20][21][22][23]. These studies are summarized in Table 1. Among these studies, this study showed the smallest sample volume and shortest waiting time, which is the period before starting the electrochemical measurement after the sample solution is added to the electrode. Considering the time taken to perform the electrochemical measurement among these studies, ECL, square wave voltammetry (SWV), impedance and FET required several minutes to obtain the resulting signal, as these need to sweep the applied potential or change the potential over various frequencies. On the other hand, with the amperometry method, especially in this study, the resulting signal was obtained within 5 s after starting the electrochemical measurement by potential application. Therefore, the enzyme sensor using an IDE developed in this study showed superior characteristics compared with other reported sensing systems.
In addition to dual potentiometry-based amperometric sensors, various other types of electrochemical biosensors using IDEs have been reported. Regarding chronoamperometric biosensors, antibodies [8][9][10], aptamers [11], peptides [12] and enzymes [13] have been used as recognition molecules, and they are based on dual potentiometry. Regarding impedimetric biosensors using IDEs, antibodies [24][25][26][27][28] and aptamers [29] have been used as the primary molecular recognition elements, as affinity-based detection requires strongly binding and selective biomolecules for impedance measurement. With impedimetric biosensors, measurement is based on the access of a redox probe to the electrode surface; changes occur when an insulating layer is produced by the recognition element/antigen complex, which reduces the transfer of electrons. As a result, an increase in charge transfer resistance (Rct) occurs. Regarding capacitive biosensors, antibodies [30][31][32][33], aptamers [34,35], and affimers [36] have been used as recognition elements. Capacitive biosensors rely on changes occurring within the electrical double layer (EDL), whose thickness changes according to the binding of a target to a recognition element on the electrode. Changes in the target concentration correspond to changes occurring at the EDL. Non-dual potentiometry-based amperometric sensors using enzymes have also been reported [37,38]. Sharma et al. immobilized an enzyme (glucose oxidase [37] or cholesterol oxidase [38]) on one microelectrode of an IDE pair and measured the substrate concentration in the presence of ferricyanide as the electron mediator by applying the oxidation potential to both WE1 and WE2. In this measurement, the reduced-form mediator produced by the enzyme reaction (oxidation of the substrate) was oxidized at both WE1 and WE2. The authors applied only the oxidation potential to the WEs; however, since the distance between the WEs and the CE was large, the current response was relatively slow. On the other hand, in this study, by focusing on the distance between the WE and the CE, and by using one IDE of a pair as the WE and the other IDE as the CE, a highly sensitive endpoint assay-type disposable enzyme sensor for GA was developed. Furthermore, the application of the IDE in the IDE WE-IDE CE mode should not be limited to GA measurement. By using other enzymes for other target molecules, highly sensitive and reproducible disposable enzyme sensors are expected based on an IDE as the platform electrode.
Electrode Characterization by Ferrocyanide/Ferricyanide Redox Couple Measurement
Ag/AgCl paste was dried onto an Au plate at the top of a 4-electrode IDE strip to be used as a reference electrode. First, cyclic voltammetry (CV) measurements were performed. A drop (5 µL) of a mixture of 1 mM ferrocyanide and 9 mM ferricyanide (the total concentration of ferrocyanide and ferricyanide was 10 mM) in 100 mM KCl was deposited on the electrode area. CV measurements were performed in the potential range from +0.19 to +0.6 V vs. Ag/AgCl at 10 mV/s. Then, chronoamperometry (CA) measurements were also performed. A drop (5 µL) of a mixture of various concentrations of ferrocyanide (0-10 mM) along with ferricyanide, with the total concentration of ferrocyanide and ferricyanide being 100 mM, in 100 mM KCl was deposited on the electrode strip. A potential of +0.4 V vs. Ag/AgCl was applied, and the current was recorded over 60 s. Both CV and CA measurements were performed either in IDE WE-IDE CE mode, where one microelectrode of the IDE pair was used as the WE and the other as the CE (Figure 1b), or in IDE WE-plate CE mode, where one microelectrode of the IDE pair was used as the WE and the external plate electrode on the IDE strip was used as the CE (Figure 1c).
Preparation and Characterization of IDE Enzyme Sensor Strip for GA Measurement
A volume of 0.8 µL of a solution of 60 U/mL FAOx (optimized concentration, see Figures S3 and S4), 300 mM Ru complex (optimized concentration, see Figures S5 and S6), and 0.25% sucrose in 100 mM PPB, pH 8.0, was dropped onto 2-electrode IDE strips and dried at 25 °C. Then, a spacer and a cover were attached to the electrodes. All sensors were used immediately in this study.
For the measurement, 0.8 µL of samples of different concentrations of Z-FK was injected into the spacer layer of the enzyme sensor strips. The potential of +0.1 V vs. Au was applied 60 s after sample injection, and the current was observed. Additionally, various concentrations of Z-FK contained in protease-digested, nonglycated albumin and protease-digested GA samples were measured with the same method.
Conclusions
In this study, we focused on the superior characteristics of IDEs when they are used as an alternative platform technology for disposable enzyme sensor strips for GA measurement, a glycemic control marker for diabetes. We demonstrated that by using a pair of IDEs as the WE and the CE, the distance between the WE and CE was relatively small, and a time-independent, steady-state and large current was achieved. Furthermore, the obtained current was dependent on the concentration of only the reduced-form mediator in the presence of the oxidized-form mediator when the oxidation potential was applied to the WE. The prepared IDE enzyme sensor strip for GA measurement showed a large, steady current, which led to higher sensitivity than that of the SPCE in our previous study.
The measurements of protease-digested GA samples were also demonstrated to have high sensitivity with the IDE. The novel application of IDE for the development of highly sensitive and reproducible endpoint assay-type enzyme sensors has been demonstrated, and further application of IDE as a platform for various enzyme-based sensors is expected.
Supplementary Materials: The following are available online. S1: Characteristic parameters for CA measurements of ferrocyanide with IDE (Tables S1 and S2); S2: Measurement of the potential of the counter electrode during the chronoamperometry (CA) measurement using the interdigitated array electrode (IDE) in IDE WE-IDE CE mode (Figures S1 and S2); S3: Parameters of CA measurements of Z-FK with the IDE enzyme sensor strip (Tables S3 and S4); S4: Optimization of the FAOx concentration for the enzyme sensor for glycated albumin (GA) measurement (Figures S3 and S4, Table S5); S5: Optimization of the mediator concentration for the enzyme sensor for glycated albumin (GA) measurement (Figures S5 and S6, Table S6). | 8,481.2 | 2021-01-31T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
A NEW OCCURRENCE OF Limnoperna fortunei (DUNKER 1856) (BIVALVIA, MYTILIDAE) IN THE STATE OF SÃO PAULO, BRAZIL
The freshwater mussel Limnoperna fortunei (Dunker 1856) (Bivalvia, Mytilidae) has been found in the Paraná river, near Rosana, São Paulo. This is the first record of this species in São Paulo State. This population of Limnoperna fortunei seems to be young and in a colonization process.
Limnoperna fortunei is a freshwater bivalve, native to rivers of China and southeastern Asia. It was first recorded in the Americas in 1991 at Bagliardi beach in the Argentinian littoral zone of the Rio de La Plata (Pastorino et al., 1993). Its introduction into South America is probably due to the discharge of thousands of tons of ballast water containing high concentrations of bivalve larvae.
Along with Corbicula fluminea (Müller 1774) and Corbicula largillierti (Philippi 1811), Limnoperna fortunei (known as "golden mussel") is the third invading freshwater bivalve species to enter South America from southeastern Asia via the Rio de La Plata.
In 1993, the presence of this mussel was registered in the Rio de La Plata (from Punta Piedras to Punta Lara). The same occurred in 1996 in the Paraná river (Zárate, San Pedro, Rosário, Santa Fé), Argentina, and in 1998 both in the Salado river (Santo Tomé), Argentina, and the Paraguay river (Corumbá in Mato Grosso do Sul State), Brazil (Darrigran et al., 2000).
The species lives like marine mytilids, attached to solid substrata by byssal threads. In the subtropical region, it exhibits rapid growth, a short life span, and planktonic (veliger) larvae. The adults are dioecious, with two-thirds of the population being female, and reproduce at least once or twice per year (Ricciardi, 1998; Magara et al., 2001).
L. fortunei was already considered a pest when it invaded the Hong Kong area in the late 1960s. This species, like other byssally attaching bivalves, has become a serious threat to the normal functioning of both aquatic ecosystems and water intake systems, as demonstrated by the recent invasion history of the Eurasian zebra mussel Dreissena polymorpha (Pallas 1771), which has caused profound ecological and technological impacts in North America (Ricciardi, 1998; Boltovskoy & Cataldo, 1999). According to Darrigran et al. (1999), the principal problems relating to the invasion by L. fortunei of water distribution and irrigation systems are: pipe diameter reduction, pipeline blockages, water velocity decrease caused by friction, empty shell accumulation, water pipeline contamination by mass mortality, and filter occlusion. In addition, one must consider the ecological impact caused by this exotic species, especially with regard to competition with native bivalves for space and food. Darrigran & Pastorino (1995) registered the occurrence of 80,000 mussels/m² at Bagliardi beach, Argentina. Since their introduction, the number of individuals has increased dramatically within a short period of time, reaching densities of more than 100,000 mussels/m² (Cataldo et al., 2002).
Recently, many steps have been taken to try to control or decrease the effects caused by this invasive species. These include: manual or mechanical removal, use of electric fields, temperature control, and anti-incrustation painting (Cataldo et al., 2002).
The aim of the present work is to update the known distribution of Limnoperna fortunei in the Paraná river, as of November 2002, to include Rosana municipality, São Paulo State, Brazil.
MATERIAL AND METHODS
The Paraná river, in western São Paulo State at the state line with Mato Grosso do Sul, was divided randomly into three collection regions: one upstream, another in the middle course of the river, and the third downstream near the city of Rosana.
At each collection point, many places along the river banks and on outcrops in the riverbed were visited, covering approximately 10 km. Sampling sites were chosen and surveyed with a 30 min capture effort by three people.
Sediment from the chosen sites and rocky outcrops was sifted by hand through a sieve (5 mm mesh size). The clams found were collected, packed in duly labeled plastic bags, and transported in thermal boxes with ice. Aquatic plant roots were also analyzed. All animals captured in each collection were identified and preserved in 70% alcohol.
The coordinates of each site were registered with a GPS. At the site with coordinates 22°32'56.9"S and 53°02'48"W (municipality of Rosana), 118 specimens of L. fortunei were collected. Each clam was measured with a Vernier caliper to the nearest 0.05 mm for length (greatest anterior-posterior distance), width (greatest distance through the valves), and height (greatest dorso-ventral distance perpendicular to the hinge line). The appropriate statistical analyses were then performed.
RESULTS AND DISCUSSION
Fig. 1 shows the occurrences of this species in South America and the first record in São Paulo State, in the Paraná river.
Fig. 2 shows specimens of Limnoperna fortunei that were found in the Paraná river in November 2002. All captured clams were measured. Their shell length varied from 8 to 22 mm; width, from 3 to 8.2 mm; and height, from 3.9 to 10.2 mm. High values were obtained for the correlation index (more than 0.9). These data show an intimate relationship among the biometric measurements of the shells.
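The "correlation index" reported here corresponds to pairwise correlation coefficients among the three shell dimensions, which can be computed as in the sketch below; the measurements are made up for illustration and are not the 118-specimen Rosana data:

```python
import numpy as np

# Hypothetical shell measurements (mm) spanning the reported size ranges
length = np.array([8.0, 10.5, 12.0, 13.5, 16.0, 19.0, 22.0])
width  = np.array([3.0,  3.9,  4.5,  5.1,  6.0,  7.2,  8.2])
height = np.array([3.9,  4.8,  5.6,  6.3,  7.5,  8.9, 10.2])

labels = ["length", "width", "height"]
r = np.corrcoef(np.vstack([length, width, height]))  # 3x3 Pearson matrix
for i in range(3):
    for j in range(i + 1, 3):
        print(f"r({labels[i]}, {labels[j]}) = {r[i, j]:.3f}")
```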
In the greatest percentage of the clams, the shell length varied from 12 to 13.5 mm. Boltovskoy & Cataldo (1999) verified that, during the first year, these animals reach 20 mm in length, that by the end of the second year they are 30 mm long, and that the asymptotic or maximum theoretical length is 35 mm. According to Morton (1982), the maximum length can vary from 30 to 40 mm.
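These length-at-age figures are consistent with a von Bertalanffy growth curve, L(t) = Linf*(1 - exp(-k*t)); fitting k from the quoted one-year length is an assumption made here purely for illustration:

```python
import numpy as np

# Von Bertalanffy check against the lengths quoted in the text:
# Linf = 35 mm (asymptotic), L(1 yr) ~ 20 mm  =>  k = -ln(1 - 20/35)
Linf = 35.0
k = -np.log(1 - 20.0 / Linf)   # ~0.85 per year

for t in (0.5, 1.0, 2.0, 5.0):
    L = Linf * (1 - np.exp(-k * t))
    print(f"t = {t:.1f} yr -> L ~ {L:4.1f} mm")
```

Under this assumed curve, lengths of 12-13.5 mm correspond to roughly half a year of growth, which supports the inference below that the population is in its first year.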
Using the measurements made of the captured clams from the sites analyzed, we can infer that the population discovered is probably in its first year of life and is, therefore, a young population in a full-blown colonization process.
The rapid expansion and great densities of the exotic species Limnoperna fortunei, besides the economic damage caused to electric-power plants and irrigation systems, can greatly impact the aquatic ecosystem. Documented impacts suggest that the filtration activity of a dense Limnoperna fortunei population reduces phytoplankton biomass and turbidity levels (promoting prolific macrophyte growth); suppresses zooplankton populations, thus limiting food availability; increases sedimentation rates; and alters contaminant cycling in lentic habitats and large rivers, among other damage (Ricciardi, 1998).
The recent history of the invasion of Limnoperna fortunei, as well as its potential for colonization, has been encouraging studies about the biology of this pest in an effort to identify possible steps for its prevention and control.
Fig. 1 - Map of South America with registered occurrences of the exotic species Limnoperna fortunei and the new occurrence in 2002. | 1,495.2 | 2004-11-01T00:00:00.000 | [
"Biology"
] |
Economic Analysis of Perennial Crop Systems in Dak Lak Province, Vietnam
Dak Lak province, Central Highlands, Vietnam presents an interesting case in perennial crop systems, of which coffee and black pepper are the two premier commodities and contribute a large part to economic growth provincially and at the national level. In recent years, in addition to mono-cropping systems, intercropping systems for diversification have developed quickly. This paper focuses on (1) comparing the economic efficiency of mono-coffee systems (MCSes), mono-pepper systems (MPSes), and coffee and pepper intercropping (CPI) by analyzing startup cost, annual cost, and profits; and (2) identifying the main factors affecting farmers’ decisions to convert their crop systems. The study was carried out by investigating 90 perennial crop samples using the three perennial crop systems (MCSes, MPSes, and CPI) in 2017–2018. Additionally, in-depth interviews and focus group discussion (FGD) methods were applied to collect more information about the operations of each system. Another survey with 37 samples (new plantations) was carried out to compute the startup cost. The findings showed evidence that MCSes had the lowest startup and annual costs, whereas MPSes had the highest costs of the three perennial crop systems. MCSes used less manure or compost in the initial setup and overused chemical fertilizer in annual production. Similarly, MPSes had high pesticide-stimulant costs in the production process to sustain crop development. The study indicated that CPI not only had the highest economic efficiency, but also created the best family employment opportunities of the three systems. Additionally, the study found some social factors that strongly influenced farmers’ decisions to shift their cropping system: These included ethnicity, education, training, and crop failure, in addition to economic factors (profits).
Introduction
Vietnamese agriculture plays an important role in economic growth, providing 20% of national gross domestic product (GDP) [1]. After the Renovation program (known as Doi Moi), Vietnam engaged in international trade and freer investment. Cash crops (annual and perennial crops) were regarded by the government as principal drivers for crop-growing households and the rural population to reduce poverty [1]. In this mix, black pepper and coffee were considered major agricultural products in Vietnam [2]. During the 2000s and the period 2011-2013, the perennial crop growing area increased from 2.2 million to 3.8 million hectares (about 7% per year), and most of this expansion was for coffee and rubber exports [3]. Coffee is one of Vietnam's 10 most important agricultural export products. Dak Lak province is divided into 13 districts, one district-level town, and one city, all involved in perennial crop production. Cu M'gar and Cu Kuin districts are the largest coffee- and pepper-growing regions. Buon Ma Thuot is a central city with favorable conditions (market, transportation, agri-services) and has the longest-established perennial crop system. The three regions have heterogeneous fertility and weather conditions and are suitable for perennial crops. Cu M'gar district, Cu Kuin district, and Buon Ma Thuot city are examined in this study (see Figure 2).
Focus Group Discussions (FGDs)
Three FGDs were conducted, each with 7-8 participants who had production experience. The aim was to collect information on, and understand, farm activities and difficulties in production.
Key Informant Interviews
Both observation trips and in-depth interviews with key people (elderly people, heads of communes, extension workers) were used to collect preliminary information on field situations and to get to know farmers: These provided complementary information.
Household Surveys
Households were selected by a stratified random sampling method across the three perennial systems, MCS, MPS, and CPI. The 86 producers selected provided information for each of the three systems. In fact, the household list yielded 90 samples of data about perennial crop production because some households owned more than one plot. The farms were between 0.5 ha and 2 ha, which is similar to the average area of local farms [10].
Additionally, 37 farms (about 1-3 years old) were surveyed a second time to evaluate the startup cost completely, given the long life spans of coffee and pepper plantations (25 years was typical in this study). The surveys were conducted from December 2017 to April 2018. The perennial crop sample distribution is presented in Table 1.
Data Analysis
In this study, farm profile, cost-return, and comparative analyses were used to examine the differences in economic efficiency, in addition to descriptive statistical analysis (means, percentages, charts, and growth rates). Many indicators, such as production cost, revenue, value added, and profit, indicate which cropping systems have the best economic performance for households [27]. The Kruskal-Wallis and Mann-Whitney methods were applied to test the differences between systems.
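A minimal sketch of how these nonparametric tests could be run with scipy; the per-hectare profit figures are synthetic placeholders, not the survey data:

```python
from scipy import stats

# Hypothetical per-hectare profits (million VND) for the three systems
mcs = [55, 60, 48, 62, 58, 50]
mps = [90, 110, 85, 120, 95, 100]
cpi = [130, 125, 140, 118, 135, 128]

h, p = stats.kruskal(mcs, mps, cpi)        # overall difference test
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise follow-up comparisons
for name, a, b in [("MCS vs MPS", mcs, mps),
                   ("MCS vs CPI", mcs, cpi),
                   ("MPS vs CPI", mps, cpi)]:
    u, pu = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: U = {u:.1f}, p = {pu:.4f}")
```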
Moreover, a binary regression model, one of the most common approaches, was used to examine farmers' decisions between two alternatives [28]. Using socioeconomic characteristics, this model tests the probability of a farmer's decision to change the current system to another. Accordingly [29,30], the binary logistic regression model equation was: ln(Pi / (1 - Pi)) = B0 + B1X1 + B2X2 + ... + BjXj, where Pi = probability of the event occurring, B0 = a constant term, Bj = a coefficient, and Xj = the independent variables.
where gender is a dummy (1 if male, 0 if female); ethnicity is of the head of household (HH) and is a dummy (1 if Kinh, 0 if others); education level is of the HH and is in years; experience is of the HH and is in years; training is a percentage of the HH and is a dummy (1 if yes, 0 if no); related family labor is number of people; lack of water is a percentage of the household and is a dummy (1 if yes, 0 if no); crop failure is a percentage of the household and is a dummy (1 if yes, 0 if no); and profit/cost is a percentage.
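As a sketch of how this specification could be estimated, the snippet below fits the logit on synthetic data with statsmodels; the variable order follows the text, but every value and coefficient is fabricated for illustration only:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 90  # matches the sample count in this study

# Columns: gender, ethnicity, education, experience, training,
# family labor, lack of water, crop failure, profit/cost ratio
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.integers(0, 13, n),
    rng.integers(1, 25, n),
    rng.integers(0, 2, n),
    rng.integers(1, 5, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.uniform(0.2, 2.0, n),
])

# Synthetic decision rule used only to generate a plausible outcome
lin = -1.0 + 0.8 * X[:, 4] + 1.2 * X[:, 7] + 0.9 * X[:, 8]
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())
```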
Information on Farm Households and Perennial Crop Systems in Dak Lak Province
The Characteristics of Perennial Crop Households and Farms

Table 2 describes the characteristics of perennial crop households in this research study. Most of the surveyed households were Kinh people and members of northern ethnicities (Tay, Dao) who were migrants from New Economic Zones [31] or came through other unregulated migration, estimated at 68%. There were 30% female HHs, except for the Ede, where 38% were female HHs. In terms of education and experience, most had finished the 8th grade of Vietnamese schooling, except for some illiterate Ede ethnic households. Farmers had significant experience in perennial crop production (around 10 years), acquired from their parents, neighbors, social media, and extension workers. Additionally, 47% of surveyed farmers stated that they participated in training courses, which are implemented by the local authorities and companies.
Regarding farm size, the total cultivated area was less than 2 ha per household, including perennial crop area (coffee, pepper, cashews) as well as annual crop area (rice, beans, corn). The surveyed area was similar to official reports of the Dak Lak People Committee, 2017 [3,9,32]. In particular, the average perennial crop area was small, about one hectare per family. This was because in the past, the total productive areas were quite large, but families have now shared the land with their children as they grew up, got married, and raised families. Another reason is that most households were migrants from outside the area after the 2000s. It is not easy to own large cultivated pieces of land because of high land prices.
Family members provided labor; in 90% of households, both husband and wife worked full-time on the farm. Some families were supported by their children; other children went to school or became workers in nearby provinces. As a result, most households had to use hired labor, especially in the harvest season. In this study, the average family contribution to farm labor was estimated at 2.23 members.
Financially, about half of the farms observed borrowed from both formal and informal financial organizations, where part of a loan was used for annual investment costs (buying fertilizer, pesticide, and hired labor) and the rest for ongoing costs, building a house, or supporting children's education. In addition, interviewees described their irrigation sources, which play an important role in perennial crop production (i.e., irrigation helps coffee to break flower buds and triggers homogeneous blossoming and cherry development). Growers have faced drought challenges (38% of households) and dying crops (41%), which has created difficulties for sustainable production [33,34]. The results showed that over 90% of interviewees were using well water rather than surface resources such as dams, streams, and lakes.
The profile of the three perennial crop systems is presented in Table 3. For MCSes, the average growing area was estimated at 1.1 hectares. The density reached 958 trees per hectare, which was lower than the technical standard (1,100 trees/ha) [35]. The explanation for the low density was that plantations suffer pest and disease infections (41%), with many trees dying. In the FGDs, the farmers admitted that MCS is a relatively simple system to plant and care for, especially Robusta coffee (over 90% of surveyed households grew Robusta) [18]. However, the yield reached only 2.1 tonnes per hectare because of the high proportion of aging tree stock (17 years in this study) and fluctuations in the weather.
MPSes started initially from residential gardens of very small acreage in past decades. In recent years, they have developed strongly due to favorable prices. However, the average pepper farm was the smallest of all the systems, at 0.8 ha. According to respondents in the survey, instead of renewing an old coffee plantation, they shifted to pepper cultivation to take advantage of good prices in the following two ways: First, pepper was planted in the vacant spaces of old coffee plantations, after which farmers cut down the coffee trees to cultivate MPSes. Second, old or unproductive coffee orchards were removed, and pepper was then planted in the area. This is why pepper density was not higher, at only 1,344 trees/ha. Unfortunately, black pepper is a disease-sensitive crop, and black pepper disease affected about 2,000 ha in 2017 (equal to 13.2% of all plant diseases in the whole country, including foot rot or quick wilt disease, Pollu disease, slow decline or slow wilt, and stunt disease). Nevertheless, the dramatic collapse of coffee and rubber prices as well as high pepper prices encouraged farmers (the Dak Lak People Committee, 2018).
Most MPSes have been planted since the 2000s, so plantations are quite young (7.43 years on average) and give high yields, estimated at 2.3 tonnes/ha. The surveyed data showed that MPSes were cultivated in an unregulated way (in areas not zoned for farming), with more wooden and concrete pillars than live plants. Rubber trees could be used as pillars, while play areas, ponds, and rice fields were taken over to gain more land for pepper. Farmers explained that they wanted pepper plantings to expand quickly and to be propagated commercially from cuttings as soon as possible.
In terms of CPI, as a diversification model it offers economies of diversification, economies of scale, and diversification efficiencies at the farm level [7,35-39]. Moreover, intercropping pepper with other crops generates higher yields than mono-cropping, reduces pest and disease incidence, spreads risk, makes effective use of labor, and mitigates market risks [25,40-43].
In Dak Lak province, CPI was formed from coffee and pepper grown together in the same field; pepper was initially introduced into coffee gardens on trees planted for shade. Over time, this model has been widely and enthusiastically adopted in farmers' plantations. According to the surveyed data, CPI plantings were quite young (7.3 years for pepper, 13 years for coffee, with a plot size of 1.0 ha). The density was estimated at 964 coffee trees and 914 pepper trees per hectare, using two intercropping methods: group planting (a small sub-area of coffee and pepper planted within the orchard) and intersection planting (two, three, or five coffee rows intercropped with one pepper row, with pepper planted at the intersection points of the coffee holes). Additionally, the yield of CPI was estimated at 2.3 tonnes of coffee and 1.8 tonnes of pepper per hectare (Table 3). According to interviewees, high pepper density led to decreased yield due to competition for space and light. Furthermore, many respondents admitted that there were some production difficulties due to the specific techniques required (i.e., irrigation and harvesting).
Input Cost
Because coffee and pepper have a long life span, the cost analysis needs to cover both startup and annual costs, which strongly influence the growth and productivity of perennial crop systems. The specific establishment costs are presented in Table 4. The startup cost was estimated at 38.5 million VND per hectare for MCSes, 147.5 million for MPSes, and 65.3 million for CPI. It comprised several components, such as land preparation, materials, and labor. The surveyed results showed that most plantations were established by replacing old and unproductive coffee trees (95%). The specific startup costs were analyzed as follows. Land preparation focused on cutting trees, ploughing, and cleaning operations to grow new crops. Normally, farmers hire contractors for these activities instead of doing them themselves, and the sale of old coffee trees helped offset part of this cost. Land preparation therefore amounted to 4.0, 4.9, and 3.3 million VND per hectare for MCSes, MPSes, and CPI, respectively (see Table 4).
Materials were the largest component of startup costs, including pillars, hole digging, nursery plants, fertilizer, and pesticide. The study showed that MPSes had higher material costs than MCSes and CPI, at 130.5 million VND per hectare, of which pillar costs (either concrete or wooden) were dominant, making up about 90 million VND per hectare.
According to farmers, pepper could be planted immediately once the pillars were set and grew better than pepper on live supports. In addition, farmers wanted to harvest pepper as soon as possible to take advantage of high market prices, which encouraged them to use concrete and wooden pillars for black pepper.
In contrast, CPI had lower pillar costs than MPSes because live plants were used as supports, propagated by the farmer or purchased at low cost. The disadvantage of CPI was the cost of tending these support plants (at least in the first year after planting the pepper crop), estimated at 15.3 million VND. Concrete and wooden pillars cost around 160,000 VND per pillar, whereas a live plant support cost about 7,000 VND per pillar (exchange rate: 1 USD = 23,020 VND). On this basis, a recommendation for reducing establishment costs in MPSes and CPI is to use live plants instead of wooden or concrete pillars.
Another material cost was nursery stock, which was either propagated by the family or purchased as certified seedlings. According to local authorities, propagated coffee plants were provided to farmers to replace old coffee trees, but there was no comparable policy for new pepper plants. As a result, nursery costs for pepper in MPSes were the highest of the three systems, at 9.6 million VND per hectare.
The next cost was fertilizer, comprising manure and chemical fertilizer. MPSes and CPI required high levels of fertilizer (23 million and 19 million VND per hectare, respectively), whereas MCSes used only 3 million VND per hectare (see Table 4). Coffee farmers used less manure or compost in the startup process, which influenced crop growth quality and productivity [44]. The study found that most farmers applied more manure or compost than chemical fertilizer at establishment. Manure, made from pig, cow, and chicken waste and coffee straw and applied in the first week after planting, was partly farmer-produced (30%) and partly purchased (70%), and was given at 10 to 20 kg per tree.
As for labor costs at planting, labor was used for cleaning, preparing the plantation, digging holes, planting nursery plants, and setting pillars. MPSes had the highest hired labor cost, at 3.2 million VND per hectare, because they needed strong male workers to set the wooden and concrete pillars (see Table 4). According to farmers, setting pillars is labor-intensive and requires men rather than women. In contrast, CPI had higher family labor and lower hired labor costs than MCSes and MPSes, accounting for 18.4 million and 1.3 million VND/ha, respectively. This means that CPI made better use of family labor (both men and women) for planting and plant care rather than setting pillars (as in MPSes), saving on hired labor costs.
The annual cost was 43.6 million VND per hectare for MCSes, 86.7 million for MPSes, and 86.3 million for CPI. Intermediate inputs and labor were the two main components of these costs.
With respect to intermediate costs, MPSes had the highest, at 38.7 million VND per hectare, with MCSes at 18.5 million and CPI at 28.5 million VND per hectare (see Table 5). At the same time, fertilizer, pesticides, and stimulants tended to be overused in production. The surveyed data also showed that fertilizer accounted for 70%, 64%, and 73% of intermediate costs for MCSes, MPSes, and CPI, respectively. In particular, inorganic fertilizer was applied as a key input for coffee and other industrial crops in all three systems. For instance, coffee farmers' use of agro-chemical fertilizer made up 56% of total intermediate costs, whereas fewer farmers reported using organic fertilizer in this model, accounting for only 14% of intermediate costs. The reasons for the small number of farmers using manure were that (1) manure was more expensive than chemical fertilizer, and (2) a large number of old coffee trees needed to be replanted (after about 25 years) [31], which discouraged farmers from investing in manure. For poor farmers, the cost of tree renewal is a serious threat to their livelihoods, so using compost exceeds their investment capabilities [45]. In addition, pesticides and stimulants for MPSes were costly, at 10.4 million VND per hectare (26.8% of intermediate costs (IC)), compared to the others (see Table 5). This was explained by (1) pepper plantations having a high incidence of disease (i.e., foot rot and slow decline, 90%) [6,43]; (2) most plantations dealing with infectious diseases (i.e., 90% of the surveyed households had lost at least ten plants to disease); and (3) the FGD results showing that pepper crops grown on wooden and concrete pillars had worse disease infections. Because households expected pepper to grow quickly to catch a good market, more stimulants were used. However, if pepper prices continued to drop, as coffee prices did some years ago, farmers in the FGDs said they would have to reduce fertilizer and other inputs. According to Ho, 2014, the ratio of output to pesticide reached only 0.13% [46]. In 2017, the pepper-growing area overshot provincial master plans, reaching 150% of the planned area (cited from the head of DARD). This led to a glut on the market and cut pepper prices by half, to just 110,000 VND (4.8 USD) per kilogram compared to the year before. (Source: Authors' own calculations. Notes: 1 interest rate: 10%; 2 the life expectancy of perennial crops was 25 years [31]; amortization of farmer-owned equipment was calculated linearly over 25 years.) For labor, perennial crop systems require high labor inputs [45,46]. Most labor costs were for harvest: MPSes at 39.2 million (45.5% of annual costs) and CPI at 45.8 million VND per hectare (58% of annual costs), both more labor-intensive than MCSes (see Table 5). This is because they required more labor, especially for black pepper (i.e., only 40 kg of fresh pepper per day could be picked, compared with 100 kg of fresh coffee per day). This created labor pressure at harvest, especially during the pepper harvest season, while the harvest window could not be extended because of crop characteristics. Additionally, in CPI the coffee harvest runs from September to November and the pepper harvest from February to April, which facilitates the use of family labor. This system had the greatest number of family labor days, about 213 days over the year (see Table 6). To conclude, the available evidence showed that MCSes incurred the lowest production costs, whereas MPSes had the highest.
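The table notes above mention a 10% interest rate and a 25-year crop life for spreading establishment costs over time. As a hedged illustration (the paper does not spell out the exact annualization formula it used), the sketch below contrasts straight-line amortization with an equivalent annual annuity at 10%, applied to the startup costs reported earlier; the function names are ours.

```python
def straight_line(startup_cost_vnd_m, life_years=25):
    """Spread the establishment cost evenly over the crop's life (linear amortization)."""
    return startup_cost_vnd_m / life_years

def annuity(startup_cost_vnd_m, rate=0.10, life_years=25):
    """Equivalent annual cost of the establishment investment at the given interest rate."""
    factor = rate / (1 - (1 + rate) ** -life_years)
    return startup_cost_vnd_m * factor

# Startup costs per hectare (million VND) reported in the text.
for system, cost in {"MCS": 38.5, "MPS": 147.5, "CPI": 65.3}.items():
    print(f"{system}: straight-line {straight_line(cost):.1f}, "
          f"annuity at 10% {annuity(cost):.1f} million VND/ha/year")
```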
However, MPSes used more intensive farming inputs than technical standards required locally. This will affect sustainability (i.e., health and environment risks), as well as create problems accessing export markets in the future due to high chemical residues in products [32].
The three perennial crop systems differed in economic efficiency, as illustrated in Table 6 by total output, value added, net farm income, profit, and the ratio of profit to intermediate cost.
The output of MCSes, MPSes, and CPI reached about 81 million, 254 million, and 286 million VND per hectare, respectively (where total output equals coffee and/or pepper yield multiplied by coffee and/or pepper price) (Table 6). Net farm income was about 37 million for MCSes, 167 million for MPSes, and 200 million VND per hectare for CPI, whereas the corresponding profit figures were 19 million, 135 million, and 165 million VND per hectare (Table 6).
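To make the arithmetic behind these indicators explicit, the sketch below reconstructs them under stated assumptions: total output is yield times price, as the text defines it; the remaining definitions (value added, net farm income, profit) follow common farm-accounting conventions and are our assumption, since the paper does not spell them out. The numbers in the usage example are illustrative, not the paper's inputs.

```python
def economic_indicators(yield_t_ha, price_per_t, intermediate_cost, hired_labor_cost,
                        family_labor_cost, other_paid_costs=0.0):
    """Hedged reconstruction of the farm-level indicators (all in million VND per hectare).

    total output = yield x price (as stated in the paper); value added, net farm income,
    and profit follow common conventions and are assumptions, not the authors' exact accounting.
    """
    total_output = sum(y * p for y, p in zip(yield_t_ha, price_per_t))
    value_added = total_output - intermediate_cost
    net_farm_income = value_added - hired_labor_cost - other_paid_costs
    profit = net_farm_income - family_labor_cost
    return {
        "total_output": round(total_output, 1),
        "value_added": round(value_added, 1),
        "net_farm_income": round(net_farm_income, 1),
        "profit": round(profit, 1),
        "profit_per_IC": round(profit / intermediate_cost, 2),
    }

# Illustrative CPI-like inputs: 2.3 t coffee and 1.8 t pepper per hectare,
# priced at roughly 35 and 110 million VND per tonne; cost figures are invented.
print(economic_indicators(yield_t_ha=[2.3, 1.8], price_per_t=[35, 110],
                          intermediate_cost=28.5, hired_labor_cost=12.0,
                          family_labor_cost=35.0))
```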
Of the three systems, the indicators for MCSes, including revenue, net farm income, profit, the ratio of profit/IC, and the ratio of profit/family labor days, were lower than those of the others. This is explained by coffee price losses in recent years, with coffee now priced at about one-third of pepper prices, and by aging tree stock leading to declining productivity (Tables 3 and 6). The inefficiency level of mono-cropping, synchronization, and segregation was around 18% [25].
In contrast, CPI had the best performance for the above indicators among the three systems due to the presence of economies of scope for coffee and pepper. For instance, the ratio of profit/IC and the ratio of profit/family labor days of CPI were the highest, 7 and 0.9, respectively.
Factors Affecting Farmers' Decisions on Perennial Crop Practice
The research used binary regression to relate the socioeconomic characteristics of perennial crop farmers to those of farmers using other cropping practices. It examined the probability that a farmer decides to shift from the current cropping system to another, based on socioeconomic factors (see Table 7).
We note that FDT represents the probability of a farmer's decision to transform production. The socioeconomic factors significantly associated with this probability were household ethnicity (3.672), training (2.783), and crop failure (4.278), all of which had a positive relationship with a farmer's decision. This means that Kinh households were more inclined to convert their cropping system (i.e., 60% of surveyed Kinh households planned to convert from current farming to mixed cropping systems, including coffee-pepper-fruit). Higher education levels, more training, and higher rates of crop failure (because of plant diseases) increased the probability of transforming the cropping system. On the other hand, there was a negative relationship between profit/cost and a farmer's decision (−1.273): low profits from farm activity increased the probability of transforming to other systems.
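As a hedged sketch of the kind of estimation described here, the snippet below fits a logistic model of a binary transformation decision on similar socioeconomic factors. The data, variable names, and resulting coefficients are hypothetical; scikit-learn's default regularization also differs from an unpenalized binary regression, and the coefficients quoted above come from the authors' own estimation, not from this code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical survey extract: columns are kinh ethnicity, training, crop failure, profit/IC.
X = np.array([
    [1, 1, 1, 2.0], [1, 0, 0, 5.0], [0, 1, 0, 3.0], [0, 0, 1, 6.0],
    [1, 0, 1, 1.5], [0, 1, 0, 4.0], [1, 1, 1, 2.5], [0, 0, 0, 3.5],
    [1, 0, 0, 7.0], [0, 1, 1, 4.5],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 1, 0, 0])  # 1 = plans to change the cropping system

model = LogisticRegression().fit(X, y)  # default L2 penalty keeps this toy fit stable
for name, coef in zip(["kinh_ethnic", "training", "crop_failure", "profit_to_cost"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
print("intercept:", round(model.intercept_[0], 3))
```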
Discussion
The study found that the startup operations of the three systems required both high financial investment and many man-days. However, MCSes had the lowest cost compared to MPSes and CPI. At the same time, MCS replanting has been supported by local governments through nurseries, which matters for farmers with limited finances. However, spending less on manure or compost when establishing a coffee plantation influenced crop growth, quality, and productivity. MPSes had the highest capital requirement of the three, and the use of concrete and wooden pillars in particular demanded more man-days (and men rather than women), which was likely to reduce women's employment opportunities. Additionally, the use of wooden pillars in MPSes was considered damaging to the environment, as it contributes to deforestation, creating challenges for farmers and local authorities. For future development, wooden and concrete pillars in MPSes should be replaced by live plant supports to make the system sustainable. CPI uses live plants as supports, which not only saves on pillar costs compared to MPSes but is also highly labor-intensive, creating more employment opportunities, including for women, throughout production in operations such as pruning and plant care.
In the production process, MCSes had smaller annual costs than the others because of less hired labor and less pesticide stimulant use. The overuse of chemical fertilizers can have negative environmental impacts [46]. MPSes apparently applied more pesticide and stimulants than required for local technical standards and also more than the others used. This not only increased annual costs but also negatively affected sustainable development (i.e., health and environmental risks), as well as adversely affecting access to export markets due to high levels of chemical residues in products [30]. Both MPSes and CPI had higher costs of production than MCSes, which created difficulties for low-income farmers and the poor in choosing which system was suitable for them. In particular, CPI had the highest labor costs in production compared to the others. As a result, small farms' labor availability needed to be considered in choosing the appropriate perennial crop system.
Conclusions
Perennial crops grow well in favorable regions such as the Central Highlands in general and Dak Lak province in particular, with diversified systems including mono-systems and intercropping systems. In this study, the authors analyzed the economic efficiency of three perennial crop systems, MCSes, MPSes, and CPI, evaluating establishment cost, production cost, and return indicators.
The study found that the startup operations of the three systems required not only a high financial investment, but also many man-days, and MCSes had the lowest cost compared to MPSes and CPI.
In the production process, MCSes had smaller annual costs than the others because of less hired labor and lower pesticide and stimulant use. However, there was overuse of chemical fertilizer. Similarly, MPSes suffered from higher pesticide and stimulant use than local technical standards require and than the other systems used.
Economic analysis found that CPI was economically the most viable of the three systems. Additionally, CPI created higher profit per family labor unit than other systems. In addition to economic factors, social factors also affected the decision process in choosing suitable farming systems. These social factors included education, training, and farm status.
However, due to limitations of time and expertise, the sample size could not be extended, which would have produced more accurate results. Additionally, the study did not analyze the economic efficiency of each mono-cropping system in depth or compare the economic efficiency of each perennial crop system over two years. A larger data set needs to be examined in the next study, and a social analysis of the three systems will be carried out in future work. | 6,434.4 | 2018-12-24T00:00:00.000 | [
"Economics",
"Agricultural and Food Sciences"
] |
Development of Mathematics Learning Tools Based on the Van Hiele Model to Improve the Spatial Ability and Self-Concept of Students of MTsS Ulumuddin
This research aims to describe: (1) the validity of the learning tools developed based on the Van Hiele model, (2) the practicality of the learning tools developed based on the Van Hiele model, (3) the effectiveness of the learning tools developed based on the Van Hiele model, (4) the improvement of students' spatial ability when using the tools developed based on the Van Hiele model, and (5) the improvement of students' self-concept when using the tools developed based on the Van Hiele model. This was development research using the 4-D model, which consists of four stages: defining, designing, developing, and disseminating. Field trials were conducted in class VIII of MTsS Ulumuddin: trial 1 in class VIII-2 and trial 2 in class VIII-3. The development results showed that: (1) the learning tools developed were valid, with total average validity scores of 4.50 for the RPP, 4.30 for the Student Book, 4.30 for the Teacher Book, and 4.40 for the LAS; (2) the learning tools were practical, as shown by students' activities remaining within the prescribed tolerance limits; (3) the learning tools were effective, as shown by classical learning completeness and achievement of at least 80% of the learning objectives; (4) the average achievement of students' spatial ability increased from 3.15 in trial I to 3.51 in trial II; and (5) the average achievement of students' self-concept increased from 3.03 in trial I to 3.16 in trial II.
Introduction
Learning tools play an important role in the implementation of the learning process. As Sanjaya (2009) [1] concluded, through a good and accurate planning process teachers are able to predict how much success will be achieved, so the possibility of failure can be anticipated by each teacher. Moreover, the learning process becomes directed and organized, and teachers can use time effectively to ensure its success.
The aim of developing learning tools is to improve learning and produce a new product. In addition, it aims to produce learning tools that can solve learning problems in the classroom, because no single learning source can meet all the needs of the learning process. In other words, teachers need to consider the learning objectives first, especially improving students' mathematical abilities, including spatial ability and self-concept, when selecting learning tools.
The National Academy of Science (2006) [2] suggests: "Spatial thinking serves three purposes. It has (1) a descriptive function, capturing, preserving, and conveying the appearances of and relations between objects, (2) an analytic function, enabling an understanding of the structure of objects and (3) an inferential function, generating answers to questions about the evolution and function of objects". This means that spatial thinking has three purposes: describing objects, analyzing their structure, and inferring answers about their function. Each student should try to improve the spatial abilities and perceptions that are useful for understanding geometric relations and features in order to solve mathematical problems and problems of daily life.
Students have difficulty visualizing when solving geometry problems. This is supported by an interview with one of the teachers at MTsS Ulumuddin, M. Nur, S.Pd, who said that the students still had difficulties in understanding problems related to solids. These difficulties include visualizing images and forming the right perception of images or geometry problems.
Spatial ability also needs to be improved in the context of everyday life. This refers to the opinion of Barke and Engida (2001) [3], who suggested that spatial ability plays an important role not only in success in mathematics and other lessons but also in various professions. The National Academy of Science (in Syahputra, 2013) [4] states that many fields of science require spatial ability, including astronomy, education, geography, geosciences, and psychology. These statements show how important it is for students to master spatial ability, but the reality in the field is contrary to what is expected: students' spatial abilities are still low and problematic.
In addition to spatial ability, there are psychological aspects that contribute to a person's success in completing tasks well. Point five of the Depdiknas (2006) [5] objectives states that learners are expected to appreciate the use of mathematics in studying problems and to show resilience and confidence in solving spatial problems. This suggests that mathematics learning also emphasizes mathematical dispositions, including students' self-concept.
Self-concept is a person's perspective on him/herself, on his/her weaknesses and strengths, including in planning the vision and mission of life. According to Seifert and Hoffnung (in Desmita, 2010) [6], self-concept is an understanding of the self or an idea about the self. Accordingly, self-concept is the basis for being able to adapt and is formed through feedback from other individuals.
Various learning models have been developed by practitioners and educational researchers to solve educational problems in the field, such as the Van Hiele model. To improve students' mathematical spatial ability and self-concept, a proper learning model is needed so that learning stimulates students to learn independently, creatively, and more actively in following the learning activities.
According to Van Hiele, the three main elements in the teaching of geometry, namely time, teaching materials, and teaching methods, can raise students' thinking ability to higher levels if they are arranged in an integrated way (Suherman, 2003) [7]. Some studies also reinforce the use of this model. Research conducted by Ahdhianto (2016) [8] concluded that "the overall score of students after using the Van Hiele model based learning module of plane geometry was 82.8 which was in good category and meets the Minimum Criteria of Mastery Learning (KKM)". Further research by Yadil (2009) [9] concluded that the learning scenario with the Van Hiele model can improve students' understanding.
Based on the problems described above, the purposes of this research were: 1) to develop learning tools based on the Van Hiele model that fulfill the validity, practicality, and effectiveness criteria, and 2) to describe the improvement of students' spatial ability and self-concept when using the learning tools developed based on the Van Hiele model.
Research Method
This research falls into the category of Research and Development. The development model used was the Thiagarajan 4-D model, which consists of four development stages.
Research Subjects and Objects
The subjects of this study were students of classes VIII-2 and VIII-3 of MTsS Ulumuddin in the 2016/2017 academic year, each class consisting of 26 students. The object of the study was a mathematics learning tool for MTs class VIII oriented toward the developed Van Hiele model.
Learning Tools Development
The learning tools developed in this research were the Lesson Plan (RPP), Teacher Handbook (BPG), Student Book (BS), and Student Activity Sheet (LAS), together with the research instrument, the Mathematics Spatial Ability Test (TKSM). Learning tools development was carried out using the Thiagarajan 4-D development model [10].
Instruments and Data Collection Techniques
The instruments used in this study included the instruments for assessing the quality of learning tools i.e. aspects of validity, practicality and effectiveness. Instruments used were observation sheets, questionnaires, and tests.
The Validity of Learning Tools
Learning tools are said to be valid if they meet the criteria of content validity and construct validity. Content validation was carried out by five validators, who gave a score from 1 to 5 in each assessment column covering four aspects: (1) format, (2) language, (3) content, and (4) illustrations. The overall expert assessment was then processed by calculating the average score, Va, the validity score of the learning tools, which was compared against the content validity criteria.
The Van Hiele model-based learning tools meet the expected content validity if the validators' average assessment of all learning tools is valid or highly valid. If not, the validation activities are repeated until learning tools that meet content validity are obtained.
Next, the construct validity of the spatial ability test and the self-concept questionnaire was examined before they were used in the field trials. The spatial ability test items and self-concept questionnaire items were tested outside the research subjects to measure validity and reliability. To measure the validity of each item, the product-moment correlation formula (Arikunto, 2012) [11] was used:

$$r_{xy} = \frac{N\sum xy - (\sum x)(\sum y)}{\sqrt{\left(N\sum x^{2} - (\sum x)^{2}\right)\left(N\sum y^{2} - (\sum y)^{2}\right)}}$$

where $r_{xy}$ is the correlation coefficient between variables x and y, $\sum xy$ is the sum of the products of x and y, x is the item score, y is the total score, and N is the number of subjects. Furthermore, to calculate the reliability coefficient of the test items, the Cronbach's alpha formula (Arikunto, 2012) [11] was used:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum \sigma_{i}^{2}}{\sigma_{t}^{2}}\right)$$

where k is the number of items, $\sum \sigma_{i}^{2}$ is the sum of the item score variances, and $\sigma_{t}^{2}$ is the variance of the total scores.
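Assuming the standard forms of the product-moment correlation and Cronbach's alpha given above, a minimal Python sketch of the item analysis could look as follows; the score matrix is invented for illustration and is not the study's data.

```python
import numpy as np

def item_total_correlation(scores):
    """Pearson product-moment correlation of each item with the total test score."""
    totals = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, j], totals)[0, 1] for j in range(scores.shape[1])])

def cronbach_alpha(scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Invented scores: 8 students x 5 essay items, each item scored 0-4.
scores = np.array([
    [3, 4, 2, 3, 4], [2, 2, 1, 2, 3], [4, 4, 3, 4, 4], [1, 2, 1, 1, 2],
    [3, 3, 2, 3, 3], [2, 3, 2, 2, 2], [4, 3, 3, 4, 3], [1, 1, 0, 2, 1],
])
print("item-total correlations:", item_total_correlation(scores).round(2))
print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))
```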
The Practicality of Learning Tools
The practicality of the learning tools was assessed based on the validators' assessment and the implementation of the learning tools. The validator assessment criterion is met if all validators state on the validation sheet that the learning tools can be used with "a few revisions" or "no revision". Furthermore, the implementation of the learning tools was observed by observers, who gave a score from 1 to 5 on each implementation aspect of the Lesson Plan (RPP), Teacher Handbook (BPG), Student Book (BS), and Student Activity Sheet (LAS). The average total implementation score, k, was then categorized as a percentage of learning implementation [12]. The implementation criterion is met if the average total score is at least in the Good category.
The Effectiveness of Learning Tools
The effectiveness of the instructional tools was assessed based on: (1) the completeness of students' learning outcomes in terms of spatial ability and self-concept, and (2) students' responses to the learning components and tools.
Completeness of student learning outcomes was determined from the results of the spatial ability test, an essay test consisting of five questions. The effectiveness criterion based on classical learning completeness is met if ≥ 85% of students obtain a score ≥ 2.67 on the four-point scale.
Student activity was checked based on the observer's average assessment of all observed activity aspects. The effectiveness criterion based on student activity is achieved when the activities fall within the ideal time-percentage tolerance (Sinaga, 2007) [13].
Student responses were gathered through a questionnaire. The effectiveness criterion based on student responses is met if ≥ 80% of subjects give a positive response [13] on all aspects asked about the learning tools and their implementation.
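Two of the three effectiveness criteria above reduce to simple threshold checks; the sketch below encodes them with the thresholds quoted in the text (the activity-time criterion is omitted because its category-specific tolerance bands are not given here). The input lists are invented.

```python
def classical_completeness(scores, passing=2.67, threshold=0.85):
    """Criterion: at least 85% of students score >= 2.67 on the four-point scale."""
    share = sum(s >= passing for s in scores) / len(scores)
    return share, share >= threshold

def positive_response_rate(responses, threshold=0.80):
    """Criterion: at least 80% of students respond positively to the learning tools."""
    share = sum(responses) / len(responses)
    return share, share >= threshold

# Invented data for a class of 26 students.
scores = [3.1, 2.5, 3.4, 2.9, 3.8, 2.7, 3.0, 2.4, 3.6, 3.2, 2.8, 3.3, 3.0,
          2.9, 3.5, 2.6, 3.1, 3.7, 2.95, 3.2, 2.8, 3.0, 3.4, 2.9, 3.1, 3.3]
responses = [True] * 24 + [False] * 2
print(classical_completeness(scores))
print(positive_response_rate(responses))
```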
Description of Learning Tools Development Stage
Learning tools development was completed using the Thiagarajan Four-D development model, with details as follows:
Define
The aim of teaching could be identified by first analyzing instructional needs. The process of identifying instructional needs began with identifying problems in the field. Based on the diagnostic tests given, students' spatial ability and self-concept were still low. Observations and interviews with teachers and students indicated that the cause was that students were not accustomed to exercising spatial ability and self-concept in learning activities. This was also supported by the fact that teachers had not been able to develop learning tools focused on developing students' spatial ability and self-concept. Based on these findings, the main objective in developing these tools was to develop and improve students' spatial ability and self-concept.
Design
The main activity of this stage was to write the initial draft of the learning materials, including the Lesson Plans (RPP), Teacher Handbook (BPG), Student Book (BS), Student Activity Sheet (LAS), a learning outcomes test, and questionnaires to measure students' spatial ability and self-concept. The instructional materials were based on KI, KD, and indicators for cube and block material and were adjusted to the purpose of training and improving students' spatial ability and self-concept. Based on these objectives, five essay questions and a self-concept questionnaire consisting of 30 items were prepared.
Develop
At this stage an evaluation of the learning tools that had been developed was carried out. Formative evaluation was done in two stages: (1) evaluation by experts and practitioners, and (2) field trials. The goals were to identify weaknesses and improve the tools that had been developed.
The result of the expert and practitioner evaluation, in the form of a content validity assessment, showed that all learning tools met the validity criteria, with average content validity scores of RPP = 4.50, LAS = 4.40, BPG = 4.30, and BS = 4.30.
All spatial ability test items and self-concept questionnaire items met the validity and reliability criteria. Instrument reliability was used to judge the consistency of the test results. After calculation, the reliability of the spatial ability test was 0.69 (high category) and that of the self-concept questionnaire was 0.87 (very high category).
The first field trial (trial I) was conducted to examine the practicality and effectiveness of the learning tools. In trial I, the learning tools did not yet meet all the practicality and effectiveness criteria, so a revision and a second trial were needed.
Revisions were made based on the weaknesses of the learning tools found in trial I: the Lesson Plans were revised with respect to learning time allocation, and the Student Book and Student Activity Sheet with respect to the materials taught. After the revisions were completed, a further field trial (trial II) was conducted to review the practicality and effectiveness of the learning tools, as well as the improvement of students' spatial ability and self-concept between trials.
Disseminate
The dissemination stage is the final stage in the 4-D development model. At this stage, the learning tools that were trialed in the research class would be used again by comparing the developed learning tools (experimental class) with the tools usually used by the mathematics teachers at MTsS Ulumuddin (control class). However, this stage was not carried out by the researchers due to limitations of time, cost, and energy, so it is not discussed in depth.
Description of the First Trial Results
The practicality criteria of the learning tools based on the validator assessment were met, since all validators assessed that the developed learning tools can be used with "a few revisions" or "no revision". The implementation criterion was also met: averaged over all learning meetings, implementation reached 81.46% (good category). Based on these descriptions, the learning tools developed meet the practicality criteria.
The spatial ability test results showed that 21 of 26 students (80.77%) achieved a score ≥ 2.67. The percentage of classical completeness of spatial ability in trial I is shown in Figure 1; since classical completeness was 80.77%, it did not meet the expected criterion for classical learning outcomes. The next effectiveness criterion is based on student activity. The average percentages of time the students spent on each activity category over the 4 meetings were 23.44%, 14.76%, 34.38%, 25.09%, and 2.08%. Each average percentage was obtained by averaging the percentage frequency of each activity category over the 4 meetings. The average percentages of time spent on the activity categories are shown in Figure 2; they met the criterion of achieving the ideal time percentage.
The effectiveness criterion based on student responses was also met, because 94.56% of students (≥ 80%) gave positive responses to the learning components and implementation.
Overall, the learning tools developed met the validity and practicality criteria but did not meet the effectiveness criteria, because classical completeness of student learning outcomes had not been achieved. Thus, the learning tools were revised and trial II was carried out.
Description of the Second Trial Results
The practicality criteria of the learning tools based on the validator assessment were met, in accordance with the description for trial I. The implementation of the learning tools in the second trial was also adequate: averaged over all learning meetings, implementation reached 85.14% (good category). Based on these descriptions, the learning tools developed met the practicality criteria.
The spatial ability test results showed that 24 of 26 students (92.31%) achieved a score ≥ 2.67. The percentage of classical completeness of spatial ability in trial II is presented in Figure 3; trial II therefore met the expected criterion for classical learning completeness.
Furthermore, regarding the effectiveness criterion based on student activity, the average percentages of time students spent on each activity category over the 4 meetings were 22.74%, 14.93%, 34.55%, 26.04%, and 1.74%. Each average percentage was obtained by averaging the percentage frequency of each activity category over the 4 meetings. The average percentages of time spent on the activity categories are shown in Figure 4; they fulfilled the criterion of achieving the ideal time percentage.
The effectiveness criterion based on student responses was also met, because 95.80% of students (≥ 80%) gave positive responses to the learning components and implementation. Overall, the learning tools developed based on the Van Hiele model met the validity, practicality, and effectiveness criteria.
The Improvement of Students' Spatial Ability
The improvement in students' spatial ability was observed from the increase in the average spatial ability post-test score from trial I to trial II. The students' average post-test score was 3.15 in trial I and 3.51 in trial II, an increase of 0.36, or 9%. The improvement was also observed in the average score of each spatial ability indicator from trial I to trial II: the ability to recognize that the size and shape of an object remain constant despite a different stimulus increased by 0.48; the ability to imagine a change in the shape of an object or a change in the order of its parts increased by 0.35; the ability to think quickly and accurately about rotations of 2-dimensional or 3-dimensional objects increased by 0.23; the ability to understand the shape of an object or part of an object and the relationships between objects increased by 0.12; and the ability to recognize the structure or shape of space and accurately imagine a change of perspective of a given object increased by 0.27. Details are presented in Figure 5. Based on Figure 5, it can be concluded that spatial ability increased from trial I to trial II, both in the total average score and in the average score of each indicator, through the learning tools developed based on the Van Hiele model. This suggests that the use of these learning tools has an impact on improving students' spatial ability. This is in line with Syahputra's research (2013) [4], which states that "there is a change in students' spatial ability (KS) both in the good category school as well as in the average category schools. In the good category schools, the average spatial ability (KS) of students who received realistic mathematics learning increased by 0.55 while the conventional learning increased by 0.22. In average category schools, the average spatial ability (KS) of students who received realistic mathematics learning increased by 0.19 while students who received conventional learning increased by 0.16". Similarly, research by Syarah et al. (2013) [14] showed a difference in spatial improvement between students who received problem-based learning and students who received conventional learning. From the averages of both groups, students given problem-based learning had higher spatial abilities than students given conventional learning, indicated by an average spatial gain of 0.39 for problem-based learning compared with only 0.19 for regular learning.
Figure 5. Average spatial ability for each indicator in trial I and trial II.
The Improvement of Students' Self-Concept Ability
The improvement in students' self-concept was observed from the increase in the average self-concept post-test score from trial I to trial II. The students' average post-test score was 3.03 in trial I and 3.16 in trial II, an increase of 0.13, or 3.25%. Improvement was also seen in each self-concept indicator: (1) students' knowledge of mathematics; (2) students' opinion about mathematics; and (3) students' assessment of how much they love mathematics. Details are presented in Figure 6. Based on Figure 6, it can be concluded that students' self-concept increased from trial I to trial II, both in the total average score and in the average score of each indicator, through the learning tools developed based on the Van Hiele model. This shows that the use of these learning tools has an impact on improving students' self-concept. For the opinion and assessment indicators, the improvement is largely determined by the achievement of the knowledge indicator.
Conclusion
Based on the results of the data analysis and discussion in this study, the following conclusions are presented: a) The learning tools developed are in the valid category, with total average validity scores of 4.50 for the RPP, 4.30 for the Student Book, 4.30 for the Teacher Book, and 4.40 for the LAS; the spatial ability test items and the self-concept questionnaire are also in the valid category. b) The learning tools developed are practical; the practicality criteria are met, as seen from student activity, where the percentage of student activity time met the criterion of achieving the ideal time percentage. c) The learning tools developed based on the Van Hiele model meet the effectiveness criteria, shown by: (1) classical learning mastery of 92.31% in trial II; (2) achievement of at least 80% of the learning objectives; and (3) more than 80% of students responding positively to the Van Hiele model learning tools.
d) Students' spatial ability improved when using the learning tools based on the Van Hiele model on cube and block material: the average achievement of students' spatial ability increased from 3.15 in trial I to 3.51 in trial II, and the average of each spatial ability indicator also increased from trial I to trial II. e) Students' self-concept improved when using the learning tools developed based on the Van Hiele model on cube and block material: the average achievement of students' self-concept increased from 3.03 in trial I to 3.16 in trial II, and the average of each self-concept indicator also increased from trial I to trial II.
Suggestions
Based on the results of the research and the conclusions above, the following suggestions can be made: a) The resulting learning tools still need to be piloted in other schools under various conditions to obtain truly qualified learning tools (as a continuation of the dissemination stage in the 4-D development model). b) Teachers are advised to make their own learning tools, in the form of a teacher book and a student book, in accordance with the characteristics of their students, since the teacher knows them best; such learning tools can then support the teacher in improving student ability. c) In forming discussion groups, teachers are advised to pay attention not only to heterogeneity but also to the comfort of the students in the group. d) To improve students' spatial skills, it is recommended that teachers focus on improving students' skills in orientation and relations. e) To improve students' self-concept, teachers are advised to focus on improving the indicator of students' knowledge of mathematics. | 5,948.6 | 2017-11-13T00:00:00.000 | [
"Mathematics",
"Education"
] |
Optimizing Coronary Computed Tomography Angiography Using a Novel Deep Learning-Based Algorithm
Coronary computed tomography angiography (CCTA) is an essential part of the diagnosis of chronic coronary syndrome (CCS) in patients with low-to-intermediate pre-test probability. The minimum technical requirement is 64-row multidetector CT (64-MDCT), which is still frequently used, although it is prone to motion artifacts because of its limited temporal resolution and z-coverage. In this study, we evaluate the potential of a deep-learning-based motion correction algorithm (MCA) to eliminate these motion artifacts. 124 64-MDCT-acquired CCTA examinations with at least minor motion artifacts were included. Images were reconstructed using a conventional reconstruction algorithm (CA) and a MCA. Image quality (IQ), according to a 5-point Likert score, was evaluated per-segment, per-artery, and per-patient and was correlated with potentially disturbing factors (heart rate (HR), intra-cycle HR changes, BMI, age, and sex). Comparison was done by Wilcoxon-Signed-Rank test, and correlation by Spearman’s Rho. Per-patient, insufficient IQ decreased by 5.26%, and sufficient IQ increased by 9.66% with MCA. Per-artery, insufficient IQ of the right coronary artery (RCA) decreased by 18.18%, and sufficient IQ increased by 27.27%. Per-segment, insufficient IQ in segments 1 and 2 decreased by 11.51% and 24.78%, respectively, and sufficient IQ increased by 10.62% and 18.58%, respectively. Total artifacts per-artery decreased in the RCA from 3.11 ± 1.65 to 2.26 ± 1.52. HR dependence of RCA IQ decreased to intermediate correlation in images with MCA reconstruction. The applied MCA improves the IQ of 64-MDCT-acquired images and reduces the influence of HR on IQ, increasing 64-MDCT validity in the diagnosis of CCS. Supplementary Information The online version contains supplementary material available at 10.1007/s10278-024-01033-w.
Introduction
The European Society of Cardiology (ESC) recommends coronary computed tomography angiography (CCTA) as the diagnostic method of choice for patients with suspected chronic coronary syndrome (CCS) with a low to intermediate pre-test probability (PTP) [1]. Currently, 64-row multidetector single-source CT (64-MDCT) is considered the minimum requirement for proper CCTA imaging [2]. The 64-MDCT systems have been shown to be a valid and accurate diagnostic tool, even when compared to ≥ 128-MDCT or dual-source CT (DSCT) [3,4]. In addition, 64-MDCT is widely available, making it an indispensable diagnostic tool in patients with CCS [1,5,6]. Although 64-MDCT can provide perfect images under optimal conditions, its limited temporal resolution makes it susceptible to motion artifacts in patients with high or variable heart rates (HR), especially in the right coronary artery (RCA) [1,7,8]. Several approaches have been proposed to reduce motion artifacts in 64-MDCT, both in terms of hardware modification (gantry rotation time, half scan rotation, high-pitch imaging, prospective (PGI) and retrospective electrocardiographic (ECG)-gated imaging) and HR control (beta-blockers or ivabradine) [3,4,8,9]. However, these approaches have limitations either due to physical limits or contraindications [9,10]. For further image enhancement, novel software-based approaches in the form of motion correction algorithms (MCA) offer a suitable solution for motion-disturbed images.
Several MCA based on different technical approaches have been introduced in the last decade [11]. However, only a few MCA have proven their clinical utility and are commercially available [12,13]. Furthermore, the clinical applicability of most of these MCA is limited mainly because of either vendor-specificity, high effective dose, poor performance at high or irregular HR, or long computation time [11,14-16]. The latest MCA variants are based on deep-learning networks [11]. In several phantom trials and small patient studies, they have shown remarkable results in improving the image quality (IQ) of motion-impaired images in an acceptable computation time [11,14,17]. However, clinical data for these deep learning-based MCA are still scarce. The aim of this study was to evaluate the performance of a recently introduced deep learning-based MCA (Deep PAMoCo) on IQ in a large set of real-world patient CCTA data sets and to demonstrate the potential clinical utility of this MCA [15].
Image Data, Algorithm, and CT-Scanning
124 CCTA data sets of consecutive patients scanned with the same 64-MDCT system and the same CT protocol were retrieved from the Picture Archiving and Communication System and included in this study. The clinical indication for CCTA was according to clinical guidelines [2]. Original image data were anonymized, and patients are not identifiable. Consecutive patient data in which at least one vascular segment was affected by motion artifacts were selected for the evaluation with a conventional reconstruction algorithm (CA) and the MCA. Since the MCA is applied to already reconstructed image data, no raw data is required. The MCA can, therefore, be used on different CT systems without any limitations.
The function of the applied MCA is based on partial angle reconstructions (PAR) computed with a motion vector field (MVF) generated by a Deep Neural Network (DNN). After an initial reconstruction of the CCTA images, the position of the coronary arteries is determined using a segmentation software. PAR of the coronary arteries are created from these data by forward- and back-projecting them. PAR are characterized by a very high temporal resolution, virtually freezing the individual PAR. The PAR are then mapped by an MVF to the same motion state. MVF are generated by a DNN and compute a motion vector for each PAR. Finally, the motion-corrected PAR are re-inserted into the original reconstruction, resulting in a motion-compensated image. More detailed technical information about the MCA can be found elsewhere [15].
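What follows is a deliberately simplified, conceptual sketch of the partial-angle idea described above; it is not the authors' Deep PAMoCo implementation. Each partial-angle reconstruction (PAR) is mapped back to a common motion state by a per-PAR displacement (taken as given here, whereas the real algorithm estimates a motion vector field with a deep neural network) and the aligned PARs are then recombined.

```python
import numpy as np
from scipy.ndimage import shift

def compensate(pars, motion_vectors):
    """Map each partial-angle reconstruction (PAR) to a common motion state and combine.

    pars:            list of 2-D arrays, one per partial angular range
    motion_vectors:  list of (dy, dx) displacements bringing each PAR to the reference state;
                     in the real algorithm these are predicted by a deep neural network.
    """
    aligned = [shift(par, vec, order=1, mode="nearest") for par, vec in zip(pars, motion_vectors)]
    return np.sum(aligned, axis=0)

# Toy example: a small bright "vessel" drifting two pixels per PAR.
base = np.zeros((32, 32))
base[14:18, 14:18] = 1.0
pars = [shift(base, (0, 2 * d), order=1, mode="nearest") / 4 for d in range(4)]
motion_vectors = [(0, -2 * d) for d in range(4)]  # undo the drift
print("uncorrected error:", np.abs(np.sum(pars, axis=0) - base).sum().round(2))
print("corrected error:  ", np.abs(compensate(pars, motion_vectors) - base).sum().round(2))
```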
The scanning protocol included calcium-scoring, test-bolus-tracking, and CCTA. CCTA imaging was performed using a 64-MDCT (Siemens Definition 64, Siemens Healthineers, Erlangen, Germany) with a gantry rotation time of 0.33s, a collimation of 64 × 0.6mm, an automatic, weight-adjusted tube voltage between 100 and 120kVp, and automatic exposure control. Acquisition was performed with PGI. PGI was performed at a maximum HR ≤ 80 beats per minute (bpm) during an R-R interval of 60-80% in diastole (average 68%). Low-dose calcium-scoring was performed before CCTA to estimate the patient's calcium load. A calcium score of 1000 was considered the upper limit for CCTA. Patients with a calcium score >1000 were referred to the catheter laboratory. CCTA was performed by trained staff. Beta-blockers were administered orally or i.v. if HR was ≥65bpm after checking contraindications. Sublingual nitroglycerine was administered 2-3min before the examination. For the examination, patients were placed in the supine position, head first. The field of view (FOV) was estimated considering the size of the heart (approximately from 2cm below the carina to the lower edge of the apex cordis). Contrast medium (CM; Solutrast 370, Bracco, Milan, Italy) was administered via an antecubital intravenous line at a flow rate of 6ml/s followed by 30ml of saline at the same flow rate. Body mass index (BMI), age, sex, mean HR, and intra-cycle HR changes (ΔHR) were registered.
Image Quality Assessment
Images were evaluated by a radiology resident trained for the evaluation of CCTA images. IQ was assessed per-segment, per-artery (right coronary artery = RCA, left anterior descending artery = LAD, left circumflex artery = LCx), and per-patient. Per-segment assessment was performed in regard to the Society of Cardiovascular Computed Tomography guidelines for the interpretation and reporting of CCTA [18] using a 17-segment approach. A minimal vessel diameter of 2mm was chosen for quality evaluation. IQ was determined using a 5-point Likert score in terms of image evaluability. The 5-point Likert score provides accurate information on IQ without being overwhelming. Evaluability was determined based on image readability and the amount of motion artifacts according to previous studies [19]: 1 = unacceptable; 2 = below average; 3 = average; 4 = above average; 5 = excellent (Table 1). The total amount of motion artifacts was assessed by counting the motion artifacts per-artery (RCA, LAD, LCx) by identifying typical patterns of motion artifacts as "crescents," "tails," and "horns" (Fig. 1A). MCA-inserted artifacts were assessed by identifying typical patterns as "steps" or vessel "duplications" (Fig. 1B).
Results
CCTA data sets of 124 patients were evaluated (Table 2). Of these, eleven data sets were excluded due to severe stack transition, vessel calcifications, and medical devices (stents and pacemakers) producing massive artifacts. BMI was missing in 20 patients; sex, age, ΔHR, and mean HR in nine patients. IQ of 113 patients, 333 arteries, and 3019 segments was evaluated (Fig. 2 and 3; Supplementary Table 1). Per-patient, unacceptable or below-average images decreased from 9.65% to 4.39%, and above-average or excellent images increased from 67.54% to 77.2%. Per-artery, the RCA improved significantly. Here, the percentage of unacceptable or below-average images decreased from 36.36% to 18.18%, and above-average or excellent images increased from 31.82% to 59.09%. Per-segment, RCA segments 1 and 2 benefited from the MCA. Unacceptable or below-average images decreased from 33.63% to 22.12% and from 71.68% to 46.9%, respectively, while above-average or excellent images increased from 44.25% to 54.87% and from 19.47% to 38.05%, respectively. The total number of artifacts was determined per-artery (Fig. 4; Supplementary Table 2). We observed a decrease in motion artifacts from 3.11 ± 1.65 to 2.26 ± 1.52 in the RCA. There was no significant decrease in motion artifacts in the LAD or LCx. In 11 out of 3019 segments, the IQ deteriorated due to MCA-inserted artifacts, especially in RCA segments 1 and 3. These artifacts mostly resembled vessel "duplications" or "steps". The correlation between IQ and BMI, age, mean HR, ΔHR, and sex was tested per-artery using Spearman's Rho (Fig. 5; Supplementary Table 3). Mean HR and IQ correlated significantly negatively in all three coronary arteries. The correlation was strong for RCA reconstructed with CA and intermediate for MCA. Correlation was weak for LAD and LCx.
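As a hedged illustration of the statistics named in the methods (a Wilcoxon signed-rank test for paired CA versus MCA scores and Spearman's rho for the association between heart rate and IQ), the following scipy sketch uses invented Likert and heart-rate values, not the study data.

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

# Invented paired per-patient Likert scores (1-5) for the same examinations.
iq_ca  = np.array([3, 2, 4, 3, 2, 5, 3, 4, 2, 3, 4, 3])
iq_mca = np.array([4, 3, 4, 4, 3, 5, 4, 4, 3, 4, 4, 4])
stat, p = wilcoxon(iq_ca, iq_mca)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")

# Invented mean heart rates (bpm) for the same patients.
hr = np.array([72, 80, 58, 66, 85, 55, 70, 60, 83, 68, 62, 65])
rho_ca,  _ = spearmanr(hr, iq_ca)
rho_mca, _ = spearmanr(hr, iq_mca)
print(f"Spearman rho (HR vs IQ): CA={rho_ca:.2f}, MCA={rho_mca:.2f}")
```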
Discussion
In this study, we evaluated the performance of a novel deep learning-based MCA by comparing IQ of 64-MDCT-acquired CCTA images. As in previous studies, the RCA and its segments 1 and 2 were found to be most prone to motion artifacts, as these are the most motile vessel segments [7]. MCA reconstruction had the greatest effect in these segments in improving IQ and reducing the total number of motion artifacts. Baseline IQ of LAD and LCx per-artery and per-segment was initially much better; MCA-improvement of LAD and LCx was negligible. On the per-patient level, we observed an overall improvement of IQ. By evaluating potential disturbers, we found a significant negative correlation between mean HR and IQ for RCA, LAD, and LCx in CA- and MCA-reconstruction. However, the influence of mean HR was strong in the CA-reconstruction and intermediate in the MCA-reconstruction of the RCA. Correlation between mean HR and IQ of LAD and LCx was weak in both CA and MCA. BMI, age, sex, and ΔHR had no significant impact on IQ.
Recently, various MCA-based approaches have been published to mitigate motion artifacts. Two vendor-specific MCA are currently available (2023): SnapShot Freeze (SSF) 1 and its successor SSF2 (GE Healthcare, Waukasha, WI, USA) [13,20]. In the clinical setting, SSF1 improved IQ and interpretability in ≥ 64-MDCT independent of HR and BMI [21,22]. In addition, good IQ was maintained even at high HR, allowing wider application of PGI leading to a lower total effective dose [21,23]. Therefore, SSF1 is considered a useful tool to assist CCTA in CCS diagnosis [12]. Positive effects of SSF2 on IQ are even more profound compared to its predecessor [13]. Unfortunately, both MCA are vendor-specific and only applicable on vendor-specific CT scanners [17]. Besides SSF1 and 2, there have been several attempts to develop even more effective and widely applicable MCA [11]. However, most of these suffer from limitations due to high effective dose, poor performance at high or irregular HR, or long computation time [11,16,24,25]. The recently introduced deep learning-based MCA might be a solution. Deep learning-based MCA can be applied post-acquisitionally without the need for raw data [26]. By this, they have a very short computation time and can be used vendor-independently [11,15]. However, larger studies on the performance of deep learning-based MCA are still scarce. Therefore, their clinical applicability cannot yet be assessed although phantom studies are promising [11,15,25].
In this study, we have found that the applied deep learning-based MCA Deep PAMoCo improves the IQ of 64-MDCT-acquired images [13,15]. By this, the rate of non-diagnostic images and false-positive results could be remarkably reduced, especially at higher HR [22,27,28]. As CCTA is already considered to have a high negative predictive value, this could further increase its validity for the diagnosis of CCS [1]. Especially regarding the limited temporal resolution of 64-MDCT, the presented MCA seems attractive for enhancing 64-MDCT-acquired images. However, the applied MCA can also be expected to be useful in combination with high-end imaging technology, as high or irregular HR can also disturb ≥ 128-MDCT and DSCT imaging [29]. Besides IQ improvement, the tested MCA could also reduce the effective dose during CCTA, as PGI could be applied at higher HR, and by this more widely [21,23]. However, as IQ still correlated with HR at an intermediate level, the presented MCA should be considered a support and not a substitute for HR control [30]. Finally, the tested MCA seems especially attractive in regard to its broad applicability due to its short computation time of 15 s per entire CCTA image and its vendor-independent use [11,15,17]. Thus, the presented MCA resembles a low-effort software upgrade for CCTA imaging performed with a 64-MDCT.
This study has limitations. Firstly, since we wanted to test the ability of the MCA to compensate for motion artifacts and to improve IQ, patient data were not given in this trial. Secondly, in this study, we had to exclude eleven images completely and two partially because of stack transition, vessel calcifications, and medical devices (stents and pacemakers) producing massive artifacts. In addition, due to a lack of documentation, we were unable to determine BMI in 20 patients and mean HR, ΔHR, age, and sex in 9 patients. Thirdly, the evaluation of IQ was conducted by a sole professional. Consequently, we cannot provide an inter-observer agreement. Fourthly, the IQ assessment was conducted by employing a 5-point Likert score, consistent with previous research [19]. However, it is essential to note that there is no officially recommended approach for evaluating IQ, and therefore, the assessment lacks standardization. Consequently, the comparability with studies utilizing different assessment scores is restricted. Fifthly, the primary objective of this study was to evaluate the performance of the applied MCA in enhancing the IQ of real patient CCTA images. It is crucial to emphasize that the findings should not be generalized to other deep learning methods, given our limited study population and the focus on a sole MCA. Sixthly, this was a single-center study. We recommend further studies at other radiology centers to increase the power and validity of our findings. Moreover, as this study aimed to evaluate the impact of a deep learning-based MCA on IQ, we cannot draw conclusions regarding its clinical utility. Further research is needed to evaluate the impact of MCA on diagnostic accuracy, e.g., using invasive coronary angiography as a reference. Thus, it would also be possible to evaluate the impact of vessel calcification on IQ and MCA-related effective dose reduction. Finally, we did not compare the tested MCA with vendor-specific or other MCA. Thus, we cannot determine the superiority of the presented MCA.
Conclusion
In conclusion, this study has demonstrated on the one hand that the applied deep learning-based MCA is able to improve IQ in a large set of 64-MDCT-acquired real-patient images and, on the other hand, to reduce the impact of HR on IQ. Thus, the presented MCA can be considered a promising example of deep learning-based MCA. Further studies should now evaluate the effectiveness of the presented MCA relative to other MCA and assess its clinical utility and diagnostic accuracy.
Statistical analysis was carried out with JASP (version 0.16.4; JASP Team, 2022) [computer software]. Continuous variables are expressed as mean ± standard deviation (SD). The central tendency of non-dichotomous categorical variables is expressed as median and percentage. Significance was tested using paired-samples tests. A one-tailed p-value of <0.01 was considered to indicate statistical significance in the IQ assessment. IQ between the CA and the MCA was compared using the Wilcoxon signed-rank test for ordinal variables. Rank-biserial correlation was chosen as the effect-size measure. Normality of continuous data was assessed by applying the Shapiro-Wilk test. As continuous data were not normally distributed, the non-parametric Wilcoxon signed-rank test and rank-biserial correlation were applied. Correlation analysis between BMI, age, sex, mean HR, and ΔHR and IQ was performed using Spearman's Rho. A two-tailed p-value of <0.01 was considered to indicate statistical significance. Graphs were created using GraphPad Prism (version 9.5.1 (733) for Windows, 64-bit, January 26, 2023); tables were created using Microsoft Excel 2019 MSO (Version 2303, Build 16.0.16227.20202), 64-bit.
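For readers reproducing this pipeline outside JASP, the tests map directly onto standard SciPy routines. The sketch below is a minimal illustration assuming paired per-artery Likert scores are available as arrays; all array names and values are hypothetical placeholders, not study data.

```python
# Minimal sketch of the reported workflow; arrays hold hypothetical values.
import numpy as np
from scipy.stats import wilcoxon, spearmanr, shapiro, rankdata

iq_ca  = np.array([3, 2, 4, 3, 2, 4, 3, 1, 3, 2])  # paired Likert scores, CA
iq_mca = np.array([4, 3, 4, 4, 3, 4, 3, 2, 4, 3])  # paired Likert scores, MCA

# Normality check on a continuous covariate (here, mean HR)
mean_hr = np.array([62, 75, 58, 66, 81, 60, 70, 90, 64, 77])
_, p_norm = shapiro(mean_hr)

# Paired, one-tailed Wilcoxon signed-rank test (MCA expected to raise IQ)
w_stat, p_wilcoxon = wilcoxon(iq_ca, iq_mca, alternative="less")

# Matched-pairs rank-biserial correlation as the effect size
d = iq_mca - iq_ca
d = d[d != 0]                      # discard zero differences, as Wilcoxon does
ranks = rankdata(np.abs(d))
r_rb = (ranks[d > 0].sum() - ranks[d < 0].sum()) / ranks.sum()

# Two-tailed Spearman's Rho between mean HR and IQ
rho, p_rho = spearmanr(mean_hr, iq_mca)

print(f"Wilcoxon W={w_stat:.1f}, p={p_wilcoxon:.4f}, rank-biserial r={r_rb:.2f}")
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f} (normality p={p_norm:.3f})")
```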
Fig. 1 A Motion artifact elimination by MCA at segment 2. B MCA-inserted artifacts at segment 3 (n = 11)
Fig. 2 Median and interquartile range of IQ per-patient and per-artery with CA and MCA on a 5-point Likert score. Significance is marked with an asterisk
Fig. 3
Fig. 4 Mean ± SD of motion artifacts per-artery. Significance is marked with an asterisk
Fig. 5
Table 1 Likert score description
Table 2 Study population
"Medicine",
"Engineering",
"Computer Science"
] |
Enceladus and Titan: Emerging Worlds of the Solar System (ESA Voyage 2050 White Paper)
Some of the major discoveries of the recent Cassini-Huygens mission have put Titan and Enceladus firmly on the Solar System map. The mission has revolutionised our view of Solar System satellites, arguably matching their scientific importance with that of their planet. While Cassini-Huygens delivered major surprises in revealing Titan's organically rich environment and Enceladus' cryovolcanism, the mission's success naturally leads us to probe these findings further. We advocate the acknowledgement of Titan and Enceladus science as highly relevant to ESA's long-term roadmap, as a logical follow-on to Cassini-Huygens. In this white paper, we outline important science questions regarding these satellites and identify the pertinent science themes we recommend ESA cover during the Voyage 2050 planning cycle. Addressing these science themes would make major advancements to the present knowledge we have about the Solar System, its formation, evolution, and the likelihood that other habitable environments exist outside the Earth's biosphere.
Introduction
Why Explore Titan?
From Voyager 1's glimpse of a hazy atmosphere to the successful entry and landing of the Huygens probe, Saturn's largest moon, Titan, remains an enigmatic Solar System body. Arguably the Solar System body most closely resembling the Earth, Titan boasts a diverse landscape of lakes and rivers that are kept 'flowing' by the methane cycle, a striking parallel with the water cycle on Earth. Moreover, its thick, hazy atmosphere is sustained by a whole host of chemical processes that create complex organic compounds. For these reasons, we advocate Titan exploration as one of ESA's science priorities in the pursuit of emerging worlds in our Solar System and for its potential to inform us about exotic exoplanetary systems.
Why Explore Enceladus?
Enceladus is another unique planetary body. It is a small active moon that hides a global ocean under its thick icy crust. In its south polar region, the ocean material escapes through cracks in the ice. The escaping material forms a large plume of salty water that is rich in organic chemical compounds. Such key chemicals, in concert with ongoing hydrothermal activity and a tidally heated interior, make Enceladus a prime location in the search for a habitable world beyond the Earth. Enceladus science is highly relevant to ESA's goals in the next planning cycle, and we recommend acknowledging that exploring Enceladus can make major advancements, as well as provide a unique opportunity to answer outstanding questions on habitability and the workings of the Solar System.
Overarching Science Themes
The exploration of Titan and Enceladus will address science themes that are central to ESA's existing Cosmic Vision programme, particularly the habitability and workings of the Solar System. The remarkable discoveries revealed by Cassini-Huygens led to the proposal of a Large-class mission in response to the Cosmic Vision call with the goal of exploring Titan and Enceladus (Coustenis et al., 2009). The proposal was accepted for further study but ultimately did not proceed. Over the last decade, numerous NASA missions have been proposed to build on the successes of Cassini-Huygens and explore these emerging worlds. In June 2019, NASA selected Dragonfly as its next New Frontiers mission to advance the search for the building blocks of life on Titan.
We advocate for these overarching themes since they encompass some of humankind's biggest open questions and should therefore remain a priority in ESA's next planning cycle. Missions to Titan and Enceladus would not only be a natural and logical follow-on to the successes of Cassini-Huygens; they would also provide optimal laboratories to test questions pertaining to these overarching themes, namely: (i) What are the conditions for the emergence of life? (ii) How does the Solar System work? (iii) How are planetary bodies formed and how do they evolve? In addition to bringing multidisciplinary Solar System science, addressing these questions can enhance our knowledge of exoplanetary systems and therefore foster synergy between Solar System scientists and the rapidly growing community of exoplanetary scientists.
Science Themes for Titan
Titan's Atmosphere
Titan is well-known for its extensive atmosphere (e.g. Niemann et al., 2005; Wahlund, 2005). Because of its composition and complex organic chemistry (Waite et al., 2007; Vuitton et al., 2014), Titan's atmosphere is thought to be similar to that of the early Earth, making it an obvious choice for studies on the origin of life.
The first signs of significant chemical complexity in Titan's atmosphere came from Voyager images of the satellite, which was obscured by an orange haze with a blue outer layer at the top of the atmosphere. This hid the surface from the visible cameras and led Sagan et al. (1993) to suggest the presence of tholins at Titan. Cassini's instruments were able to penetrate this haze with radar, infrared and visible imaging, and the Huygens probe descended through the atmosphere. From orbit, complex chemistry involving neutrals, cations and anions was found (Waite et al., 2007; Coates et al., 2007). Some of the remarkable new results from the Cassini mission included: the unexpected presence of heavy negatively charged molecular ions (up to 13,800 u/q) and dust/aerosol particles (e.g. Coates et al., 2011; Desai et al., 2017) making up a global dusty ionosphere (Shebanits et al., 2017); the formation of a 'soup' of organic (pre-biotic) compounds, including contributions to Titan's signature orange haze, as shown in Figure 2 (e.g. Waite et al., 2007; Vuitton et al., 2009); and the unexpected impact of the solar EUV on the un-Chapman-like ionosphere (Ågren et al., 2007).
Titan's atmospheric chemistry is initiated in the ionosphere (thermosphere), primarily by the solar EUV on the dayside and energetic particle influx on the nightside (e.g. Cravens et al., 2006; Shebanits et al., 2013). It should be noted that while remote sensing provides an excellent overview of Titan's ionosphere (e.g. Kliore et al., 2011), detailed studies require in-situ measurements, not least due to the influence of the heavy negative charge carriers: molecular ions and dust/aerosols.
It was postulated that at Titan the high-mass anions would drift down through the atmosphere as tholins, eventually reaching the surface, contributing to the dunes and falling into the lakes. The observations showed the highest masses of anions at the lowest altitudes (Coates et al., 2009), with the density showing similar trends (Wellbrock et al., 2013). The first chemical models showed that the low-mass anion species may be CN-, C3N- and C5N- (Vuitton et al., 2009). Chemical schemes are only beginning to provide theories as to how the larger species can be produced. Charging models may explain how species of ∼100 u could be ionized and aggregated to form >10,000 u molecules (Lavvas et al., 2011; Lindgren et al., 2016), but only a few studies have looked at precise chemical routes for producing molecules >100 u. An example is the formation of Polycyclic Aromatic Hydrocarbons (PAH) and more complex tholins, which are prebiotic polymer molecules, from the simple hydrocarbons and nitrogen available as aerosols in Titan's atmospheric haze. The full picture of the chemical chains connecting all species of molecules clearly remains incomplete.
Dusty plasma in Titan's ionosphere (e.g. Shebanits et al., 2016), as well as in the Enceladus plume, deserves special attention. Dusty plasma in space physics is generally a relatively new field, relevant for moon-produced plasma tori in the Saturn and Jupiter systems, the ionospheres of Earth (noctilucent clouds, e.g. Shukla, 2001) and Saturn (Morooka et al., 2019), cometary comas (e.g. Gombosi et al., 2015) and interstellar clouds (Sagan and Khare, 1979). For Titan's ionosphere, the dust in question may go by different names (tholins, aerosols, dust grains, heavy negative ions) but generally refers to nm-sized grains or larger, with masses of more than a few hundred atomic mass units (e.g. Coates et al., 2007; Lavvas et al., 2013). The lack of consensus on the nomenclature in fact underlines the recency of the field.
Titan's dust grains form in the in-situ-accessible ionosphere and are effectively impossible to measure with the available remote sensing methods. Dusty plasma is also important for the energy budget, as it increases ionospheric conductivities (Yaroshenko and Lühr, 2016). Aerosols/dust grains in general are also relevant to cloud formation (Anderson et al., 2018).
Key measurements: Ultraviolet, visible, infrared and millimeter/micrometer wave spectra will remotely constrain organic compounds in Titan's atmosphere and surface. High resolution mass/energy spectrometers and Langmuir probes will differentiate and constrain properties of neutrals, positive and negative ions, electrons and aerosols/dust grains. Radio occultations will resolve structure of the atmosphere.
Summary: Titan's atmosphere is an excellent laboratory to study pre-biotic organic chemistry that directly ties into the question of the origin of life, and it is one of the available sites to study dusty plasma, a rapidly emerging field in the space community.
Key scientific question: What is the nature of atmospheric chemistry and cloud formation at Titan?
Titan's Energy Budget
Titan is the largest moon in the Saturnian system and the only moon in the Solar System known to harbor a significant atmosphere. The moon lacks an intrinsic magnetic field, and its radial distance from the planet at 20 Saturn radii (1 Saturn radius = 60,268 km) places it very close to the nominal distance of the subsolar outermost boundary of Saturn's magnetosphere, which moves in response to variations in the solar wind dynamic pressure. This means Titan is at times inside the magnetosphere of Saturn and at other times outside it, fully exposed to the solar wind. All of this adds up to a very complex plasma interaction, where the moon can encounter not only the corotating plasma from the Saturnian magnetosphere, but also shocked solar wind (e.g. Wei et al., 2009) and even unperturbed solar wind (Bertucci et al., 2015).
Atmospheric evolution and space weathering
The surfaces and atmospheres of the Saturnian moons are continuously irradiated by magnetospheric plasma, solar photons, cosmic dust, and ring grains, all of which are responsible for the long-term alteration of surface and atmospheric materials on geological timescales (gigayears), known as 'space weathering'. Space weathering involves the dissociation and synthesis of molecules in these materials, followed by modification of surface and atmospheric spectra.
Titan represents the epitome of a non-stationary interaction, and only through a detailed exploration of its environment (with a space weathering perspective) can the escape mechanisms and the amount of atmospheric loss to Saturn's magnetosphere and interplanetary space be appropriately addressed. The understanding of present-day escape conditions for Titan will contribute greatly to the elaboration of more realistic hypotheses about the evolution of Titan's atmosphere in the past, and also to the importance of atmospheric evolution in relation to habitability.
While Mars represents an evolved system embedded in the solar wind without a global magnetic field, where most of the atmosphere has been lost, Titan represents a system that, even without a global field, is protected by the Saturnian magnetosphere (most of the time) and by the presence of a thick atmosphere with a composition similar to that of the early Earth. Thus, understanding the relative contribution of the different escape mechanisms is important to further enhance our current understanding of atmospheric evolution in the Solar System and the conditions for habitability, both in our Solar System and in exoplanets and exomoons. During the Cassini era, several studies focused on understanding both the neutral escape (e.g. Tucker et al., 2009) and ion escape (e.g. Coates et al., 2012; Regoli et al., 2016). However, current datasets are neither sufficient nor instructive enough to disentangle the complex interaction, due to the large variability of the upstream conditions.
An early Earth
The atmosphere of Titan, composed mainly of N2 (~94%) and CH4 (~6%), has been likened to that of the primordial Earth, making this moon a natural environment to study processes that took place during the evolution of our own atmosphere. Modelling has shown that Earth's early atmosphere may have been rich in hydrogen and methane (Tian et al., 2005). Moreover, key organic molecules in terrestrial prebiotic chemistry, such as hydrogen cyanide (HCN), cyanoacetylene (HC3N) and cyanogen (C2N2), are formed at Titan (Teanby et al., 2006).
Key measurements: Plasma, neutral, and dust measurements will constrain the influx and escape of material. Magnetic and electric field measurements will measure current systems that may arise and help constrain the energy budget of the system.
Summary: Titan's atmosphere and surface are subjected to large variability in the ambient plasma and magnetic field by virtue of its orbit radius, continuous galactic cosmic rays, interplanetary dust, and solar photons. Altogether, these make the physical and chemical interactions of Titan and its environment inherently non-steady-state at varying timescales.
Key scientific questions: What is the response of Titan's atmosphere to extreme solar wind events, and how does it compare to quiescent events in the magnetosphere? What is the energy budget of Titan's atmosphere? Does Titan have an equilibrium state? How similar is Titan to an early Earth?
Titan's Geology and Interior
Titan has one of the most diverse surfaces across the Solar System, mainly due to its active methane cycle, which is somewhat analogous to the water cycle on Earth. Titan's complex surface has been modified by a variety of exogenic and/or endogenic geological processes. Some of the exogenic ones include impact cratering and aeolian/fluvial and/or lacustrine processes, while the endogenic ones include tectonism and potentially cryovolcanism (e.g. Elachi et al., 2005; Porco et al., 2005; Jaumann et al., 2009; Le Gall et al., 2010; Lopes et al., 2010; Wood et al., 2010).
Titan's surface investigation is complicated by its atmosphere. The thick and dense atmosphere of Titan can only be penetrated remotely in specific windows at near-infrared and radar wavelengths. Before 2004 and the entry of Cassini into the Saturnian system, bright and dark albedo features on Titan were observed in near-infrared images taken by ground-based telescopes and the Hubble Space Telescope (e.g. Coustenis et al., 2005). The 13 years of Titan investigation by Cassini, and Huygens' landing on the surface in 2005, eventually revealed a remarkably Earth-like surface in terms of geomorphology, with dunes, highlands, dried and filled lakes, river channels and more. The Cassini investigation also unveiled Titan to be an organic-rich world (e.g. Janssen et al., 2016; Malaska et al., 2016; Hayes et al., 2018). Indeed, while Titan's crust is made of water ice, it is covered almost everywhere at the surface by a sedimentary organic layer of likely photochemical origin. The sediment materials are eroded, transported, and deposited from sources that are as yet unclear, and organized to form landscapes that vary with latitude (dunes in the equatorial belts, plains at mid-latitudes, labyrinthic terrains near the poles; Lopes et al., 2013; Solomonidou et al., 2018). The role of the methane cycle in the landscape distribution on Titan is yet to be understood. In addition, even if sedimentary processes seem to dominate Titan's surface, some features also suggest tectonism and cryovolcanism. The pursuit of the study of Titan's surface composition and its connection with the interior may unveil locations that are of importance to astrobiology and the search for life in the Solar System.
After the Cassini golden era, and despite the great number of ground-breaking discoveries made by the 127 Cassini flybys of Titan, there are still open questions regarding the formation and evolution of the surface, its chemical composition, and the interactions between the surface, the interior, and the atmosphere. September 2017 marked the end of the Cassini mission; by then, ~65% of the surface had been imaged by the radar instrument with a spatial resolution in the range of 300 m to 4 km, and only ~20% of the surface had been captured by the Visual and Infrared Mapping Spectrometer (VIMS) with a resolution better than 10 km/pixel. Much of the surface therefore remains terra incognita. The analysis of VIMS data provided significant results and insights into Titan's nature. However, the aforementioned resolution is not adequate for a thorough and detailed investigation of the geology of a planetary body. In addition, the optimal use of surface data from the Cassini instruments was made through combinations of radar and VIMS data, which unfortunately were very limited due to the spacecraft's orbital constraints. In the future, such synergy between instruments would be of great value for a better understanding of Titan's geological history and evolution.
The Cassini and Huygens data, the multiple years of data analysis, laboratory studies, and theoretical and experimental modeling prompt the science goals for the future planetary missions to Titan. Below are a number of key features and processes at Titan.
Impact craters
The abundance and size distribution of impact craters usually provide insight into the relative age of planetary surfaces. Titan, compared to other Saturnian moons, displays a very limited number of impact craters on its surface, indicating a relatively young and active surface (e.g. Wood et al., 2010; Werinsky et al., 2019). Even though some studies have provided constraints (Radebaugh et al., 2008; Neish and Lorenz, 2012; Lopes et al., 2016), the surface age of Titan still remains uncertain (probably between 200 Ma and 1 Ga).
Winds
One of the major geological processes at Titan's low latitudes is aeolian (wind) activity. The dark, organic-rich terrains that dominate the equator are giant dune fields, which are hundreds of km long, a few km wide and about 50-150 m in height. Strong winds occurring at the equinoxes seem to control their direction (Charnay et al., 2015), but local topography and ground humidity seem to play a role too. Further investigation of the dune morphometry and composition would help better understand Titan's meteorology and geology.
Plains
About 60% of the surface of Titan is covered by plains that appear uniform at the resolution of the Cassini radar (300 m at best). Future missions will have to unravel the mystery of these features, which likely hold important clues on the evolution of Titan's surface.
Lakes
Titan is the only other planetary body in the Solar System to possess bodies of liquid on its surface that are stable in time. Cassini observations of Titan have revealed three seas and ~650 polar lakes, 200 being empty and more than 300 filled or partially filled (e.g. Stofan et al., 2007; Hayes, 2016). Modeling suggests the liquid composition to be a mixture of methane and ethane with a contribution of dissolved nitrogen (e.g. Sagan and Dermott, 1982). However, Cassini data rather suggest a dominance of methane (Mastrogiuseppe et al., 2014). The fate of ethane, which is produced in abundance in the atmosphere by photochemistry, remains to be explained. In addition, most of Titan's smaller lakes are characterised as sharp-edged depressions with raised rims, with ramparts surrounding some of them (e.g. Birch et al., 2018; Solomonidou et al., 2019). The origin of both and, more generally, the formation mechanism of Titan's lakes remain unknown.
Another surface process that has been speculated but not yet identified is cryovolcanism. Outgassing by cryovolcanism has been proposed as a possible replenishment mechanism for Titan's atmospheric methane (e.g. Lopes et al., 2007), and a number of plausible cryovolcanic landforms have been proposed on the basis of their morphology (e.g. lobate flows in Sotra Patera) and/or because surface changes were observed (Solomonidou et al., 2016). However, there is no "smoking gun" for cryovolcanism on Titan, and the idea of cryovolcanism as a possible shaping process remains controversial (e.g. Moore and Pappalardo, 2011). Future missions will shed light on the exchanges between the interior, the surface, and the atmosphere of Titan.
Astrobiology
Titan harbors a combination of complex hydrocarbons and organic molecules in addition to a water ocean beneath its ice shell and potential cryovolcanism. Liquid water can also exist at the surface for a limited time, e.g. after an impact or a cryoeruption. All these suggest conditions potentially suitable for life as we know it and future missions should investigate Titan's chemistry and search for biosignatures.
Key measurements: High spatial resolution radar and infrared spectrometers will map Titan's surface. Complete coverage will be achieved by an orbiter. A mass spectrometer will determine the chemistry of Titan's lakes. A sonar will determine the depth of a Titan sea. Gravity measurements will characterise Titan's interior. High-resolution and high-sensitivity mass spectrometry will identify key molecules in the search for biosignatures.
Summary: Titan is arguably the most Earth-like Solar System body. Its methane cycle draws an analogy with Earth's water cycle. The abundance of organic material, water and an energy source due to potential cryovolcanism qualifies the satellite as a prime candidate for a habitable world in the Solar System.
Key scientific questions: What are the characteristics of Titan's habitability and what potential biosignatures should we look for? What is the composition and distribution of materials on and beneath Titan's surface? What are the lakes made of? Is the interior active?
Titan's Interaction with Saturn's Magnetosphere
Titan's orbit radius of 20 Saturn radii places it, most of the time, within Saturn's magnetosphere, embedded in the rapidly rotating, magnetised plasma that flows at ~100 km/s, much faster than the Titan orbital speed of ~6 km/s. As the plasma flows past and around Titan, magnetic field lines that are 'frozen' into the moving plasma drape around the moon, thus forming downstream lobes in which the field generally points towards Titan in one lobe, and away in the other. The magnetic field configuration arising from this type of interaction depends on the field and plasma conditions upstream of Titan.
Before Cassini, the common perception was that the upstream field would be oriented north-south, the equatorial direction of Saturn's dipole field. The Cassini mission, however, has shown a very different picture. Saturn's disk-like plasma sheet continuously flaps up and down past Titan, with a period close to what we believe is the true rotation period of the planet, around 10.7 hours. This flapping does not come from any tilt in Saturn's magnetic equator (Saturn's internal field is almost perfectly aligned with the planet's rotational axis). Rather, it arises from a rotating wave-like pattern imposed on the sheet by rotating systems of electric currents in the magnetosphere, flowing on field lines extending ~10-15 Saturn radii. As the plasma sheet moves, the upstream field changes, being dominantly north-south when Titan is near the centre of the plasma sheet. These changes were characterised by Bertucci et al. (2009), who surveyed Cassini magnetometer data during spacecraft flybys of Titan.
Fossil magnetic field
Later, Achilleos et al. (2014) used a model of the plasma sheet (magnetodisk) to study one flyby in detail. They found that the magnetospheric flux tubes that flow closest to Titan may carry with them the imprint of a very different kind of upstream field compared to the imprint carried by plasma in far-Titan space. This is because the upstream field is continually changing. This change in magnetic 'imprint' could become even more pronounced if the boundary of Saturn's magnetosphere moves inward or outward past Titan. When this happens (albeit relatively rarely), Titan transitions between a magnetospheric and a solar wind regime, a process first discovered by Bertucci et al. (2008) in the T32 flyby of Cassini. At T32 closest approach (CA) to Titan, the magnetic field, remarkably, had a southward component. By contrast, the ambient solar wind magnetic field surrounding the CA interval had a northward component. The southward component near CA is consistent with the draped field that would have been seen had Titan been continuously immersed in Saturn's magnetosphere throughout the encounter. But at CA, both Titan and Cassini were, unambiguously, outside the magnetosphere. In fact, Titan had been there for at least ~15 minutes, just after spending up to three hours in the magnetosphere. Hence, the field that was imprinted on Titan's ionosphere during its magnetospheric excursion survived there for at least 15 minutes. The time range of the (external) magnetic imprint on Titan was found to be ~15 minutes to ~3 hours, constraining the lifetime of the fossil field imprinted at Titan and raising the intriguing prospect of 'magnetic archeology', where close flybys of Titan could potentially reveal details of ambient fields to which Titan has been exposed up to about three hours in the past.
The interaction between Titan and the magnetosphere is bidirectional. While the incoming plasma and magnetic field define many aspects of the interaction at the moon, the presence of the moon itself affects the local magnetosphere in ways that are not yet fully understood. The model of the interaction, as described by computer simulations (e.g. Simon et al., 2015), predicts the existence of a pair of extended Alfvén wings (standing Alfvén waves) that connect with the planet's ionosphere. However, these wings have not yet been detected, most probably due to the limited spatial coverage of the Cassini flybys.
Internal magnetic field
A study of Cassini magnetometer data in the near-Titan environment by Wei et al. (2010) demonstrated that the upper limit of a putative permanent dipole moment would be ~0.78 nT R_Titan^3, not significantly different from zero. The lack of a magnetic dynamo inside Titan is consistent with the incompletely differentiated interior suggested by Cassini gravity measurements (Iess et al., 2010). The existence of an induced dipole moment, arising from the penetration of a slowly varying external field into a subsurface conducting region, remains an open question; the variation of the magnetospheric field on the ~29-year orbital timescale of Saturn, for example, may be a viable candidate for induction. Any induced dipole moment would be expected to change direction during Saturn's equinox, at which the time-averaged direction of the ambient magnetospheric field changes direction due to the displacement of the mean current sheet from above to below the equator (for the equinox captured by Cassini). Wei et al. (2018) have recently reported finding a possible reversing induced field signature by comparing pre- and post-equinox field data from Cassini. This suggests that further characterisations of, or constraints on, the induced dipole at Titan are needed to finally answer the question of whether an electrically conducting region, such as an ocean, exists beneath the surface.
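To put the quoted number in context: a dipole moment expressed in units of nT R_Titan^3 equals the equatorial field it produces at the surface, which then falls off with the cube of the distance (a standard dipole result; the comparison with the ambient field is approximate):

```latex
B_{\mathrm{eq}}(r) \;=\; B_0 \left(\frac{R_{\mathrm{Titan}}}{r}\right)^{3},
\qquad B_0 \;\lesssim\; 0.78\ \mathrm{nT}.
```

Any permanent field at Titan's surface would thus be at most ~0.8 nT, well below the few-nT ambient magnetospheric field near Titan's orbit, hence "not significantly different from zero".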
Key measurements: Magnetometers will measure the fossil magnetic field of Titan and measure any induced magnetic field generated in Titan's interior. Electric field antennas, magnetometers and plasma spectrometers will measure any potential electrodynamic coupling between Titan and Saturn.
Summary: Titan's interaction with Saturn is both highly dynamic and bi-directional.
It is unclear whether Titan has an internally generated magnetic field.
Key scientific questions: How exactly does Titan interact with Saturn? What is the power transfer between the two systems? Is there an induced magnetic field generated in Titan's interior that might be associated with a subsurface ocean or a weak dynamo?
Enceladus' Plume
Cassini has also dramatically revolutionised our view of the small icy moon Enceladus, ~500 km in diameter and orbiting at 4 Saturn radii. It was discovered that strong plumes emanate from warm 'tiger stripe' features on its south polar surface (Dougherty et al., 2006; Porco et al., 2006). These plumes of water vapour and ice grains are thought to be the long-suspected source of the particles making up Saturn's E-ring, and also the dominant source for neutrals and plasma in Saturn's magnetosphere. In-situ observations here revealed primarily water vapour and trace amounts of hydrocarbon-based neutral gas (Waite et al., 2009), as well as water-group positive ions that slow, divert and even stagnate the magnetospheric flow (Tokar et al., 2009). Directly over the plume sources, charged nanograin populations have been observed that are related to the tiger stripes but dispersed in their motion by Saturn's magnetic field (Jones et al., 2009). Negative water-group ions, possibly with additional species consistent with hydrocarbons, are also seen.
Ongoing hydrothermal activities
Repeated Cassini sampling of Enceladus' plume ejecta, simulations and laboratory experiments have concluded that present-day hydrothermal activity at Enceladus may resemble what is seen in the deep oceans of Earth. The detection of sodium-salt-rich ice grains emitted from the plume suggests that the grains formed as frozen droplets from a liquid reservoir that has been in contact with rocks (Postberg et al., 2009). Gravity measurements suggest the presence of a subsurface sea at depths of 30-40 km, extending up to south latitudes of about 50° (Iess et al., 2014). These findings hint at rock-water interactions in regions surrounding the core of Enceladus, resulting in chemical 'footprints' being preserved in the liquid and subsequently transported upwards to the near-surface plume sources, where they are eventually ejected. Furthermore, the detection of nanometer-sized silica particles indicates ongoing high-temperature (>90 °C) hydrothermal reactions associated with global-scale geothermal activity (Hsu et al., 2015; Sekine et al., 2015; Choblet et al., 2017).
Tidal forces
The brightness of the plume of Enceladus has been shown to depend on the orbital phase of Enceladus (Hedman et al., 2013). Since Enceladus' orbit is slightly elliptical, tidal stresses act on the moon, and the cracks in its south polar region are either more or less open depending on the distance between Enceladus and Saturn. Hedman et al. (2013) showed that the plume's brightness is several times greater when Enceladus is around apocenter (farthest from Saturn) than when the moon is around pericenter (closest to Saturn). A change in plume brightness may be caused by a change in the size distribution of the grains and not solely by a change in their total mass. Several studies have shown variability in the amount of water molecules ejected from the cracks in the icy crust of Enceladus. However, the correlation between the orbital phase of Enceladus and the modulation in the ejected material has been difficult to confirm (e.g. Hansen et al., 2015). The Cassini observations of the plume have raised many questions about the driving processes, the time modulations, the structure of the plume, and the dynamics of the ejected material.
The water that is ejected from the south polar region of Enceladus creates a neutral water torus around Saturn, along the orbit of the moon. Subsequently, transport, photoionization, and electron impact ionization of the neutral material create a plasma disk at the same location. The disk has been suggested to vary with longitude (Gurnett et al., 2007), local time (Holmberg et al., 2014), season (Tseng et al., 2010) and the solar cycle (Holmberg et al., 2017). These asymmetries found in the plasma disk are still under investigation. A complicating factor in finding a clear modulation in the plasma disk is that the source of the disk material, i.e., the Enceladus plume, is also varying.
Astrobiology
The Ion Neutral Mass Spectrometer (INMS) onboard Cassini sampled Enceladus' plume and found ammonia, along with various other organic compounds, deuterium and very probably 40Ar (Waite et al., 2009). Since ammonia acts as an anti-freeze, its presence is strong evidence for the existence of liquid water, given that the measured temperatures exceed 180 K near the fractures from which the jets emanate (Spencer et al., 2006). Temperatures were measured by Cassini's Composite Infrared Spectrometer (CIRS), which detected 3-7 GW of thermal emission from the south polar troughs, confirming an internal heat source. This makes Enceladus the third known solid planetary body that is sufficiently geologically active for its internal heat to be detected by remote sensing, after Earth and Io.
INMS also detected molecular hydrogen in the plume (Waite et al., 2017). Ongoing hydrothermal reactions of rock containing reduced minerals and organic materials have been invoked as the most plausible source of this hydrogen. Waite et al. (2017) further postulated that the relatively high hydrogen abundance in the plume signals a thermodynamic disequilibrium that favours the formation of methane from CO2 in Enceladus' ocean. This state of disequilibrium is exploited by some forms of life (chemolithotrophs) as a source of chemical energy. H2 metabolisms are used by some of the most phylogenetically ancient forms of life on Earth (Raymann et al., 2015), while on modern Earth, geochemical fuels such as H2 support thriving ecosystems even in the absence of sunlight (Kelley et al., 2001).
That said, while Cassini has flown through and directly sampled Enceladus' plumes, it did so with instruments that were 20 years old and had very limited capabilities. The aforementioned discoveries cannot categorically confirm evidence of biological processes. More complex and highly sensitive analyses would recognise long-chain molecules and amino acids that are uniquely interesting targets in the search for life. Even more complex analyses, such as identifying the chirality (left-handed vs right-handed) of amino acids, would be very instructive. Nearly all amino acids on Earth are left-handed, since biological processes require this basic consistency for proteins to fold. It is therefore expected that any protein-based life will "choose" a particular chirality, i.e. all left-handed or all right-handed, rather than an equal mixture of the two (Creamer et al., 2016).
Dusty Plasmas
In the plume, the exhaust from the south pole creates a different plasma regime: dusty plasma. When the ice particles from the fractures are immersed in the ambient plasma, they acquire charge. The charge state of a particle varies depending on the surrounding plasma conditions. In dense plasma, as in Saturn's inner magnetosphere, the electrical potential of the dust becomes slightly negative (Kempf et al., 2006; Wahlund et al., 2009), and the grain charge number varies from single to several thousand (e.g. Horanyi et al., 1992; Yaroshenko et al., 2009). The charged grains from Enceladus in Saturn's magnetosphere are under the influence of both gravity and electromagnetic forces. When the number of charged grains is large and the inter-grain distance is small compared to the plasma Debye length, the charged dust particles participate in collective behavior, i.e. a dusty plasma, in contrast to a dust-laden plasma.
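The dusty-plasma criterion above can be checked with a back-of-envelope calculation comparing the mean inter-grain spacing with the electron Debye length. The sketch below uses round-number plasma and dust parameters chosen only for illustration, not measured Cassini values.

```python
# Back-of-envelope check of the dusty-plasma condition: collective behavior
# requires the inter-grain distance to be small compared with the Debye length.
import numpy as np

eps0 = 8.854e-12   # vacuum permittivity, F/m
kB   = 1.381e-23   # Boltzmann constant, J/K
e    = 1.602e-19   # elementary charge, C

n_e = 1e8          # electron density, m^-3 (~100 cm^-3, illustrative)
T_e = 2.0 * 11604  # electron temperature, K (~2 eV, illustrative)

# Electron Debye length
lambda_D = np.sqrt(eps0 * kB * T_e / (n_e * e**2))

# Assumed dust number density and the resulting mean inter-grain spacing
n_d = 1e6                      # grains per m^3 (illustrative)
d_grain = n_d ** (-1.0 / 3.0)  # mean inter-grain distance, m

print(f"Debye length       : {lambda_D:.2f} m")
print(f"Inter-grain spacing: {d_grain:.3f} m")
print("Dusty plasma regime" if d_grain < lambda_D else "Dust-laden plasma")
```

With these round numbers the Debye length is of order a meter while the grain spacing is of order a centimeter, so the grains respond collectively rather than as isolated test charges.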
A number of Enceladus flybys by Cassini provided direct measurements of the dust and plasma in the plume regions. The charged grains have been directly confirmed by the Cosmic Dust Analyzer (CDA) (Kempf et al., 2008), the Radio and Plasma Wave Science (RPWS) antenna (Kurth et al., 2006), and the plasma spectrometer (CAPS), the latter as high-energy charged particles (Hill et al., 2012). The total charge number of the grains has been inferred from the Langmuir probe and the magnetic field measurements. Combining these measurements, it was concluded that the grains outgassed from the south pole are typically nanometers to micrometers in size; however, the overall negative charge in the plume is carried by the nanometer to sub-micrometer grains (Dong et al., 2015), and these are in the dusty plasma regime.
Key measurements:
Modern chemical spectrometers will identify long-chain molecules, such as essential amino acids required for biological processes, with the capability of discriminating between left-handed and right-handed chirality. Chemical spectrometers will also identify other important compounds, relative abundances and oxidation states that are key ingredients for biological processes. Infrared and ultraviolet spectrometers will monitor the plume gas, its activity and dust distributions. Dust analysers and plasma spectrometers (electron and ion) will measure a wide range of the size distribution of grains. Plasma spectrometers and Langmuir probes will measure the electrical potential of the grains, as well as electron temperature and ion speeds. Gravity measurements will characterise Enceladus' interior.
Summary:
Enceladus has ongoing hydrothermal activity from its tidally heated interior. The plume emanates from the southern polar region and has been measured to contain water, volatiles and organic compounds. The plume originates from a subsurface salty ocean. Altogether, the knowns of (i) an accessible salty ocean, (ii) organic compounds, (iii) energy and (iv) hydrothermal activity make Enceladus a prime candidate to explore habitability outside the Earth's biosphere.
Key scientific questions: Is life present in Enceladus now? What is the chemistry of its plume? How does the chemistry evolve over time? Does the chemistry contain signatures of biology?
Electric current system
The coupling between Enceladus and Saturn is in many aspects similar to the one observed near Jupiter's moon Io (Neubauer, 1980). It includes the existence of accelerated plasma and magnetically field-aligned electric currents, associated with Alfvén wings, producing auroral footprints on Saturn's atmosphere. Indeed, two striking observations of the Enceladus auroral footprint have been reported by Pryor et al. (2011), and observations of accelerated electron beams associated with plasma wave emissions (Gurnett et al., 2011) and field-aligned electric currents (Engelhardt et al., 2015) have also been reported near the moon at the edge of the plume.
Since the plume is located near the south pole of Enceladus, a north-south asymmetry is introduced in Enceladus' plasma interaction and the Alfvén wing system. The south pole plume launches Alfvén waves, which are partially blocked by the solid body of Enceladus. This leads to hemispheric coupling currents along the Enceladus flux tube and associated discontinuities in the magnetic field (Saur et al., 2007; Simon et al., 2014). In addition to the spatial asymmetries of the interaction, the plasma interaction is also time-dependent due to the diurnal variability of Enceladus' plume activity (Hedman et al., 2013).
More recently, when Cassini sampled Saturn's topside ionosphere, Sulaiman et al. (2018) reported observations of plasma processes and strong electric currents demonstrably linked to Enceladus. The detection of such phenomena when Cassini was so close to Saturn underlined the non-locality and spatial extent of the ever-present coupling between Enceladus and Saturn, indicating that these two bodies are in continuous energy exchange. The magnitude of this energy, however, remains to be quantified.
Space weathering
As introduced earlier, prebiotic polymers are synthesized in Titan's atmosphere. Such a process could also happen on Enceladus' surface because there are source materials, e.g., ammonia and tholins, already suggested from surface reflectance at ultraviolet wavelengths (Zastrow et al., 2012).
Space weathering inhomogeneity potentially exists at Enceladus because it has an inhomogeneous magnetic field induced by electromagnetic interactions between Saturn's magnetosphere, Enceladus' surface, plume, and subsurface ocean (e.g. Jia et al., 2010). If we can successfully associate characteristics of the space weathering with the cumulative dose of irradiation, the duration of weathering can be estimated from the surface spectrum, i.e. more weathered surfaces have been irradiated for a longer time. The absolute duration of weathering, in turn, tells us the duration of the induced magnetic field that modifies the weathering. The induced magnetic field duration constrains when the interior was molten, started pluming, and simultaneously induced the magnetic field.
Key measurements: A magnetometer will measure the strength and direction of the electric current and Alfvén wing coupling Saturn to Enceladus, as well as measure any induced magnetic field. Plasma spectrometers will measure the beam speeds and energies associated with the coupling. Electric field antennas and magnetic search coils will constrain the frequencies and powers of plasma waves that arise as a result of this interaction. Altogether, the fields and particles suite will constrain the power of this coupling. A highly sensitive ultraviolet spectrometer will remotely detect the auroral footprints on Saturn's atmosphere, which have been detected only a few times by Cassini. In-situ plasma measurements will quantify the incident plasma flux and composition irradiated onto Enceladus' surface. Ultraviolet, visible, infrared, and millimeter/micrometer wave spectra will remotely map the distribution of space weathering activity.
Summary: Saturn is persistently coupled to Enceladus through a large and extensive system of electric currents along Saturn's magnetic field lines. As Enceladus orbits Saturn, it traces a circle of dynamism on Saturn's northern and southern atmospheres that are magnetically conjugate to Enceladus' orbit. The combination of spatial asymmetries on Enceladus' surface and the temporal variability of its plume activity means this interaction is highly non-uniform. Enceladus is prone to space weathering by incident plasma, which might be capable of synthesising organic compounds.
Key scientific questions: What is the strength of Saturn's interaction with Enceladus? In other words, how much energy is transferred between the two bodies? How is Enceladus' surface affected by space weathering? What organic compounds are synthesized by the space weathering process at Enceladus?
Titan and Enceladus Science in Context
The scientific themes summarised in the previous sections are relevant to a wide range of disciplines within physics, chemistry and biology spanning micro-(e.g. fundamental chemistry) to macro-scales (e.g. evolution). Below is a list of some emerging and fast-growing fields that are relevant to the exploration of Titan and Enceladus.
Exoplanets and exomoons
The rapid growth of the exoplanetary community is reflected by multiple selected European and international missions, such as Plato, Euclid, ARIEL, JWST, etc. The exploration of Titan and Enceladus will uniquely complement these remote observations by providing an in-situ perspective to the knowledge of exoplanetary and/or exomoon composition, structure and formation.
Ocean worlds and Astrobiology
The topic of ocean worlds in the Solar System has witnessed a 'boom' in the last decade with the selection of ESA's JUICE and NASA's Europa Clipper missions to the Galilean moons. The potential to understand ocean worlds through the exploration of Titan and Enceladus is limitless. These moons together offer a diverse range of topics in this area, including: (i) bodies of liquid of various sizes (e.g. lakes on Titan, an ocean on Enceladus), (ii) surface and subsurface bodies, (iii) depletion and replenishment of bodies (e.g. lakes on Titan), (iv) tidally heated interiors, and (v) chemical and geological processes (e.g. rock-water interactions). Combined with JUICE and Europa Clipper findings, the exploration of Titan and Enceladus will bring strong constraints on the presence of liquid water further away from the Sun than previously supposed by the standard habitability zone models in the Solar System, and would provide essential new constraints for the search for habitable worlds outside our Solar System, in exoplanetary systems.
The recent selection of Dragonfly, a mission that will send a mobile robotic rotorcraft lander to Titan, for the NASA New Frontiers program testifies to the interest of the international planetary science community in Saturn's largest moon. Titan is one of the most compelling astrobiology targets in the Solar System, and Dragonfly will assess its prebiotic chemistry and habitability by visiting multiple locations at the surface, including dunes and a young crater.
Atmospheres
The combination of Titan's chemically rich atmosphere and highly variable space environment affords a vast spectrum of atmospheric dynamics, chemistry and cloud formation phenomena that can be explored. In tandem with comparative datasets of other Solar System bodies, a large parameter space can be constructed from which exoplanet and exomoon atmospheres can be characterised.
Origin and evolution of the Solar System
The satellites around giant planets can offer clues to how the Solar System evolved over time. For example, the mass-distance relationship of the icy moons suggests a possible linkage to the origin and evolution of the giant planets' ring systems (Charnoz et al., 2010). Moreover, the circumstances allowing the capture of small objects into satellites do not exist in the current stage of the Solar System. Understanding the properties of the irregular satellites around giant planets therefore provides a unique window to look into the past of the Solar System's evolution.
Dust in the Solar System and planetary formation
Dust and gases are the fundamental elements for the formation of stars and planets. Recent studies consider the effects of the magnetorotational instability of plasmas. However, the evolution of dusty plasma in the absence of UV ionization at the center of the protoplanetary disk and the interactions between dust and plasma are missing links. It is postulated that dust-plasma interaction must influence the nucleation of grains as well as their subsequent agglomeration. The Enceladus plume and Titan's atmosphere are accessible sites to investigate the nature of charged dust and its interaction with plasma, and may give hints for addressing questions on planetary formation mechanisms.
Payload
The diverse scientific opportunities highlighted in Sections 3 and 4 call for a range of instruments, most of which can participate in more than one experiment. In-situ instrumentation is required for direct sampling of Titan's atmosphere and lakes, Enceladus' plume, and their interaction with Saturn's environment. Their capabilities include, but are not limited to: mass spectroscopy of ions and neutrals; plasma analysis (ions, electrons); aerosol and dust detection; and electric and magnetic field direction, frequency and power. Multispectral remote sensing instrumentation is required for the characterisation of Titan's atmosphere and of both moons' surfaces and interiors. These capabilities include, but are not limited to: radar imaging and sounding; ultraviolet, visible, infrared, millimeter and micrometer spectroscopy and imaging; gravity radiometry; a seismometer; and thermal sensing. These instruments have strong European heritage from previous, existing and upcoming missions such as JUICE, Cassini/Huygens, Rosetta, BepiColombo, etc.
Planetary protection
Titan and Enceladus hold the likelihood of hosting biosignatures, making planetary protection considerations necessary. Fortunately, collecting samples from Enceladus' plume means possible biosignatures originating from the moon's interior can be obtained without penetrating the surface. This greatly mitigates the risks associated with planetary protection, as well as mission complexity and cost in general. For this reason, we argue that Enceladus poses the least risk for the search for biosignatures in the outer Solar System.
Radiation
Saturn has relatively weak radiation belts, thus making radiation considerations manageable. The outermost edge of the main radiation belt is situated at 3.5 Saturn radii, which is planetward of Enceladus' orbit (4 Saturn radii) and Titan's orbit (20 Saturn radii).
L-class or M-class (Titan and Enceladus Orbiter; Bioinspiration and Biomimetics)
An orbiter with a probe/lander would cover all the identified scientific goals. This would ensure the spatial coverage required to fully map the surfaces of the satellites and characterise their interiors using gravity and induction measurements. A Titan orbiter would also serve as a communication link between Earth and a probe/lander on the ground or on a lake. Titan has a relatively large mass and is far enough away from Saturn to impose a reasonable ΔV cost. Enceladus, on the other hand, possesses a very low mass (0.8% of Titan's mass) and orbits deep within Saturn's gravity well, thus demanding an orbit insertion ΔV that is prohibitively large. Efficient tour designs have been explored, such as a leveraging tour with Titan, Rhea, Dione and Tethys to reach Enceladus orbit. This was found to require less than half of the ΔV of a direct Titan-Enceladus transfer. Free-return cycler trajectories are also possible, where a spacecraft shuttles between Enceladus and Titan using little or no fuel (Russell and colleagues).
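A rough feel for why Enceladus orbit insertion is so expensive comes from comparing circular orbital speeds around Saturn at the two moons' orbits. The sketch below uses the standard vis-viva relation with rounded constants; it ignores the moons' own gravity and says nothing about an actual tour design.

```python
# Rough comparison of circular orbital speeds around Saturn at the orbits of
# Titan (20 R_S) and Enceladus (4 R_S). Constants are rounded values.
import math

GM_SATURN = 3.793e16      # Saturn gravitational parameter, m^3/s^2
R_SATURN  = 60_268e3      # 1 Saturn radius, m

def v_circular(r_m: float) -> float:
    """Circular orbital speed at radius r around Saturn (vis-viva, e = 0)."""
    return math.sqrt(GM_SATURN / r_m)

for name, r_rs in [("Titan", 20), ("Enceladus", 4)]:
    v = v_circular(r_rs * R_SATURN)
    print(f"{name:9s} ({r_rs:2d} R_S): v_circ = {v / 1e3:5.1f} km/s")
```

The circular speed deep in the gravity well at 4 Saturn radii (~12.5 km/s) is more than twice that at Titan's distance (~5.6 km/s), so a spacecraft must shed several km/s to circularise at Enceladus, which is precisely what leveraging tours through the intermediate moons are designed to avoid paying propulsively.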
Numerous diverse Titan missions have been proposed in the last decade (Reh, 2009; Oleson et al., 2015; Barnes et al., 2012). A mission to the Titan-Enceladus system, TandEM, was extensively studied as an L-class mission by Coustenis et al. (2009), who explored the possibilities of a hot air balloon (Titan), mini-probes (Titan) and penetrators (Titan and Enceladus). In June 2019, Dragonfly was selected as NASA's next New Frontiers mission. The mission will send a robotic rotorcraft lander to Titan in order to explore prebiotic chemistry and habitability. Such heavier-than-air flight is made possible by Titan's thick atmosphere (surface pressure about 1.5 times that of the Earth) and smaller gravitational acceleration.
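The flight advantage can be made concrete with the standard actuator-disk (momentum theory) estimate of ideal hover power, P = (mg)^1.5 / sqrt(2 ρ A). The sketch below compares the same hypothetical vehicle on Titan and Earth; vehicle mass and rotor area are arbitrary, and the atmospheric densities are round reference values.

```python
# Momentum-theory scaling of ideal hover power, P = (m*g)**1.5 / sqrt(2*rho*A).
# Same hypothetical vehicle on Titan and on Earth; this is a scaling argument,
# not a rotor design calculation.
import math

def hover_power(mass_kg: float, g: float, rho: float, disk_area_m2: float) -> float:
    """Ideal induced hover power from actuator-disk momentum theory, in W."""
    thrust = mass_kg * g
    return thrust ** 1.5 / math.sqrt(2.0 * rho * disk_area_m2)

m, A = 450.0, 1.0                          # hypothetical mass (kg), rotor area (m^2)
p_titan = hover_power(m, 1.352, 5.3, A)    # Titan: g = 1.352 m/s^2, rho ~ 5.3 kg/m^3
p_earth = hover_power(m, 9.81, 1.225, A)   # Earth: g = 9.81 m/s^2, rho ~ 1.225 kg/m^3

print(f"Titan/Earth hover power ratio: {p_titan / p_earth:.3f}")
```

The low gravity and the dense atmosphere (Titan's near-surface air is several times denser than Earth's despite the 1.5-bar pressure, owing to the low temperature) combine so that the same vehicle needs only a few percent of the hover power it would need on Earth.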
A recent concept of a versatile aerial-aquatic robotic aircraft provides the capability of in-situ near-surface atmosphere and surface liquid observations (McKevitt, 2019). The operation has heritage in robotic work inspired by observations of the natural world: the field of bioinspiration and biomimetics. A 'plunge diving' manoeuvre, inspired by the gannet seabird (Liang et al., 2013), involves the aircraft plunging nose-first into the surface of a Titan lake. The vehicle is capable of relaunching and ejecting a mass of liquid collected from the area of launch, as shown in Figure 9 (Siddall and Kovac, 2014). Through this, measurements of the composition of Titan's lakes and near-surface atmosphere can be achieved. Entry and descent data can also be used to perform upper- and mid-atmospheric observations, in a similar way to the Huygens descent.
Figure 9 - Impression of a 'plunge diving' manoeuvre by an aerial-aquatic aircraft inspired by the gannet seabird (inset). Inset adapted from Liang et al. (2013).
A Saturn orbiter with multiple satellite flybys would also provide significant scientific return on the aforementioned themes. The Saturn orbiter would have a state-of-the-art payload that exclusively affords satellite science.
S-class or F-class (Plume flyby)
A flyby mission could provide significant scientific return; however, this would focus on a single theme, namely habitability. A spacecraft with a focused payload would fly through Enceladus' plume and conduct ultra-high resolution and sensitivity measurements of its composition in search of biosignatures.
Radioactive power sources
Electrical power sources for outer planet missions are a key enabling technology. Electrical power and spacecraft heating are issues for any mission beyond the orbit of Jupiter due to the low solar irradiance at such distances. This is further complicated by the decrease in solar panel efficiency at low intensities and low temperatures (LILT). To supply sufficient, long-term, and uninterrupted power, solar panels of an unfeasibly large area would be required. This makes both Radioisotope Thermoelectric Generators (RTGs) and Radioisotope Heating Units (RHUs) necessary to power and heat the spacecraft, respectively. Within Europe, 241Am is favoured over 238Pu as a radioisotope source. Despite 241Am having a lower power density than that of 238Pu, its longer half-life and more cost-effective production make it the economical alternative. A mission to explore Titan and Enceladus would greatly benefit from an independent European power source.
Electromagnetic cleanliness
EMC issues should be explored to satisfy requirements imposed by some of the listed payload elements on the spacecraft, e.g. plasma packages and magnetometers. This would be especially crucial to resolve any induced and fossil magnetic fields that may be found on Titan.
Telecommunications
Given the range of the proposed target, science data return will be limited by bandwidth. Most deep space missions use X-band links, while a few also use Ka-band, to transmit telemetry. Nowadays, NASA's Deep Space Network (DSN) 70-m radio antennas provide the maximum rates. Since Cassini relied on NASA's DSN, a mission to Titan and Enceladus would therefore require similar capabilities to achieve the minimum science data return. Further studies of new or upgraded telecommunications technologies are welcomed. This includes both Earth-direct and intra-spacecraft (i.e. relay between probe and spacecraft) communications.
Autonomous guidance, navigation and control
Autonomous GN&C systems are required whenever position and attitude must be known precisely and updated quickly. Proximity missions, particularly small body proximity, will require onboard autonomous GN&C to detect and avoid surface hazards and especially to minimise planetary protection risks. This technology more broadly applies to flyby, small body rendezvous and orbiting, landing, atmospheric entry and "touch and go" sampling, for example to cope with severe and unpredictable contact forces and torques.
Mass spectrometry and dust analysis
The surprise discovery of Enceladus' plume by Cassini meant that the mass spectrometer and dust analyser onboard were not specifically designed for such measurements. The composition of the plume was determined using a low-resolution mass spectrometer. A more detailed analysis of the plume in search of complex biosignatures would require a mass spectrometer capable of measuring masses up to 1000 u, with a high resolution exceeding 24,000 m/Δm and a high sensitivity of one part per trillion. Such a state-of-the-art mass spectrometer should not only be capable of identifying complex organic chains, but also of differentiating their chirality. Similarly, the dust and ice analysers should have higher capabilities, resolution and sensitivity to pick up individual ice and dust grains with micron and sub-micron diameters. These requirements have been studied in detail by the Enceladus Life Finder team (Lunine et al., 2015).
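As a rough illustration of what these figures imply (a minimal sketch using only the numbers quoted above; the variable names are ours, not from any instrument specification), a resolving power of m/Δm = 24,000 at the upper end of the mass range corresponds to separating peaks only a few hundredths of an atomic mass unit apart, which is the scale needed to tell many isobaric organic species apart.

```python
# Minimal sketch: what a resolving power of m/delta_m = 24,000 means
# at the upper end of the quoted mass range (values from the text above).
resolving_power = 24_000   # m / delta_m, as quoted for a future instrument
mass_u = 1000.0            # upper mass limit in atomic mass units (u)

delta_m = mass_u / resolving_power
print(f"Smallest resolvable mass difference at {mass_u:.0f} u: {delta_m:.3f} u")
# -> about 0.042 u, fine enough to separate many isobaric organic species
```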
Summary and Perspectives
This white paper briefly describes outstanding questions pertaining to Titan and Enceladus, legacies of the successful Cassini-Huygens mission. We make the case that such questions are not merely specific to these two mysterious systems but have much broader and deeper implications for humankind's outstanding questions about habitability in the Solar System. For these reasons, we recommend the acknowledgement of Titan and Enceladus as priorities for ESA's Voyage 2050 programme and to combine efforts, in science and technology, with international agencies to launch a dedicated mission to either or both targets, much like ESA's key involvement in some of the most successful planetary missions like Cassini-Huygens.
The core proposers fully support complementary white papers led by S. Rodriguez and G. Choblet advocating Titan and Enceladus science, respectively. Altogether, these white papers are a testament to the size, strength and diversity of the Titan and Enceladus community. | 11,984 | 2019-08-06T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
Statistical control algorithm for the process of rapid solidification rate of the melt
To improve the efficiency and quality of the rapid melt solidification process, it is necessary to improve its controllability. This can be done using a set of statistical methods. An algorithm for the consistent application of statistical methods to improve product quality is proposed. It enables data collection, processing and analysis, including process stability analysis, process capability analysis, trend analysis and evaluation of relationships between parameters. The algorithm can be used to fulfill requirements in a quality management system.
The article deals with the application of statistical methods for consistent analysis of the quality of the rapid melt solidification process. The obtained results are intended to be used within the structure of a quality management system.
The rapid melt solidification process has been developed at the Moscow Aviation Technological Institute (now part of the Moscow Aviation Institute) since the 1980s. Substantial results have been obtained in developing the theory of the process. The technology is developing in the direction of obtaining materials with new properties that are difficult or impossible to obtain by other methods [1][2][3]. First of all, the production of materials for solders, filters, magnets, catalysts, reinforcing elements, etc. should be noted. The rapid melt solidification process is aimed at obtaining a wide range of products in the form of fiber, wire and tape with unique special properties, intended for wide use in the engineering industry.
Applying rapid melt solidification technology and obtaining the corresponding special product properties is economically expedient only if the target results are actually achieved. It is necessary to ensure stable quality and to increase the capability of the process; this applies to both product quality and output quantity. The consistent application of a group of statistical methods is a means of addressing such problems. These methods are designed to collect, process and analyze data and to support the corresponding management decisions. An algorithm of statistical process control is proposed (Figure 1). It makes it possible to meet the ISO 9001:2015 requirements for data analysis and process monitoring.
This algorithm was developed as a result of research on the properties of copper-based solder wire and austenitic-class steel fiber. However, it is universal and can be applied to other materials.
It is necessary to create a system of indicators for monitoring and measurement. The presence of an automated system places practically no limit on the number of parameters, but at the initial stage it is necessary to restrict attention to the key process parameters. As a rule, these are the speed of the crystallizer disk, the material feed rate and the melt temperature [4]. Product quality indicators, namely the geometrical parameters of the fiber or wire cross-section, are controlled.
Statistical data collection includes planning and execution of control operations, including primary processing. Particular emphasis is placed on checking for outliers. There are two groups of factors that affect mass phenomena and processes and are a source of variation: common and random factors. Common factors (common to all units of the mass population) act equally (permanently) on each unit of the population. They make these units similar to each other, create common patterns for them, and form a typical level for the units of a qualitatively homogeneous population. Random factors are individual; they act on individual units of the mass population and produce deviations of individual values of the characteristic from the typical level [5].
Construction of a statistical variation series allows a preliminary analysis of the quality of the process. Measures of central tendency need to be identified; these include the arithmetic mean, mode and median of the distribution. The level of variation in the statistical population should also be assessed; here the range is used for a preliminary estimate, and the standard deviation serves as the main measure. It should be borne in mind that in the rapid solidification process, batches with different but similar characteristics can be obtained. In this case, it is necessary to apply the coefficient of variation, which is a relative indicator [6].
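As a minimal illustration of this step (a sketch with invented measurement values; the variable names and data are ours, not taken from the original study), the central tendency and variation measures mentioned above can be computed as follows:

```python
import statistics

# Hypothetical fiber cross-section measurements, in micrometres (illustrative only)
diameters = [41.2, 40.8, 41.5, 42.0, 40.9, 41.2, 41.7, 41.1, 41.2, 41.4]

mean = statistics.mean(diameters)              # arithmetic mean
median = statistics.median(diameters)          # median of the distribution
mode = statistics.mode(diameters)              # mode (most frequent value)
value_range = max(diameters) - min(diameters)  # range, for a preliminary estimate
std_dev = statistics.stdev(diameters)          # standard deviation, the main measure
cv = std_dev / mean * 100                      # coefficient of variation, %

print(f"mean={mean:.2f}, median={median:.2f}, mode={mode:.2f}")
print(f"range={value_range:.2f}, std={std_dev:.2f}, CV={cv:.1f}%")
```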
Statistical hypothesis testing includes testing the normality of the distribution. This gives a preliminary indication of the stability of the process and reveals the presence of special-cause factors during its implementation. At this stage, it is necessary to eliminate or reduce their impact. In addition, statistical hypothesis testing allows a comparative analysis of individual batches of raw materials and final products.
Several process parameters affect the specified characteristics of the wire or fiber, and their degrees of influence differ; this should be taken into account for management. Multivariate correlation and regression analysis should be used to analyze and then predict the results. The presence of strong dependencies will increase the efficiency of decision-making and improve the quality of management. In the rapid melt solidification process it is not simple correlation analysis but rather multiple correlation that plays an important role, since a number of interacting product characteristics need to be taken into account simultaneously and the process is controlled through several technical parameters. Multiple correlation is the correlation between a dependent variable y and a set of independent variables x1, x2, ..., xk. However, in the vast majority of studies there is an effect of intercorrelation, and sometimes even multicollinearity, which makes a direct approach to measuring multiple correlation difficult. A new approach is needed that allows information for subsequent management decisions and actions to be obtained quickly and soundly. For this purpose it is proposed to apply numerical methods.
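A minimal sketch of how the multiple correlation coefficient could be estimated numerically is shown below; the process parameters, data values and variable names are invented placeholders, not measurements from the study:

```python
import numpy as np

# Hypothetical data: disk speed, feed rate, melt temperature -> fiber diameter
X = np.array([[20.0, 1.1, 1450], [22.0, 1.0, 1460], [21.0, 1.2, 1455],
              [23.0, 0.9, 1470], [19.5, 1.3, 1445], [22.5, 1.0, 1465]])
y = np.array([41.2, 39.8, 40.9, 38.7, 42.1, 39.5])

# Ordinary least squares fit: y = b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Coefficient of determination R^2 and multiple correlation coefficient R
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}, multiple correlation R = {np.sqrt(r_squared):.3f}")
```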
The process must be stable. However, a statistical tendency in the results is often observed during rapid melt solidification; for example, it can occur due to insufficient wear resistance of the crystallizer disk. The statistical time series should be analyzed to identify trends or variations in the results, and the trend should be taken into account for process management; for example, replacement of the crystallizer disk can be predicted. To assess the rapid melt solidification process and identify trends, mechanical or analytical smoothing is proposed, as well as the use of a set of statistical indicators, including absolute level increases, growth rates and other absolute, relative and average indicators.
In the manufacture of products, the main instrument of analysis remains the control chart. It is necessary to select the control chart, conduct a preliminary analysis of the process and carry out subsequent management. As a rule, a double control chart is used, which includes a chart of arithmetic means and a chart of standard deviations. If the results within the samples are unstable, median charts are used. A range control chart can be used to simplify the processing of the results. Experience with the control chart shows the importance of control at the initial stage of solidification; this is due to the instability of the process at the start. The features of the process mean that traditional control charts are not sufficient on their own. The presence of trends leads to the need to use cumulative sum charts, which provide an assessment of constant shifts in the values of quality indicators. The specified requirements for the fiber cross-section and the need for constant monitoring of their fulfilment are a prerequisite for the use of acceptance control charts, which provide analysis of two aspects: the stability of the process and its ability to perform the task. One of the advantages of the acceptance control chart is that there is no unnecessary control, i.e. no need for unnecessary adjustments to the process when it takes place in the "process acceptance zone", i.e. in a satisfactory condition in terms of meeting the tolerance.
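A minimal sketch of how the limits of a double control chart (means and standard deviations) could be computed from subgroup samples is given below; the data are invented, and the simple 3-sigma limits shown here are only one of several conventions (tabulated A3/B3/B4 constants and bias-correction factors are often used instead):

```python
import numpy as np

# Hypothetical subgroups of 5 wire-diameter measurements each (illustrative only)
subgroups = np.array([
    [41.2, 40.8, 41.5, 41.0, 41.1],
    [41.4, 41.6, 40.9, 41.2, 41.3],
    [40.7, 41.1, 41.0, 41.5, 41.2],
    [41.3, 41.0, 41.4, 41.1, 40.9],
])
n = subgroups.shape[1]

xbar = subgroups.mean(axis=1)        # subgroup means (x-bar chart)
s = subgroups.std(axis=1, ddof=1)    # subgroup standard deviations (s chart)

grand_mean = xbar.mean()
s_bar = s.mean()

# Approximate 3-sigma limits for the x-bar chart using s_bar as the sigma estimate
# (ignores the c4 bias-correction factor used in the exact formulas)
ucl = grand_mean + 3 * s_bar / np.sqrt(n)
lcl = grand_mean - 3 * s_bar / np.sqrt(n)
print(f"x-bar chart: centre={grand_mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
```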
To analyze the ability of the process to meet the specified requirements, the Cp and Cpk coefficients should be used to assess the accuracy and centring of the process. For normal operation of the process it is necessary to obtain a value greater than one. It is also possible to estimate the expected number of nonconforming wire or fiber items. This makes it possible to evaluate the efficiency of the rapid melt solidification process and its economic feasibility. The Pp and Ppk coefficients can be used to analyze production conditions. The final action, in accordance with the requirements of ISO 9001:2015, is the continuous improvement of the process. As a rule, this is driven by the requirements of the consumer. After that, the cycle repeats. Improving the rapid melt solidification process involves sequentially bringing the most significant factors into the analysis. The main tool is correlation and regression analysis, with visualization through Pareto diagrams. The effectiveness of the measures taken is evaluated by the increase in the coefficient of determination in the study of the relationship between technological parameters and quality indicators.
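A minimal sketch of the Cp and Cpk calculation mentioned above; the tolerance limits and measurements are invented placeholders, not values from the study:

```python
import statistics

# Hypothetical tolerance limits and diameter measurements, in micrometres
usl, lsl = 43.0, 39.0   # upper and lower specification limits
data = [41.2, 40.8, 41.5, 42.0, 40.9, 41.2, 41.7, 41.1, 41.3, 41.4]

mu = statistics.mean(data)
sigma = statistics.stdev(data)

cp = (usl - lsl) / (6 * sigma)               # spread of the process vs tolerance width
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # also accounts for centring of the process
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}  (values > 1 indicate a capable process)")
```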
This algorithm makes it possible to implement the quality management principle of making decisions based on facts. As a rule, this consists of selecting a quality indicator, collecting data, processing and analyzing the data, and making a decision [7]. Extensive experience in the use of this technology determines the main indicators. Other actions are implemented using the proposed algorithm. Decision-making in this case can be implemented automatically.
Thus, the paper proposes an algorithm for the complex application of statistical methods. It allows the high-speed melt solidification process to be analyzed and controlled. The algorithm is highly versatile and can be used for other technologies. It complies with the requirements of ISO 9001:2015 and provides data monitoring and analysis. The algorithm makes it possible to improve customer satisfaction, to assess the conformity of products and to assess process capability, which in turn allows corrective and preventive measures to be taken. Implementation of the algorithm in the quality management system will improve the effectiveness and efficiency of the system. | 2,343.6 | 2019-11-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Using Scratch to Teach Undergraduate Students' Skills on Artificial Intelligence
This paper presents an educational workshop in Scratch proposed for the active participation of undergraduate students in contexts of Artificial Intelligence. The main objective of the activity is to demystify the complexity of Artificial Intelligence and its algorithms. For this purpose, students complete simple exercises on clustering and two neural networks in Scratch. The detailed methodology is presented in the article.
I. INTRODUCTION
Scientific Method is key in the development of technologically advanced communities [6], and it is of great importance that our students include it in their curricula. As many other competences, it is convenient that some knowledge of scientific method is learned at the undergraduate stage, and any effort done in the Educational System to foster the learning of it undoubtedly pays.
For a better understanding of the scientific method, the Educational Community widely recognizes that school curricula must move on from traditional expositive classes to more informal and collaborative contexts. The fact is, however, that active participation of the students is difficult to achieve in the classroom. One of the reasons is the need to appeal to the interests of the students themselves; in order to achieve active involvement, the classes should make use of more attractive resources, such as fun, games, social interaction, observation of real problems, novelty, etc. This is especially clear in the case of undergraduate students. Surprisingly, there is a quite widespread mistrust of science in the post-truth era in which we live. This makes it necessary to insist on a bigger effort to spread the benefits of science among younger students [1], [2].
The present article presents the design and development of several simple educational exercises to promote understanding and learning of Artificial Intelligence (AI) at schools using Scratch, a graphical programming environment. By allowing novices to build programs by snapping together graphical blocks that control the actions of different actors and algorithms, Scratch makes programming fairly easy for beginners.
AI is one of the technologies that will transform our society, economy and jobs to the greatest extent over the next decades. Some of the best-known examples of AI are driverless cars, chatbots, voice assistants, internet search engines, robot traders, etc. These systems can be embedded in physical machines or in software, and the promising capacities of both architectures make it necessary for society and politicians to regulate the functions and limits of these devices [8]. Despite the myth of destructive AI represented in films such as Terminator or I, Robot, the truth is that nowadays most common smart algorithms consist of a series of simple rules applied to large series of numbers, and the result of that is called Artificial Intelligence.
The British Parliament and other institutions recommend teaching Artificial Intelligence from high school onwards [28], regardless of the pace of development of this technology, in order to cope with future technological and social challenges.
The main reason is to improve technological understanding, enabling people to navigate an increasingly digital world, and inform the debate around how AI should, and should not, be used.
Computational thinking is a problem-solving method that uses techniques typically practiced by computer scientists and is increasingly being viewed as an important ingredient of STEM learning in primary and secondary education [13]. On the other hand, several studies in higher education research report low student pass rates in mathematics. Therefore, research into mathematics education has been greatly emphasized in the last decade [20]. Though education reform efforts have been made around the world, the trouble lies in the fact that most schools are trying to prepare students for the future by continuing what was done in the past [13].
Thus, in this paper an educational proposal for teaching the basic mathematics behind simple AI algorithms is presented. The solution is developed in an open source platform, Scratch, and the main objective for the students is to become aware of the rules behind intelligent systems rather than learning or memorizing anything. The specific tasks to understand are automatic clustering, learning and prediction with AI. The software is designed for students of 16-18 years. The algorithms chosen for teaching are specifically designed or adapted to the mathematical background of the students. Moreover, the article will be publicly available in a web repository.
The present article is divided in the following sections. In section II a background for teaching AI is presented. In Section III the mathematics that will be used in the software are described. In Section IV the methodology that teachers will use with the students is detailed. Finally, in Section V, conclusions of the experiment are analyzed.
II. BACKGROUND
There is a wide consensus among computer scientists that it is quite difficult to teach the basics of AI [11]. This is due to the lack of a unified methodology and to the blend of many other disciplines involved, which require a wide range of skills, ranging from very applied to quite formal [12]. AI modeling, algorithms and applications may be taught using tools as simple as paper and pencil, traditional computer programming or hands-on computer programming [10]. Several papers discuss the prerequisites needed to understand machine learning [19].
Some experts suggest that high school education reforms should encompass a drive on STEM skills and coding in schools; others support the idea of focusing on digital understanding, rather than in skills.
Nowadays, however, the tendency is to teach coding to students, mostly framed in robotics workshops [13], [16], [17]. Seymour Papert [14] laid much of the groundwork for using robots in the classroom in the 1970s. An argument for teaching to children and students using robots is that they see these machines as toys [18]. Studies show that robotics generates a high degree of student interest and engagement and promotes interest in maths and science careers.
[12] uses a Lego robot to make the students build the hardware and control the software that manages the sensors and the environment. [10] uses a low-cost robot platform to teach students how a neural network is trained and built for a robotic navigation problem. There is the possibility that students pay attention to the robotic exercise rather than to the AI learning fundamentals, and this is what [15] tries to address in another robotic navigation problem. Both authors specify the premises to be transmitted to the students about the objective of the activity. These premises are, first, underlining that the exercise is about Artificial Intelligence and not robotics; second, emphasizing that they are clearly defined open projects, with specific start and end points.
However, although robotics is recognized as a proper way for teaching computational-thinking (CT) skills to students [22], in this article we present a wider approach to AI, based on the teaching of some mathematics with the help of Scratch [21], which will let us introduce the students into more than one algorithm.
CT can be defined as the process of recognizing aspects of computation in the world that surrounds us and applying tools and techniques from Computer Science to understand and reason about both natural and artificial systems and processes [22], [23].
The software used to enhance CT on students should be easy for a beginner to overcome the difficulty of creating working programs, but it should also be powerful and extensive enough to satisfy the needs of more advanced programmers. Several programming tools fit these criteria in varying degrees: Scratch, Crickets, Karel the Robot, Alice, Game Maker, Kodu, Greenfoot and Agentcubes. All those tools are based on the Logo philosophy [14]. Graphical programming environments are relatively easy to use and allow early experiences to focus on designing and creating, avoiding issues of programming syntax.
From a pedagogical perspective, computational tools are capable of deepening the learning of mathematics and science contents [24], [25], and the reverse is also true [26]. Scratch is today one of the most popular of these programming environments, and it has proved to be very effective for the engagement and motivation of young students with no experience at programming [27].
III. CONTENT
This article will depict algorithms that can be practiced using Scratch in a workshop with students 16-18 years old. After the workshop, the students should be able to understand, play and eventually code these algorithms.
The specific algorithms are K-means clustering and two neural networks; the exercises that the students will have to do are described next.
K-means
The K-means algorithm, developed in 1967 by MacQueen [3], is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. K-means is a clustering algorithm which tries to show the students how items in a big dataset are automatically classified, even while new items are being added. The technique adopted for creating as many clusters as we want is the minimum square error rule.
Starting from a given cloud of N points and a smaller cloud of K mass centers, the aim of this activity is that the student learns how to cluster the points into K groups, by programming an application in Scratch. In order to do the clustering, each point will belong to the cluster defined by the closest mass center. Finally, each of the K clusters will be colored with a different color (see Figure 1).
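Outside of Scratch, the same assignment step can be sketched in a few lines of Python; this is an illustrative translation of the exercise under our own naming choices, not the code the students receive:

```python
import math
import random

K, N = 3, 50
random.seed(0)

# Random mass centers and cloud of points, mirroring the Scratch stage coordinates
centers = [(random.uniform(-230, 230), random.uniform(-170, 170)) for _ in range(K)]
points = [(random.uniform(-230, 230), random.uniform(-170, 170)) for _ in range(N)]

def nearest_center(p):
    """Index of the mass center closest to point p (minimum Euclidean distance)."""
    return min(range(K), key=lambda k: math.dist(p, centers[k]))

clusters = [nearest_center(p) for p in points]   # cluster number for each point
print(clusters[:10])  # each point is then coloured according to its cluster
```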
Neural network
The basic idea behind a neural network is to simulate lots of densely interconnected brain cells inside a computer so you can get it to learn things, recognize patterns, and make decisions in a human-like way. The main characteristic of this tool is that a neural network learns all by itself. The programmer just needs to design the physical structure (number of outputs, inputs, hidden layers) and set some very simple rules involving additions, multiplications and derivatives. Neural networks are based on perceptrons, which were developed in the 1950s and 1960s by the scientist Frank Rosenblatt [7], inspired by earlier work by Warren McCulloch and Walter Pitts [4]. It is important to note that neural networks are (generally) software simulations: they are made by programming very ordinary computers. Such simulations are just collections of algebraic variables and mathematical equations linking them.
Habitually, neural networks use backpropagation-type algorithms which require the use of derivatives. However, the target students of this exercise (16-18 years old) do not yet have this mathematical operation in their curricula. As a consequence, the neural networks have been created with a logic activation function, which is understandable by the students. The necessary formulas are presented to them without any previous mathematical demonstration.
Figure 1. Initial and final clouds of points.
Different exercises are developed with neural networks. First, a simple neural network with two inputs and an output neuron is trained with an AND logic gate (see Figure 2). In each iteration, students will see the different weights that the neural network obtains. Next, an OR logic gate will be used, and as a consequence, students will observe how the adjustable parameters change with this new data. The mathematical description of this neural network is given in Subsection III-1.
As a second exercise of this type, a more complex neural network exercise is presented (Figure 3). Considering a 3-2-1 neural network (three inputs, a hidden layer with two neurons and an output), the students will train the neural network with AND and OR logic gates again. The detailed operations of this exercise appear in Subsection III-2.
1) Simple Neural Network operation: The neuron obtains an output Y1 from the two inputs Input1 and Input2 using the corresponding weights.
The activation function at N1 yields Y1, the output of the neural network. The error is defined as the difference between the desired output and the obtained output, Error = Desired_output − Y1.
Each time the function is executed, the algorithm updates the weights using the backpropagation and gradient descent rule, until the output Y1 converges to the desired output.
The new value of the first weight is the sum of its previous value and the product of the first input, the learning rate (LR) and the error, W1 ← W1 + Input1 · LR · Error. Similarly, the new value of the second weight is W2 ← W2 + Input2 · LR · Error.
2) Complex Neural Network operation: There are three neurons: two in the hidden layer (N1 and N2) and one at the output (N3). Their values depend on the inputs (X1, X2 and X3) and the weights (W1, W2, W3, W4, W5). An activation function at N3 yields the output of the neural network, and the error is again defined as the difference between the desired and obtained outputs. Employing the backpropagation and gradient descent rule once more yields the corresponding weight-update equations. Once the values of the error and weights are calculated, the student has to store them on the corresponding neurons.
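As a minimal sketch (assuming a simple threshold for the unspecified "logic" activation and an arbitrary learning rate), the single-neuron training described in Subsection III-1 can be reproduced outside Scratch as follows; this is an illustrative translation, not the code handed to the students:

```python
# Train the single neuron of Subsection III-1 on the AND logic gate, using the
# update rule described above: new_weight = old_weight + input * LR * error.
# The 0.5 threshold in the step activation is our assumption; the paper only
# states that a "logic" activation function is used.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate

w1, w2 = 0.0, 0.0
LR = 0.1

def activate(net):
    return 1 if net > 0.5 else 0   # assumed threshold ("logic") activation

for iteration in range(20):
    for (x1, x2), desired in data:
        y1 = activate(w1 * x1 + w2 * x2)   # neuron output Y1
        error = desired - y1               # Error = desired output - Y1
        w1 += x1 * LR * error              # weight updates exactly as described
        w2 += x2 * LR * error

print(w1, w2)  # converged weights
print([activate(w1 * x1 + w2 * x2) for (x1, x2), _ in data])  # expect [0, 0, 0, 1]
```

Replacing the training data with the OR truth table reproduces the second part of the exercise with the same update rule.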
IV. METHODOLOGY
The workshop of Artificial Intelligence will be based on the usage of the educational tool Scratch [21]. Scratch is a visual programming language, and its online community is targeted primarily at children and young students. Using Scratch, users can create online projects and develop them into almost anything by using a simple block-like interface. When they are finished, students can share their projects and discuss their creations with each other. Before the students start the exercises here proposed, it is convenient that they acquire some knowledge of this language.
In all the algorithms presented in Section III, the first task of the student is to fill in the blank gaps left among the lines of code, each marked with a comment. That is, the students do not need to create the algorithm or write the whole code themselves; the code is provided for the most part.
Students will work in pairs and the workshop will be structured in the following steps: • At the beginning, the teachers will give a short explanation of 15-20 minutes about AI and the objective of the workshop, with a twofold aim: first, to demystify Artificial Intelligence; second, to understand some simple mathematics underneath the computations.
• The students will work in couples. Teachers will provide them with some written theoretical background about the algorithms and the instructions for the exercises (K-means and neural networks). Moreover, teachers will explain the basics of the algorithms in another 10-minute presentation. • Students will have one hour to finish the codes (20 minutes for K-means and 40 minutes for both parts of the neural networks) and eventually execute the applications to see whether they work properly. During these personal tasks, teachers will be at hand, ready to assist whenever necessary. • Finally, at the end of the session, the teachers will provide the students with the finished proposed solutions in Scratch for the students to check.
A. K-Means
The algorithm is developed using one main block and three function blocks of Scratch code. The function blocks are called NewDataSet, KMeans and ColourPoints.
The student must finish up the three function blocks: the block NewDataSet, block KMeans and block ColourPoints.
The main block will create the mass centers and the cloud of points. The set of mass centers contains K random points with X coordinates between (−230, 230) and Y coordinates between (−170, 170). These K points will be the mass centers of the clusters (see Figure 4). The first task is to finish the programming of the block NewDataSet, which will create the cloud of N points. The cloud must contain N points with X coordinates between (−230, 230) and Y coordinates between (−170, 170) (see Figure 5). Block KMeans stores the number of the mass center that will be assigned to each point. To do that, the student must code the calculation of the Euclidean distance from each of the points to each of the mass centers in variable A. The program will then find the minimum of these distances and fill up vector Clusters, which contains the number of the cluster to which each point belongs.
Finally, block ColourPoints graphs the clouds of points and the cloud of centers of mass, each in a colour given by vector Clusters.
The algorithm presented here just tries to show automatic clustering, not to obtain clusters of equal size.
B. Neural Network: AND/OR logic gate
The algorithm is developed using a main block, which initializes the data, and two blocks, Neuron and ExecuteButton.
The interface used is described in Figure 2. The student must code the equations that define the function performed by the only neuron of the network. The equation to calculate is the update of W2, as can be seen in Figure 7.
The students must train the neuron using two sets of data: one set for the AND logic gate and another for the OR logic gate.
C. Complex Neural Network Operation
This exercise is based on training a three-input neural network with AND and OR logic gates. As an additional complexity compared to the previous exercise, this neural network is multilayered. It uses three neurons, Neuron1, Neuron2 and Neuron3, and ExecuteButton.
The exercise is planned to fill the gaps of the following operations: the net calculus of N3, and the update of W2 and W5.
V. CONCLUSIONS
The paper presents teacher-guided, easy-to-implement activities that can be performed at schools using Scratch. Moreover, the operations have been adapted to the mathematical background of 16-18 year-old students.
The tasks presented are scalable; students can delve into the maths involved in the mathematical iterations or into the Scratch code itself, or even propose new neural networks to deal with other problems.
The work can be extensible to students of different ages and more AI algorithms can be added to the system.
The simplicity of the equations of K-means and the neural networks permits their implementation in other formats, such as MS Excel, which on some occasions could be more familiar to students than Scratch.
Moreover, the students will realize that the mathematical knowledge acquired throughout the year helps them finish programming the automatic clustering and the neural networks, being able to train AND and OR logic gates.
As future work, first, the authors should run the AI workshop several times and measure the degree of achievement of the objectives in collaboration with pedagogical researchers. Secondly, more programming languages should be explored (GeoGebra, HTML, Shiny apps) in order to implement more exercises, such as the use of neural networks for data prediction.
VI. ACKNOWLEDGEMENTS
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 777720. | 4,345.6 | 2019-03-30T00:00:00.000 | [
"Computer Science",
"Education"
] |
Studies of the Maltose Transport System Reveal a Mechanism for Coupling ATP Hydrolysis to Substrate Translocation without Direct Recognition of Substrate*
The ATPase activity of the maltose transporter (MalFGK2) is dependent on interactions with the maltose-binding protein (MBP). To determine whether direct interactions between the translocated sugar and MalFGK2 are important for the regulation of ATP hydrolysis, we used an MBP mutant (sMBP) that is able to bind either maltose or sucrose. We observed that maltose- and sucrose-bound sMBP stimulate equal levels of MalFGK2 ATPase activity. Therefore, the ATPase activity of MalFGK2 is coupled to translocation of maltose solely by interactions between MalFGK2 and MBP. For both maltose and sucrose, the ability of sMBP to stimulate the MalFGK2 ATPase was greatly reduced compared with wild-type MBP, indicating that the mutations in sMBP have interfered with important interactions between MBP and MalFGK2. High resolution crystal structure analysis of sMBP shows that in the closed conformation with bound sucrose, three of four mutations are buried, and the fourth causes only a minor change in the accessible surface. In contrast, in the open form of sMBP, all of the mutations are accessible, and the main chain of Tyr62–Gly69 is destabilized and occupies an alternative conformation due to the W62Y mutation. On this basis, the compromised ability of sMBP to stimulate ATP hydrolysis by MalFGK2 is most likely due to a disruption of interactions between MalFGK2 and the open, rather than the closed, conformation of sMBP. Modeling the open sMBP structure bound to MalFGK2 in the transition state for ATP hydrolysis points to an important site of interaction and suggests a mechanism for coupling ATP hydrolysis to substrate translocation that is independent of the exact structure of the substrate.
ATP-binding cassette (ABC) transporters move various substrates across membranes, with substrate movement coupled to the hydrolysis of ATP. Although the ATPase activity of ABC exporters like P-glycoprotein is generally stimulated by substrate binding, the ATPase activity of ABC importers is activated by a peripheral substrate-binding protein and not the free substrate (for recent reviews see Refs. 1-3). However, the mechanism of ATPase regulation is still not fully understood. Here, we use one of the most well studied ABC importers, the Escherichia coli maltose transporter (MalFGK2), to investigate the roles of maltose-binding protein (MBP) and maltose itself in regulation of ATPase activity.
In its resting state MalFGK 2 contains a substrate-binding site that is exposed to the cytoplasm (4). In the periplasm, MBP binds maltose, which stabilizes a change from an "open" to a "closed" conformation, enabling it to stimulate the MalFGK 2 ATPase (5,6). Interactions with closed, maltose-bound MBP lead to exposure of the MalFGK 2 maltose-binding site to the periplasmic side where maltose can move from MBP into an occluded translocation pathway (7,8). After ATP hydrolysis, the transporter returns its binding site to the cytoplasmic face to allow the substrate to enter the cytoplasm. This is known as the alternating access model of maltose transport (4) and may be a common mechanism among ABC transporters (2,9,10).
The structure of a transition state complex between MBP and MalFGK 2 , as well as biochemical data (7,8), indicates that maltose enters the substrate-binding site of MalFGK 2 prior to ATP hydrolysis, but it is unclear how maltose-bound MBP activates the MalFGK 2 ATPase (11) and how ATP hydrolysis is coupled to the movement of maltose across the membrane. Of particular interest are the roles that maltose itself might play in regulating the ATPase activity of MalFGK 2 .
There are two ways maltose could regulate ATP hydrolysis. The first is by stabilizing the closed conformation of MBP, and the second is through direct interactions with MalFGK 2 . Although it is clear from previous studies that substrate-induced domain closure in MBP is critical for robust stimulation of the MalFGK 2 ATPase and substrate transport (6), it is not known whether direct interaction between maltose and MalFGK 2 is also required for ATPase activity.
To address this question, we have used an MBP mutant that is able to bind an alternative substrate, sucrose, with high affinity (12). The sucrose-binding MBP (sMBP) enables us to present the maltose transporter with either maltose or sucrose in equivalent contexts and distinguish whether the substitution influences the ATPase activity of MalFGK 2 . Sucrose is a good alternative substrate for this purpose because experiments by Shuman and co-workers (13,14) have shown that it has a very poor ability to compete for the maltose-binding site in MalFGK 2 , indicating that the change in sugar structure is sufficient to disrupt specific binding interactions with MalFGK 2 .
Using sMBP, we have determined that ATP hydrolysis by MalFGK 2 is not dependent on the exact nature of the substrate, and therefore the coupling of ATPase activity to substrate translocation is due solely to interactions between MBP and MalFGK 2 . Based on these findings and detailed structural analysis of sMBP, we propose that a productive interaction between MalG and the vacated maltose-binding site in MBP is required for ATP hydrolysis. In this manner, substrate translocation from MBP to MalFGK 2 is coupled to ATP hydrolysis without requiring a direct interaction between maltose and MalFGK 2 .
MATERIALS AND METHODS
Cloning of MBP Mutants-Plasmid pDIM-C8MalE, containing sMBP, was kindly provided by Ostermeier and co-workers (12). A 923-bp Kpn21/BclI fragment of this vector, containing the W62Y and E111Y substitutions, was ligated into pLH1, which contains the MBP signal sequence for export to the periplasm. The D14L and K15F mutations were introduced by mutagenic PCR using the following primers: 5′-CTGGATTAACGGCCTTTTCGGCTATAACGGTCTCGC-3′ and 5′-GCGAGACCGTTATAGCCAAAAAGGCCGTTAATCCAG-3′.
To produce intracellularly expressed sMBP and wtMBP with a hexahistidine affinity tag, restriction cut sites for EheI and HindIII were added to excise the two genes (without localization tag) using the following primers: 5′-CGCCTCGGCTGGCGCCAAAATCGAAG-3′ and 5′-CGCCGCATCCGGCATTTAAGCTTATTACTTGGTGATACGAG-3′. Digested PCR products were then ligated into the multicloning site of pPROEX-HTa (Invitrogen) to introduce an N-terminal hexahistidine tag attached by a tobacco etch virus protease-cleavable linker. Cleavage of this linker left an N-terminal glycine-alanine insertion that was common to both the sMBP and wtMBP used in this study.
Expression and Purification-Hexahistidine-tagged sMBP and wtMBP were expressed and purified from HS3309 (MalE−/−) E. coli by Ni2+-affinity chromatography, removal of the affinity tag by tobacco etch virus protease cleavage, and ion exchange chromatography, as reported previously (15). Both proteins were denatured in 6 M guanidine and dialyzed exhaustively to remove trace sugars before being refolded by dropwise dilution and stored at −80°C in 50 mM Tris-HCl, pH 8 (15).
Preparation of wt-MalFGK2-containing Proteoliposomes-MalFGK2 was overexpressed from plasmids pNT1SK+ and pMR111 in E. coli HS3399 cells, which contain deletions for all transporter components. Membrane fractions were prepared and solubilized as reported previously (15).
Liposomes were prepared from Avanti™ crude E. coli phospholipids, and after homogenization by sonication, the liposomes were combined with MalFGK2-containing membranes by detergent dilution (15). The proteoliposomes were frozen at −80°C under N2 until used.
ATPase Assays-ATPase measurements were made in a solution of 50 mM Tris-HCl, pH 8.0, 100 mM KCl, and 10 mM MgCl2, with proteoliposomes added to a final concentration of 0.1 mg/ml protein. Purified sMBP or wtMBP was added at various concentrations, in the presence or absence of 5 mM maltose or sucrose. ATP hydrolysis at 37°C was measured in vitro by assaying the appearance of inorganic phosphate, using ammonium molybdate, as described previously (15).
Phases were determined by molecular replacement with the wild-type proteins (PDB codes 1ANF and 1OMP). Rigid body refinement of the two isolated domains was carried out first to capture any domain movements relative to the wild-type structures. Structures were refined using CNS.
Sucrose-binding MBP-sMBP is a mutant form of MBP developed by Ostermeier and co-workers (12). The sMBP molecule has four point mutations, D14L, K15F, W62Y, and E111Y, all within the substrate-binding cleft. Although wild-type MBP (wtMBP) has a dissociation constant (KD) of 1 μM for maltose and no ability to bind sucrose (6), sMBP has a KD for maltose of 24 μM and for sucrose of 6.6 μM (12). These values were confirmed for our sMBP constructs using fluorescence titrations (data not shown). Furthermore, we measured substrate-induced conformational changes in sMBP, in solution, by small angle x-ray scattering. Sucrose-induced changes in the conformation of sMBP were identical to changes seen in wtMBP (supplemental Fig. S1), with the ligand-bound and unliganded conformations of sMBP clearly matching the ligand-bound and unliganded conformations of wtMBP, respectively (data not shown). We also observed that sMBP could complement the growth of wtMBP-deficient E. coli on M9 maltose minimal media (data not shown). Although growth with sMBP was 3-4 times slower compared with that observed with wtMBP, in control experiments with no binding protein there was no growth. Therefore, sMBP interacts productively with MalFGK2 to promote maltose transport.
sMBP stimulates MalFGK2 with Bound Sucrose or Maltose-MalFGK2 has a binding site that is relatively specific for maltodextrins (13), and binding of the substrate to this site may be important for stimulation of the MalFGK2 ATPase. To determine the importance of specific interactions between maltose and MalFGK2, we used sMBP to present MalFGK2 with either maltose or sucrose as a transport substrate and measured the resulting stimulation of ATPase activity in vitro. Consistent with literature findings, MalFGK2-containing proteoliposomes showed a low level of basal activity (Fig. 1). This activity was unaltered by the addition of 5 mM maltose or sucrose, confirming that free sugar cannot stimulate the transporter in the absence of MBP (data not shown; see Ref. 11). When 20 μM sMBP was added to wild-type MalFGK2, no statistically significant increase in activity was observed. However, in the presence of either 5 mM maltose or sucrose, sMBP stimulated a 3-fold increase in ATP hydrolysis over background (Fig. 1A, bars with diagonal lines). We have therefore observed that, with respect to ATPase activation, MalFGK2 cannot distinguish sucrose from maltose. To demonstrate that the equivalence of maltose- and sucrose-bound sMBP was not limited to a 20 μM sMBP concentration, a range of concentrations, from 1 to 100 μM, was tested (Fig. 1B). Across this concentration range, maltose- and sucrose-bound sMBP stimulate the MalFGK2 ATPase to similar levels; the overall trend (in both cases a proportional increase in MalFGK2 ATPase activity as sMBP concentration is raised) suggests that the mechanism for stimulation is the same, irrespective of the sugar with which MalFGK2 comes into contact.
Although maltose- and sucrose-bound sMBP both stimulate the MalFGK2 ATPase to the same extent, the absolute levels of ATPase activity produced by sMBP were much lower than those produced by wtMBP. For example, in the presence of maltose, the level of ATPase stimulation by 20 μM wtMBP was 40-fold higher than the stimulation produced by the same concentration of either maltose- or sucrose-bound sMBP. In addition, unliganded sMBP did not produce a significant increase in MalFGK2 ATPase activity, in contrast to unliganded wtMBP, which consistently produces a 2-fold stimulation (Fig. 1A) (11,15). Therefore, although the substitution of sucrose for maltose did not influence stimulation of the MalFGK2 ATPase, when compared with wtMBP the mutations in sMBP have drastically compromised its overall ability to stimulate the MalFGK2 ATPase.
Structural Analysis of Open and Closed sMBP-To determine how the mutations in sMBP disrupt its ability to stimulate MalFGK2, we solved the crystal structures of sMBP in both the sucrose-bound and substrate-free forms to resolutions of 2.0 and 1.5 Å, respectively (Table 1). In both forms sMBP adopts a wild-type fold, with main chain atoms differing from wtMBP by a root mean square deviation for Cα positions of 0.48 Å in the closed form and 0.55 Å in the open form.
Well defined electron density for sucrose was seen in the binding site of the sMBP sucrose structure; the electron density clearly defines each hydroxyl group of sucrose and does not fit maltose (Fig. 2A). Like maltose-bound wtMBP, sMBP binds sucrose through hydrogen bonds with each of the two sugar rings. The first nonreducing glucose unit is common to both maltose and sucrose and occupies an identical binding pocket in wtMBP and sMBP (Fig. 2B). The second sugar ring differs between maltose and sucrose, being an α-1,4-linked reducing glucose in maltose and an α-1,2-linked fructose in sucrose; as a result, sucrose adopts a 90° bend compared with maltose. This bend allows the C3 hydroxyl to hydrogen bond with residue W62Y, which was likely selected for this purpose (Fig. 2B). The
bend also creates a cavity in the binding site and separates the sugar from residues 14 and 15 (Fig. 2C). The D14L, K15F, and E111Y mutations modify this hydrophobic cavity by removing what would otherwise be unmatched buried charges or polar groups.
We also solved the crystal structure of the open conformation of sMBP to 1.5 Å resolution and were surprised to find a significant change in the structure of the ligand-binding site. The W62Y mutation is able to adopt an alternative conformation that displaces the main chain segment from residues Tyr62 to Gly69 (Fig. 3). This difference was evident from clear electron density for the Tyr62 side chain in two different places, one of which necessarily displaces Phe67 and is therefore incompatible with the wild-type main chain conformation (Fig. 3C). The only way this change can be accommodated is for residues 62-69 to partially extend into the substrate binding cleft. The occupancies of the two conformations of residues 66-69 were set such that the temperature factors for the wild-type conformation are similar to the main chain average, as is the case with open, wild-type MBP (5). On this basis, the occupancy of the wild-type conformation is estimated at 0.4 and that of the alternative conformation at 0.6.
Both the open and closed conformations of MBP are involved in maltose transport (8, 14 -16). To understand why sMBP has such a compromised ability to stimulate the MalFGK 2 ATPase, we compared its surface in both the open and closed conformations to that of wtMBP. The changes in sMBP necessary to support sucrose binding require only side chain substitutions, most of which are buried in the sugar-binding site and are not surface accessible in the closed form of the protein. As a result, the surface morphology of closed sMBP is virtually unaltered from closed wtMBP (Fig. 4A), with only a slight perturbation caused by the exposure of a methyl group on D14L (supplemental Fig. S2).
In contrast to the closed state, the open conformation of sMBP fully exposes all four binding site mutations to the solvent (Fig. 4B), as well as the alternative and partially disordered conformations for residues 62-69 caused by the W62Y mutation (Fig. 3). To summarize, our structural analysis found that in the sucrose-bound closed form, sMBP closely mimics the surface morphology of wtMBP, but open unliganded sMBP displays a drastically altered sugar-binding site.
Figure 2 legend (partial): (Table 1); coordinates for wtMBP and maltose were from PDB code 1ANF (25). A, bound sucrose (dark green) and the mutated residues (pale green) are shown, along with 2Fo − Fc electron density for the sucrose (blue mesh) contoured at 2. The electron density map was calculated using phases from the partially refined structure, prior to the addition of sucrose to the binding site. B, hydrogen bonding interactions between sucrose and sMBP (top) are compared with those between maltose and wtMBP (bottom). Hydrogen bonding interactions to the first glucose ring are the same for both proteins. C, comparison of the ligand-binding sites of sMBP and wtMBP. The molecular surface that sMBP and wtMBP have in common is shown in gray; carbon atoms from maltose and wtMBP are shown in orange and yellow, respectively, and those from sucrose and sMBP are shown in dark green and pale green. The conformation of sucrose creates a cavity that would normally be filled with atoms from the second glucose unit of maltose, to which three charged residues (Asp14, Lys15, and Glu111) would be hydrogen-bonded, as illustrated in B. The mutations in sMBP (pale green) convert these three charged residues to neutral residues.
Altered Interactions between Open sMBP and MalFGK 2 -To investigate how the mutations in open sMBP could cause such a drastic defect in its ability to stimulate the MalFGK 2 ATPase, we replaced wtMBP with sMBP in the crystal structure of MBP-MalFGK 2 that corresponds to the transition state for ATP hydrolysis (7,8). The backbone positions of sMBP fit the MBP component of the trapped transition state to a root mean square deviation of 0.84 Å.
The binding of sMBP to the transporter transmembrane (MalFG) domains was not obviously compromised across the exterior surface of the binding protein, including contacts between sMBP and the MalF P2 arm (17,18). However, mutations D14L, W62Y, and E111Y disrupted interactions of the maltose-binding site with residues 253-258 of MalG, which occupy the maltose-binding site in the transporter transition state (Fig. 5). These residues include an invasive structure known as the MalG P3 "scoop loop," named for its probable role in excluding maltose from the ligand-binding site in MBP (8). A previous study showed that a 31-residue insertion into this loop did not affect assembly of MalFGK 2 but abolished transport by the system by disrupting interactions with MBP (19). In the case of the interaction with sMBP, D14L clashes with Asn 254 of MalG, whereas W62Y and E111Y remove stabilizing van der Waals and hydrogen bond interactions. In addition, the alternative conformation adopted by residues 62-69 will interfere with MalG interactions. Altogether, the mutations in the open conformation of sMBP would be expected to disrupt interactions with the MalG P3 loop in the transition state for ATP hydrolysis.
In summary, the mutations in sMBP have a drastic effect on its ability to stimulate MalFGK 2 ATPase activity. Structural analysis indicates that this effect is due to a disruption of interactions between residues 253 and 258 of MalG and the empty sugar-binding site of MBP as it occurs in the transition state for ATP hydrolysis. The magnitude of this effect shows that these interactions are critical for stimulation of the MalFGK 2 ATPase.
DISCUSSION
We observed that a sucrose-binding mutant of MBP was able to stimulate the ATPase activity of MalFGK 2 with either maltose or sucrose present as substrate. Although the level of stimulation by sMBP was only 2-3% of that produced by wtMBP, we believe the system is operating along the same reaction pathway as the fully wild-type system. The ATPase measurements were carried out in a well characterized proteoliposome system in which the ATPase activity of MalFGK 2 is tightly coupled to interactions with MBP. Because MalFGK 2 is wild type, and sMBP adopts the same open and closed structures as wtMBP, the conformational changes in the system as a whole will be similar for sMBP and wtMBP. In addition, sMBP is able to mediate growth on minimal maltose media, showing that the sMBP-stimulated ATPase activity is associated with maltose transport in vivo.
The activation by sMBP was indistinguishable between maltose and the nonphysiological substrate sucrose. The available evidence suggests that sucrose is unable to interact with the maltose-binding site in MalFGK 2 . For example, in transport assays using MBP-independent MalFGK 2 mutants, sucrose was incapable of competitively inhibiting the transport of maltose (13) indicating that the substrate-binding site of MalFGK 2 has little, if any, affinity for sucrose. This can be explained using the MalFGK 2 structure in complex with maltose (8); modeling sucrose into the same position as maltose results in clashes with MalF residues 383, 433, and 436, including steric clashes with backbone atoms. Because sucrose is unable to occupy the maltose-binding site of MalFGK 2 , the observation that maltoseand sucrose-bound sMBP have equal abilities to stimulate MalFGK 2 demonstrates that specific binding of the carbohydrate by MalFGK 2 is not important for activation of its ATPase. Therefore, it is the substrate-induced conformational change in MBP, but not the identity of the substrate itself, that is critical for stimulation of the MalFGK 2 ATPase.
In principle, ATP-dependent transporters should couple ATP hydrolysis to the actual movement of substrate. Our results with sMBP show that direct interactions with the substrate are not required for stimulation of the MalFGK 2 ATPase, and therefore coupling of ATP hydrolysis to substrate translocation must depend solely on interactions between MBP and MalFGK 2 . In this regard, the very strong defect in the ability of sMBP to stimulate the MalFGK 2 ATPase indicates that a critical interaction between MBP and MalFGK 2 has been disrupted by the mutations.
Both the open and closed conformations of MBP interact with MalFGK 2 during the catalytic cycle (3,7,8,15). The surface of closed sMBP is almost identical to that of wtMBP, suggesting that this conformation is not responsible for the reduced ability of sMBP to stimulate the MalFGK 2 ATPase. In fact, Leu 14 (Asp 14 in wtMBP) is the only mutant residue that causes a change in the exposed surface of closed sMBP and could therefore alter interactions with MalFGK 2 . Although functional genetic screens have demonstrated that the region around residue 14 is important for the interaction of MBP and MalFGK 2 (20,21), the only change found at residue 14 in the genetic screens was a mutation to tyrosine, a much larger residue that cannot be buried in the ligand-bound conformation of MBP and would therefore produce a large change in the surface of the closed conformation. In contrast, the D14L mutation is mostly buried and almost isosteric (supplemental Fig. S2), resulting in only a very small change in the surface, namely a 2-Å extension of an existing hydrophobic patch (supplemental Fig. S2). Therefore, the small effect of the D14L mutation on the surface of closed sMBP does not provide a convincing explanation for the profound effect of the mutations on the ability of sMBP to stimulate MalFGK 2 .
The surface of open sMBP, on the other hand, is drastically altered by the exposure of mutant residues in the sugar-binding site and the creation of an area of conformational instability due to the W62Y mutation. On this basis, the profound defect in sMBP is most likely due to a disruption of interactions made by the open, rather than the closed, conformation of MBP. This conclusion is consistent with an important role for open MBP in stabilization of the transition state for ATP hydrolysis (8,15).
In fact, the reduced activation of MalFGK 2 ATPase by sMBP coincides with a disruption of interactions between sMBP ligand-binding site residues and the invasive MalG P3 loop of MalFGK 2 . Residues 254-257 of MalG extend into the sugar-binding site, making contacts with wtMBP residues 14, 62, and 111; these contacts would be affected by the mutations in sMBP. In addition, interaction with the MalG P3 loop would be disrupted by the partial disorder of main-chain residues 62-69 of sMBP, as outlined in Fig. 3 (coordinates for open and closed wtMBP correspond to 1OMP (5) and 1ANF (25), respectively). A role for the MalG P3 loop in energetic coupling is consistent with its position in MalFGK 2 (Fig. 6). The P3 loop is connected to MalG helices 15 and 16, which extend from the scoop loop to the MalG C terminus, located in a hydrogen bond network equidistant between the two ATP-binding sites of MalK 2 .
Our data indicate that in addition to extracting maltose from the MBP sugar binding cleft (8), interactions between the MalG P3 loop and MBP also play a direct role in promoting ATP hydrolysis. These interactions do not depend on the specific chemical identity of the substrate, and therefore a similar mechanism might be operative in multidrug exporters and other ABC transporters that couple ATP hydrolysis to the transport of diverse substrates.
The gravity of light-waves
Light waves carry along their own gravitational field; for simple plane electromagnetic waves the gravitational field takes the form of a pp-wave. I present the corresponding exact solution of the Einstein-Maxwell equations and discuss the dynamics of classical particles and quantum fields in this gravitational and electromagnetic background.
Setting the stage
The gravitational properties of light waves have been studied extensively in the literature [1]-[11]. In this lecture I describe the exact solutions of the Einstein-Maxwell equations discussed in [5,6] and some applications.
The discussion concerns plane electromagnetic waves propagating in a fixed direction, chosen to be the z-axis of the co-ordinate system. As they propagate at the universal speed c, taken to be unity (c = 1 in natural units), it is useful to introduce light-cone co-ordinates u = t − z, v = t + z. The electromagnetic waves to be discussed are then described by a transverse vector potential (1). This expression explicitly makes use of the superposition principle for electromagnetic fields, guaranteed in Minkowski space by the linearity of Maxwell's equations and well established experimentally. The corresponding Minkowskian energy-momentum tensor has a single non-vanishing component in light-cone co-ordinates (3), in which the transverse electric and magnetic fields are expressed in terms of the vector potential (1), the prime denoting a derivative w.r.t. u. The same expression for light waves also holds in general relativity, the corresponding special solution of the Einstein equations being described by the line element (5). For this class of metrics [12,13] only a small set of connection components is non-vanishing, and these determine the complete Riemann tensor. As a result the Ricci tensor is fully specified by a single component, which matches the form of the energy-momentum tensor (3) and thus allows solutions of the Einstein equations, with Φ 0 representing an additional free gravitational wave of pp-type.
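For orientation, a commonly used form of such a pp-wave line element in these light-cone co-ordinates is sketched below in LaTeX notation; the overall sign and normalization conventions are assumptions of this sketch and need not coincide with those of equation (5) above.

ds^2 \;=\; -\,du\,dv \;+\; \Phi(u,x,y)\,du^2 \;+\; dx^2 \;+\; dy^2 .

For metrics of this type the only non-trivial Ricci component is proportional to the transverse Laplacian \Delta_\perp \Phi = (\partial_x^2 + \partial_y^2)\Phi, so the Einstein equations reduce to a transverse Poisson-type equation for \Phi sourced by the light-wave energy density T_{uu}.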
Geodesics
The motion of electrically neutral test particles in a light-wave (1) is described by the geodesics X µ (τ) of the pp-wave space-time (5). They are found by solving the geodesic equation, the overdot denoting a derivative w.r.t. proper time τ. The equation for the geodesic light-cone co-ordinate U(τ) is especially simple, as its momentum (associated with a Killing vector) is conserved. Another conservation law is found from the hamiltonian constraint, obtained by substitution of the proper time in the line element, where v = dX/dT is the velocity in the observer frame. Finally, using (11) to substitute U for τ, one obtains the equations for the transverse co-ordinates. For quadratic pp-waves Φ(u, x i ) = κ ij (u) x i x j these take the form of a parametric oscillator equation (14). For light-like geodesics the equations are essentially the same, except that the hamiltonian constraint is modified; note that in Minkowski space, where Φ = 0, the constraint reduces to v 2 = c 2 = 1. These equations take an especially simple form for circularly polarized light waves sharply peaked around a central frequency, with the domain of a(k) centered around the value k 0 with width ∆k and central amplitude a 0 . In that case Φ = µ 2 (x 2 + y 2 ), and equation (14) reduces to a simple harmonic oscillator equation with angular frequency µ in the U-domain.
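As a purely illustrative numerical sketch (not part of the original lecture), the statement that the transverse geodesic motion in this wave-packet background reduces to harmonic oscillation in the U-domain can be checked by integrating d²x/dU² = −µ²x directly; the value of µ, the step size and the initial data below are hypothetical.

import numpy as np

mu = 0.3            # effective angular frequency in the U-domain (hypothetical value)
dU = 0.01           # integration step in the light-cone co-ordinate U
steps = 5000

x, vx = 1.0, 0.0    # hypothetical initial transverse position and dx/dU
trajectory = []
for _ in range(steps):
    vx += -mu**2 * x * dU    # harmonic restoring term from Phi = mu^2 (x^2 + y^2)
    x += vx * dU             # semi-implicit Euler update keeps the oscillation bounded
    trajectory.append(x)

# The numerical trajectory stays close to cos(mu * U), as expected for a simple
# harmonic oscillator; the y co-ordinate obeys the same equation independently.
print(min(trajectory), max(trajectory))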
Field theory
In the previous section we studied the equation of motion of test particles, supposed to have negligible back reaction on the gravitational field described by the metric (5). Similarly one can study the dynamics of fields in this background space-time in the limit in which the fields are weak enough that their gravitational back reaction can be neglected. First we consider a scalar field Ψ(x) described by the Klein-Gordon equation. It is convenient to consider the Fourier expansion w.r.t. the light-cone variables (u, v); the amplitudes ψ then satisfy a reduced wave equation. This equation can be solved explicitly for the circularly polarized wave packets which lead to the simple quadratic amplitude (18). The right-hand side then describes a pair of quantum oscillators with frequency ω = 2µ|q|, possessing the eigenvalue spectrum 2µ|q| (n x + n y + 1) ≡ 4σ|q|, n i = 0, 1, 2, ...
Equation (23) thus reduces to this oscillator form, from which the final result for the scalar field follows.
Electromagnetic fluctuations in a light-wave background
On top of an electromagnetic wave described by equation (1) there can be fluctuations of the electromagnetic field; the full Maxwell field is then the sum of the background wave and a fluctuation term. Because of the linearity of Maxwell's equations the field equations for the wave background and the fluctuations separate. The fluctuating field equations in the gravitational pp-wave background are derived from the corresponding action and read as in (29), where ∆ ⊥ = ∂ 2 x + ∂ 2 y . As the fluctuating field equations possess their own gauge invariance they can be restricted without loss of generality by the constraint (30). However, this does not yet exhaust the freedom to make gauge transformations, as the condition (30) is respected by special residual gauge transformations. As can be seen from the first equation (29), these transformations can be used to eliminate the component a v . We are then left with a fluctuating field component a u restricted by (30), implying that a u satisfies a Gauss-law constraint. The only remaining dynamical degrees of freedom are then the transverse components a i , which are solutions of Klein-Gordon-type equations. For pp-backgrounds of the special form (18) these solutions take the form (26) with m 2 = 0.
In the full theory the gravitational field must also fluctuate in a corresponding fashion. In the limit where the fluctuations are due to irreducible quantum noise, a corresponding quantum effect must be present in the space-time curvature. In view of the result (9) for the photon fluctuations in the light-beam itself these are expected to take the form of associated spin-0 graviton excitations.
PC-3-Derived Exosomes Inhibit Osteoclast Differentiation by Downregulating miR-214 and Blocking NF-κB Signaling Pathway
Prostate cancer is a serious disease that can invade bone tissues. These bone metastases can greatly decrease a patient's quality of life, pose a financial burden, and even result in death. In recent years, tumor cell-secreted microvesicles have been identified and proposed to be a key factor in cell interaction. However, the impact of cancer-derived exosomes on bone cells remains unclear. Herein, we isolated exosomes from prostate cancer cell line PC-3 and investigated their effects on human osteoclast differentiation by tartrate-resistant acid phosphatase (TRAP) staining. The potential mechanism was evaluated by qRT-PCR, western blotting, and microRNA transfection experiments. The results showed that PC-3-derived exosomes dramatically inhibited osteoclast differentiation. Marker genes of mature osteoclasts, including CTSK, NFATc1, ACP5, and miR-214, were all downregulated in the presence of PC-3 exosomes. Furthermore, transfection experiments showed that miR-214 downregulation severely impaired osteoclast differentiation, whereas overexpression of miR-214 promoted differentiation. Furthermore, we demonstrated that PC-3-derived exosomes block the NF-κB signaling pathway. Our study suggested that PC-3-derived exosomes inhibit osteoclast differentiation by downregulating miR-214 and blocking the NF-κB signaling pathway. Therefore, elevating miR-214 levels in the bone metastatic site may attenuate the invasion of prostate cancer.
Introduction
Prostate cancer is one of the most common malignant tumors, with bone as the preferential metastatic site [1,2]. Without effective intervention, persistent invasion of prostate cancer will soon lower the quality of life of affected patients and even result in death [3]. Bone metastatic lesions can be divided into two categories, osteoblastic or osteolytic, depending on the radiographic characteristics. The differences are caused by an imbalance between bone formation and bone resorption, i.e., whether osteoblasts or osteoclasts are dominant [4,5]. Prostate cancer usually leads to osteoblastic bone metastasis. Studies have found that prostate cancer cells release many cytokines to promote osteoblast differentiation [6-8]. However, the effects of prostate cancer-derived exosomes on osteoclasts remain unclear.
Exosomes are extracellular vesicles with a diameter of 30-150 nm and a density of 1.13-1.19 g/ml. Whether in a physiological state or pathological state, cells can secrete exosomes to transfer certain proteins, lipids, and nucleic acids to the recipient cells by endocytosis or membrane fusion for transcellular regulation [9]. These exosomes perform various functions in immune response, antigen presentation, cell migration, cell differentiation, tumor progression, and bone metabolism [10,11]. It has been shown that exosomes derived from melanoma cells and labeled with fluorescent dye can infiltrate lung and bone tissues, advancing development of metastases [12]. In prostate cancer, more microvesicles are present in metastatic sites than in normal tissues [13]. However, the underlying interaction has yet to be elucidated.
MicroRNAs (miRNAs) are noncoding RNAs of approximately 22 nucleotides that mainly repress gene expression at the post-transcriptional level by imperfect base pairing to complementary sequences in the 3′ untranslated region of mRNA [14]. It has been well established that miRNAs play an important role in various cellular processes such as tumor progression, immune regulation, and damage repair [15]. A recent study found that silencing miR-214-3p in osteoclasts significantly enhanced bone resorption and weakened the osteolytic metastasis of breast cancer [16]. Furthermore, miR-214 can enhance the bone-resorbing ability of osteoclasts and increase the expression of osteoclast markers such as Acp5, Ctsk, and Mmp9 [17]. Researchers have suggested that miR-214 can activate the PI3K/Akt pathway by targeting Pten to positively regulate osteoclastogenesis, which indicates that miR-214 is a strong contributor to osteoclast differentiation [17].
In this study, we aimed to explore the effects of prostate cancer exosomes on osteoclast differentiation and the role of miR-214 in the process. We isolated exosomes from prostate cancer cell line PC-3 and cocultured the exosomes with osteoclast precursor cells. We found that PC-3-derived exosomes remarkably inhibited differentiation of osteoclasts by downregulating miR-214 and repressing the NF-κB signaling pathway. Thus, miR-214 upregulation could become a potential therapeutic method to attenuate prostate cancer bone metastasis.
Materials and Methods
PC-3 Cell Culture. PC-3 cells were purchased from the Chinese Academy of Sciences Type Culture Collection. PC-3 cells were cultured in Roswell Park Memorial Institute-1640 medium (RPMI-1640; Gibco, USA) supplemented with 10% fetal bovine serum (Biological Industries, Israel), 100 units/ml penicillin, and 100 mg/ml streptomycin (Gibco) at 37 °C in a 5% CO 2 atmosphere. The medium was replaced every 2-3 days. When PC-3 cells reached 80% confluence, the medium was substituted with RPMI-1640 containing exosome-free serum (Thermo Fisher Scientific, USA) for 2 days. Then, the cell culture supernatant was collected for exosome isolation.
Exosome Isolation. For exosome isolation, ultracentrifugation was applied as described previously [18]. The PC-3 cell culture supernatant mentioned in the previous section was harvested and centrifuged at 300×g and 4 °C for 10 min to remove floating cells. Further centrifugation at 10,000×g and 4 °C for 60 min was performed to remove cell debris. Then, the supernatant was passed through a 0.22-μm filter and ultracentrifuged at 120,000×g and 4 °C for 2 h using an XPN-100 rotor (Beckman Coulter, USA). The exosome pellet was rinsed with Dulbecco's phosphate-buffered saline (DPBS), and the ultracentrifugation at 120,000×g was repeated. After that, the supernatant was discarded carefully, and the exosome pellet was resuspended gently in DPBS. The exosome protein content was determined by BCA protein assay.
Exosome Characterization. We determined the number and size distribution of the exosomes with a NanoSight LM10 (Malvern, UK). One milliliter of sample was injected into the sample chamber with a sterile syringe. All measurement steps were conducted according to the manufacturer's guidelines. Moreover, the morphology of the exosomes was observed by transmission electron microscopy (TEM). Five microliters of sample was dropped onto carbon-coated 200-mesh copper grids for a 1-min incubation. Extra liquid was absorbed gently with filter paper around the border of the grids. Then, the sample was negatively stained with a 2% aqueous solution of phosphotungstic acid for 30 s. Extra liquid was absorbed with filter paper again. The grids were examined using the H-7650 TEM (Hitachi, Japan) at 80 kV; after the grids had been heated for 1 min, the particle morphology was observed. In addition, nanoparticle tracking analysis was performed to assess the size distribution of PC-3 exosomes using the NanoSight LM10 (Malvern). Furthermore, the expression of exosome markers was measured by flow cytometry using an Accuri C6 flow cytometer (Becton Dickinson, USA).
Human Osteoclast Induction. Osteoclasts were induced from human peripheral blood mononuclear cells (PBMCs) as previously described [19]. In brief, human peripheral blood was acquired from a healthy volunteer in a centrifuge tube primed with 1000 U/ml heparin. Written informed consent was obtained before the procedure, which was approved by the Committee of Clinical Ethics of the Zhujiang Hospital. Then, the peripheral blood was diluted 1:1 with phosphate-buffered saline (PBS) and layered gently on Histopaque-1077 (Sigma-Aldrich, USA) for centrifugation (400×g, 30 min, 25 °C). Next, the buffy coat was aspirated carefully and transferred into a new centrifuge tube, in which the PBMCs were washed with PBS and centrifuged twice at 250×g for 10 min. After that, PBMCs were resuspended in complete RPMI-1640 containing 30 ng/ml macrophage colony-stimulating factor (M-CSF; Sino Biological, China) and cultured in a 6-well plate at a density of 6 × 10 6 cells/ml/well for 3 days. Nonadherent cells were then removed, and adherent cells were considered mononuclear cells. We continued to culture the mononuclear cells with complete RPMI-1640 containing 30 ng/ml M-CSF for another 3 days for cell growth. Thereafter, cells were cultured in complete RPMI-1640 (exosome-free serum) containing 30 ng/ml M-CSF and 50 ng/ml receptor activator of nuclear factor κB ligand (RANKL; Sino Biological, China) with or without various concentrations of PC-3 exosomes for 10 days. Then, osteoclasts were observed by tartrate-resistant acid phosphatase (TRAP) staining. Levels of several osteoclast differentiation marker genes were measured by qRT-PCR and western blotting.
TRAP Staining. Osteoclasts were stained using a TRAP staining kit according to the manufacturer's instructions (Sigma-Aldrich, USA). First, osteoclasts were fixed with Fixative Solution (a combination of 25 ml citrate solution, 65 ml acetone, and 8 ml of 37% formaldehyde) for 30 s at room temperature and rinsed with deionized water three times. Then, for preparation of the staining solution, 0.5 ml Fast Garnet GBC base solution and 0.5 ml sodium nitrite solution were mixed for 30 s and added into a 100-ml beaker containing 0.5 ml naphthol AS-BI phosphate solution, 2 ml acetate solution, and 1 ml tartrate solution. Next, osteoclasts were immersed in the mixed solution, incubated at 37 °C for 1 h, and finally rinsed with deionized water three times. The TRAP-positive cells (containing > 3 nuclei) were observed by microscopy.
MiRNA Mimic/Inhibitor Transfection. MiR-214 mimic and inhibitor and negative control (NC) siRNA were purchased from GenePharma Co., Ltd. (Shanghai, China). Mononuclear cells seeded in 6-well plates until 80% confluency were transfected with 50 nM miR-214 mimic, miR-214 inhibitor, or NC using RFect siRNA transfection reagent (Baidai, China) according to the manufacturer's instructions. After incubation for 48 h, cells were cultured in fresh medium and induced by M-CSF and RANKL for 10 days.
Total RNA Extraction. After the different treatments, total RNA extraction was performed using TRIzol reagent (TaKaRa, Japan). In brief, osteoclasts were washed with PBS twice before TRIzol (1 ml/well in a 6-well plate) was added. Then, the lysis solution was moved to an Eppendorf (EP) tube and mixed with 0.2 ml chloroform, followed by centrifugation at 12,000 rpm at 4 °C for 15 min. The supernatant was transferred to a new EP tube and mixed with 0.5 ml isopropanol. After centrifugation at 12,000 rpm at 4 °C for 10 min, the upper aqueous phase was removed, and the RNA precipitate was washed in 75% ethanol by centrifugation again at 12,000 rpm at 4 °C for 10 min. Thereafter, the upper aqueous phase was discarded, and the RNA precipitate was air-dried at room temperature. Finally, 20 μl RNase-free water was added into each EP tube to dissolve the precipitate, and the concentration and quality of total RNA were measured with a spectrophotometer.
RT-qPCR. Reverse transcription was performed using a PrimeScript RT reagent kit (TaKaRa). qPCR analysis was conducted using SYBR Premix Ex Taq II.
Western Blotting. A bicinchoninic acid protein assay kit (Beyotime, China) was used for cell lysis and protein concentration measurement. Extracted protein was mixed with 1× loading buffer and boiled for 10 min before 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Then, protein from the gel was transferred onto polyvinylidene fluoride membranes (Boster, USA). These membranes were subsequently blocked in 3% bovine serum albumin (BSA) at room temperature for 1 h and incubated with primary rabbit monoclonal antibody (anti-CTSK, NFATc1, p65, p-p65, IKBA, p-IKBA, or GAPDH; Boster) at 4 °C for 12 h. After washing with Tris-buffered saline with Tween 20 (TBST) on a rocking table three times, the membranes were incubated with goat anti-rabbit IgG secondary antibody (Boster) at room temperature for 1 h and washed with TBST three times again. The blots were observed using a Tanon 4200 SF automated fluorescence chemiluminescence image analysis system (Tanon, China).
Statistical Analyses. All data are presented as mean values ± standard deviations. Comparisons were performed using Student's t-test with GraphPad Prism version 6.0. P < 0.05 was considered to indicate statistical significance. All experiments were repeated at least three times.
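As a minimal illustrative sketch of the kind of two-group comparison described above (the group names, cell counts and script are hypothetical and are not the study's data or code), such a Student's t-test could be run as follows:

import numpy as np
from scipy import stats

# Hypothetical TRAP-positive cell counts from three independent experiments per group;
# the numbers are purely illustrative.
control = np.array([118, 124, 131])    # M-CSF + RANKL only
exosomes = np.array([36, 42, 29])      # M-CSF + RANKL + PC-3 exosomes

t_stat, p_value = stats.ttest_ind(control, exosomes)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")   # P < 0.05 taken as significant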
Characteristics of PC-3 Exosomes.
To investigate how prostate cancer affects bone cell growth and causes bone metastases, exosomes were isolated from the prostate cancer cell line PC-3 by ultracentrifugation. Electron microscopy revealed that the vesicles were morphologically homogeneous and had a typical cup shape (Figure 1(a)). Nanoparticle tracking analysis revealed that most vesicles ranged from 70 to 120 nm in size; the distribution was around 100 nm and peaked at 85 nm (Figure 1(b)). Furthermore, flow cytometry showed that transmembrane proteins CD63 and CD81, which are specific surface markers of exosomes, were present in 69.6% and 84.2% of exosomes, respectively (Figure 1(c)).
PC-3-Derived Exosomes Inhibit Osteoclast Differentiation.
To explore the effect of exosomes on osteoclasts, osteoclast precursors were cocultured with PC-3 exosomes for 10 days. Despite stimulation with M-CSF and RANKL, the differentiation of osteoclast precursors was inhibited, as shown by TRAP staining. In addition, as the concentration of exosomes increased, the inhibition became more severe (Figure 2(a)). Nearly complete inhibition of osteoclastogenesis was observed at an exosome concentration of 50 ng/1000 cells. Moreover, we detected the levels of miR-214, which were significantly reduced as the concentration of PC-3-derived exosomes increased (Figure 2(b)). At the same time, mRNA and protein expression of several specific markers of mature osteoclasts, including CTSK, NFATc1, and TRAP, was significantly decreased in the exosome groups compared with that in cells treated with only M-CSF and RANKL [20,21] (Figures 2(c) and 2(d)). These results suggested that downregulation of miR-214 is linked to the inhibition of osteoclast differentiation.
MiR-214 Downregulation Inhibits Osteoclast Differentiation.
To investigate the potential effect of miR-214 on osteoclast differentiation, cells were transfected with miR-214 mimic or inhibitor or NC. TRAP staining indicated that a high level of miR-214 improved osteoclast differentiation, whereas a low level of miR-214 hampered differentiation ( Figure 3(a)). Expression of miR-214 in the mimic group was 300-fold higher than that in the NC group, whereas expression in the inhibitor group was almost one-third lower than in the NC group (Figure 3(b)). The results of qPCR and western blotting were consistent; downregulated miR-214 led to decreased expression of CTSK, NFATc1, and TRAP, whereas upregulated miR-214 increased expression of these specific genes (Figures 3(c) and 3(d)). Thus, miR-214 downregulation repressed osteoclast differentiation.
PC-3-Derived Exosomes Block the NF-κB Signaling Pathway through miR-214 Downregulation.
For further investigation of the underlying mechanism, we focused on the response of the NF-κB signaling pathway. First, we measured the protein expression of p-p65, p65, p-IKBA, and IKBA in osteoclasts cultured with various concentrations of PC-3-derived exosomes. The phosphorylation of p65 and IKBA was significantly repressed by PC-3-derived exosomes in a concentration-dependent pattern (Figure 4(a)). However, total p65 and IKBA levels were not remarkably altered. Then, we investigated whether miR-214 affected the NF-κB signaling pathway. The levels of p-p65, p65, p-IKBA, and IKBA were measured in the miR-214 mimic or inhibitor and NC groups, which revealed that levels of p-p65 and p-IKBA significantly increased in the miR-214 mimic group. In contrast, their levels significantly decreased in the miR-214 inhibitor group, compared with those in the NC group. Furthermore, miR-214 overexpression and inhibition had little effect on p65 and IKBA levels (Figure 4(b)). These results strongly suggested that PC-3-derived exosomes block the NF-κB signaling pathway through miR-214 downregulation.
Discussion
Our study revealed that exosomes derived from prostate cancer cell line PC-3 remarkably inhibited differentiation of osteoclasts. During this process, the expression of several marker genes of mature osteoclasts including CTSK, NFATc1, and ACP5 decreased, and miR-214 was downregulated. Furthermore, the NF-κB signaling pathway was blocked. Our results are consistent with the fact that metastasis of prostate cancer is mainly osteoblastic, in which bone formation is enhanced and bone resorption is weakened [22,23]. To further explore the mechanism underlying the interaction of PC-3 exosomes and osteoclasts, we focused on the potential role of miR-214. We found that miR-214 overexpression promoted osteoclast differentiation, whereas miR-214 downregulation repressed the differentiation. In addition, we found that a low level of miR-214 inhibited activation of the NF-κB signaling pathway, which suggests the importance of miR-214 in osteoclast differentiation. Therefore, miR-214 upregulation, promoting osteoclastogenesis, may resist osteoblastic metastasis of prostate cancer.
Previously, a study reported that exosomes derived from murine prostate cancer cell line TRAMP-C1 inhibited differentiation of murine osteoclasts [24]. Researchers also showed that exosomes derived from prostate cancer cells promoted osteogenic differentiation of human mesenchymal stem cells by delivering miR-940 [25]. These results indicated that prostate cancer cells induce bone metastasis by promoting osteoblast differentiation and repressing osteoclast differentiation, so as to enhance bone formation and inhibit bone resorption. It is worth noting that a study indicated that exosomes derived from lung adenocarcinoma cells promoted osteoclast differentiation [26]. Because lung cancer cells commonly induce osteolytic bone metastasis according to clinical reports, our results are reasonable and demonstrate the diverse functions of exosomes [25]. Furthermore, our work demonstrated that PC-3 exosomes inhibit osteoclast differentiation by blocking the NF-κB signaling pathway. NF-κB is a transcription factor required for osteoclast differentiation and growth [27]. It plays a crucial role in the early stage of osteoclast fusion through activating c-Fos and NFATc1 [28,29]. In the classical pathway, the phosphorylation of IKBA initiates p50/p65 dimers, which subsequently translocate to the nucleus and bind to DNA sequences, activating transcription [30]. Accordingly, cells release many proinflammatory cytokines to promote osteoclast formation [31].
In our study, PC-3 exosomes significantly decreased the level of miR-214 in osteoclasts. Further, miR-214 downregulation inhibited osteoclast differentiation through repressing the NF-κB signaling pathway, which is consistent with a previous study [17]. MiR-214 is an important regulator in bone homeostasis and bone-related diseases including osteoporosis, osteosarcoma, and bone metastases [32]. In addition to promoting osteoclastogenesis, miR-214 can inhibit osteoblast differentiation by targeting ATF4 [33]. Therefore, therapeutic miR-214 mimics may attenuate the progression of prostate cancer bone metastases. However, further investigation is necessary to clarify the mechanism of miR-214 regulation.
To our knowledge, this is the first study to investigate the inhibitory effects of PC-3 exosomes on osteoclast differentiation. Nevertheless, the study still had limitations. Firstly, we failed to identify a specific molecule (RNA/protein) in exosomes that may be a main contributor to inhibition of osteoclast differentiation. Secondly, our work was carried out only in vitro, and the effect of PC-3 exosomes on animal bone remodeling warrants further research.
Conclusions
In this study, we found that exosomes derived from prostate cancer cell line PC-3 remarkably inhibited differentiation of osteoclasts by downregulating miR-214 and repressing the NF-κB signaling pathway. Our findings suggest that miR-214 upregulation could become a potential therapeutic method to attenuate prostate cancer bone metastasis.
Data Availability
The data used to support the findings of this study are included within the article.
Stock Market Development and Economic Growth: The Case of West African Monetary Union
The stock market is an indicator of an economy's financial health. It indicates the mood of investors in a country. As such, stock market development is an important ingredient for growth. The stock exchange of the West African monetary union is fairly new compared to those of many countries. This paper examines the impact of stock market development on growth in the West African monetary union. A time series econometric investigation is conducted over the period 1995-2006. We analyze both the short run and long run relationship by constructing an error correction model (ECM). Two measures of stock market development, namely size and liquidity, are used. We define size as the share of market capitalization over GDP and liquidity as the volume of shares traded over GDP. We found that stock market development positively affects economic growth in the West African monetary union both in the short run and the long run.
Introduction
In line with the thinking of the new growth theorists, a well developed financial sector facilitates high but sustainable growth. The link between finance and growth has been controversially debated in the economic literature. Many researchers are of the view that there still exists a great dichotomy regarding the role of financial intermediaries in facilitating sustainable economic growth in the long term.
In this paper, we explore the relationship between stock market development and economic growth in the West African monetary union for the period 1995-2006. The West African monetary union has experienced sustained and consistent growth over the years despite being affected by the disadvantages of a small country. The stock market of the West African monetary union (BRVM) is fairly new, established in 1998. However, it is one of the best-performing stock markets in Africa and one of the seven stock markets on the continent with automated trading. We use two measures of stock market development, namely SIZE and LIQUIDITY. SIZE is denoted as market capitalization as a percentage of GDP. The assumption behind this measure is that overall market size is positively correlated with the ability to mobilize capital and diversify risk on an economy-wide basis. LIQUIDITY is calculated as the value of shares traded on the stock exchange divided by GDP. The total value traded ratio measures the organized trading of firm equity as a share of national output and therefore should positively reflect liquidity on an economy-wide basis. The total value traded ratio complements the market capitalization ratio: although a market may be large, there may be little trading.
The structure of this paper is as follows: Section 2 presents a brief overview of the literature, and Section 3 presents an overview of the West African stock market. The methodology and data measurement are described in Section 4. Section 5 depicts the empirical results, Section 6 presents the policy recommendations, and we conclude in Section 7.
A Brief Overview of the Literature
Theoretically, a growing literature argues that stock market development boosts economic growth. Greenwood and Smith (1997) show that large stock markets can decrease the cost of mobilizing savings, thus facilitating investment in the most productive technologies. Bencivenga et al. (1996) and Levine (1991) argue that stock market liquidity (the ability to trade equity easily) is crucial for growth. Although many profitable investments require a long run commitment of capital, savers do not like to relinquish control of their savings for long periods. Liquid equity markets ease this tension by providing an asset to savers that they can quickly and inexpensively sell. Simultaneously, firms have permanent access to capital raised through equity issues. Moreover, Kyle (1984) and Holmstrom and Tirole (1993) argue that liquid stock markets can increase incentives for investors to obtain information about firms and improve corporate governance. Finally, Obstfeld (1994) shows that international risk sharing through internationally integrated stock markets improves resource allocation and can accelerate the rate of growth. From the point of view of Greenwood and Jovanovic (1990) and King and Levine (1993), a new stock exchange can increase economic growth by aggregating information about firms' prospects, thereby directing capital to investments with higher returns. These effects of a stock market opening result in a measured increase in productivity. Stock exchanges exist for the purpose of trading ownership rights in firms, and a new stock exchange may increase productivity growth for this reason as well. According to North (1991), the creation of a stock exchange can increase economic growth by lowering the costs of exchanging ownership rights in firms, an important part of some institutional stories of economic growth. Furthermore, Bencivenga and Smith (1992) state that a new stock market can also increase economic growth by reducing holdings of liquid assets and increasing the growth rate of physical capital, at least in the long run. In the short run, however, the equilibrium response of the capital stock to a new stock exchange can be negative, because the opening of an exchange can increase households' wealth and raise their contemporaneous consumption enough to temporarily lower the growth rate of capital.
In principle, a well-developed stock market should increase saving and efficiently allocate capital to productive investments, which leads to an increase in the rate of economic growth. Stock markets contribute to the mobilisation of domestic savings by enhancing the set of financial instruments available to savers to diversify their portfolios. In doing so, they provide an important source of investment capital at relatively low cost (Dailami and Aktin, 1990). In a well-developed stock market, share ownership provides individuals with a relatively liquid means of sharing risk when investing in promising projects. Stock markets help investors to cope with liquidity risk by allowing those who are hit by a liquidity shock to sell their shares to other investors who do not suffer from a liquidity shock. The result is that capital is not prematurely removed from firms to meet short-term liquidity needs. Moreover, stock markets play a key role in allocating capital to the corporate sector, which will have a real effect on the economy on aggregate. Debt finance is likely to be unavailable in many countries, particularly in developing countries, where bank loans may be limited to a selected group of companies and individual investors. This limitation can also reflect constraints in credit markets (Mirakhor and Villanueva, 1990) arising from the possibility that a bank's return from lending to a specific group of borrowers does not increase as the interest rate it charges to borrowers rises (Stiglitz and Weiss, 1981 and Cho, 1986).
The arguments for stock market development were supported by various empirical studies, such as Levine and Zervos (1993); Atje and Jovanovic (1993); Levine and Zervos (1998). Although these studies emphasise the importance of stock market development in the growth process, they do not simultaneously examine banking sector development, stock market development, and economic growth in a unified framework. On the other hand, Levine and Zervos (1993); Atje and Jovanovic (1993); Levine and Zervos (1998); Rousseau and Wachtel (2000) and Beck and Levine (2003) show that stock market development is strongly correlated with growth rates of real GDP per capita. More importantly, they found that stock market liquidity and banking development both predict the future growth rate of the economy when they both enter the growth regression. They concluded that stock markets provide different services from those provided by banks. This is also consistent with the work by Levine and Zervos (1995) and the argument by Demirguc-Kunt (1994) that stock markets can give a big boost to economic development.
Stock exchanges are expected to accelerate economic growth by increasing the liquidity of financial assets, making global risk diversification easier for investors, promoting wiser investment decisions by saving-surplus units based on available information, forcing corporate managers to work harder for shareholders' interests, and channeling more savings to corporations. Levine (1991) and Bencivenga, Smith and Starr (1996) emphasized the positive role of the liquidity provided by stock exchanges in the size of new real asset investments through common stock financing. Investors are more easily persuaded to invest in common stocks when there is little doubt about their marketability on stock exchanges. This, in turn, motivates corporations to go public when they need more finance to invest in capital goods. Another important contribution of stock exchanges to economic growth is through the global risk diversification opportunities they offer. Saint-Paul (1992); Deveraux and Smith (1994) and Obstfeld (1994) argue quite plausibly that opportunities for risk reduction through global diversification make high risk, high return domestic and international projects viable and, consequently, allocate savings between investment opportunities more efficiently. Stock prices determined in exchanges, and other publicly available information, help investors make better investment decisions. Better investment decisions by investors mean better allocation of funds among corporations and, as a result, a higher rate of economic growth. In efficient capital markets prices already reflect all available information, and this reduces the need for expensive and painstaking efforts to obtain additional information (Stiglitz, 1994). From the point of view of Schumpeter (1912), technological innovation is the force underlying long-run economic growth, and the cause of innovation is the financial sector's ability to extend credit to the entrepreneur.
The study by Levine and Zervos (1998) finds a positive and significant correlation between stock market development and long run growth. Greenwood and Smith (1996) show that stock markets lower the cost of mobilizing savings, facilitating investments into the most productive technologies. Obstfeld (1994) shows that international risk sharing through internationally integrated stock markets improves resource allocation and accelerates growth. Bencivenga et al. (1996) and Levine (1991) have argued that stock market liquidity, the ability to trade equity easily, plays a key role in economic growth; although profitable investments require long run commitment of capital, savers prefer not to relinquish control of their savings for long periods. Liquid equity markets ease this tension by providing assets to savers that are easily liquidated at any time.
Yet Kyle (1984) argues that an investor can profit by researching a firm before the information becomes widely available and prices change. Thus investors will be more likely to research and monitor firms. To the extent that larger, more liquid stock markets increase incentives to research firms, the improved information will improve resource allocation and accelerate economic growth. The role of stock markets in improving informational asymmetries has been questioned by Stiglitz (1985), who argues that stock markets reveal information through price changes rapidly, creating a free-rider problem that reduces investor incentives to conduct costly search. The contribution of liquidity itself to long-term growth has been questioned. Demirguc-Kunt and Levine (1996) point out that increased liquidity may deter growth via three channels. First, it may reduce saving rates through income and substitution effects; second, by reducing the uncertainty associated with investments, greater stock market liquidity may reduce saving rates because of the ambiguous effects of uncertainty on savings; third, stock market liquidity encourages investor myopia, adversely affecting corporate governance and thereby reducing growth.
One important study mentioned earlier is that by Levine and Zervos (1998), who were among the first to ask whether stock markets are merely burgeoning casinos or a key to economic growth and to examine this issue empirically, finding a positive and significant correlation between stock market development and long run growth. However, Levine and Zervos's use of a cross-sectional approach limits the potential robustness of their findings with respect to country-specific effects and time-related effects. The legal liberalization of the stock market increased the importance of the stock market. It not only links the importance of the stock market to economic growth over time, but also interprets it in relation to the universal banking system. In a frictionless Arrow-Debreu world there is no room for financial intermediation. Explaining the role played by stock markets or banks requires building frictions such as informational or transaction costs into the theory. Different frictions motivate different types of financial contracts, markets and institutions.
An Overview of the West African Stock Market
The establishment of an organized financial market was provided for in the treaty of November 14, 1973 forming the West African Monetary Union (WAMU), initially made up of seven countries (Benin, Burkina-Faso, Ivory Coast, Mali, Niger, Senegal, and Togo). The Union recently expanded with the addition of an eighth member (Guinea Bissau). In 1991, monetary authorities began considering setting up a single, efficient financial market for all WAMU countries. Since economies in the West African Monetary Zone were opening up more and more, economic regulation mechanisms, particularly those used to indirectly manage currency and generate savings, had to be adopted. Furthermore, creating a common financial market for all countries in the WAMU sub-region seemed to be a good way to strengthen regional integration and develop trade among the member states. From then on, besides the various areas of integration in the zone (insurance, social assistance and commercial law), the existence of a central bank (BCEAO), a common banking commission and now a financial market, including a common securities exchange, seemed the best option, without minimizing the symbolic aspect it gave to the project and the economies of scale. From that date on, expertise from several sources, particularly France, the US, Canada and the World Bank, was used to conduct the project's design phase. The Union's Council of Ministers decided in December 1993 to create a Regional Financial Exchange (BRVM: Bourse Régionale des Valeurs Mobilières) and mandated the Central Bank of West African States (BCEAO) to conduct the project. The stock exchange creates a market place where companies can raise capital, often referred to as the primary market, in which shares are issued for the first time to the public; shareholders can then trade shares of listed companies in the secondary market, where existing shares are bought and sold.
Market Indices
Market movements and trends in the West African regional stock market are depicted by two market indices namely the BRVM Composite, and BRVM 10.This information is made available on the BRVM's website in order to allow even foreign investors to have information on a real time basis.
-The BRVM COMPOSITE consists of all stocks admitted to trading.
-The BRVM 10 is composed of ten companies most active in the market.
The formulation and selection criteria of the BRVM COMPOSITE and the BRVM 10 were inspired by the main stock market indices in the world, especially the IFCG index of the International Finance Corporation, an affiliate of the World Bank.
The formula takes into account market capitalization, the volume of transactions per session, and the frequency of transactions. In addition, only shares are used for the calculation of the indices.
Model
We consider two measures of stock market development, namely size and liquidity: SIZE denotes market capitalization as a % of GDP at constant prices, whereas LIQUIDITY denotes the total value of shares traded as a % of GDP at constant prices. We build our model based on the following augmented production function.
Y t = ƒ(FDI t , HUMAN t , SMD t ) (1)
Where Y t denotes real GDP per capita, FDI denotes foreign direct investment, HUMAN denotes human capital, and SMD denotes stock market development. The econometric model can be written as a reduced-form logarithmic equation for SIZE and LIQUIDITY. Over the years, the union has experienced sustained and consistent growth. Many factors have contributed to this, namely successful trade liberalization, political stability, and institutional factors, among others. However, it can be argued that two main factors that have helped in the attainment of sustained growth are FDI and human capital.
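Since the reduced-form equation itself is not reproduced above, the following is a minimal sketch, under the assumption of a standard log-linear specification, of how the long-run equation could be estimated; the variable names and the data file are hypothetical placeholders and this is not the authors' code.

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("wamu_annual.csv")   # hypothetical annual series for 1995-2006

y = np.log(df["RGDPPC"])                                     # real GDP per capita
X = sm.add_constant(np.log(df[["SIZE", "FDI", "HUMAN"]]))    # regressors in logs

long_run = sm.OLS(y, X).fit()                                # long-run (levels) equation
print(long_run.summary())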
FDI is increasingly being recognized as a major source of economic development. The general belief is that FDI facilitates the transfer of technology, organizational and managerial practices, and skills, as well as access to international markets. Investors generally tend to adopt a two-stage process when evaluating countries as investment locations.
The first phase involves screening potential investment locations based on economic fundamentals. In the second phase, those countries which pass the first phase are evaluated based on the incentives they offer. Thus, as a factor in attracting FDI, incentives are secondary to the more fundamental determinants such as market size, access to raw materials, and availability of skilled labour.
Foreign direct investment plays a pivotal role in the development of WAMU's economy and is an integral part of the global economic system. The advantages of FDI can be enjoyed to the full extent through various national policies and international investment architecture. Both factors contribute enormously to FDI inflows into the West African monetary union, which stimulate the economic development of the region.
Foreign direct investment in the West African monetary union is allowed through four basic routes, namely financial collaborations, technical collaborations and joint ventures, capital markets via Euro issues, and private placements or preferential allotments.
FDI inflow helps the West African monetary union to develop a transparent, broad, and effective policy environment for investment issues, as well as build human and institutional capacities to execute the same.
Attracting foreign direct investment has become an integral part of the economic development strategy of the West African monetary union. FDI brings domestic capital, higher production levels, and employment opportunities to developing countries, which is a major step towards economic growth. FDI has been a booming factor that has bolstered the economic life of the West African monetary union, but on the other hand it is also blamed for crowding out domestic investment. FDI is also claimed to have lowered some regulatory standards in terms of investment patterns. The effects of FDI are by and large transformative, and the incorporation of a range of well-composed and relevant policies will further boost the returns from foreign direct investment. Economic growth is one of the biggest advantages of FDI enjoyed by the West African monetary union, which has benefited enormously from foreign direct investment. A remarkable inflow of FDI into various industrial units in the West African monetary union has boosted the economic life of the region. Over the years, successive governments have put considerable effort into attracting FDI.
It is widely recognized that human capital is an important determinant of growth. Successive governments of the West African monetary union have invested heavily in human capital, namely education. The literacy rate of the West African monetary union is one of the highest in Africa.
Data
Data were obtained from different sources. FDI (expressed as a % of GDP) was obtained from the World Development Indicators (WDI); the data on the stock market development measures, namely SIZE and LIQUIDITY, were obtained from various bulletins of the West African regional stock market (BRVM); and HUMAN (proxied by the secondary enrollment ratio) was obtained from the Central Statistical Office of the West African monetary union (UMOA).
Estimation Result
The Long Run Equation. Table 1 and Table 2 report results for the long run equation of model 2. The results indicate that all the independent variables have the expected positive sign and are highly significant.
Both measures of stock market development demonstrate the importance of stock market development to growth. A 10% increase in SIZE leads to a 1.75% increase in RGDPPC, whereas a 10% increase in LIQUIDITY leads to a 6.33% increase in RGDPPC. These results suggest that development of the stock market is an important ingredient for economic growth. However, LIQUIDITY has a greater impact on growth than SIZE.
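For readers unfamiliar with reading elasticities off a log-log regression, the quoted percentages correspond to coefficient estimates of roughly the following magnitude (a back-of-envelope reading of the numbers above, not figures taken directly from the tables):

\beta_{SIZE} \approx \frac{1.75\%}{10\%} = 0.175, \qquad \beta_{LIQUIDITY} \approx \frac{6.33\%}{10\%} = 0.633,

where each coefficient is the elasticity of real GDP per capita with respect to the corresponding stock market development measure.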
We check for the presence of multicollinearity using the variance inflation factor (VIF). As a rule of thumb, a variable whose VIF value is greater than 10 may merit further investigation when it comes to multicollinearity. Equation (1) produces a VIF of 4.88 and equation (2) a VIF of 3.37.
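A minimal sketch of such a VIF check, reusing the hypothetical data frame from the earlier estimation sketch (again, not the authors' code):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(np.log(df[["SIZE", "FDI", "HUMAN"]]))   # df as in the earlier sketch
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)   # values well below 10 would suggest multicollinearity is not a serious concern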
Table 3 and Table 4 depict results from the short run equations. The results mirror the long run ones. The adjusted R 2 values of 0.7635 and 0.7954 indicate that the model fits the data reasonably well. The lagged error correction terms have the required negative sign and are significant at 1%. This reinforces the finding of a long run relationship among the variables.
The results in Table 5-8 and Table 5-9 indicate that the immediate effect of SIZE as well as LIQUIDITY is positive and significant. In fact, the immediate impact of all other variables, namely HUMAN and FDI, is positive and significant. The size of the coefficients of the error correction terms, namely -0.755 and -0.635 for equations (1) and (2), suggests a high speed of adjustment from the short run deviation to the long run equilibrium in RGDPPC.
It indicates that 75% (for equation 1) and 63% (equation 2) of the deviation is corrected every year.
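A quick back-of-envelope implication of these adjustment speeds, assuming a constant geometric rate of correction (this half-life calculation is illustrative and is not reported in the paper):

t_{1/2} = \frac{\ln(0.5)}{\ln(1 - 0.755)} \approx 0.49 \text{ years}, \qquad
t_{1/2} = \frac{\ln(0.5)}{\ln(1 - 0.635)} \approx 0.69 \text{ years},

so roughly half of any deviation from the long run equilibrium would be eliminated within about six to eight months.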
Policy Recommendations
The findings from this study raise some policy issues and recommendations, which will reinforce the link between the stock market and economic growth in West African monetary union.
Given that the stock market operates in a macroeconomic environment, that environment must be an enabling one if the market is to realize its full potential.
The demand for the services of the stock market is a derived demand. Given the existence of a positive relationship between stock market development and economic growth, it is pertinent to recommend sustained efforts to stimulate productivity in both the public and private sectors.
The determination of stock prices should be deregulated. Market forces should be allowed to operate without any hindrance. Interference in security pricing is inimical to the growth of the market.
The stock market is known as a relatively cheap source of funds when compared to the money market and other sources. The cost of raising funds in the West African monetary union market is, however, regarded as very high. This cost should be reviewed downward so as to enhance the market's competitiveness and improve its attractiveness as a major source of raising funds.
Given the present political dispensation, all tiers of government should be encouraged to fund their realistic developmental programmes through the stock market. This will help free up resources that can be used in other spheres of the economy.
Conclusion
The model analyzes the relationship between stock market development and economic growth in the West African monetary union over the period 1995 to 2006. Using two measures of stock market development, namely size and liquidity, we found that stock market development is an important ingredient for growth in the West African monetary union, since the stock market gives a general idea of an economy's health. For the econometric methodology we adopt the simple two-step procedure of Engle and Granger. Given the small size of our sample and the number of parameters to be estimated, the Engle-Granger approach is more attractive than the Johansen approach, which would require the estimation of a system of 3 equations and hence a loss of degrees of freedom. The positive relationship between stock market development and economic growth is replicated in both the long run and short run equations.
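For completeness, a minimal sketch of the Engle-Granger two-step procedure described above, continuing from the earlier hypothetical long-run estimation sketch (the lag structure and variable names are assumptions, not the authors' specification):

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Step 1: test the residuals of the long-run (levels) regression for stationarity;
# stationary residuals are evidence of cointegration.
resid = long_run.resid                      # long_run and df come from the earlier sketch
adf_stat, p_value, *_ = adfuller(resid)
print("ADF on long-run residuals:", adf_stat, p_value)

# Step 2: short-run ECM in first differences, with the lagged residual as the
# error-correction term (ECT); its coefficient should be negative and significant.
dy = np.log(df["RGDPPC"]).diff().dropna()
dX = np.log(df[["SIZE", "FDI", "HUMAN"]]).diff().dropna()
ect = resid.shift(1).loc[dy.index]
ecm = sm.OLS(dy, sm.add_constant(dX.assign(ECT=ect))).fit()
print(ecm.params["ECT"])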
Our two control variables have the expected positive sign and are highly significant. Both FDI and HUMAN are crucial determinants of growth in the West African monetary union.
The emerging literature on FDI stipulates that FDI's positive impact on growth depends on local conditions and absorptive capacities. Essential among these capacities is financial development. This model provides support for this hypothesis in the context of the West African monetary union.
Like that of FDI, the importance of human capital to economic growth is not in doubt. Today's workplace, with its focus on managerial skills and technological innovation, imposes higher educational demands on the labor force of developing nations, including the West African monetary union. Lower labor cost is no longer sufficient to attract investment; in its place, the "human capital of the local labor force" is gaining momentum as labor cost differentials or proximity to raw materials become less important in decisions to locate technology-intensive facilities. Like other capital, human capital can be increased through investment in, and commitment to, human factors such as education, training and healthcare.
Strong human capital attracts and encourages growth, not the other way around. An educated population also leaves an enduring effect economically, with a larger tax base, and socially, through increased political involvement. Although easier said than done, an investment in human capital should be a part of any economic development policy. The availability and the prevalence of a nation's human capital determine the rate of growth of its economy and its integration in world markets.
Crystal Structure of DNA Cytidine Deaminase APOBEC3G Catalytic Deamination Domain Suggests a Binding Mode of Full-length Enzyme to Single-stranded DNA*
Background: The mechanism by which the DNA cytidine deaminase APOBEC3G (A3G) interacts with single-stranded DNA (ssDNA) is not well characterized. Results: The crystal structure of a head-to-tail dimer of the A3G catalytic deamination domain (A3G-CD2) was obtained. Conclusion: The dimer structure of A3G-CD2 suggests a binding mode of full-length A3G to ssDNA. Significance: The dimer structure of A3G-CD2 may represent a structural model of full-length A3G. APOBEC3G (A3G) is a DNA cytidine deaminase (CD) that demonstrates antiviral activity against human immunodeficiency virus 1 (HIV-1) and other pathogenic viruses. It has an inactive N-terminal CD1 virus infectivity factor (Vif) protein binding domain (A3G-CD1) and a catalytically active C-terminal CD2 deamination domain (A3G-CD2). Although many studies on the structure of A3G-CD2 and the enzymatic properties of full-length A3G have been reported, the mechanism by which A3G interacts with HIV-1 single-stranded DNA (ssDNA) is still not well characterized. Here, we report a crystal structure of a novel A3G-CD2 head-to-tail dimer (in which the N terminus of monomer H (head) interacts with the C terminus of monomer T (tail)), in which a continuous DNA binding groove was observed. By constructing the A3G-CD1 structural model, we found that its overall fold was almost identical to that of A3G-CD2. We mutated the residues located in or along the groove in monomer H and the residues in A3G-CD1 that correspond to those seated in or along the groove in monomer T. Then, by performing enzymatic assays, we confirmed the reported key elements and residues in A3G necessary for catalytic deamination. Moreover, we identified more than 10 residues in A3G essential to DNA binding and the deamination reaction. Therefore, this dimer structure may represent a structural model of full-length A3G, which indicates a possible binding mode of A3G to HIV-1 ssDNA.
Human A3G exists as a monomer, dimer, and tetramer, depending on the DNA substrate and salt concentration. It possesses two homologous deaminase domains, an inactive N-terminal CD1 domain (i.e. A3G-CD1) required for Vif, DNA, and RNA binding and an active C-terminal CD2 domain (i.e. A3G-CD2) required for catalysis and motif specificity (28-30). The CD1 domain is also suggested to be required for the incorporation of A3G into virions (29). A3G deaminates cytidine processively 3′ → 5′ on ssDNA (31,32). The processive deamination reactions have been shown to proceed in a non-random manner (31,32). To date, the three-dimensional structure of the free A3G-CD2 domain has been determined by NMR (33-35) and x-ray crystallography (36-38). The three-dimensional structures of free APOBEC2 (39) and other APOBEC3 sub-family members, such as A3A (40,41), A3C (42), and A3F (43,44), have also been reported. The structural basis for Vif hijacking CBF-β and the CUL5 E3 ligase was recently revealed as well (45). However, the mechanism by which A3G-CD2 or full-length A3G and other members of its family interact with ssDNA is still not well understood.
To address how A3G-CD2 interacts with HIV-1 ssDNA, we crystallized A3G-CD2 in the presence of ssDNA containing one target motif sequence, 5′-CCC-3′. A novel head-to-tail dimer structure of A3G-CD2 was obtained. On its surface, a continuous groove for ssDNA binding was found. Our structural analysis and biochemical assays suggest that this dimer structure may represent a structural model of full-length A3G. From this model, we identified more than 10 new residues in both A3G-CD1 and A3G-CD2 critical for the catalytic deamination reaction.
Expression and Purification of A3G-CD2 and Its Variants-
The DNA corresponding to the gene of wild-type (WT) A3G-CD2 (residues 193-384) or its variant was cloned into a pET28a vector containing an N-terminal His tag and a thrombin cleavage site. A3G-CD2 and its variants were expressed in Rosetta (DE3) pLysS Escherichia coli cells. Cell cultures were grown to an A600 of about 0.8 and induced with a final concentration of 0.5 mM isopropyl 1-thio-β-D-galactopyranoside for 20 h at 18°C. Cells were resuspended in nickel-binding buffer (20 mM Tris-HCl, pH 7.5, 500 mM NaCl, 10 μM ZnCl2, and 0.5 mM dithiothreitol (DTT)) with protease inhibitor and DNase I and lysed at 15,000 p.s.i. using a hydraulic cell disruption system (Constant Systems JINBO benchtop; Guangzhou Juneng Biology and Technology Co., Ltd., Guangzhou, China). The lysate was centrifuged at 12,000 rpm and 4°C for 55 min to remove cellular debris prior to loading onto a nickel-nitrilotriacetic acid resin (GE Healthcare). The protein was washed with nickel-binding buffer containing 20 mM imidazole and then eluted by a stepwise gradient of nickel-binding buffer containing 40, 100, 250, and 500 mM imidazole. Fractions containing A3G-CD2 or its variants were concentrated and purified on a Superdex 75 16/600 GL column (GE Healthcare) previously equilibrated with buffer B (50 mM HEPES, pH 7.5, 50 mM NaCl, 50 μM ZnCl2, 5 mM DTT).
X-ray Crystallization Screening and Data Collection-Purified A3G-CD2 and its D370A variant were quantified by A280 and then mixed with ssDNA (commercially synthesized at HPLC grade by Shanghai Sangon Biotech Co., Ltd. (Shanghai, China), with the sequence 5′-TTAACCCTTA-3′) at a molar ratio of 1:1.2 (protein/ssDNA) and concentrated to final concentrations of 10 and 20 mg/ml for crystallization. At 18°C, crystals of the WT A3G-CD2 protein were grown by sitting-drop vapor diffusion against a reservoir containing 0.04 M citric acid, 0.06 M Bis-Tris propane, pH 6.4, 20% (w/v) polyethylene glycol 3350, whereas crystals of D370A were grown by sitting-drop vapor diffusion against a reservoir containing 0.1 M sodium citrate tribasic dihydrate, pH 5.6, 20% (v/v) 2-propanol, 20% (w/v) polyethylene glycol 4000. The crystals were flash-frozen in Paratone-N. X-ray diffraction data were collected at beamline BL17U of the Shanghai Synchrotron Radiation Facility using a MAR CCD MX-225 detector. The wavelength of the radiation was 0.9792 Å, and the distance between the crystal and the detector was 300 mm. The exposure time for each frame was 1 s with a 1° oscillation, and 360 frames were collected. The data were indexed, integrated, and scaled using the HKL-2000 program suite (46).
Structural Determination-The A3G-CD2 and D370A structures were determined by molecular replacement, using the program PHENIX AutoMR (47) and the structure of the A3G-CD2 2K3A variant (PDB code 3IR2) as the search model (37). Iterative rounds of model rebuilding and simulated annealing torsion angle refinement were performed using the programs Coot (48) and PHENIX Refine (47). Identification of the proper sequence registry was confirmed by the location of the catalytic zinc site and the presence of bulky aromatic residues. Ramachandran plot analysis revealed that 93.7 and 6.7% of the residues of the WT A3G-CD2 protein and 91.8 and 8.2% of the residues of the A3G-CD2 D370A variant were in the most favored and allowed regions, respectively. The final model of the WT protein contains residues 194-381 in monomer H and residues 196-246, 253-316, and 321-381 in monomer T. Weak electron density was observed for residues 246-253 and 316-321 in monomer T, whereas the final model of the D370A variant contains residues 193-381.
Real-time Studies of A3G-CD2-catalyzed Deamination by NMR-A series of one-dimensional 1H NMR spectra of the reported HIV-1 ssDNA with the sequence 5′-ATTCCCAATT-3′ (34) were acquired as a function of time at 20°C in NMR buffer (50 mM Na2HPO4, 50 mM NaCl, pH 7.5, 50 μM ZnCl2, 2 mM DTT), after adding concentrated A3G-CD2 solution. To accurately assign uridine NMR signals, three ssDNAs with the sequences 5′-ATTCCUAATT-3′, 5′-ATTCUCAATT-3′, and 5′-ATTCUUAATT-3′ were used as controls. Concentrations of A3G-CD2 and its variants were fixed at 9.4 μM. The intensity of the 1H NMR signal belonging to U6 was used for quantification. Real-time monitoring of the A3G-CD2-catalyzed deamination reaction by NMR was performed to extract initial rates (<5% dC → dU conversion) for a series of substrate concentrations (37.5, 56.3, 93.8, 117.2, and 140.6 μM). Km values were obtained using the Michaelis-Menten module of the software Prism 5 (GraphPad Inc.).
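For illustration only, the initial-rate analysis described above can be reproduced with a standard nonlinear least-squares fit to the Michaelis-Menten equation; the sketch below uses Python/SciPy rather than Prism, and the rate values are placeholders, since only the substrate concentrations are given in the text.

```python
# Hypothetical sketch of the Michaelis-Menten fit used to extract Km from
# initial deamination rates. Substrate concentrations follow the text;
# the rate values are placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

substrate_uM = np.array([37.5, 56.3, 93.8, 117.2, 140.6])      # [S], micromolar
initial_rate = np.array([0.004, 0.005, 0.007, 0.008, 0.009])   # assumed rates, min^-1

popt, pcov = curve_fit(michaelis_menten, substrate_uM, initial_rate, p0=(0.01, 50.0))
vmax_fit, km_fit = popt
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties from the fit covariance
print(f"Vmax = {vmax_fit:.4f} ± {perr[0]:.4f} min^-1; Km = {km_fit:.1f} ± {perr[1]:.1f} µM")
```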
E. coli-based Deaminase Activity Assays-The intrinsic DNA cytidine deaminase activity of full-length A3G and its variants was measured by expressing these proteins in ung-deficient E. coli BW310 and quantifying the frequency of RifR-conferring rpoB mutations (33,49), as described in the previous report (35). Many single-base mutations in rpoB lead to active-site amino acid replacements that confer RifR. In each case, five single colonies were grown at 37°C in LB medium containing 100 μg/ml ampicillin and induced overnight with 1 mM isopropyl 1-thio-β-D-galactopyranoside at 18°C. Then appropriate volumes of cells were spread on plates containing 100 μg/ml rifampicin to select for RifR mutants and on plates containing 100 μg/ml ampicillin to determine the number of viable cells. Colonies were allowed to form overnight at 37°C and then counted manually.
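A minimal, hypothetical calculation of this assay's readout is sketched below: the RifR mutation frequency is simply the ratio of rifampicin-resistant colonies to viable cells, corrected for any dilution applied before plating. The colony counts and dilution factor are assumptions for illustration only.

```python
# Hypothetical numbers illustrating the Rif^R mutation-frequency readout.
rifr_colonies = 42        # colonies counted on rifampicin plates (assumed)
viable_colonies = 180     # colonies counted on ampicillin plates (assumed)
dilution_factor = 1e6     # dilution applied before plating for viable counts (assumed)

viable_cells = viable_colonies * dilution_factor
mutation_frequency = rifr_colonies / viable_cells
print(f"Rif^R mutation frequency: {mutation_frequency:.2e} per viable cell")
```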
E. coli Immunoblots-The full-length A3G and its variant constructs were expressed in E. coli strain BW310. Proteins were generated by overnight expression at 18°C in LB medium containing 100 μg/ml ampicillin. To induce expression, cells were diluted 1:10 in LB medium containing 100 μg/ml ampicillin and 1 mM isopropyl 1-thio-β-D-galactopyranoside and grown for 1 h at 37°C. Cells were pelleted and resuspended in SDS gel loading buffer (50 mM Tris-Cl, pH 6.8, 100 mM β-mercaptoethanol, 2% SDS, glycerol (v/v)). Lysates were heated at 95°C for 5 min and fractionated by SDS-PAGE. Proteins were transferred to a polyvinylidene difluoride (PVDF) membrane and probed with a rabbit anti-A3G polyclonal serum. The primary antibody was detected by incubation with donkey peroxidase-conjugated anti-rabbit IgG (Shanghai Sangon Biotech Co. Ltd.), followed by chemiluminescent imaging.
RESULTS
Overall Fold of Novel Head-to-tail A3G-CD2 Dimer Crystal Structure-In the presence of ssDNA during crystallization, two monomers of the human A3G-CD2 domain (residues 193-384) occupy one asymmetric unit. The final crystal structure was determined at 1.8 Å resolution and solved by molecular replacement, using the structure of the A3G-CD2 2K3A variant (residues 191-384; PDB code 3IR2) (37) as a model. The final refinement statistics are summarized in Table 1. Different from the previously reported tail-to-tail or head-to-head conformation (PDB code 3IR2) (37), the crystal structure demonstrates a new head-to-tail dimer conformation (to simplify discussion, the two monomers are referred to here as monomer H (head) and monomer T (tail), respectively), which means that the N terminus of monomer H interacts with the C terminus of monomer T, as shown in Fig. 1. The two monomers are identical to each other with a root mean square deviation value of 0.19 Å for backbone Cα atoms in the secondary structural regions. They contain a core sandwich-like α-β-α fold, consistent with the reported cytidine deaminases (33-36, 39-41, 43), in which the monomer structure has five β strands encircled by six α helices on both sides (Fig. 1). The catalytic zinc ion in each monomer is coordinated directly by the side chains of residues His-257, Cys-288, and Cys-291 and indirectly by the catalytic center Glu-259 via a water molecule. The secondary structural elements are numbered after the x-ray crystal structures of WT A3G-CD2 (residues 197-380, PDB codes 3E1U and 3IQS) (36) and of its 2K3A variant (PDB code 3IR2) (37). Different from structures 3E1U and 3IQS (refined from 3E1U) but similar to the reported structures of A3G-CD2 (PDB codes 3IR2, 2KEM, 2JYW, and 2KBO) (33-35, 37), the second β strand in the current dimer structure is discontinuous. Loop 3 (residues 246-253) and loop 7 (residues 316-322) are missing in monomer T, which is also distinct from the head-to-head or tail-to-tail dimer conformation (PDB code 3IR2). Thus, on the whole, in terms of the overall fold, this structure is similar to the previously reported structures, but many key differences were still observed.
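As an aside, backbone Cα comparisons such as the 0.19 Å value quoted here follow the usual superposition-then-RMSD procedure. A self-contained sketch using the Kabsch algorithm is shown below with randomly generated placeholder coordinates; real coordinates would be parsed from the deposited PDB files.

```python
# Illustrative Kabsch superposition + RMSD for two sets of Calpha coordinates.
# Coordinates here are synthetic placeholders, not atoms from the actual structures.
import numpy as np

def kabsch_rmsd(p, q):
    """Optimally superpose q onto p (both N x 3 arrays) and return the RMSD."""
    p_c = p - p.mean(axis=0)
    q_c = q - q.mean(axis=0)
    h = q_c.T @ p_c                                  # 3x3 covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(u @ vt))               # correct for improper rotations
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    q_aligned = q_c @ rot
    return float(np.sqrt(((p_c - q_aligned) ** 2).sum() / len(p)))

rng = np.random.default_rng(0)
ca_set_a = rng.random((50, 3)) * 20.0                          # placeholder Calpha positions (Å)
ca_set_b = ca_set_a + rng.normal(scale=0.2, size=(50, 3))      # slightly perturbed copy
print(f"Calpha RMSD after superposition: {kabsch_rmsd(ca_set_a, ca_set_b):.2f} Å")
```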
Differences between the Crystal Structures-The reported crystal structures of A3G-CD2 (residues 197-380) with PDB codes 3E1U and 3IQS have a continuous β2 strand (36), significantly differing from the discontinuous β2/β2′ strands in three NMR structures (PDB codes 2JYW, 2KBO, and 2KEM) (33-35), the x-ray crystallographic tail-to-tail or head-to-head dimer structure (3IR2) (37), and the current dimer structure. The differing regions include β1-β2, β2′-α2, and α2-β3, which are due to ambiguity in the electron density. The bulge between the β2-β2′ strands in the NMR structures and 3IR2 is obviously an intrinsic feature of the A3G-CD2 structure rather than an experimental artifact (37). Thus, for correct comparison, monomer H in the current A3G-CD2 dimer was overlapped with one monomer of the tail-to-tail A3G-CD2 2K3A variant dimer (Fig. 1), which produces root mean square deviation values of 1.64 Å for all of the backbone Cα atoms in the region of residues 193 to 384 and of 0.30 Å for all of the backbone Cα atoms in the secondary structure regions. This observation indicates that the monomer conformation of the current A3G-CD2 dimer is almost identical to that of 3IR2. The main differences are located in loop 1 (residues 206-215), loop 3 (residues 245-256), and loop 7 (residues 315-320), which are all involved in ssDNA binding and stabilize the active deamination center. The orientations of the side chains of the residues in the sequence 210PWVR213 in loop 1; of the residues His-248, His-249, Phe-252, and Glu-254 in loop 3; and of the residues in the sequence 315YDDQ318 in loop 7 are apparently distinct from those in PDB entry 3IR2 (Fig. 2, A-C). Among these residues, Trp-211 (37), Arg-213 (33,34,36,37), Tyr-315 (36), and Asp-316 and -317 (36) have been confirmed to be crucial for the deaminase activities. Moreover, it has been suggested that the sequence 315YDDQ318 in loop 7 specifically recognizes the second cytosine in the target motif sequence of ssDNA (5′-CCC-3′), owing to its polarity, whereas the sequence 307YYFW310 in loop 7 of A3F specifically identifies G or T in the ssDNA sequence 5′-(G/T)C-3′ (36,43). Analysis of the surfaces of 3IR2 and the current structure indicates that the conformational changes in loops 1, 3, and 7 result in a larger ssDNA-binding groove in the monomer of 3IR2 than in the current dimer structure (Fig. 2, D and E), thus probably enhancing DNA binding. Moreover, the positively charged side chain of Arg-213 points toward the ssDNA binding groove in the 3IR2 structure, strengthening DNA binding, but it deviates from the ssDNA binding groove in the current structure. These observations may account for the fact that the A3G-CD2 2K3A variant has deaminase activities ~2.7-fold higher than its WT protein (33). Intermolecular Interfaces in Head-to-tail A3G-CD2 Dimer-The most obvious difference between the A3G-CD2 head-to-tail dimer and the tail-to-tail dimer (or head-to-head dimer) (PDB code 3IR2) is the intermolecular interface. In monomer H of the current head-to-tail dimer, helix α2 and loop 3 form an L-shaped hook, which stabilizes the dimer conformation by interacting with helix α6′ and loop 1′ in monomer T. Thus, the current head-to-tail dimer contains three main interfaces, which are between loop 3 and loop 1′, between helix α2 and helix α6′, and between loop 3 and helix α6′, with surface areas of 223, 374, and 317 Å2, respectively (Fig. 3A).
The total surface area is 1032 Å2, much larger than those in the tail-to-tail dimer conformation (901 Å2) and in the head-to-head dimer conformation (604 Å2), respectively. This indicates that the head-to-tail dimer conformation might be more stable than those of the tail-to-tail dimer and head-to-head dimer.
In the head-to-tail dimer conformation, the helix α2 in monomer H is almost perpendicular to the helix α6′ in monomer T, making up the largest interface. Residues Gly-373′, Ala-377′, and Gln-380′ in the C terminus of helix α6′ in monomer T form hydrogen bond networks with residues Leu-263 and Asp-264 in helix α2 in monomer H through several water molecules (Fig. 3, B and C). The side chain of Arg-376′ in monomer T has hydrophobic interactions with the side chain of Phe-268 in monomer H (Fig. 3D). To estimate the functional significance of this interface, four variants (D264A, F268A, R376A, and Q380A) were designed to disrupt the observed interactions. An NMR enzymatic assay was performed to measure the catalytic efficiency of deamination at base C6 in the reported sequence 5′-ATTC4C5C6AATT-3′ (34). These variants showed decreased DNA deaminase activity in vitro. In the structure 3E1U, an intramolecular salt bridge was observed between the side chains of residues Asp-264 and Arg-256 in loop 3 (33) (Fig. 3E), which obviously stabilizes the conformation of loop 3. Therefore, the mutation from Asp-264 to Ala-264 not only disrupts the interactions between loop 3 in monomer H and helix α6′ in monomer T but also makes loop 3 more flexible, destabilizing the ssDNA binding center and thus further reducing the catalytic efficiency of the D264A variant. The mutation from Arg-376 to Ala-376 keeps its hydrophobic interactions with the side chain of Phe-268 (Fig. 3D) but nevertheless reduces the catalytic efficiency (Kcat/Km of R376A = 8.10 × 10⁻⁴ min⁻¹ μM⁻¹, reduced 6.7-fold), consistent with the previous observation that Arg-376 is involved in ssDNA binding and the catalytic reaction (36,37).
FIGURE 1. The overall fold of a novel head-to-tail A3G-CD2 dimer. A, ribbon representation of the two monomers (monomer H (gray) and monomer T (green)). B, the secondary structure elements of A3G-CD2 in this dimer, represented in ribbon mode. C and D, ribbon representation of the tail-to-tail and the head-to-head A3G-CD2 dimer conformations, respectively. E, superimposition of one monomer (gray) in the head-to-tail dimer with one monomer (orange) in the tail-to-tail or head-to-head dimer. The spheres represent zinc ions in the three-dimensional structures.
In the second interface, loop 3 in monomer H acts as a hook to catch helix α6′ in monomer T. The side chains of residues Gln-245 and Arg-256 and the backbone oxygen atoms of residues His-250 and Gly-251 in monomer H form a hydrogen bond network through water molecules with the charged side chains of Asp-370′ and Arg-374′ in monomer T (Fig. 3, F and G). To evaluate the contributions of these residues to the catalytic deamination, we replaced Gln-245, Arg-256, Asp-370′, and Arg-374′ with alanine. Compared with the WT protein, the Q245A and R256A variants nearly abolish the catalytic efficiency (reduced ~75-fold). Obviously, these mutations destroy the hydrogen bond interactions observed above and thus account for the changes in the catalytic efficiency. These results are different from those observed in structure 3IR2, where the side chain of Gln-245 was coordinated to the zinc ion in the dimer interface (37).
The D370A variant has smaller Kcat and Km values (Km of D370A = 11.89 ± 2.49 μM, Kcat of D370A = 0.010 ± 0.00038 min⁻¹) than the WT protein, with only about 16% of the catalytic efficiency of the WT protein (Kcat/Km of D370A = 8.58 × 10⁻⁴ min⁻¹ μM⁻¹), suggesting that the D370A variant might have a stronger binding affinity for ssDNA than the WT protein (the DNA binding groove in the structure of the D370A variant becomes wider, as shown in Fig. 2F). We tried to crystallize the complex of D370A with ssDNA. Different from the WT protein, in the presence of ssDNA during crystallization, one asymmetric unit contains a single D370A molecule. No electron density was observed for ssDNA either. Its final crystal structure was determined at 1.7 Å resolution. One molecule of D370A forms a dimer with another molecule in an adjacent asymmetric unit in a head-to-tail manner. The structure of the D370A variant reveals that the mutation from Asp-370 to Ala-370 not only directly results in the breakage of the hydrogen bond between Asp-370′ (in loop 7) and Gln-245 (in loop 3) (Fig. 3F) but also indirectly impairs the hydrophobic interaction between His-248 (in loop 3) and Trp-211′ (in loop 1) (Fig. 3H), and the intramolecular salt bridge interaction between Arg-374 (in
helix α6) and Asp-316 (in loop 7) (the distances between the side chains of Arg-374 and Asp-316 become larger in D370A than in the WT protein, as shown in Fig. 3, K and L). Thus, the decrease in the catalytic efficiency of the A3G-CD2 D370A variant further reveals that the stability of the active center is important for the catalytic deamination reaction.
To assess the importance of Arg-374 in the cytidine deamination, we replaced Arg-374 by Ala-374. Compared with the WT protein, the R374A variant showed markedly reduced deaminase activity, consistent with previous reports (36,37). On one hand, we think that the mutation from Arg-374 to Ala-374 may directly destroy the intramolecular salt bridge between Arg-374 (in helix α6) and Asp-316 (in loop 7) (Fig. 3K) and the intermolecular salt bridge between Arg-374 and Glu-209′ (in loop 1) (which was observed in the tail-to-tail dimer conformation; Fig. 3M), both interactions making loops 1 and 7 more flexible. On the other hand, the replacement of Arg-374 by Ala-374 disrupts additional hydrogen bonds between the Arg-374 side chain and the backbone oxygen atoms of residues Gly-251 and His-250 in loop 3 (Fig. 3G), which results in loop 3 being more flexible. Thus, like the mutation from Asp-370 to Ala-370, the mutation from Arg-374 to Ala-374 destabilizes the active deamination center of A3G-CD2 by changing the conformations of loops 1, 3, and 7.
The interaction between loop 3 in monomer H and loop 1′ in monomer T constitutes the third interface, in which the side chains of residues His-250 and His-248 in loop 3 have weak hydrophobic interactions with residues Pro-210′ and Trp-211′ in loop 1′, as well as a hydrogen bond interaction between the His-250 side chain nitrogen atom and the Pro-210 backbone oxygen atom through one water molecule (Fig. 3H). Before measuring the catalytic efficiency of each variant of A3G-CD2, we tested their aggregation states by running an analytical Superdex™ 75 (10/300) column (Fig. 4). The results suggest that none of the mutations of the residues mentioned above affects the aggregation state of A3G-CD2. Therefore, we can exclude the possibility that the differences in the catalytic efficiency of A3G-CD2 variants resulted from changes in the A3G-CD2 aggregation state.
We further investigated the contributions of the residues in the interfaces of the head-to-tail A3G-CD2 dimer to the deaminase activities of full-length A3G through an E. coli-based deaminase activity assay (Fig. 5). The expression of full-length A3G and its variants was confirmed by E. coli immunoblots (Fig. 5). The variants, including P210A, P210G, Q245A, R256A, D264A, D370A, R374A, R376A, and Q380A, demonstrate weaker deaminase activities. In addition, residue Phe-252 (in loop 3) is not located in the interface, but it has hydrophobic interactions with the side chain of residue Arg-256 (in loop 3) within one monomer (Fig. 3E) of the current head-to-tail dimer conformation (this interaction was also observed in the x-ray structure 3E1U) (33), which may stabilize the active center by fixing the conformation of loop 3. Thus, the mutation from Phe-252 to Ala-252 impairs this hydrophobic interaction and decreases the HIV-1 ssDNA deamination activity of A3G-CD2. The H248G, H250A, and H250G variants have higher deaminase activities than the WT protein, consistent with the in vitro enzymatic results for the corresponding A3G-CD2 variants.
In summary, the interfaces in the current head-to-tail dimer conformation provide new insights into the residues involved in HIV-1 ssDNA binding and catalytic deamination. Mutagenesis studies on these residues further confirm that the stability of the active center is extremely important for catalytic C → U deamination. Nine new residues (Pro-210, Gln-245, His-248, His-250, Phe-252, Asp-264, Phe-268, Asp-370, and Gln-380) in the A3G-CD2 domain necessary for HIV-1 ssDNA binding and the catalytic deamination reaction were identified from this new dimer conformation.
DISCUSSION
The Head-to-tail Dimer Conformer of A3G-CD2 Reveals the Mode of Full-length A3G Binding to ssDNA-To understand how the A3G CD1 and CD2 domains work together to facilitate cytidine deamination, a holoenzyme structure is a prerequisite. Although the three-dimensional structure of the A3G-CD1 domain is not available, different models of full-length A3G have previously been constructed. Two of them were predicted based on the APOBEC2 structure, because APOBEC2 has amino acid sequence identity with the A3G-CD1 (24%) and A3G-CD2 (31%) domains. One model, where A3G-CD1 and A3G-CD2 are tethered through the interactions between their β-strands, was successfully used to identify three residues (Arg-122, Trp-127, and Asp-128) important for packaging A3G into virions (50). The other model, however, produced from the extended NMR structure of the A3G-CD2 domain (PDB code 2KEM) (35), suggested that the β-strands in A3G-CD1 and A3G-CD2 are distant. Instead, an N-terminal pseudocatalytic domain, including the interdomain linker and some of helix α6 of A3G-CD1, packs A3G-CD1 and A3G-CD2 together. Unfortunately, in this model, the catalytic deaminase domains (i.e. A3G-CD2 and the N-terminal pseudocatalytic domain) point away from each other and from the nucleic acid binding site, creating a topological dilemma. The third model, the tail-to-tail dimerization model of full-length A3G-DR (A3G treated with RNase), and a tetramer model of full-length A3G-D (A3G not treated with RNase) were generated by small angle x-ray scattering and shape reconstruction methods. This model implied that full-length A3G in either low molecular mass or high molecular mass form is symmetrically associated (51). This model was further refined into the fourth model after the A3G-CD2 three-dimensional structure was reported, through an in-cell quenched fluorescence resonance energy transfer (FRET) assay, small angle x-ray scattering, and other techniques (52). In this model, A3G self-associates via its CD2 domain, forming a dimer structure. It seems that this model is more reasonable than any other reported model. Its low resolution, however, limits its usage in the analysis of the biological functions of full-length A3G.
It is well known that A3G deaminates ssDNA processively with a strong 3′ → 5′ bias (31,32,34). When there is more than one target motif in the ssDNA sequence, A3G deaminates the 5′-CCC target motif 5-fold more rapidly than the 3′-CCC target motif (53). Unlike either WT full-length A3G or its monomeric F126A/W127A mutant, A3G-CD2 alone, which does not oligomerize, catalyzes ssDNA-dependent C → U deamination (33,36,49), displaying no deamination polarity and no dead zone. The non-catalytic A3G-CD1 plays an indispensable role in stabilizing ssDNA binding, enhancing the catalysis, and establishing 3′ → 5′ deamination polarity and processivity. To explain the polarity and processivity of deamination by A3G, Chelico et al. (53) suggested the fifth structural model of full-length A3G, in which A3G effectively deaminates the ssDNA 5′-proximal CCC motif only upon binding to ssDNA in an active orientation. However, it is still difficult to obtain information about how A3G-CD1 is involved in ssDNA binding from this model.
To address this, we predicted an A3G-CD1 model through the SWISS-MODEL server (54-57) on the basis of its high sequence identity (43.68%) with A3C (42) (Fig. 6A); the model contains six α-helices and five β-sheets. The β2 strand is continuous, different from that in the A3G-CD2 structure. This structural model has a root mean square deviation value of 3.27 Å for all backbone atoms upon superposition with monomer T in the current head-to-tail A3G-CD2 dimer, indicating that its overall fold is almost identical to that of the A3G-CD2 domain (Fig. 6B). Thus, we raised the question of whether the current head-to-tail A3G-CD2 dimer conformation is a potential three-dimensional structural model of full-length A3G.
Third, in what orientation does ssDNA bind to full-length A3G? In our current A3G-CD2 head-to-tail dimer conformation, close to the catalytic center residue Glu-259 (ligated to the Zn2+ ion), there is a deep pocket for accommodating the first base Cm in the 5′-proximal -Cm+2Cm+1Cm- target motif in ssDNA (the ssDNA sequence is assumed to be 5′-Cm+2Cm+1Cm-Cn+2Cn+1Cn-3′). By overlapping the structure of monomer H with that of the mouse cytidine deaminase in complex with cytidine in the active site (PDB code 2FR6), we found that the hot spot cytidine was indeed docked into this site near the residues 315YDDQ318 in the conserved sequence 313RIYDDQ318 in loop 7 (Fig. 6, C and D). This sequence was previously predicted to specifically recognize the second base, Cm+1, in the 5′-proximal Cm+2Cm+1Cm target motif in ssDNA, which is different from the residues 307YYFWD311 in loop 7 of A3F-CD2 (specifically binding to the second base Tm+1 in 5′-Tm+2Tm+1CmA-3′) (43,44) and from the residues 130YDYD133 in loop 7 of A3A-CD1 (specifically binding to the second base, Tm+1 or Cm+1, in the target motif 5′-Tm+1Cm-3′ or 5′-Cm+1Cm-3′) (40,41). Thus, upon binding to full-length A3G, the 5′ terminus of ssDNA is located on the A3G-CD2 side, whereas its 3′ terminus interacts with the A3G-CD1 side. In other words, ssDNA binds to full-length A3G in an active orientation, as shown in Fig. 6, C and D, which accords with the previous prediction (53). The ssDNA interaction cavity displays large hydrophobic regions and negatively charged regions in the electrostatic potential surface of A3G-CD2 plus the A3G-CD1 structural model (Fig. 6E). The main hydrophobic regions are distributed near the 3′ end of the ssDNA, and the negatively charged regions are located mainly close to the catalytic center.
In conclusion, based on the structural analysis, NMR-based enzymatic assay, and E. coli-based deaminase activity assay, we suggest that the head-to-tail dimer conformation may represent a structural model of full-length A3G. This model indicates that A3G may bind to HIV-1 ssDNA in an active orientation. According to this structural model, several new residues in A3G-CD1 (including Pro-25, Glu-61, His-72, Asp-130, and Leu-184) were found to be necessary for ssDNA binding and catalytic deamination. | 6,963.6 | 2014-12-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
State and socio-demographic group variation in out-of-pocket expenditure, borrowings and Janani Suraksha Yojana (JSY) programme use for birth deliveries in India
Background High out-of-pocket expenditure (OOPE) deters families from seeking skilled/institutional care. 'Janani Suraksha Yojana' (JSY), a conditional cash transfer programme launched in 2005 to mitigate OOPE and to promote institutional deliveries among the poor, is part of the Government of India's efforts to achieve Millennium Development Goals (MDGs) 4 and 5. The objective of this study is to estimate variations in OOPE for normal/caesarean-section deliveries, JSY-programme use and delivery-associated borrowings, by states and union territories and by socio-demographic profile of families, in India. Methods Secondary analysis of data from the District Level Household Survey (DLHS-3), 2007-08. Mean and median OOPE, percentage use of JSY and percentage of families needing to borrow money to pay for delivery-associated expenditure were estimated for institutional and home deliveries. Results Half (52%) of all deliveries in India occurred at home in 2007/08. OOPE for women having institutional deliveries remained high, with considerable variation between states and union territories. Mean OOPE (SD) of a normal delivery in a public and a private institution in India was Rs. 1,624 and Rs. 4,458 respectively, and for a caesarean-section it was Rs. 5,935 and Rs. 14,276 respectively. There was considerable state-level variation in use of the JSY programme for normal deliveries (15% nationally; ranging from 0% in Goa to 43% in Madhya Pradesh) and in the percentage of families having to borrow money to pay for a caesarean-section in a private institution (47% nationally; ranging from 7% in Goa to 69% in Bihar). Increased literacy and wealth were associated with a higher likelihood of an institutional delivery and higher OOPE, but no major variations in use of the JSY. Conclusions Our study highlights the ongoing high OOPE and impoverishing impact of institutional care for deliveries in India. Supporting families in financial planning for maternity care, additional investment in the JSY programme and strengthening state level planning are required to increase the proportion of institutional deliveries.
Background
In India, high out-of-pocket expenditure (OOPE) is one of the main deterrents to seeking skilled/institutional care [1,2]. With OOPE on health increasing as a proportion of household expenditure [3], poor families (particularly the two lowest quintiles) become especially vulnerable when these expenditures exceed their capacity to pay [4].
Since maternal mortality is generally lower where a higher proportion of deliveries are conducted by skilled birth attendants, experts feel this should be a central element of any policy or programme that aims to reduce maternal deaths [7]. The maternal mortality ratio (MMR) in India declined substantially from 398 per 100,000 live births in 1997-98 [8] to 212 per 100,000 live births in 2007-09 [9], with under-five mortality rate (U5MR) declining from 109 per 1,000 live births in 1992-93 [10] to 74 per 1,000 live births in 2005-06 [11]. The proportion of institutional deliveries increased from 39% in 2005-06 [11] to 73% in 2009 [12]. However, India is still far from achieving its Millennium Development Goals (MDGs) 4 and 5 (38 deaths per 1,000 live births for child mortality and less than 100 deaths per 100,000 births for maternal mortality) and universal institutional delivery care, by 2015 [13].
In 2005, the Government of India launched the 'Janani Suraksha Yojana' (JSY) programme, a safe motherhood intervention under the National Rural Health Mission (NRHM), with the objective of reducing maternal and neo-natal mortality by promoting institutional deliveries among the poor [13,14]. JSY is the largest conditional cash transfer programme in the world in terms of the number of beneficiaries and constitutes a major Indian health care programme [13,15]. It is a centrally sponsored demand generation programme for 100 percent cash transfer to incentivise women/families to give birth in health facilities. Even though JSY is a centrally sponsored scheme, its implementation differs across the states and union territories [14]. Within five years JSY has made substantial strides, with the number of beneficiaries increasing from 0.74 million in 2005-06 to 10 million in 2009-10 [16], thus covering around 40 percent of total deliveries in the country. Its budgetary allocation has also increased from US$ 8.5 million in 2005-06 to US$ 275 million in 2008-09 [13].
Janani Suraksha Yojana (JSY) programme guidelines
According to the JSY guidelines, after delivery in a public or accredited private health facility, eligible women receive Rs. 600 in urban areas and Rs. 700 in rural areas. In the ten High Focus-Non North Eastern (NE) states (Uttar Pradesh, Uttarakhand, Bihar, Jharkhand, Madhya Pradesh, Chhattisgarh, Himachal Pradesh, Rajasthan, Orissa, and Jammu & Kashmir) all pregnant women are eligible, and benefits are paid regardless of whether they deliver in a government or in a private accredited institution, and regardless of birth order. Benefits for institutional delivery in these states are Rs. 1,400 in rural areas and Rs. 1,000 in urban areas. In Non High Focus states, women are eligible for the cash benefit only for their first two live births and only if they hold a below poverty line (BPL) card issued by the government or belong to a scheduled caste or tribe. Pregnant women can also receive cash assistance for transport to the nearest government health facility for delivery. Each state determines the amount of assistance, but the minimum is Rs. 250, paid to pregnant women on arrival and registration at the facility. Women who deliver at home are still eligible for a cash payment to cover the expenses associated with delivery, but only if they are 19 years of age or older, belong to a BPL household and gave birth to their first or second child. Such mothers are entitled to Rs. 500 per delivery. JSY is implemented through Accredited Social Health Activists (ASHAs), who identify pregnant women and help them to get to a health facility. In high focus states, ASHAs receive payments of Rs. 200 in urban areas and Rs. 600 in rural areas per in-facility delivery assisted by them [14,15,17].
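To make the eligibility rules above concrete, the sketch below encodes them as a simple decision function. It is an illustrative simplification of our reading of the guidelines, not an official implementation: function and parameter names are our own, and state-specific details such as transport assistance and ASHA payments are omitted.

```python
# Simplified, hypothetical encoding of the JSY cash-assistance rules summarized above.
def jsy_benefit(high_focus: bool, rural: bool, institutional: bool,
                bpl_or_sc_st: bool = False, birth_order: int = 1, age: int = 25) -> int:
    """Return the approximate JSY cash benefit (Rs.) for the mother."""
    if institutional:
        if high_focus:
            # High Focus states: all pregnant women eligible, any birth order
            return 1400 if rural else 1000
        # Non High Focus states: first two live births, BPL card or SC/ST only
        if bpl_or_sc_st and birth_order <= 2:
            return 700 if rural else 600
        return 0
    # Home delivery: BPL household, age >= 19, first or second child
    if bpl_or_sc_st and age >= 19 and birth_order <= 2:
        return 500
    return 0

print(jsy_benefit(high_focus=True, rural=True, institutional=True))                   # 1400
print(jsy_benefit(high_focus=False, rural=False, institutional=True,
                  bpl_or_sc_st=True, birth_order=2))                                  # 600
```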
Although a small number of micro studies [18][19][20][21][22] have provided estimates of OOPE to family for delivery care, these estimates were confined to small geographic areas in India. We used a nationally representative cross-sectional dataset [District Level Household and Facility Survey-Phase 3 (DLHS-3)], to provide robust estimates of OOPE to family of delivery care for all the states and union territories in India, except for Nagaland, as it was not covered under DLHS-3. Specific objectives of our study are: 1. To estimate the average OOPE for women/families according to the type (normal/caesarean-section) and place (home/government hospital/private hospital) of delivery, in the states/union territories of India; 2. To examine inter-state variations in percent JSY beneficiaries and percent families who had to borrow money/sell property to meet the delivery expenses for normal and caesarean-section deliveries; 3. To outline how average OOPEs, percent JSY beneficiaries and percent families borrowing vary for normal/caesarean-section deliveries according to socio-demographic profiling of families in India.
Methods
The DLHS-3 collected data on OOPE to families for delivery care from ever-married women who had a live/still birth between January 2004 and December 2008. However, we confined our analysis to births/deliveries between January 2007 and December 2008, as state-wise implementation of the JSY programme was highly variable during previous years [14]. We have adopted the DLHS-3 definitions of 'type of delivery' and 'place of delivery' [23]. A delivery not requiring intervention in the form of an operation/use of forceps/cuts and stitches was termed a 'normal vaginal delivery'; an operation was termed a 'caesarean-section' ('c-section'); and the use of forceps/cuts/stitches was termed an 'instrument/assisted' delivery. A delivery in a public institution [Government hospital, dispensary, urban health centre/post/family welfare centre, community health centre/rural hospital, primary health centre, sub centre, Ayurveda, Yoga, Unani, Siddha, & Homeopathy (AYUSH) hospital/clinic] was classified as a 'public institution delivery'. A delivery in a private hospital/clinic or private AYUSH hospital/clinic was classified as a 'private institution delivery'. A delivery in a woman's or her parents' home was classified as a 'home delivery', and a delivery occurring at a Non-governmental organisation (NGO)/Trust hospital/clinic, en route to the hospital, at the work place, or at other places was classified as 'other place'.
The OOPE incurred by the family on delivery care, the percentage of families who had borrowed money/sold property to meet delivery care expenses, and the percentage of JSY beneficiary families/women are the main outcome measures of this study. Expenditure incurred by the woman/family on transportation was obtained only for institutional deliveries. If there was no expenditure on transportation it was coded as '0'; otherwise the actual expenditure was coded, up to a maximum of Rs. 89,999 (endnote a). Delivery care expenditures (irrespective of place and type) include: antenatal care (ANC), delivery, and medicines during the period [23]. If no expenditure was incurred for delivery, it was coded as Rs. '0'; otherwise the actual expenditure was coded, up to a maximum of Rs. 99,996 (endnote b). By adding expenditures on transportation and delivery care we computed a new variable, 'out-of-pocket expenditure (OOPE) of a delivery'.
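The derivation of the composite OOPE variable can be illustrated with a short, hypothetical data-handling sketch; the column names are invented for illustration and do not correspond to the actual DLHS-3 variable codes.

```python
# Hedged sketch of constructing the 'OOPE of a delivery' variable described above.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "transport_exp": [0, 150, 400, np.nan],     # Rs.; collected for institutional deliveries only
    "delivery_exp":  [500, 2000, 14000, 300],   # Rs.; ANC + delivery + medicines
})

# Missing transport expenditure (e.g. home deliveries) is treated as zero here,
# and the survey's upper caps are applied before summing.
df["transport_exp"] = df["transport_exp"].fillna(0).clip(upper=89_999)
df["delivery_exp"] = df["delivery_exp"].clip(upper=99_996)
df["oope_delivery"] = df["transport_exp"] + df["delivery_exp"]
print(df)
```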
States and union territories of India were grouped according to the National Rural Health Mission (NRHM) classification [24], as JSY compensation policies mainly vary according to this classification [14][15][16]: 10 High Focus -Non North Eastern (NE) states; 7 High Focus -NE states; 11 Non High Focus -Large states; and 6 Non High Focus -Small States & Union Territories (UT).
We also measured variations in OOPE on normal/c-section delivery according to the following socio-demographic characteristics: Caste (scheduled caste, scheduled tribe, other backward caste, others); Maternal education (no education, 1-5 years, 6-11 years, and 12 years or more); Quintiles of household wealth index (poorest, second, middle, fourth, richest); Location of residence (rural, urban); Pregnant women's interaction with health worker [registered the pregnancy and got advice (at least once) on institutional delivery] (yes, no); and Got full ANC (yes, no).
Statistical analysis
The OOPE to families for delivery care was analysed by estimating mean & standard deviation (SD) and median & inter-quartile range (IQR) values, because the OOPE data on delivery care were heavily skewed. Chi-square and one-way analysis of variance (ANOVA) tests were used to test the significance of differences between proportions and means respectively. State-level weights were applied throughout the analysis. Analysis was undertaken in SPSS-19.
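Although the analysis itself was run in SPSS-19, the summary measures are straightforward to reproduce; the following hypothetical sketch (in Python, with made-up values and column names) shows a weighted mean/SD, median/IQR, and a one-way ANOVA across places of delivery.

```python
# Illustrative re-implementation of the descriptive statistics and ANOVA described above.
# Values, weights and column names are placeholders, not survey data.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "oope":   [466, 1624, 4458, 5935, 14276, 300, 800],
    "place":  ["home", "public", "private", "public", "private", "home", "public"],
    "weight": [1.2, 0.8, 1.0, 0.9, 1.1, 1.0, 1.0],   # state-level weights (assumed)
})

w_mean = np.average(df["oope"], weights=df["weight"])
w_sd = np.sqrt(np.average((df["oope"] - w_mean) ** 2, weights=df["weight"]))
median = df["oope"].median()
q1, q3 = df["oope"].quantile([0.25, 0.75])
print(f"Weighted mean (SD): {w_mean:.0f} ({w_sd:.0f}); median (IQR): {median:.0f} ({q1:.0f}-{q3:.0f})")

# One-way ANOVA of OOPE across place of delivery (unweighted, for illustration)
groups = [g["oope"].to_numpy() for _, g in df.groupby("place")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")
```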
Results
The response rate of women who had a live/still birth between January 2007 and December 2008 in DLHS-3 was 93% (N=92,563). Of these women, data on OOPE for delivery care were available for 83,510 (90.2%), and information on OOPE, type of delivery and place of delivery was available for 83,493 (90.2%). The mean OOPE to families on ANC and delivery care alone for all births in India in 2007/08 was Rs. 2,037 (SD=4,509), with a median of Rs. 500 (IQR=150-2,000). Mean expenditure exclusively on transportation for the 36,524 (39%) women who had an institutional delivery was Rs. 322 (SD=893), with a median of Rs. 150 (IQR=50-400). Mean (total) OOPE to families for maternity/delivery care (transportation + ANC + delivery expenditure) was Rs. 2,169 (SD=4,647), with a median of Rs. 600 (IQR=200-2,000).
Flow chart
Summary profile of delivery care in India by type (normal/caesarean-section) and location (public institution/private institution/home), in 2007-08
Figure 1 provides a summary profile of OOPEs associated with delivery care in India according to type and place of delivery. Of all the deliveries in India in 2007-08, 90% were classified as 'normal', 8% as 'c-section' and 2% as 'instrument/assisted'. The breakdown of the 90% normal deliveries by location of delivery was as follows: home (52%); government hospital (25%); private hospital (12%); and others/NGOs (1%). The breakdown of the 8% of c-sections was as follows: private institution (5%); public institution (3%); and others (0.4%). The mean OOPE associated with a c-section birth was eight times that for a normal delivery, and the high expenditures associated with these c-sections forced almost one in two women/families to borrow money. Mean OOPE of a normal delivery in a public institution (Rs. 1,624) was three times that for a home delivery (Rs. 466), while that of a normal delivery in a private institution (Rs. 4,458) was three times that in a public institution. One in every four women/families who had a normal delivery at home borrowed money, even though mean expenditure was only Rs. 466. One in every three women who had a normal delivery in a public/private institution borrowed money. The reach of the JSY programme was mainly confined to public institution deliveries (43%), with almost negligible reach to private institution (6%) or home (3%) deliveries.
State level variations in OOPE, borrowings and JSY use for normal deliveries at public and private institutions and at home
Figure 2 presents mean OOPE, percent borrowings and percent JSY beneficiaries for a normal delivery in a public institution by state/UT (see Tables 1 and 2 for more detailed information). With large inter-state variations, mean OOPE of a normal delivery in a public institution was lowest (Rs. 381) in Daman & Diu and highest in Manipur (Rs. 3,984), with a national average of Rs. 1,624. In only nine out of 34 states/UTs was the median OOPE less than the JSY compensation amount of Rs. 700 (Table 1). Mean OOPE is not the sole determinant of families having to borrow money. For example, despite a high mean OOPE (Rs. 3,230) in Arunachal Pradesh, only 8 percent of families resorted to borrowing, while despite a low mean OOPE (Rs. 1,769) in West Bengal, a large proportion (60%) of families opted to borrow. There were considerable state-wise variations in percent JSY beneficiaries even among the ten high focus-non NE states (76% in Madhya Pradesh and 5% in Jammu & Kashmir), even though 100% of these women are technically eligible to receive the JSY benefit. Among the high focus-NE states, Assam did well in terms of JSY outreach, followed by Mizoram. In non high focus states/UTs, JSY use was generally low. Figure 3 presents mean OOPE, percent borrowings and percent JSY beneficiaries for a normal delivery in a private institution by state/UT (see Tables 1 and 2 for additional data). Excluding Bihar, Lakshadweep, Delhi and Arunachal Pradesh, mean OOPE for a normal delivery in the remaining states ranged from Rs. 3,000-8,000. Irrespective of mean OOPE, these deliveries were generally associated with higher borrowings and fewer JSY beneficiaries. At the national level, only 6% of these deliveries received the JSY benefit. Borrowings for these deliveries were high in Bihar, Orissa and Andhra Pradesh, while percent borrowings were lower in Maharashtra, Meghalaya and Mizoram.
Irrespective of the NRHM classification of states/UTs, JSY reach for deliveries in private institutions was generally poor across all the states, excluding Tamil Nadu, Andhra Pradesh and Mizoram, where more than 20% received the JSY benefit. Figure 4 presents mean OOPE, percent borrowings and percent JSY beneficiaries for a normal delivery at home by state/UT (see Tables 1 and 2 for additional data). Mean OOPE for a normal delivery at home was Rs. 466, with one-fourth of women/families needing to borrow money, while a negligible proportion (3%) of them received the JSY benefit. Mean OOPE of a home delivery across the states/UTs may be divided into three broad groups: less than Rs. 500 in 16 states; between Rs. 500-1,000 in 16; and more than Rs. 1,000 in two states. High mean OOPE for these deliveries was generally associated with high borrowings and poor JSY outreach (less than 10% in 29 of the 34 states/UTs).
State level variations in OOPE, borrowings and JSY use for c-section deliveries at public and private institutions
Data on mean OOPE, percent borrowings and percent JSY beneficiaries for c-section deliveries at public and private institutions are presented in Tables 2 and 3. Use of the JSY programme among women having a c-section in a private institution was 8% nationally, ranging from 0% to 25%. Irrespective of the NRHM classification of states/UTs, these deliveries were generally associated with higher borrowings and fewer JSY benefits.
Socio-demographic variations in OOPE, borrowings and JSY use for all normal/c-section deliveries in India
Variations in mean OOPE, percent borrowings and percent JSY beneficiaries according to socio-demographic profiling of all normal and c-section deliveries in India are presented in Figures 7 and 8 (see Tables 4 and 5 for details). Mean OOPE of a normal delivery was more than double among those who had full ANC or who interacted with a health worker during pregnancy, as compared with their respective group counterparts. JSY reach and the proportion borrowing did not differ significantly according to ANC use and women's interaction with a health worker (Table 4). Excluding education and wealth index, in the remaining socio-demographic groups, variations in mean OOPE and percent borrowing were less evident among c-section deliveries (Figure 8) than among normal deliveries in India. The OOPE on c-sections did not differ significantly according to type of area (rural/urban), receiving full ANC care (yes/no) or the pregnant woman's interaction with a health worker (yes/no) (Table 5). For poor and illiterate women, expenditures on c-sections were beyond their capacity to pay, resulting in significantly more borrowings.
Discussion
In 2007-08, four years after the implementation of the JSY programme, half of all deliveries in India occurred at home. OOPE among women having institutional deliveries remained high, with considerable variation between the states/UTs. High OOPE due to institutional delivery forced one-third to half of families to borrow, despite implementation of the JSY programme to address this, reflecting both low use and the modest value of the cash transfer within this programme. Even among women who had normal deliveries in public institutions, JSY use was less than 50% in 29 of the 34 states/UTs in India, highlighting scope for further improvement. Increased literacy and wealth were associated with a higher likelihood of an institutional delivery, but also with higher OOPE and no major variations in use of the JSY programme.
How comparable are our results with other studies?
The Coverage Evaluation Survey (CES-2009) [12] report estimated the mean expenditure for transporting a pregnant woman to a facility in India at Rs. 192, while it was Rs. 322 in our study. A study done elsewhere [25] reported different delivery expenditure estimates (…,924). These variations in expenditures may be due to variations in the percentage of private hospital deliveries, 11% in our study and 5% in the reported study [26]. A comparison of our results (based on 2007/08 data) with those from the National Sample Survey Organization (NSSO) conducted in 2004 [27] suggests that OOPE to families for public and private institution deliveries may have increased during this time period. In 2004, OOPE on a public, private and a home delivery respectively was Rs. 1,387, Rs. 6,094, and Rs. 428; the corresponding OOPEs in 2007/08 were Rs. 2,103, Rs. 7,245 and Rs. 466. There was no major increase in expenditure on home deliveries over this period. These data suggest that the JSY programme may not have offset increases in OOPE over that time period for many families.
Our findings suggest that the proportion of women opting for home deliveries in 2007/08 remains high (52%) in India; although a more recent (2009/10) estimate [12] found it to be 27%, suggesting that the JSY programme may have been successful in reducing the proportion of home deliveries since the DLHS-3 (2007-08) was conducted. Women from high focus-non NE states (where substantial portion of deliveries were at home) cited the following reasons for opting 'home as the place of delivery' in their previous pregnancy: not necessary to go to institution (33%); cost of institutional delivery was too much (25%); no time to go to institution (24%); better care at home (17%); institution too far/no transport (12%); lack of knowledge (7%); family did not allow (7%); not customary (7%); poor quality of service at institution (5%). This implies that barriers other than OOPE, including availability, accessibility, and lack of planning and cultural reasons need to be addressed to reduce home deliveries in India.
A cross-sectional survey [28] in 2008 found that the average amount paid by JSY beneficiaries to an institution for medicines and other services ranged from Rs. 299 in Madhya Pradesh to Rs. 1,638 in Orissa. These findings are consistent with ours, and imply that the JSY benefit is insufficient to cover the expenditures incurred on delivery, thus requiring many families to borrow money. This is confirmed by our finding that rural families from high focus-non NE states had average additional expenditures of Rs. 544 and Rs. 4,761 for public and private institution deliveries respectively after receiving the JSY benefit (Rs. 1,400). Further, the mean OOPE to families for normal deliveries in public institutions was more than the JSY compensation amount (Rs. 1,400) in five of the 10 high focus-non NE states.
Study strengths and limitations
This study provides some of the first robust state-level estimates of OOPE for normal and c-section deliveries, the proportion of families required to borrow to meet these expenditures, and the reach of the JSY programme, by location of delivery in India. One limitation of our study is that the OOPE to families on delivery care reported here is based on figures recalled by women. Studies that gather family expenditures from hospital records [4] are often more accurate as they are not influenced by recall or reporting bias. The current study only included direct expenses such as transportation and facility-based expenses. It did not include indirect expenses such as spending by women and families on food and other purchases during hospitalization/delivery, wages lost by women and family members during the delivery process, and bribes/gifts. Results of this study must be seen in the light of the limitations of the methods of DLHS-3 [29], which did not capture the reasons for variable implementation and use of JSY between different states [14], including eligibility guidelines, awareness of the JSY programme, amount distributed, payment process, delays in payments to mothers and involvement of Accredited Social Health Activists (ASHAs) in maternity care [25][26][27]. Before the streamlining of the JSY programme in 2007-08, there was very little change in the distribution of institutional deliveries during 2002-04 [30] and 2005-06 [11]. In 2009 the proportion of institutional deliveries in India increased to 73% and JSY use increased to 33% [12], clearly implying that the coverage of the JSY has increased since 2007-08; hence our findings are unlikely to reflect current JSY use and the distribution of location of delivery, even though OOPE and family borrowings may not have changed markedly since 2007-08. Ongoing evaluation of the JSY programme is therefore essential to establish whether its reach and impacts on OOPE and family borrowings have improved.
Policy implications
Our results highlight the ongoing high OOPE of Indian families for delivery/maternity care, resulting in 25-47% of families in India having to borrow money to meet pregnancy/delivery related expenses. The OOPE burden was found to be especially high among low wealth index, illiterate/less educated and low social group families, and in low per-capita income states [31]. The high levels of OOPE found, the low reported use of the JSY programme, and the fact that expenditures exceed the financial benefit of the programme for many families suggest that the impact of the programme on OOPE in 2007/08 was modest. Additional investment in the JSY programme, strengthening state-specific interventions targeting population groups most likely to avoid institutional care due to OOPE, and providing support to families in financial planning for maternity care are likely to be required in order to meet MDGs 4 and 5 in India.
Conclusions
Our study highlights the ongoing high OOPE and impoverishing impact of institutional delivery care in India despite a high profile policy initiative seeking to address this issue. Additional investment in JSY and strengthening of state level implementation is required to increase coverage of JSY programme, reduce maternity related OOPE, reduce delivery associated borrowings and increase the proportion of institutional deliveries in India. Such an investment is vital to accelerate progress towards achievement of MDGs 4 and 5.
Endnotes: (a) 12 cases out of 36,536 were excluded from analysis as outliers; (b) 14 cases out of 83,524 were excluded from analysis as outliers | 5,510.4 | 2012-12-05T00:00:00.000 | [
"Economics"
] |
Antibacterial and cytotoxic cytochalasins from the endophytic fungus Phomopsis sp. harbored in Garcinia kola (Heckel) nut
Background The continuous emergence of multidrug-resistant (MDR) bacteria has drastically reduced the efficacy of our antibiotic armory and, consequently, increased the frequency of therapeutic failure. The search for bioactive constituents from endophytic fungi against MDR bacteria has become a necessity for alternative and promising strategies, and for the development of novel therapeutic solutions. We report here the isolation and structure elucidation of antibacterial and cytotoxic compounds from Phomopsis sp., an endophytic fungus associated with Garcinia kola nuts. Methods The fungus Phomopsis sp. was isolated from the nut of Garcinia kola. The crude extract was prepared from the mycelium of Phomopsis sp. by maceration in ethyl acetate and sequentially fractionated by column chromatography. The structures of isolated compounds were elucidated on the basis of spectral studies and comparison with published data. The isolated compounds were evaluated for their antibacterial and anticancer properties by broth microdilution and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide methods respectively. The samples were also tested spectrophotometrically for their hemolytic properties against human red blood cells. Results The fractionation of the crude extract afforded three known cytochalasins, including 18-methoxycytochalasin J (1) and cytochalasins H (2) and J (3), together with alternariol (4). The cytochalasin compounds showed different degrees of antibacterial activity against the tested bacterial pathogens. Shigella flexneri was the most sensitive microorganism while Vibrio cholerae SG24 and Vibrio cholerae PC2 were the most resistant. Ampicillin did not show any antibacterial activity against Vibrio cholerae NB2, Vibrio cholerae PC2 and Shigella flexneri at concentrations up to 512 μg/mL, but interestingly, these multidrug-resistant bacterial strains were sensitive to the cytochalasin metabolites. These compounds also showed significant cytotoxic properties against human cancer cells (LC50 = 3.66–35.69 μg/mL) with low toxicity to normal non-cancer cells. Conclusion The three cytochalasin compounds isolated from the Phomopsis sp. crude extract could be a clinically useful alternative for the treatment of cervical cancer and severe infections caused by MDR Shigella and Vibrio cholerae.
Background
Endophytic fungi are organisms that live inside plant tissues without causing apparent harm to their plant hosts [1]. They have proven to be a rich source of novel organic compounds with interesting biological activities and a high level of biodiversity [2,3]. Natural products from endophytic fungi have been observed to inhibit or kill a wide variety of harmful microorganisms, including phytopathogens as well as bacteria, fungi, viruses, and protozoans that affect humans and animals [4]. Among the most frequently isolated secondary metabolites from endophytic fungal cultures, cytochalasins are produced by the genera Phoma [5], Hormiscium [6], Helminthosporium [7], Phomopsis [8] and Curvularia [9]. They have been identified as contaminants of potato [5], tomato [6], pecan [10], rice [11], millet [8] and litchi fruit [9]. Cytochalasins A, B, C, D, and E are highly toxic to the chick, rat, mouse, and guinea pig [11-14] and are teratogenic to both chick and mouse [13,15-17]. In recent years, most work on endophytic fungi has centered on plants in the temperate and tropical regions of the world [18].
Plants of the genus Garcinia (family Clusiaceae), widely distributed in tropical Africa, Asia, New Caledonia and Polynesia, have yielded an abundance of biologically active and structurally intriguing natural products [19]. Garcinia species are known to contain a wide variety of oxygenated and prenylated xanthones, as well as polyisoprenylated benzophenones such as the guttiferones [20].
Garcinia kola (Clusiaceae) is a plant of West and Central African origin [21]. In Nigeria, the seed (bitter kola) is chewed for the relief of cough, colds, colic, hoarseness of voice, and throat infections. The plant is also used for the treatment of liver disorders, jaundice, and fever, and as a purgative and for chewing sticks [21]. We focused on the Garcinia kola nut because it is one of the most commercialized fruits in West and Central Africa, its perceived medicinal attributes are highly valued, and the consumption of large quantities does not cause indigestion. Several management strategies have been employed for its conservation, but the growth of molds due to moisture during conservation remains a serious problem [22]. Moreover, further studies by Austin [23] attributed the loss of viability of kola nut seeds to a reduction in moisture content.
During our investigation, the fungus Phomopsis sp. associated with this nut was found to be a producer of diverse secondary metabolites, including cytochalasins, from its mycelium on potato dextrose agar (PDA) medium. Attracted by the potential production of this class of compounds, a so-called OSMAC (one strain, many compounds) [24] approach was carried out to find further compounds. Following the application of the OSMAC principle, we found that when the culture conditions were changed from PDA medium to a solid-state medium (rice), the fermentation changed significantly and, based on high-performance liquid chromatography (HPLC) monitoring, 18-methoxycytochalasin J (1), cytochalasins H (2) and J (3) and alternariol (4) were isolated. In this report, we evaluate the cytotoxic activities of the cytochalasins against bacterial species and human cervical cancer cell lines, with emphasis on MDR Shigella flexneri and Vibrio cholerae.
General experimental procedures
High-resolution mass spectra were obtained with an LTQ-Orbitrap spectrometer (Thermo Fisher, USA) equipped with a HESI-II source. The spectrometer was operated in positive mode (1 spectrum/s; mass range: 100-1000) with a nominal mass resolving power of 60,000 at m/z 400 and a scan rate of 1 Hz. It was equipped with automatic gain control to provide high-accuracy mass measurements within 2 ppm deviation using an internal standard, bis(2-ethylhexyl) phthalate: m/z = 391.28428. The spectrometer was coupled to an Agilent (Santa Clara, USA) 1200 HPLC system consisting of an LC pump, PDA detector (λ = 260 nm), autosampler (injection volume 5 μL) and column oven (30 °C). The following parameters were used for the experiments: spray voltage 5 kV, capillary temperature 260 °C, tube lens 70 V. Nitrogen was used as sheath gas (50 arbitrary units) and auxiliary gas (5 arbitrary units). Helium served as the collision gas. Separations were performed using a Nucleodur C18 Gravity column (50 × 2 mm, 1.8 μm particle size) with a H2O (+0.1% HCOOH) (A) / acetonitrile (+0.1% HCOOH) (B) gradient (flow rate 300 μL/min). Samples were analyzed using the following gradient program: 80% A isocratic for 1 min, linear gradient to 100% B over 18 min; after 100% B isocratic for 5 min, the system returned to its initial condition (80% A) within 0.5 min and was equilibrated for 4.5 min. Preparative separation was carried out by a 20-min preparative HPLC run on a Gilson apparatus with UV detection at 220 nm using a Nucleodur C18 Isis column (Macherey-Nagel, Düren, Germany), 5 μm (250 × 16 mm), with a H2O (A) / CH3OH (B) gradient (flow rate 4 mL/min). Samples were separated using the following gradient program: 60% A and 40% B isocratic for 2 min, linear gradient to 100% B over 18 min; after 100% B isocratic for 5 min, the system returned to its initial condition (60% A) within 0.5 min and was equilibrated for 4.5 min. NMR spectra were recorded on a Bruker DRX-500 MHz spectrometer. Chemical shifts (δ) are quoted in parts per million (ppm) relative to the internal standard tetramethylsilane, and coupling constants (J) are in Hz. Silica gel [Merck, Kieselgel 60 (0.063-0.200 mm)] was used for column chromatography. Melting points were determined on a BÜCHI melting point B-545 apparatus. UV spectra were measured with the spectrometer described earlier.
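Since the two gradient programs above are easy to misread, a minimal sketch may help: it encodes the analytical gradient as (time, %B) breakpoints with linear interpolation. The helper name and encoding are ours, not part of the published method.

```python
# Minimal sketch of the analytical gradient above, encoded as (time in min,
# %B) breakpoints with linear interpolation; the helper is ours, not part of
# the published method.

def percent_b(t, program):
    """Return %B at time t (min) by linear interpolation between breakpoints."""
    for (t0, b0), (t1, b1) in zip(program, program[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside gradient program")

# 80% A (i.e., 20% B) isocratic 1 min; to 100% B over 18 min; hold 5 min;
# back to 20% B in 0.5 min (re-equilibration not shown).
lcms_gradient = [(0.0, 20.0), (1.0, 20.0), (19.0, 100.0),
                 (24.0, 100.0), (24.5, 20.0)]

print(percent_b(10.0, lcms_gradient))  # %B mid-gradient -> 60.0
```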
Isolation of endophytic fungus
The fungus was isolated from a nut of Garcinia kola bought at the Mokolo local market in Yaounde (Cameroon). The plant material was identified at the Cameroon National Herbarium, Yaoundé, where a voucher specimen (N°27839/SRF-CAM) has been deposited. The seed was first cleaned by washing several times under running tap water and then cut into small slices, followed by successive surface sterilization in 70% ethanol and NaOCl (6-14% active chlorine) for 2 min, and finally rinsed two to three times with sterile distilled water. The plant material was then dried between the folds of sterile filter paper and deposited on a Petri dish containing potato dextrose agar (PDA) medium (200 g potato, 20 g dextrose, and 15 g agar in 1 L of H2O, supplemented with 100 mg/L of chloramphenicol to suppress bacterial growth). All plates were incubated at 28 °C to promote the growth of endophytes and were regularly monitored for microbial growth. On observing microbial growth, subculturing was done. Each endophytic culture was checked for purity and transferred to a freshly prepared PDA plate.
Identification of the fungus CAM240
Cultures were grown on PDA at 25 °C under 12 h light / 12 h darkness cycles. The strain CAM240 formed abundant mycelium that filled the Petri dishes (9 cm diameter) in 8 days. The isolate was identified by Dr Clovis Douanla-Meli after macroscopic and microscopic examination of its morphological features. The isolate was deposited as AGMy0319 in the Culture Collection of the Federal Research Centre for Cultivated Plants (JKI), Braunschweig, Germany.
Fungal culture and extraction
Phomopsis sp. was cultured in 12 flat culture bottles, each containing 100 g rice and 100 mL water enriched with 0.3% peptone, autoclaved at 121 °C for 45 min. Each bottle received about 5 small pieces of mycelium from a PDA plate under sterile conditions. After 40 days of growth at 25 °C, ethyl acetate (12 × 500 mL) was added to the bottles, homogenized, filtered after 24 h, and taken to dryness to afford 11.6 g of crude extract.
Antibacterial assay

Microbial growth conditions
A total of six bacterial strains were tested for their susceptibility to the compounds; these strains were taken from our laboratory collection (kindly provided by Dr. T. Ramamurthy, NICED, Kolkata). Among the clinical strains of Vibrio cholerae used in this study, strains NB2 and SG24 belonged to the O1 serotype and strain CO6 to the O139 serotype. All these strains produced cholera toxin and hemolysin and were multi-drug-resistant (MDR). The other strains used in this study were V. cholerae non-O1, non-O139 (strain PC2) and Shigella flexneri SDINT. The MDR V. cholerae non-O1, non-O139 strain PC2, isolated from an aquatic environment, was positive for hemolysin production but negative for cholera toxin production [25]. The American Type Culture Collection (ATCC) strain Staphylococcus aureus ATCC 25923 was used for quality control. The bacterial strains were maintained on agar slants at 4 °C and subcultured on fresh appropriate agar plates 24 h prior to any antibacterial test. Mueller Hinton Agar (MHA) was used for the activation of the bacteria. Mueller Hinton Broth (MHB) and nutrient agar (Hi-Media) were used for the MIC and MBC determinations, respectively.
Inocula preparation
Suspensions of bacteria were prepared in MHB from cells harvested during their logarithmic growth phase (4 h) in MHB at 37 °C. The turbidity of the microbial suspension was read spectrophotometrically at 600 nm and adjusted with MHB to an OD of 0.1, which is equivalent to 1 × 10^8 CFU/mL. From this prepared suspension, further dilutions were made with MHB to yield 1 × 10^6 CFU/mL.
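As a back-of-the-envelope illustration of this dilution step, a minimal sketch follows, assuming the OD-adjusted suspension holds about 1 × 10^8 CFU/mL; the final volume is illustrative, not from the paper.

```python
# Minimal sketch of the dilution step above: the OD600-adjusted suspension
# (taken as ~1e8 CFU/mL) is diluted 100-fold with MHB to 1e6 CFU/mL.
# The final volume is illustrative, not from the paper.

stock_cfu_per_ml = 1e8
target_cfu_per_ml = 1e6
final_volume_ml = 10.0

dilution_factor = stock_cfu_per_ml / target_cfu_per_ml      # 100x
stock_volume_ml = final_volume_ml / dilution_factor         # 0.1 mL suspension
diluent_volume_ml = final_volume_ml - stock_volume_ml       # 9.9 mL MHB

print(f"{stock_volume_ml:.2f} mL suspension + {diluent_volume_ml:.2f} mL MHB")
```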
Determination of minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC)
The MIC and MBC of compounds 1-3 were assessed using the broth microdilution method recommended by the National Committee for Clinical Laboratory Standards [26,27], with slight modifications. Each test sample was dissolved in dimethylsulfoxide (DMSO) to give a stock solution. The 96-well round-bottom sterile plates were prepared by dispensing 180 μL of the inoculated broth (1 × 10^6 CFU/mL) into each well, and a 20 μL aliquot of the stock solution of compound was added. The concentrations of sample tested were 0.125, 0.25, 0.50, 1, 2, 4, 8, 16, 32, 64, 128, 256 and 512 μg/mL. The final concentration of DMSO in each well was < 1% [preliminary analyses with 1% (v/v) DMSO did not inhibit the growth of the test organisms]. Dilutions of tetracycline and ampicillin served as positive controls, while broth with 20 μL of DMSO was used as the negative control. The ATCC strain Staphylococcus aureus ATCC 25923 was included for quality assurance purposes. Plates were covered and incubated for 24 h at 37 °C. After incubation, minimum inhibitory concentrations (MIC) were read visually; all wells were then plated to nutrient agar (Hi-Media) and incubated. The minimal bactericidal concentration (MBC) was defined as a 99.9% reduction in CFU from the starting inoculum after a 24 h incubation interval.
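A minimal sketch of how an MIC could be read off such a plate follows, assuming each well has been scored visually as growth or no growth; the doubling series matches the text, but the plate reading is invented.

```python
# Minimal sketch of reading an MIC from the two-fold dilution series above,
# assuming each well has been scored visually as growth (True) or no growth
# (False). The plate reading below is invented.

concentrations = [0.125 * 2 ** k for k in range(13)]  # 0.125 ... 512 ug/mL

def mic(growth_by_conc):
    """Lowest concentration showing no visible growth, or None if all grow."""
    inhibitory = [c for c in sorted(growth_by_conc) if not growth_by_conc[c]]
    return inhibitory[0] if inhibitory else None

growth = {c: c < 32 for c in concentrations}  # hypothetical: grows below 32
print(mic(growth))                            # -> 32.0
```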
Cytotoxicity assay
HeLa cells (human cervical cancer cell line, ATCC No. CCL-2) and Vero cells (African green monkey kidney cells, normal non-cancer cells, ATCC No. CCL-81), obtained from the American Type Culture Collection (ATCC), were used in this study. Cytotoxic activity against the HeLa and Vero cells was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT, Sigma, USA) assay reported by Mosmann [28]. This cell viability assay is based on the ability of living cells to transform the tetrazolium ring of the MTT dye into a purple-colored formazan through the action of mitochondrial and other dehydrogenases inside the cell. The color intensity yielded by the cell population is directly proportional to the number of viable cells, so viability can be quantified from absorbance measurements. Each test sample was dissolved in dimethylsulfoxide (DMSO) to give a stock solution. Compounds 1-3 were prepared from the stock solutions by serial dilution in RPMI 1640 to give a volume of 100 μL in each well of a 96-well microtiter plate. Each well was then filled with 100 μL of cells at 2 × 10^5 cells/mL. The assay for each concentration of compound was performed in triplicate, and the culture plates were kept at 37 °C with 5% (v/v) CO2 for 24 h. After removing the supernatant of each well and washing twice with PBS, 20 μL of MTT solution (5 mg/mL in PBS) and 100 μL of medium were introduced. After 4 h of incubation, 100 μL of DMSO was added to each well to dissolve the formazan crystals, and the absorbance values at 490 nm were measured with a microplate reader (Bio-RAD 680, USA). The relative cell viability (%) was expressed as the percentage of treated cells relative to the untreated control cells (TC/UC × 100). The rate of cell inhibition was calculated using the following formula: inhibition rate = [1 − (OD_test / OD_negative control)] × 100%. The LC50 values were calculated as the concentration of test sample resulting in a 50% reduction of absorbance compared to untreated cells. Cells treated with 5-fluorouridine + RPMI 1640 served as the positive control, while untreated cells + 1% (v/v) DMSO + RPMI 1640 were used as the negative control.
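A minimal sketch of the two calculations above follows: the inhibition rate from optical densities and an LC50 obtained by interpolating the dose-response. The optical densities are invented for illustration, and log2 interpolation is our choice, not necessarily the authors'.

```python
import numpy as np

# Minimal sketch of the two calculations above: inhibition rate from optical
# densities and an LC50 interpolated from the dose-response. The ODs are
# invented, and log2 interpolation is our choice, not necessarily the authors'.

def inhibition_rate(od_test, od_negative_control):
    return (1.0 - od_test / od_negative_control) * 100.0

conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])         # ug/mL
od_test = np.array([0.95, 0.80, 0.55, 0.30, 0.12])  # treated wells
od_ctrl = 1.00                                      # untreated control

inhib = inhibition_rate(od_test, od_ctrl)           # increases with dose
lc50 = 2 ** np.interp(50.0, inhib, np.log2(conc))   # dose giving 50% inhibition
print(inhib, round(float(lc50), 2))                 # -> ... 4.59
```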
Hemolytic assay
Whole blood (10 mL) from a healthy man was collected into a conical tube containing heparin as an anticoagulant. Erythrocytes were harvested by centrifugation at room temperature for 10 min at 1,000 × g and were washed three times in PBS solution. The top layer (plasma) and the next, milky layer (buffy coat with a layer of platelets on top of it) were then carefully aspirated and discarded. The cell pellet was resuspended in 10 mL of PBS solution and mixed by gentle aspiration with a Pasteur pipette. This cell suspension was used immediately.
For the normal human red blood cells, which were in suspension, cytotoxicity was evaluated as previously described [29]. Compounds 1-3, at concentrations ranging from 32 to 512 μg/mL, were incubated with an equal volume of 1% human red blood cells in phosphate-buffered saline (10 mM PBS, pH 7.4) at 37 °C for 1 h. Tetracycline was tested simultaneously. The non-hemolytic and 100% hemolytic controls were the buffer alone and the buffer containing 1% Triton X-100, respectively. Cell lysis was monitored by measuring the release of hemoglobin at 595 nm with a spectrophotometer (Thermo Scientific, USA). Percent hemolysis was calculated as follows: [(A595 of sample treated with compound − A595 of sample treated with buffer) / (A595 of sample treated with Triton X-100 − A595 of sample treated with buffer)] × 100.
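The hemolysis formula translates directly into code; here is a minimal sketch with invented absorbance values.

```python
# Minimal sketch of the percent-hemolysis formula above; absorbance values
# are invented for illustration.

def percent_hemolysis(a_sample, a_buffer, a_triton):
    return (a_sample - a_buffer) / (a_triton - a_buffer) * 100.0

print(percent_hemolysis(a_sample=0.21, a_buffer=0.05, a_triton=1.25))  # ~13.3
```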
Statistical analysis
Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS, version 12.0). The experimental results were expressed as the mean ± standard deviation (SD). Group comparisons were performed using one-way ANOVA followed by the Waller-Duncan post hoc test. A p value below 0.05 was considered statistically significant.
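For readers without SPSS, the ANOVA step can be reproduced with SciPy; the Waller-Duncan post hoc test has no direct SciPy equivalent, so this minimal sketch (with invented data) covers only the one-way ANOVA.

```python
from scipy import stats

# Minimal sketch of the group comparison described above. The paper used
# one-way ANOVA with a Waller-Duncan post hoc test in SPSS; SciPy has no
# Waller-Duncan implementation, so only the ANOVA step is shown. Data invented.

group_a = [12.1, 11.8, 12.5]
group_b = [15.2, 14.9, 15.6]
group_c = [11.9, 12.2, 12.0]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```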
Identification of the fungus
Macroscopic examination of the isolate revealed that colonies were cottony, developing compact aerial mycelium, at first uniformly white (Fig. 1a), then becoming whitish with pale brown patches. The reverse side of the cultures was whitish, then turned light brown with scattered darker spots that later appeared regularly concentric. Conidiation began in 12-day-old colonies with the formation of spherical, subglobose to ampuliform black stromata, measuring 210-250 × 220-380 μm, arranged in a circle in the Petri dish (Fig. 1a) and containing pycnidia. Watery exudate drops from the pycnidia contained only beta conidia. These were 17-28.5 × 0.9-1.9 μm, unicellular, hyaline, filiform and mostly slightly curved at one end (Fig. 1b).
The cultural and morphological features of strain CAM240 enabled its reliable placement in the genus Phomopsis. There was noticeable morphological similarity to Phomopsis longicolla [30], a species generally known as a soybean pathogen but one that can also be isolated as an endophyte from various other host plants. With reference to the recent revision of the species concept in Phomopsis, specific determination requires a multi-locus analysis of the ITS, tef and ß-tubulin loci [31]. Therefore, the taxonomy of strain CAM240, based only on morphology in this study, was restricted to the generic level.
Chemical analysis
The mycelium from a Petri dish after ten days of fermentation was extracted with 10 mL of ethyl acetate. The obtained extract was submitted to HR-LC-MS and the major compounds were directly identified (Fig. 2). The crude extract (11.60 g) from the large-scale fermentation was first submitted to HR-LC-MS and then chromatographed on a silica gel column (0.04-0.063 mm, 6 cm × 60 cm, 100 g), eluting with cyclohexane, then cyclohexane/ethyl acetate mixtures of increasing polarity, and finally methanol. Fifty-six fractions of 200 mL each were collected and combined according to TLC profile into 17 fractions. Each fraction was monitored by LC-MS, and fractions 7, 10 and 16 were further purified by preparative reverse-phase HPLC to yield three cytochalasins: 18-methoxycytochalasin J (1) (4.1 mg, tR = 9.48 min), isolated as a brown amorphous powder whose molecular formula was determined to be C29…; cytochalasins H (2) and J (3) [34], one of which was the major metabolite; together with alternariol (4) (5.3 mg, tR = 7.55 min), a white powder, HRESIMS m/z 259.06009 [M + H]+ (calculated for C14H11O5, 259.06065) [35]. The chemical structures of the isolated compounds are shown in Fig. 3.
The chemical investigation of the crude extract from the rice-medium culture of Phomopsis sp., harbored in the nut of Garcinia kola, by means of different chromatographic techniques yielded four main compounds. Cytochalasins were the major secondary metabolites, as detected and shown in Fig. 2, and this class of compounds is commonly found in the genus Phomopsis.
Antibacterial activity
The cytochalasins showed different degrees of antibacterial activity against the tested bacterial pathogens (Table 1). Shigella flexneri SDINT was the most sensitive microorganism, while Vibrio cholerae SG24 and V. cholerae PC2 were the most resistant. Ampicillin did not show any antibacterial activity against V. cholerae NB2, V. cholerae PC2, and Shigella flexneri SDINT at concentrations up to 512 μg/mL, while these multi-drug-resistant bacterial strains were found to be sensitive to the cytochalasin metabolites. This finding suggests the antibacterial potency of these compounds, in particular for the treatment of multi-drug-resistant (MDR) bacterial strains. Compounds 1, 2 and 3 showed selective activities, their inhibitory effects being noted on 4/6 (66.7%), 5/6 (83.3%) and 4/6 (66.7%) of the studied microorganisms, respectively. Most of the MBC values were equal to their corresponding MICs, suggesting that killing effects can be expected on the sensitive strains [36].
The present study showed significant antibacterial activity of the cytochalasin compounds against MDR enteropathogenic bacteria, including clinical isolates of toxigenic Vibrio cholerae, the causative agent of the dreadful disease cholera, and Shigella sp., the causative agent of shigellosis. These compounds also had significant antibacterial activity against the Gram-positive bacterium Staphylococcus aureus. Although cytochalasin compounds have been reported to possess interesting activity against a wide range of microorganisms [37], no study had been reported on the activity of 18-methoxycytochalasin J (1) and cytochalasins H (2) and J (3) against these types of pathogenic strains.
Cytotoxic activity
Compounds 1-3 were evaluated for their anticancer activity against human cervical cancer cells (HeLa cells) (Table 2). The lowest LC50 value (corresponding to the most cytotoxic compound) was found with compound 3 (LC50 = 3.66 μg/mL), followed in decreasing order by compound 1 (LC50 = 8.18 μg/mL) and compound 2 (LC50 = 35.69 μg/mL) (Table 2). Interestingly, the cytotoxicity of compound 3 can be considered all the more important in light of the criterion of the American National Cancer Institute (NCI) regarding the cytotoxicity of pure compounds (LC50 < 4 μg/mL) [38]. The data also showed that the tested compounds were more cytotoxic to HeLa cells (LC50 = 3.66-35.69 μg/mL) than to Vero cells (LC50 = 73.88-129.10 μg/mL), indicating that they are less toxic to normal cells. Our results are in agreement with those of Xu et al. [39], who showed the cytotoxic activity of some cytochalasin compounds isolated from the solid-substrate culture of Endothia gyrosa IFB-E023 against the human leukaemia K562 cell line, with IC50 values varying from 1.5 to 28.3 μM.
In the present study, the Selectivity Index (SI) of the active compounds was determined in order to investigate whether the cytotoxic activity was specific to cancer cells or bacterial strains. The SI of a sample is defined as the ratio of the cytotoxicity (LC50 value) on normal non-cancer cells (Vero cells) to that on cancer cells (HeLa cells) or bacterial cells: SI = LC50 on Vero cells / (LC50 on HeLa cells or MIC). Test agents with an SI equal to or higher than ten are considered to have high selectivity towards cancer cells [40]. Apart from compounds 1 and 3 on HeLa cells, the SI values of the tested samples against the HeLa cells and bacterial strains ranged from 0.14 to 3.61 and could be considered poor.
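A minimal sketch of the SI computation follows. The HeLa LC50 values are those quoted above; the per-compound Vero LC50 values are invented, since the text reports only the Vero range (73.88-129.10 μg/mL) without a full mapping.

```python
# Minimal sketch of the SI calculation defined above. HeLa LC50 values are
# those quoted in the text; the per-compound Vero LC50 values are INVENTED,
# since the text only reports the Vero range (73.88-129.10 ug/mL).

lc50_hela = {1: 8.18, 2: 35.69, 3: 3.66}    # ug/mL, from Table 2 as quoted
lc50_vero = {1: 85.0, 2: 95.0, 3: 110.0}    # ug/mL, illustrative only

for c in (1, 2, 3):
    si = lc50_vero[c] / lc50_hela[c]
    print(f"compound {c}: SI = {si:.2f}")
```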
Hemolytic activity
Human red blood cells provide a handy tool for toxicity studies of compounds, because they are readily available, their membrane properties are well known, and their lysis is easy to monitor by measuring the release of hemoglobin [29]. The hemolytic activities of compounds 1-3, and tetracycline on human red blood cells (as a function of sample concentration) are shown in Fig. 4. At the highest concentration tested in this study (512 μg/mL), compounds 1, 3 and tetracycline caused less than 10% hemolysis, while compound 2 caused 20.14% hemolysis.
Conclusions
The chemical study of the ethyl acetate extract of Phomopsis sp. mycelium afforded three known cytochalasins, namely 18-methoxycytochalasin J (1) and cytochalasins H (2) and J (3), together with alternariol (4). Compounds 1, 2 and 3 showed different degrees of antibacterial activity against MDR clinical strains of enteropathogenic bacteria, with low toxicity to human red blood cells and normal Vero cells. These compounds also showed significant cytotoxic properties against human cervical cancer cells. The overall results of this study indicate that the cytochalasin compounds 1-3 isolated from the Phomopsis sp. mycelium could be a clinically useful alternative for the treatment of cervical cancer and of severe infections, in particular those caused by Shigella flexneri and Vibrio cholerae strains resistant to ampicillin.
"Biology",
"Chemistry",
"Environmental Science",
"Medicine"
] |
Modification of Type B Inclusions by Calcium Treatment in High-Carbon Hard-Wire Steel
To investigate the modification of type B inclusions in high-carbon hard-wire steel by Ca treatment, Si-Ca alloy was added to high-carbon hard-wire steel, and the composition, morphology, size, quantity, and distribution of the inclusions were observed. The samples were investigated by scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS). The thermal experimental results showed that the modification effect on inclusions was better in high-carbon hard-wire steel with 0.0053% Al and 0.0029% Ca than in steel with 0.011% Al and 0.0052% Ca; in the former, the inclusions were mainly spherical semi-liquid and liquid CA2, CA, and C12A7. The inclusion size decreased from 3.2 µm to 2.1 µm. The degree of inclusion segregation was reduced in high-carbon hard-wire steels after calcium treatment. The results indicate that the modification of inclusions is conducive to obtaining dispersed inclusions of fine size. The ratio of length to width decreased and tended towards 1 with increasing CaO content in the inclusions; when the CaO content was higher than 30%, the aspect ratio was in the range of 1 to 1.2. The relationship between the activities of aluminum and calcium and the inclusion types at equilibrium in high-carbon hard-wire steel was estimated using classical thermodynamics, and the calculated results were consistent with the experimental results. The thermodynamic software Factsage was used to analyze the effect of aluminum and calcium additions on the type and quantity of inclusions in high-carbon hard-wire steels. The modification law and mechanism of type B inclusions in high-carbon hard-wire steels are discussed.
Introduction
Wire rod rolled from structural steel with a carbon content of not less than 0.6% is called high-carbon hard-wire steel, which is widely used in construction, transportation, and other industries [1-3]. Nonmetallic inclusions affect the mechanical properties and corrosion performance of steel [4]. Compared with low-carbon steel, high-carbon steel has high hardness and low ductility and is more sensitive to nonmetallic inclusions [5,6]. In recent years, the output and quality of high-carbon hard-wire steel produced in China have been greatly improved. However, the breaking rate of the wire is relatively high during the drawing and twisting processes [7]. Controlling the number, size, distribution, and morphology of brittle inclusions is therefore very important in the smelting process of high-carbon hard-wire steel [1].
B-type inclusions (Al2O3 inclusions) are brittle inclusions and tend to accumulate at the nozzle, which affects the stable operation of the production process [8]. The modification of inclusions is a good way to reduce the adverse effect of B-type inclusions on the production and performance of steel [9-11]. Calcium treatment [12-17] can effectively modify Al2O3 inclusions into calcium aluminate inclusions with a low melting point, which can effectively prevent nozzle clogging [18-21]. Calcium alloy is added to liquid steel by powder spraying or wire feeding to promote the transformation of high-melting-point Al2O3 inclusions into plastic or semi-plastic low-melting-point calcium aluminate inclusions [22,23]. An inclusion after calcium treatment is composed of a certain proportion of liquid phase and solid phase, and the composition of the inclusion has a very important effect on castability and on the deformation of the inclusion during rolling. Therefore, it is very important to study the phase composition of inclusions during calcium treatment [24]. Numata et al. [25] studied the effect of calcium content on the modification of inclusions by laboratory experiments. Verma et al. [26] demonstrated that the inclusion size decreased significantly after Ca treatment, mainly from 10-20 µm to 1-2.5 µm; the inclusion size was smallest after 2 min of Ca treatment and then gradually increased. Yuan et al. [27] treated medium- and high-carbon steel with an Al content of 0.025-0.044% with calcium. When the Ca content was less than 10 ppm, the solid fraction of the inclusions reached more than 60%, and they considered the suitable Ca content to be 17-23 ppm. Yoshihiko et al. [28] modified the inclusions in mild steel by adding Ca-Si alloy. They found that the modification effect was better in the steel with a Ca content of 20 ppm, in which the content of CaO in the oxide inclusions was close to 50%. Simpson et al. [29] showed that calcium aluminate inclusions with lower melting points form when the mass ratio w[Ca]/w[Al] in molten steel is larger than 0.11.
Many studies have been carried out on the treatment of Al2O3 inclusions with calcium, but most of them focused on low-carbon steel; limited research has been conducted on the modification of B-type inclusions in high-carbon steel. In order to expand the application of calcium in the production of high-carbon hard-wire steel, the effect of calcium treatment on the quantity, morphology, distribution, and composition of B-type inclusions in molten steel was studied by alloy addition.
Experimental Method
A vacuum induction furnace was used to produce the base metal of the high-carbon hard-wire steel. Table 1 lists the compositions of the raw materials used in the study. The base metal was put into a tubular resistance furnace and heated to 1600 °C. After melting for 30 min, aluminum powder was added and timing started; the time of adding the aluminum powder was recorded as 0 min. After 5 min, calcium alloy was added. After 65 min, the sample was taken out and placed in water for cooling. During the whole smelting process, high-purity argon gas was fed into the tubular resistance furnace. The samples were divided into three groups according to the amounts of aluminum alloy and calcium alloy added to the steel; the compositions of the three groups of samples are shown in Table 2. In sample 1, only aluminum alloy was added for deoxidation, and no calcium alloy was added. A large amount of aluminum and calcium was added in sample 2, and a small amount of aluminum and calcium was added in sample 3. The three groups of samples were processed into metallographic samples of 10 mm × 10 mm × 10 mm, rod samples of Φ5 × 100 mm, and metal chips, respectively. The metallographic samples were ground to a mirror finish with sandpaper of different mesh and emery polishing paste. The composition and characteristics of the inclusions were analyzed by scanning electron microscopy and energy dispersive spectrometry (SEM-EDS). Twenty inclusions were randomly selected from each sample to detect the content of the corresponding chemical elements, and the composition and type of the inclusions were analyzed according to the stoichiometric relationships. One hundred fields of view were selected for each sample, and pictures of the sample surface were taken under the scanning electron microscope (SEM) at 1000× magnification. Image-ProPlus image processing software was used to analyze the size, quantity, coordinate parameters, and other characteristic values of the inclusions.
Composition and Morphology of Inclusions
Inclusions with a low deformation rate induce cracks in high-carbon hard-wire steel during drawing. This is mainly due to the different thermal expansion coefficients of the inclusions and the steel matrix: a radial tensile force is generated in the matrix around the inclusions, which leads to a stress concentration around them. When inclusions take on a spherical shape, the stress concentration around them weakens, which improves the drawing performance of high-carbon hard-wire steel [22].
The composition and morphology of inclusions in the three groups of samples are shown in Figure 1. Figure 1a-c shows the composition and morphology of typical inclusions in sample 1 without calcium treatment. It can be seen that all inclusions were alumina inclusions with irregular morphology. Figure 1d-f shows the composition and morphology of typical inclusions in the calcium-treated sample 2. Almost all inclusions in sample 2 were transformed into calcium-aluminate inclusions after 60 min of calcium treatment, but the morphology of the inclusions was still irregular. The inclusions in sample 3 treated with calcium were mainly calcium aluminate; the content of CaO in the inclusions was higher than that in sample 2, and the morphology of the inclusions was almost spherical. The surface scanning results in Figure 2 also showed that the inclusions in samples 2 and 3 were calcium aluminate, with Ca and Al distributed homogeneously in the inclusions.

Figure 3 shows the inclusion composition distribution of samples 2 and 3. The CaO content in the inclusions of sample 2 was in the range of 5 to 25%, and most of the calcium aluminates were CA2 and CA6. The CaO content in the inclusions of sample 3 ranged from 20 to 50%, and most of the calcium aluminates were CA2 and CA; a small amount of C12A7 was also found in sample 3. According to the phase diagram of CaO-Al2O3 calculated by Factsage, shown in Figure 4, the melting points were 1833 °C for CaO·6Al2O3 (CA6), 1765 °C for CaO·2Al2O3 (CA2), 1604 °C for CaO·Al2O3 (CA), 1455 °C for 12CaO·7Al2O3 (C12A7), and 1539 °C for 3CaO·Al2O3 (C3A). Calcium aluminate that is liquid at 1600 °C is marked by the shaded area in Figure 3, where the CaO content was in the range of 37 to 57%. When the composition of calcium aluminate inclusions was between CA and CA2, the inclusions were semi-liquid at 1600 °C; calcium aluminate inclusions ranging from CA2 to CA6 were solid at 1600 °C. Figure 3 thus indicates that most of the inclusions in sample 2 were solid and those in sample 3 were semi-liquid or liquid, while the inclusions in sample 1 were solid alumina. Previous research [9,30,31] showed that the deformability of an inclusion is directly related to its melting point: the lower the melting point, the better the deformability. The modification of B-type brittle inclusions into low-melting-point plastic inclusions reduces their harmful effects on the production and performance of steel; it also indicates that the inclusions in sample 3 had a good deformation effect.

In order to explore the relationship between inclusion morphology and inclusion composition, the relationship between the CaO content of an inclusion and its aspect ratio was statistically analyzed, as shown in Figure 5. With the increase in calcium oxide content in the calcium aluminate inclusions, the ratio of length to width approached 1. When the content of CaO was higher than 30%, the aspect ratio was in the range of 1 to 1.2, indicating that the inclusions were approximately spherical. The main reason is that when the calcium oxide content of the calcium aluminate inclusions was higher than 30%, the inclusions were located in the solid-liquid two-phase region or the liquid phase region.

Figure 6 shows the inclusion size distribution in the high-carbon hard-wire steels. Among the three groups of samples, the inclusion size of sample 1 was the most widely distributed, and the diameter of the largest inclusion was more than 10 µm. The size of the largest inclusion was less than 8 µm in the samples treated with calcium, i.e., samples 2 and 3. Inclusions of 1-3 µm were the most numerous in all three groups of samples. According to the analysis of the average size and number of inclusions in Figure 7, the average size of inclusions in calcium-treated steel was significantly smaller than that in high-carbon hard-wire steel without calcium treatment. The inclusion size in sample 3 was obviously smaller than that in sample 2, indicating that better modification of inclusions is beneficial to their refinement. In addition, the number of inclusions in sample 2 was the largest, while the number of inclusions in sample 3 was the least.
Surface Density Distribution of Inclusions

To describe the distribution of inclusions directly, the area percentage of inclusions at different locations on the surface of the steel sample was statistically obtained according to the two-dimensional coordinates of the inclusions and the area of each inclusion, as shown in Figure 8. Area density represents the percentage of inclusion area relative to sample area, and it was obtained by Equation (1):

A = (A_inclusion / A_steel) × 100% (1)

where A represents the area density of inclusions on the surface of the steel sample, A_inclusion represents the total area of all inclusions in the observed region, and A_steel is the area of steel in the observed region, which was 0.0286 mm² in this study.

It can be observed that the distribution of inclusions was not uniform in the high-carbon hard-wire steel without calcium treatment. The density of inclusions per unit area on the steel surface was high in sample 1: the inclusions aggregated in the red region in Figure 8a, and the maximum density of inclusions reached 1.4%. The density of inclusions in the calcium-treated steels was less than 0.2%, and the distribution of calcium-aluminate inclusions was more uniform than that of alumina inclusions. In addition, the segregation area of inclusions in sample 3 was smaller than that in sample 2, indicating that the inclusions tend to distribute more homogeneously in steel with a better modification effect.

In conclusion, the inclusions were modified after calcium treatment of high-carbon hard-wire steel. The modification of inclusions in sample 3 was better, where the CaO content of the inclusions was higher and there were more liquid and semi-liquid inclusions. Furthermore, dispersed inclusions of fine size were obtained in sample 3 because the liquid calcium aluminate inclusions have good wettability.
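To make Equation (1) concrete, the following minimal sketch computes an area-density map from per-inclusion coordinates and areas on a grid of fields; the data and the 10 × 10 mm field layout are synthetic assumptions, not the paper's measurements.

```python
import numpy as np

# Minimal sketch of the area-density map behind Figure 8. Each inclusion is
# assumed to be reported as (x, y, area); positions, areas, and the 10 x 10
# grid are synthetic. Per Equation (1), density = inclusion area / field
# area x 100 (the observed-region area in the paper was 0.0286 mm^2).

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 10.0, n)                     # inclusion x positions, mm
y = rng.uniform(0.0, 10.0, n)                     # inclusion y positions, mm
area = rng.lognormal(0.5, 0.6, n) * 1e-6          # inclusion areas, mm^2

nbins = 10
cell_area = (10.0 / nbins) ** 2                   # grid-cell area, mm^2
total_area, _, _ = np.histogram2d(
    x, y, bins=nbins, range=[[0.0, 10.0], [0.0, 10.0]], weights=area)
density_percent = total_area / cell_area * 100.0  # Equation (1) per cell
print(density_percent.max())                      # peak local area density, %
```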
Discussion
According to the detected inclusion compositions, it is concluded that the chemical reactions occurring during the formation of calcium aluminate inclusions in hard-wire steel are reactions (2)-(7). The activity of a solute in molten steel can be calculated according to Equation (8):

a_i = f_i [%i] (8)

where a_i, f_i and [%i] are the activity, activity coefficient, and mass fraction of solute i in steel, respectively. The activity coefficient of a solute in molten steel was calculated by the Wagner model [34,35]:

lg f_i = Σ_j e_i^j [%j] + Σ_j γ_i^j [%j]^2 (9)

where i and j represent different solutes, e_i^j is the first-order interaction coefficient, and γ_i^j is the second-order interaction coefficient. Combined with the data in Tables 2-4, Equations (8) and (9) were used to calculate the activity coefficients and activities of Ca and Al in samples 2 and 3, listed in Table 5. According to the thermodynamic data of chemical reactions (3)-(7) in molten steel, the classical thermodynamic calculation method was adopted to obtain the equilibrium relations of the various calcium aluminates, as shown in Figure 9. According to Figure 9, when a_Al was more than 0.0030, the type of calcium-aluminate inclusion depended mainly on the calcium activity in the steel. With increasing calcium activity, the inclusions gradually underwent the following transformation: Al2O3 → CA6 → CA2 → CA → C12A7 → C3A. From the activities of calcium and aluminum in samples 2 and 3 given in Table 5, two points in Figure 9 were obtained. The thermodynamic calculations showed that the equilibrium inclusion types were CA6 and CA2 in sample 2, and CA and C12A7 in sample 3, consistent with the experimental test results.

Table 4. Second-order interaction coefficients of elements with Al and Ca in molten steel (1873 K).

| i \ j | Ca | Al      | O       |
| Ca    | -  | -       | −36,000 |
| Al    | -  | −0.0284 | -       |
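A minimal sketch of how Equations (8) and (9) combine follows. The first-order interaction coefficients below are placeholders rather than the Table 3 values, which are not reproduced in this text; only the Ca-O second-order coefficient comes from Table 4.

```python
# Minimal sketch of Equations (8)-(9). The first-order interaction
# coefficients e are PLACEHOLDERS (Table 3 is not reproduced in this text);
# only the Ca-O second-order coefficient (-36,000) comes from Table 4.

e = {("Ca", "O"): -780.0, ("Ca", "Ca"): -0.002,   # placeholder e_i^j values
     ("Al", "O"): -6.6,   ("Al", "Al"): 0.045}
gamma = {("Ca", "O"): -36000.0}                   # gamma_i^j, Table 4

comp = {"O": 0.0010, "Al": 0.0053, "Ca": 0.0029}  # mass %; Al/Ca as in sample 3

def activity(i, comp):
    # Equation (9): lg f_i = sum_j e_i^j [%j] + sum_j gamma_i^j [%j]^2
    lg_f = sum(e.get((i, j), 0.0) * wj for j, wj in comp.items())
    lg_f += sum(gamma.get((i, j), 0.0) * wj ** 2 for j, wj in comp.items())
    return 10.0 ** lg_f * comp[i]                 # Equation (8): a_i = f_i [%i]

for i in ("Ca", "Al"):
    print(i, f"{activity(i, comp):.3e}")
```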
To study the modification of type B inclusions in high-carbon hard-wire steel, Factsage 7.2 software was used in combination with the high-carbon hard-wire steel composition in Table 2. The effect of the amounts of Ca and Al added on the composition and quantity of inclusions in the high-carbon hard-wire steel was calculated and is illustrated in Figure 10. It indicates that the inclusions were alumina, solid calcium aluminate, liquid oxide, and calcium silicate in high-carbon hard-wire steel for Ca additions ranging from 0 to 0.0080% and Al additions ranging from 0.0050% to 0.0150%. When the amount of added aluminum was 0.0050%, the alumina inclusions gradually turned into liquid calcium aluminate with increasing calcium addition; a small amount of solid calcium aluminate existed at calcium additions between 0.0020% and 0.0021%. As long as the calcium addition was less than 0.0070%, the amount of liquid oxide increased with increasing calcium addition. When the calcium addition was greater than 0.0070%, the amount of liquid oxide decreased with increasing calcium addition, being transferred into calcium silicate.

When the addition amount of aluminum was 0.01% or 0.015%, the content of CaO in the liquid calcium aluminate increased with increasing calcium content, and the inclusions gradually underwent the following transformation: Al2O3 → CA6 → CA2 → liquid calcium aluminate. The appropriate amount of Ca addition to modify B-type inclusions into calcium aluminates is smaller in high-carbon hard-wire steel with a lower aluminum content.
Conclusions

The mechanism and law of the modification of type B inclusions in high-carbon hard-wire steel by Ca treatment were studied by thermal experiments and thermodynamic calculations. The main conclusions are as follows:

(1) Ca treatment has a great effect on the composition and morphology of type B inclusions in high-carbon hard-wire steels. The modification effect on inclusions was better in high-carbon hard-wire steel with 0.0053% Al and 0.0029% Ca than in steel with 0.011% Al and 0.0052% Ca; in the former, the inclusions were mainly spherical semi-liquid and liquid CA2, CA, and C12A7. The experimental results are in good agreement with the classical thermodynamic calculations.

(2) The CaO content of calcium aluminate inclusions directly affects their morphology. The ratio of length to width decreases and tends towards 1 with increasing CaO content in the inclusions. When the CaO content was higher than 30%, the aspect ratio was in the range of 1 to 1.2, indicating that the inclusions were approximately spherical. The main reason is that when the calcium oxide content of the calcium aluminate inclusions is higher than 30%, the inclusions are located in the solid-liquid two-phase region or the liquid phase region.

(3) The size distribution of inclusions in high-carbon hard-wire steel became narrower, the inclusion size was smaller, and the inclusions were distributed more uniformly after calcium treatment, indicating that the modification of inclusions is conducive to obtaining dispersed inclusions of fine size. Compared with the high-carbon hard-wire steel without calcium treatment, when the calcium content was 0.0029%, the average inclusion size decreased from 3.2 µm to 2.1 µm, and the inclusions showed little tendency to segregate.

(4) The thermodynamic calculations based on Factsage indicate that when the added amount of aluminum was 0.0050%, the alumina inclusions gradually changed into liquid calcium aluminate inclusions with increasing calcium addition in the high-carbon hard-wire steel. When the added amount of aluminum was 0.01% or 0.015%, the content of CaO in the liquid calcium aluminate increased with increasing calcium addition, and the inclusions gradually underwent the following transformation: Al2O3 → CA6 → CA2 → liquid calcium aluminate. The appropriate amount of Ca addition to modify B-type inclusions into calcium aluminates is smaller in high-carbon hard-wire steel with a lower aluminum content.
Author Contributions: Conceptualization, L.W.; methodology, Z.X.; software, L.W. and Z.X.; writing-original draft preparation, L.W. and Z.X.; writing-review and editing, L.W. and C.L. All authors have read and agreed to the published version of the manuscript.
"Materials Science"
] |
Modernization of family farms improves the sustainability of food security for farm households in Burkina Faso
Family farms are poorly modernized in Burkina Faso despite their predominance in the country’s agriculture and their major contribution to national food production. Convincing evidence of the contribution of family farm modernization to food security is needed to support advocacy. This study used data from recent national longitudinal surveys and Cox semi-parametric regression methods to explore the effect of factors of modernization on the food security of farm households in Burkina Faso. The results showed that the training of agricultural workers, ownership of traction animals, and use of improved seeds reduced the risk of food-secure households falling into food insecurity by 22.8, 21.6, and 14.9%, respectively. These three factors significantly determine the stability of households’ food security, suggesting that the modernization of family farms could contribute to the prevention of food insecurity in Burkina Faso. A key strength of this study is that it was able to capitalize on the wealth of these data, which come from national surveys that are representative of farm households at the provincial level, longitudinal and prospective, making it possible to track the same households over time, at an annual frequency.
INTRODUCTION
Since the colonial period, the Sudano-Sahelian populations have experienced recurrent food crises, making food insecurity a historical marker of their societies and spaces. These populations have been exposed to several forms of food insecurity, ranging from seasonal to persistent (Janin, 2010). For example, the famine of 1972/1973, which led to the creation of the Standing Inter-State Committee for Drought Control in the Sahel (CILSS), was one of the most serious food crises experienced by the Sudano-Sahelian populations in the twentieth century (Courade et al., 2000;Bonnecase, 2010). Even today, food insecurity remains an acute problem in Sahelian and West African countries. According to the Cadre Harmonisé analyses of the food and nutritional situation in the Sahel and West Africa, more than nine million people were experiencing a food crisis between October and December 2019, including 620,000 considered to be in food emergency status (RPCA, 2019). These situations of food insecurity, which characterize in particular the Sahelian populations, who are more vulnerable to food insecurity (Janin, 2006), are due on one hand to rainfall deficits and land degradation that hamper agricultural and fodder production in the region, and on the other, to insecurity and intercommunity conflicts that prevent the populations from accessing the food produced (Ouédraogo et al., 2007;RPCA, 2019).
As a Sahelian and essentially agricultural country, Burkina Faso is not exempt from the food insecurity situation that prevails in the West African sub-region. In this country, food insecurity is a matter of constant concern and is part of the daily life of many households. For example, in 2008, 83.5% of households felt food-insecure, of which 30% were moderately food-insecure and 5.5% highly food-insecure (DGPER, 2009). This sense of food insecurity is reflected in household food consumption patterns. In 2012, about 57% of Burkinabe households had poor and limited food consumption, mainly dominated by cereals (Burkina Faso, 2012). According to the same source, food insecurity affected more than 35% of households in the 170 communes declared at risk of food insecurity in 2012. Also in 2012, the United Nations' Food and Agriculture Organization (FAO) estimated the number of food-insecure people in Burkina Faso at 1.8 million (FAO, 2012). More recently, between October and December 2019, more than 1.2 million food-insecure people needed immediate assistance in Burkina Faso (RPCA, 2019). These figures show that Burkina Faso faces a chronic challenge in ensuring sustainable food and nutrition security for its population.
To meet this challenge, family farms can play an important role in addressing the problems of food insecurity. Indeed, the contribution of these family farms is essential to the food supply of the Burkinabe population. For example, the demand for sorghum and millet is fully met by national production, which comes predominantly from family farms; these products represent about 66% of national cereal supplies (Zoundi, 2012; FEWS NET, 2017). In addition, family farms supply the country's major cities with fresh produce (Robert et al., 2018). These family farms have also proven their capacity to supply the national market with locally-produced broiler chickens (Ouédraogo and Zoundi, 1999). Despite this contribution to the country's food supply, Burkina Faso's family farms are poorly modernized, as is its agriculture overall, being characterized by low mechanization and low consumption of agricultural inputs. In 2009, the proportion of farms using a tractor was 0.2%, and the amount of fertilizer used on arable land was 9.13 kg/ha in Burkina Faso, compared to 10.46 kg/ha in sub-Saharan Africa and 122.13 kg/ha globally (MAFAP, 2013). This poor modernization of agriculture is most pronounced at the level of small family farms with a surface area of three hectares or less. At this level, Taondyande's (2018) analysis of the production potential of family farms in Burkina Faso is very illustrative.
That analysis revealed that in the 2016/2017 agricultural season, the dose of mineral fertilizer used was 12 kg/ha for very small farms and 19 kg/ha for small farms, while it was 53 kg/ha for large farms. Furthermore, that analysis showed that improved seeds were not much used in Burkina Faso. For the group of very small and small farms, the rate of use of improved seeds was about 1 kg/ha on average compared to 6.6 kg/ha for large farms, which is far below the required rate of 15 kg/ha.
On the strength of this observation, the country has embarked on a process of modernizing its agriculture, which is heavily dominated by family farms (Burkina Faso, 2015a). Family farming is thus increasingly taken into account in this modernization process. The authorities' commitment to the modernization of family farms is reflected in the national food and nutritional security policy, which takes into consideration family-based agriculture and the development of family farms (Burkina Faso, 2013). The agro-sylvo-pastoral, fisheries, and wildlife policy law adopted in 2015 also testifies to this commitment of the Burkinabe authorities to the modernization of family farms. For example, article 116 of this law stipulates that: "Mechanization in the agro-sylvo-pastoral, fisheries, and wildlife sectors must be adapted and accessible to family farmers" (Burkina Faso, 2015b, p. 54, authors' translation). The successful implementation of such a strategy to modernize family farms must be supported by convincing evidence of the positive impact of their modernization on food security. This study therefore sought to test the hypothesis that modernizing family farms can lessen the risk of households becoming food-insecure.
Over the past two decades, much work has been done on the issue of food in developing countries, particularly in sub-Saharan Africa (Courade et al., 2000;Babatunde et al., 2007;Ouédraogo et al., 2007;Coulibaly et al., 2008;Beyene and Muche, 2010;Janin, 2010;Yabile, 2011;Zoundi, 2012;Ndobo and Sekhampu, 2013;Gebrehiwot and van der Veen, 2015;Bekele, 2017;Feyisa, 2018). Of these studies, only a few have sought to identify factors associated with food security or insecurity in sub-Saharan Africa, and those have focused mainly on Ethiopia (Beyene and Muche, 2010;Gebrehiwot and van der Veen, 2015;Bekele, 2017;Feyisa, 2018). Using econometric models, these studies confined themselves to the identification of factors associated with food security (or food insecurity) status, but without establishing causal links between these factors and food security. This was due in particular to the cross-sectional nature of the data used in these studies, which did not allow causal links to be established between the phenomenon under study and variables that evolve over time. In contrast to previous studies, the present study used recent longitudinal data (panel data) with appropriate analyses (biographical analyses) to assess the impact of family farming modernization factors in Burkina Faso on households' likelihood of not becoming food-insecure.
LITERATURE REVIEW
Food security is a complex concept that has evolved considerably over time. In 1996, the World Food Summit defined food security in these terms: "Food security exists when all people, at all times, have physical and economic access to sufficient, safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life" (FAO, 1996;FAO, 2008, p.1). This definition, widely used today, has four main dimensions that must be applied simultaneously to achieve food security objectives: physical availability of food, economic and physical access to food, food utilization, and stability of the other three dimensions over time (FAO, 2008). The complexity of the concept of food security has resulted in the existence of several conceptual frameworks that attempt to better explain the linkages between the different dimensions of food security and related concepts, such as ecological, social, economic, and political aspects. These associated concepts contribute to the overall understanding of food security by shedding light on the choices and issues that determine the availability of the food that people need and want (FAO, 2011;Ndobo and Sekhampu, 2013).
The relationship between food security and the modernization of family farms can be analyzed in terms of the first dimension of food security, which concerns food supply. This supply is determined by the level of food production, size of reserves, and net trade (FAO, 2008;Burkina Faso, 2012). Generally speaking, agricultural modernization can be understood as a modification of agricultural production conditions aimed at improving not only the quantity of production, but also the productivity of the various factors (capital, labor, land) involved in agricultural production (Perrier-Bruslé, 2009). Applying these changes at the family farm level is likely to result in better coverage of household food needs in developing countries. Advocates of family farming believe that profound changes in the production conditions of family farms in developing countries are key to ensuring food security in these countries (Zoundi, 2012;Taondyande, 2018).
In the literature, the relationship between the factors of agricultural modernization and food security has thus far been inadequately examined. Existing research can be divided into two groups. The first includes studies that directly investigated the factors associated with food security or insecurity; the second group consists of studies that established a link between modernization and agricultural productivity based on the hypothesis that productivity has an influence on food security. The studies that investigated factors associated with food security or insecurity have shown that modern technologies, such as fertilizers and improved seeds, have an influence on the food security of small family farms. In south-western Ethiopia, Feyisa (2018) observed at the bivariate level that households using improved seeds were more food-secure than those not using them. Also in Ethiopia, Bekele (2017) noted in Wolayta that access to improved seeds helped diversify and increase food production of rural households, with a positive impact on food security. With regard to fertilizer, its impact on food security is unclear. While Beyene and Muche (2010) showed that the use of chemical fertilizer positively influenced food security in central Ethiopia, Feyisa (2018) observed, on the contrary, that the amount of fertilizer used was negatively associated with food security. However, studies have shown that fertilizer use has a positive impact on agricultural productivity in Central Africa (Yakete-Wetonnoubena and Mbetid-Bessane, 2019) and on global agricultural production (Roberts, 2009).
In addition to modern technologies, some studies have found other variables of agricultural modernization, such as access to agricultural credit and the use of water and soil conservation (WSC) techniques, to be explanatory factors for food security (Beyene and Muche, 2010;Gebrehiwot and van der Veen, 2015). For example, in the Tigray region of northern Ethiopia, Gebrehiwot and van der Veen (2015) assessed the impact of a food security program based on financial loans. That program provided credit to poor households for a range of agricultural activities and training. Their study showed that the program had a positive effect, in that it improved household dietary caloric intake by 772.19 kcal/day per adult. Regarding WSC techniques, Beyene and Muche (2010) showed that these significantly influenced the food security of rural households in central Ethiopia. Households practicing at least one WSC technique were 3.5 times more likely to be food-secure than those practicing none.
Other studies have shown that the modernization of family farms can influence household food security through the increased agricultural yield generated. For example, animal traction, often presented as the driving force behind the modernization of family farming in sub-Saharan Africa, accelerates the execution of cultivation operations, thereby increasing the area under cultivation and, by extension, the quantities produced (Havard et al., 2009). Yabile (2011) showed that ownership of agricultural equipment was associated with lower food vulnerability of populations in four regions of Côte d'Ivoire. In Togo, Saragoni et al. (1992) showed that applying a mineral fertilizer on a variable crop succession increased the productivity of degraded soils by restoring the physical properties of these soils. Similarly, in northern Burkina Faso, an experimental study conducted by Sawadogo et al. (2008) showed that zaï, a WSC technique, increased crop yields and enhanced the value of eroded land. Using zaï and compost enhanced with Burkina phosphate, these authors obtained yields of 1200 kg/ha for sorghum on crusted soils, whereas the most fertile land in the same region usually produces barely 800 kg/ha under normal rainfall conditions. In general, this brief review of the literature shows that modernization of family farms can contribute to achieving food security objectives in developing countries.
Data sources
The data used in this study are from the Burkina Faso Permanent Agricultural Surveys (EPA) series. The EPA is a panel survey. The present study used data from the latest panel for the 2013/2014 to 2016/2017 crop years. The sample for this panel was obtained through a two-stage random draw and stratified by province and type of producer (small producers from low-potential villages, small producers from high-potential villages, large producers from low-potential villages, and large producers from high-potential villages). At the first stage, 1,759 administrative villages were drawn with a probability proportional to the size of the villages. At the second stage, 5,297 agricultural households were selected by simple random sampling without replacement. The sample of households tracked fluctuated slightly over the period: it consisted of 5,297 farm households in 2014, 5,014 in 2015, 5,079 in 2016, and 5,165 in 2017. This sample instability is related to the entry and exit of some households in the survey clusters. Of all the households tracked, 4,943 were food-secure at least once between 2014 and 2017.
The geographical scope of this panel is all agricultural households in Burkina Faso except those in the urban communes of the following 12 cities: Ouagadougou, Bobo Dioulasso, Banfora, Koudougou, Tenkodogo, Kaya, Fada N'gourma, Po, Gaoua, Dori, Dédougou and Ouahigouya. Each EPA round consists of three distinct phases: the first phase involves enumerating household members and updating basic information. The second phase is for crop forecasting, stock estimates, and production utilization. The third phase concerns the acquisition and the use of agricultural inputs, the estimation of harvests from yield squares and the assessment of the level of food security.
Study variables
The dependent variable in this study was the length of time (in years) that a farm household was food-secure. The food-secure or food-insecure status of the household was thus a key variable. It was captured from the household consumption score. This indicator, based on the number of days of consumption of eight food groups during the past week, is a good indicator of household access to a sufficiently energetic diet (Leroy et al., 2015). The eight food groups used are: main foods (rice, corn, tuber, ...); peas and lentils; vegetables; fruits; meat and fish; milk; sugar; and oil. Weights determined by the World Food Programme (WFP) ranging from 0.5 to 4 were assigned to each food group. The household food consumption score is calculated by multiplying, for each food group, its frequency of consumption during the last seven days by its food weight, and then summing the weighted scores obtained. The score classifies households into two groups: those with a score equal to or less than 35 are in the food-insecure group, and those with a score of more than 35 are in the food-secure group. The independent variables of interest were: agricultural worker training, measured by the presence of at least one trained agricultural worker in the household; access to agricultural credit, measured by the presence in the household of at least one member who had received agricultural credit; membership in farmers' organizations, measured by the presence of at least one agricultural worker in the household who was a member of a farmers' organization; traction animal ownership, measured by the presence of at least one traction animal in the household; the use of WSC techniques, measured by the presence of at least one household plot being cultivated using WSC techniques; fertilizer use, measured by the use of fertilizer (urea, phosphate, NPK) on at least one household plot; and the use of improved seeds, operationalized by the use of improved seeds (maize, sorghum, fonio, yam, etc.) on at least one household plot. These variables were dichotomous, with the modalities being "yes", if the household possessed the factor, and "no", if otherwise.
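For illustration, the sketch below computes a household food consumption score from seven-day consumption frequencies and applies the 35-point cutoff used in this study. The weights shown are the standard WFP values and the aggregation follows the usual WFP summation; the group names, the exact weights used in the EPA questionnaire, and the example household are assumptions for illustration only.

```python
# Minimal sketch: household food consumption score (FCS) and food-security status.
# Weights follow the standard WFP scheme (an assumption; the EPA questionnaire may differ).
WFP_WEIGHTS = {
    "staples": 2.0,      # rice, maize, tubers, ...
    "pulses": 3.0,       # peas, lentils
    "vegetables": 1.0,
    "fruits": 1.0,
    "meat_fish": 4.0,
    "milk": 4.0,
    "sugar": 0.5,
    "oil": 0.5,
}

def food_consumption_score(days_consumed: dict) -> float:
    """days_consumed: food group -> number of days (0-7) it was consumed in the past week."""
    return sum(WFP_WEIGHTS[g] * min(days_consumed.get(g, 0), 7) for g in WFP_WEIGHTS)

def is_food_secure(days_consumed: dict, cutoff: float = 35.0) -> bool:
    """Households scoring above the cutoff are classified as food-secure."""
    return food_consumption_score(days_consumed) > cutoff

# Hypothetical household: cereals every day, pulses twice, vegetables four times, oil daily.
example = {"staples": 7, "pulses": 2, "vegetables": 4, "oil": 7}
print(food_consumption_score(example), is_food_secure(example))  # 27.5, False
```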
These variables of interest were first tested in a survival model before being taken into account in the analyses. Those tests led to the removal of access to agricultural credit from the analyses. In fact, this variable was highly correlated with agricultural worker training, as more than 70% of farm households with access to agricultural credit had at least one trained agricultural worker. As such, agricultural worker training captured the effect of access to agricultural credit. Subsequently, the six variables retained were used to create a composite variable called "degree of agricultural modernization", which was used to test the combined effect of factors of modernization on food security. This variable, also used in the analyses, comprised seven modalities ranging from zero (0) for households with no factors of modernization to six for those with all six of the selected factors of modernization.
The other explanatory variables used to control the effect of factors of modernization were: cotton cultivation (yes, no); agro-ecological region (East, Sahel, Center, North-West, West); rainfall (in mm); sex of the head of household (male, female); age group of the head of household (under 35 years, 35-49 years, 50-59 years, 60 years and over); education of the head of household (educated, uneducated); area per agricultural worker (less than 1 hectare, 1 to less than 3 hectares, 3 to less than 6 hectares, 6 hectares and over); and household size (under 8 persons, 8 to 12 persons, and more than 12 persons). The agro-ecological regions, which correspond to the five environmental and agricultural research regions of the Institute of the Environment and Agricultural Research (INERA) of Burkina Faso, were used to control the effect of biophysical factors such as soil fertility. The use of area per agricultural worker in the analyses is justified by the fact that family farms are not homogeneous and the impact of modernization factors can vary according to the different types of family farms.
Analysis methods
To assess the impact of agricultural modernization on food security, this study used Kaplan-Meier life tables and Cox semi-parametric regression. These biographical methods were chosen because of the longitudinal nature of the data. Their implementation was based on a conceptualization of farm households' transition from food security to food insecurity, which is important to explain. A household is at risk of experiencing the event (food insecurity) from the moment that it is in a food security situation. Thus, the observation began as of the date on which the household was first food-secure and continued until occurrence of the event (food insecurity). Households that did not experience the event by the observation end date (2017) were considered right-censored. Thus, censoring occurred if, at the date of the last survey, the household had not yet experienced a situation of food insecurity. The two types of observation exits (occurrence of event, date of survey) were the only ones considered in this study. This design excluded from the analyses households that had never experienced food security during the period. Consequently, these biographical analyses focus on the 4,943 households that experienced food security at least once between 2014 and 2017.
Kaplan-Meier's life table method, which describes events evolving over time, was used to explore the stability of households' food-secure status based on the factors of agricultural modernization. Such tables are used to construct curves representing time distribution before the occurrence of an event (Bocquier, 1996); in this case, a household's becoming food-insecure. Significance tests (logrank tests) were conducted to verify whether the differences observed between households using a factor of modernization and those not using the same factor were significant. Cox semi-parametric regression was used to measure households' risk of falling into food insecurity according to the factors of agricultural modernization. This regression model calculated the effects of factors of modernization and control variables on the annual risk of falling into food insecurity. A significance threshold of 5% was used in this study.
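A minimal sketch of how such biographical analyses could be run is given below, assuming a household-level table with a duration (years from the first observed food-security situation to the event or to censoring), an event indicator (1 if the household became food-insecure, 0 if right-censored), and the binary modernization factors. The lifelines package, the file name, and the column names are illustrative assumptions, not the software or data layout actually used by the authors.

```python
# Minimal sketch of the biographical analyses (Kaplan-Meier curves, log-rank test, Cox regression),
# assuming a pandas DataFrame with one row per household:
#   duration -> years from first observed food security to food insecurity or censoring
#   event    -> 1 if the household became food-insecure, 0 if right-censored (still secure in 2017)
#   traction_animal, improved_seeds, trained_worker -> 0/1 modernization factors (assumed names)
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("epa_panel_households.csv")  # hypothetical file name

# Kaplan-Meier estimate for one group, plus a log-rank test between groups.
with_animal = df[df["traction_animal"] == 1]
without_animal = df[df["traction_animal"] == 0]
kmf = KaplanMeierFitter()
kmf.fit(with_animal["duration"], event_observed=with_animal["event"], label="traction animal")
test = logrank_test(
    with_animal["duration"], without_animal["duration"],
    event_observed_A=with_animal["event"], event_observed_B=without_animal["event"],
)
print(test.p_value)

# Cox semi-parametric regression: hazard of falling into food insecurity.
cph = CoxPHFitter()
cph.fit(df[["duration", "event", "traction_animal", "improved_seeds", "trained_worker"]],
        duration_col="duration", event_col="event")
cph.print_summary()  # hazard ratios; e.g. an HR of ~0.78 would correspond to ~22% lower risk
```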
Evolution of food security and factors of agricultural modernization
This analysis covered all households tracked between 2014 and 2017, whether they were food-secure or not. It showed that, over the period 2014-2017, the proportion of food-secure farm households declined steadily, from 83.1% in 2014 to 68% in 2017 (Figure 1). The decline in the proportion of food-secure households worsened over that period: the difference in proportion went from 2.1% between 2014 and 2015, to 5.8% between 2015 and 2016, and 7.2% between 2016 and 2017. These proportion differences are statistically significant from one year to the next at the 5% threshold. These results suggest deterioration in the food status of farm households in Burkina Faso over the period studied. This situation can be related to the insecurity that the country has been experiencing since 2015. According to the Food Crisis Prevention Network, the security situation has aggravated food insecurity and undermined the livelihoods of people in Sahelian countries (RPCA, 2019). Figure 2 presents the evolution of the factors of farm modernization, based on the proportion of households with access to these factors. The curves show that the factors of agricultural modernization changed little over the 2014-2017 period. For all factors, the differences between the highest and lowest proportions over the period are below 8%, ranging from 2.5% for the use of improved seeds to 7.1% for membership in a farmers' organization. Moreover, the different factors of modernization evolved discontinuously, except for fertilizer use, which increased steadily over the period studied. For example, the proportion of households using WSC techniques and that of households using improved seeds dropped continuously between 2014 and 2016, and then increased slightly in 2017. The same was true for access to credit, which declined between 2014 and 2015 before increasing. While the proportion of households with agricultural workers who were members of farmers' organizations rose steadily from 2015 onwards, it remained below its initial level of 40% in 2014.
Factors of modernization and household survival in food security
The analysis in this section focused on farm households' stability in food security by measuring the time elapsed between the first observed food security situation from 2014 onwards and the moment when these households became food-insecure. Overall, nearly 41% of households became food-insecure during the period 2014-2017. The curves constructed from the Kaplan-Meier estimators illustrate the timelines of household food security status and the differences according to the factors of agricultural modernization. Figure 3a shows that households with no trained agricultural worker became food-insecure at a faster rate than those with at least one trained agricultural worker. In 2015, 5.3% of households with no trained agricultural worker fell into food insecurity, compared to 3% for those with at least one trained agricultural worker. These proportions were 15.3 versus 11.1% in 2016 and 43.5 versus 35.4% in 2017. The significance test for this factor of modernization showed the difference between these two household groups to be significant at the threshold of 1‰.
The curves in Figure 3b show that households with no agricultural workers belonging to a farmers' organization became food-insecure faster than those with at least one worker who did. However, the proximity of the two curves suggests these two groups of households fell into food insecurity in much the same way. This was confirmed by the test of significance for farmers' organization membership, which turned out to be non-significant. In contrast to that factor, ownership of traction animals contributed positively to the stability of food-secure households. Figure 3c indicates that households that owned at least one traction animal fell into food insecurity less quickly than those without a traction animal. For example, in 2016, 13.2% of households that owned at least one traction animal became food-insecure, compared to 29.6% of households with none. The difference between these two groups of households was significant at the threshold of 1‰. Figure 3d shows that households that used WSC techniques fell into food insecurity slightly faster than those that did not use them. However, the difference between these two groups was not significant at the 5% threshold. Similarly, Figure 3e indicates that households that used fertilizer fell into food insecurity slightly faster than those that did not use fertilizer. However, the significance test showed the observed difference was not statistically significant. With regard to the use of improved seeds, the results showed that households that used them fell into food insecurity less quickly than those that did not (Figure 3f). The proportions of households falling into food insecurity were 4.5, 14.4, and 41.2% in 2015, 2016, and 2017 respectively for households not using improved seeds, compared to 4.2, 11.1, and 38% respectively for those using them.
Effects of factors of modernization on household food security
Analyses in this section focused on factors of modernization that might explain a household's risk of becoming food-insecure based on Cox semi-parametric regression. Table 1 presents these results. Model 1 estimated the combined effect of the factors of modernization, while Model 2 estimated each factor's net effect on households' food security stability. The results of Model 1 showed that the net effects of the degree of agricultural modernization on the food security of farm households were significant at the 1% threshold. Thus, households with one or more factors of modernization were 41.5 to 57.3% less likely to become food-insecure than were households with none. Furthermore, the observation of relative risks indicated a variation in the effects of the degree of agricultural modernization, from the effect of a single factor to the cumulative effect of the six factors of modernization. Analysis of the results of Model 2 showed that the presence of at least one trained agricultural worker, the ownership of traction animals, and the use of improved seeds significantly determined the stability of households' food security. Thus, households with at least one trained agricultural worker were 22.8% less likely to become food-insecure than households with none. Similarly, households with at least one traction animal were 21.6% less likely to become food-insecure than those with none, and households that used improved seeds were 14.9% less likely to become food-insecure than households that did not use them. On the other hand, the results of Model 2 showed that membership in farmers' organizations, the use of WSC techniques, and the use of fertilizer did not determine households' food security stability at the 5% threshold. These factors of agricultural modernization had no significant effect even in the raw model.
Effects of other variables on household food security
The results of Model 2 showed that cotton cultivation, agro-ecological region, and household size were determining factors in household food security. Cotton cultivation had a negative effect on households' food security stability. Households that grew cotton were 16% more likely to become food-insecure than those that did not. Households in the Sahel were 53.8% less likely to become food-insecure than those in the Western agroecological region. On the other hand, households in the Eastern, North-Western, and Central agro-ecological regions were 18, 23, and 47% more likely, respectively, to be food-insecure than were households in the Western region. In terms of household size, larger size was associated with lower risk of falling into food insecurity. Households with more than 12 persons were 15.7% less likely to be food insecure than those with fewer than eight persons.
The results also indicated that the gender and age of the household head, as well as the area per farm worker, had significant gross effects. Households headed by females were 41.7% more likely to be food-insecure than were those headed by males. Similarly, households with heads aged 60 years and older were 15.3% more likely to be food-insecure than were those headed by a person under 35 years of age. Compared to households with less than one hectare, households with more than three hectares per agricultural worker were 14.3 to 29.5% less likely to be food-insecure. However, the effects of these three variables were insignificant when all the variables in the study were taken into account.
Evidence of the impact of agricultural modernization on food security
The results of this study showed that the degree of agricultural modernization was a major determinant of food security for farm households. Overall, accumulating several factors of agricultural modernization significantly reduced the risk of a farm household falling into food insecurity. This suggests that the modernization of family farms is likely to contribute to food security stability for farm households. By considering the effect of each of the factors of modernization, this study showed that agricultural worker training, traction animal ownership, and the use of improved seeds had a significant positive effect on the stability of farm households' food security. Having trained agricultural workers in the household significantly reduced the household's risk of falling into food insecurity. This result is in line with expectations, since the training of farm workers enhances the productivity of agricultural labor and agricultural capital, which implies an increased agricultural yield capable of generating decent incomes (Rolland, 2016).
Furthermore, the results show that traction animal ownership positively determined the food security of farm households. This result confirms to some extent the findings of Ndjadi et al. (2019), who showed that the number of animals in livestock farming influences farm performance. In the context of Burkina Faso, the possession of traction animals, on one hand, facilitates the production and transport of organic manure in the fields and the performance of harvesting work using the cart; and on the other hand, it enables the practice of harnessed cultivation for the ploughing of cotton, cereals, and groundnuts (Havard et al., 2004; Poda, 2004; Dufumier, 2015). The work of Yakete-Wetonnoubena and Mbetid-Bessane (2019) has shown that switching from manual to harnessed cultivation has increased agricultural productivity in Ouham in Central Africa. In addition, the results of the present study showed that the use of improved seeds is an important determinant of the stability of farming households' food security. This result, which confirms the descriptive analyses of some studies (Beyene and Muche, 2010; Feyisa, 2018), can be explained by the fact that improved seeds ensure better and more diversified crop yields and enable farmers to cope with current environmental challenges, including climate change (Bekele, 2017).
Other factors determining household food security
This study showed that cotton cultivation, agro-ecological region, and household size determine the stability of households' food security. Cotton cultivation has a significant influence on households' food security status. Contrary to expectations, households that grow cotton are more likely to become food-insecure than are those that do not grow cotton. This result goes against some studies that observed a positive correlation between cotton production and food production (Raymond and Fok, 1995;Poda, 2004). Such a result raises questions about the management of agricultural production in the cotton-growing areas of Burkina Faso. Poda (2004) noted that certain practices, notably the sale, transfer, and sharing of cereals, were likely to lead farm households in the cotton zone in western Burkina Faso into a situation of food non-self-sufficiency. Another explanation for this result could be related to the volatility of cotton prices from one year to the next, which does not allow farmers that mainly produce cotton to have stable and sufficient revenues to guarantee their food security. Furthermore, the negative effect of cotton cultivation on food security may be explained in the Burkinabe context by the fact that cotton revenues are not often used to purchase food for household consumption.
The results also show that the agro-ecological region determines the food security of farm households. Households in the Eastern, Central, and North-Western regions were more likely to be food-insecure than were those in the Western region. This result can be explained by the favorable climatic and edaphic conditions in the agro-ecological region of western Burkina Faso. Furthermore, the results showed that, compared to households in the Western region, households in the Sahel region were less likely to become food-insecure. One possible explanation for this result is that households in the Sahel region, in addition to agricultural products, benefit from livestock products (milk, meat, etc.), which could ensure greater dietary diversity and thus a better food consumption score.
Household size was found in this study to be associated with a lower risk, in that the larger the household, the less likely it was to become food-insecure. This result contrasts with what was observed in two studies in Burundi (Zoyem et al., 2008) and Ethiopia (Beyene and Muche, 2010). In those studies, larger household size negatively influenced food security. This difference in results could be explained by household composition in terms of agricultural workers and the use of child labor. In Burkina Faso, child labor is still common, such that most household members aged five years and older are considered agricultural workers (ISSP, 2018).
The results of the study also showed that certain variables such as the sex and age of the head of household and the area per farm worker did not determine household food security, all else being equal. These three variables seemed to affect the stability of households' food security through other variables ultimately considered as determinants, since these variables had significant effects in the raw model. As such, equitable access to factors of modernization by female heads of households, older heads of households, and households with small farms was likely to ensure stability of food security to the same extent as in households headed by men or with large farms.
Conclusion
The objective of this study was to explore the impact of the modernization of family farms on the food security of farm households in Burkina Faso. To this end, it tested the effects of factors of agricultural modernization on the stability of households' food security. Agricultural modernization was operationalized as the training of agricultural workers, membership in farmers' organizations, ownership of traction animals, use of fertilizer, use of WSC techniques, and use of improved seeds. The results of the study showed that the degree of agricultural modernization of a household determined the stability of its food security. Households with one or more factors of agricultural modernization were less likely to become food-insecure than those with no such factors. These results suggest that the modernization of family farms can be an important lever in the prevention of food insecurity in Burkina Faso. The combined effect of factors of modernization suggests that the possession of any factor of modernization is equally likely to reduce the risk of households falling into food insecurity. However, estimations of the effect of each factor of modernization showed that, of the six factors tested, three were stronger determinants of food security than the others: the training of agricultural workers, the ownership of traction animals, and the use of improved seeds. The government must focus on these modernization factors to significantly improve household food security.
The unobserved effects of some factors of modernization may be related to conceptual limitations that are important to note. First, the complexity of the very concept of food security, based on four dimensions, could make it difficult to apprehend the effects of certain variables on food security, since modernization does not act a priori on all dimensions of food security. Second, the approach taken in this study implies, on one hand, increased and diversified production of food crops intended primarily for personal consumption and, on the other hand, a reinvestment of income from cash production into household food supply. Once modernized, family farms might shift from food crops to cash crops, and the income earned might not be used primarily for household food supply. Such a scenario would not guarantee the achievement of food security objectives. On the other hand, a key strength of this study is that it was able to capitalize on the wealth of these data, which come from national surveys that are representative of farm households at the provincial level and are longitudinal and prospective, making it possible to track the same households over time, at an annual frequency.
"Economics"
] |
Nestin-dependent mitochondria-ER contacts define stem Leydig cell differentiation to attenuate male reproductive ageing
Male reproductive system ageing is closely associated with deficiency in testosterone production due to loss of functional Leydig cells, which are differentiated from stem Leydig cells (SLCs). However, the relationship between SLC differentiation and ageing remains unknown. In addition, active lipid metabolism during SLC differentiation in the reproductive system requires transportation and processing of substrates among multiple organelles, e.g., mitochondria and endoplasmic reticulum (ER), highlighting the importance of interorganelle contact. Here, we show that SLC differentiation potential declines with disordered intracellular homeostasis during SLC senescence. Mechanistically, loss of the intermediate filament Nestin results in lower differentiation capacity by separating mitochondria-ER contacts (MERCs) during SLC senescence. Furthermore, pharmacological intervention by melatonin restores Nestin-dependent MERCs, reverses SLC differentiation capacity and alleviates male reproductive system ageing. These findings not only explain SLC senescence from a cytoskeleton-dependent MERCs regulation mechanism, but also suggest a promising therapy targeting SLC differentiation for age-related reproductive system diseases.
(c) Flow cytometry analysis of apoptosis of primary Nestin-GFP+ SLCs from different age groups in vitro, double-stained with Annexin V-FITC and propidium iodide (PI).
(d) Quantification of apoptosis ratio of total cells from (c). (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(e) Flow cytometry analysis of mitochondrial membrane potential stained with TMRE of primary Nestin-GFP+ SLCs from different age groups in vitro. 20 μM FCCP was the positive control added to the 2m group.
(f) Quantification of mean fluorescence intensity of TMRE in (e). (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(g-i) Western Blot analysis and quantification of autophagy related proteins in Control (Vehicle) and Bafilomycin A1 (Baf A1) treated primary Nestin-GFP+ SLCs from different age groups in vitro. (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(j) Representative immunostaining pictures of mitophagy in Control (Vehicle) and Bafilomycin A1 (Baf A1) treated primary Nestin-GFP+ SLCs from different age groups in vitro. Mitochondria are stained with Mitotracker and autophagy is stained with LC3. Scale bar, 10 μm.
(k) Quantitative analysis of mitophagy by calculating Manders' coefficient in (j).
(Manders' coefficient of mitochondria is shown for each condition, n = 48, 53 and 48 cells for 2m, 12m and 24m SLCs, respectively; All data are mean ± SD; One-way ANOVA).
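For context on the colocalization metric used in these quantifications, the sketch below shows one way to compute a Manders' overlap coefficient for the mitochondria channel; the thresholding strategy and the toy images are illustrative assumptions, not the authors' exact image-analysis pipeline.

```python
# Minimal sketch: Manders' coefficient M1 for the mitochondria channel,
# i.e. the fraction of Mitotracker signal found in pixels where the LC3 (autophagy) channel is positive.
import numpy as np

def manders_m1(mito: np.ndarray, lc3: np.ndarray, lc3_threshold: float) -> float:
    """mito, lc3: background-subtracted single-cell intensity images of equal shape."""
    overlap = lc3 > lc3_threshold  # pixels where the LC3 channel is considered positive
    return float(mito[overlap].sum() / mito.sum())

# Toy example with random images; in practice the threshold would come from an automated
# method (e.g. Costes) or a fixed per-experiment setting (assumption, not the authors' choice).
rng = np.random.default_rng(0)
mito_img = rng.random((256, 256))
lc3_img = rng.random((256, 256))
print(manders_m1(mito_img, lc3_img, lc3_threshold=0.5))
```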
(l-n) Western Blot analysis and quantification of mitophagy related proteins in primary Nestin-GFP+ SLCs from different age groups in vitro. (n = 3 biological repeats for each group; All data are mean ± SD; Multiple t tests).
(o) Quantitative analysis of mitochondrial ATP production in primary Nestin-GFP+ SLCs from different age groups in vitro (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(p) Quantitative analysis of glucose uptake primary Nestin-GFP+ SLCs from different age groups in vitro (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(q) Quantitative analysis of lactate production of primary Nestin-GFP+ SLCs from different age groups in vitro (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA). Two-sided comparison; Error bars represent SDs. *p < 0.05, **p < 0.01, ***p < 0.001; Uncropped western blots and source data are provided as a Source Data file.

(a) qPCR analysis of relative mRNA expression of MAMs resident proteins in primary Nestin-GFP+ SLCs from different age groups in vitro. (n = 3 biological repeats for each group; All data are mean ± SD; Multiple t tests).
(b-c) Western Blot analysis and quantification of MAMs resident proteins expression in primary Nestin-GFP+ SLCs from different age groups in vitro. (n = 3 biological repeats for each group; All data are mean ± SD; Multiple t tests).
(d-g) qPCR analysis of relative mRNA expression of IP3R2 (d), Western Blot analysis and quantification of IP3R2 protein expression (e) and qPCR analysis of relative mRNA expression of anti-oxidative (f) and ER stress (g) related genes of IP3R2-knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; Multiple t tests).
(h-k) qPCR analysis of relative mRNA expression of MFN2 (h), Western Blot analysis and quantification of MFN2 protein expression (i) and qPCR analysis of relative mRNA expression of anti-oxidative (j) and ER stress (k) related genes of MFN2-knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; h-i, unpaired t test; j-k, multiple t tests).
(l-o) qPCR analysis of relative mRNA expression of PACS2 (l), Western Blot analysis and quantification of PACS2 protein expression (m) and qPCR analysis of relative mRNA expression of anti-oxidative (n) and ER stress (o) related genes of PACS2-knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; l-m, unpaired t test; n-o, multiple t tests).
(p-s) qPCR analysis of relative mRNA expression of GRP75 (p), Western Blot analysis and quantification of GRP75 protein expression (q) and qPCR analysis of relative mRNA expression of anti-oxidative (r) and ER stress (s) related genes of GRP75-knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; p-q, unpaired t test; r-s, multiple t tests).
(t-w) qPCR analysis of relative mRNA expression of Sig1R (t), Western Blot analysis and quantification of Sig1R protein expression (u) and qPCR analysis of relative mRNA expression of anti-oxidative (v) and ER stress (w) related genes of Sig1R -knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; t-u, unpaired t test; v-w, multiple t tests). Two-sided comparison; Error bars represent SDs. *p < 0.05, **p < 0.01, ***p < 0.001.
Uncropped western blots and source data are provided as a Source Data file.

(a) qPCR analysis of relative mRNA expression of Nestin in Nestin-knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(b-c) Western Blot analysis and quantification of Nestin protein expression of Nestin-knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(d) Representative immunostaining pictures of colocalization between mitochondria and ER in Nestin-knockdown primary SLCs from 2 months old mice. Mitochondria and ER are marked with Mitotracker and ER tracker, respectively. Scale bar, 10 μm.
(e-f) Quantification of the levels of colocalization in (d) (Manders' (of mitochondria) and Pearson's coefficients are shown for each condition, n = 57, 52 and 45 cells for Control, shNES-1 and shNES-2 group, respectively; All data are mean ± SD; One-way ANOVA).
(g) Flow cytometry of intracellular ROS level stained with DHE of Nestin-knockdown primary SLCs from 2 months old mice in vitro. 100 μM antimycin was the positive control added to the Control group.
(h) Quantification of mean fluorescence intensity of DHE in (g). (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(j) Quantification of percentage of SA-β-Gal positive cells in (D). (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA).
(k) qPCR analysis of relative mRNA expression of testosterone production related genes at day 9 during induced differentiation of Nestin-knockdown primary SLCs from 2 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; Two-way ANOVA).
(m) Quantification of the mean fluorescence intensity in (l) (n = 3 biological repeats for each group; All data are mean ± SD; Multiple t tests).
(n) Quantification of testosterone level in the supernatants of medium during induced differentiation at different time points (day 3, day 6, day 9) of Nestinknockdown primary SLCs from 2 months old mice (n = 3 biological repeats for each group; All data are mean ± SD; Two-way ANOVA). Two-sided comparison; Error bars represent SDs. *p < 0.05, **p < 0.01, ***p < 0.001.
Uncropped western blots and source data are provided as a Source Data file. biological repeats for each group; All data are mean ± SD; Unpaired t test).
(m) Quantification of mitochondrial ATP production in primary SLCs from 2 months old, 24 months old and melatonin-treated 24 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; Unpaired t test).
(n) Quantification of glucose uptake in primary SLCs from 2 months old, 24 months old and melatonin-treated 24 months old mice. (n = 3 biological repeats for each group; All data are mean ± SD; Unpaired t test).
Uncropped western blots and source data are provided as a Source Data file.

(a-f) qPCR analysis of relative mRNA expression of testosterone production related genes in LCs induced by primary SLCs from 2 months old, 24 months old and melatonin-treated 24 months old mice at day 9. (n = 3 biological repeats for each group; All data are mean ± SD; Unpaired t test).
(g) Quantitative analysis of mean fluorescence intensity of LHR staining in Figure 5g. (n = 3 biological repeats for each group; All data are mean ± SD; Unpaired t test).
(h) Quantitative analysis of testosterone level in the serum from 2 months old, 24 months old and melatonin-treated 24 months old mice at day 9. (n = 3 biological repeats for each group; All data are mean ± SD; Unpaired t test).
(i-j) Western Blot analysis and quantification of Nestin expression in seminiferous tubules from 2 months old, 24 months old and melatonin-treated 24 months old mice after treatment with melatonin for 5 days. (n = 3 biological repeats for each group; All data are mean ± SD; Unpaired t test). Two-sided comparison; Error bars represent SDs. *p < 0.05, **p < 0.01, ***p < 0.001.
Uncropped western blots and source data are provided as a Source Data file. (n = 5 biological repeats for each group; All data are mean ± SD).
(c) Immunofluorescence analysis was carried out to determine the colocalization of AAV8-GFP (green) and PDGFRα (red). Scale bars, 30 μm for original pictures and 15 μm for enlarged pictures.
(d) Statistical analysis of efficacy of AAV8 transduction in PDGFRα+ SLCs (n = 5 biological repeats for each group; All data are mean ± SD).
(e) qPCR analysis of relative mRNA expression of Nestin. (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA and Sidak's multiple comparisons test). (f) Western Blot analysis of Nestin expression in the testis of wild type, AAV8-sgCon, AAV8-sgNes and AAV8-sgNes+Mel groups.
(i) Quantification of mean fluorescence intensity of DHE in Figure 7i. (n = 3 biological repeats for each group; All data are mean ± SD; One-way ANOVA and Sidak's multiple comparisons test).
"Biology",
"Medicine"
] |
ALMA detection of CO rotational line emission in red supergiant stars of the massive young star cluster RSGC1: Determination of a new mass-loss rate prescription for red supergiants
Introduction
The evolution of massive stars up to the point of supernovae (SNe) remains poorly understood. The steepness of the initial mass function and their short lifetimes (∼15 Myr) make such stars rare, whilst the brevity of their post main-sequence (MS) evolution makes the direct progenitors of SNe rarer still. The pre-SN mass-loss behaviour is the key property that determines the appearance of the SN, since it dictates the extent to which the envelope is stripped prior to explosion. It also determines the nature of the end-state, that is complete disruption, neutron star, black hole, or total implosion with no supernova (e.g. Heger et al. 2003).
The most common of the core-collapse SNe are of type IIP, which are observed to have red supergiants (RSGs) as their direct progenitors. Smartt (2009) noted that the range of initial masses of these SN progenitors inferred from pre-explosion photometry, 8 ≤ M/M⊙ ≤ 17, is at odds with conventional theory, which predicts that the upper mass limit should be closer to ∼30 M⊙; this is referred to as the 'red supergiant problem' (e.g. Ekström et al. 2012). A potential explanation for this discrepancy is that the missing RSGs (i.e. those with an initial mass between 17 and 30 M⊙) collapse to form black holes with no observable SNe. Later on, Davies & Beasor (2020) cautioned that this observational cutoff (of 17 M⊙) is more likely to be higher and is fraught with large uncertainties (19 +4/−2 M⊙), and that the upper mass limit from theoretical models should also be shifted downwards to ∼25-27 M⊙. One of the main uncertainties in both the data analysis and stellar evolution predictions is our relatively poor knowledge of RSG mass-loss rates.
Mass loss during the RSG phase can affect the progenitors of SNe in two ways. Firstly, increased mass loss can strip the star of a substantial fraction of the envelope, causing the star to evolve back to the blue before SN (Georgy 2012), and possibly depleting the stellar envelope of hydrogen (hence changing what would have been a Type-II SN into a Type-I SN). Secondly, the mass ejected can enshroud the star in dust, increasing the visual extinction by several magnitudes (e.g. de Wit et al. 2008; Beasor & Davies 2018), and causing the observer to underestimate the pre-SN luminosity of the star, or perhaps preventing the progenitor from being detectable at all (Walmswell & Eldridge 2012). Hence, accurate knowledge of RSG mass-loss rates is crucial to our understanding of stellar evolution and SN progenitors.
Despite this, the mass-loss rates (Ṁ) of RSGs are relatively poorly known, in comparison to the winds of hot massive stars, which have been studied extensively (e.g. Sundqvist et al. 2011; Hawcroft et al. 2021; Rubio-Díez et al. 2022). The mass-loss rate recipes most often used in evolutionary models are those from de Jager et al. (1988) and Nieuwenhuijzen & de Jager (1990), which are somewhat antiquated as they are basically scaled up from red giants, and which can only predict Ṁ of field RSGs to within ±1 dex (see Fig. 2 below). These recipes assume that Ṁ scales with mass, luminosity and temperature, but do not take into account how Ṁ may change as the opacity of the circumstellar material builds up over time. What is required is a study of the mass-loss rates of samples of RSGs with uniform initial abundances and masses, where the evolutionary phase is the only variable.
There are several established methods for measuring Ṁ in cool stars. Arguably the best is to monitor the spectrum of a companion star as it passes behind the primary's wind (e.g. Kudritzki & Reimers 1978), but unfortunately there are very few such systems. Until recently, the only way to study large numbers of RSGs was to observe and model the infrared excess arising from the circumstellar dust (e.g. van Loon et al. 1999; Bonanos et al. 2010). However, most of the material is in molecular gas, and so a large (∼200-500), uncertain correction factor must be applied to convert the measured dust mass-loss rate into a gas mass-loss rate (hereafter referred to as Ṁ_SED), while there is no information on the outflow speed or radial density profile (required to get Ṁ). Alternatively, OH masers can be used to measure the gas wind speed. The modelling of the spectral energy distribution (SED) for an assumed gas-to-dust ratio yields a predicted wind speed that can be compared to the observed one. The scaling of the SED models to account for the difference between expected and measured expansion velocity then yields the gas-to-dust ratios and mass-loss rates of the sample under study (see, e.g., Goldman et al. 2017). A much better way is to observe the gas using CO molecular line transitions to derive the expansion velocity and gas mass-loss rate directly from the CO line profiles, hereafter referred to as Ṁ_CO (e.g., Knapp & Morris 1985; Loup et al. 1993; Josselin et al. 1998; Decin et al. 2006; Ramstedt et al. 2008; Danilovich et al. 2015). The faintness of these lines has meant that, until now, such observations were only possible for nearby bright RSGs. One exception for the detection of CO from an extragalactic RSG is IRAS 05280−6910, situated in the Large Magellanic Cloud, but the spectral resolution of the data acquired with the Herschel Space Observatory did not allow the expansion velocity to be measured (Matsuura et al. 2016). Now, with the immense gain in sensitivity provided by the Atacama Large Millimeter/submillimeter Array (ALMA), we can directly measure the expansion velocity and gas mass-loss rates of homogeneous samples of RSGs with well-constrained distance and stellar parameters.
With this study, we aim to provide the first measurements of the gas mass-loss rates (Ṁ_CO) of RSGs as a function of the specific RSG age. To do this, we have identified a sample of RSGs which have roughly the same masses and identical initial chemical compositions. Such samples are uniquely found in star clusters. The cluster RSGC1, at a distance, d, of ∼6,600 pc, contains 14 RSGs and one post-RSG, all with initial masses ∼25 M⊙ (Davies et al. 2008; Beasor et al. 2020). The cluster is effectively coeval, since even a large cluster age spread (0.5 Myr) would be short compared to the cluster age of 12 Myr (Davies et al. 2008). This means that the range of initial masses of the RSGs in RSGC1 must be narrow, and since all stars with this initial mass must follow virtually the same evolutionary track on the Hertzsprung-Russell (HR) diagram, the differences in the stars' luminosities are entirely due to how evolved they are. By measuring the mass-loss rates of these stars, we will be able to determine not only accurate mass-loss rates for a sample of RSGs, but also how mass-loss behaviour changes as the star evolves. In a next step, using these measurements in conjunction with model estimates of the RSG lifetimes, one can then integrate over all stars in the cluster to find the total mass lost during the RSG phase, a key property when estimating the fate of a star and the type of SN it will produce.
Observations and data reduction
The 14 RSGs (F01-F14) and one post-RSG (F15) in RSGC1 (see Table 2 in Davies et al. 2008) were observed with ALMA on 2015 June 9 and 11 for proposal code 2013.1.01200.S. We requested observations at both band 9, centred on the CO v=0 J=6-5 rotational line transition, and band 6, centred on the CO v=0 J=2-1 rotational line transition, but only the latter were obtained. Positions and other details are given in Table A.1. The observations have three spectral windows (spw): one 'line' spw with a width of 1.875 GHz and 3840 channels to cover the CO(2-1) transition, and two 'continuum' 2 GHz spw with 128 channels each, centred at 228.5 GHz and 213 GHz. The recorded line channels are not independent and the minimum effective spectral resolution of 0.977 MHz is approximately double the channel width. The spectral resolution of the continuum data is 15.625 MHz. Thirty-six antennas were used with minimum and maximum baselines of 63-783 m, providing a maximum recoverable scale of ∼4.4″ in a field of view of 26″. The total integration per target source was 4.8 min. Standard ALMA Cycle 2 observing and quality control procedures were used. The flux scale was set relative to Titan (excluding its atmospheric lines). Compact quasars J1733-1304 and J1832-1035 were used for bandpass calibration and phase-referencing, respectively.
The data were calibrated using the ALMA Quality Assurance scripts implemented in CASA (the Common Astronomy Software Applications package; McMullin et al. 2007). The estimated accuracy of the flux scale as applied to the targets is ∼7%. The target-to-phase-reference separation is ∼3.7° (depending on the target). Inspection of the (small) slopes in the phase-reference phase solutions, along with the probable antenna position uncertainties in 2015 (Hunter et al. 2016), suggests an absolute astrometric accuracy ≳ 1/16 of the synthesised beam, depending on the target signal-to-noise ratio (S/N). [Caption fragment, likely Fig. 1: "... Davies et al. (2008) and Nakashima & Deguchi (2006), respectively (see Table A.1). The noise in the spectrum is 1.2-1.4 mJy (see text for derivation) and is shown as a (red) error bar in each panel. The dotted black line in the upper left panel is an alternative fit to the CO(2-1) line profile of F01, as discussed in App. C."]
We inspected the continuum spw for each target and excluded several channels covering the SiO v=3 J=5-4 line seen around 212.582 GHz for some sources. We imaged each target, achieving a noise from the full 1.7 GHz range of σ_rms ∼0.05 mJy. The synthesised beam in all images is about (0.49″ × 0.37″) at position angle ∼−65°, depending on the frequency. Table A.1 shows that the continuum peaks are less than half the rms in a 3 km s⁻¹ spectral channel, so we did not perform continuum subtraction. Image cubes were made for the spw covering the CO(2-1) line for each source at 3 km s⁻¹ velocity resolution (approximately 4 input channels), adjusted to constant velocity in the Local Standard of Rest frame (v_LSR) with respect to the CO(2-1) rest frequency of 230.538 GHz. We obtained σ_rms ∼1.9 mJy. We also made images at 1.3 and 10 km s⁻¹ resolution, but these do not reveal any more detections or significant details. We imaged the SiO v=3 J=5-4 line which, where detected, covered 2-4 continuum channels (width ∼22 km s⁻¹). In these low spectral resolution continuum spw the per-channel σ_rms is ∼0.6 mJy and no other lines were detected. The parameters for all detections are given in Apps. A-C.
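For reference, the frequency-to-velocity conversion implied by this regridding follows the standard radio convention relative to the CO(2-1) rest frequency; a minimal sketch is shown below. The frequency offset used in the example is a placeholder, and the conversion to the v_LSR frame itself is handled by the imaging software.

```python
# Minimal sketch: radio-convention velocity relative to the CO(2-1) rest frequency,
# illustrating the regridding to 3 km/s channels (not the actual CASA call).
C_KM_S = 299792.458        # speed of light [km/s]
NU_REST_GHZ = 230.538      # CO v=0 J=2-1 rest frequency [GHz]

def radio_velocity(nu_obs_ghz: float) -> float:
    """v_radio = c * (nu_rest - nu_obs) / nu_rest, in km/s."""
    return C_KM_S * (NU_REST_GHZ - nu_obs_ghz) / NU_REST_GHZ

# At this frequency, a shift of ~0.77 MHz corresponds to ~1 km/s,
# so the 0.977 MHz effective resolution is ~1.3 km/s.
print(radio_velocity(230.538 - 0.020))  # a 20 MHz offset gives ~ +26 km/s
```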
For five RSGs (F01, F02, F03, F04, and F13), the CO(2-1) line emission was detected, with a spatial extent ≲1″ (see App. C). These are the first detections of spectrally and spatially resolved CO rotational line emission of sources in an open cluster, in this particular case CO emission arising from the stellar wind of red supergiants located in RSGC1. For each of those five RSGs, the CO(2-1) line profile was extracted for a circular aperture of 0.75″ centred on the peak of the continuum emission. In both the maps and the line profiles, there is contamination by interstellar medium (ISM) emission at specific frequencies; see the dashed regions in Fig. 1. In general, there is much less (or no) ISM contamination visible on the high velocity side, although there might be a noise background. Therefore, the red part of the line profile can also be used to assess the ISM contamination at lower velocities, since the CO line profiles are expected to be symmetric. For each source, the ISM contamination and alternative fits are discussed in App. C.
We made total intensity (zeroth moment) maps for each CO line over the uncontaminated channels. The rms noise in the total intensity maps ranges between 21-29 mJy/beam km/s, with corresponding peak signal-to-noise ratios between ∼12-34 (see App. C). We measured the azimuthally averaged flux in annuli 200 mas thick, taking the minimum of the rms or the median absolute deviation as the error (see Decin et al. 2018 and Fig. C.2). These gave flux distributions with full width at half maximum sizes of 460, 400, 480, 600, and 460 mas for F01, F02, F03, F04, and F13, respectively, with an uncertainty of ∼50 mas. This gives an indication of the relative sizes of the brightest ∼60% of the emission, rather than the true size, since the flux distribution is not necessarily Gaussian and may be irregular. It is more challenging to estimate the total size of the CO emission, since our observations are sensitivity-limited and provide a lower limit. We estimated where 3× the rms noise (3×25 mJy beam⁻¹ km s⁻¹) intersected the azimuthal average profiles. This gave diameters of 700, 530, 770, 650, and 900 mas for F01, F02, F03, F04, and F13, respectively, but the position uncertainty is proportional to the phase noise, ∼140 mas at S/N = 3, and the weaker sources, in particular, may be more extended.
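The radial profiles described above amount to a few lines of array manipulation. The following Python sketch is an illustration only (the map `mom0`, its pixel scale, and the source centre are placeholders, not data from this work); it averages a total-intensity map in 200 mas annuli and attaches the smaller of the ring scatter and the median absolute deviation as the error, in the spirit of the procedure described in the text:

```python
# Azimuthally averaged flux profile in 200-mas annuli from a 2-D
# total-intensity map `mom0` (units e.g. Jy/beam km/s).
import numpy as np

def radial_profile(mom0, centre_xy, pix_mas, ring_mas=200.0, n_rings=8):
    """Return ring radii [mas], mean flux per ring, and an error per ring."""
    ny, nx = mom0.shape
    y, x = np.indices((ny, nx))
    r_mas = np.hypot(x - centre_xy[0], y - centre_xy[1]) * pix_mas
    radii, means, errs = [], [], []
    for i in range(n_rings):
        ring = (r_mas >= i * ring_mas) & (r_mas < (i + 1) * ring_mas)
        vals = mom0[ring]
        if vals.size == 0:
            continue
        radii.append((i + 0.5) * ring_mas)
        means.append(vals.mean())
        # error: smaller of the ring scatter and the median absolute deviation
        errs.append(min(vals.std(), np.median(np.abs(vals - np.median(vals)))))
    return np.array(radii), np.array(means), np.array(errs)
```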
Analysis and results
The five sources detected in CO(2-1) include (i) the three M-type RSGs with the highest Ṁ_SED amongst the sample of red supergiants in RSGC1 analysed by Beasor et al. (2020) (F01, F02, and F03; with Ṁ_SED ≳ 4×10⁻⁶ M_⊙/yr); and (ii) the peculiar RSG F13, which is anomalously red compared to the other RSGs in the cluster (Davies et al. 2008), and for which the true luminosity is difficult to determine (Beasor et al. 2020). Ten sources (F05, F06, F07, F08, F09, F10, F11, F12, F14, F15) remain undetected at a CO(2-1) rms noise value of ∼1.7 mJy/beam. For five of those sources, Beasor et al. (2020) derived a mass-loss rate between 1.8×10⁻⁷ ≤ Ṁ_SED ≤ 8.7×10⁻⁷ M_⊙/yr, implying that a line sensitivity at least a factor 5-20 better is required to detect those sources with a similar ALMA setup. The Beasor et al. Ṁ estimates were not yet available at the time these observations were proposed, and a general value of 1×10⁻⁶ M_⊙/yr was used for calculating the line sensitivities in the proposal. Moreover, the CO outer envelope radius was then estimated based on Mamon et al. (1988). However, both the Ṁ estimate (for the undetected sources) and the CO outer envelope radius (for the detected sources) turned out to be smaller (see Sect. 3.1), implying an overestimate of the actual CO(2-1) line strength.
For the five sources for which CO(2-1) rotational emission was detected, we derive the properties of the red supergiant's wind from a radiative transfer analysis.The outcomes, in particular the wind mass-loss rates, Ṁ CO , are then compared to (literature) parameters retrieved from an analysis of the dust spectral features visible in the SED, Ṁ SED .
Radiative transfer analysis of the CO(2-1) emission
To retrieve the stellar wind parameters from the CO(2-1) line profiles, we used the non-local thermodynamic equilibrium (non-LTE) radiative transfer code GASTRONOOM (Decin et al. 2006) based on a multi-level approximate Newton-Raphson (ANR) operator.The molecular line data are as specified in Appendix A of Decin et al. (2010).When ray-tracing the modelled circumstellar envelope (CSE), we used a circular model beam with the same extraction aperture (of 0. ′′ 75 diameter) to allow direct comparisons.The modelled CSEs were divided into 150 shells, evenly spaced on a logarithmic scale from the stellar radius (R ⋆ ) out to the outer envelope radius.
The effective temperatures, T_eff, and stellar luminosities, L_bol, were taken from Davies et al. (2008); see Table 1 (we note that those values are slightly different from the ones used in Beasor & Davies 2016, 2018 and Beasor et al. 2020). This translates into stellar radii of ∼1100-1500 R_⊙ (see Table 1). Since we only have one rotational CO line, we cannot put constraints on the radial distribution of the kinetic temperature for the individual sources. We therefore assume that the gas kinetic temperature follows a power-law radial profile, T(r) ∝ r^(−ϵ), with ϵ = 0.6, since such a power law has been shown to be a good representation of the kinetic temperature in circumstellar envelopes (Decin et al. 2006, and references therein).
For each of the five detected sources, the radial extent of the CO emission zone is larger than the beam size (of ∼0.″5), but slightly lower than 1″ in diameter (see Fig. C.1). Given a distance of 6600 pc to RSGC1 and stellar radii between 7.55×10¹³ and 1.04×10¹⁴ cm, this translates into a CO envelope radius of 250 < R_out < 650 R_⋆. While the spectra of F01 and F13 are not spatially resolved for an extraction aperture of 0.″75, the spectra of F03 and F04 show the clear absorption depth reminiscent of spatially resolving the CSE. This is in line with the angular sizes estimated from the zeroth moment maps, which imply that F03 and F04 have the largest angular extents (Sect. 2). These considerations yield a CO envelope radius of ∼350 R_⋆ for F01 and F13, of ∼400 R_⋆ for F02, and of ∼600 R_⋆ for F03 and F04. Those values are smaller by a factor of ∼1.5-2 compared to the value used for the proposal preparation, which was based on the r_1/2 value (the radius where the CO abundance drops to half of its initial value) of Mamon et al. (1988) for a wind mass-loss rate of 1×10⁻⁶ M_⊙/yr.
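The conversion behind those numbers is a simple small-angle calculation. As a worked check in Python, assuming only the 6600 pc distance and the range of stellar radii quoted above:

```python
# Angular CO radius -> physical radius in stellar radii at the RSGC1 distance.
# Uses the small-angle relation: 1 arcsec at 1 pc corresponds to 1 au.
AU_CM = 1.496e13        # astronomical unit [cm]
D_PC = 6600.0           # distance to RSGC1 [pc]

def angular_radius_to_rstar(theta_arcsec, rstar_cm):
    """Angular radius [arcsec] -> radius in units of the stellar radius."""
    r_cm = theta_arcsec * D_PC * AU_CM
    return r_cm / rstar_cm

# a radius just under 0.5" for the most extended sources:
print(angular_radius_to_rstar(0.5, 1.04e14))   # ~475 R_star
print(angular_radius_to_rstar(0.5, 7.55e13))   # ~655 R_star
```

This reproduces the upper end of the 250-650 R_⋆ range quoted above.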
To calculate the CO excitation and hence level populations, we account for excitation by the stellar photons, the microwave background and the dust radiation field (Decin et al. 2006).For the latter, we assume silicates (Decin et al. 2006) that start condensing at a temperature of ∼1750 K, which translates into a dust condensation radius, R dust , of 3-4 R ⋆ .To get the local mean radiation field at each radial point in the grid, we calculate the dust radiation field for a canonical gas-to-dust ratio (r gd ) of 200, representative for a Milky Way cluster (Beasor et al. 2020).As we discuss in App.D, the specific dust-to-gas ratio has only a minimal effect on the CO excitation by the dust radiation field for these simulations.
The parametrised β-type accelerating wind is described by a velocity law of the form v(r) = v_s + (v_∞ − v_s)(1 − R_0/r)^β. The terminal wind velocity, v_∞, is deduced from the ALMA CO(2-1) line profiles (see Table 1), with an uncertainty of ±3 km s⁻¹ (see App. A). As boundary condition for the velocity structure, we assume that the flow velocity of the gas is equal to the local sound velocity, v_s, at R_0 = R_dust. For the region between R_⋆ and R_dust, β is assumed to be 1/2 (Decin et al. 2006); for the region beyond R_dust we follow the general conclusions from Khouri et al. (2014), Decin et al. (2020), and Gottlieb et al. (2022) that the value of β should be larger to represent the slowly accelerating flow in oxygen-rich winds. We here adopt β = 3. We also include a constant turbulent velocity v_turb of 3 km s⁻¹.
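A minimal Python sketch of the outer β-law follows, assuming the standard functional form written above (with R_0 = R_dust, v(R_0) = v_s, and β = 3 beyond the dust condensation radius); the inner β = 1/2 region, which requires its own reference radius, is not sketched, and the v_∞ value below is a placeholder:

```python
# Beta-type wind velocity law beyond the dust condensation radius R0:
# v(r) = v_s + (v_inf - v_s) * (1 - R0/r)**beta, valid for r >= R0.
def beta_law(r, r0, v_s=3.0, v_inf=20.0, beta=3.0):
    """Expansion velocity [km/s] at radius r (same units as r0)."""
    if r < r0:
        raise ValueError("sketch only covers the region outside R0 = R_dust")
    return v_s + (v_inf - v_s) * (1.0 - r0 / r) ** beta

# e.g. at 10 R_dust with v_inf = 20 km/s: ~15.4 km/s, still accelerating
print(beta_law(10.0, 1.0))
```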
To determine the fractional abundance of CO with regard to hydrogen, we assume all photospheric carbon to be locked in CO in the circumstellar envelope (CSE).For a value of A(C) = log(C/H) + 12 as derived by Davies et al. (2009) for RSGC1, this yields a CO fractional abundance of 8.9×10 −5 .This set of stellar and circumstellar parameters allows us to retrieve the mass-loss rate, Ṁ CO , from the ALMA CO(2-1) line profiles for each of the five detected sources; the derived values are listed in Table 1, and range between 1.8-42×10 −6 M ⊙ /yr.The evolution of the four sources with mass-loss rate below ∼4×10 −6 M ⊙ /yr (F01, F02, F03, and F04) is dominated by nuclear burning, while in the case of F13 the wind mass-loss rate is currently determining its RSG evolution (see Fig. 2).For the RSGC1 red supergiants that remained undetected in the CO(2-1) line, an upper limit on the mass-loss rate of ∼7×10 −7 M ⊙ /yr is derived.
Stellar parameters CSE parameters Star
Given this set of input parameters, the largest uncertainty in Ṁ_CO arises from uncertainties in the terminal wind velocity (∼35%), and then in the distance, the outer CO envelope size, and the CO fractional abundance (each ∼20%). The combined effect of the errors in the individual input parameters on Ṁ_CO is a factor ∼1.4; for additional details, readers can refer to App. D. This error on Ṁ_CO is lower than the errors on Ṁ_SED; see Table 1. Measurements of dust emission are usually angularly unresolved, hence rely on SEDs, and depend on more factors known to vary, such as the composition and size of the grains and the stellar contribution to the SED, for which reason estimates of the dust-to-gas ratio can differ by an order of magnitude.
We note that terminal wind velocities determined solely from the half line width at zero intensity might be lower limits since it was shown by Decin et al. (2018) that line widths are sensitivity limited, and hence that in some cases higher signal-to-noise data could indicate higher v ∞ values, and hence higher Ṁ CO values.However, if the terminal velocity increases, so does the width of the full line profile and of the width between the two horns in spatially resolved, optically thin line profiles.We therefore have used all these characteristics together to determine v ∞ .
An increase in the terminal velocity by 3 km s⁻¹ would induce an increase in Ṁ_CO by ∼40% (see Table D.1).
The ALMA data also constrain the CO outer envelope radius (see Table 1).In recent studies, Groenewegen (2017) and Saberi et al. (2019) improved the calculations on CO photodissociation in circumstellar envelopes made by Mamon et al. (1988).Using the ratio of the maximum flux density over 3 times the noise as a proxy for the fractional abundance of f 0 /f CO (r) (with f 0 being the initial CO abundance), we can use Eq.(3) of Saberi et al. (2019) and their values for r 1/2 and α (their Table B.1.) to compare the theoretical predicted CO envelope size with the ALMA observations.The observed ALMA CO envelope radius of the four RSGC1 sources F01, F02, F03, and F04 is only lower by a factor ∼1.1 -1.8 compared to what Saberi et al. (2019) predicted, which is a remarkable agreement given the fact that the values for r 1/2 and α were calculated for a standard Draine (1978) interstellar radiation field (ISRF).The ISRF in the massive young cluster RSGC1 might actually be higher given the fact that newly formed stars affect the surrounding materials strongly via their UV photons.However, unlike the case of 47 Tucanae (McDonald et al. 2015), an estimate of the strength and local variation of the interstellar radiation field in RSGC1 is currently lacking.Groenewegen (2017) has shown that an increase in ISRF by a factor ∼3 leads to a decrease in photodissociation radius by a factor ∼1.5.The notable exception is F13 for which the predicted outer radius is almost an order of magnitude larger than the observed one.F13 stands also out in other aspects; for further discussion, readers are referred to Sect. 4.
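The comparison above relies on the parametrised CO photodissociation profile of Saberi et al. (2019). A hedged Python sketch follows, assuming the commonly used Mamon-type form f(r) = f₀ exp[−ln 2 (r/r_1/2)^α], which we take to correspond to their Eq. (3); the r_1/2 and α values below are placeholders, not the tabulated values used in the text:

```python
# Mamon-type CO photodissociation profile and its inversion.
import numpy as np

def co_abundance(r, f0, r_half, alpha):
    """Fractional CO abundance at radius r (same units as r_half)."""
    return f0 * np.exp(-np.log(2.0) * (r / r_half) ** alpha)

def radius_at_fraction(frac, r_half, alpha):
    """Radius where f(r)/f0 has dropped to `frac` (inverts the profile)."""
    return r_half * (-np.log(frac) / np.log(2.0)) ** (1.0 / alpha)

# example with placeholder parameters: where does the abundance drop to 10%?
print(radius_at_fraction(0.1, r_half=1.0e17, alpha=2.5))   # ~1.6e17 (same units)
```

With the observed peak-flux-to-3σ ratio as a proxy for f₀/f_CO(r), the second function gives the predicted radius to compare with the measured envelope sizes.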
Comparison to SED retrievals
In various works, Beasor et al. have derived the mass-loss rate for red supergiants in four open clusters: RSGC1, NGC 7419, χ Per, and NGC 2100 (Beasor & Davies 2016, 2018; Beasor et al. 2020). The first three clusters reside in the Milky Way, while the latter is an LMC cluster. For each of those clusters, the age and initial mass M_ini of the RSGs were determined. This yielded values of 12±2 Myr and 25±2 M_⊙ for RSGC1. For the other three clusters, the respective values are 21±1 Myr and 10±1 M_⊙ (NGC 2100), 20±1 Myr and 11±1 M_⊙ (NGC 7419), and 21±1 Myr and 11±1 M_⊙ (χ Per). The aim of their works was to derive a new mass-loss rate prescription for red supergiants that can be used in stellar evolution codes. Beasor et al. (2020) derived a general Ṁ-luminosity relation, Eq. (3), that depends on the initial mass. By keeping M_ini constrained, Beasor & Davies (2016, 2018) showed that the Ṁ-luminosity relation has a tighter correlation with a smaller dispersion. The slope is given as 4.8±0.6; the standard deviations on the other numerical values were not listed by Beasor et al. (2020), but have been determined in App. E. The slope of Eq. (3) is steeper than that of any other mass-loss rate relation derived for red supergiants (see Fig. 2). We here re-examine the results of Beasor et al. (2020) and compare the SED mass-loss rates to the ones derived from the ALMA CO(2-1) line in the current work.
In a first step, Beasor et al. (2020) determined a Ṁ-luminosity relation for all clusters in their sample by fitting the relation to their data points (see Table E.1 and App. E). We repeat the analysis applying the same IDL routine FITEXY and using the most conservative error estimates for both L_bol and Ṁ_SED (see Table E.1). The fit to Eq. (4) is shown in Fig. 3; the values derived for a and b are listed in Table 2, together with the Pearson correlation coefficient. Similar to Beasor et al. (2020), the standard deviation on the intercept a and the slope b is large for the cluster RSGC1. For all clusters, our values for the intercept are systematically higher than the values listed in Table 4 of Beasor et al. (2020). The values of the slope b obtained from each cluster are compatible with one another. This led Beasor et al. (2020) to fix b to 4.8 ± 0.6 (corresponding to the weighted mean of the slope obtained for each cluster) in order to decrease the number of degrees of freedom of the fit from 8 to 5. We follow a similar strategy to limit the number of degrees of freedom; however, we do not fix b from the previous exercise. We rather fit a common value of b for all four clusters simultaneously with the individual intercepts (a_i) using a multivariate linear fitting method implemented with the MPFIT Levenberg-Marquardt least-squares solver (Markwardt 2009). Uncertainties were estimated through Monte Carlo (MC) by generating 10⁴ statistically equivalent data sets randomly drawn from normal distributions centred on the best-fit solutions and repeating the fit for each artificial data set. The 68%-confidence interval on the best-fit parameters is given by the 0.16 and 0.84 percentiles of the distributions of obtained parameters. Fitted at face value, the errors on the individual intercepts a_i are tightly correlated. To prevent this correlation, we adopt a slightly different functional form, log(Ṁ_SED/M_⊙ yr⁻¹) = a_1 + Δa_i + b log(L_bol/L_⊙), where a_1 is the intercept for the cluster RSGC1 and Δa_i indicates the difference in intercept of each cluster i = 1, 2, 3, 4 with regard to the intercept a_1 (hence, Δa_1 = 0). That is, each cluster gets its own intercept (a_i = a_1 + Δa_i) and the slope is forced to be identical. The MC method yields b = 3.34^{+0.48}_{−0.41} and a_1 = −23.83^{+2.15}_{−2.64}, with the corresponding Δa_i values listed in the last column of Table 2; see the right panel in Fig. 3. While mathematically equivalent to directly fitting the intercepts a_i instead of their differences Δa_i with regard to a reference intercept (here arbitrarily chosen to be that of RSGC1, but see App. F), the errors on a_1, Δa_2, Δa_3, Δa_4 are now uncorrelated, which allows for a better sense of whether the intercepts of the clusters vary from one another from a direct comparison of the Δa_i values and their errors.
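The same shared-slope fit can be sketched outside IDL. The following Python illustration is not the authors' MPFIT implementation; the `logL`, `logM`, and `sig` inputs are placeholders for the per-cluster measurements, and the Monte Carlo step is a simplified variant that resamples the measurements within their errors:

```python
# Common slope b and per-cluster intercepts a_1 + delta_a_i, with MC errors.
import numpy as np
from scipy.optimize import least_squares

def residuals(p, logL, logM, sig):
    """Weighted residuals for all clusters; p = [b, a1, da2, da3, ...]."""
    b, a1, *deltas = p
    res = []
    for i, (x, y, s) in enumerate(zip(logL, logM, sig)):
        a_i = a1 + (0.0 if i == 0 else deltas[i - 1])
        res.append((y - (a_i + b * x)) / s)
    return np.concatenate(res)

def fit_shared_slope(logL, logM, sig, n_mc=10_000, seed=0):
    p0 = np.array([4.0, -25.0] + [0.0] * (len(logL) - 1))
    best = least_squares(residuals, p0, args=(logL, logM, sig)).x
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_mc):
        pert = [y + rng.normal(0.0, s) for y, s in zip(logM, sig)]
        samples.append(least_squares(residuals, best, args=(logL, pert, sig)).x)
    lo, med, hi = np.percentile(samples, [16, 50, 84], axis=0)
    return best, lo, hi
```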
To parametrise the mass-loss rate in terms of both the luminosity and the initial mass, we perform a MC fit to all four clusters together using a parametrisation similar to Eq. (3), given in Eq. (6). The combined fit yields R = −20.63^{+1.93}_{−2.38}, S = −0.16^{+0.03}_{−0.04}, and b = 3.47^{+0.57}_{−0.45}. This new mass-loss rate relation has a higher constant R and a shallower dependence on the initial mass and the luminosity as compared to Beasor et al. (2020); see Eq. (3) and Fig. 2 (full and dotted grey lines). Except for the mass-dependent intercept, these values do not agree within their respective 1-sigma uncertainties; see the standard deviations for the parameters of Eq. (3) derived in App. E. (Notes to Table 2: the second column lists the linear Pearson correlation coefficient; the third and fourth columns list, respectively, the intercept a and slope b with their standard deviations derived by fitting Eq. (4) to the data; the fifth column lists the fit to Eq. (5), which yields a best-fit slope b = 3.35^{+0.43}_{−0.37} and intercept a_1 = −23.96^{+2.00}_{−2.32}.)
However, the uncertainties associated with the intercept R are substantial, spanning two orders of magnitude for the associated mass-loss rate. But it is crucial to acknowledge that these uncertainties pertaining to the intercept do not reflect the uncertainties on the mass-loss rates within the range of initial masses and luminosities under study. By normalising the mass-loss rate, initial mass, and luminosity to representative values, the resulting parameter uncertainties become more readily applicable (e.g. van Loon et al. 2005). This yields the analytical relation of Eq. (7), with R = 1.71^{+0.54}_{−0.44}, S = −1.63^{+0.30}_{−0.36}, and b = 3.47^{+0.57}_{−0.45}; the uncertainty on the intercept is now of the order of 30%.
CO-based mass-loss relation for RSGs
The CO gas mass-loss rate values of those red supergiants in common with Beasor et al. (2020) are systematically lower, on average by a factor of ∼2 (see Table 1) -although we here must provide the caveat of dealing with small number statistics.CO mass-loss rates, Ṁ CO , are not afflicted with uncertain dust extinction corrections, dust-to-gas conversion ratios, and unknown expansion velocities as is the case for Ṁ SED .The main reason for the difference in Ṁ SED and Ṁ CO for the 3 RSGC1 sources analysed in both this study and by Beasor et al. (2020) is the expansion velocity which was assumed to be 25±5 km s −1 by Beasor et al. (2020), but which is lower for all RSGs in which CO(2-1) was detected.Correcting for the terminal velocities as deduced from the ALMA CO(2-1) data, Ṁ CO and Ṁ SED agree very well for the three sources in common to both studies (F01, F02, and F03) with average difference being a factor ∼0.9 and maximum percentage difference of 32% (see also last two columns in Table 1).
A point of concern regarding the reliability of the mass-loss rates could be binarity. The binary fraction of unevolved massive stars is thought to be above 70% (Sana et al. 2012; Moe & Di Stefano 2017; Patrick et al. 2022). For a predicted merger fraction of 20-30% and a binary interaction fraction of 40-50%, the total RSG binary fraction is estimated around 20% (Patrick et al. 2019, 2020; Neugent et al. 2020; Sana 2022). As discussed by Decin (2021) and Gottlieb et al. (2022), a binary system with a small orbital distance can develop an equatorial density enhancement (EDE) promoting the formation of dust grains. Hence, it is expected that for those systems mass-loss rate estimates based on dust spectral features in the SED might yield too high an Ṁ estimate, and that mass-loss rate estimates based on a CO analysis should be preferred (Decin et al. 2019). The sensitivity of the CO(2-1) integrated line flux to binary-induced morphologies has been shown to be less than a factor of ∼2, for both spiral structures (see Fig. 16 in Homan et al. 2015) and equatorial density enhancements (Decin et al. 2019), the latter morphologies being much better traced in other molecular diagnostics, such as the SiO v=0 J=8-7 transition (Kervella et al. 2016; Decin et al. 2020).
Despite (i) the caveat of dealing with small-number statistics, and (ii) the fact that we are dealing with the 3 RSGs in RSGC1 that have the largest Ṁ_SED values in the study of Beasor et al. (2020), we can try to improve on the mass-loss rate prescription derived in Eq. (6). Similar to the analysis of Beasor et al. (2020), we exclude F13 from the regression analysis since it stands out in the sample of 14 RSGs in RSGC1: F13 is unusually red, has an Ṁ_CO an order of magnitude larger than the other four RSGs detected in CO(2-1), and is the only source for which the observed CO envelope size is much lower than any prediction for CO photodissociation in a circumstellar envelope (see Sect. 3.1). This tentatively suggests that another mass-loss mechanism is active (see Sect. 4.2 and Sect. 4.3). The limited and biased sample led us to fit both the (L_bol, Ṁ_CO) and (L_bol, Ṁ_SED) measurements together with a four-parameter set of equations, Eq. (8), with best-fit values R_SED = 1.77^{+0.58}_{−0.46}, S = −1.68^{+0.31}_{−0.40}, b = 3.50^{+0.60}_{−0.46}, and ΔR = 0.49^{+0.44}_{−0.41}; see Fig. 4. The Ṁ_CO measurements have an intercept that differs by 0.5 ± 0.4 compared to the Ṁ_SED measurements; the likelihood that such a difference occurs by chance is only ∼12%. This new Ṁ-luminosity relation derived for M-type supergiants is plotted as the full red line in Fig. 2 and is clearly different from the Ṁ relation derived by Beasor et al. (2020) (full grey line in Fig. 2).
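The ∼12% chance probability quoted above can be checked with a one-line calculation, assuming approximately Gaussian errors on the intercept offset:

```python
# One-sided tail probability of obtaining an offset of 0.49 dex by chance,
# given a symmetrised 1-sigma error of ~0.42 dex on Delta_R.
from scipy.stats import norm

delta_r = 0.49
sigma = 0.5 * (0.44 + 0.41)            # symmetrised 1-sigma error
p_chance = norm.sf(delta_r / sigma)    # one-sided Gaussian tail
print(f"{p_chance:.2f}")               # ~0.12
```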
To assess the predictive power of Eq. ( 8), we have compared its predicted Ṁ -values with Ṁ CO -values derived for some wellknown Galactic M-type supergiants towards which several rotational CO lines have been observed (α Ori, µ Cep, VX Sgr, and VY CMa; see App.G).The analysis presented in App.G proves that Eq. ( 8) predicts the gas mass-loss rate for M-type red supergiants with effective temperature between ∼3200 -3800 K with good accuracy, the average difference only being ∼30% (see Table G.1).
Moreover, we can use the derived Ṁ CO values to derive the gas-to-dust ratio in the winds of individual sources.This ratio can be determined by comparing Ṁ SED -values with Ṁ COvalues (adjusted for differences in wind speed and distance, as used by different authors).Our analysis reveals a gas-to-dust ratio of 235 ±71 for the RSGC1 sources.For the Galactic sources, the gas-to-dust ratio ranges between ∼200 -550.The exception is the extreme RSG VY CMa for which a low ratio of ∼20 is derived, but there are indications that that ratio is changing throughout its mass-loss history (see App. G).
Remarkably, the new Ṁ_CO values for the M-type red supergiants F01, F02, F03, and F04 are smaller than predicted by all empirical mass-loss rate prescriptions derived prior to 2020 and used in stellar evolution codes (see right panel in Fig. 2). Again, (part of) the explanation is that all empirical Ṁ relations shown in Fig. 2 are based on SED analyses that are prone to uncertain gas-to-dust ratios and expansion velocities, and that samples of stars will be biased by a large fraction of stars that experience binary interaction. In addition, those previous studies were often biased towards samples with high mass-loss rate objects, for which the infrared excess is easier to detect and model, as acknowledged by those authors; see, for example, the discussion in van Loon et al. (2005). Moreover, distances, and hence corresponding luminosities, are very uncertain for Galactic samples. This latter caveat was avoided in the studies by Beasor et al., who focused on open clusters, which eventually led in 2020 to the RSG mass-loss prescription given in Eq. (3) (Beasor et al. 2020) and shown as the full grey line in Fig. 2. The only theoretical mass-loss rate prescription for RSGs is that derived by Kee et al. (2021). That relation can fit the RSGC1 data under the condition of an atmospheric turbulent velocity of ∼15±1 km s⁻¹, which is lower than the values quoted in Table 2 of Kee et al. (2021) for stars with mass around 25 M_⊙.
In that regard, it is also worth turning our attention to F13 and to the mass-loss rate prescriptions of van Loon et al. (2005) and Goldman et al. (2017), which were derived for dusty RSGs, some of which display clear OH maser action. The relation of Goldman et al. (2017) yields a mass-loss rate that is a factor ∼10 higher than what we derive, although we need to remark that the pulsation period estimated from the period-luminosity relation given by De Beck et al. (2010) is very uncertain. The van Loon et al. (2005) relation predicts a mass-loss rate for F13 of 5.8×10⁻⁵ M_⊙/yr, only a factor 1.4 larger than our derived Ṁ_CO (see Fig. 2). van Loon et al. (2005) discussed that their recipe overestimates mass-loss rates for Galactic 'optical' RSGs, on average by a factor ∼2.8 (Mauron & Josselin 2011) with a standard deviation on that value of ∼2.5, hence in line with our result for F13.
Phase change in mass loss
The amount of mass lost during the RSG phase, its speed, and how soon before core collapse the material is removed can have a dramatic effect on the resulting supernova light curve and spectrum (Smith et al. 2009). For luminosities above 10⁵ L_⊙, a wind mass-loss rate ≤2×10⁻⁵ M_⊙/yr implies that the nuclear burning rate exceeds the wind mass-loss rate (see right panel in Fig. 2), and hence that core-He burning is dominating the evolution of F01, F02, F03, and F04 (see Fig. 2). Accounting for the uncertainty in L_bol (Table 1) and Ṁ_CO, only F13 is above the boundary where the wind mass-loss rate dominates the star's evolution. This outcome is in line with the results from van Loon et al. (1999) and Javadi et al. (2013), who suggested that red supergiants come in two flavours: those dominated by nuclear burning, which takes about 75% of the RSG lifetime, and those dominated by intense mass loss, taking ∼25% of the RSG lifetime.
The measured wind speed of the four RSGC1 sources F01, F02, F03, and F04, whose evolution is dominated by nuclear burning (see Fig. 2), is 11±3 km/s.However, in the case of source F13, the wind speed is significantly higher, at ∼22 km/s.The measured wind speeds conform to the wind speed-luminosity relation (v ∞ ∝ ZL 0.4 ) established by Goldman et al. (2017) for OH/IR stars in the Large Magellanic Cloud (LMC).However, they are noticeably lower compared to the Galactic samples (see Fig. 17 in Goldman et al. 2017).This finding aligns with the outcomes reported by Davies et al. (2009), who identified a subsolar iron content in the RSGC1 sources and indicated a metallicity between Z = 0.008 and Z = 0.02.This preliminary alignment with the relation proposed by Goldman et al. (2017) for lower metallicities tentatively corroborates the findings of Goldman et al. (2017) of expansion velocities being consistent with the predictions of dust-driven wind theory (van Loon 2000).
One advantage of our study is the ability to observe essentially the 'same' star at different stages of post-main sequence evolution within a single cluster.When considering the changes in mass-loss rate and wind speed, this leads to the hypothesis that RSGs may undergo one or multiple phase changes in mass loss during their RSG evolution.It is conjectured that there could be intense and potentially eruptive mass loss occurring for a shorter period of the RSG lifetime, which disrupts the otherwise more tranquil mass-loss process that takes place over a larger portion of the RSG branch.For further discussion on this topic, we refer to Sect.4.3.
Implications for stellar evolution
Accurate stellar mass-loss predictions are fundamental for stellar evolutionary models, in particular for the prediction of the nature of the end-products. The outcome of this study impacts the formation frequency of core-collapse supernovae of type IIP, black holes, and neutron stars, and hence the frequency of gravitational wave events that can be detected with current (and to be developed) detectors. For massive stars with initial mass ≲30 M_⊙, winds during the main-sequence phase will only remove ≤0.8 M_⊙ (Beasor et al. 2021). Hence, the only evolutionary phase during which these massive stars can potentially lose a significant amount of mass is the cool red supergiant phase, which for a star of ∼25 M_⊙ lasts ∼10^5.5 yr (Meynet et al. 2015).
The MESA evolutionary code (Paxton et al. 2019) was used to compute the evolution of a 25 M_⊙ star until core carbon depletion. Following Brott et al. (2011), overshooting was modelled by extending the convective region by 0.335 pressure scale heights, while winds during the main sequence and the Wolf-Rayet phase are modelled using the prescriptions of Vink et al. (2001) and Hamann et al. (1995). For temperatures below 10⁴ K we switch to either the prescription of Nieuwenhuijzen & de Jager (1990) or the newly derived one from Eq. (8). Composition and opacities are determined from solar metallicity and metal fractions as given by Asplund et al. (2009).
Both simulations evolve to very different endpoints, with the one using the Nieuwenhuijzen & de Jager (1990) wind prescription managing to strip its outer stellar envelope and evolving to become a hot Wolf-Rayet star. Using the new prescription of Eq. (8), only 1.93 M_⊙ of the hydrogen-rich stellar envelope is lost (out of a total envelope mass of 11.97 M_⊙), implying that such massive stars would explode as RSGs upon core collapse (see Fig. 5). Following Smith et al. (2009), this leads to the suggestion that if F01, F02, F03, or F04 were to explode in their current RSG phase, this would produce a Type II SN with a limited level of interaction with the circumstellar material, without enough inertia to substantially decelerate the blast wave and with no substantial narrow Hα emission from the post-shock gas. Mass-loss rates of order ∼11.97/1.93 = 6.20 times higher would be needed to successfully strip the H-rich stellar envelope. In that sense, F13 might be an important outlier mass-loss wise, since its Ṁ_CO is more than an order of magnitude larger than for the other four RSGs. The retrieved Ṁ_CO value of F13 does not fit Eq. (8): it lies ∼10 times above the prescription. Both the strong CO(2-1) line of F13 and its high near-infrared extinction (Davies et al. 2008) indicate that F13 is surrounded by a lot of circumstellar material produced by a wind with a mass-loss rate higher than the other four RSGC1 sources. It might be that the luminosity of F13 is underestimated, given the high Ṁ_CO and the fact that it is anomalously red compared to the other RSGs in RSGC1. For F13 to fit the relation of Eq. (8), log(L_bol/L_⊙) should be 5.74, which would imply that it would be brighter than any known RSG and far above the Humphreys-Davidson limit. Another possibility is an evolutionary phase change. Davies et al. (2008) have shown that F13 is spatially coincident with H₂O, OH, and SiO maser emission. Such maser emission is often associated with evolved stars that are long-period variables (periods of 300-500 days) and have winds with a very high mass-loss rate. This induces the suggestion that another mass-loss mode has become active in F13. Among a coeval sample of RSGs one may expect to see enhanced mass loss, and hence masers, in those objects furthest along their evolution, which have high L_⋆/M_⋆ ratios, close to the (modified) Eddington limit, so that only small changes in the atmospheric structure (for example, caused by pulsations or by changes in the high opacity due to variations in the hydrogen ionisation beneath the stellar surface) make these stars unstable to more furious episodic mass ejections. Eq. (8) would then represent the more quiescent mass-loss process, while F13 is an example of an RSG that undergoes stronger, potentially eruptive, mass loss.
In that sense, F13 shares its status as an extreme RSG with VY CMa (see App. G), the only Galactic RSG for which the Ṁ_CO prediction using Eq. (8) is a factor ∼6.6 too low. VY CMa is one of a small class of evolved massive stars characterised by extensive asymmetric ejections and multiple high mass-loss events lasting several hundred years (Decin et al. 2006; O'Gorman et al. 2012; Kamiński 2019; Humphreys et al. 2021), attributed to large-scale surface and magnetic activity. Both F13 and VY CMa are indicative of a stronger, potentially eruptive, mass-loss process that breaks from the prescription given in Eq. (8); they demonstrate that one should be careful about applying any Ṁ prescription, including Eq. (8), more globally in stellar evolution models, as such prescriptions will not reproduce the behaviour of those extreme RSGs; see also the recent discussion by Massey et al. (2022).
If we consider F to be the fraction of time spent in the RSG phase with stronger mass loss, and B the enhancement of the mass-loss rate during that stage as compared to the more quiescent mass-loss rate (here B = 10.43), we can estimate that the enhancement of a lifetime-averaged mass-loss rate would be equal to (1 − F) + B × F. For an enhancement of 6.20 and B = 10.43, we need F = 0.55, so 55% of the lifetime in the strong mass-loss rate RSG phase to successfully strip the H-rich stellar envelope. However, this derived fraction of F = 55% is much higher than what one derives from using number statistics to determine the fraction of time spent in the strong mass-loss rate stage. That is, out of the 14 RSGs in RSGC1, only one is observed in the strong mass-loss rate stage. To assess the likelihood of observing such a ratio (1 out of 14), a binomial distribution can be employed. By conducting a Bayesian analysis with a flat prior for F, the posterior distribution of F can be determined, which corresponds to a beta distribution with shape parameters α = 2 (the number of stars in the high mass-loss rate phase + 1) and β = 14 (the number of stars in the quiescent RSG mass-loss rate stage + 1). Consequently, a 90% credible interval for F is obtained as 10.9^{+17}_{−8.5}%. This interval represents the median value along with the range between the 5th and 95th percentiles.
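Both numbers quoted above follow from short calculations; a minimal Python check, assuming a flat prior, a binomial likelihood for one strong-phase star out of 14, and the enhancement algebra written above:

```python
# Fraction of the RSG lifetime needed in the strong phase, and the
# Beta(2, 14) posterior on F from the 1-out-of-14 number statistics.
from scipy.stats import beta

B, target = 10.43, 6.20
F_needed = (target - 1.0) / (B - 1.0)
print(F_needed)                                  # ~0.55

post = beta(2, 14)                               # flat prior + binomial likelihood
p05, p50, p95 = post.ppf([0.05, 0.50, 0.95])
print(p50, p50 - p05, p95 - p50)                 # ~0.11, -0.085, +0.17
```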
This tension between the outcome of number statistics and the fraction of time needed to strip the H-envelope during the RSG phase, when considering both the quiescent and the (potentially eruptive) high mass-loss rate phase, induces the suggestion that the RSGs in RSGC1 will not be able to strip their entire H-envelope and will explode as RSGs upon core collapse. For the full H-rich envelope to be stripped, mass-loss rates much stronger than observed for F13 would need to be invoked. A potential mechanism thereof has been explored by Heger et al. (1997), who suggested that sufficiently evolved stars become dynamically unstable and exhibit large-amplitude pulsations with periods of the order of the Kelvin-Helmholtz time scale, which eventually become strong enough to dynamically eject shells of matter from the stellar surface, implying the loss of (most of) the H-rich envelope. Simulations of episodic mass ejections from common-envelope binaries yield a similar outcome of dynamically unstable mass ejections with periods in the range of a few years to a few decades, leading to a time-averaged mass-loss rate of the order of 10⁻³ M_⊙/yr (Clayton et al. 2017). The probability of observing RSGs in such a stage of large-amplitude pulsation and associated strong mass loss (≳10⁻⁴ M_⊙/yr) is not very large; however, such events have a marked consequence on the appearance of the supernova explosion.
Similarly, F15 is an important outlier, as being the only post-RSG in the RSGC1 sample of Davies et al. (2008).F15 matches the picture of being a post-RSG star, with its luminosity of log(L bol /L ⊙ ) ∼ 5.36 matching that of the brightest RSGs in RSGC1.Even though no mass-loss constraint could be made, F15 could be indicative of the mass-loss process operating in the RSG shutting off as the star becomes hotter (T eff ∼ 6850 K) and transitions to a radiative envelope.If F15 is indeed a post-RSG, it indicates that the RSG mass loss managed to strip the H-rich stellar envelope to the point that it would evolve to the blue.
Conclusions
The ALMA detection of CO(2-1) emission towards RSGs residing in the open cluster RSGC1 provides us with a powerful diagnostic to derive the gas mass-loss rates of those RSGs. Of importance is that the RSG cluster stars are coeval, which allows stars to be studied with the same initial conditions: mass, metallicity, local environment, etc. Since the cluster stars all have roughly the same initial masses (of ∼25 M_⊙, within a few tenths of a solar mass), the evolutionary path should be the same, allowing the luminosity to be used as a proxy for evolution. Based on the CO(2-1) detections, we propose a new mass-loss rate relation for M-type RSGs with effective temperatures between ∼3200 and 3800 K that scales with luminosity and mass. The new Ṁ-luminosity relation proposed in Eq. (8) is validated against other well-known Galactic RSGs towards which multiple CO rotational lines have been observed.
The gas mass-loss rates derived from CO diagnostics are systematically lower than the values retrieved from an SED analysis, on which current stellar evolution codes are based (Ṁ_CO < Ṁ_SED). Implementing our new mass-loss rate relation will impact the frequency of type IIP SNe, neutron stars, and black holes. In particular, models suggest that the RSG mass loss would not allow single massive stars to evolve back to the blue and explode as H-poor SNe. However, the mass-loss rates of both the RSG F13 in RSGC1 and the well-known Galactic extreme RSG VY CMa are almost an order of magnitude higher than predicted by Eq. (8), which is indicative of a stronger mass-loss process not captured by Eq. (8). Statistical reasoning implies that the RSGs in RSGC1 will not be able to strip their entire H-rich envelope and will explode as RSGs upon core collapse.
Only five RSGs in RSGC1 were detected during the current observation run.A completion of this RSGC1 study with ALMA should allow a more accurate mass-loss rate relation to be derived, that can be checked against lower mass-loss rate RSG stars with lower luminosities.Given the fact that ALMA is now 50 -100% more sensitive than in 2015, such deeper observations are now well feasible and would allow the observation of large samples that are, as much as possible, uniform and unbiased.In addition, we intend to return to observe other clusters with slightly different ages, such as RSGC2 (Davies et al. 2007), which will then allow us to repeat this study but for RSGs with slightly different masses.Ultimately, we will be able to provide the time-averaged mass-loss rates and total mass lost during the RSG phase as a function of initial mass, crucial inputs for the theory of stellar evolution, and SN progenitors.
Appendix A: ALMA observation
For each star, the input for the ALMA observations is listed in Table A.1. For the five sources detected in the ALMA band 6 continuum data and in CO(2-1) line emission, Table A.1 lists the coordinates of the peak of the continuum emission, the peak intensity, and the rms noise of the continuum and CO(2-1) observations. For each source, Davies et al. (2008) estimated the local standard of rest velocity, v_LSR, based on high spectral-resolution observations of the 2.293 µm CO band head (see Table A.1), with an uncertainty of ±4 km s⁻¹. These v_LSR values were used as input for the ALMA observations. For sources F01, F02, F04, and F13, Nakashima & Deguchi (2006) also measured v_LSR using SiO maser data, with an uncertainty of ±2 km s⁻¹ (see last column in Table A.1). Using the ALMA data, we have re-assessed the v_LSR values based on the considerations that (i) the CO line profile is symmetric around the v_LSR value (see also footnote 2), and (ii) for spatially resolved sources for which the CO optical thickness is not too high (typically, a maximum τ_ν of ∼2), the CO profile is two-horn like (see Fig. 1). Those values are listed in Table 1 and in Table A.1. Given the spectral resolution of the data, the uncertainty on the v_LSR values is ±3 km s⁻¹. The newly derived v_LSR values for all sources agree with the values from Nakashima & Deguchi (2006). For the sources F03, F04, and F13, the v_LSR values also agree with Davies et al. (2008); F02 is just within the uncertainty range of both studies, but the value is off for F01. In particular, in the case of F01 Davies et al. (2008) list a value of 129.5 km s⁻¹, but the ALMA CO and SiO maser data of Nakashima & Deguchi (2006) indicate a value around 117.5 km s⁻¹. Notes. The first part of the table lists the target identifier, the input coordinates for the ALMA observations, and the v_LSR as derived by Davies et al. (2008). The second part of the table lists, for the four sources detected in the ALMA band 6 continuum data and in CO(2-1), the coordinates of the peak of the continuum emission, the peak flux, and the rms noise of the continuum observations. The third part lists the CO(2-1) line rms noise and the v_LSR deduced from the ALMA CO(2-1) data. The last part lists the v_LSR values deduced by Nakashima & Deguchi (2006) from the analysis of SiO masers. (a) The stellar IDs from Figer et al. (2006). (b) From Davies et al. (2008). (c) From Nakashima & Deguchi (2006).
Notes. The second and third columns list, respectively, the intercept a and slope b with their standard deviations as derived by Beasor et al. (2020).
The fourth column lists the intercept a and its standard deviation for a slope fixed to b = 4.8±0.6. (Table G.1 entries for VX Sgr and VY CMa.) Notes. Listed are the stellar luminosity L_⋆, the effective temperature T_eff, the spectral type, the stellar radius R_⋆, the initial mass M_ini, the distance D, the local standard of rest velocity v_LSR, the terminal wind velocity v_∞, the radius of the CO envelope R_out, the gas mass-loss rate Ṁ_CO, the mass-loss rate as predicted from Eq. (8), and the mass-loss rate predicted using the luminosity-Ṁ relation of Beasor et al. (2020) (Eq. (3)). (i) Estimates of µ Cep's distance vary between 390±140 and 1818±661 pc (Montargès et al. 2019, and references therein). We here take the average value of the distance estimate of Montargès et al. (2021) (based on physical considerations on the relative size of the MOLsphere, D = 641^{+148}_{−144} pc) and of Davies & Beasor (2020) (based on the average parallax of neighbouring OB stars, under the assumption that the RSG is part of the same association; D = 940^{+140}_{−40} pc), i.e., D = 790 pc. For a change of distance from 790 pc to 390 pc, the retrieved Ṁ_CO changes from 3×10⁻⁶ to 1×10⁻⁶ M_⊙/yr. (j) The wind of Betelgeuse has various components. O'Gorman et al. (2012) propose that the S2 flow of α Ori extends out to a radius of 17″ (or 800 R_⋆), although the measured intensity distribution of CO emission as a function of projected radius extends to ∼8.5″ (∼400 R_⋆). Changing the radius of the CO envelope from 800 R_⋆ to 400 R_⋆ changes the retrieved Ṁ_CO from 0.4×10⁻⁶ M_⊙/yr to 0.5×10⁻⁶ M_⊙/yr. (k) Montargès et al. (2019) have used the NOEMA interferometer to obtain a channel map of the CO(2-1) emission of µ Cep at a spatial resolution of 0.″92×0.″72, with a maximum recoverable scale (MRS) of 8″. Emission is detected up to ∼3.″5 from the central star (or ∼600 R_⋆). (l) For a mass-loss rate around 2×10⁻⁵ M_⊙/yr, the predicted CO photodissociation radius is ∼3500 R_⋆ (Groenewegen & Saberi 2021), or a full extent of 17″. The only CO interferometric data currently available for VX Sgr have been obtained in the framework of the ALMA ATOMIUM large program. However, the MRS of those data is only ∼8-10″ and hence cannot be used to estimate the CO envelope size. For that reason, we resort to the predictions of Groenewegen & Saberi (2021).
Fig. 1 .
Fig. 1. CO(2-1) line profiles of 5 red supergiants in RSGC1. The ALMA data are plotted as red histograms. Synthetic line profiles (see Sect. 3) are overplotted as black solid lines. The grey dashed regions indicate frequency regions that are contaminated by the ISM, a strong noise background, or other genuine emission (see App. A). The vertical dashed blue lines indicate the local standard of rest velocity, v_LSR, as deduced from the ALMA CO data (see App. A). The dotted and dashed-dotted blue lines indicate the v_LSR value as determined by Davies et al. (2008) and Nakashima & Deguchi (2006), respectively (see Table A.1). The noise in the spectrum is 1.2-1.4 mJy (see text for derivation) and is shown as a (red) error bar in each panel. The dotted black line in the upper left panel is an alternative fit to the CO(2-1) line profile of F01, as discussed in App. C.
Fig. 1 shows the observed line profiles. The noise in the spectrum (clear of ISM contamination) is given by the σ_rms values from Table A.1 divided by the square root of the number of beams, resulting in a spectral noise of ∼1.2-1.4 mJy. ISM contamination is visible both in the CO(2-1) channel maps (Fig. C.3-Fig. C.7) and in the line profiles (see the dashed regions in Fig. 1).
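As a worked check in Python, assuming a Gaussian synthesised beam of 0.″49 × 0.″37 and the ∼1.9 mJy per-channel map rms quoted earlier:

```python
# Spectral noise in a 0.75"-diameter aperture: per-channel map rms divided by
# the square root of the number of (Gaussian) beams inside the aperture.
import numpy as np

bmaj, bmin = 0.49, 0.37                      # synthesised beam FWHM [arcsec]
beam_area = np.pi * bmaj * bmin / (4.0 * np.log(2.0))
aperture_area = np.pi * (0.75 / 2.0) ** 2    # circular extraction aperture
n_beams = aperture_area / beam_area

sigma_map = 1.9                              # per-channel rms [mJy/beam]
print(sigma_map / np.sqrt(n_beams))          # ~1.3 mJy, within the 1.2-1.4 range
```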
Notes.
Listed are the stellar luminosity L_bol, the effective temperature T_eff, the spectral type, the stellar radius R_⋆, the local standard of rest velocity v_LSR (see discussion in App. A), the dust condensation radius R_dust, the terminal wind velocity v_∞, and the mass-loss rate as deduced from the CO(2-1) line, Ṁ_CO, and as deduced from an SED analysis by Beasor et al. (2020), Ṁ_SED. The last two columns compare a density measure, Ṁ/v_∞, based on the values deduced in this study (column 11) and used in Beasor et al. (2020), who assumed a terminal wind velocity of 25 km s⁻¹ (column 12). (a) From Davies et al. (2008). (b) As deduced from the ALMA CO(2-1) lines. The uncertainties on Ṁ_CO are discussed in App. C-D. (c) From Beasor et al. (2020). (d) F13 was classified as K2 by Davies et al. (2008). However, it later became clear that the CO-spectral type correlation is flawed if a star has a strong stellar wind, and that under these circumstances the spectral-type to T_eff relation from Levesque et al. (2006) also does not hold. We therefore use the effective temperature and spectral type classification from Messineo et al. (2021). (e) Beasor et al. (2020) have updated the luminosities of the RSGC1 sources published by Davies et al. (2008). In particular, Beasor et al. (2020) derived log L_bol(F01) = 5.58 and log L_bol(F03) = 5.33. Changing the luminosities to the values from Beasor et al. (2020) induces an increase in Ṁ_CO of 10% for F01 and of 3% for F03.
Fig. 2 .
Fig. 2. Mass-loss rate as a function of luminosity. Various mass-loss rate relations derived for red supergiants are shown for a fixed stellar mass of 10 M_⊙ (left panel) and of 25 M_⊙ (right panel) for an assumed effective temperature of 3450 K (Reimers 1975; de Jager et al. 1988; Nieuwenhuijzen & de Jager 1990; Salasnich et al. 1999; van Loon et al. 2005; Beasor et al. 2020; Goldman et al. 2017; Kee et al. 2021). Empirical mass-loss rate relations are displayed with a solid line; the theoretical relation of Kee et al. (2021) is shown with a dashed line for 2 different values of the atmospheric turbulent velocity v_turb^atm. For the empirical relation of Goldman et al. (2017), we use the RSG period-luminosity relation as given in Eq. (C.17) of De Beck et al. (2010), which is valid for pulsation periods between ∼300 and 800 days. The corrected Ṁ_SED relation based on the data of Beasor et al. (see Eq. (6)) is shown as a dotted grey line; the new Ṁ_CO relation derived from the ALMA CO(2-1) measurements of the M-type RSG winds in RSGC1 (Eq. (8)) is shown as a full red line. The rates at which hydrogen and helium are consumed by nuclear burning are shown as thick dashed-triple dotted lines; the single-scattering radiation pressure limit for an expansion velocity of 12 km s⁻¹ is shown as a dashed dark grey line. Stellar mass loss rules the evolution of RSG stars if the wind mass-loss rate exceeds the nuclear burning rate, as indicated by the light-blue region; the nuclear-burning dominated region is indicated by the light-orange region. The red triangles in the right panel indicate the (L, Ṁ) values as derived in Sect. 3.1 from the ALMA CO(2-1) line profiles of F01, F02, F03, F04, and F13.
Fig. 3 .
Fig. 3. Ṁ SED -luminosity relations for the four open clusters RSGC1, NGC 2100, NGC 7419, and χ Per.The coloured open symbols represent the (L bol , Ṁ SED)-values for the four clusters studied by Beasor & Davies (2016) and Beasor et al. (2020); error bars indicate their most conservative error estimates (see App. E).The dashed lines in the left panel show the individual fits to Eq. (4) where the values of both a and b are free parameters in each fit.In the right panel, we force the slope b to be identical for all clusters and fit Eq. (5).The best-fit values are listed in the legend and in Table2.The shaded areas provide a visualisation of the uncertainties of the fits.They were constructed through MC as the 68%-locus of all best-fit relations derived from a fit to mock data sets used to assess the errors on the best-fit parameters.
Fig. 4 .
Fig. 4. Ṁ-luminosity relations parametrised in terms of log L_bol and M_ini and fitted jointly for the four open clusters RSGC1, NGC 2100, NGC 7419, and χ Per. The left panel presents the best fit to the Ṁ_SED measurements of the four clusters (Eq. 7), while the right panel includes in the fit the Ṁ_CO values of four stars in RSGC1 (purple downward triangles) using Eq. (8). Symbols, colours, and shades have the same meaning as in Fig. 3, with the exception of the addition of the Ṁ_CO measurements derived in Sect. 3.1 using the L_bol values of Davies et al. (2008).
Fig. 5 .
Fig.5.Evolutionary track as computed with the MESA evolutionary code for a star of initial mass 25 M⊙ applying the RSG mass-loss rate prescription from Nieuwenhuijzen & de Jager (1990) (blue) and from Eq. (8) (orange).To discern long-lived versus short thermal evolution phases, dots have been added every 10 5 years; tracks go till carbon depletion indicated by the open square.The inset at the RSG, for a range in log T eff of 0.01 dex, shows how both tracks digress there.
Table E.1. Values used for Fig. 3 and Fig. E.1, reproduced from Table 3 of Beasor & Davies (2016), Table 2 of Beasor & Davies (2018), and Table 2 of Beasor et al. (2020).
Fig
Fig. E.1. Ṁ-luminosity relations as determined by Beasor et al. (2020). The coloured symbols represent the (L_bol, Ṁ_SED) values for four open clusters as derived by Beasor & Davies (2016) and Beasor et al. (2020); error bars indicate their most conservative error estimates. The straight lines in both panels show the individual straight-line fits to each relation log(Ṁ_SED/M_⊙ yr⁻¹) = a + b log(L_bol/L_⊙) for all clusters in the sample. In the left panel, the dashed lines show the Ṁ-L_bol relation using the values for the offset and slope as given in Table 4 of Beasor et al. (2020); the full lines in the right panel show the fits to the Ṁ-L_bol relation once the gradient is fixed to b = 4.8, following Eq. 4 of Beasor et al. (2020) and using the values for the initial mass as given in Table 1 of Beasor et al. (2020). The red dotted line in the left panel shows the fit to RSGC1 for an intercept a = -53; see footnote 7. The red dashed-triple dotted line in the right panel shows the fit to RSGC1 following Eq. (E.1).
Fig
Fig. F.1. Ṁ-luminosity relations for the four open clusters RSGC1, NGC 2100, NGC 7419, and χ Per. Dashed gold, red, black, and blue lines present the best fit of Eq. (5) to the (L_bol, Ṁ_SED) measurements of the four clusters (shown as open symbols with corresponding colour), and the dashed purple line the best fit to the (L_bol, Ṁ_CO) values of four stars in RSGC1 (filled purple downward triangles).
Table 1 .
Stellar and CSE parameters for the five RSGs in RSGC1 for which CO(2-1) emission was detected.
Table 2 .
Best-fit parameters for the Ṁ SED-luminosity relation for each cluster.
Table A .
1. Data for the stars observed within the ALMA proposal 2013.1.01200.S.
Table E .
2. Best-fit parameters for the Ṁ SED-luminosity relation for each cluster derived by fitting Eq. (4) to the data.
Table G.1. Stellar and CSE parameters for the red supergiants observed in various CO rotational lines by De Beck et al. (2010). | 15,425.2 | 2023-03-16T00:00:00.000 | [
"Physics"
] |
Mechanical and X-ray computed tomography characterisation of a WAAM 3D printed steel plate for structural engineering applications
• Construction industry can use WAAM 3D printed technology to optimise steel components. • Mechanical properties are benchmarked against the material properties in Eurocode EN 1993-1-1. • X-ray Computed Tomography is employed for quality assurance of the steel's integrity. • Ductility of the WAAM steel limits this steel to static applications in building structures. This paper reports an investigation to improve fundamental knowledge and understanding of 3D printing of steel in structural engineering. The process method examined is Wire and Arc Additive Manufacturing (WAAM) for the manufacturing of large-sized components. The mechanical properties of 3D printed Union K 40 GMAW steel are determined and benchmarked against measured properties of EN8 medium-carbon steel. The results presented and discussed are from tensile coupon testing and X-ray Computed Tomography, with the latter inspecting an internal volume of the WAAM steel for: printing orientation; mapping porosity; interfacial variation between the printed layers. The key finding is that the mechanical properties of the WAAM steel satisfy the requirements for a structural steel grade for building structures as specified by Eurocode 3 (EN 1993-1-1). © 2020 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Introduction
Over the last three decades, the potential of 3D printing has evolved significantly, offering efficiencies and providing optimised structural solutions from the exploitation of this industrial revolution in manufacturing processing. Sakin and Kiroglu [1] observed in 2017 that 3D printing had become one of the fastest-growing technologies in manufacturing. Aerospace, oil and gas, marine and automobile applications were found by these authors to be attracting a lot of interest, because of flexibility and freedom in part design and enhanced product complexity for lightweight engineered solutions [2][3][4]. The next engineering sector to benefit from 3D printing is the construction industry [5]. Wu et al. [6] observed that 3D printing technologies can have significant benefits in reducing construction times; minimising costs for improved affordability; reducing waste; and increasing design flexibility.
There have been several 3D-printing projects that illustrate effective redesign as another benefit to the construction industry. For example, previous projects for non-structural components include a 1.5 × 4 × 0.1 m aluminium window frame with its steel/aluminium 0.25 × 0.3 × 0.01 m curtain wall bracket [7], and a 0.6 m wide and 0.8 m high nylon decorative joint to substitute an existing steel cladding connection [8]. To demonstrate the possibilities of Additive Manufacturing (AM) processing, there are projects for scaled-down components, such as: 50 × 50 mm cross-section stainless steel sub-columns [9]; micro-lattice structures [10]; an optimised tensegrity node structure [11]; topology optimisation for stainless steel three-branch joints [12].
Using the capability of AM to print complex 3D shapes and its ability to print a component of more than one material enabled Izard et al. [13] to optimise a damper unit for civil engineering based on an optimised energy dissipation analytical model for achieving negative stiffness with significant energy dissipation. Another relevant enhancement is in the size of 3D printed components, with the 12 m span steel pedestrian bridge in the Netherlands [14] taking AM for construction to a new scale. To provide AM components for on-site construction applications, the company Autodesk has developed a mobile cabinet for Wire Arc Additive Manufacturing (WAAM) processing with steel welding materials [15].
Today, the limitations that are potentially delaying routine field applications range from the lack of technical information on the quality of the final 3D printed components to the lack of detailed information on achieving the highest material quality from the printing processes themselves [16]. In addition, for 3D printed steel components there is an absence of standards (such as for the design of structures, see EN 1993-1-1:2005 [17]) and other guidance to benchmark/evaluate their structural performance as novel-shaped components over their intended design service lives.
The aim of this paper is to build confidence in WAAM-produced 3D printed steel, which inherently uses a welded grade of steel and can exhibit processing porosity, a rough surface and a non-homogeneous micro-structure. The new contribution will show that the welded steel possesses mechanical properties that can be in accordance with the material specifications found in Section 3 of the steel structural Eurocode BS EN 1993-1-1:2005 [17]. By demonstrating that WAAM steel does, indeed, possess material properties that satisfy this Eurocode standard, it will be acceptable to employ, where appropriate to do so, the structural design procedures in the standard to design steel structures having WAAM components of relatively large sizes. Fig. 1 shows the three stages of redesigning, enhancement, and construction to be employed for component optimisation when using 3D printing in civil engineering works. Often, the required components/structures for any construction project are known, and conventional manufacturing has, for many years, provided cost-effective engineered solutions for construction with the materials of steel, concrete, timber, etc. With 3D printing the opportunity is present to redesign steel components and to offer the capability to manufacture steel components at sizes found in conventional steelwork structures. These major benefits are also enhanced by adding the potential advantages of: optimising structural shapes with complex geometries; printing-off bespoke and single components; employing automation; removing any requirement for highly skilled welders. To transfer all these AM advantages into structural engineering applications it is necessary to build confidence in AM technology, and to achieve this goal, research and development projects, like that reported in this paper, are on-going to fill in known gaps in our current knowledge and understanding.
Wire Arc Additive Manufacturing (WAAM)
There are a variety of AM processes that the construction sector could apply during this early stage of application development [16]. AM methods for metal components can be split into the two technologies of either the Powder Bed Fusion (PBF) technology or the Directed Energy Deposition (DED) technology [18], being based on the source of energy and the input of raw construction materials as either metal powders (in PBF) or wires (in DED) [2,3].
To avoid a limitation on maximum spatial dimensions, the DED process of WAAM is found to be more appropriate. Whereas the component size manufactured by a PBF method is always limited by the process chamber size, the robotic arm (with six-axis motion) in the WAAM process can readily be programmed to 3D print complex and much larger components. Size is now simply limited by the reach of the robot arm and/or the movement of the robot unit itself. Additionally, WAAM is not restricted to the limitations of an overhead gantry system mentioned in the literature [19,20], which involve issues for accessibility, especially on-site, and for transportation and installation of the system on-site. Other disadvantages of PBF 3D printing are that this AM method can require a sintering stage, which is an extra processing/energy stage, and its deposition rate of steel at 0.2 kg/hr is 45 times less than the 9 kg/hr achieved by DED processing [21].
The three photographs in Fig. 2 show the WAAM process equipment with Autodesk in the UK, where the heat source melts a steel wire being laid down by the robot. It is the combination of an electric arc and wire as feedstock which gives this DED process the name WAAM [22]. In the process the welding torch, seen in Fig. 2(a), dispenses the steel wire as molten metal at a specified rate, whilst moving in one direction (as shown by the yellow arrow) and builds-up a component's volume, first in the horizontal plane, and then in the vertical direction layer-by-layer. Although not relevant to the characterization work of the steel reported herein, it is important to understand that WAAM processing is well-suited for adding complex-shaped features into large components [2].
Taking a historical perspective on the development of iron-based materials, it is seen that cast iron [23] has similar physical features to WAAM steels, such as: shrinkage, residual stresses, porosity, roughness of the surfaces and dimensional tolerances of the final component (if no additional machining is employed). These similarities allow cast iron to be selected as a reference construction material because it has been successfully used in civil engineering structures.
The characterization work presented in this paper is used to investigate a WAAM produced steel by determining short-term mechanical tensile properties and obtaining the internal microstructure by using X-ray Computed Tomography (XCT) analysis. The non-destructive testing method provides a quality control characterization that can be used to establish a scientific association between the level of porosity and the steel's material properties, which are compared with those for steel grades scoped by the steel structural Eurocode BS EN 1993-1-1:2005 [17].
WAAM processing and steel sample
To 3D print the sheet of steel seen in Fig. 3, the processing parameters using the WAAM equipment shown in Fig. 2 were guided by the manufacturer Autodesk. The wire of diameter 1.2 mm was type Union K 40. This is a Gas Metal Arc Welding (GMAW) solid steel wire, which according to the supplier has: a 25% strain at ultimate failure (e_u); a yield strength (f_y) of 360 N/mm² at 0.2% proof strain (e_y); an ultimate tensile strength (f_u) of 400 N/mm². GMAW solid wire electrode is often used for welding unalloyed and low alloy steels with shielding gas. This is because it is especially suited to electrolytically and hot dip galvanized thin sheets that are used primarily in vehicle and autobody fabrication. For this research Autodesk printed a Union K 40 steel sheet component of dimensions 0.345 × 0.075 × 0.0205 m in approximately 10 hours of processing time. In the WAAM process the torch direction is always laid down parallel to the longer side length of the sheet (i.e. coinciding with the length 345 mm in Fig. 3), and this is the Longitudinal direction. There are five printed layers in building up the plate's mean thickness of 20.5 mm. The word layer has the meaning that the volume of steel is continuous and homogeneous. At the interface between the five layers there is a change in the steel's composition and porosity is present (as identified by XCT imaging). Note that the thickness of the sheet is not constant, with an overall surface roughness (which can be seen in Fig. 3) that is known to be dependent on the printing parameters, particularly the printing speed.
To evaluate the mechanical properties of the printed steel, the authors decided to compare the measured properties against the equivalent material properties reported in Section 3 (Materials) of EN 1993-1-1:2005 [17]. The reason for this is that if the 3D printed steel is shown to possess material properties that meet the specification of steel grades in Eurocode 3, then the WAAM-processed Union K 40 steel can be treated as an equivalent structural steel grade. Note that Union K 40 is a welding steel and therefore does not necessarily have to possess the material properties of a Eurocode 3 specified grade of steel. This paper reports on the test methods and test procedures to characterize the tensile properties and internal micro-structure using coupons cut from the sheet component shown in Fig. 3.
Machining and cutting for test sample geometries
Post-processing steps for the WAAM component could have varied between heat treatment, milling, grinding, machining, etc. One feature of WAAM printed components is that the outer surface is rough, and for the Autodesk plate this roughness can be seen as the striation lines in Fig. 3. As seen in Fig. 4(a), a grinding wheel running at a fixed speed of 1650 rev/min was used to grind down the steel to a constant 17 mm thickness. The prepared plate has flat and polished surfaces, which are needed in tensile testing for gripping and to bond on post-yield rosette and axial strain gauges. Then, as seen in Fig. 4(b), three tensile dog-bone coupons were cut out using a water-jet (aqua jet) cutting technique. Moreover, as identified by the blue ellipses in Fig. 4(c), four through-thickness rectangular coupons were cut out from the steel left in the sheet between the three tensile dog-bone coupons. This provided non-standard coupons to make a further examination of the steel having the interfacial regions from the 3D printing. Because the coupons did not receive a heat treatment process, the material properties of the steel were determined including the WAAM processing residual stress distribution. It is worth noting that post-processing heat treatments are expensive and time-consuming and for construction applications can be seen to be undesirable on cost grounds alone. Post 3D printing heat treatments might be necessary for AM components in other engineering sectors.
Figs. 5 and 6 show a dog-bone coupon based on standard BS EN 10002 [24], and a through-thickness (non-standard) rectangular coupon taken from the WAAM printed component. Table 1, with its accompanying Fig. 7, defines the dimensions for three different coupons. Column 1 is for the coupon type and coupon labelling scheme, and columns 2 to 6 report in millimetres: thickness (a_o); width (b_o); gauge length (L_o), for the part of the coupon over which the direct strain is measured; parallel length (L_c), which for dog-bone coupons is larger than L_o; overall coupon length (L_t). The first and second rows in Table 1 give the dimensions for both types of coupons of the WAAM steel. They have a size and geometry commensurate with the limited amount of available 3D printed steel and with the proportional test approach in BS EN 10002 [24]. The overall length of the specimen was limited by the printed thickness after grinding, which is 17 mm, and by the chosen coupon width for the rosette strain gauging. Accordingly, for WAAM_L_T coupons the minimum L_o for the 20 × 17 mm cross-section is 104 mm, and for the maximum coupon length of 300 mm the parallel length (L_c) is 132 mm. The labelling of the dog-bone coupons is WAAM_L_T_1, where 'L' is for the Longitudinal (wire laying down) direction, 'T' for Tensile testing and '1' is for coupon number 1. The only change with the non-standard coupons is that 'L' is replaced with 'TT' for the Through-Thickness direction, which is for when the axial strain gauging is positioned on the 'thickness' surfaces, noting that the direction of tensile load is still in the Longitudinal direction.
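As a check on the quoted minimum gauge length, the proportional coupon rule of BS EN 10002 sets L_o = 5.65√A_o; with the 20 × 17 mm cross-section this reproduces the 104 mm figure:

$$L_o = 5.65\sqrt{A_o} = 5.65\sqrt{20 \times 17\ \mathrm{mm^2}} \approx 5.65 \times 18.4\ \mathrm{mm} \approx 104\ \mathrm{mm}$$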
To provide a benchmark material a single tensile dog-bone coupon was prepared of EN 8, a medium carbon steel, also having dimensions in accordance with the proportional test approach in BS EN 10002 [24]. The third row in Table 1 shows that for the EN_8 coupon the mean cross section was 10 × 20 mm.
Test set-up and test procedure
Tensile testing was performed to determine the direct stress-direct strain relationship of the steels to ultimate failure. In the case of testing the non-standard rectangular coupons WAAM_TT_T_1 to 4, elongation for direct strain was measured by a 2620-602-200700 axial dynamic extensometer having a 12.5 mm gauge length, with a travel of ±5 mm giving ±40% strain (see Fig. 8(b)). Load was applied using a 100 kN Instron universal testing machine.
For the maximum coupon cross-section size of 20 × 17 mm (see Table 1) and an ultimate tensile strength of 440 N/mm² (Union K 40 steel) the expected ultimate tensile load is 150 kN.
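The expected ultimate load follows directly from the cross-section area and the assumed ultimate tensile strength:

$$F_{u,\mathrm{exp}} = f_u \times A = 440\ \mathrm{N/mm^2} \times (20 \times 17)\ \mathrm{mm^2} = 149\,600\ \mathrm{N} \approx 150\ \mathrm{kN}$$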
To validate the test set-up and procedure before proceeding to characterise the WAAM steel a benchmark test was performed using the same test procedure with the coupons of EN 8 steel.
The test procedure for the strength testing was in accordance with BS EN 10002 [24]. The constant stroke rate was 2 mm/min for the recommended 0.00025 s⁻¹ strain rate over the elastic stage for determination of both the upper yield strength (R_eH) and the tensile strength (R_m), which correspond to the strengths f_y and f_u, respectively, in EN 1993-1-1:2005 [17]. Testing was performed at 23.5 °C and 34% humidity and the test duration was 17 min.
To inspect the WAAM steel's integrity and identify internal micro-structural changes the non-destructive, non-contact examination employed was X-ray Computed Tomography (XCT). Other techniques have been used in the past for the examination of AM components, one being radiography and liquid penetrate inspection [25]. XCT was chosen in this study because it has the main advantage of establishing the volumetric identification of differences in steel materials.
The test procedure utilises a series of digital radiographs from a full rotation of the examined sample that are reconstructed to create a 3D computational volume of internal and external geometries based on the attained grey values. XCT is uniquely able to segment the different internal regions according to the achieved attenuation and provide volumetric analysis. As shown in Fig. 9 for the WAAM steel (after the sheet was ground), the variations in grey value reveal regions of different steel composition: material with a higher atomic number appears with brighter grey values and material with a lower atomic number with darker grey values. In the figure, the black regions are identified as internal voids where, at the interfaces of the five layers, the molten steel has not fully coalesced during the WAAM welding process. The grey image in Fig. 9 is affected by common ring artefacts that do not limit the segmentation of the different regions. The coloured image on the right-hand side highlights the concentration of porosity in layers that exists at the interfaces separated by 3.8 mm in the thickness direction. The XCT scanning of the coupons before and after tensile testing can expose any changes in the internal micro-structure owing to failure deformations, especially at the interfaces where the porosity exists (see Fig. 9(b)). This highlights the link between how the WAAM steel responds to tension and the material's short-term material properties. To provide a benchmark on the quality of XCT scanning, a sample of EN-8 steel was used to establish a reliable contrast [26]. The calibrated procedure presented in [27] and [28] was used to minimise, if not eliminate, the uncertainty regarding the known XCT limitations, such as signal noise.
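As a minimal illustration of the grey-value segmentation idea described above, the sketch below thresholds a reconstructed voxel volume into voids and lower-density material and reports a porosity fraction. The threshold values and the synthetic volume are illustrative assumptions, not the calibration used with the actual scans.

```python
# Hedged sketch: segmenting a reconstructed XCT volume by grey value with NumPy.
# The thresholds (30, 90) and the random placeholder volume are assumptions.
import numpy as np

volume = np.random.default_rng(2).integers(0, 255, size=(64, 64, 64))  # placeholder voxel grid

void_mask = volume < 30                              # darkest voxels treated as internal voids
low_density_mask = (volume >= 30) & (volume < 90)    # interfacial, lower-density steel

porosity_fraction = void_mask.mean() * 100           # percentage of voxels classed as voids
low_density_fraction = low_density_mask.mean() * 100
print(f"Porosity volume fraction: {porosity_fraction:.2f} %")
print(f"Lower-density volume fraction: {low_density_fraction:.2f} %")
```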
X-ray Computed Tomography (XCT)
XCT scanning provided valuable information in order to understand the internal micro-structure of the WAAM steel. Three types of samples were involved in this non-destructive evaluation, namely the dog-bone coupons before and after tensile testing (Section 4.2), and the through-thickness coupons, but only after testing.
As Fig. 10 shows, the layered structure of the WAAM steel comprises three constituents at different volume regions. The grey coloured region shows the bulk homogeneous Union K 40 steel, which is the principal constituent. The green, red and blue coloured volumes in the figure are for interfacial regions between the five layers from the specific WAAM welding process. The final constituent is the voids (porosity) at the interfaces, imaged by the beige coloured discontinuous volumes in Fig. 10. The porosity constituent has the lowest volume fraction of the three. Figs. 11 and 12 show the 3D volume of WAAM steel before tensile testing that was used to calculate the volume fraction of porosity. For the interfacial regions seen in Fig. 11, the 3D printing parameters used by Autodesk do not allow the molten steel to fully coalesce, and this processing weakness can be due to the inter-passing distance parameter of the weld torch (see Fig. 2(a)). The XCT scanning highlights that steel in the interfacial region can have a slightly lower density than that of the bulk of the steel. Fig. 12 shows further that along these interfacial regions there are pockets of porosity where there is no steel present.
A volumetric analysis of the WAAM steel before tensile testing finds that the mean volume fraction of porosity is 0.1%, while the volume fraction of the regions of steel having the lower density is 1.68%. Archimedes' principle can be applied to determine the average steel density. A precision electronic balance was used with a built-in hook from which the WAAM_L_T_1 to 3 coupons were individually hung to measure, in air, their weight to 0.01 g. Next, the balance was zeroed before lowering the coupon to be submerged under water. A negative weight reading was then recorded, giving the coupon's volume. Taking into consideration any change in temperature [29], the average calculated density from the three coupons is 7.77 g/cm³. This gives a 0.7% difference to the recognised density of medium carbon steel (EN-8), the benchmark steel material.
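A minimal sketch of the Archimedes density calculation described above is given below. The balance readings used are illustrative placeholders, not the measured data, and the water density value is an assumption about the test temperature.

```python
# Hedged sketch of the Archimedes density calculation; readings are placeholders.

WATER_DENSITY = 0.998  # g/cm^3 at roughly 21 degC; adjust for the actual test temperature

def archimedes_density(weight_in_air_g: float, buoyancy_reading_g: float,
                       water_density: float = WATER_DENSITY) -> float:
    """Density from a weight in air and the (negative) balance reading
    recorded when the coupon is hung submerged under water."""
    volume_cm3 = abs(buoyancy_reading_g) / water_density  # displaced water volume
    return weight_in_air_g / volume_cm3

# Example with placeholder readings for one coupon:
print(round(archimedes_density(weight_in_air_g=155.4, buoyancy_reading_g=-20.0), 2))
```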
Archimedes' principle was further employed to establish the porosity volume fraction. Researchers [26] highlighted its potential to capture a reliable measure of porosity, in addition to its capacity for determining material density. Spierings et al. [30] mention that the method has a limitation because it produces a higher value than the actual level of porosity. The presence of this limitation could be justified owing to low resolution in capturing the porosity and the presence of denser regions of steel away from the interfacial regions. Reported in Table 2 are the volume fractions of the three constituents in the WAAM steel, determined after tensile testing. Column 1 defines the three constituents (see Fig. 10), with (A) for bulk steel, (B) for interfacial regions, and (C) for porosity. Columns (2) to (4) list the volume fractions as percentages for the three constituents. It is observed that the percentage of interfacial regions is from 0.38 to 0.64%, with a mean of 0.48%. Furthermore, the analysis finds that the porosity ranges from 0.05 to 0.21%, and because the through-thickness coupons are relatively small (refer to Table 1), their higher porosity indicates that there can be localised differences in the amount of porosity. This preliminary finding might be reflected in localised differences in material properties observed in the tensile test results presented in Section 4.2.
Figs. 13 and 14 show the XCT volumes of WAAM_L_TT_1 and WAAM_L_TT_2 after fracture. The interfacial regions are exposed in Figs. 13(a) and 14(a), giving evidence for a homogeneous deformation throughout the coupon cross-section. Note that the fracture occurs at the top of the volume where there is the cup and cone fracture zone. In Figs. 13(b) and 14(b) it can be observed that the fracture deformation occurred close to the top, where the prominent voids are elongated. It can be speculated that the stress concentrations at one or more voids might have led to premature fracture in the steel, with a lower fracture strain (e_u) than its value would have been had there been no porosity. The level of importance of the XCT scanning test results is discussed after the tensile coupon test results are reported and discussed in Section 4.2.
An important lesson to be learnt from the XCT characterization work is that by using the WAAM processing parameters in this study there are voids (to volume fractions of 0.2%) within the interfacial region. Because porosity lowers mechanical properties, particularly under fatigue loading, we want to eliminate it, if practical to do so, by establishing the optimum processing parameters. Further work is therefore necessary with the WAAM processing parameters to find out what is the minimum possible percentage of porosity.
Tensile tests
Prior to presenting and discussing the WAAM steel tensile test results, it is relevant to present the material properties of structural steel for the design of steel structures informed by the code of practice known as Eurocode 3, BS EN 1993-1-1:2005 [17]. Specific steel grades are listed that satisfy parts of the standard BS EN 10025 for hot rolled products of structural steels [31]. Note that the tensile properties of these steel grades are to be determined in accordance with BS EN 10002 [24]. These steels are for a nominal thickness of up to 40 mm and for various section shapes. The range of characteristic yield strengths (f_y) is from 235 to 460 N/mm² and of characteristic ultimate tensile strengths (f_u) is from 360 to 570 N/mm²; the higher the yield value the higher is the ultimate value. Section 3 then provides in paragraph 3.2.2(1) the ductility requirements. These requirements are expressed as follows.
For steels a minimum ductility is required that should be expressed in terms of limits for: the ratio f_u/f_y of the specified minimum ultimate tensile strength f_u to the specified minimum yield strength f_y; the elongation at failure; and the ultimate strain e_u. Beneath the paragraph is a note stating that the limiting values of the ratio f_u/f_y, the elongation at failure and the ultimate strain e_u may be defined in the National Annex. In the British Standard (BS) Annex [32] the following is recommended: for buildings the limiting values for the ratio f_u/f_y, the elongation at failure and the ultimate strain e_u are given below.
- Elastic global analysis: f_u/f_y ≥ 1.10; elongation at failure not less than 15%; e_u ≥ 15 e_y.
- Plastic global analysis: f_u/f_y ≥ 1.15; elongation at failure not less than 15%; e_u ≥ 20 e_y.
For bridges plastic global analysis should not be used, and the limiting values for the ratio f_u/f_y, the elongation at failure and the ultimate strain e_u for elastic global analysis are given as: f_u/f_y ≥ 1.20; elongation at failure not less than 15%; e_u ≥ 15 e_y.
For the purpose of carrying out design calculations, paragraph 3.2.6 recommends that the elastic constants are 210000 N/mm² for the (mean) modulus of elasticity and 0.3 for Poisson's ratio.
In Section 3 of BS EN 1993-1-1 [17] there are also material property requirements for fracture toughness and through-thickness properties, which are not considered in this study.
Plotted in Fig. 15 are tensile direct stress-direct strain curves to ultimate failure, coloured blue, red and black, for the test results from coupons WAAM_L_T_1 to WAAM_L_T_3. Also plotted in the figure is a purple coloured curve for the benchmarking characteristics from testing coupon EN_8 (see Table 1). Inspection of the test results shows that the WAAM steel curves have similar ductile-steel characteristics to the benchmark medium carbon grade steel.
Reported in Table 3 are the individual test results for coupons WAAM_L_T_1 to 3. In this table, column 1 is for the coupon labels and the names of statistical properties. The modulus of elasticity, E, is given in column 2, and this property was calculated from the slope to 65% of the linear (elastic) part of the direct stress-direct strain curves plotted in Fig. 15, using the least squares fit method. Columns 3 and 4 are for values of yield strengths. Column 3 is for the Eurocode f_y value, which is taken equal to the upper yield strength (R_eH). Below the three rows in Table 3 with listings of the coupons' individual test results are four rows for the batch results of: mean; standard deviation for a normal (Gaussian) distribution (SD); Coefficient of Variation (CoV); characteristic value. Note that the mean value of the modulus of elasticity is also its characteristic value. For strength and strain properties, the characteristic values are obtained in accordance with paragraphs in Annex D of EN 1990 [33], by taking the characteristic value equal to the mean minus n times SD, where SD is the standard deviation. For three coupons in the batch n is equal to 1.89. Because the CoV is always <10% for the properties reported in columns 3 to 6, there is justification to calculate the characteristic value with the assumption that the coefficient of variation is known a priori [33]. If a coefficient of variation had been >10%, then n could be taken as 3.37 and the calculated characteristic values would be lower.
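A short calculation sketch of the characteristic-value rule described above follows. The three yield strengths used are illustrative placeholders consistent with the reported mean, not the values in Table 3.

```python
# Hedged sketch: characteristic value as mean - n * SD, following the approach
# in EN 1990 Annex D for a batch of three specimens (n = 1.89, CoV known a priori).
import statistics

def characteristic_value(values, n_factor=1.89):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return mean - n_factor * sd

# Illustrative yield strengths (N/mm^2) for three coupons -- placeholder values:
f_y_batch = [423, 417, 418]
print(round(characteristic_value(f_y_batch), 1))
```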
From the tensile test results, the mean yield strength (f_y) is 419 N/mm² (CoV of 1.1%) at a mean yield strain (e_y) of 0.21% (not reported in Table 3 because it is a constant). The mean ultimate strength (f_u) is 474 N/mm² (CoV of 0.8%) at a mean ultimate strain (e_u) of 13.3% (CoV of 2.8%), with the strain at fracture (e_f) higher than 26% (CoV of 6.8%). These values are higher than those reported in Section 2 from the supplier for the Union K 40 wire used to 3D print the steel sheet seen in Fig. 3. The mean modulus of elasticity is 208000 N/mm² (CoV of 8.0%), which is 1.4% lower than its specified mean value in Eurocode 3 [17]. The reason for this is the relatively low E of 189000 N/mm² measured from coupon WAAM_L_T_3; 216000 N/mm² is the mean taking results from the other two dog-bone coupons.
From Table 3 it is seen that the characteristic strengths f_y = 411 N/mm² and f_u = 467 N/mm² of the WAAM steel fall within their respective Eurocode 3 [17] ranges of 235 to 460 N/mm² (f_y) and 360 to 570 N/mm² (f_u). Note that the CoV is close to 1%, showing that for a coupon having a cross-section size of 20 × 17 mm there is low variation in strengths. Because the ratio f_u/f_y = 1.14 falls just below the limit of 1.15 in BS EN 1993-1-1 [17], the WAAM steel cannot be deemed acceptable for plastic global analysis in the design of buildings or bridges, although it satisfies the limit of 1.10 for elastic global analysis. Elastic global analysis is verified only for buildings because the elongation at failure, e_f = 26%, is greater than 15%. Also, the remaining ductility requirement e_u = 12.7% ≥ 15 × e_y = 3.15% is satisfied. It is noted that for the measurement of strain by an extensometer, the gauge length with the WAAM_L_T coupons is 80 mm. This length should have been 104 mm (Table 1) so that the elongation at failure is measured over the desired gauge length equal to 5.65√A_o. It is believed that this transgression from the requirements of Section 3 in BS EN 1993-1-1, caused by the limited size of the WAAM sheet at 0.345 × 0.075 m, did not affect the findings described herein because the tensile strain values are much higher than their minimum requirements.
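The ductility checks quoted above can be summarised in a short sketch. The limit values are the UK National Annex figures for buildings repeated earlier; the input values are the characteristic strengths and mean strains reported above.

```python
# Hedged sketch of the UK National Annex ductility checks for buildings
# (strengths in N/mm^2, strains in %).
def eurocode_ductility_checks(f_y, f_u, e_y, e_u, elongation_at_failure):
    return {
        "elastic_fu_fy (>= 1.10)": f_u / f_y >= 1.10,
        "plastic_fu_fy (>= 1.15)": f_u / f_y >= 1.15,
        "elongation (>= 15 %)":    elongation_at_failure >= 15.0,
        "e_u >= 15 * e_y":         e_u >= 15 * e_y,
    }

# Characteristic strengths and mean strains reported above for the WAAM steel:
print(eurocode_ductility_checks(f_y=411, f_u=467, e_y=0.21, e_u=12.7,
                                elongation_at_failure=26.0))
```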
Combining readings from the axial and transverse extensometers and the bi-axial strain rosettes, Poisson's ratio was determined and is presented in Fig. 16 [17]. Four further tensile stress-strain curves are reported in Fig. 17 from testing the non-standard rectangular coupons WAAM_L_TT_1 to 4. Note that compared to the WAAM dog-bone coupons of cross-section 17 × 20 mm (Table 1), these coupons had a cross-section size of 3.8 × 17 mm, and thereby a thickness 4.47 times greater than the width and a cross-sectional area five times lower. Plotted in the figure are four curves, coloured yellow, green, blue and mauve, and for comparison the single purple coloured tensile characteristic from coupon EN_8. From the four rectangular WAAM coupons the mean yield strength is 375 N/mm² at a mean strain of 0.2% and the mean ultimate strength is 454 N/mm² at a mean ultimate strain of 12.7%. Although the mean strain at yielding is the same, the yield strength is 44 N/mm² lower than determined using the WAAM_L_T_1 to 3 dog-bone coupons. It is also seen that the mean ultimate strength is 20 N/mm² lower. Because these coupons are straight-sided and non-standard, their test results are not used to determine characteristic values for a comparison with material properties in Eurocode 3.
It is seen from the four plots in Fig. 17 that there is consistency in the steel's response for coupons WAAM_L_TT_1 to 4 up to a strain of about 14.6%. Plastic deformation followed by ductile fracture is the common failure mode once the tensile stress goes beyond the elastic limit. Fig. 18(a) to (c) shows that fracturing in both dog-bone and non-standard coupons is by the well-known cup and cone failure mechanism of a ductile steel. It is found that the mean reduction in cross-section area is 53% for WAAM_L_T_1 to 3, 40% for WAAM_L_TT_1 and 2 and 30% for WAAM_L_TT_3 and 4. The lower the reduction in area, the lower the ultimate strain at fracture, as seen from the results in Figs. 15 and 17. The practical reason why the fracture strain recorded with coupons WAAM_L_TT_3 and 4 does not exceed 15% is that the cup and cone failure did not occur within the 12.5 mm length of the extensometer. In other words, e_f was not measured by the extensometer, only tensile deformations to a plastic strain of about 12%, prior to the localised cup and cone failure elsewhere. Fig. 17 shows the falling branch for these two coupons (blue and brown coloured curves). By fitting together the two fractured parts, e_f was measured for WAAM_L_TT_3 and 4 to be 18% and 21%, respectively. Plotted in Fig. 19 are the seven WAAM steel coupon test results (without fracture strains for WAAM_L_TT_3 and 4) to highlight differences in e_f, the strain at fracture. The stress-strain curves can be divided into three main stages: linear elastic; strain hardening; steel fracture. The curves coloured black, red and blue are for coupons WAAM_L_T_1 to 3 and show similarities in the first two stages, while a divergence starts with the onset of the falling branch and the recorded e_f. It is known [34] that internal molecular equilibria are associated with the plastic flow balance between the stages of the tensile stress-strain relationship under uniaxial load. Eudier [35] found that the existence of porosity has a great effect on reducing the fracture strain and on decreasing the yield strength. Additional evidence for how porosity affects steel material properties is given in the literature for cast steel. Hardin and Beckermann [36] showed that stiffness depends on the size and distribution of voids, and in [37] that porosity also has an impact on the fracture mechanism during the ductile fracture stage. Now inspecting the WAAM_L_TT_1 to 4 results in Fig. 19, given by the green, purple, yellow and blue curves, a greater variation can be observed in all three stages aside from the elastic slope (for the modulus of elasticity). The new insight gained from this characterization work raises the question of whether the material properties f_y, e_y, f_u and e_u are affected by the presence and distribution of voids, and by a possibly greater localised non-homogeneity in the through-thickness coupons. Figs. 13 and 14 demonstrate the internal structure of coupons WAAM_L_TT_1 (having 0.21% of porosity) and 2 (having 0.12% of porosity) by XCT scanning, showing that local to the section of the coupon with the cup and cone fracture the voids are elongated. Now, from an inspection of the falling branches in Fig. 19, leading to e_f, there is evidence to suggest that this material property might be dependent on the size and the distribution of porosity, and that, because the curves from coupons WAAM_L_T_1 to 3 diverge less, there may also be a coupon size effect.
It is possible from the results of this preliminary investigation to say that either an individual void or group of voids close together might have altered the flow distribution during plastic deformation causing fracture to occur in the WAAM_L_TT coupons. Hardin and Beckermann [37] suggest that when porosity is less than a few percent there is no measurable loss of stiffness, or large stress concentrations, or stress redistribution, but it will significantly reduce the fatigue strength. To verify the findings for steels made by WAAM additive manufacturing, a tensile testing programme is required to consider WAAM processing variables, and which satisfies standard BS EN 10002, including having a minimum dog-bone coupon batch size of five.
Concluding remarks
From the armoury of 3D printing technologies for additive manufacturing, the directed energy deposition process of Wire Arc Additive Manufacturing (WAAM) was employed to print a 20.5 mm thick sheet of Union K 40 GMAW steel. After grinding off the inherent surface waviness, several coupon specimens of 17 mm thickness were prepared to characterise the short-term tensile material properties of the WAAM steel. To examine the internal micro-structure formed during the welding process, the non-destructive test method of X-ray Computed Tomography (XCT) was used with coupons prior to and after tensile testing. Observations drawn from evaluating the new test results have started to fill in known gaps in knowledge and understanding that are needed to prepare guidelines to design and execute optimum 3D printed steel components for building structures. The main findings from the characterisation work are that:
- A steel sheet of 20 mm thickness can be 3D printed using the Autodesk equipment for the WAAM processing method.
- The material test results provide information for the constitutive tensile stress-strain relationship, Poisson's ratio, and the cup-and-cone failure mechanism. All results are for a steel with ductile characteristics similar to those of steel grades that satisfy the design requirements in the British version of the steel structural Eurocode, which is BS EN 1993-1-1.
- The Union K 40 GMAW steel in the 3D printed sheet has an overall elongation that satisfies the requirement in BS EN 1993-1-1:2005.
- From mean properties, the ratio of ultimate tensile strength to yield tensile strength at 1.14 is nearly in accordance with the minimum limit of 1.15 specified in the National Annex to BS EN 1993-1-1:2005, which means the WAAM steel in this test is only valid for building structures designed by structural global analysis in the elastic range.
- XCT scans provide a quality control characterisation of the 3D printed steel, and information from the images may be used to develop a scientific association between the level of porosity and the steel's tensile properties.
- An important lesson to be learnt from the results of XCT scanning is that, by using the study's WAAM processing parameters, there are pores within the interfacial regions with volume fractions of up to 0.2%.
- Because the highest quality of WAAM steel will possess no porosity, further work is necessary with variants to the WAAM processing parameters to find out if 0.2% is the minimum practical volume fraction of porosity.
It is the authors' expectation that the results from this research should be a step forward toward assist progress in applying WAAM processing of steel to manufacture building components that improve aesthetics and structural optimisation. This aim can, potentially, be achieved by applying additive manufacturing to redesign, enhance and construct new engineered solutions in order to use less steel and minimise embodied energy.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"Engineering",
"Materials Science"
] |
Analysis and Forecasting of the Price of the S&P 500 Index Using the ARIMA Model
The results of the research determined that the chosen S&P 500 index can serve as a reflection of the state and forecasts of economic development of the United States. A successful forecast of the index can serve not only as a key point in building an individual investment strategy, but also as an indicator of the general state of the economy. A mathematical model for predicting the dynamics of the index was built. Through exploratory data analysis, a better understanding of the time series and its characteristics was obtained. The application of various statistical methods, such as moving statistics and stationarity tests, made it possible to identify trends and seasonality in the data.
Introduction
One of the main goals of econometric modeling of the money market is the study of time series in finance. For a long time, financial market researchers assumed that financial assets follow a normal distribution and are completely unpredictable. However, the application of new approaches to financial market modeling has shown that real time series of financial data are not only devoid of randomness, but also have a long memory. This means that past events have a strong influence on the future returns of financial assets.
Stock market indices are an integral part of the analysis and understanding of the financial system. They provide macroeconomists and financial economists with important tools for studying and forecasting market behavior and economic development. Without reliable and consistent indices, it becomes more difficult to identify long-term patterns, evaluate the performance of financial companies, and make comparisons between different markets.
Financial indices are an important source of information for traders and investors. They provide a quick summary of the state of stock markets, help evaluate their performance and support informed investment decisions. Indices allow traders and investors to easily track changes in the market, identify trends and predict possible price movements. However, despite the importance of indices, not enough attention is always paid to their methodology and proper use. Currently, information on index methodology is rarely included in economics or business curricula and remains available only to a limited number of specialists. This can lead to misinterpretation of data and potentially misleading index-based decisions (Kupper, 2022).
Historically, there have been many different indexes, offering different calculation methodologies and focusing on different aspects of the market. Some have become widely known and used, such as the Dow Jones Industrial Average (DJIA), the S&P 500, and the NASDAQ Composite [14]. However, each index has its own characteristics and purpose, and the choice of a particular index depends on your analysis or investment strategy. Given these factors, it is clear that a thorough understanding of indices and their methodology is key to the proper use and interpretation of data. The correct use of indices allows you to draw more accurate and meaningful conclusions about the state of the stock markets and to make more informed decisions.
According to the calculation method, the indices are divided into groups, the most common of which are as follows (Investopedia, 2023):
➢ Price indices: calculated by averaging the prices of index components with their weights. Examples include the Dow Jones Industrial Average (DJIA) and the Nikkei 225.
➢ Market-cap-weighted indices: calculated by taking into account the market value of the index components. The larger the market capitalization of a company, the greater its weight in the index. Examples include the S&P 500 and the NASDAQ Composite.
➢ Balanced indices: all index constituents are equally weighted, regardless of size or market capitalization.
➢ Factor indices: calculated based on specific factors such as value factor (price/earnings), capitalization factor (small, mid, or large capitalization), asset value factor, and others. Examples of factor indices include the Fama-French three-factor model and the MSCI Minimum Volatility Index.
In the stock market, indices provide a number of important functions, including:
➢ The first function of indices in the stock market is their ability to reflect the overall performance of the market. By combining several stocks or other financial instruments into a single index, indices allow investors to measure overall market movements and changes. Indices such as the S&P 500 or the Dow Jones Industrial Average provide valuable information about the market and its long-term trend (Ellis, 2016).
➢ The second function of indices in the stock market is to be used as a benchmark to compare the performance of investment portfolios, mutual funds or individual stocks. Indices allow investors to gauge how well their investments are performing relative to the overall market. Comparison to an index can help determine the effectiveness of an investment strategy and whether it needs to be adjusted (Ganeshwaran, 2022).
➢ The third function of indices in the stock market has to do with their ability to serve as a market trend indicator. Changes in the index can indicate current market trends. For example, an increase in an index can be a signal of a strong economy and investor confidence, while a decrease in an index can indicate economic problems or uncertainty in the market. Such signals can help investors make decisions to buy or sell assets based on current market trends (Novotný, Jaklová, 2022).
➢ The fourth function of indices in the stock market is their ability to guide investment strategies. Indices provide information about industries, geographic regions, or other market segments. This information can be used by investors to develop their investment strategies and make asset allocation decisions. For example, an investor may decide to focus on an industry sector that is performing well relative to the overall market (Ellis, 2016).

One of the most interesting assets is the S&P 500 index, which is one of the main indicators of the American economy. Successful and accurate forecasting of the index allows analysts and economists to draw conclusions about the trends in the economy and take appropriate actions not only within an individual investment portfolio, but also within countries.
The S&P 500 index is designed to measure the performance of eligible stocks listed on the NYSE and Nasdaq.
It is weighted by float-adjusted market capitalization and incorporates liquidity and tradability criteria in the constituent selection process (S&P Global, 2023). The S&P 500 Index measures the value of the stocks of the 500 largest companies by market capitalization listed on the New York Stock Exchange or Nasdaq. The intent of Standard & Poor's is to provide a quick look at the stock market and the economy (S&P Global, 2023).
In order to be selected by the Index Committee and to be included in the S&P 500 Index, a company must meet certain criteria (Corporate Finance Institute, 2023):
➢ Geographic location: the company must be incorporated and headquartered in the United States.
➢ Market capitalization: the company must have a market capitalization of at least $8.2 billion. Market capitalization is calculated by multiplying the company's current share price by the total number of shares outstanding.
➢ Stock liquidity: a company's stock must be highly liquid, meaning that it is actively traded on the stock exchange. This makes it easy to buy and sell a company's stock in the market.
➢ Public availability: at least 50 percent of a company's outstanding shares must be publicly traded. This makes the company's stock widely available in the marketplace and allows investors to include it in their portfolios.
➢ Financial performance: the company must have positive earnings in the most recent quarter and positive earnings in the previous four quarters. This indicates that the company is financially stable and successful.
It is important to note that while the S&P 500 Index was originally intended to be an index of 500 companies, it actually contains 505 stocks. This is because some companies, such as Google (now Alphabet Inc.), Facebook, and Berkshire Hathaway, have multiple share classes that are still considered separate components of the index (S&P Global, 2023).
The S&P 500 Index is an important part of the stock market, and its connection to the U.S. economy is obvious. This index is considered by most analysts and investors as an indicator of the overall health of the stock market. It includes the 500 largest publicly traded U.S. companies representing various industries and sectors of the economy. Therefore, changes in the index reflect not only the performance of individual companies, but also the collective performance of the economy as a whole. The index is also an important tool for passive investors seeking access to the US economy through index funds. Index funds, such as ETFs or index mutual funds, track the performance of the index and allow investors to diversify their portfolios by investing in all the companies in the index.
The relationship between the S&P 500 Index and the US economy is manifested in several ways. First, the index includes the largest and most representative companies in various sectors of the economy. Therefore, its performance and movement reflect conditions and trends in a wide range of sectors and industries. The index is an important indicator of investor confidence and expectations about economic conditions in the United States. Rising prices in the index indicate optimistic investor sentiment and belief in continued economic growth. This can stimulate investment, business growth, and demand for labor. As mentioned above, the index is also used as a benchmark for evaluating the performance of investment portfolios and funds. Many active and passive investors use the index as a benchmark to compare and evaluate the performance of their investments. If an investment portfolio outperforms the index, it may indicate successful asset management (Investopedia, 2023). The index is considered by many analysts and economists to be one of the most important indicators of the health and prospects of the U.S. economy. Changes in the index can serve as an indicator of future economic trends, growth or decline. Analysts study the relationship between the index's movements and macroeconomic factors such as GDP, inflation, and unemployment to predict possible economic outcomes (Jareño, 2016).
The S&P 500 Index plays an important role in reflecting and measuring the performance of stocks and the U.S. economy as a whole. Its movements and changes reflect important aspects of economic activity and investor sentiment. This makes it an indispensable tool for analyzing and forecasting economic developments and the market (Dattatray, 2019).
Literature Review
Johannes W. Flume (2021) notes that the stock exchange is the organizer of wholesale trade in commodities, securities and labor on the basis of supply and demand in the economy, providing a place where sellers and buyers conclude financial and trading transactions (Johannes, 2021). Investopedia (2023) and IndiaCharts (2022) express the view that trading on the stock exchange is conducted according to certain rules and procedures, and all transactions are registered and monitored by the relevant regulatory authorities (SoFi, 2023).
Over time, trading mechanisms and institutions evolved, contributing to the development of international trade and laying the foundation for modern stock markets.
The growth of industry and the emergence of new companies created new investment opportunities and stimulated the development of the stock market. However, with this development came the need to establish regulatory measures and protect the interests of investors. In response to these needs, rules and regulations were introduced and specialized organizations responsible for the control and supervision of stock exchanges and companies were established. Such organizations play an important role in ensuring stability and creating confidence in the market by ensuring that trading rules are followed, investor rights are protected, and fraud is prevented (Stock Market History; IndiaCharts, 2022).
Malkiel, B. G. (2015) expresses the view that today the stock market continues to undergo active technological innovation that is changing the very process of conducting transactions and accessing information. Electronic trading, process automation and the development of online platforms have created opportunities for investors to trade stocks and other financial instruments with greater efficiency and convenience (Malkiel, 2015). Wiley, J. (2017) emphasizes that the exchange market is now a complex system where various financial assets, such as stocks, bonds, commodities and derivatives, are traded (Wiley, 2017). Fatemeh Aramian (2021) concludes that it is an electronic platform where buyers and sellers make transactions based on supply and demand (Aramian, 2021).
Pisano, U. A., Martinuzzi, B., & Bruckner, B. (2012) draw attention to the fact that the stock market performs a number of economic functions: it allows lenders and investors to invest their money in various financial assets, such as stocks and bonds (this increases the amount of financial resources available and encourages investment activity); it plays an important role in providing information about the financial condition of companies and projects related to various financial assets (this reduces the cost of accessing such information, making it more accessible and allowing for more informed investment decisions); it provides liquidity to holders of financial assets (owners of stocks and bonds can sell their investments in the market when they need funds or want to reallocate their investments, which creates the ability to quickly convert assets into cash); and it serves as a platform for the development and evolution of various methods of financing projects (it provides an opportunity for companies and organizations to raise capital to finance their projects through the issuance of stocks and bonds of various types and maturities) (Pisano, 2012).
Merritt B. Fox (2021) notes that the stock market plays a key role in stimulating economic growth and wealth creation, so accurate forecasts of its state are important for the overall stability and efficiency not only of financial markets, but also of national economies. The study and development of modeling methods for stock value forecasting is a practically significant task for all market participants and those who just want to enter the market, allowing them to make informed decisions in managing an investment portfolio.
One of Henry's seminal works, "Stock Market Liberalization, Economic Reform and Stock Prices in Emerging Markets", emphasizes that stock markets play a crucial role in facilitating the relationship between savers and producers in society (Henry, 1997). Savers, who have accumulated a surplus of funds, seek to invest their savings in profitable and ambitious projects. On the other hand, producers, representing the productive sectors of the economy, need financial resources to fuel their activities and promote economic growth.
Mishkin, F. S., & Eakins, S. G. (2014) draw attention to the fact that stock markets act as intermediaries, allowing the transfer of funds from savers to producers. This process allows productive sectors to access the necessary capital for expansion and development. The productivity and functions of the stock market play an important role in redirecting funds from those who have excess resources to those who need them, thereby facilitating economic activity and development.
Methodology
Statistical methods include a large number of techniques, such as methods of valuation theory, factor analysis, regression and correlation analysis, etc. With the help of these methods, investors can conduct comprehensive statistical research of the financial market, make forecasts of market processes and, on the basis of these forecasts, make more reasonable investment decisions. However, working with such systems for forecasting short-term price movements and rapidly changing intraday information is associated with some difficulties, both in the selection of an analysis method and in the interpretation of results. This is a significant drawback, because the speed of forecasting intraday trading is very important (Malyshenko, 2014).
One of the simplest and most effective methods is the Autoregressive Integrated Moving Average (ARIMA) model, one of the most common time series forecasting models. It is based on the assumption that the future values of a series depend on its past values and forecast errors, and on a combination of three main components: autoregression (AR), integration (I) and moving average (MA), as follows (a minimal fitting sketch is given after this list):
➢ Autoregression (AR): the model assumes that the future values of a time series depend on its past values. Autoregression uses the lags (previous values) of the series to predict its future values. The AR order determines the number of past values used in the model.
➢ Integration (I): integration is used to ensure that the time series is stationary. If the original series is non-stationary (has trends or seasonality), it can be transformed into a stationary series by using the differences between successive observations. The integration order (I order) determines the number of differences applied to the series.
➢ Moving Average (MA): the moving average assumes that the current value of the series depends on random forecast errors at previous times. The model uses smoothing of the forecast errors to account for their effect on future values of the series. The MA order determines the number of past errors (Hyndman, 2018).
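A minimal sketch of fitting such a model with statsmodels is shown below. The order (p, d, q) = (1, 1, 1) and the synthetic random-walk series are illustrative assumptions, not the model actually selected in this study.

```python
# Hedged ARIMA sketch with statsmodels; order and data are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic upward-drifting series standing in for the S&P 500 closing prices.
close = pd.Series(np.random.default_rng(0).normal(0.5, 5, 1000).cumsum() + 1000)

model = ARIMA(close, order=(1, 1, 1))   # AR order p=1, differencing d=1, MA order q=1
fitted = model.fit()
print(fitted.summary())

forecast = fitted.forecast(steps=30)    # 30-step-ahead point forecast
print(forecast.head())
```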
The next model that can be distinguished is GARCH (Generalized Autoregressive Conditional Heteroscedasticity), a model used to model and predict the volatility of time series, such as the prices of financial instruments, including the S&P 500 index. The GARCH model is based on the assumption that the variance of a series varies over time and depends on the previous values of the series. XGBoost (Extreme Gradient Boosting) combines weak models, such as decision trees, to improve predictions. It works with different types of features, automatically selects important features, and is resistant to overfitting. It has several important advantages. First, it provides high speed and efficiency due to its optimized implementation of gradient boosting, making it well suited for dealing with large amounts of financial data. Second, it can handle both numeric and categorical attributes, allowing it to account for a variety of factors in stock market analysis and providing models with flexibility and accuracy. Third, the algorithm automatically determines the importance of the attributes and selects the most important ones, improving the quality of forecasts and simplifying the model by removing unnecessary attributes (Rahman, 2023). XGBoost also has built-in regularization mechanisms that prevent model overfitting and provide more reliable stock market forecasts (Dat Tan Trinh, 2022).
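A hedged sketch of how XGBoost can be pointed at a price series using simple lag features follows; the feature construction, hyper-parameters and synthetic data are illustrative assumptions only.

```python
# Hedged sketch: XGBoost regression on lag features of a closing-price series.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

close = pd.Series(np.random.default_rng(1).normal(0, 1, 500).cumsum() + 1000)

# Predict today's value from the previous five values (lag features).
frame = pd.DataFrame({f"lag_{k}": close.shift(k) for k in range(1, 6)})
frame["target"] = close
frame = frame.dropna()

X, y = frame.drop(columns="target"), frame["target"]
split = int(len(frame) * 0.8)           # simple chronological train/test split

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X.iloc[:split], y.iloc[:split])
print(model.predict(X.iloc[split:split + 5]))
```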
Exponential smoothing is a time series forecasting method that uses a weighted average of past observations, with weights that decrease as you move back in time. The basic idea of exponential smoothing is to give more weight to more recent observations and less weight as you move away from the current moment. This allows us to model the impact of newer data on predictions while taking into account the decreasing importance of older data.
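A minimal sketch of simple exponential smoothing with statsmodels is given below; the smoothing level of 0.2 and the synthetic series are illustrative assumptions.

```python
# Hedged sketch of simple exponential smoothing.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

close = pd.Series(np.random.default_rng(2).normal(0, 1, 200).cumsum() + 1000)

fit = SimpleExpSmoothing(close).fit(smoothing_level=0.2, optimized=False)
print(fit.forecast(5))  # flat forecast equal to the last smoothed level
```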
Results
The dataset was taken from the Yahoo Finance website (Yahoo Finance, 2023) and contains historical data for the S&P 500 Index. This daily dataset covers the period from January 3, 1990 to May 31, 2023 and provides information about the opening price, the maximum and minimum price, the closing price and the adjusted closing price.
This case study will provide a basic understanding of the data structure and provide reasonably accurate future price predictions, which can be a great tool to minimize the risk of losing money in the market.
We will do our work using the Python programming language version 3.10.12. You should start by importing all the necessary components (Figure 1). These libraries and modules play a crucial role in data analysis, visualization, time series modeling, and evaluation of model performance on the dataset. They provide a wide range of functions and tools that simplify various aspects of data analysis and forecasting in the context of financial markets. They are also divided into sections, and a description of them can be seen in the comments (Figure 1; source: compiled by the author using Python 3.10.12).
All necessary libraries have been downloaded; now you need to import and read the data set (Figure 2). First, we need to check all the information about the data, check the data types and also check for missing data in the set (Figure 3). In this paper, we only need the closing price for the entire study. To focus specifically on the closing price, we can extract the 'Close' column from the dataset and store it in a new variable called df_close (Figure 4). This will allow us to perform further analysis and calculations on the closing prices only. For further successful data visualization, let's set the 'Date' column as an index, which gives us the opportunity to replace the serial number of the rows with a specific date of observation (Figure 5). Next, we'll create a Kernel Density Estimation (KDE) graph that estimates the probability density function of the data, providing insight into the distribution and shape of the data. This KDE chart allows you to visualize the distribution of closing prices. The resulting curve provides an estimate of the probability density function, with higher peaks indicating areas of higher density and lower troughs indicating areas of lower density. The shading below the curve provides a visual representation of the estimated probability density function (Figure 7). Kernel density estimation (KDE) is a method used to estimate the probability density function of a random variable from a given data set. The graph can provide insight into the central tendency of closing prices. The location of the highest peak on the KDE chart corresponds to the mode of the distribution, which represents the most frequent closing price. This can be useful for traders in identifying potential support or resistance levels; from this point of view we can identify four support levels, the main one being around 1200. It is also important to note that the KDE chart can help identify potential outliers in the closing prices. Outliers are data points that deviate significantly from the overall distribution pattern. These outliers may represent important events or anomalies that have affected the closing prices, of which we do not observe any.
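A minimal sketch of these preparation and plotting steps is given below. The file name 'sp500.csv' is a hypothetical export of the Yahoo Finance data, not a file provided with the paper.

```python
# Hedged sketch of the data preparation and KDE plot described above.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("sp500.csv", parse_dates=["Date"])   # hypothetical CSV export
print(df.info())                                      # data types and missing values
df = df.set_index("Date")                             # replace row numbers with dates
df_close = df["Close"]                                # keep only the closing price

sns.kdeplot(df_close, fill=True)                      # kernel density estimate
plt.xlabel("Closing price")
plt.show()
```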
We can now move on to more fundamental checks, such as the Dickey-Fuller test for stationarity. Data are stationary if they have no trend or seasonal effects, and if the data are non-stationary, we need to convert them to a stationary form before fitting an ARIMA model. Before doing that, we construct a rolling mean and a rolling standard deviation, statistical metrics that estimate the mean and the dispersion of the time series values within a given window (Figure 8). In addition to the graph with the moving statistics, the procedure reports the results of the Augmented Dickey-Fuller (ADF) unit root test. Let us first see the formula for the Dickey-Fuller test, which is the origin of the Augmented Dickey-Fuller test (1).
Δy_t = α + βt + γ·y_{t-1} + ε_t, (1)

where y_{t-1} is the value of the time series at lag 1 and Δy_{t-1} is the first difference of the series at time (t-1). The formula for the Augmented Dickey-Fuller test goes as follows (2):

Δy_t = α + βt + γ·y_{t-1} + δ_1·Δy_{t-1} + … + δ_{p-1}·Δy_{t-p+1} + ε_t. (2)

The ADF formula is the same equation as the DF one, the only difference being the addition of differencing terms representing a longer lag structure. Fundamentally, it has the same null hypothesis as the unit root test.
That is, the coefficient of y(t-1) is 1, implying the presence of a unit root. If the null hypothesis is not rejected, the time series is taken to be non-stationary. If the null hypothesis is rejected, which requires the test statistic to be below the critical value and the p-value below 0.05, the time series is stationary (Wooldridge, 2019), (Yang, 2022). The result of the function (Figure 9) is shown in (Figure 10). Analysis of the test results (Figure 10):
➢ The test statistic has a positive value of 1.015948. Compared with the critical values, this indicates that the test statistic is far from zero and is not negative enough to confirm the stationarity of the series.
➢ The p-value is 0.994431, a high value close to 1. This means there is a high probability of obtaining such or more extreme results even if the null hypothesis of non-stationarity is true. Therefore, we do not have sufficient evidence to reject the null hypothesis.
➢ The critical values are -3.431467 (1%), -2.862034 (5%) and -2.567033 (10%). They are thresholds compared against the test statistic. If the test statistic is less than the critical value, the null hypothesis of non-stationarity is rejected. In this case, the test statistic is not low enough compared to the critical values, which confirms the lack of stationarity in the series.
➢ In summary, based on the results obtained, we can conclude that the time series is not stationary, since we have not rejected the null hypothesis of non-stationarity. This may indicate the presence of a trend, seasonal fluctuations or other systematic changes in the data.
The next logical step is to separate the seasonality from the trend before analyzing the time series; such an approach will lead to stationarity of the resulting series (Figure 11). The result of the code (Figure 11) is shown in (Figure 12). We also consider a second route to stationarity: to reduce the magnitude of the values and the increasing trend in the series, we first take the logarithm of the series. After obtaining the logarithm of the series, we compute its rolling average, calculated by taking data from the previous 12 months and computing an average value at each subsequent point in the series. The following code is used to smooth the time series and analyze its variability. The logarithmic transformation helps to reduce non-stationarity and smooth fluctuations in the data, while calculating the moving average and standard deviation allows us to assess the overall trend and variability of the series (Figure 13). The plot produced by the code (Figure 13) is shown in (Figure 14). The process described in the code (Figure 15) is called "trend removal" or "detrending" a time series. It is an important step in time series analysis and can help reveal hidden features and patterns in the data. Trend removal is performed by subtracting the moving average from the original time series. This highlights shorter-term fluctuations, such as business cycles and seasonal patterns, and makes these components easier to analyze.
Figure 15. Calculate the difference between df_log and the moving average and ADF results
Source: compiled by the author.
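A minimal sketch of the log transform, rolling statistics, and detrending steps discussed above (Figures 13 and 15), assuming the df_close series from the earlier sketch:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Log transform to damp the magnitude of the values
df_log = np.log(df_close)

# Rolling statistics over a 12-observation window
moving_avg = df_log.rolling(window=12).mean()
moving_std = df_log.rolling(window=12).std()

# Detrend: subtract the moving average and drop undefined rows
df_detrended = (df_log - moving_avg).dropna()

# Augmented Dickey-Fuller test on the detrended series
stat, pvalue, _, _, critical_values, _ = adfuller(df_detrended, autolag="AIC")
print(f"Test statistic: {stat:.6f}")
print(f"p-value: {pvalue:.3e}")
for level, value in critical_values.items():
    print(f"Critical value ({level}): {value:.6f}")
```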
Also from (Figure 15) we can see the updated result of the ADF test. Based on these results we can draw the following conclusions:
➢ The test statistic (-1.684634e+01) is less than the critical values for all significance levels (-3.431466e+00, -2.862033e+00 and -2.567033e+00). This indicates that we can reject the null hypothesis of a unit root and accept the alternative hypothesis of stationarity of the time series.
➢ The p-value (1.127717e-29) is very close to zero, which also confirms the statistical significance of the test results. Usually, if the p-value is smaller than the selected significance level (Naushad, 2020), we can reject the null hypothesis and conclude that the series is stationary. In this case, the p-value is much less than 0.05, which confirms the stationarity of the series.
➢ Thus, based on the results of the Dickey-Fuller test, with a very low p-value and a test statistic lower than the critical values, we can conclude that the time series is stationary. This ensures the stability of the model and allows the use of past data to accurately predict future values.
The ARIMA model is one of the most popular models for making short-term forecasts. Three groups of parameters describe the model: p, d, and q are non-negative integers that characterize the order of the model parts (autoregressive, integrated, and moving average, respectively). Together, p, d, and q define the structure of the ARIMA model. For example, an ARIMA(1, 1, 1) model means that an autoregression of order 1, one differencing step, and a moving average of order 1 are used. The choice of the optimal values of p, d and q can be based on analysis of the autocorrelation function (ACF) and the partial autocorrelation function (PACF) of the time series, as well as on statistical criteria and model evaluation methods (Hyndman, 2018). In our work we use the automatic parameter selection function (Figure 16). The auto_arima function (Figure 16) performs automatic tuning of the model parameters and is a convenient tool that selects the optimal ARIMA parameters based on statistical analysis and heuristic methods. It simplifies model tuning to a single function call rather than searching for the p, d and q parameters separately. The result is shown in (Figure 17). The results (Figure 17) are presented as a table, where each row corresponds to an ARIMA model with certain parameter values, and the columns show the following information:
➢ ARIMA(p, d, q)(P, D, Q)[m]: model parameters, where p, d, q are the orders of autoregression, integration and moving average respectively, P, D, Q are the orders of seasonal autoregression, integration and moving average respectively, and m is the seasonal period (zero in our case because seasonal=False was specified).
➢ AIC: the Akaike Information Criterion, a measure of the relative quality of the model, where a smaller AIC value indicates a better model.
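A sketch of the automatic parameter search (Figure 16) using pmdarima; the exact arguments used in the author's code are an assumption:

```python
from pmdarima import auto_arima

# Automatically search over (p, d, q); seasonal terms disabled as in the text
model_auto = auto_arima(
    df_log,
    seasonal=False,        # no seasonal (P, D, Q) component
    trace=True,            # print the AIC of each candidate model
    stepwise=True,
    suppress_warnings=True,
)
print(model_auto.summary())
```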
Based on the results, the best ARIMA model is ARIMA(0,1,1)(0,0,0)[0], which has the lowest AIC value of -34978.785. Next, we divide our data into test and training sets at a ratio of 15:85 (Figure 19). As a result, the Auto ARIMA procedure assigns the values 0, 1, and 1 to p, d, and q respectively, and we feed these parameters into our model (Figure 20). The result of the model (Figure 20) is shown in (Figure 21). Based on the results of the ARIMA fit provided in (Figure 21), we can make a quick assessment (Smigel, 2021):
➢ The coefficient ma.L1 is -0.0777, which means the model uses one lag of the differenced series to predict current values. The negative sign indicates an inverse relationship between the previous forecast error and the current value.
➢ The sigma2 value is 0.0001. This very small residual variance suggests that the model fits the estimated time series well.
➢ The Ljung-Box criterion (Q) has a value of 0.01 and the p-value (Prob(Q)) is 0.94. This indicates that the autocorrelations of the residuals at the first lag are not significant.
➢ The heteroskedasticity statistic (H) is 0.51, which suggests that the variance of the residuals is not constant across the sample.
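A sketch of the split and fit steps (Figures 19 and 20), assuming the df_log series from above:

```python
from statsmodels.tsa.arima.model import ARIMA

# 85/15 split of the log-transformed series
split = int(len(df_log) * 0.85)
train, test = df_log[:split], df_log[split:]

# Fit ARIMA(0, 1, 1) as selected by auto_arima
model = ARIMA(train, order=(0, 1, 1))
fitted = model.fit()
print(fitted.summary())
```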
Thus, we can say that the ARIMA(0, 1, 1) model gives good results, since it has a significant coefficient and a low variance of the residuals, and we can proceed with the forecast (Figure 22). The forecast can be used to predict future values of the time series based on the available historical data. The standard errors returned by the se_mean attribute allow us to estimate the uncertainty, or scatter, of the predicted values.
The smaller the standard error, the more accurate and reliable the predictions will be. Confidence intervals obtained with the conf_int method provide information about the likely range in which the future values of the series will lie; they help assess the uncertainty of predictions and can be used to make more informed decisions based on the probability that future values will fall within a certain range. The result of the forecasting is shown in Figure 23, and the plot of the forecast data in (Figure 24). When we evaluate the forecast plot (Figure 24), we can note that it looks realistic and close to the test data. The only thing that stands out is the March 2020 period, when the index lost about 30% of its value in one week, but we all know that was an exceptional situation caused by the pandemic and the general panic in the market. Such situations are extremely difficult to predict, and such collapses can only be anticipated by analyzing the news background. Similar crashes can also happen as a result of force majeure, for example the incident that happened to Equifax, the large-scale cyber-attack that compromised the personal information of approximately 147 million people. As a result of the attack, Equifax's stock price plummeted and the company suffered significant financial losses; the stock price continued to decline in the following weeks, with a total loss in value of approximately 35% [45]. But this is only relevant in the context of one specific company, which is another advantage of working with indices: such events in one company do not have a critical impact on the index as a whole. Our predicted data clearly show the level to which the index returned after the correction, which gives us the opportunity to make long-term forecasts; in our case the forecast covered the next 884 steps/days. We also need to verify that we can call our model accurate. There are many metrics for evaluating the quality of a model. These metrics (Figure 25) allow us to evaluate how accurately and reliably the model predicts the values of the S&P 500 Index under study.
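A sketch of the forecasting and evaluation steps (Figures 22 and 25), assuming the fitted model and test set from the previous sketch; the specific metrics shown are an assumption based on common practice:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Forecast over the test horizon with uncertainty estimates
forecast = fitted.get_forecast(steps=len(test))
pred_mean = forecast.predicted_mean           # point forecasts
pred_se = forecast.se_mean                    # standard errors of the forecast
conf_int = forecast.conf_int(alpha=0.05)      # 95% confidence intervals

# Evaluate forecast accuracy against the held-out data
y_true, y_pred = test.values, pred_mean.values
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_true, y_pred)
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100

print(f"MSE: {mse:.6f}  RMSE: {rmse:.6f}  MAE: {mae:.6f}  MAPE: {mape:.2f}%")
```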
Conclusion
The growing importance of stock price forecasting has attracted considerable attention from industry experts and investors. Analyzing stock market trends is challenging due to the inherently noisy environment and significant volatility associated with market movements. The complexity of stock prices involves several factors, including quarterly earnings reports, market news, and changing investor behavior. Traders rely on a variety of technical indicators derived from daily stock market data; despite the use of these indicators to analyze stock returns, accurately predicting daily and weekly market trends remains a challenge. Predicting stock trends is a fascinating and difficult task in an ever-changing industrial world, influenced by both economic and non-economic factors. Thus, stock market forecasting is considered a major challenge.
Traditional methods show that stock market returns are predicted based on past stock returns, other financial variables, and macroeconomic indicators. The predictability of stock market returns has led investors to investigate its causes. Forecasting stock market trends is a complex process because it is influenced by many aspects, including traders' expectations, financial circumstances, administrative events, and specific factors related to market dynamics. Moreover, stock price series are usually dynamic, complex, noisy, non-parametric and non-linear in nature. Financial time series forecasting is made problematic by complex characteristics such as volatility, irregularity, noise and changing trends.
Our choice of the ARIMA model in this research is based on the following considerations: ➢ Simplicity and interpretability: it is relatively easy to use and understand.It has a set of parameters, such as autoregression orders (p), difference (d), and moving average (q), that can be chosen based on data analysis and statistical metrics.This makes the model accessible to a wide range of users.
➢ Flexible model specification: Models are very flexible and can be customized to simulate different types of time series.This versatility is useful when working with multiple time series, as one type can be applied to different data sets.
➢ Time Dependency Accounting: The model accounts for time dependencies in the data, taking into account previous values in the series.This allows for trends, cyclicality, and seasonality in stock market time series.The model can capture long and short term dependencies, making it effective for forecasting financial time series.
➢ Suitable for smaller datasets: ARIMA can be trained on relatively small datasets because it requires fewer parameters than neural networks or deep learning models. This makes it suitable when data availability is limited.
➢ Robust performance: typically provide robust performance comparable to other time series statistical methods.While they may not always be the most efficient models, they provide consistent and reliable results, making them a good choice when time is limited for extensive experimentation.
➢ Prevalence and Availability: This is one of the most common time series forecasting models.It is well studied in the literature and has extensive support in statistical packages and software tools.This makes the model accessible and usable in practical stock market forecasting tasks.
➢ Proven efficiency: The model has proven efficiency in forecasting time series, including financial data.Numerous studies and practical applications show that the model can be an effective tool for stock market forecasting.
These advantages contribute to the attractiveness and usefulness of ARIMA models in forecasting and analyzing time series data, including our selected stock market index, the S&P 500.
This article develops an ARIMA model for forecasting and analysis of time series data of the S&P 500 stock market index:
➢ EDA and Data Cleaning/Validation: Perform exploratory data analysis to understand the characteristics of the time series.
➢ Determine Moving Statistics: Compute moving statistics, such as moving averages or moving standard deviations, to identify trends and seasonality.
➢ Test for stationarity: Apply a stationarity test, such as the Augmented Dickey-Fuller test, to ensure that the time series is stationary.If the time series is not stationary, we will perform additional manipulations to achieve stationarity.
➢ Apply seasonal decomposition: Apply the seasonal decomposition method to decompose the time series into its components: trend, seasonality, and residuals.This will allow us to better understand the contribution of each component to the overall index dynamics.
➢ Applying the logarithmic transformation: Apply the logarithmic transformation to the time series to smooth the extreme values and reduce their impact on the model.
➢ Finding the optimal model parameters: Search for the optimal parameters for the ARIMA model -this will allow us to build the most accurate and appropriate time series model.
➢ Implement an ARIMA model to predict the S&P 500 index: Divide the data into training and test data and implement the ARIMA model using the optimal parameters to predict the future price.
➢ Analyzing the Results and Checking the Accuracy of the Model: Analyze the results of the analysis and compare the predicted values with the actual data to evaluate the accuracy and reliability of our model.This will allow us to draw conclusions about the applicability of the model for predicting the future dynamics of the index.
Figure 3. Dataset information and check for null values
Figure 4. Extract the 'Close' column from the dataset
Figure 5. Setting the 'Date' column as an index
Figure 8. Rolling Mean and Standard Deviation
Figure 9. Def test_stationarity with definition of rolling statistics
Figure 10. Results of Augmented Dickey-Fuller test
Figure 12. Results of seasonal decomposition
Figure 13. Moving average and standard deviation
Figure 14. Plot of Moving Average and Standard Deviation

(Chumachenko, 2020)
➢ Parameter p (AR - Autoregressive): Indicates the number of previous values of the time series used to predict the current value. If p=1, the model uses only one previous value. If p=2, the model uses two previous values, and so on. A larger value of p means that the model uses more previous values for prediction.
➢ Parameter d (I - Integration): Determines the number of differencing operations needed to achieve stationarity of the series. Differencing helps to remove trends and seasonality in the series. If d=0, the series is considered stationary. If d=1, the model uses the first difference of the series. If d=2, the model applies the second difference of the series, and so on.
➢ Parameter q (MA - Moving Average): Indicates the number of previous prediction errors used to predict the current value. If q=1, the model uses only the previous error. If q=2, the model considers two previous errors, and so on. A larger value of q means that the model uses more previous errors for prediction.
Figure 16. Create and train an AutoARIMA model
Figure 19. Splitting the data into train and test sets
Figure 20. Building and training an ARIMA model
Figure 22. Forecasting with a trained ARIMA model and obtaining predictive values. Source: compiled by the author.
Figure 23. Output of forecast values, standard errors and confidence intervals. Source: compiled by the author.
Figure 24. Plot with Predicted Index price
Figure 25. Calculation of ARIMA model evaluation metrics

LSTM (Long Short-Term Memory) is a tool for stock market forecasting and modeling. Unlike traditional models, it is able to handle long-term dependencies in time series and capture complex temporal patterns. It has a built-in ability to remember and forget information over time, allowing it to account for long-term trends and seasonal fluctuations in the market. LSTM can also model non-linear dependencies and adapt to changing market conditions. It can use different types of data, including stock prices, trading volumes, macroeconomic indicators, and other factors, to make more accurate predictions. Numerous studies and publications demonstrate the successful application of this model in stock market forecasting and describe various methods and approaches to its use (Serafeim, 2020). VAR (Vector Autoregression) is a model that is widely used for stock market forecasting and modeling. It allows you to analyze the relationships between multiple time series, taking into account the impact of one series on others. A VAR model is a system of simultaneous equations in which each variable depends on its past values and the past values of other variables. This allows for complex interactions and dependencies between various factors such as stock prices, trading volumes, market indicators, and economic performance. The VAR model can capture dynamics and long-term trends in time series and predict future values based on past data. Its advantages include the ability to analyze historical relationships, estimate impulse responses, and perform scenario analysis. Many papers in the financial econometrics literature and research studies apply the VAR model to stock market forecasting and modeling and describe methods for estimating and interpreting the results (Longmore, 2020). GARCH is widely used in financial econometrics and time series analysis to model and predict the volatility of financial data. It has advantages in modeling variability and accounting for the variance structure of time series. However, like any model, GARCH has its limitations and requires proper parameter selection and estimation to achieve good results in predicting volatility (Sheeeen, 2016). Random Forest is a machine learning algorithm that combines multiple decision trees to perform classification and regression tasks. It uses randomness to select features and data samples, and combines tree predictions to improve model accuracy and stability. Random Forest offers high performance, the ability to estimate the importance of features, robustness to overfitting, and a wide range of applications in a variety of domains (Uzakariya, 2021). | 9,590 | 2023-12-31T00:00:00.000 | [
"Economics",
"Mathematics"
] |
Minimizing Design Costs of an FIR Filter Using a Novel Coefficient Optimization Algorithm
This work presents a novel coefficient optimization algorithm to reduce the area and improve the performance of finite impulse response (FIR) filter designs. Two basic architectures are commonly used in filters: direct and transposed. The coefficients of a filter can be encoded in the fewest possible nonzero bits using canonic signed digit (CSD) expressions. The proposed optimization algorithm can share common subexpressions (CS) and reduce the number of replicated operations that involve the CSD coefficients of filters with a transposed architecture. The effectiveness of the algorithm is confirmed by using filters with the code division multiple access (CDMA) standard, the 121-tap high-pass band, and the 105- and 325-tap low-pass bands as benchmarks. For example, the proposed algorithm applied to the optimization of the 105-tap filter yields a 30.44% smaller combinational logic area and a 16.69% better throughput/area than the best design that has been developed to date. Experimental results reveal that the proposed algorithm outperforms earlier designs.
Introduction
Digital filters have a wide range of applications because they are much more stable and reliable than analog filters. Digital filters are used in image/audio processing and in a wide range of wired and wireless communication systems. The designs of digital filters vary widely across applications. They can be divided into finite impulse response (FIR) and infinite impulse response (IIR) filters.
A finite impulse response (FIR) filter has a linear phase and arbitrary amplitude, and it is easily implemented. The main goal of previous designs has been to avoid a high-cost multiplier at the transmitting side, since a multiplier must be used at the receiving side. The design approach herein involves simplifying the digital filter's coefficients to reduce the area cost of the filter. The coefficients of a digital filter can be separated into various coefficient groups. The filter hardware comprises logical adders, subtractors, and shift registers. If the number of these logical components can be reduced by some simplification method, then the overall system can be improved.
The rest of this paper is organized as follows. Section 2 briefly reviews previous research on filter optimization. Section 3 then describes the coefficient optimization method. Next, Section 4 summarizes the experimental results and compares them with those of previous designs. Conclusions are finally drawn in Section 5, along with recommendations for future research.
Related Works
2.1. Coefficient Simplification Methods. Coefficient simplification is one of the most effective ways of improving the area and performance of a finite impulse response filter. Numerous methods of coefficient simplification for filters have been developed. The minimum number of signed power-of-two (MNSPT) method [1] was developed to simplify the numeric representation of the coefficients. The canonic signed digit (CSD) [2] representation is used to reduce the number of binary "1"s in the coefficients, reducing the area needed to realize constant multiplications. Simplification algorithms are utilized to reduce the number of required constant multipliers in FIR filter realization. Using an algorithm to determine the relationships among coefficients and to extract the common terms in their binary formats can reduce the number of redundant logical operations.
In the literature, horizontal and vertical relationships can be found between coefficients; these relationships can be checked to design an algorithm for extracting their common factors. Such an algorithm commonly has low complexity. The algorithm of Paško et al. [3] performs a global search but consumes too much time. In some studies [4][5][6][7][8][9], horizontal and vertical relationships between coefficients and the displacement and delay characteristics of coefficients were used to perform the simplifications. Ernesto and Dolecek [10] utilized linear programming to identify the largest common factors of coefficients. Searches for common factors using low-complexity methods can be divided into two categories: horizontal and vertical.
Horizontal Search Algorithms.
Horizontal search algorithms find shifting relationships between coefficients.For example, in Figure 1, the coefficients H0 and H1 in binary format are shifted relative to each other, so they have the same multiplication block.They can both be multiplied by performing only one calculation.
Vertical Search Algorithms.
Vertical search algorithms find the delay relationships between the values in corresponding positions of binary representations of coefficients.Coefficients with such a relationship have the same addition block.They can be added by performing a single calculation.Figure 2 presents the vertical search.
Literature Review.
Coefficient representations can be categorized into binary and canonic signed digit (CSD) representations. The CSD representation primarily involves reducing the number of "1"s in the original binary representation of coefficients. More "1"s result in more repeated additions and require more adders in the corresponding circuit realizations. Additionally, a run of repeated additions can be replaced by one addition of a larger value and one subtraction. For instance, in binary, the value seven can be expressed as (2^2 + 2^1 + 2^0); it can also be expressed as (2^3 - 2^0) in standard CSD notation. The value seven in CSD notation uses a single subtraction instead of the two additions required in binary notation, so the cost of realization is reduced by using CSD notation. Numerous coefficient simplification algorithms are described below.
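As an illustration of the notation, the following hypothetical helper converts an integer to its CSD (non-adjacent form) digits; it is a sketch of the representation only, not one of the simplification algorithms discussed here:

```python
def to_csd(n: int) -> list[int]:
    """Return the canonic signed digit (non-adjacent form) digits of n,
    least significant first, each digit in {-1, 0, 1}."""
    digits = []
    while n != 0:
        if n % 2:                 # odd: pick the digit that makes n divisible by 4
            d = 2 - (n % 4)       # +1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# Example from the text: 7 = 2^3 - 2^0 in CSD notation
print(to_csd(7))   # [-1, 0, 0, 1]  ->  8 - 1 = 7
```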
In 2005, Vinod and Lai [11] improved the horizontal and vertical search algorithms that were developed in 2003 [5]. They constructed multiplier block adders (MBAs) and then structure adders (SAs). Their new algorithm yielded a final result after adding one or more delays to the logic gates realizing the noncommon factors of the coefficients. The algorithm presented by Takahashi and Yokoyama [6] extracts common factors by finding the common factor with the highest frequency; if two or more common factors have the same frequency, the smallest one is extracted. Experience with running the algorithm on a filter with 26 coefficients shows that the factors (1, 0, ) and (1, 0, 0, ) appear most frequently. Maskell and Liewo [12] developed an algorithm for reducing the height of an adder tree composed of common factors. The height of the adder tree can be reduced by properly setting the width of the adders; accordingly, extracting more common factors results in narrower adders with lower latency and lower area cost. A local search algorithm first extracts the common factors (1, 0, 1) and (1, 0, -1). The algorithm also uses a specific multiplier block (MB) in place of a full adder (FA), reducing the area cost by 67%.
Proposed Coefficient Optimization Method
Equation (1) defines the finite impulse response (FIR) filter: y[n] = Σ_{k=0}^{N-1} h[k]·x[n-k], where at each time point n the input samples x are combined only with the coefficients h[k]. The basis of the common subexpression elimination (CSE) algorithm is to find the common factors of the coefficients of a filter. In the transposed form, presented in Figure 3, the common factors of the coefficients are realized as shared multiplication blocks (MBs). Therefore, the overall area of the FIR circuit can be reduced by sharing MBs.
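For reference, a direct (naive) software realization of equation (1) might look as follows; this is a behavioral sketch, not the hardware architecture discussed in the paper:

```python
import numpy as np

def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n - k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k, coeff in enumerate(h):
            if n - k >= 0:
                y[n] += coeff * x[n - k]
    return y

# Equivalent to np.convolve(x, h)[:len(x)]
```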
This section elucidates the use of a new CSE algorithm to extract the common factors of the coefficients in CSD notation.The algorithm obtains the statistics concerning the frequencies of the appearances of coefficients and finds the reciprocals of those coefficients to search for common factors.Algorithm 1 presents the pseudocode of the proposed CCSE (CSD-based common subexpression elimination) algorithm.The steps of the proposed CCSE algorithm are as follows.
Step 1. Find the i-th coefficient and all nonzero bit positions (from high to low) of the coefficient. Record these positions and list the combinations of subexpressions (SEs) with more than one nonzero bit. Use all of the combinations as the basic elements (BEs) in the simplification (tabulate the BEs). If an input element matches one of the BEs in the table, increase the statistical frequency of that BE. Otherwise, if an input element does not match any BE in the table, the input element becomes a new BE and is added to the table.
Step 2. Set i = i + 1 and determine whether the value exceeds the boundary condition set in the initialization. If it does not, repeat Step 1; otherwise, proceed to Step 3.
Step 3. Evaluate all of the BEs in the table to find their subexpressions (SEs) having reciprocal SEs.If the inverted SEs exist, then use the positive SEs as basic elements.Calculate the number of appearances of negative SEs.
Step 4. Evaluate all of the BEs in the table to find which BE has the highest appearance frequency.If the highest frequency is one, then the algorithm proceeds to the final step; if the highest frequency exceeds one, then select the BEs as common subexpressions (CSs), and if more than one BE has the same highest frequency and this frequency exceeds one, select the shorter BE as the extracted CS.
Step 5. Find all of the coefficients with the same CS that was generated in Step 4 and those of the corresponding inverted CS and perform the elimination process.When the process is complete, put a new replaced variable back in the original expressions and reset the loop value to zero, before returning to Step 1.
Final.Complete the algorithm and output the simplification results.Table 10 presents the example of a filter with three coefficients to elucidate the actual processes of the algorithm.Intermediate results are obtained after each step of the algorithm, as described in the following statements.
Explanation 1 (i equals zero). Select the coefficient H(0) and list all of the SEs with more than one nonzero bit. Make these SEs the BEs for simplification. The appearance frequencies of these SEs are as follows.
Firstly, no BE is recorded in the table, so all SEs are taken as BEs for subsequent simplification.
Explanation 3 (i equals one). Select the coefficient H(1) and list all of the SEs with more than one nonzero bit. The intermediate results are as follows.
The BEs in the table are as follows.
Explanation 5 (i equals two). Select the coefficient H(2) and list all of the SEs with more than one nonzero bit. The intermediate results are as follows.
The BEs in the table are as follows.
Explanation 9.The CS (1, 0, 1) is taken from Step 4 and the inverse CS (−1, 0, −1) is also utilized in the simplification.A new variable replaces the CS in the original coefficients and the intermediate outputs are as presented in Table 11.After the five steps have been completed, the algorithm resets the value to zero and returns to Step 1.
Explanation 10.Repeat the five steps and find the BEs of coefficients until the appearance frequencies of BEs are equal to zero.Table 12 presents the outputs of the algorithm.
The algorithm yields the following CSs: C5 = (1, 0, 1), whose decimal value is 5; C21 = (5, 0, 1), whose decimal value is 21; and C169 = (21, 0, 0, 1), whose decimal value is 169. Based on the above explanations and the pseudocode of the proposed algorithm, the main goal of the first step is to identify all subexpressions with nonzero bits; an algorithm that finds more subexpressions is more likely to find the best simplification. The third step counts the inverted SEs, with a view to improving the area of the simplification. The fourth and fifth steps are the major reduction steps. The major difference between the proposed algorithm and earlier ones concerns the CSE process: the proposed algorithm does not directly eliminate the CSs from the coefficients but replaces the CSs with new variables. In the next iteration, the algorithm extracts new CSs between the coefficients containing new variables and those without a CS. This approach can extract more new CSs and achieve better simplification results.
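A simplified software sketch of the frequency-counting idea behind Steps 1 and 4, restricted to two-digit subexpressions; the data structures and helper name are hypothetical, not the paper's implementation:

```python
from collections import Counter
from itertools import combinations

def count_subexpressions(coefficients_csd):
    """Count two-digit subexpressions (digit pair plus relative offset)
    across a list of CSD coefficient digit vectors."""
    freq = Counter()
    for digits in coefficients_csd:
        nonzero = [(pos, d) for pos, d in enumerate(digits) if d != 0]
        for (p1, d1), (p2, d2) in combinations(nonzero, 2):
            freq[(d1, d2, p2 - p1)] += 1   # pattern keyed by digits and spacing
    return freq

# Example: the pattern 5 = (1, 0, 1) appears in both coefficients below
coeffs = [[1, 0, 1, 0, 0], [0, 1, 0, 1, 0]]
print(count_subexpressions(coeffs).most_common(1))  # [((1, 1, 2), 2)]
```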
To confirm the effectiveness of the proposed algorithm, the filter with three taps is used to estimate the required logical operations at the architectural level. The input variable is set to 12 bits. The minimum bit widths of the coefficients and common factors are used in the estimation. A represents an adder; S denotes a subtractor; I denotes an inverter. The realization areas of the filter with the original three coefficients in CSD notation are as follows: the realization of H(0) needs (72 A, 0 S, 0 I), H(1) needs (60 A, 0 S, 0 I), and H(2) needs (0 A, 30 S, 0 I), so the total filter needs (132 A, 30 S, 0 I). After the algorithm is applied, the realization of common subexpression C5 needs (15 A, 0 S, 0 I), C21 needs (17 A, 0 S, 0 I), and C169 needs (20 A, 0 S, 0 I). H(0) and H(1) share the common subexpression C169 with a shifting relationship, and the shift relationship can be realized without occupying additional area. H(2) has the inverted C5, so its realization needs only (15 A, 0 S, 15 I) after the algorithm is executed. The total area cost of the filter is (52 A, 0 S, 15 I). In the estimation of the area, the subtractor and the adder are assumed to have the same area cost, and the area cost of the inverter is 1/10 of that of the adder. Accordingly, the original area cost of the filter is 162 A, and the proposed algorithm reduces it to 53.5 A, eliminating 67% of the area cost of realizing the coefficients of the filter.
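The quoted area figures can be checked with a few lines of arithmetic, under the paper's stated cost assumptions:

```python
# Area check: subtractor costs the same as an adder, inverter costs 0.1 adder
original = 132 + 30                       # (132 A, 30 S) -> 162 A
optimized = (15 + 17 + 20) + 15 * 0.1     # CS adders + 15 inverters -> 53.5 A
print(f"reduction: {1 - optimized / original:.0%}")   # ~67%
```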
Experimental Results
In the experiments herein, four filters are used to compare the performance of various search algorithms: a symmetric 48-tap filter defined in the CDMA 2000 communication protocol, a 121-tap high-pass filter, and two low-pass filters with 105 and 325 taps. CDMA 2000 [13] is a 3G mobile communication standard. The 3G system offers various telecommunications services, including voice, multimedia, and high-speed and low-speed data transmission. The system requires a baseband filter to suppress intersymbol interference, and the CDMA 2000 standard recommends the use of a symmetric 48-tap finite impulse response filter to eliminate the interference. The 121-tap high-pass filter [14] and the 105- and 325-tap low-pass filters [15] are also used to confirm the effectiveness of simplification by the search algorithms. The symmetric coefficients of the filters are realized with transposed architectures.
In the realization, the Synopsys Design Compiler (DC) SP1 software is used to obtain data concerning the synthesis of the circuit. The process technology adopts the CBDK Arm 4.0 TSMC 0.18 um cell library with the default system parameters. The data in the comparison tables are rounded to the second decimal place; the unit of circuit area is um^2, the unit of data arrival time is nanoseconds (ns), the unit of throughput is gigabits per second (Gbps), and the unit of throughput per area is bps/um^2. In the comparison of areas, the combinational logic area is used to confirm the simplification performance of each algorithm, since the search algorithms simplify the coefficients, which are realized using combinational logic units. In the following performance comparisons, the coefficients with original CSD-based expressions are simplified only by the Synopsys DC tool, while the other search algorithms perform their own simplifications of the coefficients in CSD notation.
According to Table 1, the search algorithm proposed by Jang and Yang [4] has a coefficient simplification ratio of 4.14% compared with original filter, which is better than that of any previous algorithm.The algorithm extracts the most common factors than the others but also causes the most path delays.The proposed CCSE algorithm reduces the combination logic area by 13.43%, and the total filter area by 8.56% compared with original filter.The proposed algorithm reduces the area of the filter by more than the previous algorithms.According to Table 2, the search algorithm that was developed by Jang and Yang [4] has the best simplification ratio, 15.85%, which corresponds to the best reduction of the area of coefficient realization.The proposed algorithm reduces the combinational logic area by 22.91% and the total filter area by 15.57%.Both of these results are the best achieved using any algorithm.Tables 3 and 4 also reveal that the proposed algorithm reduces the area of the filter more than does any other.
As more common factors are extracted using the search algorithms, the path delay (or the data arrival time) is increased more.The realizations under the coefficients in CSD notation have shorter path delays compared to those of the coefficients in original binary notation after search algorithms are implemented.The throughputs have the same effects as the path delays.In Table 5, all of the throughput/area ratios are reduced by realizing the constant multiplications with the coefficients in CSD notation.The proposed algorithm increases the throughput/area ratio by 2.84%, which is the largest increase of any algorithm.In Table 7, the coefficient simplifications using the proposed algorithm increase the throughput/area ratio by 16.69% more than the original coefficient simplifications using Synopsys DC.Tables 6 and 8 also reveal that the proposed algorithm has the best simplification ratio of any of the compared algorithms.
Previously proposed search algorithms have different advantages in coefficient simplification across the four filters described above. All of the algorithms effectively reduce the area cost of realizing the FIR filters; most of them sacrifice throughput by increasing the path delays. The proposed CCSE algorithm reduces the area of the filter the most, yet it also has a higher throughput than the other algorithms. According to the experimental observations, the proposed algorithm performs best not only in area reduction but also in the throughput/area ratio.
Layout realizations of the 48-, 121-, 105-, and 325-tap filters are also used to demonstrate the effectiveness of the proposed algorithm. The Cadence SOC Encounter software is used to place and route the designed filters. The I/O pin count of each filter is 40 pins. The synthesis and layout processes use TSMC 0.18 um mixed-signal RF 1P6M CMOS technology. Table 9 shows the placement and routing information of the four designed filters. The die size of the 48-tap CDMA filter is about 0.16 mm^2 and the total gate count of the filter is approximately 12.5 K. The filter can operate at 72 MHz with 7.531 mW power dissipation. Figure 4 shows the layout of the 48-tap filter with 40 I/O pins. The 325-tap filter has the largest chip size and can operate at 55 MHz with an area of 87.68 K gates.
Conclusions
In summary, coefficient simplifications by the search algorithms are useful in reducing the combinational logic area
Figure 3: An FIR filter of the transposed form. | 4,204.2 | 2014-09-11T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Prevalence of Babesia Spp. in Presumably Healthy Dogs and Associated Risk Factors in Obio/Akpor Local Government Area, Rivers State, Nigeria
Babesia canis is a parasitic protozoan transmitted by ixodid ticks. It infects the red blood cells of most mammals, especially dogs, causing canine babesiosis. In the present study, the prevalence of Babesia spp. and associated risk factors among dogs in Obio/Akpor Local Government Area, Rivers State were investigated using blood films. Blood samples from 150 dogs were randomly collected and examined for the presence of the parasite between March and November, 2022. Blood films were prepared, fixed in methanol, stained in Giemsa and examined under the microscope for the presence of the parasite. Data on age, breed, sex and other related risk factors were obtained using a self-structured questionnaire. Out of the 150 dogs examined, 27 (18%) were infected, and of the 27 infected dogs, 3 (11.1%), 10 (37.0%), 2 (7.4%), 6 (22.2%), 2 (7.4%) and 4 (14.8%) were from Rumuolumeni, Ogbogoro, Rumuopirikon, Choba, Rumuola and Ozuaba communities respectively. There was a significant difference (p<0.05) in the number of infected dogs across the communities when compared to the number of dogs that were not infected. More males, 12 (19.4%), were infected than females, 6 (14.3%). Dogs within the age range of 7-36 months had the highest infection, 16 (59.3%), while no infection was recorded among dogs within the age group of 3-6 months. The local breed had a higher infection rate, 18 (36%), than the crossed breed, 6 (12%), and the exotic breed, 3 (6%). There was a significant difference (p<0.05) in the prevalence of Babesia spp. in relation to sex, age and breed of dogs. Other risk factors identified were management practice, vaccination, use of dogs and intensity of tick infestation, all of which were statistically significant (p<0.05). The study confirmed that canine babesiosis is a serious health concern among dogs in the study area and attention should be given to the risk factors during intervention.
INTRODUCTION
Dogs are one of the most important domesticated animals in many parts of the globe, where they are used for security, hunting and as pets (Omudu et al., 2007). In Africa, dogs are kept for similar purposes, including breeding, herding and as a source of protein (Opera et al., 2005; Hambolu et al., 2014), and for the treatment of certain illnesses (Gurumyen et al., 2020). Incidentally, dogs are one of the many targets of Babesia spp., especially because they are vulnerable to tick infestation (Omudu et al., 2007; Solano-Gallego et al., 2016). The parasites infect a wide range of both domestic and wild animals, including man (Carter, 2001; Schnittger et al., 2012; Obeta et al., 2020). The parasite belongs to the genus Babesia and, alongside other species of the genus, is responsible for babesiosis in dogs, horses and rodents (Oguche et al., 2020). There are two groups of Babesia, the large and the small forms; Babesia canis is a large form. They can be morphologically differentiated by their size and shape in the infected red blood cell (Laha et al., 2015). The large forms (pyriform in shape, pointed at one end and round at the other) orientate in the red blood cell at an acute angle to each other, while the small forms (oval in shape, lacking the pyriform) lie at an obtuse angle to each other (Ruprah, 1985; Laha et al., 2015). Scanty records exist on the prevalence of the infection among dogs in Obio/Akpor Local Government Area. This study is therefore aimed at the determination of the prevalence and associated risk factors of the infection among dogs in some communities of Obio/Akpor Local Government Area.
Sample size
The sample size for this study was determined using the method of Kothari ( 2004).Therefore, a total of 150 dogs (25 dogs from each of the communities) were randomly selected for this study.The communities were Rumuolumeni, Ogbogoro, Rumuepirikom, Rumuola, Ozuoba and Choba.
Sample collection
A total of 150 dogs (50 local breed, 50 exotic breed and 50 cross breed) were randomly selected for investigation of the presence of Babesia canis infection in the study area. Blood samples from these dogs were obtained with the help of a veterinary doctor and the owners. The method of WHO (1991) was adopted in the collection of blood samples. About 10 ml of blood was collected through the cephalic vein, using a 5 mL disposable syringe and a 23-gauge needle, into a sample vial containing 1 mg of ethylene diamine tetra-acetate-K (EDTA-K) as anticoagulant. The blood samples were immediately kept inside a cool box containing ice packs and transported within 4 hours to the Research Laboratory, Department of Biology, Ignatius Ajuru University of Education, for parasitological examination.
Parasitological analysis of blood samples
Laboratory examination of the blood samples for the presence of Babesia canis were done using the method of Hendrix and Robinson (2006).Thin blood smears were prepared from the blood samples, air dried and fixed in methanol for 3-5 minutes and allowed to dry.The slides were stained in 3% Giemsa for 30 minutes and washed with phosphate buffered saline (PBS) to remove excess stains.The slides were then air-dried and examined under oil immersion (x100) for presence of intra-erythrocytic merozoites of Babesia spp.
Questionnaires
A total of 150 copies of self-structured questionnaires were produced and distributed to the dog owners to obtain information regarding the sample location, sex, age, breed, management and infestation of ticks as well as the risk factors associated.
Data analysis
The data collected were analyzed using SPSS version 20 to determine the prevalence, while the Chi-square test was used to evaluate the relationships between the variables, including age, sex, breed and risk factors. A value of p<0.05 was considered significant, at a confidence interval of 95%. The following formulae were used to determine the prevalence in relation to the respective variables.
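Although the authors used SPSS, the prevalence formula and the chi-square test can be sketched in Python as follows, using the breed counts reported below (scipy is assumed to be available):

```python
import numpy as np
from scipy.stats import chi2_contingency

def prevalence(infected: int, examined: int) -> float:
    """Prevalence (%) = number infected / number examined * 100."""
    return infected / examined * 100

# Overall prevalence: 27 infected out of 150 examined
print(prevalence(27, 150))   # 18.0

# Chi-square test of infection vs. breed; rows are [infected, not infected]
table = np.array([
    [18, 32],   # local breed (50 dogs)
    [6, 44],    # crossed breed (50 dogs)
    [3, 47],    # exotic breed (50 dogs)
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
```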
Plate 1: Merozoite (Giemsa stained) of Babesia canis in an infected red blood cell

All 150 dogs investigated were within the age of 3-60 months: 42, 71 and 37 dogs were within the age groups of 3-6 months, 7-36 months and 37-60 months respectively. The results indicate that of the 42 dogs within the age range of 3-6 months examined, there was no infection, 42 (0%); of the 71 dogs within the age range of 7-36 months, 16 (22.5%) were positive; while of the 37 dogs in the age group of 37-60 months examined, 11 (29.7%) were infected (Table 2). Of the 150 dogs examined, 50 each were local, exotic and crossed breeds. The local breed had the highest numerical infection, 18 (36%), followed by the crossed breed, 6 (12%), and the exotic breed, 3 (6%). There was no significant difference (p>0.05) in the prevalence of the infection in relation to sex, age and breed of the dogs investigated.
Prevalence of Babesia spp. in relation to risk factors
A total of 150 self-structured questionnaires were produced and distributed to dog owners with a view to evaluating certain risk factors associated with babesiosis. Of the 150 respondents, 88 and 62 reported keeping stray dogs and caged dogs respectively. Of the infected dogs, 18 (66.7%) were stray dogs while 9 (33.3%) were caged dogs (Table 3). Similarly, 21 (77.8%) of the infected dogs had tick infestation and 6 (22.2%) had no ticks. A total of 54 dogs were regularly vaccinated, of which 2 (7.4% of the infected dogs) were infected, while 25 (92.6%) of the infected dogs came from the 96 dogs that had no regular vaccination (Table 3). Of the 36, 44 and 70 dogs kept as pets, for hunting and for security respectively, 7 (26%), 10 (37%) and 10 (37%) were positive for babesiosis (Table 3). More rural dogs (70.4%) were infected than urban dogs (29.6%). The infection rate among dogs fed on home-made food was 37%, among dogs that ate anything 37%, and among dogs fed on pet food 26% (Table 3). The results indicated that these factors significantly (p<0.05) influence the prevalence of the infection.
DISCUSSION
Babesia canis is a haemoparasite that causes canine babesiosis in dogs. The infection is highly pathogenic and is the major cause of haemolytic anaemia in dogs in the tropics (Kamani et al., 2011). The parasite is among the most widely distributed haemoparasites of dogs, occurring almost anywhere the tick vector Rhipicephalus sanguineus is reported (Taylor et al., 2007).
This study recorded a high overall prevalence of 18% of Babesia canis in the study area. This is an indication that babesiosis is still a health challenge in the area and that the tick vector of the parasite is widely distributed in Nigeria. The recorded prevalence is higher than the 8.9%, 11.66% and 10.8% reported in Abuja at various times by Jegede et al. (2014), Obete et al. (2009) and Obeta et al. (2020) respectively. It is also higher than the 10.2% and 12.9% recorded in Makurdi and Plateau State by Amuta et al. (2010) and Oguche et al. (2020) respectively, as well as the 2.4%, 3.8%, 5.3% and 13.33% recorded in Zambia, Cape Verde, Southern France and Costa Rica by Williams et al. (2014), Salem and Farag (2014), Garcia-Quesada et al. (2021) and Rene-Martellet et al. (2015) respectively. The differences in the prevalence recorded in the various studies may be attributed to the poor management strategies adopted by dog owners (Obeta et al., 2020), the relative distribution and abundance of the tick vector of the parasite, differences in the geography of the study areas (Jegede et al., 2014) and lack of immunization of the dogs by their owners (Amuta et al., 2010).
In this study, more males were infected than females, and this was statistically significant (p<0.05). This is in consonance with previous studies in Jos, Nigeria by Omudu et al. (2007), in Vom, Northern Nigeria by Daniel et al. (2016) and elsewhere in South Africa by Mellanby et al. (2011). The higher prevalence could be a result of the hormonal status of male dogs, particularly the presence of testosterone, which might limit the quality of care given by the owner, and the excessive roaming behaviour of male dogs in search of mating partners and territory, exposing them to more tick infestation (Mellanby et al., 2011; Daniel et al., 2016; Obeta et al., 2020). Their female counterparts are presumably less mobile, as they spend much time nursing puppies and are given good care and attention by their owners due to their economic value. The result is, however, contrary to the records of Omudu et al. (2010) in Makurdi, Okunbanjo et al. (2013) in Abuja, Jedege et al. (2014) in Abuja and Oguche et al. (2020) in Jos; these studies recorded a higher prevalence of B. canis in female dogs than in males.
The study recorded a significantly (p<0.05) high infection rate of Babesia spp. in older dogs, while there was no observable infection in younger dogs. Specifically, dogs within the age range of 7-36 months had the highest infection. This agrees with previous studies by Obeta et al. (2020) and Jegede et al. (2014), who reported a high prevalence of the parasite in older dogs in Abuja; a similar observation was made in Jos by Oguche et al. (2020). The result contradicts the report of Okunbanjo et al. (2013), who observed a low prevalence of the parasite in older dogs and a high prevalence in puppies in Zaria. The relatively high prevalence recorded in this study could be attributed to lower resistance and a poorer immune response against the parasite in older dogs, possibly because of age; research indicates that animal immunity decreases with age, making older animals susceptible to infection. It may also be attributed to the frequent and long-term exposure of older dogs to the vector of the parasite (Egege et al., 2009). Dogs within the age range recorded in this study are very active and roam about indiscriminately, exposing themselves to tick infestation, which might account for the high prevalence of canine babesiosis recorded in our study. Again, the habit of assembling in the mating season and the style of playing in the field may increase tick infestation of dogs. However, studies have shown that canine babesiosis increases with age but declines when dogs are about 4-5 years old (Hornok et al., 2006).
In this study, local breed has the highest infection (p<0.05) of 36% compared to exotic breed (6%) and crossed breed (12%).Similar result was recorded in Abuja by Jegede et al. (2014) and Daniel et al. (2006) in Jos.
This could be a result of the poor management system and lack of care for the local breed by their owners. The local breed is allowed to roam freely in the street to scavenge; hence, they are vulnerable to heavy tick infestation. Moreover, they are hardly immunized and are bred in very poor hygienic conditions (Kamani et al., 2011; Eguche et al., 2020). In this study, there was no recorded significant prevalence of babesiosis infection in relation to breed of dogs. However, several studies have suggested that breed is a predisposing risk factor in babesiosis infection (Hornok et al., 2006; Mellanby et al., 2011). The reason for this is not clear but may be related to differences in genetic composition (Obeta et al., 2020). The result obtained in our study is at variance with the reports of Nalubamba et al. (2015) and Obeta et al. (2020), who reported a high prevalence of canine babesiosis in exotic breeds in Zambia and Abuja, Nigeria respectively. Unconfined dogs, the presence of ticks on dogs, and irregular vaccination were some of the risk factors identified. Dogs used for hunting had a higher frequency of infection than others, in agreement with previous studies (Costajunior et al., 2009; Veneziano et al., 2018); this might be attributed to the management techniques adopted. The results also indicated that dogs in urban and rural areas are equally vulnerable to the infection, in consonance with the record of Silva et al. (2012).
CONCLUSION
Babesia spp. remains a health challenge among dogs in the study area irrespective of the risk factors. However, free-range and hunting dogs were at higher risk of infection. Given the physiological, behavioural and nutritional effects of canine babesiosis on dogs, prevention and control remain a viable option, through modern management systems, vaccination, use of acaricides and regular fumigation of the environment.
Other members of the genus include Babesia rossi, Babesia vogeli, Babesia gibsoni and Babesia microti (Jegede et al., 2014; Nalubamba et al., 2015; Rene-Martellet et al., 2015; Obeta et al., 2020). These species, except Babesia microti, have been reported in Africa (Solano-Gallego and Baneth, 2011). Babesia canis causes canine babesiosis, also known as malignant jaundice (Penzhorn et al., 2017) or piroplasmosis (Irwin, 2009). The parasite dwells in the red blood cells, where it replicates and destroys the erythrocytes, causing disease in the host. The commonest mode of spread of the infection is through tick bite during a blood meal (Jegede et al., 2014; Nalubamba et al., 2015). Transmission through blood transfusion and transplacental transmission have also been reported (Jegede et al., 2014). Hard ticks are the major vectors of babesiosis: Dermacentor reticulatus transmits B. canis in Europe (Barker et al., 2012); Rhipicephalus sanguineus transmits B. vogeli in tropical and sub-tropical regions of the world (Lavan et al., 2018), including Asia, North America, and North and East Africa (Hauschild et al., 1995; Oguche et al., 2020); while in South Africa, B. rossi, which causes a fatal infection in dogs, is transmitted by Haemaphysalis sanguineus (Bashir et al., 2009; Avenant et al., 2021). The pathological presentation and severity of the infection depend on the species of Babesia responsible and the host immune response (De Tommani et al., 2013). The general manifestations may include anaemia, lymphopenia, neutropenia and thrombocytopenia (Mathe et al., 2006); other symptoms include weakness, jaundice, pallor, hypoxic injury, systemic inflammation, fever, splenomegaly and collapse resulting from intra- and extravascular hemolysis (Irwin, 2009; Oguche et al., 2020). In humans, infection by canine Babesia can result in serious disease, especially in immune-compromised persons (Solano-Gallego et al., 2016), but may present slight symptoms in immunocompetent individuals (Vannier and Kraus, 2012; Yabsley and Shock, 2013). Although several reports of babesiosis have been documented in Nigeria since its emergence in 1962 (Obeta et al., 2020), scanty records exist on the prevalence of canine babesiosis among dogs in Rivers State, particularly in Obio/Akpor Local Government Area.

In the sample size determination, N is the population size under study, i.e. the total number of dogs officially registered by the veterinary unit of the Ministry of Agriculture in Obio/Akpor Local Government Area, and a is the level of significance, which is 0. The prevalence formulae applied were: (ii) prevalence of Babesia canis in relation to sex; (iii) prevalence of Babesia canis infection in relation to age of dogs; (iv) prevalence of Babesia canis infection in relation to breed of dogs; (v) evaluation of potential risk factors.

Ethical Clearance: The ethical clearance for this study was obtained from the Rivers State Ministry of Agriculture, Port Harcourt, and the Directorate of Research and Development, Ignatius Ajuru University of Education, while verbal consent was obtained from the dog owners.
Table 1 :
Overall prevalence of Babesia canis infection among dogs in Obio/Akpor
Table 2 :
Prevalence of Babesia canis infection in relation to sex, age and breed of dogs
Table 3 :
Risk factors associated with the transmission of Babesia canis (n =150). | 3,868.2 | 2023-03-08T00:00:00.000 | [
"Biology",
"Agricultural And Food Sciences"
] |
TECHNOLOGY 4.0 FOR BUILDINGS MANAGEMENT: FROM BUILDING SITE TO THE INTERACTIVE BUILDING BOOK
The main result of the research that we intend to illustrate is the connection between the contents of Industry 4.0 (Ciribini 2018) and information sharing with BIM design (Lucarelli 2018), through the insertion into a single data container (a black storage box) of all the sensors relevant to the entire building process, in order to monitor the building from the early construction phases and obtain a precise history of it. The goal is to create an "As Built" model flanked by the interactive digital building book, capable of automatic updates as the monitored data vary during the useful life of the building. The aim of this project is to exploit the use of IoT (Gabriele 2015) for data communication to the black box (Smart Monitoring Building Box, SMBBox) installed in the building from the beginning of the construction site, in order to initially monitor the work progress status and safety management on site and, subsequently, thanks to the combination with the BIM model for data management, to digitize the physical and functional characteristics of the case study object. The methodological approach is based on the following steps: BIM modeling; design and installation of the sensors and data container; updating of the collected data; "As Built" model creation; interactive building book drafting. This method is being carried out on a listed building located in the historic center of L'Aquila, subject to seismic improvement as a result of the damage caused by the 2009 earthquake.
INTRODUCTION
A concrete response to the growing demand for organized management of the construction process on site and safety in the workplace can be achieved through the use of rationalized control and management systems and procedures that involve the use of innovative technologies. The integration of high-tech systems for data storage and management, such as BIM (Building Information Modeling) software, with a series of tools and equipment always present on site (PPE, crane, etc.) equipped with IoT (Internet of Things) sensors (Nevado 2018), represents the future of construction in terms of quality of realization and sharing of information, but above all in terms of corporate Facility Management. If all this is seen on a larger scale, and in situations of complexity, such as the reconstruction of historic buildings in the municipalities hit by the earthquake, it would allow easier development, but above all control and real-time updating of the government procedures that rule the process. Among these we can include activity planning and site set-up in the areas of intervention, through the communication of data between the companies involved in the reconstruction, precisely because the more extended the work is, the more the specific processes need to be optimized in order to increase the overall organization and security. The study of the state of the art has highlighted a growing interest in the sector, which has led to the development of several lines of research. In general, however, the trend followed is that of rationalizing and automating processes in the various sectors, and the construction site industry follows the same direction. One of the tools already used is the SmartSafety platform for construction sites [5]. This platform includes an integrated, battery-powered system for the management of safety on large sites, with fully wireless instrumentation and infrastructure that allows locating people and work vehicles in real time via GPS, monitoring PPE and the activities performed, generating danger labels and delivering information to communication devices. Other research areas focus instead on Smart Equipment, which involves the use of construction site clothing and PPE with innovative technologies from both a material and a technological point of view. Among the smart PPE there is the Smart Helmet [6], a construction helmet solution developed with BIM technology. It allows real-time and remote transmission of the project realized in BIM within the construction site, or the evaluation of the environment in which the wearer of the smart helmet moves. It is activated by simply lowering the built-in mask, which acts as a real computer containing all the design information related to the building subject to the intervention. One of the purposes of using these technological devices is to obtain a smart building book (Solustri 2000) increasingly similar to the "as built" construction, but above all one that contains in detail all the materials used inside the building, for monitoring and maintenance over time of all the features that make it up. To date, the smart building book represents, in fact, one of the most important tools for integrated prevention and safety. The smart building book has been discussed at many conferences, and the National Association of Builders (ANCE) has promoted numerous initiatives for the dissemination of this tool since 2017. In Italy the smart building book is not yet mandatory, although there have been various attempts, both nationally and regionally, to establish it.
The latest Draft Law (n. 2826, 'Measures concerning the protection of the territory and provisions aimed at establishing the building log book', of 10 May 2017) indicated 31 December 2017 as the deadline for the Regions to adopt measures making the Smart Digital book mandatory for all private properties (including the areas of appurtenance and any functional destination), in order to secure our territory, both the soil and the building heritage (Ingegneri cc 2019).
METHODOLOGY
The research developed proposes the experimentation of a totally smart and innovative construction site that is, above all, integrated with a work system supported by the digitalization of different solutions. The integration of BIM with monitoring sensors will allow the management of the construction, technical-administrative and accounting procedures and activities in compliance with safety requirements (figure 1). To this end, a methodology has been developed that is based first on the BIM modeling of the design intervention to be carried out and subsequently, through the installation of the black box on site and of the sensors related to it, on the monitoring of the building during the construction phases and its use. The methodological approach is therefore based on the following steps (figure 2): (1) BIM modeling; (2) design and installation of the sensors and the black box; (3) updating of the collected data; (4) "As Built" model creation; (5) drafting of the interactive building log book (Smart Digital Book). The objective of this project is to exploit the use of the IoT for data communication to the black box (Smart Monitoring Building Box - SMBBox) installed in the building from the construction site set-up, in order to monitor the work progress status and safety management on site, and subsequently, thanks to the combination with the BIM data management model, to digitize the physical and functional characteristics of the building subject to the intervention. The research carried out to date concerns the first point of the list, referring to BIM modeling, in particular for the management of building site stages on a listed building located in the historic center of L'Aquila, subject to repair and upgrade following the damage caused by the 2009 earthquake (Lucarelli 2018), and part of the second point, aimed at the design and installation of the black box and the monitoring sensor system.
CASE STUDY
The building has been the object of a heavy restructuring that resulted in seismic improvement of the structure through consolidation of the horizontal floor slabs (Garagnani 2015). For the purposes of this research, we focused on the study of the consolidation of the brick vaults, which involved reinforcement with carbon fiber bands.
The study area is identified in the plan below; the portion of slab to be consolidated is on the second floor of the building (figure 3). The construction process of the single phase was analyzed using a BEP in order to better interpret all the individual consolidation processes (figure 4).
Works analysis
For the consolidation of the brick vaults it was foreseen, after removal of the flooring and of the existing filling, to construct abutments in cement mortar and to apply, on the extrados of the vault, a composite carbon fiber grid made integral with the vault itself by carbon flakes appropriately clamped with anti-shrinkage mortar or epoxy resin, all embedded in a layer of fiber-reinforced mortar. The system was also made integral with the masonry by steel bars inserted in new infills of solid bricks. Following the reinforcement, a lightened filling was achieved with expanded clay or lightweight cellular concrete, together with a screed with an embedded electro-welded mesh. In addition to the work on the vault, the consolidation also involves inserting tie rods as an integral part of the intervention to complete the horizontal stiffening (figure 5).
Collection of manual and sensorial data.
Acceptance of materials on the work site is a very complex activity and in recent years the focus on this topic has grown. After passing the acceptance check, all the materials used in the processing are tabulated in order to have a single instrument for collecting information. This data collection can be carried out, for the most part, automatically using product traceability through labels with passive RFID tags. The point of equipping materials (as well as machines, tools and instruments) with RFID is precisely to make them able to describe the origin and characteristics of each product. Once the piece is on site, the experts can use RFID technology (figure 7) together with the quality control processes (QA/QC) and the BIM (Building Information Modeling) process to perform all the investigations and inspections. Finally, these methodologies can provide useful information even when the components are installed. With such products, verification on site can be carried out directly by reading the RFID tag, recording the data automatically and therefore shortening the process of traceability and quality verification of the products, as part of the process will be absorbed into a single reading operation. The present study provides three levels of verification: two automatic, by reading RFID systems (or barcodes or QR codes), and a third of a manual type. The first two refer to the data provided by the manufacturer and by the supplier, which are automatically transferred to the black box connected to the BIM model; the third is entered manually by the project manager during installation and verification (figure 8), as sketched in the example below. For each material used in the processing of the vault, it was therefore essential to include all data, quantities and the origin of the manufacturer's certificates, in order to have a database of all the work and, if extended to the entire intervention, of the entire building. The black box therefore stores not only the data collected through sensors and all the information on the materials and their characteristics, but also the day on which the work was carried out and its duration (data available from the Gantt chart connected to the continuously updated BIM model). The sensors installed and connected to the black box, besides those for safety monitoring, will be of the type that monitors environmental data (indoor and outdoor temperature and humidity, air quality, etc.). These data are fundamental to the control of environmental conditions at the time the work is carried out, especially for some specific activities such as concrete casting and the laying of mortars, plasters, etc. Therefore, a station has been installed for the external detection of environmental data (weather station), together with an air quality sensor and movable temperature and humidity sensors for each work room (figure 9). The monitoring process, aimed at obtaining the digital building book, is divided into three phases: monitoring ante operam, a photograph of the actual state before the changes related to the work, which ends with the setting up of the work site; monitoring during construction, in the production period, to verify the correct execution of the works according to the implementation standards of the materials; and post-operam monitoring, which starts just before dismantling the building site and continues during the useful life of the building to monitor the environmental conditions and for the management and maintenance of the plants.
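As a rough illustration of the three verification levels just described, the sketch below logs one material record into an in-memory "black box": two automatic blocks of data read from the tag (manufacturer and supplier information) plus one manual entry made at installation. The record fields, class names and values are hypothetical and not the actual SMBBox data model.

```python
# Illustrative sketch of the three verification levels described above:
# two automatic records read from the RFID tag (manufacturer and supplier
# data) plus one manual record entered at installation. Field names and
# the in-memory "black box" store are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MaterialRecord:
    tag_id: str                      # RFID / barcode / QR identifier
    manufacturer_data: dict          # level 1: automatic, from producer
    supplier_data: dict              # level 2: automatic, from supplier
    manual_check: dict = field(default_factory=dict)  # level 3: manual

@dataclass
class BlackBox:
    records: list = field(default_factory=list)

    def register(self, record: MaterialRecord):
        record.manual_check.setdefault("logged_at", datetime.now().isoformat())
        self.records.append(record)

box = BlackBox()
box.register(MaterialRecord(
    tag_id="RFID-0001",
    manufacturer_data={"product": "carbon fiber grid", "batch": "A-17"},
    supplier_data={"delivery_note": "DN-2019-034"},
    manual_check={"installed_by": "site manager", "accepted": True},
))
```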
Data collection in the present study refers to monitoring during construction: for each work item and for each material, a quality control of the execution was carried out, which will remain in the history of the building and which will highlight the advantages and defects of the realization, as well as identify materials that, over time, may prove to be harmful and whose use could become prohibited. The sensors for detecting environmental data are also useful to verify the environmental working conditions for the safety of the operators on the work site.
RESULTS
As already mentioned, the proposed research has not been completed yet, therefore it has been possible to obtain only partial results. In particular, the first step of the previously analyzed methodology and part of the second one were carried out. For this reason, as already discussed in the previous paragraphs, the first result obtained concerns the BIM modeling of a complex building aggregate that underwent post-earthquake restoration after the 2009 event and seismic structural improvement. At present, the black box and the sensor system are being installed to perform the monitoring. The design has already been carried out (an example of a possible graphical interface for the management and maintenance of a frame is shown in Figure 10) and, in particular, it was considered appropriate to place the system composed of sensors and black box near the vault subject to the intervention. Therefore, a sensor network was created: an infrastructure composed of elements capable of taking, processing and communicating measurements to a central point in which the data are processed (the black box). The design of a sensor network involves the management of network protocols for communication with the various sensors, an application for the management and storage of data, an external interface for the consultation and analysis of data, a database that contains them and a web server with a specific web application. For the structure of the sensor network, several wireless nodes are provided. They are spread over the delimited areas of the work and periodically send the data collected by the sensors to a collection point (the black box), which manages the network, collects data from the nodes and forwards them to another remote system for secure storage and for the further processing provided in the BIM model and in the operating software; a simplified sketch of this flow is given below. Once the system installation is complete, it is possible to proceed with monitoring and updating of the data. The execution of all the methodological steps will allow the achievement of the final result, that is, a single document that will be part of the building book. Currently, this document is very important in a period in which the Italian building heritage must undergo important and continuous restructuring and the safety of people is the main objective of any building intervention. This compiled building book can give detailed information on the state of health of the buildings (Pesce 2018), making it possible to act promptly with a maintenance plan or to facilitate reconstruction after collapses or damage. Moreover, all the materials used will be cataloged with precision, in order to verify their useful lifetime, as well as their ever-changing environmental and human compatibility. In addition, understanding BIM both as a model and as a process opens the way to the creation of the building book in digital form. This is particularly difficult in relation to existing buildings because, often, there are large amounts of documents and information, absent or outdated drawings, changes that have taken place over the years and a multiplicity of users and functions (Dejaco 2017). This initial situation, combined with the use of BIM and of this IoT system, with the protocols and rigid schemes of the traditional approach, requires a clear definition of the model requirements to be translated into information attributes associated with the objects and the whole model. The space is an object of the BIM model as much as a structural or architectural component.
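The sketch below illustrates, in very simplified form, the node-to-black-box flow described above: nodes produce readings, the black box buffers them and forwards a batch for remote storage. The class names, the placeholder reading and the JSON forwarding are assumptions for illustration; a real deployment would use an IoT messaging protocol such as MQTT and the actual SMBBox interfaces.

```python
# Sketch of the data flow described above: wireless nodes periodically
# push readings to the black box, which buffers them and forwards them
# to a remote store linked to the BIM model. Names and the forwarding
# target are hypothetical.
import time, json

class SensorNode:
    def __init__(self, node_id, kind):
        self.node_id, self.kind = node_id, kind

    def read(self):
        # placeholder reading; real nodes would sample hardware here
        return {"node": self.node_id, "kind": self.kind,
                "value": 21.5, "ts": time.time()}

class BlackBox:
    def __init__(self):
        self.buffer = []

    def collect(self, nodes):
        self.buffer.extend(node.read() for node in nodes)

    def forward(self):
        # in a real system this payload would be sent to remote storage / the BIM database
        payload = json.dumps(self.buffer)
        self.buffer.clear()
        return payload

nodes = [SensorNode("T1", "temperature"), SensorNode("H1", "humidity")]
box = BlackBox()
box.collect(nodes)
print(box.forward())
```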
The information associated with the spaces, as well as with the other BIM model objects, appropriately reworked and integrated with the black box data, contributes to the creation of a constantly updated building book. In an existing building, especially, we must face a series of problems very different from those encountered during planning and construction; among the main ones are the large amount of information to be historicized and the plurality of non-technical users (or in any case users unable to operate BIM software) involved in the management. These two problems can be solved by connecting the BIM model of one or more buildings to a database or to building management systems that are simplified and accessible to all users, also thanks to simplified applications for mobile devices. Some such software is already in production, for example BMS (Building Management System) and Computer-Aided Facility Management (CAFM) systems (Ingegneri cc 2019). Figure 10: Part of the database with graphic interface for the management and maintenance of a window.
DISCUSSION
In a digital integration scenario, the ability to monitor data and to process and manage information constitutes the baseline on which to create value. The criticality lies in the fact that over 60% of buildings in Italy are almost forty years old (ISTAT data, August 2014) and were therefore built under regulations that do not guarantee the current levels of safety and comfort. The inadequacy of the buildings is further highlighted by the continuous seismic events that have occurred in Italy in recent years, with strong impacts on safety, health and costs to be sustained. Obviously, as in the BIM approach, digitization entails a higher cost in a first phase, because a great deal of time and resources must be invested, but it leads to simplification and a speeding up of the process in a second phase. The construction system, and the sectors connected to it, account for about 12% of national employment; therefore, the direct and indirect effects of buildings on the economy are significant. Despite this, unlike other sectors, the building trade remains the most traditional and the least digitized one, with direct consequences on productivity and on the costs of real estate and infrastructure construction and management. It is estimated that the use of smart technologies and related platforms can lead to a reduction in the total cost of intervention and management of about 20%. Such savings and energies could be used, for example, for the recovery of public works, for schools, offices, army barracks, etc., which in many cases are now obsolete and require seismic and energy requalification, and this would lead to a reduction of management and maintenance costs for national resources. For years, Italy has been inadequate in the face of this type of development but, nowadays, the transition of the building trade to digital is finally being implemented: a 4.0 building trade, a path that can play a strategic role in restarting growth, in particular in one of its driving sectors such as construction. From this point of view, digitalization processes can help contracting stations make construction works more efficient, professional technicians optimize the design and execution phases, and construction companies optimize the building work site and save resources.
CONCLUSIONS
In Italy, the request for greater transparency in the knowledge of a building and its state of affairs returns to the limelight whenever a natural or other event occurs in which one or more buildings suffer damage resulting in loss of human life. The main tool for certifying the status of a building is the building book, a sort of identity card that, over time, provides detailed and summary information on the building from both a technical and an administrative point of view. The use of the building book in a design and construction process involves the adoption of Building Information Modeling (BIM), and IoT is the next step (Pesce 2018).
The BIM model is intended as a digital book: defining information, processing it, storing data and connecting to currently available web interfaces. A further step forward can be taken with the sensors that can be installed on the building from the construction phase, with the purpose of monitoring safety on site and collecting all the possible data during construction, in order to have actual knowledge of the materials, their installation and the installation time. The next steps are the digitalization of all the documentation of the building, the creation of the digital book of the work, and its transmission to the Public Administrations. This will eventually produce a single large database covering all buildings, in order to monitor risk and vulnerability from the structural point of view, the energy levels and the ordinary and extraordinary maintenance. It will also be possible for Public Administrations to legislate on the minimum maintenance of buildings by users, and to provide incentives for the various types of interventions. The digitalization of the construction sector, therefore, affects its determining aspects, for example from the point of view of professional performance: it facilitates the sharing of information and helps to develop collaboration platforms and, thanks to cloud systems, it allows remote access to all information, simplifying and accelerating all the design and production processes. Following a digital design, the BIM model allows the transfer of a "virtual model" from the designer to the winning company, to the subcontractors and to the owner/user, allowing each figure involved to add specific knowledge to the model. This reduces information losses, increasing the quality of the finished product and helping to reduce waste of time and resources. In other words, we talk about Facility Management (Jeong 2016), which is the design, implementation and control process through which we can create a quality work environment with minimal economic resources. It is time for a new paradigm to promote urban regeneration and a real estate redevelopment of the country because, if an important transformation process has already begun in industry, the construction sector has to look to the future as well. Smart buildings integrated in a smart country are therefore the first step towards the electric city, intended as a set of "smart systems" that act and interact in a preventive manner, which has as its main objective not only the safety of people but also the reduction of land consumption through the safeguarding of our building heritage.
ACKNOWLEDGMENT E.L. defined the structure and organization of the article and wrote the article. E.L. and M.R. wrote the results chapter. M.L. elaborated figures 3,4,5,6 during the research development. The research group consisted of all the authors. In particular P.D. was responsible for the scientific research.
The study presented in the article was developed within the research group "Production of the building industry and rational management of the construction process on site", DICEAA, University of Study of L'Aquila. Figure 10: the image was created by the authors based on a figure from Dejaco M.C., Maltese S., Re Cecconi F., Il fascicolo del fabbricato, Maggioli Editore, febbraio 2017, ISBN: 978-88-916-2178-8. | 5,388.6 | 2019-05-04T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Contact Stiffness Study: Modelling and Identification
Computerized fixture design & analysis has become a means of providing solutions for improving production operations. Although fixtures can be designed using CAD functions, the lack of a scientific tool and systematic approach for evaluating design performance forces designers to rely on trial and error, which leads to several problems: over-design in functions, which is very common and sometimes degrades performance (e.g., unnecessarily heavy designs); design quality that cannot be ensured before testing; long cycle times for fixture design, fabrication, and testing, which may take weeks if not months; and a lack of technical evaluation of fixture designs in the production planning stage.
Introduction
In machining processes, fixtures are used to accurately position and constrain the workpiece relative to the cutting tool. As an important aspect of tooling, fixturing significantly contributes to the quality, cost, and cycle time of production. Fixturing accuracy and reliability are crucial to the success of machining operations.
Computerized fixture design & analysis has become a means of providing solutions for improving production operations. Although fixtures can be designed using CAD functions, the lack of a scientific tool and systematic approach for evaluating design performance forces designers to rely on trial and error, which leads to several problems: over-design in functions, which is very common and sometimes degrades performance (e.g., unnecessarily heavy designs); design quality that cannot be ensured before testing; long cycle times for fixture design, fabrication, and testing, which may take weeks if not months; and a lack of technical evaluation of fixture designs in the production planning stage.
Over the past two decades, Computer-Aided Fixture Design (CAFD) has been recognized as an important area and studied from fixture planning and fixture design to fixturing analysis/verification. Fixture planning determines the locating datum surfaces and the locating/clamping positions on the workpiece surfaces for fully constrained locating and reliable clamping. Fixture design generates a design of the fixture structure as an assembly, according to different production requirements such as production volume and machining conditions. Design verification evaluates fixture design performance against the production requirements, such as completeness of locating, tolerance stack-up, accessibility, fixturing stability, and ease of operation.
For many years, fixture planning has been the focus of fixture-related academic research, with significant progress in both theoretical and practical studies. Most analyses are based on strong assumptions, e.g., frictionless smooth surfaces in contact, a rigid fixture body, and a single objective function for optimization. Fixture design is a complex problem involving many operational requirements. Four generations of CAFD techniques and systems have been developed: group technology (GT)-based part classification for fixture design and on-screen editing, automated modular fixture design, permanent fixture design with predefined fixture component types, and variation fixture design for part families. The study of a new generation of CAFD has only just started to consider operational requirements. Geometric reasoning, knowledge-based, and case-based reasoning (CBR) techniques have all been intensively studied for CAFD. How to make use of best-practice knowledge in fixture design and verify fixture design quality under different conditions has become a challenge in fixture design & analysis.
In fixture design verification, it was proved that when the fixture stiffness and machining force are known as input information, the fixturing stability problem can be completely solved. However, most studies have focused on the fixtured-workpiece model, i.e., how to configure the positions of locators and clamps for accurate and secure fixturing. The FEA method has been extensively used to develop fixtured-workpiece models (e.g., Fang, 2002; Lee, 1987; Trappey, 1995) with an assumption of rigid or linearly elastic fixture stiffness. These models and computational results cannot represent the nonlinear deformation in fixture connections identified in previous experiments. As Beards (1983) pointed out, up to 60% of the deformation and 90% of the damping in a fabricated structure can arise from the various connections. The determination of fixture contact stiffness is the key barrier in the analysis of fixture stiffness. The existing work is very preliminary, either simply applying the Hertzian contact model or considering only the effective contact area.
The development of fixture design & analysis tools would enhance both the flexibility and the performance of workholding systems by providing a more systematic and analytic approach to fixture design. Fixture stationary elements, such as locating pads, buttons, and pins, immediately contact the workpiece when it is loaded. Subsequent clamping (by moveable elements) creates pre-loaded joints between the workpiece and each fixture component. Besides, there may be supporting components and a fixture base in a fixture. In fixture design, a thoughtful, economic fixture-workpiece system maintains uniform, maximum joint stiffness throughout machining while also providing the fewest fixture components, open access for cutting the workpiece, and the shortest setup and unloading cycles. Both static and dynamic stiffness in this fixture-workpiece system rely upon the number of components, the layout, and the static stiffness of the fixture structure. These affect fixture performance and must be addressed through appropriate design solutions integrating the fixture with other process elements to produce a highly rigid system. This requires a fundamental understanding of fixture stiffness in order to develop an accurate model of the fixture-workpiece system.
Computer-aided fixture design with predictable fixture stiffness
The research on fixture-workpiece stiffness is a crucial topic in the fixture design field. Currently, based on the elastic body assumption, using the FEA method to predict fixture stiffness has been widely accepted. With consideration of the contact and friction conditions, the validity and accuracy of the methodology have been illustrated by two case simulations and experimental comparison (Zhu, 1993).
The following is an introduction to the general methodology.
First, the stiffness of typical fixture units is studied with consideration of contact friction conditions. The results of the fixture unit stiffness analysis are integrated into fixture design as a database with variation capability driven by parametric representations of fixture units. When a fixture is designed using the fixture design & analysis tool, the fixture stiffness at the contact locations (locating and clamping positions) with the workpiece can be estimated and/or designed based on the machining operation constraints (e.g., fixture deformation and dynamic constraints). Fig. 1 shows a diagram of the integrated fixture design system. In order to study fixture stiffness in a general manner, the fixture structure is decomposed into functional units with fixture components and functional surfaces (Rong, 1999). In a fixture unit, all components are connected to one another, where only one is in direct contact with the fixture base and one or more are in contact with the workpiece, serving as the locator, clamp, or support. Fig. 2 shows a sketch of the fixture units in a fixture design. When a workpiece is located and clamped in the fixture, the fixture units are subjected to the external loads that pass through the workpiece. If the external load is known and acting on a fixture unit, and the displacement of the fixture unit at the contact position is measured or calculated based on a finite element (FE) model, the fixture unit stiffness can be determined.
The fixture unit stiffness is defined as the force required for a unit deformation of the fixture unit in the normal and tangential directions at the contact position with the workpiece. The stiffness is static if the external load is static (such as a clamping force), and dynamic if the external load is dynamic (such as a machining force). It is the key parameter for analyzing the relative performance of different fixture designs and for optimizing the fixture configuration.
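As a small numerical illustration of this definition, the sketch below divides an applied load by the resulting deflection at the contact point to obtain normal and tangential unit stiffness; the force and displacement values are invented, not measured data from the study.

```python
# Numerical illustration of the stiffness definition above: unit stiffness
# is the load divided by the resulting deflection at the contact point.
# The force/displacement values are placeholders for illustration only.
force_normal_N = 2000.0          # applied clamping force [N]
deflection_normal_mm = 0.012     # measured or FE-computed deflection [mm]

force_tangential_N = 600.0
deflection_tangential_mm = 0.015

k_n = force_normal_N / deflection_normal_mm          # N/mm, normal unit stiffness
k_t = force_tangential_N / deflection_tangential_mm  # N/mm, tangential unit stiffness
print(f"k_n = {k_n:.0f} N/mm, k_t = {k_t:.0f} N/mm")
```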
Analysis of fixture unit stiffness may be divided into three categories: analytical, experimental, and finite element analysis (FEA). Conventional structural analysis methods may not work well in estimating the fixture unit stiffness. A preliminary experimental study has shown the nature of fixture deformation in T-slot based modular fixtures (Zhu, 1993). An integrated model of a fixture-workpiece system was established for surface quality prediction (Liao, 2001) based on the experimental results in (Zhu, 1993), combining Zhu's experimental work with finite element analysis (FEA). Hurtado used one torsional spring and two linear springs, one in the normal direction and the other in the tangential direction, to model the stiffness of the workpiece, contact, and fixture element (Hurtado, 2002). The FEA method has not been applied to fixture unit stiffness due to the complexity of the contact conditions and the large computational effort for the many fixture components involved.
Finite element formulation
Consider a general fixture unit with two components I and J, as shown in Fig. 3 (Zheng, 2005, 2008). For multi-component fixture units, the model can be expanded. The fixture unit is discretized into finite element models using a standard procedure, except for the contact surfaces, where each node on the finite element mesh of the contact surface is modeled by a pair of nodes at the same location, belonging to components I and J, respectively, which are connected by a set of contact elements. The basic assumptions are that the material is homogeneous and linearly elastic, displacements and strains are small in both components I and J, and the frictional force acting on the contact surface follows the Coulomb law of friction.
The total potential energy Π_p of a structural element is expressed as the sum of the internal strain energy U and the potential energy Ω of the nodal forces; that is, Π_p = U + Ω. It is well known that the element strain energy can be expressed as U = (1/2) qᵀ K q, where K is the element stiffness matrix and q is the element nodal displacement vector. The potential energy of the nodal forces is Ω = −qᵀ R, where R is the vector of nodal forces, which includes internal and external forces.
When the two components I and J are in contact, a number of three-dimensional contact elements are in effect on the contact surfaces. It should be noted that the problem is strongly nonlinear, partly because the number of contact elements may vary with the change of the contact condition: the originally contacting nodes might separate or re-contact after separation, depending on the deformation on the contact surface, and the contact stiffness may not be constant either. The contact elements are capable of supporting a compressive load in the normal direction and tangential forces in the tangential directions. When the two components are in contact, and the displacements in the tangential and normal directions are assumed to be independent, the element itself can be treated as three independent contact springs: two having stiffness k_t and k_τ in the tangential directions of the contact surface at the contact point, and one having stiffness k_n in the normal direction.
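The three-spring idealization above can be written as a small element stiffness matrix for one contact node pair. The sketch below assembles a generic 6x6 matrix in the local (n, t, τ) frame using the usual two-node spring pattern; it is an illustrative form, not the exact matrix of the cited formulation, and the stiffness values are placeholders.

```python
# Sketch of the three-spring contact element described above: a node pair
# (one node on body I, one on body J) connected by springs k_n, k_t, k_tau
# in the local (n, t, tau) frame. The 6x6 matrix couples the two nodes in
# the usual [k, -k; -k, k] spring pattern; this is a generic element
# stiffness, not the exact matrix of the cited formulation.
import numpy as np

def contact_element_stiffness(k_n, k_t, k_tau):
    k_local = np.diag([k_n, k_t, k_tau])        # 3x3 diagonal spring stiffness
    top = np.hstack([ k_local, -k_local])
    bot = np.hstack([-k_local,  k_local])
    return np.vstack([top, bot])                 # 6x6: dofs (nI, tI, tauI, nJ, tJ, tauJ)

Kc = contact_element_stiffness(k_n=1e8, k_t=4e7, k_tau=4e7)
print(Kc.shape)
```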
Usually, there are two methods used to include the contact condition in the energy equation: the Lagrange multiplier and the penalty function methods.In order to understand these methods, a physical model of the contact conditions is presented, shown in Fig. 4. When two contact surfaces of fixture components, i.e., body J and I, are loaded together, they will contact at a few asperities.
Fig. 3. Contact Model of Two Fixture Components
The contact criteria can be written as δ_n ≥ 0, f_ni ≤ 0, and δ_n · f_ni = 0, where δ_n is the distance from a contact point i in body I to a contact point j on body J in the normal direction of contact, and f_ni is the contact force acting on point i of body I in the normal direction.
This expresses the kinematic condition of no penetration and the static condition of compressive normal force. To prevent interpenetration, the separation distance for each contact pair must be greater than or equal to zero. If δ_n > 0, the contact force f_ni = 0; when δ_n = 0, the points are in contact and f_ni < 0; if δ_n < 0, penetration occurs. In real physics, the actual contact area increases and the contact stiffness is enhanced as the load increases; therefore, the contact deformation is a nonlinear function of the preload, as shown in Figure 4(e). In the Lagrange multiplier method, the function w(δ_n, f_ni) represents the constraint, which prevents penetration between contact pairs. In the penalty function method, an artificial penalty parameter is used to prevent penetration between contact pairs.
(Figure 4 panels show body I before and after loading and the normal contact deformation curve.)
Fig. 4. Physical Model of the Contact Conditions
In the penalty function method, the contact condition is represented by the constraint equation {t} = [K_C]{q} − {Q} (Eq. 4), where {t} is the constraint vector, [K_C] is the contact element stiffness matrix, and {Q} is the contact force vector of the active contact node pairs. When {t} = 0, the constraints are satisfied, so the constraint equation Eq. 4 becomes [K_C]{q} = {Q}. The total potential energy Π_p in Eq. 1 can be augmented by a penalty function, Π_pP = Π_p + (1/2){t}ᵀ[α]{t}, where [α] is a diagonal matrix of penalty values α_i. The minimization of Π_pP with respect to {q} then yields the system equations, in which an additional penalty matrix is added to the structural stiffness matrix.
On the other hand, in the Lagrange multiplier method, the contact constraint is introduced through the term {λ}ᵀ{t}, where the components λ_i (i = 1, 2, …, N) of the row vector {λ} are the Lagrange multipliers.
Adding Eq. 8 to the potential energy in Eq. 1 gives the total energy in the Lagrange multiplier method, Π_pL = Π_p + {λ}ᵀ{t}. The minimization of Π_pL with respect to {q} and {λ} requires that ∂Π_pL/∂{q} = 0 and ∂Π_pL/∂{λ} = 0. In matrix form, Eqs. 10 and 11 can be expressed as an augmented system in the unknowns {q} and {λ}. While the constraints in Eq. 8 can be satisfied, the Lagrange multiplier method has disadvantages: because the stiffness matrix in Eq. 12 may contain zero components on its diagonal, there is no guarantee of the absence of a saddle point, and computational stability problems may occur. In order to overcome this difficulty, a perturbed Lagrange multiplier method was introduced (Aliabadi, 1993).
Here ε is an arbitrary positive number; in the appropriate limit of ε, the perturbed solutions converge to the original solutions. The introduction of ε maintains a small force across and along the interface, which not only maintains stability but also prevents the stiffness matrix from being singular due to rigid body motion. Similarly, the minimization of Π_pL with respect to {q} and {λ} results in a matrix system; expressing Eq. 14 accordingly and substituting Eq. 16 into Eq. 15, and, for simplicity, letting all α_i in [α] of the penalty function equal α (i.e., α_i = α), the perturbed Lagrange multiplier method becomes equivalent to the penalty function method.
In the Lagrange multiplier method, both the displacements and the contact forces are regarded as independent variables; thus, the constraint (contact) conditions can be satisfied and the contact forces can be calculated. However, it has disadvantages: the stiffness matrix contains zero components on its diagonal, and the Lagrange multiplier terms must be treated as additional variables. This leads to the construction of an augmented stiffness matrix, the order of which may significantly exceed the size of the original problem without constraint equations (Aliabadi, 1993). In comparison with the Lagrange multiplier method, the implementation of the penalty function method is relatively simple and does not require additional independent variables. It is often adopted in practical analysis because of its simple implementation.
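To make the penalty idea concrete, the toy single-degree-of-freedom sketch below pushes a linear spring toward a rigid wall and resists penetration with a stiff penalty spring: as the penalty value grows, the penetration shrinks toward zero. The model and all numbers are illustrative assumptions, not part of the cited formulation.

```python
# Toy 1-D illustration of the penalty idea discussed above: a bar (spring k)
# is pushed toward a rigid wall sitting at an initial gap g0; instead of an
# exact contact constraint, a stiff penalty spring alpha resists penetration.
def solve_with_penalty(k, force, gap0, alpha):
    # Without contact: u = F/k. If u > gap0, add the penalty spring acting
    # on the penetration (u - gap0): (k + alpha) * u = F + alpha * gap0.
    u = force / k
    if u > gap0:
        u = (force + alpha * gap0) / (k + alpha)
    penetration = max(0.0, u - gap0)
    return u, penetration

for alpha in (1e6, 1e8, 1e10):
    u, pen = solve_with_penalty(k=1e4, force=500.0, gap0=0.01, alpha=alpha)
    print(f"alpha={alpha:.0e}: u={u:.6f} m, penetration={pen:.2e} m")
```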
Contact conditions
Based on an iterative scheme (Mazurkiewicz, 1983), the contact conditions in the FEA model are classified into the following three cases: 1. Open condition: the gap remains open; 2. Stick condition: the gap remains closed, and no sliding motion occurs in the tangential directions; and 3. Sliding condition: the gap remains closed, and sliding occurs in the tangential directions.
Let f_ji and u_ji be the contact nodal load and the nodal displacement, respectively, defined in the local coordinate system, where the subscript j indicates the component (j = I or J) and i indicates the coordinate (i = n, t, τ), as shown in Fig. 5. By equilibrium of the contact element, Σ_x f_xi = 0, where x denotes the node of body I or J.
The displacement and force must satisfy the equilibrium equations in the three contact conditions (note that {n, t, τ} is the local coordinate system).
Open condition
When the normal nodal force F_n is positive (tension), the contact is broken and no force is transmitted. The displacement change in the normal direction is Δu_n = u_Jn − u_In, where u_Jn and u_In are the current displacements of node J and node I in the normal direction, respectively; the displacement changes in the tangential directions are defined analogously. For each structural contact element, stiffness and forces are updated based upon the current displacement values in order to predict new displacements and contact forces.
Stick condition
The force in the tangential direction, F_S, is the resultant of the nodal forces in the t and τ directions (F_t and F_τ). When |F_S| < μ|F_n|, where μ is the Coulomb friction coefficient, there is no sliding motion at the interface and the contact element responds like a spring. The stick condition therefore exists while the tangential forces F_t = k_t Δu_t and F_τ = k_τ Δu_τ remain within the friction limit, where k_t and k_τ are the tangential contact stiffnesses in the t and τ directions, respectively. In the analysis of fixture unit stiffness, k_t = k_τ is assumed.
Sliding condition
Sliding occurs when the absolute value of F_S exceeds μ|F_n|. The sliding motion may occur in both the element t and τ directions; that is, sliding takes place when |F_t| or |F_τ| reaches the corresponding maximum friction force in the t and τ directions.
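The three cases above can be summarized in a small decision routine. The sketch below classifies a contact pair as open, stick, or slide from its gap, trial forces, and friction coefficient, and caps the tangential force at the Coulomb limit when sliding; it follows the usual sign convention (compression negative) and is an illustration, not the cited implementation.

```python
# Sketch of the classification above: given the current gap and trial
# contact forces, decide whether a contact pair is open, sticking, or
# sliding, and cap the tangential force at the Coulomb friction limit.
import math

def contact_state(gap, f_n, f_t, f_tau, mu):
    if gap > 0.0 or f_n > 0.0:          # tension or separation -> no contact
        return "open", (0.0, 0.0, 0.0)
    f_s = math.hypot(f_t, f_tau)        # resultant tangential force
    f_limit = mu * abs(f_n)             # Coulomb friction limit
    if f_s <= f_limit:
        return "stick", (f_n, f_t, f_tau)
    scale = f_limit / f_s               # cap tangential force at the limit
    return "slide", (f_n, f_t * scale, f_tau * scale)

print(contact_state(gap=0.0, f_n=-100.0, f_t=20.0, f_tau=10.0, mu=0.3))
```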
Solution procedure
The model presented in the previous section can be implemented to determine the fixture unit stiffness in clamping and machining. Because the model is highly nonlinear, the Newton-Raphson (N-R) approach is used to solve the problem. Considering the full Newton-Raphson iteration, it is recognized that, in general, the major computational cost per iteration lies in the calculation and factorization of the stiffness matrix. Since these calculations can be quite expensive when large-order systems are considered, the modified Newton-Raphson algorithm is used in this research (Bathe, 1996).
Given the applied load R and the corresponding displacement u, the applied load is divided into a series of load increments. At each load step, the contact stiffness and contact conditions remain constant, and several iterations may be necessary to find a solution with acceptable accuracy. The modified Newton-Raphson method first evaluates the initial out-of-balance load vector at the beginning of the iteration at each load step. The out-of-balance load vector is defined as the difference between the applied load vector R and the vector of restoring loads R_r. When the out-of-balance load is non-zero, the program performs a linear solution using the initial out-of-balance loads and then checks for convergence. If the convergence criteria are not satisfied, the out-of-balance load vector is re-evaluated, the new contact conditions and the stiffness matrix are updated, and a new solution is obtained. This iterative procedure continues until the solution converges. The modified Newton-Raphson method and its flowchart are outlined in Fig. 6.
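A generic version of this incremental scheme is sketched below on a single-degree-of-freedom nonlinear spring: the load is applied in increments, the stiffness is frozen within each step (the "modified" part), and iterations continue until the out-of-balance force is small. The spring law, tolerances, and numbers are placeholders chosen only to make the sketch runnable.

```python
# Generic sketch of the incremental, modified Newton-Raphson scheme
# described above, shown on a single-dof nonlinear spring r(u) = k1*u + k3*u**3.
# The stiffness used inside each load step is held fixed (the "modified" part).
def restoring_force(u, k1=1e4, k3=5e6):
    return k1 * u + k3 * u**3

def tangent_stiffness(u, k1=1e4, k3=5e6):
    return k1 + 3.0 * k3 * u**2

def modified_newton_raphson(total_load, n_steps=10, tol=1e-8, max_iter=50):
    u = 0.0
    for step in range(1, n_steps + 1):
        load = total_load * step / n_steps      # current load level
        k = tangent_stiffness(u)                # frozen within this step
        for _ in range(max_iter):
            residual = load - restoring_force(u)   # out-of-balance load
            if abs(residual) < tol:
                break
            u += residual / k                   # linear solve with frozen k
    return u

print(f"u = {modified_newton_raphson(total_load=800.0):.6f}")
```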
Contact stiffness identification using a dynamic approach
First, the dynamic method is studied for use in the estimation of normal contact stiffness.
The results of the dynamic method are compared with the results based on the static test of normal contact stiffness; the validated dynamic test method is then used in the estimation of tangential contact stiffness.
Theoretical formulation of 1-D normal contact stiffness
The idea behind the identification of normal contact stiffness is that the contact interface is modeled by a discrete linear spring. When the preload is changed, the contact stiffness will change. When body I is in contact with the ground, the dynamic model of the entire structure can be represented as in Fig. 7. According to this theoretical model, the relationship between the natural frequencies and the normal contact stiffness can be established. When the natural frequencies are obtained from an impact test, along with the theoretical model, the normal contact stiffness can be estimated.
In the one-dimensional model of body I, m is the mass of body I, k is the contact stiffness, p is the preload, f(t) is the impulse excitation, and u(x,t) is the longitudinal displacement of the bar at distance x from a fixed reference. For the bar in Fig. 7, the governing equation of the longitudinal vibration is the one-dimensional wave equation, EA ∂²u/∂x² = ρA ∂²u/∂t². The boundary conditions of the bar are the elastic restraint supplied by the contact spring at the contact end, EA ∂u/∂x = k_n u, and a free end at x = l where the impulse is applied. Initially, the system starts from rest, from the static equilibrium position of the bar, such that the initial displacement condition is u(x, 0) = 0. The response of a system to an impulsive force can also be obtained by considering that the impulse produces an instantaneous change in the momentum of the system before any appreciable displacement occurs; this supplies the second (velocity) initial condition. Separating variables, −λ² is the separation constant and is designated to be negative (De Silva, 1999).
Therefore, the mode shape X(x) satisfies X″(x) + λ²X(x) = 0, whose general solution is X(x) = A cos(λx) + B sin(λx). From the general solution and the modal boundary conditions, the frequency equation is obtained. Set the structure stiffness as k* = EA/l and the stiffness ratio as γ = k_n/k*. Since the structure stiffness k* is constant and known, the stiffness ratio γ is proportional to the contact stiffness k_n. Therefore Eq. 11 can be expressed as a transcendental equation in λl and γ. This transcendental equation has an infinite number of solutions λ_i (i = 1, 2, …) that correspond to the modes of vibration. When γ is changed, the solutions λ_i change; when γ is varied from 0.1 to 10, the corresponding values of λ_i l can be obtained, as shown in Fig. 8. The natural frequencies then follow from ω_i = λ_i √(E/ρ). In an experimental study, the natural frequencies can be obtained by an impact test, and λ_i can be calculated from Eq. 31 since the natural frequencies are related to the system characteristics.
Then γ can be determined from Eq. 30. Finally, the contact stiffness k_n can be estimated based on the definition of γ. According to the assumption that the contact stiffness is a function of the preload, the natural frequencies can be determined in experiments under different preloads. The change of contact stiffness can then be identified, based on the change of γ, through measurement of the natural frequency variation. It should be noted that although any mode of natural frequency can be used to estimate the contact stiffness, some modes may be more sensitive than others to changes of the preload.
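The sketch below works through this identification numerically for one commonly used form of the spring-terminated bar frequency equation, (λl)·tan(λl) = γ with γ = k_n/k* and k* = EA/l; that specific equation form and the steel bar properties are assumptions for illustration rather than the exact expressions of the cited study. Given a measured natural frequency, λl follows from the wave speed, γ from the frequency equation, and k_n = γ·k*.

```python
# Sketch of the identification idea above, assuming a spring-terminated bar
# whose frequency equation is (beta*l)*tan(beta*l) = gamma, with
# gamma = k_n / k* and k* = E*A/l. Material/geometry values are placeholders.
import math

E, rho = 2.1e11, 7850.0            # steel, placeholder values [Pa], [kg/m^3]
A, l = 4e-4, 0.3                   # cross-section [m^2], bar length [m]
c = math.sqrt(E / rho)             # longitudinal wave speed
k_star = E * A / l                 # structural stiffness EA/l

def k_n_from_frequency(f_hz):
    # beta = 2*pi*f/c, then gamma from the frequency equation, then k_n
    beta_l = 2.0 * math.pi * f_hz / c * l
    gamma = beta_l * math.tan(beta_l)
    return gamma * k_star

def frequency_from_k_n(k_n, mode=1):
    # forward check: solve (beta*l)*tan(beta*l) = gamma by bisection in the
    # branch ((mode-1)*pi, (mode-0.5)*pi), where the left side rises from 0 to +inf
    gamma = k_n / k_star
    lo, hi = (mode - 1) * math.pi + 1e-9, (mode - 0.5) * math.pi - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.tan(mid) < gamma:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * c / (2.0 * math.pi * l)

f3 = frequency_from_k_n(5e8, mode=3)
print(f"mode-3 frequency {f3:.1f} Hz -> k_n ~ {k_n_from_frequency(f3):.3e} N/m")
```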
Experimental procedure and results
The experiments were conducted in order to verify the method of identifying the contact stiffness in the normal direction (Zheng, 2005, 2008). The measurement instrumentation includes a proximity probe, an impact hammer with a load cell, a power supply, and a Fast Fourier Transform (FFT) analyzer, as shown in Fig. 9. The experimental procedure is as follows: 1. The frequency response function (FRF) of the bar is measured by using the hammer to excite the system; thus, the natural frequencies of the bar are obtained.
2. According to the natural frequency equation, the values of λ_i l are calculated from the measured natural frequencies.
3. Based on the relationship between λ_i l and γ for the first three modes in Fig. 8, γ can be inferred from the comparison of experimental and theoretical results. The normal contact stiffness is then obtained from k_n = γ·k*. When the natural frequencies are obtained from the experiment, along with the curves of the relationship between λ_i l and γ, the contact stiffness can be determined from each mode of vibration. However, when the preload changes, the natural frequencies of certain modes may not change significantly with the normal load; the contact stiffness should therefore be identified from the mode most sensitive to changes of the preload. Fig. 10 shows the FRF of the test system under different preloads, and Fig. 11 shows the relationships between the natural frequencies and the preload. The natural frequency of the third mode, f3, is the most sensitive to changes in the preload. It can be seen that the results from the dynamic tests are consistent with the numerical calculation results based on the static test results. Since the results of the dynamic test are consistent with the static test results, the dynamic test method can be used to identify the tangential contact stiffness, for which static tests are too difficult to conduct.
Theoretical formulation of tangential contact stiffness
Two fixture components are in contact at a certain number of asperities due to the inherent roughness of the surface. When they are subjected to tangential forces, the components are mutually constrained through frictional contacts. A friction model based on the Coulomb friction theory is shown in Fig. 13. The tangential contact stiffness results from the elasticity of asperities of the contact surfaces, and the total resulting stiffness of these contact surfaces depends on their statistical topographical parameters.
(Figure 13 labels: slide area, stick area, slide area.)
Fig. 13. A Friction Model
Consider that body I is brought into contact with the flat surface of the support under a uniform preload, P, and is subjected to a small excitation, F, as shown in Figure 14. It is assumed that the tangential contact stiffness will change as the preload increases. The friction at each contact point is governed by Coulomb's law. When a force is applied in the tangential direction, the asperities in body I deform until the shear stress between the asperities exceeds the limit, after which the contact surfaces slide over each other. The friction model of body I in contact is shown in Fig. 15. The friction force is given by Eq. 32.
The idea of the identification of the tangential contact stiffness is to compare two sets of system natural frequencies: one set is identified from the measured impulse response in the tangential direction under different preloads, and the other set is calculated from the FEA model of the system. Based on the numerical simulation, a relationship between the tangential contact stiffness and the natural frequencies can be established. If the natural frequencies are measured in experiments under different preloads, the contact stiffness can be calculated from the relationship obtained by the numerical simulation. In order to perform the numerical simulation, the effect of the contact force needs to be included in the FEA model of the system: an additional contact stiffness matrix is introduced in the general FEA model. The derivation of the contact stiffness matrix is briefly given as follows.
Consider an elastic body I as in Fig. 15. The kinetic, strain, and potential energies of the system can be expressed respectively as K = (1/2)∫_V ρ{u̇}ᵀ{u̇} dV, U = (1/2)∫_V {ε}ᵀ{σ} dV, and W = ∫_{S1} {u}ᵀ{F} dS + ∫_{Sc} {u}ᵀ{R̂_c} dS, where K is the kinetic energy; {u} is the displacement vector; V is the volume of the elastic body I; ρ is the mass density of the material; U is the strain energy; {ε} and {σ} are the strain and stress components, respectively; W is the potential energy of the external forces; {F} is the external surface force vector specified on the boundary S_1; and {R̂_c} is the contact force vector on the contact surface S_c. The body force is ignored. Using the above energy expressions, the total potential energy of the system is Π = U − W. Based on the well-known Hamilton's principle, a discretized FEA formulation for a typical element can be obtained. To obtain the matrix form, the displacement field of a typical element {u}, which is a function of both space and time, is written as {u} = [N(x)]{d(t)}, where [N(x)] is the matrix of space (shape) functions and {d(t)} is the nodal response vector. Using this interpolation relationship for the element, the contact contribution is expressed through [D_c], the contact property matrix. In this section, the displacements of the contact element in the normal direction are assumed to remain in the stick condition; therefore, the normal contact stiffness becomes infinite, and only the tangential contact stiffness is considered.
The derived contact stiffness matrix should be added to the general FEA model for the fixture stiffness analysis to take into account the effects of the contact force. Following the standard procedure for the eigenvalue problem, the system natural frequencies can be obtained using the FEA method to establish the relationship between the tangential contact stiffness and the natural frequencies. For example, a specimen with dimensions of 5 × 3 × 0.75 in was used to measure the dynamic characteristics. Fig. 16 shows the FEA model of the specimen. Contact elements were modeled as separate springs on the top and bottom surfaces of the specimen. There are two nodes for each contact element: one node is on the contact surface of the specimen, and the other node is constrained in all degrees of freedom. The impulse force was applied at the side of the specimen, and the response was obtained at point M, on the other side of the specimen.
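The stiffness-to-frequency relationship described here can be emulated with a deliberately tiny lumped model instead of the actual specimen FE mesh. The sketch below sweeps an assumed tangential contact stiffness, solves the generalized eigenvalue problem K x = ω² M x for each value, and inverts a hypothetical measured first-mode frequency by interpolation; the two-degree-of-freedom model, masses, structural stiffness, and measured value are all placeholders.

```python
# Sketch of the lookup idea above on a tiny 2-dof lumped model (specimen body
# plus contact interface point on tangential contact springs) instead of the
# actual specimen FE model: sweep k_t, compute natural frequencies from the
# generalized eigenvalue problem, and invert a measured frequency by interpolation.
import numpy as np

def natural_frequencies(k_t, m=0.5, k_struct=5e7):
    # dof 1: specimen body, dof 2: contact interface point
    K = np.array([[k_struct + k_t, -k_t],
                  [-k_t,            2.0 * k_t]])
    M = np.diag([m, 0.1 * m])
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(w2))) / (2.0 * np.pi)   # Hz

k_values = np.linspace(1e6, 5e7, 60)
first_mode = np.array([natural_frequencies(k)[0] for k in k_values])

measured_f1 = 1500.0                             # hypothetical measured first mode [Hz]
k_t_identified = np.interp(measured_f1, first_mode, k_values)
print(f"identified k_t ~ {k_t_identified:.3e} N/m")
```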
Conclusions
Forces in a workpiece-fixture system have a crucial impact on the deformation and accuracy of the system. In this chapter, an FEA model of fixture unit stiffness is proposed. A contact model between fixture components is utilized for solving the contact problem encountered in the study of fixture unit stiffness. Through several simple experiments and comparison with the corresponding analytical solutions and experimental results in the literature, this methodology is validated. This analytic approach can also be extended to the study of complex fixture systems with multiple units and components, which will lead to new progress in the design and verification of fixture-workpiece systems.
Fig. 5. Sketch of Contact Force on the Contact Surface
δ_n is the gap between a pair of potential contact points. In each increment of load, the gap status and the stiffness values are iteratively updated until convergence. As the load is increased, δ_n changes and hence must be adjusted: it is composed of the initial gap before any deformation and the gap change caused by the total combined normal movement at the pair of points.
Fig. 8. Relationships between the Nondimensional Natural Frequencies and the Stiffness Ratio in the First Four Modes
Fig. 14. Body I on the Support
[B] is the geometry matrix. Compared to the standard FEA formulation, an additional term [K_C], referred to as the contact stiffness matrix, is included in Eq. (37). This term stems from the work done by the contact force on the contact surface; a brief derivation is as follows: the work done by the contact force on the contact surface is written in terms of the nodal displacements, and substituting Eqs. (38) and (42) into (41) yields the contact stiffness matrix.
Fig. 16. Finite Element Model of Specimen
Fig. 17 shows the relationships between tangential contact stiffness and natural frequencies of the first two vibration modes. The results are obtained through numerical simulation. From experiments, the frequency response is measured under the different preloads. The contact stiffness can be determined based on the relationships shown in Fig. 17.
Fig. 17. The Relationship of Tangential Stiffness vs. the First Two Natural Frequencies | 7,102.2 | 2012-03-30T00:00:00.000 | [
"Engineering",
"Computer Science",
"Materials Science"
] |
Writing and mathematical problem solving in Grade 3
The mathematics curriculum currently used in South African classrooms emphasises problem solving to develop critical thinking (South Africa Department of Basic Education [DBE] 2011a:5). However, based on the performance of South African learners in comparative international studies in mathematics, such as the Trends in International Mathematics and Science Study and the Southern Africa Consortium for Monitoring Educational Quality, there is concern regarding their competence when solving mathematical problems and their use of meaningful strategies (Ndlovu & Mji 2012). Results from standardised tests such as the Annual National Assessments (ANAs) and the provincial systemic tests conducted in the Western Cape reflect a gap between the ability to use procedural and conceptual knowledge. Learners often do not achieve the minimum requirements of their grade levels, especially in the area of problem solving. Learners' lack of achievement in basic numeracy skills in the ANA, especially in Grades 3 and 4, is highlighted by Graven et al. (2015:69).
INTRODUCTION AND OVERVIEW
Chapter one provides the background and rationale for this study. The research question and sub-questions are outlined and the methodological and theoretical orientations of the study are presented. In addition, the significance and limitations of the study are set out.
BACKGROUND TO THE STUDY
This study was prompted by the low standard of mathematics results in South Africa. The country has participated in international studies such as Trends in International Mathematics and Science Study (TIMSS) and Southern Africa Consortium for Monitoring Educational Quality (SACMEQ). According to Reddy (2013:16), these international studies provide an external benchmark against other countries, providing a reliable insight into the state of the education system. Participation in these studies shows that South Africa has consistently performed below international levels. In Ndlovu and Mji's (2012:189) comparison between the Revised National Curriculum Statement (RNCS) and South African learners' performance in TIMSS, it was found that "learners performed worst in (the category) Using Concepts, suggesting little conceptual understanding being achieved by the curriculum". This result implies learners had difficulty using mathematical concepts that they are expected to know according to the curriculum. Consequently, there seems to be a discrepancy between the intended curriculum, that which is expressed through its intended outcomes, and the implemented curriculum, that which is taught daily in South African classrooms. The intended curriculum encourages critical thinking in the application of mathematical knowledge to problem-solving. If critical thinking were practised daily, it is likely that learners could achieve better results in use of concepts tested in international studies such as TIMSS.
South Africa nationally uses the Annual National Assessments (ANAs). The results of these assessments have raised concerns because learners often do not achieve minimum requirements at their grade levels. Results reflect a stronger ability to use procedural knowledge than conceptual knowledge. This imbalance is especially evident in the systemic evaluation, which largely tests learners' problem-solving abilities. Learners consistently perform lowest in the area of Measurement compared to other content areas such as Numbers, Operations and Relationships and Space and Shape. Mathematical word problems are often used to test a content area such as measurement. Learners require conceptual knowledge rather than procedural knowledge in these instances. The site for this study has displayed this trend since 2012, with learners performing at an average pass rate of approximately 60%. The pass rate refers to the number of learners reaching a minimum pass requirement of 50%. Additionally, Siyepu (2013) suggests the poor performance of South African learners is related to the quality of learning and teaching support materials (or lack thereof) as well as a lack of qualifications, knowledge and skills among teachers. In large parts of the country there is an over-reliance on textbooks and other support materials as resources for lessons. Siyepu (2013:8) claims that "South African textbooks encourage mainly lower order skills (such as recall) as opposed to the higher order skills (such as problem-solving)". Therefore, the standard and availability of textbooks could directly affect many learners' mathematical understanding and ability to solve problems. The researcher, drawing on experience as a Foundation Phase teacher, found this relation between textbooks and results to hold. In observing fellow teachers during mathematics lessons prior to this study, the researcher found a pattern of textbook teaching. In workshops and meetings regarding mathematics teaching, the same over-reliance on textbook learning, or visible pedagogy, was noted. These instances provide valuable insights into the way teachers use problem-solving in mathematics lessons. There is an overemphasis on procedural knowledge, whereby learners are taught set procedures for solving mathematical problems. Teachers appeared to assist learners by teaching them tools such as looking for keywords in the context of the problem. Added to this, teachers were sometimes prescriptive, insisting on a specific operation for a particular problem. Learners were often expected to use a number sentence to find their solution.
Learners were generally not encouraged to try their own methods which would develop their critical thinking. All learners in a class were expected to solve a problem in the same way as prescribed by the teacher.
Teachers find it difficult to include problem-solving in daily mathematics lessons. Often the demands of the curriculum create an environment in which lessons focus on procedural knowledge rather than conceptual knowledge. Problem-solving is perceived as a time-consuming activity that achieves little. However, it is through problem-solving that learners make sense of mathematical concepts: they learn new concepts and practise learned skills as they apply and develop their mathematical knowledge (Kilpatrick, Swafford & Findell, 2001:420; Schoenfeld, 2013).
RATIONALE OF THE STUDY
As a Foundation Phase teacher, the researcher has become increasingly concerned with learners' general ability to think, reason and solve problems in mathematics. Learners often lack competence in solving mathematical problems and explaining what they have done in their attempts to reach a solution. Learners tend to rely on the teacher's instruction to solve mathematical problems. Learners use too little writing: words, pictures and symbols in the mathematics classroom to track the processes followed when solving problems. It may be possible that learners are reluctant to do so due to a lack of exposure to writing in mathematics. Learners are taught too rigidly to solve problems using specific methods and procedures given by the teacher. In discussions with various Foundation Phase teachers, it has become clear that a disparity exists between their thinking and understanding of problem-solving and the daily use of problem-solving in mathematics lessons.
These concerns led to this research into different aspects of problem-solving: the role of problem-solving within the curriculum as well as different approaches to implementing problem-solving in the classroom. Learners were observed carefully in the researcher/teacher's class: the way they solved problems during mathematics lessons was examined. It became apparent that learners had difficulty writing their strategies and solutions when they solved problems. Some learners' writing did not reflect the problem being solved while other learners seemed to wait for instructions from the teacher to solve the problem. Most learners were unable to explain their solutions to the teacher or their peers. It was at this point that Burns's (1995a) work on the use of writing in mathematics became pertinent. Further investigation into research in this area led to questioning whether the use of writing could have an impact on learners' ability to solve mathematical problems.
The research questions and purpose of this study emerged from this context.
THE PURPOSE OF THE STUDY
This research study seeks to investigate the use of writing in the mathematics classroom as a way of supporting learners in the process of problem-solving and learning mathematics.
The research question is as follows:
Research question:
How do various types of writing tasks support Grade 3 learners in solving mathematical problems?
Sub-questions:
1. What support do writing tasks give to the development of conceptual understanding?
2. What support do writing tasks give to the development of problem-solving strategies?
3. How are writing tasks useful in the Foundation Phase mathematics classroom?
4. What challenges do learners encounter when implementing writing tasks in the Foundation Phase mathematics classroom?
Different types of writing tasks in mathematics are explored as methods that can enhance creative and critical thinking as well as encourage reflective thought, so deepening conceptual understanding in order to support mathematical problem-solving skills.
Vygotsky's theory of social constructivism and, in particular, the Zone of Proximal Development (ZPD) (Vygotsky, 1978) and scaffolding (Bruner & Haste, 1987), underpin this research. Cognitive constructivist theory emphasises that children construct their own understanding and, therefore, construct their own strategies to solve mathematical problems.
Social constructivist theory stipulates that the teacher and learners collaborate to build knowledge and construct the individual's understanding. Social constructivism and scaffolding clarify the use of writing in this study as a valuable tool to scaffold learners' understanding when solving mathematical problems. The work of other theorists is incorporated to support the overarching theory of social constructivism. Skemp's theory on the development of schemas (Skemp, 1987, 1989) is used to explain how learners construct and reconstruct their mathematical knowledge through problem-solving. In addition, Sfard's theory of the process and object of mathematical conceptions (Sfard, 1991) as it relates to problem-solving is discussed.
THE OBJECTIVES OF THE STUDY
An objective of the study was to determine the usefulness of writing in mathematics. It sought to gauge the support writing could give to the development of strategies learners used to solve mathematical problems. The question was whether learners displayed conceptual development in their ability to connect appropriate mathematical knowledge and skills to particular problems.
Another objective of the study was to conclude whether the systematic implementation of specific writing tasks would be beneficial to learners' problem-solving strategies. This objective would be evident if learners were to show increased development of more advanced strategies by the end of the data collection period. The aim was to determine whether there was a significant improvement in the written strategies and explanations learners used when solving mathematical problems to enable better verbal explanations of their solutions. This study was used to determine whether all the writing tasks could be relevant and beneficial to Foundation Phase learners in the South African context.
THE IMPORTANCE OF PROBLEM-SOLVING IN MATHEMATICS
Problem-solving, which is discussed in further detail in the literature review in Chapter two, involves critical thinking and reasoning to find a solution and is generally considered a life skill that should be developed. Heddens and Speer (2006:82) define problem-solving as "the (interdisciplinary) process an individual uses to respond to and overcome obstacles or barriers when a solution or method of solution to a problem is not immediately obvious".
Mathematical problems and, in particular, word problems should form part of problem-solving. Solving mathematical problems can be used either as a consolidation activity once a particular concept has been taught or as a starting point from which conceptual knowledge can be developed.
THE IMPORTANCE OF WRITING IN MATHEMATICS
As is discussed in the literature review, writing is essential in supporting the development of mathematical knowledge and its application to problem-solving strategies. Through the use of writing, learners express their thinking and extend their understanding of mathematical ideas (Burns, 2007:38). This comprehension allows them to reflect critically on their conceptual understanding. Writing helps learners to make sense of mathematical problems: learners learn how to represent and communicate their thinking.
In this study, various writing tasks were modelled and implemented in a Grade 3 class to cultivate the use of writing in mathematics. These writing tasks included writing to solve mathematical problems, writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes (Burns, 1995a) and shared writing (Wilcox & Monroe, 2011). Through implementation of the tasks, learners could be encouraged to explain their thinking. It could then be determined whether the use of writing supports learners in mathematical problem-solving. Results of this study are presented in Chapter 4.
OVERVIEW OF THE RESEARCH METHODOLOGY
A case study approach was used in this qualitative study. A primary school in Cape Town, South Africa, was selected as the site for the study. The population of the study was one of the Grade 3 classes from the school. A sample of eight learners was purposively selected from the class. Data collection instruments included interviews, audio-recordings, field notes and learners' written pieces from the pre-test, post-test and writing intervention.
The four-step approach to analysing data described by Dana and Yendol-Hoppey (2009) was employed for this investigation. This approach included description, sense making, interpretation and implication drawing. Learners' problem-solving strategies were analysed using the Learning Framework In Number (Wright, Martland, Stafford & Stanger, 2006; Wright, 2013). This framework, together with the theoretical framework of the study, guided the process of analysis.
SIGNIFICANCE OF THE STUDY
This study is significant both for the activities within the mathematics classroom and in terms of curriculum implementation. As far as the mathematics classroom is concerned, the use of writing should be included in lessons as stated in the curriculum. The use of writing tasks may be intentionally implemented to address this requirement. This study may enhance the teaching of mathematics as well as learners' problem-solving abilities by giving teachers tools to incorporate writing in mathematics.
This study is significant for implementation of the current curriculum in South African schools.
The CAPS Mathematics for Foundation Phase stipulates that writing is essential in mathematics for learners to communicate their thinking (South Africa DBE, 2011:9). Kuzle (2013:43) agrees that writing is a valuable tool for learning and communicating mathematics.
In order for writing to be used in mathematics classrooms across South Africa, teachers need to be trained, during their pre-service training, in how to develop their own writing skills and implement them successfully. In-service teachers should be given the knowledge and tools to implement writing in their mathematics classrooms when they engage in ongoing professional development. Teachers model good writing practices by explaining and justifying solutions for the mathematical problems they encounter.
LIMITATIONS OF THE STUDY
One of the limitations of this study was the researcher's position as teacher. Creswell and Miller (2000:127) state that researchers should "acknowledge and describe their entering beliefs and biases early in the research process". As the teacher of the selected Grade 3 class, the researcher for this project was close to events and interactions (Hamilton & Corbett-Whittier, 2013:129). Being both researcher and teacher could have created bias, since relationships had been formed with the learners participating in this study. Morrell and Carroll (2010:79) posit that "the researcher's initial opinion or impressions of a subject colour subsequent observations". The researcher/teacher therefore had continually to set aside personal thoughts and views, especially when selecting the sample of eight learners and during the data analysis process. According to Morrell and Carroll (2010:80), being both teacher and researcher could jeopardise the validity of the study. This difficulty was addressed by making the researcher's dual role explicit in the context of the study. Multiple opportunities to collect and display data were used in conjunction with audio-recordings, which helped to ensure the validity of the data.
The sample for this study was relatively small: participants were from one Grade 3 class.
Eight learners were selected from this class for the purpose of interviews and analysis of learners' written pieces. Findings of this study are limited to this particular class and group of learners in the sample and cannot be generalised to a broader population in a different setting regarding the impact of writing tasks in supporting problem-solving.
Another limitation of this study was the number ranges used in the mathematical problems learners solved during the data collection period. The mathematical problems were differentiated for the three mathematical ability groups present in the participating Grade 3 class. The problems shared the same context across the groups. However, the number ranges differed. A higher number range was employed for the above average (AA) ability group while the below average (BA) ability group solved problems with a lower number range. The number range for the average (A) ability group was considered to be typical for learners in this grade. The results concerning number ranges of mathematical problems will be discussed in Chapter five.
A further limitation concerned the number of writing tasks implemented during data collection. Before data collection commenced, the plan was to conduct three writing episodes per week over a period of ten weeks. These writing episodes included modelled writing lessons as well as opportunities for learners to implement the writing tasks. In addition, a pre-test and a post-test, before and after the implementation of the writing tasks, were envisaged. Data collection did not proceed as planned because the school programme did not always afford the time to collect data on certain key days. The school's assessment programme needed to be taken into account, so more data was collected in some weeks than in others. Although the writing intervention was shortened to eight weeks, the planned number of writing episodes still took place. Being well prepared for such potential pitfalls is essential when conducting research.
Chapter One
The background and rationale for this study, as well as the purpose of the study, are presented. Chapter One provides a brief overview of problem-solving and the use of writing in mathematics. It includes an overview of the methodology as well as the significance of the study. The limitations of the study are also mentioned.
Chapter Two
The theoretical framework and literature review for this study are outlined. The chapter begins with defining Vygotsky's theory of social constructivism as the overarching theory with an emphasis on the Zone of Proximal Development (ZPD) and scaffolding. Particular theories of Skemp and Sfard are presented as they relate to the abovementioned theories.
The literature review focuses on problem-solving in mathematics; placing it in the context of this study. Writing in mathematics is then explained, paying particular attention to the work of Burns (1995a).
Chapter Three
In this chapter the research design for this study is delineated as a qualitative case study.
The research plan is presented describing the data collection plan. This includes the pilot study, pre-test, implementation of the writing tasks and the post-test. Subsequently the site and sample are discussed. The data collection instruments include learners' written pieces, audio-recordings of ability group discussions, field notes and interviews with eight learners selected from the Grade 3 class. The process of data analysis is explained.
Chapter Four
The findings of this study are presented. These provide evidence that writing supports learners when engaged in mathematical problem-solving. Results from the pre-test, the implementation of the writing tasks and the post-test are given, with examples from learners' written pieces as well as from interviews conducted with selected learners. Results from field notes and audio-recordings of the ability group discussions are also considered.
Chapter Five
Themes are extracted from the data as they relate to the research questions of this study.
These are discussed in answer to the research questions. Lastly, this study makes recommendations for mathematics education: possible areas of further research are highlighted.
The theoretical framework that underpins this study, together with the relevant literature that addresses the research, is discussed in the next chapter.
INTRODUCTION
The purpose of this research study is to investigate how various types of writing tasks support Grade 3 learners when they attempt to solve mathematical problems. Learners often have difficulty solving word problems because these require a deeper conceptual understanding of mathematical ideas. The literature review discusses theories of learning and schools of thought in mathematics that relate to the research question.
The literature review begins with the theoretical framework that underpins this study.
Vygotsky's social constructivist theory, in particular the Zone of Proximal Development (ZPD), scaffolding and inner speech is employed. Skemp's theory on the development and restructuring of schema and Sfard's theory on the process and object of mathematical conceptions relevant to this research are referred to throughout. Literature on mathematical problems pertaining to this investigation includes levels of problem-solving strategies, writing in mathematics with particular reference to Burns (1995a) and types of writing tasks that can be employed in the mathematics classroom to support problem-solving thinking strategies.
THEORETICAL FRAMEWORK
This research study involves support provided by the teacher and peers in order for learners to solve mathematical problems. Learners are required to use their existing knowledge of mathematical concepts when they engage in problem-solving. According to Selley (1999:3), constructivism is "a theory of learning which holds that every learner constructs his or her ideas, as opposed to receiving them, complete and correct, from a teacher or authority source". Selley describes constructivism as internal and personal, enabling the learner to build his or her knowledge by "reinterpreting bits and pieces of knowledge" gained from others. Sperry Smith (2013:10) concurs by stating that constructivism is "a theory that views the child as creating knowledge by acting on experience gained from the world and then finding meaning in it". A learner assimilates and owns knowledge more thoroughly and completely when he or she is able to apply and re-configure knowledge, as opposed to learning facts off by heart: what Freire terms 'banking'. According to Ernest (1994:63), "social processes and individual sense making" are imperatives within this theory.
Conceptual knowledge that is individually constructed is rooted in the individual conscience and experience (Skemp, 1989:203). Through learning constructively, the learner is an active participant in the process of testing, applying and appropriating knowledge (Selley, 1999:6).
Learners make sense of the knowledge they have gained, and can own and apply it, when such knowledge has been shaped through their own experiences of life and interactions with others.
According to Piaget, children construct increasingly complex 'maps' of their world in an attempt to organize, understand and adapt to it (Donald, Lazarus & Lolwana, 2010:49).
Piaget's developmental stages provide a progression in terms of the learner's ability to move from the concrete, pre-operational stage to the abstract. Carruthers and Worthington (2006:22) refer to Piaget's idea of readiness where appropriate developmental stages need to be reached before certain concepts can be understood. Piaget's theory is more concerned with the physical aspects of cognitive development in the construction of knowledge than interaction and culture. Vygotsky's theories on social constructivism focus on the role of others in the construction of knowledge. For the purpose of this research, social constructivist theory supports interaction and collaborative learning that writing and mathematical problems stimulate. Vygotsky's theory of social constructivism provides a theoretical underpinning: the works of other theorists are drawn upon to corroborate and contrast aspects of central theoretical concern.
Social constructivism
Vygotsky's social constructivist theory explains that meanings are social constructions, built up and passed on between people in social contexts, each of which has a history and culture with its own set of 'meanings' (Donald et al., 2010:54). Similarly, Fosnot and Dolk (2001:6) suggest that "the process of constructing meaning is the process of learning". When referring to socio-cultural theory, Sutherland (2007:5) states that "students bring informal perspectives on mathematics to any new learning situation and these influence what they pay attention to and thus the knowledge they construct". The learning situation is interactive: the teacher and learners collaborate to facilitate the individual's construction of knowledge (Schoenfeld, 2013:20). Learning is influenced by reflective thinking, social interaction and the effective use of models or tools. Learning environments in which learners engage in explaining their thinking greatly affect the knowledge they construct (Schoenfeld, 2013:28).
Learners need to be socially engaged when they solve mathematical problems (Schoenfeld, 2013:15). The role of the teacher is pivotal: Sutherland (2007:5) argues that teachers should be aware of the informal approaches learners bring to the mathematics classroom in order to exploit such prior and valid skills as a basis for acquiring and assimilating new mathematical ideas.
Zone of Proximal Development (ZPD)
A fundamental impact of Vygotsky's thought upon the development of educational theory is the concept of the zone of proximal development (ZPD). Vygotsky (1978:86) defines ZPD as "the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers". Wright, Martland, Stafford & Stanger (2006:28) explain ZPD as the "knowledge that the learner is capable of learning under the influence of appropriate teaching, and this zone is regarded as more extensive than that consisting of the knowledge that the learner is capable of learning without assistance". Learning within the ZPD makes use of the knowledge the learner already possesses as the foundation on which to construct prospective knowledge. What the learner is initially able to do collaboratively, he is later able to do independently. In the ZPD, teaching represents the means through which development is advanced (Vygotsky, 1978:131). Daniels (2001:56) describes Vygotsky's theory of the ZPD as an "attempt to understand the operation of contradiction between internal possibilities and external needs that constitutes the driving force of development". In this study, writing activities create the opportunity for a ZPD to be established. Different types of writing tasks are used as a method to determine the support writing could give to mathematical problem-solving strategies and explanations within the ZPD. Vygotsky's theory of ZPD leads to the theory of scaffolding.
Scaffolding
Vygotsky and Bruner's work discusses the theory of scaffolding which builds on the notion of the ZPD. The more knowledgeable other (MKO), be it the peer, parent or teacher, scaffolds understanding through individually tailored pacing of the problem-solving process (Bruner & Haste, 1987:8). The gap between what the learner can do, given the constraints of her/his cognitive functioning, and what s/he can achieve with the intercession and scaffolding of adults or peers, describes the concept of the ZPD (Bruner & Haste, 1987:9). Scaffolding occurs when the MKO provides more manageable steps in the process that lead to the ability to solve the problem. These first realistic and attainable steps comprise beneficial teaching and learning situations that promote the construction of knowledge (Skemp, 1989:73).
Through these manageable steps, the role or involvement of the learner is simplified rather than the task itself (Daniels, 2001:107). Orton and Frobisher (1996:18) add that the teacher's role within a constructivist learning environment is a vital contribution to the learner's construction of knowledge. In this study, in the ZPD, the teacher provides stimulation through writing activities that support, prompt and stimulate individual learning.
Such writing tasks are "instructional strategies" (Daniels, 2001:108) that provide scaffolding for learners when they solve mathematical problems. Skemp (1989:76) concurs that, when learners talk with their peers in pairs or groups in a mathematical situation through cooperative learning, they have the opportunity to explain and discuss mathematical concepts.
These situations help to develop and extend mathematical thinking; learners construct their knowledge socially and interactively. Through such situations, scaffolding occurs which leads, in turn, to the learner's construction of independent knowledge.
Sperry Smith (2013:10) explains scaffolding as support given by the teacher using prompts that eventually lead to the learner's ability to work independently. Similarly, Siyepu (2013:5) describes scaffolding within the ZPD as learning activities that the teacher employs to develop knowledge. Through creation of the ZPD, thinking can be tested and challenged without fear: knowledge and skills are enhanced through learning activities with the help of the teacher or significant other. This appropriation, ownership and assimilation of own knowledge requires assistance through social interaction between the MKO and the learner as well as between peer learners. Such interactions create opportunities for teachers and learners to "pause (and) comment on their problem-solving efforts in oral or written reflections" (Siyepu, 2013:8). In the ZPD, activities can be used to consolidate and organise the learner's informal knowledge into a more highly organised knowledge structure (Skemp, 1989:75). In this study, scaffolding occurred through implementing different types of writing tasks that learners may use to support their strategies when solving mathematical problems.
Burns's (1995a) methodology of using writing in mathematics is introduced and implemented as a tool to scaffold learners' understanding and support them when solving mathematical problems. Fosnot and Dolk (2001:18) describe scaffolding as support given where the teacher designs activities to develop understanding. Once the learner acquires the necessary knowledge or skill and performs a task or solves a problem within the ZPD, assistance is decreased and eventually removed to encourage independent thinking. Learners become more independent as they progress through the ZPD: they become less reliant on the support given through scaffolding. Daniels (2001: 109) explains that "the learner actually decreases the level of dependence upon the support structure as the learning sequence progresses".
As soon as the learner understands the mathematical knowledge, the landmark is shifted and other questions are raised (Fosnot & Dolk, 2001:18). The ZPD is extended through further scaffolding to develop new mathematical knowledge: learners can proceed to engage with more challenging mathematical problems.
Vygotsky's overarching theory of social constructivism is detailed through the theories of the ZPD and scaffolding. In the next section, another tenet of Vygotsky's theories, inner speech, is discussed.
Inner speech
In this research study, when learners engage in personal writing, their engagement is similar to inner speech as theorized by Vygotsky. The role of inner speech is placed within the broader spectrum of language development. Vygotsky (1986:30) describes inner speech as fulfilling a similar role to egocentric speech, as theorized by Piaget. Both types of speech are used to comprehend a situation and, in essence, perform the same function of conversing with the self. Egocentric speech appears to be commonly experienced by younger children when they voice their thinking, while silent inner speech is evident in older school children. Vygotsky (1986:33) explains that egocentric speech does not fall away as Piaget suggests but rather turns into inner speech when a child reaches school-going age. He further argues (Vygotsky, 1986:36) that speech and, more importantly, thought development move from the social to the individual: "inner speech is speech for oneself (whereas) external speech is for others" (Vygotsky, 1986:225).
In relation to this study, learners often construct meaning socially, especially within a problem-solving context in a mathematics classroom. Learners make use of external speech when they engage in discourse around the mathematical problems presented. Learners attempt to construct their knowledge and make sense of problem-solving strategies when they engage with the MKO and their peers. Within this social constructivist setting, learners move through their ZPDs according to their individual conceptual understanding and mathematical abilities. Learners progress through their ZPDs when they construct knowledge socially by engaging with others and individually by writing in mathematics.
Learners employ writing tasks as a means of inner speech in order to make sense of mathematical ideas and express their thinking: their use of writing reveals their individual development of thought.
Social constructivism within CAPS
The CAPS Mathematics curriculum (DBE, 2011:10) states that, in the Foundation Phase, learners "should be exposed to mathematical experiences that give them many opportunities to do, talk and record their mathematical thinking". By doing this, mathematics lessons become interactive (DBE, 2011:12): learners work in groups or as a whole class. This constructive interaction provides ample opportunities for learners to construct mathematical knowledge socially: they engage with one another and the teacher. This interaction was elaborated on in the discussion of the zone of proximal development earlier in this chapter.
The current curriculum sets out a platform for collaborative mathematics lessons where knowledge is constructed and shared. This type of learning occurs when learners grapple with mathematical problems, and apply and develop their mathematical knowledge. This development links the theories of social constructivism and the ZPD mentioned earlier. The next section explains Skemp's theory of the development of schemas which relates to Vygotsky's pedagogical theories that underpin this study.
2.2.6 The development of schemas
Skemp (1987:24) discusses schemas as the development of conceptual structures which build on fundamental notions of constructivist learning. The function of a schema is to integrate existing knowledge in order to acquire new knowledge and so enhance understanding (Skemp, 1987:24). Skemp (1987:25) refers to the suitability of existing schemas when building new knowledge. In order to construct new knowledge, there has to be a link to available schemas that already exist. New knowledge cannot be constructed in isolation.
Such linking of prior and new knowledge requires the learner to test, apply and imaginatively/cognitively assimilate new knowledge within an existing schema: any existing schema needs to be restructured to develop concepts further. Skemp (1987:28) refers to such further development as reconstruction. Fülöp (2015:40) concurs that engaging in problem-solving provides opportunities for learners to "refine, combine, and modify knowledge they have already learned". When this individual appropriation or ownership of knowledge occurs, it is likely that a deeper conceptual understanding has developed through long-term schemas that are appropriate and adaptable (Skemp, 1987:34). Sutherland (2007:53) adds that, in the construction of knowledge, not all learners are "focusing on the same processes or constructing the same knowledge, but that through dialogue, actions and interactions a sort of common knowledge emerges". Conceptual development may occur in a whole class or group setting where learners are developing and restructuring similar schemas through social constructivism. Such development relates to the previous discussion regarding Vygotsky's theory of the ZPD.
According to Skemp (1989:53), "concept formation has to happen in the learner's own mind…as teachers…help along the natural learning processes". Skemp (1989:62) describes the formation of concepts by stating that "the process of abstraction involves becoming aware of something in common among a number of experiences, and if a learner does not have available in his own mind the concepts which provide the experiences, clearly he cannot form a new higher order concept from them". At this point, the role of the teacher becomes crucial in guiding learners and providing scaffolding within the ZPD. Skemp (1989:63) explains that knowledge is often constructed by combining and relating concepts which the learner has already mastered and owned through a process of explanation and use of examples. Skemp adds that learners are required to learn many higher order concepts in mathematics but that it is essential that the learner already possesses the necessary lower order concepts. Learners may become confused by higher order concepts if their lower order concepts are incorrect or restricted, especially when such concepts are closely related. Such issues may be addressed within the ZPD when learners grapple with mathematical problem-solving.
According to Barnes and Venter (2008:11), "knowing what to do in a specific situation, but not necessarily understanding why it works, may limit the transfer of that procedure or skill".
The individual learner learns to make connections and construct knowledge of mathematics in a flexible and coherent way which is fundamental to the development of schemas and, in turn, further mathematical knowledge. Countryman (1993) states that learners need to construct mathematics by "exploring, justifying, representing, discussing, using, describing, investigating and predicting". These elements can be incorporated and assimilated successfully when learners are engaged in solving mathematical problems that encourage development of mathematical knowledge while they progress through designated phases of ZPD.
In this study, learners use writing tasks to support and explain their mathematical problem-solving strategies. In order to solve problems, learners require certain mathematical knowledge and skills. If the necessary lower order concepts are incorrect or inadequate, learners experience difficulty later: they lack the essential schemas to engage in more advanced problem-solving. Mathematical problems develop knowledge: learners apply existing knowledge to the problem. This study investigates what kind of writing supports learners best when they solve and explain problems. Writing allows learners to clarify their thinking when they apply mathematical knowledge and reconstruct schemas.
Sfard's theory regarding mathematical ideas is relevant at this point: it relates to learners' understanding of mathematical ideas which is essential to solving mathematical problems.
The process and object of mathematical ideas
Sfard's theory on mathematical conceptions describes the interplay between the process and object of the same mathematical idea (Sfard, 1991:28). The process, or operational conception, is the dynamic action where an idea is conceived at a lower level. The object, or structural conception, is conceived at higher levels that underlie relational understanding (Sfard, 1991:16). Solving mathematical problems requires an existing knowledge of mathematical ideas: the objects. However, engaging in problem-solving may necessitate that a process be used to solve the problem which, in turn, may lead to the conception and development of other mathematical ideas. Sfard (1991:19) explains the nature of moving from operational conception to structural conception where active, visual representations develop into a more abstract understanding through mental representations. Orton (2004:25) describes mathematics as a product (organised body of knowledge) and a process (learner participation in a creative activity).
Problem-solving allows for movement between these concepts in order to use knowledge proficiently (Sfard, 1991:28). In this study, solving mathematical problems is supported by writing about the processes and solutions. In order to do so, learners engage in operational and structural conceptions as required by the problems they attempt to solve.
The literature review that follows focuses primarily on two areas: problem-solving and writing in mathematics. The nature and use of problem-solving develops mathematical knowledge and skills. Problem types and levels of strategies learners use when solving mathematical problems are discussed as they relate to number learning. Use of writing in mathematics is examined as well as types of writing tasks that can be used to encourage critical thinking and support learners to solve mathematical problems. The role of language in mathematical problem-solving is argued.
SOLVING MATHEMATICAL PROBLEMS
The purpose of this study is to examine how writing tasks can support learners when they solve mathematical problems. In this section of the literature review, different perspectives of mathematical problems are examined. The use of problem-solving in the mathematics classroom is explained as well as the use of word problems as a type of problem-solving exercise. The role of previous knowledge and conceptual development is elaborated upon: both relate to mathematical problem-solving. Various types of word problems are dealt with as they are presented in a mathematics lesson.
Problem-solving
Problem-solving refers to real-life problems that encourage the use of skills such as prediction and analysis. Problem-solving makes use of novel problems that encourage critical thinking: learners engage with problems in an intelligent rather than routine manner (Orton & Frobisher, 1996:20). The problems are novel in that learners have not encountered the problem situation or context in previous mathematics lessons. Problem-solving encourages a higher cognitive demand: the context and the solution are not obvious (O'Donnell, 2006:349). According to Kuzle (2013:45), problem-solving is process-oriented: learners take an active role in generating ideas to solve problems. The ability to generate ideas further enhances the understanding that problem-solving requires higher order, critical thinking because solutions are, by definition, not immediately observable. The process of problem-solving may require learners to work through various possible solutions in order to solve problems (Marzano, 2014:85). Fülöp (2015:40) agrees that, in problem-solving, "students cannot directly apply methods and algorithms to solve it or… it is a task with multiple solutions where the students are asked to come up with different ways of solving the problem". Wright, Martland, Stafford and Stanger (2006:37) explain that learners could experience cognitive reorganisation when they generate more sophisticated strategies during problem-solving. Cognitive reorganisation links to Skemp's theory of constructing and reconstructing schemas when new mathematical knowledge is acquired. Problem-solving provides opportunities for such links to occur.
Word problems are a type of problem-solving. Burns (2007:16) explains how problem-solving and word problems are different but can be linked together to build the learner's use of mathematical knowledge. The following section addresses word problems. Burns (2007:16) states that traditional word problems require learners to "focus on the meaning of the arithmetic operations (where they need) to translate the situation into an arithmetic problem…and then perform the computation called for". She defines a mathematical word problem as a situation requiring that mathematical skills, concepts, or processes be used to arrive at the goal (Burns, 2007:17). This definition concurs with Frobisher's (1994:152) explanation that, "in a word problem, a task or situation is presented in words, and a question is asked which sets out the goal that the solver has to attain". Word problems are a particular way of presenting problems using words that provide a context or situation in which mathematical knowledge is required to find a solution. Burns (2007:16) links word problems and problem-solving when she proposes that problem-solving abilities can be raised through the use of word problems.
Problem-solving and previous knowledge
Problem-solving, as explained by Orton and Frobisher (1996:20), is "the use of novel problems which require children to draw on previously acquired knowledge and expertise in an intelligent rather than random or routine way". There appears to be a common thread in this area of research: prior knowledge is a necessary starting point for problem-solving. In earlier research on problem-solving, Polya (1957:110) explains that, "in order to obtain the solution, we have to extract relevant elements from our memory, we have to mobilize the pertinent parts of our dormant knowledge…any feature of the present problem that played a role in the solution of some other problem may play again a role". Orton (2004:24) adds that "problem-solving is now normally intended to imply a process in which the learner combines previously learned elements of knowledge, rules, techniques, skills and concepts to provide a solution to a situation not encountered before". The same is true when learners encounter a word problem.
In order for learners to solve mathematical problems, they need to have some mathematical knowledge as a background on which to build. Polya (1957:9) explains that "the materials necessary for solving a mathematical problem are certain relevant items of our formerly acquired mathematical knowledge, as formerly solved problems". Learners use what they know in order to solve that which is unknown: learners make connections with previous knowledge and mathematical problems in order to construct new meaning. Polya (1957:15) describes this connection as part of the problem-solving process where the learner looks back at a previous solution to make connections for solving a newer, harder problem.
Before presenting a learner with a problem, the teacher needs to establish what previous knowledge already exists. Orton (2004:25) posits that the knowledge and constructions the learner has mastered, linked to the knowledge required by the problem, can result in a successful solution to the problem.
Problem-solving and conceptual development
Problem-solving is described as a process of thinking and reasoning that supports conceptual development rather than procedural development (O'Donnell, 2006:351). It develops the learner's understanding of mathematical concepts rather than entrenching a set of procedures to reach an answer. Fosnot and Dolk (2001:9) concur that learning in mathematics occurs through different contextual situations which generate various mathematical models, strategies and big ideas that involve schematizing, structuring and modelling. Big ideas are the structures of mathematics a learner grasps when making a shift in mathematical reasoning (Fosnot & Dolk, 2001:10), much like conceptual development. Models can represent mathematical ideas and be used as tools to express mathematical thought (Fosnot & Dolk, 2001:11). Learners may sometimes need to use mathematical tools or manipulatives to make sense of a problem and solve it. However, they are still required to construct their mathematical knowledge by using such tools as models of the mathematical relations that exist (Russell, 2000). It is through problem-solving that these big ideas and models are advanced. Kennedy, Tipps and Johnson (2008:115) argue that there is no problem if the answer and procedure are already known. If the procedure is known and simply applied to find the answer, the task is an exercise. Solving a problem requires reflection and possibly an original step (Musser, Burger & Peterson, 2011:4). The problem needs to make a higher cognitive demand (O'Donnell, 2006:349), where the mathematical content embedded in the problem may not be immediately obvious to the learner. The learner needs to gain insight and perform analysis before finding a solution to a problem that could require decision-making (Heddens & Speer, 2006:82). Kolovou, van den Heuvel-Panhuizen and Bakker (2009:35) posit that "the solution process often requires many steps back and forth until the student is able to unravel the complexity of the problem situation". In order to do this, the learner ponders the problem thoroughly, tries out different approaches and connects a whole range of possible and appropriate techniques and methods (Orton, 2004:25). Problem-solving does not have a single direct path to a single fixed answer. A problem has to be deconstructed in order to understand and use the mathematical skills and tools required to find a solution. Luneta (2013:80) states that a problem is "a question (that) is posed to a person who initially does not know what direction to take to solve a problem (and) there may be many possible paths to a solution". This definition of a problem requires learners to become more flexible in their thinking, deepening their conceptual understanding in order to solve mathematical problems (Kennedy et al., 2008:5). When learners are engaged in problem-solving in this way, they become aware that one problem may be solved using different strategies: such thinking should be encouraged to explore alternatives (Kilpatrick et al., 2001:344). This view is supported by Sperry Smith (2013:65), who claims that as learners think about problems and create their own strategies, they become confident in using and enjoying mathematics in creative and original ways.
"Problem-solving ability is enhanced when students have opportunities to solve problems themselves and to see problems being solved. Further, problem-solving can provide the site for learning new concepts and for practicing learned skills" (Kilpatrick et al., 2001:420). Heddens and Speer (2006:84) argue the opportunity to apply conceptual knowledge through problem-solving is as important as understanding the concepts themselves because it provides more meaning and purpose to the knowledge and skills the learner has acquired: "mathematical thinking is nurtured through problem-solving experiences that do not restrict a child's avenues of success to a single route" (Heddens & Speer, 2006:85). This process allows learners to deepen their conceptual understanding. By solving mathematical problems, learners engage in the process of sense-making: they apply and develop their mathematical knowledge (Schoenfeld, 2013). This development occurs as a result of a learner's ability to "notice patterns, raise conjectures, and then defend them to one another" (Fosnot & Dolk, 2001:2). Learners learn to think critically about their own strategies as well as the strategies of others. Through sharing their strategies, learners are exposed to multiple strategies when they explain and compare their solutions and develop mathematical relations (Russell, 2000). Through this discussion and interaction with fellow learners, conceptual understanding is revealed (Campbell, Rowan & Suarez, 1998:50).
2.3.5 Invented strategies in problem-solving
Fülöp (2015:49) defines a strategy as the thinking aspect of problem-solving that is invented and flexible. She adds that it is "an overarching idea involving arranging or combining what is otherwise discrete and independent with a particular end in view". Strategy thinking involves making decisions, while the doing aspect (methods and algorithms) entails implementing the decisions made. Added to this, Campbell et al. (1998:49) suggest that, when learners invent their own strategies, they enhance their learning. In reference to a project, Campbell et al. (1998:49) find that "students often solved problems by inventing algorithms on the basis of their interpretations of the problems, their understanding of arithmetic operations, and their representation of numerical relationships". Learners should be encouraged to explain their invented strategies (Campbell et al., 1998:50). This verbalisation of personal strategies displays an ability to arrive at the solution, demonstrating the conceptual and procedural knowledge needed in the process. Murphy (2006:219) adds that, when using their invented strategies, learners often rely on established mathematical ideas such as commutativity and associativity while they develop their mathematical reasoning abilities. In a study conducted by Fülöp (2015:51), it was found that instruction about different strategies was not a quick, easy process but that it was beneficial to learners' problem-solving abilities.
Types of problem-solving
There is a distinction between problem-solving and solving problems, whether they are word problems or mathematical problems. According to Orton (2004:84), "there are different kinds of problems in mathematics…routine practice problems, word problems, real-life applications and novel situations". Heddens and Speer (2006:82) concur by stating that there are four types of word problems: traditional textbook word problem; multistep textbook word problem; non-traditional word problem and a real-life problem situation. Routine practice problems may be incorporated at the end of a chapter or unit on a particular mathematical concept.
Word problems, traditional word problems and multistep word problems may refer to problems traditionally presented where learners need to ascertain the operation required to solve the problem. Real-life problems and novel situations could be more realistic and relate to learners' own personal experiences, an example of which could involve planning a class outing and all the logistical aspects involved.
Use of word problems links to the process of problem-solving. Word problems play a crucial role in the mathematics classroom because they allow learners to develop the skills to engage in problem-solving. Hansen (2011:71) explains that word problems can have multiple purposes including the practice of mathematical skills, motivating children, assessing attainment and developing problem-solving abilities and mathematical concepts and skills.
Added to this, Kilpatrick et al. (2001:183) claim that the use of word problems provides opportunities for learners to use more advanced levels of counting and procedures for computation. The different levels of counting and procedures that learners may use when solving problems, insofar as they relate to this study, are discussed later. Such levels are referred to in the analysis of learners' work described in the findings (Chapter 4).
Although it is widely acknowledged that the use of word and/or mathematical problems and problem-solving in mathematics is essential to building conceptual understanding, problems are often not presented in a way that supports this. Heddens and Speer (2006:83) discuss common shortcomings in the use of problem-solving in mathematics: it is not constantly present throughout a unit; it does not integrate topics from different units and/or subjects; it focuses only on a specific interpretation of an operation; it encourages looking for key words rather than contextual clues; and it involves an oversimplified application of knowledge.
There is a difference between routine and non-routine (word) problems. Routine problems can be likened to solving procedures or exercises as discussed earlier. On the other hand, non-routine problems are more complex and puzzle-like. In a study conducted by Kolovou et al. (2009:45) on problem-solving in Dutch textbooks, it was found that the number of non-routine problems that encourage deeper conceptual understanding was negligible. The number of puzzle-like tasks presented in the textbooks that the majority of Dutch teachers use is minimal, which may be related to learners' underperformance in the area of problem-solving. It is possible that this lack of exposure to more complex problem-solving is also a decisive factor in South African mathematics classrooms. Learners may not be presented with enough opportunities to deepen their conceptual understanding and develop better problem-solving abilities through challenging word problems.
Problem-solving in CAPS
According to the CAPS mathematics curriculum in the Foundation Phase (DBE, 2011:8), learners need to develop specific skills in mathematics, especially as they relate to problem-solving. These specific skills include: "learn to listen, communicate, think, reason logically and apply the mathematical knowledge gained; learn to investigate, analyse, represent and interpret information; (and) learn to pose and solve problems". CAPS states, in the Foundation Phase, "solving problems in context enables learners to communicate their own thinking orally and in writing through drawings and symbols" (DBE, 2011:9). The curriculum does not specifically mention the use of writing in words when solving mathematical problems. Researchers such as Burns, however, advocate the use of writing in words. This study sought to determine whether the use of writing, including words, can support learners' mathematical problem-solving strategies. Luneta (2013:81) describes problem-solving, as indicated in CAPS, as "non-routine problems, higher order understanding and the ability to break a problem down into its component parts". It is imperative that learners use writing in the mathematics class to provide written explanations of their thinking when solving mathematical problems. It not only provides a means of clarifying their thinking and the strategies they choose to use but also serves as an informative assessment of the learners' understanding. Later in this chapter, the purpose of writing in mathematics is examined: what it entails and the various types of writing tasks that can be used in mathematics.
PROBLEM TYPES
This study tests the use of different word problems to stimulate and develop learners' problem-solving skills and their ability to solve problems. The word problems relate to the basic operations (addition, subtraction, multiplication and division) using whole numbers. Naudé and Meier (2004:105) distinguish three problem types as they relate to the basic operations. These include problems that involve adding and subtracting, repeated addition as a means to conceptualise multiplication as well as grouping and sharing as a means to conceptualise division.
The purpose of combining addition, subtraction, multiplication and division in this study is that mathematical problems are often presented in such a way that learners may use either operation as a strategy: they are inverse operations. Some learners may use addition as a strategy while other learners may use subtraction to solve the same problem.
The same duality applies where multiplication or division may be used as a strategy to solve a particular problem. As previously mentioned in this chapter, problem-solving usually has multiple paths to a solution.
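As a purely illustrative sketch (the numbers are hypothetical and not drawn from the study's problems), the same missing-part question can be answered with either operation of an inverse pair:

35 - 17 = 18 (a subtraction strategy)
17 + 3 = 20 and 20 + 15 = 35, so the missing part is 3 + 15 = 18 (an addition, counting-up strategy)

Both routes lead to the same solution; learners choose the operation they find more accessible.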
Addition and subtraction problem types
According to Kilpatrick et al. (2001:184), there are four types of problems that involve addition and subtraction: joining, separating, part-part-whole relations and comparison relations, depending on which quantity is unknown. There are three quantities involved in addition and subtraction problems: the initial amount, the changed amount and the result (Naudé & Meier, 2004:108). Word problems provide contexts for adding and using different addition procedures to facilitate learners' reasoning and improve their understanding of addition processes (Kilpatrick et al., 2001:190). Kilpatrick et al. (2001:191) explain the relation between addition and subtraction as follows: "Students examine a join or separate situation and identify which number represents the whole quantity and which numbers represent the parts. These experiences help students to see how addition and subtraction are related and help them to recognize when to add and when to subtract". Although multiplication and division are inverse operations that are closely linked, there are differences in their underlying strategies.
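Before moving on to multiplication and division, a brief sketch (with purely hypothetical numbers) of how the unknown quantity distinguishes these addition and subtraction problem types:

Result unknown (joining): 8 + 5 = ?
Change unknown (joining): 8 + ? = 13
Initial amount unknown (separating or joining): ? + 5 = 13
Difference unknown (comparison): 13 - 8 = ?

In each case the same three quantities (initial amount, change and result) are involved; the problem type is determined by which of them is missing.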
Multiplication and division problem types
Two main strategies can be used for division, namely distributing and chunking (Van den Heuvel-Panhuizen et al., 2012:53). Distributing involves sharing objects or numbers equally one by one, whereas chunking shares groups of objects equally. Fosnot and Dolk (2001:11) discuss unitizing when multiplying and dividing as the concept that requires children to "use number to count not only objects but also groups - and to count them both simultaneously". In other words, eight objects, for example, can concurrently be seen as eight items and as one group. Fosnot and Dolk (2001:53) refer to division problems as being quotitive or partitive. In quotitive problems, the whole is given in the problem and the learner needs to determine how many groups fit into the whole. One of the pre-test problems of this study (problem 2 in Appendix E) uses quotitive division where learners were given the amount in each group (platters of 7 doughnuts each) along with the total number of objects (e.g. 56 doughnuts).
Learners had to determine the number of platters, that is, how many groups of 7 fit into the whole. Partitive problems allow learners to distribute the whole amount between the number of groups given. For example, learners are given a problem where the whole amount of 35 needs to be distributed between 7 groups. A partitive strategy requires learners to share the whole amount of 35 items either by distributing or chunking as described earlier. Learners often find difficulty with partitive problems because they "need to comprehend the one-to-one correspondence involved…and consider the number of groups, the number in the groups, and the whole…simultaneously" (Fosnot & Dolk, 2001:53).
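In arithmetic terms, and using the numbers already mentioned above, the two structures can be contrasted as follows:

Quotitive: 56 doughnuts with 7 on each platter gives 56 ÷ 7 = 8 platters (how many groups of 7 fit into 56).
Partitive: 35 items shared between 7 groups gives 35 ÷ 7 = 5 items per group (how many items in each group).

The calculation is the same kind of division in both cases; what differs is whether the number of groups or the size of each group is the unknown.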
Splitting is another strategy that may be used when solving multiplication and division problems: learners break down the problem into smaller problems (Van den Heuvel-Panhuizen et al., 2012:162). Van den Heuvel-Panhuizen et al. (2012:154) refer to splitting as a form of decomposing into hundreds, tens and ones where learners require some understanding of the place-value structure of numbers.
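A minimal illustration of splitting, with hypothetical numbers, is decomposing one of the numbers into tens and ones:

6 × 34 = (6 × 30) + (6 × 4) = 180 + 24 = 204
84 ÷ 4 = (80 ÷ 4) + (4 ÷ 4) = 20 + 1 = 21

In both cases the larger problem is broken into smaller problems whose answers are recombined, which presupposes some understanding of place value.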
Problem types in CAPS Mathematics
According to the CAPS Mathematics curriculum (DBE, 2011:79), there are certain problem types which should be posed at Grade 3 level. Learners should be solving problems such as grouping, where the remainder is discarded or incorporated, as well as sharing, where the remainder is discarded (illustrated below). They should solve problems that involve repeated addition as well as addition and subtraction. These problem types encompass the four basic operations and generally reflect the problems used in this study. Other problem types, such as sharing leading to fractions, grids, rate, proportional sharing and problem situations with different functional relations, are mentioned in the CAPS Mathematics curriculum (DBE, 2011:79). However, these problem types are not included here: this research study focuses only on problems involving addition, subtraction, multiplication and division.
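To make the remainder distinction concrete, a hypothetical illustration (not taken from the curriculum document or from the study's problems):

Grouping, remainder discarded: 26 apples packed into bags of 4 gives 26 ÷ 4 = 6 full bags, with 2 apples left over and set aside.
Grouping, remainder incorporated: 26 children travelling 4 to a car need 7 cars, since 6 cars carry only 24 children.
Sharing, remainder discarded: 26 sweets shared equally among 4 children gives 6 sweets each, with 2 sweets left undistributed.

The same division fact, 26 ÷ 4 = 6 remainder 2, underlies all three; the problem context determines how the remainder is treated.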
LEVELS OF PROBLEM-SOLVING STRATEGIES
A variety of research has been conducted in the area of understanding the strategies and levels of conceptual knowledge that learners present when solving mathematical problems (Van den Heuvel-Panhuizen et al., 2012; Wright, Martland, Stafford & Stanger, 2006; Wright, 2013; Schoenfeld, 2013). Problem-solving strategies that learners have used, or are familiar with, are forms of knowledge they bring to a mathematical problem (Schoenfeld, 2013:18). The conceptual level that learners possess when solving a problem can be linked to the strategies they have previously used. This linkage implies that learners approach future problems with more knowledge than before (Schoenfeld, 2013:20): they continually develop their mathematical knowledge and problem-solving strategies with each problem they solve.
For the purpose of this research study, the work of Wright, Martland, Stafford and Stanger (2006) is fundamental in understanding learners' stages and levels of conceptual knowledge and the strategies used in tackling mathematical problems. The Learning Framework In Number (LFIN) is referred to as a description of early number learning.
LFIN encapsulates likely stages and levels of number learning that learners progress through as they develop their mathematical knowledge. The LFIN incorporates the following areas of number learning: the Stages of Early Arithmetical Learning (SEAL); number words and numerals; the Structuring Number Strand (SNS); conceptual place value knowledge and early multiplication and division. The aspects of SEAL, conceptual place value knowledge and early multiplication and division are most applicable to this study: all relate to strategies used by Grade 3 learners in solving mathematical problems. Descriptions of these aspects of the LFIN are given below. The majority of learners in the selected Grade 3 class were at a stage of their number learning where they coped adequately with number words and numerals as well as SNS. These aspects of the LFIN are more applicable to the number learning required in lower grades: they were not areas of focus in defining and understanding the LFIN within this study.
Stages of Early Arithmetical Learning (SEAL)
The SEAL delineates the stages that learners pass through when they develop their knowledge of early arithmetic strategies and is, therefore, the primary aspect of LFIN. The term 'counting' is used to describe the SEAL. Wright (2013:28) clarifies the activity of counting as Forward Number Word Sequences (FNWS) and Backward Number Word Sequences (BNWS) in which the sequences of number words are recited. Wright, Martland, Stafford and Stanger (2006:20) posit that counting occurs when it is assumed that learners have a cognitive goal in determining the numerosity of a collection rather than reciting the FNWS or BNWS. Counting involves solving additive or subtractive problems (Wright, Martland, Stafford & Stanger, 2006:10). The particular counting strategies that the learner uses in the SEAL are demarcated in Table 2.1 (Wright, Martland, Stafford & Stanger, 2006:9). As can be seen in Table 2.1, the strategies used in the SEAL become increasingly sophisticated. In the earlier stages, learners often need to see the items they are counting, whether they are physical objects or representations of objects, e.g. drawings or tallies. By the time learners reach stage 5, they are able to use advanced strategies such as those listed in Table 2.1. The facile number sequence stage includes a range of strategies that learners reach after having progressed through the previous stages of number learning.
Learners make use of procedures that display a deeper conceptual knowledge: they apply their knowledge of compensation, commutativity, doubles and inverse operations, for example. The strategies which learners use when they solve problems display the stages of their conceptual development. When solving the same problem, some learners may use a strategy that reflects a lower or higher stage of number learning and development compared to other learners. Such distinctions may become evident in their writing when they solve and explain problems.
Once learners have progressed to at least stage 3 of the SEAL, they generally begin to develop base-ten arithmetical strategies. They develop an understanding of groups of ten within numbers, as opposed to working with individual items (counting in ones). As learners develop their conceptual understanding, more complex strategies may become evident in their writing when they solve and explain problems. Wright, Martland and Stafford (2006:22) provide a detailed description of the levels of strategies depicted in Table 2.2 below.
Conceptual place value
According to Wright (2013:27), learners make use of materials such as bundling sticks, ten strips and hundred squares when they develop their understanding of place value. Learners' conceptual understanding of place value progresses by incrementing in tens on the decuple (adding using the multiples of ten) and then off the decuple (adding ten to any number, e.g. to 47).
Following this process, learners decrement by tens off the decuple (subtracting ten from any number) and finally they are able to give ten more and ten less as well as a hundred more and a hundred less of given numbers.
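A brief illustrative sequence (the starting numbers are merely examples) of the progression just described:

Incrementing on the decuple: 40, 50, 60, 70, …
Incrementing off the decuple: 47, 57, 67, 77, …
Decrementing off the decuple: 47, 37, 27, 17, …
Ten more and ten less than 47: 57 and 37; a hundred more and a hundred less than 147: 247 and 47.

Each step relies on seeing the tens (and later hundreds) within a number rather than counting in ones.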
At Grade 3 level, learners are expected to have some conceptual understanding of place value as stated in the CAPS Mathematics curriculum (South Africa DBE, 2011). Learners who have appropriately developed their understanding in this area make use of it as part of their strategies when solving mathematical problems.
Early multiplication and division
Learners progress through five levels when they develop their understanding of multiplication and division, which forms part of the LFIN. The five levels of early multiplication and division are depicted in Table 2.3 below (Wright, Martland, Stafford & Stanger, 2006:14). As with the SEAL, learners progress through the levels of early multiplication and division.
When they are at levels 1 and 2, they require individual items to be counted. At level 1, learners do not count in multiples whereas they use more advanced counting strategies at level 2. When learners have reached levels 4 and 5, their understanding of multiplication and division is more abstract because they do not require items, whether physical or drawn.
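As a rough illustrative sketch (the numbers are hypothetical and the descriptions are paraphrased from the account above), the growing sophistication can be seen in how a learner might find three groups of four:

Counting every item in ones (level 1): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
Counting items in multiples (level 2): 4, 8, 12
Working without items (levels 4 and 5): 3 × 4 = 12, derived or recalled as a known fact

The same answer is reached at each level, but the later strategies no longer depend on physical or drawn items.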
To conclude, levels of conceptual understanding according to the LFIN have been set out.
This delineation of levels relates to this study in terms of analysis and reflection on learners' work, especially when solving mathematical problems. Such delineation allows a comparison to be made between the strategies learners used in the pre-test and the post-test. The next section of this chapter addresses writing in mathematics.
WRITING IN MATHEMATICS
The purpose of this study is to investigate how various types of writing tasks support Grade 3 learners in solving mathematical problems. In this section of the chapter, the purpose of writing in the mathematics classroom is examined. Reasons are given for including writing as an essential part of the mathematics lesson: it supports the development of mathematical knowledge that is crucial to effective problem-solving strategies.
The purpose of writing in mathematics
Writing within mathematical problem-solving is important because it encourages children to develop a meaningful understanding of mathematical knowledge. Davison and Pearce (1998:42) explain that performing a writing task requires learners to reflect on, analyse, and synthesize the material being studied in a thoughtful and precise way. Luneta (2013:87) adds that, when learners write a reflection on where they are stuck, it allows them to reflect on their mathematical understanding of concepts. Putting their strategies on paper allows learners to be mindful of their own strategies while verbal feedback can often be lost over time. Writing helps learners clarify and define their thinking as well as examine their ideas and reflect on what they have learned in order to deepen and extend their understanding (Burns, 1995a:13). Their final work is not meant to be a polished product but rather a provisional means of expressing and consolidating their understanding of mathematical ideas (Burns, 2007:38). Kuzle (2013:43) concurs by stating that writing is a tool for learning and communicating mathematics. Writing in mathematics is one of the means of representing and communicating understanding: it helps the learner to make sense of mathematical ideas in order to construct knowledge. According to Columba (2012:3), conceptual understanding develops when learners represent their understanding using words, symbols, graphs and discourse.
Researchers (Jurdak & Zein, 1998; Miller, 1991, 1992; Bagley & Gallenberger, 1992; Morgan, 1998) concur on the importance and purpose of writing in mathematics. Through the active process of writing, learners read the product of their thinking on paper: it is a way of knowing what they think and deepening their understanding. Learners reflect on, clarify and explain their thought processes.
When solving mathematical problems, writing plays a vital role in the learner's development of conceptual knowledge. According to Carruthers and Worthington (2006:13), this development occurs when learners make meaning personal: they make actions, marks, draw, model and play. Writing may take various forms in a mathematics lesson and, more so, in problem-solving because it encourages learners to engage actively with their previous knowledge to develop strategies and methods for solving a more difficult problem. Writing creates opportunities to make connections to the mathematical knowledge required by the problem. Writing in the form of words, pictures and numbers provides a platform for learners to explain their thinking to themselves and peers by placing emphasis on their process and not just the answer (Van de Walle & Lovin, 2006:16). Whitin and Whitin (2008:432) add that learners should be encouraged to write with increasing clarity and detail to demonstrate their understanding of a problem. In this study, learners are introduced to the use of writing in mathematics. Learners in this study had opportunities to engage with various writing tasks for different purposes. This study focuses on writing as a method to help learners solve mathematical problems. The aim is to gauge whether the use of writing supports learners to make sense of mathematical problems when they solve and explain them.
Representing thinking through the use of writing
While learners are attempting to solve mathematical problems, they represent their thinking through what they write. They grapple with a problem and attempt to make sense of it. Their writing is a reflection of what is happening in their minds. Sperry Smith (2013:171) explains that "writing about math is one way to reflect on the process and to explain and defend ideas". Writing provides a key opportunity for learners to develop, clarify and communicate their thinking. Luneta (2013:109) claims that as learners represent their understanding by writing, they communicate mathematical ideas and understanding about concepts to themselves and to others. Writing in mathematics allows them to reflect, check, amend and understand what they have done (Orton, 2004:91).
Learners may use a variety of ways to represent their thinking when engaging with a particular problem. A mathematical problem can have multiple paths to a solution. A group of learners may have different representations of their thinking on paper: they may use various strategies to solve the same problem. The use of different strategies and representations may be due to the various mathematical abilities of the learners who understand mathematical concepts at varying levels. Some learners may write and solve problems at more sophisticated levels than others based on their levels of conceptual understanding. Through engaging with the writing of others, learners are able to compare and learn from the strategies of their peers. Luneta (2013:125) adds that learners gain better problem-solving skills when they are presented with both text and pictures. While learners write, they can represent their thinking through numbers, words and pictures and make sense of the mathematical problem.
As in a similar research study conducted by Amaral (2010), it is the thinking process that is of importance when using writing in mathematics, not the presentation, spelling and/or grammar. The purpose of writing is to make sense of the mathematical problem and communicate thinking and understanding (Burns, 2007). The piece of writing is a product of the learners' thinking and not a test of their writing abilities.
The use of representations in writing provides teachers with insights into learner thinking (Luneta, 2013:126). Through writing, the teacher is made more aware of individual learners' understanding, misconceptions and difficulties which may be responded to individually or corporately (Borasi & Rose, 1989:358). Miller (1991:517) agrees: misconceptions can be dealt with through the use of writing. These insights may determine the direction of future lessons which address misconceptions timeously and appropriately. This provision helps the teacher to implement intervention strategies, either individually or as a class (Miller, 1991:519). This type of intervention may, in turn, improve the teacher's mathematics instruction as a whole.
In another study, conducted by Fluent (2006:43), it was found that learners' improvement in written explanations was nominal despite sharing strategies verbally. In the experience of the researcher/teacher of this particular study, learners had opportunities to solve problems individually and in pairs. They engaged in discourse in their mathematical ability groups concerning their strategies. The results of this study will be discussed in chapter 4 in comparison to Fluent's study.
Writing in mathematics lessons
Writing in mathematics may be implemented in different ways. Miller (1992:354) suggests engaging learners in a short writing activity at the start of a lesson to express their thinking and prepare for the lesson. These writing activities provide opportunities for a written dialogue between the teacher and learners. Learners who find difficulty in mathematics may feel more at ease expressing their confusion because this dialogue is private (Miller, 1991:518). However, the teacher needs to consider learners who have language difficulties: such learners may be less able to express themselves through writing.
Alternatively, Elliot (1996:92) discusses the benefits of concluding a lesson with a writing activity to reflect on the day's lesson. This conclusion may guide the teacher in preparation for the next lesson if there are misconceptions which require correction.
In this study, specific writing tasks were introduced as a means to cultivate the use of writing in mathematics to support learners when they solve mathematical problems and explain their solution strategies. The type of writing task being implemented determined the point at which the task was used during the lesson. Certain writing tasks such as writing to record in a journal and writing to explain are likely to take place during the conclusion of a lesson. On the other hand, writing to solve mathematical problems is better suited to an earlier part of the lesson to give learners the opportunity to engage in group discussions on their strategies and explanations. The following section describes the various types of writing tasks that were implemented in this study.
TYPES OF WRITING TASKS IN MATHEMATICS
Writing in mathematics takes various forms which require learners to record their thinking in different ways. Writing may occur with or without revision (Wilcox & Monroe, 2011:521). It can be introduced through short, simple writing tasks at first to encourage and develop the use of writing tasks in the mathematics lesson (Meier & Rishel, 1998:7).
After reading the work of various researchers, the researcher chose to use writing in mathematics as explained by Burns (1995a). Burns describes different types of writing tasks and their purpose in developing conceptual understanding. The writing tasks presented in her work were conducted with learners from different grades throughout the primary school years into early high school. Since this study focused on the Foundation Phase, Burns's writing tasks were most suitable: her research included learners from these grades. She provides a detailed methodology of each writing task that largely links to the aims of this study. There are five different writing tasks that will be described further below: writing to solve mathematical problems, writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes (Burns, 1995a) and shared writing (Wilcox & Monroe, 2011).
Although not directly from Burns's work, shared writing was added to this study because it linked to the current curriculum guidelines in use in South Africa. Shared writing is an element of the Balanced Language Approach (BLA) in which learners and the teacher write together.
The overarching purpose of using writing in mathematics (discussed earlier in this chapter) is for learners to clarify, explain and communicate their thinking (Burns, 2007:38). Through the task of writing, early learners gain the opportunity to develop conceptual knowledge and build mathematical connections. Each writing task, according to Burns, has a different purpose and is used to develop specific areas where learners engage with a particular perspective of explaining and communicating their thinking. This study investigates how writing tasks can be used to support learners to solve word problems.
Writing to solve mathematical problems
The use of this writing task is specifically to solve mathematical problems: learners write to solve and explain their strategies. It is distinct from writing to explain, for example, which focuses on learners explaining their understanding of particular mathematical concepts.
According to Kuzle (2013:44), writing is considered a method to support the acquisition and development of mathematical knowledge that enables an improvement in problem-solving abilities. As learners write about their problem-solving strategies, they are able to make sense of their mathematical understanding. This concurrent writing reveals their understanding of the mathematical concepts involved in the particular problem they are dealing with as well as their understanding of how mathematics relates to real life. Wadlington, Bitner, Partridge and Austin (1992:209) concur that writing about problem-solving connects mathematics to the world around the learners. Burns (1995a:69) explains that, in order to solve problems, learners should use a variety of strategies, and verify and interpret results. This combination of strategies creates opportunities for them to develop and explain their thinking. In doing so, learners not only record their solutions but provide their reasoning as to why the answer made sense to them (Burns, 1995a:76). Jacobs and Ambrose (2009:265) emphasise that learners' representations of a strategy are linked with their interpretation of the problem and should reflect how they thought about and solved the problem. In a study conducted with preservice teachers, Kuzle (2013:53) notes that writing about the problem-solving process enabled participants to better understand and justify their thinking when they reflected on their strategies and solutions. Although the participants of this research study are Grade 3 learners, the use of writing to solve mathematical problems encourages learners to consider and reflect on their strategies. This writing task engages learners in organising their thoughts: they write an explanation of the processes they followed to solve a mathematical problem. Burns (1995b:41) encourages learners to discuss their ideas before engaging in writing: learners share possible strategies to solve the problem as a springboard for writing about their individual strategies and explanations. She adds that prompts displayed on the board help learners to start writing if they require such assistance. In the Foundation Phase, learners are often given opportunities to solve mathematical problems in pairs or groups.
Despite working co-operatively, they should still write about the experience individually to develop and clarify their thinking (Burns, 2007:39). The think-write-share strategy develops learners' own understanding of the mathematical problem by thinking and recording their responses on paper on their own before participating in pair or group discussions (Wilcox & Monroe, 2011:522): the thinking and writing aspects of problem-solving happen individually and learners share their thinking and writing with others. On some occasions during this study, the think-write-share strategy was employed as a means to support learners in their writing tasks when they solved mathematical problems.
Writing to record (keeping a journal or log)
Burns (1995a:51) explains that journals or logs allow learners to keep ongoing records about what they are doing and learning in their mathematics class which can be used to record their thinking when they notice something, make an observation, or report a discovery.
Keeping a journal or log serves to enrich the quality of discussions, review previous knowledge and construct meaning. Borasi and Rose (1989:348) refer to a journal as a "personal notebook where students can write down any thought related to their mathematics learning". Through writing in their journals, learners are actively engaged in the process of making connections and constructing meaning. In this way, the mathematical content makes more sense to the learner when they construct and internalise knowledge. Yang (2005:14) refers to journal writing as mathematical diary writing. He explains that, through mathematical diary writing, learners are able to communicate and reflect on their thinking by explaining what they learn in class in their own way. Bagley and Gallenberger (1992:661) describe the purpose of journal writing as allowing learners to summarise and associate ideas, define concepts, experiment with concepts, review and reflect on topics and strategies and express their feelings and frustrations with regards to their mathematics learning. Yang (2005:13) adds that the use of diary writing enables learners to enjoy problem-solving through writing as opposed to only representing their thinking. Jurdak and Zein (1998:416) find that there is a relation between journal writing and mathematics instruction. The results of their study show its positive effects on conceptual understanding, procedural knowledge and mathematical communication. O'Donnell (2006:351) concurs that learners need daily opportunities to write about their mathematics lessons in a journal. The teacher may use prompts to focus their journal writing.
Although Borasi and Rose's (1989:355) research involves university students, they acknowledge the need for prompts because some students find difficulty in writing spontaneously. A framework may be given that focuses on a specific lesson or mathematical concept that has been taught (Burns, 2007:39). Freed (1994:24) suggests a flexible use of the journal that allows opportunities for free writing as well as structured writing activities with the use of prompts. As learners write freely in this way, they do so without concern about spelling, punctuation and style (Bagley & Gallenberger, 1992:661).
Journals are ongoing records of learners' thinking which provide learners with regular opportunities to reflect on mathematics lessons and/or concepts and analyse their own learning. Bagley and Gallenberger (1992:660) explain that a journal allows the teacher to informally evaluate learners' levels of comprehension. Journals in mathematics provide a record of learners' development for the teacher (Amaral, 2010:24). Amaral (2010: 67) shows that journals help to keep the teacher informed of learners' progress, drive instruction, improve learners' communication and increase learners' understanding of mathematical concepts. Keeping a journal creates a "private dialogue between the teacher and each student …(through) the exchange of questions, responses, comments and remarks" (Borasi & Rose, 1989:360). At a Foundation Phase level, a journal may prove more challenging because learners find it difficult to engage in a written dialogue. However, simplified comments and words of encouragement may stimulate individual learners to express their thinking. In this study, written dialogue was used at times to communicate with learners.
Comments were written about their strategies when solving problems as well as when they engaged in other writing tasks. Learners were asked questions about their strategies in order to explain further or extend their thinking through their writing.
Writing to explain
The purpose of this writing task is for learners to explain what they understand about a specific mathematical topic or concept. This writing task could be referred to as note-taking or note-making where learners list the main points of a lesson as well as their reflections and perceptions (Wilcox & Monroe, 2011:522). Freed (1994:23) refers to note-taking as defining a concept where a term is explained in the learner's own words. Learners can "summarise what they learned and tell how to apply it" (Freed, 1994:23). When using this type of writing task, there is a focus on a particular mathematical concept that learners are required to clarify and explain. For example, after having learnt fractions, learners write an explanation of fractions in their own words. In their explanations, they are encouraged to write about what they have learned and understood.
Writing about thinking and learning processes
This form of writing task does not focus on a specific topic or mathematical concept. Burns (2007:40) suggests that learners write about their favourite or least favourite activities, qualities of a good problem-solving partner, directions for an activity or game or a letter to visitors describing mathematics activities in the classroom: "A letter to a friend, relative or teacher can combine reflective and communicative writing" (Freed, 1994:24). This type of writing task has a more general focus where learners engage in writing more freely. Writing in this way allows learners to think beyond the actual mathematics lesson and more on mathematics in general.
Shared Writing
In the CAPS curriculum for home language, shared writing is mentioned as a methodology to develop learners' writing skills in literacy. Shared writing involves the teacher and learners writing together or learners writing in pairs or groups. This kind of co-operation differs from the think-write-share strategy mentioned previously. Wilcox and Monroe (2011:526) suggest that teachers use this writing experience in the mathematics classroom to review and internalize mathematical concepts and ideas as well as develop mathematical communication. Together in the ZPD, the teacher and learners formulate a mathematical story reflecting their understanding of a particular concept. Learners may then take different sentences to write as a final draft and create representations for a class book. A similar approach could be used to make alphabet books about mathematics vocabulary. Freed (1994:23) suggests writing poetry about mathematical concepts, vocabulary or topics such as limericks, cinquains and concrete poems as well as a rap. Involving learners in activities such as these encourages them to put their knowledge and understanding of mathematics across in a creative way and, at the same time, solidify that knowledge. Shared writing allows learners to collaborate as a class or group, thus encouraging the element of social interaction in a constructivist classroom.
Previous research (Burns, 1995a; Luneta, 2013; Jurdak & Zein, 1998; Miller, 1991, 1992; Bagley & Gallenberger, 1992; Morgan, 1998) shows that writing can be used to help develop understanding of mathematical concepts and processes. Although there are different writing tasks presented by Burns (1995a), there are similarities, overlaps and links between them. The use of writing tasks enables learners to clarify and represent their thinking.
Writing may enhance their conceptual understanding generally when they are stimulated in this way to reflect on what they have done. Different types of writing tasks provide valuable tools to deepen the individual learner's knowledge while working collaboratively with the teacher and peers. In this study, such collaboration is manifested in examining learners' writing and the support it gives mathematical problem-solving.
THE ROLE OF LANGUAGE IN MATHEMATICAL PROBLEM-SOLVING
In this section of the literature review, the role of language in mathematical problem-solving is examined. When engaging with problems, learners are expected to read and understand as well as solve and explain their strategies. The language used in mathematics and how this is applied in problem-solving contexts is discussed.
According to Luneta (2013:105), learners should know and understand the language of mathematics and develop skills to apply it. Clemson and Clemson (1994:84) describe mathematics as a language of symbols which transcends words, yet learners are expected to use words in talking, reading and writing about what they have encountered in symbolic form. Often, words that are used in everyday language take on a different and more specific meaning when used in a mathematical sense (Luneta, 2013:94). Such secondary meanings may cause confusion for learners, especially those who have limited language abilities.
Often the errors in learners' thinking stem from their misunderstanding of the vocabulary of mathematics (Koshy, Ernest & Casey, 2000:177). It is imperative that time is spent teaching mathematical vocabulary linked to relevant concepts so that learners' understanding is enhanced. Learners acquire the language of mathematics through careful explanation, listening and practice (Sperry Smith, 2013:56). Burns (2007:372) explains that learners acquire mathematical language when words are used in contexts that bring meaning to them.
However, Burns (2007:43) urges that "teaching knowledge of the mathematical ideas and relations must precede teaching vocabulary". Only then do learners connect their knowledge to mathematical language. Clemson and Clemson (1994:98) add that reading competence needs to be achieved in order to solve mathematical problems. The wording of a problem could be read aloud and talked through before solving it in order to assist learners with language difficulties and to develop understanding. Learners may rely on keywords presented in the problem, which may mislead them. Sperry Smith (2013:56) explains that, in order for learners to understand the mathematical concepts and processes required by the problem, their attention needs to be drawn to the way the problem is phrased. This strategy may support learners in developing a better understanding of the problem they are reading.
Language is important in this study: learners were engaged in reading, talking and writing in mathematics lessons. The impact of learners' level of language competency on their individual ability to solve problems will be discussed in Chapter 5.
CONCLUSION
The first section of this chapter focuses on the theoretical framework. Vygotsky is presented as the main theorist with a particular focus on social constructivism, the ZPD and scaffolding.
Other theorists who link to or elaborate upon Vygotsky's theories are also drawn upon. Bruner's ideas on scaffolding are included with Vygotsky's. Skemp's theory on the construction of schemas relates to the learners' development of conceptual knowledge. Learners' understanding of mathematical concepts is discussed using Sfard's theory of the process and object of mathematical ideas.
The second section of chapter 2 reviews literature that relates to the research question.
There is a discussion of research about mathematical problems and the use of problem-solving in the mathematics classroom. Types of problems that learners encounter in mathematics and the levels of understanding at which they solve these problems are set out.
INTRODUCTION
The purpose of this research study is to investigate how various types of writing tasks support Grade 3 learners in solving mathematical problems. In this chapter, the research methodology and design of the study are described.
In order to answer the research questions, the study makes use of a qualitative research design in the form of a case study.
The research site was a primary school in Cape Town, South Africa. The sample for this study was a Grade 3 class from which eight learners were purposively selected. The data collection instruments that were used included interviews, audio-recordings, field notes and learners' written work. In this chapter, the purpose of the instruments and the process of gathering and analysing the data are described. The trustworthiness, validity and reliability of the study are explained and the ethical considerations outlined.
The research design is the logical plan that guides the process of linking the data to be collected to the research questions and the conclusions of the study to ensure that the evidence addresses the research questions (Yin, 2009:26). The design could take the form of a quantitative, qualitative or mixed methods study. Quantitative research establishes generalisable trends and objective facts through the use of surveys, questionnaires and statistics, whereas qualitative research studies human beings and their behaviour in order to make sense of feelings, experiences, social situations and phenomena (Rule & John, 2011:60). Salkind (2009:12) refers to "the general purpose of qualitative research methods (as examining) human behavior in the social, cultural, and political contexts in which they occur". In addition, Denzin and Lincoln (2011:3) state that qualitative research attempts to make sense of phenomena and the meanings people bring to them. Mixed methods research combines these traditions to obtain a more holistic understanding of the data and, in turn, the research results. This research study makes use of a qualitative research design because the researcher investigates how writing tasks support learners in solving mathematical problems. Learners' experiences of writing tasks were observed during mathematics lessons. Lessons provided the social context for data to be collected. Data were collected from a pre- and post-test, as well as from interviews with selected learners. In this qualitative study, the researcher sought to interpret how writing could be used as a tool in the mathematics classroom to support and develop learners' mathematical problem-solving skills and to determine whether these writing tasks could be implemented successfully in the Foundation Phase.
RESEARCH DESIGN
There are a number of design methodologies that are commonly used in qualitative research studies. These include action research, comparative research, case study, evaluation and experiment (Thomas, 2011:36). This research study makes use of a case study approach.
Simons (2009:21) defines a case study as follows: "Case study is an in-depth exploration from multiple perspectives of the complexity and uniqueness of a particular project, policy, institution, programme or system in a 'real-life' context. It is research-based, inclusive of different methods and is evidence-led. The primary purpose is to generate in-depth understanding of a specific topic." Rule and John (2011:4) define a case study as a systematic, in-depth investigation of a particular instance in its context in order to generate knowledge. Writing tasks were introduced systematically during the data collection period, beginning with the use of writing to solve mathematical problems. Learners already had prior experience of solving mathematical problems and were familiar with some elements of using writing to represent their thinking. They were progressively introduced to other writing tasks such as writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes and shared writing. While learners engaged with the different writing tasks, in-depth observations of their writing and the development of their writing strategies were conducted. The phenomenon, in the case of this study, was the learners' use of writing when solving mathematical problems, which was monitored in its natural context, mathematics lessons, over a given period of time (Swanborn, 2010:13).
This case study has elements of a design experiment in which particular forms of learning are engineered and studied within a particular context (Cobb, Confrey, diSessa, Lehrer & Schauble, 2003:9). A design experiment involves various elements such as tasks or problems, discourse, the established norms of participation, the tools provided, and the practical means by which classroom teachers can orchestrate relations among these elements (Cobb et al., 2003:9). In this study, Burns's (1995a) American-based use of writing in the mathematics class was applied in a South African Foundation Phase classroom.
Based on Cobb et al.'s description above, the tools in this study were the writing tasks that were used to assist learners to express their mathematical thinking as they solved mathematical problems (the tasks) and to develop their mathematical understanding further (Burns, 1995a:49). Relations among these elements were orchestrated by modelling the types of writing tasks to the learners and noting how they could be used to solve and explain the thinking behind the solution of mathematical problems. In doing so, learners think, reason and make sense of mathematical ideas in order to support and enhance their problem-solving abilities (Burns, 1995a:13).
RESEARCH PLAN
This study incorporated various activities that sought to answer the research question. A pilot study was conducted with a different class of Grade 3 learners prior to the data collection. Details concerning the pilot study and its significance in the overall plan for the data collection are discussed later. A pre-test was given to the selected Grade 3 class followed by interviews of eight learners regarding their solutions in the pre-test. Writing tasks were introduced and implemented in the class as an intervention to support learners in solving mathematical problems. The data collection period concluded with a post-test and another set of interviews with the same eight learners. Together, these activities of the research plan sought to determine whether the writing tasks had supported the learners in their mathematical understanding and their ability to solve and explain problems. In the following sections the execution and purpose of the various activities of the research plan are elaborated.
Pilot Study
A pilot study was conducted with a different class of Grade 3 learners. Yin (2009:92) suggests that a pilot case study enables the researcher to refine the content of the data collection plan and the procedures to be followed. A pilot study provides conceptual clarification for the research design so that the researcher is able to develop relevant questions for the actual case study (Yin, 2009:92). The purpose of this pilot study was to give direction to the research plan by assisting in the design of the mathematical problems to be used for the actual study. This process took place in the year prior to the data collection period. The pilot study provided an opportunity to test various types of writing tasks while learners solved mathematical problems. Testing was a way of gauging the level of teacher support required. This technique helped to link particular mathematical problems to a suitable type of writing task that supports learners when they solve mathematical problems.
A total of 35 learners participated in the pilot study. Before the pilot study, parents of the learners concerned were informed that the normal teaching and learning required by the curriculum would not be adversely affected. They were made aware that none of the learners' written pieces would be used in the thesis report.
During the pilot study, learners used a journal only and were given eight writing tasks. The writing tasks used during the pilot study were: writing to solve mathematical problems, writing to record (keeping a journal or log) and writing to explain. Due to time constraints, learners did not have an opportunity to use shared writing or writing about thinking and learning processes. After reading their entries using the writing task, writing to explain, it became apparent that much more guidance and support were needed on the purpose of each writing episode. It was noted that instructions and expectations should be clarified better, especially when learners start using writing to explain their thinking. It was imperative to make learners aware that spelling, punctuation and grammar would not be taken into account when their writing was being read. After this explanation was conveyed to the class, a few of the struggling learners felt more at ease when writing in their journals and verbally expressed this greater security during the pilot study.
Learners had one experience of writing to record (keeping a journal or log) during which they reflected on the day's mathematics lesson. Before this writing episode, prompts were displayed and discussed during other lessons when learners gave verbal responses to the prompts. However, it was found that, when learners were expected to write their responses, they appeared more restricted in writing than they had been verbally. This insight, gained from the pilot study, was valuable in the planning and implementation of the writing tasks in the actual study.
The pilot study was beneficial in preparing for the data collection period: it allowed the researcher to gain insight into the learner support required when introducing and implementing various types of writing tasks. The pilot study emphasized the importance of presenting the purpose of each writing task which was modelled to the learners. When learners were engaged in writing to solve mathematical problems, the researcher observed that verbal explanations of solutions did not reflect their written strategies when solving the problem. Verbal explanations were sometimes different to written solutions and explanations. The dichotomy between their verbal and written explanations demonstrated that they may not have understood the purpose of the writing task.
During the pilot study, a variety of mathematical problems were presented to learners. Some of the mathematical problems dealt with fractions. It was a concern that the variety of problems made the focus of the study too broad. In the activities of the research plan of the actual study, mathematical problems using whole numbers that involved addition, subtraction, multiplication and division were used. Other mathematical problems were tackled during data collection as part of the normal teaching and learning programme.
However, certain mathematical problems were purposively selected for this research study.
These problems were selected because they used whole numbers and aimed to address the research questions of this study.
The insight gained from the pilot study made it possible to identify suitable mathematical problems to be used during the data collection period and to refine the data collection plan.
Data collection plan
Data were collected over a period of 10 weeks including a week at the beginning and the end for the pre-test and post-test respectively. This process spanned two school terms: data were collected from the middle of one school term to the middle of the next. It was envisaged that there would be three writing episodes per week. However, in practice, this was not the case because the normal teaching and learning programme of the school needed to be considered, which included time for assessments and related interventions. Some public holidays fell within the data collection period. Only one writing episode occurred in week 2 and week 6. In week 7 learners engaged in writing to explain geometric patterns. Although this exercise did not fall under whole numbers, it was decided to use this as a topic since it followed on from the lessons covered at the time. While this writing episode did not offer much evidence for this study, it provided insight into whether the use of writing tasks could be applied across content areas. The following outline shows the data collection plan as it had been adjusted to accommodate the needs of the class and school.
Week 5: Model writing about thinking and learning processes (letter); write about thinking and learning processes (letter to principal); Problem 5; Problem 6
Week 6: Problem 7
Week 7: Model shared writing; shared writing (story); writing to explain (geometric patterns); Problem 8; Problem 9
Week 8: Write to explain (fractions); write about thinking and learning processes (favourite activity); Problem 10
Week 9: Write about thinking and learning processes (qualities of a good problem-solver); Problem 11; Problem 12; Problem 13
Week 10: Post-test (five mathematical problems)
The following sections describe the different aspects of the data collection period and how they unfolded during this study.
Pre-test
At the beginning of the data collection period, five mathematical problems (Appendix E) were given to the learners to solve as a pre-test over a period of five days. Three of these problems were addition/subtraction problems while two problems were multiplication/division problems. Structuring the pre-test in this way gave more opportunities to analyse the learners' strategies thoroughly. Their strategies would be used to select eight learners to be interviewed after the pre-test. The process and purpose of the pre-test were explained to the learners. The purpose of the pre-test was to gauge the learners' ability to solve and explain mathematical problems at the beginning of the data collection period. It was explained to the learners how the pre-test would be conducted. Learners were each given an A4 sheet of paper to be used for the five problems presented to them during the pre-test. Although all the learners were presented with mathematical problems that shared the same context, the number ranges of the problems were differentiated according to the three mathematical ability groups in the class. The mathematical problems were read aloud before learners solved them to assist learners with reading difficulties. A brief class discussion took place before solving each problem but, for the most part, learners solved the problems individually so that the researcher could ascertain the mathematical knowledge, skills and strategies learners employed to solve the problems. While learners were solving the problems, the researcher moved around the classroom to observe their strategies and solutions without giving assistance.
First set of interviews
The solutions and strategies that learners employed during the pre-test were used to purposively select eight learners who were individually interviewed. The eight learners represented the different mathematical ability groups. Two learners were selected from the above average ability group and three learners each from the average and below average ability groups. The sample of learners was selected based on varying levels of success in solving the pre-test mathematical problems in order to obtain a variety of learners' written work. The purpose of the interviews was for learners to explain the written strategies they had used when solving the problems of the pre-test. The verbal explanations of their strategies and solutions were compared to their written strategies. The interviews helped to gauge learners' understanding of problems because learners verbally explained their strategies. (Appendix H lists the interview schedule.) The interviews were audio-recorded and transcribed.
Writing tasks
Following the pre-test and first set of interviews, the different types of writing tasks were introduced and implemented with the Grade 3 class. The researcher modelled the different types of writing tasks as presented by Burns (1995a) when they were introduced to the learners. Modelling allowed learners to see how each type of writing task could be used and provided them with opportunities to practise implementing them. During these writing episodes, the purpose of the writing task was communicated to learners as a means to support them while they were solving mathematical problems and help them to explain their thinking. From the insights gained through the pilot study, the researcher tried to ensure that learners participating in the study understood the purpose and expectations of the various types of writing tasks.
Learners were encouraged to participate in the class discussion even though it was largely led by the researcher to help them understand how the writing tasks applied to mathematics in the context of problem-solving. After this phase, each type of writing task was modelled to the learners on a big sheet of paper which was displayed on the mathematics wall in the classroom for the rest of the data collection period. While each writing task was being modelled, the researcher continued to link the writing task to the mathematical content.
Learners were invited to express their ideas as well as to enhance their understanding of writing in mathematics.
At Grade 3 level, the end of the Foundation Phase learning experience, learners are expected to have gained a certain level of competence when working with the basic operations.
To this end, a variety of word problem types was used involving addition/subtraction and multiplication/division of whole numbers. The problems were used as tools to gauge learners' understanding of mathematical concepts within basic operations.
The types of writing tasks taught to the learners during the data collection period aimed to support their understanding and ability to solve word problems. Learners used their journals to record and clarify their thinking when solving mathematical word problems. This technique enabled them to become accustomed to writing and the expectations of writing.
The mathematical problems (listed in Appendix G) were purposively selected prior to the data collection period. The results from the problems used in the pilot study assisted in selection of the problems for the actual study. In chapter 2, different problem types were discussed according to the basic operations the problems covered. Some criteria were used in the selection of the mathematical problems. The selected mathematical problems used whole numbers. The problem types used one or more of the four basic operations to arrive at the solution without incorporating other mathematical concepts such as fractions. The problems were presented in such a way that there would be more than one strategy to reach the solution. For some of the problems, learners could use inverse operations to reach the same solution. Four mathematical problems involved addition and/or subtraction. The remainder of the problems engaged learners in repeated addition/multiplication and division.
One of these problems (problem 7) lent itself to using addition, subtraction and multiplication while another (problem 9) lent itself to using addition, multiplication and division. Problem 1 was a separation problem where the initial was unknown. Problem 2 was a joining problem where the result was unknown. Problem 6 was a comparison problem where the difference was unknown. Problems 3, 4 and 5 were subtractive division problems. Problems 8, 10, 11 and 13 could be solved as repeated addition or repeated subtraction problem types. In problem 12, learners were presented with a strategy for solving a subtraction problem.
Learners had to analyse the strategy to see where the mistake was made. One non-routine or context-free problem (problem 10) was included in the data collection. This problem was incorporated to see whether learners were able to explain their thinking when solving context-free problems as well.
Modelling writing to solve mathematical problems
The first writing task that was modelled for learners was writing to solve mathematical problems. In the Foundation Phase, learners should have already used writing in solving mathematical problems by the time they reach Grade 3. However, learners are generally not required to explain their thinking in writing; often the expectation is simply to find the correct answer. This task was therefore introduced first in this study because several learners were already familiar with the exercise. Jacobs and Ambrose (2009:265) suggest that learners write about their strategies and representations as a way of reflecting on the problem and how they solved it. The researcher explained that, when a problem is being solved, it should be possible to explain verbally and in writing what the solution is and how it was arrived at. By explaining how the problem was solved, the writer enables the reader to understand why the answer and the solution make sense. What is written is as important as arriving at the answer. When writing, learners may use drawings, numbers and words to represent their thinking in a way that makes sense to all participants. Such a writing task was modelled specifically to solve and explain a problem (see Figures 3.1 and 3.2).
Following this modelled lesson, learners were given an opportunity to solve and explain a mathematical problem before another type of writing task was introduced.
Modelling writing to record (keeping a journal or log)
Writing to record (keeping a journal or log) was introduced next and the purpose of this writing task was explained to learners. According to Burns (1995a:51), this type of writing task should be an ongoing record of what learners are doing and learning in their mathematics class. It can be used to record their thinking as lessons occur. The journal prompts, called sentence starters, were displayed (see Appendix I). The prompts were discussed during which time learners suggested possible sentences orally. The ways in which writing prompts could be used as a starting point to explain learners' thinking were then modelled.
Learners were not expected to write in their journals daily during the data collection period.
They were encouraged, however, to write as they noticed something or made a discovery which deepened their conceptual understanding of mathematics as they solved word problems (Burns, 1995a:51).
Modelling writing to explain
The next writing task, writing to explain, followed from a mathematics lesson on place value.
At the end of the lesson, the researcher expounded on this type of writing task. A class discussion ensued around the purpose and process of this writing task. The researcher explained that this type of writing task is employed to explain a particular mathematical concept: it helps to organise thinking and share what is understood with others. Learners in the class suggested sentences about place value upon which the class deliberated. As they did so, the researcher modelled writing to explain on a big sheet of paper (see Figure 3.3). Figure 3.4 shows the sentences that were written to explain the concept of place value. This record was the learners' first attempt at explaining place value. At a later stage, learners revisited this explanation to determine whether they could change or add to this explanation so that it made more sense. This alteration phase occurred after the data collection period once learners had more opportunities to engage with the concept of place value in mathematics lessons.
Modelling writing about thinking and learning processes
The next writing task that was modelled was writing about thinking and learning processes.
Modelling shared writing
Shared writing is not specifically drawn from Burns (1995a) but rather from researchers such as Wilcox and Monroe (2011). As mentioned in the literature review (paragraph 2.7.5), this type of writing task was included in the study because it is a methodology prescribed in the RNCS (CAPS) curriculum for languages currently used in South Africa. It was explained to learners that shared writing in mathematics can be used to review and internalise mathematical concepts and ideas and develop mathematical communication (Wilcox & Monroe, 2011:526). The measurement concept of one centimetre was the context for the story modelled to the learners. Learners were encouraged to give ideas as the story developed which enabled the link to be made between the mathematical concept and the story.
Summary of implementation of writing tasks
Different types of writing tasks were introduced gradually and not all at once so that learners became accustomed to using each one. As Burns's (1995a) teaching methodologies suggest, class discussions took place to brainstorm ideas before learners engaged in writing.
These discussions assisted learners to formulate their own thinking and extend their ideas.
After each writing task was introduced, learners engaged in writing episodes that allowed them to implement what was modelled to them. In all writing episodes, learners were urged to explain their thinking. At times, sample writing was discussed to deepen learners' understanding, allowing them to reflect on their thinking. Learners received feedback on their writing as notes were made in their journals or on their papers, or as the researcher spoke to them during or after the writing episode. As Amaral (2010) suggests, any feedback given, whether verbal or written, should be positive to enhance the ability to appropriate and implement writing tasks. Learners were encouraged to write in their journals whenever they had an idea, made an observation or noticed something, and not only when instructed to do so. When learners wrote collaboratively as a pair or small group in a shared writing exercise, one piece of writing was on paper with individual learners' names recorded on it.
This study makes use of Burns's (1995a) different types of writing tasks in Grade 3 mathematics lessons. The types of writing tasks were introduced and implemented over a period of approximately eight weeks. They included writing to solve mathematical problems, writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes, and shared writing. In chapter 4, the findings of the writing episodes that learners engaged in after the different types of writing tasks were modelled are described.
Post-test
Learners solved five mathematical problems as a post-test at the end of the data collection period to gauge whether the use of writing supported them in solving mathematical problems.
The problem types used in the post-test were similar to those used during the pre-test. Two of the problems in the post-test were addition/subtraction problem types while three problems were multiplication/division problem types. The differentiated mathematical problems are listed in Appendix F.
The post-test was conducted in a similar manner to the pre-test. The mathematical problems were read aloud to assist learners with language difficulties. However, learners solved the problems individually. The strategies and explanations learners wrote were compared to those in the pre-test in order to ascertain what improvements or changes occurred in their problem-solving strategies and abilities.
Site
The site for this research was a preparatory school in a suburban area in Cape Town, South Africa. It is a quintile 5, Foundation Phase school with classes from Grade R to Grade 3. This school was selected as the site for this study because the researcher was a Grade 3 teacher at the school. Using another school as the site would have been disruptive to the normal teaching programme of the selected school.
Sample
In qualitative research, sampling methods may be random, convenient or purposive (Simons, 2009:35-36). For this study, the participating class was selected through convenient sampling. However, the researcher worked intensively with a sample of eight learners selected from the class. These learners were purposively selected. Purposive sampling involves a deliberate selection of settings, persons or activities to provide relevant information about the goals of this research (Maxwell, 2013:97). The eight learners were selected based on the solutions and strategies they used when solving the mathematical problems during the pre-test. They displayed varying abilities when solving and explaining mathematical problems and represented the three mathematical ability groups present in the Grade 3 class. The LOLT (English) was the home language of the eight sampled learners. These learners displayed varied literacy abilities in the classroom. Although the study focused on writing, its purpose was not to reflect on learners' literacy abilities. Writing in mathematics focuses on the conceptual understanding of mathematical concepts which is represented in numbers, words and pictures. The eight learners were selected based on their mathematical abilities and not their literacy abilities. These learners were interviewed following the pre-test and post-test to explore how writing was used as they solved mathematical problems. Due to the nature of the data collection in this study, it was decided that this study would not report on data from all the learners in the class because the data set would become too large. By purposively selecting eight learners, this thesis report could be more specific in answering the research questions.
The development of the eight learners' problem-solving abilities was determined through the mathematical problems presented to the learners at the beginning and the end of the data collection period. The development was gauged during this period while the researcher observed the learners' development of writing and how it was used to support their mathematical problem-solving abilities. It is important to note that learners were given generic mathematical problems with varying number ranges to accommodate the different mathematical ability groups in the class.
In the following section, data collection instruments used during this study are presented.
These instruments were selected because they best suited the process of gathering data for the purpose of the study.
DATA COLLECTION INSTRUMENTS
The purpose of this research study is to investigate how various types of writing tasks support Grade 3 learners in solving mathematical problems. The data collection instruments used to answer the research questions included learners' written pieces, audio-recordings of the ability group discussions, field notes and interviews. In this section, each data collection instrument and the purpose for including it in this study is described. The manner in which each instrument was used is outlined.
Learners' written work
In the case of this research study, the learners' written tasks were in the form of journals and work done on paper. The types of writing tasks included writing to solve mathematical problems, writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes, and shared writing.
According to Swanborn (2010:73), the advantage of documents, in this case the learners' written work, is that they provide a stable source of data: they are outside the researcher's influence. Rule and John (2011:67) suggest that the documents may prompt important questions which could be pursued further in interviews. In this study, learners' written work during the pre-test was used to select eight learners to be interviewed. The interview questions (Appendix H) referred to the writing these learners used when they solved mathematical problems of the pre-test. However, disadvantages of this data collection instrument include a biased selectivity as well as the possible bias of the researcher herself (Yin, 2003:86). As the teacher of the selected class, the researcher had to be aware of selecting learners based on their use of writing in the pre-test, considering their mathematical abilities and not their literacy abilities.
The eight learners' written work was collected over the duration of the data collection period when they engaged in various writing tasks. Written work was analysed to determine how writing tasks supported Grade 3 learners in mathematical problem-solving. Added to this, the mathematical problems learners solved during the pre-test and post-test were included as part of this data collection instrument.
Audio-recordings
In this study, audio-recordings were made throughout the data collection period since all the learners in the class engaged with writing to solve mathematical problems. Learners solved differentiated problems (listed in Appendix G) according to the mathematical ability group of which they were a part. They were given time to solve the problems and write their solutions and explanations in their journals. The different ability groups discussed their solutions and strategies on the carpet while the rest of the learners continued working on their solutions or completed other mathematics activities. Audio-recordings were made of the ability group discussions. It was decided that audio-recordings would be beneficial as an instrument for collecting data because it may not have been possible to capture as much of the discussions through taking field notes alone. Additionally, audio-recordings were inconspicuous (Creswell, 2014:192) since they allowed learners to explain their strategies and participate in discussions more freely. Learners were not as distracted by the audio-recordings as they would have been had field notes been made during the discussions. The use of audio-recordings allowed facilitation of the group discussions: it was not necessary to take field notes during discussions. The audio-recordings were transcribed, coded and analysed to determine how writing can support mathematical problem-solving.
Interviews
Simons (2009:43) describes interviews as a means of exploring core issues quickly and in depth. Interviews provide opportunities to ask follow-up questions and probe motivations. As a data collection method, "interviews can be time consuming to arrange, carry out and to analyse and yet interviews can also provide some of the richest data" (Hamilton & Corbett-Whittier, 2013:104).
The eight Grade 3 learners selected in this study were interviewed individually regarding their use of writing and how it influenced and supported their thinking when solving mathematical problems. They were interviewed on two occasions: first after the pre-test and again after the post-test. Interviews were semi-structured with a flexible list of questions and key themes (Appendix H). Silverman (2011:162) highlights the skills needed to conduct semi-structured interviews as probing, rapport with the interviewee and understanding the aims of the study. As the teacher of the selected Grade 3 class, the researcher had an established rapport with the learners being interviewed. At the same time care had to be taken not to influence their responses in a particular direction. The interview questions related to specific writing episodes the learners experienced during the pre-test and the post-test. Olsen (2012:33) explains that questions and prompts used in semi-structured interviews both need to be planned in advance. Learners were selected based on their use of writing during the pre-test, so their strategies were considered in the planning of possible prompts. Interviews were audio-recorded so that analysis was not limited and data were captured in their entirety. Interviews were transcribed, coded and analysed. This process is elaborated upon during discussion of the data analysis of this study.
DATA ANALYSIS
The process of analysing data makes sense of what has been collected. Rule and John (2011:75) state that the "key research questions…developed at the start of the study should serve as a guiding force in the analysis process". Data collected for this study were explored in relation to the research questions stated below.
Research question:
How do various types of writing tasks support Grade 3 learners in solving mathematical problems?

A four-step approach to data analysis was used: description, sense making, interpretation and drawing implications, which are commonly used in case studies (Dana & Yendol-Hoppey, 2009:120). This process enabled management of data analysis. The steps of description and sense making were used to organize and prepare the data for interpretation (Rule & John, 2011:76).
Description
Audio-recordings of the group discussions as well as interviews with the eight Grade 3 learners were transcribed in preparation for data analysis. Olsen (2012:39) describes transcription as "writing down or typing out the text of an interview or other sound file".
Pseudonyms were used during transcription in order to maintain participants' anonymity. Olsen (2012:35) adds that transcripts enable the researcher to have "insight into mechanisms, processes, reasons for actions, and social structures as well as many other phenomena". Once data were organized and prepared, data were read and re-read to develop a descriptive sense of what was happening, describe initial insights and reflect on the overall meaning (Creswell, 2014:197). In doing so, empirical information was converted into a description of the data in order to draw meaning from them (Henning, 2004:6).
Sense making or coding
The next step, sense making, is referred to as coding. According to Cohen and Manion (1994:286), coding is the "translation of question responses and respondent information to specific categories for the purpose of analysis". Rule and John (2011:77) refer to the use of codes as labels that are assigned to different themes or foci within the data. Moreover, Dana and Yendol-Hoppey (2009:118) refer to Schwandt's definition of coding as "a procedure that disaggregates the data, breaks it down into manageable segments and identifies or names those segments". Coding is "a database of connections between various terms and data items selected from among the whole basket of evidence" (Olsen, 2012:46). This understanding of the process of coding guided this part of the analysis process.
There are various steps to coding data including determining the size of text segments, developing a list of codes for basic retrieval and then detailed retrieval (Olsen, 2012:80). For the purpose of this study, data were coded using ATLAS.ti, a computer-assisted qualitative data analysis programme.
"Computer packages allow the user to store notes about the definition of their codes and to retrieve segments of data that have been assigned different codes, allowing you to gather together all instances of a particular code in order to compare these" (Barbour, 2014:262).
ATLAS.ti proved a useful tool which provided a comprehensive overview of large amounts of text in the form of the learners' written work as well as transcriptions of audio-recordings and interviews (Henning, 2004:126). Although computer-assisted analysis was used in this study, the researcher remained primarily responsible for ensuring systematic, thorough analysis of the data (Barbour, 2014:260). Friese (2014:12) refers to NCT analysis as an analytical procedure or approach. NCT analysis involves "noticing interesting things in the data, collecting these things and thinking about them, and then coming up with insightful results" (Friese, 2014:12). Referring to NCT analysis, Friese (2014:13) describes noticing things as "the process of finding interesting things in the data…and nam(ing) them". In this study, as patterns were noticed in the data, they were assigned codes. The codes were largely determined before data were collected in order to provide a provisional coding frame for the data analysis. These codes developed from the relevant literature read in this field of research. The codes were established to attempt to answer the research questions of this study. The provisional coding frame included: representations of mathematical problems, demonstration of understanding through writing, individual writing and collaborative writing, the use of prompts when writing, the use of specific topics to develop writing and conceptual understanding, the usefulness of Burns's types of writing in a South African context and the development or change in learners' abilities to solve mathematical problems. However, these codes were adapted and additional codes considered while data were analysed. Barbour (2014:260) adds that one moves "back and forth between provisional and revised coding frames and transcripts or coded extracts in order to interrogate themes and build up explanations".
Codes were developed deductively before data collection and inductively during data analysis. Some codes were merged into themes for the purpose of addressing the research questions in the findings and discussion in chapters 4 and 5 while others became themes on their own. Themes were used to search for a detailed description of the use of writing when solving mathematical problems (Creswell, 2014:199). Barbour (2014:278) argues that theoretical frameworks informing data analysis are often "referenced in terms of guiding the general approach taken in research, in formulating the questions to be asked and in determining what counts as 'data'". In this study, the theoretical framework largely concerned social constructivism since learners participated in collaborative work and group discussions where they engaged with the teacher and their peers. Through social constructivism, learners developed their mathematical problem-solving abilities in their ZPDs through the introduction and implementation of various writing tasks. Scaffolding was used to assist learners while they developed their use of writing in mathematics. Through encountering the writing tasks, learners' procedural knowledge and conceptual knowledge were drawn upon and developed. According to Thorn (2000:68), this theoretical lens determines how the researcher approaches and collects data relevant to answering the research questions so that raw data can be transformed to depict the focus of the study.
As discussed in the literature review in chapter 2, the Learning Framework in Number (LFIN) incorporates the Stages of Early Arithmetical Learning (SEAL), the Structuring Number Strand (SNS), conceptual place value and early multiplication and division. The LFIN, in conjunction with the theoretical framework, was used to guide the process of data analysis. Using the different stages and levels of the various components of LFIN, the researcher analysed the strategies that learners employed when solving mathematical problems, particularly when analysing their strategies in the pre-test and post-test. The LFIN supplied clear indicators of the stage or level of strategies that were used in the pre-test when compared to the post-test. This process made it possible to gauge what developments or changes there were in learners' strategies when solving mathematical problems.
Interpretation
During interpretation, statements were constructed that express and communicate the findings supported by data. Creswell (2013:187) describes interpretation as "abstracting out beyond the codes and themes to the larger meaning of the data". Olsen (2012:56) adds that interpretation is processing data by presenting it differently in order to deliver new meaning.
In this study it was considered whether a writing intervention, through the implementation of writing tasks, made an appreciable difference to learners' use of strategies for solving problems (Creswell, 2014:178). The purpose was to determine whether writing tasks supported learners' development of mathematical problem-solving abilities.
Drawing implications
The final step, drawing implications, involves any change of action the study may bring about or any new questions generated for further research.
"Conclusions are likely to be strengthened by some further analysis that attempts to make sense of similarities and differences within the dataset and which also seeks to locate the study within the wider picture of what is already known about the topic in question" (Barbour, 2014:263).
This study may lead to wider use of writing in mathematical problem-solving in the Foundation Phase in South African schools. It may support learners to develop their problem-solving strategies as well as their conceptual understanding and mathematical knowledge and skills. This study may highlight the usefulness of writing tasks in the Foundation Phase and the difficulties learners experience when implementing them.
The interpretations and implications drawn from the data are expounded upon in chapter 5 as far as they relate to discussion of findings in this study.
VALIDITY, TRUSTWORTHINESS AND RELIABILITY
According to Maxwell (2013:121), validity is assessed in terms of how it relates to the purposes and credibility of the research study. Validity can be achieved through explicitly reporting how research was conducted and locating any weak points within the study (Swanborn, 2010:37). Throughout this chapter, the researcher outlined in detail the research process by providing the research plan that answered the research questions of this study.
Triangulation was applied to the data collection and analysis of this study. Rule and John (2011:109) explain triangulation as "the process of using multiple sources and methods to support propositions or findings generated in a case study". This explanation confirms Yin's (2009:99) statement that the findings of the case study are corroborated through multiple measures of the same phenomenon. Craig (2009:108) adds that triangulation involves multiple sets of data to focus on views and perceptions of a particular phenomenon. This strategy reduces bias and strengthens the validity of the study. In this study, data collection instruments include learners' written work, audio-recordings, field notes and interviews. According to Denzin and Lincoln (2011:5), using multiple methods such as those employed in this study adds rigour, complexity and depth. Such added objectivity was particularly evident because verbal explanations learners gave during interviews and group discussions added depth to the evidence provided from written work which displayed individual strategies and explanations.
Bias was reduced through the careful formulation of open-ended questions for the interviews (Cohen & Manion, 1994:282). The learners' responses were audio-recorded. Audio-recordings of the interviews and ability group discussions ensured the researcher's neutrality and the legitimacy of the learners' responses (Davies, 2007:157). These audio-recordings enabled the researcher to put aside prior assumptions in order to process data to determine the outcomes of the study. The use of multiple data collection instruments and audio-recordings ensured the results of the data analysis were accurate representations of the context in which data were collected (Davies, 2007:243).
The researcher for this study is also the teacher of the learners selected. As such, her position as teacher-researcher is acknowledged since it may have compromised the reliability and interpretation of data generated through various data collection instruments.
Guba's concept of trustworthiness (Rule & John, 2011:107) refers to, amongst other things, the confirmability of the study where the researcher's influence and bias are disclosed. In this instance, the researcher's position as teacher is acknowledged as a possible bias and limitation to the research study. The researcher's familiarity with the learners' mathematical ability may have presented bias in selection of the learners to be interviewed. Knowledge of learners' language abilities may have affected the use of scaffolding, which was provided to those who were known to struggle to read, comprehend and interpret the mathematical problems. Some of the strategies learners used during the writing intervention may have been achieved through additional support. However, scaffolding was not provided during the pre-test and post-test where strategies and explanations were analysed and compared. In these ways, the researcher limited the potential bias of being both teacher and researcher.
With regards to positionality, Creswell (2014:188) states that past experiences in the classroom "may cause researchers to lean toward certain themes, to actively look for evidence to support their positions and to create favourable or unfavourable conclusions about the sites or participants". Having previous knowledge of learners' abilities may have led to pre-empting the themes for discussion. According to Creswell (2014:188), such knowledge can compromise disclosure of information as well as create an imbalance of power between researcher/teacher and learners. The process of analysis could have been biased to meet the desired conclusions of this study. Being aware of this danger increased the researcher's care to collect data as accurately as possible: multiple instruments were used during this study. Trustworthiness could be ensured through triangulating the analysis and findings from the data collection instruments.
Reliability is reached through the precision of procedures and documentation. Henning (2004:151) describes reliability as follows: "If all research steps are declared and documented, the research is potentially replicable and someone may then assess, by doing it all in the same way in a similar setting and with similar participants, whether the replicability is feasible". Creswell (2014:203) explains that the reliability of a study is found in its consistency and stability concerning the steps and procedures followed in documenting the case study. In the research design and plan described earlier in this study, the steps and procedures were detailed to elaborate upon the reliability of the study. The research design and plan of this study could be replicated in a similar or different context to determine whether the results were consistent and reliable.
ETHICAL CONSIDERATIONS
According to Salkind (2009:80), anonymity should be observed during the research process by maintaining confidentiality: anything that is learned about the participant is held in the strictest of confidence (Salkind, 2009:82). In this research study, pseudonyms were used for the school and the names of all participants.
Ethical concerns regarding implementation of the writing tasks were also considered. Although eight learners were purposively selected for this study, all the learners in the participating Grade 3 class implemented the writing tasks as an intervention to support their mathematical problem-solving (Creswell, 2014:98).
CONCLUSION
In this chapter, the methodology used for this research study was explained. This qualitative study employed a case study approach because the use of writing tasks was investigated during mathematics lessons. Details of the research plan were elaborated. The pilot study was discussed as it gave meaningful assistance to the formulation of the research plan for the actual study. It afforded unique insights into the selection of mathematical problems for this study. The strategies and solutions of the pre-test conducted at the beginning of the data collection period were analysed and eight learners were selected to be interviewed. All the learners of the participating class were introduced to the writing tasks. Writing tasks were modelled to the learners, who were given opportunities to use the writing tasks over a period of eight weeks. A post-test was conducted at the end of the data collection period to determine whether the use of writing supported learners in their problem-solving strategies.
Throughout the data collection period, data were collected through learners' written work, audio-recordings, field notes and interviews. This compilation of data facilitated triangulation and ensured the trustworthiness and validity of the study. Data collected from the data collection instruments were transcribed and coded using a provisional coding frame.
Additional codes were added where necessary. Data analysis was conducted using the theoretical framework as well as the LFIN. Themes for the discussion emerged. This process occurred through merging codes into themes or taking a code as a particular theme in the discussion. The conclusion of the data analysis process was described with a focus on the interpretation of the findings and the implications drawn for the use of writing in the Foundation Phase of a typical South African mathematics classroom.
The following chapter presents the findings of this study as they relate to the research questions.
INTRODUCTION
In this chapter the findings of this research study into the use of writing tasks to develop learners' mathematical problem-solving skills are discussed (Burns, 1995a). This study employed various types of writing tasks and investigated whether these writing tasks can support Grade 3 learners in solving mathematical problems. This chapter begins with an overview of the results of the pre-test and the interviews at the beginning of the data collection period. Findings of the writing tasks that learners were engaged in during the period of implementation are presented. Findings of the post-test and the second round of interviews are also provided.
PRE-TEST
On the first day of the pre-test, the process and purpose of the pre-test were explained to the learners. All the learners used problem-solving strategies as reflected in SEAL which were appropriate in terms of the addition/subtraction problem type. As learners progress through the stages of SEAL, they develop increasingly sophisticated strategies from counting all, counting on and counting back to compensation and commutativity. The strategies used in the SEAL are applicable to addition and subtraction problem types. Learners' strategies were at different stages in their number learning as is reflected in their solutions. Five learners solved the problem at the level of perceptual counting: stage 1 of the SEAL. Their strategies involved counting visible or drawn items (as in Figure 4.2). Two learners were at stage 3 (initial number sequence) where counting on or counting down strategies were used, while one learner (Figure 4.1) solved the problem at stage 5 (facile number sequence) using strategies beyond counting-by-ones, incorporating more advanced strategies and procedures. Learner 4's work reflected a strategy at stage 1, which is perceptual counting. This learner wrote down an addition sign in the sum as opposed to a subtraction sign. This error could have been due to a lack of conceptual understanding or simply a mistake in the learner's writing. A few learners struggled with the context of the second problem. During observation, it was noted that some learners experienced difficulty reading and understanding the problem: they required some assistance in this regard. It is possible that this weakness was due to learners' limited vocabulary and/or reading comprehension skills. The problem was read and discussed briefly before learners solved it individually. The researcher/teacher did not provide much assistance to learners while they solved the problem because this problem formed part of the pre-test. Assisting learners during the pre-test may have produced results that lacked validity when compared to the post-test.
When solving this problem, six learners appropriately used level 1 (initial grouping) of early multiplication and division which suited the multiplication/division problem type. The drawings learners made reflected quotitive sharing: learners arranged the items in the problem into groups. In quotitive sharing, items are shared into groups of a given size (Wright, Martland, Stafford & Stanger, 2006:14). This problem required learners to share the total number of doughnuts into platters of 7 doughnuts each to determine the number of platters needed. Three of the six learners identified at level 1 made errors when using this strategy, as their answers were incorrect. In interviews after the pre-test, Learner 2 (AA) and Learner 4 (A) explained that they had used drawing but counted at a level of stage 1 of the SEAL (perceptual counting). Their drawings represented quotitive sharing which they combined with a counting strategy. Learners 6 and 7, both from the below average group, used strategies that did not fit the multiplication/division problem type. As can be seen in Figure 4.5, Learner 7 (BA) used addition/subtraction as a strategy. He used the numbers given in the problem and subtracted 7 from 28, although his tallies show that he added 7 and 21 to reach the total of 28 doughnuts. He used a combination of addition and subtraction that left him with the answer of 21. It is evident that Learner 7 (BA) did not understand the problem. Added to this, it is possible that he did not have the conceptual understanding of early multiplication and division as described in chapter 2.
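As a worked illustration of the quotitive structure of this problem, using the numbers given in it (28 doughnuts shared into platters of 7 each), the appropriate strategy amounts to repeated subtraction:

28 − 7 = 21; 21 − 7 = 14; 14 − 7 = 7; 7 − 7 = 0

Four subtractions are possible, so four platters are needed.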
All eight learners used strategies that reflected the SEAL for solving the addition problem. However, various learners solved it at different stages. Two learners were at stage 1, using perceptual counting.

Learner 4 (A) understood the context and question of the problem and applied a strategy relevant to the problem type represented in the problem. Although there was no evidence of a calculation, a sentence was written to explain the strategy that was used. At this stage, learners had not yet been exposed to the types of writing tasks that would be introduced as part of this research study once the pre-test and interviews were completed.
Learner 6 (BA) used counting in twos as a strategy for solving the same problem (Figure 4.8). This counting strategy was used within an addition sum which shows a combination of strategies at stage 4 of the SEAL (intermediate number sequence). However, another number sentence was written where the learner added incorrectly. The representations used in the strategy did not match the number sentence, and the learner arrived at an incorrect answer. However, the strategy of adding by counting in twos suggests that Learner 6 (BA) understood that this was an addition problem type.
Problem 4
1. There are 17 pins in a box. How many pins will there be in 6 boxes?
2. There are 17 pins in a box. How many pins will there be in 4 boxes?
3. There are 17 pins in a box. How many pins will there be in 2 boxes?

Five learners used strategies that reflected early multiplication and division as explained by Wright, Martland and Stafford (2006a). Three learners were at level 1 (initial grouping); of these, Learner 2 (Figure 4.10) and Learner 6 (BA) had incorrectly used partitive sharing.
These learners had taken the 17 pins and shared them between the number of groups in the problem. They had misunderstood the problem by using division instead of multiplication.
Learner 8 (BA) correctly used quotitive sharing: she assigned 17 pins to each group in her drawing of the items. Four blocks were drawn to represent the four boxes in the problem with lines in each box. The quotitive sharing in her drawing showed that the learner conceptualised seventeen pins in each box. However, her number sentence did not reflect this strategy, as can be seen in Figure 4.9. Learner 8 (BA) added the numbers given in the problem (17 + 4) but came to the answer of 39. When counting the lines in all the boxes, it showed that she had drawn far more than 39 pins. It appeared that she had some conceptual understanding of the multiplication or repeated addition strategy required by the problem but was unable to follow this through the entire problem. Another learner's strategy is shown in Figure 4.11. Despite this being a multiplication/division problem type, his strategy could have worked if he had used it correctly. He had decomposed 17 using place value (10 + 10 + 10 + 10; 7 + 7 + 7 + 7) four times, which represented the 4 boxes in the problem. However, he did not continue using this strategy by adding these number sentences to find the total number of pins. The number sentence that he wrote does not match the rest of his strategy.
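Carried through, the decomposition strategy described above would have produced the correct total for the version of the problem with 4 boxes (this completion is a reconstruction, not the learner's actual work):

10 + 10 + 10 + 10 = 40
7 + 7 + 7 + 7 = 28
40 + 28 = 68, so there are 68 pins in 4 boxes.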
Three learners used strategies reflecting initial number sequence (stage 3 of the SEAL). Two learners solved this problem at stage 1 of the SEAL (perceptual counting). One learner displayed a strategy using facile number sequence (stage 5 of the SEAL) while another learner had no visible strategy. On the last day of the pre-test, Learner 4 was absent. When the researcher reflected on learners' strategies and analysed their writing during the pre-test, it was evident that most learners had difficulty solving mathematical word problems and communicating their thinking through the strategies they had written. Often, the strategy did not fit the problem or it showed a lack of deeper conceptual understanding of the problem. The strategies sometimes reflected the lower levels or stages of the aspects of the LFIN. There was little evidence of more advanced strategies typical of the higher levels of the LFIN, especially from learners in the average and below average mathematical ability groups.
FIRST SET OF INTERVIEWS
Interviews were conducted with the same eight learners discussed in the pre-test results above. The purpose of the interviews was to gauge learners' understanding of the problems when they verbally explained their strategies. Verbal explanations were considered against recordings of their solutions in the pre-test. Interview questions were structured in order to establish how learners were able to explain their solutions based on their writing when solving problems of the pre-test. Interviews helped to explore learners' thinking and understand what they were doing. There was evidence of scaffolding in some interviews: learners needed prompts to explain their strategies. Most of the selected learners had difficulty explaining the problem-solving methods they had used in the pre-test. Verbalisation of their strategies did not always reflect what they had written on paper. Some learners, particularly from the average and below average groups, seemed to lack the mathematical vocabulary to explain what they had done.
Learner 1 (AA) was able to explain his strategies verbally even though this learner sometimes had the incorrect solution. At this stage of the data collection (pre-test), he had not written an explanation of his thinking when solving the problem because learners had not yet encountered the use of writing tasks. His strategies showed that he could use his conceptual understanding to represent his thinking. He was able to combine more than one method in certain strategies to reach his solution (Figure 4.1).
Learner 2 (AA) was able to explain her strategies verbally according to what she had done.
Her explanations made it possible to compare her understanding of the pre-test problems with the writing she used in her strategies. In her writing she sometimes used tallies to represent her strategy. At the time of the pre-test, some mathematical problems had a low enough number range for tallies to be used as a strategy. Below is an excerpt from the interview where Learner 2 explained how she used tallies as a strategy.
Researcher: And the one with the cricket team?
Learner 2: I put, I had 90, I put 94 circles then I crossed out 47 of them and so I counted the rest of them and it gave me 47.
Learner 2 (AA) Pre Interview
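In arithmetic terms, the tally strategy in the excerpt enacted the subtraction 94 − 47 = 47: the learner drew 94 circles, crossed out 47 of them and counted the 47 that remained.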
When solving the first four problems of the pre-test, Learner 3 (A) used strategies that reflected the problem types represented. For example, the third problem required use of addition as a strategy which was reflected in his writing. However, he came to the incorrect solutions for these problems. As a result of this pre-test, it appeared that Learner 3 (A) was able to determine the underlying mathematical concepts needed to solve the problem but could not reach the correct solution. He needed many prompts during the interview to help explain or justify his strategies. He seemed to have difficulty applying the strategy to his writing in order to reach the solutions successfully. The following excerpt is from the pre-test interview conducted with Learner 3 (A).
Learner 3 (A) Pre Interview
His explanation of the strategy made sense according to the multiplication/division problem type (problem 2 of the pre-test). The recording of this strategy and his verbal explanation showed that he solved the problem at level 1 (initial grouping) of early multiplication and division. The above excerpt displays an understanding of the required operation or strategy to solve this problem. However, he did not follow through with this strategy and came to an incorrect answer.
During Learner 4's (A) interview, it appeared that there was an understanding of the mathematical concepts required by each problem. The learner generally represented her thinking by using drawings or tallies. This was often her strategy when recording her thinking while she solved mathematical problems. However, drawings were not used when solving the third problem about the school sports team. In the excerpt below, Learner 4 (A) explained that drawing would be time-consuming when solving this problem since the numbers were too high. In this case, she was able to adapt her strategy and change her usual method of representation to suit the needs of the problem. In the following excerpt she explains why drawings were not used as a strategy for this particular problem.
Researcher: Let's look at the school sports team problem. What did you do here?
Learner 4 (A) Pre Interview
In the pre-test interviews with the selected learners from the above average and average ability groups, it seemed that they had the necessary conceptual understanding to solve mathematical problems according to the problem type as mentioned in chapter 2.
However, their solutions were not always correct: they had either misread or misinterpreted the problem. Some learners needed more prompting than others when verbally explaining their problems. Only one learner wrote a brief explanation of her strategy during the pre-test (see Figure 4.7) which was significant because learners had not been exposed to the various writing tasks at this stage of the data collection. They were not expected to write explanations of their problem-solving strategies but, in a few instances, learners wrote statements of their solutions without explanations of the strategies they used when solving the problems. During the pre-test, learners were asked to solve the problems showing their strategies and solutions. They were not asked to write explanations of their strategies.
The three learners from the below average ability group did not use strategies appropriate to the problem types presented in the pre-test. This showed their lack of conceptual understanding related to the mathematical problems, which could be linked to their language ability: two of these learners (Learner 7 and Learner 8) had below average reading and comprehension abilities. When these learners were interviewed, they had difficulty explaining what they had done. Below is an excerpt from the interview conducted with Learner 7 (BA) which displays his difficulty in explaining his strategy.
Researcher: Let's look at the first problem that you did. Do you want to explain to me how you solved this problem? Learner 7: I don't know.
Learner 7 (BA) Pre Interview
It was evident in their strategies that the three below average learners comprehended the third problem about the school sports team as an addition problem type even though their solutions or answers were incorrect, as displayed in the interview with Learner 8 (excerpt below).
Learner 8 (BA) Pre Interview
Learner 8 did not elaborate upon her strategy for this problem. At first, she explained that she had erased her drawings, in which she used the tally method, and solved the problem using a number sentence. When asked how she added the numbers, she said that she used drawings. This explanation did not make sense since she had erased her drawing. It is possible that she may have erased the drawing after she arrived at the answer. Figure 4.14 shows some evidence of her erased drawings and the number sentence that she wrote to solve this problem. During the pre-test interviews, learners often had difficulty giving verbal explanations of their problem-solving strategies. A possible factor in their inability to do so could have been a lack of appropriate mathematical vocabulary to clarify their thoughts. Another factor could have been that they had not previously explained their strategies verbally in the way that was expected during this study. Learners used limited details and explanation in their writing which may have led to the difficulty in their verbal explanations: they were not expected to use writing in this manner. Learners had not yet been exposed to using writing in mathematics through various writing tasks.
Once all the pre-test interviews were concluded, the various types of writing tasks, as modelled by Burns (1995a), were implemented in the selected Grade 3 class. Later, a post-test was conducted and the same learners were interviewed to compare their use of strategies and how the writing tasks supported them in reaching solutions. The findings of the post-test and interviews are elaborated later in this chapter.
WRITING TASKS
After the pre-test interviews were completed, various writing tasks (Burns, 1995a) were introduced to learners: writing to solve mathematical problems, writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes and shared writing. These tasks were modelled to learners to encourage them to clarify, justify and explain their thinking and to help in problem-solving. The writing tasks were implemented as an intervention to support learners in mathematical problem-solving. In this section, findings are presented on the implementation of the writing tasks between the pre-test and post-test.
Writing to solve mathematical problems
Once writing to solve mathematical problems was modelled, learners solved thirteen mathematical problems covering various problem types involving different numerical operations. As in the pre-test, these mathematical problems were differentiated according to the three different mathematics ability groups present in the selected Grade 3 class (see Appendix G). Learners solved the problems and wrote about them in their journals. With each writing task, learners were encouraged to use writing to solve mathematical problems to clarify their thinking and explain their strategies. When doing so, they often needed questions or prompts to guide them in their writing. Some learners needed more assistance than others in this regard; this difference may be explained by the reading and comprehension difficulties some learners experience.
Verbal and written feedback
On some occasions, guidance was provided verbally while the researcher moved around the classroom observing the learners. At other times, written feedback was presented in the journals where learners solved the mathematical problems. This was often the case with learners who did not receive verbal feedback at the time they completed the writing task.
Learners were requested to respond to the written feedback the following day by adding on to what they had already written. The aim of the verbal and written feedback was to guide their writing by drawing attention to the mathematical concept(s) within the problem. Figure 4.15 below is an example of writing to solve mathematical problems by Learner 2, an above average learner. The learner understood the problem type by using an appropriate strategy but made an error in her calculation. She counted by adding thirteen each time, not twelve.
Feedback was written to guide the learner to check her counting again. After the learner followed the support given through the written feedback, she realised that she had counted incorrectly.
The role of language when solving problems
The second problem learners were given to solve was comparing the height of the fence and the wall where they could use addition or subtraction as a strategy. It was noted that the below average ability group found the problem quite difficult: many of them did not understand the meaning of the word 'higher'. They needed guidance to be able to read and solve the problem. They thought that the wall was 18cm high as opposed to being 18cm higher than the fence. Most of the learners in this group had difficulty understanding the problem. Some scaffolding was needed to overcome this difficulty during the ability group discussion. Most learners in the group erased their initial strategies after the discussion.
This erasure made it difficult to compare what they had done before and after deliberating over the problem. Figure 4.18 shows Learner 8's strategy after the group discussion. Even though she understood that she had to add to solve this problem, she added 3 in her number sentence and her drawing. Although she recorded 46 + 18 in her number sentence, she did not add 18. This omission did not make sense according to the problem. This was not the only mathematical problem where learners had difficulty understanding the context of the problem. The following excerpt from the field notes indicates the support learners needed to be able to solve a problem. Scaffolding through modelling was required to enable learners to make a connection to the mathematical knowledge in the problem.
This excerpt of the field notes was written while learners solved problem 4. Learner 7 (BA) may not have understood the problem because he misinterpreted the vocabulary and the context of the problem.
Solving multistep mathematical problems
Below are examples of learners solving multistep problems, as mentioned in chapter 2. The problem that Learner 2 (AA) solved in Figure 4.20 above is an example of a multistep problem. To solve the first part of the problem, she used her mathematical knowledge of doubling numbers. In her explanation, she states that she "doubled 26 twice". This strategy is at stage 5 of the SEAL (facile number sequence). She continued solving the problem and displayed repeated abstract composite grouping (level 4 of early multiplication and division) by using repeated subtraction. Learners generally needed two steps in their strategies to reach a solution: they first had to solve the problem using a strategy that reflected the problem type (multiplication/division), and the knowledge gained from this step then enabled them to answer the question.
To start, Learner 6 (BA) used quotitive sharing by dividing the eggs into groups. This sharing was at level 1 of early multiplication and division, namely initial grouping. She combined this strategy with repeated subtraction, which reflected repeated abstract composite grouping (level 4 of early multiplication and division). She did not, however, continue by answering the question in the problem. The below average group discussed their strategies together to compare what they had done. Below is an excerpt from the field notes written during the group discussion. Another learner's strategy reflected working at level 2 of conceptual place value as he was incrementing by tens off the decuple. He used this strategy to find the total number of tins in the problem but did not solve the second part of the problem concerning the number of boxes needed for all the tins.
Evidence of learners' errors
The researcher noticed that a number of the learners erased their work, especially when writing to solve mathematical problems. This erasure left minimal evidence of their thinking.
In speaking to them about this, some learners appeared to feel that their strategies were not adequate or they did not want their mistakes to be seen. The researcher explained to all the learners that seeing their strategies as well as their errors helped to explain what they were thinking when they solved problems. This explanation was necessary to determine how writing supported their ability to solve problems. Evidence of their strategies, including their errors, was essential to assess learners' conceptual understanding and address misconceptions in their thinking.
Writing to record (keeping a journal or log)
When this writing task was introduced to the class, it was explained to them that their journals were accessible at any time during mathematics lessons and not only when they were specifically asked to record in their journals. The purpose of doing this was to encourage learners to write about their experiences and thoughts as they occurred and to record them in a journal or log. This task was included to help learners make connections and think critically about the activities during a mathematics lesson. It also enhances learners' ability to make observations, which is a necessary skill when solving mathematical problems.
Learners were instructed to record what they did and learned in mathematics lessons at any time to create an ongoing record (Burns, 1995a:51). As mentioned in chapter 2, learners should write in their journals whenever they notice or discover something, but the researcher found that, unless this type of writing task was mentioned or time was set aside to give them this opportunity, most learners did not write in their journals spontaneously and regularly as had been envisaged. Only four writing assignments of this nature were evident, and these were written when the researcher had prompted learners to write.
Learners from the above average ability group used more detail when writing in their journals. They did not always use the prompts displayed in the classroom (see Appendix I) which were examples of ways to start sentences to guide learners' thinking when writing to record in their journals. This ability may be due to having an above average language competence as well as a greater ability to express their thinking in words. Their higher levels of reading and comprehension could have affected their conceptual understanding as discussed in chapter 2. There was more evidence of critical thinking when learners sometimes gave reasons for making certain statements. Most of the learners from the average ability group used the prompts to state what had happened in the day's lesson.
They did not actually explain what they meant or extend their thinking through their writing.
Similar findings emerged from learners in the below average ability group but most of them wrote less than learners from the other groups.
Writing to explain
The purpose of this writing task is to explain a mathematical concept to show understanding.
Learners clarify what they know through reflecting and summarising. By doing this, their writing is enhanced: they engage with mathematical concepts and develop their knowledge and understanding. This enhancement could be accomplished through writing a summary (Freed, 1994:23) or listing the main points of the lesson and their reflections (Wilcox & Monroe, 2011:522). In this study, writing is used to support learners while they solve and explain mathematical problems, which links to the purpose of this task: learners cultivate their use of writing to explain their thinking.
Learners were given three opportunities to use writing to explain their understanding of a mathematical concept. Most of the learners needed written or verbal feedback when writing to explain a mathematical idea. Their explanations were often limited. Many required further prompting through feedback to provide more detailed explanations. The researcher encouraged learners to write more than one or two sentences when they could not explain everything they knew about a topic in their writing. Such writing tasks helped to gauge the level at which they understood a concept, which reflected the knowledge they had constructed during previous mathematics lessons.
Writing about thinking and learning processes
After several opportunities to use the types of writing tasks mentioned above, learners were introduced to writing about thinking and learning processes. When using this type of writing task, learners did not focus on a single mathematical concept but rather wrote about mathematical ideas that related to their understanding in lessons. Freed (1994:24) suggests that this type of writing task encourages reflective and communicative writing. Learners used "writing about thinking and learning processes" on three occasions during the data collection period.
When learners first engaged with this writing task, they were given an opportunity to choose one of the ideas or topics mentioned during the class discussion prior to engaging in writing.
They chose to write a letter to the principal. During this writing task, learners were given the option to write in pairs. Learners were reminded that the focus of their writing was not on their spelling and grammar but rather on communicating their thinking through their writing.
In this way, writing about thinking and learning processes related to the study since it enhanced their ability to put thoughts into words. Seven pairs of learners focused more on mathematical concepts and how they are used in mathematics lessons. The rest of the learners wrote as if they were writing to record in a journal or log. They stated what happened and what they did during that particular day's lesson. The example in Figure 4.27 below shows how one pair of learners wrote about the day's lesson and included an explanation of the mathematical concept covered that day.
On another occasion, learners engaged in writing about their favourite mathematics activities. Learner 2's (AA) writing task in Figure 4.28 shows that she understood the purpose of this type of writing task because she focused on a general mathematics activity that occurred regularly during lessons. The researcher observed the learners while they were writing but did not engage with them or prompt them. Assistance was given to those learners who had language and/or vocabulary difficulties. It was noticed that some learners may not have understood the instructions regarding this type of writing. Their writing seemed to reflect writing to record (keeping a journal or log): they focused on the day's mathematics lesson rather than looking at general mathematics activities taking place during any lesson.
In Figure 4.29, Learner 3 (A) wrote about mathematics in general. He did not write about a specific activity in the way this type of writing task requires.
On the third occasion, learners responded to the following questions:
What do you need to be a good problem-solver?
What do you need to know to be a good problem-solver?
What makes someone a good problem-solver?
Learners were given a few minutes to discuss their thoughts in groups before engaging in individual writing for approximately ten minutes. Learners were observed while they wrote but they were not assisted or prompted. One learner's writing (Figure 4.30) showed that he understood the purpose of the task. In Figure 4.31, Learner 1 (AA) listed the mathematical skills and concepts one may need to use when solving problems. This particular writing task supported learners because they focused on the process of problem-solving rather than solving mathematical problems themselves.
Shared writing
The last writing task presented to learners was shared writing. This writing task was included in the study because it develops learners' writing skills in mathematics. Shared writing provides opportunities to expand and clarify learners' thinking in a different way when compared to the other writing tasks. Learners had to use their knowledge of a mathematical concept in a creative manner. This technique was applied to the context of problem-solving because learners were required to explain and communicate their thinking.
There was classroom discourse around possible topics; however, learners had difficulty finding a concept to write about that they could communicate. They worked in pairs and created a story of their experience: how they would feel if they shrank down to one centimetre tall. The same measurement concept was used in the shared writing piece which was modelled to the class. The topic was familiar to the learners but they were encouraged to write a different story from the modelled story displayed on the mathematics wall in the classroom. The researcher moved around, prompting learners through questioning, since some of them were uncertain of the task. The concept of one centimetre was dealt with incidentally on a few occasions but the stories that the learners wrote reflected their understanding of measurement. There were some descriptions of what things looked like around them in comparison to their size. Some stories were more detailed in their descriptions, which could be related to learners' language abilities. Figure 4.32 below is an example of a shared writing piece by two learners from the selected Grade 3 class. These learners were not part of the eight learners selected for the purposes of the pre-test, post-test and interviews. Consent was given by parents as noted in chapter 3 (paragraph 3.8).
POST-TEST
The post-test was conducted in a similar manner to the pre-test. (Appendix F lists problems given to different mathematical ability groups in the post-test). The researcher reminded learners of the pre-test and the types of writing tasks that were implemented during the lessons throughout the data collection period. Learners were given an opportunity to talk about different types of writing tasks that they used. They were each given an A4 sheet of paper to be used for the five problems presented to them during the post-test.
As with the problems used in the pre-test, post-test problems are numbered below according to the three mathematical ability groups in the selected Grade 3 class. There were differentiated problems for the above average ability group (1), the average ability group (2) and the below average ability group (3).
Problem 1
1. Anwar has planted 19 seedlings in the vegetable garden. James has planted 16 seedlings. Thandi has planted twice as many as James. How many seedlings have they planted in the vegetable garden?
2. Anwar has planted 15 seedlings in the vegetable garden. James has planted 12 seedlings. Thandi has planted twice as many as James. How many seedlings have they planted in the vegetable garden?
3. Anwar has planted 13 seedlings in the vegetable garden. James has planted 9 seedlings. Thandi has planted twice as many as James. How many seedlings have they planted in the vegetable garden?
The researcher did not instruct learners to use writing in the post-test; the writing tasks had simply been implemented during the data collection period. It was evident that many learners used "writing to solve mathematical problems" as can be seen in the examples below. In Figure 4.33, Learner 1 (AA) used a strategy that was applicable to the addition problem type and gave a suitable explanation of what he had done. He had applied the writing task, writing to solve mathematical problems, successfully because he was able to make sense of how he solved the problem. His explanation described each step that he had followed and included doubling numbers, decomposing into tens and units and adding to solve the problem. This ability was indicative of the facile number sequence (stage 5 of the SEAL).
It seemed that Learner 7 (BA) understood only the first part of the problem: he doubled the number of seedlings to find Thandi's amount (shown in Figure 4.34). He was able to write some explanation of what he did using writing to solve mathematical problems. This reflects the facile number sequence because he doubled the numbers, which is a non-counting-by-ones strategy. He did not, however, continue solving the rest of the problem. It is possible that he misinterpreted the question in the problem. He may have thought he had to find out how many seedlings Thandi had rather than the total number of seedlings in the garden.
Problem 2
1. There will be a parent meeting at school tomorrow evening. 81 parents will be coming. The big tables will be used with six chairs around each. How many tables will need to be set up?
2. There will be a parent meeting at school tomorrow evening. 65 parents will be coming. The big tables will be used with six chairs around each. How many tables will need to be set up?
3. There will be a parent meeting at school tomorrow evening. 39 parents will be coming. The big tables will be used with six chairs around each. How many tables will need to be set up?
When solving this problem, Learner 7 (BA) was the only one to use the SEAL in his strategy (facile number sequence, stage 5). The rest of the selected learners were at various levels of early multiplication and division. Three learners were at level 1 (initial grouping), one learner was at level 3 (figurative composite grouping) and three learners were at level 4 (repeated abstract composite grouping), the highest level at which this problem was solved. In Learner 6's (BA) drawing, the last table had two chairs only. Her strategy was at level 1 of early multiplication and division, initial grouping, in which she exhibited quotitive sharing. She used the writing task, writing to solve mathematical problems, to explain the strategy she used. Her first two attempts were not written about or explained, but the final strategy and the explanation showed an understanding of the problem as well as the problem type. In Figure 4.36, which shows Learner 2's (AA) strategy, even numbers were not recorded. Later, during the post-test interview, Learner 2 came to realise the error in her thinking.
In Figure 4.37, Learner 3 (A) used his knowledge of counting as his strategy. He reflected level 2 of early multiplication and division (perceptual counting in multiples) when he attempted to count in multiples of six. As can be seen above, he misread or misinterpreted the problem. He explained that he counted in sixes but reached a total of 102. This explanation did not make sense since the problem had a total of 57 parents.
The following two problems used the same context as the previous problem in the post-test.
These problems extended the context by focusing on a different aspect of the problem. As can be seen in Figure 4.38, Learner 2 (AA) was the only learner to use conceptual place value in her problem-solving strategy. Her strategy and explanation showed an understanding at level 2, incrementing by tens off the decuple: she worked out the difference between 48 and 81. She provided a detailed explanation of her strategy through her writing, which justified her thinking and showed how she solved this problem.
Learner 4 (A)'s explanation did not demonstrate conceptual understanding of the problem type because she added the numbers given in the problem (Figure 4.39). This subtraction problem focused on comparison: learners needed to find the difference between the numbers given. Her explanation described how she incorrectly used addition in her strategy.
Learner 7 (BA) did not show a strategy for this problem but stated his solution (Figure 4.40).
It appeared that he had solved this problem mentally according to the explanation that he had written. This below average learner used writing that clearly explained how he had solved the problem by using a count-down strategy. This strategy was indicative of stage 3 of the SEAL (initial number sequence). He used "writing to solve mathematical problems" in a way that made sense and clarified his thinking. This displayed a deeper conceptual understanding of the problem type and showed that, despite there being no evidence of a strategy using a number sentence or counting, it was still possible to gauge from his use of this writing task that he understood the context of the problem. His writing indicated the support that "writing to solve mathematical problems" could afford learners.
The other five learners worked at various stages of the SEAL which was suitable for this addition/subtraction problem type, except Learner 3 who displayed no clear strategy.
During the post-test interview, he explained that he had guessed this answer.
Problem 4
1. After the parent meeting coffee will be served. One pot of coffee makes 7 cups. How many pots of coffee need to be made if each person has one cup?
2. After the parent meeting coffee will be served. One pot of coffee makes 7 cups. How many pots of coffee need to be made if each person has one cup?
3. After the parent meeting coffee will be served. One pot of coffee makes 5 cups. How many pots of coffee need to be made if each person has one cup?
In Figure 4.41, Learner 7 (BA) represented his strategy using a drawing, numbers and words that made sense. His strategy reflected figurative composite grouping, level 3 of early multiplication and division. He used repeated addition in such a way that each group is represented as an abstract composite unit (Wright et al., 2006a & 2006b). He wrote an explanation that detailed how he solved the problem. He understood the mathematical concept required in the problem, which was counting in fives. He realised that he needed to subtract one in order to cater for each parent in the problem. This is shown by his number sentence where he reached 40 through his counting strategy and then subtracted 1 since there were 39 parents in the problem. Despite being in the below average ability group, this learner was able to link this problem to the previous problems that used the same context. He did not rely on counting by ones and used a strategy that required higher order thinking.
Like Learner 7 (BA), Learner 5 (A) showed an understanding of mathematical concepts previously learned to solve this problem (Figure 4.42). Initially, he used the doubling strategy to a point and incorporated this into a repeated addition sum. He successfully combined two strategies from his prior knowledge, which demonstrates a deeper conceptual understanding.
This strategy reflected repeated abstract composite grouping, level 4 of early multiplication and division, in the way he used repeated addition as a strategy. Although both learners showed deepened conceptual understanding through their use of mathematical knowledge in their strategies, they did not state the number of pots needed in answer to the question.
Both representations showed that they had a clear understanding of how to solve the problem but they failed to provide the solution to the problem.
The final problem in the post-test concerned trays of muffins and asked whether there would be enough for the class: if there are 21 children in his class, will he have enough muffins? As shown in Figure 4.43 above, Learner 2 (AA) wrote an explanation that adequately described the strategy she used to solve the problem. In reading the problem, she was able to see how her prior knowledge of doubling numbers could be used as a strategy, which was indicative of perceptual counting in multiples, level 2 of early multiplication and division.
Learner 3 (A) drew two groups of 12 representing two trays with 12 muffins in each tray (Figure 4.44), but the last group in his strategy showed that he did not understand that 12 muffins were in each tray regardless of how many he needed for the class. He changed this number to 7. He added the number of muffins needed for the class of 31 children. The explanation that he wrote showed a mismatch with his strategy. He explained that he doubled each number instead of adding them. He doubled 12, the first two groups, which made 24. Then he explained that he doubled 7 to get to 31 when, in fact, he added 7 to 24.
This calculation gave him the exact number of muffins needed for the class of 31 children. His strategy reflected intermediate number sequence, stage 5 of the SEAL, rather than early multiplication and division which was expected according to the problem type. He did not state whether he had enough muffins or not; he had only worked out the number of muffins needed. This may be related to the fact that he changed the number of the third tray to 7, which meant that he had precisely enough for the class. It is possible that Learner 3 did not interpret the question stated in the problem correctly.
In reflecting on the post-test, it became clear that it was generally the same learners who used "writing to solve mathematical problems" to explain how they solved the problems.
Some included more detail than others. The researcher did not specifically ask learners to include written explanations in the post-test in an attempt to see whether the implementation of the different types of writing tasks during the data collection period had an impact on their ability to solve and explain problems. As evident in many of the examples of learners' strategies during the post-test, it was possible to see more detail in their writing, both in their strategies and in their explanations. The introduction and implementation of the writing tasks helped learners to think through what they were doing in more detail in order to explain it to others. This type of clarification was encouraged throughout the data collection period when learners solved problems individually and in pairs. The post-test showed that many of the learners continued to clarify and explain their thinking in this way. Writing tasks, specifically "writing to solve mathematical problems", supported learners when they solved the problems.
SECOND SET OF INTERVIEWS
The same learners interviewed after the pre-test were interviewed again after the post-test.
The purpose was to ascertain whether the introduction and implementation of different types of writing tasks had an impact on the learners' ability to solve and explain their strategies and solutions to mathematical problems. The interview questions focussed more on the different types of writing tasks and how learners used them within the context of solving problems to understand how writing may or may not give support when solving mathematical problems.
Learners from the above average ability group wrote more detailed explanations in the post-test than learners from the other ability groups. During their interviews, they were able to give more detailed, longer explanations of their strategies. There could possibly have been a link between their use of writing when solving problems and their improved verbal explanations.
Below is an excerpt from Learner 1's (AA) interview. Learner 1 (AA) displays a deep understanding of mathematical concepts, which enables him to solve problems competently at higher levels of number learning in the LFIN. As with his pre-test and first interview, his strategies showed that he could apply his conceptual understanding to represent his thinking. As shown in the above excerpt of the post-test interview, Learner 1 (AA) deployed more advanced strategies, giving in-depth explanations of how he solved the problems. The added detail in his written explanations when solving post-test problems provided further support to his existing knowledge of mathematical concepts.
When Learner 6 (BA) completed the problems in the post-test, she was able to write a detailed explanation of a strategy that she had used. As a result, she could give a verbal explanation in her post-test interview. Below is an excerpt of the interview with Learner 6 (BA) where she describes her strategy as shown in Figure 4.35.
Researcher: And then you went on to drawing. Why did you choose to draw?
Researcher: So how many tables did you put out for the meeting?
Other learners from the average and below average ability groups referred to the usefulness of writing explanations of their problem-solving strategies. Most of the selected learners described how their use of writing assisted them in making sense of the problem they were solving. Below is an excerpt from the second interview conducted with Learner 7 (BA).
Initially, the learner could not remember or explain his strategy. But, after he read the explanation he had written, he could make sense of the problem and provide the solution.
This breakthrough was an indication of how he could verbally explain his thinking: his written solution and explanation were detailed. His thinking when he solved the problem previously was clearly expressed in his writing, as can be seen in Figure 4.41.
Learner 7: Because if you count in 5s to 40 and then you minus 1, you will get the…to 39 so…so you will need 39 cups.
Researcher: Good. Ok, and how many pots did you draw to be able to make 39 cups of coffee?
Learner 7 (BA) Post Interview
After explaining how they solved the problems in the post-test, learners were given an opportunity to explain how they preferred to write their strategies. Learner 2 (AA) clarified in her interview that using drawing in her writing helped to avoid confusion when she attempted to solve her mathematical problems.
Researcher: Which way do you prefer to solve problems because I see sometimes you, you use sums and sometimes you draw and sometimes you use words. What do you find the easiest for you?
Learner 2 (AA) Post interview
During the post-test interviews learners were able to provide better verbal explanations when compared to the pre-test interviews. This improvement was due to the fact that most of them used "writing to solve mathematical problems" to solve problems and explain their thinking. Their post-test explanations contained more detail, which assisted them in making sense of their strategies.
CONCLUSION
Findings presented in this chapter address the research questions of this study. Findings of the pre-test and first set of interviews showed that learners generally solved problems reflecting lower levels and stages of the LFIN. The LFIN is the framework of number learning by Wright, Martland, Stafford and Stanger (2006) and Wright (2013) used to analyse learners' problem-solving strategies.
Writing tasks were modelled and implemented in the selected Grade 3 class. Learners were given various writing tasks over eight weeks and the researcher provided scaffolding when needed to support and develop learners' use of writing.
The post-test and second set of interviews were conducted at the end of the data collection period. The results of the pre-test and post-test were compared to determine whether writing tasks supported learners in solving mathematical problems. The learners' problem-solving strategies often reflected higher levels and stages of the LFIN when compared to the pre-test. Learners solved problems and explained the thinking behind their solution strategies in more detail during the post-test. Learners were able to provide improved verbal explanations of their strategies during the interviews.
Discussion regarding these findings is presented in the final chapter. Recommendations for further study in the use of writing in mathematics follow, as do reflections on the use of writing to support mathematical problem-solving.
INTRODUCTION
The purpose of this study was to explore how writing supports Grade 3 learners' mathematical problem-solving abilities. The study employed various writing tasks as promoted by Burns (1995a). A pre-test was conducted with a selected Grade 3 class at the beginning of the data collection period to determine learners' ability to solve mathematical problems. A group of eight learners from the class was selected and interviewed regarding their solutions in the pre-test. The writing tasks were introduced and implemented to support learners in solving mathematical problems. Data were collected through audio recordings of in-class ability group discussions and learners' written pieces. The data collection period concluded with a post-test and interviews of the eight selected learners to gauge the impact of the writing tasks on learners' ability to solve mathematical problems. The research questions that guided the study were as follows:
Research question:
How do various types of writing tasks support Grade 3 learners in solving mathematical problems?
Sub-questions:
1. What support do writing tasks give the development of conceptual understanding?
2. What support do writing tasks give the development of problem-solving strategies?
3. How are writing tasks useful in the Foundation Phase mathematics classroom?
4. What challenges do learners encounter when implementing writing tasks in the Foundation Phase mathematics classroom?
This chapter presents a summary of the research process, followed by a discussion of the findings. How the findings answer the research questions of the study is discussed as well as additional themes that emerged during the data analysis. Implications and recommendations that follow from the study are determined. Reflections on the study include its limitations.
SUMMARY OF THE RESEARCH PROCESS
In chapter 4 detailed results from the analysis of the data collected in this study were presented. A brief summary of the research process is followed by discussion of the findings.
At the beginning of the data collection period, all learners of the selected Grade 3 class participated in a pre-test. Learners were required to solve five mathematical problems. Most learners had difficulty solving these problems and communicating their thinking in writing.
Eight learners were selected to be interviewed regarding the strategies they used when solving the problems in the pre-test. These learners represented three mathematical ability groups in the class. The verbal explanations of the eight learners were restricted: most of them had difficulty explaining their strategies. Probing questions were necessary to assist them during the interviews.
Over a period of eight weeks, various writing tasks (Burns, 1995a; Wilcox & Monroe, 2011) were modelled to all the learners in the class. On these occasions, the purpose of each particular writing task was communicated to enhance learners' understanding of the task expectations and encourage the use of writing to explain their thinking. The writing tasks included writing to solve mathematical problems, writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes, and shared writing.
Learners were given various opportunities to complete writing tasks during mathematics lessons.
At the end of the data collection period, a post-test was conducted with all the learners of the selected Grade 3 class. The same procedure was followed as for the pre-test. Learners were not explicitly requested to use "writing to solve mathematical problems" when solving problems in the post-test but a number of them did so, providing varying degrees of detail in the explanation of their strategies. The post-test was used to determine whether the introduction and implementation of writing tasks supported learners to solve mathematical problems and to explain the thinking behind their strategies. The same eight learners were interviewed after the post-test.
SUMMARY OF THE FINDINGS
Data collected in this study were analysed using the theoretical framework outlined in chapter 2. Data were collected from learners' written work, interviews, field notes and audio-recordings of ability group discussions. Learners' written work showed the strategies they used when solving the mathematical problems set. These problem-solving strategies were analysed using the work of Wright, Martland, Stafford and Stanger (2006) and Wright (2013). After the pre-test and first set of interviews had been conducted, writing tasks were modelled to all the learners of the selected Grade 3 class over an eight-week period. Learners were given opportunities to complete writing tasks during mathematics lessons over this period.
While data were being collected many learners, especially those from the average and below average ability groups, chose to discuss the problem before tackling a writing task. Some learners chose to write collaboratively more often than others when the opportunity arose.
Learners became accustomed to doing the writing tasks and developed more detail in their writing which led to more comprehensive written and verbal explanations of their strategies when solving problems.
The role of language needs to be considered when learners solve mathematical word problems. Many learners, especially from the below average ability group, found it hard to understand the contexts of some of the mathematical problems. This difficulty may have been a result of limited reading abilities amongst these learners.
Learners who wrote detailed explanations when solving the problems of the post-test were able to provide detailed verbal explanations of their strategies and solution processes during the post-test interviews. Their use of writing appeared to help them make sense of their strategies and justify their thinking when solving problems. The introduction and implementation of writing tasks supported learners' mathematical problem-solving abilities.
DISCUSSION
Through the data analysis of the findings, the research question and sub-questions were addressed. A discussion of the findings is presented in the following section (5.4.1 to 5.4.4).
The work of Wright, Martland, Stafford and Stanger (2006) and Wright (2013), referred to as the Learning Framework In Number (LFIN), was used to analyse learners' problem-solving strategies. An overview of the LFIN was presented in chapter 2. Although it is primarily a framework for teaching numbers, the LFIN is relevant to this study because it provides stages and levels of development for number learning which helped to analyse learners' strategies. The LFIN covers various aspects of number learning such as the Stages of Early Arithmetical Learning (SEAL), conceptual place value knowledge and early multiplication and division (Wright et al., 2006a) which applied to many of the strategies seen in this study. Through analysis, it was possible to pinpoint the exact level at which each learner solved the problem within a particular aspect of the LFIN. This pinpointing enabled comparisons to be made between the levels of problem-solving strategies used in the pre-test and the post-test.
Using writing to develop conceptual understanding
Research sub-question 1 of the study addresses the support writing tasks give to the development of conceptual understanding. When learners' initial and later use of writing was analysed and compared, it was evident they could provide more detail and refer to distinct aspects of mathematical content when they solved problems. Figure 5.1 shows how one learner was able to solve a problem in his earlier work but did not use mathematical vocabulary to explain his strategy. In Figure 5.2, there is evidence of a detailed explanation using specific mathematical ideas, although he arrived at the incorrect answer. In the literature review, conceptual understanding within problem-solving was discussed.
O'Donnell (2006:349) states that problem-solving needs to encourage a higher cognitive demand where the mathematical content embedded in the problem may not be obvious to the learner. The problems presented throughout the data collection needed to encourage critical thinking and develop conceptual understanding.
Moreover, Orton and Frobisher (1996:23) suggest that "problem-solving shifts the weight from the acquisition of knowledge and skills to using and applying them". Solving mathematical problems should encourage learners to move beyond the use of procedural knowledge and develop their own conceptual knowledge. As Sfard's (1991:28) theory suggests, learners move between their operational and structural conceptions of mathematical ideas when they solve problems. Miller (1992:354) adds that writing is an active process that promotes students' procedural and conceptual understanding of mathematics. Through writing, learners communicate their understanding of mathematical concepts whenever they solve mathematical problems. Heddens and Speer (2006:84) argue that the opportunity to apply conceptual knowledge is as important as understanding the concepts themselves. It provides more meaning and purpose to the knowledge and skills the learner has acquired. Learners use what they know in order to solve that which is unknown.
The learner makes connections with previous knowledge and mathematical problems in order to construct new meaning.
As learners used the writing task, writing to solve mathematical problems, they had opportunities to develop and apply conceptual understanding to the problems. The problems were presented in a way that encouraged learners to connect their existing knowledge to the mathematical content. While learners solved problem 3 during the implementation stage of data collection, they needed the skill of counting in fours as well as the related multiplication table. (Appendix G lists the differentiated mathematical problems used during the implementation of the task, writing to solve mathematical problems.) Although learners had previously engaged with this mathematical knowledge, they had not done so in that particular day's lesson. If there was a mathematical concept that was required to solve the problem, the same concept or skill needed to be included in the mental mathematics section at the start of the lesson. It appeared that most learners required some level of scaffolding in this regard. Scaffolding will be discussed later in this chapter as one of the themes. Many learners may have had difficulty making the connection between the mathematical content embedded in the problem and their existing knowledge on their own.
Before learners solved problem 5, counting in threes was included in the mental mathematics exercises at the beginning of the lesson. This particular problem dealt with the context of tricycles, for which the skill of counting in threes was required. Some learners, generally from the average and below average mathematical ability groups, had difficulty understanding the problem or finding a strategy and solution. This was significant since learners practised the skill of counting in threes before the problem was presented. Most of these learners had difficulty making a connection to their existing knowledge.
Based on these findings, it is evident that the use of writing in mathematics supports the development of conceptual understanding. Throughout data collection, learners were encouraged to connect the problem they were solving to a mathematical concept or idea.
Learners from the average and below average mathematical ability groups seemed to find this more challenging because they often had difficulty finding the mathematical content embedded in the problems. As the writing intervention progressed, learners were given more opportunities to use writing tasks to explain their thinking. They engaged in writing tasks in a way that encouraged them to think through their strategies and solutions in order to write a suitable explanation of their thinking. Development of their conceptual understanding was particularly evident in the post-test, where learners individually wrote more detailed explanations incorporating mathematical ideas. In doing so, they were restructuring schemas in ways that enhanced their understanding (Skemp, 1987:28). These findings show that writing tasks support the development of conceptual understanding. As learners solve and explain mathematical problems, they critically think about the mathematical content in the problem. The majority of problem-solving strategies used by learners in the post-test reflect higher stages and levels of LFIN. These findings suggest that they were able to connect the mathematical content and context of the problem to their existing knowledge. In some instances, learners combined mathematical concepts in their strategies. This showed an improved conceptual understanding: they were connecting concepts to find a solution.
Learners' development of problem-solving strategies
Research sub-question 2 traces the development of problem-solving strategies after writing tasks were implemented. It was evident in the study that learners improved their problem-solving strategies. A remaining question was whether the lower number range of some mathematical problems had an impact on the levels of problem-solving strategies learners used.
Comparison between pre-test and post-test
In the literature review, there was an overview of problem-solving and the use of mathematical problems. It was explained that problem-solving is "a process in which the learner combines previously learned elements of knowledge, rules, techniques, skills and concepts to provide a solution to a situation not encountered before" (Orton, 2004:24). A pre-test and post-test were conducted at the beginning and end of the data collection period.
The purpose of doing so was to gauge the levels of problem-solving strategies learners used before and after implementing different types of writing tasks. As mentioned earlier in this chapter, the stages and levels of the different aspects of the LFIN (Wright, Martland, Stafford & Stanger, 2006:14; Tables 5.1 and 5.2) provided clarity and differentiation between the strategies learners used.
The results of the analysis are in tabular form below, where selected learners' strategies are listed. Many learners in the selected Grade 3 class were restricted in their use of mathematical problem-solving strategies in the pre-test. Their strategies often reflected lower stages and levels of different aspects of the LFIN. Tallies were frequently used as a strategy in the pre-test and the earlier part of the writing intervention. At this stage learners were not expected to describe their thinking, although they had solved mathematical problems prior to this study.
Two learners from the above average ability group already showed strategies that were more advanced during the pre-test when compared to the other learners. When these strategies were compared to those in the post-test, these learners displayed strategies at higher stages and levels where there was evidence of enriched writing to explain their strategies. For example, Learner 2 (AA) usually solved problems in the pre-test at stage 4 of the SEAL (initial number sequence) and level 1 of early multiplication and division (initial grouping) as shown in Figure 5.7 below. Similar results were apparent in strategies used by the average and below average ability groups. There was a marked difference in the strategies Learner 5 (A) used in the post-test when compared to the pre-test. This distinction can be seen in Figure 5.10 and Figure 5.11 below. During the pre-test, his strategies were generally at stage 1 of the SEAL (perceptual counting) and there was evidence of level 2 of early multiplication and division (perceptual counting in multiples). In the post-test he solved problems at the highest stage of the SEAL (facile number sequence) and at level 4 of early multiplication and division (repeated abstract composite grouping). Writing also improved the problem-solving strategies of learners in the below average ability group.
Learner 7, for instance, used basic strategies in the pre-test at stage 1 of the SEAL (perceptual counting) to solve two of the problems. Figure 5.12 below is an example of the strategies he used. The remaining problems did not have a visible strategy or the strategy used did not match the problem type. The post-test reflected a significant improvement in the strategies used to solve problems.
He made evident use of writing to explain how he solved the problems. The more complex strategies reflected in the post-test were at stage 5 of the SEAL (facile number sequence) and level 3 of early multiplication and division (figurative composite grouping). Learner 7's (BA) strategy and explanation in Figure 5.13 are an example of how he used figurative composite grouping by applying his conceptual knowledge of counting in fives. He realised that he needed to subtract one in order to answer the problem correctly.
The remainder of the selected learners displayed similar tendencies when comparing the strategies used to solve the problems in the pre-test and the post-test. Throughout the data collection period, learners were encouraged to write to explain how they solved mathematical problems. Writing in this way enhanced their problem-solving strategies: they considered their strategies in detail in order to write their explanations. Some learners used mathematical language in their explanations which showed that they were able to link elements of their strategies with particular concepts they had learned previously. For example, terms such as double and decompose were used, which some learners referred to as breaking down (Figure 5.11). This usage was an example of how they used their mathematical knowledge to enhance their strategies when solving problems. This phenomenon related to Sfard's theory of the process and object of a mathematical idea where learners could apply existing mathematical knowledge and vocabulary to the process of problem-solving. As explained in chapter 2, the process, or operational conception, is the dynamic action where an idea is conceived at a lower level and the object, or structural conception, is conceived at higher levels that underlie relational understanding (Sfard, 1991:16).
Mathematical problem-solving requires applying existing knowledge of mathematical ideas (objects) as well as the conception and development of new ideas (process). Learners' written explanations became more detailed in the post-test, reflecting mathematical knowledge and vocabulary. This observation suggests that concepts taught in mathematical lessons were being connected to problems being solved. Learners engaged in, and used, processes and objects of their mathematical ideas in order to find solutions. In this study, learners used writing to solve and explain mathematical problems. When they encountered problems, either individually or corporately, learners appeared to use strategic thinking to determine how to arrive at solutions. Learners drew on their existing mathematical knowledge and applied it to their strategies. At times, mathematical problems required a reconstruction of mathematical knowledge: learners developed further invented strategies by adding to, or combining, existing mathematical ideas (Campbell et al., 1998).
Limited use of strategies related to lower number range
This study examines the support that writing tasks give to the development of Grade 3 learners' problem-solving strategies. The mathematical problems used in this study differentiated number ranges according to the three mathematical ability groups present in the participating Grade 3 class. While learners solved mathematical problems, most of the eight selected learners initially used tallies as a strategy. This technique was used by learners representing all three mathematical ability groups. This usage was especially evident during the pre-test and the earlier part of the writing intervention. As learners solved more mathematical problems during the latter part of the data collection, more advanced strategies, less reliant on tallies, were used by the same learners.
Learners from the below average ability group used tallies in their strategies more often than learners from the above average and average ability groups. They may have used tallies more frequently because their mathematical problems used lower number ranges than those of the other ability groups. Learners from the above average ability group, in particular, seldom used tallies. This may have been the result of the higher number ranges of their mathematical problems, since the use of tallies takes more time. This possibility was confirmed by Learner 1 (AA) when he explained in the post-test interview that he preferred using numbers instead of tallies in his strategies since it was easier and quicker. Learners from the above average ability group were less likely to use limited strategies when compared to learners from the below average ability group.
Although the number ranges of the post-test were not markedly higher than those of the pre-test, learners from all ability groups became less reliant on tallies as a strategy. According to Schoenfeld (2013), learners develop their problem-solving strategies with each problem they solve. They apply familiar knowledge and/or strategies to problems they have not previously encountered. Each time they approach a problem, they do so with more mathematical knowledge than before. It cannot be assumed that learners used tallies due to the lower number range. The implementation of writing tasks and the social constructivist approach to this study may have promoted the development of more advanced strategies as learners were exposed to the strategies of their peers.
The usefulness of writing in mathematics
Research sub-question 3 describes the usefulness of implementing the writing tasks in the Foundation Phase mathematics classroom, especially in a South African context. The writing tasks used in this study were American-based. This section assesses how useful the implementation of writing is in supporting Grade 3 learners' problem-solving abilities.
The usefulness of writing in problem-solving
As stated in the CAPS Mathematics curriculum for Foundation Phase, "solving problems in context enables learners to communicate their own thinking orally and in writing through drawings and symbols" (DBE, 2011:9). As explained in chapter 2, the curriculum does not specifically stipulate the use of writing in words when solving mathematical problems but researchers, such as Burns (1995a), promote the use of writing in words. This study sought to determine whether the use of writing, including words, can support learners' mathematical problem-solving strategies. Burns (1995a:13) explains that writing helps learners clarify and define their thinking as well as examine their ideas and reflect on what they have learned in order to deepen and extend their understanding.
According to Morgan (1998:22), writing assists learners in the investigative process, supports reflection and develops problem-solving processes. By engaging with their thought processes, learners deepen their conceptual understanding (Miller, 1991:517). Throughout the data collection period, it was evident that the use of writing gave learners opportunities to enhance their mathematical knowledge when they critically engaged with others and developed their thinking. By engaging with others, learners were encouraged to reflect on their strategies and clarify their understanding of mathematical ideas. Writing about problems demonstrated individual learners' understanding, misconceptions and difficulties, which may be responded to individually or corporately (Borasi & Rose, 1989:358).
Observing the use of writing afforded an opportunity to pinpoint specific misconceptions and address them timeously and appropriately.
There was a marked difference in the learners' written strategies and explanations, as well as their verbal explanations, in the post-test when compared to the pre-test. The amount of detail included during the post-test demonstrated that learners could justify their solutions.
This may have been a result of the writing intervention, through which they were able to engage critically in the ability group discussions and learn from the strategies and explanations of other learners. Implementation of the writing tasks had an impact on the development of learners' problem-solving strategies and improved their ability to explain the thinking behind their solution processes. Writing in mathematics can be useful in the South African Foundation Phase classroom.
Preferred types of writing tasks
Research sub-question 3 of the study focuses on the usefulness of the writing tasks in the Foundation Phase mathematics classroom, highlighted in section 5.4.3. The theme addressed here discusses the types of writing tasks preferred in the participating Grade 3 class, which suggests that only two of the writing tasks were particularly useful at this level.
At the end of the data collection period, the selected eight Grade 3 learners were interviewed about their strategies when solving mathematical problems. Although the questions focused largely on the strategies and explanations resulting from the post-test, they were also asked about the types of writing tasks they used as well as the type of writing task they preferred.
Five types of writing tasks were implemented in the selected Grade 3 class. Two of these, namely writing to solve mathematical problems and writing to explain, were more common or popular amongst the sample of eight learners. Three learners mentioned that they preferred writing to solve mathematical problems while two learners chose writing to explain. Each of the remaining three learners chose one of the remaining writing tasks which were writing to record (keeping a journal or log), writing about thinking and learning processes, and shared writing.
When their responses in the second interview were analysed and compared with examples of their writing tasks, it was clear that learners were more inclined to use two writing tasks: writing to solve mathematical problems and writing to explain. It was noted that, as learners used these writing tasks more frequently, they tended to write longer pieces which included more detail.
Learners made more use of mathematical vocabulary in their writing and they showed more evidence of linking the writing task to prior knowledge. Learners were able to construct and develop their knowledge by building on the knowledge they already possessed. As mentioned in chapter 2, learners are required to learn many higher order concepts in mathematics where it is essential that the learner has already assimilated necessary lower order concepts into his cognitive structure (Skemp, 1989:64). In order to write solutions and explanations, learners needed to build on the concepts and skills they already knew as a means to develop their problem-solving abilities. Writing tasks were a means to support their thinking when they attempted to clarify their strategies.
Learners' preference for these writing tasks can be understood when the purposes of the writing tasks are taken into account. As learners engaged in writing tasks, similarities appeared between the writing tasks even though the intended purpose of each task was different. In each type of writing task, learners were required to explain their mathematical knowledge at varying levels. In certain writing tasks, such as writing to explain and writing to solve mathematical problems, this connection between writing and solving problems was more evident because explaining and clarifying their knowledge was directly linked to the purpose of those particular writing tasks. In the other writing tasks, learners were still required to provide an explanation of their conceptual understanding, but in different ways.
The purpose of writing to record (keeping a journal or log) is to keep an ongoing record of mathematics lessons (Burns, 1995a:51). The displayed prompts that most learners used gave them opportunities to write about what happened in the lesson as well as what they did or did not understand. They could summarise their mathematical ideas, reflect on their understanding and make observations. By doing so, learners would provide explanations of their mathematical knowledge which serves the same purpose as that of the other writing tasks.
Learners use "writing about thinking and learning processes" to describe mathematics activities in the class (Burns, 1995a:40). When using this writing task, learners provide a description of the activity as well as a reflection on mathematics. They are encouraged to explain their thoughts on the activity they are writing about.
As learners engage in shared writing, they reflect on mathematical concepts through writing a story, for example. This writing task enables learners to review and internalize mathematical concepts and ideas as well as develop mathematical communication (Wilcox & Monroe, 2011:526). Although learners are writing more creatively, their mathematical knowledge is still communicated and explained.
At times, the purpose of each writing task appeared blurred due to the similarities between the different writing tasks. This blurring may have led some learners to write in a different manner than the writing task required. In Figure 5.15, it is evident that Learner 3 (A) may have confused the purpose of writing about thinking and learning processes with another type of writing task. It became essential to communicate the specific purpose of each writing task repeatedly while encouraging learners to explain their thinking in all of the tasks. Although each writing task was beneficial in its own right, the findings suggested that implementing these five different writing tasks may not be necessary or appropriate in the Foundation Phase. The three least popular tasks, as indicated by the sample of eight learners in this study, could probably be introduced in the higher grades. Writing to solve mathematical problems and writing to explain may appear to be more relevant to conceptual understanding at a Grade 3 level because they focus learners' attention on understanding mathematical ideas. Later in this chapter, recommendations regarding the relevance of specific writing tasks in mathematics as related to the research questions are discussed.
The challenges learners encounter during implementation of writing tasks
In this study, learners encountered various challenges when implementing writing tasks in mathematics lessons. Research sub-question 4 describes these challenges. Various methods were used to help learners overcome such challenges. Learners were given opportunities to work collaboratively on some occasions. Scaffolding was provided in various ways.
Comparing individual and collaborative writing
Research sub-question 4 addresses the challenges that learners experienced when they engaged in writing tasks. As a result of these challenges, there were occasions during the writing intervention when learners could work collaboratively. When they did so, they generally worked in pairs. During the pre-test and post-test, learners solved the mathematical problems individually to determine whether the writing tasks affected their ability to solve mathematical problems. While the writing tasks were implemented in the class during the intervention, learners had opportunities to write individually and collaboratively. On certain occasions learners were encouraged to work with a partner, especially when they were experiencing difficulty reading the problem or the context of the problem was too challenging. All the learners solved problem 1 collaboratively. Since this was the first time they were encouraged to implement "writing to solve mathematical problems", it was hoped that learners would be able to assist each other when they solved the problem and wrote their explanations. Despite solving the problem in pairs, learners required further prompting to guide them in their strategies and writing.
As the data collection progressed, learners were given the option to work in pairs or individually when they solved certain mathematical problems. Some learners preferred to work individually while others seemed more dependent on the support of a partner. Most learners who requested to work collaboratively were from the average and below average mathematical ability groups. These learners often needed varying degrees of scaffolding and support during mathematics lessons.
After learners solved the context-free, routine problem (problem 9 in Appendix G), they were given time to revisit the problem and explain their thinking. A few learners used this time to discuss with their peers what they had done. This kind of discussion among learners occurred more regularly during problem-solving activities as the data collection period progressed. Some learners seemed to be more at ease when sharing their ideas and strategies with peers. Others, however, wanted to know whether their answers were correct before sharing their strategies.
Learners were given opportunities to write collaboratively for the other writing tasks. One of these tasks was writing about thinking and learning processes, where learners wrote a letter to the principal about mathematics lessons. Another collaborative writing task was the use of shared writing. Here, learners wrote a story imagining that they were one centimetre tall to elaborate on their understanding of measurement. In both these instances, learners engaged in discourse about the mathematical knowledge required in order to write in a collaborative manner. This provided a meaningful opportunity for learners to participate in discussion where scaffolding of conceptual knowledge was necessary to complete the writing task. Although learners were encouraged to write collaboratively on these occasions, a few learners chose to write individually. Learners who chose to do so were mostly from the above average ability group.
In reflecting on the learners' development of the use of writing in mathematics throughout the data collection period, it was clear that the more learners were encouraged to incorporate writing into their tasks, the more at ease they were. Some learners, however, required more support through collaborative writing to reach this stage. Based on the findings of this study, it is observed that writing in this way could improve learners' individual writing abilities. This was evident in the post-test where learners solved and explained their problems individually.
The majority of the learners wrote longer texts and included more detail than in the pre-test.
This improvement was particularly noticeable with learners from the below average ability group and learners who experienced language difficulties. Opportunities to engage in collaborative writing may assist learners when implementing writing tasks and improve learners' ability to write in mathematics individually.
Scaffolding using writing tasks
Throughout the data collection period, it was clear that learners encountered challenges when implementing the writing tasks in mathematics lessons. Scaffolding was necessary to address these challenges so that learners were able to use writing tasks in mathematics in Grade 3. In chapter 2, scaffolding was described as the learning activities the teacher or more knowledgeable other (MKO) uses to develop knowledge (Siyepu, 2013:5). In this study, scaffolding occurred in the following ways. Scaffolding was provided through the implementation of writing tasks as a means of breaking up a problem into manageable parts.
Peer interaction and collaboration were other forms of scaffolding. Learners were prompted during observation when they solved mathematical problems. Written and verbal feedback were also given to enhance the learners' ability to solve problems.
The writing tasks themselves were used as a form of scaffolding which helped to support and develop learners' ability to solve mathematical problems (Daniels, 2001:108). Writing tasks created opportunities for learners to construct and apply mathematical knowledge. The mathematical problems learners solved during this study were linked to the mathematical knowledge they were expected to develop as part of the CAPS Mathematics curriculum prescribed for Grade 3. The problems drew on learners' number learning within the sphere of addition, subtraction, multiplication and division. By solving these problems, learners made connections with these mathematical concepts. For example, the third problem learners solved as part of the writing intervention required learners to use and apply their knowledge of counting in fours and/or the related multiplication table (Figure 5.16). Learners' development of mathematical knowledge was scaffolded through this problem. This was the case with other mathematical problems as well as other writing tasks in this study. Burns's (1995a) methodology of using writing in mathematics was introduced and implemented as a tool to scaffold learners' understanding and support learners when they solve mathematical problems. When learners solved the mathematical problems, they were sometimes provided with more manageable steps to find a solution. The problem was broken up into parts so that learners solved one part first before moving on to the next part. Doing so simplified the learners' role in solving the problem (Daniels, 2001:107). Below is an excerpt from the field notes taken during research which describes the scaffolding given when learners solved the first problem during the writing intervention. The problem for the average ability group reads as follows: The examples above (Figure 5.17 and Figure 5.18) show how learners from the average ability group were able to solve the problem after scaffolding had occurred. These learners erased the work they had done before scaffolding, so no comparison is possible between their strategies before and after discussion. Seeing learners' attempts at various strategies would have helped me to better understand the decisions they made to try alternative strategies. These strategies show that Learner 4 and Learner 5 understood the context of the problem and were able to solve it according to the addition/subtraction problem type.
Learner 4 used addition and explained that counting was used to arrive at the solution.
Learner 5 used his knowledge of place value to decompose 71 into tens and ones. He explained how he subtracted 32 when he crossed out tens and ones. He displayed knowledge of subtracting through the decade when he crossed out ten and changed it to 9.
The group of average ability learners needed a visual representation of the problem coupled with verbal prompts to understand the context of the problem. Polya (1957:110) explains that relevant elements from formerly acquired mathematical knowledge should be used to solve the present problem. Learners solved similar addition/subtraction problem types in previous mathematics lessons that required them to find the missing addend. By using a visual representation, it was possible to relate the problem to previous problems and mathematical ideas.
Scaffolding occurred in this study through prompts given to learners while they solved mathematical problems (Sperry Smith, 2013:10). One occasion where prompts were required was when learners solved the fifth problem of the writing intervention. As shown in Figure 5.3, Learner 7 (BA) made tallies to represent the number of tricycle wheels in the problem. But, he did not go beyond drawing these tallies in order to solve the problem.
Written feedback was provided in order for him to continue working on this problem the following day. He was uncertain of the feedback given so further scaffolding was necessary.
While he was explaining his strategy, his tallies were circled into groups of three by the researcher as a means of scaffolding his understanding of the context of the problem. He was asked what this circling represented. Learner 7 (BA) was able to use the circled tallies to clarify that one group represented one tricycle with three wheels.
When learners engaged in the writing tasks in the lessons, different forms of scaffolding took place to develop learners' understanding of the writing tasks as well as their mathematical knowledge. Scaffolding and prompts were not provided during the pre-test and post-test because this could have negated their purpose which was to determine whether the writing tasks supported learners' mathematical problem-solving abilities.
Based on these findings, it is evident that the use of writing in mathematics could provide scaffolding to overcome some of the difficulties learners encounter when implementing writing tasks. Writing tasks, verbal prompts and the teacher/researcher's written feedback may scaffold learners' conceptual understanding when they engage with mathematical problems.
The role of language in problem-solving
One of the challenges observed was the role language plays when learners engage in mathematical problem-solving. As mentioned in chapter 4, some learners either misread or misinterpreted mathematical problems which affected their ability to solve them and explain what they had done. When learners solved the second problem (see Appendix G), it became clear that more time needed to be spent on reading and interpreting the problem in order to select an appropriate strategy and explain it. If learners struggled with the contextual aspect of the problem, they would probably be unable to solve the problem in a meaningful way. As a result, teachers should carefully consider the wording of problems when they are presented to learners. Some learners may become confused by the language in the problem: the context or question in the problem is not always presented in an understandable way.
In other instances it was apparent that, where learners had below average reading and comprehension abilities in language, they experienced difficulty reading and interpreting the mathematical problems coherently. This difficulty was particularly evident in the case of Learner 7 and Learner 8 who were selected from the below average mathematical ability group. As shown in Figure 5.19, Learner 8 (BA) had difficulty solving the problem about the height of the wall (problem 2), even after the strategies were discussed in the ability group. If learners lack a firm understanding of mathematical vocabulary needed to clarify their mathematical knowledge, their explanations may be limited. Learners may become confused by everyday language that has a more specific mathematical meaning when used in mathematical problems (Luneta, 2013:94). This confusion was evident throughout the data collection period. As mentioned in the literature review, time must be spent on teaching mathematical vocabulary which links different concepts. This linkage enhances learners' conceptual understanding by eliminating errors caused by misunderstanding of mathematics vocabulary (Koshy et al., 2000:177). An increased effort in the development of language and writing across the curriculum could benefit learners and enhance their ability to talk, read and write about what they have done to solve mathematical problems (Clemson & Clemson, 1994:84). Developing learners' language may enable them to express their thinking and justify their solutions in other subject areas, not only mathematics.
Strategies according to problem types
Learners encountered challenges concerning which type of strategy to use for which kind of problem. The literature review dealt with problem types and strategies. Distinctions were made between addition/subtraction problem types and multiplication/division problem types.
During the pre-test, the selected learners from the below average ability group did not always use strategies that matched the problem types. For instance, Figure 5.20 shows how Learner 7 (BA) used an addition strategy to solve a multiplication/division problem (problem 2 of the pre-test). When solving problem 5 of the pre-test, the same learner had no visible strategy (Figure 5.21). During the post-test, learners from the above average and average ability groups generally used strategies that related to the problem type. They appropriately chose strategies that reflected the basic operation present in the mathematical problem. Figures 5.22 and 5.23 show two strategies used by learners from the below average group during the post-test.
Learner 7 (BA) used multiplication/division appropriately in Figure 5.22 but his solution was incorrect: he stated that 39 cups were needed rather than 8 coffee pots. Although Learner 6 (BA) used tallies in her strategy (Figure 5.23), she did so in groups to show the trays mentioned in the context of the mathematical problem. These strategies reflected the appropriate problem types at Grade 3 level in accordance with the CAPS Mathematics curriculum (DBE, 2011:79). There was more evidence of appropriate strategies occurring in the post-test than in the pre-test. Some learners may have encountered language difficulties that led to inappropriate strategies being used for the different problem types. Another challenge learners may have experienced could have been a lack of deepened conceptual understanding. Learners may have been further challenged if they could not identify the mathematical content embedded in the problems in order to use the appropriate strategies for the problem types. The findings of the post-test suggest that implementation of the writing tasks had an impact on the use of appropriate strategies according to the problem types the learners encountered when compared to the pre-test.
The findings demonstrate the support that writing gives to Grade 3 learners in solving mathematical problems. Consequently, the implementation of writing tasks seems useful in the Foundation Phase mathematics classroom because it could enhance learners' conceptual understanding and problem-solving strategies. Although learners encountered difficulty when implementing the writing tasks, scaffolding and collaborative writing opportunities enabled them to use writing in mathematics successfully as was seen in the results of the post-test.
SIGNIFICANCE OF THE STUDY
This study is significant for the mathematics classroom, especially in the area of problem-solving. Learners used writing tasks to support their mathematical problem-solving strategies and explain their solutions. Learners actively engaged in the construction of mathematical knowledge and developed conceptual understanding. By working collaboratively, learners were exposed to the problem-solving strategies of others in the ability groups. Exposure to other learners' strategies may have allowed them to reflect on suitable problem-solving strategies and encouraged learners to think critically when they solved and explained problems.
This study is significant for implementation of the current curriculum in South African schools.
Foundation Phase learners are expected to communicate their thinking using writing (DBE, 2011:9). This study reveals the need for teachers, pre-service and in-service, to be trained in developing writing skills and implementing such skills. Training enables teachers to model good writing practices by explaining and justifying the solutions for the mathematical problems they encounter.
LIMITATIONS OF THE STUDY
As mentioned in chapter 1, there were limitations to the study. The researcher for this study was also the teacher of the sample group; as such, there was potential for bias in the research process. This bias could have affected the selection of the sample of eight learners and the data analysis process. The validity of the data was ensured, however, by using multiple data collection instruments and audio-recordings of ability group discussions and interviews. The validity of the data helped to secure an objective thesis report.
The sample of the study was small. Eight learners were purposively selected from one Grade 3 class. The small sample limited the study resulting in the inability to generalise the findings to a broader population.
The mathematical problems used in the study were differentiated according to the expected number ranges of the three mathematical ability groups. The contexts, however, were identical for the problems across the ability groups. For some of the mathematical problems, it appeared that the number range was too low and did not present enough of a challenge for learners. This was evident in all the mathematical ability groups. At other times, some of the learners from the above average ability group were not sufficiently challenged by the problem. It appeared that either the context of the problem was too simple or the number range was not suitable. Learners were not adequately encouraged to develop strategies demanding a higher cognitive effort: the solution and/or strategy may have been obvious.
On other occasions, the context of the problem proved too perplexing for the below average learners. In addition, most learners found aspects of reading and language difficult. The context of the problems may have caused learners to have difficulty identifying the mathematical content within them.
The normal school programme had an impact on the data collection envisaged prior to the pre-test. Data could not be collected three times per week as planned. As a result, the data collection period was shortened to accommodate the assessment programme of the school.
RECOMMENDATIONS
The purpose of this study was to determine how the use of writing tasks supports learners' mathematical problem-solving strategies. The various writing tasks of Burns (1995a) and Wilcox and Monroe (2011) were used as a writing intervention. The writing tasks were modelled to the learners and implemented in the selected Grade 3 class.
As described earlier in this chapter, there was a distinct difference in the strategies and explanations learners used in the pre-test and the post-test of this study. Learners used "writing to solve mathematical problems" in the post-test without being instructed to do so.
Their detailed use of writing allowed them to explain their strategies better during the second interview. This improvement suggests that the use of writing tasks increased learners' ability to describe the thinking behind their solution processes when they engaged in mathematical problem-solving in this study. The use of writing provided the environment for learners to engage with the teacher and peers more openly and critically. They were actively encouraged to reflect on their thinking in order to explain it to others. In addressing two of the research sub-questions of this study, learners improved in their ability to solve and explain mathematical problems, which demonstrated the development of their conceptual knowledge.
Writing in mathematics is an essential part of the curriculum in Foundation Phase in South Africa. This study showed the benefits of the use of writing when learners engage in mathematical problem-solving. Although this study used writing tasks initially implemented in the United States, this study proved the usefulness of such tasks in the South African Foundation Phase classroom. Further research is necessary which deals with the use of writing beyond the scope of mathematical problem-solving. Based on the results of this study, it would be fair to assume that other areas of knowledge and skills could benefit from the implementation of writing in the mathematics classroom. Further research needs to be done in the higher grades when learners engage with increasingly complex mathematical concepts.
Previous international research has been conducted where writing explanations in mathematics were part of the content courses for preservice teachers (McCormick, 2010).
Research showed it was beneficial for improving conceptual understanding in mathematical problem-solving and developing writing practices. Teachers should model good writing practices by explaining and justifying the solutions for the mathematical problems they encounter. A study conducted by Craig (2011) researched the use of writing as a tool in a first-year university course in South Africa. This study did not rely on pre-service teachers as its sample, unlike the international studies referred to earlier. Similar research should be conducted in education faculties of universities in the South African context so that pre-service teachers are given the knowledge and tools to implement writing in mathematics in their classrooms in future. In this way, mathematics teachers can be equipped to model and implement writing to support learners' mathematical problem-solving abilities. They would be prepared to deal with any challenges learners may encounter while implementing the writing tasks.
CONCLUSION
The purpose of this research study was to determine how various types of writing tasks support Grade 3 learners' mathematical problem-solving ability. The writing tasks included writing to solve mathematical problems, writing to record (keeping a journal or log), writing to explain, writing about thinking and learning processes (Burns, 1995a) and shared writing (Wilcox & Monroe, 2011). A sample of eight learners was selected and interviewed regarding their strategies in the pre-test and the post-test. Learners' written pieces produced during the writing intervention, field notes and audio-recordings of ability group discussions formed part of the analysis for this study.
The CAPS Mathematics curriculum for Foundation Phase states that learners should be writing in the mathematics class. This study revealed that writing in mathematics is beneficial to the area of problem-solving within mathematics in accordance with the prescribed curriculum. The writing tasks supported learners in their problem-solving strategies: learners were using more advanced strategies by the end of the data collection period. Selected learners were able to provide better verbal and written explanations of their solutions.
This study showed that two of the writing tasks, namely writing to solve mathematical problems and writing to explain, were valuable tasks that developed the learners' ability to explain their thinking. These two writing tasks should be considered as primary tasks in the mathematics curriculum while the other writing tasks may be secondary. The secondary writing tasks include writing to record (keeping a journal or log), writing about thinking and learning processes and shared writing. These writing tasks did not prove as useful to the sample of learners in this study.
This study concludes that learners who engage in writing in mathematics may be able to reflect critically on their thinking when they construct mathematical knowledge and skills that are essential in the problem-solving process. Teachers, both in-service and preservice, may be encouraged by this study to incorporate writing into their daily mathematics lessons. This incorporation of writing supports learners when they apply mathematical knowledge to problem-solving.
REFLECTIONS ON THE STUDY
There are a number of elements that have enabled me to develop as a teacher and as a researcher. The opportunity to engage in research of this nature helped me to reflect on and improve my daily teaching practice. Although problem-solving was regularly planned as part of my mathematics lessons, this study made me more attentive to the way I used problem-solving in my daily lessons. Including the writing tasks added a different dimension to my mathematics lessons where I could readily gauge the improvement in the learners' abilities to solve problems.
The learners themselves became increasingly enthusiastic as they continued engaging with the writing tasks during the data collection period. They were more eager to solve mathematical problems than before the writing tasks were introduced. After the data collection was completed, I noticed learners continued using writing to explain their thinking even when I had not prompted them to do so. When I asked them their reason for using writing in mathematics, many of them explained that it helped them to make sense of what they were doing.
Added to this, I found that I became more discerning in my use of scaffolding. This study enabled me to recognise when scaffolding was genuinely needed and when I needed to allow learners to discover the mathematical content on their own. In a sense, I felt more at ease in allowing learners the space to grapple with the context of a mathematical problem that would sometimes take more than one lesson. In other words, I could allow learners to delve deeper into their strategies, taking time to engage in critical thinking and explain their solutions.
During the data collection period, I became increasingly aware of the use of erasers when solving problems. I felt some data may have been lost or incomplete due to learners erasing incorrect strategies. Seeing learners' attempts at various strategies may have helped me to better understand their thinking behind their solutions. These attempts may have given me a better comprehension of their later attempts and the decisions they made to try alternate strategies.
This research study did not follow its original plan. As a teacher-researcher, I was faced with a few challenges in the implementation of the writing tasks and managing the data collection plan. I had originally planned to collect data continuously over a ten week period, excluding two weeks to conduct the pre-test and post-test that stretched over two school terms. My data collection plan catered for three opportunities per week where learners were either engaged in writing activities or I was modelling the writing tasks to them. However, the daily school programme did not always afford the time for this to occur as planned. At times, the structure and content of certain mathematics lessons required more time to be devoted to content areas that needed attention which meant there was not enough time to comprehensively engage in problem-solving and writing tasks. As a result, I found that some weeks I was able to collect more data than others. Therefore, during certain weeks I was able to collect data almost every day whereas I could only collect data once or twice during other weeks. Added to this, learners' assessments also needed to be completed for the quarterly report cards which meant that I was unable to collect data for a period of two weeks. This delay occurred during the earlier part of the data collection period. I had just introduced and implemented the first writing task, namely writing to solve mathematical problems, and I was concerned that momentum would be lost. This was not the case and the learners were able to continue implementing the writing task and developing their mathematical problem-solving abilities.
Being a teacher-researcher was challenging as I mentioned in the limitations of my study in Chapter one. I had to be continually aware of the tension between the two roles, knowing which role was required more actively at any given time. It was particularly challenging as data were being collected during most mathematics lessons when learners engaged in writing tasks, problem-solving and ability group discussions. I needed to be mindful of when scaffolding was appropriate in my role as a teacher and when I needed to step back in my role as a researcher. As the study progressed, I became more comfortable in my role as researcher and felt more at ease in striking the balance between the two roles during mathematics lessons.
Moreover, I was conscious of the potential bias that could occur as I conducted the study in my own class. As I selected the eight learners as the sample for this study, I had to largely disregard their literacy abilities and focus more on their mathematical abilities. This process was made a little easier in that learners had already been placed in different mathematical ability groups which were separate from their literacy ability groups. This allowed me to ensure that learners selected based on their solutions in the pre-test reflected the three mathematical ability groups.
In future, I would spend more time perfecting the data collection plan as far as possible. I have learned that, as a researcher, I need to be more prepared for, and anticipate, potential pitfalls that may occur.
Tracking Outfield Employees using GPS in Web Applications
This paper presents e-Track, a web-based tracking system for outfield employees that caters for various business activities as demanded by business owners. Such demands may range from simple task assignment to employee location tracking and remote observation of the employees' task progress. The objective of the proposed system is twofold. First, it provides a mobile application through which employees clock in to work. Second, it provides a standalone web system for employers to determine the approximate location of staff assigned outfield duties. IP address recognition ensures that no buddy punching takes place. e-Track is expected to increase efficiency among employees by saving the time spent travelling between branches during outfield duties. In the future, e-Track will be integrated with claim and payment modules to support arrangements for outfield duties.
Introduction
Business owners are now looking for more ways to incorporate technology to their benefit. Every business needs its own set of employees to maintain sustainability. It is worth noting that running a business is a risky venture: a survey attests that 49% of businesses fail within their first five years and approximately 30% of businesses do not even make it through the first two years [1]. One of the reasons why businesses fail, especially small businesses, is a lack of employee productivity [2].
There are several problems employers and employees face which reduce their productivity in a working environment. In most organizational settings, there is no standard method of tracking employees with outfield work that actually meets a business's needs. The way the working environment currently functions is not optimal for nurturing an individual's full potential. Employees have to check in to work using punch cards and have to stay in their office cabins throughout their working hours [3]. Even if an employee has outfield duties, it is a complicated procedure for the employee to travel to a destination and then return to the office to sign out before office hours end. The scenario worsens if the company has multiple offices and employees have to travel constantly.
Another issue is buddy punching. According to Synerion [4], buddy punching occurs when one individual checks in for a colleague without that colleague being there. This is a waste of company resources, as each employee is paid accordingly. The issue may lead to employee dissatisfaction, as some employees give full effort while those who slack off get paid the same amount. In the long run, there is a high chance of employee productivity decreasing. It is clear that the traditional method of tracking employee activities based on card swipes or manual entry using a book-keeping method is no longer effective.
To address these issues, a web-based outfield employee tracking system named e-Track is proposed. The e-Track architecture is based on a client-server design whereby there is a mobile application for employees to access and a standalone web application for employers to determine the approximate location of staff assigned outfield duties. Any organization that sends its employees on outfield duties will find this application extremely useful, as it can assign duties to employees based on their locations. In addition, this application allows companies with many branches to take advantage of the system by letting employees check in to work through the mobile application. IP address recognition ensures that no buddy punching takes place; a sketch of such a check is given below. The advantages are mutual, as employees will also feel secure that their whereabouts are known by the employer in case of any mishaps.
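As an illustration of how IP address recognition could block buddy punching, the following is a minimal Python sketch. The branch subnets, the pairing of each account with a registered device, and the function names are illustrative assumptions; the paper does not specify the actual mechanism.

```python
import ipaddress
from datetime import datetime

# Hypothetical branch subnets registered by the employer.
BRANCH_SUBNETS = {
    "HQ": ipaddress.ip_network("192.168.10.0/24"),
    "Branch-A": ipaddress.ip_network("10.20.0.0/16"),
}

def validate_clock_in(employee_id, device_id, source_ip, registered_devices):
    """Accept a clock-in only from the employee's own registered device
    and only from inside a known branch subnet."""
    if registered_devices.get(employee_id) != device_id:
        return False, "device not registered to this employee"
    ip = ipaddress.ip_address(source_ip)
    for branch, subnet in BRANCH_SUBNETS.items():
        if ip in subnet:
            return True, f"clocked in at {branch}, {datetime.now().isoformat()}"
    return False, "source IP outside all branch subnets"
```

For example, `validate_clock_in("E042", "dev-9f3", "192.168.10.55", {"E042": "dev-9f3"})` would succeed, while the same call made from a colleague's device or from an unknown network would be rejected.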
The remainder of this paper is organized as follows. Section 2 presents the related work, Section 3 presents the architecture and prototype of the proposed system e-Track, and finally Section 4 presents discussions and conclusions with plans for future work.
Related Work
In this modern era, tracking devices have become common household gadgets in daily life for various reasons, ranging from tracking missing pets to missing phones. The devices are small and convenient to use. Nonetheless, tracking devices have not been fully utilized in organizational settings. In this section, three online time tracking applications are reviewed: TSheets, ActivTrak, and Veriato 360.
TSheets
TSheets is a mobile application that allows users to track time spent from any location, using any device, in real time [5] (refer to Figure 1). This application is useful for employees who work in remote locations or switch jobs regularly. Employees can track time easily, as there are clock-in and clock-out reminders, employee break and overtime alerts to ensure all employees are aware and up to date on the current situation in the company. The main functionality of TSheets is the use of the Global Positioning System (GPS) for online time tracking. The location at which the employee has clocked in is visible in the aerial view. Secondly, the time entry can be manual, punch, or custom. GPS location points are immediately attached to the employee's timesheet when they log in, log out, or even change job modes. Employees are allowed to log in and log out in real time. In extenuating circumstances, employees can enter their time manually and allocate time to custom projects and tasks. Finally, in the case of any absenteeism such as sick days, personal days, or even a family vacation, TSheets tracks employee time off to manage employees' paid time off. This function enables employees to request time off directly from the mobile application, and employers are able to approve the requests in a similar manner.
ActivTrak
ActivTrak is free employee monitoring software developed on the basis of a free cloud-based monitoring service [6] (refer to Figure 2). It allows employers to know who is doing what and for how long by tracking application and web usage on each workstation. ActivTrak has three main features: real-time monitoring, alarms, and website blocking. For real-time monitoring, ActivTrak allows administrators to view a live data stream of the active window on any monitored device's screen. Administrators are able to see the time, device, user, title, and even its public IP address. With real-time monitoring, ActivTrak provides details on exactly what is happening on the monitored devices at any point, from anywhere.
ActivTrak also has alarms that can be set to trigger on specific employee actions. For example, if an employee visits a restricted website for longer than 20 seconds, an alarm is raised prompting them to close the tab. Each time an alarm is triggered, there is an option to take multiple screenshots of the complete screen as proof of evidence. The admin may also opt to display a popup message on the screen of the user. If all else fails and the employee is still on the restricted website, there is an option to terminate the application which triggered the alarm.
Finally, websites which employers feel take employees' productivity away can be blocked. A website can be blocked by entering the domain name and clicking the 'blocked' column. Websites can be blocked for individual users, groups, or even everyone at once. Thus, there are no complications if separate groups of people must be blocked from specific websites.
Veriato 360
Veriato 360 is a record-keeping system that presents detailed, accurate, and actionable data used in incident response, together with high-risk insider monitoring and productivity reporting [7] (refer to Figure 3). Veriato 360 has a different set of functionalities compared to TSheets and ActivTrak. First, Veriato 360 has file and document tracking. This feature allows employers to see when new files are created and when existing files are edited, renamed, or deleted. It is also useful as supporting evidence in cases of leaks, breaches, or theft.
Second is email recording, whereby Veriato 360 records emails between clients and employees. It supports reading traditional email clients such as Outlook and also popular webmail such as Yahoo and Gmail. Veriato 360 allows searching across all employee emails simultaneously and sorting them according to the Subject, CC, BCC, and Web Mail Host columns. Finally, there is the keystroke logging feature. This option records every keystroke typed. Applying this feature to highly positioned individuals in the company ensures complete transparency among people with a high level of access in the organization. Keystrokes can also be attributed to the application in which they were typed; for example, an employee typing in Outlook will be displayed under 'Outlook' and not just vaguely under 'user'.
Architecture and Prototype of e-Track
The proposed project makes use of a mobile and web application architecture. Usually, a client-server architecture is applied for any form of mobile application. However, with the development of technology and different architectures, the opportunity to take into consideration specific aspects related to mobile devices and their connectivity with servers is now available. Figure 4 shows the architecture for the proposed web-based tracking system for outfield employees, e-Track. Clients are divided into two categories based on how they operate: thin clients and fat clients [8]. Examples of clients are mobile device types ranging from cellular telephones and tablets to RIM devices. Thin clients do not have the ability to run custom application code and therefore rely completely on servers for their functionality. The advantage, however, is that they do not depend on the mobile device's operating system [9]. Thin clients use widely available web and wireless application protocols to display HTML, XML, and WML application content pages.
On the other hand, a fat client usually has one to three layers of application code and can function without a server for a certain amount of time [8]. Practically, fat clients are the most suitable client type for businesses: even when communication between client and server is not established, the application remains useful. For example, a fat client application can accept the user's input and store the data in a local database until connectivity to the server is restored and the data can be transferred. This feature is essential to businesses, as the user can still utilize the application even when not in contact with the server.
However, fat clients do have disadvantages. They rely heavily on the device's operating system and mobile device type, making the code difficult to release and distribute. This also means multiple code versions must be supported on different devices for a proper resolution (Aung, 2016). Fat clients can be implemented in different ways, ranging from one to three layers of application code. Although a single layer may be useful on devices in a small-scale project, it would be extremely hard to isolate individual functionalities and reuse the code in a large project. Table 1 shows the comparison between the two application code approaches, and a sketch of the offline-first behaviour described above is given below.
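A minimal sketch of the fat-client behaviour described above: user input is stored in a local queue (standing in for the local database) and flushed to the server once connectivity is restored. The `send_to_server` callable and the class name are illustrative assumptions rather than e-Track's actual code.

```python
from collections import deque

class FatClientBuffer:
    """Store records locally while offline; flush them once the server is reachable."""

    def __init__(self, send_to_server):
        self._pending = deque()        # stands in for the local database
        self._send = send_to_server    # callable(record) -> bool (True = delivered)

    def record(self, entry):
        """Always accept input, even with no server connection."""
        self._pending.append(entry)

    def sync(self):
        """Try to push everything; stop at the first failure and retry later."""
        while self._pending:
            if not self._send(self._pending[0]):
                break                  # still offline; keep the record queued
            self._pending.popleft()
        return len(self._pending)      # records still awaiting transfer
```

The design choice is exactly the trade-off the paragraph above describes: the client stays usable offline, at the cost of synchronization logic that a thin client would not need.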
Nowadays, web page hosting has become a necessity in mobile application development. The goal is to be able to display and serve web pages on the mobile device even when the mobile client is only occasionally connected to a network and back-end system. To achieve this, the equivalent of a mini web server must be established on the mobile device.
Prototype
Figure 5 shows the user interface for the login page. Users are required to input the login details for their account, namely an email address and password. The login page has a simple design and layout.
Advantages and disadvantages of the three-layer approach:
Advantage: Good scalability due to the distributed deployment of application servers. Disadvantage: Communication complexity increases due to the increased distance between communication points.
Advantage: Better data integrity, as data corruption through client applications can be eliminated. Disadvantage: Additional effort is required to increase performance, whereas the 2-tier model can handle the particular function using an automated tool.
Advantage: Enhanced security through the implementation of several layers.
Figure 7 shows the task assignment page. Administrators can assign tasks by allocating specific time slots to employees registered in the database. Finally, e-Track generates an employee list, as shown in Figure 8. Users can search for specific employees and retrieve their details. Administrators are also given the power to edit information and delete employees when necessary.
Evaluation
e-Track was evaluated with people who work at an organization as the respondents. Questionnaires were distributed to three categories of respondents: office employees, outfield employees, and employees who juggle both office tasks and outfield duties. These three groups are the primary users who would benefit from the system. The questionnaire was distributed on paper, and in order to gain a bigger pool of answers, both open-ended and close-ended questions were used. Figures 9 to 13 show the evaluation findings.
Question 1: What do you think is the reason for employee productivity to decrease? This question was asked to determine the core reasons for declining employee productivity and their correlation with the need for technological gadgets. The results suggest that employees feel the need for the inclusion of technology in their working routine, which would simultaneously help address an organization's weaknesses. The next question's objective was to analyse whether buddy punching occurs at the workplace. The results show that a high percentage of employees check in on behalf of their colleagues; this could lead to a tremendous loss of revenue for any business. Question 3: How many hours would you save in a week if you had an agile work scope? Time equals money, and wasted time cannot be recovered, especially in a business setting. This was the motivation behind the question: to determine how many hours could be saved or maximized by a business owner by providing employees a flexible and agile work scope. The most common answer was 2 to 4 hours, which amounts to roughly 10 to 20 wasted hours in a month. Question 4: Rate your level of satisfaction with the current attendance system in use.
The ratings were purposely given over a 4-point scale so that respondents could not give a neutral answer, as a neutral answer would not reflect how well they have adapted to the current system. The results show that an equal percentage of users are happy with the current manual way of handling attendance. The developer attributes this to a phenomenon of resistance to change. A final analysis was done to determine the user requirement of whether a payroll module needs to be incorporated into the system for employees who travel for outfield duties. The respondents were mostly not receptive to that idea. The author believes the respondents need to be further informed about the benefits of this functionality for it to be accepted fully.
Discussions and Conclusions
The major aim of the proposed research is to design and implement a web-based tracking system for outfield employees that caters for various business activities as demanded by business owners. Such demands may range from simple task assignment to employee location tracking and remote observation of the employees' task progress. The aim of the project is to provide both employees and business owners a convenient method to monitor business activities and employee attendance, while simultaneously producing and implementing a more convenient method that can widen employees' work scope and increase productivity.
The proposed e-Track system benefits both employees and business owners in a number of ways. One of the core benefits is increased efficiency among employees. A more efficient working environment allows each employee to focus on developing their individual potential and talent in a more agile setting. Next, the tracking system helps to reduce time wastage, because employees do not have to waste unnecessary time travelling between branches during outfield duties just to sign out from their workspace. e-Track provides a time-out mechanism via a simple logout button in the mobile application.
From the perspective of business owners, the automation of the monitoring job lessens the burden on the HR department in managing employees' whereabouts. This enables the Human Resource (HR) department to focus on other tasks such as recruiting staff and managing timely payrolls. A tracking mechanism based on the Global Positioning System (GPS) may also reduce absenteeism cases, especially when dealing with common excuses such as traffic jams, because the system allows users to log in only within a specific radius of the assigned geolocation, as required by the employer; a sketch of such a radius check is given below. Finally, the tracking system can also be used to locate employees during their outfield duties in case of mishaps and emergencies.
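To illustrate the radius-based login check mentioned above, the following sketch uses the haversine formula to test whether a reported GPS position lies within an employer-defined radius of the duty location. The 100 m default radius is an assumption; the paper does not state a value.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def within_geofence(employee_pos, duty_pos, radius_m=100.0):
    """True if the employee's reported (lat, lon) is inside the allowed radius."""
    return haversine_m(*employee_pos, *duty_pos) <= radius_m
```

A clock-in request would then be accepted only if `within_geofence` holds for the branch or duty location assigned to the employee.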
There are also intangible benefits of the web-based outfield employee tracking system. First and foremost, employees will be able to enjoy complete freedom of workspace and improve their individual creativity in contributing to the organization, instead of being inhibited by the traditional system of having to be in the office from 9am to 5pm. Secondly, the attendance check-in feature provides a more secure way for employees to check in to work while simultaneously reducing buddy punching. Thirdly, absenteeism can be better managed by discouraging absence for minor reasons. The system even alerts users who have not checked in 5 minutes before the commencement of work. The system also allows employees to give advance notice of failure to attend work in extenuating circumstances. Finally, an application based on a GPS location provider is more reliable and available 24/7.
A future enhancement of e-Track is to make the system more reliable in calculating employee payment based on travelled locations. This will give employees more precise payment information to plan their budgets before making decisions. Finally, this project could be developed into smartphone applications for the iPhone, Android, Symbian, and BlackBerry platforms. These applications would include basic features such as a trip advisor and a photo gallery for front-camera image capture.
Figure 1. User interface for TSheets time tracking.
Figure 5. User interface for login.
Figure 6 shows the interface for tracking users. Users can view the location of the employees using the Google API implemented in the code. Real-time location tracking is available when employees are logged into their accounts.
Figure 6. User interface for location.
Figure 8. User interface to generate the employee list.
Table 1. Comparison between one-layer and three-layer application code.
Fuzzy Cognitive Maps Based Game Balancing System in Real Time
ABSTRACT
INTRODUCTION
Gameplay in digital games involves several elements, such as actions and challenges that players must undertake in order to complete game activities. A game designer may change the game mechanics to make challenges easier or harder, providing predefined difficulty levels such as "easy", "normal", and "hard". However, these adjustments are static and may be based on an arbitrary benchmark, which is not appropriate for all users.
In practice, players have different skill and experience levels and may find predetermined difficulty levels "too easy" or "too hard", becoming frustrated or bored. The outcome may be diminished motivation to keep playing the game, which means reduced engagement.
One solution to these issues is to dynamically change the game difficulty levels according to the current playing context, which includes monitoring player actions, mistakes, and performance in the game. The literature refers to solutions based on this idea as "dynamic game difficulty balancing" (DGB) and "dynamic difficulty adjustment" (DDA). Several works approach DGB and related issues. For example, Tijs and co-authors [1] proposed adapting difficulty levels using the player's emotional state. However, their work shows some drawbacks. First, their approach needs to ask the player about his/her emotional state during the game. Second, their approach lacks a properly functional decision-making system. In another related work, Hunicke [2] examined how dynamic difficulty adjustment affected player progress while conducting experiments that controlled the supply and demand of various items in the game. Vasconcelos de Medeiros [3] proposed static level balancing based on the feedback of real gaming experiences. This approach is interesting because the difficulty level is modelled using real data (rather than a random and subjective estimate). However, this solution is not dynamic, and the difficulty levels remain the same during the entire game.

In this paper, we propose a technique to change the difficulty levels dynamically and in real time, based on player interaction information, context variables, and Evolutionary Fuzzy Cognitive Maps. Player interactions comprise basic actions in a game, such as "jumping", "eating", and "running", defined at the game design stage. Context variables are related to the game state; Salen and Zimmerman [4] define "game state" as the current condition of the game at any given moment. Consider, for instance, a soccer game. Among its game state elements we could find the following context variables: the half being played, the remaining time, team information, the current score, and the current weather conditions. The Evolutionary Fuzzy Cognitive Map (E-FCM) is a modelling tool, proposed by [5], [6], based on Fuzzy Cognitive Maps, with the difference that in an E-FCM each state evolves based on non-deterministic external causalities in real time. Our approach creates an E-FCM based on game context variables, which is later modified to include player interactions, such as jump, eat, and run, which depend on the game design. The E-FCM updates all context variables in real time depending on player interactions, which changes the game difficulty levels while a game session is in progress. We use E-FCMs because they are effective tools to support reasoning and decision-making processes. The literature provides examples of E-FCMs used in several different areas, such as political crisis management and political decision-making [7] and interactive storytelling [6]. The outcome of periodized small-sided games with and without mental imagery on the playing ability of intercollegiate-level soccer players is explained in [8]. A review of cognitive radio networks is also presented in [9].
EVOLUTIONARY FUZZY COGNITIVE MAP
Modelling a dynamic system can be hard in a computational sense. Furthermore, designing a mathematical model may be difficult, costly, and sometimes even impossible. Such approaches offer the advantage of quantified results but suffer several drawbacks, for example the requirement of specific knowledge outside the domain of interest; a comparative study between visibility-based roadmap path-planning algorithms is given in [10]. Fuzzy Cognitive Maps are a qualitative alternative approach to dynamic systems, where the gross behaviour of a system can be observed quickly and without the services of an operations research expert. In Evolutionary Fuzzy Cognitive Maps, each state evolves based on non-deterministic external causalities in real time. An E-FCM is built from two main parts: concepts and causal relationships.

A concept $C_i$ represents a variable of interest in a real-time system and is expressed as a tuple

$$C_i = (S_i, T_i, P_{s,i}),$$

where $S_i$ denotes the state value of the concept, $T_i$ is the evolving time for the concept, representing a multiple of a fixed time slice $t_0$, and $P_{s,i}$ is the probability of self-mutation.

A causal relationship $R_{ij}$ represents the strength and probability of the causal effect from one concept to another. It is defined as a tuple

$$R_{ij} = (w_{ij}, s_{ij}, p_{ij}),$$

where $w_{ij} \in [0,1]$ is the weight of the causal relationship, $s_{ij}$ denotes whether the causal relationship is positive ($+$) or negative ($-$), and $p_{ij}$ is the probability that the causal concept $C_i$ affects the result concept $C_j$.

The fuzzy causal relationships of a system with $n$ variables can be represented as an $n \times n$ weight matrix $W = [w_{ij}]$, and the mutual causal probabilities as an $n \times n$ matrix $P = [p_{ij}]$. Different concepts may have different evolving times; for a system with $n$ variables these form a vector $T = (T_1, \dots, T_n)$. Besides the causal effects from other concepts, each concept also alters its internal state randomly in real time. Each concept is modelled with a very small mutation probability; if this probability were high, the system would become very unstable. For a system with $n$ variables, the self-mutation probabilities form a vector $P_s = (P_{s,1}, \dots, P_{s,n})$.

The concepts in the system update their states at their respective evolving times. The state value of concept $C_i$ is updated according to the following equations:

$$S_i(t + T_i) = f\big(S_i(t) + \Delta S_i(t)\big),$$

$$\Delta S_i(t) = k_1 \sum_{j \neq i} w_{ji}\, S_j(t) + k_2\, \mu_i(t),$$

where $f$ is the activation function that regulates the state value, $S_i(t)$ is the state value of concept $C_i$ at time $t$, $\Delta S_i(t)$ is the state value change of concept $C_i$ at time $t$, and $T_i$ is the evolving time at which concept $C_i$ updates its value. The values $k_1$ and $k_2$ are two weight constants. Each causal term in the summation takes effect subject to the conditional probability $p_{ji}$, and the mutation term $\mu_i(t)$ takes effect subject to the self-mutation probability $P_{s,i}$.
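As a concrete illustration of the update rule above, the following is a minimal Python sketch of one E-FCM update step. It is a sketch under the reconstruction given here, not the authors' implementation: the logistic activation, the `mutation_scale` parameter, and the frame-modulo scheduling of evolving times are illustrative assumptions.

```python
import math
import random

def logistic(x):
    """Logistic activation: squashes state values into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def efcm_step(S, W, P, Ps, T, frame, k1=0.8, k2=0.2, mutation_scale=0.05):
    """One E-FCM update at the given frame.

    S  : current state values, each in [0, 1]
    W  : n x n signed weight matrix; W[j][i] is the effect of concept j on i
    P  : n x n causal probabilities
    Ps : per-concept self-mutation probabilities
    T  : per-concept evolving times, in frames
    """
    n = len(S)
    new_S = list(S)
    for i in range(n):
        if frame % T[i] != 0:        # concept i only evolves every T[i] frames
            continue
        # Causal influence: each edge fires only with probability P[j][i].
        delta = sum(W[j][i] * S[j] for j in range(n)
                    if j != i and random.random() < P[j][i])
        # Random self-mutation, applied with small probability Ps[i].
        mutation = (random.uniform(-mutation_scale, mutation_scale)
                    if random.random() < Ps[i] else 0.0)
        new_S[i] = logistic(S[i] + k1 * delta + k2 * mutation)
    return new_S
```

Calling `efcm_step` once per game frame, with the context variables of the experiments below as the state vector, would realize the real-time balancing loop this paper describes.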
EXPERIMENTS AND RESULTS
To experimentally validate our model, we developed the Time Over game. Time Over is a runner-type game in which a young man flees from a tornado to save himself. Figure 1 shows some screenshots of the Time Over game. In a preliminary version, the game had only two context variables: score and speed. The game computes the score variable according to the number of items that a player collects. The speed variable has a constant value in the game. Later, we added more context variables to improve the gameplay, considering aspects such as player tiredness, totaling six variables: 1) Stamina: represents the player's energy, which increases as the player collects more items in the game. 2) Speed: represents the player's speed, which relates to stamina. Speed decreases over time to simulate the player character's tiredness. 3) Obstacle type: there are three types of obstacles: easy, default, and hard. These types represent how difficult the obstacles are. 4) Obstacle period: represents the period (time interval) that the game uses to insert obstacles in the game scene. 5) Item type: there are two types of collectible items in the game: water bottle and seeds. Both items increase player stamina, but water bottles provide more stamina than seeds. 6) Item period: represents the period (time interval) that the game uses to insert collectible items in the game scene. Every context variable is a fuzzy value, normalized to the range [0, 1]. The meaning of each variable value depends on the particular game design. For simplicity, we defined obstacle type as a mapping from the actual obstacle-type value to the conceptual "easy", "default", and "hard" obstacle difficulty levels. The "easy" difficulty level maps to the range [0, 0.33], the "default" difficulty level maps to [0.34, 0.66], and the "hard" level maps to the range [0.67, 1]. Item type maps the actual item-type value to the conceptual "water" and "seeds" items: the water item maps to the range [0, 0.5] and the seed item maps to the range [0.6, 1] (see the sketch after this paragraph). We associate each context variable with a corresponding concept. Table 1 shows the probabilistic weight matrix W of the causal relationships, which can be determined either from expert knowledge or learned from a knowledge base; as the model designed for this game is simple, the weights were given by the game designer. The probability matrix is a matrix of ones, since we consider the probability that one concept affects another to be one.
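As a small illustration of the range mappings just described (a sketch with invented function names; the boundary values come from the text):

```python
def obstacle_label(x: float) -> str:
    """Map a normalized obstacle-type value in [0, 1] to its conceptual difficulty."""
    if x <= 0.33:
        return "easy"
    if x <= 0.66:
        return "default"
    return "hard"

def item_label(x: float) -> str:
    """Map a normalized item-type value to a conceptual item type.
    Values in (0.5, 0.6) are left unassigned by the text; we fold them into 'seeds'."""
    return "water" if x <= 0.5 else "seeds"
```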
The activation function chosen for the experiments was the logistic function, because of its soft bound: the result of a logistic regression can be interpreted as the probability of observing a certain response, and a probability must be a number between 0 and 1, inclusive. To model player interactions with the E-FCM, we added two arrows to the E-FCM model. Arrow 1 represents the stamina that the player earns by collecting items. Arrow 2 represents stamina loss; the stamina value decreases constantly. We use the "game frame" as the time unit. In this regard, we consider that time advances as the game frame sequence progresses. We update the six context variables every frame, according to the evolving time T. The values in T denote the time interval at which a variable is updated. For instance, a value of 1 means that a variable is updated every frame; a value of 2 means that a variable is updated every two frames, and so on. For the Time Over game, due to its simplicity, we assign the value of one to all context variables in T. In other settings, when different context variables must be updated asynchronously, each context variable must have its own evolving time. For instance, to model the effect of rain in an environment, the evolving time of the rain could be set to 10 frames, for example.
RESULTS AND DISCUSSION
The player actions of eating more or fewer items are reflected in the increase and decrease of the stamina value. The item period is proportional to the stamina, but its curve is softer: when there is less stamina, the item period is shorter, ensuring that the player will have items to eat in order to increase his stamina value and, therefore, his speed value. The item type is inversely related to the stamina value because of the impact of items when the stamina value is low: it must be higher so that the stamina value can be increased. Due to these changes, which directly affect the actions of eating or not eating the items, the context variables tend to present peaks.
CONCLUSION
We observed that adapting the E-FCM produced the desired result: as the player plays the game, our method could change the difficulty levels dynamically using the context variables and player interactions as inputs. Consequently, we conclude that the proposed method is efficient and adapts to the player's needs in real time, improving the gameplay experience. | 2,662.8 | 2018-02-01T00:00:00.000 | [
"Computer Science"
] |
Prevalence and clinical implications of germline predisposition gene mutations in patients with acute myeloid leukemia
Acute myeloid leukemia (AML) is one of the most common types of leukemia. With the recent advances in sequencing technology and the growing body of knowledge on the genetics of AML, there is increasing concern about cancer-predisposing germline mutations as well as somatic mutations. As familial cases sharing germline mutations are constantly reported, germline predisposition gene mutations in patients with AML are gaining attention. We performed genomic sequencing of Korean patients diagnosed with AML to identify the prevalence and characteristics of germline predisposition mutations. Among 180 patients, germline predisposition mutations were identified in 13 patients (13/180, 7.2%; eight adults and five children). Germline mutations of BLM, BRCA1, BRCA2, CTC1, DDX41, ERCC4, ERCC6, FANCI, FANCM, PALB2, and SBDS were identified. Most of the mutations are in genes involved in DNA repair and genomic stability maintenance. Patients harboring germline mutations tended to have an earlier onset of AML (p = 0.005); however, the presence of germline mutations did not show a significant association with other clinical characteristics or treatment outcome. Since each mutation was rare, further study with a larger number of cases would be needed to establish the effect of the mutations.
With the recent advances in sequencing technology and the growing body of knowledge on the genetics of AML, there is increasing concern about cancer-predisposing germline mutations as well as somatic mutations. It has been widely recognized that not only somatic mutations in cancer tissue but also germline gene mutations can affect disease characteristics, progress, and prognosis. In solid cancers such as breast cancer, the significance of germline mutations has already been recognized, and changes in treatment and genetic counseling according to the presence of those mutations have been established in clinical practice. Similarly, the category of myeloid neoplasms with germline predisposition was included in the WHO classification in 2016 1 .
A number of cases of familial leukemia with these mutations have been reported, mainly involving relatively well-recognized genes such as DDX41, CEBPA, and RUNX1 [2][3][4] . However, these studies focused on specific ethnic groups, and data regarding other ethnicities are lacking. Furthermore, for patients with AML, hematopoietic stem cell transplantation (HSCT) is frequently performed. Germline predisposition mutations could be a significant issue in the setting of HSCT, where most donors are family members and a higher probability of shared mutations is expected. As in the case of BRCA1/2 gene mutations, there may be an increased risk of other cancers, and patients and family members carrying the same mutation should be enrolled in a surveillance program. Therefore, there is a growing need for basic information about the frequency and types of germline predisposition gene mutations.
In this context, we assessed the prevalence of germline predisposition gene mutations and identified the clinical characteristics of mutation carriers among Korean patients with AML using genomic sequencing. We first established a set of genes known to be associated with AML predisposition according to the WHO classification 5 , and variants of those genes were prioritized. Variants were further classified according to the American College of Medical Genetics (ACMG) guideline 6 . For the PM2 criterion, a global population frequency cutoff of < 0.00001 for dominant disease and < 0.0001 for recessive disease was applied. For the PP3 criterion, the agreement of at least five prediction tools was required.
Germline confirmation test. Suspected variants in germline predisposition genes were further confirmed using bone marrow specimens collected when the patients were in complete remission. Sanger sequencing was performed using custom primers and the BigDye Terminator Cycle Sequencing Ready Reaction Kit on an ABI Prism 3730 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA). Because bone marrow was used for germline mutation testing, the possibility of confounding due to factors including residual tumor and clonal hematopoiesis cannot be ruled out. Sanger sequencing results were therefore interpreted with this limitation in mind.
Statistical analysis.
To compare outcomes according to molecular characteristics, Fisher's exact test and logistic regression analysis were used, with single or multiple variables. Variables included in the multiple logistic regression models were chosen using stepwise variable selection. Statistical analyses were computed using R. p values < 0.05 were considered statistically significant.
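The authors report using R; purely as an illustration, an equivalent computation in Python for the child/adult comparison reported in the Results (the counts come from the text; the 2x2 table layout is our assumption):

```python
import numpy as np
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = children / adults,
# columns = germline mutation carriers / non-carriers
# (counts from the text: 5 of 21 children, 8 of 139 adults)
table = np.array([[5, 16],
                  [8, 131]])
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```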
Results/discussion
For gene panel sequencing and diagnostic exome sequencing, the median sequence coverage was 1626×, with an average genotype quality score of 98. For whole exome sequencing, the median sequence coverage was 182×, with an average genotype quality score of 54.
Genes involved in DNA repair or maintenance of genomic stability were frequently mutated. Among 180 patients, 13 (13/180, 7.2%) showed pathogenic mutations in germline predisposition genes ( Table 1). Most of the identified germline predisposition gene mutations were in genes involved in DNA repair or maintenance of genomic stability, which were associated with inherited bone marrow failure syndromes including Fanconi anemia (FA), dyskeratosis congenita (DC), or Shwachman-Diamond syndrome (SDS).
Eight of the 13 germline mutations identified were in six FA genes: FANCD1 (BRCA2), FANCI, FANCM, FANCN (PALB2), FANCQ (ERCC4), and FANCS (BRCA1). FA genes are involved in the FA signaling pathway, which is crucial in the DNA damage response. Constant exposure to endogenous and exogenous genotoxic agents can jeopardize genomic stability when the DNA damage response is compromised 7 . FA is a rare genetic disorder often accompanied by numerous other conditions, including an early age of onset of symptoms, multiorgan congenital defects, bone marrow failure leading to pancytopenia, and predisposition to hematological and non-hematological malignancies 8 . Monoallelic mutation carriers who have no apparent signs are known to have an increased risk for cancer [9][10][11] . The protein products of these six genes are components of the DNA damage response system and participate in various cellular processes. Genomic instability resulting from defective function of those proteins can be associated with increased cancer risk. However, the function of each component and the consequences of a deficiency of each product have yet to be fully elucidated.
In addition, BRCA1/2 and PALB2 are not only associated with FA and predisposition for AML, but are also widely known as important risk factors for breast and ovarian cancer, even in monoallelic mutation carriers. Besides, BLM is associated with Bloom syndrome, a rare chromosomal instability disorder characterized by growth retardation, immunodeficiency, and a wide spectrum of cancers 12 . BLM mutation is known to be associated with cancers 13 . ERCC6 plays a critical role in DNA repair, and an association between disruption of its function and increased susceptibility to cancer has been reported 14 .
CTC1 is a known causative gene of the telomere biology disorder DC, characterized by accelerated telomere shortening leading to manifestations such as bone marrow failure, cancer, and pulmonary fibrosis 15 . DC genes function in telomere maintenance; CTC1 functions in a telomere-associated complex to protect the telomere from lethal DNA degradation 16 . Although these DC genes have autosomal recessive inheritance, an association between monoallelic deleterious germline mutations and myeloid malignancies has been reported 17 .
Splice site mutations of SBDS, a causative gene of SDS, were identified in two patients. This mutation also had a relatively high frequency in the control population (0.0048 from KRGDB and 0.004 from gnomAD East Asians), probably reflecting a high carrier frequency. SDS is characterized by exocrine pancreatic insufficiency, skeletal abnormalities, and bone marrow failure with an increased risk of myeloid malignancy 18 . Although 90% of SDS is caused by two common mutations, c.183_184delinsCT and c.258 + 2T > G 19 , only the latter was detected in two of our patients. Although monoallelic mutations are not known to cause SDS, an association between monoallelic mutation and increased risk of malignancy cannot be excluded. The DDX41 mutations are known to have a different spectrum depending on the ethnic group. The p.A500Cfs*9 mutation has been solely reported in Asian patients 20 . The most frequently reported mutation in Caucasians, p.D140Gfs*2, was not found in our patients. The germline DDX41 mutation is one of the most frequently detected germline predisposition mutations in myeloid malignancy, with around 70 families described to date 21 . In these families, myeloid malignancies were associated with normal karyotypes, and about 50% were found to have a somatic second-hit mutation in DDX41, suggesting that DDX41 acts as a tumor suppressor 22 . Because DDX41 is associated with dominant inheritance and donor cell-derived leukemia has already been reported 20 , screening for germline predisposition mutations should be considered during donor selection.
Germline predisposition mutations were associated with an earlier age of onset. Characteristics of patients with germline predisposition mutations are given in Table 2. There was a significant difference in the age of onset between the two groups. Patients with germline predisposition mutations tended to be younger, showing an earlier age of onset (p = 0.005). Notably, while only 21 patients under age 20 were included (21/180, 11.7%), five among the 13 with germline mutations (5/13, 38.5%) were children. Five of 21 children and eight of 139 adults had germline predisposition mutations (p = 0.005). Although certain mutations, like those in DDX41, are reported not to be associated with early-onset malignancy 2 , the age of AML diagnosis was significantly lower in patients with germline mutations in other genes such as CEBPA 3 . It is understandable that inherited disorders tend to be expressed at an earlier age, in childhood. Considering the better prognosis of children among patients with acute lymphoblastic leukemia, it cannot be ruled out that harboring a germline predisposition mutation was associated with better outcome. The effect of germline predisposition gene mutations needs to be further investigated in this younger age group.
Notably, the association between germline predisposition gene mutations and certain somatic mutations was not statistically clear. In addition, somatic mutations of RUNX1 and ASXL1 and a complex karyotype, which are well-known poor prognostic factors, were not identified in the germline predisposition mutation-positive group.
Presence of germline predisposition gene mutations did not affect the clinical outcome. The presence or absence of germline predisposition mutations, however, did not affect clinical outcome. Factors confirmed as significant in this study were well-recognized good or poor prognostic factors of AML.
In multivariate Cox proportional hazards regression analysis for overall survival (OS), complex karyotype, older age, absence of gene fusion, poor outcome of induction chemotherapy, and FLT3 ITD mutation were factors for unfavorable outcome (Supplemental Table 3). In the same analysis for relapse-free survival (RFS), poor outcome factors were RUNX1 somatic mutation and FLT3 ITD (Supplemental Table 4). On the other hand, complete remission after induction chemotherapy, the presence of gene fusions, and a CD34-negative immunophenotype were identified as favorable factors. Achievement of complete remission after induction chemotherapy and carrying well-known poor prognostic features like FLT3 ITD mutation were important factors for both OS and RFS.
The presence or absence of germline predisposition mutations did not affect clinical outcome. As a reason for this negative finding, the number of identified mutations was possibly insufficient to determine the effects of those mutations. More patients and germline mutation carriers would be needed to establish whether germline predisposition mutations are beneficial or harmful. Clinical and therapeutic heterogeneity of patients might also play a role. Comparison of patients between groups with otherwise identical conditions would be desirable. Although younger age could be linked to tolerance of more intensive chemotherapy and a better response to chemotherapy, the association of germline predisposition mutations with treatment outcome was not definite. Because statistical significance could not be achieved, partially due to insufficient sample size, further study with more patients is needed.
In conclusion, we identified 13 patients with germline predisposition mutations among 180 patients with AML. Most of the mutated genes are involved in the DNA repair system, contributing to genomic stability. Although the effect of these mutations on clinical outcomes, including OS and RFS, was not significant, we confirmed that this group of patients tends to develop AML at a younger age. Since each mutation was rare, further study with a larger number of cases would be needed to establish the effect of the mutations. | 2,772 | 2020-08-31T00:00:00.000 | [
"Biology",
"Medicine"
] |
Tailoring of magnetoimpedance effect and magnetic softness of Fe-rich glass-coated microwires by stress-annealing
There is a pressing need for improving the high-frequency magnetoimpedance effect of cost-effective soft magnetic materials for use in high-performance sensing devices. The impact of stress-annealing on the magnetic properties and high-frequency impedance of Fe-rich glass-coated microwires was studied. The hysteresis loops of Fe-rich microwires are considerably affected by stress-annealing. In stress-annealed Fe-rich microwires we obtained a drastic decrease of coercivity and a change of the character of the hysteresis loop from rectangular to linear. By controlling the stress-annealing conditions (temperature and time) we achieved a drastic increase (by an order of magnitude) of the giant magnetoimpedance ratio. The coercivity, remanent magnetization, and diagonal and off-diagonal magnetoimpedance effect of Fe-rich microwires can be tuned by the stress-annealing conditions: annealing temperature and time. The observed experimental results are discussed considering the relaxation of internal stresses, compressive "back-stresses" arising after stress-annealing, and topological short-range ordering.
(ii) Melt-extracted amorphous microwires (diameters of 30-60 μm), known since the beginning of the 1990s 11,14 . These microwires do not have a perfectly cylindrical shape, which can affect the magnetic properties and hence the GMI effect. (iii) Glass-coated microwires (with typical metallic nucleus diameters of 0.5-40 μm) prepared using the so-called Taylor-Ulitovsky method (also known as the quenching-and-drawing method), known since the 1960s 15,16 but extensively studied starting from the 1990s 5,7,12,13 . This fabrication method involves rapid quenching from the melt of a perfectly cylindrical metallic alloy nucleus surrounded by a glass coating. The characteristic feature of these microwires is the enhanced magnetoelastic anisotropy arising from the rapid quenching itself as well as from the difference in thermal expansion coefficients [17][18][19] .
Among the properties of amorphous wires, the GMI effect is actually one of the most attractive phenomena suitable for a number of technological applications such as magnetic sensors, memories and devices, smart composites for remote stress and temperature monitoring, health monitoring, etc. [20][21][22][23][24][25] . The main reason for the elevated interest in the GMI effect is related to the high sensitivity of the impedance to an applied magnetic field, achieving up to a 600% relative change of impedance in soft magnetic wires and allowing detection of extremely low magnetic fields.
Usually the magnetic field dependence of the impedance, Z, is expressed through the GMI ratio, ΔZ/Z, defined as ΔZ/Z = [Z(H) − Z(H max )]/Z(H max ) × 100% (1), where H max is the maximum applied DC magnetic field. The reported magnetic field sensitivity (up to 10%/A/m) of the GMI effect in amorphous wires is one of the largest among non-cryogenic effects 26,27 . It is worth mentioning that the theoretical maximum GMI ratio is about 3000%, being a few times larger than the GMI ratios reported up to now 28 . Moreover, the theoretical minimum skin depth is about 0.3 μm 29 .
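As a minimal sketch (our own, not from the paper), the GMI ratio defined in eq. (1) can be computed from a measured impedance sweep as follows:

```python
import numpy as np

def gmi_ratio(H, Z):
    """GMI ratio in percent: [Z(H) - Z(Hmax)] / Z(Hmax) * 100,
    where Hmax is the largest applied DC field in the sweep."""
    H = np.asarray(H, dtype=float)
    Z = np.asarray(Z, dtype=float)
    Z_at_Hmax = Z[np.argmax(np.abs(H))]   # impedance at the maximum applied field
    return (Z - Z_at_Hmax) / Z_at_Hmax * 100.0
```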
It is commonly accepted that the origin of the GMI effect is related to the classical skin effect of a magnetic conductor 30,31 .
Consequently, the guidelines for the search for magnetic materials presenting the largest GMI effect lie in the design of soft magnets thicker than a few μm (about one order of magnitude thicker than the minimal skin depth) with low magnetic anisotropy.
Glass-coated microwires prepared using the Taylor-Ulitovsky method with typical metallic nucleus diameters of a few μm are therefore among the most promising materials for achieving the largest GMI effect. Up to now, the highest GMI ratio has been reported for nearly-zero magnetostriction Co-rich glass-coated microwires with diameters of the order of a few μm 26,27 . But for some industrial applications involving the GMI effect (i.e. tunable metamaterials for electromagnetic cloaking, imaging, or stress and temperature monitoring containing microwire inclusions in a dielectric matrix, or large-scale production of magnetic sensors) a large amount of magnetic wire can be required. Therefore the development of cost-effective magnetically soft microwires is highly demanded for prospective applications 1,6 .
Less expensive Fe-rich amorphous glass-coated microwires are good candidates, but as-prepared highly magnetostrictive Fe-rich amorphous glass-coated microwires usually present low circumferential magnetic permeability and therefore a low GMI effect 6,12,32 . As reported elsewhere 12,32 , Fe-rich amorphous microwires with a positive magnetostriction coefficient usually present a rectangular hysteresis loop related to a domain structure consisting of a large axially magnetized single domain surrounded by outer domains with radial magnetization orientation 6,12 . Such a domain structure is related to the magnetoelastic anisotropy, i.e. high internal stresses and a high magnetostriction coefficient. Consequently, enhancement of the magnetic softness and GMI effect of Finemet-type Fe-rich microwires by nanocrystallization, allowing a reduction of the magnetostriction coefficient, has been reported 33,34 . But Finemet-type nanocrystalline materials are rather brittle.
Induced magnetic anisotropy is an alternative route for the optimization of the magnetic softness of amorphous microwires [35][36][37] . The principal advantage of stress-induced anisotropy is that it allows maintaining the superior mechanical properties typical for amorphous materials. Previously, a change of the hysteresis loop and an increase of the GMI ratio at low frequencies (10 MHz) were reported in stress-annealed Fe 69 B 12 Si 14 C 5 glass-coated microwires 35 . The influence of induced magnetic anisotropy on the high frequency (above 100 MHz) GMI effect in stress-annealed Fe-rich microwires is less studied. Only a few recent publications report on the improvement of the GMI ratio in stress-annealed Co-rich and Fe-rich amorphous microwires 36,37 . It is worth mentioning that the diameter reduction achieved in microwires with metallic nucleus diameters of a few μm must be associated with a shift of the optimal GMI frequency range towards higher frequencies 38 . Therefore, for magnetic microwires an optimal GMI frequency range of the order of 100-500 MHz is reported elsewhere 39 .
Consequently, in this paper we present our recent experimental results on the effect of stress-annealing on the magnetic properties and high frequency GMI effect of Fe-rich glass-coated microwires.
Results and Discussion
As expected from previous knowledge of Fe-rich microwires 40 , as-prepared Fe 75 B 9 Si 12 C 4 microwires present perfectly rectangular hysteresis loops (Fig. 1).
Increasing the annealing temperature, T ann , while keeping the same annealing time (t ann = 1 h), we observed a drastic change of the hysteresis loops from perfectly rectangular to linear with quite low coercivity (see Fig. 1a-d).
Additionally, fixing T ann and increasing the annealing time, we observed a similar tendency: a decrease of the coercivity, H c , and of the squareness ratio, M r /M s (Fig. 2).
The observed changes must be associated with changes of the magnetic anisotropy and domain structure after stress-annealing.
The hysteresis loops of stress-annealed Fe-rich microwires become similar to those of Co-rich microwires, in which the remagnetization process in the axial direction is associated with magnetization rotation 27 . Additionally, the observed transversal magnetic anisotropy can be tuned by the stress-annealing conditions (time and temperature, see Figs 1,2).
As pointed out previously from direct and indirect experiments 6,41,42 , the domain structure of magnetic wires is usually described as consisting of a large axially magnetized single domain surrounded by outer domains. Moreover, the radius of the inner axially magnetized core, R ic , can be estimated from the squareness ratio, M r /M s , as R ic = R(M r /M s ) 1/2 41 . Consequently, after stress-annealing R ic decreases, achieving 0.1 R at T ann = 300 °C. Therefore we must assume that the inner axially magnetized core radius decreases after stress-annealing, as shown in Fig. 3a,b.
One can expect that the observed stress-induced anisotropy must affect the GMI effect of the studied microwires. Indeed, it has been shown theoretically and experimentally elsewhere 43-45 that the easy magnetic anisotropy direction and the magnetic anisotropy field affect both the value and the magnetic field dependence of the GMI effect of magnetic wires, and that the magnetic softness of an amorphous wire is one of the most important conditions for observing a high GMI effect.
Consequently, we measured the GMI effect in as-prepared and stress-annealed Fe 75 B 9 Si 12 C 4 microwires. As expected from previous knowledge of the GMI effect of Fe-rich microwires with axial magnetic anisotropy, as-prepared Fe 75 B 9 Si 12 C 4 microwires present a rather poor GMI effect (Fig. 4a): at low frequencies (about 10 MHz, i.e. where most experimental results are reported) the GMI ratio is almost negligible. Raising the frequency, f, we observed some increase of the GMI ratio (Fig. 4a,b). At about f = 800 MHz the maximum GMI ratio, ΔZ/Z m , reaches 30% (Fig. 4b). Increasing the frequency above 1 GHz, a decrease of ΔZ/Z m is observed.
As commonly accepted elsewhere 2,38,46 , at f ≤ 10 MHz the GMI effect is basically related to variations of the magnetic penetration depth due to strong changes of the effective magnetic permeability caused by a DC magnetic field 2,30,31,39,46 , associated with both domain-wall movement and magnetization rotation.
For higher frequencies (up to GHz) the GMI effect also originates from the skin effect of the magnetic conductor, but the domain walls are strongly damped. Consequently, magnetization rotation is assumed to be responsible for the GMI effect 2,39,46 . At GHz frequencies, the GMI presents features similar to the ferromagnetic resonance (FMR) 2,7,39,46 .
Analysis of the magnetic field dependencies of the GMI ratio can provide insight into the effect of stress-annealing on the magnetic anisotropy and domain structure of Fe-rich microwires. It is worth mentioning that the GMI effect of as-prepared Fe 75 B 9 Si 12 C 4 microwires presents features typical for magnetic wires with axial magnetic anisotropy, i.e. a decay with increasing magnetic field (see Fig. 4a).
Stress-annealed Fe 75 B 9 Si 12 C 4 microwires present a rather different value and magnetic field dependence of the GMI ratio: all stress-annealed Fe 75 B 9 Si 12 C 4 microwires present double-peak ΔZ/Z(H) dependencies and higher ΔZ/Z m -values (see Fig. 5a,b). As mentioned above, such double-peak ΔZ/Z(H) dependencies are predicted for magnetic wires with circumferential magnetic anisotropy 43,45 .
As can be appreciated from Fig. 5a,b, the GMI ratio in Fe 75 B 9 Si 12 C 4 microwires stress-annealed at all studied conditions (T ann ) is almost one order of magnitude higher than in as-prepared Fe 75 B 9 Si 12 C 4 microwires. Below we present more detailed studies of the GMI effect in stress-annealed Fe 75 B 9 Si 12 C 4 microwires.
A significant enhancement of the GMI effect at all frequencies is observed for the Fe 75 B 9 Si 12 C 4 microwire stress-annealed at T ann = 250 °C for t ann = 60 min (see Fig. 6). Most noticeable are the enhanced ΔZ/Z m -values observed for the frequency band of about 300 MHz, where an increase of the GMI ratio by more than one order of magnitude (up to ΔZ/Z m ≈ 125%) is achieved by stress-annealing (compare Figs 4b and 6b).
Similarly, a beneficial increase of the GMI ratio is observed for the Fe 75 B 9 Si 12 C 4 microwires stress-annealed at T ann = 300 °C for t ann = 60 min and at T ann = 200 °C for t ann = 120 min (see Fig. 7). In the sample stress-annealed at T ann = 300 °C for t ann = 60 min, GMI ratio values of about 100% are observed in a wide frequency band from 200 MHz up to 1 GHz (Fig. 7a,b).
A considerable increase of the GMI ratio is observed for the Fe 75 B 9 Si 12 C 4 microwire stress-annealed at T ann = 200 °C for t ann = 120 min (see Fig. 7c,d). At these annealing conditions, again ΔZ/Z m ≈ 100% is observed for frequencies of about 400-600 MHz (Fig. 7d).
In the as-prepared Fe 75 B 9 Si 12 C 4 microwire the off-diagonal Z zϕ components (S 21 -values) present nearly-zero values. Stress-annealing had a beneficial effect on the off-diagonal MI effect, S 21 , as shown in Fig. 8. For all annealing conditions (T ann ) we observed a considerable increase of S 21 -values. Additionally, we observed an increase of S 21 -values upon application of a bias current, I b , in the as-prepared and stress-annealed Fe 75 B 9 Si 12 C 4 samples (see Fig. 9).
The highest S 21 -values (up to 4%) upon application of the bias current were observed in the Fe 75 B 9 Si 12 C 4 sample stress-annealed at 300 °C.
All stress-annealed samples exhibit double-peak ΔZ/Z(H) dependencies (see Figs 6a, 7a and c), which suggests the existence of circular magnetic anisotropy. At I b = 0, the off-diagonal impedance is low and irregular. In nearly-zero magnetostrictive microwires with spontaneous circular magnetic anisotropy such behavior is associated with a bamboo-like domain structure of the outer domain shell 46 . As mentioned above, application of a bias current makes the off-diagonal MI higher, with a characteristic asymmetric dependence on magnetic field (see Fig. 9). This dependence of S 21 on I b can be interpreted as growth of the domains with magnetization parallel to the circular field, H b , at the expense of domains with magnetization antiparallel to H b 47 .
On the other hand, the observed S 21 -values (with a maximum up to 4% for the sample stress-annealed at T ann = 300 °C for t ann = 60 min, see Fig. 9d) are lower than those observed in nearly-zero magnetostrictive microwires, where S 21 -values can reach 15% 47,48 . Additionally, under application of I b = 7-10 mA we observed transformation of the bamboo-like domain structure into a single domain in nearly-zero magnetostrictive (Co-rich) microwires 48 . The studied Fe-rich microwires present rather high (about 35 × 10 −6 ) values of the magnetostriction coefficient and therefore elevated magnetoelastic anisotropy 49 . Therefore we can assume that the applied circular magnetic field H b (associated with the bias current I b ) is not enough to remove the bamboo-like surface domain structure.
It is worth mentioning that when the bias current is applied, the dependence of the impedance, Z, on magnetic field becomes asymmetric (see Fig. 10), which suggests the existence of a helical anisotropy 48 .
There is evidence that elevated values of DC current flowing through the sample can produce Joule heating of the samples. When we applied a higher current (I b = 40-50 mA), the samples were heated due to the Joule effect before the current required to remove the domain structure was reached. Irreversible changes of S 21 -values can be appreciated after application of I b = 40-50 mA (see Fig. 9). Indeed, for I b = 50 mA the current density, j, is estimated as j ≈ 300 A/mm 2 . Earlier, magnetic hardening and/or crystallization of the microwires after annealing with a DC current density j ≈ 450 A/mm 2 was reported 45,50 .
The observed Joule heating of stress-annealed Fe 75 B 9 Si 12 C 4 microwires can considerably affect the diagonal MI effect in stress-annealed samples, providing interesting features. In particular, after Joule heating at 40 mA of the sample stress-annealed at T ann = 300 °C, the GMI ratio is still rather high (above 100%), but the magnetic field dependence of the impedance, Z, and the ΔZ/Z(H) dependence present a monotonic decay over a wide frequency range (see Fig. 11a,b). Additionally, application of a bias current (I b = 20 mA) produces switching from a single-peak to a double-peak Z(H) dependence (Fig. 11c). Such a considerable effect of the bias current on the Z(H) dependence and the single-peak ΔZ/Z(H) dependence must be attributed to the quite low axial magnetic anisotropy in the sample Joule heated after stress-annealing at T ann = 300 °C. The application of a bias current must be associated with the circular magnetic field (Oersted field) generated at the surface of the metallic nucleus, given by H = I/2πr, where I is the current value and r the radial distance. This Oersted circular magnetic field can therefore switch the magnetization from axial to circular orientation in the surface layer of the metallic nucleus.
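As a rough worked example (our own arithmetic, using the metallic nucleus diameter d = 15.2 μm quoted in Methods), the Oersted field at the nucleus surface for a 20 mA bias current is:

```python
import math

I_b = 20e-3              # bias current, A
r = 15.2e-6 / 2          # metallic nucleus radius, m (d = 15.2 um from Methods)
H_surface = I_b / (2 * math.pi * r)   # Oersted field H = I / (2*pi*r), A/m
print(f"H at nucleus surface ~ {H_surface:.0f} A/m")   # ~420 A/m
```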
The observed ΔZ/Z m (f) dependence presents a quite wide optimal frequency range (from 500 MHz up to 1.5 GHz) at which ΔZ/Z m ≈ 100% (Fig. 11d). The other common feature of the GMI effect in all studied samples (as-prepared and stress-annealed at all temperatures) is the low-field GMI hysteresis (see Fig. 12). At each frequency we recorded ΔZ/Z(H) dependencies with ascending and descending magnetic field in order to illustrate the low-field GMI hysteresis previously reported only for microwires with a vanishing magnetostriction coefficient. The observed GMI hysteresis is independent of frequency and presents features similar to those previously reported for nearly-zero magnetostrictive microwires 29,51,52 .
The origin of the GMI hysteresis observed in nearly-zero magnetostrictive magnetic wires has been discussed considering the deviation of the magnetic anisotropy easy axis from the circumferential direction, the magnetostatic interaction of the inner axially magnetized core with the outer domain shell, and irreversible switches of the transverse permeability caused by domain wall structure transitions 29,51,52 . The negligible frequency dependence (or even lack of dependence) of the GMI hysteresis can, in our opinion, be attributed to the static remagnetization process of the studied microwires.
This assumption is confirmed by the influence on the GMI hysteresis of a strong enough pulsed magnetic field (18 kA/m) applied before taking each measurement point. As can be observed from Fig. 13, the GMI hysteresis (diagonal and off-diagonal) can be suppressed by application of a pulsed magnetic field. Such an influence has been previously interpreted 52 considering that a high enough (18 kA/m) applied magnetic field saturates the inner core. On the other hand, in some cases the GMI hysteresis observed in the studied Fe-rich microwires presents features (i.e. impedance jumps) similar to those reported for Co-rich microwires with helical magnetic anisotropy in the outer domain shell 29 .
The considerable enhancement of ΔZ/Z and S 21 -values observed in stress-annealed Fe-rich microwires must be attributed to the transverse magnetic anisotropy evidenced by the comparison of the hysteresis loops of as-prepared and stress-annealed microwires.
The rectangular hysteresis loop observed in as-prepared Fe-rich microwires (Fig. 1a) with a positive magnetostriction coefficient is commonly attributed to the axial magnetic anisotropy related to the magnetoelastic anisotropy [17][18][19]40 . Indeed, the axial internal stresses in glass-coated microwires arising during the preparation process are the highest within most of the metallic nucleus 17,18,53 .
From previous studies it is known that stress and/or magnetic field annealing considerably affects the magnetic anisotropy of amorphous materials 54,55 . In particular, annealing at temperatures below the Curie temperature can originate a macroscopic magnetic anisotropy with a preferred magnetization direction determined by the magnetization distribution during the annealing 54,56 . Such induced magnetic anisotropy depends on the annealing temperature and on the stress and magnetic field applied during the annealing. Consequently, macroscopically isotropic amorphous materials annealed under certain conditions (in the presence of a magnetic field or stress) can exhibit macroscopic magnetic anisotropy. The origin of field-induced anisotropy of amorphous materials has been discussed in terms of the directional ordering of atomic pairs or compositional short-range ordering 46,54 , although topological short-range ordering can play an important role 55 . The aforementioned topological short-range ordering (also known as structural anisotropy) involves the angular distribution of the atomic bonds 55 and small anisotropic structural rearrangements at temperatures near the glass transition temperature 57 .
The aforementioned pair ordering is commonly considered for amorphous alloys containing at least two magnetic elements [54][55][56][57][58] . Consequently, for the studied Fe 75 B 9 Si 12 C 4 amorphous microwires containing only one magnetic element (Fe), the pair ordering and compositional short-range ordering mechanisms of stress-induced magnetic anisotropy must be disregarded.
Another approach, involving the cluster model, has been proposed for explaining the evolution of the physical properties of amorphous materials under annealing 59 . However, conventional furnace annealing at temperatures below crystallization (generally below 500 °C) does not affect the character of the hysteresis loop of Fe-rich glass-coated microwires 33,49 .
The case of glass-coated microwires is different: the presence of the glass coating is associated with strong internal stresses. Previously, the origin of the stress-induced anisotropy in Fe-rich (Fe 74 B 13 Si 11 C 2 ) amorphous microwires was discussed considering "back-stresses" giving rise to a redistribution of the internal stresses after stress-annealing 35,36 .
In the present case the sample was heated, annealed and slowly cooled with the furnace under the applied tensile stress. Consequently, the observed transversal magnetic anisotropy can be explained considering either an increase of transversal anisotropy at the expense of axial anisotropy due to back-stresses or the aforementioned topological short-range ordering.
The advantage of the effective approach described above for improving the magnetic softness and high-frequency GMI effect of Fe-rich microwires is that the proposed stress-annealing allows retaining the superior mechanical properties of amorphous materials, i.e. plasticity and flexibility.
Methods
We studied the influence of stress-annealing on the magnetic properties and GMI effect of Fe 75 B 9 Si 12 C 4 amorphous glass-coated microwires (total diameter D = 17.2 μm, metallic nucleus diameter d = 15.2 μm) prepared by the Taylor-Ulitovsky method previously described elsewhere 7,15 .
The structure of the studied microwires was checked by X-ray diffraction (XRD) using a BRUKER (D8 Advance) X-ray diffractometer with Cu K α (λ = 1.54 Å) radiation. All as-prepared and annealed microwires present XRD patterns typical for amorphous alloys, with a broad halo (Fig. 14a). The crystallization, T cr , and Curie, T c , temperatures were determined using differential scanning calorimetry (DSC) measurements performed with a DSC 204 F1 Netzsch calorimeter in an Ar atmosphere at a heating rate of 10 K/min. T cr is determined as the beginning of the first crystallization peak. As can be seen from Fig. 14b, in the as-prepared Fe 75 B 9 Si 12 C 4 microwire T cr1 ≈ 522 °C and T c ≈ 413 °C.
Sample annealing was performed in a conventional furnace at temperatures, T ann , below the crystallization temperature, T cr1 , and the Curie temperature, T c (T ann ≤ 300 °C). All the thermal treatments were performed in air because the metallic nucleus is coated by the insulating and continuous glass coating.
The microwire was heated, annealed and slowly cooled with the furnace under tensile stress. This annealing is designed to avoid the influence of the stresses arising during sample cooling. The values of the stresses applied during the heat treatment within the metallic nucleus and the glass cover have been estimated as previously described elsewhere 36 , as σ m = kP/(kS m + S gl ) (4), where k = E 2 /E 1 , E 2 is the Young modulus of the metal, E 1 the Young modulus of the glass at room temperature, P is the mechanical load applied during the annealing, and S m and S gl are the cross sections of the metallic nucleus and glass coating, respectively. The value of the applied stress estimated using eq. (4) is σ m ≈ 900 MPa. We measured the hysteresis loops using the fluxmetric method previously successfully employed by us for the characterization of magnetically soft microwires 40 . The hysteresis loops are plotted as the dependence of the normalized magnetization, M/M 0 (where M is the sample's magnetic moment at a given magnetic field, H, and M 0 is the sample's magnetic moment at the maximum magnetic field amplitude, H m ), on the magnetic field, H.
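To illustrate the stress estimate of eq. (4) above (a sketch: the Young's modulus ratio k and the load P below are assumed values chosen to reproduce the quoted ~900 MPa; only the cross sections follow from the quoted diameters):

```python
import math

d_metal = 15.2e-6                                  # metallic nucleus diameter, m
D_total = 17.2e-6                                  # total microwire diameter, m
S_m = math.pi * (d_metal / 2) ** 2                 # metal cross section, m^2
S_gl = math.pi * ((D_total / 2) ** 2 - (d_metal / 2) ** 2)   # glass cross section, m^2
k = 2.3      # assumed ratio E_metal / E_glass (not given in the text)
P = 0.18     # assumed mechanical load, N (roughly an 18 g weight)
sigma_m = k * P / (k * S_m + S_gl)                 # eq. (4): stress in the metallic nucleus, Pa
print(f"sigma_m ~ {sigma_m / 1e6:.0f} MPa")        # ~880 MPa, near the quoted ~900 MPa
```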
For the evaluation of the GMI effect we employed a micro-strip sample holder previously described elsewhere 25,29 . A magnetic field, H, is produced by a long solenoid. The microwire impedance, Z, was evaluated from the reflection coefficient S 11 measured by the vector network analyzer using the expression 25,29 Z = Z 0 (1 + S 11 )/(1 − S 11 ), where Z 0 = 50 Ohm is the characteristic impedance of the coaxial line. The off-diagonal Z zϕ component has been evaluated from the transmission coefficient, S 21 25,29 . The GMI ratio, ΔZ/Z, is defined using eq. (1). The magnetostriction coefficient of the studied microwires has been evaluated using the small angle magnetization rotation (SAMR) method 60 . Although this method was initially developed for amorphous materials in which magnetization rotation plays the determining role 60 , recently we demonstrated the possibility of extending the SAMR method to the case of Fe-rich microwires presenting an important contribution of domain wall propagation, and designed a novel set-up for SAMR measurements 49 .
Using this method we estimated the λ s -values for as-prepared and annealed samples. The evaluated λ s -values of as-prepared Fe 75 B 9 Si 12 C 4 samples are about 35 × 10 −6 (similar to λ s -values reported for Fe-rich amorphous materials 61,62 ). We observed a slight increase of λ s -values after stress-annealing (from 35 × 10 −6 to 38 × 10 −6 ). Although the observed increase of λ s -values is small, it can be explained considering the stress dependence of the magnetostriction coefficient 62 . Indeed, the stress relaxation associated with the annealing and the compensation of the internal stresses by back-stresses may give rise to the increase of the magnetostriction coefficient.
Conclusions
We have demonstrated an effective approach to improving the high frequency GMI effect and magnetic softness of Fe-rich microwires using stress-annealing. We found that the diagonal and off-diagonal GMI effect and the hysteresis loop of Fe-rich microwires are affected by the stress-annealing conditions. We observed a transformation of the rectangular hysteresis loops to linear ones, a beneficial increase of the diagonal MI effect by an order of magnitude, and an increase of the off-diagonal MI effect after stress-annealing of Fe-rich microwires. Stress-annealed Fe-rich microwires present a high GMI ratio (above 100%) in an extended frequency range (from 500 MHz up to 1.5 GHz). Additionally, GMI hysteresis is observed in as-prepared and stress-annealed Fe-rich microwires. Similarly to the case of nearly-zero magnetostrictive microwires, the observed GMI hysteresis is almost independent of frequency and can be suppressed by application of a pulsed magnetic field.
Stress-annealed microwires present unusual features, like switching from a single-peak to a double-peak Z(H) dependence under application of a bias current. The observed stress-induced magnetic anisotropy of Fe-rich microwires is discussed considering the increase of transversal anisotropy at the expense of axial anisotropy due to back-stresses and topological short-range ordering. | 5,889.8 | 2018-02-16T00:00:00.000 | [
"Materials Science"
] |
Reduction of dislocations in α-Ga2O3 epilayers grown by halide vapor-phase epitaxy on a conical frustum-patterned sapphire substrate
Low dislocation density of α-Ga2O3 grown on conical frustum-patterned sapphire substrate (CF-PSS) has been studied. The threading dislocation propagation path of α-Ga2O3 on CF-PSS was observed.
Introduction
To improve the performance of power devices, materials with outstanding physical properties as well as improved device fabrication processes are essential. For decades, silicon, which drives power devices, has contributed to improving their performance through various processes; however, the theoretical limit of silicon in power devices is clearly set by its material properties. To solve this problem, studies on various materials are underway, and recently, ultra-wide-bandgap materials have attracted considerable attention (Higashiwaki, Sasaki et al., 2016; Pearton et al., 2018). Ultra-wide-bandgap materials include aluminium nitride, boron nitride, diamond and gallium oxide (Ga 2 O 3 ) and offer a bandgap greater than 3.4 eV.
According to the power semiconductor roadmap, Ga 2 O 3 is considered to be a next-generation semiconductor material (Higashiwaki, Sasaki et al., 2016; Oda et al., 2016). Ga 2 O 3 has five phases (α, β, γ, δ and ε) that can be selected by growth conditions such as temperature, working pressure and growth method (Xue et al., 2018). It has a wide bandgap of 4.5-5.2 eV and a high breakdown field (8 MV cm −1 and 10 MV cm −1 ). It also exhibits properties such as high stability at high temperature and voltage, a high dielectric constant (~10) and low electron mobility (Leach et al., 2019). Also, Baliga's figure of merit (FOM), which represents the performance of a power device, is very high. This material has the potential to be used in various devices, for example, in field-effect transistors (FETs), Schottky barrier diodes (SBDs) and UV optical devices (Oda et al., 2016; Sasaki et al., 2013; Ghose et al., 2017). Research on β-Ga 2 O 3 is more widespread compared with the other phases. β-Ga 2 O 3 has a monoclinic structure and is the most stable phase, and liquid-phase growth is possible (Mu et al., 2017; Aida et al., 2008), which can yield high-quality substrates that are also inexpensive.
In addition, homoepitaxial growth is possible, and high-performance devices can be manufactured (Murakami et al., 2014). It has been reported that metal semiconductor FETs and SBDs containing β-Ga 2 O 3 have a breakdown voltage of 531 V and an on-resistance of 0.1 mΩ cm 2 (Oda et al., 2016; Xue et al., 2018). α-Ga 2 O 3 , which has improved characteristics, displays the widest bandgap and the highest breakdown field, electron mobility and Baliga's FOM. These characteristics are dominant in α-Ga 2 O 3 compared with β-Ga 2 O 3 (Neal et al., 2017). In addition, α-Ga 2 O 3 has a corundum structure, which forms a ternary system with indium oxide and aluminium oxide, enabling both bandgap engineering to produce a desired wavelength and function engineering to improve the characteristics using transition metals (Cr, Fe, V, Ti) (Feneberg et al., 2018). However, α-Ga 2 O 3 is a metastable phase that undergoes a phase transition at high temperature (>700 °C), which is an undesirable disadvantage; hence, the substrate cannot be fabricated by liquid-phase growth, and only heteroepitaxial growth occurs (Oshima et al., 2015). Heteroepitaxy generates residual stress because of the difference in the thermal expansion coefficient and the lattice constant between the starting substrate and the epilayer grown (Cariou et al., 2016). Dislocations are created inside the grown film in order to relax the residual stress generated. This degrades the performance of the devices, and various methods for improving the quality of thin films have been explored, among them the use of a buffer layer and epitaxial lateral overgrowth (ELOG) (Jinno et al., 2018, 2016; Oshima et al., 2019). The buffer layer is grown between the starting substrate and the growth film to decrease the difference in the lattice constants, thereby reducing the residual stress. An α-(AlGa) 2 O 3 layer using an aluminium alloy was used as the buffer layer in α-Ga 2 O 3 . As a result, the threading dislocation density (TDD) of α-Ga 2 O 3 decreased by more than one order of magnitude compared with that without a buffer layer (Jinno et al., 2016). However, growth of the ternary buffer layer is difficult, and the layer material may diffuse into the epilayer and increase the impurity concentration (Chaaben et al., 2016).
In ELOG, growth occurs only in periodically fabricated seed regions followed by coalescence. This method decreases the TDD observed at the surface because the interface between the epilayer and the substrate is reduced to suppress the occurrence of dislocations, and the dislocations propagating to the surface are bent laterally (Oshima et al., 2019).
In this study, α-Ga 2 O 3 epilayers were grown on a conical frustum-patterned sapphire substrate (CF-PSS) by halide vapor-phase epitaxy (HVPE). The α-Ga 2 O 3 epilayers grown on CF-PSS were examined and compared with those grown on the conventional sapphire substrate. The use of the CF-PSS decreases the threading dislocations (TDs) by promoting lateral growth on the patterns and bending within the pattern, as observed by transmission electron microscopy (TEM).
Methods
The α-Ga 2 O 3 epilayers were grown by HVPE on a conventional sapphire substrate (CSS) and a CF-PSS. HVPE was employed with an atmospheric horizontal hot wall acting as a resistor heater and divided into a source zone and a growth zone. Liquid gallium metal, as a group III precursor, was placed in the source zone. The liquid Ga metal reacts with hydrochloric acid gas to produce gallium monochloride (GaCl) and gallium trichloride (GaCl 3 ). The temperature of the source zone was fixed at 470 °C, and GaCl was generated as the major reactant (Cariou et al., 2016). GaCl reacts with oxygen as a group VI precursor in the growth zone and is synthesized as α-Ga 2 O 3 on substrates such as CSS and CF-PSS. The temperature of the growth zone was maintained at 500 °C. Nitrogen was used as the main carrier gas. The total gas flow was fixed at 5 l min −1 .
The thickness of the α-Ga 2 O 3 epilayers was approximately 3 μm, and the growth rate was 6 μm h −1 . The pattern size in the CF-PSS was 1.1 μm in top circle width and 0.6 μm in height. The surface and cross-sectional morphologies of the grown α-Ga 2 O 3 epilayers were observed by field-emission scanning electron microscopy (FE-SEM). The surface roughness was measured by atomic force microscopy (AFM). The structure and crystal quality of the epilayers were investigated by θ-2θ scan and ω rocking curve measurements for the 0006 and 10-12 diffractions using high-resolution X-ray diffraction with Cu Kα1 radiation of 1.54 Å wavelength. The X-ray diffractometer consisted of a line source, a graded parabolic (multilayer) mirror, a four-bounce symmetric Ge (440) monochromator and a two-bounce channel-cut Ge (220) analyzer in front of the detector. Cross-sectional TEM was performed to observe the TDs in the α-Ga 2 O 3 epilayer.
Results and discussion
The surface and cross-sectional FE-SEM images of the α-Ga 2 O 3 epilayers grown on CSS and CF-PSS are shown in Fig. 1. The surface morphologies of the α-Ga 2 O 3 epilayers grown on CSS and CF-PSS were flat and crack-free. The root mean square roughness values of the α-Ga 2 O 3 epilayers grown on CSS and CF-PSS measured by AFM were 7.3 and 5.9 nm, respectively, and the surface of the α-Ga 2 O 3 epilayer on CF-PSS was more uniform.
The morphology of the α-Ga 2 O 3 epilayer grown on CF-PSS was observed with increasing growth time, as shown in Figs. 1(e)-1(j). During the initial growth time of 5 min [Figs. 1(e) and 1(f)], all areas of the patterns were covered with Ga 2 O 3 grains. The difference in growth rate according to the growth direction was not noticeable. At a growth time of 10 min [Figs.
1(g) and 1(h)], the space between the patterns was filled, due to c-axis growth at the bottom and lateral growth at the sidewall, without air voids. In particular, we confirmed that the lateral growth on the top region of the pattern occurred preferentially in the m-plane direction, and among them, the lateral growth rates were relatively high at three m-planes with a 120° angle (shown in the inset). In the top region of the patterns, small inverted-triangular-pyramidal shapes were regularly observed at the surface at a growth time of 15 min [Figs. 1(i) and 1(j)]. This is the result of lateral growth in six m-plane directions because of the difference in the high growth rates among the specific three m-plane directions. Because of this difference in growth rate, by employing the CF-PSS, the areas grown in the m-plane directions were merged and additional growth time was required for a smooth surface.
However, the results suggest there is potential for growth of the α-Ga 2 O 3 epilayer with improved surface morphology and that lateral growth was promoted in the m-plane direction compared with the CSS. XRD was used to investigate the crystal structure of the epilayer. Fig. 2(a) shows the XRD θ-2θ scan spectra of the α-Ga 2 O 3 epilayers grown on CF-PSS for 5, 10, 15 and 35 min. The 0006 diffraction peak of the α-Ga 2 O 3 epilayer was very small at a growth time of 5 min, which represents the initial stage of growth. Additionally, the sapphire peak was the major peak, similar to the CF-PSS. At a growth time of 10 min, the 0006 diffraction peak of the α-Ga 2 O 3 epilayer and the 004 diffraction peak of ε-Ga 2 O 3 were observed. We assumed that the α-phase was grown at the top and bottom of the CF-PSS, and the ε-phase was grown on the sidewall of the CF-PSS. In a previous report, Shapenkov et al. (2020) confirmed that an α-Ga 2 O 3 epilayer was grown on the top of the pattern, and an ε-Ga 2 O 3 epilayer was grown on the sidewall of the pattern. The 004 diffraction peak position of the ε-Ga 2 O 3 epilayer was observed at 38.85° (JCPDS No. 06-0509). The intensity of the 0006 diffraction peak of the α-Ga 2 O 3 epilayer increased with continuous growth. However, the 004 diffraction peak of the ε-Ga 2 O 3 epilayer disappeared. As the lateral growth of the α-phase progressed, it was thought that the ε-phase, which would have grown initially on the pattern sidewall, was blocked. The lattice mismatch between the α-Ga 2 O 3 epilayer and the α-Al 2 O 3 substrate was 4.6% on the a axis and 3.3% on the c axis, which is relatively large. The 0006 diffraction peak of the α-Ga 2 O 3 epilayers grown on CSS and CF-PSS was observed at 40.18°. This peak position was shifted to a lower angle compared with that of strain-free α-Ga 2 O 3 epilayers. The lattice constants of both α-Ga 2 O 3 epilayers were calculated as a = 4.9799 and c = 13.455 Å. This result indicates that both α-Ga 2 O 3 epilayers were in a slightly compressive stress state. This compressive stress was caused by the difference in the coefficients of thermal expansion. The thermal expansion coefficients of the sapphire substrate and α-Ga 2 O 3 epilayer are 8.6 × 10 −6 and 1.1 × 10 −5 K −1 , respectively (Higashiwaki & Fujita, 2020). As the α-Ga 2 O 3 epilayer grew and was cooled, compressive stress was generated, resulting in a peak shift to a lower angle. Fig. 3 shows the typical X-ray rocking curves (XRCs) obtained for the α-Ga 2 O 3 epilayers on CSS and CF-PSS. The full width at half-maximum (FWHM) of the symmetric 0006 diffraction peak is sensitive to screw dislocations, and the FWHM of the asymmetric 10-12 diffraction peak is sensitive to edge and mixed dislocations. The FWHMs of the 0006 and 10-12 diffraction peaks of the α-Ga 2 O 3 epilayers on CSS were 75 and 1539 arcsec, respectively. In our previous study, the FWHMs of the 0006 and 10-12 diffraction peaks of 1 μm α-Ga 2 O 3 epilayers on CSS were 27 and 3254 arcsec, respectively. The FWHM of the 0006 diffraction peak increased slightly, whereas the FWHM of the 10-12 diffraction peak decreased significantly. It appears that as the thickness of the α-Ga 2 O 3 epilayer increased, the TDs generated at the interface merged while propagating towards the surface. On the other hand, the FWHMs for the 0006 and 10-12 diffractions of the α-Ga 2 O 3 epilayers on CF-PSS were 368 and 961 arcsec, respectively.
Compared with the α-Ga₂O₃ epilayers on CSS, the FWHMs of the 0006 and 10-12 diffraction peaks increased and decreased, respectively. Chen et al. (2018) reported that periodic patterns on the sapphire substrate are beneficial for suppressing grain twist when adjacent grains coalesce. The CSS, by contrast, did not suppress grain twist, although it was advantageous for suppressing grain tilt. It is therefore assumed that the FWHMs of the 0006 and 10-12 diffraction peaks were affected by growth on the pattern.
To confirm the effect of CF-PSS on the α-Ga₂O₃ epilayer, TEM was carried out. Fig. 4(a) shows a cross-sectional TEM image of the α-Ga₂O₃ epilayer–CF-PSS interface observed along the [11-20] zone axis. Dark areas (dashed circle) were periodically observed at the α-Ga₂O₃ epilayer–CF-PSS interface, indicating misfit dislocations (MDs) at the α-Ga₂O₃ epilayer–sapphire interface. MDs occur when the length of 20 crystal cells of the α-Ga₂O₃ epilayer, with its larger lattice constant, coincides with that of 21 crystal cells of α-Al₂O₃, with its smaller lattice constant (Kaneko et al., 2012). The MDs were generated to relieve the in-plane compressive strain caused by the difference in lattice parameters between α-Ga₂O₃ and α-Al₂O₃. The inset images (dashed squares) in Fig. 4(a) show the electron diffraction patterns for α-Ga₂O₃ and sapphire, respectively, both corresponding to the corundum structure. The epitaxial relationship between the α-Ga₂O₃ epilayer and CF-PSS was (0006) α-Ga₂O₃ epilayer ∥ (0006) sapphire. Figs. 4(b) and 4(c) show the plan-view and cross-sectional TEM images of the α-Ga₂O₃ epilayer on CF-PSS. The dark spots on the surface indicate the TDs. We can confirm that the end-on strain contrast from TDs at the surface did not appear uniformly, and the densities of TDs were relatively lower in a certain ring-shaped region. The TDD of the ring region was 9 × 10⁸ cm⁻² and that of the center region was 1.6 × 10¹⁰ cm⁻². As a result, the average TDD in the α-Ga₂O₃ epilayer was determined to be 8.4 × 10⁹ cm⁻².
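To illustrate the 20:21 coincidence-lattice picture and the area-weighted averaging of the two regional TDDs, the following sketch can be used; the in-plane lattice constants and the ring-region area fraction are illustrative assumptions, not values given in the paper:

    a_ga2o3 = 4.98    # in-plane lattice constant of alpha-Ga2O3, angstroms (assumed)
    a_al2o3 = 4.785   # in-plane lattice constant of alpha-Al2O3, angstroms (assumed)

    # Coincidence condition: 20 cells of Ga2O3 match 21 cells of Al2O3 (Kaneko et al., 2012),
    # so one misfit dislocation is expected roughly every 20 Ga2O3 cells.
    md_spacing_nm = 20 * a_ga2o3 / 10.0
    print(f"expected MD spacing ~ {md_spacing_nm:.1f} nm")

    # Area-weighted average of the two measured regional densities
    tdd_ring, tdd_center = 9e8, 1.6e10   # cm^-2, from plan-view TEM
    f_ring = 0.5                         # ring-region area fraction (assumed)
    tdd_avg = f_ring * tdd_ring + (1.0 - f_ring) * tdd_center
    print(f"average TDD ~ {tdd_avg:.2g} cm^-2")  # ~8.4e9 cm^-2 for f_ring ~ 0.5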
The high-magnification TEM image [Fig. 4(d)] can be divided into three regions according to the distribution of TDs. In regions 1 and 3, the α-Ga₂O₃ epilayer growth occurred along the c axis, which can be confirmed by the propagation of the TDs generated at the interface. In contrast, the TDs were negligible in region 2. As α-Ga₂O₃ grew in region 1, lateral growth of α-Ga₂O₃ occurred simultaneously, and the width of the lateral-growth region gradually increased with longer growth times. As a result, the TDs generated in region 3 were significantly decreased (or blocked) by the lateral-growth region, and a region with a low density of dark spots, that is, of TDs reaching the surface, developed. Fig. 4(e) shows a schematic of the growth mechanism of the α-Ga₂O₃ epilayer on CF-PSS. The dotted-line rectangle shows the dislocation-blocking area created by the lateral growth. Consequently, we determined that the crystal quality of the α-Ga₂O₃ epilayer on CF-PSS was improved compared with that on the CSS owing to the blocking of dislocations by the lateral growth of α-Ga₂O₃.
Conclusions
We studied a single-crystal α-Ga₂O₃ epilayer on CF-PSS grown using HVPE. The thickness of the α-Ga₂O₃ epilayers was approximately 3 μm at a growth temperature of 500 °C. The grown α-Ga₂O₃ epilayers exhibited slight in-plane compressive stress because of the lattice mismatch and the difference in thermal expansion coefficients between the substrate and α-Ga₂O₃. The 10-12 diffraction FWHMs of the α-Ga₂O₃ epilayers grown on CF-PSS and CSS were 961 and 1539 arcsec, respectively. MDs were produced at the interface between the substrate and the α-Ga₂O₃ epilayer, and TDs in the α-Ga₂O₃ epilayer created end-on strain contrast at the epilayer surface. The average TDDs in the α-Ga₂O₃ epilayers on CF-PSS and CSS were 8.4 × 10⁹ and 1.6 × 10¹⁰ cm⁻², respectively, both of which represent a decrease in TDs. The reduction of TDs differed according to how the α-Ga₂O₃ epilayer grew within the pattern. In c-axis growth, the TDs propagate along the growth direction. In contrast, TDs were negligible in the laterally grown material. This lateral growth obstructed the path of the TDs propagating between the patterns to the surface, thus significantly decreasing the number of TDs appearing at the surface. | 4,132.8 | 2021-04-28T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
The Effect of Capsulotomy Shape on Intraocular Light-Scattering after Nd:YAG Laser Capsulotomy
Purpose To investigate the effects of capsulotomy shape on visual acuity and visual quality after neodymium:yttrium-aluminum-garnet (Nd:YAG) laser capsulotomy. Methods In this study, a total of 42 eyes from 35 patients with posterior capsule opacification were divided into circular and cruciate groups. The corrected distance visual acuity (CDVA), objective scatter index (OSI), modulation transfer function cutoff (MTF cutoff), Strehl ratio, and Optical Quality Analysis System values at contrasts of 100%, 20%, and 9% (OV-100, OV-20, and OV-9) were measured before capsulotomy and at 1 week and 1 month after capsulotomy. The pseudophakic dysphotopsia questionnaire (PDQ) was used to evaluate the subjects' satisfaction with treatment. Results OSI values were significantly higher in the cruciate group than in the circular group at 1 week and 1 month after capsulotomy (P=0.013 and P<0.001). No significant difference was found in the OSI values between the two groups before capsulotomy (t=0.52; P=0.61). The decrease in OSI was greater in the circular group than in the cruciate group at 1 week and 1 month after capsulotomy (P=0.036 and P=0.019). No significant differences were found in the Strehl ratio, MTF cutoff, CDVA, OV-100, OV-20, or OV-9 between the two groups at 1 week and 1 month after capsulotomy (P>0.05). The PDQ results showed that patients with circular-shaped capsulotomy complained less of intolerance to bright lights than those with cruciate-shaped capsulotomy. Conclusions Circular-shaped capsulotomy induces less intraocular light scattering and increases patient satisfaction.
Introduction
Phacoemulsification combined with foldable intraocular lens (IOL) implantation can markedly improve visual acuity and contrast sensitivity in patients with cataracts. However, posterior capsular opacification (PCO) is a common complication after cataract surgery. Lundqvist and Mönestam found that over one-third of patients received neodymium:yttrium-aluminum-garnet (Nd:YAG) laser capsulotomy for PCO within 10 years after cataract surgery [1]. Schaumberg et al. [2] showed that the incidences of PCO at 1, 3, and 5 years after cataract surgery are 11.8%, 20.7%, and 28.4%, respectively. Ursell et al. [3] followed up 13,329 eyes implanted with AcrySof IOLs, 19,025 eyes implanted with non-AcrySof hydrophobic IOLs, and 19,808 eyes implanted with non-AcrySof hydrophilic IOLs for 3 years and found a 3-year incidence of PCO and Nd:YAG capsulotomy ranging from 4.7% to 14.8% according to the IOL material used. Ambroz et al. [4] found PCO in 30.9% of the surveyed eyes at 18.4–50.2 months after pediatric cataract surgery. In pediatric cataract patients without posterior capsulotomy and anterior vitrectomy, the incidence of PCO was as high as 70% [5].
PCO can markedly degrade visual function, including visual acuity and contrast sensitivity [6,7]. Most patients with PCO suffer from disability glare, which reduces retinal image contrast [8,9]. Nd:YAG capsulotomy effectively improves the visual acuity, contrast sensitivity, and glare sensitivity of patients with PCO. It can also decrease intraocular light scattering and improve patient satisfaction following treatment [10,11].
Intraocular light scattering is an important parameter used to evaluate visual function [12]. It can be perceived as glare, halos, blinding at night while driving, and hazy vision. Montenegro et al. [10] found that increased intraocular light scattering can cause veiling luminance on the retina, which leads to glare, halos, and blinding at night. The condition may severely degrade visual performance and retinal image quality and is an important cause of visual function impairment in pseudophakic eyes. The parameters measured by the Optical Quality Analysis System II (OQAS, Visiometrics S.L., Terrassa, Spain) (Figure 1), such as the modulation transfer function (MTF) cutoff, Strehl ratio, objective scatter index (OSI), and OQAS values at contrasts of 100% (OV-100), 20% (OV-20), and 9% (OV-9), are widely used to evaluate the intraocular light scattering and objective optical quality of eyes in clinics. The MTF is the ratio of contrast between the retinal image and the original scene, and the MTF cutoff is defined as the cutoff frequency at 1% of the maximum MTF [13]. The Strehl ratio is the ratio of the central intensity of the point image between the measured and the ideal eye. OV-100, OV-20, and OV-9 are the OQAS values calculated at contrasts of 100%, 20%, and 9%, respectively. OV-100 is equal to the MTF cutoff frequency divided by 30 cycles per degree (cpd), whereas OV-20 and OV-9 are derived analogously from the spatial frequencies at MTF values of 0.05 and 0.01, respectively. The OSI is the ratio of light intensity between a peripheral annular zone (12 min of arc) and the central peak zone (within 1 min of arc). A higher OSI value indicates more intraocular light scattering. Higher values of the MTF cutoff, Strehl ratio, OV-100, OV-20, and OV-9 indicate better visual quality. These parameters help evaluate the visual quality of the human eye objectively [13,14].
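To make the OSI definition concrete, the following sketch computes the index from a two-dimensional double-pass PSF image. This is a minimal illustration of the stated definition, not the OQAS implementation; the pixel scale, the synthetic PSF, and the 20-arcmin outer radius of the annulus are our assumptions:

    import numpy as np

    def osi(psf, arcmin_per_px, r_core=1.0, r_in=12.0, r_out=20.0):
        """Ratio of annular (12-20 arcmin) to central (<1 arcmin) energy."""
        cy, cx = np.unravel_index(np.argmax(psf), psf.shape)  # center on the PSF peak
        y, x = np.indices(psf.shape)
        r = np.hypot(y - cy, x - cx) * arcmin_per_px          # radius in arcmin
        core = psf[r <= r_core].sum()
        annulus = psf[(r >= r_in) & (r <= r_out)].sum()
        return annulus / core

    # Synthetic example: a sharp central peak plus a broad scatter halo
    yy, xx = np.indices((512, 512))
    rr = np.hypot(yy - 256, xx - 256)
    psf = np.exp(-rr**2 / 4.0) + 0.001 * np.exp(-rr / 80.0)
    print(f"OSI = {osi(psf, arcmin_per_px=0.1):.2f}")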
Nd:YAG laser capsulotomy size can affect visual function [15]. Holladay et al. [16] found that a smaller capsulotomy opening increases light scattering. However, to date, whether the shape of the Nd:YAG laser capsulotomy affects visual function, and its effects on visual acuity, intraocular light scattering, and the MTF, remain to be evaluated. In this study, Nd:YAG laser capsulotomy with a circular or cruciate shape was performed to evaluate the effects of shape on visual acuity, intraocular light scattering, and the MTF.
Subjects.
The study protocol was approved by the local ethics committee. Informed consent was obtained from the participants, and all procedures followed the tenets of the Declaration of Helsinki. A total of 42 eyes of 35 patients with PCO were enrolled in this study between July 2017 and December 2018. No phacoemulsification complications were found in any patient. Ophthalmological examinations, including corrected distance visual acuity (CDVA), refractive measurements, intraocular pressure (IOP), and slit-lamp and fundus examinations, were performed before and after Nd:YAG laser capsulotomy. The patients were assigned to the circular and cruciate capsulotomy groups sequentially. The inclusion criteria were as follows: age between 50 and 90 years, postoperative astigmatism of less than 1.00 diopter (D), the same implanted IOL (Akreos Adapt, Bausch & Lomb, USA), IOL power between +18.00 D and +24.00 D, and uncomplicated surgery with a well-centered IOL in the capsular bag. Patients with ocular pathologies (e.g., high myopia, corneal opacities, retinopathy, maculopathy, and glaucoma), systemic diseases, serious anterior chamber inflammation after capsulotomy, or a history of ocular surgery were excluded. During capsulotomy, the laser might hit the optic of the IOL, which could result in spots on the optic. These spots may induce extra intraocular light scattering. Thus, patients with spots on the optic of the IOL were also excluded.
All patients reported blurred vision. Subjective grading was used in this study. PCO was subjectively graded on a scale of 0 to 10 by three doctors, and the average was recorded. A posterior capsule completely covered with severe PCO received a score of 10, whereas a completely clear posterior capsule received a score of 0. The mean PCO score was recorded. The central 4.0 mm zone of the posterior capsule was the key area observed (Figure 2). Findl et al. [17] reported that the subjective grading correlated well with the objective Automated Quantification of After-Cataract (AQUA) system and the Evaluation of Posterior Capsular Opacification (EPCO) system.
The AQUA system can automatically analyze retroillumination images of PCO within the capsulorhexis region and calculate a score between 0 and 10 (0 = a clear capsule; 10 = a capsule completely covered with PCO) [11,18].
The EPCO system can also evaluate photographs of PCO, and the EPCO score is calculated by a computer (0 = a clear capsule; 4 = a capsule with severe PCO) [19]. Findl et al. [17] found that subjective grading by examiners and objective analytic systems (such as the AQUA and EPCO systems) showed good reproducibility and correlated well with each other.
Nd:YAG Laser Capsulotomy Technique.
Before capsulotomy, the pupils were dilated with topical 0.5% tropicamide and 0.5% phenylephrine eye drops (Mydrin-P, Santen Pharmaceutical, Osaka, Japan). Then, 0.5% proparacaine hydrochloride (Alcaine, Alcon Co., USA) was used for topical anesthesia. A contact lens was applied to facilitate accurate focusing after topical anesthesia. All Nd:YAG laser capsulotomy procedures were performed by the same surgeon (J.L.) using an Nd:YAG laser (Ellex Inc., Adelaide, Australia). A 4-mm light band of the slit lamp was used to examine the capsulotomy area at 0°, 45°, 90°, and 135° in all patients.
Thus, we ensured that the diameter of the capsulotomy was 4.0 mm at the specified angles. The larger residual fragments in the capsulotomy area were also cleared by the Nd:YAG laser in the cruciate group. After capsulotomy, 1% fluorometholone (Santen Pharmaceutical, Osaka, Japan) four times daily for 3 days was prescribed.
Patients were divided into two groups according to the shape of the capsulotomy. Cruciate-shaped capsulotomy was performed in 21 eyes (cruciate group), whereas circular-shaped capsulotomy was performed in another 21 eyes (circular group). The diameter of the capsulotomy was 4.0 mm, as measured by a slit-beam ruler. The laser energy was set to 2.0–2.5 mJ in the cruciate group and 1.5–2.0 mJ in the circular group. In the circular group, we decreased the Nd:YAG laser energy to obtain a round, centered, and relatively smooth foramen. Once the central posterior capsule was cut off by the low-energy laser, we increased the laser energy (from 2.5 mJ/pulse to 3.5 mJ/pulse) and smashed the central posterior capsule. In the cruciate group, we performed cruciate-shaped capsulotomy using laser energy levels ranging from 2.0 mJ/pulse to 2.5 mJ/pulse. All capsular debris in the optical axis was cleared (Figure 3).
Ophthalmologic Measurements.
Before capsulotomy and at 1 week and 1 month after capsulotomy, CDVA, slit-lamp, and fundus examinations were performed. The OSI, MTF cutoff, Strehl ratio, OV-100, OV-20, and OV-9 were measured using the OQAS II (Visiometrics S.L., Terrassa, Spain). Spherical refractive error was automatically corrected internally by the OQAS (from −3 D to +3 D). Spherical refractive error exceeding ±3 D and astigmatism were corrected by placing an appropriate spherical and cylindrical lens in front of the eye. The pupils were fully dilated with topical 0.5% tropicamide and 0.5% phenylephrine eye drops (Mydrin-P, Santen Pharmaceutical, Osaka, Japan) before the examination. The parameters were automatically measured by the OQAS under a 4.0-mm artificial pupil, which was controlled by a diaphragm wheel inside the OQAS. All subjects were measured three times, and the average was recorded. The pseudophakic dysphotopsia questionnaire (PDQ) designed by Dr. R.J. Olson was used to evaluate the patients' satisfaction at 1 week and 1 month postcapsulotomy. The PDQ includes nine questions to rate satisfaction with different pseudophakic dysphotopsia symptoms. For each question, the subject was asked to rate their dysphotopsia symptoms from 0 (no problem) to 10 (debilitating). The nine questions cover bright light, oncoming headlights at night, halos, glare, flashes of light, darkness or shadows at the side of vision, flickering shadows around lights, and semicircular shadows, with an additional question on overall satisfaction with vision (0 = totally unsatisfied; 10 = totally satisfied). The PDQ is considered to evaluate pseudophakic dysphotopsias accurately and has good reproducibility [20].
Statistical Analysis.
Data were statistically analyzed using the SPSS package, version 17.0 (SPSS Inc., Chicago, IL, USA). The normality of the data distribution was confirmed with the Shapiro-Wilk test. The decimal Snellen visual acuity was converted into a logMAR scale. The independent-sample t-test for parametric variables and the Mann-Whitney U test for nonparametric variables were used to compare the means between the two groups.
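A minimal sketch of this test-selection logic in Python with SciPy is given below; the sample values and the 0.05 significance level are illustrative assumptions, not data from the study:

    from scipy import stats

    def compare_groups(a, b, alpha=0.05):
        """t-test if both samples pass Shapiro-Wilk normality, else Mann-Whitney U."""
        _, p_a = stats.shapiro(a)
        _, p_b = stats.shapiro(b)
        if p_a > alpha and p_b > alpha:
            return "independent-sample t-test", stats.ttest_ind(a, b)
        return "Mann-Whitney U test", stats.mannwhitneyu(a, b, alternative="two-sided")

    osi_circular = [2.57, 1.82, 2.10, 3.01, 2.40]  # illustrative values only
    osi_cruciate = [3.69, 3.00, 3.52, 4.10, 2.95]
    name, result = compare_groups(osi_circular, osi_cruciate)
    print(name, result)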
Results
Forty-two eyes of 35 patients (10 males and 25 females) were examined. All patients completed the 1-month follow-up. The mean age was 68.55 ± 10.63 years (range, 51–87 years). No significant differences in age and gender were observed between the two groups (P = 0.246 and P = 0.747, respectively). No associated complications (e.g., serious anterior chamber inflammation, macular edema, anterior hyaloid damage, and retinal detachment) were observed in any patient (Table 1). The medians of the CDVA at 1 week postcapsulotomy were 0.10 logMAR (range, 0 to 0.14 logMAR) in the circular group and 0.10 logMAR (range, 0 to 0.10 logMAR) in the cruciate group. No significant difference was found between the two groups in the CDVA at 1 week postcapsulotomy (U = 207.500; P = 0.724). At 1 month postcapsulotomy, the median CDVA ranged from 0 to 0.10 logMAR in the circular group and was 0.00 logMAR (range, 0 to 0.10 logMAR) in the cruciate group. No significant difference was found in the CDVA at 1 month postcapsulotomy between the two groups (U = 180.00; P = 0.250). The mean OSI values at 1 week postcapsulotomy were 2.57 ± 1.23 in the circular group and 3.69 ± 1.53 in the cruciate group. The OSI was higher in the cruciate group than in the circular group (t = −2.606; P = 0.013). The mean OSI values at 1 month postcapsulotomy were 1.82 ± 0.73 in the circular group and 3.00 ± 1.21 in the cruciate group. The OSI was significantly higher in the cruciate group than in the circular group (t = −3.823; P < 0.001) (Figure 4). The decrease in OSI compared with the preoperative OSI was significantly greater in the circular group than in the cruciate group at 1 week and 1 month postcapsulotomy (t = 2.164, P = 0.036; and t = 1.582, P = 0.019) (Figure 5).
No significant difference was found in the Strehl ratio between the circular and cruciate groups at 1 week postcapsulotomy (0.132 (0.067–0.162) vs 0.094 (0.072–0.180); U = 217.000; P = 0.930). Similarly, no significant difference was found in the Strehl ratio between the circular and cruciate groups at 1 month postcapsulotomy (0.126 (0.101–0.148) vs 0.126 (0.083–0.173); U = 206.000; P = 0.715) (Figure 6 and Table 2). The mean MTF cutoff values at 1 week postcapsulotomy were 20.242 ± 12.407 cpd in the circular group and 20.352 ± 12.055 cpd in the cruciate group. No significant difference was found in the MTF cutoff between the two groups at 1 week postcapsulotomy (t = 0.015; P = 0.988). Similarly, no significant difference was found in the MTF cutoff between the circular and cruciate groups at 1 month postcapsulotomy (27.013 ± 10.029 cpd vs 21.628 ± 7.693 cpd; t = 1.952; P = 0.058) (Figure 7 and Table 2). No significant differences were found in OV-100, OV-20, and OV-9 between the circular and cruciate groups at 1 week and 1 month postcapsulotomy (Table 2). At 1 week postcapsulotomy, the PDQ results showed that 16 (76.19%) of 21 patients in the circular group and 21 (100%) of 21 patients in the cruciate group had one or more complaints of intolerance to bright lights. Eight patients (38.10%) in the circular group and twelve patients (57.14%) in the cruciate group had complaints rated 5 or more (with 0 being no complaint and 10 being debilitating). One patient (4.76%) in the circular group and two patients (9.52%) in the cruciate group had complaints rated 8 or more. One month after laser capsulotomy, the PDQ results showed that 14 (66.67%) of 21 patients in the circular group and 19 (90.48%) of 21 patients in the cruciate group had one or more complaints of intolerance to bright lights. One patient (4.76%) in the circular group and 7 patients (33.33%) in the cruciate group had complaints rated 5 or more (Table 3). No patient in either group had complaints rated 8 or more. The scores for overall satisfaction with surgery (0 = totally unsatisfied; 10 = totally satisfied) were significantly higher in the circular group than in the cruciate group at 1 week and 1 month postcapsulotomy (U = 137.000, P = 0.035; and U = 140.000, P = 0.041) (Table 3).
Discussion
Nd:YAG laser capsulotomy can markedly improve visual acuity in patients with PCO [21]. However, visual acuity is only one component of visual function. Intraocular light scattering causes patient dissatisfaction after intraocular surgery. In this study, we evaluated the effects of capsulotomy shape on visual function after capsulotomy. The mean OSI values of the cruciate group at 1 week and 1 month postcapsulotomy were markedly higher than those of the circular group. However, the intraocular light scattering might also have been induced by the PCO before capsulotomy [11].
The OSI values of the two groups before capsulotomy were also measured, and no significant difference was found. In addition, the decrease in OSI compared with the preoperative OSI was greater in the circular group than in the cruciate group at 1 week and 1 month postcapsulotomy.
Thus, we inferred that circular-shaped capsulotomy induced less intraocular light scattering. As the OSI is only an objective light-scattering parameter, the patients' subjective perception should also be evaluated. The pseudophakic dysphotopsia survey designed by Dr. R.J. Olson was used in the present study [20], and our results showed that the scores of overall satisfaction were higher in the circular group than in the cruciate group at 1 week and 1 month postcapsulotomy. Patients with circular-shaped capsulotomy also complained less of intolerance to bright lights than those with cruciate-shaped capsulotomy. Circular-shaped capsulotomy induced less intraocular light scattering than cruciate-shaped capsulotomy. Capsulotomy size affects intraocular light scattering. Goble et al. [22] reported that patients who received wide capsulotomies show less forward light scattering than those who received narrow treatment. Montenegro et al. [10] reported that small capsulotomies could increase intraocular straylight. In this study, the capsulotomy diameters were the same (4.0 mm) in the two groups. We therefore inferred that capsulotomy shape can also affect intraocular light scattering. Capsule remnants are important factors that induce light scattering and glare disability [22]. Montenegro et al. [10] found that the percentage of the pupil area with capsule remnants considerably contributed to intraocular light scattering after Nd:YAG laser capsulotomy. Nd:YAG laser capsulotomy with a circular shape and lower energy results in a smoother capsulotomy opening edge and fewer capsule remnants in the pupil area; by contrast, cruciate-shaped capsulotomy with higher energy often produces more capsule remnants and jagged edges, which can increase light scattering.
Our results show that the CDVA did not significantly differ between the two groups. Intraocular light scattering and best-corrected visual acuity (BCVA) have been reported to improve significantly after laser capsulotomy, and the former was shown to be independent of the latter. Before capsulotomy, intraocular light scattering was moderately correlated with BCVA. After capsulotomy, no significant correlation was found between BCVA and intraocular light scattering [9]. Visual acuity and intraocular light scattering are entirely different descriptors of visual quality. Visual acuity encompasses the central 0.02° of the point spread function [23]. The OSI measures forward light scattering in a visual angle within 20 min of arc by the double-pass method [24]. Visual acuity correlates well with contrast sensitivity, whereas visual acuity and contrast sensitivity are not well correlated with intraocular light scattering [12]. Van den Berg's [25] study on donor eye lenses also showed that light scattering does not affect the center of the point spread function. Visual acuity is only weakly correlated with intraocular light scattering, which spreads light in the eye through a large angle.
In the present study, a significant difference was found in intraocular light scattering between the two groups, but no significant differences were found in the MTF cutoff, Strehl ratio, OV-100, OV-20, or OV-9 between the two groups at 1 week and 1 month after YAG laser capsulotomy. Pennos et al. [26] reported a strong correlation between the C-Quant log(s) and contrast sensitivity. The C-Quant measures forward light scattering over a visual angle of 5°–10° by the compensation comparison method, whereas the OQAS measures forward light scattering over a smaller visual angle, within 20 min of arc, by the double-pass method [24]. In our study, the OQAS was used to measure intraocular light scattering. Hence, the visual angle measured by the C-Quant was larger than that measured by the OQAS, and the different visual angles may lead to variations between these measurements. The C-Quant straylight meter uses the compensation comparison method: during the examination, the test field is divided into half fields, and the subject chooses the half field that flickers more intensely [27]. Hence, the examination results can be affected by the subjects' responses, and the subject's age and education can also affect the results. The OQAS is based on the analysis of the double-pass image of a point source projected on the retina and evaluates forward light scattering. The OQAS is an objective examination, and all procedures can be completed within a few minutes [14,24]. Moreover, the repeatability of the OQAS measurements is slightly better than that of the C-Quant measurements, and OQAS measurements provide a slightly higher intraclass correlation coefficient than C-Quant measurements [24].
Conclusion
In conclusion, circular-shaped capsulotomy induces less intraocular light scattering than cruciate-shaped capsulotomy, and it does not require high laser energy. Considering these characteristics, circular-shaped capsulotomy can provide better patient satisfaction than cruciate-shaped capsulotomy.
Data Availability
All the data used in this study are available from the corresponding author.
Disclosure
Jun Li and Zhe Yu are co-first authors.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding this paper. | 4,921 | 2020-03-23T00:00:00.000 | [
"Medicine",
"Engineering"
] |
A Score Level Fusion Method on Fingerprint and Finger Vein
In this paper, we present a score-level fusion method for fingerprint and finger vein. Each unimodal identification system carries out image preprocessing, feature extraction, and feature matching to generate a vector of scores. We apply clustering analysis to split the score range into zones of interest. Then a decision tree and a weighted-sum approach are used to make the decision. We test the proposed method on standard biometric databases. Three metrics, namely the False Accept Rate, False Reject Rate, and Recognition Rate, are used to evaluate the experimental results. The experimental results show that the fusion system performs better than the unimodal identification systems.
Introduction
A biometric identification system is a prerequisite for ensuring the security of a system. However, it is difficult to achieve an extremely high recognition rate with a unimodal identification system. A unimodal system, which is based on a single modality, has several inherent problems such as intra-class variation, spoofing attacks, and failure-to-enroll. In this situation, a fusion process helps to solve these problems. A multibiometric system fuses different single biosignatures to perform identification [1]. There are several kinds of fusion methods: sensor-level fusion, feature-level fusion, score-level fusion, and decision-level fusion [2].
Score-level fusion has the advantages of applying to a wide variety of modalities, and scores carry relatively rich information about the original images [3]. There are two approaches to score-level fusion: combination and classification. Combination merges the scores from matching into a single score on which the final decision is made [4].
In this paper, we work on score-level fusion for fingerprint and finger vein. Each modality has its own identification system, and each system generates a vector of scores [5]. The scores are composed of matching values.
Dividing the score range into zones of interest and treating each zone separately improves computational performance [6]. Fusion of scores has been achieved either by combination [7] or by classification [5]. In [8], a combination of these two approaches, called BCC, was proposed, and the K-means algorithm was used to divide the score range into zones of interest. However, the combined method did not give a convincing demonstration of how to allocate a weight to each score from the unimodal identification systems. We use the method in [9] to improve this fusion approach. In this way, we refine the method further and make the algorithm more complete.
For each unimodal identification system, we also make a detailed design and try to ensure a high recognition rate. For fingerprints, a traditional algorithm is used in the fingerprint identification subsystem. In [19], the Poincaré Index algorithm is used to extract core points, minutia points (ridge endpoints and bifurcation points) are extracted according to Crossing Number (CN) theory, endpoints are reduced through continuous smoothing, and finally enough feature points are obtained. In [10], fingerprint image matching results are calculated according to a distance-matching algorithm. We then obtain the matching scores, which range from 0 to 1; the closer a score is to 0, the more accurate the matching result. These are the methods used for the fingerprint identification system. For finger veins, several image processing and image matching methods were applied. Fuzzy logic theory [13] is employed in the vein segmentation step, and the problem of vein feature extraction is solved with a two-dimensional Gabor filter [14]. Finally, the Hamming distance (HD) [15] is calculated to obtain a matching score that represents the matching degree of two images. Since the HD does not range from 0 to 1, it is necessary to normalize it to that range [16].
After each unimodal identification system generates its matching scores, the fusion process follows. In [8], classification is achieved by a decision tree combined with a weighted sum (BCC), generating a new score-level fusion method. However, [8] does not give a convincing demonstration of how to allocate the weights; it simply assigns fixed weights to the iris and fingerprint scores. As is well known, different data distributions should have different weights. Therefore, it is necessary to give a clear method for computing the weights of the scores. In [9], the optimal weights at the score level are analyzed from different angles. We study this work and choose an appropriate computing method that can be applied in the fusion process. In this way, we not only use the weighted-sum method to make the fusion more effective, but also improve the fusion method so that it can be used with many other biometrics.
Fingerprint identification
Fingerprint identification consists of three parts: image preprocessing, feature point extraction, and feature point matching.
The first step is to scale the fingerprint image and transform the color image into a grayscale image. The image format is modified for subsequent operations.
The second step is normalization, so that the gray values are limited to a certain range.
The third step carries out multi-region threshold segmentation [17], using the sum of the eight neighborhood points to distinguish foreground from background.
The fourth step is to remove image noise by mean filtering [17], adopt an enhancement method based on the ridge direction field [17], binarize the image, and finally enhance the fingerprint pattern along the ridges.
The fifth step is to remove the cavities and glitches in the fingerprint [10].
The sixth step draws on the concepts of morphological operations [18]: opening and closing operations are applied to the processed image to refine it.
The main purpose of extracting fingerprint feature points is to calculate the feature information of the core and minutia points. The Poincaré Index algorithm [19] is used to extract the fingerprint core points; the algorithm is robust to image noise. When the Poincaré Index value of a pixel is π, the pixel is determined to be a core point, and its coordinate and direction-field information is extracted.
Feature point matching mainly adopts a distance-based matching method [17].
In this paper, we first locate a feature point. Starting from this feature point, we trace a fixed distance along the ridge, saving each detection step. After multiple detections, we obtain arrays of feature point information and use the Euclidean distance for comparison.
The comparison of fingerprints was performed by the Euclidean distance, given by Eq. (1).
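Eq. (1) did not survive extraction; given the description of feature-point arrays, it is presumably the standard Euclidean distance between two feature vectors p and q:

    d(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}    (1)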
Finger vein identification
The vein recognition system is made up of four main steps: pre-processing, segmentation, feature extraction, and vein matching.
Image pre-processing is used to normalize the geometric size and gray scale of the image, referring to the concept of gray-scale interpolation [11]. The vein image contains a great deal of redundant information, which is detected by the bilinear interpolation method [12]; a Gabor filter is then applied to separate the useful information from the redundant information and thereby enhance the image.
Vein segmentation requires a threshold segmentation algorithm [12] based on fuzzy logic [13] to accomplish binarization of the image. This step divides the image into three regions, namely the background area, the blur area, and the foreground area, in order to retain as many significant vein features as possible while isolating the binary image.
Feature extraction aims to extract the phase and direction of the vein features, based on the directional-line characteristic of veins, using a two-dimensional Gabor filter [14], which yields a specific texture image. The direction characteristic code and phase characteristic code of the vein are then obtained.
Fusing the phase and direction characteristics, the score for vein matching is obtained by calculating the Hamming distance HD [6] between the prototype vein code and the vein code under test, as Eq. (2) shows. Code M and code N are inferred from the two finger vein images, while mask M and mask N are the respective masks of the two images accounting for blocked light.
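Eq. (2) is likewise missing from the extracted text. Given the description of the two codes and the light-blocking masks, it is presumably the standard normalized masked Hamming distance:

    HD = \frac{\lVert (\mathrm{code}\,M \oplus \mathrm{code}\,N) \cap \mathrm{mask}\,M \cap \mathrm{mask}\,N \rVert}{\lVert \mathrm{mask}\,M \cap \mathrm{mask}\,N \rVert}    (2)

where ⊕ is the bitwise XOR, so only bit positions valid in both masks contribute to the distance.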
Score level fusion approach
The outputs of the fingerprint identification system and the finger vein identification system are used for score-level fusion. Score normalization transforms the scores of different systems into a common domain before combining them [15,27]. We use the Min-Max method to normalize the output of both identification systems, as shown in Eq. (3).
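The equation body is missing from the extracted text; the Min-Max transform of a raw score s over a score set S conventionally takes the form

    s' = \frac{s - \min(S)}{\max(S) - \min(S)}    (3)

which maps all scores onto [0, 1] before fusion.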
In this paper, we use the BCC method of [8] for score-level fusion and modify the weighted-sum calculation.
Dividing the score range into zones of interest
Our research concerns fusion at the score level; therefore, the score range is what we deal with. We divide the score range into zones of interest according to the numerical features of the scores. After the division, the numerical features of each zone become more prominent. We can then give these numerical characteristics a practical meaning for further processing, which helps to improve the efficiency of the fusion. Hence, clustering analysis is an effective tool: it can classify matching scores according to their numerical characteristics.
Based on the scores generated by each unimodal identification system, the score range is divided into three zones: zone 1, zone 2, and zone 3 [8]. Zone 1 is the certainty zone where the user is identified (identical class) if the identification score falls in this zone. Zone 2 is the uncertainty zone where the identification is not sufficiently reliable (undefined class). Zone 3 is the certainty zone where the user is not identified (different class).
The K-means algorithm is used to divide the score range into zones of interest [8]. The advantages of the K-means algorithm are fast convergence and a good clustering effect, and classification is accurate when the sample data are dense. First, we build the standard biometric database of each unimodal system. It outputs groups of matching scores through image preprocessing, feature extraction, and feature matching. For decision thresholds from 0 to 1, we calculate the FAR (False Accept Rate) and FRR (False Reject Rate). Using FAR as abscissa and FRR as ordinate, we plot the ROC curve and calculate the EER, which is used in the further discussion. The set of coordinates from the ROC curve is the input to the K-means algorithm. The score range is divided into K zones Z = {z1, z2, ..., zK}. Assuming that the center of a cluster is mi(FAR, FRR), the point xj(FAR, FRR) belongs to zone i if its distance to the center of zone i is smaller than its distance to the center of any other zone.
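A minimal sketch of this clustering step with scikit-learn follows; the ROC points are synthetic, and reading the zone thresholds off the cluster-label changes follows the description above rather than any code published with the paper:

    import numpy as np
    from sklearn.cluster import KMeans

    # (FAR, FRR) pairs sampled along the ROC curve at decision thresholds 0..1
    thresholds = np.linspace(0.0, 1.0, 101)
    far = np.clip(1.2 * thresholds**2, 0.0, 1.0)       # illustrative curves only
    frr = np.clip((1.0 - thresholds)**2, 0.0, 1.0)
    roc_points = np.column_stack([far, frr])

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(roc_points)
    labels = kmeans.labels_

    # Zone thresholds = score values where the cluster label changes along the curve
    boundaries = thresholds[1:][labels[1:] != labels[:-1]]
    print("zone boundaries (e.g., P1 and P2):", boundaries)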
To minimize the distance between x1, x2, ..., xn in each zone, we choose the squared error function as the objective function in Eq. (4):

    J = \sum_{i=1}^{K} \sum_{x_j \in z_i} \lVert x_j - m_i \rVert^2    (4)

After clustering the ROC curve, we get three zones. Finding the boundary values of the decision threshold, we can divide the score range into zones of interest. The threshold dividing zone 1 and zone 2 for fingerprint is recorded as P1, and the threshold dividing zone 2 and zone 3 for fingerprint is recorded as P2. The threshold dividing zone 1 and zone 2 for finger vein is recorded as V1, and the threshold dividing zone 2 and zone 3 for finger vein is recorded as V2.
Fg.1 Decision tree of classification
The input is a group of 2-D score vectors composed of the users' fingerprint and finger vein matching scores obtained from the fingerprint and finger vein identification systems. The decision tree based on the zones of interest, shown in Fg.1 of [8], classifies the scores. The flowchart shown in Fg.2 of [8] makes the final decision according to the results of the classification.
The decision tree makes its decision according to the matching scores and the thresholds. The purpose is to distinguish the score vectors that can be recognized from those that cannot. In the flowchart, if the classification result is Different Class for all score vectors, the person is not identified. If only one score vector is in the identical class, the identity of the user is the identity of the user corresponding to this score vector. If several score vectors are in the identical class, a weighted sum is calculated for each of them; the identity of the user is that of the user corresponding to the minimal weighted sum.
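A compact sketch of this classification-plus-combination logic is shown below; the threshold values, the weights, and the assumption that lower (distance-based) scores mean better matches are placeholders, and the zone logic condenses the decision tree and flowchart of [8]:

    def zone(score, t1, t2):
        """Zone 1: identical class; zone 2: undefined; zone 3: different class."""
        return 1 if score <= t1 else (2 if score <= t2 else 3)

    def fuse(candidates, P1, P2, V1, V2, w_fp, w_fv):
        """candidates: list of (user_id, fingerprint_score, finger_vein_score)."""
        identical = [(uid, fp, fv) for uid, fp, fv in candidates
                     if zone(fp, P1, P2) == 1 and zone(fv, V1, V2) == 1]
        if not identical:
            return None  # person not identified
        # Several identical-class candidates: take the minimal weighted sum
        return min(identical, key=lambda c: w_fp * c[1] + w_fv * c[2])[0]

    users = [("u07", 0.12, 0.21), ("u19", 0.09, 0.25), ("u33", 0.55, 0.70)]
    print(fuse(users, P1=0.20, P2=0.45, V1=0.30, V2=0.60, w_fp=0.6, w_fv=0.4))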
However, in the weighted-sum stage, [8] does not give a convincing demonstration of how to allocate a weight to each score from the unimodal identification systems. We use the method in [9] to improve this fusion approach; the EER is minimal under the conditions shown in Eq. (5) and Eq. (6).
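Eqs. (5) and (6) are missing from the extracted text. A common EER-driven weighting consistent with the description in [9] (this specific form is our assumption, since the original equations are lost) gives each modality a weight inversely proportional to its equal error rate:

    w_{fp} = \frac{1/\mathrm{EER}_{fp}}{1/\mathrm{EER}_{fp} + 1/\mathrm{EER}_{fv}}    (5)

    w_{fv} = 1 - w_{fp}    (6)

so the more reliable modality (lower EER) dominates the weighted sum.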
Experiment and results
In the experiment, we use two standard biometric databases: FVC2002_DB1_B for fingerprint and FV-USM for finger vein. On the basis of these two standard biometric databases, we build a dual-modality database containing 50 users. The identities of the users are defined by the combination of their fingerprint and finger vein signatures. There are 5 images per modality per user. For each modality of each user, 3 images are stored in the database as samples; of the other 2, one is used for training and one for testing. This means that 50 images are used to calculate the zones of interest and 50 images for evaluation.
To evaluate the performance of the fusion system, we calculate the FAR and FRR. The Recognition Rate is also calculated from the FAR, FRR, and thresholds. We use the K-means algorithm to divide the score range into zones of interest and plot the ROC curve through K-means clustering analysis. The curves are shown in Fg. 3. Meanwhile, the thresholds P1, P2, V1, and V2 of the zones of interest are computed in the background program and are shown in Table 1. After calculating these thresholds, we use the testing sets to apply the modified fusion method to the score-level fusion of fingerprint and finger vein. The results of the experiments are shown in Table 2. From Table 2, we find that the score-level fusion method BCC is applicable not only to iris and fingerprint but also to finger vein and fingerprint. Furthermore, our efforts in modifying the method are worthwhile: compared with BCC, the modified BCC reduces the false accept rate and false reject rate and increases the recognition rate.
Conclusion
In this paper, we work on multibiometric fusion at the score level for the identification of fingerprint and finger vein. We choose a method that has been used with different types of biological modalities.
We use the Hamming distance and the Euclidean distance to calculate the matching scores, which are then normalized. The K-means algorithm divides the score range into zones of interest. The decision tree for classification and the flowchart for combination achieve the score-level fusion. Meanwhile, we choose a method for calculating the weighted sum, which is our modification of the BCC method.
In this way, we gain one more line of approach to fusion for the fingerprint and finger vein modalities, and our modification of the BCC method is effective. Furthermore, the modification not only improves the performance of multibiometric fusion for fingerprint and finger vein, but the explicit weight-calculation method also makes BCC a complete method. The results show that BCC is effective in the fusion of fingerprint and finger vein and that our modification improves its performance.
Critically, the K-means algorithm may not be suitable for every distribution of matching scores. The choice of clustering analysis algorithm should be based on the actual distribution of the scores.
"Computer Science"
] |
Experimental Investigations on Pullout Behavior of HDPE Geogrid under Static and Dynamic Loading
This paper describes a series of laboratory pullout tests that were performed to investigate the pullout behavior of high-density polyethylene (HDPE) uniaxial geogrid subjected to static and dynamic loading. Pullout tests were conducted on HDPE geogrid reinforced coarse sand under normal static loading (60–300 kPa) and dynamic loading with different amplitudes (20, 40, and 60 kPa) and different frequencies (2, 4, and 6 Hz) by using the newly developed pullout apparatus. The results indicated that the pullout resistance of the geogrid presented different growth patterns with the increase of normal loads under static loading. The amplitude and frequency both had significant effects on the interaction between reinforcement and soil, and the increments of the pullout resistance were 0.6 kN and 0.3 kN, respectively. With the increase of frequency, the effect of dynamic loading on the soil-geogrid interface gradually becomes equivalent to that of static loading corresponding to the balance position of the dynamic loading. The results of this study are helpful for selecting the strength of the reinforcement at different locations and for simplifying the study of reinforcement stresses in reinforced soil structures under traffic loads.
Introduction
In recent decades, geosynthetic-reinforced soil (GRS) structures have been constructed more and more frequently, for example in subgrades [1,2], embankments [3,4], retaining walls [5-7], slopes [8,9], and landfills [10,11], because they have the advantages of low cost [12], simple construction, and environmental protection [8,13] and can deform without damage. High-density polyethylene (HDPE) geogrids, an important class of geosynthetic reinforcement materials, are commonly adopted in many GRS structures, such as reinforced soil retaining walls and reinforced slopes, especially steep slopes [14-16], because of their excellent mechanical characteristics such as high strength, high elongation, and durability. In addition, geogrids can significantly enhance the bearing capacity and reduce settlement [17-19]. To take full advantage of the geogrid, extensive investigations of the performance of geogrid-reinforced soil structures have been conducted by several authors [20,21]. The deformation behavior and engineering performance of the soil are significantly improved by friction at the reinforced-soil interface. Hence, the friction characteristics [22,23] of the interface between soil and geogrid reinforcement are one of the important factors to be considered in the design and stability analysis of GRS structures. The pullout test [24-26] is an effective and commonly used method for investigating the interaction at the interface between soil and reinforcement. To simulate the interaction process and further understand the mechanical behavior and deformation characteristics of geogrid reinforcement, many laboratory experiments have been performed [27-29], and valuable results have been obtained from series of pullout tests.
Abdi and Mirzaeifar [26] conducted pullout tests to investigate the effect of particle size and distribution on the friction characteristics of the soil-geogrid interface. It was observed that the pullout resistance increased with increasing particle size and nonuniformity, with particle size having the greater influence. Altay et al. [30] studied the interaction at the interface between geogrid and clay soil and also examined the effect of the moisture content of the clay on the pullout resistance. The test results showed that the pullout resistance was largest when the moisture content of the clay was at the optimum moisture content (OMC). Wang et al. [31] conducted laboratory pullout tests and numerical simulations to investigate the effect of the number of transverse members on the pullout resistance; by visualizing the interaction of geogrid and soil in the numerical model, the load transfer characteristics were also analysed. Peng and Zornberg [32] used transparent soil to visualize the interaction at the soil-geogrid interface and study the mechanism of load transfer. Cardile et al. [33] investigated the effect of the interference mechanism between two transverse ribs on the bearing capacity based on laboratory pullout tests and proposed a theoretical method to predict the peak pullout resistance. The above-mentioned literature mainly studied the influence of the properties of the infill materials and reinforcements on the friction characteristics of the soil-geogrid interface through pullout tests. In addition, the normal static loading is also one of the important factors that affect the mechanical behavior of the interface. The interaction of the soil-geogrid interface was investigated under static loading (σ = 20, 40, and 100 kPa) by Wang et al. [31]. Aali et al. [34] studied the influence of the number of longitudinal and transverse rib members on the pullout force by conducting a series of pullout tests under normal static loading (σ = 20, 40, and 80 kPa). The pullout characteristics of waste tire strips with uniaxial and biaxial geogrids were examined under σ = 20, 40, 50, and 60 kPa by Li et al. [35]. Compared with past studies, detailed investigations of the variation of the pullout capacity of geogrid reinforcement under higher normal static loading are lacking.
In addition to static loading, GRS structures are generally subjected to dynamic loading [36]. At present, deformation calculation and stability analysis of reinforced soil retaining walls are generally based on the friction strength parameters between soil and reinforcement under static loading. Traditional testing methods, in which the friction coefficient of the soil-reinforcement interface is obtained under static loading, may be adopted, but they are limited to light traffic loads and slow speeds. However, reinforced soil structures are generally subjected to dynamic loading generated by earthquakes and traffic, and the effect of traffic loading on reinforced soil structures cannot be ignored. Hence, the effect of dynamic loading on the interface friction strength of reinforced soil, and the change of interface strength after dynamic loading, need further research.
Cardile et al. [37] studied the effect of cyclic loading history (effective stress = 10, 25, 50, and 100 kPa; f = 1 Hz) on the pullout resistance and the stability of the interface. The test results indicated that the mechanical behavior of the soil-geogrid interface was related to the amplitude of the cyclic loading and the effective stress. Liu et al. [38] tested the tensile strain of geogrid under different amplitudes and frequencies and obtained the dynamic characteristics and cumulative deformation development law of a retaining wall under repeated traffic load. Hussaini et al. [39] investigated the mechanical behavior of the ballast-geogrid interface under cyclic loading. It was observed that the geogrid can effectively prevent the lateral movement of ballast and reduce settlement; however, the breakage of larger particles was relatively high. In contrast to the above-mentioned studies, the effect of dynamic loading with higher amplitude and higher frequency on the strength of the interface has received little research attention, as has the relationship between the interface strength under static loading and that under dynamic loading. To further understand the pullout behavior and deformation characteristics of geogrid reinforcement, a large-scale laboratory pullout apparatus was developed; this newly developed equipment provides the basis for the present research.
In this paper, a series of laboratory pullout tests were performed on HDPE uniaxial geogrid by using the newly developed pullout apparatus under static and dynamic loading. The effects of several factors, including the value of the normal static loading and the frequency and amplitude of the dynamic loading, on the pullout resistance were investigated. Additionally, the relationship between the pullout behavior of the geogrid under static loading and that under dynamic loading was also discussed.
Laboratory Pullout Apparatus
In this paper, a large-scale multifunction laboratory pullout apparatus, which can apply both static and dynamic loading, was developed to examine the friction characteristics of the interface between soil and geogrid reinforcement under different loading scenarios. The newly developed pullout apparatus mainly comprises four components: a rigid steel pullout box, a normal loading system, a horizontal control system, and a data acquisition system. Table 1 presents the main technical parameters of the pullout apparatus, and a view of the apparatus is shown in Figure 1.
Pullout Box.
Several factors, including the types and sizes of geosynthetics and the shape and size of the infill materials, were considered in designing the dimensions of the pullout box. The inner dimensions of the pullout box developed in this study were 600 mm × 400 mm × 500 mm (L × W × H), and the material is steel plate with a thickness of 15 mm. Moreover, reinforcing ribs, namely steel transverse ribs, were uniformly arranged around the box to protect its sides from deformation and even damage during the pullout test, as shown in Figure 1.
In a traditional apparatus, the front displacement monitored by the displacement sensors at the clamp is generally adopted to analyse and discuss the results of the pullout test. Additionally, the shear stress is calculated as τ = T/(2LB), where T is the pullout resistance at the moment when the geogrid is pulled out, and L and B are the length and width of the reinforcement placed inside the pullout box, respectively. It should be noted that, for the duration of a test in a traditional apparatus, the contact area of the interface between reinforcement and soil decreases as the relative displacement of the soil-geogrid interface increases, which causes the results obtained from laboratory pullout tests to be greater than the real values. Therefore, the newly developed apparatus needed to solve this problem.
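As a worked illustration of this formula with a constant contact area, consider the short sketch below; the numbers are hypothetical, not test results from this study:

    T = 36.0           # pullout resistance in kN (hypothetical)
    L, B = 0.55, 0.30  # embedded geogrid length and width in m (hypothetical)

    tau = T / (2.0 * L * B)  # shear acts on both faces of the geogrid, hence the factor 2
    print(f"interface shear stress = {tau:.1f} kPa")  # 36 / 0.33 = 109.1 kPa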
A slot was set in both the front and back of the box, one close to the clamp and the other far away from it, as shown in Figures 1 and 2. First, the height of the slot is consistent with the height of the clamp to ensure that the reinforcement is pulled out in a horizontal state during the test. In addition, taking into account the types of geosynthetics and the thickness of the reinforcement, the height of the narrow gap can be adjusted to meet the requirements of different tests and reinforcements. Most importantly, the back slot is intended to keep the contact area of the soil-geogrid interface constant for the duration of the test. Because of the back slot, the geogrid can be placed along the longitudinal direction of the test box, and the end of the geogrid reinforcement can extend outside the test box. The back slot is designed to ensure the accuracy of the results calculated by the formula. Furthermore, the end of the reinforcement is connected to the back displacement (BD) sensor by a steel wire rope to monitor the relative displacement of the soil-geogrid interface during the test.
Compared with other test apparatus, the newly developed pullout box has a larger internal size, and the test box is made of steel, so its mass is considerable after filling. To ensure that tests can be carried out smoothly and quickly, the pullout box was equipped with some auxiliary facilities. Multiple linear guide rails were arranged under the test platform; they facilitate not only the movement of the test box but also the loading and unloading of the infill materials. Additionally, a limit device was installed to ensure that the test box sits directly under the loading plate, to avoid friction between the loading plate and the inner wall of the pullout box. To prevent the infill material from falling into the linear guide rails through the narrow gap in front of the pullout box during the test, a baffle was added to the test platform.
Normal Loading System.
In the design of this equipment, an inverted hydraulic loading method acting through a reaction frame was adopted. The hydraulic pump applies the normal loading to the infill material in the pullout box through the loading plate. To avoid friction between the pressure plate and the side walls of the pullout box during the test, the steel loading plate was 590 mm long, 390 mm wide, and 35 mm thick. A series of tests, including strength and stiffness tests, were carried out on the loading plate to ensure it would not deflect or be damaged when transferring the normal loading; these tests showed that no deformation of the loading plate occurred under normal loading up to 1000 kPa. The maximum normal loading that the apparatus can apply is 800 kPa, and the frequency of the dynamic loading can be up to 50 Hz. Several waveforms, including sine, triangle, square, and combination waves, can be applied by the normal loading system, which can thus apply not only static loading but also dynamic loading. Additionally, an important function of the normal loading system is to achieve the target relative density of the infill material in the experimental programs by using the loading plate.
Horizontal Control System.
The horizontal servo control system is of the strain-controlled type. The improved features of the pullout apparatus are the pullout rate and the horizontal displacement, because limited research has been conducted on the effect of the pullout rate on the friction characteristics of the soil-geogrid interface. The increased horizontal displacement greatly reduces the size effect of the geogrid specimen and provides a basis for subsequent studies on the effect of the number and position of the transverse ribs on the interaction mechanism. The clamp, with complementary concave-convex surfaces and a sine-wave profile, ensures that the tested geosynthetic sample does not slide and reduces damage to the sample. The force sensor and displacement sensor at the clamp are connected to the computer. The time interval of data acquisition is set at the computer terminal so as to accurately record the pullout resistance and pullout displacement during the test.
Data Acquisition and Processing System.
To facilitate the setting and input of test parameters during the test, the pullout apparatus was equipped with a data acquisition and processing system, which can not only monitor but also record the important data. It is important for the apparatus to monitor the normal loading during the test, because the normal load applied to the infill material must remain constant and stable, especially under high static loading and dynamic loading. In addition, the pullout resistance, the front and back displacements monitored by the sensors, and the corresponding curves can be obtained from the data acquisition system to enhance control of the test progress. Finally, to ensure the safety of the experimenters during the test, special protection functions, including force protection and displacement protection, were set in the software system.
Summary of Pullout Apparatus Characteristics.
Compared with previous apparatus, the developed pullout apparatus has the following technical characteristics: (1) The normal loading system can apply a high normal static load of up to 800 kPa to simulate the vertical pressure of reinforced structures such as high slopes and high retaining walls. At the same time, different parameters such as waveform, load amplitude, frequency, and number of cycles can be set to simulate dynamic loads. (2) The overall rigidity of the test box is significantly improved by the reinforcing ribs uniformly arranged around the sides of the box, providing a guarantee for pullout tests under high normal static and dynamic loading. (3) The slots at the front and back not only keep the contact area between the reinforcement and the filler constant but also allow the real pullout displacement between the reinforcement and the infill material to be measured. (4) The data acquisition system improves the accuracy of the data and strengthens control of the test process.
Infill Material.
The infill material used in this study was coarse sand with a particle size distribution varying from 0.1 to 10 mm. The particle size distribution curve of the coarse sand determined from sieve analysis is shown in Figure 3. The coarse sand can be classified as well-graded sand (SW). To determine the shear strength parameters of the infill material used in the test, a series of laboratory direct shear tests (DST) was performed. It should be noted that the normal loads in the DST should be consistent with those of the pullout test to exclude the effect of load level on the particles. The DST was carried out on specimens 20 mm in height and 61.8 mm in inner diameter under three normal stresses of 80 kPa, 100 kPa, and 120 kPa. The basic physical properties of the coarse sand are summarized in Table 2. The soil thickness above and below the reinforcement inside the newly developed pullout apparatus meets the requirements of ASTM D6706: greater than 150 mm, six times the 85%-passing particle size (D85) of the sand, and three times the maximum particle size (Dmax) of the soil used in testing.
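As an aside, the ASTM D6706 cover-thickness criterion quoted above can be checked with a one-line rule; the D85 and Dmax values below are illustrative placeholders, not the measured grading of this sand.

```python
def min_soil_cover_mm(d85_mm, dmax_mm):
    """Minimum soil thickness above and below the reinforcement per ASTM D6706:
    at least 150 mm, six times D85, and three times the maximum particle size."""
    return max(150.0, 6.0 * d85_mm, 3.0 * dmax_mm)

# Illustrative values only (read D85 and Dmax from the grading curve in Figure 3)
print(min_soil_cover_mm(d85_mm=5.0, dmax_mm=10.0))  # 150.0 -> the 150 mm limit governs here
```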
Reinforcement Material.
The reinforcement material used in this study was uniaxial geogrid, which was made from stretched high-density polyethylene (HDPE). The engineering properties of the geogrid used in this study are presented in Table 3.
Experimental Programs.
Experimental programs mainly comprise two components: static loading and dynamic loading. The reinforcements are generally subjected to gravity loads from earth pressure. To simulate the gravity loads on the reinforcement at different depths in practical engineering, a series of normal static loads was defined, including 60 kPa, 80 kPa, 100 kPa, 120 kPa, 200 kPa, and 300 kPa. In addition to static loads, GRS structures are also subjected to dynamic loads such as earthquakes and traffic loads. To simulate the dynamic loads on structures under different road conditions, a series of normal dynamic loads was defined, with frequencies of 2 Hz, 4 Hz, and 6 Hz and amplitudes of 20 kPa, 40 kPa, and 60 kPa. The loading curve and the magnitudes of the normal static and dynamic loads are presented in Figure 4 and Table 4, respectively. Chen and Su [40] simulated traffic loads with a sine wave and studied the dynamic response of subgrade with the finite element software ABAQUS. Tian and Chu [41] analysed the characteristics of traffic loads and concluded that a sine wave is a reasonable way to simulate them. Yu et al. [42] analysed the deformation characteristics and stability of a widened embankment with and without reinforcement using a sine wave to simulate traffic load. Hence, in this paper, a sine wave was used to simulate the dynamic loading. In addition, previous tests indicated that the pullout rate has a significant influence on the experimental results. Therefore, the pullout rate used in the tests was kept constant at 2.0 mm per minute to minimize its effect on the tests. It can be seen from Figure 4 that the loading process is mainly composed of two stages. Stage one is the pullout preparation stage. At this stage, the horizontal control system is in a relaxed state, and the normal loading system quickly brings the vertical load to the target value defined in the experimental program. Stage two is the pullout test. At this stage, the normal loading system applies and maintains a constant, uniform normal load on the infill material throughout testing, while the horizontal control system pulls the geogrid at a constant rate of 2.0 mm/min. It should be noted that the normal-load duration set in the software needs to be longer than the test itself, because this time is counted from the preparation stage and the end time of the experiment is unknown. In this experiment, the pullout displacement is 100 mm and the pullout rate is 2.0 mm/min, so the test takes about 50 minutes. However, the time required for the normal load to rise from 0 to the target value in the preparation stage, as well as unforeseen conditions during the pullout test, also needs to be considered. Therefore, the normal static loading time of this experiment is set to 70 min. As shown in Figure 4, unlike the static load, the dynamic load requires the frequency and amplitude to be input in the second stage once the normal load reaches the target value; a sine wave was adopted for this test. Additionally, it is worth mentioning that, in Table 4, the normal static load of 70 kPa was selected to compare the differences in the mechanical behavior of the soil-geogrid interface under static and dynamic loading.
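A minimal sketch of the two-stage normal-load history described above is given below; the ramp duration, the mapping of "amplitude" to the sine term, and all numbers are assumptions for illustration only.

```python
import numpy as np

def normal_load_history(t_s, target_kPa, amp_kPa=0.0, freq_Hz=0.0, ramp_s=60.0):
    """Stage 1: ramp the normal load to the target value (pullout preparation).
    Stage 2: hold it constant (static test) or superimpose a sine wave (dynamic test)."""
    t = np.asarray(t_s, dtype=float)
    ramp = np.clip(t / ramp_s, 0.0, 1.0) * target_kPa
    sine = amp_kPa * np.sin(2.0 * np.pi * freq_Hz * np.clip(t - ramp_s, 0.0, None))
    return np.where(t < ramp_s, ramp, target_kPa + sine)

# Hypothetical dynamic case: 70 kPa mean, +/-10 kPa swing (60-80 kPa), 2 Hz
t = np.linspace(0.0, 300.0, 6001)
sigma = normal_load_history(t, target_kPa=70.0, amp_kPa=10.0, freq_Hz=2.0)
```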
Experimental Procedures.
The test procedures in this study mainly comprise three components: starting the equipment, test preparation, and testing. The experimental procedures are shown in Figure 5. It is worth mentioning that some details need attention during the test. Firstly, when starting the equipment, it is necessary to lubricate the sidewalls of the pullout box to minimize the boundary effect on the test. The height between the bottom of the box and the slot is 25 cm. During backfilling, the infill materials should be placed in 5 layers and compacted. Next, before laying the geogrid, the surface of the infill material needs to be levelled. In addition, the geogrid in the test box should be parallel to the two sidewalls, with equal and symmetrical spacing, to minimize sidewall friction. Meanwhile, the laid geogrid should be pre-fixed by the clamp to avoid changing its position.
Results and Discussion
This section presents a summary of the results and analyses of laboratory pullout tests conducted on HDPE uniaxial geogrid reinforced coarse sand subjected to static and dynamic loading, using the newly developed pullout apparatus. Firstly, the influence of the front displacement (FD) and the back displacement (BD) on the results of the pullout test is discussed. Then, the effects of the normal static and dynamic loads on the pullout resistance of the geogrid are discussed, respectively. Additionally, the variation of the interface shear strength parameters, including the apparent cohesion force and the apparent friction angle, with static and dynamic load is also analysed. Finally, the similarities between static loading and dynamic loading in the mechanical behavior of the soil-geogrid interface are analysed by comparing results such as the pullout curve and pullout resistance.
Comparison of Front Displacement and Back Displacement.
Figure 6 shows the pullout resistance versus displacement, including both the front displacement (FD) and the back displacement (BD), under the same normal static loading (σ = 80 kPa), obtained from the front and back sensors, respectively.
It can be seen from Figure 6 that the curves of the FD and BD have both similarities and differences. The similarity is that the curve trends of the FD and the BD are the same: as the pullout displacement increases, the required pullout resistance gradually increases. It should be noted that the relationship between displacement and pullout resistance is not directly proportional, because the geogrid is not an elastic but an elastoplastic material. The difference is that the FD, monitored by the displacement gauge installed at the clamp, occurred the moment the pullout test started, whereas the BD occurred only some time after the test started. The main reason for the difference between FD and BD is that the geogrid between the clamp and the front slot of the test box lacks the lateral confinement provided by the soil mass. Then, as the pullout force applied by the horizontal control system increases, the load is transmitted along the longitudinal ribs of the geogrid, and the BD monitored by the additional displacement meter occurs. The experimental results indicate that the FD comprises two parts: the tensile deformation of the geogrid between the clamp and the front slot of the test box, that is, in the air, and the relative displacement between soil and geogrid inside the box during the pullout test. In contrast, the back displacement contains only the relative displacement of soil and geogrid.
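The decomposition described above can be stated compactly; the symbols below are our shorthand, not the authors' notation:

```latex
u_{\mathrm{FD}} = \delta_{\mathrm{geogrid}} + u_{\mathrm{interface}}, \qquad
u_{\mathrm{BD}} = u_{\mathrm{interface}},
```

where δ_geogrid is the tensile elongation of the geogrid between the clamp and the front slot (in the air) and u_interface is the relative soil-geogrid displacement inside the box.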
From the above analysis, it follows that the back displacement better reflects the relative horizontal displacement of the interface between soil and geogrid reinforcement than the front displacement. Thus, the following results are presented in terms of pullout resistance versus back displacement, rather than front displacement.
Effect of Static Loading on the Pullout Resistance.
The plot of pullout resistance versus back displacement under six normal static loads, namely, 60 kPa, 80 kPa, 100 kPa, 120 kPa, 200 kPa, and 300 kPa, is shown in Figure 7. It can be seen from Figure 7 that the curves obtained from the laboratory pullout tests have similar characteristics: the pullout resistance increased and the displacement decreased with an increase in the normal loading from 60 kPa to 200 kPa, which is consistent with the conclusions of Altay et al. [30], Wang et al. [43], and Yi et al. [44] obtained through pullout tests. Meanwhile, these observations also verify the reliability of the newly developed pullout apparatus. However, the geogrid suddenly broke during the pullout test under a normal stress of σ = 300 kPa, and the pullout resistance then dropped sharply. It was found that the magnitude of the normal static loading directly affects the pullout resistance.
To further investigate the effect of normal static loading on the pullout resistance of the geogrid, the pullout resistance corresponding to the normal loads is presented in Figure 8. It should be noted that the pullout resistance corresponding to the normal load of 300 kPa cannot be used, because the geogrid reinforcement was broken rather than pulled out during the test. Thus, it is necessary to discard the experimental data obtained from the pullout test under σ = 300 kPa.
It can be seen from Figure 8 that the pullout resistance increases as the normal load rises from 60 kPa to 200 kPa, but it exhibits different growth modes. According to the increase in pullout resistance, the range can be roughly divided into three regions, namely region I, region II, and region III, as shown in Figure 8. In region I, the normal load increased from 60 kPa to 80 kPa and the pullout resistance increased by 1.65 kN. In region II, the normal load increased from 80 kPa to 120 kPa and the increase in pullout resistance was about 0.7 kN, a decrease of 59.6% compared with region I. In region III, the normal load increased from 120 kPa to 200 kPa and the increase in pullout resistance was only 0.22 kN, a decrease of 68.6% compared with region II. Therefore, the normal static load has a great influence on the pullout resistance; specifically, its influence on the pullout resistance gradually weakens as the load increases.
Considering that the pullout resistance exhibits three growth modes with increasing normal load, a global polynomial fit over the whole test range was adopted to analyse the relationship between the pullout resistance of the geogrid used in this study and the normal static load. As can be seen from Figure 8, when the normal load is small, the fitted pullout resistance increases as the normal load increases, but beyond a certain value of the normal load the fitted pullout resistance decreases instead. In addition, when a sufficiently high normal load is reached, such as σ = 300 kPa, the pullout resistance of the soil-reinforcement interface exceeds the tensile strength of the geogrid, and the geogrid breaks. This particular value of the normal load is called the critical normal load, and the corresponding pullout resistance is called the maximum pullout resistance of the geogrid used in this test. Under the critical normal load, the strength of both the infill material and the geogrid can be fully mobilized, so the pullout resistance reaches its maximum. According to the polynomial fitting equation, the critical normal load of the geogrid used in this study is 176 kPa, and the corresponding pullout resistance is 10.42 kN. Determining the critical normal load not only provides a reference for the selection of reinforcement placed at different depths but also helps make full use of the material, reflecting the economic advantages of reinforced soil structures. Hence, it is necessary to determine the critical normal load corresponding to the maximum pullout resistance through a series of laboratory tests when designing geogrid reinforced soil structures such as retaining walls and slopes, especially steep slopes.
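A minimal sketch of this fitting step is given below; the (σ, T) pairs are placeholders loosely following the reported increments, not the measured data, and the fitted peak will therefore differ from the 176 kPa / 10.42 kN quoted above.

```python
import numpy as np

# Placeholder normal loads (kPa) and pullout resistances (kN); substitute measured values.
sigma_kPa = np.array([60.0, 80.0, 100.0, 120.0, 200.0])
T_kN = np.array([8.0, 9.65, 10.0, 10.35, 10.57])

# Global second-order polynomial fit T(sigma) = a*sigma^2 + b*sigma + c
a, b, c = np.polyfit(sigma_kPa, T_kN, deg=2)

# The fitted parabola peaks where dT/dsigma = 0, i.e. at the critical normal load
sigma_crit = -b / (2.0 * a)
T_max = np.polyval([a, b, c], sigma_crit)
print(f"critical normal load ~ {sigma_crit:.0f} kPa, maximum pullout resistance ~ {T_max:.2f} kN")
```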
According to the shear stress formula, the shear stress of the soil-geogrid interface under different normal static loads was calculated. The relationship between the shear stress and the normal loads is plotted in Figure 9. The interface shear strength parameters, namely the apparent cohesion force and the apparent friction angle, were obtained from piecewise linear fitting according to the three regions mentioned above. The piecewise linear fitting equations are also shown in Figure 9. To further analyse the influence of the magnitude of the normal static load on the shear strength parameters, the variation of the shear strength parameters, including the apparent cohesion force and the apparent friction angle, over the three regions is presented in Figure 10.
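For reference, the piecewise linear fits in Figure 9 are of the Mohr-Coulomb type commonly used for reinforcement interfaces; the notation below is generic, and the actual fitted coefficients are those reported in the figure:

```latex
\tau = c_{a} + \sigma_{n}\tan\varphi_{a},
```

where τ is the interface shear stress, σ_n the normal stress, c_a the apparent cohesion, and φ_a the apparent friction angle of the soil-geogrid interface.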
It can be seen from Figure 10 that the apparent cohesion force increases with an increase in the normal static loading, while the apparent friction angle decreases. When the normal loading increases from region I to region II, the apparent cohesion force increases from 5.33 kPa to 15.27 kPa, a significant increase of 186.5%, and the apparent friction angle decreases from 13° to 6.3°, a decrease of 51.5%. When the normal load increases from region II to region III, the apparent cohesion increases from 15.27 kPa to 24.16 kPa, an increase of 58.2%, and the apparent friction angle decreases from 6.3° to 1.8°, a decrease of 71.4%. Thus, the magnitude of the normal static loading has a significant impact on the shear strength parameters of the soil-geogrid interface. However, as the normal load increases, the increments of apparent cohesion become smaller while the reductions in apparent friction angle become larger. The reason is that, as the normal load increases, the geogrid and coarse sand become denser, especially near the transverse ribs at the interface. This helps the passive bearing capacity of the geogrid develop gradually, so the apparent cohesive force increases significantly. However, when the normal load continues to increase, particles may break, changing the particle size distribution. The particle size of the infill material is reduced, which results in a larger decrease in the apparent friction angle.
Effect of Dynamic Loading on the Pullout Resistance.
Figure 11 shows the pullout displacement versus pullout resistance of geogrid reinforced coarse sand subjected to dynamic loading with the same frequency and different amplitudes of 20 kPa, 40 kPa, and 60 kPa. It can be seen from Figure 11 that the pullout resistance increases with the magnitude of the amplitude under the same frequency, which is consistent with the conclusions obtained by Cardile et al. [37] and Xu and Shi [45]. Additionally, the curve trend of pullout displacement versus pullout resistance under dynamic load is similar to that under static load, which opens the possibility of an equivalent static treatment of the dynamic load.
To quantitatively analyse the effect of the amplitude on the pullout resistance of the geogrid under dynamic loading, the pullout resistance corresponding to the amplitude is presented in Figure 12. It can be seen from Figure 12 that the pullout resistance is directly influenced by the amplitude. The pullout resistance increases linearly with the amplitude of the dynamic load, and the increment is about 0.6 kN.
The experimental results reveal that, at the same frequency, dynamic loading with a larger amplitude has a greater influence on the pullout resistance of the soil-geogrid interface. The plots of frequency versus pullout resistance for geogrid reinforced coarse sand subjected to dynamic loading with the same amplitude (20 kPa and 40 kPa) and different frequencies (2, 4, and 6 Hz) are shown in Figure 13. Additionally, the relationship between the pullout behavior of the geogrid under static and dynamic loading was also discussed, because the curve of pullout displacement versus pullout resistance under dynamic load is similar to that under static load. To compare the influence of normal static loads (60 kPa and 80 kPa) and a dynamic load (60–80 kPa) on the friction characteristics of the soil-geogrid interface, a pullout test was also conducted under a static load of 70 kPa. It is worth mentioning that 70 kPa is the balance position between the lower limit (60 kPa) and upper limit (80 kPa) of the normal dynamic load.
It can be seen from Figures 13(a) and 13(b) that the pullout resistance increased with frequency under the same amplitude. By plotting the pullout curves for static and dynamic loading together, some rules can be observed, which provide ideas for simplifying the study of dynamic loads. The pullout curve under dynamic loading lies within the range bounded by the curves for the upper and lower limits of the static loading. Furthermore, the pullout curves for dynamic loading at different frequencies do not exceed the two pullout curves corresponding to the upper and lower limits of the static loading, although the degree of deviation differs. Additionally, the pullout resistance increased with frequency under the same amplitude and gradually approached the pullout resistance under the normal static load corresponding to the balance position of the dynamic loading. The test results indicate that dynamic loading with a smaller frequency behaves similarly to the static load corresponding to the lower limit of the dynamic load. At the same time, as the frequency increases, the effect of dynamic loading on the soil-geogrid interface becomes gradually equivalent to that of the static loading corresponding to the balance position of the dynamic load.
To further investigate the effect of frequency on the pullout resistance, the frequency versus pullout resistance is plotted in Figure 14. As shown in Figure 14, the pullout resistance increases linearly with the frequency of the dynamic load, with increments of 0.3 kN and 0.5 kN, respectively. The test results indicate that, at the same amplitude, dynamic loading with a larger frequency has a greater influence on the pullout resistance of the soil-geogrid interface. Meanwhile, the earlier finding that dynamic loading with a larger amplitude has a greater influence on the pullout resistance was also validated. In addition, the difference between the dynamic loading (σ = 60–80 kPa; σ = 60–100 kPa) at the larger frequency (f = 6 Hz) and the static loading at the balance position of the dynamic loading (σ = 70 kPa; σ = 80 kPa) was 0.31 kN and 0.20 kN, respectively. Hence, the difference between the dynamic loading and the static loading corresponding to the balance position decreases with increasing frequency and amplitude. Through the above analysis, an approach for simplifying the study of dynamic loads can be obtained, thus improving the understanding of the pullout behavior of geogrid reinforcement under static and dynamic loading. The reason is that some of the larger particles, especially near the transverse ribs at the interface, were crushed under dynamic loading of different frequencies, because the normal dynamic load swings from the lower limit to the upper limit in a short time, thus reducing the passive bearing capacity of the transverse ribs on the coarse sand. Hence, the pullout curve under dynamic loading is close to the pullout curve for the static load corresponding to the lower limit of the dynamic load amplitude. At the same time, as the frequency increases, the density of the infill material in the test box increases, and the influence of frequency on the dynamic loading is gradually weakened. Specifically, the effect of dynamic loading on the soil-geogrid interface gradually tends toward that of the static loading corresponding to the balance position of the dynamic loading.
Conclusions
This paper presents a series of experimental investigations on HDPE uniaxial geogrid under static and dynamic loading using the new pullout apparatus. The variation of the friction characteristics and the shear strength parameters of the soil-geogrid interface with the magnitude of normal static loading and with the frequency and amplitude of dynamic loading, together with an approach for simplifying research on dynamic loading, were analysed by comparing the results of the laboratory pullout tests. The following conclusions were obtained. The pullout resistance exhibited three growth modes with increasing normal load, while the influence of normal loading on pullout resistance gradually weakened. A critical normal load was introduced to provide a reference for the selection of reinforcement placed at different depths and to help make full use of the material. The apparent cohesion force increases with an increase in the normal static loading, while the apparent friction angle decreases. The normal static loading and the amplitude and frequency of the normal dynamic load had significant influences on the friction characteristics of the soil-geogrid interface in the 60–80 kPa range, with pullout resistance increments of 1.65 kN, 0.6 kN, and 0.3 kN, respectively. Hence, the influence of the normal static loading is greater than that of the amplitude and frequency of the normal dynamic loading. The effect of dynamic loading on the soil-geogrid interface tends toward that of the static loading corresponding to the balance position of the dynamic loading as the frequency of the dynamic loading increases. This study helps simplify the analysis of the stress state of the reinforcement under dynamic loading in the design and stability analysis of reinforced soil structures, helping to avoid structural failure.
In this paper, the sine wave was used to simulate the dynamic load, which has some limitations in reflecting traffic loads. Therefore, we will study the influence of complex dynamic loads on the soil-geogrid interface in follow-up work.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 9,179.8 | 2020-10-14T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Materials Science"
] |
Forced Convection Heat Transfer for Stratospheric Airship Involved Flight State
: Forced convection heat transfer is a significant factor for the thermal control of a stratospheric airship. However, most previous studies were conducted without considering the influence of the flight state, causing serious errors. In order to accurately predict the forced convection heat transfer of a stratospheric airship at an angle of attack, firstly, an empirical correlation of the Nusselt number (Nu) as a function of the Reynolds number (Re) and the length to diameter ratio (e) is developed for the horizontal state based on a validated computational fluid dynamics (CFD) method. Then, a correction factor K, accounting for the angle of attack (α), is proposed to modify this correlation. The results show that: (1) the Nusselt number increases with increasing Reynolds number, decreases as the length to diameter ratio changes from 2 to 6, and increases as the angle of attack changes from 0° to 20°; (2) at higher Reynolds numbers, the calculated results are 30 percent higher than those of previous studies at α = 20°; (3) compared with α and e, the effect of Re on the correction factor K can be ignored, and K is a strong function of α and e. The efficiency of heat transfer is increased by 6 percent at α = 20°. The findings of this paper provide a technical reference for the thermal control of a stratospheric airship.
Introduction
A stratospheric airship is a type of aircraft that takes off by buoyancy. Compared with other aerostats, the stratospheric airship has its unique advantages and is widely used in civil fields such as satellite communications, meteorological measurements, and military fields such as surveillance and defense [1][2][3].
Thermal performance is one of the most important factors affecting the flight state of a stratospheric airship. The stratospheric airship is filled with a large amount of gas and experiences a harsh external environment during flight, as changes in temperature affect the buoyancy to a large extent [23]. In the past decades, much research has been devoted to the forced convection heat transfer of stratospheric airships. Kreith and Kreider [11] established a simple numerical model to simulate the average temperature of a balloon envelope and lifting gas. Yao [12] proposed a multi-node thermal transient model of a stratospheric airship and verified the model using high-altitude flight experiments. Fang [15] built a two-node thermal model to analyze the heat sources and heat transfer patterns that affect the thermal balance and thermal performance of airships flying in the stratospheric environment. However, most existing forced convection heat transfer correlations are only applicable at low Reynolds numbers, so Dai [17] investigated the steady forced convection heat transfer of an isothermal spherical aerostat over the Reynolds number range from 20 to 10^8, and a new piecewise correlation was proposed. Later, Shi [21,22] proposed a new fixed-point adjustment method for an airship to solve the problem of altitude instability due to dramatic daily temperature swings. The airship membrane was discretized into triangular elements to enhance the computing accuracy, and a multi-node thermodynamic model of the airship was established.
In summary, despite the considerable knowledge attained on the forced convection heat transfer of stratospheric airships (through experimental or numerical work), most of the relevant empirical correlations are only applicable to spherical airships or to airships flying in the horizontal state. Many conclusions are therefore not universal, because the consistency between the study object and the geometric model underlying the correlation equations is often neglected. Moreover, no specific correlation is available for the external forced convection heat transfer of an ellipsoidal airship flying at a certain angle of attack, although the literature has pointed out that the efficiency of heat transfer is increased by 7 percent when the angle of attack is considered [24]. The heat transfer criterion equations need to be improved. The present study investigates the effects of Reynolds number, length to diameter ratio, and angle of attack on the thermal characteristics of an ellipsoidal airship to address this need. A new correlation of the average Nusselt number is built based on data obtained from computational fluid dynamics (CFD) calculations. Moreover, a correction factor K considering the angle of attack is proposed to modify this formula.
Geometric Model
An ellipsoidal airship is taken as the research object in this paper; its generatrix equation is given in [25], where L is the length of the airship and D is the diameter of the airship. In recent years, the CFD method has been able to successfully simulate the thermal characteristics of an airship, owing to the rapid development of computer technology, and its precision is sufficient to meet the demands of engineering calculation. However, a scaled model is only geometrically similar to the actual object, and the Reynolds number is not equal, which leads to a large error due to the scale effect when converting the model data to the actual object [26,27]. Thus, a full-scale airship model with L = 100 m is used in this paper to accurately simulate the forced convection heat transfer over the external surface of the airship. Here, the length to diameter ratio is defined as e = L/D. Five cases with e = 2, 2.5, 3, 4, and 6 are numerically simulated by changing the value of D. The values of L and D are shown in Table 1. The numerical simulations are based on the CFD software, Version 18.0, and the ICEM software, Version 18.0, is used for the meshing. The research object in this paper is a symmetrical structure without a tail wing; thus, only a half-body structure is used. At the same time, in order to ensure the symmetry of the flow field structure, a "Symmetry" boundary condition is used, and the airflow direction is parallel to the SYM plane (as shown in Figure 1).
Control Equations
The phenomena of flow and heat transfer are governed by the following equations [28]. In the momentum equation, i represents the x-, y-, and z-components, V is the velocity vector, μ is the viscosity of the fluid, P is the pressure of the fluid, ρ is the density of the fluid, and g is the gravitational acceleration. In the energy equation, c_p is the specific heat of the fluid, k is the thermal conductivity, and T is the temperature of the fluid.
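The equations themselves were lost in extraction; for orientation, the standard steady incompressible forms consistent with the listed symbols are sketched below. The exact formulation in [28] (e.g., with Reynolds-averaged turbulence terms) may differ.

```latex
% Momentum equation (i = x, y, z):
\rho\,(\mathbf{V}\cdot\nabla)V_i = -\frac{\partial P}{\partial x_i} + \mu\,\nabla^{2}V_i + \rho g_i
% Energy equation:
\rho c_{p}\,(\mathbf{V}\cdot\nabla)T = k\,\nabla^{2}T
```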
Computational Domains, Mesh, and Boundary Condition
The computational domain and the configuration of the airship are illustrated in Figure 1. Figure 1 shows that the boundary conditions of the computational domain are divided into INLET, OUTLET, WALL, FARWALL, and SYM. The INLET boundary is assumed to be a uniform velocity inlet given the magnitude and direction. Different angles of attack are indicated by changing the direction of the velocity. The OUTLET boundary is assumed to be "Opening" with a reference pressure of 0 Pa. The FARWALL boundary is assumed to be "Opening". The WALL boundary is assumed to be a no-slip wall, and its temperature is 298.15 K. The SYM boundary is assumed to be "Symmetry" . The thermal physical properties of the flow fluid are assumed to be constant at a temperature of 288.15 K, and the pressure of the computational domain is set as 1 atm. The fully implicitly coupled multigrid linear solver is used. The high resolution scheme is used for advection scheme and turbulence numerics.
Given the large physical size of an airship, its Reynolds number may easily exceed 10^6, which is far beyond the critical Reynolds number. Thus, the flow around an airship is a typical turbulent flow. The computational domain is divided into two separate subdomains, namely the internal domain and the external domain, to precisely investigate the thermal characteristics over the external surface of the airship. The mesh of the internal domain, including the boundary layers, is fine, and the first layer of the mesh around the surface lies at y+ ≤ 1. The mesh of the external domain is relatively coarse. Through a convergence analysis, the number of elements is controlled at about 2,000,000. The maximum aspect ratio is 36.4, the minimum quality is 0.549, the maximum angle is 147, the minimum angle is 34.2, and the minimum equiangle skewness is 0.37. The computational mesh is illustrated in Figure 2.
Turbulence Model
The turbulence model has a significant effect on the results of the numerical simulation [29]. The suitability of the k-ω turbulence model for studying forced convection heat transfer around an ellipsoidal airship has been demonstrated in the literature [30]. To further verify the applicability of the turbulence model in the present study, a numerical simulation was carried out using the k-ω turbulence model, and the pressure coefficient Cp is plotted in Figure 3. Because experimental results are not available to directly verify the numerical results for an ellipsoidal airship, the pressure coefficient data of a spherical airship are used as a reference to verify the turbulence model and numerical scheme. The size of the model and the boundary conditions are the same as in the literature [31]. The pressure coefficients obtained in the present study are compared with the experimental measurements of Achenbach. Figure 3 shows that the results computed with the k-ω turbulence model are close to the experimental measurements. Considering this small discrepancy, the k-ω turbulence model is used in the present study.
Results and Discussions
Existing studies have shown that the main factors affecting forced convection heat transfer around an ellipsoidal airship are the length to diameter ratio and the Reynolds number under a horizontal state [20,28,[32][33][34]. However, an airship generally ascends or descends at a certain angle of attack; thus, estimating its thermal characteristics under this condition is necessary.
The effects of different parameters on forced convection heat transfer are studied in this paper. The values of each parameter are numbered from 1 to 5, as shown in Table 2. A reference case is set with Re of 1 × 10^7, e of 2, and α of 0°. Only the value of the analyzed parameter is changed during the analysis, and the values of the remaining two parameters are kept at the reference values. A total of 15 cases is conducted in this part to study the effects of different parameters on the forced convection heat transfer around an ellipsoidal airship. Figure 4a shows that Nu clearly increases with the increase of Re. According to the definition of the Reynolds number, Re = Ul/ν, where U is the velocity of the fluid, l is the characteristic length of the airship, and ν is the kinematic viscosity of the fluid. When L and ν are constant, U ∝ Re. The larger the value of Re, the larger the value of U around the airship, which results in the removal of more heat and an increase of Nu. Figure 4b demonstrates that Nu decreases with the increase of e. According to the definition of e above, the shape of the airship is close to a sphere when e is small, whereas it approaches a flat plate as e increases. References [35][36][37] indicate that a vortex occurs in the tail region of a sphere while a uniform flow passes over it, thereby enhancing convection heat transfer. Figure 4c indicates that Nu increases with the increase of α. The presence of the angle of attack (α) changes the flow structure around the airship and creates a windward side, which eventually leads to higher convective heat transfer. Figure 5 shows the velocity contours and local velocity vectors around the airship at angles of attack of 0° and 20°. The airflow flows from left to right. At the front region, the velocity of the airflow decreases significantly and then increases along the airship hull. It reaches a maximum value at the middle region of the hull and then gradually decreases. At the tail region, the velocity of the airflow is almost zero due to the separation of the airflow from the surface of the airship. Comparing Figure 5b with Figure 5a, it can be seen that the flow field structure around the airship is changed by the angle of attack. On the windward side, the position where the airflow velocity reaches its maximum moves backward, while on the leeward side it shows the opposite trend. The change of flow field structure around the airship is the root cause of the change in forced convection heat transfer.
Horizontal State
The horizontal state is the main state in a flight course. Thus, investigating the forced convection heat transfer around an ellipsoidal airship under horizontal state is important. Assume α = 0°, the values of Re and e are determined according to Table 2. The average heat transfer in terms of the Nu around an ellipsoidal airship can be written as a power law equation where c1, c2 and c3 are constant coefficients. Based on the data obtained from the aforementioned simulation results, a new correlation of Nu with a determination coefficient (r 2 ) of 0.9996 and a root mean squared error (RMSE) of 2054 is proposed via MATLAB R2018a. The fitting points and fitting surface are shown in Figure 6. The values of the fitting parameters are listed in Table 3. The value of c 1~c3 is introduced into Formula (5) to obtain Formula (6) Nu = 0.0161Re 0.8543 e −0.0454 (6) where Re ∈ [1 × 10 7 , 2.0 × 10 8 ] and e ∈ [2, 6]. The model of the non-fit points is numerically simulated to validate the correctness of Formula (6). Table 4 compares the values of Nu calculated by Formula (6) with those obtained from the simulation results. The calculated results agree well with those of the simulation results.
State Modification
The problem of changes of attack angle of a stratospheric airship is an important subject on the research of the Stratospheric Airship Platform. Li [38] pointed out that the airship reaches its stable attack angle in a short time after released, and that the attack angle is usually large. Then, the airship ascends at its stable attack angle. So, it is necessary to investigate the forced convection heat transfer of the airship at a certain attack angle.
The definition of K is introduced in this part and can be expressed in the form of K(Re, e, α) = Nu(Re, e, α)/Nu(Re, e) (7) Figure 7 shows the relation between K and α (Re = 1 × 10 7 ). Clearly, K increases with the increase of α regardless of the value of e, the larger the attack angle, the greater the difference. K increases with the increase of the e for a given α, especially at higher α.
The definition of K is introduced in this part and can be expressed in the form of Figure 7 shows the relation between K and α (Re = 1 × 10 7 ). Clearly, K increases with the increase of α regardless of the value of e, the larger the attack angle, the greater the difference. K increases with the increase of the e for a given α, especially at higher α. Figure 8 shows the relation between K and Re (e = 2). Clearly, K remains nearly the same as the increase of Re regardless of the value of α. For example, K reduces from 1.052 to 1.033 with Re changes from 1 × 10 7 to 2.0 × 10 8 when α = 20 • . This result means that the effect of Re on K can be ignored. K increases with the increase of α for a given Re. For instance, the value of K at α = 20 • and α = 5 • is 1.053 and 1.004, respectively, with a difference of 4.8% when Re = 1 × 10 7 .
The correction factor K can therefore be simplified to K(e, α). The mean value of K over the different Re is taken as the final value of K. Based on the data obtained from the above simulation results, a correlation for the correction factor K with a determination coefficient (r^2) of 0.9995 and an RMSE of 0.0005286 is proposed via MATLAB R2018a, as Formula (8), valid for e ∈ [2, 6] and α ∈ [0°, 20°]. According to the above analysis, the forced convection heat transfer of a stratospheric airship involving its flight state can then be calculated by the empirical correlation of Formula (9), i.e., Nu(Re, e, α) = K(e, α) · Nu(Re, e), combining Formulas (6) and (8). The correctness of Formula (9) is likewise validated, and the results are listed in Table 5. As mentioned above, no specific correlation has been available for the external forced convection heat transfer of an ellipsoidal airship flying at a certain angle of attack; most researchers conducted their studies without considering the effect of flight state. Figure 9 compares the values of the average Nusselt number obtained from Formula (9) with those taken from the literature [11,30,39]. The geometric models used in those studies are the same as or close to the one used in the present study; they are all ellipsoidal. The results of the present study show a trend consistent with the literature. Generally, the present results at α = 20° are higher than those of existing correlations, especially at higher Reynolds numbers, where the difference reaches a maximum of about 30 percent compared with Dai. In the thermal design phase, the effect of the flight state cannot be ignored.
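Since the fitted coefficients of Formula (8) are not reproduced in this text, the sketch below treats K(e, α) as a user-supplied callable and only shows how Formulas (6)–(9) combine; `correction_factor` and the dummy value 1.05 are placeholders.

```python
def nusselt_with_attack_angle(Re, e, alpha_deg, correction_factor):
    """Formula (9) as a composition: Nu(Re, e, alpha) = K(e, alpha) * Nu(Re, e).

    correction_factor : callable implementing Formula (8), K(e, alpha_deg),
    valid for e in [2, 6] and alpha in [0, 20] degrees.
    """
    nu_horizontal = 0.0161 * Re**0.8543 * e**(-0.0454)  # Formula (6)
    return correction_factor(e, alpha_deg) * nu_horizontal

# Illustrative call with a dummy K of the reported magnitude (~1.05 at alpha = 20 deg)
print(round(nusselt_with_attack_angle(1e7, 2.0, 20.0, lambda e, a: 1.05)))
```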
Conclusions
A numerical survey is conducted to explore the forced convection heat transfer of a full-scale ellipsoidal airship. The effects of angle of attack, Reynolds number and length to diameter ratio on heat transfer are investigated. An empirical correlation describing the heat transfer of an ellipsoidal airship was developed. Several conclusions were drawn after analyzing the results: (1) The change of flow field structure around the airship is the root reason for the change of forced convection heat transfer. | 5,826.8 | 2020-02-14T00:00:00.000 | [
"Physics"
] |
Bulk cyclotron resonance in the topological insulator Bi2Te3
We investigated the magneto-optical response of undoped Bi2Te3 films in the terahertz frequency range (0.3 - 5.1 THz, 10 - 170 cm-1) in magnetic fields up to 10 T. The optical transmission, measured in the Faraday geometry, is dominated by a broad Lorentzian-shaped mode, whose central frequency increases linearly with applied field. In zero field, the Lorentzian is centered at zero frequency, hence representing the free-carrier Drude response. We interpret the mode as a cyclotron resonance (CR) of free carriers in Bi2Te3. Because the mode's frequency position follows a linear magnetic-field dependence and because undoped Bi2Te3 is known to possess an appreciable number of bulk carriers, we associate the mode with a bulk CR. In addition, the cyclotron mass obtained from our measurements fits well the literature data on the bulk effective mass in Bi2Te3. Interestingly, the width of the CR mode demonstrates a non-monotonic behavior in field. We propose that the CR width is determined by two competing factors: impurity scattering, whose rate decreases with increasing field, and electron-phonon scattering, whose rate exhibits the opposite behavior.
Introduction
From the theoretical point of view, a three-dimensional (3D) topological insulator (TI) possesses an insulating bulk and conducting surfaces, the conduction channels at the surfaces being spin-polarized [1][2][3][4]. Since the spin polarization can potentially be utilized in spintronic devices, topological insulators have attracted a lot of attention in the past years [5,6]. In practice, real samples of 3D topological insulators often conduct not only on their surfaces, but also in the bulk. Considerable efforts have been made to understand and separate the properties of surface and bulk charge carriers. These properties can be studied via different spectroscopic techniques, such as angle-resolved photoemission spectroscopy (ARPES) or optical and magneto-optical spectroscopy. The optical conductivity and cyclotron resonance (CR) of a number of 3D TI materials have been reported in the literature. Perhaps the most studied family of such TIs is the bismuth selenide - bismuth telluride series, Bi2(Te1−xSex)3, which also includes the undoped members, Bi2Te3 and Bi2Se3 [7][8][9]. In this study, we concentrate on Bi2Te3; namely, we investigate the CR in this compound experimentally. Surprisingly, the published CR measurements performed on this well-studied TI produce rather diverging results [10][11][12][13], with the absorption features generally having rather complex shapes. One of the reasons for such diversity might be the sample-dependent variation between the surface and bulk contributions, which, in turn, greatly depend on the exact position of the Fermi level.
Unlike in the majority of previous reports [11][12][13], the CR absorption observed in our study can be well described by a single Lorentzian-shaped mode (which is rather consistent with the earliest study on this issue from 1999 [10]). We believe the absorption we detect is of bulk origin. Thus, our findings might be useful for the proper interpretations of the CR modes in doped Bi2Te3, where the balance between the surface and bulk-states contributions can be shifted towards the former, but the bulk still cannot be completely ignored.
Materials and Methods
We grew thin layers of Bi2Te3 on (111)-oriented BaF2 substrates by molecular beam epitaxy [14]. For the growth, we used binary Bi2Te3 and elemental Te. This is different from the standard practice, when elemental (Bi and Te) sources are utilized with the typical flux ratio of Te/Bi being about 10 to 20. Using Bi2Te3 and Te allowed us to reduce this ratio to values below 1 and, hence, to precisely control the stoichiometry of the growing layer. Along with the employed "ramp up" growth procedure [15], these two approaches successfully suppress twin formation in the growing films. X-ray diffraction (XRD) φ scans about the [0 0 1] axis on the asymmetric (1 0 10) reflection revealed only 120°-periodic peaks and confirmed that the films obtained by this method are either single-domain or have a very small twin volume fraction (3-7 % for films with 1 cm² area) with the c axis of Bi2Te3 being perpendicular to the substrate surface. To the best of our knowledge, this thin-film growth method is unique.
Figure 1. Raw transmission spectra of Bi2Te3 films on BaF2 substrates as obtained in magnetic fields of up to 10 T. The areas with low signal due to either the substrate phonons or spectrometer electronic noise are shaded. The signal-to-noise ratio is best at around 80 cm-1 and becomes appreciably lower as frequency increases (see also Figure 2), preventing thus any meaningful measurements of the CR mode at fields higher than 10 T.
In order to prevent possible influence of atmospheric oxygen and water, we have developed a method to cover the TI films in situ with optically friendly protecting layers of BaF2 [16]. We have found that 30 -50 nm of BaF2 provide the optimal protection. Our measurements have shown that the BaF2 cap layers affect neither crystal-structure parameters nor optical properties at the frequencies of interest.
The sample used in this study was thoroughly characterized by XRD, scanning electron microscopy (SEM), atomic force microscopy (AFM), and ARPES. The results of these investigations, presented in the Supplemental Material, confirm high structural and morphological quality of the film and show that the film possesses the topological surface electronic states as well as the states in the bulk conduction band.
For optical measurements, we utilized the infrared optical setup available at the High Field Magnet Laboratory in Nijmegen [17]. This setup consists of a commercial Fourier-transform infrared (FTIR) spectrometer (Bruker IFS113v) combined with a continuous-field 33-T Bitter magnet. A detailed description of this setup can be found elsewhere [18]. The measurements were performed in the Faraday geometry [19] at 2 K. A mercury lamp was used as the radiation source. The far-infrared radiation was detected using a custom-made silicon bolometer operating at 1.4 K. The FTIR spectra were recorded at a number of magnetic fields from 0 to 30 T. The optical data were collected between 10 and 170 cm⁻¹ (300–5100 GHz), using a 200-μm Mylar beamsplitter and a scanning velocity of 50 kHz. At each field, at least 100 scans were averaged. As will be seen below, the data obtained in fields above 10 T cannot be used in our analysis because of a low signal-to-noise ratio. Thus, in this study we concentrate on the measurements performed between 0 and 10 T.
Results and Discussion
In Figure 1, we show raw transmission data measured through the 115-nm-thick Bi2Te3 film on a 0.49-mm-thick BaF2 substrate (cf. Figure S2 in the Supplemental Material) in magnetic fields B of up to 10 T. We note that all the measurements reported in this study were performed on a single sample. As seen from Figure 2, the substrate has no detectable field dependence; hence, all the field-induced changes come from the film. We note that the BaF2 substrate has intense phonon modes at roughly 50 and 140 cm⁻¹ [20], so accurate measurements around these frequencies are impossible. The spectra of Figure 1 are dominated by a single broad mode, whose position shifts to higher frequencies with increasing field. The spectra can be fitted by a single Lorentzian, as exemplified in Figure 3 for 2, 4, and 6 T. In zero field, the Lorentzian central frequency is zero, i.e., the observed absorption mode is due to free carriers (Drude conductivity). The field evolution of the mode can be traced in Figure 1: with increasing field, the mode shifts upwards and eventually goes above 100 cm⁻¹, i.e., into the range where the signal-to-noise ratio is worsened by the spectrometer noise and the phonons in the substrate (this prevents a meaningful analysis of the spectra at higher fields). Still, the shift of the mode in the applied magnetic field is apparent and can straightforwardly be interpreted as magnetic-field-induced free-carrier localization or, in other words, cyclotron-resonance absorption.
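As a rough illustration of the fitting procedure just described, the sketch below fits a single Lorentzian dip to a synthetic transmission spectrum with scipy; the frequency grid, noise level, and starting values are placeholders, not the measured data.

```python
# Illustrative sketch (not the authors' code): fitting a single Lorentzian
# absorption line to a relative transmission spectrum.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, nu0, gamma, amplitude, baseline):
    """Lorentzian dip on a flat baseline; nu0 = central frequency,
    gamma = half width at half maximum (FWHM = 2*gamma)."""
    return baseline - amplitude * gamma**2 / ((nu - nu0)**2 + gamma**2)

# Placeholder arrays: frequency axis in cm^-1 and measured relative transmission.
nu = np.linspace(10, 170, 400)
transmission = lorentzian(nu, 60.0, 12.0, 0.3, 1.0) + 0.01 * np.random.randn(nu.size)

p0 = [60.0, 10.0, 0.2, 1.0]                       # initial guess
popt, pcov = curve_fit(lorentzian, nu, transmission, p0=p0)
nu0_fit, fwhm_fit = popt[0], 2 * popt[1]
print(f"central frequency: {nu0_fit:.1f} cm^-1, FWHM: {fwhm_fit:.1f} cm^-1")
```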
The Lorentzian-fit results for this CR absorption mode in fields from 0 to 10 T are shown in Figure 4. One can see that the central frequency of the absorption line is linear in field (left panel). This immediately signals that the electronic band(s) responsible for the observed absorption have a quadratic dispersion relation. For linear electronic bands, the CR frequency is expected to show a square-root dependence on the applied field [21]. Thus, following Occam's razor, we conclude that the mode is due to bulk (i.e., not linear, not topological) electronic bands. This conclusion is in full agreement, e.g., with ARPES [9] and quantum-oscillation [22] measurements, which show that the Fermi level in undoped Bi2Te3 crosses the bulk conduction band and hence a large bulk Fermi surface exists.
Figure 3. Lorentzian fits of the spectra from Figure 1 for a few magnetic-field strengths as indicated. The raw experimental data are smoothed using a Savitzky-Golay method [23]. Note that the spectra for 4 and 6 T are shifted upwards for clarity.
We note that weak modes due to the surface conduction channels may exist on top of the dominating bulk absorption, but within our accuracy they cannot be resolved.
The linear field dependence of the central CR frequency, ω0, can be fitted with the standard parabolic-band expression connecting the slope of ω0(B) with the carrier cyclotron mass m*: ω0 = eB/(m*c) (CGS units are used; e is the elementary charge, c is the speed of light). This fit is shown in the left panel of Figure 4 as a straight line and yields m* = 0.1me (me is the free-electron mass). This value is in very good agreement with the available literature data on the bulk effective mass in Bi2Te3 [24]: m* = 0.109me for the response perpendicular to the c axis, which is what we probe in our transmission experiment with unpolarized light. This match provides another confirmation of the correctness of our interpretation. We note here that complete agreement between the calculated electronic band structure of Bi2Te3 and the entire body of available experimental work is still to be achieved, as emphasized in a recent review [25].
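A minimal sketch of this mass extraction, using placeholder fitted central frequencies rather than the actual data, is given below; it converts the slope of the linear fit of the CR frequency (in cm⁻¹) versus field into a cyclotron mass in units of the free-electron mass.

```python
# Illustrative sketch: extracting the cyclotron mass from the linear field
# dependence of the CR central frequency, omega0 = e*B/m* (SI form).
import numpy as np

e = 1.602176634e-19          # elementary charge, C
m_e = 9.1093837015e-31       # free-electron mass, kg
c_cm = 2.99792458e10         # speed of light, cm/s

B = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # field, T (assumed grid)
nu0 = np.array([18.7, 37.4, 56.1, 74.8, 93.4])    # fitted centres, cm^-1 (placeholders)

slope, _ = np.polyfit(B, nu0, 1)                  # cm^-1 per tesla
omega0_per_B = 2 * np.pi * c_cm * slope           # rad/s per tesla
m_star = e / omega0_per_B                         # kg
print(f"m* = {m_star / m_e:.3f} m_e")             # ~0.1 for these placeholder numbers
```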
Finally, we turn to the width of the absorption band. As one can see from the right panel of Figure 4, the full width at half maximum (FWHM) of the band demonstrates a non-monotonic field dependence: in low fields, it decreases with increasing B and then, starting at approximately 5 T, the FWHM starts growing with the applied field. The initial decrease of the FWHM can be naturally explained by the shrinking cyclotron-orbit radius with increasing B and the consequent decrease of impurity scattering. The reason for the CR-mode broadening at B > 5 T is not entirely clear. We propose that it could be due to increased electron-phonon scattering. In higher fields, the CR mode approaches the frequencies where the phonon density grows (roughly, above 40 cm⁻¹; cf. the left panel of Figure 4 and Ref. [26], where the phonon density for Bi2Te3 was calculated), and hence the rate of electron-phonon scattering starts to increase, leading to the observed total broadening of the CR line according to Matthiessen's rule.
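Purely as an illustration of how Matthiessen's rule can produce a non-monotonic width, the toy parameterization below adds an impurity contribution that narrows with field to a phonon contribution that broadens with field; the functional forms and coefficients are assumptions, not fits to the data.

```python
# Toy illustration (assumed functional forms, not a fit to the data): under
# Matthiessen's rule the scattering rates, and hence the Lorentzian widths,
# of independent channels simply add.
import numpy as np

def fwhm_total(B, w_imp0, b_imp, w_ph0, b_ph):
    """Impurity channel assumed to narrow with field, phonon channel to broaden."""
    fwhm_impurity = w_imp0 / (1.0 + b_imp * B)   # decreasing with B (assumption)
    fwhm_phonon = w_ph0 * (1.0 + b_ph * B**2)    # increasing with B (assumption)
    return fwhm_impurity + fwhm_phonon

B = np.linspace(0.5, 10, 50)
print(fwhm_total(B, 20.0, 0.5, 2.0, 0.05).round(1))  # non-monotonic in B
```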
Conclusions
We have investigated the magneto-optical response of undoped Bi2Te3 films at terahertz frequencies and in magnetic fields of up to 10 T. We observed an intense CR line, which can be fitted with a single Lorentz oscillator. The central frequency of the CR increases linearly with applied field, signaling the bulk origin of this resonance. In addition, we found the "in-plane" cyclotron mass, m* = 0.1me, which matches the literature data for bulk Bi2Te3 well. The width of the CR mode demonstrates a non-monotonic field dependence. We propose that the CR width is defined by two competing factors: impurity scattering, whose rate decreases with increasing field, and electron-phonon scattering, whose rate demonstrates the opposite behavior. We believe our findings can be exploited in future measurements of the surface-states CR in Bi2Te3 to disentangle the bulk and surface contributions.
The film used in the magneto-optical measurements was characterized by a number of experimental probes as described below.
A. Structural and morphological characterization
For X-ray diffraction (XRD) measurements, we used a Panalytical MRD Extended diffractometer. The results of these studies are shown in Figure S1. In panel (a), a series of (0 0 l) reflections from the film is seen together with the intense (hhh) peaks from the BaF2 substrate. This evidences the growth of a highly oriented single-phase layer with the basal plane (0 0 1) parallel to the BaF2 substrate (1 1 1) cleavage plane. The high crystalline quality is supported by the absence of any noticeable broadening when going from the (0 0 3) to the (0 0 36) reflection peaks. As one can see from panel (b), the full width at half maximum of the (0 0 15) rocking curve is Δω = 0.082°. The somewhat asymmetric shape of the curve indicates the presence of anti-site defects, which are responsible for impurity scattering. We used a JSM-7001F scanning electron microscope (SEM) to obtain cleaved cross-section images of the film. One such image is shown in panel (a) of Figure S2. From this image, the thicknesses of the Bi2Te3 film and of the BaF2 cap layer were evaluated to be 115 and 49 nm, respectively. The morphology of the studied film was explored by means of atomic force microscopy (AFM) in the tapping mode using an NT-MDT Solver 47 Pro system. For these measurements, the barium fluoride capping layer was removed by the procedure described in Ref. [S1]. A representative (5 × 5 μm) AFM scan of the investigated Bi2Te3 film is shown in panel (b) of Figure S2. Regular triangular pyramids with large domain terraces and 1-nm-high steps indicate high crystal quality. The observed spiral-like growth was reported previously [S2, S3] and is believed to promote the formation of twin-free films. All triangular domains are oriented in the same direction, evidencing a single-domain sample with trigonal symmetry.
B. Photoemission characterization
The ARPES measurements were performed with a hemispherical SPECS HSA3500 electron analyzer characterized by an energy resolution of about 10 meV. Monochromatized He I (21.2 eV) radiation was used as the photon source. During the measurements, the sample was cooled with liquid nitrogen to 100 K. Prior to the measurements, the surface of the samples was cleaned by several sputter-anneal cycles (argon sputtering: 500 eV/30 min; annealing: 260 ℃/15 min). The results of the measurements are shown in Fig. S3.
| 3,339 | 2020-08-20T00:00:00.000 | [ "Physics" ] |
The Implementation of ARCS Learning Model to Improve Students' Learning Activities and Outcomes in Vocational High School
Learning today needs to emphasize student activities so that learning becomes more constructive. In practice, however, learning often emphasizes the target material and focuses only on the final result. This study aims to improve student learning activities and outcomes in the Automotive Engineering Basic Work subject by implementing the ARCS learning model. The research is classroom action research, with each cycle consisting of the stages of planning, action, observation, and reflection. The research subjects were 40 tenth-grade students of a State Vocational High School in Purworejo, Indonesia. Data were collected through observation to describe student activities during the learning process. An evaluation test was then used to determine students' final learning results after the ARCS learning model had been applied in the classroom. The study results indicate that implementing ARCS learning improves student learning activities and outcomes during the learning process: the implementation of the ARCS model increased student learning activities by 74% and increased learning outcomes by 82%. Thus, it can be concluded that the ARCS learning model can, in general, have a potential effect because students learn more constructively, so it can be used as a learning reference in vocational high schools.
INTRODUCTION
Education is a manifestation of civilization; high-quality education can produce high-quality human resources. Human resources play a vital role in the progress of a nation, because a nation that is advanced, superior, and development-oriented in all aspects cannot be separated from education (Indonesia's Investment Program for Human Resource Improvement, n.d.). Education is a vehicle for producing high-quality human resources (HR). In addition, education serves to develop the potential and actual abilities possessed by students. Ability development can be done formally and informally through learning (Suyitno et al., 2020). This learning process requires the active involvement of both educators and students. Success in the learning process is influenced by several factors, including supporting tools in learning facilities, learning materials, learning media, learning methods or strategies, and others (Purwoko, 2017).
Vocational high school is education built around competency expertise in vocational fields. It prepares students to work in certain areas of expertise, that is, to create a workforce with the knowledge, skills, and work attitudes that industry needs. Graduates of vocational schools are prepared as human resources ready to work; in addition, they can apply the knowledge they acquired at school to overcome problems in the community (Sisdiknas, 2003). At the vocational high school level, students' learning process is guided by teachers who act as facilitators to help achieve learning goals. Teachers can support the learning process in various ways, using effective learning strategies and supporting media such as textbooks, electronic school books, pictures, audio, animated films, and others (Suyitno et al., 2020). These learning tools will be effective if they are adapted to the students' character in the school, the types of subjects delivered, environmental conditions, and supporting facilities (Purwoko et al., 2019).
Success in implementing this learning process is the task of an educator or teacher because the teacher is the initial designer of learning strategies in the classroom so that learning objectives can be achieved. One of the teacher's roles is as a demonstrator; namely, the teacher must show how to make each learning material better understood and internalized by every student (Sofyan et al., 2019). Learning is essentially a process of interaction with all situations around the individual. Learning can be seen as a process directed to achieve learning objectives. Democratic education must create interaction between teachers and students in the learning process. The goal is to explore students' abilities to play an active role, improve their intellectual abilities, attitudes, and interests (Sofyan et al., 2019).
One of the visions of vocational high schools in Indonesia is to organize education and training oriented to the needs of the world of work. In realizing a quality school, of course, it is strongly supported by the process of daily learning activities, which include applying learning strategies and learning models used so that students can easily understand what is conveyed by teachers in classroom learning activities constructively.
The vocational high school realizes industry-based school branding through "Link and Match" educational services in collaboration with the industrial sector. Cooperation with industry provides strong relevance to vocational specifications, and the learning process is adapted to the needs of the relevant industry. For example, Basic Automotive Engineering is a compulsory subject taken in tenth grade; it must be passed, and the learning material in the even semester includes the functions of mechanical measuring tools.
The results of observations and interviews at school show that teachers, when learning takes place, are still using conventional learning models, namely, where the teacher explains a concept.
Students observe examples of questions and continue with exercises, and then they answer questions according to the order of completion explained by the teacher. In addition, although learning media are available to support it, students still experience very little learning activity. One possible impact is the low learning outcomes achieved by tenth-grade students. Students become less than optimal because the activities in class are significantly fewer and not constructive. The assessment results in the 2019/2020 academic year in tenth grade showed that out of 39 students, 38 had not achieved the minimum completeness criteria score.
Of course, these problems require more attention to improving learning so that students are more motivated to carry out classroom activities productively. Therefore, a reference learning model relevant to these problems is one that gradually provides opportunities for students to be active, interactive, and participatory, namely the ARCS (Attention, Relevance, Confidence, and Satisfaction) learning model (Alfiyana et al., 2018). The ARCS learning model is a problem-solving approach to designing the motivational aspects of the learning environment in order to encourage and maintain students' motivation to learn (Keller, 1987). This learning model prioritizes student attention, adapts learning materials to students' learning experiences, builds students' self-confidence, and creates a sense of satisfaction. The ARCS learning model was developed on the basis of expectancy-value theory, which contains two components: the value of the goal to be achieved and the expectation of successfully achieving that goal. Keller developed these two components into four: attention, relevance, confidence, and satisfaction (Keller, 1987).
Several relevant studies have reported positive effects of learning with the ARCS model. One study showed that, based on an effectiveness test, the ARCS model can significantly increase students' motivation to learn mathematics (Pratama et al., 2019). Other research showed that, based on a comparative test, the ARCS model strongly influenced students' learning motivation; high motivation can be seen from productive activities in the classroom (Afjar et al., 2020). In addition, other research showed that training using the ARCS model helped strengthen the effectiveness of mentoring: higher education levels and internal and external education training can increase learning motivation, which was significantly positively correlated with reverse mentoring (Chen, 2016).
This body of work provides strong grounds for optimism that the ARCS model is relevant to learning at the vocational high school level. The novelty of this research lies in following the stages of learning activities according to the ARCS model while modifying them to integrate vocational content into the material being taught, which can positively impact vocational students. From this explanation, the primary purpose of this research is to improve the learning process. Vocational school students' learning activities can be developed by applying the ARCS model because the model has strong relevance, namely strengthening motivation as a learning base so that learning activities in class increase and have a potential effect on vocational high school students.
METHOD
This research is a Classroom Action Research, which combines knowledge, research, and action (Goundar, 2019). This research design uses the Kemmis and McTaggart model, which includes four stages in the research process: planning, action, observation, and reflection (Fletcher & Beringer, 2009). This research was conducted in tenth grade at the State Vocational High School in Purworejo, Indonesia. The subject was chosen because it has strong characteristics to serve as a pilot project to develop innovative and superior learning. This research was conducted in the even semester of the 2019/2020 academic year in the tenth grade.
Data were collected through observation and tests. Observations were made to describe the implementation of the ARCS learning model and student learning activities during the learning process. Student learning activities refer to the indicators: (1) students' courage in asking questions, (2) students' courage in answering questions and expressing opinions, (3) student interaction with teachers, (4) student attention during group learning processes. Then the learning evaluation test is used to measure the final learning outcomes.
Data analysis techniques in this study include (1) analysis of student learning activity data using a numerical rating scale, with the lowest score of 1 and the highest score of 5 for each aspect of the assessment, and (2) analysis of student learning test results, carried out at the end of each cycle. Indicators of success are judged against the minimum completeness criteria in the Basic Automotive Engineering subject: students who achieve a score of 75 or more are declared to have completed the subject, while students who score below 75 have not achieved the minimum completeness score. The final achievement of students' activities and learning outcomes in each cycle was analyzed using the percentage technique. Changes in the improvement of the learning process can be described by comparing the cycles; two cycles were therefore carried out in this study to see the pattern of change towards improved learning. The implementation of the ARCS learning model is considered successful if it is able to increase student activity and learning outcomes. Student learning activities are said to have positive potential if they reach a score of 65%, and student learning outcomes if they reach the minimum completeness criterion of 75%.
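A minimal sketch of this percentage technique, using hypothetical scores rather than the study data, could look as follows.

```python
# Minimal sketch (hypothetical data) of the percentage technique described above:
# activity scores on a 1-5 scale are converted to a percentage of the maximum,
# and learning outcomes are compared against the minimum completeness score of 75.
def activity_percentage(scores, max_score=5):
    """Mean observed activity score expressed as a percentage of the maximum."""
    return 100.0 * sum(scores) / (len(scores) * max_score)

def completeness_percentage(test_scores, passing_score=75):
    """Share of students at or above the minimum completeness criterion."""
    passed = sum(1 for s in test_scores if s >= passing_score)
    return 100.0 * passed / len(test_scores)

# Hypothetical example values, not the study data:
cycle_scores = [3, 4, 2, 5, 4]
cycle_tests = [80, 60, 75, 90, 55]
print(activity_percentage(cycle_scores))      # 72.0
print(completeness_percentage(cycle_tests))   # 60.0
```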
FINDINGS
This research was conducted in two cycles. The subject taught during the research was Automotive Engineering Basic Work, at the basic competency of applying mechanical measuring tools and their functions. Each cycle followed the classroom action research steps of planning, implementation, observation, and reflection. The implementation of the ARCS learning model used in this study has 17 learning steps. The implementation of the ARCS learning model is said to be appropriate or successful if it matches the syntax of the ARCS learning model. The implementation of the ARCS learning model from cycle I to cycle II is dominated by the steps of implementing actions in the classroom; during the learning process, data were collected with the planned instruments to see the effect of the changes towards more constructive learning.
Observations of the implementation of the ARCS learning model in cycles I and II show that the implementation improved in line with the syntax. In the first cycle, the percentage of conformity of the implementation of the learning model was 82%; it then increased by 18% in the second cycle, to 100%. The achievement of the implementation of the ARCS model is shown in Figure 1.
Figure 1. The Improvement of ARCS Implementation
During the learning process from cycle I to cycle II, student learning activities increased. Data from observation and analysis showed that the level of student activity in the first cycle was 55.60%, which falls in the category of not achieving the specified success indicators. In the second cycle, the percentage of student activity reached 73.71%, which is in the good category, indicating that the indicators of success had been achieved. From these observations, it can be concluded that the percentage of student activity during the learning process using the ARCS model increased by 18.11% from cycle I to cycle II. These results indicate that the model can potentially influence learning activities and produce significant learning progress. The increase in student learning activities is shown in Figure 2.
Figure 2. Percentage of Student Learning Activities
The percentage of student activity in Figure 2 is obtained from the changes from cycle I to cycle II. Exploratory data on student learning activities were obtained by analyzing the learning activity indicators, which consist of 4 aspects. The following figure shows the increase in the number of occurrences of each activity indicator from cycle I to cycle II.
Figure 3. Frequency of Occurrence of Indicator Activity
At the end of each cycle, learning outcomes were measured. Learning outcomes increased empirically from cycle I to cycle II. This can be seen from the average student learning outcomes and the number of students who completed their learning. In the first cycle, the number of students who reached the minimum completeness criteria was 15, a completeness percentage of 41.7%, with an average score of 57.5. In the second cycle, the number of students who reached the minimum completeness criteria was 32, a percentage of 82%, with an average score of 75. Figure 4 shows a significant increase in learning outcomes from cycle I to cycle II. This indicates that the ARCS learning model has a potential effect on students at each stage.
DISCUSSION
This study aimed to improve student activities and learning outcomes in Basic Automotive Engineering Work by implementing the ARCS learning model. The results show that the stages of each cycle, consisting of planning, action, observation, and reflection, contribute to the development of student learning activities and learning outcomes.
From the stages per cycle, namely planning, action, observation, and reflection during the research process, it was carried out well according to the learning syntax of the ARCS model. The results obtained indicate strongly that the ARCS learning model gives positive results. Each stage of student learning looks very constructive during learning; this is in accordance with research conducted by Chen (2016) which says that the ARCS learning model is significantly positively correlated with the final learning outcome (Jamil, 2019). ARCS is carried out to generate motivation so that students' activities and learning outcomes are carried out through student attention, the linkage of the material with the student's learning experience, student confidence, and student satisfaction. This statement is in accordance with what was conveyed by Pratama et al. (2019), who said that ARCS learning is a form of learning that prioritizes student attention, adapts learning materials to student learning experiences, creates self-confidence in students, and creates a sense of satisfaction in the students so that meaningful learning occurs.
Based on the results of the final action test analysis in cycle I, it appears that students are starting to be active in solving problems that the teacher has given. However, there are still students who make mistakes. This error occurs because students have not been able to understand the context and theory that has been given previously. However, in general, most students can answer questions correctly; this is in accordance with research conducted by Setiawan et al. (2020) that the ARCS model can increase student motivation in answering questions because of the stimulation provided by the teacher. Furthermore, the final test of the second cycle of action showed that students were actively able to solve the questions well. Students have been able to carry out structured solutions; this is in accordance with research conducted by Afjar et al. (2020) which states that the stages of the ARCS model impact students in solving problems in a structured manner. In general, it shows that students can solve problems related to Basic Automotive Engineering lessons in tenth grade.
CONCLUSION
Implementing the ARCS (Attention, Relevance, Confidence, and Satisfaction) learning model can increase student activity and learning outcomes because this model has stages relevant to vocational students' characteristics. The results showed that the implementation of the ARCS model increased student learning activities by 74% and learning outcomes by 82%. In general, the learning process in the classroom took place constructively: the exploration of learning activities during learning could be documented and showed a very strong effect because students appeared to have positive motivation. A side effect of a very active class is a positive increase in student achievement, so the ARCS learning model can be applied gradually in vocational high schools to make courses more meaningful and constructive.
| 3,959.2 | 2021-12-20T00:00:00.000 | [ "Engineering", "Education" ] |
Detecting Nuclear Materials in Urban Environments Using Mobile Sensor Networks
Radiation detectors installed at major ports of entry are a key component of the overall strategy to protect countries from nuclear terrorism. While the goal of deploying these systems is to intercept special nuclear material as it enters the country, no detector system is foolproof. Mobile, distributed sensors have been proposed to detect nuclear materials in transit should portal monitors fail to prevent their entry in the first place. In large metropolitan areas, a mobile distributed sensor network could be deployed using vehicle platforms such as taxis, Ubers, and Lyfts, which are already connected to communications infrastructure. However, the performance and coverage that could be achieved using a network of sensors mounted on commercial passenger vehicles have not been established. Here, we evaluate how a mobile sensor network could perform in New York City using a combination of radiation transport and geographic information systems. The geographic information system is used in conjunction with OpenStreetMap data to isolate roads and construct a grid over the streets. Vehicle paths are built using pickup and drop-off data from Uber and from the New York State Department of Transportation. The results show that the time to first detection increases with source velocity, decreases with the number of mobile detectors, and reaches a plateau that depends on the strength of the source.
Introduction
A major concern with the deployment of nuclear power is the potential diversion of nuclear materials for acts of terrorism [1,2]. A fission weapon detonated in a dense population center such as New York City would produce significant casualties, as could a dirty bomb, and both would cause major disruption. Several radiation detection systems act as a first line of defense for the United States. These include radiation detectors at airports, shipyards, commercial ports of entry [3,4], and roadway border crossings [3][4][5]. However, if these systems fail, or if non-official points of entry are used, special nuclear material could be smuggled into the country. Once inside, highways and streets can be used to transport the material to nearly any destination in the United States.
The ease with which passengers and cargo can be moved across the United States presents a unique problem to protecting a city from nuclear terrorism. Distributed detectors could be used here [6,7]. One option is to maintain a stationary detection grid with placement of detectors along entry points to the city. However, smart placement of those systems would be nontrivial [8,9], and although cities like Manhattan have a limited number of access routes such as bridges and tunnels, not all large cities have geographical features that simplify optimal deployment of detector systems.
Another approach to securing a city from nuclear terrorism would be a mobile radiation detection network [10]. The effectiveness of a mobile detector fleet for locating a stationary source has been investigated, where detector vehicles followed fixed or random paths [11]. Advanced machine-learning algorithms have been shown to improve source characterization with multiple sources using mobile detectors [12]. Other studies [13] have focused on detecting mobile sources carried by individuals using stationary detectors, and suggested that it might be possible to use mobile detectors attached to police patrol cars as well. The use of distributed sensor networks to detect stationary sources at highly populated events, such as large sporting events, was investigated in [14]. Taxis, limousines, and ride-share services could provide an ideal platform for detector systems as these vehicles already possess power and communications infrastructure and are ubiquitous in most U.S. cities. However, the effect of source strength, route unpredictability, traffic speed, and detector density remain poorly characterized. In the present work, we evaluate the effect of these variables on a system of mobile sensors mounted on Uber vehicles, to detect a moving radioactive source in Manhattan, NY. The routes taken by the mobile detectors are estimated using historic Uber trip data combined with a route-finding algorithm. Geospatial data on buildings in Manhattan are combined with a simple Green's function to model radiation transport. The effectiveness of the mobile sensor network is shown as a function of source strength, speed, and the number of detector vehicles.
Methods
We consider a mobile radioactive source and mobile radiation detectors mounted on Uber vehicles moving through streets in Manhattan, NY. Building geometries in shapefile format were obtained from OpenStreetMap [15]. A limited set of Uber pickup and dropoff zones, and time stamps, are available through the New York Taxi and Limousine Commission [16]. Locations within pickup and drop-off zones were randomly sampled and routes were computed by combining the pickup and drop-off locations and time stamps with the route-finding algorithm provided by the pyroute3 library [17]. The source routes were also determined using pyroute3 and by randomly choosing an origin near the South Manhattan coast and Madison Square Gardens as a destination. The source and mobile detector routes were discretized into equally spaced time indexes with a ∆t of 2 s. The detector routes moved at a constant vehicle speed, which was computed using the total route length and duration. The source route incorporated a 10% random stop chance for each segment of the trip. The duration of the stops ranged from 1 to 10 s to simulate traffic conditions. Geometric data were stored and manipulated using the Shapely [18] python library.
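A minimal sketch of this discretisation step, using a toy route and an assumed trip duration rather than the original routing output, is shown below.

```python
# Sketch (assumed geometry, not the original routing output) of discretising a
# route into equally spaced time steps of dt = 2 s at a constant vehicle speed,
# using Shapely as described above.
from shapely.geometry import LineString

def discretise_route(route: LineString, duration_s: float, dt: float = 2.0):
    """Return (time, point) pairs along the route assuming constant speed."""
    speed = route.length / duration_s           # length units per second
    samples = []
    t = 0.0
    while t <= duration_s:
        samples.append((t, route.interpolate(speed * t)))
        t += dt
    return samples

# Toy route in metres (placeholder coordinates):
route = LineString([(0, 0), (300, 0), (300, 400)])
for t, p in discretise_route(route, duration_s=140.0)[:3]:
    print(t, p.x, p.y)
```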
Radiation transport and detection. The simulated radioactive source was Co-60 with a strength of 0.1 and 0.5 Ci, emitting 1.17 and 1.33 MeV gammas. Cobalt-60 was chosen because it is a common nuclear material of concern for dirty bombs [19][20][21]. In the simulations, the source is shielded by 10 cm of lead. Additional shielding from the delivery vehicle itself was assumed to be 0.66 m of air and 1 cm of steel. The gamma flux from the source at a given distance was approximated using a point-source Green's function, which accounts for both the inverse square law and attenuation:

F(r) = S · D_E · D_A / (4πr²) · exp(−Σ_m μ_m r_m)    (1)

Here, F(r) {counts/s} is the count rate from the source at distance r {m}, m indexes the material the radiation is passing through, μ_m {m⁻¹} is the linear attenuation coefficient for gammas in material m, r_m is the distance the radiation moves through material m, S {counts/s} is the strength of the source after being attenuated by the shielding, D_E is the detector efficiency, and D_A is the detector area. The detector is assumed to be a scintillation detector recording gross counts. Gross counts are also used for the background rate.
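The following sketch implements Equation (1) as written above; the attenuation coefficient, detector efficiency, and detector area in the example call are placeholder values, not those used in the study.

```python
# Sketch of Equation (1): attenuated point-source count rate at a detector.
import math

def count_rate(r_total, layers, S, det_eff, det_area):
    """
    r_total  : source-detector distance in m
    layers   : list of (mu, thickness) pairs, mu in 1/m, thickness in m
    S        : source strength after shielding, counts/s
    det_eff  : detector efficiency (0-1)
    det_area : detector area, m^2
    """
    attenuation = math.exp(-sum(mu * r for mu, r in layers))
    return S * det_eff * det_area * attenuation / (4.0 * math.pi * r_total**2)

# Example: 10 m stand-off through air only; mu_air ~ 0.007 1/m for MeV gammas
# is a rough placeholder, as are the source and detector parameters.
print(count_rate(10.0, [(0.007, 10.0)], S=1.0e7, det_eff=0.5, det_area=0.1))
```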
The maximum range at which the source can be detected is assumed to be the distance at which its signal becomes indistinguishable from the background gamma count rate for Manhattan, ~83 counts/s, given by RadNet [22]. The ranges at which the source strengths dropped to background were determined to be 24.43 m for the 0.5 Ci source and 5.12 m for the 0.1 Ci source. In this work, a detection was recorded when the count rate at a detector exceeded the background count rate by three standard deviations. Counts were integrated by taking the count rate detected at a time step and multiplying it by the length of the time step (2 s). The probability of a false alarm for such a detection threshold is less than 0.5%. The standard deviation was calculated assuming the background radiation follows a Poisson distribution, for which the standard deviation equals the square root of the mean. The background count rate was assumed to be constant throughout the city. This is not fully realistic, as background rates vary both spatially and temporally in urban environments [23,24]; however, a fleet of mobile detectors could map out a spatial and temporal background that could be used in future research [25].
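A sketch of this detection-range logic, with an idealised un-attenuated source term standing in for the full Equation (1) and placeholder numerical inputs, is given below.

```python
# Sketch of the detection-range logic described above: the maximum range is the
# distance at which the integrated excess counts drop to 3 standard deviations
# of the Poisson background over one 2-s time step.
import math

BACKGROUND = 83.0      # counts/s (RadNet value quoted in the text)
DT = 2.0               # integration time per time step, s

def detectable(counts_per_s, background=BACKGROUND, dt=DT, n_sigma=3.0):
    """True if the integrated excess counts exceed n_sigma * sqrt(background counts)."""
    excess = counts_per_s * dt
    sigma = math.sqrt(background * dt)
    return excess > n_sigma * sigma

def max_detection_range(rate_at, r_max=100.0, step=0.01):
    """Largest distance (m) at which rate_at(r) is still detectable."""
    r = step
    while r < r_max and detectable(rate_at(r)):
        r += step
    return r - step

# Example with a hypothetical 1/r^2 source term (no attenuation):
print(max_detection_range(lambda r: 5.0e3 / (4 * math.pi * r**2)))
```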
Source route generation. Source routes were generated by randomly sampling points along the southern tip of Manhattan. For each sampled point, a route was built between that starting point and the destination of Madison Square Gardens. In total, routes from 100 randomly chosen starting points were generated.
Detector route data. Data for Uber pickup and drop-off zones for New York City for 2016-2020 were taken from the New York Taxi and Limousine Commission [16]. These data provide~70,000 pickup and drop-off zone pairs per day, with~20 h of coverage. A sample of these data for 13 December 2019, from 12:00 to 12:20 p.m., was chosen at random, and provided 3500 pickup and drop-off zone pairs. The route-finding algorithm provided by the pyroute3 library [17] was used with these to determine the shortest route between the two locations, and this was assumed to be the one used by the simulated mobile detectors.
Source and detector routes. Once the source and detector routes were constructed and broken into timestamped points, these points were used to create a Sort-Tile-Recursive tree (STRtree) using the Shapely python library. The STRtree allows quick querying of spatial relationships between the points. For each point in the source path, the STRtree is queried to determine which detectors are in range of the source at the given timestamp. This range is the maximum distance at which the source could be detected given the attenuation from Equation (1); again, the detection limit was set to three standard deviations above background. For each detector in range, the distance from the source to the detector was calculated and the number of counts per second measured by the detector was recorded. Buildings intersecting the path between source and detector were assumed to absorb all radiation. Radiation transport through traffic other than the source and detector vehicles was ignored.
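The snippet below sketches such an STRtree range query; it assumes Shapely 2.x (where query returns integer indices) and uses placeholder detector positions.

```python
# Sketch of the STRtree range query described above (assumes Shapely >= 2.0).
from shapely.geometry import Point
from shapely.strtree import STRtree

# Placeholder detector positions at one time step:
detector_points = [Point(10, 5), Point(40, 80), Point(200, 150)]
tree = STRtree(detector_points)

source = Point(12, 7)
detection_range = 24.43        # m, range quoted in the text for the 0.5 Ci source

# Query with a buffer around the source, then filter by true Euclidean distance,
# since the tree query is based on bounding boxes.
candidate_idx = tree.query(source.buffer(detection_range))
in_range = [detector_points[i] for i in candidate_idx
            if source.distance(detector_points[i]) <= detection_range]
print(len(in_range))   # detectors close enough to register counts
```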
Source velocities. Three speeds were chosen to represent the movement of the source relative to data available from the NYC Department of Transportation for the average speed of bus traffic (3.1 m/s) in Manhattan [26]: 2 m/s was used to represent a vehicle source moving through heavy traffic; 10 m/s was chosen to represent fast moving traffic; and 15 m/s was chosen to represent a motorcycle moving through city streets.
Results
Considerable work has been done to flag radioactive materials as they pass through ports of entry, or major transit points. Finding mobile nuclear materials in an urban environment is a much more difficult problem and requires non-traditional deployment of detectors [27,28]. Mobile detectors can play an important role here in decreasing the risks posed by radiological sources by providing real-time radiation monitoring. Figure 1 shows routes generated for 3 December 2018 from 3:00 p.m. until 3:20 p.m. While a total of 3500 Uber journeys were made in that period, the number of routes sampled for performing the simulation was varied to evaluate the detector network's performance with increasing numbers of detector vehicles. Figure 1 shows Uber tracks for 100 (left) and 500 (right) sampled routes. The solid-colored lines represent the Uber tracks, and the red dotted line is one of the sampled source paths. Buildings are shown as solid black objects, and the white space represents roads, alleyways, parks, parking lots, and other space unoccupied by a building. Although Figure 1 shows that increasing the number of Uber routes sampled increases the cumulative coverage of the city this does not guarantee that this source and a given detector are coincident. Table 1 summarizes results from 12 simulated cases with source speeds that varied from 2 to 15 m/s, strengths of either 0.1 or 0.5 Ci, and with either 200 or 400 mobile detectors deployed. In each case, the number of times a source was detected during its transit was recorded. The results in the table show that by increasing the speed, the detection rate decreases. Additionally, as the detector count increased, the number of detections increased. Both observations are in line with the expected outcome. The left image in Figure 2 shows the heat map for Case 2 (10 m/s) for a specific source route. A small number of counts is detected in the region of the last turn. The low detection in these three cases stems from a weaker source, resulting in a smaller detection radius, as well as low sensor density. Figures 2-4 show the detection locations of a sample source route moving through 200 and 400 detectors.
These figures show that, for a given route, the behavior of detection is not consistent with the larger sample of the 100 source routes, and is the reasoning for that increased sample size. The discussion of the figures highlights the behavior associated with one specific source route, not the collection of source routes. Figure 2 shows the count rate that a fleet of 200 detectors following historic Uber routes, during 3 December 2018 from 3:00 pm until 3:20 pm, would measure from a sample 0.1 and 0.5 Ci source moving at 10 m/s as a function of position (Table 1 cases 2 and 5). Count rates shown are summed over all detectors in range and represent a count rate above background. The mobile detectors were only able to detect the signal of the 0.1 Ci source at one location. By contrast, the 0.5 Ci source was detected at three locations along the route, which comes from the increase in detection radius due to the higher source strength. Figure 3 shows the count rate for 400 detectors and a source speed of 2 m/s (Table 1 Cases 7 and 10). These results again show that higher source strength increases the number of detections with three points of detection for the stronger source. However, the locations at which these occur are different from those found in Figure 2, because the speed of the source causes it to be detected by a different set of mobile detectors. Figure 4 shows the count rate for 400 detectors and a 10 m/s source speed (Table 1 Cases 8 and 11). These results again show that increased source strength causes increased detections. These two cases can be compared directly to the cases shown in Figure 2. Figure 4 shows that increasing the number of detectors increased the number of detection locations from three to four. The reason is that the higher speed results in the source passing by more mobile detectors and registering more detections as a result.
Cases 4, 5, and 6 were evaluated for 200 mobile sensors and a 0.5 Ci source, with the source speed ranging from 2 to 15 m/s. Here we found a situation similar to Cases 1, 2, and 3: the 2 m/s and 15 m/s cases (4 and 6) showed no detection of the source. However, there is an improvement in the detection between Cases 2 and 5 due to the factor-of-five increase in source strength, from 0.1 to 0.5 Ci. The integrated count value in Case 5 is 2.34 × 10^5 counts, which is more than a factor of five larger than Case 2 (3.27 × 10^4 counts). This is because a stronger source increases the minimum detection radius, resulting in more detectors being in range to measure a signal.
In Cases 7, 8, and 9, 400 detectors were simulated with a 0.1 Ci source with the source speed varying from 2 to 15 m/s. For this source route, the 10 m/s source velocity has the highest detection metrics in most scenarios, but this trend does not hold when all sampled routes are considered. As seen in Table 1, increasing source speed decreases the number of total detections across all source routes. A faster speed means the amount of time required to reach the destination is reduced, and therefore the amount of time available for detections is also reduced. The reason for this discrepancy is that the locations of detectors and sources are constantly changing; the coincidence of these locations therefore drives the detection rates for an individual source route. Figure 5 shows the average time of first detection for a source moving at 10 m/s for a varying number of mobile detectors. For the 0.5 Ci source, this figure shows that increasing the number of detectors decreases the average time of first detection, but only to a point. The initial portion of each source trip moves through a region where ride shares have a low density. The marginal gain beyond 1000 detectors likely reflects the source being detected very quickly once it enters a region of high ride-share activity, with only marginal gains from subsequent increases in detector number. For the 0.1 Ci source, increasing the number of detectors again decreases the time of first detection with the same basic profile as seen in the 0.5 Ci case. For the weaker source strength, the reduction in average detection time appears to level out around 1700 detectors. However, due to the lower source strength, the detection time remains higher than for the 0.5 Ci case over the 3400 routes available for this work.
There are several limitations to the results presented here. The mobile detectors simulated here operate independently of one another. This means that the detection probability rests solely with the individual sensors, and it is not possible to integrate signals that operate near background to get a better indication of a mobile source. The effect of attenuation from traffic and buildings was also not considered: the former is assumed to play a minor role, and the latter were treated as radiologically opaque. Buildings (especially at their corners) would offer less attenuation than assumed, and traffic more. High traffic would increase the effective attenuation coefficient by potentially placing more vehicles in the way between the source and the detector. In both cases, accurate models for attenuation need to be developed. Another limitation of this work is the route-finding software. Due to the resolution of the software, the placement of the vehicles at each time step could overlap (narrow streets with two lanes may not resolve cars in each lane, but rather place them in the center of the two lanes). The effect of this limitation would be to add uncertainty to the distance between two vehicles, but only if the vehicles were on a narrow street. Additionally, while this work used realistic data from ride shares in New York City, only pickup and drop-off zones were provided with the ride-share data. Exact locations needed to be selected within those zones, resulting in recreated routes similar to the original routes, but likely not exactly the same. This limitation was deemed acceptable for this work because there is likely an element of randomness within ride-share travel from day to day.
This work does not explore the economic cost of outfitting the ride share vehicles with detectors of the design indicated in this work. Additionally, economic analysis was not done to understand the relationship between money spent on detectors and the likelihood of detection. Future work should explore these areas to understand the monetary costs associated with this method of detection.
This work highlights the effectiveness of a mobile detector system, but a stationary system of detectors could also be used to detect radiological threats. Of interest here would be the detector density compared to the effectiveness of the detectors and the costs associated with that density. Additional work could also be done to show the benefits of coupling the two systems. The economics of the two systems, or the couple system, would be valuable information for determining detection strategies.
Conclusions
This work shows that mobile sensors on vehicle platforms without explicitly designated routes, such as ride-share vehicles, could be used to locate radioactive sources in a city. The results show that increasing the number of detectors decreases the time of first detection. However, the benefit of adding detectors diminishes after about 1000 detectors for a 0.5 Ci source and after about 1700 detectors for a 0.1 Ci one. This asymptotic behavior is likely the result of the ride shares not entering the starting locations of the source routes, which may be because the coast of the city is a less well-trafficked area.
Further work should investigate detection probabilities at different times of day to account for varying levels of traffic, which would impact both vehicle speed and vehicle density. Higher-fidelity radiation transport models should be investigated to assess their impact on the results. While strong gamma-emitting 0.5 and 0.1 Ci Co-60 sources were used in this work, realistic situations could involve much weaker sources, as well as beta and alpha emitters such as Po-210 and Sr-90, which would be significantly more difficult to detect. Our method can be extended in a simple way, however, to establish whether detection of these types of emitters is feasible. Incorporating stationary sensors, and detectors of varying types mounted on a range of portable platforms such as unmanned aerial vehicles and hand-held devices, could dramatically improve the sensor network's effectiveness and can be readily implemented into our existing analysis tools [29,30].
| 5,877.2 | 2021-03-01T00:00:00.000 | [ "Computer Science" ] |
Model-Based Optimisation and Control Strategy for the Primary Drying Phase of a Lyophilisation Process
The standard operation of a batch freeze-dryer is protocol driven. All freeze-drying phases (i.e., freezing, primary and secondary drying) are programmed sequentially at fixed time points, and within each phase the critical process parameters (CPPs) are typically kept constant or linearly interpolated between two setpoints. This way of operating batch freeze-dryers is shown to be time consuming and inefficient. A model-based optimisation and real-time control strategy that includes model output uncertainty could help accelerate the primary drying phase while controlling the risk of failure of the critical quality attributes (CQAs). In each iteration of the real-time control strategy, a design space is computed to select an optimal set of CPPs. The aim of the control strategy is to avoid product structure loss, which occurs when the sublimation interface temperature (Ti) exceeds the collapse temperature (Tc), as commonly happens during unexpected disturbances, while preventing the choked-flow conditions that lead to a loss of pressure control. The proposed methodology was experimentally verified when the chamber pressure and shelf fluid system were intentionally subjected to moderate process disturbances. Moreover, the end of the primary drying phase was predicted using both uncertainty analysis and a comparative pressure measurement technique. Both the prediction of Ti and of the end of primary drying were in agreement with the experimental data. Hence, it was confirmed that the proposed real-time control strategy is capable of mitigating the effect of moderate disturbances during batch freeze-drying.
Introduction
Pharmaceutical freeze-drying or lyophilisation is a dehydration process mainly used for stabilizing parenteral therapeutic agents contained in aqueous solutions. By removing most of the water, the shelf life of the product is prolonged significantly, because water drives many destabilization pathways. Since freeze-drying is a low-temperature process, it is a very popular processing technique for heat-labile biological drug substances. Therefore, many of the biopharmaceuticals approved by the FDA and EMA are stabilized by lyophilisation [1]. However, freeze-drying is a long and costly process that requires a lot of aseptic floor space [2].
The freeze-drying process consists of multiple consecutive phases. After loading the vials, the shelf temperature is lowered gradually to freeze the liquid in the vials. Next, the chamber pressure (Pc) is decreased (70 to 1 Pa) to start the primary drying phase, enabling the sublimation of all the ice and the formation of a porous network. There are, however, some constraints in this phase. First, the shelf temperature should be well balanced so that it supplies heat for sublimation without exceeding a critical product temperature (Tc). Tc is slightly higher than the glass transition temperature of the maximally freeze-concentrated amorphous solutes (Tg′) or the eutectic temperature (Te) in the case of a crystalline system, and can be determined with a freeze-drying microscope. Surpassing Tc would induce a loss of structure in the dried layer [3,4]. The second constraint is the choked-flow condition (ṁ sub,chok). When the sublimation rate exceeds this limit, pressure control of the freeze-dryer is lost. During this phenomenon, the vapour flow is too high for the dimensions of the limiting channels (vial neck or condenser duct) of the freeze-dryer system, resulting in a compression of the gas and an undesirable pressure build-up at the ice sublimation interface [4,5]. The loss of pressure control also means a loss of control over the heat transfer, which could in turn lead to collapse when Tc is surpassed. The last freeze-drying phase is secondary drying. It is initiated when all ice has been removed, by increasing the shelf temperature (Ts) to start desorption of the residual bound water [6,7].
Nowadays, the typical way of operating a batch freeze-dryer is based on fixed manufacturing protocols. This means that all freeze-drying phases are programmed sequentially based on predefined timings while maintaining, or at best linearly varying, the critical process parameters (CPPs) (i.e., P c and T s ) between the defined setpoints of the protocol. The reason is that many of the industrially applied freeze-drying protocols were originally determined based on trial-and-error rather than mechanistic knowledge. It also means that unexpected disturbances during processing are not compensated for by adjusting the CPPs. Failure of the freeze-drying process, sometimes resulting in a costly total loss of the batch, is therefore not uncommon. Indeed, for commercial manufacturing all vials are still visually checked for failures, while only a representative sample is sent for offline quality control to evaluate its critical quality attributes (CQAs). Hence, there exists ample room for improvement of the economic performance of most freeze-drying operations.
Supervisory control and optimisation of the freeze-drying process could avert product failures. Special attention should go to the primary drying phase as it is typically the longest phase of the freeze-drying process. Hence, optimization of primary drying can significantly reduce the total processing time and thus also the operating costs, while improving product quality. A solution exists in dynamically adjusting the CPPs of the primary drying process, i.e., chamber pressure (P c ) and shelf temperature (T s ), in order to compensate for process disturbances as well as the continuously changing physical state of the product, i.e., the increasing dried layer thickness [4,8].
However, in order to establish such supervisory strategies, adequate measurements are required in real time. In the case of freeze-drying, the temperature at the sublimation front (T i ) is the most critical parameter, as it determines the sublimation rate and should not surpass the critical temperature. Because the sublimation front moves from the top to the bottom of the product during primary drying, it is not possible to measure it with a fixed temperature probe. Moreover, the presence of probes alters the thermodynamics of the measured vial, might damage the product and is not representative of the entire batch. An alternative is to use mathematical models describing the physical mechanisms of heat and mass transfer [9][10][11][12][13]. To obtain information on T i , such mechanistic models can be applied as a soft sensor if they are fed with real-time information of the system (i.e., P c , T s , dried layer thickness (L dried )). Moreover, models make it possible to drive the process continuously towards its performance boundaries with the ability to control the risk of failures [14].
Mechanistic models have intrinsic model output uncertainty due to model simplifications and assumptions. In addition, there is natural variation of certain model parameters, causing additional uncertainties [15]. The novelty of this work is to carefully characterize all these variabilities and to implement them in real time in a model-based optimisation and control strategy, yielding robust model predictions. By continuously performing model output uncertainty analysis, taking into account different sources of uncertainty, a dynamic design space is constructed during the primary drying phase of a lab-scale freeze-dryer. This design space makes it possible to optimise the primary drying process in real time while taking into account process constraints. Hereto both the shelf temperature and the chamber pressure are automatically adjusted. Moreover, by including systematic checks on the manipulated variables, the proposed strategy ensures that the process is under control even in the case of unexpected disturbances.
Supervisory Control of the Freeze-Dryer
The optimisation and control strategy was developed on an Amsco-Finn Aqua GT4 freeze-dryer (GEA, Köln, Germany) which was retrofitted with PR-4114 programmable logic controllers (PLC) (PR electronics, Rønde, Denmark) and a Pro-Face AGP3000 (Schneider Electric, Rueil-Malmaison, France) Human-Machine Interface with Modbus TCP/IP communication capabilities. To keep the condenser temperature, shelf temperature and chamber pressure at their predetermined setpoints, multiple ON-OFF process control loops were programmed on the PLC. Hereto the freeze-dryer is supplemented with resistance temperature detectors in the condenser and shelf fluid system, and two different pressure gauges in the drying chamber. To measure the product temperature directly, three additional thin gauge type-K thermocouples (Conrad Electronic, Hirschau, Germany) were available in the drying chamber. To measure the chamber pressure, both a Pirani and a capacitance pressure gauge are present, yielding the opportunity to apply a comparative pressure measurement methodology [16]. Due to the difference in measurement principle of the two pressure gauges, it is possible to monitor the gas composition in the freeze-drying chamber. If the gas composition shifts from a predominantly water vapour environment, i.e., during sublimation, to an environment mainly composed of nitrogen gas, i.e., at the end of primary drying, then the pressure signal of the Pirani gauge will approximate the value obtained with the capacitance sensor [17]. Note however that the chamber pressure (P c ) is always controlled by the signal coming from the capacitance gauge, since it is the most accurate sensor. Unless mentioned otherwise, the machine operating limits of the freeze-dryer were set to −40 °C and 50 °C for the shelf fluid system with a maximum cooling rate of 0.8 °C/min and a maximum heating rate of 1 °C/min. The ON-OFF temperature controller of the cooling fluid was set to have a deadband of 1.5 °C. The vacuum pump could operate down to 7 Pa, but for this study the lower limit was set at 10 Pa with a deadband of 0.4 Pa.
For the purpose of supervisory control and data acquisition (SCADA), an additional remote computer was installed with LabVIEW 2017 including the NI-DSC module (National Instruments, Austin, USA) to communicate with the freeze-dryer PLC using Modbus TCP/IP. A state-machine based application was developed in the LabVIEW software to drive the freeze-dryer through all sequential freeze-drying steps, i.e., freezing, condenser preparation, primary drying initialization, primary drying with optimisation and control, secondary drying and venting/stoppering. Please note that the proposed strategy used for primary drying was implemented in MATLAB 2018a using the Parallel Computing Toolbox (Mathworks, Natick, USA), which was integrated in the LabVIEW application.
Primary Drying Model
A mechanistic model describing the primary drying phase of pharmaceutical vials in a traditional batch freeze-dryer is the core of the real-time optimisation and control strategy. This mechanistic model has been described thoroughly in the literature [8]. The model consists of a system of equations which have to be solved simultaneously (Equations (1)-(3)). It describes the gradual increase, from the top to the vial bottom, of a planar dry layer on top of the frozen product. It is based on a mass and heat balance assuming that all energy is used for the sublimation process, hence presuming steady state conditions. The model is characterised by five parameters and three variables. Here, A Pi , B Pi , C Pi and D Pi are coefficients that describe the relationship between the ice temperature and its partial vapour pressure (P i ), and a and b are constants to convert to SI units [18,20]. The latter is based on the vial neck diameter r n and the smallest diameter of the butterfly valve on the duct between the condenser and drying chamber r d .
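Equations (1)-(3) are not reproduced above, but the steady-state balance they express can be sketched as follows. This is a minimal illustration for a single vial, assuming a commonly used vapour-pressure correlation for ice and round placeholder values for the latent heat of sublimation, K v , R p and the vial radii; it is not the paper's exact parameterisation (the A Pi ... D Pi coefficients are not used here).

```python
# Minimal sketch of the pseudo-steady-state primary drying balance (single vial).
# The ice vapour-pressure correlation and all numeric values are illustrative
# assumptions, not the exact coefficients (A_Pi ... D_Pi) used in the paper.
import numpy as np
from scipy.optimize import brentq

DH_SUB = 2.84e6  # latent heat of ice sublimation [J/kg] (assumed literature value)

def p_ice(T):
    """Approximate equilibrium vapour pressure of ice [Pa] at temperature T [K]."""
    return 3.6e12 * np.exp(-6145.0 / T)

def solve_primary_drying(Ts, Pc, Kv, Rp, r_i, r_o):
    """Solve for the sublimation-front temperature T_i [K] and sublimation rate
    m_sub [kg/s], assuming all heat supplied by the shelf is used for sublimation."""
    A_v = np.pi * r_o**2   # vial bottom area facing the shelf [m^2]
    A_p = np.pi * r_i**2   # product (sublimation) area [m^2]

    def balance(Ti):
        q_in = Kv * A_v * (Ts - Ti)               # heat flow from the shelf [W]
        m_sub = A_p * (p_ice(Ti) - Pc) / Rp       # vapour mass flow [kg/s]
        return q_in - DH_SUB * m_sub              # zero at steady state

    Ti = brentq(balance, 200.0, 273.0)            # root search in a physical range
    m_sub = A_p * (p_ice(Ti) - Pc) / Rp
    return Ti, m_sub

# Example with plausible mid-primary-drying values for a 10R vial
Ti, m_sub = solve_primary_drying(Ts=263.0, Pc=10.0, Kv=20.0, Rp=5e4,
                                 r_i=0.011, r_o=0.012)
print(f"T_i = {Ti:.1f} K, m_sub = {m_sub*3.6e6:.2f} g/h")
```

Solving this balance for every sampled parameter combination is what the uncertainty analysis described below repeats many times per control interval.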
Input Parameters and Variability Estimation
The vial inner radius r i and outer radius r o as well as the filling volume V fill are parameters of the mechanistic primary drying model that are intrinsically distributed. In this case, the parameter values and their variability were determined at the start of the process and kept fixed during the entire progression of the primary drying phase. In addition to their intrinsic variability, the heat transfer coefficient K v and the dry layer resistance R p also depend on the variability of other model parameters, since they are estimated from P c and L dried , respectively. The other three sources of uncertainty and their variability (T s , P c and L dried ) were updated regularly during optimisation and control.
Determination of the Heat Transfer Coefficient
The heat transfer coefficient K v describes the efficiency of the heat transfer from the technical fluid inside the shelves to the ice in the vial and is therefore dependent on the freeze-dryer design and vial type. In this work, K v was determined using a gravimetric method at five different pressure levels. To this end, forty-nine 10R vials (Schott AG, Mainz, Germany) filled with 3 mL of deionised water were weighed, stacked in a 7 × 7 hexagonal pattern and placed on the middle shelf of the freeze-dryer. The product temperature was monitored using four thin gauge type-K thermocouples (Conrad Electronic, Hirschau, Germany) which were randomly divided between the edge and centre vial populations. Next, the vials were equilibrated at 3 °C and frozen to −30 °C with a cooling rate of approximately 1 °C/min, which was maintained for 2 h. Then, a vacuum was pulled to the desired setpoints (7, 10, 15, 20 and 25 Pa), after which T s was gradually increased to −20 °C during a period of 10 min. After 6 h of primary drying, the sublimation process was abruptly interrupted by venting the vacuum chamber. All vials were stoppered immediately. Afterwards, the vials were reweighed in order to calculate the K v of every single vial according to Equation (4).
Here, m sub is the sublimated mass [kg], t 0 and t end are respectively the start and end time of primary drying [s] and T b is the product temperature at the bottom of the vial [K]. Please note that there exists a significant increase in radiative power towards vials at the edge of the stack, as they are not completely shielded from the door and walls of the freeze-dryer. This is why the vials were split into an edge group, i.e., all vials that do not have six direct neighbours, and a centre group, i.e., vials that are in direct contact with six neighbouring vials, for the K v determination. The mean K v as well as the relative standard deviation (RSD) were calculated for each group at the five different pressure levels. By using Equation (5), K v could be described as a function of the chamber pressure using the α [J/m²s·K], β [J/m²s·KPa] and γ [1/Pa] parameters. These parameters were determined for both groups of vials using a weighted nonlinear regression, with the inverse of the RSD as the weights. Finally, the pooled RSD over all pressure levels was computed as a degree of variation for the K v parameter.
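A short sketch of how a per-vial K v could be computed from the gravimetric data and how Equation (5) could be fitted against pressure is given below. The functional form K v = α + βP c /(1 + γP c ) is an assumption consistent with the quoted units of α, β and γ, the latent heat value is a standard literature figure, and the K v data points are purely illustrative.

```python
# Sketch of the gravimetric K_v estimation (Equation (4)) and the K_v(P_c) fit
# (Equation (5)). The form K_v = alpha + beta*Pc/(1 + gamma*Pc) is an assumption
# consistent with the quoted units; all data values below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

DH_SUB = 2.84e6  # latent heat of ice sublimation [J/kg] (assumed)

def kv_gravimetric(m_sub, t0, t_end, Ts, Tb, r_o):
    """K_v of a single vial from its sublimated mass [kg], the primary drying
    duration [s] and the mean shelf/product temperatures [K]."""
    A_v = np.pi * r_o**2
    return DH_SUB * m_sub / (A_v * (Ts - Tb) * (t_end - t0))

def kv_model(Pc, alpha, beta, gamma):
    """Empirical K_v(P_c) relation fitted separately for edge and centre vials."""
    return alpha + beta * Pc / (1.0 + gamma * Pc)

# One vial: 0.9 g of ice sublimated over 6 h with a 10 K driving force
print(kv_gravimetric(m_sub=0.9e-3, t0=0.0, t_end=6 * 3600,
                     Ts=253.15, Tb=243.15, r_o=0.012))

# Hypothetical mean K_v of the edge group at the five pressure levels, with the
# relative standard deviations used as (inverse) weights in the regression.
Pc = np.array([7.0, 10.0, 15.0, 20.0, 25.0])        # [Pa]
Kv_mean = np.array([18.0, 21.5, 25.0, 27.5, 29.0])  # [J/(m^2 s K)], illustrative
rsd = np.array([0.08, 0.07, 0.08, 0.09, 0.07])

popt, _ = curve_fit(kv_model, Pc, Kv_mean, p0=[5.0, 3.0, 0.05], sigma=rsd * Kv_mean)
print("alpha, beta, gamma =", popt)
print("pooled RSD =", np.sqrt(np.mean(rsd**2)))
```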
Determination of the Dry Layer Resistance
The dry layer resistance R p [m/s] is a measure of the vapour flow impedance resulting from the dried layer micro-structure. Since the dried layer thickness L dried [m] increases during primary drying, R p also changes over time. This relationship is expressed empirically in Equation (6). To determine these coefficients as well as the variability of R p , two identical freeze-drying runs were performed. During these experiments three centre vials were monitored with a thin gauge thermocouple at the bottom-centre of the vial. A stack of vials was loaded and treated as described in Section 2.3.1 but with 3 mL of a 3% m/V sucrose solution (Fagron, Nazareth, Belgium). The pressure was set to 10 Pa for the full length of the primary drying phase. By applying Equations (7) through (13), the R p and L dried profiles could be calculated. Please note that the partial vapour pressure of ice P i [Pa] is expressed as a function of the sublimation front temperature T i [K] by the empirical Equation (9). Next, T i was estimated from the product temperature at the bottom of the vial T b using a heat conduction model (Equation (10)). Finally, the progress of L dried was calculated from m sub (Equation (13)).
Here, A p is the product surface area [m 2 ].
Here, λ ice is the thermal conductivity of ice [W/mK]. Once the six R p and L dried curves were obtained, a non-linear regression was performed to calibrate the R p0 , A Rp and B Rp parameters of Equation (6). Please note that negative R p and temperature values at the end of primary drying, i.e., where the thermocouple value shifts upwards due to loss of contact with the ice, were omitted from the data set as these points are not representative. Afterwards, all R p data were segmented into 25 equidistant L dried bins and a standard deviation was calculated for each bin. All 25 R p standard deviations were finally pooled to obtain a degree of variation for the dried layer resistance parameter.
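The regression of Equation (6) could be sketched as below, assuming the form R p = R p0 + A Rp L dried /(1 + B Rp L dried ), which is consistent with the units reported in the results; the synthetic data merely stand in for the six experimental R p /L dried curves.

```python
# Sketch of the dry layer resistance regression (Equation (6)), assuming the form
# R_p = R_p0 + A_Rp*L_dried/(1 + B_Rp*L_dried); synthetic data replace the six
# experimental R_p/L_dried curves obtained from the monitored vials.
import numpy as np
from scipy.optimize import curve_fit

def rp_model(L_dried, Rp0, A_Rp, B_Rp):
    return Rp0 + A_Rp * L_dried / (1.0 + B_Rp * L_dried)

L = np.linspace(1e-4, 7e-3, 60)                       # dried layer thickness [m]
Rp_obs = rp_model(L, 1.5e4, 9.7e7, 970) \
         + np.random.default_rng(0).normal(0.0, 1e4, L.size)

popt, _ = curve_fit(rp_model, L, Rp_obs, p0=[1e4, 1e8, 1e3])
print("Rp0, A_Rp, B_Rp =", popt)

# Pool the scatter over 25 equidistant L_dried bins to obtain sigma_Rp
edges = np.linspace(L.min(), L.max(), 26)
stds = [Rp_obs[(L >= lo) & (L < hi)].std(ddof=1)
        for lo, hi in zip(edges[:-1], edges[1:])
        if ((L >= lo) & (L < hi)).sum() > 1]
print("pooled sigma_Rp =", np.sqrt(np.mean(np.square(stds))))
```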
Determination of Filling Volume Variability
The variability of the vial filling volume V fill [m³] was measured by determining the tare weight of 100 vials, sequentially filling them with 3 mL of deionised water using a Handystep pipettor (Brandtech, Essex, CT, USA) and reweighing them. Next, the V fill per vial was calculated by dividing the weight of the water by the density of water at 25 °C (i.e., 997 kg/m³). Lastly, the standard deviation of these volumes was calculated to yield a value for the V fill variation.
Determination of Vial Radii Variability
The variability of the outer radius r o [m] of a vial was estimated by measuring the outer diameter of 100 10R Schott vials with a 10 µm resolution caliper and subsequently calculating the standard deviation. Because the vial neck hampered an accurate measurement of the inner diameter, the same degree of variability was assumed for the inner radius r i [m].
Determination of the Critical Temperature
The critical temperature T c [K] is the maximum allowable temperature of the sublimation front. It should be noted that this characteristic temperature is highly dependent on the product formulation. When the temperature of the dry layer exceeds T c by only a small margin for a certain period of time, micro-collapses can occur which change the micro-structure of the dry layer as well as its vapour flow resistance. In the case of extensive product temperature excursions during primary drying, macro-collapse or eutectic melt can occur, which impairs the quality of the product. In order to determine this critical threshold, T c was investigated using a freeze-dry microscope (Linkam Scientific, Tadworth, UK). In this case, 5 µL of the formulation used in this study was loaded on the system and the sample was frozen down to −40 °C for 5 min. Next, a vacuum of 1 Pa was introduced and the sample was equilibrated at −36 °C for another 4 min, followed by small step increases of 0.5 °C up to −32.5 °C. Each step was maintained for two minutes. At the end of each step an image of the sample was taken to determine the exact location of the sublimation front. Based on the images, T c was determined as the highest temperature without any significant structural changes in the micro-structure of the dried layer.
Determination of the Pressure Decrease Curve
Since sublimation requires a deep vacuum, the chamber pressure should be lowered at the start of the primary drying phase using a vacuum pump. The initial pressure drop from atmospheric pressure until the start of ice sublimation was predefined, taking the machine limitations into account.
To obtain a robust estimate of this characteristic pressure decrease curve, ten replicate runs were performed between atmospheric pressure and the minimal achievable pressure (7 Pa). Above 130 Pa, the Pirani sensor value P c,Pir was used instead of the signal from the capacitance sensor P c,Cap , as this is the maximum limit of the latter. The slowest cumulative curve was chosen as the machine limit.
Freezing and Primary Drying Initialization Phase
The supervisory application programmed in LabVIEW starts with a freezing phase which is programmed according to a fixed protocol, i.e., a list of setpoints for the shelf temperature T s at fixed time points needs to be defined. The low-level temperature controller subsequently aims for a linear transition between those setpoint values. At the same time, the condenser is activated and cooled to −50 °C. An additional check was implemented at the last temperature setpoint in order to verify whether the desired T s was achieved and whether the condenser temperature was below −40 °C. If both conditions are satisfied, the application automatically continues to the initialization of the primary drying phase.
To start the primary drying initialization phase, the last T s setpoint from the preceding freezing phase was maintained and P c was decreased according to the preset pressure decrease curve (Section 2.3.6) with a time resolution of 10 s. At each time point the primary drying model was evaluated and checked for a positive ṁ sub , i.e., to determine whether ice sublimation had started. From the moment this was the case, the initialization phase was terminated and the model-based control strategy of the primary drying phase was initiated. As long as no sublimation was detected, T i was assumed to be equal to T s while L dried and ṁ sub were kept at zero.
Uncertainty Analysis
Through the application of uncertainty analysis (UA) to the primary drying model (Section 2.2), a design space was obtained for the primary drying phase at fixed intervals in time, hence the term dynamic design space. The methodology used for the UA is based on the work of Mortier et al. with some important changes [8]. These modifications were introduced to decrease the computational load of the proposed model-based strategy, which makes it possible to perform all calculations in real time on a desktop computer with a state-of-the-art processor.
The UA starts by defining a machine capability space. This space spans the multivariable combinations of the manipulated variables which can be achieved taking into account the previous operating setpoint. Please note that in this case the two manipulated variables of interest are P c and T s . Obviously, the size of this space depends on the limits of the shelf fluid system, the vacuum pump and the considered temporal resolution expressed as ∆t. This machine capability space is subsequently transformed into a grid, which considers only a limited number of combinations of P c and T s . To this end, a resolution of roughly 0.2 Pa and 0.5 K was chosen.
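A minimal sketch of how such a machine capability grid could be built from the previous setpoint, the shelf ramp-rate limits of Section 2.1 and the grid resolution mentioned above is shown below. The pressure ramp rate and the operating limits are assumed placeholders, as they are not all stated explicitly in the text.

```python
# Sketch of the machine capability space for one control interval dt, using the
# shelf ramp-rate limits from Section 2.1; the pressure ramp rate is an assumed
# placeholder. The space is discretised at roughly 0.2 Pa x 0.5 K.
import numpy as np

def machine_capability_grid(Pc_prev, Ts_prev, dt,
                            Ts_rate_up=1.0 / 60, Ts_rate_down=0.8 / 60,  # [K/s]
                            Pc_rate=0.05,                                # [Pa/s], assumed
                            Ts_lim=(233.15, 323.15), Pc_lim=(10.0, 25.0)):
    Ts_lo = max(Ts_prev - Ts_rate_down * dt, Ts_lim[0])
    Ts_hi = min(Ts_prev + Ts_rate_up * dt, Ts_lim[1])
    Pc_lo = max(Pc_prev - Pc_rate * dt, Pc_lim[0])
    Pc_hi = min(Pc_prev + Pc_rate * dt, Pc_lim[1])
    Pc_grid = np.arange(Pc_lo, Pc_hi + 1e-9, 0.2)
    Ts_grid = np.arange(Ts_lo, Ts_hi + 1e-9, 0.5)
    return np.meshgrid(Pc_grid, Ts_grid)   # candidate Pc/Ts setpoint combinations

Pc_mesh, Ts_mesh = machine_capability_grid(Pc_prev=12.0, Ts_prev=253.0, dt=60.0)
```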
Next, Monte Carlo simulations are performed to estimate the uncertainty of the primary drying model outputs. Accordingly, a model parameter space is constructed by sampling a fixed number of parameter combinations n sample within the predetermined variability of all model inputs (see Section 2.2). For r o , r i , K v , R p and V fill a normally distributed random sampling method is applied, using the (relative) standard deviation as a measure of variation. In contrast, the manipulated variables P c and T s were sampled uniformly within the machine capability space enlarged with their respective variations, using a Sobol sampling technique. The samples for the L dried variable are obtained through error propagation. This means that the predicted distribution of L dried at the previous sampled time point is reconstructed through a fit with the Pearson system using its first four moments, i.e., mean, variance, skewness and kurtosis, followed by random sampling from the fitted distribution type. The Pearson system includes seven distribution types and categorizes a dataset using these moments [21]. Note that no covariance is assumed between the eight input parameters. Hence, the samples were randomly combined to achieve an eight-dimensional model parameter space.
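The construction of this eight-dimensional parameter space could look roughly as follows. As a simplification, L dried is propagated here with a normal distribution matched to its first two moments rather than the four-moment Pearson-system fit used in the paper, and all nominal values, variabilities and machine-capability bounds are illustrative.

```python
# Sketch of the eight-dimensional Monte Carlo parameter space. L_dried is propagated
# here with a two-moment (normal) approximation instead of the four-moment Pearson
# fit used in the paper; all nominal values, variabilities and bounds are illustrative.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
n_sample = 10_000

# Normally distributed inputs: (nominal value, standard deviation)
normal_params = {
    "r_o":    (1.20e-2, 5.44e-5),
    "r_i":    (1.10e-2, 5.44e-5),
    "V_fill": (3.00e-6, 1.29e-8),
    "K_v":    (2.0e1,   2.0e1 * 0.0761),   # pooled RSD applied to a nominal K_v
    "R_p":    (5.0e4,   1.10e4),
}
samples = {k: rng.normal(mu, sd, n_sample) for k, (mu, sd) in normal_params.items()}

# Manipulated variables: uniform Sobol samples over the (enlarged) capability space
sobol = qmc.Sobol(d=2, scramble=True, seed=1)
u = sobol.random(n_sample)
samples["P_c"] = 10.0 + u[:, 0] * (14.0 - 10.0)      # [Pa]
samples["T_s"] = 250.0 + u[:, 1] * (256.0 - 250.0)   # [K]

# Error propagation of L_dried from the previous prediction point (simplified)
samples["L_dried"] = rng.normal(3.0e-3, 2.0e-4, n_sample)
```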
All points within this parameter space were subsequently used to solve the primary drying model for T i and L dried . Moreover, L tot was calculated by employing Equation (12) which was later compared to the resultant L dried to check for completion of the primary drying phase (L end [-]). Hence, a three-dimensional model output space composed of T i , L dried and L end is obtained as a solution to the eight-dimensional model parameter space.
In the next step, the model output space was sub-sampled using the machine capability grid of P c -T s combinations. This is achieved by selecting all solutions in the model output space that originate from P c -T s combinations lying in the enlarged area, defined by the P c and T s variation, around each machine capability grid point (cf. Figure 1). This resulted in a model output sub-population for each grid point in terms of T i , L dried and L end . Please note that P c and T s are distributed uniformly in the model parameter space; hence, a sub-sample based on P c and T s of that model parameter space yields comparable distributions for the other six sources of variation in the model [22]. This ensures that a representative sub-population is taken from the model output space for each grid point. For each grid point the sub-population in L dried was used to calculate the first four moments of its distribution to propagate the error on L dried . Next, the percentage of L end was computed and it was evaluated whether the end of drying was reached. Moreover, a risk of failure (RoF) space was calculated based on the distributed T i solutions for each of the considered grid points of the machine capability space. To do this, a Pearson distribution fit is made of each T i sub-population, using the first four moments, and the 1-RoF upper percentiles (i.e., the uncertainty degree) of these fitted distributions are determined to yield the RoF space. The RoF space, with a coarse resolution of 0.2 Pa and 0.5 K, gives the T i associated with a user-defined RoF for all P c -T s combinations. In order to further refine these results, a cubic surface response model was fitted through this three-dimensional data and re-evaluated with a finer resolution of 0.1 Pa and 0.1 K.
Ultimately, the refined grid was used to evaluate the primary drying model together with nominal input parameters to obtain an estimate of ṁ sub . Then the failure modes were assessed: those combinations with a ṁ sub higher than or equal to ṁ sub,chok , or a T i associated with a certain RoF above or equal to T c , are invalidated. Hence, a design space was constructed which is valid for that sampling instance.
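The per-grid-point risk-of-failure computation could be sketched as below. For brevity, an empirical percentile of each T i sub-population replaces the Pearson fit and the subsequent cubic surface-response refinement described above; the invalidation against T c and ṁ sub,chok then yields the design space.

```python
# Sketch of the risk-of-failure space: sub-sample the model outputs around each
# Pc/Ts grid point and take the (1 - RoF) upper percentile of T_i. An empirical
# percentile replaces the Pearson fit and surface-response refinement for brevity.
import numpy as np

def rof_space(Pc_mesh, Ts_mesh, Pc_samples, Ts_samples, Ti_samples,
              dPc=0.2, dTs=0.5, rof=0.001, min_samples=50):
    Ti_upper = np.full(Pc_mesh.shape, np.nan)
    for idx in np.ndindex(Pc_mesh.shape):
        mask = (np.abs(Pc_samples - Pc_mesh[idx]) <= dPc) & \
               (np.abs(Ts_samples - Ts_mesh[idx]) <= dTs)
        if mask.sum() >= min_samples:     # require a representative sub-population
            Ti_upper[idx] = np.percentile(Ti_samples[mask], 100.0 * (1.0 - rof))
    return Ti_upper

# Ti_samples would come from solving the primary drying model for every point of
# the sampled parameter space constructed above.
```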
Optimisation and Control of the Primary Drying Phase
Based on the dynamic design space constructed by the UA, it is possible to control and optimise the primary drying phase of a freeze-drying process while taking into account a defined and accepted risk of failure (RoF). To do so, a supervisory control strategy is implemented on top of the existing low-level control loops of the freeze-dryer, i.e., the ON-OFF controllers for T s and P c . Because the proposed strategy makes direct use of model output uncertainty, it can be referred to as a robust model-based strategy for the purpose of supervisory optimisation and control. In fact, the result of the UA on the primary drying model is used as the cost function, which is optimised towards the maximization of the sublimation rate ṁ sub while not exceeding a predetermined RoF as inferred from the T i and ṁ sub predictions.
The presented strategy is based on the receding horizon principle, i.e., optimal process conditions in the near future are calculated using information of the past and a cost function. Depending on the chosen sampling interval ∆t, the prediction horizon is divided into multiple prediction points. For each of these points in time a machine capability space is constructed wherein the UA is performed (Section 2.5.1). Please note that this machine capability space depends on the selection of the optimal P c -T s combination at the previous prediction point. The result is a dynamic design space for the considered horizon. Next, the set of manipulated variables, i.e., the combination of P c and T s , associated with the highest ṁ sub in this design space is selected as the optimal setting for the considered prediction point.
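The setpoint selection at each prediction point then reduces to masking the grid with the constraints and taking the admissible combination with the largest nominal ṁ sub , roughly as follows; the choked-flow limit is a placeholder and the fallback simply picks the lowest shelf temperature as a crude stand-in for the "most conservative setting".

```python
# Sketch of the setpoint selection: mask grid points violating the constraints and
# pick the admissible combination with the highest nominal sublimation rate. The
# choked-flow limit is a placeholder; the fallback (lowest Ts) is a crude stand-in
# for the "most conservative setting" described in the text.
import numpy as np

def select_setpoint(Pc_mesh, Ts_mesh, Ti_upper, m_sub_nom,
                    T_c=238.9, m_sub_chok=1.0e-6):
    valid = (Ti_upper < T_c) & (m_sub_nom < m_sub_chok)
    if valid.any():
        masked = np.where(valid, m_sub_nom, -np.inf)
        idx = np.unravel_index(np.argmax(masked), Pc_mesh.shape)
    else:
        idx = np.unravel_index(np.argmin(Ts_mesh), Ts_mesh.shape)
    return Pc_mesh[idx], Ts_mesh[idx]
```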
For each optimisation in the receding horizon, the variability of the manipulated variables P c and T s as well as the variability in L dried has to be included in the UA. The former are estimated by taking into account the absolute difference between the historically measured signals and their applied setpoints during the previous prediction horizon, whereas the variation on L dried is estimated using the error propagation methodology described in Section 2.5.1. It is important to note that the distribution of L dried is propagated to the next prediction point via the first four moments of that distribution, combined with random sampling from the Pearson system. Moreover, the predictions of T i and L dried are corrected after each horizon using the actually executed P c and T s in the primary drying model. Finally, the contributions of the other five sources of variability in the primary drying model were all determined upfront (Section 2.3) and these values are therefore fixed for intermittent predictions.
Besides the optimisation of ṁ sub through the use of the dynamic design space, the proposed supervisory strategy is also capable of handling unexpected disturbances in its manipulated variables, i.e., T s and P c . In practice it is possible that pressure and temperature excursions occur even though a control system is in place, for example when a vacuum leak is present, when the vacuum valve is temporarily sticking, or when uncontrolled disturbances in the fluid system impede the shelf temperature controller. In order to cope with such events, a system check is performed before constructing the machine capability space, i.e., it is checked whether the low-level controllers of the manipulated variables succeeded in attaining the setpoint at the start of each new prediction horizon. In the case of a disturbance larger than 1.5 °C or 0.4 Pa, the respective variable is restricted in the machine capability space and therefore fixed to its current measured value. In such a case, the process is only optimised in the direction of the non-restricted variable. Please note that if this restriction would lead to a design space without options within the considered RoF level, the most conservative setting of the machine capability space is automatically selected (i.e., the lowest P c or T s ). Consistently choosing this conservative option will obviously decrease ṁ sub , and thus also T i , and will therefore result in a valid design space over time. In this work, the lowest authorized pressure was chosen to be 8 Pa in the case of a fluid system disturbance. Once the disturbance has passed, normal operating conditions are resumed.
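The system check on the manipulated variables can be expressed compactly as below; the tolerances follow the deadbands and the 8 Pa floor stated above, while the return convention (None meaning "free to optimise") is an implementation choice.

```python
# Sketch of the system check on the manipulated variables before building the next
# machine capability space. Tolerances follow the 1.5 degC / 0.4 Pa deadbands and the
# 8 Pa floor mentioned above; returning None marks a variable as free to optimise.
def check_disturbance(Ts_meas, Ts_set, Pc_meas, Pc_set,
                      Ts_tol=1.5, Pc_tol=0.4, Pc_floor_on_Ts_fault=8.0):
    restrict_Ts = abs(Ts_meas - Ts_set) > Ts_tol
    restrict_Pc = abs(Pc_meas - Pc_set) > Pc_tol
    Ts_fixed = Ts_meas if restrict_Ts else None
    Pc_fixed = Pc_meas if restrict_Pc else None
    Pc_min = Pc_floor_on_Ts_fault if restrict_Ts else None
    return Ts_fixed, Pc_fixed, Pc_min

# Example: the shelf fluid system is stuck, so only Pc is left free (down to 8 Pa)
print(check_disturbance(Ts_meas=255.65, Ts_set=252.15, Pc_meas=10.1, Pc_set=10.0))
```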
It is important to remember that two groups of vials were identified earlier based on their distinct heat transfer properties (Section 2.3.1). This explains why, in a first stage, the proposed supervisory strategy is only targeted at the edge vials: these have higher K v values, resulting in faster drying associated with higher ṁ sub and T i values which should be controlled. Only when the whole edge vial population is dry, as determined by the uncertainty analysis, is the target of process optimisation shifted towards the centre vial population.
Primary Drying Endpoint
Once the end of primary drying is reached, the optimisation and control strategy is automatically stopped. Please note that both the comparative pressure measurement and the uncertainty analysis have to indicate the end of primary drying before secondary drying is started. In this case the pressure signals from a Pirani gauge P c,Pir and a capacitance gauge P c,Cap are compared. The ratio of both, P ratio [-], is calculated and is indicative of the molecular composition of the gas inside the vacuum chamber. If the P ratio is near 1.6, the chamber is predominantly filled with water vapour, i.e., primary drying is ongoing. In contrast, if the chamber is filled mainly with nitrogen gas, i.e., primary drying is completely finished, a P ratio near 1 is expected. Please note that the transition between the two extremes is a smooth trajectory that depends on the spatial homogeneity of ṁ sub across the freeze-dried batch [20]. Please also note that in this case the capacitance gauge is of superior quality as compared to the Pirani sensor, i.e., it has a higher precision. As such, the end of primary drying, as defined by the comparative pressure measurement, is reached if the P ratio stays below 1.07 for a period of at least 15 min. In contrast, the end of primary drying, as defined by the UA, is reached when the percentage of vials having reached the L end condition is equal to the uncertainty degree (i.e., 1 − RoF). Moreover, the accuracy of the UA endpoint prediction is evaluated by comparing its prediction to the midpoint of the P ratio trajectory [17].
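The comparative pressure criterion amounts to a simple hold-time check on the Pirani/capacitance ratio, as sketched below with the 1.07 threshold and 15 min hold period stated above.

```python
# Sketch of the comparative pressure endpoint criterion: primary drying is flagged
# as finished once P_Pirani / P_capacitance has stayed below 1.07 for 15 minutes.
import numpy as np

def pressure_ratio_endpoint(t, P_pirani, P_cap, threshold=1.07, hold_s=15 * 60):
    """Return the first time [s] at which the ratio has been below the threshold
    for the full hold period, or None if this never happens."""
    ratio = np.asarray(P_pirani) / np.asarray(P_cap)
    below_since = None
    for ti, r in zip(np.asarray(t), ratio):
        if r < threshold:
            if below_since is None:
                below_since = ti
            if ti - below_since >= hold_s:
                return ti
        else:
            below_since = None
    return None
```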
Secondary Drying Phase
The secondary drying phase is programmed according to a fixed protocol. P c is set to 10 Pa and T s is increased gradually from where the primary drying ended towards a new setpoint which is maintained until the end of secondary drying. At this point the vacuum chamber is partially vented with nitrogen gas and the vials are stoppered. Finally, the chamber is completely vented and the vials are unloaded.
Experimental Verification
To verify the proper functioning of the proposed optimisation and control strategy for primary drying, six independent freeze-drying runs were performed at a pre-defined accepted risk of failure level of 0.1%, i.e., the predictions for T i cross T c only 0.1% of the time. Each experimental run contained 49 vials of the 10R type. All vials were filled with 3 mL of a 3% m/V sucrose solution (Fagron, Nazareth, Belgium). The vial stack was loaded into the freeze-dryer with the shelf pre-cooled at 3 °C. Next, the freezing protocol was initiated by ramping down the temperature to −36 °C at a rate of 1 °C/min, after which the vacuum pump and condenser were started while freezing was maintained for another 2 h. After applying the last freezing setpoint, the initialization phase and subsequently primary drying with optimisation were started. All parameter values used by the robust model-based control are listed in Table 2. Once the end of primary drying is detected, secondary drying is started by heating the shelf up to 25 °C at 0.25 °C/min, which is maintained for 4 h. At the end of each run, the vials are stoppered under an atmosphere of nitrogen gas. Each vial was visually checked for defects. In this case, a slight shrinkage of the cake, attributed to the amorphous properties of the pure sucrose formulation, is not considered a defect [23]. To verify the sublimation front temperature predictions T i in real time, three thin gauge type-K thermocouple probes were installed in each vial stack. Since the largest part of the primary drying process is controlled for the edge group, it was chosen to mount two thermocouples in edge vials (row 4, vial 1 and row 7, vial 4). The third probe sampled a centre vial (row 4, vial 4) to verify the dynamics in the final part of the process, which is directed towards the centre vial population. The thermocouples were placed at the bottom-centre of the vials, from which T i can be calculated using Equation (10) [16]. During operation it was constantly verified whether the measured T i was within the 99.9% uncertainty range as predicted by the UA.
Three out of six verification runs were performed without the application of intended process disturbances, solely to demonstrate the functioning of the proposed real-time optimisation algorithm. For the other three runs, multiple disturbances were intentionally introduced in both the temperature and pressure loops of the freeze-dryer. The objective of these disturbed trials was to evaluate the performance of the system check introduced on the machine capability space, which should ensure that the proposed real-time optimisation strategy is able to keep the product safe in the presence of uncontrolled events. Because these disturbances were introduced as disturbed setpoints to the low-level control loops, they are out of the direct control of the optimisation algorithm and are only noticed when P c or T s start diverging from the setpoints calculated by the algorithm. In the third verification run, two fluid system problems were simulated, one at the start of primary drying and one after 5 h of drying. These disturbances were introduced by keeping T s constant at approximately −17.5 °C and −25.5 °C, respectively, for a full hour. Later on, between 5 and 19 h of the same run, multiple pressure system disruptions were emulated by varying the setpoint of the local P c controller in multiple steps between 10.9 and 15 Pa. In the final part of the primary drying process, another T s disturbance of 1 h was introduced by keeping T s constant at a setpoint of −21 °C after 20.7 h of drying. For the other two verification runs, comparable disturbances were implemented.
Determination of the Heat Transfer Coefficient
The characteristic heat transfer coefficient K v was determined as a function of the chamber pressure P c for both the edge and centre vials. Figure 2 shows the positive relation between P c and K v , which is consistent with literature findings [8,20,24]: a higher molecular density, at elevated chamber pressures, is associated with a higher convective heat transfer component of K v . Please note that the vials have only limited contact with the shelf for conductive heat transfer due to their dome-shaped bottom. Hence, the presence of a mediator gas between the vial and shelf surface can significantly contribute to additional convective heat transfer. Moreover, a consistently higher K v was found for the edge group as compared to the centre vials. This is because the edge vials are not completely shielded by surrounding vials, i.e., the edge vials have a larger view of the uncooled door and walls of the freeze-drying apparatus. This temperature difference drives a radiative heat flux which is consequently higher for the edge vial group [25], and it also explains why both vial groups were described with different regression equations (Equation (5)). For the edge group the nonlinear regression yielded a value of −1.14 J/m²s·K for α e with a standard error of the estimate (SE) of 3.90, a β e of 4.46 J/m²s·KPa with an SE of 1.22 and a γ e of 0.0757 1/Pa with an SE of 0.0207. The centre group was described using an α c of 3.46 J/m²s·K with an SE of 1.65, a β c of 1.93 J/m²s·KPa with an SE of 1.22 and a γ c of 0.0292 1/Pa with an SE of 0.0088. The RSDs of both groups were comparable. Hence, only a single term of variation for the K v parameter was considered, i.e., RSD K v . Its value was calculated as the pooled RSD of all experiments and equals 0.0761. Due to their higher K v , edge vials are more prone to collapse during the freeze-drying process. The focus of the proposed real-time optimisation strategy of the primary drying phase was therefore mostly targeted at the edge vials. Only when all edge vials were dried according to the model was the K v parameter characteristic of the centre vials used for optimisation.
Determination of the Dry Layer Resistance
The regression of R p against L dried was performed by applying Equations (7)-(13) to six different product temperature profiles. The resulting R p profiles were grouped and used for non-linear regression according to Equation (6). This leads to the following estimates of the regression parameters: R p0 = 1.51 × 10^4 m/s (95% CI: 1.27-1.75 × 10^4), A Rp = 9.68 × 10^7 1/s (95% CI: 8.78-10.58 × 10^7) and B Rp = 968 1/m (95% CI: 887-1049). Please note that the obtained R p profiles are very similar to those found in the literature [26,27]. The profile shows a sharp increase in R p at the start of the primary drying process, which tapers off near the end to a value of around 10 × 10^4 m/s, i.e., the final resistance originating from the pores created by the ice crystals after sublimation. Please note that the water vapour inside the pores is assumed to be under Knudsen conditions, meaning that the shape and dimensions of the pores influence the evacuation of that water vapour out of the dried layer. The longer and narrower a pore is, the higher its resistance to water vapour.
The standard deviation of R p in 25 equidistant L dried bins was calculated and pooled, which results in σ Rp = 1.10 × 10^4 m/s. In Figure 3, the R p data are represented using a boxplot for each L dried bin. The fit of the regression curve, enlarged by an interval of 2 times σ Rp , is also shown. Please note that within the determined boundaries of variation the data are accurately described; only for the first bins of L dried is there a slight overprediction.
Variation of Vial Radii and Filling Volume
Variation of the other model parameters (Table 1) was measured experimentally and described using the associated standard deviation. Please note that the distributions of the parameters were assumed to be normal. The standard deviation of the outer vial radius (σ ro ) was measured with a caliper and equalled 5.44 × 10^-5 m. For the inner vial radius (σ ri ) a similar value was assumed. The filling volume variability was checked gravimetrically using deionised water; its standard deviation (σ Vfill ) equalled 1.29 × 10^-8 m³.
Determination of the Critical Temperature
T c was determined with the use of a freeze-drying microscope. Figure 4 depicts an overlay of images taken at the end of each temperature step. The sublimation front moves from the right-hand side towards the left and is marked in red in each overlay. Please note that ice crystals are still present on the left-hand side of the image, whereas the dark grey represents the dried material. From −36.0 to −34.5 °C no abnormal changes in micro-structure were perceived; the shape of the pores in the dried layer followed the configuration of the ice crystals. However, when a temperature of −34 °C was reached, a minimal change in pore shape and size was noticeable: the pores enlarged and material started to aggregate slightly. Pore enlargement is typically attributed to micro-collapse [28]. The dried product was near its T g , allowing some degree of mobility; however, the total structure was not yet completely lost [29]. Upon heating the sample towards −32.5 °C, collapse occurred. Because such micro-collapses are unwanted, as they alter the micro-structure of the pores and therefore also the R p coefficients, T c was set to 238.9 K or −34.25 °C.
Verification Runs without Process Disturbances
Three normal-operation verification experiments were performed to verify all functions of the primary drying optimisation and control strategy. The UA, using parallel computing, with 10,000 samples and a ∆t of 60 s needs around 90 s to calculate a horizon of 120 s. Sub-sampling, fitting a distribution to the T i sub-population and approximating the design space with a surface response model proved successful in reducing the computational load by a factor of 40 as compared to the initial strategy proposed by Van Bockstal et al. [20].
After freezing and preparation of the condenser and vacuum pump, primary drying was initiated by lowering the pressure according to the pressure curve depicted in Figure 5. During this period T s was kept constant at −36 °C. When the primary drying model calculated a positive ṁ sub , which was around 18 min at 14 Pa, the real-time optimisation strategy was initiated (Figure 6). At the start of primary drying a machine capability space was constructed around the initial operating point, i.e., P c = 14 Pa and T s = −36 °C. As these process settings are very conservative, the optimiser first tries to maximize ṁ sub by maximizing T s and decreasing P c . At around 27 min the strategy changed and P c was slightly increased, which boosts the convective heat flux and therefore also ṁ sub . When 13.4 Pa was reached, at around 32 min, the calculated design space was partly limited by the constraint on T i and therefore the optimal P c /T s combination was obtained by maximizing T s while slowly minimizing P c towards 10 Pa. Around 40 min the maximal heat flux was achieved, which is limited by R p . From that point forward, the highest ṁ sub with a 0.1% risk of failure was achieved by keeping P c at its minimal level while controlling the process through the adjustment of T s . Because R p rises during the progression of primary drying, the value of T s was gradually decreased to keep T i as close as possible to its upper limit as defined by T c , as clearly shown in Figure 6. Figure 7 illustrates the results of the uncertainty analysis after 20.45 h of primary drying. The design space is created by invalidation of grid points with a T i at 0.1% RoF equal to or above T c , or an ṁ sub above ṁ sub,chok . Next, the point in the accepted design space with the highest ṁ sub is selected as the optimal set of manipulated variables. The L dried population associated with that optimal grid point is subsequently propagated to the next prediction point as shown in Figure 7a. This results in a gradual increase of the uncertainty on L dried as primary drying progresses.
Please note that after approximately 17.9 h in the primary drying phase, the UA estimated that all edge vials were dry, because all samples of the optimal grid point had an L end,edge condition equal to 1. From that point onwards the K v coefficient of the centre vials was introduced in the calculations and the nominal L dried of the centre vials was used in the error propagation. Consequently, the predictions of T i and its associated uncertainty range were directed at the centre vials, which explains the sudden drop of L dried in Figure 7a. As the centre K v was consistently lower than that obtained for the edge vials, a higher T s was employed to keep the heat flux at its upper limit. In response, P c was temporarily increased to raise the heat flux even further, thereby reaching the new setpoint as fast as possible. The absolute difference between the setpoints of the low-level ON-OFF controllers and the observed values was used as a term of variation for T s and P c , which is indicated by the grey area around P c and T s in Figure 6. Please note that the measured value was always situated within this uncertainty area and that it cycled around the setpoint value. This also indicates that all of the variability in the manipulated variables was fully covered by the uncertainty analysis.
By including the variation of eight different sources, the UA was capable of estimating an uncertainty on the calculated value of T i . Please note that the nominal T i prediction is not centred in this uncertainty area, which indicates an asymmetric solution of the primary drying model. Moreover, the width of the predicted uncertainty area decreases with the progress of drying, as it is mainly linked to ṁ sub . Indeed, ṁ sub decreases as primary drying becomes mainly limited by increasing R p values.
Finally, an observed T i was also calculated. This value is obtained by correcting the measurements of the three thermocouples (Equation (10)). When focusing on the edge vials, which is the case from the start until 17.9 h, the observed T i is very close to the nominal T i prediction and also lies inside the 99.9% uncertainty area. Similar observations can be made for the observed T i after this period, which is aimed at the centre vials.
Verification Runs with Process Disturbances
The results for a verification run with process disturbances are given in Figure 8. It can be observed that the proposed control strategy correctly identifies a disturbance at most two prediction horizons after its introduction, i.e., within 240 s. At first, the variation of the disturbed manipulated variable increases, since the absolute difference between its measurements and its setpoint value becomes bigger. This results in a broader T i distribution after the uncertainty analysis, so a more conservative setting is chosen to stay well below T c . Furthermore, the disturbed dimension of the machine capability space is not taken into account when a disturbance occurs, i.e., its value is fixed to the measured value. This leads to a correction of the machine capability grid towards the disruption, so that the process is only controlled using the other, non-disturbed manipulated variable. Such an event is for example observed after 0.5 h of primary drying. At this point ṁ sub is maximised by reducing P c from 13.2 to 8.8 Pa while maintaining the value of T s at −17.5 °C because of the prior disturbance identified in this variable. Moreover, the variability on T s is increased at the onset of this event. Similar events can be noticed when P c is disturbed: the variability on P c is temporarily enlarged while T s is lowered to keep the T i uncertainty under the critical limit. Notice that only with relatively severe process disturbances, for example occurring after 20.8 h, did the 99.9% T i upper percentile briefly cross the T c limit. However, this situation was quickly corrected, i.e., as fast as the freeze-dryer could operate, by shifting the machine capability space to the most conservative grid point until a valid design space could again be achieved. Furthermore, the correction of T i and the L dried predictions before each UA kept future predictions on track. Regarding the product temperature, similar conclusions can be drawn for the disturbed verification runs as for the undisturbed runs: for both the edge vials (i.e., from the start to 19.98 h of primary drying) and the centre vials (i.e., from 19.98 h until the end of primary drying), the observed T i could be kept inside its uncertainty area using the proposed real-time optimisation and control strategy.
End of Primary Drying
In Figures 6 and 8, the observed T i for the edge and centre vials started deviating before the predicted end of primary drying. This deviation is caused by an imbalance between the heat transfer and ṁ sub : at the end of primary drying the sublimation surface decreases as some parts of the vial are already dried. Consequently, the heat is no longer fully removed by the endothermic ice sublimation, leading to heating of the product [20]. Aside from the thermocouples, a comparative pressure measurement was also employed to determine the end of primary drying, because near the end of sublimation the gas composition inside the drying chamber changes from water vapour towards nitrogen gas and the comparative signal P ratio approaches a terminal value of 1. Both the midpoint and the offset of the P ratio curve were compared to the time point at which 99.9% of the vials had achieved the L end condition according to the UA. The three replicates of the normal operation runs had comparable results, with an endpoint between 21.48 and 21.55 h of drying, whereas the midpoint of P ratio was located between 20.78 and 21.68 h and the offset between 22.73 and 23.77 h (see Table 3).
As expected, the endpoints of the disturbed verification runs are less precisely located; there appears to be a strong correlation with the relative extent of the applied disturbances. Nevertheless, the endpoint predictions according to the UA are analogous to those of the normal operation runs, as they are situated between the midpoint and the offset of the P ratio curve. The relative difference in time between the UA prediction and the P ratio midpoint varied between 0.58 and 5.64%. In the final step, after secondary drying, all vials were visually checked for defects such as collapse and meltback. No defects were noticed for vials coming from either the undisturbed or the disturbed process, indicating good performance of the controller.
Discussion and Perspectives
The observed T i values are very close (<1 °C) to the predicted T i values. Moreover, no defects were observed after secondary drying. All of this indicates accurate model predictions for both the edge and centre vials and a competent real-time optimisation and control strategy to supervise the primary drying phase. Also for the disturbed verification runs, the observed model outcomes are quite close (<1.5 °C) to the predictions. The proposed optimisation strategy kept the 99.9% upper percentile of T i almost constantly under T c , i.e., equal to a RoF of 0.1%. Hence, the primary drying process can be operated more efficiently by reducing the time needed to reach the primary drying endpoint. Moreover, the nominal L dried and T i predictions were corrected after each horizon, which is especially useful when unexpected process disturbances occur. The ability of the machine capability space to move in both directions (i.e., changing P c and T s ) resulted in adequate responses to the artificially introduced disturbances.
In the cases where the 99.9% upper percentile of T i surpassed T c under severe process disturbances, the excursion time was kept as brief as possible. Nonetheless, this excursion time could be limited further by reducing the sampling interval of the model-based controller, which would allow the controller to detect and respond to disturbances more quickly. However, this would also lead to an increased computational load. Another strategy would be upgrading the cooling/heating or vacuum system of the freeze-dryer, resulting in a larger operational space and thus more room for corrections.
The endpoint of primary drying evaluated with thermocouples was perceived earlier than the UA prediction. However, it should be noted that thermocouples are not the most accurate method to determine the end of primary drying since they are based on point measurements in a single vial [20]. Moreover, the primary drying model assumes a flat sublimation plane whereas in practice some curvature of the plane is noticeable, which can lead to inaccurate model predictions of the endpoint. Hence, the endpoint used by the proposed strategy was based on both the model and the comparative pressure measurement. The P ratio proved to correspond better with the model predictions than the thermocouples did. The offset of P ratio was chosen as the start of secondary drying as this poses the least risk of prematurely ending primary drying. However, near the end of primary drying some desorption of the dried layer could already start, delaying the offset of P ratio [12]. The midpoint of the P ratio curve was more comparable to the UA model predictions, which is similar to the observations of Patel et al. [17].
A critical aspect of this work was to reduce the computational load of the real-time calculations. To this end, some practicalities such as sub-sampling, empirical fitting of the T i distribution and refining the prediction grid with a surface response model were introduced. This results in more efficient calculations while increasing the resolution of the dynamic design space. It should however be taken into account that this approach uses multiple successive fits, which means that only an approximation of T i , L dried and ṁ sub can be made.
Because the process is steered towards the edge of failure without crossing it, the proposed optimisation and control strategy is very dependent on the model parameter values. The accuracy of the predicted R p and its variation is very important, as it indirectly acts as a hard constraint on the operation of the process. Although the experimental fit of R p using regression Equation (6) described the data quite well, there exists a slight overprediction at low L dried (see Figure 3). However, this is not problematic as it will lead to more conservative control settings, and therefore no extra risk of failure is created. Yet, Equation (6) is not capable of describing all possible R p curves perfectly [26]; more detailed equations describing this relation could be included in the future. Also note that the current strategy could be further improved if a tunable diode laser absorption spectroscopy (TDLAS) system were present. With TDLAS, the vapour flow between the drying chamber and condenser can be measured in detail. Hence, the predictions of ṁ sub can be corrected and therefore more accurate real-time estimates of the K v and R p parameters can be made.
Conclusions
Recently, new methodologies have been proposed to optimize batch freeze-drying by implementing dynamic settings during primary drying [4]. When a dynamic design space approach is used, the risk of collapse, choked flow and other failures can be estimated while optimising the processing time [8,20]. In this work a model-based supervisory optimisation and control strategy is proposed that takes into account model output uncertainties to maximise the sublimation rate ṁ sub during the primary drying phase. This is done while remaining below a predefined risk of failure (RoF), as determined by the critical sublimation interface temperature T c , and while avoiding choked flow conditions. Moreover, model uncertainties are updated in real time using common process measurements, which allows for more aggressive operation when variations in the measurements are low and more conservative operation when required.
Multiple experimental verification runs, with and without induced process disturbances, were performed on a batch freeze-dryer with a predefined RoF of 0.1%. During these experiments the manipulated variables T s and P c were continuously optimised according to the changing design space, which resulted in a dynamic primary drying trajectory. It was shown that the measured product temperatures were consistently located within the uncertainty range predicted by the model. Moreover, the end of primary drying was also predicted correctly, which resulted in compliant products with acceptable cake appearance. As such, it can be concluded that the proposed optimisation and control strategy can be successfully used to reduce processing times and operational costs of batch freeze-drying. | 13,729.8 | 2020-02-01T00:00:00.000 | [
"Engineering"
] |
Amorphous microwires with enhanced magnetic softness and GMI characteristics
In this paper we present results on the correlation between the GMI effect and the soft magnetic behaviour of Co-rich microwires with a low magnetostriction constant. A correlation between the magnetoelastic anisotropy and the magnetic field dependences of the diagonal and off-diagonal impedance components is observed. The low-field GMI hysteresis, explained in terms of the magnetoelastic anisotropy of the microwires, has been suppressed by a bias current.
Introduction
Magnetically soft glass-coated microwires (typically 5-30 µm in diameter) exhibit a number of outstanding magnetic properties such as magnetic bistability and the giant magneto-impedance (GMI) effect [1,2]. Recently, the excellent soft magnetic properties and GMI effect of glass-coated microwires have attracted great attention [3,4], giving rise to the development of industrial applications for low magnetic field detection [5].
The giant magneto-impedance (GMI) effect, consisting of a large sensitivity of the impedance of a magnetically soft conductor to an applied magnetic field, has attracted great attention in the field of applied magnetism [3][4][5][6][7], especially because of its excellent magnetic field sensitivity suitable for low magnetic field detection. The GMI effect is especially high in ferromagnetic magnetically soft wires (especially of amorphous and nanocrystalline origin) [3,6,7]. It is worth mentioning that the cylindrical shape is quite suitable for achieving a high GMI effect [3,6,7]. The general tendency towards miniaturization of magnetic sensors requires the development of thin soft magnetic materials, such as thin wires and thin films [1,5]. Owing to their thin dimensions, glass-coated microwires have gained special interest in the field of applied magnetism for the design of sensors based on the GMI effect [3,4]. It is worth mentioning that in most applications a high linearity of the MI dependence and low hysteresis are desirable [7,8]. An anti-symmetrical MI curve with a linear region has been obtained in a pulsed current excitation scheme of wires using detection of the off-diagonal GMI component [3,8,9]. Such a pulsed scheme for GMI measurements proved quite useful for the development of real GMI sensors [5]. At the same time, we have recently shown that the linearity and shape of the off-diagonal component in microwires can be tailored by thermal treatment [10]. Considerable GMI hysteresis has been observed and analyzed in microwires possessing helical anisotropy [9], although the enhanced magnetic field sensitivity of the GMI effect in amorphous wires is related to the specific outer domain structure in the surface area [11].
In this paper we studied the GMI effect (the GMI ratio ∆Z/Z, and the diagonal Z_zz and off-diagonal ζ_φz impedance tensor components) and the hysteretic magnetic properties of ultra-thin amorphous glass-coated microwires with vanishing magnetostriction constant.
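For reference, the GMI ratio quoted here is conventionally defined relative to the impedance at the maximum applied field; the definition below is the commonly used one in the GMI literature rather than a formula restated in this text:
\[
\frac{\Delta Z}{Z}(H) \;=\; \frac{Z(H) - Z(H_{\max})}{Z(H_{\max})} \times 100\%,
\]
where H_max is the maximum axial field applied during the measurement.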
Experimental details
We have measured the dependences of the diagonal Z_zz and off-diagonal Z_φz impedance components on the external axial magnetic field H in Co-rich microwires, as described elsewhere [3,8,9]. The microwires were placed in a specially designed microstrip cell. One wire end was connected to the inner conductor of a coaxial line through a matched microstrip line, while the other was connected to the ground plane. The components Z_zz and Z_φz were measured simultaneously using a vector network analyzer. The diagonal impedance of the sample, Z_w = Z_zz·l, where l is the wire length, was obtained from the reflection coefficient S_11, and the off-diagonal impedance Z_φz was obtained from the transmission coefficient S_21 as the voltage induced in a 2-mm-long pick-up coil wound over the wire. The static bias field H_B was created by a DC current I_B applied to the sample through a bias-tee element. Other experimental details are given in Ref. 8. The frequency range for the off-diagonal component Z_φz was 10-300 MHz, while the diagonal impedance component was measured up to 7 GHz.
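For clarity, the wire impedance can be recovered from the measured reflection coefficient via the standard one-port relation, written below for a reference impedance Z_0 (typically 50 Ω); the exact de-embedding of the microstrip cell is described in Ref. 8 and may differ in detail:
\[
Z_w \;=\; Z_0\,\frac{1 + S_{11}}{1 - S_{11}}, \qquad Z_{zz} \;=\; \frac{Z_w}{l},
\]
while the off-diagonal component is taken to be proportional to the transmission coefficient S_21 measured on the pick-up coil.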
For a practical sensor it is essential to have an anti-symmetrical dependence on the magnetic field, so the off-diagonal MI components can be more suitable. In sensor applications, pulsed excitation is preferred over sinusoidal excitation because of the simpler electronic design and low power consumption; therefore we also used a pulsed excitation scheme, as described elsewhere [3,9]. Hysteresis loops have been measured by the induction method, as described elsewhere [3,10].
Results and discussion
The magnetic field dependence of the real part Z_1 of the longitudinal wire impedance Z_zz (Z_zz = Z_1 + iZ_2), measured up to 4 GHz in the Co66Cr3.5Fe3.5B16Si11 microwire, is shown in Fig. 1. The general feature of these dependences is that the field of the impedance maximum shifts to higher fields as the frequency f increases. The sufficiently high magnetic field sensitivity, i.e. a GMI effect persisting up to GHz-range frequencies, should also be underlined. Fig. 3 shows the field dependence of the off-diagonal voltage response V_out, measured using the pulsed scheme described elsewhere [3,9,12,13], in Co67.1Fe3.8Ni1.4Si14.5B11.5Mo1.7 (λ_s ≈ 3·10⁻⁷) microwires with different geometries: the metallic nucleus diameter and the total diameter with the glass coating are 6.0/10.2 µm (ρ ≈ 0.59), 7.0/11.0 µm (ρ ≈ 0.64) and 8.2/13.7 µm (ρ ≈ 0.6). The off-diagonal components exhibit an antisymmetrical magnetic field dependence, suitable for determining the magnetic field direction in real sensor devices [3,9,12,13]. It should be noted from Fig. 3 that the V_out(H) curves exhibit nearly linear growth within the field range from -H_m to H_m. The field H_m limits the working range of the MI sensor to 240 A/m and should be associated with the anisotropy field. The effect of the ρ-ratio on V_out(H) (Fig. 3) should be attributed to the effect of the internal stresses on the magnetic anisotropy field. It must be underlined that all studied samples exhibited excellent magnetically soft properties, with inclined hysteresis loops and extremely low coercivities (between 4 and 10 A/m). The magnetic anisotropy field H_k is found to be determined by the ρ-ratio, decreasing with ρ (Fig. 4), as has also been reported earlier [3,10]. Additionally, we recently found that the nature of the observed low-field hysteresis in Z_1(H) and Z_φz(H) is directly related to the deviation of the anisotropy easy axis from the transverse direction [8]. Therefore, application of a circular bias magnetic field H_B, produced by a DC current I_B running through the wire, affects the hysteresis and asymmetry of the MI dependence, suppressing this hysteresis when I_B is high enough (see Fig. 5, where the effect of the bias voltage on the diagonal impedance Z_1 and on the S_21 parameter, proportional to the off-diagonal GMI component, is shown). In fact, in the pulsed excitation scheme, where sharp pulses with an edge time of about 5 ns are produced by passing square-wave multivibrator pulses through a differentiating circuit, the overall pulsed current contains a DC component that produces a bias circular magnetic field [6,7]. In this way the low-field hysteresis can be suppressed by selecting an adequate pulse amplitude. On the other hand, the estimated values of the internal stresses in these glass-coated microwires, arising from the difference in the thermal expansion coefficients of the simultaneously solidifying metallic nucleus and glass coating, are of the order of 100-1000 MPa, depending strongly on the ratio between the glass coating thickness and the metallic core diameter [3,15-17] and increasing with decreasing ρ-ratio. Consequently, the magnetoelastic anisotropy of glass-coated microwires can be controlled by the geometrical ratio ρ through the strength of the internal stresses. Application of stress and/or a magnetic field during annealing of the microwires allows inducing considerable magnetic anisotropy and results in some cases in drastic changes of the hysteretic magnetic properties and GMI behavior [10,12,18]. As an example, application of an axial magnetic field during annealing induces axial magnetic anisotropy in Co-rich microwires (Fig. 7). Here the hysteresis loops of Co67Fe3.85Ni1.45B11.5Si14.5Mo1.7 microwires (d = 22.4 µm, D = 22.8 µm) annealed by Joule heating without (CA) and under application of an axial magnetic field (FCA) are shown. As can be appreciated, an increase of the remanent magnetization and a decrease of the coercivity after FCA are observed. The most significant changes of both the hysteresis loop and the GMI behaviour have been observed in Fe-rich microwires subjected to stress annealing (Fig. 7). Stress annealing of Fe74B13Si11C2 microwires resulted in the induction of considerable stress-induced anisotropy [18]. The shape of the hysteresis loop depends on the time and temperature of annealing (Fig. 8a). In this case the easy axis of magnetic anisotropy has been changed from axial to transverse [18]. Additionally, application of stress to annealed microwires with well-defined transverse anisotropy results in a drastic change of the hysteresis loop (Fig. 8b). The origin of this stress-induced anisotropy is related to the so-called "back stresses" originating from the composite nature of glass-coated microwires annealed under tensile stress: compressive stresses compensate the axial stress component, and under these conditions the transverse stress components are predominant [18].
Consequently, these stress-annealed samples exhibit a stress-impedance effect, i.e. an impedance change (∆Z/Z) under applied stress σ, observed in samples with stress-induced transverse anisotropy (see Fig. 9) [18,19]. The relaxation of the internal stresses after heat treatment is expected to drastically change both the soft magnetic behavior and the ∆Z/Z(H) dependence, owing to stress relaxation, induced magnetic anisotropy, and the change of the magnetostriction constant under annealing.
Summarizing, a number of interesting phenomena can be observed in thin magnetically soft ferromagnetic microwires. The Taylor-Ulitovsky technique allows the fabrication of composite microwires with a thin metallic nucleus diameter. The composite character of such microwires results in the appearance of an additional magnetoelastic anisotropy. Heat treatment is an efficient method for tailoring the magnetic properties and GMI effect of such microwires. Selection of a proper chemical composition, geometry and adequate annealing conditions allows achieving a high GMI effect.
Conclusions
In thin amorphous wires produced by the Taylor-Ulitovsky technique, the magnetic softness, the magnetic field dependence of the GMI effect (both longitudinal and off-diagonal) and the GMI hysteresis are determined by the magnetoelastic anisotropy. This magnetoelastic anisotropy can be tailored by the sample geometry and adequate annealing. There are a number of interesting effects, such as the induction of transverse anisotropy in Fe-rich microwires, allowing the creation of extremely stress-sensitive elements. Studies of the diagonal and off-diagonal MI tensor components of glass-coated microwires have shown the great potential of these materials for microminiaturized magnetic field sensor applications. Their main advantage is a highly sensitive, low-hysteresis field dependence. By varying the alloy composition and applying post-fabrication processing it is possible to control the sensor's operating range. Low-field GMI hysteresis has been observed and explained in terms of the helical magnetic anisotropy of the microwires.
Fig. 2. Magnetic field dependences of the coefficient S_21 at 10 MHz (a) and Z_1(H) dependences at different frequencies (b) measured in the Co66Cr3.5Fe3.5B16Si11 microwire.

Fig. 3. V_out(H) response of Co67.1Fe3.8Ni1.4Si14.5B11.5Mo1.7 microwires with different diameters, d, and ρ-ratios.

On the other hand, it is well established that the strength of the internal stresses, σ_i, arising during the simultaneous rapid quenching of the metallic nucleus surrounded by the glass coating, can be controlled by the ρ-ratio: the internal stresses increase as the ρ-ratio decreases (i.e. with increasing relative thickness of the glass coating).

Fig. 5. Effect of the bias voltage U_B on the magnetic field dependence of the diagonal impedance (a) and of the S_21 parameter (b) of the Co67Fe3.85Ni1.45B11.5Si14.5Mo1.7 microwire.

The traditional way to tailor the magnetoelastic anisotropy is thermal treatment. The influence of Joule heating on the off-diagonal field characteristic of the nearly zero magnetostriction Co67.1Fe3.8Ni1.4Si14.5B11.5Mo1.7 microwire with diameters 9.4/17.0 µm (ρ ≈ 0.55) is shown in Fig. 6: thermal annealing with a 50 mA DC current reduces H_m from 480 A/m in the as-cast state to 240 A/m after 5 min of annealing. The observed H_k(ρ) dependence has been attributed to the magnetoelastic energy contribution K_me ≈ (3/2)·λ_s·σ_i, where λ_s is the saturation magnetostriction and σ_i is the internal stress. The magnetostriction constant is determined mostly by the chemical composition and reaches nearly zero values in amorphous Fe-Co-based alloys with Co/Fe ≈ 70/5 (λ_s ≈ 0) [3,14,15].

Fig. 6. V_out(H) of the as-prepared and Joule-heated Co67Fe3.85Ni1.45B11.5Si14.5Mo1.7 microwire (current annealing at 50 mA) (a) and ∆Z/Z(H) dependences of the heated Co67Fe3.85Ni1.45B11.5Si14.5Mo1.7 microwire, measured at f = 30 MHz and I = 1 mA after CA annealing at 40 mA for different times (b).

Fig. 8. Stress-impedance effect of the Fe74B13Si11C2 glass-coated microwire annealed under stress (468 MPa) at 275 °C for 0.5 h, measured at f = 10 MHz with a driving current amplitude of 2 mA.
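As a rough, illustrative order-of-magnitude check (not a calculation from the paper): taking λ_s ≈ 3·10⁻⁷ and a mid-range internal stress σ_i ≈ 500 MPa from the values quoted above, and assuming a typical saturation polarization μ₀M_s ≈ 0.7 T for a Co-rich amorphous alloy (an assumed value, not given in the text),
\[
K_{me} \approx \tfrac{3}{2}\,\lambda_s\,\sigma_i \approx \tfrac{3}{2}\,(3\times10^{-7})\,(5\times10^{8}\,\mathrm{Pa}) \approx 2\times10^{2}\,\mathrm{J/m^3},
\qquad
H_k \approx \frac{2K_{me}}{\mu_0 M_s} \approx 6\times10^{2}\,\mathrm{A/m},
\]
i.e. a few hundred A/m, which is the same order of magnitude as the anisotropy fields H_m of 240-480 A/m reported in the text.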
"Materials Science"
] |
Review of the Current Landscape of the Potential of Nanotechnology for Future Malaria Diagnosis, Treatment, and Vaccination Strategies
Malaria eradication has for decades been on the global health agenda, but the causative agents of the disease, several species of the protist parasite Plasmodium, have evolved mechanisms to evade vaccine-induced immunity and to rapidly acquire resistance against all drugs entering clinical use. Because classical antimalarial approaches have consistently failed, new strategies must be explored. One of these is nanomedicine, the application of manipulation and fabrication technology in the range of molecular dimensions between 1 and 100 nm, to the development of new medical solutions. Here we review the current state of the art in malaria diagnosis, prevention, and therapy and how nanotechnology is already having an incipient impact in improving them. In the second half of this review, the next generation of antimalarial drugs currently in the clinical pipeline is presented, with a definition of these drugs’ target product profiles and an assessment of the potential role of nanotechnology in their development. Opinions extracted from interviews with experts in the fields of nanomedicine, clinical malaria, and the economic landscape of the disease are included to offer a wider scope of the current requirements to win the fight against malaria and of how nanoscience can contribute to achieve them.
Introduction
Malaria is an infectious disease caused by the parasite Plasmodium spp., which is transmitted by female Anopheles mosquitoes. Among the different Plasmodium species, five of them are known to infect humans: P. falciparum, P. vivax, P. malariae, P. ovale, and P. knowlesi [1], with the first two being the most prevalent and causing the majority of malaria cases [2]. P. falciparum is the most lethal species, causing severe clinical malaria, whereas P. vivax develops a dormant liver stage, the hypnozoite, which can trigger relapses after the primary infection [3]. The life cycle of P. falciparum [4][5][6] is shown in Figure 1.
Uncomplicated malaria patients present with a combination of fever, chills, sweats, headaches, nausea and vomiting, body aches, and general malaise [5]. Physical findings in uncomplicated malaria may include elevated temperatures, perspiration, weakness, enlarged spleen, mild jaundice, enlargement of the liver, and increased respiratory rate. Severe malaria occurs when infections are complicated by serious organ failures or abnormalities in the patient's blood or metabolism. The manifestations of severe malaria include the following [5]: (i) cerebral malaria, with abnormal behavior, impairment of consciousness, seizures, coma, or other neurologic abnormalities; (ii) severe anemia due to destruction of RBCs; (iii) hemoglobinuria (hemoglobin in the urine) due to hemolysis.
Malaria is classified by the World Health Organization (WHO) as a life-threatening disease that, despite being preventable and treatable, is still endemic in 87 countries, mainly in the tropical and subtropical regions [1]. The last WHO report on malaria estimated that in 2020, there were 241 million cases and 627,000 deaths worldwide, with the WHO African Region accounting for 95% of them [7]. Despite the decrease in disease burden and deaths experienced over the past years, the goal of malaria elimination is still far, mainly because of a lack of continued financing and the emergence of parasite resistance to the antimalarial drugs administered [1]. To achieve the global objective of malaria eradication, there is a need for research on the development of novel and affordable strategies for prevention and treatment, among which nanomedicine can play a pivotal role [8].
Nanomedicine can be defined as the application of nanotechnology to and the usage of nanomaterials for improvements in the health field. Traditionally, nanomedical tools have focused on noncommunicable diseases, with much effort on cancer [8,9]. However, recently, the application of nanomaterials to infectious and neglected diseases and to diseases of poverty, for example, malaria, has been extensively investigated and exploited [10][11][12].
There are different stages of the malaria parasite life cycle that can be targeted, potentially leading to the development of a new generation of antimalarial drugs [13]. The majority of efforts are concentrated on therapeutic applications, mainly on developing novel targeted drug delivery systems with the use of nanocarriers (NCs) [14,15]. These systems have the capacity to release the desired drug at a specific place with a high local dose, allowing rapid action against the pathogen without, or with very low, side effects [16]. This is possible because of the capacity of NCs to encapsulate several drug types and be functionalized with molecules to target the specific site of delivery. Moreover, NCs present good biological properties in terms of toxicity, half-life, and circulation time. Besides their therapeutic application, targeted drug delivery systems can also be used in prevention strategies, for example, by encapsulating the active molecule of a vaccine to trigger a stronger immune response [17]. In addition, nanotechnology can be applied to other malaria-related interventions, such as using green processes for the fabrication of nanoparticles (NPs) [18,19].
Although the research and development of novel drugs and strategies against malaria is a crucial issue for the elimination of this disease, there are other important factors that need to be assessed [20]: affordability, social and cultural factors, ethical considerations, surveillance systems, and resource allocation, to name a few, are relevant to eradication efforts [21,22].
Conventional Strategies against Malaria
A common trend among countries that have eliminated malaria recently is their heavy investment in control and prevention [23], the goal of which is to reduce disease prevalence [24] and limit the evolution of drug resistance by the parasite [25]. A robust monitoring and surveillance system is essential to progress towards control and elimination, allowing rapid case detection and appropriate response [26]. Active case detection is useful in elimination campaigns targeting hotspots and hot-pops, while passive surveillance is vital in any endemic country, with proper training of health workers where malaria cases are rare [23].
The integrated vector management (IVM) strategy is an inexpensive and efficient tool for vector control. IVM was first introduced by the WHO and uses a combination of interventions to attack different stages of the Plasmodium life cycle. Among all vector control tools, indoor residual spraying, long-lasting insecticidal nets, and insecticide-treated nets are the most widely implemented [27][28][29][30]. Combining these approaches with entomological surveillance [29], larval source management [30], insecticide rotation [25], and occupation-based vector control [23] has offered good results in different areas [25,27,29,31].
Of increasing interest as a chemoprevention tool is mass drug administration, defined as the administration of an antimalarial drug to an entire population, aiming at reducing disease prevalence [23,31]. In areas where transmission is high, the WHO recommends chemoprevention interventions as a prophylaxis tool for high-risk populations, including intermittent preventive treatment of infants (IPTi) and of pregnant women (IPTp) and seasonal malaria chemoprevention (SMC) for children under 5 years of age before and during the high transmission seasons [32], which have been proven effective, economical, and safe prophylactic strategies for the prevention of malaria in the targeted populations.
The most widely used tools for diagnosis are rapid diagnostic tests (RDTs). Although microscopy is used as well, there is limited access to health facilities having the necessary equipment and trained personnel. RDTs are the gold standard for malaria screening and diagnosis outside health centers, but they can detect only high parasite densities in people with symptomatic malaria [23,28,31]. The alternative tools are nucleic acid amplification tests (NAATs), which have advantages, such as high sensitivity and processivity and the capacity to identify drug-resistant strains, despite being more time consuming and expensive than RDTs [33].
Currently there are different natural and synthetic compounds available for the treatment of malaria, but their effectiveness has been decreasing, as Plasmodium has evolved resistance towards most of them. To reverse this trend, the WHO encourages using combination therapies, and some drugs have restricted usage only in severe situations when the combination therapy is not working [14,34]. The first natural product employed against malaria was quinine, which has been one of the most effective antimalarial treatments [35]. In the same pharmacophore group of arylamine alcohols are lumefantrine, used for the treatment of uncomplicated P. falciparum malaria in combination with artemether [36], and mefloquine, which in combination with artesunate is used for the treatment of uncomplicated malaria [36,37]. Another important drug is chloroquine, used to treat all forms of malaria with few side effects, but to which resistance evolved during the 1950s [37]. Now it is used for the treatment of all uncomplicated malaria except for P. falciparum [36]. In the same quinoline chemical family are piperaquine and other drugs normally administered with artemisinin derivatives [34]. The WHO recommends the use of artemisinin and artemisinin-based combination therapy (ACT) for the treatment of malaria [36,38]. Artemisinin was first isolated during the early 1970s, showing efficacy even against multidrug-resistant forms of P. falciparum [37]. Among its several derivatives, the most common are artemether and artesunate, widely used in the treatment of all forms of uncomplicated malaria [36]. Though developed for clinical malaria therapeutic treatment, some of these drugs have been recommended for prophylaxis interventions as well. Sulfadoxine-pyrimethamine (SP), the most used drug combination for chemopreventive interventions, is recommended for (i) IPTp in malaria-endemic areas of Africa, (ii) IPTi for infants below 12 months of age in areas of moderate-to-high malaria transmission in Africa, and (iii), combined with amodiaquine, in monthly SMC for all children below 6 years during the transmission season [36]. Other drugs can also be used for prophylaxis purposes by travelers to endemic regions and residents in endemic areas.
Despite the interventions currently available against malaria, there are still limitations to reducing the burden of the disease. As an example, RDTs are fast, easy to perform and require neither electricity nor specific equipment, but advances in stability, affordability, detection of low parasitemia density, and identification of asymptomatic patients are needed to improve diagnosis and enable immediate treatment [31,39].
Much progress has been made in prevention strategies, mainly in mosquito vector control. The introduction of new guidelines such as IVM, a combination of vector control tools, has led to a reduction in transmission inside houses, thus decreasing the incidence of new infections and thereby the morbidity and the mortality of the disease [30]. Although these approaches have helped in eliminating malaria in certain regions [25,32], their impact on disease prevalence is usually limited in areas of high transmissibility. Developing novel outdoor vector control tools, adapting IVM strategies to the specificities of each region and health system, bringing new insecticides to the market, and developing efficient entomological surveillance systems are key points to improve prevention [25,[30][31][32].
Although several antimalarial drugs for treatment and prophylaxis have been developed, the scenario is still far from optimal [40,41]. Factors such as costs (both for purchase and for continued drug supply), sustainability (a challenge for long-term programs), acceptability, poor product quality, and incorrect use leading to the evolution of resistance by Plasmodium are responsible for the decreasing efficacy of drugs [40][41][42][43]. In addition, the use of antimalarials for prophylaxis and chemoprevention purposes requires several doses and relies on adherence to the intervention, which may not happen, thus leading to resistance evolution. An effective drug specifically designed for prophylaxis objectives, for example, a single-dose drug, is needed for more efficacious interventions.
Despite the diagnostic tools developed recently, the prevention measures used in endemic countries, and the different antimalarial drugs available, an effective malaria vaccine remains a missing cornerstone to achieve the global goal of malaria elimination. Only one vaccine, RTS,S (Mosquirix), has completed phase III clinical trials, providing limited protection from severe malaria in African children [44]. In October 2021, Mosquirix was endorsed by the WHO for "broad use" in children, making it the first malaria vaccine to receive this recommendation. The development of a fully protective or transmission-blocking vaccine (or a combination of both) is imperative to achieve the goal of malaria elimination [23,24,31,32].
Nanotechnology against Malaria
The application of nanotechnology to health care has led to better knowledge of the biological mechanisms of diseases and to the development and improvement of tools for diagnosis and treatment [45]. Although the potential of nanomedicine against diseases of poverty and neglected diseases has not been fully exploited yet, new approaches and novel tools for malaria control, prevention, diagnosis, and treatment have been recently studied [46].
Diagnosis
Malaria diagnostic tools are mainly based on specific Plasmodium biomarkers, except for clinical diagnosis, which is based on the symptoms that a patient displays [47]. Biomarkers can be defined as indicators of the biological state of an organism, usually through the measurement of specific substances, processes, or structures from a removed sample [33]. These different biomarkers can be directly associated with the parasite density the patient is carrying, which can range from extremely low (below 1 parasite/µL) to high values (over 10,000 parasites/µL) [48]. Obtaining a fast and accurate result with a diagnostic test is essential for an appropriate treatment [48,49]. Nanotechnology has helped in identifying and fully characterizing malaria biomarkers as well as in developing novel sensors for diagnostic tests with the aim of improving quality, sensitivity, and reproducibility while diminishing the associated costs.
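To illustrate why these densities matter for any test that examines a small blood volume, consider a purely statistical sampling limit (an illustrative calculation, assuming a nominal 5 µL sample; neither the volume nor the numbers come from the review): at 1 parasite/µL the sample contains on average λ = 5 parasites, and under Poisson statistics the probability that it contains none is
\[
P(0) = e^{-\lambda} = e^{-5} \approx 0.7\%,
\]
whereas at 0.1 parasite/µL the same sample is parasite-free about 61% of the time (e^{-0.5} ≈ 0.61), so very low-density infections can be missed regardless of how sensitive the downstream detection chemistry is.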
Currently there are six main Plasmodium biomarkers that serve as targets in diagnostic tests. These include five unique parasite proteins (histidine-rich protein II, lactate dehydrogenase, aldolase, glutamate dehydrogenase, and hypoxanthine-guanine phosphoribosyl transferase) and a pigment marker (hemozoin) [33]. They can be identified in a diagnostic test through a recognition element (e.g., antibodies or aptamers), producing a signal that is transduced into an output that can then be interpreted [33]. Apart from the three conventional malaria diagnosis tools (microscopic analysis, antibody-based RDTs, and NAATs), novel sensors have recently been developed. Aptamer-based sensors are of increasing interest because of aptamers' good properties, such as high specificity towards the target molecule, stability, amenability to functionalization, and nonrequirement of animals for their production [50,51]. Different aptasensors have already been tested against Plasmodium biomarkers, placing them as simple, inexpensive, and rapid alternatives for malaria diagnosis [33,47,52,53]. Also attracting growing interest are electrochemical sensors, which are based on a recognition element that accounts for the selectivity of the specific biomarker, usually contained in a thin layer. Once the biomarker is detected, changes in electrical properties are produced (e.g., conductance or electric potential) and then transferred to the output [33]. This system has already been tested in electrochemical point-of-care devices and in label-free sensors [54], among others [33,53].
Nanotechnology can also be applied to improve diagnostic tools already in use. Immunosensors, which use antibodies for the detection of biomarkers, are the core element of most RDTs. Adding to these platforms nanomaterials such as gold NPs [55] and magnetic NPs [56], or designing multiplex immunoassays with the capacity to detect multiple biomarkers at the same time [57], can improve the sensitivity and the overall performance of sensors, resulting in a better and more precise diagnosis. Molecular methods for malaria diagnosis based on NAATs have also been improved through nanotechnology. Polymerase chain reaction was the first molecular diagnostic tool developed and it is currently widely used, although it has limitations in terms of cost, time, and skill requirements. Loop-mediated isothermal amplification has arisen as a simple, quick, specific, and cost-effective alternative allowing fast malaria diagnosis in remote areas [58][59][60]. Other devices are being developed, including microfluidic systems such as lab-on-a-chip sensors integrating multiple functions [61], although further research in this direction is needed.
Vaccines
Current malaria vaccine candidates can be divided into three different categories, depending on the stage of the Plasmodium cycle they target, which determines the vaccine's purpose and how it works [62]. Pre-erythrocytic vaccines target the sporozoites travelling to the liver with the objective of inhibiting hepatocyte infection and the subsequent RBC invasion, thus preventing progression to symptomatic disease. Although the mechanism of this response has not been fully worked out yet, the response can be enhanced by triggering specific immune cells that target the circumsporozoite protein [63]. Most vaccine candidates targeting the pre-erythrocytic stage are in phase I and phase II clinical trials, except for the Mosquirix vaccine, which is the only malaria vaccine candidate having reached a pilot implementation phase III clinical trial [64,65]. It has an efficacy of around 45% in children aged 5-17 months and 30% in children 6-12 weeks old after four doses, and although its protection decreases over time, simulations confirm that it would be a good addition to other malaria prevention and control tools in order to decrease the incidence of severe malaria [62,[66][67][68]. In April 2021, results of the phase IIb clinical trial of the vaccine candidate R21 adjuvanted with Matrix-M (R21/MM) were published [69]. This study assessed the efficacy of the vaccine in children aged 5-17 months from Burkina Faso. Participants were divided into three groups (receiving 25 µg R21/25 µg MM, 25 µg R21/50 µg MM, and the rabies vaccine in the case of the control group). All groups were administered three doses of the vaccine intramuscularly, and safety, immunogenicity, and efficacy were assessed over a 24-month period (endpoints after 6 and 12 months). The results showed a protection efficacy of 71% in group one and 76% in group two 12 months after the third dose [70]. This is the first malaria vaccine with an efficacy meeting WHO standards [71].
The second group, blood-stage vaccines, target infected erythrocytes with the generation of functional antibody response. Blocking parasite surface proteins and/or antigens on the membrane of pRBCs limits merozoite production as well as controlling and reducing parasitemia. As most of the acquired immune response of individuals with repeated episodes of malaria targets this stage, the development of an effective blood-stage vaccine is considered to be crucial to control the circulating parasites. Although there are different vaccine candidates targeting specific Plasmodium proteins exposed on pRBCs, none has shown efficacy against clinical malaria [66,68], probably because of the high polymorphism of the targeted antigens [62].
The third group comprises transmission-blocking vaccines (TBVs), which target gametocytes, the sexual parasite stages in the blood circulation. The aim of TBVs is to interrupt the transmission between humans and mosquitoes by causing an immune response to specific gametocyte proteins, reducing their infectivity. Although showing no individual benefit for the patients, TBVs are considered vital to reduce malaria prevalence in the population. Little is known of the efficacy of TBVs as all candidates are in phase I and phase II clinical trials [62,66,68].
Although much progress has been made towards a malaria vaccine, and several candidates have reached preclinical studies and are currently in the initial phases of clinical trials, only Mosquirix has completed phase III clinical studies [63]. To some extent, nanotechnology has played an important role in the development of malaria vaccines; as examples, Mosquirix is based on virus-like particles of nanometric dimensions targeting specific P. falciparum proteins [12,72], and vaccines based on lipid NCs are currently under study, as they may boost the immune response against malaria parasites [12,[73][74][75][76]. Though the recent advances are encouraging, vaccines targeting different Plasmodium stages would be the ideal candidates; however, the production of such vaccines is more difficult. Several antigens can be combined in NCs in order to obtain wider and stronger immune protection. Liposomes are especially well suited to be functionalized with different ligands ( Figure 2) and are actually widely used for vaccine production, as seen with coronavirus disease 2019 [77,78]. An integrated approach combining current malaria control strategies with an effective malaria vaccine would contribute to providing the long-term protection needed to advance towards malaria elimination [66].
Figure 2. Schematic representation of liposomes formulated using different strategies for improving the therapeutic efficacy of encapsulated drugs. Conventional liposomes are formed by phospholipids and cholesterol and can carry hydrophilic drugs in their aqueous cores and hydrophobic compounds in their lipid bilayers. Active encapsulating liposomes hold a pH gradient that can improve the loading of drugs with amphiphilic nature, which, depending on the pH, are found either in their protonated or deprotonated forms. Targeted liposomes are developed using specific pRBC ligands such as antibodies or heparin and promote the delivery of high drug doses with few side effects. Long-circulating liposomes can be formulated by modifying their surfaces with poly(ethylene) glycol (PEG) to enhance the blood residence time. Reproduced with permission from [14], The Royal Society of Chemistry, 2020.
Antiplasmodial Therapeutics
At present, there are several drug combinations for the treatment of clinical malaria, mainly ACTs. Although some of these drugs have shown good therapeutic efficacy, there is still room for improvement in avoiding side effects, reducing toxicity, increasing half-life, preventing drug resistance, and decreasing the dosage for effective treatment. Nanotechnology-based drug delivery systems are novel tools well placed to improve the efficacy of current antimalarial drugs and overcome their limitations. NCs can be designed to target specific molecules, protect drugs from degradation, prolong blood circulation time, cut down dose frequency, overcome side effects, improve pharmacokinetic (PK) profiles, and increase the overall efficacies of treatments [14,46,79,80]. To achieve these aims, there are three key NC components that need to be carefully assessed in the design phase: the drug loaded, the targeting molecules, and the core delivery system, each of which provides different qualities to the final NC formulation.
The first element that must be determined is the drug to be administered. In the case of malaria, the most active compounds currently available are ACTs, which are recommended by the WHO [36]. Although ACTs have been key in reducing global malaria incidence and deaths, with a huge impact in sub-Saharan Africa, the emergence and spread of resistance in the Greater Mekong region has been identified [81], posing the need to define new combination therapies, for example, triple ACTs [82]. Defining new combination therapies at the nanoscale is key when designing the NC formulation [17].
The second element that has to be defined is the targeting molecules, if any, to be added to the NC. Targeted delivery has the ability to increase local drug doses where the parasite resides, thus increasing treatment efficacy [17]. Considering the malaria parasite's life cycle inside the human body and the available developed drugs, there are three different cell types that can be targeted: RBCs (and pRBCs), gametocytes, and hepatocytes [76]. Passive targeting can be achieved using conventional NCs for the accumulation of the active compound in the mononuclear phagocyte system [83], which may be useful for a slow release in blood, but not so much for targeting RBCs. Another way to achieve passive targeting contemplates using surface-modified, long-lasting NCs, for example, with polymers such as poly(ethylene) glycol (PEG) [84], which increases the half-life in blood of the system [85]. This allows longer interactions with target cells and potentially less toxic side effects, as studied with halofantrine loaded in poly(lactic acid)-PEG NCs [86].
In contrast, active targeting can be achieved through the conjugation of specific ligands at the surface of the NC. These ligands allow a specific release of the drug to the desired cell or tissue [76], which is key in the delivery of antimalarial drugs to pRBCs. Urbán et al. demonstrated for the first time that, indeed, specific targeting could completely discriminate between pRBCs and RBCs [87]. Different molecules have been studied for this purpose, such as peptides, antibodies, heparin, and more recently DNA aptamers. Antibodies have been widely used [74,88,89], and interestingly, the natural polysaccharide heparin was also found to be a good pRBC-targeting molecule [90,91]. Finally, DNA aptamers are gaining momentum as targeting elements because of their highly specific binding to pRBCs, as seen in different studies [92,93]. Most of these ligands can be easily conjugated onto different NCs, allowing them to deliver the drug at the desired target sites.
Finally, the third element to consider is the actual capsule of the NC formulation, which protects and carries the drug. Liposomes (Figure 2) have the ability to transport hydrophilic and lipophilic agents and can be easily tuned in terms of size, surface charge, and functionalization. Liposomized drugs are protected from degradation, have increased plasma solubility and reduced toxicity and side effects, and can be targeted to specific sites to improve their PK profiles [14,79,80]. The application of liposomes in the delivery of antimalarial drugs has been widely studied. Liposomes encapsulating primaquine showed specific targeting and delivery to the liver and reduced toxicity and increased efficacy of the drug [73]; other nanoliposomes loading monensin also offered improved antiplasmodial activity [94]; and immunoliposomes encapsulating chloroquine and fosmidomycin reduced parasitemia by 20% and exhibited 10-fold increased efficacy over the free drugs [95]. More recently, nanoliposomes containing artemisinin and artemisinin derivatives have been studied, showing extended blood circulation, improved half-life time of the drugs, and lower toxicity [96,97]. Despite these good properties observed in different studies, liposomes present some limitations too: they are relatively expensive to formulate, are not free of potential toxic effects upon drug release following degradation, and are not adequate for oral administration, which is the preferred route in the vast majority of malaria cases [14,80].
Polymeric NCs can offer an alternative to liposomes as drug delivery vehicles, providing good solubility, reduced toxicity, biocompatibility, protection of the drug, specific targeting, and potential for different administration routes. Although they have limitations in terms of drug loading capacity, polymeric nanocarriers for antimalarial drugs have been widely studied [14,79,80,98,99]. Poly(lactic-co-glycolic acid) NPs conjugated with monensin showed 10-fold increased efficacy when compared to the drug alone [100] and exhibited a synergistic effect with controlled release for combination therapy of quinoline, chloroquine, and certain antibiotics [101]. Poly(amidoamine)-based NCs carrying chloroquine exhibited increased efficacy over the free drug due to the ability to selectively target pRBCs [102]. Polyaspartamide-based NPs increased the effectiveness of their conjugated drugs and were proposed as potential candidates to overcome drug resistance [103]. Different NPs carrying artemisinin derivatives also offered improved antiplasmodial efficacy in terms of sustained and controlled release, amelioration of water solubility, and blood circulation half-life of the drugs [104][105][106][107][108].
Other delivery systems such as micelles, nanocapsules, dendrimers, and hydrogels have also been studied as nanoformulations to improve the therapeutic outcomes of conventional antimalarial drugs. Not only do the previously discussed nanoformulations have the ability to increase the efficacy of the different drugs, but more importantly, they can contribute to overcoming drug resistance mechanisms evolved by Plasmodium. Still, global research in this direction is scarce, with few, if any, of the novel antimalarial nanoformulations developed being on a clinical trial pipeline [14,79,80]. A thorough revision of the available nanocarriers for antimalarial drugs has been recently published [14], and we therefore do not extend on this subject.
Next Generation of Antimalarial Medicines
Its potential for improving diagnostic tools, vaccines, the efficacy of conventional therapeutic drugs, and control and prevention strategies has placed nanotechnology as a cornerstone in malaria elimination. Sequencing the parasite genome allowed the identification of new targets in Plasmodium, and state-of-the-art technologies led to the discovery of novel active molecules against different stages of the parasite [109,110]. To ensure that the final products meet medical needs in terms of dosage, safety, efficacy, stability, and activity against resistant strains, a common description known as the Target Product Profile (TPP) was proposed [111]. TPPs may change according to external parameters and context, but they offer a framework for the minimally acceptable profiles of future medicines. As these will most certainly be a combination of different active agents to minimize the evolution of resistance by Plasmodium, a definition of the drugs entering clinical development has been proposed as well, known as the Target Candidate Profile (TCP) [111,112]. A crucial role in coordinating and advancing proposals for novel TCPs and TPPs enlarging the malaria drug discovery portfolio is played by Medicines for Malaria Venture (MMV) [113], which has updated its descriptions according to current needs (Table 1). In the following sections, the ideal products for both treatment and prevention are outlined, highlighting the key points considering the target population to which they are addressed. Afterwards, the six most advanced next-generation antimalarial candidates in the MMV pipeline [115] are discussed. For each of them, general information and the status of the preclinical and clinical trials will be presented, and matching to the ideal product previously defined and possible limitations are assessed. Finally, the potential of nanomedicine to support the development of the ideal product is dealt with. To gather information and perspective for developing these sections, interviews and discussions with experts on malaria and nanomedicine were conducted (transcribed in italicized text).
Ideal Product Profile
Over the past years, the pipeline of drug development has seen a dramatic increase in terms of the number and diversity of molecules [114] with the discovery of new compounds having novel mechanisms of action [116]. This increase in potential antimalarial drugs, added to the global shift from malaria control to malaria eradication, stresses the need to define the ideal and minimally acceptable qualities of the new medicines as, according to experts, it is important " . . . not to lose sight of the fact that you end up making a product that can have a lot of use but not the one you wanted." In that sense, two different TPPs were proposed, defined, and updated over the years: medicines for patient treatment (TPP1) and for chemoprevention (TPP2).
TPP1: Case Management Medicines
The main objective of case management medicines is the treatment of acute, uncomplicated malaria in adults and children, as well as, ideally, severe malaria, although the main features may be slightly different in this second case. For that purpose, a combination of at least two molecules with demonstrated TCP1, TCP3, and TCP5 activity is desired, defining the ideal product as a single encounter radical cure and post-treatment prophylaxis (SER-CaP) [117]. Such a medicine would be optimal, as " . . . it will cure what you have, but what's more, by having certain levels in blood for a certain period of time, the mosquitoes that bite you will no longer be able to cause the disease while the drug effects last. Once you develop the disease you need something highly effective that can kill the parasite, with what the WHO defines as 95% of efficacy for any new antimalarials". Indeed, the aim of a case management drug is to quickly cure malaria, because then, the patient stops suffering. Therefore, these molecules should have TCP1 activity, i.e., clearance of the asexual blood-stage parasitemia [114], the stage of the cycle in which the parasite replicates in the blood stream and the symptoms begin: "The pre-erythrocytic phase is asymptomatic. When you have to treat people who are sick, they are in the erythrocytic phase, which is the one that you have to treat necessarily".
Apart from fast TCP1 activity, it is also essential to stop the transmission of malaria from the patient to the next mosquito and to limit the fast evolution of resistance by the parasite to the drugs: "If you use only one drug, the parasite quickly develops resistance. Therefore, you need drugs with different mechanisms of action to reduce the emergence of resistance". This is one of the reasons why there is a need for a molecule with TCP5 activity, with the ability to block transmission by targeting gametocytes. Moreover, "It would be ideal to have a drug that was also efficient against all human malarias and also against P. vivax hypnozoites. If we have a drug that can do it all, that would represent an important advantage". Indeed, that is the TCP3 activity demanded for all new drugs entering the MMV pipeline.
There is a general agreement that the best strategy to meet these requirements for new medicines is the combination of several different molecules, which, if possible, should be completely new compounds. The aim of combining two or more new molecules is to attack different stages of the Plasmodium life cycle without risking cross-resistance between them [114]. Following the identification of new molecules active against different Plasmodium stages, mainly with TCP1, TCP3, and TCP5 activity, the pipeline of medicines for the treatment of clinical malaria has been enlarged over the past years. Some of these new drugs are currently being tested in clinical trials in combination among themselves or with artemisinin derivatives [118].
Oral formulations will always be preferable; they should have a shelf life of at least 2 years, and the desired dosing regimen should be no longer than 3 days, ideally a single dose. Finally, the cost of the complete treatment course should not be higher than that of the current one, below 3 USD for adults and 1 USD for infants aged less than 2 years [114]. Other important aspects to consider are the PK profile of the drug and the dose needed to achieve the desired efficacy. Although new compounds are comparable to the current ACT treatment in these features, there are other mechanisms that can be explored to improve these properties, such as encapsulation in NCs and targeted drug delivery.
TPP2: Chemoprotective Medicines
In addition to their therapeutic activity, antimalarial medicines can be also used as preventive treatment to reduce the incidence of malaria. TPP2 refers to both chemoprotection, the protection of subjects entering a high endemic area (for example, migrants or tourists), and chemoprevention, i.e., medicines administered to populations living in high endemic areas, e.g., for SMC [114]. In terms of chemoprotection, ideal antimalarial drugs should have liver stage activity (TCP4) and the possible benefit of long-lasting asexual blood stage activity (TCP1). Though TCP4 compounds are less prone to generating resistance, the presence of TCP1 molecules increases this risk. As these medicines are intended to be administered to large populations, safety is a major concern, and adverse effects should be avoided. Frequent administration of chemoprotective medicines is needed, and most drug candidates support only a once-weekly administration, although they should ideally support a once-monthly administration [114].
Medicines used for chemoprevention, for example, in intermittent strategies such as SMC, aim at maintaining high antimalarial levels in blood during the periods of increased malaria risk, usually the rainy season. SP combined with amodiaquine has been used for SMC campaigns [119][120][121], although resistance has been detected in several regions [122]. The WHO recommends using different medicines for malaria treatment and prevention and urges the development of novel drugs specific for prevention [36]. This idea was pointed out by some experts as well: "Ideally you want to use for prevention drugs different from those that you are administering for treatment. There is a high risk of resistance evolving when prevention drugs are being used on a massive scale, e.g., on the whole population of newborns or through mass drug administration. Ideally, then, you do not want to compromise the drug that is being used as a first line of treatment for severe cases".
There are three key features that stand out for TPP2 medicines that have been identified in the literature and discussed during the interviews transcribed here. The first is related to safety: medicines aimed at chemoprevention and chemoprotection should have no drug-related severe adverse effects and minimal mild adverse effects [114]. Indeed, " . . . a very important issue is that chemopreventive drugs have to be even safer than those used for treatment. If you are ill, you assume a certain risk. It is like the story of the COVID vaccines with AstraZeneca: when feeling bad one does not care if the medicine may have side effects, because in the end what you want it for is to save your life. On the other hand, when taking a drug for prevention, this drug must be much safer: you cannot accept the risks because you are healthy". In other words, the risk a patient is willing to assume is lower in a preventive treatment: "It can be more difficult to convince a patient without any symptoms to take medicines".
The second key issue relates to the number of doses. Currently, recommended preventive treatments include three doses during a three-day course. Although theoretically, they present high clinical efficacy, the actual efficacy is much lower because of a lack of adherence to the whole regimen. Interviewed experts said that "regarding prevention, it would be acceptable to lose a little bit of efficacy and have only one dose. The efficacy that you lose because people do not take the second and third doses is enormous. If there is a drug with a long half-life after a single dose, it would be advisable to sacrifice a little bit of efficacy in exchange for that benefit". The desired TPP2 medicines should have a PK profile allowing protection for at least one week, and ideally one month, with a single dose [114].
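As a back-of-the-envelope illustration of why half-life dominates this requirement (illustrative numbers, not taken from the TPP documents): assuming first-order elimination with half-life t_1/2 and a minimum protective concentration C_min, the protection window after a single dose reaching an initial concentration C_0 is roughly
\[
t_{\mathrm{prot}} \approx t_{1/2}\,\log_2\!\left(\frac{C_0}{C_{\min}}\right),
\]
so a drug whose single dose reaches 16 times the protective level and whose half-life is about one week would protect for roughly four half-lives, i.e. about a month, matching the ideal profile described above.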
A third important parameter to consider is the administration route for the preventive medicines. The oral route is the preferred option, mainly because of acceptance by the population receiving it and the logistics for its administration: "Where malaria is a real problem, and where you have up to 5-6 cases of malaria per child per year, one needs something oral that is easy to distribute, which does not need to be kept in the fridge, and which is as simple as possible. And oral is always easier than intravenous".
TPP2 medicines should have a minimal essential efficacy: they should offer at least an 80% reduction in the cumulative incidence of malaria, although this reduction should ideally be 95% or greater. It must be noted that the efficacy obtained in clinical trials may differ from the actual efficacy of the implementation because of different factors such as lack of adherence [36,123]. Overall, it is important for the prevention to be done intermittently, allowing patients to develop natural immunity against malaria: "It is a balance: one needs to have the drug protecting people for as long as possible, but you cannot have the person permanently protected either. The development of natural immunity against malaria requires of the immune system to keep on fighting against infectious bites. Otherwise, the first episode of malaria after you stop being protected could be as serious as the first episode in a newborn or a traveler". The aim of preventive treatment is to reduce malaria incidence, but patients should develop immunity against malaria in order to attack and clear the parasite in case of contact.
Developing medicines with all the features discussed above is ambitious. Only a small number of molecules with potential TPP2 profile have entered the pipeline over the past years, although some of them have shown encouraging results [118].
Next Generation of Therapies Supported by MMV
MMV has been supporting, over the last decade, projects for the discovery of new antimalarials, bringing to market a wide range of ACTs [115]. However, resistance to most of the ACT partner drugs has been detected recently, posing a major threat to malaria control and elimination and stressing the need to develop new molecules with novel mechanisms of action. MMV, together with its partners, is playing a major role in widening the pipeline of antimalarial candidates. Different drugs and combinations thereof are currently being tested in preclinical and clinical studies. Below, the most advanced compounds in clinical phases (Figure 3) are outlined and compared to the ideal products discussed in the previous section. immune system to keep on fighting against infectious bites. Otherwise, the first episode of malaria after you stop being protected could be as serious as the first episode in a newborn or a traveler". The aim of preventive treatment is to reduce malaria incidence, but patients should develop immunity against malaria in order to attack and clear the parasite in case of contact.
KAF156/Lumefantrine
KAF156 (also known as ganaplacide) belongs to the imidazolopiperazine class, a novel antimalarial scaffold identified by a team from the Genomics Institute of the Novartis Research Foundation [124]. It showed enhanced activity against the parasite and better biocompatibility compared to other formulations and can be synthesized through a simple, high-yielding, and scalable process [125]. The first preclinical studies indicated that KAF156 possessed antimalarial activity against different Plasmodium stages, suggesting that it could be used to prevent infection, treat acute malaria, and block transmission [126]. In phase I clinical trials, it presented a good PK profile, supporting a once-daily regimen for 3 days and even a single-dose regimen, both well tolerated and without adverse effects in humans [127,128]. In addition, KAF156 showed a rapid clearance rate against P. falciparum and P. vivax, even for strains resistant to current antimalarials [129]. At the moment, this drug is in phase IIb clinical trials in combination with a new formulation of lumefantrine, which presents improved bioavailability and allows once-daily administration [130]. The objective of the study is to determine the lowest effective and tolerable dosing of the combined drug [131]. Although early results are encouraging, in vitro studies have demonstrated the possibility that Plasmodium evolves resistance against KAF156/lumefantrine [132].
Overall, KAF156 presents several interesting properties placing it as a potential SER-CaP candidate, with demonstrated TCP1, TCP3, and TCP5 as well as some TCP4 activity, the latter of which provides certain protection after treatment. Despite its multistage activity against Plasmodium, it showed slower parasitemia clearance when compared to ACTs. However, its potential to be used in single doses places it as a TPP1 case management drug. On the other hand, its known TCP4 activity also makes it a potential TPP2 prevention medicine, although the dose regimen in this case remains unclear. In conclusion, KAF156 has arisen as a candidate for the next generation of antimalarials, with improved properties compared to current ACTs except for the PK profile and parasitemia clearance time. More studies are needed to understand its mechanism of action and to further investigate efficacy and safety, as well as to fully characterize its activity when combined with lumefantrine.
Artefenomel/Ferroquine
Artefenomel (also known as OZ439), the result of a large partnership coordinated by MMV [133,134], is a novel synthetic peroxide antimalarial candidate that shares some of the chemical groups of highly effective artemisinin derivatives [135]. It completed preclinical studies, showing good antimalarial properties and minimal adverse effects in rats [134]. Furthermore, it also showed interesting results in phase I clinical trials in terms of safety [136] and in phase II clinical trials in terms of parasitemia clearance after a single dose [137,138]. In addition, it exhibited initial activity against artemisinin-resistant Plasmodium strains [139]. However, it has some drawbacks, such as embryotoxicity when used during pregnancy and uncertainty about avoiding artemisinin-based resistance, as the data obtained in the studies were limited [135].
At the same time, ferroquine was developed as a derivative of chloroquine, but with the capacity to avoid the currently widespread Plasmodium resistance to the latter [139]. One outstanding property of ferroquine is its long half-life, placing it as a candidate for single-dose therapy against malaria when used in combination [118]. It also showed activity against ACT-resistant malaria [140]. It was first tested together with artesunate, but a novel artefenomel-ferroquine combination is currently in the MMV portfolio and under phase IIb clinical trials to determine safety and efficacy as a single-dose treatment [141]. This combination is a potential TPP1 candidate, presenting exceptional antimalarial properties in terms of TCP1 and TCP3 activity. Moreover, it has an interesting PK profile, with long half-life and rapid action against all asexual stages, and is a potential single-dose treatment, which is one of the key points of TPP1 medicines. However, there are some concerns, as it fails to attack all gametocytes, showing only modest TCP5 activity.
Cipargamin
Cipargamin, formerly known as KAE609, an antimalarial candidate belonging to the spiroindolone family, was the outcome of a partnership among Novartis, MMV, Wellcome Trust, and the Swiss Tropical and Public Health Institute [113]. This distinct class of compounds has a novel mechanism of action against all parasitic blood stages [142]. Cipargamin was identified after screening and lead optimization of thousands of compounds [143], having neither adverse effects nor significant cytotoxicity in in vitro and in vivo preliminary studies. It demonstrated activity against P. falciparum, attacking different stages of the parasite [144], including gametocytes, placing it as a transmission-blocking antimalarial supporting single-or multiple-dose regimens [145].
Phase I clinical trials to assess cipargamin's PK profile (aimed at determining the effective dose causing 100% reduction in parasitemia) and safety and tolerability assays in humans were conducted, with the main concerns being hepatotoxicity and mild side effects [142,146,147]. Interestingly, in a phase II clinical study conducted in Thailand, cipargamin showed rapid efficacy in the treatment of P. falciparum and P. vivax malaria [147]. Currently, the first in-human study with an intravenous formulation of cipargamin has been completed, and a phase II study in severe malaria patients is underway [148].
Although promising in preclinical and clinical studies, cipargamin has some limitations. More studies to find a suitable companion drug for an effective combination therapy should be undertaken, since combination therapies have proven to be more effective in the prevention of resistance than single-drug therapies. Indeed, in vitro resistance to cipargamin was found in some studies, although other antimalarial drugs were effective against those resistant strains [144]. Because of its novel mechanism of action, cipargamin exhibited fast TCP1 activity, even against known resistant strains, plus TCP5 activity, showing its potential as transmission-blocking drug. Moreover, its PK profile could support a single-dose treatment. It therefore presents itself as an interesting TPP1 candidate. Nevertheless, it presents some concerns in terms of safety and lacks a suitable combination partner to date. Such a partner should present TCP3 activity to fully comply with TPP1 medicinal features.
MMV048
MMV048, known also as MMV390048, resulted from a collaboration between the Griffith Research Institute for Drug Discovery and a team from the University of Cape Town [124]. It is a novel antimalarial compound from the aminopyridine class that has entered phase I clinical studies in Africa [149]. MMV048 demonstrated activity against different asexual blood stages of the parasite, with its peak of efficacy against the schizont form. In addition, it was active against gametocytes, placing it as a multistage antimalarial candidate suitable for chemoprevention, treatment, and transmission-blocking interventions [124]. Preclinical analysis in animals confirmed its potential as a single-dose curative therapy, as a transmission-blocking drug, and as a prophylactic agent because of its long half-life when administered orally and its activity on liver stages, although its effect against late-stage hypnozoites remains unclear [150]. Three different phase I in-human studies were performed to assess the safety, tolerability, and PK profile of MMV048, with results suggesting that it was well tolerated, with the potential to be used as a preventive medicine and even as a single-dose case-management drug [151][152][153][154]. Currently, a phase IIa clinical trial to confirm its observed activity in malaria patients is being conducted in Ethiopia [123,155]. Despite good results in preclinical and early clinical studies, concerns regarding the precise safe dosage and potential combination drug partners remain key points for the future development of MMV048. Its teratogenicity has recently led to its discontinuation by MMV [156].
M5717
M5717, formerly known as DDD498, is the product of a collaboration between MMV and the Drug Discovery Unit from the University of Dundee. M5717 belongs to the family of quinoline-4-carboxamide scaffolds and acts by inhibiting P. falciparum elongation factor 2, a novel mechanism of action among antimalarials [123,157]. In preclinical studies, M5717 showed excellent blood stage activity without any clinically relevant safety concern, confirming its potential for single-dose treatment and once-weekly chemoprotection [158]. A phase I clinical trial to assess its safety, tolerability, PK profile, and parasite clearance in healthy subjects following infection with P. falciparum was conducted (results not posted yet) [159], and another phase I clinical trial to assess the preventive activity of a single oral dose is currently under way [160]. The results of these studies will hopefully confirm antimalarial activity in human subjects. With its novel mechanism of action, M5717 showed good TCP1 and TCP5 activity, presenting it as a TPP1 candidate. Moreover, its long half-life may allow for a single-dose regimen when combined with a fast-acting molecule. Finally, it can also be considered as a TPP2 candidate because of its activity against liverstage schizonts. More studies testing its efficacy, safety, and PK profile are needed, as well as an assessment of the likelihood of future resistance evolution and of potential combination partners.
P218
P218 is being developed through a collaboration between MMV and Janssen; it is a long-acting, single-dose injectable drug for prevention [115]. It showed activity against blood stages of Plasmodium, but it raised much interest as a chemopreventive agent because of its outstanding activity against P. falciparum schizonts in the liver stage [161]. A first-in-human phase I clinical trial was conducted, concluding that it had a favorable safety, tolerability, and PK profile, thus confirming its potential for malaria chemoprotection [162,163]. A second phase I clinical trial was also performed, with results confirming its safety and tolerability with excellent protective efficacy [161,164]. Despite these encouraging perspectives, the short half-life of P218 may be a barrier for its future development as a chemoprotective agent [161,162], which has led to this drug being discontinued by MMV [156].
Nanotechnology Applied to the Next Generation of Antimalarials
Nanomedicine has had a great impact in health care, leading to the development and improvement of tools for disease diagnostics and treatment. In malaria, nanomedicine has been applied mostly in the development of novel therapeutics, in which the active compounds are conventional drugs already in use combined with a delivery system [165]. This has allowed the optimization of certain features of the drug, such as its PK profile or efficacy. Moreover, in certain cases, the new medicine may overcome the drug resistance mechanisms evolved by Plasmodium. Nanotechnology could play a key role in the development of the next generation of antimalarial medicines, improving the features of existing and future drugs in order to meet the specific and desired TPP1 and TPP2 criteria. Interviewed experts said that "nanomedicine can provide different structures, which by being different from the encapsulated drug and having a different biological behavior, can be considered almost new drugs. However, the active principle, the mechanism of action that you transport, is that of the original drug". As a clear example, nanotechnology could be instrumental in bringing back to the clinical pipeline drugs that have been discontinued because of high toxicity or a short half-life, such as MMV048 and P218 [156].
One of the properties on which the delivery system may have the greatest impact is safety. When combining a drug with a specific NC, undesired interactions can be avoided, and thus, the toxicity of the drug can be lowered. Moreover, specific targeting provides large local concentrations of the drug near the parasite while keeping a low overall concentration in the organism. Although most of the newly discovered drugs present good safety profiles, adding NCs to the formulation makes them even safer. Another property that can be modulated by incorporating NCs is the PK profile: with specific targeting and controlled release, attractive PK profiles can be achieved that are adapted to the needs of TPP1 and TPP2. Drug-containing liposomes specifically targeted to pRBCs (Figure 4) can be a good choice for the encapsulation of antimalarials for intravenous administration in severe malaria cases [87,95,[166][167][168].
Some of the new antimalarials in the clinical pipeline described above have protonatable groups that can be used for their encapsulation in the liposomal lumen through pH gradient strategies, providing highly efficient drug loading [169][170][171]. Other compounds with stronger lipophilic characters are amenable to incorporation into the liposome lipid bilayer, allowing for the engineering of liposomal nanocarriers encapsulating two or more drugs, which can boost the prospects of combination therapies (Figure 5). Achieving oral formulations with NCs is key to developing new TPP1 and TPP2 medicines: "The main problem (a drug) will face is to be able to withstand the passage through the stomach. There it will find a very acidic and destructive environment. Furthermore, it should be absorbed by the intestinal system. There are oral formulations made with certain types of polysaccharides, or structures that resist passage through the stomach and allow a certain degree of intestinal absorption". Some zwitterionic nanoparticles based on poly(butyl methacrylate-co-morpholinoethyl sulfobetaine methacrylate) (PBMA-MESBMA) targeting pRBCs (Figure 6) have shown good activity in encapsulating certain antimalarial compounds, such as curcumin, helping them reach the blood circulation following oral administration [173]. Other polymers based on dendrimeric structures with the capacity for drug encapsulation have also provided good pRBC targeting [174,175]. Nanotechnology could also be applied in other administration routes, such as sprays containing the medicine, which can be internalized through the lungs, or via subcutaneous injection or transdermic pads, allowing slow release of the drug, which could be an alternative in certain cases.
Because the key component of NCs is the drug to be administered, it is essential that new candidates enter the pipeline of medicines against malaria, as these will be the active molecules of future delivery systems. It is necessary to fully characterize the mechanism of action of new drugs, e.g., defining which stages of the Plasmodium life cycle they interfere with, the interactions they have with the parasite, and other properties such as their PK profiles. Nanotechnology can also offer solutions for the discovery of new antimalarial drugs, for instance, with the development of highly sensitive, single-molecule methods for the identification of inhibitors of enzymes essential for malaria parasites. Using single-molecule force spectroscopy (SMFS), the interaction between the first enzyme of the 2-C-methyl-D-erythritol-4-phosphate pathway essential for the viability of the malaria parasite, 1-deoxy-D-xylulose 5-phosphate synthase (DXS), and its two substrates, pyruvate and glyceraldehyde-3-phosphate, was characterized (Figure 7) [176]. The DXS inhibitor fluoropyruvate was detected by such SMFS nanobiosensors at a concentration of 10 µM, which improved by two orders of magnitude the sensitivity of conventional enzyme activity assays. This result highlights the potential of individual enzyme-substrate handling for the biodiscovery of new antimalarial and antibiotic compounds present in natural product extracts at concentrations well below the detection limits of current enzymatic assays.
The impact that nanoscience can have on the development of the next generation of antimalarial treatments could be large. The main concern about nanotechnological solutions for malaria lies in the costs of the research itself and the final medicine production. Big pharma is not attracted to invest in pathologies such as malaria, as it may be less lucrative than investing in other types of diseases more prevalent in high-income regions, such as cancer or neurological disorders. However, nanotechnology can offer cost-affordable products with curative doses close to the WHO recommendations for endemic countries. To bypass the need for clinical trials, antimalarial drug nanocarriers active against the mosquito stages of Plasmodium can be designed to be directly delivered to Anopheles [177][178][179]. Eliminating malaria parasites through the delivery of targeted nanocarriers directly to the mosquito is the objective of the EuroNanoMed project NANOpheles [180], the final results of which will provide some interesting nanoformulations capable of blocking the sexual part of the pathogen's life cycle, which occurs in the insect vector. Although delivering drugs to mosquitoes presents some obvious challenges, the potential gains of such a strategy are evident in terms of the economic landscape of malaria.
Conclusions
Despite the recent approval for widespread use of the first malaria vaccine, its moderate efficacy (30%) and high price (5 USD per dose) [181] call for continued research efforts. Nanotechnology and nanomedicine can offer strategies to develop a successful toolkit for the fight against malaria, both by improving conventional actions and through developing new products. This could make the global goal of malaria control and elimination, and further eradication, feasible. However, to achieve this objective, significantly more investment by public institutions and industrial partners is needed in this field. The vast majority of nanomedical interventions have been focused on noncommunicable diseases; it is time to fully translate this knowledge into research against diseases of poverty and neglected diseases, which will stimulate the development of innovative approaches to diagnose, prevent, and treat them. Finally, the transfer of this knowledge to low- and mid-income countries is crucial to allow them to become active agents in the invention and use of these new tools.
"Medicine",
"Materials Science",
"Engineering"
] |
Gender effect in human–machine communication: a neurophysiological study
Purpose This study aimed to investigate the neural mechanism by which virtual chatbots' gender might influence users' usage intention and gender differences in human–machine communication. Approach Event-related potentials (ERPs) and subjective questionnaire methods were used to explore the usage intention of virtual chatbots, and statistical analysis was conducted through repeated measures ANOVA. Results/findings The findings of ERPs revealed that female virtual chatbots, compared to male virtual chatbots, evoked a larger amplitude of P100 and P200, implying a greater allocation of attentional resources toward female virtual chatbots. Considering participants' gender, the gender factors of virtual chatbots continued to influence N100, P100, and P200. Specifically, among female participants, female virtual chatbots induced a larger P100 and P200 amplitude than male virtual chatbots, indicating that female participants exhibited more attentional resources and positive emotions toward same-gender chatbots. Conversely, among male participants, male virtual chatbots induced a larger N100 amplitude than female virtual chatbots, indicating that male participants allocated more attentional resources toward male virtual chatbots. The results of the subjective questionnaire showed that regardless of participants' gender, users have a larger usage intention toward female virtual chatbots than male virtual chatbots. Value Our findings could provide designers with neurophysiological insights into designing better virtual chatbots that cater to users' psychological needs.
Introduction
Virtual chatbots, which are machine conversation systems equipped with chat interfaces, facilitate natural language interactions between humans and machines (Shawar and Atwell, 2005).With the application of more advanced and intelligent interfaces, chatbots enable users to engage in real-time communication and interaction with service providers (Xu et al., 2020;Adam et al., 2021).Since the invention of the world's first chatbot, ELIZA, by Joseph Weizenbaum in the 1960's, chatbots have revolutionized our modes of communication and found wide-ranging applications in fields, such as healthcare, ecommerce, retail, insurance, and customer service (Kasilingam, 2020;Mogaji et al., 2021).In the wake of the COVID-19 pandemic, the emergence of virtual chatbots has aided the logistics and supply chain services industry in maintaining communication with customers and providing uninterrupted services (Viola et al., 2021).This development has also propelled significant growth in the global virtual chatbot market.According to a report by Statista (2021), the market value of virtual chatbots is projected to reach $6.83 billion.
The constituent elements of a virtual chatbot include human figures (i.e., visual clues), names (i.e., identity cues), and chat dialogues (i.e., conversation clues; Go and Sundar, 2019).Among these, visual clues can affect consumers' intentions and decisions (Filieri et al., 2021).Particularly, gender, as part of the visual cues of virtual chatbots, significantly influences users' initial impressions, attitudes, and willingness to interact with chatbots (Calvo-Barajas et al., 2020;Zogaj et al., 2023).However, the role of users' own gender in this perception and interaction has also garnered attention.Studies have indicated that the design and presentation of gender can evoke emotional responses from users, thereby influencing their usage experience and satisfaction.Moreover, users' own gender can also influence their responses to gender cues presented by chatbots.
Therefore, this study aims to explore not only the impact of the presentation of gender on the design of virtual chatbots but also its interplay with users' gender. We can provide valuable insights into designing more humanized and effective chatbots by understanding the role of gender in human-machine communications.
The effects of virtual chatbots' gender
Gender is a central dimension of individuals' self-concept and identity, making it a key human attribute that significantly influences how people form connections with others (Freimuth and Hornstein, 1982).Gender-related social cues can minimize the need for extra information-seeking during interactions (Tay et al., 2014).The gender of robots fosters a sense of shared understanding between users and robots, leading to more natural and intuitive human-robot interactions (Powers et al., 2005;Eyssel and Hegel, 2012).Some studies have shown that the gender of chatbots may affect consumer behavior (Seo, 2022;Zogaj et al., 2023).
When exploring the role of gender in human-machine interaction, one aspect that cannot be overlooked is gender stereotypes.Individuals generally believe that women are more suitable for taking care of children or older adults and that male surgeons are more capable than female surgeons (Eagly, 2013;Ashton-James et al., 2019).This phenomenon reflects the existence of gender stereotypes.In reality, gender stereotypes are an enduring concept that emphasizes social consequences arising from gender cues (Tay et al., 2014).The study by Master et al. (2021) demonstrated gender stereotypes, indicating that girls are less interested than boys in computer science and engineering.These stereotypes can even extend to non-human agents (Tay et al., 2014).Apple's Siri and Amazon's Alexa typically use female voices (Chin and Robison, 2023), and Samsung's Sam is presented with a female avatar.Abdulquadri et al.'s (2021) also found that chatbots in emerging market banks are frequently branded and associated with female gender identification.On the contrary, Behrens et al. (2018) conducted a limited study, which indicated a tendency to trust male robots more than female robots.Likewise, Ahn et al. (2022) found that participants give higher competence scores to male rather than female AI agents.After reviewing existing studies on the effects of gender on human-machine communication, the perception of robots' gender remains controversial.Thus, this study aims to further explore how the gender of virtual chatbots influences users' usage intention.
This research not only discusses the impact of gender on virtual robots but also explores the influence of human gender differences on human-machine communication.Previous research has shown that human gender differences can influence perceptual experiences of things (Gefen and Straub, 1997;Qu and Guo, 2019;Denden et al., 2021).For instance, in a study by Nissen and Krampe (2021), an examination was carried out regarding how users consciously and unconsciously (neural) evaluate e-commerce websites.They found that unconscious effects influence gender-related differences in the perception of e-commerce websites.Huang and Mou (2021) discovered that, within current online travel agency websites, women exhibit more usability requirements than men.Relevant research frequently cites the similarity-attraction paradigm, which suggests that as the similarity to a target increases (i.e., similar attitudes, personality traits, or other attributes), the target's attractiveness also increases (Byrne, 1997).This study explores how men and women treat the gender of virtual chatbots in humanmachine interaction and also delves into the gender preferences of virtual chatbots among different gender users.
Event-related potentials' method of revealing usage intentions of virtual chatbots' gender
Currently, research on the relationship between the gender of virtual chatbots and users' usage intentions is limited. Most studies evaluating users' intentions regarding the gender of virtual chatbots rely on questionnaires and interviews, which may not completely capture users' true emotions and are susceptible to various influencing factors. The intention to use, as a latent psychological activity, is difficult to articulate verbally (Ding et al., 2016), while cognition and emotion, as products of brain neural activity (Kim et al., 2022), play an important role in usage intentions (Kang et al., 2015). Therefore, event-related potential (ERP) methods are needed to measure users' intrinsic intentions, including unconscious formations.
ERPs, arising from postsynaptic potentials during neurotransmission, travel passively through the brain and skull to the scalp, thereby contributing to a broader electroencephalogram (EEG; Luck et al., 2000).The EEG can measure the neurophysiological data of users experiencing the information system objectively and in real time (Liu et al., 2022).ERPs offer insights into participants' brain responses to certain cognitive events and, ultimately, into their psychological activities (Luck, 2014;Sun et al., 2022).Therefore, they can be employed to investigate neural activities related to virtual chatbots.Research has shown that some ERP components can effectively increase individuals' attentional resources and emotional arousal (Ding et al., 2016;Liu et al., 2022).Current ERP research on attention is primarily focused on three key components: N100, P100, and P200 (Luck et al., 2000;Ding et al., 2016;Cao et al., 2021). .
The N100 component
The N100 component, as a crucial constituent of ERPs, peaks at ∼100 ms post-stimulus presentation and manifests as a negativegoing potential (Li et al., 2022).It is related not only to physical features in the reflection of people's attention allocation at an early stage (Luck et al., 2000) but also to attractiveness in the reflection of the capacity of stimuli to attract and maintain the participant's attention (Carretié et al., 2004).Stimuli perceived to have high attractiveness evoked an increased amplitude of N100 (Righi et al., 2014).Many previous studies have reported that N100 reflects attention allocation and attractiveness (Luck et al., 2000;Li et al., 2022;Liu et al., 2022).For example, Liu et al. investigated the impact of users' first impressions of websites on their subsequent behaviors and attitudes, utilizing ERP techniques to analyze users' evaluative processing.The study found that webpages higher in complexity and order evoked larger N100 amplitudes than those that were lower in complexity and order (Liu et al., 2022).Guo et al. examined visual attention toward humanoid robot appearances and observed that users devoted greater attentional resources to their preferred robots than to non-preferred ones (Guo et al., 2022).
The P100 component
The P100 component (peaking around 90-100 ms poststimulus presentation), as an early ERP, exhibits sensitivity to attention allocation (Liu et al., 2022).When more attention is directed to a visual stimulus, the amplitude of the P100 component increases, providing a direct indicator of attention (Smith et al., 2003), and it is typically related to physical stimulus characteristics (Perri et al., 2019).The role of P100 in reflecting attention capture has been widely reported in previous research (Perri et al., 2019;Yen and Chiang, 2021).Yen and Chiang used ERPs to explore the relationship between trust and purchase intention in the context of chatbots (Yen and Chiang, 2021).In addition, in their study on the attention allocated to app icons, Liu et al. utilized the early P100 component and found that complex icons elicited a higher amplitude of P100 than simple icons (Liu et al., 2024).
The P200 component
P200, another positive-going potential that peaks around 200 ms post-stimulus presentation, is associated with the initial exogenous "attention capture" of the affective content of a stimulus (Carretié, 2014).Stimuli arousing positive or negative feelings elicited an increased P200 amplitude (Carretié et al., 2004;Liu et al., 2022).As the most conspicuous and widely used "attention" ERP, P200 was found in a number of attention-related studies (Carretié et al., 2004;Liu et al., 2022;Wang et al., 2023).For instance, Wang et al. (2023) used ERP techniques to explore consumers' emotional experiences and consumer trust when interacting with chatbots (vs.humans).The results revealed that the amplitudes of P200 were larger for chatbots than for humans.Guo et al. (2022) utilized 20 humanoid robot pictures as experiment stimuli to investigate users' preference for the appearance of humanoid robots.The research indicated that, in the early stage, the preferred humanoid robot appearances elicited larger P200 amplitudes than the non-preferred appearances.
Existing literature provides evidence for effectively applying ERPs, particularly N100, P100, and P200, in the research of the neural time course of attention to different stimuli.Hence, this study will first examine differences in the allocation of attentional resources among participants toward virtual chatbots of different genders.Second, it will investigate how the gender of the participants themselves contributes to these differences in attentional resources allocated to virtual chatbots of different genders.
Research hypotheses
Female roles in the service domain are more popular and predominant (Seo, 2022).However, the impact of gender on users' attention and willingness to use virtual service agents in human-machine communication remains unclear.The literature on gender stereotypes suggests that gender can serve as a direct categorization cue, influencing users' perceptions during service encounters (Macrae and Martin, 2007).Gendered service robots can evoke emotional responses such as attractiveness and likability (Macrae and Martin, 2007).The attractiveness bias effect suggests that perceived attractiveness tends to elicit positive evaluations due to attractiveness biases.Moreover, this effect is amplified when service roles are designated as female (Hosoda et al., 2003).Indeed, the research findings by Stroessner and Benitez on gendered humanoid robots support this notion, which revealed that female humanoid robots elicited more positive evaluations and a greater desire for engagement among consumers (Stroessner and Benitez, 2019).Therefore, the research hypothesizes the following: H1: Participants exhibit a higher willingness to use female virtual chatbots than male virtual chatbots; additionally, female virtual chatbots elicit greater N100, P100, and P200 amplitudes in the participants than male virtual chatbots.
Given the influence of gender congruence on interpersonal relationships in fields such as human resource management and organizational behavior (Crijns et al., 2017), the similarityattraction paradigm (Byrne, 1997) posits that attraction toward a target increases with greater similarity to the target, such as similarity in attitudes, personality traits, or other attributes.Individuals find it easier to engage with robots when they possess gender and personality characteristics (Vecchio and Bullis, 2001).In the field of human-machine communication, existing research (Tay et al., 2014) suggests that individuals are more likely to accept robots that align with their own gender and personality traits.This implies that as similarity increases, intention to use and accept robots in social contexts also increases.Therefore, this study hypothesizes the following: H2: Participants may be more inclined to use virtual chatbots of the same gender.Specifically, for female participants, there is a greater inclination to use female virtual chatbots than male virtual chatbots; conversely, for male participants, there is a greater inclination to use male virtual chatbots than female virtual chatbots.
Gender congruence may influence participants' attention.Previous research has shown that female participants are more likely to accept female robots than male participants; male participants show a higher acceptance level for male robots than for female robots (Nass et al., 1997).This finding aligns with the similarity-attraction paradigm (Byrne, 1997) and suggests that gender congruence can lead to users' positive perceptions of robots, psychological closeness, and potentially further increase attention allocation to robots (Eyssel et al., 2012).Given that attention-related EEG indicators such as N100, P100, and P200 can reflect users' attention allocation, this study hypothesizes the following: H3: For female participants, female virtual chatbots evoke larger N100, P100, and P200 amplitudes than male virtual chatbots; conversely, for male participants, male virtual chatbots evoked larger N100, P100, and P200 amplitudes than female virtual chatbots.
Research measures
Participants
A prior calculation was conducted to determine the required sample size using G * Power3.1 (Erdfelder et al., 1996): a minimum sample size of 12 was needed to detect a large effect size (f = 0.4) with a recommended statistical power β of 95% and an error probability α of 0.05.For the ERP experiment, we recruited 33 participants (16 female and 17 male participants) via WeChat, excluding two female participants due to power failure.Hence, the final analysis covered 31 participants (17 male and 14 female participants).They were all students from AHPU, of Han ethnicity, aged 19-28 years (M= 21.58, SD = 2.28); furthermore, they had normal/corrected vision, were righthanded, and remained medication-free for a week.Before the experiment, they ensured that they had sufficient sleep, had no neurological/mental disorders, and signed an informed consent form.They received RMB 70 as remuneration.The study was approved by the Ethics Committee of the Institute of Neuroscience and Cognitive Psychology at AHPU.
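To illustrate how such an a priori calculation can be reproduced, the following minimal Python sketch approximates the reported numbers with statsmodels; the two-group layout and the handling of Cohen's f only roughly follow G*Power's repeated-measures conventions, so the result is a cross-check rather than an exact replication.

```python
# Approximate a priori power analysis for the design described above,
# treated as an ANOVA-style F test: Cohen's f = 0.4, alpha = 0.05,
# power = 0.95 (values from the text; k_groups = 2 is an assumption).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.4,  # Cohen's f (large effect)
    alpha=0.05,
    power=0.95,
    k_groups=2,
)
print(f"Approximate required total sample size: {n_total:.1f}")
```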
Stimuli
A stimulus set consisting of six non-target stimulus images and two images of flowers as target pictures was assembled.The non-target stimulus images were sourced from the Vision China website (https://www.vcg.com), with three images featuring men and three images featuring women.We utilized Adobe Photoshop 2018 outlining and contouring tools to accentuate the lines and features of the characters in the images to achieve a humanoid robot effect.Subsequently, we adjusted the color, contrast, brightness, and saturation of the images in Photoshop to accentuate the mechanical feel.Next, while processing the characters' facial features, we conducted detailed refinement to make them appear more robot-like.Finally, we compared and finetuned the processed images with the current highly humanoid robots, Geminoid H1-4 and Kodomoroid, to make them appear closer to the target effect.All images were designed to have dimensions of 1920 * 1150 pixels.
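The stimulus preparation was done manually in Photoshop; purely as an illustration of the kind of batch adjustments described (contrast, brightness, saturation, resizing), a hedged Pillow sketch with assumed enhancement factors and hypothetical file paths could look like this:

```python
# Illustrative only (not the authors' Photoshop workflow): adjust contrast,
# brightness, and saturation, then resize to the stated 1920 x 1150 pixels.
from PIL import Image, ImageEnhance

def stylize_stimulus(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGB")
    img = ImageEnhance.Contrast(img).enhance(1.3)    # assumed factor
    img = ImageEnhance.Brightness(img).enhance(1.1)  # assumed factor
    img = ImageEnhance.Color(img).enhance(0.8)       # slight desaturation for a "mechanical" look
    img.resize((1920, 1150)).save(path_out)

stylize_stimulus("stimuli/raw/female_1.jpg", "stimuli/processed/female_1.jpg")  # hypothetical paths
```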
Procedure
This experiment was conducted in a professional ERP laboratory divided into a preparation room and an observation control room (Figure 1).The preparation room provided a suitable environment with sound insulation, suitable light, temperature and humidity, and minimal external interference.The observation control room allowed the experimenter to control the experiment process and observe any abnormal conditions in the participants.During the preparation phase, participants were instructed to wash and blow-dry their hair, which aimed to reduce impedance between the electrode and the scalp to ensure the accuracy and reliability of the EEG signal acquisition.They then entered the preparation room and sat on a chair ∼80 cm away from the computer screen, with their gaze fixed on the center of the screen.Following the international 10-20 system principles, the Cz electrode site was determined by the intersection of the line connecting bilateral earlobes and the line from the nasion to the inion.Subsequently, an appropriate electrode cap was worn.After the preparation was completed, the participants were informed about the instructions of the experiment.
At the beginning of the experiment, participants were informed that it was a scenario-based task.The scenario was as follows: Assuming your good friend's birthday is approaching, you want to buy a short-sleeved shirt as a birthday gift.However, you do not know your friend's clothing size; you only know their height and weight.Therefore, you open Taobao and browse a shortsleeved shirt design that you like.To obtain sizing information, you decide to consult with two types of virtual chatbots, William or Lily.Lily, a female virtual chatbot, exhibits pronounced feminine facial characteristics that mimic human features, while William, a male counterpart, possesses distinct male facial traits that emulate those of a real human.In the following questionnaire and EEG experiment, all male virtual chatbots are named William, and all female virtual chatbots are named Lily.
The EEG experiment was programmed and demonstrated by E-prime 3.0.The experiment utilized an oddball paradigm, and the stimulus materials included a virtual chatbot, a non-target stimulus (180 times), and a flower target stimulus (60 times).The stimuli were presented randomly, with each stimulus presented 30 times for a duration of 1,200 ms, with a "+" fixation point shown in the center of the screen for 500 ms between the two stimuli.There was one rest period in the middle of the experiment (Figure 2).Participants were instructed to remember the occurrence of all the target stimuli.
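The trial structure can be summarized in a short sketch that builds the randomized oddball sequence (image file names are placeholders; the actual experiment was programmed in E-Prime 3.0):

```python
# Minimal sketch of the oddball trial list described in the text:
# 6 non-target chatbot images x 30 repetitions (180 trials) plus
# 2 flower targets x 30 repetitions (60 trials), presented in random
# order with a 1,200 ms stimulus and a 500 ms fixation cross.
import random

non_targets = [f"chatbot_{i}.png" for i in range(1, 7)]   # hypothetical file names
targets = ["flower_1.png", "flower_2.png"]

trials = [(img, "non-target") for img in non_targets for _ in range(30)] + \
         [(img, "target") for img in targets for _ in range(30)]
random.shuffle(trials)

STIM_MS, FIX_MS = 1200, 500
for img, kind in trials[:5]:  # preview the first few trials of the schedule
    print(f"fixation '+' {FIX_MS} ms -> {img} ({kind}) {STIM_MS} ms")
```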
At the end of the experiment, participants were asked to rate their intention to use the virtual chatbots for six non-target stimuli. The usage intention was evaluated using three questions based on Agarwal and Karahanna's (2000) work: "I plan to use the virtual chatbots", "I intend to continue using the virtual chatbots", and "I expect to use the virtual chatbots in the future". The usage intention was rated on a 5-point Likert scale, with "1" meaning strongly disagree and "5" meaning strongly agree. After filling out the scale, the entire experiment concluded. The overall task required ∼40 min to complete, comprising a preparatory phase of 25 min, ∼10 min for the ERP task, and 5 min for the completion of the questionnaire.
EEG recording and analysis
EEG data were recorded using a Brain actiCHamp amplifier (Brain Products GmbH, Munich, Germany) and a cap with 64 Ag/AgCl electrodes following the international 10-20 system. Cz was used as the reference electrode. The EEG data were bandpass filtered with a range of 0.05-70 Hz and continuously sampled at 1,000 Hz. The impedance between the scalp and electrodes was kept below 5 kΩ.
Offline EEG data were analyzed using EEGLAB (version 2019.0), an open-source toolbox developed by Delorme and Makeig (2004).The reference electrode Cz was replaced with TP9 and TP10, and the sampling rate was reduced to 500 Hz.The bandpass filter was 0.1-30 Hz.Eye movement artifacts, muscle artifacts, and other artifacts were manually removed using independent component analysis.EEG signal segments exceeding 75 µV were automatically removed, and bad channels identified visually were rejected.The rejected channels were then reinserted using a spherical interpolation method.Then, EEG signals were computed using EEG epochs that started from 200 ms before the onset of the target stimulus to 1,000 ms after the stimulus' onset.Moreover, each epoch was baseline corrected using the signal during 200 ms, which preceded the onset of the stimulus.Finally, EEG signal values related to the gender of the virtual chatbots were superimposed and averaged to generate grand-averaged ERP waveforms and scalp topographies.
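The offline pipeline above was performed in EEGLAB; a roughly equivalent, hedged sketch in MNE-Python (the file name, event labels, bad-channel list, and ICA components to exclude are placeholders) is shown below.

```python
# Offline ERP preprocessing sketch with MNE-Python, mirroring the steps in
# the text: re-reference to TP9/TP10, downsample to 500 Hz, band-pass
# 0.1-30 Hz, ICA-based artifact removal, bad-channel interpolation,
# epoching -200..1000 ms around stimulus onset, baseline correction,
# and rejection of epochs exceeding +/-75 microvolts.
import mne

raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)  # hypothetical file
raw.set_eeg_reference(["TP9", "TP10"])
raw.resample(500)
raw.filter(l_freq=0.1, h_freq=30.0)

ica = mne.preprocessing.ICA(n_components=20, random_state=42)
ica.fit(raw)
ica.exclude = [0, 1]            # placeholder: ocular/muscle components chosen by inspection
ica.apply(raw)

raw.info["bads"] = []           # placeholder: visually identified bad channels
raw.interpolate_bads()          # spherical spline interpolation of bad channels

events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(
    raw, events, event_id=event_id,
    tmin=-0.2, tmax=1.0,
    baseline=(-0.2, 0.0),
    reject=dict(eeg=75e-6),     # reject epochs exceeding 75 microvolts
    preload=True,
)
evoked_female = epochs["female_chatbot"].average()  # assumes such an event label exists
evoked_male = epochs["male_chatbot"].average()
```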
Statistical analysis
Mean amplitude and usage intention values for ERPs and subjective evaluation data were subjected to repeated measures ANOVA. A 2 (virtual chatbot gender: male and female) × 4 (brain region: central-parietal, parietal, parietal-occipital, and occipital) repeated measures ANOVA was utilized in this study. Additionally, to investigate the influence of participants' gender on ERPs and usage intention data, we analyzed male/female participants on two factors, virtual chatbot gender and brain sites, respectively, using repeated measures ANOVA for the mean amplitude and usage intention. We used the Greenhouse-Geisser correction for any violation of the sphericity assumption (uncorrected df and corrected p-values were reported). The alpha level was fixed at 0.05. All statistical analyses were conducted using SPSS 22.0.
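For readers who want to reproduce this outside SPSS, a minimal sketch with pingouin on a long-format table of mean amplitudes follows; the file and column names are assumptions, not taken from the study's data files.

```python
# Two-way repeated-measures ANOVA (chatbot gender x brain region) with
# sphericity (Greenhouse-Geisser) correction, computed with pingouin.
import pandas as pd
import pingouin as pg

df = pd.read_csv("mean_amplitudes_long.csv")  # columns: subject, chatbot_gender, region, amplitude
aov = pg.rm_anova(
    data=df,
    dv="amplitude",
    within=["chatbot_gender", "region"],
    subject="subject",
    correction=True,   # report sphericity-corrected p-values
    detailed=True,
)
print(aov)
```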
Results
Subjective questionnaire
The reliability and validity of the scale were tested using SPSS 22.0.The results showed that Cronbach's alpha was 0.855, indicating very good internal consistency.The scale's validity was assessed by performing exploratory factor analysis.After the extraction of factors by using Promax rotation, the Kaiser-Meyer-Olkin (KMO) value (KMO = 0.608) was obtained.Bartlett's test of sphericity was extremely significant, suggesting the suitability of the data for factorization.
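A hedged sketch of these reliability and validity checks using open-source tools is shown below; the item-level data frame and its column layout are assumptions.

```python
# Cronbach's alpha (pingouin) plus KMO and Bartlett's sphericity test
# (factor_analyzer) for the usage-intention items.
import pandas as pd
import pingouin as pg
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("usage_intention_items.csv")  # one column per Likert item

alpha, _ = pg.cronbach_alpha(data=items)
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)

print(f"Cronbach's alpha = {alpha:.3f}")
print(f"Bartlett's test: chi2 = {chi_square:.2f}, p = {p_value:.4f}")
print(f"KMO = {kmo_model:.3f}")
```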
ERP analysis
As shown in Figure 3, we selected the P100 component in the time window of 90-105 ms in the parietal, parietal-occipital, and occipital sites. We chose the P200 component in the time window of 170-270 ms in the parietal, parietal-occipital, and occipital sites. We selected the N100 component in the time window of 100-110 ms in the central-parietal site. The 12 electrodes were divided into four subgroups: a central-parietal group (CP1, CPZ, and CP2), a parietal group (P3, PZ, and P4), a parietal-occipital group (PO3, POZ, and PO4), and an occipital group (O1, OZ, and O2).
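Once epochs are available (see the preprocessing sketch above), the per-condition mean amplitudes in these time windows and electrode groups can be extracted as follows; the epoch file name and the condition labels are assumptions.

```python
# Extract mean amplitudes in the stated time windows and electrode groups.
import mne

epochs = mne.read_epochs("subject01-epo.fif")  # hypothetical epochs file

windows = {"P100": (0.090, 0.105), "P200": (0.170, 0.270), "N100": (0.100, 0.110)}
groups = {
    "central-parietal": ["CP1", "CPZ", "CP2"],
    "parietal": ["P3", "PZ", "P4"],
    "parietal-occipital": ["PO3", "POZ", "PO4"],
    "occipital": ["O1", "OZ", "O2"],
}

def mean_amplitude(condition: str, component: str, group: str) -> float:
    tmin, tmax = windows[component]
    data = (epochs[condition]          # assumes event labels such as "female_chatbot"
            .copy()
            .pick(groups[group])       # channel-name case must match the montage
            .crop(tmin, tmax)
            .get_data())
    return float(data.mean())          # average over trials, channels, and time points

print(mean_amplitude("female_chatbot", "P200", "occipital"))
```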
P200
There was no significant difference among other brain regions except for the occipital lobe area. However, we observed a significant interaction effect between virtual chatbots' gender and brain sites [occipital: F(1.605, 49.769) = 3.471, p = 0.048, partial η² = 0.101]. The simple effect result indicated that female virtual chatbots evoked a larger amplitude than male virtual chatbots, and the difference was close to significant [O1: p = 0.136; Oz: p = 0.119] (as shown in Figure 8).
While female participants engaged with virtual chatbots, the virtual chatbots' gender × brain sites interaction was significant [occipital: F(1.000, 14.000) = 4.756, p = 0.047, partial η² = 0.254]. The pairwise comparison result indicated that in the occipital area, female virtual chatbots evoked larger P200 amplitudes than male virtual chatbots [p = 0.047] (as shown in Figure 9). When female participants interacted with virtual chatbots, no significant effect was found on other sites.
Discussion
The effect of virtual chatbots' gender on usage intention
In terms of usage intention, we found that virtual chatbots' gender significantly influenced users' usage intentions. The subjective evaluations indicated that, when the gender role of participants is not considered, people tend to prefer using female virtual chatbots. One plausible explanation is that virtual chatbots in the market are commonly associated with female voices, such as Apple's Siri and Amazon's Alexa (Fischer et al., 1997). This may lead to a more approachable quality associated with the female appearance, thereby increasing user acceptance and users' usage intentions. This result is consistent with gender stereotypes. Therefore, participants' intention to use female virtual chatbots was higher than their intention to use male virtual chatbots. The result supported H1.
When considering the gender factors of participants, the result suggested that among female participants, female virtual chatbots tend to have a higher usage intention than male virtual chatbots, whereas male participants tend to prefer using female virtual chatbots.The results from female participants are consistent with our hypothesis and the similarity-attraction paradigm (Byrne, 1997).A reasonable speculation is that women seek resonance and recognize the convenience of communication among the same gender.Therefore, female participants are more willing to use virtual chatbots of the same gender.Thus, H2 was confirmed.However, the results from male participants contradict our hypothesis and the similarity-attraction paradigm.One possible reason is that societal and cultural factors may influence the preferences of male participants, while individual differences may play a significant role among them.Therefore, male participants are more inclined to use female virtual chatbots.Thus, H2 was not supported.
The effect of virtual chatbots' gender on ERP components
Gender differences in cognition and their underlying brain mechanisms have attracted increasing attention (Ramos-Loyo et al., 2022). This study used ERP techniques to analyze the evaluative process of virtual chatbots' gender. We found that regardless of whether participants' gender is considered a factor, the gender of virtual chatbots has an effect on the amplitudes of N100, P100, and P200.
N100
N100 is sensitive to physical stimulus features and can reflect the allocation of attentional resources (Luck et al., 2000; Li et al., 2022). The results from male participants showed that, early in the time course, the gender of the virtual chatbots influenced the N100 amplitude, implying that the robot's gender attracted the user's attention. Specifically, male virtual chatbots elicited significantly larger N100 amplitudes than female virtual chatbots. This finding suggested that male participants would pay more attention to male virtual chatbots. Our results align with the similarity-attraction paradigm (Byrne, 1997), indicating that men are more inclined to engage with male virtual chatbots, thereby allocating greater attentional resources to them. This finding is consistent with those of previous studies. For instance, Bakar and McCann (2014) investigated human-human gender congruity and found that gender congruity between supervisors and subordinates results in higher job satisfaction and commitment of subordinates. Similarly, Pitardi et al.'s research in the field of human-machine communication further confirmed that gender congruity significantly enhances the positive effects of communication (Pitardi et al., 2023). Thus, when male participants encountered male virtual chatbots, the larger N100 amplitude reflected their ability to induce a positive attentional resource, potentially due to the perceived congruity in gender. This result supported H3.
P100
The present data displayed that early in the time course, female virtual chatbots enhanced a higher amplitude of P100 than male virtual chatbots in occipital and parietal-occipital areas, suggesting that the physical properties of virtual chatbots' gender can be detected.P100 is associated with the allocation of attentional resources in the early stage of processing visual stimuli (Nass et al., 1997).The results of this study suggested that female virtual chatbots required more attention than male virtual chatbots.One possible explanation for this observation could be that female virtual chatbots are perceived as more engaging or socially salient, thus capturing more attention at the early stages of visual processing.This could be attributed to cultural and societal biases that often associate femininity with warmth and social connection (Spence and Buckner, 2000).Therefore, the increased P100 amplitude may reflect a heightened sensitivity to, and the allocation of attention toward, female virtual chatbots.Thus, H1 was confirmed.
The results showed that among female participants, female virtual chatbots enhanced a higher amplitude of P100 in occipital and parietal-occipital areas than male virtual chatbots during the early processing stage.This suggested that female participants allocated more attention and cognitive resources to female virtual chatbots.One possible explanation is the concept of the similarityattraction paradigm, where individuals tend to be attracted to and engage more with stimuli that are similar to themselves (Byrne, 1997).Our findings align with Eyssel et al.'s research (Eyssel et al., 2012), which revealed that participants developed more favorable impressions and reported greater psychological closeness when interacting with virtual chatbots of the same gender.This underscores the importance of similarity in fostering positive human-machine communication.Therefore, compared to male virtual chatbots, female participants would allocate more attention to female virtual chatbots.This result supported H3.
P200
P200 is connected to research on emotion and attentional arousal (… et al., 2004; Liu et al., 2022). Our study revealed that female virtual chatbots induced a larger P200 amplitude in the occipital areas than male virtual chatbots, suggesting that female virtual chatbots attracted more attentional resources and generated more emotional arousal. A reasonable explanation is that the physical attributes of female virtual chatbots, owing to their perceived good human nature (e.g., friendly, warm, and trusting), may be detected more easily than those of male virtual chatbots (Borau et al., 2021). Previous studies have found that computers with female voices are perceived as more attractive (Lee et al., 2000), and recent research indicates that female systems elicit feelings of comfort, confidence, and reduced tension among users (Niculescu et al., 2010). Moreover, participants tended to perceive robots with a female body shape as more communal and to place more cognitive and affective trust in them than in those with a male body shape (Bernotat et al., 2021). These findings are in line with those of our study. Consequently, female virtual chatbots generated more positive emotions than male virtual chatbots. This result supported H1.
The data from female participants showed that female virtual chatbots evoked a larger P200 amplitude than male virtual chatbots in the occipital areas. This finding suggested that female participants allocated more attentional resources when interacting with virtual chatbots of the same gender. The result not only aligns with the similarity-attraction paradigm (Byrne, 1997) but also validates gender role identification. One plausible explanation is that when female participants interact with female virtual chatbots, they perceive an inherent consistency or similarity in gender, fostering emotional connection and trust. This emotional connection, in turn, makes female participants more likely to view female chatbots as approachable and engaging partners. Hence, the result suggested that female participants, by matching the gender role expectations of the female virtual chatbots, allocated more attentional resources and exhibited positive emotional responses toward female virtual chatbots. Thus, H3 was supported.
FIGURE. ERP laboratory environment.
FIGURE. Flowchart for the ERP experiment with virtual chatbots.
FIGURE. The grand-averaged waveform for virtual chatbots. (A) Shows the waveform without distinguishing the participants, (B) shows the waveform for female participants, and (C) shows the waveform for male participants.
FIGURE. Mean amplitudes of P in the occipital area. * indicates statistical significance.
FIGURE. Mean amplitudes of P in female participants. | 6,928.6 | 2024-07-10T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
SCENE CLASSIFICATION BASED ON THE SEMANTIC-FEATURE FUSION FULLY SPARSE TOPIC MODEL FOR HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGERY
Topic modeling has become an increasingly mature method to bridge the semantic gap between low-level features and high-level semantic information. However, with more and more high spatial resolution (HSR) images to deal with, the conventional probabilistic topic model (PTM) usually represents the images with a dense semantic representation, which consumes more time and requires more storage space. In addition, due to the complex spectral and spatial information, a combination of multiple complementary features has proved to be an effective strategy to improve the performance of HSR image scene classification. It should be noted, however, that how the distinct features are fused to fully describe the challenging HSR images is a critical factor for scene classification. In this paper, a semantic-feature fusion fully sparse topic model (SFF-FSTM) is proposed for HSR imagery scene classification. In SFF-FSTM, three heterogeneous features, namely the mean and standard deviation based spectral feature, the wavelet-based texture feature, and the dense scale-invariant feature transform (SIFT) based structural feature, are effectively fused at the latent semantic level. The combination of the multiple semantic-feature fusion strategy and the sparsity-based FSTM is able to provide adequate feature representations, and can achieve comparable performance with limited training samples. Experimental results on the UC Merced dataset and the Google dataset of SIRI-WHU demonstrate that the proposed method can improve the performance of scene classification compared with other scene classification methods for HSR imagery.
INTRODUCTION
The rapid development of earth observation and remote sensing techniques has led to a large amount of high spatial resolution (HSR) images with abundant spatial and structural information. Some of the most popular approaches are the object-based and contextual-based methods, which can achieve precise object recognition (Bellens et al., 2008; Rizvi and Mohan, 2011; Tilton et al., 2012). Nevertheless, HSR scenes often contain diverse land-cover objects, such as road, lawn, and building. The same type of objects may vary in their spectral or structural low-level features. Different distributions of the same land-cover objects may produce different types of semantic scenes, and the same type of scene may consist of different types of simple objects. Methods based on low-level features are therefore unable to capture the complex semantic concepts of different scene images. This leads to the divergence between the low-level data and the high-level semantic information, namely the "semantic gap" (Bratasanu et al., 2011). It is a big challenge to bridge the semantic gap for HSR imagery. Scene classification, which can automatically label an image from a set of semantic categories (Bosch et al., 2007), has been receiving more and more attention as an effective method (Yang and Newsam, 2010; Cheriyadat, 2014; Zhao et al., 2013; Zhao et al., 2016b; Zhao et al., 2016c). Among the various scene classification methods, the bag-of-visual-words (BOVW) model has been successfully applied to capture the high-level semantics of HSR scenes without the object recognition required by object-based scene classification methods (Zhao et al., 2014).
Based on the BOVW model, the probabilistic topic model (PTM) represents the scenes as a random mixture of visual words. The commonly used PTMs, such as probabilistic latent semantic analysis (PLSA) (Hofmann, 2001) and latent Dirichlet allocation (LDA) (Blei et al., 2003), mine the latent topics from the scenes and have been employed to address the challenges of HSR image scene classification (Bosch et al., 2008; Liénou et al., 2010; Văduva et al., 2013).
To acquire latent semantics, the feature descriptors captured from HSR images are critical for PTM. In general, a single feature is employed, which is inadequate (Zhong et al., 2015). Multi-feature based scene classification methods have also been proposed (Shao et al., 2013; Zheng et al., 2013; Tokarczyk et al., 2015). Considering the distinct characteristics of HSR images, the features should be carefully designed to capture the abundant spectral and complex structural information. In addition, the different features are usually fused before k-means clustering, thus acquiring one dictionary and one topic space for all the features. This leads to mutual interference between the different features (Zhong et al., 2015), and is unable to circumvent the inadequate clustering capacity of the hard-assignment based k-means clustering, which is not efficient in a high-dimensional feature space. With the development of PTM for HSR scene classification, there are two issues that should be considered. The first is how to infer sparser latent representations of the HSR images. The second is how to design more efficient inference algorithms for PTM. In order to achieve good performance for a huge volume of HSR image scenes, we may have to increase the number of topics to obtain more semantic information. However, for instance, the distribution of the topic variable $\theta$ in the LDA model is drawn from a Dirichlet distribution with parameter $\alpha$, and every component of $\theta$ is greater than 0 no matter how $\alpha$ varies (Blei et al., 2003). This leads to a dense topic representation of the HSR images, which requires more storage and is time-consuming. Another approach is to impose sparsity constraints on the topics by changing the objective function of the model (Shashanka et al., 2007; Zhu and Xing, 2011). But model selection then has to be performed over the auxiliary parameters of the regularization terms, which is problematic when dealing with large HSR image datasets. Fivefold cross-validation is often performed on the experimental dataset to guarantee enough training samples for classification accuracy (Yang and Newsam, 2010; Cheriyadat, 2014). Reducing the number of training samples would be more practical.
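The density argument above can be seen numerically; the following minimal sketch (assuming a symmetric Dirichlet with a small illustrative parameter, not values from the paper) shows that every component of a Dirichlet draw is strictly positive, so an LDA topic proportion vector is never exactly sparse.

```python
# Minimal numeric sketch of the density argument (assumption: symmetric
# Dirichlet with alpha = 0.1; the paper's parameters differ). Every
# component of a Dirichlet draw is strictly positive, so an LDA
# topic-proportion vector is dense, never exactly sparse.
import numpy as np

rng = np.random.default_rng(0)
K = 50                                      # number of topics
theta = rng.dirichlet(np.full(K, 0.1))

print(np.count_nonzero(theta))              # 50: all entries are > 0
print(int(np.sum(theta > 1e-3)))            # only a handful carry real mass
```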
Inspired by the aforementioned work, we present a semantic-feature fusion fully sparse topic model (SFF-FSTM) for HSR image scene classification. The fully sparse topic model (FSTM), proposed by Than and Ho (2012) for modeling large collections of documents, is utilized to model HSR imagery for the following reason. Based on the similarity between documents and images, FSTM is able to remove the redundant information and infer sparse semantic representations in a shorter inference time. To acquire sparse latent topics, we intend to use a limited number of images as training samples, which is more in line with practical application. To the best of our knowledge, no PTM-based scene classification method with limited training samples has been developed to date. However, FSTM alone is unable to fully exploit the information provided by the limited training samples with sparse representations. Hence, in SFF-FSTM, three complementary features are selected to describe HSR images: the dense scale-invariant feature transform (SIFT) feature is chosen as the structural feature, the mean and standard deviation as the spectral feature, and the wavelet feature as the texture feature. Based on this effective feature description for HSR imagery, a semantic-feature fusion strategy is designed to fuse the three features after semantic mining with three distinct topic spaces. This provides fully mined semantic information of the HSR imagery from three complementary perspectives, with no mutual interference or clustering impact. The incorporation of a support vector machine (SVM) with a histogram intersection kernel (HIK) is effective in increasing the discrimination between different scenes. The combination of the multiple semantic-feature fusion strategy and the sparse representation based FSTM is able to trade off sparsity, the quality of the inferred semantic information, and the inference time, and presents a performance comparable with the existing relevant methods.
The rest of the paper is organized as follows. The next section details the procedure of the proposed SFF-FSTM for HSR image scene classification. A description of the experimental datasets and an analysis of the experimental results are presented in Section 3. Conclusions are discussed in the last section.
Probabilistic Topic model
Based on "bag-of-words" assumption, the generative probabilistic model of PTM, including PLSA, LDA and FSTM, are applied to HSR images by utilizing a visual analog of a word, acquired by vector quantizing spectral, texture, and structural feature like region descriptors (Bosch et al., 2008).
Each image can then be represented as a set of visual words from the visual dictionary. By introducing latent topics characterized by a distribution over words, the PTMs model the images as random mixtures over the latent variable space.
Among the various PTMs, the classical PLSA model was proposed by Hofmann (2001). The mixing weight $p(z_k|d_i)$ is the semantic information that PTM mines from the visual words of HSR images. However, PLSA lacks a probability function to describe the images. This makes PLSA unable to assign probability to images outside the training samples, and the number of model parameters grows linearly with the size of the image dataset.
Hence, in 2003, Blei proposed LDA, which introduces a Dirichlet distribution over the topic mixture of the PLSA model. The k-dimensional random variable $\theta$ follows a Dirichlet distribution with parameter $\alpha$, where k is assumed known and fixed. The LDA model provides a probability function for the discrete latent topics in PLSA, making it a complete PTM. However, every component of the Dirichlet variable $\theta$ is greater than 0 no matter how $\alpha$ varies. The latent representation of HSR imagery by LDA is therefore often dense when a large number of images are modeled, requiring huge memory for storage. In addition, the inference algorithm of the LDA model is complex and time-consuming.
In 2012, Than and Ho proposed FSTM for modeling large collections of documents and applied it to supervised dimension reduction. FSTM uses the Frank-Wolfe sparse approximation algorithm as its inference algorithm, which follows a greedy approach and has been proven to converge at a linear rate to the optimal solutions. In FSTM, the latent topic proportion $\theta$ after $l$ iterations is a convex combination of at most $l+1$ vertices of the topic simplex, which implies the implicit sparsity constraint $\|\theta\|_0 \le l+1$. Hence, we choose FSTM, with its sparse solutions, to model the HSR imagery in this paper.
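The Frank-Wolfe inference at the heart of FSTM can be sketched as follows; this is a minimal illustration with toy data, using a standard diminishing step size in place of FSTM's exact line search, and is not the authors' implementation. After l iterations the returned topic proportion has at most l + 1 non-zero entries, which is the sparsity property SFF-FSTM exploits.

```python
# Minimal Frank-Wolfe sketch of FSTM-style inference (illustration only):
# maximize sum_j d_j * log(x_j) with x = theta @ beta over the topic
# simplex. A diminishing step replaces FSTM's exact line search.
import numpy as np

def fstm_infer(d, beta, n_iter=10, eps=1e-12):
    """d: (V,) word counts of one image; beta: (K, V) topic-word matrix."""
    K, V = beta.shape
    theta = np.zeros(K)
    k0 = int(np.argmax(np.log(beta + eps) @ d))      # best single topic
    theta[k0] = 1.0
    x = beta[k0].copy()                              # x = theta @ beta
    for t in range(n_iter):
        grad = beta @ (d / (x + eps))                # gradient w.r.t. theta
        i = int(np.argmax(grad))                     # Frank-Wolfe vertex
        alpha = 2.0 / (t + 3)                        # diminishing step size
        theta *= 1 - alpha
        theta[i] += alpha
        x = (1 - alpha) * x + alpha * beta[i]
    return theta

rng = np.random.default_rng(1)
beta = rng.dirichlet(np.ones(200), size=30)          # 30 toy topics, 200 words
d = rng.integers(0, 5, size=200).astype(float)
print(np.count_nonzero(fstm_infer(d, beta)))         # sparse: <= 11 non-zeros
```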
Complementary feature description
As can be seen from Fig. 1(a), it is difficult to distinguish the parking lot from the harbor by either the structural characteristics or the textural ones. However, due to the spectral difference between ocean and road, the spectral characteristics play an important role. In Fig. 1(b), the storage tanks and dense residential scenes mainly differ in their structural characteristics. In addition, it can be seen from Fig. 1(c) that the forest and agriculture scenes are similar in their spectral and structural characteristics, but differ greatly in their textural information from the global perspective. Considering the abundant spectral characteristics and the complex spatial arrangement of HSR imagery, three complementary features are designed for the HSR imagery scene classification task. Before feature descriptor extraction, the images are split into image patches using a uniform grid sampling method.
Spectral feature:
The spectral feature reflects the attributes that constitute the ground components and structures.
The first-order statistic of the mean value and the second-order statistic of the standard deviation of the image patches are calculated in each spectral channel as the spectral feature:

$$\mathrm{mean}_j = \frac{1}{n}\sum_{i=1}^{n} v_{ij} \quad (2)$$

$$\mathrm{std}_j = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(v_{ij} - \mathrm{mean}_j\right)^2} \quad (3)$$

In (2) and (3), n is the total number of image pixels in the sampled patch, and $v_{ij}$ denotes the j-th band value of the i-th pixel in the patch. In this way, the mean ($\mathrm{mean}_j$) and standard deviation ($\mathrm{std}_j$) of the spectral vector of the patch are acquired.
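A direct sketch of the descriptor defined by (2) and (3) follows; the patch size and RGB band count are illustrative assumptions, and any multispectral patch shaped (height, width, bands) would do.

```python
# Sketch of Eqs. (2)-(3): per-band mean and standard deviation of one
# image patch as its spectral descriptor. The 8 x 8 RGB patch is a
# placeholder example.
import numpy as np

def spectral_feature(patch):
    pixels = patch.reshape(-1, patch.shape[-1]).astype(float)  # n x bands
    return np.concatenate([pixels.mean(axis=0),                # mean_j
                           pixels.std(axis=0)])                # std_j

patch = np.random.randint(0, 256, size=(8, 8, 3))
print(spectral_feature(patch).shape)                           # (6,)
```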
Texture feature:
The texture feature contains information about the spatial distribution of tonal variations within a band (Haralick et al., 1973), capturing both macroscopic properties and fine structure. Wavelet transforms enable the decomposition of the image into different frequency sub-bands, similar to the way the human visual system operates (Huang and Aviyente, 2008). This makes them especially suitable for image classification, and multilevel 2-D wavelet decomposition is utilized to capture the texture feature from the HSR images. The level of the wavelet decomposition is optimally set to 3.
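A minimal sketch of such a texture descriptor is given below; the choice of the 'db1' wavelet and of the per-sub-band statistics (mean absolute value and standard deviation) are my assumptions, as the paper fixes only the decomposition level.

```python
# Sketch of a 3-level 2-D wavelet texture descriptor (assumed 'db1'
# wavelet and per-sub-band statistics; only level = 3 comes from the text).
import numpy as np
import pywt

def wavelet_texture(gray_patch, level=3, wavelet="db1"):
    coeffs = pywt.wavedec2(gray_patch.astype(float), wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    return np.array([s for b in bands for s in (np.abs(b).mean(), b.std())])

patch = np.random.rand(8, 8)
print(wavelet_texture(patch).shape)   # (20,): 10 sub-bands x 2 statistics
```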
Structural feature:
The SIFT feature (Lowe, 2004) has been widely applied in image analysis since it is robust to the addition of noise, affine transformation, and changes in illumination, and it compensates for the deficiency of the spectral feature for HSR imagery. Each image patch is split into 4 × 4 neighbourhood regions, and a gradient orientation histogram with eight directions is counted in each region. Hence, a gray dense SIFT descriptor with 128 dimensions is extracted as the structural feature. This was inspired by previous work, in which dense features performed better for scene classification (Li and Perona, 2006), and by Lowe (2004), who suggests that a 4 × 4 × 8 = 128 dimensional vector is optimal for describing the keypoint descriptor.
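A dense SIFT extraction of this kind can be sketched with OpenCV by placing keypoints on a uniform grid; the grid step and keypoint size below are illustrative, and the original work likely used a different SIFT implementation.

```python
# Sketch of dense SIFT on a uniform keypoint grid via OpenCV (assumption:
# the original likely used a VLFeat-style dense SIFT; step and keypoint
# size here are placeholders).
import cv2
import numpy as np

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # gray tile
step = 4
kps = [cv2.KeyPoint(float(x), float(y), 8.0)                  # 8-px patches
       for y in range(step, img.shape[0] - step, step)
       for x in range(step, img.shape[1] - step, step)]
sift = cv2.SIFT_create()
_, desc = sift.compute(img, kps)
print(desc.shape)   # (n_patches, 128): 4 x 4 regions x 8 orientations
```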
Multiple Semantic-feature Fusion Fully Sparse Topic Model for HSR Imagery with Limited Training Samples
Previous studies have shown that a uniform grid sampling method can be more effective than other sampling methods such as random sampling (Li and Perona, 2006). In this way, the image patches acquired by uniformly sampling the HSR images are digitized by the spectral, texture, and SIFT features, and three types of feature descriptors, D1, D2, and D3, are obtained. However, under the influence of illumination, rotation, and scale variation, the same visual word in different images may be endowed with various feature values. k-means clustering is applied to quantize the feature descriptors to generate a 1-D frequency histogram, so that image patches with similar feature values correspond to the same visual word. By the statistical analysis of the frequency of each visual word, we obtain the corresponding visual dictionary.
Conventional methods usually directly concatenate the three types of feature descriptors to make up a long feature vector. The long vector is then quantized by k-means clustering to generate a single 1-D histogram for all the features. As the features interfere with each other during clustering, this 1-D histogram is unable to fully describe the HSR imagery. In SFF-FSTM, the spectral, texture, and SIFT features are instead quantized separately by the k-means clustering algorithm to acquire three distinct 1-D histograms, H1, H2, and H3. By introducing probability theory, each element of the 1-D histograms for SFF-FSTM is transformed into a word occurrence probability. To mine the most discriminative semantic feature, which is also the core idea of PTM, the three histograms are mined separately by SFF-FSTM to generate three distinct latent topic spaces. This is different from the conventional strategies, which fuse the three histograms before topic modeling, so that only one latent topic space is obtained, which is inadequate.
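The separate quantization step can be sketched as follows; the dictionary sizes and the random descriptors are placeholders rather than the paper's tuned values.

```python
# Sketch of separate quantization: each feature type gets its own k-means
# dictionary and its own normalized 1-D histogram of word occurrence
# probabilities. All sizes here are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def bovw_histogram(descriptors, kmeans):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
D1, D2, D3 = (rng.random((500, d)) for d in (6, 20, 128))  # one image's patches
km1, km2, km3 = (KMeans(n_clusters=k, n_init=3).fit(D)
                 for k, D in ((100, D1), (100, D2), (300, D3)))
H1, H2, H3 = (bovw_histogram(D, km)
              for D, km in ((D1, km1), (D2, km2), (D3, km3)))
```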
Specifically, SFF-FSTM chooses a k-dimensional latent topic variable for each histogram. Suppose there are N images; then for H1, H2, and H3, K1, K2, and K3 topics are assumed to compose the images, respectively. The latent semantics of H1, H2, and H3, denoted as $\theta_1$, $\theta_2$, and $\theta_3$, respectively, are inferred with the Frank-Wolfe algorithm. The semantic features $\theta_1$, $\theta_2$, and $\theta_3$ of all the HSR images are then fused at the semantic level, obtaining the final multiple semantic feature $F = [\theta_1, \theta_2, \theta_3]$ with a sparse size. Finally, the fused feature F, with its optimal discriminative characteristics, is classified by SVM classifiers with a HIK to predict the scene label. The HIK measures the degree of similarity between two histograms, in order to deal with scale changes, and has been applied to image classification using color histogram features (Barla et al., 2003). Given the representation vectors of the M images, the HIK is calculated according to (7). In this way, SFF-FSTM provides a complementary feature description, an effective image representation strategy, and an adequate topic modeling procedure for HSR image scene classification, even with limited training samples, as tested in the experimental section. The flowchart of HSR image scene classification based on SFF-FSTM is shown in Fig. 2.
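A minimal sketch of the fusion and classification stage is given below; the topic counts and labels are toy values, and only the histogram intersection kernel and the precomputed-kernel SVM follow the description above.

```python
# Sketch of semantic-level fusion + HIK SVM. Topic counts and labels are
# placeholders; HIK: K[i, j] = sum_k min(A[i, k], B[j, k]).
import numpy as np
from sklearn.svm import SVC

def hik(A, B):
    """Histogram intersection kernel between row-wise feature matrices."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
n = 60
theta1, theta2, theta3 = (rng.dirichlet(np.ones(K), size=n)
                          for K in (30, 30, 50))
F = np.hstack([theta1, theta2, theta3])     # fused semantic feature
y = rng.integers(0, 3, size=n)              # toy scene labels

clf = SVC(kernel="precomputed").fit(hik(F, F), y)
pred = clf.predict(hik(F, F))               # K(test, train) at test time
```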
Experimental Design
The commonly used 21-class UC Merced dataset and a 12-class Google dataset of SIRI-WHU were evaluated to test the performance of SFF-FSTM. In the experiments, the images were uniformly sampled with a patch size and spacing of 8 and 4 pixels, respectively. To test the stability of the proposed SFF-FSTM, the different methods were executed 100 times with a random selection of training samples, to obtain convincing results for the two datasets. k-means clustering with the Euclidean distance measure over the image patches from the training set was employed to construct the visual dictionary, which was the set of V visual words. K topics were selected for FSTM. The visual word number V and the topic number K were the two free parameters in our method. Taking the computational complexity and the classification accuracy into consideration, V and K were optimally set as in Table 1 and Table 3 for the different feature strategies with the two datasets. In Tables 1, 2, 3, and 4, SPECTRAL, TEXTURE, and SIFT denote scene classification utilizing the mean and standard deviation based spectral feature, the wavelet-based texture feature, and the SIFT-based structural feature, respectively. The proposed method that fuses the multiple semantic features at the latent topic level is referred to as the SFF strategy.
To further evaluate the performance of SFF-FSTM, the experimental results utilizing SPM (Lazebnik et al., 2006), PLSA (Bosch et al., 2008), and LDA (Liénou et al., 2010), as well as the experimental results on the UC Merced dataset published in the latest papers by Yang and Newsam (2010), Cheriyadat (2014), Chen and Tian (2015), Mekhalfi et al. (2015), and Zhao et al. (2016a), are shown for comparison. SPM employed dense gray SIFT, and the spatial pyramid layer was optimally selected as one. In addition, the experimental results on the Google dataset of SIRI-WHU utilizing SPM (Lazebnik et al., 2006), PLSA (Bosch et al., 2008), and LDA (Liénou et al., 2010), as well as the experimental results on the Google dataset of SIRI-WHU published in the latest paper by Zhao et al. (2016a), are also shown for comparison.
Experiment 1: The UC Merced Image Dataset
The UC Merced dataset was downloaded from the USGS National Map Urban Area Imagery collection (Yang and Newsam, 2010). This dataset consists of 21 land-use scenes (Fig. 3), namely agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis courts. Each class contains 100 images, measuring 256 × 256 pixels, with a 1-ft spatial resolution. Following the experimental setup published in Yang and Newsam (2010), 80 samples were randomly selected per class from the UC Merced dataset for training, and the rest were kept for testing. The classification performance of the different strategies based on FSTM, and the comparison with the experimental results of the previous methods for the UC Merced dataset, are reported in Table 2. As can be seen from Table 2, the classification results of the single feature based FSTM are unsatisfactory. The classification result of 94.55% ± 1.02% for the proposed SFF-FSTM is the best among the different methods, and is greatly improved compared with the single feature strategies. This indicates that the combination of the multiple semantic-feature fusion strategy and the sparse representation based FSTM is able to trade off sparsity, the quality of the inferred semantic information, and the inference time. In addition, it can be seen that SFF-FSTM is superior to SPM (Lazebnik et al., 2006), PLSA (Bosch et al., 2008), LDA (Liénou et al., 2010), the Yang and Newsam method (2010), the Cheriyadat method (2014), the Chen and Tian method (2015), the Mekhalfi et al. method (2015), and the Zhao et al. method (2016a).
Experiment 2: The Google Dataset of SIRI-WHU
The Google dataset of SIRI-WHU consists of 12 land-use scenes: meadow, pond, harbor, industrial, park, river, residential, overpass, agriculture, water, commercial, and idle land, as shown in Fig. 4. Each class contains 200 images, cropped to 200 × 200 pixels, with a spatial resolution of 2 m. In this experiment, 100 training samples were randomly selected per class from the Google dataset, and the remaining samples were retained for testing. The classification results are reported in Table 4. As can be seen from Table 4, the classification result of 97.83% ± 0.93% for the proposed SFF-FSTM is much better than for the spectral, texture, and SIFT based FSTM methods, which confirms that the framework incorporating multiple semantic-feature fusion and FSTM is a competitive approach for HSR image scene classification.
The number of training samples was varied over the range [80, 60, 40, 20, 10, 5] per class for the UC Merced dataset, and over the range [100, 80, 60, 40, 20, 10] for the Google dataset of SIRI-WHU. The classification accuracies with the different numbers of training samples for the UC Merced dataset and the Google dataset of SIRI-WHU are reported in Table 5 and Table 6, and the corresponding curves are shown in Fig. 5.
As can be seen from Table 5, Table 6, and Fig. 5, the proposed SFF-FSTM performs better, and is relatively stable with the decrease in the number of training samples per class for the two datasets, when compared to SAL-LDA. When the training ratio is under 20%, or even 10% or 5%, SFF-FSTM displays a smaller fluctuation than SAL-LDA, and maintains a comparatively satisfactory and robust performance with limited training samples.
We also tested and compared the inference efficiency of the proposed SFF-FSTM and SAL-LDA with the spectral feature for the two datasets. Nevertheless, image patches obtained by the uniform grid method might be unable to preserve the semantic information of a complete scene. It would therefore be desirable to combine image segmentation with scene classification. The clustering strategy, as one of the most important techniques in remote sensing image processing, is another point that should be considered. In our future work, we plan to consider topic models that can take the correlation between image pairs into consideration.
Fig. 1. HSR images of the parking lot, harbor, storage tanks, dense residential, forest, and agriculture scene classes: (a) shows the importance of the spectral characteristics for HSR images; (b) shows the importance of the structural characteristics for HSR images; and (c) shows the importance of the textural characteristics for HSR images.
Here, $M_j$ is the frequency of term j in image M, and $\beta$ is the visual dictionary of V terms. Hence, the inference task is to search for the topic proportion $\theta$ that maximizes the likelihood of M. Different from other topic models, SFF-FSTM does not infer $\theta$ directly, but instead reformulates the inference task as an optimization over $x = \sum_k \theta_k \beta_k$, a mixture of topics. It can be seen that x is a convex combination of the K topics by the fact in (6), and by finding the x that maximizes the objective function (5), we can infer the latent topic proportion of the image M.
Figure 2. The proposed HSR scene classification based on SFF-FSTM.
Fig. 5. Classification accuracies with different numbers of training samples per class: (a) UC Merced dataset; (b) Google dataset of SIRI-WHU.
4. CONCLUSION
In this paper, we have designed an effective and efficient approach, the semantic-feature fusion fully sparse topic model (SFF-FSTM), for HSR imagery scene classification. The fully sparse topic model (FSTM) was first used for unsupervised dimension reduction of large collections of documents. By combining the novel use of FSTM and the semantic fusion of three distinctive features for HSR image scene classification, SFF-FSTM is able to present a robust feature description for HSR imagery, and achieves a competitive performance with limited training samples. The proposed SFF-FSTM improves the performance of scene classification compared with other scene classification methods on the challenging UC Merced dataset and the Google dataset of SIRI-WHU.
Table 4. Compared with the other methods, i.e., SPM, the LDA method proposed by Liénou et al. (2010), the PLSA method proposed by Bosch et al. (2008), and the experimental results published by Zhao et al. (2016a), the highest accuracy is acquired by the proposed SFF-FSTM, which presents a comparable performance with the existing relevant methods.
Experiment 3: Multiple Semantic-feature Fusion Fully Sparse Topic Model for HSR Imagery with Limited Training Samples
By modeling the large collection of images with only a few latent topic proportions of non-zero values, we intend to deal with HSR imagery with limited training samples, employing SFF-FSTM and SAL-LDA.
Table 5. Performance of SFF-FSTM and SAL-LDA for the UC Merced dataset with limited training samples.
The inference time of SFF-FSTM is about 3 minutes, whereas SAL-LDA takes almost 40 minutes to infer the spectral based latent semantics. This indicates that SFF-FSTM is an efficient PTM compared with classical non-sparse PTMs such as SAL-LDA.
Table 6. Performance of SFF-FSTM and SAL-LDA for the Google dataset of SIRI-WHU with limited training samples. | 5,549.6 | 2016-06-21T00:00:00.000 | [
"Computer Science",
"Environmental Science"
] |
Hunting alters viral transmission and evolution
Hunting can fundamentally alter wildlife population dynamics, but the consequences of hunting for pathogen transmission and evolution remain poorly understood. Here we present a study that leverages a unique landscape-scale experiment coupled with pathogen transmission tracing, network simulation and phylodynamics to provide insights into how hunting shapes viral dynamics in puma (Puma concolor). We show that removing hunting pressure enhances the role of males in transmission and increases the viral population growth rate and the role of evolutionary forces on the pathogen compared to when hunting was reinstated. Changes in transmission could be linked to short-term social changes while the male population increased. These findings are supported through comparison with a region with stable hunting management over the same time period. This study shows that routine wildlife management can have impacts on pathogen transmission and evolution not previously considered.
Main
Human actions commonly alter wildlife populations. A classic example of such an alteration is hunting, which often has density and demographic effects on a population [1][2][3][4]. However, the consequences of these actions for pathogen transmission and evolution are largely unknown, and the few available studies report contradictory findings. Theory predicts that, for pathogens with density-dependent transmission, hunting-induced decreases in density should decrease transmission rates, yet make little difference to transmission dynamics for frequency-dependent pathogens. In practice, empirical data and models suggest that reducing host density can either decrease 5,6 or even increase pathogen transmission and prevalence 7,8. The complex interplay between host density, demography, and behavior also makes predicting the impacts of hunting on pathogen dynamics difficult. Limited empirical work shows that population reduction can increase pathogen prevalence via social perturbation [9][10][11][12]. For example, culling-induced changes or 'perturbations' to badger (Meles meles) territorial behavior were considered a driver of increased bovine tuberculosis transmission among badgers e.g., 9. However, there is also evidence that population reduction has little impact on canine rabies 13 or Tasmanian devil facial tumor disease 14 dynamics. Recent advances in high-resolution pathogen sequencing and analytic approaches can now elucidate patterns of pathogen transmission and evolution [15][16][17] that were previously out of reach. Here we address the effects of hunting on pathogen dynamics by capitalizing on pathogen sequences collected from a detailed study on the demographic effects of hunting 18, as well as from sequences obtained over the same time period in a region where little hunting occurred. Our approach enables us to provide insights into the cascading consequences of hunting, and of the cessation of hunting, on host-pathogen dynamics.
RNA viruses are ideal agents for examining the effect of hunting and the cessation of hunting on pathogen transmission and evolution. Genomic variation rapidly accrues in RNA viruses, enabling estimation of fine-scale epidemiological processes (such as transmission between hosts) and the basic reproduction number (R0) 16,19 (see Box 1 for definitions of key terminology highlighted in bold). Altered transmission dynamics and the arrival of new lineages can imprint distinctive evolutionary signatures on RNA viruses as they adapt quickly to changes in the host populations they encounter 20,21. For example, if a change of management led to a higher frequency of transmission events, we would expect the transmission bottleneck to lead to high purifying selection, since within-host mutations are lost with transmission (e.g., 22). Conversely, if new mutations entering the host population allow the pathogen to escape immune detection, we may expect an increase in diversifying selection. Altered transmission dynamics and new lineages will also shape the phylogenetic diversity of the pathogen 23. For example, if novel pathogen lineages frequently arrive in a host population with limited transmission, we would expect to see a pattern of phylogenetic dispersion (i.e., higher phylogenetic diversity than expected by chance 24). In contrast, phylogenetic clustering (i.e., lower phylogenetic diversity than expected by chance 24) may be a marker of increased transmission events within a population.
Box 1. Key terminology.
R0: The basic reproduction number 'R naught' is the expected number of cases generated by one case in a population of susceptible individuals.
Transmission bottleneck: Transmission of viruses between hosts usually involves a relatively small number of virus particles being exchanged between hosts (e.g., 53). This reduces the viral population size and genetic diversity, creating a 'bottleneck'.
Purifying selection: 'Negative selection' is the removal of nonsynonymous mutations (i.e., mutations that lead to a change in protein coding).
Diversifying selection: 'Positive selection' is the favoring of nonsynonymous mutations that yield an adaptive advantage. These mutations can rapidly increase in frequency across a population.
Transmission network: A network where nodes represent individual puma and edges reflect transmission events based on transmission tree estimates. Edge weights are the probability of the transmission event occurring. Transmission trees generated by the R package 'TransPhylo' (Didelot et al., 2017) estimate who infected whom, including potentially unsampled individuals, using a stochastic branching epidemiological model and a time-scaled phylogeny.
Weighted degree: The summed probability of an individual puma (i.e., a node in the network) being involved in transmission events, divided by the number of transmission events (i.e., edges in the network); see the sketch following this box.
Weighted degree homophily: The weighted degree of transmission events between members of the same sex.
Skygrowth demographic analyses: A non-parametric population-genetic model estimating the growth rate of the effective population size through time (a surrogate for genetic diversity) using Bayesian inference. This method has been shown to accurately reconstruct pathogen outbreak dynamics in a variety of systems (35,51).
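The two degree-based quantities defined in Box 1 can be illustrated on a toy directed transmission network; the sketch below uses Python/networkx rather than the R/igraph tooling used in the study, and the nodes, sexes, and edge weights are invented for illustration.

```python
# Toy illustration of the Box 1 definitions: weighted degree (summed edge
# probability / number of edges) and its same-sex (homophily) restriction.
# Nodes, sexes and weights are invented examples.
import networkx as nx

G = nx.DiGraph()
sex = {"M1": "M", "M2": "M", "M3": "M", "F1": "F"}
G.add_weighted_edges_from([("M1", "M2", 0.8), ("M2", "M3", 0.6),
                           ("M1", "F1", 0.2)])

def incident(g, node):
    return (list(g.in_edges(node, data="weight")) +
            list(g.out_edges(node, data="weight")))

def weighted_degree(g, node):
    edges = incident(g, node)
    return sum(w for _, _, w in edges) / len(edges) if edges else 0.0

def homophily_degree(g, node, target_sex):
    """Weighted degree counting only edges to partners of target_sex."""
    edges = [(u, v, w) for u, v, w in incident(g, node)
             if sex[u if v == node else v] == target_sex]
    return sum(w for _, _, w in edges) / len(edges) if edges else 0.0

print(weighted_degree(G, "M1"), homophily_degree(G, "M1", "M"))  # 0.5 0.8
```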
Here we leverage viral data collected from closely monitored puma (Puma concolor) in two areas of Colorado during the same time period: a 'treatment region' in which hunting pressure changed over time, and a 'stable management region' acting as a control (hereafter 'stable region'). We sequenced viral genes sampled from captured puma for an endemic RNA retrovirus, puma feline immunodeficiency virus (FIVpco), a host-specific pathogen considered relatively benign and not associated with overt disease outcomes 25. Even though FIVpco is endemic in puma populations, novel infections can spread in susceptible and previously infected individuals 26. Evidence suggests FIVpco is often transmitted via aggressive interactions, although vertical transmission is also possible 25,27. We analyzed these viral data in both regions using a transmission network approach 16,23 that incorporates a stochastic epidemiological model with pathogen genomic data to trace transmission between individual puma. The treatment region consisted of puma in a ~12,000 km² area in western Colorado in which hunting prior to our study was common practice (see 28; hereafter 'Lag 2'). However, the decline in the abundance of males was severe and rapid, with males > 6 years old apparently eliminated from the population after two hunting seasons 18. In contrast, over the same 10-year period, the stable region in the Front Range of Colorado experienced minimal hunting pressure and no change in management practice. Nearly all the individuals sampled in both regions were adults, and both sexes were evenly represented. Individual survival probabilities in the stable region were unaltered across years 29. By comparing the treatment and stable regions, we were able to test how the demographic changes caused by hunting cessation and reinstatement perturb viral transmission networks and epidemiological parameters (e.g., R0), and also alter pathogen diversity and evolution. In doing so, we begin to untangle the complex interplay between wildlife management and pathogen transmission, which is crucial for pathogen-orientated conservation and disease management strategies.
Cessation of hunting shifts transmission networks and increases R0
We found that reducing hunting mortality had major effects on FIVpco transmission dynamics. Even though the populations in the treatment and stable regions were of comparable size (41, Table S1), our estimates of R0 for the same virus over the 10-year period were two-fold higher in the treatment region compared to the stable region (with non-overlapping 95% highest probability density intervals indicating that the difference is significant, Fig. 1). Other model parameters, such as the generation time (the time between initial FIVpco infection and onward transmission, Fig. S2) and the proportion of missing cases (Fig. S3), yielded similar estimates in both regions. The burst of transmission in the treatment population after the cessation of hunting (Fig. 1a, right panel) was likely a result of transmission between males, as they were dominant in the network. In the treatment population, males had an overall mean weighted degree (Box 1) double that of females (0.37 compared to 0.14). Only one putative transmission event occurred between the sexes, and we detected no female-female transmission events. When we assessed the weighted degree homophily of male-male transmission events, simulations revealed that the dominance of male-male transmission events in the network was not random (1000 simulated annealing network iterations, p < 0.001, Fig. S4a). Putative transmission events largely occurred when hunting mortality was eliminated (Fig. 1a), during which time the survival of adult and subadult males was high, age structure increased, and the abundance of independent pumas increased 18. Male survival rates in the hunting period were also lower than for either sex in the stable region 28. Females were, however, much less connected in the transmission network in the treatment region compared to the stable region, where they were more central (Fig. 1b). In contrast to the treatment region, the stable region showed evidence of transmission from females to both females and males. Average weighted degree was higher overall for males than for females in the stable region (0.46 vs 0.29). Even though weighted female-female degree homophily was higher in the stable region (0 vs 0.05), our simulations show that we could not reject the null hypothesis that this difference arose by chance (p = 0.692, Fig. S4b). Female-to-female transmission events occurred between highly related females, supporting previous findings on the importance of host relatedness in FIVpco spread for puma in this region 30. Taken together, our results indicate that lower hunting mortality was associated with an increase in the number of transmission events, which were dominated by males.
After hunting was prohibited, the greater survival and increasing abundance of males likely resulted in greater competition between males for mates. As the dominant transmission mode for FIVpco is considered to be via aggressive contacts 31, increased male competition for mates appears a probable explanation for the differences in transmission dynamics. Further interrogation of our transmission network supports this theory, as in all but two instances, male-to-male transmission occurred between individuals with overlapping territories in the treatment region (Fig. 2/S5/S6). One transmission pair was unusual in having less spatial proximity, yet one puma of this pair was a likely immigrant to the region (M133) and could have passed through M73's territory at some point (Fig. 2). With the exception of M73 (~6 y.o. at time of infection), all individuals involved in these transmission events were between 1-3 y.o., which is a period when males are establishing new territories and starting to compete for access to females 32,33. Our results suggest it is unlikely that these males transmitted to each other prior to dispersal or via maternal or paternal contacts, since these individuals were not related based on genomic data 34. While our estimates suggest that we were able to sample approximately 40% of the FIVpco infections in both regions (Fig. S3), arguably good coverage for secretive, free-ranging wildlife, our models account for this type of missing data 16. For example, nearly all putative transmission events we identified from our transmission networks were between individuals on the landscape at the same time, and in most cases these individuals were captured in close spatial proximity to each other. The biological plausibility of these transmission events demonstrates the power of adapting transmission network models to trace transmission and gain epidemiological insights in systems that are difficult to observe.
Hunting alters diversity and selective pressure on the virus
Altered transmission dynamics at a population level were associated with changes in viral evolution and diversity in the treatment region. The increased number of transmission events in the no-hunting period compared to the hunting period was supported by the strong phylogenetic clustering (isolates with less phylogenetic diversity than expected by chance) detected relative to the hunting period (Fig. 3a). The link between reduced hunting pressure and increased transmission events was further supported, as we did not find similar phylogenetic clustering in the stable region or the hunting period (Fig. 3a). Moreover, we found little evidence for new lineages arriving during the no-hunting period in the treatment region (Fig. 1a). We further interrogated viral diversity patterns across time using skygrowth demographic analyses 35. Viral genetic diversity rapidly accrued at the end of the no-hunting period (~2009/2010) before markedly declining after ~2011, when hunting was reinstated (Fig. 3b), closely mirroring male population size estimates (R² = 0.8, p = 0.010, Fig. 3c). Female population size was not significantly correlated with the viral population growth rate (R² = 0.190, p = 0.630, Fig. 3d), adding further evidence for the enhanced role of male interactions in transmission dynamics when hunting mortality was reduced. While we lack behavioral observations of puma across time, it is possible that the increase in male density with the cessation of hunting allowed for increased competition for mates and thus aggressive interactions 33. No such increase in FIVpco diversity and growth rate was detected in the stable population (Fig. S7b/c).
Within the treatment region, the increase in viral diversity was underpinned by greater effects of both purifying and diversifying selection acting on individuals infected during the no-hunting period compared to the hunting period (p = 0.01, likelihood ratio = 6.31). Purifying selection, potentially a signature of rapid transmission events (e.g., 22), was dominant in both periods (97.25% of sites with ω < 1), as is often the case in error-prone RNA viruses, but was stronger in the no-hunting period (ω2 no-hunting = 0 vs ω2 hunting = 0.1). In contrast, there was no shift in evolutionary pressure over the same periods in the stable population (p = 0.5, likelihood ratio = 0.43). While impacting a smaller proportion of the loci overall (2.79% of loci with ω > 1), there was also strong diversifying selection in the no-hunting period (ω3 no-hunting = 21.46 vs ω3 hunting = 2.8). We identified five FIVpco loci under diversifying selection using the MEME routine in both regions (cutoff: p ≤ 0.1). Two of these loci were only found in isolates from males, and, based on our transmission models, these males were likely infected by FIVpco in the no-hunting period. There was no signature of diversifying or purifying selection in the envelope gene (env), which was surprising given that env is generally under greater evolutionary pressure, as it is responsible for the virus binding to host cells 36. All loci under diversifying selection were detected in the FIV pol integrase region. Putting these lines of evidence together, we detected not only population-level impacts on viral mutation of the demographic changes due to the cessation of hunting, but also impacts at the individual scale, with stronger evolutionary pressure on viruses infecting males. Increased evolutionary pressure on the virus may increase the probability of a new FIVpco phenotype occurring in this population. Systematic shifts in evolutionary pressure are known to occur when viruses switch hosts e.g., 37,38; however, here we show that selective constraints on a virus can be altered in response to host demographic changes caused by wildlife hunting. We stress that FIVpco is largely apathogenic in puma, and therefore our findings demonstrate the types of changes in pathogen transmission dynamics that can be caused by hunting-induced changes in wildlife populations.
Perturbation, management and disease
Our work provides a valuable case study of how hunting can have unexpected consequences for pathogen transmission and evolution across scales. Our multidisciplinary approach was particularly valuable in helping to deconstruct how shifts in population structure and behaviour imprint on pathogen dynamics and evolution. For example, previous work using landscape genetic models detected only weak or inconsistent sex effects shaping FIV spread 27,30,39. Our transmission network and phylodynamic approach, in contrast, was able to clearly distinguish the role of males in putative transmission chains and in accruing genetic diversity, even though the data requirements are similar (e.g., a time-scaled phylogeny). The putative transmission events we detected, supported by locational data, provided important mechanistic details at an individual scale that enabled us to tease out the links between management, behaviour and transmission that are difficult to detect otherwise. Moreover, the shift in the transmission network provided context for the differences in pathogen evolution we detected between the hunting and no-hunting periods.
Our results provide a case study of the complex interplay between host demography, density and behaviour in shaping pathogen dynamics. In our case, the cessation of hunting in a population in 2004 facilitated demographic change via increased male survivorship and abundance 28, with potential increases in male-to-male contact behavior. Even though the 'perturbation' here was the cessation of hunting, the underlying mechanism could be similar to that of hunting-induced perturbations. An expansion of the way we think about perturbations, to include the cessation of a practice leading to demographic or behavioral change, may be warranted.
Our results also reveal potential shortcomings of relying on population estimates of prevalence to understand the impact of wildlife management actions on pathogen transmission. In our case, population estimates of FIVpco prevalence across time alone could not detect shifts in transmission associated with hunting and were not sensitive to changes in population size (Figs. S8/S9). The lack of signal from prevalence data may be a contributing factor behind the variability of the effects of hunting on disease dynamics in empirical systems 40. Prevalence data may be better able to detect shifts in population demography where the pathogen causes acute infections with shorter periods of immunity.
The collection of pathogen molecular data from well-sampled wildlife populations across time is a logistical challenge, yet with ever cheaper and more mobile sequencing platforms, the potential to use approaches similar to ours is increasing, even for slowly evolving pathogens such as bacteria 19 . Our multidisciplinary approach can not only provide novel insights into the broader consequences of wildlife management on disease dynamics but can also help understand evolutionary relationships between hosts and pathogens in free-ranging species more broadly.
Materials And Methods
Study area and puma capture
Our study was conducted in two regions of the Rocky Mountains in Colorado, separated by ~500 km but at similar elevations and with similar puma densities 41 and vegetative and landscape attributes, yet with differing degrees of urbanization (see Fig. S10 and Table S1 for a summary of the sequence data and a comparison of study area size, host mortality, and host genetic diversity between the regions).
Transmission and phylogenetic trees
We constructed transmission trees between pumas in each region using the R package TransPhylo 16. TransPhylo uses a time-stamped phylogeny to estimate a transmission tree and gain inference into "who infected whom" and when. Briefly, this approach computes the probability of an observed transmission tree given a phylogeny using a stochastic branching process epidemiological model; the space of possible transmission trees is sampled using reversible jump Markov chain Monte Carlo (MCMC) 16. This approach is particularly useful for pathogens where the outbreak is ongoing and not all cases are sampled 16, as is the case here. We leveraged our FIVpco Bayesian phylogenetic reconstructions from previous work and focused on the two clades of FIVpco that predominantly occurred in each region (see Fountain-Jones et al. 2019). While the TransPhylo approach makes few assumptions, a generation time distribution (the time from primary infection to onward transmission) is required to calibrate the epidemiological model 16. We assumed that the generation time could be drawn from a Gamma distribution (k = 2, θ = 1.5), estimating onward transmission on average 3 years post-infection (95% interval: 0.3-8 years, based on average puma age estimates 33). Based on previous work 41, we were confident that the proportion of cases (π) sampled was high; we therefore set the starting estimate of π to 0.6 (60% of cases tested in each region) and allowed it to be estimated by the model. We ran multiple MCMC analyses of 400,000 iterations and assessed convergence by checking that the parameter effective sample size (ESS) was > 200. We computed the posterior distributions of R0, π, and the realized generation time from the MCMC output. We also estimated likely infection time distributions for each individual and compared these estimates to approximate puma birth dates to ensure that the infection time distributions were biologically plausible. We then computed a consensus transmission tree for each region to visualize the transmission probabilities between individuals through time. Lastly, we reformatted the tree into a network object (nodes as individual puma and edges representing transmission probabilities), plotted it using the igraph package 43, and overlaid puma sex as a trait.
The overall weighted degree and the weighted degree for each sex, including edges representing homophily (e.g., male-male) and heterophily (e.g., male-female), were also calculated using igraph.
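As a quick sanity check of the assumed generation-time prior, the Gamma(k = 2, θ = 1.5) distribution can be queried directly; the sketch below (illustration only) reproduces the stated mean of 3 years and the approximate 0.3-8 year 95% interval.

```python
# Quick check of the stated generation-time prior: Gamma(k = 2, theta = 1.5)
# has mean k * theta = 3 years, and its central 95% interval matches the
# quoted 0.3-8 years.
from scipy import stats

gt = stats.gamma(a=2, scale=1.5)
print(gt.mean())                  # 3.0
print(gt.ppf([0.025, 0.975]))     # ~[0.36, 8.36] years
```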
Simulation modelling
To test for non-random patterns of weighted degree between the sexes, we applied a simulated network annealing approach from the ergm R package (Handcock et al., 2018). To generate each simulated network, we fitted a variety of probability distributions to the edge weights and degrees of both the treatment and stable regions, then used AIC to select the best-fitting target distribution. Edge density, network size and the number of isolated nodes were fixed based on each observed network. We assigned sex to each simulated node attribute by drawing from a Bernoulli distribution (probability = 0.5). Using these network characteristics, we generated 1000 'null' networks, compared the homophily weighted degree distribution of each sex (i.e., the average weighted degree for each individual based on putative male-male or female-to-female transmission events) in the null networks to the observed values, and calculated a bootstrap p-value.
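A simplified permutation analogue of this null-model test is sketched below; the original used simulated annealing via the R ergm package with fitted edge-weight distributions, whereas this toy version simply shuffles node sexes and recomputes the male-male edge weight.

```python
# Simplified permutation analogue of the null-model test (toy edges and
# sexes; not the ergm-based procedure used in the study).
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1, 0.8), (1, 2, 0.6), (0, 3, 0.2)]   # (from, to, probability)
sexes = np.array(["M", "M", "M", "F"])

def male_male_weight(sex_vec):
    return sum(w for u, v, w in edges
               if sex_vec[u] == "M" and sex_vec[v] == "M")

observed = male_male_weight(sexes)
null = np.array([male_male_weight(rng.permutation(sexes))
                 for _ in range(1000)])
print(observed, (null >= observed).mean())        # bootstrap p-value
```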
Selection analyses
To test whether the demographic changes driven by hunting resulted in a reduction in the intensity of natural selection on FIVpco, we examined selective pressure in both time periods in each region using the RELAX hypothesis testing framework 44. The method builds upon random effects branch-site models (BS-REL) 45 that estimate the ω ratio (the ratio of non-synonymous to synonymous mutations, or dN/dS) along each branch from a discrete distribution of three ω ratio classes, allowing selection pressure to vary across the phylogeny 44. An ω ratio of one corresponds to neutral evolution, with values > 1 being evidence for diversifying (positive) selection along a branch, and values < 1 evidence for purifying (negative) selection along a branch. Briefly, RELAX tests for relaxation of selection pressure by dividing branches into three subsets: test branches (T), reference branches (R) and unclassified branches (U) 44, with ωT (resp. ωR) being the estimated dN/dS ratio on the test (resp. reference) branches. The discrete distribution of ω is calculated using BS-REL for each branch class, and then the branches belonging to each subset are compared. The reference estimates of ω are raised to the power of k (an intensity parameter), so that ωT = (ωR)^k, in order to simplify model comparison. The null RELAX model is when the ω distribution, and thus the selective pressure, is the same in R and T (k = 1). The null model is compared (using a likelihood ratio test) to an alternate model that allows k to vary, such that k > 1 indicates that selection pressure on the test branches has been intensified, while k < 1 indicates that selection pressure has been relaxed 44. In the relaxed scenario (k < 1), branches in R are under stronger purifying and diversifying selection compared to the T branches (e.g., ω shifting from 0.001 in R toward 0.1 in T, or from 10 in R toward 2 in T). See Wertheim et al. (2015) for model details. T and R were selected from leaf branches (all other branches were unassigned, U); individuals sampled from 2005-2011 (to the end of the lag period) were assigned to the R set, and those sampled from 2012-2014 were assigned to the T set. All branches not directly connecting to the tips were classified as 'U', as the majority had low phylogenetic support (posterior probability < 0.6). To further interrogate the sequence data and identify individual sites under selection, we performed the MEME (mixed-effects model of evolution) pipeline 46. We ran both the MEME and RELAX models using the Datamonkey web application 47.
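The role of the intensity parameter can be seen numerically; the ω values below are arbitrary illustrations, not estimates from this study.

```python
# Numeric illustration of the RELAX intensity parameter: omega_T =
# omega_R ** k, so k < 1 pulls every omega class toward 1 (relaxation)
# and k > 1 pushes the classes away from 1 (intensification).
omega_R = [0.01, 0.5, 5.0]     # purifying / near-neutral / diversifying
for k in (0.5, 1.0, 2.0):
    print(k, [round(w ** k, 3) for w in omega_R])
# k=0.5 -> [0.1, 0.707, 2.236]   relaxed: closer to neutrality
# k=2.0 -> [0.0, 0.25, 25.0]     intensified: further from neutrality
```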
Population growth rate
We applied the non-parametric skygrowth method 35 to examine whether the FIVpco population growth rate fluctuated across time and whether this was related to changes in male or female population size in the treatment region. We did not do the same for the stable region, as comparable population estimates were not available.
We fitted these models using MCMC (100,000 iterations), assuming that the FIVpco population size fluctuated every 6 months over a 14-year period (the estimated time to the most recent common ancestor of this clade, Fig. S7). Otherwise, the default settings were used. We then performed a Pearson correlation test to assess whether the trend in FIVpco population growth was related to the male and female population size estimates 28. Measuring the correlation between population size estimates and patterns of population growth using generalized linear models 35,48 was not feasible due to the relatively small size of this dataset.
Phylogenetic diversity
To quantify phylogenetic diversity in each time period in each region, we calculated the standardized effect size (SES) for Faith's phylogenetic richness, which accounts for differing sample sizes (SES for Faith's PD, 49). Faith's PD (hereafter PD) is the sum of the branch lengths of the phylogenetic tree linking all isolates in each subset (in this case, the two time periods). As the number of isolates in each contrast differed (stable region 2005-2011: 11 isolates; stable region 2012-13: 5 isolates; treatment region 2005-2011: 10 isolates; treatment region 2012-14: 5 isolates), we calculated the standardized effect size (SES) by comparing the PD we observed to a null model that accounts for the number of tips (i.e., how much phylogenetic diversity we would see for a given number of isolates by chance). We denote the standardized PD as SES.PD from here on; this was calculated across a subset of posterior phylogenetic trees from our previous Bayesian phylogenetic analyses 30. To capture phylogenetic uncertainty in these estimates, we utilized the computational efficiency of the PhyloMeasures R package algorithm 50 to calculate SES.PD and applied this across a 1000-tree subsample of posterior trees 30.
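Conceptually, the SES.PD calculation reduces to comparing an observed Faith's PD against equally sized random tip draws; the hand-rolled sketch below (a toy tree in Python, rather than the PhyloMeasures R package over 1000 posterior trees) illustrates the logic on a single tree.

```python
# Conceptual sketch of SES.PD on one invented toy tree, encoded as
# child -> (parent, branch_length). Negative SES.PD means the chosen
# isolates are phylogenetically clustered.
import numpy as np

tree = {"A": ("n1", 1.0), "B": ("n1", 1.0), "n1": ("root", 1.0),
        "C": ("n2", 1.0), "D": ("n3", 1.0), "E": ("n3", 1.0),
        "n3": ("n2", 1.0), "n2": ("root", 1.0)}

def faith_pd(tips):
    """Sum of branch lengths on the union of root-to-tip paths."""
    used, total = set(), 0.0
    for tip in tips:
        node = tip
        while node in tree:
            if node not in used:
                used.add(node)
                total += tree[node][1]
            node = tree[node][0]
    return total

all_tips = ["A", "B", "C", "D", "E"]
obs = faith_pd(["A", "B"])                 # two sister isolates
rng = np.random.default_rng(0)
null = np.array([faith_pd(rng.choice(all_tips, 2, replace=False))
                 for _ in range(1000)])
print((obs - null.mean()) / null.std())    # negative SES.PD -> clustering
```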
Data availability
DNA sequences: GenBank accessions MN563193-MN563239. All other data and code to perform the analysis will be available on GitHub. See Figure S2 for the FIVpco generation time distributions for each region and Figure S3 for the estimates of missing cases across years.
Supplementary Files
This is a list of supplementary files associated with this preprint. | 6,283.2 | 2021-06-12T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
A Unifying Network of Filter Bank Multicarrier Modulation for 5G Technologies
Filter bank multicarrier (FBMC) is a modulation technique derived from orthogonal frequency division multiplexing that introduces filters into the multicarrier modulation system. In this paper, we address the shortcomings of OFDM and show that FBMC could be a more effective solution for 5G applications. FBMC has many advantages over OFDM and is considered a more competitive waveform for future-generation (5G) cellular communication systems. We deal with both perfect reconstruction FBMC (PR-FBMC) and imperfect reconstruction FBMC (iPR-FBMC). Overall, we provide a new framework, discussion, and performance evaluation of FBMC and compare it to OFDM-based schemes. Using MATLAB, the OFDM and FBMC systems were tested with different digital modulation schemes and their system parameters compared. The FBMC system makes better use of the available channel capacity and can offer higher data rates and spectral efficiency. Simulations show that this technique achieves better performance parameters than competing schemes. This paper explains the limitations of OFDM and provides an overview of implementing prototype filters in an FBMC system. Keywords— CP, FBMC, FFT, ICI, OFDM, IFFT, ISI, MCM, MIMO.
INTRODUCTION
As of today, OFDM has been the dominant technology for broadband multicarrier modulation. OFDM is a very attractive modulation and multiplexing technique, used even in optical wireless communication systems. Its advantages include good spectrum utilization and channel robustness. However, in certain applications, such as cognitive radio and uplink communication, OFDM may be undesirable due to some of its disadvantages. In particular, OFDM is unable to maintain orthogonality in the presence of ISI among consecutive multicarrier symbols. The existing OFDM remedy is to introduce a cyclic prefix (CP), which adds a time overhead to the communication, resulting in a loss of spectral efficiency and a lower data rate in an AWGN channel. The CSI corresponding to each training block is first estimated and then tracked and removed with the help of demodulated data; this technique is called Decision-Directed Channel Estimation (DDCE) [1].
OFDM is also sensitive to time and frequency synchronization errors. The synchronization problems include carrier frequency offset (CFO) and symbol time offset (STO), caused mainly by mismatch between the local oscillators at the transmitter and receiver. An STO appears as a phase rotation in the frequency domain. These synchronization errors destroy the orthogonality among the subcarriers, causing inter-symbol interference and inter-carrier interference (ICI) [2].
The technique used by FBMC to overcome these problems is to keep the symbol duration unchanged, thereby avoiding the time overhead. This is done by combining filter banks with the IFFT/FFT operation. With a well-designed prototype filter, the spectrum of each subcarrier can be confined, a property that makes FBMC an ideal choice for applications such as cognitive radio and uplink networks. Polyphase structures have been proposed for the implementation of FBMC systems. Prototype filter design is based on Nyquist theory; one straightforward method is to consider the frequency coefficients and apply the symmetry conditions of Nyquist filters. Pilot-aided and blind synchronization methods are used to estimate the difference between the carrier frequency of the incoming signal and the oscillator signal used to demodulate it [3].
The basic principles of FBMC include filtered multitone (FMT) and staggered multitone (SMT) [4]. However, SMT introduces interference among the subcarriers, so the proposed FBMC scheme reduces ICI and exhibits higher spectral efficiency than OFDM. In a Multiple Input Multiple Output (MIMO) system, multiple antennas are used at both the transmitter and receiver, allowing simultaneous transmission and reception; a conventional Single Input Single Output (SISO) antenna is limited in meeting the rising demands of future applications. Today's communications require high transmission rates and quality of service (QoS), and MIMO-OFDM is one promising technology for high-data-rate services. One aim of this paper is to avoid interference by adopting interference cancellation (IC); in particular, successive IC (SIC) provides high BER performance and thus improves overall system robustness. SIC implementations for both single-antenna and multiple-antenna systems require an advanced DSP unit at the receiver. Simulation results show that performance-complexity tradeoff (PCT) levels were tested for different antenna configurations, and MIMO-OFDM achieved the appropriate PCT level.
The major challenges faced by wireless communication are the availability of bandwidth and transmission power; it also suffers from fading and interference. Channel impairments are mitigated by using equalization methods. The BER performance of the system has been tested with different techniques such as zero forcing (ZF), Minimum Mean Square Error (MMSE), and Maximum Likelihood (ML); equalization methods compensate for the ISI created by multipath channels [5][6].
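As a hedged sketch of the equalizers named above (generic textbook forms, not the cited papers' code), the snippet below applies zero-forcing and MMSE equalization to a received block y = Hx + n, assuming the channel matrix H and the noise variance are known.

```python
import numpy as np

def zf_equalize(H, y):
    """Zero-forcing: invert the channel via the pseudo-inverse (noise-agnostic)."""
    return np.linalg.pinv(H) @ y

def mmse_equalize(H, y, noise_var):
    """MMSE: regularize the inversion by the noise variance to limit noise boost."""
    n_tx = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
    return W @ y

# Toy 2x2 example with hypothetical channel values.
H = np.array([[1.0 + 0.2j, 0.3 - 0.1j], [0.2 + 0.4j, 0.9 - 0.3j]])
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)                 # QPSK symbols
y = H @ x + 0.05 * (np.random.randn(2) + 1j * np.random.randn(2))
print(zf_equalize(H, y), mmse_equalize(H, y, noise_var=0.005))
```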
5G cellular mobile devices can communicate with each other faster than 4G and LTE systems. The main aim of 5G is to improve communication speed at low cost, with low latency and better implementation than the previous generation; this can be achieved by adopting an FBMC system. To increase spectral efficiency further and improve BER performance, this paper introduces filter bank multicarrier technology. A theoretical PSD expression for the FFT-FBMC system shows that it reduces interference in adjacent subbands by about 15.8 dB [7]. Simulated PSD curves show that the FFT-FBMC spectrum is much more confined than that of a conventional FBMC system, and the added filter enhances spectral efficiency. Additionally, FBMC reduces the spectrum consumed by each guard band. An equalizer at the receiver end equalizes the data symbols, which are then converted into bits. Conventional ZF and MMSE equalizers were compared in terms of symbol error rate (SER); it was noted that ZF and MMSE FBMC have lower computational time than an OFDM system [8].
Filter Bank Multicarrier (FBMC) is a form of multicarrier modulation that originates from OFDM and aims to overcome some of its disadvantages. FBMC makes much better use of the available channel capacity and can offer higher data rates and a higher level of spectral efficiency, along with good BER performance. The design of wide-bandwidth, high-dynamic-range systems with FBMC poses significant RF development challenges. FBMC is a multicarrier modulation (MCM) technique in which a prototype filter is used to achieve goals such as minimizing ISI and ICI and increasing spectral efficiency. More importantly, the prototype filter is introduced with the new goal of improving performance parameters such as data rate, throughput, and BER. FBMC is a promising technique for upcoming 5G systems [9]. This paper addresses the shortcomings of OFDM and shows that FBMC could be a more effective solution. In existing FBMC, the prototype filter is designed to meet the perfect reconstruction (PR) condition to maintain subband orthogonality. Unlike OFDM, FBMC includes a synthesis filter bank and an analysis filter bank, so the subbands can be separated almost perfectly in the frequency domain; this property makes FBMC more robust to CFO than OFDM. A cyclic prefix is not used in FBMC, which leads to high spectral efficiency for long data packets. In this paper we analyze and compare the performance of OFDM and FBMC. The OFDM symbol has a high peak-to-average power ratio (PAPR), which causes nonlinear effects on the transmitted OFDM symbols, spectrum spreading, intermodulation, distortion of the signal constellation, and interference between symbols; OFDM is also more sensitive to carrier frequency offset and interference. Meanwhile, the FBMC technique offers low PAPR and better performance parameters than other schemes. Filter bank multicarrier techniques are resilient to multipath fading, enable flexible spectrum allocation, and can approach the theoretical capacity limits of communications. To satisfy upcoming needs, multiple input multiple output (MIMO) technologies are combined with the FBMC system: deploying multiple antennas at both ends improves link reliability and throughput with respect to single-antenna configurations. MIMO-FBMC offers high data rates, low latency, high spectral efficiency, and support for a huge number of devices.
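The PAPR comparison above follows directly from the definition PAPR = max|x(t)|^2 / E[|x(t)|^2]. The snippet below is a minimal sketch of that measurement for an OFDM symbol built from random QPSK subcarriers; the subcarrier count is an arbitrary choice.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

n_sc = 256  # arbitrary subcarrier count for illustration
qpsk = (np.random.choice([-1, 1], n_sc)
        + 1j * np.random.choice([-1, 1], n_sc)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(n_sc)   # unit-power time-domain symbol
print(f"PAPR = {papr_db(ofdm_symbol):.2f} dB")    # typically ~8-12 dB for OFDM
```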
A. Existing Perfect Reconstruction Method
The perfect reconstruction method (OFDM) is a transmission technique in which a single data stream is transmitted over a number of subcarriers. OFDM offers advantages such as resilience to interference, narrowband fading, and multipath effects. Its main idea is to split the total available bandwidth into a number of subcarriers, which reduces inter-symbol interference and power consumption while increasing the capacity and efficiency of the system. A high Peak-to-Average Power Ratio (PAPR) is one of OFDM's disadvantages, reducing system efficiency; in this paper we propose a prototype filter that mitigates the PAPR effect in OFDM signals. The OFDM architecture is implemented by combining the different blocks shown in Fig. 1. The random generator produces uniformly distributed data in the range (0, M-1), where M is the M-ary number. M can be either a scalar or a vector; if it is a scalar, all output random variables are independent and identically distributed. The resulting serial data stream is fed to a serial-to-parallel converter.
The serial-to-parallel converter formats the data into the word size required for transmission and shifts it into parallel form. Once a symbol has been allocated to each subcarrier, the symbols are phase-mapped according to the modulation technique. Different modulation schemes (e.g., QPSK, QAM) can be adopted based on channel conditions, data rate, robustness, throughput, and channel bandwidth. Modulation on the OFDM subchannels can be made adaptive once channel information is estimated at the transmitter.
The orthogonality of the subcarriers is maintained, and the frequency-domain signals are converted into the time domain by IFFT mapping. If the guard interval is longer than the duration of the channel impulse response, ISI can be eliminated; however, inserting the guard interval reduces transmission efficiency, so it should be kept as small as possible while still exceeding the channel delay spread. The most commonly used guard interval is the Cyclic Prefix (CP). The modulated OFDM data is converted to analog by a digital-to-analog converter (DAC), and the OFDM signal is transmitted through the wireless channel over a large number of subcarriers. In comparison with other multicarrier techniques, such as CDMA, OFDM prevents Inter-Symbol Interference (ISI) by adding the cyclic prefix. The key feature of OFDM is the IFFT/FFT pair: these mathematical tools transform the signals on the different carriers from the frequency domain to the time domain (IFFT) and back from the time domain to the frequency domain (FFT). The receiver down-converts the signal and digitizes it with an ADC; synchronization is also needed during reception, and the OFDM symbol is demodulated by an FFT.
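The transmit chain described above (random data, mapping, IFFT, cyclic prefix) can be summarized in a few lines. This is a schematic sketch of a CP-OFDM link over an ideal channel, not the paper's MATLAB model; the block sizes and CP length are illustrative.

```python
import numpy as np

N, cp_len = 64, 16                       # illustrative FFT size and CP length
bits = np.random.randint(0, 2, 2 * N)    # random source bits

# QPSK mapping: pairs of bits -> one complex symbol per subcarrier.
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx_time = np.fft.ifft(symbols, N)                         # frequency -> time
tx_frame = np.concatenate([tx_time[-cp_len:], tx_time])   # prepend cyclic prefix

rx_frame = tx_frame                               # ideal, noiseless channel
rx_symbols = np.fft.fft(rx_frame[cp_len:], N)     # drop CP, FFT demodulates

rx_bits = np.empty(2 * N, dtype=int)
rx_bits[0::2] = (rx_symbols.real < 0).astype(int)
rx_bits[1::2] = (rx_symbols.imag < 0).astype(int)
assert np.array_equal(bits, rx_bits)              # perfect recovery over ideal channel
```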
B. FBMC Architecture
Most of the shortcomings of OFDM can be overcome by using the FBMC technique. In FBMC, well-placed filter banks are used at both the transmitter and receiver ends, so the ISI seen in OFDM can be eliminated; this also removes the need for a cyclic prefix. Let us now examine the structural characteristics.
On the transmitter side we use an array of N filters in the filter bank, so N signals can be processed at a time. The transmitter section uses a synthesis filter bank and the receiver section uses an analysis filter bank. As in OFDM, the input goes through S/P conversion and is then passed through the filter bank; the signal is then converted back, and the output is obtained in serial form. Figs. 2 and 3 show the transmitter and receiver designs for FBMC, respectively. The FBMC transmitter structure uses filters in place of the cyclic prefix used in CP-OFDM. The binary data passes through the transmitter, where an N-point IFFT is performed, and the signal is filtered before transmission to avoid interference. On the receiver side, a band-pass filter filters the incoming signal, after which an N-point FFT is performed. The equalizer at the receiver equalizes the data symbols.
C. Prototype Filter Design
Digital prototype filter design is based on the Nyquist theory. The design criterion states that the impulse response of the transmission filter must cross the zero axis at all integer multiples of the symbol period. In the frequency domain, this condition translates into a symmetry condition about the cut-off frequency, which is half the symbol rate. A straightforward design method is therefore to consider the frequency coefficients (F) and impose the symmetry condition. In transmission systems, the Nyquist filter is generally split into two parts, a half filter in the transmitter and a half filter in the receiver; the symmetry condition is then satisfied by the frequency coefficients [14]. The frequency coefficients of the half-side filter obtained for K = 2, 3, and 4 are given in Table I (frequency prototype filter coefficients). The frequency response is obtained from the frequency coefficients through the interpolation formula for sampled signals (Eq. (1)), and the impulse response h(t) of the digital filter is given by the inverse Fourier transform of the frequency response (Eq. (2)); a sketch of this construction follows the criteria list below. The prototype filter design criteria are:
• Each filter should have a flat pass band over the subcarrier in its sub band.
• Each filter must have a sharp transition band in order to reduce the size of the guard bands.
• The stop-band attenuation should be sufficient to avoid ISI between the stop and pass bands.
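A common concrete instance of this frequency-sampling design is the PHYDYAS prototype filter. The sketch below builds its impulse response from the widely cited K = 4 coefficients (H1 ≈ 0.971960, H2 = 1/√2, H3 ≈ 0.235147, which satisfy the Nyquist symmetry H_k^2 + H_{K-k}^2 = 1) using the standard closed form h[m] = 1 + 2·Σ_{k=1}^{K-1} (-1)^k H_k cos(2πkm/(KM)). Treat it as an illustration of the method, not a reproduction of the paper's Table I.

```python
import numpy as np

def phydyas_prototype(M, K=4):
    """Impulse response of the PHYDYAS prototype filter for K = 4.

    M: number of subcarriers; K: overlapping factor (coefficients below
    are the widely cited K = 4 values).
    """
    H = {1: 0.971960, 2: 1 / np.sqrt(2), 3: 0.235147}
    m = np.arange(K * M)
    h = np.ones(K * M)
    for k in range(1, K):
        h += 2 * (-1) ** k * H[k] * np.cos(2 * np.pi * k * m / (K * M))
    return h / np.linalg.norm(h)      # unit-energy normalization

h = phydyas_prototype(M=64)
print(len(h), h.argmax())             # length K*M, peak at the filter center
```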
III. SIMULATION PARAMETERS
The filter bank multicarrier technique has many advantages over other modulation techniques and is considered a strong candidate waveform for future cellular communications. Simulation results show that, unlike OFDM, FBMC achieves system parameters sufficient for 5G applications. Perfect synchronization is assumed in the simulation setup in order to prevent ISI and inter-carrier interference.
Then we select the cyclic prefix to be 10% of the transmitted OFDM symbol duration, such that the maximum path delay is less than the chosen cyclic prefix. Moreover, the simulation setup ensures that each transmission has zero path delay, which closely matches perfect synchronization. Other simulation parameters used in the OFDM system design are shown in Table II.
The spectra of OFDM and FBMC show that when carriers are modulated in OFDM, side lobes spread out on either side; in FBMC, the filter bank suppresses this leakage, so we get much cleaner carriers. FBMC also provides high spectral efficiency and data rate and is more spectrum-efficient than OFDM. The cyclic prefix required for OFDM is not needed in FBMC, freeing up more space for real data.
Fig. 6 shows the SNR vs. BER curves of OFDM and FBMC. Overall, we provide a unifying framework that offers a lower BER, higher spectral efficiency, and a more competitive waveform for 5G communication compared to OFDM.
In Fig. 7, the magnitude responses of the OFDM and FBMC prototype filters are plotted against normalized frequency, with overlapping factor K = 4 for FBMC. The magnitude response of OFDM is constant across frequency, which means it is more affected by PAPR, reducing system efficiency. The FBMC response starts at more than 30 dB and decreases with increasing frequency. Simulation results show that in FBMC the peak-to-average power ratio is reduced to 4.512. FBMC makes better use of the available channel capacity and can offer higher data rates within a given radio spectrum bandwidth. A parameter comparison of OFDM and FBMC is shown in Table III.
V. CONCLUSION
5G cellular communication systems are expected to provide high data rates, low latency, and high spectral efficiency, and to support a huge number of devices. FBMC is capable of improving system performance, and increasing the number of transmit and receive antennas enhances the data rate and supports large amounts of data. In this paper we analyzed the performance of FBMC and OFDM with QAM modulation. It was observed that QAM with FBMC has better BER performance than the other modulation schemes considered. Hence, FBMC can be a better choice for future-generation cellular communication systems. Another area needing further study is the application of FBMC systems to MIMO channels.
"Engineering",
"Computer Science"
] |
A blockchain-based secure storage scheme for medical information
Medical data involves a large amount of personal information and is highly privacy-sensitive. In the age of big data, the increasing informatization of healthcare makes it vital that medical information be stored securely and accurately. However, current medical information is subject to the risk of privacy leakage and is difficult to share. To address these issues, this paper proposes a healthcare information secure storage solution based on Hyperledger Fabric and the Attribute-Based Access Control (ABAC) framework. The scheme first utilizes attribute-based access control, which allows dynamic and fine-grained access to medical information, and then stores the medical information in the blockchain, where it is secured and made tamper-proof by corresponding smart contracts. In addition, the solution incorporates IPFS technology to relieve the storage pressure of the blockchain. Experiments show that the proposed scheme, combining attribute-based access control and blockchain technology, not only ensures the secure storage and integrity of medical information but also achieves high throughput when accessing medical information.
I. INTRODUCTION
With the development of technology, various emerging technologies are merging with the healthcare sector, making healthcare information technology increasingly sophisticated [1]. The World Health Organization regards medical information as a most innovative and shareable asset. The number of medical institutions around the world is growing exponentially, and the medical data they generate is growing explosively as well. As hospital informatization deepens, the information system within a hospital gradually expands from a single HIS billing system into a system with electronic medical records. Medical data accumulates through registration, diagnosis, and hospitalization, becoming increasingly complex and multidimensional, and the importance of privacy and security increases significantly [2].
Currently, the combination of traditional paper medical records and centralized medical data management systems is still the main way medical institutions store patients' medical data, as shown in Fig. 1. However, this form of medical system faces severe risks of privacy disclosure [3]. The transformation from centralized medical data management systems to distributed medical data sharing systems is therefore an irresistible trend for the whole society [4]. However, since most medical and health institutions are isolated from each other, each storing and maintaining its own data, data islands form. This not only hinders long-term records of patients' disease development but also wastes medical equipment and duplicates large amounts of health data resources. To maximize the value of medical health data, meet the core needs of medical informatization, and provide more humane and reasonable services for patients, sharing data between medical institutions is an inevitable trend [5]. In addition, with the extensive use of emerging Internet technology in the medical field, medical data transmission methods and paths have become increasingly diversified, gradually shifting from transmission within hospitals to transmission between medical institutions, between medical institutions and insurers, and between patients and medical institutions, which greatly increases the difficulty of protecting patient data [6]. For these reasons, medical data is large-scale, complex in structure, and fast-growing, so it is difficult to find an ideal method to store medical information.
Fortunately, in recent years, the rise of blockchain technology has brought new solutions to the secure storage of medical information. In essence, blockchain is a distributed database with the characteristics of decentralization, security, and transparency [7]- [9]. As a decentralized database, blockchain provides a reliable solution to the problems of poor sharing, low effectiveness, and weak security in medical data management. Data can be recorded on the real-time shared blockchain platform, and timestamps are added to ensure the immutability of the data. The tamper resistance of the blockchain ensures the security of medical data [10]. On the licensed blockchain, blockchain members can obtain data information through access operations.
Mainstream blockchain projects can be divided into four categories: cryptocurrency, platform, application, and asset token. Blockchain technology is widely used in smart cities [11], [12], the Internet of Things (IoT) [13]-[16], smart finance [17], the Internet of Vehicles (IoV) [18]-[22], and education [23]-[25]. Medical data involves personal privacy and sensitive information, such as names, ID numbers, and home addresses, so medical records have become a primary target of information theft; it is therefore urgent to combine blockchain technology with the medical sector.
Furthermore, blockchain has entered a new era with the emergence and continuous improvement of smart contracts and the further development of blockchain projects such as Ethereum and Hyperledger. Smart contracts are programmable and Turing-complete [26]; transactions can automatically trigger code based on rules set by the system, and the emergence of smart contracts has laid an important foundation for merging blockchain technology with medical information [27]. In the open network environment of blockchain, the attribute-based access control (ABAC) model is a suitable and effective access control model. As a flexible, fine-grained access control method [28], it grants a data requester access to private data resources based on whether the requester possesses the correct attributes. So far, the application of blockchain technology in the medical field has not been satisfactory. In this regard, we store medical data in the blockchain by deploying smart contracts to ensure the privacy and security of medical data. At the same time, the ABAC model is introduced for access control so that users can access data safely and efficiently. In addition, because medical information is huge and complex, we combine the InterPlanetary File System (IPFS) to relieve the storage pressure of the blockchain, slimming the chain and further improving access efficiency. Compared with existing studies, the model proposed in this paper realizes more fine-grained access to medical information and at the same time alleviates the storage pressure of the blockchain, greatly improving system throughput; this is the advantage of this scheme.
Specifically, the main contributions of this study are as follows.
• This paper applies blockchain to medical information management and realizes decentralized management and secure storage with the help of distributed consensus and authentication mechanisms.
• We design an auxiliary architecture based on ABAC, which realizes fine-grained access control and dynamic management of permissions.
• We use smart contracts to define multi-tier data structures, access policies, and system workflows to improve the efficiency of data storage, retrieval, and query.
• We ease the storage pressure of the blockchain with the InterPlanetary File System (IPFS).
• We design simulation experiments and verify the performance of the scheme.
The rest of this article is organized as follows. Section II describes related work. In Section III, we introduce the necessary background and technologies. Section IV introduces the model, assumptions, and design objectives of the proposed scheme. Section V sets up two groups of comparative experiments and analyzes the results. Finally, Section VI summarizes this paper and discusses further work.
II. RELATED WORK
In this section, we survey blockchain-based secure storage in Section II-A and blockchain-based secure sharing in Section II-B. Although existing models and schemes achieve secure storage and sharing of medical information, they fail to realize fine-grained access to medical information, which degrades the user experience. In addition, most existing studies have not considered the storage bottleneck of the blockchain. To make up for these deficiencies, this paper not only achieves secure storage and sharing of medical information but also optimizes access control over medical information and alleviates the storage pressure of the blockchain to a certain extent; this is what distinguishes the proposed scheme from existing models.
A. Blockchain-based secure storage of medical data
The extension of blockchain technology to the healthcare field has a profound impact due to its decentralized, tamperproof, and transparent nature.
Azaria et al. [29] propose MedRec, a decentralized blockchain-based system for handling EHRs. MedRec has a modular design in which administrative privileges, authorization, and data sharing are distributed among the participants. MedBlock [30] is a blockchain-based hybrid architecture for protecting EMRs; its nodes are divided into endorsement nodes, ordering nodes, and committing nodes, and it uses a variant consensus algorithm. Conceição et al. [31] propose a generic architecture for storing patient Electronic Health Record (EHR) data using blockchain technology. Yang and Li [32] propose a blockchain-based EHR architecture that prevents tampering and abuse of EHRs by tracking all events on the blockchain. Kushch et al. [33] propose a special data structure for storing electronic medical data on the blockchain, the blockchain tree: a main chain records the patient's identity, one or more sub-chains store additional critical information (such as diagnostic records), and blocks on the main chain serve as the initial blocks of the sub-chains.
B. Blockchain-based secure sharing of medical data
In addition to secure storage, blockchain is also widely used for secure sharing. In medical record management, the application of blockchain in the medical field has received much attention, with many research institutions around the world participating.
Xia et al. [34] propose a blockchain-based system called MeDShare. The system minimizes data-privacy risks and can be used to share medical data between medical data custodians in an untrusted environment. Zhang et al. [35] propose a blockchain-based medical data sharing scheme that uses a private blockchain owned by the hospital to store patients' health data and a consortium blockchain to save the security index. Zhang et al. [36] combine artificial intelligence and blockchain technology in a safe and transparent medical data-sharing platform that uses the transparency of the blockchain for data tracking and tamper resistance. Liu et al. [37] use blockchain and cloud storage technology in a data-sharing scheme focused on privacy protection in the medical field; the scheme stores the original medical data in the cloud, indexes the data in the blockchain, and relies on the tamper-proof nature of the blockchain to prevent malicious modification. To realize dynamic communication between medical alliance chains, Qiao et al. [38] propose a scheme that allows patients to securely and autonomously share their records within an authorized healthcare alliance chain within milliseconds.
III. PRELIMINARIES
This section introduces the necessary background and technologies. Section III-A discusses blockchain technology in healthcare information storage, Section III-B introduces Hyperledger Fabric, Section III-C describes the attribute-based access control model, and Section III-D presents IPFS.
A. Blockchain technology in healthcare information storage
Blockchain helps build decentralized data-sharing and application mechanisms. Traditionally, medical information management is a unilaterally maintained information system, and its drawback is that overly centralized management power makes real information sharing difficult. Blockchain technology introduces distributed ledgers: because the recorded information is jointly maintained and supervised by multiple parties and departments, the data remain open and transparent, as do the transaction rules of the blockchain [39]. This fundamentally addresses the low efficiency and chaotic working state of traditional medical information management.
Moreover, blockchain can construct a credible evidence-preservation system. The management of medical archive information involves four basic processes: addition, deletion, modification, and query. In blockchain, the deletion and modification processes are abandoned, simplifying archive information processing, and the immutability and security of the data are guaranteed by the technical design. In addition, each block records its creation time and the hash value of the previous block; this time-stamped chain structure facilitates auditing, tracking, and traceability, and improves the utilization of medical information.
Finally, blockchain can solidify data exchange and benefit-allocation rules. Combining smart contracts with blockchain technology can maximize the automation of archive information sharing. Once deployed, a smart contract cannot be stopped or interfered with by external operations, and hospitals can use this feature to entrench benefit-distribution rules [40]. In medical information sharing, smart contracts turn participants' behavior into active participation, improve the efficiency and speed of information sharing, and truly maximize the value of medical information. Under this enforced information sharing, the opaque, behind-the-scenes operations seen in traditional information sharing are constrained, and the quality of medical data information is ensured.
B. Hyperledger Fabric
In recent years, cryptocurrencies, represented by Bitcoin, have achieved great success and drawn the world's attention to blockchain technology. However, such public chains suffer from low transaction throughput, long transaction times, wasted resources, and data-consistency problems. To address these issues, the Linux Foundation created the Hyperledger project in 2015; it is one of the world's largest blockchain projects and is often used as a platform for enterprise blockchain development. Hyperledger Fabric has a modular architecture that includes members, blockchain, transactions, and smart contracts, as shown in Fig. 2. To meet enterprise-level requirements for security and privacy, the member management module strengthens joining permissions: anyone involved in a transaction must be certified through the PKI public key infrastructure. The blockchain module uses the P2P protocol to manage the distributed ledger and can be configured with different consensus protocols according to requirements; it records the transaction history in the chain ledger, while the World State mechanism holds the latest state, as shown in Fig. 3. Hyperledger Fabric employs Apache Kafka (a distributed messaging system) based on ZooKeeper (a distributed services framework). Kafka is essentially a message-processing system in which consumers subscribe to specific topics and producers publish messages. In the Hyperledger Fabric network, Kafka mainly provides the transaction ordering service, i.e., it orders all transaction requests in the network. The transaction module controls data during transactions in the form of deployment transactions and invocation transactions: deployment transactions install the Chaincode on all peer nodes when executed successfully, while invocation transactions call specified functions in the Chaincode through the Fabric SDK. Smart contracts record the business logic agreed upon by members of Fabric's consortium chain and can be written in common languages such as Go and Java, overcoming the limitation of traditional blockchains that are restricted to domain-specific languages.
C. Attribute-based access control model
Attribute-based access control considers user, resource, operation, and contextual attributes together in its access control policies. It decides whether to grant access by checking whether the requester holds the correct attributes; the policy does not need to specify a relationship between the data requester and the private data in advance. Because attributes remain relatively stable while the system runs, describing policies by attributes separates attribute management from the access decision phase; policies can be added, deleted, or updated according to the actual situation, refining the access control granularity with good flexibility and scalability. Attributes are the core of the policy and can be defined by a quadruplet A ∈ {S, O, P, E}, where each field has the following meaning: A represents attributes, each of which exists as a key-value pair. S represents subject attributes, including the subject's identity, role, position, and credentials. O represents object attributes, including the object's identity, location, department, type, and data structure. P represents operation attributes, mainly describing the subject's access type on the object, such as write, modify, or delete. E represents environment attributes, including time, system status, security level, and current access. The structure of the model is shown in Fig. 4. An attribute-based access control request (ABACR) can be defined as ABACR = {AS ∧ AO ∧ AP ∧ AE}, where AS represents the subject attributes, AO the object attributes, AP the operation attributes, and AE the environment attributes. R represents a set of rules, which can also be defined by a quadruplet.
D. InterPlanetary File System (IPFS)
IPFS integrates several established techniques, including DHT, BitTorrent, Git, and SFS, to achieve its primary function of storing data locally and connecting nodes to each other for data transfer. IPFS was originally designed to build a better resource network than the now commonly used HTTP protocol, compensating for HTTP's shortcomings. Compared to HTTP, IPFS offers advantages such as fast download speeds, global storage, security, and data permanence. IPFS is essentially a content-addressable, versioned, peer-to-peer hypermedia distributed storage and transport protocol. It has the following features.
• Content addressable: IPFS only cares about the content of a file, generating a unique hash mark from the file content; the file is accessed by this unique mark, and the system checks in advance whether the mark has already been stored. If so, the file is read directly from other nodes without duplicate storage, saving space.
• Slicing large files: files placed in IPFS nodes do not depend on a storage path or name; IPFS can slice large files and download multiple slices in parallel.
• Decentralized, distributed network structure: such a structure is suited to solving the blockchain's storage-capacity bottleneck by storing large amounts of hypermedia data on IPFS.
• Encrypted storage: IPFS attaches a unique cryptographic hash to the stored digital information, and the hash of a stored file cannot be changed; the hash corresponds one-to-one to the file.
In an IPFS network, there is no need to take into account the location of the server and the name and path of the file. When a file is placed in an IPFS node, each file is given a unique hash value calculated based on its contents. When access to a file is requested, IPFS finds the node where the file is located based on the hash table and fetches the file. IPFS combined with blockchain can be a good solution to the blockchain storage problem.
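The content-addressing idea is easy to demonstrate: the lookup key is derived from the bytes themselves, so identical content always maps to the same key, and a re-upload can be detected before anything is stored. The sketch below is a toy model of that behavior using a plain SHA-256 digest; real IPFS CIDs add multihash framing and Base58 encoding on top.

```python
import hashlib

class ToyContentStore:
    """Toy content-addressed store: key = SHA-256 of the content bytes."""

    def __init__(self):
        self._blocks = {}

    def add(self, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()
        if key not in self._blocks:          # duplicate content is stored once
            self._blocks[key] = content
        return key

    def get(self, key: str) -> bytes:
        return self._blocks[key]

store = ToyContentStore()
addr = store.add(b"patient record #42")           # hypothetical record bytes
assert store.add(b"patient record #42") == addr   # same content, same address
assert store.get(addr) == b"patient record #42"
```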
IV. EXPERIMENTAL METHODS
This section mainly introduces the architecture of the medical information secure storage scheme based on blockchain and access control. Section IV-A introduces the structure of the scheme, Section IV-B presents the workflow, and Section IV-C describes the smart contract design of the scheme.
A. Scheme architecture
The architecture of the system consists of a user, an attribute-based access control model, the InterPlanetary File System (IPFS), and a blockchain; the detailed architecture is shown in Fig. 5.
Policy(P): An attribute-based access control policy, which contains four elements, namely AS, AO, AP, and AE.
Attribute of Subject(AS): It includes three main types of attributes, namely the user ID (identifying the unique identity of the user), the user role (doctor or patient), and the user department (the specific department).
Attribute of Object(AO): It includes the medical record ID (identifies the uniqueness of the record).
Attribute of Permission(AP): An attribute that indicates whether the user has access to the medical record, with 1 representing permission and 0 representing denial.
Attribute of Environment(AE): The environmental conditions required for the access control policy, mainly the creation time (when the policy was created) and the end time (when the policy expires). If the current time is later than a policy's end time, the policy is invalid.
IPFS: It is mainly used to mitigate the storage pressure of the blockchain. Medical data stored in IPFS is kept in a MerkleDAG to ensure its security and is referenced by an address hash. The address hash is then stored on the blockchain in place of the original data. In IPFS, the original data is hashed twice with SHA-256 and then Base58 encoded, resulting in a 33-byte hash, so replacing the original medical information with the hash address greatly reduces the size of each block and thus of the whole blockchain.
Blockchain: The blockchain is the heart of the solution, a distributed network of trusted nodes that ensures the synchronization and storage of medical data, thus ensuring data security and accuracy. In this solution the blockchain is developed based on Hyperledger Fabric and access control can be implemented by writing smart contracts.
B. Workflow
The workflow of the proposed scheme mainly contains four parts; this section describes each part, and the specific workflow is shown in Fig. 6. The symbols used are shown in Tab. I.
1) Part 1: The basic procedure is the construction of the blockchain network and the installation of the Chaincode; these processes must be completed by the administrator user. Part 1 is divided into the following three steps.
Step 1: Prior to building a specific blockchain network, all members of the network must register certificates, which are issued by the CA.
CA → {Cert_peer, Cert_orderer, Cert_channel, Cert_user} (6)
All peer and orderer nodes run in Docker containers, and the certificates they require must be packaged into a Docker image before the nodes can run.
After setting up all the peer and orderer nodes, channels are created, each with its own blockchain and ledger.
Step 2: With the above operations, a basic blockchain network has been built; next, the Chaincode is written to create an application.
The administrator user uses the Hyperledger Fabric SDK or Client to install the Chaincode, and all peer nodes must have the Chaincode installed.
Install(CC)
Step 3: Once the Chaincode is installed, we initialize it by calling the invoke function; the initialized Chaincode is stored in the container.
2) Part 2: This section requires the specification of relevant access control policies and the whole process needs to be agreed upon between the user and the administrator. The policy needs to be saved to the blockchain by the administrator once it has been created.
Step 1: Administrators and users set access control policies based on AS, AO, AP, and AE.
Decide(AS, AO, AP, AE) → ABACP (12)
Step 2: The administrator uploads the developed access control policy to the blockchain network.
Upload(ABACP) → Contract (13)
Step 3: The administrator runs PSC to implement operations such as adding and modifying policies and saves the final policy values to the SDB and ledger.
3) Part 3: This section implements the storage of medical information by first uploading the medical records into IPFS to get a hash address, and then saving that address to the blockchain.
Step 1: Users upload medical records to IPFS.
Step 2: IPFS translates medical records into a hash address according to its operational mechanism.
Step 3: Send the hash address to the blockchain.
Send(hash) → blockchain (17)
Step 4: Save medical information to the SDB and ledger by running the smart contract RSC.
4) Part 4:
This section is a process for accessing medical information based on attribute access control and can be divided into four specific steps.
Step 1: The user initiates a request for access to medical data.
Request → blockchain (19)
Step 2: Upon receipt of a user request, the ASC contract is called to verify that the user has access to the data.
Step 3: If the user has access rights, then the blockchain transfers the hash of the medical information to the IPFS.
Step 4: IPFS calculates the medical data requested by the user based on the hash address.
Response(Medical Record) → Client (22)
C. Smart contracts of the scheme
Smart contracts relate not only to the implementation of access control but also to the storage of medical information, and are therefore at the heart of this solution. There are three smart contracts in total: the policy contract (PSC), the access control contract (ASC), and the medical record contract (RSC).
1) Policy Contract (PSC):
The PSC provides the following methods to manipulate ABACPs.
CheckPolicy(): The PSC verifies the validity of an ABACP with this method. Each ABACP should contain AS, AO, AP, and AE; all four attributes must be present for the policy to be valid.
AddPolicy(): The PSC runs the CheckPolicy() method before calling this method to add a policy; only a legal policy can be written to the SDB and the blockchain. The details are shown in Algorithm 1.
DeletePolicy(): This method is called in two ways. First, the administrator calls it to delete an ABACP. Second, when the CheckAccess() method finds that a policy has expired, this method is called automatically to delete the useless policy. This is shown in Algorithm 2.
UpdatePolicy(): This method is called when an administrator needs to modify an ABACP. The modification record is also written to the SDB and the blockchain. After the policy is updated, this method executes the AddPolicy() method to add the modified policy back to the blockchain.
QueryPolicy(): All policies are stored in the state database CouchDB (a key-value database), and the administrator can query the details of a desired ABACP using the attributes AS or AO.
2) Access Control Contract (ASC): The ASC primarily implements the access control function, i.e., determining whether a user's access request matches the prescribed access control policy. Its methods are as follows.
CheckAccess(): This method is the core of the access control implementation, as shown in Algorithm 3. If it returns a null result, no policy supports the request and the request is invalid. If the result is not null, a policy matches the request. Finally, the request is verified against the eligible policy: if the attributes AE and AP in the policy are both satisfied, the request passes verification.
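To make the matching logic concrete, the sketch below mirrors the CheckAccess() flow in plain Python (an illustration of the decision logic only, not Fabric chaincode): find policies whose subject and object attributes match the request, skip expired ones, and grant access only when AP permits it. The field names follow the AS/AO/AP/AE definitions above; the stored policy values are hypothetical.

```python
import time

# A hypothetical stored policy following the AS/AO/AP/AE structure above.
policies = [{
    "AS": {"role": "doctor", "department": "cardiology"},
    "AO": {"record_id": "rec-001"},
    "AP": 1,                                  # 1 = permit, 0 = deny
    "AE": {"created": 1_600_000_000, "end": 1_900_000_000},
}]

def check_access(request, policies, now=None):
    """Return True if some valid, unexpired policy permits the request."""
    now = now or time.time()
    for p in policies:
        if p["AE"]["end"] < now:              # expired policy: skip (the contract
            continue                          # would call DeletePolicy() here)
        subject_ok = all(request["AS"].get(k) == v for k, v in p["AS"].items())
        object_ok = request["AO"].get("record_id") == p["AO"]["record_id"]
        if subject_ok and object_ok and p["AP"] == 1:
            return True
    return False

request = {"AS": {"role": "doctor", "department": "cardiology"},
           "AO": {"record_id": "rec-001"}}
print(check_access(request, policies))        # True -> 'valid request!'
```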
3) Medical Record Contract (RSC):
The RSC is primarily used to store a hash address representing a complete medical record. AddRecord(): The user first uploads the medical record to IPFS, which returns a hash address; this method then writes the address to the SDB and the ledger. DeleteRecord(): This method first deletes the hash address from the SDB and then deletes the complete medical record from IPFS based on the recordId. UpdateRecord(): When executed, this method first updates the medical data in IPFS to obtain a new hash address and then stores the new hash address in the SDB by calling the AddRecord() method. QueryRecord(): This method first looks up the hash address of the medical record in the SDB based on the recordId, and then sends the hash address to IPFS to be resolved into the complete medical record.
V. EXPERIMENT AND RESULTS
This section introduces the process of the experiment and the final results, which are used to verify the performance of this solution through comparison. Section V-A introduces the experimental environment, i.e. the hardware and software resources required for the experiment. Section V-B introduces the process of creating and implementing the solution. Section V-C presents the experimental results, which are used to compare and analyze the performance of the solution.
A. Experimental environment
The hardware and software resources required for the standalone environment for this solution are shown in Tab. II.
B. Creation and realization
This section mainly includes three parts: Section V-B1 introduces the network structure of the scheme and its initialization, configuration, and startup; Section V-B2 introduces the installation of the Chaincode; and Section V-B3 introduces how the attribute-based access control model is used to call the smart contracts.
1) Network architecture and initialization process: The scheme consists of a total of eight network nodes, and the steps for network initialization are shown below.
Step 1: Use cryptogen tools to generate organization structure and identity certificates for your network.
Step 2: Use the configtxgen tool to generate the creation block for Orderer, the configuration transaction file for the channel, and the anchor node configuration update file for each organization.
Step 3: First start the fabrics network with docker-compose, then use client nodes to create channels, and finally add each peer node to the channels.
2) Chaincode installation and upgrade: First, installation: after the blockchain network is initialized, the Chaincode can be installed; the Hyperledger client node installs it on each peer node in turn. Second, instantiation: after installing the Chaincode, any peer node can be designated to instantiate the installed Chaincode. Finally, upgrade: before updating the Chaincode, the new Chaincode must be installed, i.e., the Chaincode update is only valid on peer nodes with the new Chaincode installed.
3) System implementation: In Hyperledger Fabric users can access the blockchain via a client or an SDK, in this scenario a client written by the SDK will be used to interact with the blockchain. The specific steps are as follows.
Step 1: The CA node generates a key pair for the client, which is stored in the user's wallet.
Step 2: The administrator connects the client to the peer node, and once the link is complete, the transaction can be submitted or evaluated.
Step 3: First the orderer node completes the ordering process, then consensus is reached among the peer nodes, and finally the state database can be queried or updated. To add a policy, the AddPolicy() method in the PSC is called, as shown in Fig. 7. As shown in Fig. 8, to check whether a policy has been added successfully, the QueryPolicy() method in the PSC can be called to query the details of the policy. As shown in Fig. 9, a policy can be updated by calling the UpdatePolicy() method in the PSC to adapt it to a new case. If a policy becomes invalid or the administrator needs to force its deletion, the policy can be deleted by calling the DeletePolicy() method in the PSC, as shown in Fig. 10.
As shown in Fig. 11, if the Medical Centre needs to add a new medical record, it can do so by calling the AddRecord() method in the RSC. As shown in Fig. 12, if the medical center needs to query the details of a medical record, it can call the QueryRecord() method in the RSC. If a medical record needs to be adjusted in real time due to a change in the patient's condition, the UpdateRecord() method in the RSC can be called to update it. If a medical record needs to be deleted due to age or other reasons, it can be deleted by calling the DeleteRecord() method in the RSC. After receiving the user's request, the CheckAccess() method in the ASC is called automatically to verify whether the request is reasonable; if so, it returns 'valid request!', otherwise the request is invalid. The details are shown in Fig. 13.
The following conclusions are drawn from the experimental results. First, add and update operations take longer, while query and delete operations take less time. Second, the throughput of add and update operations is lower than that of query and delete operations, and throughput does not decrease significantly once the number of concurrent requests reaches a certain value. Although PoW consensus achieves complete decentralization, taking too long to reach consensus wastes substantial resources, whereas Kafka consensus can not only deliver high transaction throughput but also provide a sufficient fault-tolerant workspace for the consensus and ordering services. As shown in Fig. 16, in the second group of experiments we compared the consensus times of the Kafka consensus mechanism adopted in this scheme and the PoW consensus mechanism for different numbers of nodes (between 10 and 100). The results show that this scheme reaches consensus in a short time. The two groups of experiments prove that the scheme not only maintains high throughput in a large-scale request environment but also reaches consensus effectively in a distributed system.
VI. CONCLUSION
This paper combines blockchain technology with an attribute-based access control model, taking full advantage of blockchain to break down information silos in medical data and safeguard the security and privacy of medical information. In addition, the InterPlanetary File System is used for storage to ease the storage pressure on the blockchain. The scheme uses a distributed architecture to achieve dynamic fine-grained access. The deployment and invocation of the Chaincode are described in detail, and the scheme is validated through experiments. In conclusion, this paper provides a practical reference for related research and offers ideas for researchers. Future work could make improvements in the following areas.
1) This scheme was implemented on a single PC; in the future, clusters could be used to further optimize the performance of the distributed system.
2) This scheme is based on the Kafka consensus mechanism. To further reduce computing cost and improve consensus efficiency, combinations with other consensus algorithms could be considered, such as pairing Byzantine fault-tolerant algorithms with non-Byzantine fault-tolerant ones.
3) This paper combines IPFS and blockchain to alleviate the storage pressure of the blockchain, but this is only a transitional stage; in the future, the data storage problem should be solved within the blockchain itself.
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Authors Contribution
ZS proposed and developed the new idea of the paper and drafted it. HD and LD substantially revised it. WX and WZ conducted the data analysis and text combing. CC is responsible for supervision. All authors read and approved the final manuscript.
| 8,667.4 | 2022-01-25T00:00:00.000 | [
"Computer Science"
] |
NOISY IMAGE SEGMENTATION USING A SELF-ORGANIZING MAP NETWORK
Image segmentation is an essential step in image processing. Many image segmentation methods are available, but most of them are not suitable for noisy images or require a priori knowledge, such as knowledge of the type of noise. To overcome these obstacles, a new image segmentation algorithm is proposed using a self-organizing map (SOM) with some changes in its structure and training data. In this paper, we choose a pixel with its spatial neighbors and two statistical features, the mean and the median, computed over a block of pixels, as the training data for each pixel. This approach helps the SOM network recognize the noise model and, consequently, segment noisy images well by using spatial information together with the two statistical features. Moreover, a two-cycle thresholding process is applied at the end of the learning phase to merge or remove extra segments, which allows the proposed network to recognize the correct number of clusters/segments automatically. A performance evaluation of the proposed algorithm is carried out on different kinds of images, including medical imagery and natural scenes. The experimental results show that the proposed algorithm has advantages in accuracy and robustness against noise in comparison with well-known unsupervised algorithms.
INTRODUCTION
Image segmentation is the process of dividing an image into regions with similar attributes [1,2]. It is an important step in the image analysis chain, with applications to satellite images (e.g., locating objects such as roads and forests), face recognition systems, and medical imaging [3]. The objective of segmentation is to simplify and/or change the representation of an image into something more meaningful and easier to analyze [3]. The result of image segmentation is a set of regions that collectively cover the entire image, or a set of contours extracted from the image. Each pixel in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture.

Image segmentation can be considered a kind of clustering, which groups similar pixels together. Clustering by supervised and unsupervised learning [4] is the most popular segmentation technique. Until recently, most segmentation methods were supervised, such as Maximum a Posteriori (MAP) [5] or Maximum Likelihood (ML) [6], with an average efficiency rate of about 85% [7,8]. In supervised methods, a priori knowledge is needed for a successful segmentation process, and sometimes the required information is not available.

On the other hand, unsupervised techniques use inherent features extracted from the image for segmentation. Unsupervised segmentation based on clustering includes K-means, Fuzzy C-Means (FCM), and artificial neural networks (ANNs). The K-means algorithm is a hard segmentation method because it either assigns a pixel to a class or it does not [4]. FCM uses a membership function so that a pixel can belong to several clusters with different degrees of membership. One important problem of these two clustering methods is that the number of clusters must be known beforehand. ANNs can change their responses according to environmental conditions and learn from experience. The Self-Organizing Map (SOM) [9,10], or Kohonen's map, is an unsupervised ANN that uses a competitive learning algorithm. The SOM's features are very useful in data analysis and data visualization, which makes it an important tool in image segmentation [4]. Although the use of the SOM in image segmentation is well reported in the literature [9,11], its application under noisy conditions is not widely known.
This paper proposes an extended self-organizing map to segment images under noisy conditions with high performance. Using two statistical features, the mean and the median, calculated over a block of pixels, together with all pixels in this block, as part of the SOM input training data for each pixel leads to segmentation that is robust to noise.
SELF-ORGANIZING MAP
The SOM, introduced by Kohonen [12], is an unsupervised learning neural network. It projects a high-dimensional space onto a one- or two-dimensional discrete lattice of neuron units. Each node of the map is defined by a vector $W_{ij}$ whose elements are adjusted during training. An important feature of this neural network is its ability to process noisy data. The map preserves topological relationships between inputs in such a way that neighboring inputs in the input space are mapped to neighboring neurons in the map space [13].

In the SOM, the neurons are arranged into the nodes of a lattice, as shown in Figure 1 [14]. The basic SOM model consists of two layers: the first layer contains the input nodes and the second contains the output nodes, arranged in a two-dimensional grid [15,16]. Every input is connected extensively to every output via adjustable weights [17].

The best matching unit (the winner neuron) is determined by the minimum Euclidean distance to the input. Let $x$ be the input and $W_{ij}$ the weight vector of node $(i,j)$. The vector $x$ is compared with all weight vectors, and the node with the smallest Euclidean distance $d_{ij}$ is chosen as the best matching unit (BMU) or winner node:

$$d_{ij} = \|x - W_{ij}\|, \qquad c = \arg\min_{(i,j)} d_{ij} \qquad (1)$$

The weight vectors of the winning output neuron and its neighbors are then adjusted as follows:

$$W_{ij}(t+1) = W_{ij}(t) + \alpha(t)\,[x(t) - W_{ij}(t)], \qquad (i,j) \in N_c(t) \qquad (2)$$

where, for time $t$ and a network with $n$ neurons, $\alpha$ is the gain sequence ($0 < \alpha < 1$) and $N_c$ is the neighborhood of the winner ($1 < N_c < n$). The basic training algorithm is quite simple:
1) Each node's weights are initialized.
2) A vector is chosen at random from the set of training data.
3) Every node is examined to determine which node's weights are most alike the input vector; the winning node is commonly known as the best matching unit (BMU).
4) The neighborhood of the BMU is calculated; the number of neighbors decreases over time.
5) The weights of the BMU and its neighbors are updated according to equation (2).
6) If $N_c \neq 0$, repeat from step 2.
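As a concrete illustration of equations (1)-(2) and the steps above, the following is a minimal NumPy sketch of the SOM training loop. The grid size, learning-rate schedule, and neighborhood schedule are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal SOM training sketch following equations (1)-(2); all hyperparameter
# values here are illustrative placeholders.
import numpy as np

def train_som(data, grid_h=8, grid_w=8, epochs=20, alpha0=0.5, radius0=4.0):
    n_features = data.shape[1]
    rng = np.random.default_rng(0)
    W = rng.random((grid_h, grid_w, n_features))          # step 1: init weights
    grid = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                indexing="ij"), axis=-1)  # node coordinates
    n_steps = epochs * len(data)
    for t in range(n_steps):
        x = data[rng.integers(len(data))]                 # step 2: random sample
        d = np.linalg.norm(W - x, axis=-1)                # eq. (1): distances
        bmu = np.unravel_index(np.argmin(d), d.shape)     # step 3: winner node
        alpha = alpha0 * (1 - t / n_steps)                # decaying gain
        radius = max(radius0 * (1 - t / n_steps), 1.0)    # step 4: shrinking N_c
        in_hood = np.linalg.norm(grid - np.array(bmu), axis=-1) <= radius
        W[in_hood] += alpha * (x - W[in_hood])            # eq. (2): update
    return W
```

After training, each input vector is assigned the label of its BMU, which is what the segmentation step then operates on.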
PROPOSED SOM METHOD
Although the normal SOM gives good results thanks to features such as its learning capability from examples, generalization ability, and nonparametric estimation, it suffers from two main problems. First, it is highly dependent on how representative the training data are [18], especially in noisy situations, so choosing suitable training data is one of the most important parts of the SOM. Second, the normal SOM cannot recognize the number of segments or clusters automatically. This section proposes a new SOM algorithm to segment input images in both noisy and non-noisy cases. The algorithm consists of two steps. First, the training data for each pixel are chosen as the pixel together with the values of its block and two statistical features. The block of each pixel is made up of its spatial neighbors. The two statistical features, the median and mean values, computed from the intensity values of the block, are employed to recognize the noise model: the median value helps the proposed SOM identify salt-and-pepper noise, and the mean value helps it identify Gaussian noise. In fact, the proposed SOM takes spatial information and the two statistical features into account in order to recognize the noise model and, consequently, segment noisy images well; a sketch of this feature construction follows.
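The sketch below shows how such a per-pixel training vector could be assembled. The 3×3 block size is an assumption for illustration, and border handling (padding) is omitted.

```python
# Sketch of the per-pixel training vector described above: the pixel's block
# of spatial neighbors plus the block mean and median. Illustrative only.
import numpy as np

def pixel_features(img, i, j, half=1):
    """Training vector for pixel (i, j): its (2*half+1)^2 block of spatial
    neighbors plus the block mean and median (border padding omitted)."""
    block = img[i - half:i + half + 1, j - half:j + half + 1].astype(float)
    return np.concatenate([block.ravel(), [block.mean(), np.median(block)]])
```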
Second, we use a maximum cluster number instead of a predefined number of SOM output clusters. In addition, a two-cycle thresholding process is applied at the end of the SOM learning phase to remove unnecessary clusters. By using these two thresholds, no prior knowledge about the number of clusters is needed in the SOM method. The two cycles are described below.
In the first cycle, we remove clusters whose number of member pixels is less than a specific threshold T1 (clusters with few pixels); T1 is computed from the number of image pixels. The pixels of the removed clusters are then reassigned to the cluster whose center is at the nearest Euclidean distance from the pixel. To reduce the over-segmentation problem, in the second cycle two clusters are merged if the distance between their cluster centers is less than a predefined threshold T2. A sketch of this post-processing is given below; Figure 2 shows a scheme of the proposed method.
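The following is a sketch, under stated assumptions, of the two-cycle post-processing: cycle 1 removes clusters smaller than T1 and reassigns their pixels, and cycle 2 merges clusters whose centers are closer than T2. It assumes grayscale data with scalar cluster centers indexed by label 0..K-1; the function and variable names are hypothetical.

```python
# Illustrative sketch of the two-cycle thresholding described above.
import numpy as np

def two_cycle_thresholding(intensities, labels, centers, t1, t2):
    """intensities: (N,) pixel values; labels: (N,) cluster ids 0..K-1;
    centers: (K,) scalar cluster centers (grayscale assumption)."""
    labels = labels.copy()
    # Cycle 1: remove clusters with fewer than t1 member pixels.
    ids, counts = np.unique(labels, return_counts=True)
    survivors = ids[counts >= t1]
    for small in ids[counts < t1]:
        mask = labels == small
        # Reassign each orphaned pixel to the nearest surviving center.
        nearest = np.argmin(np.abs(intensities[mask, None]
                                   - centers[survivors][None, :]), axis=1)
        labels[mask] = survivors[nearest]
    # Cycle 2: merge pairs of surviving clusters whose centers are closer than t2.
    for i, a in enumerate(survivors):
        for b in survivors[i + 1:]:
            if abs(centers[a] - centers[b]) < t2:
                labels[labels == b] = a
    return labels
```

With the settings reported later in the paper, t1 would be 5% of the number of image pixels and t2 = 75.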
EXPERIMENTAL RESULT
This section presents several simulation results on the segmentation of medical images and the well-known public Berkeley segmentation dataset (Fig. 3), illustrating the ideas presented in the previous section. Three images are shown in Figure 3. The first is a brain MRI that consists of four objects: CSF, white matter, grey matter, and background, from [19]. The second is an X-ray image of a vessel with intensity inhomogeneity, which consists of two objects, the vessel and the background, from [20]; the aim is to extract the vessel. The third is a cameraman image, which consists of three regions, from [21].

These images are commonly used in papers [20-23] for image segmentation purposes, and the algorithms compared here have employed them in their experiments. The original images are stored in grayscale with 8 bits per pixel and intensities ranging from 0 to 255. We cluster the pixels of each image with our new algorithm and compare the results with the K-means [24], FCM [25], and standard SOM [12] methods for image segmentation. In this paper, T1 is set to 5% of the number of image pixels and T2 = 75. Moreover, the maximum number of clusters is set to 12 in the experiments.

Figures 4 and 5 present the simulation results for the three images corrupted with 15% and 25% salt-and-pepper noise, respectively. Moreover, to show that the proposed method is robust to Gaussian noise, a further experiment was designed: we added Gaussian noise with mean 0 and variance 0.25 to the three images in Figure 3 and present the results in Figure 6. From Figures 4-6, it can be seen that the proposed method performs well for all images, whereas the other methods fail in most situations. The reason is that the proposed SOM adjusts the weight vectors of the winning output neuron using the spatial and statistical information of each pixel and consequently shows lower susceptibility to noise. The standard SOM algorithm, as well as the K-means and FCM methods, are concerned only with intensity information, which is changed by noise; consequently, these algorithms are sensitive to noise.
CONCLUSION
In this paper, we have presented a robust and effective approach for the segmentation of natural and medical images corrupted by different types of noise. For the segmentation of noisy images, the proposed approach utilizes SOM-based clustering with spatial and statistical information computed over a block around each image pixel. In addition, the use of a two-cycle thresholding process leads to automatic segmentation that does not require prior knowledge of the number of clusters. The efficiency and robustness of the proposed approach in segmenting both medical and natural images under different types and levels of noise have been demonstrated experimentally.
Fig. 5. The segmentation results for the three images contaminated with 25% salt-and-pepper noise (first column), generated by K-means (second column), FCM (third column), the standard SOM (fourth column), and the proposed SOM (fifth column).

| 2,500.4 | 2015-05-14T00:00:00.000 | [ "Computer Science", "Engineering" ] |
Laser marking and engraving of household and industrial plastic products
Laser marking and engraving have developed in many ways into an attractive process for the identification of consumer goods made of plastic. Laser marking is a quick and inexpensive process that offers a variety of flexible options for designing identification marks (barcodes, security information, codes). This report examines the possibility of marking PVC products used in the electronics industry, in different colors, using a CO2 laser technological system. The functional dependences of the width and depth of the marking lines on the main technological parameters, the average power and the processing speed, are analyzed. The analysis aims to help determine the optimal working intervals for marking and engraving by the barcoding method, as well as for the coding and reading of information on household PVC products used by visually impaired people. It further aims to help determine the optimal operating intervals of speed and power for a given geometry of the ablation zone when marking and engraving products for different users.
Introduction
PVC is now one of the world's major polymers, and a large amount of PVC is produced worldwide because of its superior mechanical and physical properties [1]. PVC has a relatively low intrinsic thermal stability; however, its amorphous nature allows it to mix easily with many different substances that can be used to tailor the properties of PVC products within certain limits [2,3]. The poor thermal stability of PVC is attributed to structural defects formed during polymerization [4].
Laser marking can be used in various technological processes. A laser marking device can be readily integrated into a production line. The marking is easily legible, abrasion-resistant and can be applied to areas which are difficult to access by conventional marking methods [5]. In many branches of industry, the direct marking on the product method (DPM) is used to allow identification of the final industrial product [6,7].
Bitay has researched the markability of PVC-coated automotive insulation cables by 1064 nm and 532 nm laser beams [8]. Blazevska-Gilev et al. has studied the effects of IR laser ablation of PVC [9].
Laser systems of different wavelengths can be used to create markings on PVC. UV lasers (such as excimer lasers) cause a color change through pigmentation, whereas IR lasers produce ablation and carbonization of thermoplastic materials. A laser mark is created by the insulation absorbing the electromagnetic energy and by the subsequent alteration of the polymer, which creates a visible and contrastable mark [10]. The quality of a mark is assessed by its contrast and by the width and depth of its characters [11,12]. The material's color affects the amount of energy absorbed by the PVC. In cases where it is not possible to create a contrasting mark, a legible mark can still be created by selectively melting the PVC surface; this can be carried out with a CO2 laser system [10].
CO2 lasers emit electromagnetic radiation in the mid-IR range (λ ≈ 10,600 nm), which interacts with polyvinyl chloride by causing thermal degradation and sublimation or vaporization of the material [13]. Many organic polymers (including PVC) absorb strongly at this wavelength. The main process occurring during the interaction of laser light with PVC is dehydrochlorination: in the initial stages of degradation, HCl starts "zipping off" from the polymer backbone, resulting in the formation of polyene sequences [13].
The aim of this study is to find the method and optimal parameters of applying information on the surface of different colored types of PVC cables by a CO2 laser, so that it meets the conditions for both visual and automated reading of the recorded information. Two of the most important factors that affect the quality of the marking are the depth and width of the characters marked/engraved on the sample PVC channel, which is the subject of this study.
Technologies of cable marking
The range of offered types of cables and wires, differing in purpose, parameters, design, etc., is quite large. They provide useful information and are easily recognizable thanks to the special marking of each type of cable.
Cable marking usually refers to the application of various characters, labels and symbols that allow the parameters of the cable or wire to be determined. This information includes technical information on the thickness of the section, the number of cores, their cross section, the type of core insulation, the type and material of insulation used and its intended purpose. This information may correspond to national, international or industry standards. Usually it includes a combination of letters and numbers and is applied by the manufacturers of the external insulation of the cable (figure 1).
One of most commonly used identifiers of cable or wire is the color of its insulation. For example, in the case of multi-core cables, the insulation of the individual cores is of a different color, which allows one to quickly determine the purpose of each core. Cables and wires can be used in different climatic conditions. Combinations of alphanumeric or numeric designations are also often used to distinguish different embodiments. The numbers often indicate under what environmental conditions it is appropriate to use a cable or wire, for example outdoors, indoors, in high humidity, etc. The class of resistance to high temperatures is also indicated.
In the marking of symmetrical cables, in addition to letters, numbers are placed, which show the number and type of groups of cable conductors, as well as the diameter of the conductors. In the case of main coaxial communication cables, figures are also affixed which are related either to the diameters of Markings can be applied to the cable by various methods or by using special equipment. One of the most common ways of marking is through the application of color or embossed printing; usually the embossed markings are colorless. The application of such marking is often implemented by means of rotating marking discs.
Inkjet and laser printers are also used for marking. They are suitable for marking cable insulation of the most commonly used materials. An advantage of using printers for marking is the ability to freely program the text that will be marked on the cable.
Equipment and materials
Laser system: An ST-CC9060 CO2 laser marking system was used in the experiments on marking the samples. It is specially designed for marking non-metallic materials and is equipped with a CO2 laser with a fully sealed cavity design and a high-speed X-Y coordinate table (figure 3).

Principle of operation: the laser tube is filled with CO2 gas as the laser gain medium; when a high voltage is applied to the electrodes, a glow discharge is generated in the tube, which causes the gas molecules to emit laser radiation. After amplification, the laser beam leaves the resonator through the output mirror and is used for material processing. The X-Y coordinate table is controlled by a specialized computer program to perform the marking of the samples.

An OLS-5000 SAF laser microscope (figure 2) was used for measuring the width and depth of the laser-affected zone. The magnification used is ×2000, with a repeatability of 0.03 μm.

Figure 3. Scheme of the experimental setup for marking PVC with a CO2 laser.
Materials / Experimental samples
PVC is a thermoplastic that can range from soft, flexible materials to hard, rigid plastics. Rigid PVC is easily machined, heat-formed, welded, and even solvent-cemented, and it can also be machined using standard metal-working tools. It is called "unplasticized" because it is less flexible than the plasticized formulations. PVC has a broad range of applications, from high-volume construction-related products to simple electric wire insulation and coatings [14,15]. The differences between flexible and rigid PVC are shown in table 2. Polyvinyl chloride is obtained by chain polymerization of its monomer, produced from chlorine (57 wt.%, manufactured by chlor-alkali electrolysis of brine, yielding chlorine, sodium hydroxide, and hydrogen as co-products) and ethylene (43 wt.%) via 1,2-dichloroethane [16].

There are three broad classifications for rigid PVC compounds: Type I, Type II, and CPVC. Type II differs from Type I in its greater impact values but lower chemical resistance, while CPVC has a greater high-temperature resistance [17].
Samples of colored cables with a PVC sheath were prepared for the experimental studies. The dimensions of the samples were 30 mm×150 mm. The following types of cable colors were used in the experiments: white, blue, turquoise, red, orange, green, pink and black.
Methodology
We analyzed the width b and depth h of the marked channel as functions of the power P of the laser radiation and of the processing speed. The experiments were grouped into two series of measurements. Samples of the following colors of PVC cables were prepared for the whole study: white, blue, turquoise, red, orange, green, pink, and black.

Table 3. Comparison of the change in the width b of the laser ablation zone for two separate speed intervals (20 mm/s to 140 mm/s and 140 mm/s to 380 mm/s).

As can be seen in table 3, for the three selected color samples the average ratio b20/b140 is 125% and the average ratio b140/b380 is 128%; therefore, there is no major difference between the width changes in the two intervals. For the same three color samples, the h20 values in table 4 are on average 7.94 times larger than the h140 values, and the h140 values are 2.62 times larger than the h380 values, so the depth changes much more sharply in the first interval than between 140 mm/s and 380 mm/s. The results of the second series of experiments are summarized in figure 6 and figure 7 as graphs of the width b and the depth h as functions of the power P. The results for the yellow, green, and blue samples were selected from the tested samples in order to check the effect of the color, or more precisely of the absorption of the laser wavelength by the color of the treated surface.
To analyze the changes in the width and the depth of the processing (marking) area as a function of the laser power, we divided the graphs in figures 6 and 7 into two intervals: from 1.5 W to 5.1 W, and from 5.1 W to 10 W.

The rate of width change Δb/ΔP between 1.5 W and 5.1 W is 11.6 times greater than that between 5.1 W and 10 W. The rate of depth change Δh/ΔP between 5.1 W and 10 W is 1.56 times greater than that between 1.5 W and 5.1 W. Tables 5 and 6 show, in absolute values, how many times the width b and the depth h increased at the end of the two intervals compared to the values at the beginning of these intervals.
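The interval comparison above amounts to comparing the slopes Δb/ΔP on the two power intervals; a small illustrative calculation follows. The width values are hypothetical placeholders, not measurements from the paper (which reports a low/high ratio of about 11.6 for its data).

```python
# Illustrative calculation of the slope comparison described above. The width
# values below are hypothetical placeholders, not measurements from the paper.
def slope(p_lo, p_hi, b_lo, b_hi):
    """Rate of change of the line width with power, in um per W."""
    return (b_hi - b_lo) / (p_hi - p_lo)

b_at_1p5, b_at_5p1, b_at_10 = 40.0, 180.0, 195.0   # hypothetical widths (um)
r_low = slope(1.5, 5.1, b_at_1p5, b_at_5p1)        # 1.5 W to 5.1 W interval
r_high = slope(5.1, 10.0, b_at_5p1, b_at_10)       # 5.1 W to 10 W interval
print(f"(db/dP)_low / (db/dP)_high = {r_low / r_high:.1f}")
```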
Summary
The recommended depth h of the marked area depends on the field of application of the cables and on the thickness of the material. For insulating materials, it is recommended that the marking depth not exceed 10% of the total thickness. The width of the marking should preferably be large enough to be perceived well visually. The optimal marking speed v obtained is from 140 mm/s to 340 mm/s, and the optimal power P of the laser radiation is in the range from 5.1 W to 10 W. The color of the PVC sample does not play a considerable role in the process of marking with a CO2 laser: for the colors studied, the differences in the depth and width of the marked areas are insignificant.
However, one could notice that the graphs of the darker cables lie above those of the lighter ones, although the difference is minor. This can be explained by the higher absorption of laser radiation by dark colors than light ones.
Conclusion
This study is based on the real needs of manufacturers of PVC electrical and optical cables, but it may also be of interest to other companies that manufacture PVC products for the automotive and aerospace industries. For such products, it is necessary to mark a relatively large amount of information on a small, limited area of the product. The aim is, on the one hand, to achieve maximum resistance of the marking to external influences and, on the other, to ensure a high degree of automation of the process of decoding the information from QR codes or barcodes during reading.
The present study examines the geometry of the marked area as a function of the average power and processing speed, which is directly related to the resistance to external influences.
In the second stage of our research we plan to focus on the contrast and readability of the marked information when applying QR codes, as a function of the laser parameters and the exposure time during processing. Another important parameter planned for study is the influence of defocusing, since a large part of marked PVC products have a spherical, conical, or cylindrical shape, which in turn affects the choice of the optimal area in which the marking should be located.

| 3,078.6 | 2021-03-01T00:00:00.000 | [ "Materials Science" ] |
Elementary Equations of Variant Measurement
Four variant measures are used to represent combinatorial functions, including binomial coefficients. These variant measures are based on two types of m-bit vectors: Type A corresponds to non-periodic boundary conditions, while Type B corresponds to periodic boundary conditions. For each type, groups containing the four variant measures are formed, which are invariant under permutative and associative operations. By mapping two group elements of Type B onto the coefficients of binomial decompositions, patterns similar to Pascal's triangle are observed.
Introduction
For any n 0-1 variables, variant logic provides a $2^{n!} \times 2^{2^n}$-dimensional configuration space [16,17] to support measurement and analysis [14,15], which is a real difficulty for practical activities [1,9-11]. From a measurement-analysis viewpoint [6-8,13], it is essential to manipulate static states and their measure-based clustering, which form the core content of any 0-1 measurement framework. In this chapter, starting from a 0-1 vector of m variables, binomial expressions are applied to support the four meta measures of variant partitions and the associated multinomial expressions.

Using permutative and associative operations, various variation and invariance properties are investigated, and from a global-invariance viewpoint, combinatorial clustering properties are systematically explored. Let $x$ be an m-bit vector, $x = x_0 x_1 \cdots x_i \cdots x_{m-1}$, $x_i \in \{0,1\}$, $0 \le i < m$, $x \in B_2^m$. Each $x$ is an m-bit state. From a variation viewpoint, two types $\{A, B\}$ are distinguished. Let $\{m_\perp, m_+, m_-, m_\top\}$ be the four measuring operators.
Type A Measures
For a pair of adjacent elements $(x_i, x_{i+1})$, $0 \le i < m-1$ (non-periodic boundary conditions), the four measures count the occurrences of the four possible bit patterns:

$$m_\perp(x) = |\{i : (x_i, x_{i+1}) = (0,0)\}|, \qquad m_+(x) = |\{i : (x_i, x_{i+1}) = (0,1)\}|,$$
$$m_-(x) = |\{i : (x_i, x_{i+1}) = (1,0)\}|, \qquad m_\top(x) = |\{i : (x_i, x_{i+1}) = (1,1)\}|.$$
Type B Measures
A pair of elements $(x_i, x_{(i+1) \bmod m})$, $0 \le i < m$, is linked as a ring (periodic boundary conditions); the four measures count the same four bit patterns over all $m$ ring pairs.
Let $p$ be the number of 1 elements, $p(x) = m_+(x) + m_\top(x)$; then the number of possible $x$ vectors with a given $p$ is $\binom{m}{p}$.
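A minimal Python sketch of the four measures under both boundary conditions follows; the function name and the example vector are illustrative only.

```python
# Sketch of the four variant measures via pair counting, following the
# definitions above; Type A uses non-periodic pairs, Type B closes a ring.
def variant_measures(x, periodic=False):
    """x: list of 0/1 bits. Returns (m_bot, m_plus, m_minus, m_top)."""
    m = len(x)
    pairs = [(x[i], x[(i + 1) % m]) for i in range(m)] if periodic \
        else [(x[i], x[i + 1]) for i in range(m - 1)]
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for pair in pairs:
        counts[pair] += 1
    return counts[(0, 0)], counts[(0, 1)], counts[(1, 0)], counts[(1, 1)]

x = [1, 0, 1, 1, 0, 0, 1, 0]
m_bot, m_plus, m_minus, m_top = variant_measures(x, periodic=True)
assert m_plus + m_top == sum(x)   # p(x) = m_+(x) + m_T(x) for Type B
assert m_plus == m_minus          # a ring has equal 0->1 and 1->0 counts
```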
Partition
For either Type A or Type B, internal parameters are associated with the four meta measures. For a brief analysis, Type B is selected as the initial part, and multinomial coefficients are applied to partition the relevant binomial coefficients. Using the m variables, the number p, and q branches, the corresponding equations are formulated. Under the partition condition, the vector x itself can be ignored.

Based on equivalent quantitative numbers, there is a one-to-one correspondence between the four meta measures and the relevant quantitative measures, providing a global restriction from which an equivalent expressional framework is established.

From an expressional viewpoint, different partitions are investigated, from a single binomial coefficient to a set of multinomial coefficients with equivalent properties among the different expressions. The partitions undertaken at various levels are illustrated in the following sections. For a binomial coefficient, multiple levels of representation are involved, and the first level and the n-th level can be connected through a chain of functions. The core content of this chapter is to establish a global invariant framework using n levels of representations by deriving the functions $f_l$ and $g_l$.
Variation Space
Let $\{a, b, c, d\}$ be a set of four distinct measures. Two operations, permutative and associative, can be defined. For an ordered tuple of the four measures $(a, b, c, d)$, the permutative operator $\pi: (a, b, c, d) \to (\pi(a), \pi(b), \pi(c), \pi(d))$ maps one measure to another measure.

The associative operator $\alpha: \{a, b, c, d\} \to \alpha\{a, b, c, d\}$ groups one to multiple measures while keeping the initial ordering.

For example, $(a, b, c, d) \to (b, a, c, d)$ is a permutative operation and $\{a, b, c, d\} \to \{a, b\}\{c\}\{d\}$ is an associative operation.

A permutative operation changes the order of the four tuple variables, and an associative operation changes the sequential relationship between neighbouring elements. Under normal arithmetic conditions, the two operations are conservative under addition, with global invariance properties. From an algebraic viewpoint, the two operations are independent.
Invariant Combination
Using both permutative and associative operations, various combinatorial invariants can be identified.
Type A Invariants
Five invariant groups can be distinguished.
Type B Invariants
For Type B, let $b = c$; the following simplification can then be performed.
Combinatorial Expressions of Type B Invariants
Applying $m_\perp = m - p - q$, $m_+ = m_- = q$, and $m_\top = p - q$ to replace $\{a, b, c, d\}$, there are 11 effective formulas.

Corollary 1. The Type B invariants include 11 nontrivial expressions.

Proof. Only the 0 item is trivial.
Two Combinatorial Formulas and Quantitative Distributions
From a combinatorial viewpoint, the item-1 formula is a binomial coefficient $\binom{m}{p}$, $0 \le p \le m$, showing various partition properties with the relevant parameters. For convenient illustration, two expressions are selected: $\{m-p\}\{p\}$ and $\{2q\}\{m-2q\}$, from the two clusters of the 2b item of Type B.

Applying the Chu-Vandermonde identity to identify $\{m-p\}\{p\}$ with $f_1$ and $f_2$ in Eq. (18), the binomial coefficient at level n = 2 can be written as

$$\binom{m}{p} = \sum_{k=0}^{p} \binom{m-p}{k}\binom{p}{p-k}.$$

In this way, each binomial coefficient $\binom{m}{p}$ is composed of $p+1$ pairs of binomial coefficient multiplications and a total sum over the relevant groups. For example, when m = 10, all coefficients lie in an 11 × 11 region, and the nontrivial values form a triangular shape with reflection symmetry in the p values.
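A quick numerical check of the reconstructed decomposition, for illustration:

```python
# Verify C(m, p) = sum_k C(m-p, k) * C(p, p-k) for a sample case.
from math import comb

m, p = 10, 4
lhs = comb(m, p)
rhs = sum(comb(m - p, k) * comb(p, p - k) for k in range(p + 1))
assert lhs == rhs == 210  # p + 1 = 5 pairs contribute to the sum
```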
Theorem 1 For all coefficients of Type B, sum of all coefficients in
For example, when m = 10, all coefficients are arranged on 6 levels of 11 × 11 regions with multiple symmetry properties.
Result Analysis
The two formulas selected from the 2b item of Type B show completely different properties. In Case I, for a given m, all coefficients are distributed in one triangular area with reflection properties in the p direction.

Case II, however, provides multiple levels of 2D distributions, each corresponding to a selected q value. Of the three listed conditions, q = 0 and q = 5 are linear structures: the first is located on the diagonal positions of the plane and the second on the horizontal region k = 0, p = {0, 1, ..., 10}. For 0 < q < 5, all distributions appear as parallelograms, and each line shows special symmetries. We can observe that, as q varies, the horizontal projection stays the same while the vertical projection changes from a binomial distribution at q = 0 to a pulse at q = m/2. This type of controllable property could be useful for future advanced applications.
Conclusion
A new approach to decomposing binomial coefficients under permutative and associative operations is proposed. Using this approach, it is feasible to investigate the four meta measures in global invariant spaces. The resulting set of 192 configurations is categorized using standard group-theoretic mechanisms. From a statistical viewpoint, Type A (five levels in 16 clusters) and Type B (five levels in 12 clusters) provide global identifications of complicated partitions under wider restrictions; further theoretical exploration and practical applications are anticipated.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

| 1,829.8 | 2018-12-18T00:00:00.000 | [ "Mathematics" ] |
From warrior genes to translational solutions: novel insights into monoamine oxidases (MAOs) and aggression
The pervasive and frequently devastating nature of aggressive behavior calls for a collective effort to understand its psychosocial and neurobiological underpinnings. Regarding the latter, diverse brain areas, neural networks, neurotransmitters, hormones, and candidate genes have been associated with antisocial and aggressive behavior in humans and animals. This review focuses on the role of monoamine oxidases (MAOs) and the genes coding for them, in the modulation of aggression. During the past 20 years, a substantial number of studies using both pharmacological and genetic approaches have linked the MAO system with aggressive and impulsive behaviors in healthy and clinical populations, including the recent discovery of MAALIN, a long noncoding RNA (lncRNA) regulating the MAO-A gene in the human brain. Here, we first provide an overview of the MAOs and their physiological functions, we then summarize recent key findings linking MAO-related enzymatic and gene activity and aggressive behavior, and, finally, we offer novel insights into the mechanisms underlying this association. Using the existing experimental evidence as a foundation, we discuss the translational implications of these findings in clinical practice and highlight what we believe are outstanding conceptual and methodological questions in the field. Ultimately, we propose that unraveling the specific role of MAO in aggression requires an integrated approach, where this question is pursued by combining psychological, radiological, and genetic/genomic assessments. The translational benefits of such an approach include the discovery of novel biomarkers of aggression and targeting the MAO system to modulate pathological aggression in clinical populations.
Introduction
Aggression is an evolutionarily conserved, complex set of behaviors aimed at inflicting physical and/or emotional harm to others. Though adaptive in certain situations, aggressive behaviors or traits that deviate from normative standards (i.e., labeled as pathological 1 ) can lead to detrimental personal and societal consequences, as is the case for antisocial behavior. Far from being a unitary behavioral construct, aggression is often hard to operationalize and measure in the laboratory setting 2,3 . Part of this complexity is also explained by the fact that the conceptualization of aggression has changed throughout history and across cultures 4 .
Despite the inherent complex nature of aggression, numerous efforts have been made to elucidate the neurobiological and genetic underpinnings of aggressive behavior across the animal kingdom (for a review, see refs. [5][6][7][8][9] ). On this front, diverse brain areas, neural networks, neurotransmitters, hormones, and candidate genes have been experimentally related to aggressive and antisocial behavior in humans and other animals. Among these, monoamine oxidases (MAOs) have received substantial attention over the past 20 years. The renewed interest in aggression and MAOs can be traced back to 1993, when Brunner et al.
reported abnormal behavioral manifestations, including overt aggressive and violent behavior, in a Dutch family with an X-linked nonsense mutation in the MAO-A gene 10 (Brunner syndrome). As we discuss below, similar behavioral phenotypes have been observed in mice with MAO-A gene mutations that mimic that of the human Brunner syndrome. In 2013, 20 years later, Piton et al. reported the second case of MAO-A mutations in a small family with a behavioral phenotype similar to the one originally described by Brunner 11 , and this was followed by the third report of two Australian families 12 .
Because of the chief role of MAOs in the metabolism of key neurotransmitters involved in aggressive behavior, most notably serotonin (5-hydroxytryptamine; 5-HT) 13,14 , it is not surprising that a substantial body of research has linked aggressive phenotypes with the MAO system. Thus, human genetic studies associated aggressive traits with specific allelic variations in the genes that encode the MAOs 6,[15][16][17][18] . In addition, humans and mice with an absent or dysfunctional gene that codes for the isoform A of the enzyme exhibited increased 5-HT levels and manifested aggressive and impulsive behaviors 15,19 . Indeed, these results inspired both the scientific community and media outlets to refer to MAO-A as the warrior or criminal gene 20 . On the other hand, pharmacological studies characterized the behavioral changes associated with the administration of selective and nonselective MAO inhibitors (MAOIs), which, interestingly, are widely prescribed for the treatment of a variety of mental disorders, including anxiety, depression, posttraumatic stress disorder (PTSD), and Parkinson disease 21 . Although initial pharmacological studies supported a role for MAOIs in the reduction of aggressive phenotypes, data from these studies were hard to interpret because of the side effects of MAOIs, and because of their impact on a myriad of unrelated behaviors 21 .
Here, we first provide a general overview of MAOs and their physiological functions. Then, we summarize recent key findings linking MAO-related enzymatic and gene expression activity and aggressive behavior, and we provide novel insights into the mechanisms underlying this association. Using the existing experimental evidence as a foundation, we discuss the translational implications of these findings in clinical practice and also highlight what we consider outstanding conceptual and methodological questions in the field.
Overview of the MAO system
In humans and rodents, the MAO-A and MAO-B isoenzymes are encoded by the homonymous genes MAO-A and MAO-B, respectively. The validity of neurobehavioral studies linking MAO and aggression has been questioned in animal models, such as the zebrafish, which only carry one Mao gene 22 . MAO-A and MAO-B are both on the X chromosome and present similar intron-exon organization; however, they show discrepancies in their core promoter regions [23][24][25] . These differences in the regulation of MAO genes may explain their distinct responses to certain hormones and levels of gene expression across brain areas 26 and the extent to which MAO-A and MAO-B modulate antisocial and aggressive behaviors (discussed below).
In the central nervous system, MAOs break down monoamine neurotransmitters, including the catecholamines dopamine, adrenaline, and noradrenaline, as well as histamine and serotonin (5-HT) 25-27. Thus, MAO inhibitors (MAOIs) have commonly been prescribed to treat disorders or conditions characterized by reduced levels of monoamine neurotransmitters, including affective disorders (e.g., anxiety and depression), Alzheimer's disease (AD), and Parkinson's disease 26. The MAO-A and MAO-B isoenzymes differ in key anatomical, biochemical, and functional aspects (for a comprehensive review, see ref. 26). Within the nervous system, there are region- and cell-dependent differences in MAO activity. In humans, increased MAO activity has been observed in the basal ganglia and hypothalamus, and reduced activity in the cerebellum and neocortex 28. Regarding cell-specific activity, MAO-A is mostly expressed in catecholaminergic neurons, while MAO-B shows a higher presence in serotonergic neurons and astrocytes 29. From a functional standpoint, both MAO-A and MAO-B are active toward dopamine, adrenaline, and noradrenaline 26. However, whereas MAO-A seems to bear the larger share of responsibility for the metabolism of 5-HT, MAO-B mostly catalyzes the oxidation of benzylamine and 2-phenylethylamine 26.
Finally, because we discuss studies in both humans and rodents, it is worth mentioning that the MAO system presents some differences across species 26 . For instance, while both MAO-A and MAO-B catalyze dopamine, adrenaline, and noradrenaline oxidation in humans, in rats MAO-A participates primarily in dopamine metabolism 30 .
The complexity of aggressive behavior
The discovery by Brunner et al. that mutations in the human MAO-A gene lead to aggressive phenotypes 10 and that MAOs' primary function is in the metabolism of 5-HT and other monoamine neurotransmitters, justify a strong scientific interest in the modulatory role of the MAO system in adaptive and pathological aggression 13,14 . Before we discuss key findings supporting this role, we would like to underscore that aggression is a complex, multifaceted behavioral construct supported by a similarly complex neurobiological system 31 . This complexity raises a few considerations when interpreting the results discussed here. First, the MAO system is one of many physiological systems that regulate the behavioral manifestations of aggression and antisocial behavior. Second, this system interacts with a myriad of factors that underlie aggression, including political, socioeconomic, cultural, medical, and psychological factors 32 . Third, caution should be made when extrapolating results from animal models to humans, given the core differences in the monoamine system across species 26 . Finally, because aggression can be defined and operationalized in multiple ways 2 , one should always bear in mind the conceptual framework used in any given study (Box 1).
The link between MAOs and aggression: where are we?
In this section, we briefly summarize the main findings linking the MAOs to aggressive/antisocial behavior in humans and other species. Using classic published data and recent findings as a starting point, we next identify some of the outstanding questions in the field, suggest novel research avenues, and identify conceptual and technical obstacles to overcome in the future.
Experimental evidence in humans and animal models Pharmacological approach
Pharmacological studies linking MAOs and aggression have traditionally focused on the behavioral effects of MAOIs, which are widely prescribed for the treatment of a variety of neurocognitive disorders, including depression, posttraumatic stress disorder, or Parkinson disease 21 . Also, nonselective MAOIs or selective MAO-B inhibitors have been used to manage suicidal tendencies and impulsive-aggressive behavior 21,33 , whereas selective MAO-A inhibitors have been employed in Antisocial and Borderline Personality Disorders and yielded conflicting results 34 . Further research in human subjects is required to determine the global effect of MAOIs. For instance, selegiline, a selective, irreversible inhibitor of MAO-B commonly used to treat symptoms in patients with Parkinson's disease, has been shown to influence MAO-A activity in a dose-dependent manner 35 .
In mice, prenatal suppression of MAO-A leads to increased aggression in adulthood after acute administration of MAO-A inhibitors, which suggests a potential prenatal sensitization mechanism 36 . In adolescent (but not neonatal) mice, selective MAO-A inhibition also leads to increased aggression 37 . Chronic suppression of MAO-A, on the other hand, decreased aggression in adult mice 38 . Similarly, in adult mice and rats, nonselective inhibitors of MAO-A and MAO-B suppressed rather than facilitated aggression 21,39,40 . These results indicate that MAO inhibitors may exert different influences on aggression, depending on developmental stage, selectivity, and dosing regimen.
Although initial evidence from pharmacological and pharmacogenetic studies supported a role for MAOIs in the modulation of aggressive phenotypes, data from these studies were often misinterpreted 41 and were hard to extrapolate because of the side effects of such manipulations and their impact on physiological responses and a myriad of unrelated behaviors, including responses to stress 21.

Box 1. Theoretical aspects of aggression

As for most behaviors, the underpinnings of aggression are complex and include neurochemical, hormonal, genetic, social, and psychological factors. From a conceptual and operational perspective, aggression in humans and animal models has been defined and measured in different ways throughout history. Although aggressive behavior is deliberate by definition, the motivating factors, the modalities, and the social or biological context in which the behavior takes place give rise to a variety of forms of aggression. In the laboratory setting, a proper theoretical conceptualization of aggression is necessary for the extrapolation of results and for impact on clinical practice. In animals, multiple frameworks have been proposed to classify the different types of aggression, including defensive aggression, offensive aggressive behavior, and indiscriminate or irritable aggression (i.e., in response to a nonspecific provocation) 101. In addition, depending on the intended goal or the context in which the aggression takes place, authors like Moyer have proposed the existence of predatory, inter-male, fear-induced, sex-related, maternal, and instrumental aggression 102. These different forms of aggressive behavior are also thought to be supported by differentiated biological mechanisms. Aggression in animal models is assessed via behavioral tasks or situational tests that vary depending on the species and the type of aggression measured 103.

In humans, aggressive behavior may be more nuanced and complex than in other animal species. Theories of human aggression and taxonomies, such as the one proposed by Krahe 104, refer to different subtypes of aggression depending on the response modality (e.g., verbal, physical), immediacy (direct or indirect), visibility (overt or covert), instigation (unprovoked or retaliative), type of harm (physical or psychological), and so on. Classic models also include a distinction between instrumental and hostile aggression 105. Whereas instrumental aggression aims to achieve a desired consequence beyond the aggressive act itself (e.g., non-aggressive incentives like money, power, or advantage, or destroying someone's social status and relationships, so-called relational aggression), hostile aggression occurs when the goal is solely to cause harm to the victim. From an evolutionary perspective, others have classified aggression into proactive (i.e., a planned attack associated with an internal or external reward) and reactive (i.e., an aggressive response to a threatening event aimed at eliminating the aversive or provoking stimulus) 106. In social contexts, we can also consider direct aggression (occurring in direct interactions between the aggressor and the victim) and indirect aggression (without direct contact, caused via a third party or object) 107; an example of the latter would be harming someone's reputation or status. Historically, the assessment of the genetic and environmental contributions to aggression has been conducted in cross-sectional or longitudinal studies using twin registries or case controls. In this context, aggression is typically measured via questionnaires, aggression scales, or objective data from public registries 108.

From a clinical perspective, and according to the American Psychological Association (APA), aggressive behavior can be considered pathological when it is either part of a longstanding repertoire of destructive behaviors or consists of a sudden, exaggerated reaction to a real or perceived provocation 109. In some cases, this pattern of behavior may be the manifestation of a psychiatric disorder, including psychosis, PTSD, or antisocial personality disorder, or a consequence of substance use (e.g., alcohol) 109.
Genetic studies
Substantive evidence supporting the role of the MAO genes in aggression comes from studies in knockout (KO) mice. Selective knockout models for the MAO-A gene, for instance, exhibited increased aggressiveness compared to their wild-type counterparts 42. Specifically, adult KO mice housed in groups exhibited signs of aggressive behavior, including bite wounds and faster attacks on intruders in resident-intruder tests, compared with control mice 42. These effects were likely mediated by reduced functionality of the serotoninergic system during sensitive developmental periods of MAO-A-deficient mice 14,26,42,43. To this end, pharmacological MAO-A blockade or 5-HT inhibition in mice during the early postnatal period (through postnatal day (P) 21), but not during peri-adolescence (P22-P41), led to a similar behavioral phenotype in adulthood, characterized also by anxiety and depressive behaviors 37. Conversely, MAO-A blockade in peri-adolescence, but not in the early postnatal period, led to increased aggression in adulthood 37. Of note, MAO-A "hypomorphic" mice, i.e., mice with a partial but not total loss of MAO-A, did not exhibit increased overt aggression but had reduced social interactions and perseverative responses 44. In contrast to complete MAO-A KO mice, MAO-B KO mice did not show decreased serotoninergic function or overt aggressive behavior 25,43. However, results from genetic and pharmacological manipulations of the MAO system should be interpreted in light of the broader cognitive and emotional impairments associated with deficiencies in that system 13.
As with mice, MAO-A deficiency in humans has been associated with an aggressive behavioral phenotype. Brunner et al. reported mental retardation and abnormal behaviors, including impulsive aggression and antisocial behaviors, in men of a Dutch family affected by a nonsense mutation in the eighth exon of the MAO-A gene 10. Although all cases of Brunner syndrome were members of the same family, jeopardizing further generalization of the findings 45, this report sparked a great deal of attention and research on the link between MAO and aggressive behavior. Further clinical evidence in humans was reported in studies focusing on the polymorphic variants of MAO-A, specifically variable-number tandem repeat (VNTR) polymorphisms, i.e., adjacent repetitions of nucleotide sequences. The number of tandem repeats (ranging from 2 to 5) in the MAO-A gene has functional implications, given that the 2- and 3-repeat alleles (i.e., low-expressing alleles) of the gene lead to lower enzyme activity than the 4-repeat variant 13,46. Consequently, several studies identified a link between the 2- and 3-repeat alleles and aggressive traits, psychopathy, and criminal behavior 13,15. In addition, several studies demonstrated that maltreated and abused children with the low-expression MAO-A allele exhibited behavioral problems in adulthood, while high expression seemed to be protective against such a behavioral phenotype in this population 47-49. The role of MAO-A expression in aggressive behavior should be interpreted in light of the multifactorial nature of aggression, where other genetic and epigenetic factors simultaneously influence this behavioral phenotype 49. For instance, in female adolescent populations, where X inactivation and other epigenetic transformations may have a profound impact, social risk appeared to be a prerequisite for later associations between MAO-A (the 4-repeat allele) and aggression 50. It has also been suggested that MAO-A and testosterone interactions on aggression are related to modified gene expression 51.
Genetic studies in humans provide evidence that MAO-A is linked to aggression, but only when other environmental factors are also present during development (e.g., abuse, stressors). Other lines of research have demonstrated that the gene product of MAO-A (rather than the gene per se) influences violent traits. For instance, cortical and subcortical MAO-A activity in vivo, measured with positron emission tomography (PET), was negatively associated with trait aggression 52. Likewise, a [11C]harmine PET study revealed that increased MAO-A binding in the PFC, indicating higher levels of MAO-A in this brain area, was negatively associated with maladaptive personality traits such as anger and hostility 53.
In marked contrast with the known behavioral consequences of MAO-A deficiency, the sequelae derived from absent or low MAO-B need further investigation in humans. In this regard, individuals suffering from Norrie disease (ND) have provided some valuable insights. ND is an inherited, X-linked eye disorder caused by a mutation in the NDP gene, which causes an atypical development of the retina resulting in blindness in male infants around the perinatal period 54 . These patients may also experience hearing and motor impairments, cognitive disability, along with other problems in basic physiological functions, such as breathing, digestion, or reproduction. ND patients harboring deletions or mutations affecting not only NDP but also MAO-A and/or MAO-B (which are neighboring genes to NDP) exhibit differential phenotypes; unlike patients with either MAO-A or combined MAO-A/MAO-B deficiency, patients with selective MAO-B deficiency (i.e., total lack of platelet MAO-B activity) failed to exhibit overt pathological behavior 55 . This result reinforces the specific role of MAO-A vs. MAO-B in the manifestations of clinical behavioral phenotypes and ultimately underscores the importance of syndromic cases to stimulate novel hypotheses and research in the biomedical field.
Neurobiological and anatomical correlates
Neuroimaging studies have explored the neurobiological underpinnings mediating aggression in carriers of low- vs. high-expressing alleles of the MAO-A gene. The low-expression variant, linked to an increased risk of aggressive behavior, has been associated with structural and functional alterations in corticolimbic brain networks supporting emotional regulation and inhibitory control, in the prefrontal cortex, amygdala, and hippocampus 56,57. In this line, a recent study found that healthy males carrying low-expressing alleles of the MAO-A gene exhibited differences in patterns of functional connectivity between brain areas responsible for emotional regulation (the dorsomedial prefrontal cortex, DMPFC) and empathy (the angular gyrus, AG), compared with participants carrying high-expressing alleles of the gene 58. Importantly, these neurobiological differences also mediated the relation between allele status and trait aggression in low-expressing allele carriers 58. From a theoretical standpoint, Buckholtz et al. suggested that increased aggression and aggressive traits in low-expressing allele carriers of the MAO-A gene might have a neurodevelopmental origin caused by the impact of excess serotonin on brain structures critical for social evaluation and emotional regulation 15.
In MAO-A-deficient mice, morphological and functional abnormalities are found in different cortical and subcortical areas, including the PFC, amygdala, corpus callosum, and somatosensory cortex 13 . Interestingly, the increased aggression associated with MAO-A deficiency can be rescued by forebrain-specific, i.e., cortical expression of human MAO-A. This finding suggests that, as in humans, MAO-A levels in frontal cortical networks may underlie the expression of aggressive behaviors in MAO-A KO mice 59 .
Outstanding questions and new perspectives
Gene-environment interactions, epigenetic factors, and demographic characteristics As discussed above, experimental evidence in humans and rodents suggests that the promoter region of the MAO-A gene modulates behavioral manifestations of aggression. In humans, the core promoter region of MAO-A has been located in the two 90 bp repeat sequences, in turn comprised of four tandem repeats with a Sp1-binding site each. Indeed, the Sp1 family of proteins is one of the main factors controlling MAO-A expression in humans 23 . As discussed above, these studies highlight the mediating role of early trauma and other environmental stressors in this relation. In this regard, it remains unclear how the VNTR per se possibly interacts with other nearby DNA variants or other environmental factors (e.g., stress) to elicit aggressive responses or maladaptive personality traits since childhood 6,60 . Equally important would be to elucidate the potential effect of parental raising on MAO regulation and its protective effect in the face of early stressors 47 . Epigenetic factors, such as CpG methylation, seem to affect MAO-A mRNA expression at least in females, which in turn depend on MAO-A promoter polymorphisms 61 . Also, epigenetic alterations due to maternal care have been reported in the glucose transporter gene, genes affecting glutamate receptor activity, and c-FOS 62 , but not for the MAO-A or MAO-B genes. However, c-FOS has been speculatively associated with the prolonged expression of the MAO gene in chronic stress 63 . Given the link between MAO, stress reactivity, and aggression 63 , further research in animal models should address the effect of MAO-A-deficient parents in stress responses and modulation of aggressive behaviors in their offspring. Interestingly, evidence suggests that the epigenetic factors that modulate the interaction between MAO-A and aggression may be race-specific 64 ; for instance, the aforementioned interaction between 2R-and 3R-repeat MAO-A variants (which, of note, are in contrast to 3.5R and 4R variants associated with higher transcriptional activity) and child abuse was associated with violence and antisocial behavior only in Caucasians but not in individuals from other ethnic group; however, as this study did not control for gender, it is likely its results could be pertinent to non-Caucasians, as well 65 ; indeed, both genetic and environmental factors may underlie these racial differences 64 .
Physiological mechanisms
Though the relation between genetic variations in MAO-A and aggressive behavior has been addressed in several studies, the molecular and neural substrates underlying this association remain obscure. A recent study suggested that the MAO-A genotype determines brain and heart responses to aggression-inducing stimuli. Specifically, the authors found that healthy men carrying the 4.5-repeat allele variation, in comparison to those with the 2.5- and 3.5-repeat variations, exhibited increased heart rate responses and a distinct neural oscillatory profile in response to visual scenes displaying aggressive behavior 66 . One possible interpretation of this finding is that altered serotoninergic metabolism caused by this allele variation may predispose individuals to a biased fight-or-flight response. In this scenario, ambiguous signals, such as an increased heart rate, may be more easily interpreted by the brain as threatening and hence trigger aggressive behaviors more easily. Similar investigations should focus on identifying new clinical phenotypes across MAO-A genotypes regarding violent behaviors and aggressive personality. It would be crucial to determine whether allelic variations in the MAO-A gene have a comparable impact on enzymatic activity, neurotransmission levels, neural activity, and behavior 66 .
Neurodevelopmental disorders
MAO-A deficiency has been linked not only to aggressive phenotypes but also to the developmental origin of sensory and communication deficits 13,45 . This observation underscores the phenotypic variability associated with MAO-A. In mice, MAO-A deficiency leads to sensorimotor, social, and communication deficits, which follow alterations in spontaneous behavior in the early postnatal period 45 . These features mimic the symptoms and developmental trajectory of autism spectrum disorder (ASD). Although more data are necessary before extrapolating these results to humans, MAO-A deficiency was recently documented in a boy diagnosed with ASD and self-damaging behavior 11 . Moreover, boys diagnosed with ASD and carrying the low-activity 3-repeat MAO-A allele exhibited more severe symptomatology, including sensory impairment, arousal regulation problems, worse communication, and aggression, with aggressive behavior also influenced by their mother's genotype 67 . These results inspired the hypothesis that sensory and cognitive deficits in MAO-A-deficient children may aggravate aggressive and violent behaviors 13 . In addition, they justify the use of MAO-A-deficient animals as models to understand these and other neurodevelopmental disorders characterized by aggressive phenotypes and a lack of impulse control, including features of attention-deficit and hyperactivity disorder (ADHD) 68 .
MAO-A and the microbiota
In recent years, the role of the microbes in the gastrointestinal (GI) tract in brain physiology and behavior, including aggression, has gained substantial attention [69][70][71] . Importantly, gut bacteria can generate precursors of monoamine neurotransmitters directly involved in the modulation of aggression, including 5-HT 72 . In dogs, for instance, the gut microbiome was recently related to aggression, pointing to an aggression-related physiological state in interaction with the microbes in the GI tract 73 . In addition, we suggest that the influence that MAOIs exert on behavior may be in part mediated by the gut-brain axis, because MAOIs (a) react with the bacterial cofactors NAD and NADPH, and (b) have an antimicrobial effect by inhibiting cell wall synthesis 70,74 . In this line, studies in rodents have already shown that perturbations of microbiota as a result of antibiotic administration cause decreased aggressive behavior 75 . Finally, we know that the human microbiome may produce neuroactive compounds and, thus, affect behavior 76 . As is the case in the zebrafish 77 , human gut microbes may contain mao genes, which could alter the metabolism of serotonin and other amines in both the central nervous system and in the gut-brain axis. Although the role of the gut-brain axis in the MAO-related modulation of aggression and antisocial behavior is far from being understood, the aforementioned findings justify efforts in this direction.
Methodological considerations
The simultaneous measurement of multiple amine-metabolizing molecules and the parallel combination of genetic, imaging, and issue-specific psychometric techniques hold promise for reaching more definitive conclusions regarding the association between aggression and MAO deficiency per se, and/or in combination with other parameters, such as pharmacological sensitization, stressful environments, and concomitant genetic variations. In addition, the progression from self-report questionnaires to open phenotype characteristics assessed by neuropsychological tests will be an improvement in understanding personality genetics 78 . Ultimately, we propose a psycho-radio-genomic approach, in which this question is resolved by combining multiple technologies from psychology, imaging, and genomics/genetics.
Regarding the specific role of the VNTR polymorphisms, it is crucial to determine the extent to which allelic variations in MAO genes are associated with enzymatic activity, and whether the latter equally influences antisocial or aggressive behavior in humans. For instance, the reported lack of association between MAO-A gene expression and brain MAO-A activity in healthy men suggests that the MAO-A gene promoter polymorphism does not contribute to differences in MAO-A activity 79 . Similarly, a MAO-B intron 13 polymorphism did not affect MAO-B activity in platelets 80 . These inconsistencies may be due to a lack of genotype-phenotype correspondence among MAO gene alterations, an issue that should be taken into consideration in future studies.
Finally, future research should specifically address sex differences in the modulation of aggression by the MAO system, given the strong interactions between MAO-A polymorphisms and known key modulators of anger and aggression, most notably testosterone 81 .
Translational and clinical implications
In this paper, we have summarized key findings linking MAOs and aggressive behavior in humans and animal models. To move the field forward, we have proposed that future investigations should address key outstanding questions about this connection, including (a) epigenetic factors modulating the impact of MAO genotype and VNTR on aggressive behaviors; (b) the effect of allelic variations in the MAO-A gene on enzymatic activity, neurotransmission, neural activity, and behavior; (c) the physiological evidence supporting MAO's influence on aggressive behavior; (d) the effect of MAO-A deficiency on aggressive phenotypes in neurodevelopmental disorders characterized by sensory and communication deficits, including ASD; and (e) the role of monoamines and the effects of MAOIs in different behaviors influenced by the microbiota in the bidirectional brain-gut axis. We anticipate that resolving these questions will enrich our understanding of the wide variety of neuropsychiatric disorders characterized by defects in the monoamine system 82 . Two interesting possibilities include discovering novel biomarkers of aggressive and violent behavior based on MAO activity, and targeting the MAO system to treat pathological aggression.
Novel biomarkers of aggressive and violent behavior
Regarding MAO-A-related variations, saturation genome editing (SGE) can be implemented to provide further experimental evidence of the functional effects of MAO-A or other gene variants 83,84 . In SGE experiments, all possible single nucleotide variants are assayed in single targeted exons, thus allowing functional classification over a broad clinical spectrum 84 . As for gene products, analysis of monoamine neurotransmitters in CSF is an increasingly used practice in patients with motor deficits and may lead to the identification of disorders in which monoamine abnormalities are causative or part of the associated symptomatology 82 . This practice could be extended to patients exhibiting pathological aggression and antisocial behavior.
Schizophrenia and AD also offer an interesting framework to explore novel biomarkers and the role of MAO/MAOIs in agitation, irritability, and aggressive demeanor. For instance, it has recently been found that MAO-B platelet activity, but not MAO-A VNTR or MAO-B polymorphisms, was related to severe agitation in a sample of patients with schizophrenia and conduct disorder 85 . In AD, aggressive behavior is thought to be linked to serotoninergic impairment 86 . In this line, it would be crucial to determine whether MAO platelet activity is associated with irritability and aggression in these patients, as was the case for self-rated verbal aggression in female patients with fibromyalgia 87 .
Finally, we would like to highlight the potential of imaging and tracing techniques to quantitatively map MAO brain activity in different clinical disorders characterized by aggressive behaviors and traits. For instance, PET/SPECT studies have assessed the expression of MAO-B, which is located in the outer mitochondrial membrane of astrocytes, by using irreversible MAOIs in clinical populations 88 . Specifically, regarding pathological aggression, a PET study in antisocial personality disorder (ASPD) revealed decreased MAO-A levels compared to controls in different areas involved in impulse control and aggression, including the orbitofrontal cortex (OFC) and ventral striatum (VS) 89 . Given the promising value of such techniques, they should be applied to map abnormalities in the MAO system in subjects exhibiting different types of aggression and aggressive traits.
Targeting the MAO system to treat pathological aggression
Pharmacological therapy has been widely used to control aggressive behaviors in a variety of clinical populations. Because of the role of MAOs and MAO genes in the manifestation of aggressive and antisocial behavior, drugs that target this system (e.g., MAOIs) have been used to this end. However, MAOIs have a global effect on behaviors that are unrelated to aggression, and this may cause unwanted side effects. In addition, a recent meta-analysis examining the efficacy of different interventions for agitation and aggression in patients with dementia showed that non-pharmacological interventions (e.g., behavioral therapy and multidisciplinary care) were more efficacious in reducing aggression and agitation in adults than pharmacological therapy 90 .
Recent advances in the field offer promising avenues to target the MAO system without using pharmacological agents. For instance, Labonte and colleagues recently identified and described MAALIN, a novel long noncoding RNA (lncRNA) that regulates the activity of the MAO-A gene in the human brain 91 . In impulsive-aggressive individuals who committed suicide, epigenetic mechanisms regulating MAALIN expression in different brain areas were associated with MAOA expression. Taking this finding further, the authors demonstrated in mice that (a) driving MAALIN overexpression in neuroprogenitor cells decreased MAOA levels, while knocking out its expression led to elevated MAOA, and (b) hippocampal MAALIN expression decreased the expression of MAOA and aggravated impulsive-aggressive behavior 91 . Although future research is essential to identify the exact epigenetic mechanisms involved in the regulation of MAALIN and other lncRNAs in clinical practice, these findings could lay the foundation for highly specific therapies for pathological aggression.
In animal models, cutting-edge techniques like opto- and chemogenetics hold the potential to identify new targets for the monoaminergic control of aggressive behavior. In so doing, gene targeting, gene overexpression, and chemo-/optogenetic modulation of monoaminergic brain centers could be applied to target the MAO system in selected pathways and assess its role in behavior 92,93 . For instance, a recent study identified a novel circuit comprising the CA2 area of the hippocampus, the lateral septum (LS), and the ventromedial hypothalamus that modulates social aggression in mice 94 : optogenetic activation of this circuit led to attacks, whereas silencing CA2 or CA2-LS projections inhibited social aggression. In this context, it would be interesting to elucidate the specific role of MAO in this circuit in animal models of pathological aggression.
The interplay between MAO and alterations in other neurobiological systems
When considering novel MAO-related biomarkers and therapeutic targets for aggression, we should take into account the influence of other systems that are tightly related to MAO and can have an impact on behavior. Of special interest is the interaction between MAO and glucocorticoids, which may be important in chronic stress states. In humans, MAO-A is targeted by glucocorticoids in both skeletal muscle and brain cells [95][96][97] , and activation of both the HPA axis and MAO-A is well known in the acute stress response 96,98 . The bidirectional interaction between MAO and the HPA axis might be mediated by the effects of serotonin-related stimulation of the latter 99 . Indeed, acute stressors and the administration of glucocorticoids decreased functional markers of MAO-A activity, including the binding of monoamines such as 5-HT to the active sites of the MAO-A enzyme, enzymatic activity, and enzyme protein levels 96 . Dysregulation of this physiological response or individual traits related to enhanced stress reactivity 100 could give rise to maladaptive coping mechanisms, including aggression and violence, a hypothesis that warrants additional research.
Concluding remarks
The overarching goal of this review was to offer an updated commentary on the role of the MAO system in the modulation of aggression. Using earlier and recent discoveries as a foundation, including pharmacological, genetic, neurobiological, and anatomical studies, we propose novel research questions that remain unanswered in the field, as well as potential translational solutions derived from these findings (Fig. 1). Specifically, future investigations should focus on the epigenetic factors and physiological mechanisms mediating the role of MAO in aggression, on whether alterations in the MAO system underlie aggressive phenotypes in neurodevelopmental disorders, and on the specific brain-gut pathways that contribute to this phenomenon. Elucidating these mechanisms will undoubtedly open novel avenues for the detection of novel biomarkers of aggression and for therapies targeting the MAO system to curb pathological aggression, including genome editing 99 , epigenomic 100 , and other precision medicine approaches 110 , especially for vulnerable age groups, such as adolescents.
| 8,212.4 | 2021-02-18T00:00:00.000 | [ "Biology", "Psychology" ] |
An optimized solution to the course scheduling problem in universities under an improved genetic algorithm
The increase in the size of universities has greatly increased the number of teachers, students, and courses and has also increased the difficulty of scheduling courses. This study used coevolution to improve the genetic algorithm and applied it to solve the course scheduling problem in universities. Finally, simulation experiments were conducted on the traditional and improved genetic algorithms in MATLAB software. The results showed that the improved genetic algorithm converged faster and produced better solutions than the traditional genetic algorithm under the same crossover and mutation probability. As the mutation probability in the algorithm increased, the fitness values of both genetic algorithms gradually decreased, and the computation time increased. With the increase in crossover probability in the algorithm, the fitness value of the two genetic algorithms increased first and then decreased, and the computational time decreased first and then increased.
Introduction
With the development of the social economy and the introduction of compulsory education, more and more people are able to receive education, the population base receiving basic education has increased, and the number of people willing to receive higher education has also greatly increased [1]. As a result, the enrollment scale of universities has become increasingly large, and the limited educational facilities and teaching resources on campus can hardly meet the growing demand. Therefore, multi-campus operation and the sharing of educational facilities and faculty resources among different campuses have become a way to meet the demand for education [2]. Although the sharing of teaching resources across multiple campuses can meet the growing demand, the increased teaching resources also make management more difficult and require the coordinated adjustment of the sequence of course arrangements to maximize the use of limited resources [3]. Hossain et al. [4] proposed a particle swarm algorithm to solve the highly constrained university course scheduling problem and added a forceful swap operation with a repair mechanism to the particle swarm algorithm. The experimental results showed the effectiveness of the proposed method. Herawatie et al. [5] solved the course scheduling problem with a genetic algorithm and avoided local optima with a replacement policy. The experimental results showed that the algorithm functioned well. Yang and Xie [6] proposed a course scheduling algorithm based on a particle swarm optimization algorithm, designed a model of a university ideological and political course scheduling system based on this algorithm, and verified the effectiveness and usability of the system with experimental and test results. In this study, the course scheduling problem in universities was briefly introduced. A genetic algorithm was proposed to optimize the scheduling scheme. The genetic algorithm was improved by using the principle of coevolution. The novelty of this study is scheduling courses with a genetic algorithm instead of manual effort and using coevolution to improve the performance of the genetic algorithm in finding the best solution. The advantage of using a genetic algorithm for scheduling courses is that it can give multiple scheduling solutions more quickly and select the best solution from them.
Course scheduling problem in universities
Higher education institutions teach students according to a course schedule. A scientific and standardized course schedule can maximize the mobilization and utilization of teaching resources, thus improving teaching quality. After excluding the various restrictions, the scheduling problem can be regarded as a problem of arranging and combining different courses, and different arrangements and combinations will bring different teaching qualities [7]. However, in the actual scheduling problem, the optimization of the scheduling scheme is subject to various constraints, including the type of courses, the number of classes required for every course, the number of teachers, the size of classrooms, the number and size of classes, and the duration of courses.
In addition to the various restrictions mentioned above, the difficulties of the scheduling problem also include the large number of optional permutations and combinations caused by the large number of courses, class time slots, and teachers. Some of these permutations and combinations violate the restrictions, but even for the remaining combinations, the huge number makes it difficult to find a suitable scheduling solution by human effort. Therefore, this study uses a genetic algorithm to optimize the scheduling solution.
The factors involved in the mathematical description of the scheduling problem are:

T = {T_1, T_2, …, T_t, …}, M = {M_1, M_2, …, M_m, …}, C = {C_1, C_2, …, C_c, …}, R = {R_1, R_2, …, R_r, …}, P = {P_1, P_2, …, P_p, …}, (1)

where T_t refers to the t-th time period, T refers to the set of time periods, M_m refers to the m-th course (there are z_m classes for this course), M refers to the set of courses, C_c refers to the c-th class (this class includes k_c students), C refers to the set of classes, R_r refers to the r-th classroom (this classroom can accommodate y_r people), R refers to the set of classrooms, P_p refers to the p-th teacher (this teacher has x_p courses to teach), and P is the set of teachers [8].
In the actual scheduling problem, the values of the factors involved in equation (1) are restricted. To facilitate calculation, we assume by default that every classroom in equation (1) can accommodate the students of every class. The restrictions for the scheduling problem are: (1) only one M_m can be taught by P_p at T_t; (2) C_c can take only one M_m at T_t; (3) only one M_m can be taught in R_r at T_t.
The above restrictions are converted into a mathematical form:

f_1 = Σ_{v_q ∈ V} T_1(v_q), f_2 = Σ_{v_q ∈ V} T_2(v_q), f_3 = Σ_{v_q ∈ V} T_3(v_q), (2)

where V is the set of allocation events with the teacher, student, and course bound in the scheduling scheme, v_q is the q-th allocation event with the teacher, student, and course bound (the teacher is an element of P, the student is an element of C, and the course is an element of M), T_1, T_2, and T_3 are the judgment values for the corresponding three restrictions (the value is 1 if the restriction is not met; otherwise, the value is 0), and f_1, f_2, and f_3 are the total judgment values of V for the above three restrictions. When the sum of f_1, f_2, and f_3 is 0, it indicates that there is no scheduling conflict [9].
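As a rough illustration of how the three restriction checks in equation (2) can be computed, the sketch below counts teacher, class, and classroom conflicts over a set of allocation events. The `Assignment` structure, the function names, and the exact counting convention (one count per extra assignment of the same entity in the same period) are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Assignment:          # one allocation event v_q (illustrative fields)
    period: int            # T_t
    course: int            # M_m
    klass: int             # C_c
    room: int              # R_r
    teacher: int           # P_p

def conflict_counts(schedule):
    """Return (f1, f2, f3): teacher, class, and classroom conflicts."""
    teacher_slots = Counter((v.teacher, v.period) for v in schedule)
    class_slots   = Counter((v.klass,   v.period) for v in schedule)
    room_slots    = Counter((v.room,    v.period) for v in schedule)
    # Each (entity, period) pair used more than once counts as a conflict.
    f1 = sum(c - 1 for c in teacher_slots.values() if c > 1)
    f2 = sum(c - 1 for c in class_slots.values()   if c > 1)
    f3 = sum(c - 1 for c in room_slots.values()    if c > 1)
    return f1, f2, f3

def is_conflict_free(schedule):
    return sum(conflict_counts(schedule)) == 0
```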
Scheduling optimization based on the improved genetic algorithm
As we can see from Section 2, the goal of the course scheduling problem in universities is to make the scheduling scheme more reasonable without scheduling conflicts. Traditionally, manual course scheduling involves scheduling courses with high priorities or high scheduling difficulty first, arranging time periods for every course according to the course hours, and then arranging courses with lower priorities in the same way; the goal of no conflict is eventually achieved through continuous backtracking and adjustment [10]. However, in universities with a large number of faculty, students, and course programs, the workload of course scheduling is heavy. Manual course scheduling for universities is tedious and more prone to scheduling conflicts. The traditional genetic algorithm [11] uses a single population for evolution. A small population size is ineffective in finding the optimal solution, while a large population size leads to too much computation and difficult convergence. In addition, although the iterative process of the genetic algorithm increases the number of solution candidates through the mutation operation, it can still fall into a locally optimal solution [12].
The traditional genetic algorithm was improved by multi-population coevolution, and the flow of the improved genetic algorithm for scheduling is shown in Figure 1.
Three initial populations of the same size are generated through coding. Every chromosome in the population represents one scheduling scheme. In general, the teaching schedule is based on a week-long cycle, and Saturday and Sunday are holidays, so the scheduling scheme spans from Monday to Friday, which means that the schemes represented by the chromosomes are for a week. The gene coding pattern of the chromosome is "time period code + classroom code + course code + class code + teacher code," and multiple non-conflicting segments following the above pattern constitute the scheduling scheme, i.e., a chromosome. The choice of gene locus code is not random. There are 25 time periods, so the time period code is chosen from "01, 02, 03, …, 25." The classroom code is chosen from the classroom codes owned by the school. The course code, class code, and teacher code are provided by the school [13]. The coding rules for chromosomes within the initial population are as described in Section 2. Under the premise of following the coding rules, the initial population is generated according to the custom of manual course scheduling.
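A minimal sketch of this gene coding pattern is shown below, assuming two-digit zero-padded codes for each field; the fixed width is an assumption for illustration, since the actual code books are provided by the school.

```python
# Gene pattern: "time period + classroom + course + class + teacher",
# each field encoded as a two-digit code (assumed width).
def encode_gene(period, room, course, klass, teacher):
    return f"{period:02d}{room:02d}{course:02d}{klass:02d}{teacher:02d}"

def decode_gene(gene):
    period, room, course, klass, teacher = (
        int(gene[i:i + 2]) for i in range(0, 10, 2)
    )
    return period, room, course, klass, teacher

# A chromosome (one weekly scheduling scheme) is a list of such genes.
chromosome = [encode_gene(1, 3, 12, 5, 7), encode_gene(14, 3, 20, 5, 9)]
assert decode_gene(chromosome[0]) == (1, 3, 12, 5, 7)
```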
Step 1: The corresponding teacher codes of one class are added to fixed time periods. Then, the codes of the other teachers of that class are added to different time periods randomly to obtain the initial course schedule of the class, and the initial course schedules of all classes are combined to form an initial chromosome. The multiple chromosomes obtained in this way constitute the initial population.
Step 2: The fitness of the chromosomes in the population is calculated. Since the random generation of chromosomes, i.e., scheduling schemes, has already taken the problem of course conflicts into account, and conflicting schemes are actively excluded in the subsequent genetic operations, the main consideration in the calculation of fitness is to make the existing conflict-free scheduling schemes better. The fitness is calculated as a weighted sum:

g = ω_1 g_1 + ω_2 g_2 + ω_3 g_3, (3)

where g is the chromosome fitness, g_1, g_2, and g_3 are the course sequence priority, the discrete uniformity of course hours, and the classroom utilization, ω_1, ω_2, and ω_3 are the weights of g_1, g_2, and g_3, α_j is the weight value (importance) of the j-th course, t_i is the sequence weight of the i-th period, and d_j is the schedule discrete degree of the j-th course.
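Reading the fitness description as a weighted sum of the three sub-metrics, a minimal sketch might look like the following. The default weights match those used later in the experimental setup; how g1, g2, and g3 are derived from α_j, t_i, and d_j is not reproduced here, so those values are passed in precomputed.

```python
# Sketch of the fitness evaluation as a weighted sum of three sub-metrics.
def fitness(g1, g2, g3, w1=0.5, w2=0.3, w3=0.2):
    """g1: course sequence priority, g2: discrete uniformity of course
    hours, g3: classroom utilization; w1, w2, w3 are their weights."""
    return w1 * g1 + w2 * g2 + w3 * g3

# Example: a chromosome scoring 0.8 / 0.6 / 0.7 on the three sub-metrics.
g = fitness(0.8, 0.6, 0.7)   # -> 0.72
```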
Step 3: Whether the predefined conditions (reaching the maximum number of iterations or the population fitness converging to stability) are met is determined based on the calculated chromosome fitness of the population. If the predefined conditions are met, the optimal chromosome is selected and decoded to obtain the schedule. As shown in Figure 2, every row in this two-dimensional schedule represents a classroom, every column represents a course time period, and the intersection of a row and a column is an array of course, class, and teacher, (M_m, C_c, P_p). The genetic operation is conducted when the population chromosome fitness does not meet the predefined conditions.
Step 4: In the genetic iteration [14], crossover means randomly selecting two chromosomes according to the crossover probability and exchanging gene fragments at the same position. In this study, single-point crossover was used, i.e., only one gene at one gene locus was exchanged in one crossover. In addition, whether the two chromosomes conflict after crossover is determined using equation (2), and if a conflict occurs, another gene locus is randomly selected for crossover until there is no conflict. Mutation means randomly selecting a chromosome according to the mutation probability and exchanging genes at the same type of gene locus until the mutated chromosome is conflict-free.
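A sketch of the single-point crossover with the conflict check and retry described in Step 4 is shown below. Here, `is_conflict_free` stands for any predicate implementing the check of equation (2), and the selection of the two parents according to the crossover probability is assumed to happen outside this function.

```python
import random

def single_point_crossover(parent_a, parent_b, is_conflict_free, max_tries=100):
    """Swap one gene at a randomly chosen locus; retry with another locus
    until both offspring are conflict-free (or give up after max_tries)."""
    for _ in range(max_tries):
        locus = random.randrange(len(parent_a))          # one gene locus
        child_a, child_b = parent_a[:], parent_b[:]
        child_a[locus], child_b[locus] = parent_b[locus], parent_a[locus]
        if is_conflict_free(child_a) and is_conflict_free(child_b):
            return child_a, child_b
    return parent_a, parent_b    # fall back to the parents if no valid swap found
```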
Step 5: After all three populations have undergone one genetic operation, the three populations are subjected to a coevolutionary [15] operation, as shown in Figure 3. The updated populations are arranged in ascending order of the fitness function value, and the worst half of the chromosomes in each population is replaced by the best half of the chromosomes in the other population (order: population 1 → population 2 → population 3 → population 1). Then, the best and worst chromosomes in every population are selected, and the worst chromosome is replaced by the best chromosome.
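The coevolutionary exchange in Step 5 can be sketched as below; the migration direction 1 → 2 → 3 → 1, the "half" fraction, and the final best-for-worst replacement follow our reading of the text, so details such as tie handling are assumptions.

```python
def coevolve(populations, fitness_fn):
    """populations: list of three lists of chromosomes (each a list of genes)."""
    # Sort each population in ascending order of fitness (worst first).
    ranked = [sorted(pop, key=fitness_fn) for pop in populations]
    half = len(ranked[0]) // 2
    migrated = []
    for i, pop in enumerate(ranked):
        donor = ranked[(i - 1) % len(ranked)]      # previous population donates
        # Replace this population's worst half with copies of the donor's best half.
        new_pop = [chrom[:] for chrom in donor[-half:]] + pop[half:]
        migrated.append(new_pop)
    # Elitism inside each population: the worst chromosome is replaced by the best.
    for pop in migrated:
        pop.sort(key=fitness_fn)
        pop[0] = pop[-1][:]
    return migrated
```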
Step 6: Return to Step 2 until one of the termination conditions is satisfied. There are two termination conditions. The first termination condition is that the number of iterations reaches the preset maximum, which avoids the unnecessary computation time caused by difficult convergence. The second termination condition is that the fitness value of the algorithm converges to a stable value. The difference between the fitness value after the current iteration and the fitness value after the previous iteration is calculated. The algorithm is considered to have converged when this difference falls within a preset range and does not exceed that range in the following 100 iterations.
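A sketch of the two termination checks in Step 6, applied to the history of average fitness values per iteration, is shown below; the numeric thresholds are placeholders, not values from the paper.

```python
def should_terminate(history, max_iter=1500, eps=1e-3, hold=100):
    """history: average fitness after each iteration, in order."""
    if len(history) >= max_iter:          # condition 1: iteration budget reached
        return True
    if len(history) <= hold:
        return False
    # Condition 2: successive differences stay within eps for `hold` iterations.
    recent = history[-(hold + 1):]
    diffs = [abs(b - a) for a, b in zip(recent, recent[1:])]
    return all(d <= eps for d in diffs)
```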
Simulation experiments
Experimental setup
The actual university scheduling data are huge. Not only is the number of courses, teachers, and classes large [16], but the variety of courses that each teacher can teach is also limited, which increases the difficulty in practice. Therefore, in this study, some of the data were selected to facilitate the simulation experiments. The relevant experimental data were 20 classes, 25 classrooms, 27 teachers, 50 courses, 5 days a week, and 5 teaching time units per day. The relevant parameters of the traditional genetic algorithm are as follows: the population size was set to 60, ω_1 = 0.5, ω_2 = 0.3, ω_3 = 0.2, and the crossover and mutation probabilities were set to 0.6 and 0.1, respectively.
The relevant parameters of the improved genetic algorithm are as follows: the size of all 3 populations was 20, ω 1 = 0.5, ω 2 = 0.3, ω 3 = 0.2, and the crossover and mutation probabilities were 0.6 and 0.1, respectively.The crossover and mutation probabilities used in the above genetic algorithm are parameters obtained through orthogonal experiments.
In addition to the above basic comparison experiments, this study also adjusted the crossover and mutation probabilities of the two genetic algorithms and compared the performance under different crossover and mutation probabilities.When the mutation probability was fixed at 0.1, the crossover probabilities were 0.4, 0.5, 0.6, 0.7, and 0.8, respectively; when the crossover probability was fixed at 0.6, the mutation probabilities were 0.1, 0.2, 0.3, 0.4, and 0.5, respectively.
Experimental results
Figure 4 shows the change in the average fitness value of population chromosomes during iterations of the traditional genetic algorithm and the improved genetic algorithm. It is seen from Figure 4 that the average fitness value of the populations of both genetic algorithms increased with the increase in the number of iterations and converged gradually. The improved genetic algorithm converged to a stable fitness value (48) after about 750 iterations, and the traditional genetic algorithm converged to a stable fitness value (44) after about 900 iterations. It was found that the improved genetic algorithm converged faster and yielded better chromosomes, i.e., better scheduling solutions.
Figure 5 shows the average population fitness values and computational time of the two genetic algorithms under 0.4, 0.5, 0.6, 0.7, and 0.8 crossover probabilities when the mutation probability was 0.1. It is seen from Figure 5 that as the crossover probability increased from 0.4 to 0.8, the average fitness value of the two genetic algorithms tended to increase and then decrease, and the computational time of both algorithms tended to decrease and then increase. The comparison of the two genetic algorithms under the same crossover probability showed that the improved genetic algorithm had a larger fitness value and shorter computational time. The reason for the trend of the average fitness value and computational time of the two genetic algorithms with the crossover probability was that when the crossover probability was low, fewer new chromosomes were generated, leading to a slow search, and when it was high, the excellent chromosomes were easily split, also leading to a slow search.
Figure 6 shows the average population fitness values and computational time of the two genetic algorithms at 0.1, 0.2, 0.3, 0.4, and 0.5 mutation probabilities when the crossover probability was 0.6. It is seen from Figure 6 that as the mutation probability increased from 0.1 to 0.5, the average fitness value of the two genetic algorithms tended to decrease and the computational time tended to increase. The improved genetic algorithm showed a higher fitness value and shorter computational time than the traditional algorithm under the same mutation probability. The reason for the trend of the average fitness value and computational time of the two genetic algorithms with the mutation probability was that mutation generated new gene fragments that increased the diversity of population genes; when the mutation probability increased, the diversity of population genes increased, but it also resulted in the unstable inheritance of good genes.
Both Figures 5 and 6 reflect that the improved genetic algorithm outperformed the traditional genetic algorithm in terms of average fitness value and computational time under the same crossover and mutation probabilities.The reason for the above result was that the improved genetic algorithm used the principle of multi-population coevolution to make the three populations evolve independently and replaced poor chromosomes with excellent chromosomes to enhance the diversity of high-quality chromosomes and jump out of the locally optimal solution.
Discussion
For higher education institutions, it is an important task to arrange courses for teachers and students in a reasonable manner.A reasonable scheduling plan can effectively improve the teaching management efficiency of colleges.As the education reform progresses and the importance of higher education is emphasized, the enrollment scale of colleges has increased.However, the teaching resources of universities are limited, and a reasonable scheduling plan is needed to meet the growing demand of students.The increase in the number of students makes the traditional manual scheduling methods increasingly difficult to cope with more complex scheduling problems.
The optimization of the scheduling scheme can be regarded as the search for an optimal scheduling scheme in different candidates, so this study used a genetic algorithm to optimize the scheduling scheme and also used coevolution to improve the genetic algorithm.Finally, simulation experiments were conducted to verify the performance difference between the traditional and improved genetic algorithms under different mutation and crossover probabilities, and the final experimental results are shown above.
Under the same crossover and mutation probabilities, the coevolution-improved genetic algorithm converged to stability faster, and the corresponding scheduling scheme was better when convergence was reached. In addition, under the same mutation probability, with the increase in the crossover probability, the fitness values of the scheduling solutions of both the traditional and improved genetic algorithms increased first and then decreased, and the computation time decreased first and then increased. Under the same crossover probability, with the increase in the mutation probability, the fitness values of the scheduling schemes of both the traditional and improved genetic algorithms tended to decrease, while the computational time tended to increase. The reasons for these results are as follows. Under the same crossover and mutation probabilities, due to the introduction of coevolution, the three populations executed crossover and mutation operations independently and replaced poor chromosomes with excellent ones in turn. The independent evolution of the three populations made it possible for a population that fell into a local optimum to escape with the help of the other populations, and the parallel search of the three populations also accelerated the convergence. Therefore, the improved genetic algorithm not only converged faster but also had a larger fitness value after convergence. Under a fixed mutation probability, a low crossover probability might reduce the efficiency of the search because of insufficient chromosome "exchange" within the population, but a high crossover probability might result in too much "exchange" and the loss of good chromosomes, which would also reduce the search efficiency. Therefore, increasing the crossover probability made the fitness values of the two genetic algorithms increase first and then decrease and the computational time decrease first and then increase. If the crossover probability was fixed, an increase in the mutation probability would make the chromosomes more likely to produce new genes, but it would also lead to the unstable inheritance of good genes, which would eventually result in slow convergence and lower fitness values in the iterative results.
Figure 3: The operation flow of coevolutionary operation of three populations.
Figure 4: Comparison of traditional and improved genetic algorithms.
Figure 5: Performance of two genetic algorithms under different crossover probabilities when the mutation probability is 0.1.
Figure 6: Performance of two genetic algorithms under different mutation probabilities when the crossover probability is 0.6. | 4,650.8 | 2022-01-01T00:00:00.000 | [ "Computer Science" ] |
Duty-cycle Communication Protocol with Wi-Fi Direct for Wireless Sensor Networks
In order to enable environmental sensing in any place, wireless sensor networks that collect environmental sensing data through wireless multihop communication are coming into practical use. Since they may be installed without a power supply, sensor devices should be powered by batteries. For this purpose, IEEE802.15.4 has been standardized and commercialized. However, services using these sensor devices are not widespread because the sensor devices are expensive relative to the value of the provided services and the power consumption is unexpectedly large. Therefore, in this study, we proposed a new communication protocol called Duty-cycle Data-collection over Wi-Fi Direct (DDWD) for wireless sensor networks that works on cheap microchips such as ESP8266EX, which are equipped with Wi-Fi functions and are available for under 10 USD. In order to use ESP8266EX in multihop wireless sensor networks, it is necessary to transmit the sensing data to the sink node efficiently while sleeping periodically to reduce power consumption. In addition, inexpensive hardware such as ESP8266EX has low accuracy in time counting, which causes a large error in the sleep time interval and time synchronization. Therefore, the protocol must be robust against time synchronization errors. The proposed protocol adopts Wi-Fi Direct, in which a node dynamically switches its operation mode between SoftAP and STA modes according to the time-divided slot, and exchanges messages with neighboring nodes through Wi-Fi Direct. The communication path is determined in an autonomous decentralized manner. All nodes operate only in the allocated slot to reduce power consumption. By limiting the operating slot interval to about several tens of seconds, low power consumption and reliable data collection are possible at a constant time interval, even if an inexpensive Wi-Fi chip with low accuracy in time counting is used. We implemented the proposed protocol in ESP8266EX and conducted an experiment to show the reliability and low power consumption of the protocol.
Introduction
Wireless sensor networks (WSNs) are a promising technology for collecting measurements in a target field for environmental sensing and so forth. Since one of the critical problems in battery-powered sensor networks is power consumption, a tremendous amount of work has been performed to reduce power consumption in sensor networks. Typically, in sensor networks, duty-cycle behavior is adopted to reduce power consumption, in which each node alternates between wake-up and sleep modes. To improve the efficiency of duty-cycle devices in terms of power consumption, both the hardware and the software (i.e., protocols) have been improved. In the hardware, i.e., sensor devices and microchips, power consumption in the working (i.e., wake-up) mode as well as in the sleeping mode has been reduced. Recently, several microchips have implemented the deep-sleep mode, in which the current in the sleep mode has been greatly reduced by cutting off the power for all components except for the real-time clock (RTC). On the other hand, in the software, many techniques for duty-cycle MAC protocols have been developed to reduce transmission, reception, and listening power in communications. (1)(2)(3) As a result, the lifetime of battery-powered sensor devices has been enhanced to several months even with coin cells or AA batteries.
Unfortunately, these solutions have not been widely deployed for several reasons. In most cases, those protocols work on microchips designed for IEEE802.15.4, (4) and IEEE802.15.4 devices are still too expensive for practical deployment. Additionally, since IEEE802.15.4 expects a short transmission range, we must deploy a large number of devices to cover a certain field with these protocols. From the industrial point of view, low-cost coverage of the measured field is required, which calls for (1) a relatively long range, (2) multihop networks, (3) a long lifetime by virtue of duty-cycle medium access control (MAC), and (4) a low cost due to the use of commodity devices. To the best of our knowledge, solutions to achieve these aims have not yet been devised.
Recently, a protocol called Wi-Fi Direct for device-to-device communications has been standardized and implemented in commodity Wi-Fi microchips. Wi-Fi has a relatively long communication range, and the devices are inexpensive since they are widely used. Recently, the ESP8266EX microchip, (5) which is a microprocessor compatible with an Arduino environment while supporting a Wi-Fi communication function, has appeared. This microchip supports both a deep-sleep mode and Wi-Fi Direct, and its price is extremely low (about 5 USD for a chip). With this microchip, we can potentially achieve practical sensor devices that satisfy the above conditions (1)-(4). In this paper, we propose a duty-cycle MAC protocol called Duty-cycle Data-collection over Wi-Fi Direct (DDWD) for inexpensive commodity microchips such as ESP8266EX, toward practical sensor network solutions. Different from existing duty-cycle MACs, DDWD has a long slot of 30-60 s and collects sensed data from devices once every 30-60 min using Wi-Fi Direct. By taking advantage of the deep-sleep functionality, sensor devices sleep most of the time to reduce power consumption to a very low level; we expect a lifetime of several months with coin cells or AA batteries, and furthermore we aim to achieve a positive power balance with energy-harvesting devices such as solar panels. We implemented DDWD on an ESP8266EX chip, and through evaluation, we proved that DDWD works reliably and robustly with low power consumption.
The organization of this paper is as follows: In Sect. 2, we provide a technical introduction to the related technologies, i.e., ESP8266EX and Wi-Fi Direct. In Sect. 3, we describe related work in the literature. In Sect. 4, we describe the proposed protocol DDWD. In Sect. 5, we present the evaluation result of the actual implementation of DDWD. We conclude the work in Sect. 6.
Wi-Fi Direct
Wi-Fi Direct (6) is a standard presented by the Wi-Fi Alliance that allows end devices to exchange data directly without access points (APs). It works with the usual Wi-Fi standards such as IEEE 802.11a/b/g/n/ac/ax. In Wi-Fi Direct, a device virtually becomes an AP, called SoftAP, and other devices connect to the SoftAP to communicate with each other. Different from the ad hoc mode of IEEE 802.11, devices can communicate with each other as long as one of the devices has SoftAP functionality. This provides seamless communication in the traditional Wi-Fi environment with traditional Wi-Fi devices. In contrast, in the ad hoc mode of Wi-Fi, all devices must be in the ad hoc mode simultaneously.
ESP8266EX
ESP8266EX is one of the microcontrollers with Wi-Fi (and Wi-Fi Direct) functions developed and sold by Espressif Systems that has an extremely low price. (5) Espressif Systems Corporation provides a series of ESP microcontrollers such as ESP32, all of which have similar functionalities with variations in minor specifications. ESP microcontrollers are programmable under the development platform Arduino, (7) and a dedicated library for ESP original functions is provided to support rich functions of ESP chips including Wi-Fi and Wi-Fi Direct. In ESP8266, we can use Wi-Fi Direct functions so that STA and SoftAP modes are dynamically switched at any time, where the STA mode provides the standard behavior for Wi-Fi end devices and the SoftAP mode provides the virtual AP behavior for Wi-Fi Direct. In addition, ESP chips allow users to log in and update the program code of devices from remote locations via Wi-Fi.
It is also worth noting that ESP chips support several low-power-consumption modes, as shown in Table 1. The modem-sleep mode stops the power supply to the Wi-Fi modem to reduce power consumption, while the light-sleep mode stops both the Wi-Fi modem and the CPU and requires a hardware interrupt from the RTC or external devices to resume. The sleep mode with the lowest power consumption is called the deep-sleep mode, in which only the RTC is working while all the other circuits are sleeping, and the chip resumes from the RTC timer. In this paper, we assume the deep-sleep mode in our protocol design to greatly reduce the power consumption of the devices.
Related Work
A large number of communication protocols for low-power sensor applications have been proposed thus far. For multihop communication support in battery-driven sensor devices, IEEE 802.15.4 (ZigBee) (4) has been standardized, and the corresponding microchips have been released. However, as is known widely, the low-power performance in multihop operation is not sufficient for battery-powered sensor devices to operate in the long term. Multihop networks with Bluetooth Low Energy (BLE) have been studied in several papers. (6) However, they are also not suitable for long-term sensor operation; they are designed to operate on battery-rich devices such as smartphones. Also, the communication distance in IEEE802.15.4 and the BLE mesh is limited and too short for outdoor environmental measurements.
For lower power consumption, many low-power MAC protocols for sensor networks have been proposed. They are typically designed to provide duty-cycle operation, which alternates between wake and sleep operation modes to reduce power consumption. B-MAC supports duty-cycle behavior by providing a long preamble that covers a single duty-cycle period before signal transmission, (1) which enables devices to catch up with the preamble and prepare to receive the signal even in duty-cycle operation. RI-MAC is a receiver-initiated MAC protocol in which receivers periodically transmit beacons and the corresponding senders send frames when they receive the beacons. (2) Since senders sleep when they do not have frames to send, the energy efficiency is significantly improved. Yokotani and Yoshino proposed the Joint MAC and routing protocol for Beacon-Saving (JBS), which extends RI-MAC by combining MAC with routing functions so that nodes dynamically control forwarding paths for load balancing, resulting in far higher energy efficiency. (3) These methods can be implemented on microchips supporting IEEE 802.15.4 and enable devices with AA or coin batteries to work for up to several years. However, they need high-density device placement owing to the short communication range. In addition, the price of such devices is still considerably high because the IEEE 802.15.4 protocol is not widely adopted.
In contrast, Wi-Fi is the most widely deployed communication standard; thus, commodity Wi-Fi devices are available at low prices. Also, since the transmission power is relatively large, practical long-range communication is supported. If we can achieve low-power sensor networks based on Wi-Fi, it will be useful in many practical scenes. For multihop networks on Wi-Fi, a large number of studies exist in the context of traditional Mobile Ad hoc NETworks (MANET). (9) In MANET, several routing protocols such as Ad hoc On-demand Distance Vector (AODV) (10) and Optimized Link-State Routing (OLSR) (11) have been discussed under the assumption of the ad hoc mode of IEEE802.11 working as the underlayer protocol. However, in general, such studies have not considered power consumption, and so they are not applicable to such sensor networks.
Techniques for structuring networks over low-power and lossy links have been developed, and a routing protocol called Routing Protocol for Low-power and Lossy Networks (RPL) has been standardized. (12) This protocol supposes sensor networks and can work over a family of Wi-Fi standards. (13) RPL constructs a Destination Oriented Directed Acyclic Graph (DODAG) for routing paths instead of a tree, as in the traditional routing protocols. Therefore, each node may have multiple parents, i.e., multiple next-hops, and may forward packets to either of them according to the condition of links. With this property, relatively efficient packet routing is possible even over duty-cycle MAC protocols. Although there have been several studies that combined duty-cycle MAC with RPL, (14)(15)(16) most of the RPL studies have been carried out on sensor network platforms such as ContikiOS (17) and TinyOS, (18) and only a few studies combined RPL with Wi-Fi. (19) No study has considered devices with low battery capacities equipped with AA or coin batteries, even in combination with Wi-Fi.
There have been a few attempts to reduce power consumption within the framework of Wi-Fi. First, there is a standard in the IEEE 802.11 family that reduces power consumption by taking a periodic sleep time. IEEE 802.11ah (20) has been standardized for IoT applications, in which long-range communication is possible by utilizing a sub-GHz band, and duty-cycle behavior is supported for low-battery sensor devices. IEEE 802.11ah specifies the downclocked operation of IEEE 802.11ac so that existing commodity hardware can be reused, which reduces the prices of products. However, when we consider multihop behavior on the basis of these protocols, the synchronization of duty-cycle behavior is a serious problem to solve. (21,22) Also, since the synchronization needs a certain accuracy, cheap hardware with low clock precision is not acceptable. Thus, these IEEE 802.11-based methods seem to be unrealistic for building multihop sensor networks.
On the other hand, a communication standard called Wi-Fi Direct, which realizes communications among end devices without APs, has been deployed in several commercial devices. There are several academic studies that proposed multihop communications over Wi-Fi Direct standards. (23,24) However, these studies aimed at multihop communication among resource-rich devices such as smartphones, and so cheap and resource-poor devices were not supported. To the best of our knowledge, the proposed protocol DDWD is the first proposal of a communication protocol that achieves long-life sensor networks with inexpensive commodity Wi-Fi hardware equipped with a low battery budget.
Recently, a new category of communications called Low-Power Wide-Area Networks (LPWAN) is becoming popular. LPWAN covers a wide area with long-range one-hop communications. However, LPWAN devices are currently expensive, and cheap deployment is not possible. Furthermore, representative LPWAN standards do not support multihop communications; thus, we need multihop forwarding methodologies if wire-connected APs cannot cover the target area. Part of the multihop protocol design proposed in this paper will be applicable to LPWAN.
Assumptions
The sensor networks we consider consist of nodes and sinks. All sensor nodes periodically obtain measurement values, which are sent and relayed to one of the sinks. There are one or multiple sinks in a network, and they are supplied with power so that they are always ready to send/receive frames with nodes. We assume that each sensor node has its own node ID assigned by the administrators. We also assume that the interval of collecting measurement values is relatively long, such as 30 min or 1 h. As sensor-node devices, we assume the use of low-price hardware. Specifically, we intend to use a family of ESP microchips, among which we choose ESP8266EX for its low price and rich-enough functions. Since ESP8266EX works as a microcontroller equipped with a Wi-Fi communication function, we can build the sensor node as a one-chip device. We assume that the sensor nodes are powered by batteries with a small power budget, such as AA or coin batteries, so power consumption should be extremely low, enabling them to operate for a long time. Also, in cheap microchips such as the ESP series, the precision of the system clock is low, so it is generally hard to control the sleeping time accurately. The communication protocols should work even with this unreliable clock behavior.
DDWD: protocol overview
In this study, we present a communication protocol design that works on a cheap ESP8266EX microchip with an unreliable clock function. In the proposed protocol, we set a relatively long data collection interval, such as 1 h, and a single data collection period is divided into 30-40 s time slots. Each node stays awake in only two slots per period, sleeping in the rest of the slots, to sense and relay the measured values to sinks. In a single period, nodes located farther from the sink wake up earlier and sequentially relay data to the sink in a hop-by-hop manner. Thus, the working slot for each node is assigned accordingly (see Fig. 1 for an example). Data forwarding among nodes is performed sequentially in a series of slots to collect data at the sink. In this time-slot-based design, the power consumption of each node is low because it is awake in only two slots per period, while at the same time the data collection is robust against noise and a large clock drift because of the relatively long (i.e., at least several tens of seconds) time slots.
Initially, all nodes are in the initial state where they do not know the distance from the sink, and so they do not know the time slot in which they should wake up. We define the distance of a node from the sink as the number of hops required to reach the sink. In the initial state, each node is in the STA mode of Wi-Fi, in which it searches for its parent nodes (i.e., AP) to connect. Every parent found has a distance from the sink, and accordingly the node computes its own distance. Once the distance is determined, the node transits to the stable state and changes its Wi-Fi behavior to the SoftAP mode, in which it transmits beacons that include the distance encoded in the SSID string. Since the distance is defined as the hop count from the sink, the node that received the SSID computes its distance as the included distance plus 1. In this way, every node in the network sets its distance and transits to the stable state. In the stable state, each node calculates its working slots according to its distance. In a single data collection period, each node works on two slots; in the first slot, which we call the receiving slot, it receives data frames from its children, and in the following slot, which we call the sending slot, it sends all frames stored in the node to one of the parents. Nodes are awake in these two slots, and they sleep otherwise. Note that in the slot where nodes with distance d are sending, nodes with distance d − 1 are receiving. Because slots are assigned to nodes such that nodes farther from the sink are assigned with earlier slots, the data frame generated by any node can be collected at the sink within a single data collection period. Specifically, when a node with distance d is in the sending slot, it selects a parent randomly from the nodes that advertise their distance as d − 1 and sends all the stored data to the parent. In the next slot, the parent node does the same, and the data are passed to a node with distance d − 1. In this way, the data finally reach the sink. If a node cannot find any parent candidate, the node again transits to the initial state and renews its distance.
Determining distances
In the proposed protocol DDWD, when node n is powered on, it is in the initial state; in this state, the node does not know its distance from a sink. The node moves to the STA mode in Wi-Fi and scans APs working in the communication area. As a result, node n obtains a set of parent candidates, N a . Specifically, in the scanning, node n scans APs for a certain period of time, and only the APs from which at least T beacons are received are included in N a . These APs found are the nodes working in the SoftAP mode of Wi-Fi Direct, which advertise their distance encoded in their SSID strings via periodic beacons. Thus, from N a , node n finds the minimum-distance node n m (we let its distance be d m ), selects n m as a parent of n, and sets its distance to d m + 1. If there are multiple parent candidates with the same distance, node n selects one of them randomly as n m . Once the parent is selected, node n connects to the AP n m and sends a distance determination message that is destined for the sink. This message will be sent to the sink in the next data collection period. When n m receives the message, the AP n m sends back the time information to realize time synchronization between n m and n, and finishes the connection. Note that there is a case where a beacon of the AP n m is reachable but connection or data transmission to n m is not possible owing to the different modulation methods or frame lengths used. In that case, when node n fails to transmit frames to the AP n m , n removes n m from the candidate set N a and again selects n m randomly from the other candidates. Once node n finishes this process, it starts working in the SoftAP mode and advertises its distance d n by encoding it in its SSID string. Note that, in this paper, we do not care how it is encoded. After working for a certain time in the SoftAP mode, node n transits to the stable state. When all nodes run this process, all nodes determine the correct distance themselves and finally transit to the stable state.
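The parent-search and distance-determination logic described above can be sketched as follows; `scan_results`, `try_connect`, and the retry policy among the remaining candidates are illustrative assumptions rather than the exact implementation.

```python
import random

def determine_distance(scan_results, try_connect, min_beacons):
    """scan_results: maps an AP id to (advertised_distance, beacon_count).
    try_connect(ap): stand-in for Wi-Fi Direct association plus the
    distance-determination message and time-sync exchange."""
    candidates = {ap: d for ap, (d, n) in scan_results.items() if n >= min_beacons}
    while candidates:
        d_min = min(candidates.values())
        # Pick one parent at random among the minimum-distance candidates.
        parent = random.choice([ap for ap, d in candidates.items() if d == d_min])
        if try_connect(parent):
            return parent, d_min + 1    # own distance = parent's distance + 1
        del candidates[parent]          # unreachable: drop it and retry
    return None, None                   # no parent found: keep scanning in the initial state
```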
In Fig. 2, we show an example of the node behavior in the initial state. First, only the sink node is working in the SoftAP mode and advertises its distance as zero. Nodes A-H are scanning APs, and nodes A-E are within the beacon-reachable range of the sink s. Since the sink alone is the parent candidate of nodes A-E, nodes A-E set the parent candidate set as N_a = {s}. Then, nodes A-C connect to the sink and determine their distance as 1. However, nodes D and E fail to connect to the sink because, although they are within its beacon range, they are outside its communication range. After that, nodes A-C move to the SoftAP mode and start advertising their distances. Nodes D and E find nodes A-C in their continuous scanning process, randomly select one of them, and determine their distance as 2 through connection to one of nodes A-C. Node F also does the same and sets its distance to 2. By repeating this process, nodes G and H determine their distance as 3.
Time synchronization
Our protocol assumes operation on inexpensive devices whose time clocks are not precise. However, since nodes change their behavior depending on the time slot, they need to recognize slot boundaries with reasonable accuracy. Thus, in our protocol, we synchronize the time every time nodes exchange messages with their parents in order to ensure relatively accurate time recognition.
The first time synchronization is performed when node n_s determines its distance, as described in Sect. 4.3. Node n_s connects to n_r, sends a distance determination message, and receives the time information in return. Then, n_s advertises its distance in the SoftAP mode, sleeps, and wakes up at the beginning of the appropriate slot as part of its stable-state behavior. In the stable state, every time a node exchanges a message with other nodes, time synchronization is executed, where the parent sends its time information and the children set the time.
Note that the time information is not just the time description; it also includes the information needed to determine the wake-up slots of nodes. Specifically, it includes the slot interval t_slot, the maximum number of slots S_MAX, the current slot number S_num, and the time elapsed from the beginning of the current slot, t. Every node can determine its wake-up slots based on these four values and its own distance, as we will describe in Sect. 4.5. The meanings of these values are illustrated in Fig. 3. Here, R > t_slot · S_MAX holds, where R is the length of the data collection period. S_MAX should be sufficiently large to support the maximum distance of nodes, because the working slots of a node are determined according to its distance.
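The four values can be carried in a small structure, from which a child computes the start of any slot relative to the parent's clock (a sketch under our assumptions: slots are numbered from 0 within a period, and the field names are ours):

```python
from dataclasses import dataclass

@dataclass
class TimeInfo:
    t_slot: float  # slot interval in seconds
    s_max: int     # maximum number of slots, S_MAX
    s_num: int     # slot number at the parent when the message was sent
    t: float       # seconds elapsed since the beginning of slot s_num

def seconds_until_slot(info: TimeInfo, target_slot: int, period: float) -> float:
    """Time from 'now' (as reported by the parent) to the start of target_slot.

    period is the data collection period R, with R > t_slot * s_max.
    """
    now = info.s_num * info.t_slot + info.t  # offset within the current period
    target = target_slot * info.t_slot
    return (target - now) % period
```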
Determining the working slots
A node first receives the time information from its parent node when its distance is determined in the initial state, and it determines its working slots from the time information and its own distance. A node has two working slots, i.e., the receiving and sending slots, where the sending slot immediately follows the receiving slot.
In order to collect all data within a single collection period, the data collection starts with the maximum-distance nodes, for example, k-hop nodes with distance k. Namely, k-hop nodes send their data to (k − 1)-hop nodes in a slot, (k − 1)-hop nodes send data to (k − 2)-hop nodes in the next slot, and so on, which continues until all data reach the sink. Figure 4 illustrates this operation. To do this, the receiving slot of node n is allocated as the slot in which its children (nodes one hop farther from the sink) are sending, immediately before its own sending slot; one consistent assignment is sketched below.
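With slots numbered 1..S_MAX, the following assignment reproduces the relative slot relations used throughout this section; the exact formula is our reconstruction, since only the ordering is fixed by the text:

```python
def working_slots(distance: int, s_max: int):
    """Receiving and sending slots for a node at the given hop distance.

    Nodes farther from the sink get earlier slots, and a node with
    distance d sends exactly while distance d - 1 nodes receive.
    """
    recv_slot = s_max - distance      # SoftAP mode: accept children's frames
    send_slot = s_max - distance + 1  # STA mode: forward everything to a parent
    return recv_slot, send_slot

# With the field-trial parameter S_MAX = 6: distance 3 -> (3, 4),
# distance 2 -> (4, 5), distance 1 -> (5, 6), and the sink receives in
# slot 6, so all data cascade to the sink within one collection period.
```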
Behavior of nodes in sending/receiving slots
In the stable state, nodes work only in the receiving and sending slots and sleep for the rest of the time. In the receiving slot, nodes move to the SoftAP mode of Wi-Fi Direct to accept frames from other nodes. In the sending slot, they send their frames and then sleep. In the following, we explain the behavior of nodes in detail.
At the beginning of the receiving slot S_m, node n_r wakes up and moves to the SoftAP mode. In the SoftAP mode, node n_r advertises its SSID built from the network ID, node ID, and distance, where the network and node IDs should be determined by the administrators. When node n_r is connected to by a sender n_s and receives its data, the receiver n_r sends back the time information to realize time synchronization and finishes the communication. The sender node n_s, on the other hand, wakes up in the STA mode, scans parent nodes, finds a parent n_r, and sends data to it. Here, the sender node wakes up after a randomly determined wait, similar to the random back-off in Wi-Fi. This is because nodes of the same distance are located close to node n_s with high probability, and we wish to avoid collisions among them by randomizing the transmission time. Specifically, at the beginning of each sending slot, sending node n_s randomly chooses the waiting time t_w from the range 0 < t_w < t_slot/3. Note that, because sending data via Wi-Fi takes at least several seconds, and longer under collisions or bad radio conditions, we set this range shorter than the full slot time. After waiting for time t_w, node n_s scans APs for a certain period of time, finds nodes whose beacons were received at least T times and whose distance is d_s − 1, and builds the set of parent candidates, N_a. Then, node n_s selects a parent n_r randomly from N_a, connects to it, sends data, receives the time information, disconnects, sets the wake-up timer for the next working slot, and sleeps. If n_s fails somewhere in this sequence, it repeats the operation again from scanning. Figure 5 illustrates the behavior of n_s and n_r in slot S_m, where sender n_s wakes up after the random waiting time t_w and sleeps after exchanging messages with n_r.
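The sender side of a sending slot can then be sketched as follows (scan and connect_and_send are placeholders for the device's Wi-Fi primitives, and node is the Node sketch given earlier):

```python
import random
import time

def run_sending_slot(node, t_slot, scan, connect_and_send, T=7):
    """One pass of the sender behavior; returns TimeInfo or None.

    scan() -> {ap_id: (beacon_count, advertised_distance)};
    connect_and_send(ap) transmits the buffered frames and returns the
    parent's time information on success, or None on failure.
    """
    t_w = random.uniform(0.0, t_slot / 3)  # random wait, like Wi-Fi back-off
    time.sleep(t_w)
    while True:
        results = scan()
        candidates = [ap for ap, (count, d) in results.items()
                      if count >= T and d == node.distance - 1]
        if not candidates:
            node.on_no_parent_found()  # back to the initial state
            return None
        info = connect_and_send(random.choice(candidates))
        if info is not None:
            return info  # synchronize time, set the wake-up timer, sleep
```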
Updating distances
If all parent nodes fail, or if a fatal time synchronization error occurs, no parent node will be found in the sending slot. In this case, the node transits to the initial state again to find new parent candidates and updates the distance accordingly.
Specifically, for node n with distance d_n, neighboring nodes are those with distance d_n − 1, d_n, or d_n + 1. If all nodes with distance d_n − 1 disappear for some reason, node n cannot find any parent candidate in its sending slot S_m and transits to the initial state. In slot S_m − 2 of the next collection interval, node n will find nodes with distance d_n + 1 and set its own distance to d_n + 2. Then, node n connects to the parent, sends a distance determination message to the sink, and continues to scan APs for a little longer. Later, in slot S_m − 1, node n will find nodes with distance d_n, set its own distance to d_n + 1, and similarly send a distance determination message. In slot S_m or later, node n will not find any parent candidate, so it finishes the initial state and transits to the stable state. In the next collection period, node n joins the forwarding process of the network. As a result, although node n is absent for one collection period, it returns to the collection process.
Overview
The protocol proposed in this paper aims at a practical design for sensor networks based on Wi-Fi Direct using inexpensive microcontrollers such as the ESP family. Generally, inexpensive microcontrollers tend to have a low-precision clock; thus, we propose a protocol based on long slots to tolerate considerably large clock drifts. One focus of our evaluation is to show that the proposed protocol works on real devices with commodity ESP-series microcontrollers. We implemented the proposed protocol with ESP8266EX, which was available for about 5 USD in 2020, and demonstrated that the protocol works in a real environment. The other focus of our evaluation is to measure the power consumption of the proposed protocol. In general, the power consumption of Wi-Fi communication is relatively high and not suitable for sensor networks. However, the proposed protocol adopts a slot-based operation in which each sensor works for only 1-2 min in a single data collection period (e.g., 60 min), which significantly reduces the power consumption. In this regard, we measure the power consumption of our devices and show that they have a sufficiently long lifetime under practical conditions.
Measuring accuracy in sleeping time
As a preliminary evaluation, we first measured the clock precision of the implemented device. We implemented the proposed protocol on ESP8266EX and tested the accuracy of the sleeping time, i.e., we measured the difference between the configured sleeping time and the actual sleeping time in the deep-sleep mode of ESP8266EX. Specifically, we tested five devices configured with sleeping times of 1-30 min.
We show the result in Fig. 6, in which the horizontal axis represents the configured sleeping time and the vertical axis represents the difference between the configured sleeping time and the actual sleeping time. The result shows that the error in the sleeping time has a strong linear correlation with the configured sleeping time, with a correlation coefficient of 0.994. Also, the variance of the error increases with the configured sleeping time. Based on this result, we compensate for the error by configuring the sleeping time as 1.04 × t_expect, where t_expect is the expected sleeping time.
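The compensation itself is a one-line affair (a sketch; the slope 1.04 is the value fitted to our five devices and would be re-measured for other hardware):

```python
def compensated_sleep_time(t_expect: float, slope: float = 1.04) -> float:
    """Deep-sleep duration to configure so the device wakes near t_expect.

    The wake-up error grows linearly with the configured time
    (correlation coefficient 0.994 in Fig. 6), so a single
    multiplicative factor removes most of it.
    """
    return slope * t_expect
```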
Scenario
To investigate the communication performance of the proposed method, we conducted a field evaluation with eight nodes and a sink. We searched for an evaluation field where the effect of other Wi-Fi radios is relatively small and selected a paved road at Wakayama University, where the strength of other Wi-Fi radios was about −80 dBm. We set the devices in the layout shown in Fig. 7, i.e., one sink node, three nodes each at the 30 and 60 m locations, and two nodes at the 90 m locations. The devices (both the sink and the other nodes) were set at a height of 20 cm, as shown in Fig. 8, and all devices were powered on simultaneously. As the protocol parameters, we set the slot length to 40 s and the length of the collection period to 240 s, so the number of slots, S_MAX, was 6, which is sufficient to accommodate three-hop networks. In the sending slot, nodes scanned APs 10 times, and the threshold for inclusion in the parent candidate set was T = 7, i.e., if an AP was detected fewer than seven times, it was not included in the set of parent candidates, N_a. If the Wi-Fi association process with a parent AP took more than 15 s, we judged it a connection failure and restarted the process from scanning APs. The collected value was a dummy 12-byte string. We ran the system for 11 collection periods and evaluated the data collection performance. A summary of the configuration is shown in Table 2.
Results
In Table 3, we show the number of data values generated and collected at each node. Because the number of data values generated at each node and the number received at the sink are the same, we consider that all the generated data values were received by the sink without loss. The result shows that the proposed protocol DDWD worked reliably in a real environment. Note that the number of data values generated at node 4 is 9, two less than 11, because node 4 transited to the initial state once during the experimental period, as explained later. Note also that the number of data values generated at nodes 7 and 8 is 10; this is because the initial state of these distant nodes finished later than that of the other nodes, so they entered the stable state from the second data collection period.
In Fig. 9, we show the data collection paths of each data collection period to illustrate the detailed behavior of the network. In the initial state, the distance of nodes 2, 3, and 4 is set to 1, and this distance is kept until the end of period 11. As a result, the distance of node 1 is 2 throughout the experiment. Since the parent node is randomly selected from the candidates, the data collection paths differ in each period. In particular, in the third and seventh periods, node 4 failed to connect to the sink, and the data held by node 4 at that time reached the sink in the next period. As a result, 11 data values out of 84 took two periods to reach the sink, as shown in Table 4. From the result, we observe that nodes sometimes select parents at 60 m distance, which causes network instability and may invoke a delay or loss of data. Next, in Fig. 10, we show a histogram of the wake-up times of nodes in their sending slots. We observe that in most cases a short wake-up time of less than 17 s is sufficient to send data to the next hop, but sometimes a longer time is needed. Note that these long-time cases are the 60 m cases. This also shows that we should avoid unstable long links in routing to achieve stable collection of sensed data. Note that the slot length should be appropriately determined from the wake-up time distribution to prevent failure in packet forwarding. If the node density or the number of packets to be forwarded is too large for the slot length, the data collection ratio will be unsatisfactorily low. When we cannot measure the wake-up time distribution directly, we can measure the data collection ratio at the sink. When we set up a new network, we should adjust the slot length t_slot such that the data collection ratio is satisfactorily high. We thus confirmed that the proposed protocol works stably on an inexpensive microchip, ESP8266EX, even though its clock precision is not high. However, the protocol sometimes selects an unstable link to forward data, which causes a delay in data delivery. To achieve stable data collection, we need to introduce a mechanism that prefers stable links instead of randomly selecting links (i.e., parents) from the candidates.

Table 4. Delay of packets in reaching the sink.
Data values reaching the sink within one period: 73
Data values reaching the sink in two periods: 11
Generated data values in total: 84
Scenario
To evaluate the power consumption of DDWD with our ESP8266EX implementation, we measured the lifetime of nodes in a typical scenario of sensor networks. We supposed environmental sensing taking measurements in a certain area of a field, and assumed the scenario shown in Fig. 11. In this scenario, we covered the field with 90 nodes, in which nodes with distance 1 (from the sink) have the heaviest load, i.e., each such node receives seven packets from each of its children and sends 22 packets (7 × 3, plus 1 generated by itself) to its parent in each data collection period. In order to emulate such a node, we set up a device with three children and one parent, which received seven packets from each child and sent 22 packets to the parent in each period. We show the evaluation parameters in Table 5. We used the same devices as in the field evaluation described in Sect. 5.3, equipped with an NCR18650B (25) battery of 3400 mAh capacity, and minimal parameters were set for t_slot, t_R, S_MAX, and so forth.
Note that we assumed that the average wake-up time in a sending slot was 13 s, which was the average wake-up time in the field evaluation shown in Sect. 5.3.
Results and analysis
The devices were found to have a lifetime of 2600 periods; thus, the total wake-up time T_active was 38.28 h and the total sleep time T_sleep was 77.28 h. From the results, we calculated the lifetime for a data collection period of 1 h in the scenario shown in Fig. 11. From the current in the deep-sleep mode, which was 20 × 10^-3 mA, the total energy consumed in the sleep mode was computed as C_sleep = T_sleep × 20 × 10^-3 = 1.55 mAh. Then, the energy consumed in the working slots was computed as C_active = C_battery − C_sleep = 3398.45 mAh. Note that the working slots account for about C_active/C_battery ≈ 99.95% of the energy consumption even when the data collection period is as short as 160 s. The lifetimes computed in this way are summarized in Table 6; the results show that the device will have a lifetime of about 106 days if the data collection period is 1 h.
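The arithmetic behind Table 6 can be reproduced as follows (a sketch; the roughly 53 s of awake time per period is derived from the measured totals rather than reported directly):

```python
# Measured values: 3400 mAh battery, 20 uA deep-sleep current, and a
# lifetime of 2600 periods at the minimal (160 s) collection period.
C_battery = 3400.0  # mAh
I_sleep = 20e-3     # mA, deep-sleep current
T_active = 38.28    # h, total measured wake-up time
T_sleep = 77.28     # h, total measured sleep time

C_sleep = T_sleep * I_sleep      # ~1.55 mAh over the whole lifetime
C_active = C_battery - C_sleep   # ~3398.45 mAh spent in the working slots
I_active = C_active / T_active   # ~88.8 mA average current while awake

# Scaling to a 1 h collection period: the awake time per period stays
# the same (~53 s), while the sleep time stretches to fill the hour.
awake_h = T_active / 2600
per_period = awake_h * I_active + (1.0 - awake_h) * I_sleep  # mAh per hour
lifetime_days = C_battery / per_period / 24
print(lifetime_days)  # ~106-107 days, matching Table 6; ~31.8 mAh/day
```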
Discussion on deployment with solar panels
We show that the energy required by the device can be supplied by a small solar panel. Generally, the estimated energy generated by a solar panel is E_p = P × H × K, where P is the capacity of the solar panel in watts, H is the average solar radiation per day, and K is the loss factor. We assumed a very small 1 W solar panel, so P = 1. According to a database of solar radiation records (26), H is 3.32 kWh/m^2 per day in Tokyo if the solar panel is set horizontally. We used K = 0.7 as a commonly used value. The amount of energy charged by the solar panel is C_solar = E_p/V_solar, where V_solar is the voltage of the solar panel. If we use V_solar = 5.5 V as a typical value for a 1 W solar panel, the estimated amount of power generated per day is 422.55 mAh. Then, if we assume the voltage of the battery to be V_battery = 3.6 V, as shown in Table 5, the estimated amount of power charged per day is shown in Table 7. Since the power consumption of our implementation computed from the evaluation above is 31.84 mAh/day, even a small 1 W solar panel is sufficient to operate the system.
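These estimates can be reproduced directly (a sketch; V_solar = 5.5 V is the value consistent with the 422.55 mAh/day figure above):

```python
P = 1.0        # W, panel capacity
H = 3.32       # kWh/m^2 per day, average solar radiation in Tokyo (26)
K = 0.7        # loss factor
V_solar = 5.5  # V, typical output voltage of a 1 W panel

E_p = P * H * K                  # ~2.32 Wh generated per day
C_solar = E_p * 1000 / V_solar   # ~422.5 mAh per day at the panel voltage

consumption = 31.84              # mAh/day, from the lifetime evaluation
print(C_solar, C_solar > consumption)  # a 1 W panel covers the budget
```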
Conclusion
In this paper, we proposed a new duty-cycle communication protocol DDWD for sensor networks that works on Wi-Fi Direct. For practical reasons, we assumed an inexpensive commodity microcontroller with low-precision clocks such as the ESP family. To collect measured values periodically in a relatively large interval such as 1 h, the proposed protocol works on the basis of relatively large slots of 30-40 s to achieve robust data collection. To the best of our knowledge, DDWD is the first duty-cycle communication protocol for sensor networks that works on Wi-Fi Direct. We implemented the proposed protocol on a microchip, ESP8266EX. Through field evaluation, we confirmed that our protocol worked robustly to collect measured data from every sensor device. We also evaluated the performance in terms of energy consumption and found that the required energy for our protocol is sufficiently small to operate with a small solar panel. In conclusion, the proposed protocol robustly worked on inexpensive ESP-family microchips with sufficiently small power consumption. As future work, we aim to develop a better next hop selection method to achieve more stable data collection. | 9,437.4 | 2021-01-15T00:00:00.000 | [
"Computer Science"
] |
Extracting current-induced spins: spin boundary conditions at narrow Hall contacts
We consider the possibility of extracting spins that are generated by an electric current in a two-dimensional electron gas with Rashba-Dresselhaus spin-orbit interaction (R2DEG) in the Hall geometry. To this end, we discuss boundary conditions for the spin accumulations between a spin-orbit coupled region and a contact without spin-orbit coupling, i.e. a normal two-dimensional electron gas (2DEG). We demonstrate that, in contrast to contacts that extend along the whole sample, a spin accumulation can diffuse into the normal region through finite contacts and be detected by, e.g., ferromagnets. For an impedance-matched narrow contact, the spin accumulation in the 2DEG equals the current-induced spin accumulation in the bulk of the R2DEG up to a geometry-dependent numerical factor.
Introduction
In recent years, there has been an increasing impetus towards generating and detecting spin accumulations and spin currents in nonmagnetic systems. Conventional means of achieving this goal are to use ferromagnets and magnetic fields to inject and/or detect spins [1]. Recently, spin generation based on two related effects, current-induced spin accumulation [2,3,4] and current-induced transverse spin current [6] (known as the spin Hall effect, SHE), has attracted considerable attention. In Ref. [6], the spin Hall effect was caused by the spin-orbit (SO) interaction of impurities; the effect is then called "extrinsic". The "intrinsic" SHE, caused by a band structure with SO-induced spin splittings, was proposed by Sinova et al. [9] for the R2DEG and Murakami et al. [10] for the hole gas in bulk III-V semiconductors with significant SO interaction. After an initial controversy, it is now generally agreed that in the diffuse regime the SHE vanishes in the bulk of a 2DEG with k-linear (Rashba and/or Dresselhaus) SO coupling [11,12,13], but remains finite for extrinsic SO coupling, intrinsic SO coupling in two-dimensional hole systems, and near the edges of a finite diffusive R2DEG [12,14]. The spin Hall effect has been observed in semiconductor electron [16] and hole [17] systems by the detection of edge spin accumulations with optical methods, and in metals by the electrical detection of spin currents via ferromagnetic leads [18]. Although initial theoretical investigations of the SHE and current-induced spin accumulation have focused on bulk disordered conductors using Kubo, Keldysh or Boltzmann formalisms [7,9,10,11,12,13,14,19,20], it is now understood that the bulk conductivity is not necessarily related to experimentally relevant quantities such as local spin accumulations probed by local optical or electrical probes. In this respect, a more local approach based on spin diffusion equations is advantageous [12,13]. However, spin diffusion equations have to be supplemented by suitable boundary conditions that have observable consequences. There have been many proposals in that direction [14,22,21,23,24,25,26], but a consensus has not been reached so far.
Here, we focus on the boundary conditions between a (half-infinite) 2DEG with finite Rashba-type spin-orbit coupling (R2DEG) and a (half-infinite) 2DEG without spin-orbit coupling, connected by a contact that is narrow on the scale of the system but wider than the mean free path. Such a boundary has been considered by Refs. [23,26], but for an infinitely wide contact region, for which it could be shown that no spin accumulation diffuses into the 2DEG [26]. We shall show below, however, that for a narrow (as opposed to wide) contact, the spin accumulation in the 2DEG is equal to the bulk value of the spin accumulation in the R2DEG up to a geometry-dependent numerical constant that is smaller than, but can be of the order of, unity. These results prove that current-induced spins can be extracted to a region with small spin-orbit coupling, in which the spin lifetime is very long, and used for spintronics applications, thus confirming our previous results [14].
This article is organized as follows: we define our model and derive spin diffusion equations in section 2. In section 3, we first recapitulate the symmetry relations for conductances with respect to measuring the spin accumulation in a normal region with ferromagnetic leads. Next we apply these relations to demonstrate that the spin accumulation from the R2DEG can be extracted into a 2DEG region. In section 4, we focus on a model for a small contact between the R2DEG and the 2DEG and solve it to demonstrate the principle of spin extraction to a region with vanishing SO interaction. The numerical simulations for the diffuse R2DEG-2DEG heterostructure are reported in section 5.
Spin diffusion equations in a 2D electron gas with Rashba spin-orbit coupling
In this paper we focus on a disordered finite-size 2DEG with Rashba-type spin-orbit coupling, noting that the effects of a significant Dresselhaus term can be included straightforwardly. Throughout the paper we shall assume that all length scales of this finite region are much larger than the elastic mean free path, such that the description of spin transport by diffusion equations [12,13] is valid. In this section, we derive these spin diffusion equations for later convenience.
In 2 × 2 spin space, our system is defined by the Hamiltonian H = p^2/(2m) + (α/ℏ)(z × σ) · p + V(x) + U(x), where x and p are the (two-dimensional) position and momentum operators, respectively, σ is the vector of Pauli spin matrices (the 2 × 2 unit matrix is implied with scalars), z is the unit vector normal to the 2D plane, and α parameterizes the strength of the SO interaction, which can be position dependent, e.g., due to local external gates. V(x) = Σ_{i=1}^{N} φ(x − X_i) is the impurity potential, modelled by N impurity centers located at points {X_i}, which for the sake of simplicity we assume to be spherically symmetric, and U(x) is a smooth potential that confines the system to a finite region but allows a few openings to reservoirs.
Rashba Green function
Our starting point is the impurity-averaged Green function G(k) = (ℏ^2 k^2/2m + α η · k − E − iℏ/τ)^{−1}, where η = z × σ and τ is the momentum lifetime. In terms of its components, G(k) is determined by the poles k_±^2 = k_F^2 + k_α^2/2 ± k_α √(k_F^2 + k_α^2/4) + 2mi/(ℏτ), where k_α = 2mα/ℏ^2 and k_F = √(2mE_F/ℏ^2). The real-space Green function is then obtained by a Fourier transform, where x = |x|. We note that we only need the large-k_F x asymptotics of G(x), because we are interested in dilute disorder. The conventional approximation [27] is to expand G(x) to leading order in 1/(k_F x) and k_α/k_F, where l = ℏ k_F τ/m is the mean free path. This level of approximation is sufficient for most spin-orbit related applications such as the calculation of Dyakonov-Perel spin relaxation, spin precession, weak antilocalization, etc. However, in order to study current-induced spin accumulation and the SHE in diffusive systems, it is necessary to go to higher order in mα/(ℏ^2 k_F) and 1/(k_F x). With these correction terms, the asymptotic Green function acquires additional contributions, where x̂ = x/x. In the next subsection, we will use this expression to derive spin diffusion equations for a R2DEG.
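As a numerical illustration of this pole structure (the parameter values are illustrative, not taken from the paper, and the formula is the reconstruction given above):

```python
import numpy as np

hbar = 1.054571817e-34           # J s
m = 0.067 * 9.1093837015e-31     # kg, GaAs-like effective mass (assumption)
E_F = 10e-3 * 1.602176634e-19    # J, Fermi energy of 10 meV (assumption)
alpha = 1e-11 * 1.602176634e-19  # J m, Rashba parameter ~1e-11 eV m (assumption)
tau = 1e-12                      # s, momentum lifetime (assumption)

k_F = np.sqrt(2 * m * E_F) / hbar
k_a = 2 * m * alpha / hbar**2

# Poles of the impurity-averaged Green function, as reconstructed above:
root = k_a * np.sqrt(k_F**2 + k_a**2 / 4)
broadening = 2j * m / (hbar * tau)
k_plus = np.sqrt(k_F**2 + k_a**2 / 2 + root + broadening)
k_minus = np.sqrt(k_F**2 + k_a**2 / 2 - root + broadening)
print(k_plus, k_minus)  # imaginary parts set the decay of G(x) with distance
```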
Diffusion equation
We first focus on the equation of motion of the density matrix with coherent spin components. It can be shown that in the limit E_F τ/ℏ ≫ 1, the energy-resolved density matrix satisfies the equation derived in Refs. [12,13,23,24], where ρ_a = tr(ρσ_a), summation over repeated indices is implied, and ν is the density of states. Multiplying ρ(E) by the density of states and integrating over energy, we obtain the densities and polarizations, whereas accumulations are obtained by directly integrating over energy. The diffusion equation is obtained by expanding Eq. 7 to second order in spatial gradients. In a homogeneously disordered system we have ρ_a(x) = ρ_a(x; 0). We now use the asymptotic expression Eq. (5) for the Green function and insert the resulting expression into Eq. (8). The spatial integrals are elementary and lead to equations for the vector components of the density matrix, s_i = ρ_i/2 and n = ρ_0. A similar expansion for the spin current, this time to first order in the spatial gradients, produces the analog of Fick's law for spin diffusion. When supplied with suitable boundary conditions, the diffusion equations (9-11) and the spin current expression (12) can be solved to obtain all spin and charge conductances.
Here, we are mainly interested in the boundary between a R2DEG and a 2DEG (for hard-wall boundary conditions see Refs. [21,23,24]). In this case, the boundary conditions require the continuity of the spin current [14,26], where s_R and s_N are the spin accumulations in the R2DEG and the 2DEG, respectively, and n is the unit normal vector at the interface. A common choice for the matching condition of the spin accumulation at the interface is to assume that the spin accumulations are continuous (see, e.g., Ref. [1]). This condition has been criticized recently in Ref. [26], in which a different matching condition was demonstrated for an infinite interface with a constant electric field parallel to it. We first note that when the charge current is perpendicular to the interface, such as in a two-probe configuration [42], these two boundary conditions agree and no controversy exists. However, for an infinite interface where the charge current density is homogeneous, the difference between these two boundary conditions is drastic: if Eq. (14) is valid, a current-induced spin accumulation diffuses into the 2DEG; on the other hand, if Eq. (15) is valid, the spin accumulation vanishes in the 2DEG. We resolve this conundrum below by showing that for a contact smaller than the spin relaxation length (as assumed in Ref. [14]), the two boundary conditions lead to results that agree up to a numerical factor of the order of unity. We therefore conclude that it is possible to extract the spin accumulation into the 2DEG and detect it with a ferromagnet.
Onsager's relations and the spin boundary conditions
In this section we provide a general symmetry argument, based on Onsager's relations, that proves the viability of electrical detection of the SHE and the current-induced spin accumulation by finite-size contacts. Let us start by addressing the symmetry properties of multiprobe conductances relevant for the combination of a spin-orbit coupled region with a ferromagnet via a normal region (Fig. 1), using Onsager's relations [28,29,30,31,32]. We are particularly interested in the setup shown in Fig. 1. The configuration in Fig. 1a is designed to measure the spin accumulation in the 2DEG injected from the neighbouring R2DEG. The voltage signal V directly probes the boundary conditions between the R2DEG and the 2DEG when the charge current is parallel to the boundary. The setup in Fig. 1b, on the other hand, measures how much spin is injected into the R2DEG from the ferromagnet through the 2DEG; here V directly measures the spin boundary conditions for a charge current perpendicular to the boundary. Onsager relations relate these two conductances, enabling us to relate the boundary conditions when the current is parallel or perpendicular to the boundary.
Onsager's relations
A generic SO-coupling operator consists of combinations of velocity and spin operators that are invariant under time reversal. When the spin-orbit coupled region is brought into contact with a ferromagnetic region, the Hamiltonian of the combined system has the symmetry T H(m) T^{−1} = H(−m), where m is a unit vector in the direction of the magnetization of the ferromagnet and T is the time-reversal operator. We now focus on the specific four-probe setups in Fig. 2 (for a more general discussion see Ref. [32]). The currents in the leads and the respective chemical potentials of the reservoirs are related in linear response as I_i = Σ_j G_ij μ_j. We now use the Landauer-Büttiker formalism to obtain G_ij. The scattering matrices for the spin-orbit (SO) coupled region and the ferromagnetic region are given by S_SO and S_m, respectively. The symmetry property of these matrices is self-duality (reflecting the presence of spin-orbit coupling), where Σ_2 is block diagonal in the Pauli matrix σ_y [33]. We are interested in the block structure of S_SO singling out lead 3, which joins the SO and F regions, where the matrix r_SO includes all reflections and transmissions that begin and end in leads 1, 2 and 4. Using the rules for combining S-matrices, we obtain the joint S-matrix of the combined SO|F region and, from it, the symmetries of the combined S-matrix. Focusing on the current/voltage configuration I_1 = −I_3, I_2 = −I_4, eV_1 = μ_3 − μ_1 and eV_2 = μ_4 − μ_2 [29], the relation between currents and voltages can be expressed as in Ref. [30], where the coefficients α_ij can be found in Eqs. (4a)-(4d) of Ref. [30]. The Onsager relations can then be expressed in terms of these coefficients. If we choose (say) I_1 equal to zero, the relation between the applied current and the spin-Hall voltage is I_2 = V_1 (α_11 α_22 − α_12 α_21)/α_12. For phase-incoherent conductors, we can ignore the interference terms that arise while obtaining the transmission probabilities, but the Onsager relations Eq. (23) are unaffected. For a general analysis based on the Kubo formula, see Ref. [32]. This analysis implies the equivalence of two Hall measurements: (i) setting I_1 equal to zero and detecting V_1 generated by an applied I_2 (Fig. 1a), and (ii) switching the magnetization, setting I_2 equal to zero and detecting V_2 (Fig. 1b). In other words, driving a current I_2 through the system and detecting the spin Hall voltage with a ferromagnetic contact is equivalent to driving a spin accumulation into the SO region. In the next subsection, we shall exploit this symmetry to gain insight into the boundary conditions for a R2DEG|2DEG interface.
Four-probe setup and boundary conditions
We now use the Onsager relations from the previous subsection to better understand the spin boundary value problem. Consider the four-probe setup in Fig. 1. When the ferromagnetic lead is a Hall contact, the vanishing spin transfer derived in Ref. [26] for an (infinitely) wide contact seems to imply that there is neither spin accumulation nor spin current near the ferromagnetic reservoir and therefore no Hall voltage. On the other hand, in the Onsager-equivalent measurement, spins are injected from the ferromagnet into the normal region. Since in this case the current is perpendicular to the boundary, the spin accumulations can be matched [26] and a spin accumulation in the SO region exists. However, the diffusion equation (9) implies that a spin accumulation gives rise to a voltage drop in the spin-orbit region [34,18]. Onsager's relations discussed in the previous section imply that these two voltages must be the same provided the injected currents are the same. Thus, the result that a current-induced spin accumulation cannot enter an infinite Hall contact [26] appears to be misleading. In the following we shall demonstrate that the spin accumulations on the two sides of the Hall contact must agree up to a numerical factor.
We now focus on the current-voltage setup in Fig. 1b. In this case the current is perpendicular to the boundary, so the spin accumulations are continuous across an ideal R2DEG|2DEG interface. Assuming a diffusive ferromagnet magnetized parallel to the current direction and ignoring the resistivity of the normal region, we obtain the spin current polarized in the magnetization direction entering the R2DEG, where L_s = √(Dτ_s) is the (Dyakonov-Perel) spin relaxation length in the R2DEG. Here, L_sF, D_F, and ν_F are the spin relaxation length, diffusion constant, and average density of states in the ferromagnet, respectively, δD = (ν_+ D_+ − ν_− D_−)/(ν_F D_F), ν_± and D_± are the densities of states and diffusion constants of the majority and minority spin electrons, and μ is a linear function of m of order unity that depends on the details of the geometry of the contact. The spin accumulation in the SO region, localized within a depth of L_s at the contact aperture, acts as a dipole source for the diffusion equation, with dipole density P = −4K_{s−c}(z × s)/D. We then estimate the potential drop in the Hall direction, which is proportional to the integrated spin accumulation, up to a numerical constant. We now focus on the potential drop in the Onsager-equivalent setting in Fig. 1a. According to the boundary condition Eq. (15), the current-induced spin accumulation does not enter the normal region. Then the potential drop at the ferromagnet|2DEG interface would be zero, in contradiction to Onsager's relations. Let us assume instead that the spin accumulations in the R2DEG and the 2DEG near the contact are equal to each other up to a numerical constant Z, i.e. s_2DEG = Z s_R2DEG. Then the calculation of the potential drop proceeds similarly to Ref. [32]. Again ignoring the resistance of the 2DEG region, we obtain a potential drop of the same form up to a numerical factor. Comparing with Eq. (29), and noting that we have ignored all numerical factors in the calculations above, we conclude that Z must be a numerical factor of the order of unity in order to satisfy Onsager's relations. In the next section we shall consider a model for a narrow contact and show that this is indeed the case.
Model for spin accumulation near a contact
In this section we focus on the current density and spin accumulation near a finite contact between a half-infinite R2DEG and a half-infinite 2DEG (Fig. 3a). The model we adopt is sketched in Fig. 3b. Asymptotically, we have a constant current density in the y direction in the left region (R2DEG), whereas in the right region (2DEG) the charge current density vanishes. The two regions are divided by an infinitely thin and high potential barrier, except for an opening (the contact) of size W_H centered at (0, 0).
We note that the solution to this problem closely follows that of an analogous one in magnetostatics [35]. We proceed by expressing the chemical potential n in terms of the (yet undetermined) solution φ of the Laplace equation, where J_0 is the bulk current density in the R2DEG. The asymmetric behaviour of φ in the left and right regions is dictated by current continuity at x = 0. Next we expand φ in terms of the modes of the Laplace equation. The solution to the diffusion equation with the above boundary conditions then reduces to that of a dual integral equation. Such integral equations arise commonly in potential theory for mixed boundary conditions (see Ref. [35] for the solution in 3D). We may now express the spin accumulations in terms of the solution A(k). For the sake of simplicity, we at first disregard the precession term, proportional to K_p, in the spin diffusion equations Eqs. (9)-(11). We shall be particularly interested in the question of whether the current-induced spin accumulation in the spin-orbit coupled region can leak out of the contact into the normal (i.e. no spin-orbit interaction) region. In the bulk of the R2DEG, the current is in the y direction, so the current-induced spin accumulation is polarized in the x direction. The general solution to the spin diffusion equations in the R2DEG region then contains a correction δs_x that satisfies the source-free (i.e. zero charge current) diffusion equation and can be expanded in modes with κ = √(k^2 + L_s^{−2}). For the 2DEG side (x > 0), a similar expansion holds. Using the boundary conditions that the spin current is continuous and s_x is discontinuous by an amount equal to (ατ/2) dn/dy [26], we find that the accumulation in the 2DEG is determined by a coefficient D(k), which follows from A(k) through dual integral equations valid for |ȳ| > 1. Here we have introduced the dimensionless variables q = kW_H/2, ȳ = 2y/W_H and λ = W_H/(2L_s). In the limit λ ≫ 1 (wide contact), expanding Eq. (40) to leading order in λ^{−1}, we obtain that D(k) vanishes like λ^{−1}, in agreement with Ref. [26]. In the opposite limit λ ≪ 1 (narrow contact), we again expand Eq. (40), this time to leading order in λ. We then identify the resulting integral equation with the y derivative of Eq. (34) times ατ/2. Thus we show that D(k) = −(ατ/2) kA(k)/2 solves Eq. (40) up to corrections of order λ^2. The spin accumulation in the 2DEG near a narrow contact then follows. We see that the spin accumulation in the 2DEG does not vanish even when the mobilities of both sides are equal. For comparison, we also calculate the spin accumulation under the assumption that there is no jump in the accumulations; we obtain that in this case the spin accumulation is twice as large as s_x^+(0, y). The presence of the term proportional to K_p generates z-polarized spin currents going into the 2DEG, owing to the precession of the y-polarized spin accumulation as it diffuses out of the R2DEG, but does not change the general picture presented above. We conclude that the choice of the boundary condition for the spin accumulation near a narrow contact is not important qualitatively, because either boundary condition produces identical results up to a numerical factor, in agreement with Onsager's relations.
Numerical results
In this section, we shall provide a numerical demonstration of the results of the previous section, i.e., the possibility of extracting spin accumulations into a normal region through small contacts. We focus on the discretized version of the Hamiltonian (1). Discretization with lattice spacing a yields a tight-binding representation of H_0 [36] containing, for each site (n, m), the hopping and Rashba terms −c†_{n,m} c_{n+1,m} − c†_{n,m} c_{n,m+1} + iᾱ c†_{n,m} σ_y c_{n+1,m} − iᾱ c†_{n,m} σ_x c_{n,m+1} + H.c.,
where n (m) is the x (y) coordinate of the site (n, m) and ᾱ = (ma/ℏ^2)α. The abbreviation c†_{n,m} = (c†_{n,m,+}, c†_{n,m,−}) was used, where c†_{n,m,σ} (c_{n,m,σ}) creates (annihilates) an electron at site (n, m) with spin orientation σ with respect to the ẑ direction. We also define the spin precession length L_SO = πa/ᾱ, which is related to L_s by L_SO = 2πL_s in the dirty limit, but remains well defined for ballistic systems where there is no spin relaxation. In this model, instead of dilute localized scatterers, we assume Anderson disorder: the dimensionless on-site potential Ū is set to a different random value Ū ∈ [−U_0/2, U_0/2] at each lattice site (n, m) of the disordered region, where U_0 accounts for the strength of the disorder [43]. The parameter U_0 determines the momentum relaxation rate τ and the electron mean free path l = v_F τ, where ε_F = (ℏ^2/2m*a^2)^{−1} E_F and E_F is the Fermi energy. In the rest of this section, we choose U_0 = 2 and ε_F = 0.38 in order to ensure that the transport through the system is diffusive. With this choice of parameters, the mean free path l ≈ 7.4a is smaller than any length scale characterizing the system. In order to study the spin accumulation extracted to a normal region, we focus on the setup shown in Fig. 4, where a normal region (i.e. ᾱ = 0) with a size of 80a × 14a is attached to a Rashba spin-orbit coupled wire of infinite length, width W, and constant finite spin-orbit coupling ᾱ > 0 via a contact of size W_H. Disorder of strength U_0 is present inside the normal region and in the spin-orbit region for −50a < y < 50a. We use the nonequilibrium Green function method [44] to calculate the lesser Green function G^<(r; r′), which is related to the spin accumulation and to the electron density. Here, we focus on the ensemble-averaged accumulations ⟨s_x⟩ and ⟨n⟩. The variances are also of interest [40,41], but we shall not consider them here. We apply a small bias δV between the chemical potentials of the top and the bottom lead and generate a current in the y direction. The left panel of Fig. 4 shows the electron density ⟨n⟩ inside the system when a current is passed from top to bottom. Due to the disorder in the central region (−50a < y < 50a), the electron density decreases from top to bottom. In the right panel of Fig. 4 we show the dependence of ⟨n⟩ on y for three values of x (x = 34a, 65a, and 73a). We observe that ⟨n⟩ decreases linearly in the bulk of the spin-orbit region (solid line), showing that the system is diffusive. For x = 65a (circles), the side contact at x = 68a disturbs the homogeneous current flow. Inside the normal region, at x = 73a, ⟨n⟩ is approximately constant (dashed line).
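A dense-matrix construction of this tight-binding model might look as follows (a sketch: the hopping and Rashba terms follow the expression quoted above, while the on-site term (4 + Ū), in units of the hopping energy, is our assumption for the part of the equation lost in this copy):

```python
import numpy as np

def rashba_tb_hamiltonian(Nx, Ny, alpha_bar, U0, rng=np.random.default_rng(0)):
    """Dense tight-binding Hamiltonian for an Nx x Ny Rashba strip.

    Energies are in units of the hopping t = hbar^2/(2 m a^2); the
    on-site Anderson disorder U is drawn from [-U0/2, U0/2].
    """
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    I2 = np.eye(2, dtype=complex)
    N = Nx * Ny
    H = np.zeros((2 * N, 2 * N), complex)

    def idx(n, m):  # site (n, m) -> first row/column of its 2x2 spin block
        return 2 * (n * Ny + m)

    for n in range(Nx):
        for m in range(Ny):
            i = idx(n, m)
            U = rng.uniform(-U0 / 2, U0 / 2)
            H[i:i+2, i:i+2] = (4.0 + U) * I2      # on-site term (assumption)
            if n + 1 < Nx:                        # hop in the x direction
                j = idx(n + 1, m)
                hop = -I2 + 1j * alpha_bar * sy   # -t hopping + Rashba term
                H[i:i+2, j:j+2] = hop
                H[j:j+2, i:i+2] = hop.conj().T    # + H.c.
            if m + 1 < Ny:                        # hop in the y direction
                j = idx(n, m + 1)
                hop = -I2 - 1j * alpha_bar * sx
                H[i:i+2, j:j+2] = hop
                H[j:j+2, i:i+2] = hop.conj().T
    return H

# Example: L_SO = 25a corresponds to alpha_bar = pi * a / L_SO = pi / 25.
H = rashba_tb_hamiltonian(20, 14, alpha_bar=np.pi / 25, U0=2.0)
assert np.allclose(H, H.conj().T)  # Hermiticity check
```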
The current driven by δV generates a spin accumulation in the bulk of the R2DEG. According to Eq. (10), s_x^B = (ατ/2)⟨dn_B/dy⟩ in the bulk. Our simulations agree well with this diffusive result, as shown in Fig. 5, for large enough ᾱ. For smaller values of ᾱ, L_SO becomes comparable to the overall length of the disordered region, L = 100a. In this regime ballistic processes can no longer be neglected, causing slight deviations from the diffusive theory.
Having demonstrated that our numerical system is diffusive, we now focus on the spin accumulation in the normal region. In Fig. 6, we show the spin density ⟨s_x⟩ averaged over 50000 impurity configurations inside three distinct systems with L_SO = 25a. We note that, in agreement with Ref. [26], when the interface between the R2DEG and the 2DEG is infinite (top left panel), the spin accumulation in the 2DEG is much smaller than the bulk spin accumulation. Nevertheless, when the size of the contact is made smaller (top right panel), we observe that the spin accumulation inside the normal region increases, reaching a value comparable to the bulk spin accumulation when the size of the opening is comparable to L_SO (bottom panel). In order to demonstrate this further, we evaluate ⟨s_x^B⟩ by averaging the spin accumulation in the bulk over the blue square shown in Fig. 6, and ⟨s_x^P⟩ by averaging the spin accumulation in the normal conducting side pocket over the white square shown in Fig. 6. In Fig. 7 we plot the ratio ⟨s_x^P⟩/⟨s_x^B⟩ as a function of L_SO/W_H for various system and contact sizes. We observe that, starting from small L_SO/W_H, the spin accumulation increases with L_SO/W_H, approaching ≈ 0.5-0.7. This value lies between the estimates 0.5 and 1.0 based on the diffusion equations with the boundary conditions Eq. (15) and Eq. (14), respectively. For small values of L_SO/W_H (Fig. 7, left panel), ⟨s_x^P⟩/⟨s_x^B⟩ is of order L_SO/W_H, in agreement with the analytical calculation above. We note, however, that in this limit the system we considered is close to the clean limit L_SO ∼ l, where deviations from the diffusion equations might be expected. Currently, we are working on larger systems in order to explore small L_SO/W_H in the dirty limit [45].
Conclusions
In this work, we considered the problem of extracting current-induced spins generated in a region with spin-orbit coupling into a region with vanishing (or small) spin-orbit coupling, where the spin relaxation time is long. To this end we focused on the spin boundary conditions between a spin-orbit coupled region and a normal region. Although for an infinite interface the spins are confined to the spin-orbit region via the boundary spin Hall effect, we have shown by solving a model problem as well as doing numerical simulations that for a finite interface the spin accumulations generated in the spin-orbit region can be extracted to a normal region. The amount of extracted spin accumulation is equal to that of the spin-orbit region up to a geometrical factor of order unity. | 6,759.2 | 2007-08-02T00:00:00.000 | [
"Physics"
] |
Jawdat Saʿid and the Islamic Theology and Practice of Peace
Among the leading Islamic thinkers and activists promoting a theology of peace based on the Qurʾanic revelation is Jawdat Saʿid. Framing his role by an analysis following the conceptualization of Shahab Ahmed, the Qurʾanic context of the ideas of Saʿid is presented, and these ideas are contextualized within the recent Syrian revolution before it turned into civil war. Fundamental ideas of the theology of Saʿid help to explain the thoughts of a lesser-known activist of nonviolent action based on a specific and revolutionary interpretation of the Qurʾan.
Introduction
The violent turn of the Syrian revolution has often been described; the nonviolent activism at its beginning, which is of more importance here, has been described less. A driving force during the early period were local committees (Perlman 2019), e.g., the Local Coordination Committees (Marei 2020). These committees created a new national community (cf. Ismail 2011) that may be compared to the new communities envisioned by Jawdat Saʿid (cf. below). Saʿid was an important thinker inspiring the nonviolent beginnings and the local committees, and his influence reached well beyond Syria. Jawdat Saʿid and his recent role in the Arab and Islamic world have to be understood in the context of the Syrian Revolution 1 : "The Syrian Revolution began nonviolently. The vast majority of participants maintained nonviolence as their path to pursue regime change and a democratic Syria, until an armed flank emerged in August 2011. Since then, the revolution has morphed. The original uprising began at the grassroots, and solidarity across lines of sect, religion, and ethnicity was strong among the grassroots population. However, from midsummer to autumn, 2011, armed resistance developed; political bodies formed to represent the revolution outside Syria; and political Islamists of various sorts entered the uprising scene. Since then, armed resistance has overshadowed nonviolent Syria. It should not be a surprise to find that nonviolent resistance diminishes after the emergence of an armed resistance. What is remarkable is that nonviolent resistance in Syria has continued, despite being overshadowed by the raging battle between the regime and the militarized flank of the revolution, and despite being beleaguered by tensions with the armed resistance." (Kahf 2014, pp. 1-2) To include the influence of Jawdat Saʿid on the early Syrian revolution in our considerations, we have to mention that in 1998, young people in the town of Daraya in the countryside of Damascus took Saʿid's ideas about nonviolence as a starting point for their activities. However, in 2003, these young people were targeted by governmental persecution. The members of this group organized a series of multicultural seminars on nonviolence in the city of Homs. Their collective was not a religious one but spiritual-ethical, and it intersected with the circles of the followers of Saʿid (Kahf 2014, p. 1). The nonviolent course of action in this Syrian revolution was inspired, among other factors, by the ideas of Jawdat Saʿid. The opening sentence of the main page of this Syrian thinker and author reads: "We live in a world in which four-fifths of its population live in frustration while the other fifth lives in fear." 2 Thus, we are talking not merely about ideas but about a practical perception of the world, based on the need to erase the inequalities affecting our societies and leading to violence. There are few studies on this Syrian author and activist, who has a unique position in contemporary Islam (Müller 2010; Kahf 2014; Lohlker 2016; Murtaza 2016; Ollivry-Dumairieh 2016; Rak 2016; Belhaj 2017; Zecca 2020). As an activist, he participated in the nonviolent opposition in Syria in 2011. In 2013, he had to migrate to Turkey. His writings, however, are still read in the Arab world (and sometimes beyond).
The corpus we are using for our analysis is the comprehensive set of original writings and videos at jawdatsaid.net. To give an overview, we may mention (a) several books, (b) many articles, (c) videos (e.g., lectures illustrated with background pictures), (d) audio files (most of them from 2007 and 2008), (e) other articles in journals, (f) interviews in journals, and (g) contributions on contemporary Islamic issues. Some books in English and French are available, but the bulk of the material is written and produced in Arabic.
Methodologically speaking, this study of the ideas of Jawdat Saʿid performs a close reading of selected texts by Saʿid to explore his way of thinking. These texts are framed by the approach developed by Shahab Ahmed (cf. below). The style of this article is rhizomatic (cf. Lohlker 2021, p. 122), allowing for a precise presentation and reconstruction of the ideas of Saʿid, using the quotations as points of intersection between the ideas presented. We are aware that this may not be easy for readers expecting conventional narratives, but this way of presenting Saʿid is well in line with advanced philosophical approaches and with sampling as a method of artistic research (cf. Navas 2012).
Biography
Who is Jawdat Saʿid? The best short biography was written by Crow 3 : "Jawdat Saʿid was born in 1931 in the Circassian village of Bi'r 'Ajam, south of Qunaytra in the Golan Heights. His family (named Tsai) was part of the wave of Circassian immigration from Russian territory into the Arab provinces of the Ottoman empire in the late nineteenth century. At the age of fifteen he was sent to study in Cairo at the prestigious Al-Azhar University, graduating in 1957 with both a university degree in Arabic literature and a diploma in education. After returning to Syria he taught for over ten years, first in the Dar al-Mu'allimin (Teachers' College) in Damascus and then in high schools in and around Damascus, including teaching "morale" in military schools (e.g., in the city of Homs in central Syria). Increasingly, he found himself demoted to less prestigious schools. In 1968, Saʿid was dismissed from his government employment as a teacher, due to his advocacy of ideas on Islamic peace and their implications for radical social transformation, for his published views (his first book appeared in 1966), and for his activism through lecturing in mosques, civic centers, and within Syrian intellectual and social circles. In 1968 he was imprisoned by the Syrian authorities for a year and a half. He has been to prison under the Ba'th regime five times, usually for periods of several months, the last time being in 1973. During the early 1980s, when the Syrian Ikhwan al-Muslimin (Muslim Brethren) were actively opposing President Asad's regime, he was often interrogated and watched, although he has never been a member of the Muslim Brethren. For well over a decade he chose to live in voluntary internal exile, working in Tolstoy-like fashion at his family's apiary in Bi'r 'Ajam. This exemplifies his conviction that intellectual freedom must be linked to gainful work. His withdrawal from active social engagement, coinciding with the clash between the Islamist opposition and the Syrian government, was motivated by his understanding of the Islamic requirement to avoid fitnah or civil discord and violence. Since the early 1990s, Saʿid has gradually become more active within Syria, cultivating contacts and engaging in dialogue with a wide spectrum of religious, political, and social trends within the Sunni religious establishment [ . . . ], with Communists, Arab nationalists, and the Union of Arab Writers [ . . . ]. This reflects Saʿid's commitment to accepting other viewpoints, fostering a more secure sense of community and common purpose among Arab Muslims, and tolerating the pursuit of different directions in finding solutions." (Crow 2000, pp. 64-65) His stay in Egypt from 1946 to 1958 was crucial (cf. below) for the intellectual development of Saʿid. Important writers influencing him were Abu A'la Mawdudi (d. 1979 CE), Jamal al-Din al-Afghani (d. 1897 CE), Muhammad 'Abduh (d. 1905 CE), Muhammad Iqbal (d. 1938 CE) and Malik Bennabi (d. 1973 CE), the latter two being among the most influential thinkers of the modern Islamic world.
After staying in Egypt, he traveled to Saudi Arabia and then the United Arab Republic of Egypt and Syria; then he traveled to Iraq, India, and Pakistan. Thus, he gained first-hand knowledge of many parts of the Islamic world. He also met the influential Islamic scholar Abul Hasan 'Ali Nadvi (d. 1999 CE) in India. Thus, we may sketch his influences before finally returning to Syria (cf. above). 4
A Shahabian Approach
We will situate the ideas and practice of Jawdat Saʿid in the Con-Text of revelation. Following Shahab Ahmed, this hermeneutical engagement is based on the previous hermeneutical engagement being present as Islam (cf. Ahmed 2016, p. 356). Ahmed writes: "Con-Text is thus the entire accumulated lexicon of means and meanings of Islam that has been historically generated and recorded up to any given moment: it is the full historical vocabulary of Islam at any given moment. When a Muslim seeks to make meaning in terms of Islam, he necessarily does so in engagement with and by use of the existing terms of engagement-that is, in engagement with and by use of the existing vocabulary of Islam. The vocabulary of Islam registers, denotes and makes available the meanings of previous hermeneutical engagement; the meanings of previous hermeneutical engagements are, in other words, discernibly embedded in the semantic units of this existing vocabulary of forms. Thus, in a given time or place, for the meaning of an act or utterance to be recognizable in terms of Islam it must be expressed in the vocabulary of Con-Text." (Ahmed 2016, p. 357) Other important terms in Ahmed's analysis are Pre-Text and Text. Pre-Text is not to be understood as chronologically prior to the Text of the revelation/the Qurʾan; it is ontologically and alethically before it and encompasses "the Unseen Pre-Text of the Revelation" (Ahmed 2016, p. 347) as being continuously present in the world and in Islam. The hermeneutical engagement with the Text/the Qurʾan takes place in the world of the Unseen of the Pre-Text and is made livable in the Con-Text. The Con-Text can be attributed and traced to the Text and Pre-Text and provides the web of meaning(s) by which Muslims live their hermeneutical engagement with Revelation (Ahmed 2016, pp. 358-59). Taking up this framework, we may start to analyze the ideas and practice of Jawdat Saʿid as an example of hermeneutical engagement with the Revelation. In the context of modern Islamic thought in the Arab world, his position is specific but present until today, contrary to the impression that violent and fundamentalist ideas dominate the field of discussion.
Returning to the problem of societal change addressed in the beginning, we may refer to Zecca, who wrote in her review of an anthology of translations of writings of Jawdat Sa'id in Italian that his ideas may be analyzed as a reaction to the conditions of the contemporary Arab world and its despotic regimes. Hence, change of this situation is a core idea of Sa'id: "Sa'id defends the possibility of a pacific change which should establish democratic political systems based upon human rights. It is impossible, according to Sa'id, for war to be a vector of change, especially because he considers violence, as a mode of action, anachronistic in relation to the evolution of humanity within our time. He defines the man who resorts to violent action as someone who lives in an 'abrogated time'. He compares young men sent to war to the human sacrifices of ancient populations [ . . . ] and, referring to the endless status of war of the Arab states, he underlines the stupidity of governments who continue to buy weapons from Occidental companies in order to fight one against the others [ . . . ]. Appealing to the unity of the Muslim world, Sa'id exhorts to the end of the arms trade, also comparing weapons to fetishes of the Jahilīyya (pre-Islamic or ignorance) period [ . . . ]." (Zecca 2020, p. 215) These remarks hint at the subterranean linkages of the ideas of Jawdat Sa'id to the Arab revolutions after 2010, mentioned before. Hence, the impact of Jawdat Sa'id was the need to change the situation of Arab societies and the Islamic world. To develop this idea, he began to rethink shared notions of what being Islamic means.
During and after his stay in Cairo at the al-Azhar university, he was deeply involved in the contemporary discussion in the Arab and Islamic world. His main persons of reference were Muhammad Iqbal (Hillier and Koshul 2015; Majeed 2009) and Malik Bennabi (Seniguer 2014; Sherif 2018). Unlike the Syrian opposition, which moved toward a violent strategy in the 1960s, he published his first book in 1966, the year when Sayyid Qutb, famous for his book Milestones, a programmatic work of the first wave of modern violent Jihadism, was executed. Sa'id's book may be read as an answer to this text that was based on the experience of the repressive regime in Egypt (cf. below). A writer and activist in a likewise repressive context in Syria was able to create a theory of nonviolence understood as an integral part of Islam. This may be proof that the results of the hermeneutic engagement with the Qur'anic revelation may be even contradictory, as Shahab Ahmed wrote.
The Path of Adam's First Son
The first book of Jawdat Sa'id 5 we mentioned is The Path of Adam's First Son: The Problem of Violence in Islamic Activism (cf. Menghini 2019, p. 58; Sa'id 1993). In another text, Sa'id mentioned that the first time he publicly spoke about The Path of Adam's Son 6 was in 1965 during the Friday prayer in the month of Ramadan. He describes the emergence of this idea during his time as a student at the University of al-Azhar, where he experienced the incertitude and upheaval of the Arab and Islamic world (Sa'id n.d.). Which kind of theory emerged from this situation? The relationship of law and religion in the Muslim community has to be constructive and dynamic. It should not follow the method of imitation and blind acceptance (taqlid) that, for Sa'id, has been a decisive factor in the decline of the Islamic world as a whole.
"In this case Sa " id was strongly influenced by another great Muslim thinker of Jewish descent Muhammad Asad , who commented in his highly acclaimed book Islam at the Crossroads, that whereas Islam was a perfect system for mankind, it was its believers who failed to live according to its message.
One recurring theme in Sa'id's thought is the need to observe laws, which constitute a profound part of knowledge, he believes. He particularly strongly stresses the notion of change which needs to occur, quoting the Qur'ān: Verily never will God change the condition of a people until they change that what is in their souls. Law allows duties, obligations, and freedoms to be established, but it is injustice that destroys societies. It is humans that are faulty, not the law itself. Law is supposed to protect everyone. In the cycle of history, people relinquish their right to protection and leave it to the law. Sa'id warns that when a person gets his right to self-protection, by which he means any kind of violent means, the individual once again becomes part of the law of the jungle, force. Law on the other hand is opposed to violence. The question one needs to ask is when exactly did the shift between the law of violence and dialogue take place?" (Rak 2016, pp. 35-36) 7 The theory he presents is to be found in his book The First Son of Adam. We will follow the presentation of Rak. The starting point of the book is the story of Cain (Qabil) and Abel (Habil) as told in the Qur'an, another case of hermeneutic engagement with the text of the revelation. It reads: "And recite unto them, with truth, the account of Adam's two sons, when they offered a sacrifice, and it was accepted from one of them, though not accepted from the other. One said: 'I will surely slay you!' [The other] said: 'God accepts only from the reverent. Even if you stretch forth your hand against me to slay me, I shall not stretch forth my hand against you to slay you. Truly I fear God, Lord of the worlds. I desire you should be burdened with my sin and your sin and so become one of the inhabitants of the fire. Such is the recompense of the wrongdoers.' Then his soul prompted him to slay his brother, and he slew him, and thus came to be among the losers. Then God sent a crow, scratching the earth, to show him how he might conceal his brother's nakedness. He said, 'Oh, woe unto me! Am I not able to be even as this crow and conceal my brother's nakedness?' And he came to be among the remorseful." (Sura 5, al-mā'ida, 27-31) 8 Abel refraining from slaying his brother materialized the philosophy of nonviolence so dear to Sa'id. The ultimate result of Cain slaying Abel is the grief and sorrow of Cain as described by Sa'id. Thus, Habil brings about the historical shift in human behavior by not acting violently.
"Humanity arises from violence, the period of muscles-as Sa " id states-to the period of mind and comprehension, leading it to grant moral values a growing presence in one's actions. The choice between the right and wrong actions is still voluntary, but in Abel's choice to act against violent methods one can notice the introduction of the law of dialogue, openness to the Other that is visible in acts of moral responsibility, which is one of the key factors driving human nature in its decisions. A different decision, that made by Abel, would only bring human regression. God by creating people and granting them the role of being His viceregents on earth expects that humankind will finally start acting according to the role that is presented to them. The shift in authority, first based on violence, later leads to comprehension. Sa " id sees this as an evolution from the law of the jungle to the law of understanding. This behaviour is full of trust in human evolution. Violent actions are perceived as a form of regression understood as blasphemy, which is considered a major crime in Islam because it means acting against nature and God's order." (Rak 2016, p. 36) For Sa " id, knowledge and nonviolence is to be understood from a Qur'anic perspective. However, it is necessary to move beyond the realm of texts and include the historical experiences of humanity. Yet he understands the human fallibility and tendency to misinterpret, especially, the messages of the prophets. However, experience may help to find a way out. Sa " id argues for the need for a diversity of readings. Following the example of the ancestors would lead to what is called taqlid, blind acceptance of former views, restricting openness, diversity, and progress in Muslim thought. He points to Iqbal's idea of the difference between religion and the human understanding of religion. Experience is vital to a true understanding of religion (Rak 2016, p. 38). The concept of the need for experience may be regarded as another Islamic legitimation of nonviolent activism conceptualized by Sa " id to create a new history. Sa " id shies not away from controversial points when criticizing the Muslim orthodoxy. Racism or ethnocentrism stem, according to Sa " id, from the denial "of the possibility of prophecy to other religious and cultural figures. It is interesting to note the many quotations he himself uses throughout his writings from other than Islamic sources." (Rak 2016, p. 39) Two important other concepts are equity and justice. Equity is for Sa " id the perfect realization of tawhid or believing in one God and unity, a narrow path. 9 Justice, on the other hand, is best exemplified by the Qur'anic saying "There is no coercion in religion." (cf below).
"Equity for him means no more than the process of denunciation of tyranny and the act of prohibiting religious coercion. It is interesting to note that Sa " id sees tyranny as a specific case of breaching the teachings of Islam-and calls it an example of polytheism, an unforgivable sin. According to Sa " id, the call for equality is vital for human prosperity. The main problem of mankind is connected to the rejection of the need for equality, or equity, which can give some people the feeling of superiority, a nearly godlike position among others. This superiority is embedded in the arrogance of people, which is an obstacle not only in building everyday relations but, in the believers' eyes, may also prevent one from entering paradise in the hereafter." (Rak 2016, p. 39) Rak sketches the concept of Malik Bennabi that there is a certain state of mind or the conditio humana allowing for the emerging of a disposition to be colonized. This state of mind creates the conditions for being colonized. The root causes are the weakness and apathy emerging in Arab societies including the loss of communities dispersed into assemblies of individuals (cf. below). For Sa " id, the vital element of the story of Habil he refers to is the ability to end oppression and to build a new society based on equal rights. To use an argument of Sa " id (1993), nonviolence means a shift to the nervous system from the muscular system. The main example for Sa " id are the prophets addressing the minds of people and not their bodies. This means that no physical actions are needed (Rak 2016, p. 40). Physical action will be needed when change to a nonviolent society has to take place (cf. below).
Nonviolence is, for Sa'id, an act and idea of freedom since it can be traced back to disobedience, "the negation of the need to take harmful action against another. A disobedience to the culture of muscles as he calls it" (Rak 2016, p. 40). Not engaging in violence is the final proof of intellectual freedom.
Change
Sa " id stresses the need for individual and societal change in his book referring in its title to sura 13, al-ra " d, 11: "Truly God alters not what is in a people until they alter what is in themselves." 10 The title is: Until They Alter What Is in Themselves. 11 The interdependence of individual and collective change as spiritually inspired is described by Sa " id as a dual change. The first change is that instilled by God in his creation; the second one, that of the humans, is inspired by God, "a gift from God". Humans will be able to realize this inspiration when they are willing (cf. below) to change themselves. The change, however, is relevant for the individuals. It is a collective change of an entity composed of these individuals.
In this book, Sa'id directly criticizes some thoughts of Sayyid Qutb, one of the forefathers of modern-day Jihadism. This indicates the involvement of Sa'id in the ongoing discussions in the Arab and Islamic world of this period. Sayyid Qutb may be regarded as the paragon of the movement advocating the use of force and coercion against all other Muslims and non-Muslims.
The denial of coercion (ikrāh) is, as mentioned before, crucial for The Path of the First Son of Adam. This concept is further discussed in other texts that may help us understand the hermeneutic engagement of Sa'id with the Qur'anic revelation and to situate him in the contemporary landscape of Islamic discourses.
Lā ikrāh fi 'd-dīn
A Qur'anic verse especially discussed by Sa'id is Sura 2, al-baqara, 256. Usually, the shortened version is used: "There is no coercion in religion." 12 "The tempter to error (tāghūt) 13 is the one who brings coercion (ikrāh). Hence, it is ordered not to believe in the tempter. The believers are told to believe in God for whom it is true that there is no coercion in His religion (dīn). 14 He is not afraid of suffering defeat from renouncing coercion. He trusts in textual logic (mantiq), in the humans (insān) and in God in whose religion is no kind of coercion.
"As to coercion in religion, the removal of coercion is of its most important chapters, more important than all the other chapters. In particular, politics (siyāsa) based on coercion is no [true] leadership. There is no truth (rushd) but error (ghayy) 15 and deceit. [ . . . ] According to the strength of coercion truth is far away and the Shari'a 16 is defective or not existing at all. [ . . . ] It may be said that according to the advice of the Qur'an to watch out in the future since adopting coercion 17 since the history of this issue is pitch-black." (Sa " id 1998) Since Jawdat Sa " id mentions the West as a paradigm for adopting coercion and making it the source of predominance, we may identify one element of the Pre-Text of this interpretation. The other main element is the tyranny of the contemporary Arab world. The references to the Qur'anic revelation are easily identifiable. These presuppositions and the reference of the Qur'anic revelation enable the believers to make a deliberate choice for the devotion of the Qur'anic injunction to resist oppression and coercion. This kind of resistance is, for Sa " id, legitimate if it does not lead to coercion and violence. Hence, these paragraphs make the call to nonviolent resistance based on the Qur'anic revelation visible. The framework for the nonviolent opposition in Syria mentioned at the beginning of this chapter is laid out.
The crucial factors that will enable the change needed in society, especially Muslim societies, are described by Sa'id as a manifold endeavor: work or activism ('amal), will (irāda), ability (qudra), and the application of these principles. They are sketched in a book 18 called al-'Amal: Qudra wa-Irada or Work as Ability and Will (Sa'id 1984).
Work as Ability and Will
Since the will to choose a nonviolent path to action without turning to coercion is essential for the theories of Sa'id, we have to turn to this book. Sa'id compares the spirit of God with the will of humans. Thus, he argues: "This is to demonstrate that the body's spirit is its will; once the will is lost, then the body must die-it decomposes in the same way as the individual body decomposes and reverts to its constituent elements. When the community decomposes, its individuals, having lost the common will, will revert to their primitive interests: struggling to preserve their individual lives, not caring about the development of society. It will be an aggregate of individuals, each unto himself/herself. Indeed, the community comes into being at the time its individuals have wills that go beyond themselves as individuals and encompass the others-It is then that the society begins to exist as a body; and it is then that it is true of it to apply the Verse of the Qur'an: "To every people is a term appointed"; (10:49). It is when this happens that you imagine an ummah with a span of life, like an individual. The bond that brings a society together is a will that unites the individuals: one faith, one aim, one ideal . . . An ideal is the spirit of the society." (Sa'id 1984, p. 175) Sa'id's idea of will includes the need to uphold a common will lived in a society inspired by the existence of a community that embodies a super-individual spirit. For Sa'id this community is the Muslim ummah as the ideal community. This ideal community Sa'id is talking about is, for him, the nonviolent society he envisions. This community is embedded in the Islamic worldview of Sa'id and may be illustrated by one example. Sa'id distinguishes between two groups: "In Islamic tradition, we contrast two groups, the faqihs (scholars of Islamic legislation and rulings), and the Sufis. The latter identify themselves as the 'people of the will, or sincerity', and they designate the Sufi learner as the 'murid, i.e., the searcher for the Truth'. To them, the illiterate, the most ignorant, can ascend to a supreme level of sincerity and will. I find this a very good application of our theoretical discussion of the will: it indicates that the will can rise to a very high level even in the illiterate and the children, both female and male, as may be attested by their willingness to offer their money and life." (Sa'id 1998, p. 283) Hence, the will to change the personal life and society to a nonviolent one is open to every human willing to act accordingly. One author writing on Sa'id voices some criticism.
Criticizing Sa'id
Menghini wrote in his article on Sa'id that Sa'id's theory of Habil's path, his contextualization of nonviolence, and his exposition of the revolutionary potential of the Islamic idea of the one God (tawhid) as part of Sa'id's theology of nonviolence allow for a deep understanding of his ideas. Menghini, nevertheless, identifies some points to be criticized concerning his argumentation and practicability. Sa'id's selection of passages from the Qur'an and Hadith is not, according to Menghini, sufficiently explained and allegedly arbitrary. Menghini argues that there are other interpretations of these passages available (Menghini 2019, p. 56).
This kind of criticism is an example of a misunderstanding of the creative hermeneutical engagement with the Qur'anic revelation as a defective form of scholarly writing. The way of writing of an Islamic activist and believer is to be understood in internal terms as a coherent set of ideas aiming at establishing a theory of nonviolence based on the Islamic tradition. A criticism of the arbitrariness of the selection of Qur'anic and Hadith passages quoted by Sa'id reveals a misunderstanding of the hermeneutical approach of Sa'id and assumes a structure of the Qur'an and Hadith similar to that of a European novel of the nineteenth century. Furthermore, taking one book as representing all of the thoughts of one author reveals an underlying Orientalist worldview, assuming that the system of being Islamic of this author can be derived from just one source. Menghini refers to the fundamentalist writer Abu A'la Mawdudi 19 as presenting another interpretation than Sa'id, ignoring the diversity of Islamic interpretations of sources.
This difference in interpretation demands that Sa'id provide a more structured explanation of the reasoning behind his interpretation of this passage. The same can be said about other passages he chose to include in his argument, especially those where alternative readings suggest that nonviolence may be more of a response to circumstances than conscious adherence to Habil's path. When the Prophet Muhammad, for example, invited Muslims to be patient and not use violence against polytheists in Mecca, the reason could be connected with the equilibrium of forces specific to that moment, more than with a conscious choice of nonviolence. The non-Muslims in Mecca were much stronger than the Muslims, and so the choice not to confront them with force could have rested on a strategic, circumstantial justification.
The criticism goes on to assume that Sa'id should have written a book based on academic definitions. Reading Sa'id, we have to bear in mind that these texts (and videos) of an Islamic activist are not academic texts on early Islamic history. The demand that these texts follow the rules of another field of intellectual production is absurd. The absurdity is multiplied by other demands for definitions and explanations: "For instance, when explaining how the distinctive society will be created, Sa'id does not define what is meant by 'society'. Is this society simply Islamic, or should it be defined in terms of reach on a national or global scale?" (Menghini 2019, p. 57) Although the critique of Menghini may be excused as it appeared in a journal of undergraduate students, it reveals methodological shortcomings also detected in more elaborate scholarly works. Worse, it is one of the few available articles analyzing a book of Sa'id. 20 Thus, this article may be regarded as a paradigmatic case. Nevertheless, Menghini continues: "the relevance of Sa'id's work is clearly demonstrated in his innovative position on, and interpretation of, the principle of nonviolence. In constructing Habil's path as a new madhhab modeled on the lives of the prophets, Sa'id shows how nonviolence is a recurring theme throughout the history of Islam. As such, he makes a convincing argument that nonviolence is truly an "Islamic" principle.
[ . . . ] Moreover, Sa'id's theorization of nonviolence as a methodology sets him apart from many other philosophers both inside and outside the Islamic world." (Menghini 2019, p. 57) This leads us to look at the context of Islamic ideas of nonviolence. Despite the overwhelming literature on states, groups, and individuals promoting military Jihad as an Islamic duty, there is a sector of Islamic thought and activism promoting nonviolent activism. We may just mention the Pashtun activist and leader of the Khudai Khidmatgaran Abdul Ghaffar Khan (Easwaran 2002) or the Indian writer Wahiduddin Khan (Omar 2008a, 2008b). For our context, the Syrian writer and activist Afra Jalabi, who integrated the ideas of Jawdat Sa'id in her thought, is important (Jalabi 2018). She is the scion of a family of nonviolent activists.
Conclusions
Jawdat Sa " id's nonviolent reading of the Qur'an and his engagement with the Qur'anic revelation is a paradigmatic case to illustrate the many ways Muslims can engage with the revelation and-in his case-turn it into a tool for nonviolent activism. 21 Leaving aside the question of religious truth, the ideas of Sa " id maybe the heritage of the beginnings of the Syrian resistance before it was turned into violence and part of the heritage of this historical moment to further the development of nonviolent ideas and practices as a legacy for humanity. It is not a study on Syrian ideas. It is a study on part of the global discussion on nonviolence. Sa " id, e.g., has been lecturing in many countries and to global media. 22 His global approach can be understood by videos on his ideas. Thus, he is a Syrian thinker but not limited to Syria in his worldviews. 23 Further analysis of Jawdat Sa " id's thinking and practice will have to identify the sources of his ideas and the difference in presentation in writings, audio-visual presentations, commentaries, and other ways to convey his ideas.
Funding: This research received no external funding.
1 We intentionally refrain from trying to give a bibliography of the recent Syrian development since this article is focused on the ideas of one Syrian thinker and activist. The following quotation gives an outline of the Syrian revolution. The best visual biography is the Jawdat Sa'id Twitter Channel (2021). The exact birth date given in the video is 31 January 1931.
4 The video Jawdat Sa'id Twitter Channel (2021) shows a picture of Sa'id reading and a picture of Gandhi at the bookshelves in the background.
5 A study of all publications, interviews, videos, etc. is far beyond the scope of an article. Unfortunately, current research is far from producing the book-length study that would be needed. However, this article is an overview using carefully selected texts to give an outline of the ideas of Sa'id. | 8,426.6 | 2022-02-11T00:00:00.000 | [
"Philosophy",
"Political Science"
] |
An integral equation representation for American better-of option on two underlying assets
In this paper, we study the problem of pricing an American better-of option on two assets. Due to the two correlated underlying assets and the early-exercise feature, which requires two free boundaries to be determined for the option price, this problem is complex. We propose a new and efficient approach to solve this problem. Mellin transform methods are mainly used to find the pricing formula, and an explicit formula for the option price is derived as an integral equation representation. The formula has two free boundaries which are represented by coupled integral equations. We propose a numerical scheme based on a recursive integration method to implement the integral equations and show that our approach with the proposed numerical scheme is accurate and efficient in computing the prices. In addition, we illustrate significant movements of the option prices and the two free boundaries with respect to the selected parameters.
Introduction
The problem of option pricing has received a lot of attention because options are among the most popular derivatives in the financial market. Black and Scholes [1] first solved the option pricing problem when the underlying asset follows a geometric Brownian motion and provided closed-form solutions for European option prices. Since the Black-Scholes model was proposed, various option pricing problems have arisen with the development of the financial market. Among them, the American option problem has been widely studied by many researchers over the past three decades. The main reason is that an American option can be exercised at any time before maturity, unlike European options which can be exercised only at maturity. Because of this feature, it is well known that closed-form pricing formulas do not exist for American options. To provide the prices of American options without closed-form pricing formulas, several numerical approaches and analytical pricing formulas have been proposed. For American option valuation, various numerical methods such as lattice methods [2][3][4], finite difference methods (FDM) [5,6], analytical approximation methods [7,8], Monte Carlo (MC) simulation methods [9,10], integral representation methods [11,12], and hybrid methods [13][14][15] have been developed. These methods have often been used to price American options. In this paper, we consider a type of American option and study the valuation of options on multiple assets.
Options on multiple assets have been popular with investors in the market because multi-asset options are useful for hedging or diversification in practice [16].
In fact, there exist various kinds of multi-asset options: exchange options [17][18][19], spread options [20,21], quanto options [22,23], basket options [24,25], rainbow options [26,27], etc. Among the multi-asset options, we focus on the better-of option, which is one of the rainbow options. The better-of option, also called the "option on the maximum of two risky assets", was first introduced by Stulz [28], who provided a closed-form pricing formula for the European better-of option under the Black-Scholes model [28]. However, there has been no closed-form pricing formula for the American better-of option because of the features of American-style options. Recently, Gao et al. [29] studied the pricing of an American better-of option using a numerical method. They proposed a primal-dual active-set (PDAS) method to numerically solve the discrete linear complementarity problem arising from the pricing of the American better-of option. We also deal with the valuation of the American better-of option in this paper. Specifically, we derive an analytical pricing formula for the American better-of option as an integral equation based on the partial differential equation (PDE) approach.
The main contribution of this paper is to present a new approach for pricing the American better-of option. To the best of our knowledge, there is no explicit pricing formula for the American better-of option. To solve the PDE for the American better-of option price, we adopt Mellin transforms as the main approach. Mellin transform approaches have been employed widely for PDEs in option pricing. Using the properties of Mellin transforms, the PDEs for some options can be reduced to simple ordinary differential equations (ODEs). The applications of Mellin transforms to option pricing were first considered by Panini and Srivastav [30], who provided the solutions for prices of European options and American options. After this pioneering work, various types of options, including the standard options, have been studied based on Mellin transform approaches: for instance, American lookback options [31], barrier options [32][33][34], Russian options [35], basket options [36], vulnerable options [37][38][39][40], etc. In line with this research, we propose an efficient approach using the properties of Mellin transforms to obtain a pricing formula for the American better-of option and provide the explicit solution as an integral equation.
This paper is organized as follows. In Sect. 2, we formulate the pricing problem for the American better-of option on two correlated assets. In Sect. 3, we study the valuation of the option based on the PDE approach and analyze the two free boundaries of the American better-of option. Using Mellin transforms, we provide the explicit pricing formula of the American better-of option as an integral representation. In Sect. 4, we propose a numerical scheme for the implementation of the integral equation for the option price and present numerical results that demonstrate the accuracy and efficiency of our approach, as well as the properties of the free boundaries and option prices with respect to some parameters. In Sect. 5, we present concluding remarks as well as directions for future work.
Model formulation
Under the risk-neutral measure P, we assume that the dynamics of the correlated underlying assets S_1 and S_2 follow geometric Brownian motions, where r > 0 is the constant risk-free interest rate, and q_i > 0 and σ_i > 0 (i = 1, 2) are the dividend rate and volatility of the i-th underlying asset S_i, respectively. B_1 and B_2 are standard Brownian motions defined on the probability space (Ω, F, P), where F is the natural filtration generated by (B_{1,t})_{t=0}^{T} and (B_{2,t})_{t=0}^{T}, and the two Brownian motions are assumed to be correlated. We now consider an American better-of option on two assets with a given maturity T > 0. In the absence of arbitrage opportunities, the price V(t, s_1, s_2) of the American better-of option is expressed as an optimal stopping problem (2) over U_{t,T}, the set of all F-stopping times taking values in [t, T]. By a standard approach for the optimal stopping problem (see Peskir and Shiryaev [41]), V(t, s_1, s_2) satisfies a two-dimensional parabolic variational inequality posed on the domain D²_T with the operator L₂. Applying a dimension-reducing transformation, the problem is restated in terms of a value function P(t, z), which satisfies the corresponding inequality on the domain D¹_T with the operator L₁. We can then define the continuation region CR_z and the exercise region ER_z. According to Theorem 7.2 in [42], there exist two free boundaries ξ_low(t) and ξ_up(t) such that the two regions CR_z and ER_z can be written in terms of these boundaries. Moreover, (c) smooth-pasting conditions hold at the boundaries, and (d) the optimal stopping time τ* solving (2) is the first time the transformed process leaves the continuation region. Thus, we can deduce that P(t, z) satisfies an inhomogeneous parabolic partial differential equation (PDE) (10), with inhomogeneous term h(t, z) and payoff-related function g(z). The continuation region, the exercise region, and the free boundaries are illustrated in Fig. 1.
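The displayed equations in this section were lost in extraction. A plausible reconstruction of the standard setup, consistent with the quantities named in the prose, is sketched below; the correlation parameter ρ and the exact form of the dimension reduction are assumptions and are not taken from the paper itself.
\[
dS_{i,t} = (r - q_i)\,S_{i,t}\,dt + \sigma_i\,S_{i,t}\,dB_{i,t}, \qquad i = 1,2, \qquad dB_{1,t}\,dB_{2,t} = \rho\,dt,
\]
\[
V(t,s_1,s_2) = \sup_{\tau \in \mathcal{U}_{t,T}} \mathbb{E}\Big[ e^{-r(\tau - t)} \max\big(S_{1,\tau}, S_{2,\tau}\big) \,\Big|\, S_{1,t} = s_1,\ S_{2,t} = s_2 \Big],
\]
\[
z = s_1/s_2, \qquad V(t,s_1,s_2) = s_2\,P(t,z),
\]
where the last line uses the degree-one homogeneity of the better-of payoff, the usual route to a one-dimensional free boundary problem in z.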
Figure 1: The continuation region CR, the exercise region ER = ER_up ∪ ER_low, and the free boundaries ξ_up(t), ξ_low(t) of P(t, z).
Valuation of American better-of option on two assets
In this section, we present the main results of this paper. Specifically, we derive explicit analytic formulas for the value function P(t, z) and the two free boundaries ξ_up(t) and ξ_low(t).
The main idea for deriving the analytic formulas is to apply the Mellin transform to the inhomogeneous PDE (10). Let P_M(t, x), H_M(t, x), and G_M(x) be the Mellin transforms of P(t, z), H(t, z), and G(z), respectively. By applying the Mellin transform to the inhomogeneous PDE (10), we obtain the inhomogeneous ODE (11), whose solution (12) is easily found. Applying the inverse Mellin transform to both sides of (12), we obtain the solution of the PDE. Let us denote by Q(t, z) the inverse Mellin transform of the exponential factor appearing in (12). It then follows from the Mellin convolution theorem (see Proposition 3.1 in [35]) that the solution can be expressed as Mellin convolutions of Q with H and G, as stated in (15). Lemma 1 Let A be an arbitrary real number and B be a positive constant. Then the following equalities hold, where N(·) is the standard normal cumulative distribution function. Proof First, we consider the first integral, where the second equality is obtained from the transformation w = log(s/u). Similarly, we obtain the second equality. From (15) and Lemma 1, we have the following proposition.
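The transform definitions were likewise lost in extraction; the convention standard in this literature, and presumably the one intended here, is
\[
\mathcal{M}\{f\}(x) = \int_0^{\infty} f(z)\, z^{x-1}\, dz, \qquad f(z) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \mathcal{M}\{f\}(x)\, z^{-x}\, dx,
\]
with the Mellin convolution theorem reading
\[
\mathcal{M}^{-1}\big\{ F(x)\, G(x) \big\}(z) = \int_0^{\infty} f\!\left(\frac{z}{u}\right) g(u)\, \frac{du}{u}.
\]
This convolution structure is what turns the product of transforms in (12) into the integral representation (15).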
Proposition 1
Moreover, the smooth-pasting condition (8) allows us to state the next corollary.
Corollary 1 The two free boundaries ξ_up(t) and ξ_low(t) satisfy the following coupled integral equations. From substitution (4), we finally have the integral equation representation for V(t, s_1, s_2), which is the price of the American better-of option on two assets, in the following theorem. Theorem 1 The price V(t, s_1, s_2) of the American better-of option in (2) is presented as an explicit integral formula.
Numerical results
Since the explicit analytic formula for V(t, s_1, s_2) in Theorem 1 is expressed in terms of the two free boundaries ξ_low and ξ_up, we need to solve the coupled integral equations for the two boundaries in Corollary 1. Although the coupled integral equations are rather complicated, we can solve them using a numerical scheme that combines the approaches of Chiarella and Ziogas [43] and Huang, Subrahmanyam, and Yu [44]. We briefly summarize our numerical scheme in the next subsection.
Numerical implementation
We can rewrite the coupled integral equations in Corollary 1 as the system (16).
We now present how to numerically solve the coupled integral equations for ξ_up and ξ_low in (16). First, we partition the time interval [0, T] into N + 1 time steps with end points t_i = iΔt, where Δt = T/N and i = 0, 1, . . . , N. Let us denote by ξ^i_up and ξ^i_low the numerical approximations of ξ_up(t_i) and ξ_low(t_i), respectively.
For t = t_1, we can approximate the coupled integral equations (16) by the trapezoidal rule, which yields the discretized system (17). Since ξ_up(t) and ξ_low(t) are decreasing and increasing functions for t ∈ [0, T], respectively, it follows from ξ_up(T) = ξ_low(T) = 1 that ξ^i_up ≥ 1 and ξ^i_low ≤ 1 for i = 1, 2, . . . , N. Hence, we can rewrite the coupled equations in (17) as (21) and (22). Note that the only unknowns in (21) and (22) are ξ^1_up and ξ^1_low, respectively. By applying the bisection method to (21) and (22), we can find ξ^1_up and ξ^1_low. Recursively, we find ξ^i_up and ξ^i_low for i = 2, 3, . . . , N by solving the analogous coupled equations at each step. Using the values {ξ^i_up}_{i=0}^{N} and {ξ^i_low}_{i=0}^{N}, we can approximate the value function P(t, z). For a sufficiently large number of sub-intervals N, ξ^N_up, ξ^N_low, and P_n(t, z) converge to ξ_up(0), ξ_low(0), and P(t, z), respectively (see Huang, Subrahmanyam, and Yu [44]). To accelerate the convergence, we can apply the three-point Richardson extrapolation scheme developed by Geske and Johnson [8].
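As an illustration, a minimal sketch of this recursion is given below. The residual functions, which would encode the trapezoidal-rule discretization of the coupled equations (16), are not reproduced in the extracted text, so they are left as caller-supplied callables with hypothetical signatures; the bracketing intervals follow from the monotonicity bounds ξ_up ≥ 1 ≥ ξ_low derived above.

```python
import numpy as np

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Simple bisection root finder; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def solve_free_boundaries(N, residual_up, residual_low, up_hi=50.0, low_lo=1e-6):
    """Recursively solve the discretized coupled integral equations.

    residual_up(i, x, xi_up, xi_low) and residual_low(i, x, xi_up, xi_low)
    are assumed to implement the trapezoidal discretization of (16)
    (hypothetical signatures, not from the paper); at step i only the
    i-th boundary values are unknown, all earlier values being fixed.
    """
    xi_up = np.ones(N + 1)   # xi_up = xi_low = 1 at maturity
    xi_low = np.ones(N + 1)
    for i in range(1, N + 1):
        # xi_up is bounded below by 1, xi_low is bounded above by 1
        xi_up[i] = bisect(lambda x: residual_up(i, x, xi_up, xi_low), 1.0, up_hi)
        xi_low[i] = bisect(lambda x: residual_low(i, x, xi_up, xi_low), low_lo, 1.0)
    return xi_up, xi_low
```

The three-point Richardson extrapolation of Geske and Johnson [8] would then be applied to prices computed for increasing N, as described in the text.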
Numerical experiments
In this subsection, we present the results of numerical experiments. Specifically, using the numerical scheme proposed in Sect. 4.1 and the formula in Theorem 1, we demonstrate the accuracy and efficiency of our approach and examine the significant movements of the boundaries and prices with respect to some parameters. For the experiments, the baseline parameters are based on the works of [29,42].
In Table 1, we present a comparison between our explicit pricing formula and the binomial tree method (BTM) [42]. The values obtained by the BTM with 20,000 time steps are considered as the benchmark values, and 'R-err' in Table 1 denotes a relative error defined by R-err := ('Our approach' − 'BTM') / 'Pricing formula', i.e., the difference between the two methods normalized by the value from our pricing formula.
Comparing the values obtained by our formula with the values obtained by the BTM, we find that 'R-err' is very small in Table 1. Additionally, to calculate each option price, our approach takes less than 0.01 seconds, whereas the BTM approach takes more than 37 seconds. That is, we conclude that the approach based on our explicit pricing formula is accurate and efficient. Figure 2 illustrates the behavior of the two free boundaries (optimal stopping boundaries) of the American better-of option with respect to the two dividend rates (q_1, q_2) and the two volatilities of the underlying assets (σ_1, σ_2). Figures 2(a) and 2(b) show that the areas of the continuation region become narrower as q_1 and q_2 increase, respectively. In Fig. 2(a), we find that the upper free boundary is more sensitive to q_1 than to q_2. On the other hand, in Fig. 2(b), we can see that the lower free boundary moves more sensitively with respect to q_2. Figures 2(c) and 2(d) show that the area of the stopping region becomes wider when the volatilities of the two underlying assets increase. From Figs. 2(c) and 2(d), we can find that the stopping region is more affected by the volatility σ_1 of the underlying asset S_{1,t} than by the volatility σ_2 of the underlying asset S_{2,t}. We note that the boundaries barely change as the time to maturity (T − t) increases if the volatility σ_2 is very small. Figure 3 illustrates how the option prices change when the initial value of S_{1,t} increases. As shown in Fig. 3, there are significant differences between prices near the at-the-money region. Figures 3(a) and 3(b) show the effects of dividends on the option price. As expected, we can see that the option with a high dividend is cheaper than the option with a low dividend. Figures 3(c) and 3(d) present the movements of the option prices for different volatilities. We observe that the option price increases as the volatility increases. We also find that the option prices are more sensitive to σ_1 than to σ_2.
Concluding remarks
In this paper, we proposed a new approach for pricing the American better-of option based on the PDE approach. We represented the option pricing problem as a free boundary problem and used Mellin transforms to solve the PDE. From these approaches, we derived an explicit pricing formula for the American better-of option with two free boundaries, which satisfy coupled integral equations. Hence, the pricing formula was provided as an integral equation representation.
The derived integral equation involves simple integrals. Thus, the prices and the boundaries for American better-of options can be computed efficiently. To show the efficiency and accuracy of our approach, we performed numerical experiments with the binomial tree method for the simulations and compared the values of American better-of options obtained by the formula with the simulation results. The results show that the pricing formula is computationally efficient and accurate. Moreover, we presented several graphs to analyze the behaviors and sensitivities of the prices and free boundaries. From the graphs, we found significant movements of the option prices and free boundaries with respect to the selected parameters.
"Mathematics"
] |
Design and performance of the field cage for the XENONnT experiment
The precision in reconstructing events detected in a dual-phase time projection chamber depends on a homogeneous and well understood electric field within the liquid target. In the XENONnT TPC the field homogeneity is achieved through a double-array field cage, consisting of two nested arrays of field shaping rings connected by an easily accessible resistor chain. Rather than being connected to the gate electrode, the topmost field shaping ring is independently biased, adding a degree of freedom to tune the electric field during operation. Two-dimensional finite element simulations were used to optimize the field cage, as well as its operation. Simulation results were compared to ${}^{83m}\mathrm{Kr}$ calibration data. This comparison indicates an accumulation of charge on the panels of the TPC which is constant over time, as no evolution of the reconstructed position distribution of events is observed. The simulated electric field was then used to correct the charge signal for the field dependence of the charge yield. This correction resolves the inconsistent measurement of the drift electron lifetime when using different calibration sources and different field cage tuning voltages.
Introduction
The strongest direct constraints on dark matter in the form of weakly interacting massive particles (WIMPs) come from noble liquid-gas dual-phase time projection chambers (TPCs) [1][2][3][4][5][6]. The XENONnT experiment, located at the INFN Laboratori Nazionali del Gran Sasso (LNGS) in central Italy, deploys a dual-phase TPC with a liquid xenon (LXe) target of 5.9 t and set an upper limit on the spin-independent WIMP-nucleon elastic scattering cross section down to 2.58 × 10⁻⁴⁷ cm² for a 28 GeV/c² WIMP mass at 90 % confidence level [1].
A particle interacting in the liquid xenon target produces a prompt scintillation light signal (S1) and frees ionization electrons. The S1 vacuum-ultraviolet (VUV) photons are detected by a top and a bottom array of photomultiplier tubes (PMTs), while the electrons drift upwards following the electric drift field created by a cathode and a gate electrode. They are then accelerated into a high electric field region between gate and anode. There they are extracted into the gaseous phase and produce a secondary proportional scintillation signal (S2) before being collected on the anode electrode. The localized nature of the S2 signal allows an (x, y)-position reconstruction based on the detected light distribution in the top PMT array, while the time difference between the S1 and the S2 signal gives an estimate for the z coordinate. The ratio between S1 and S2 provides information about the nature of the underlying interaction. For a given S1 signal, nuclear recoils (NRs) of WIMP or neutron interactions are characterized by a smaller S2 signal than electronic recoils (ERs) from beta or gamma interactions.
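For concreteness, the z reconstruction described here amounts to multiplying the measured drift time by the electron drift velocity; a minimal sketch is shown below, where the default drift velocity is an order-of-magnitude placeholder and not a number quoted in this paper.

```python
def reconstruct_z(t_s1_ns, t_s2_ns, v_drift_mm_per_us=0.68):
    """Estimate the interaction depth z (in mm, negative below the gate)
    from the S1-S2 time difference. The drift velocity default is a
    typical value for LXe used here only as a placeholder; in practice
    it depends on the local drift field."""
    drift_time_us = (t_s2_ns - t_s1_ns) / 1_000.0
    return -v_drift_mm_per_us * drift_time_us
```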
The electric field at the interaction point in the LXe also affects the signal ratio S2/S1. For this reason, a homogeneous and well understood electric drift field is crucial for a good discrimination between NR and ER events, and to achieve the best sensitivity for a WIMP search. The electric fields of the XENONnT TPC are produced by a set of five electrodes (anode, gate, cathode and two screening electrodes) and a field cage enclosing the active volume. The field cage consists of an inner and an outer array of concentric conductive rings connected by two redundant resistor chains. A sketch of the TPC with the position of the electrodes and the field cage is shown in Fig. 1.
This paper focuses on the simulation and design of the field cage for the XENONnT experiment, with particular emphasis on its improvements with respect to the predecessor experiment, XENON1T [7]. The design and implementation of the field cage are described in Sect. 2. The field simulation setup and the optimization of the resistor chain are summarized in Sect. 3, focusing on the freedom to tune the drift field by changing the voltage applied to the topmost ring of the field cage, treated as an independent electrode. Finally, the field cage tuning results of XENONnT are discussed in Sect. 4.
Fig. 1 Sketch of the XENONnT TPC. The zoom-in shows a detail of the double-array structure of the field cage (outer array, inner array, PTFE wall), whose implementation is shown in Fig. 2.
This section also shows the matching of data to simulations of the XENONnT electric field during the first science run, which includes a charge-up component on the TPC reflective walls.
The XENONnT field cage
A WIMP scattering in LXe is expected to produce a small scintillation signal S1, hence it is crucial to maximize the light collection efficiency (LCE) of the detector. In addition to the use of VUV-reflective polytetrafluoroethylene (PTFE) walls enclosing the full instrumented target [8], XENONnT deploys electrodes which are highly transparent to light. This was achieved using a parallel wire grid design with an optical transparency exceeding 95 % [9]. The electrodes need to sustain high voltages, as high electric drift fields are known to reduce the fraction of ER events misclassified as NRs, improve the discrimination power between single and multiple scatters, and reduce the maximum electron drift time, limiting the accidental coincidence background [10].
The design drift field of XENONnT was 200 V/cm, aiming at a larger value than achieved in XENON1T, while considering the past difficulties for dual-phase TPCs in reaching high voltages at the cathode [11].
The optical transparency of the electrodes translates into a significant field leakage into the drift volume of the extraction field from above the gate and of the reverse field from below the cathode. This results in an inhomogeneous field within the active volume, leading to a spatially-dependent S2/S1 signal ratio. This spatial dependence negatively impacts the discrimination power between signal-like nuclear recoils and background-like electronic recoils, ultimately affecting the final sensitivity to WIMPs [10]. The field cage plays a crucial role in addressing the problem of field inhomogeneity, forcing a constant voltage gradient within the active volume and effectively mitigating the field leakage through the electrodes.
The field cage is composed of an inner and outer set of oxygen-free high conductivity (OFHC, 99.99 %) copper rings that are connected by a chain of resistors and enclose the entire length of the TPC. It is positioned on the outside of the reflecting panels to prevent scintillation photons from being lost due to the photoelectric effect as they hit the copper rings of the field cage, which would reduce the LCE and release single electrons. A section of the field cage, along with the chain of resistors and various PTFE parts, is shown in Fig. 2.
The double-array structure of the XENONnT field cage introduces a novel approach in the field of dual-phase TPCs. The rigid outer rings act as structural elements, while the smaller inner rings, which come into contact with the PTFE reflectors, facilitate charge removal. The small dimension of the inner rings is necessary to minimize the inevitable local field distortion induced by the presence of conductive elements close to the active volume. For the same reason, despite their wider surface area, the outer rings have a non-discernible impact on the local drift field. The XENONnT field cage was constructed to make contact with the exterior of the PTFE walls wherever feasible. This decision was prompted by the observation in XENON1T of an inward push in the reconstructed (or observed) radial position of the events correlated to the (x, y) geometry of the TPC [12]. The radial distortion was explained as a charge-up of the PTFE walls, exhibiting a time dependence. The azimuthal dependence, known as "bite structure", showed a stronger inward push around the panels than around the pillars. This effect was attributed to a smaller accumulation of charges on the pillars with respect to the panels, possibly due to a more efficient removal process resulting from the contact of the XENON1T field cage with the pillars.
The active region of the XENONnT TPC is a prism with a height of 148.6 cm and a 24-sided polygonal cross-section, with 132.8 cm between opposite sides. Two PMT arrays and stacks of electrodes limit the TPC at the top and bottom. The electrode stacks as well as the PMT arrays are supported by 24 PTFE pillars. A total of 24 PTFE "sliding" panels, 3 mm thick, are mounted in between the pillars and they interlock with 24 PTFE "blocking" panels directly mounted on the pillars. A total of 355 clipping notches are incorporated into each of the sliding panels to maintain contact between the inner rings of the field cage and the insulator. In addition, the sliding panels feature 0.25 mm diameter through-holes at the center of each clipping notch. They serve the purpose of facilitating the removal of free charges present on the inner side of the wall, as the mobility of electrons along the PTFE surface is expected to be larger than across the material bulk.
The 71 inner rings consist of 2 mm diameter wires, taken from a single OFHC copper spool. The wire was first stretched around a mock-up of 133.1 cm inner diameter, cut to the right length, and both ends were threaded. During installation the ends were connected using polyether ether ketone (PEEK) fasteners, allowing the circumference to be adjusted by a few millimeters.
The outer field cage array consists of 64 rigid copper rings, each having a 24-sided polygonal shape with a 135.5 cm distance between opposite sides and a cross section of 15 × 5 mm with a 2.5 mm rounding radius. The outer rings are positioned along z between −7 and −145 cm (z = 0 being the vertical position of the gate) and with half pitch offset from the inner field cage rings. Each outer ring consists of two halves connected by four countersunk M3 stainless steel bolts. One half ring is meant to be fixed in position and it features two additional holes close to the junction which are used to connect the resistor chains. The other half ring can be removed for easier access during maintenance.
The geometry of the field cage is mostly constrained by the compact TPC design and its key role for the mechanical stability of the detector. The minimal radial distance between the inner and outer field cage arrays is 8.7 mm. The radial position of the inner rings is determined by the PTFE wall they are clipped into, whereas the outer radius of the outer rings is limited by the high-voltage feedthrough (HVFT) running to the cathode along the full length of the TPC. While a larger radius of the field cage would improve the drift field homogeneity, a smaller distance between the outer rings and the grounded stainless steel sleeve of the HVFT increases the risk of discharges.
The vertical distribution of the field cage arrays is constrained at the top and bottom by the position of the gate and cathode frames. A pitch of 21.6 mm at liquid xenon temperature was chosen to facilitate the assembly of the resistor chain within the reduced intra-array space. Additional inner rings are included in the design at the top and bottom ends of the field cage: four rings right below the gate and two above the cathode (compare to Fig. 1). These extra elements are installed with half the normal pitch. They improve the field homogeneity in regions dominated by edge effects and field leakage coming from the electrodes' transparency.
The voltage divider of the field cage is entirely realized using 5 GΩ SMD resistors with 1 % tolerance by OHMITE [13]. These resistors were already used in XENON1T and extensively tested against failures. They are arranged in order to ensure a linear potential drop along the z-axis. The inner and outer set of rings have independent resistor chains, which are connected at the top and bottom to form the voltage divider of the field cage. This minimizes the impact on the electric field in case of a broken resistor, while simplifying its assembly. Two redundant sets of voltage dividers are implemented on opposite sides of the TPC.
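To illustrate the linear potential drop the divider is designed to enforce, the sketch below computes ring potentials for an idealized series chain between the independently biased topmost ring and the cathode. The single-resistor-per-step topology, the step count, and the value of the bottom resistance are simplifications for illustration, not the as-built network with its two redundant, interconnected chains.

```python
def ring_potentials(v_top, v_cathode, n_steps=70, r_step=5e9, r_bot=5e9):
    """Potentials along an idealized series resistor chain.

    v_top     : bias of the topmost (independently biased) ring [V]
    v_cathode : cathode potential [V]
    n_steps   : number of resistive steps between successive rings (assumed)
    r_step    : resistance per step; 5 GOhm resistors as in the paper
    r_bot     : fixed resistance between bottommost ring and cathode
                (its real value was a design optimization, see Sect. 3)
    """
    r_total = n_steps * r_step + r_bot
    current = (v_top - v_cathode) / r_total  # one common current in series
    # Potential of ring k, counting k = 0 as the topmost ring: linear in k.
    return [v_top - current * k * r_step for k in range(n_steps + 1)]

# Example with design-like values: an (ideally) linear vertical gradient.
potentials = ring_potentials(v_top=-850.0, v_cathode=-30_000.0)
```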
The electrical connection for the outer rings is achieved by clamping the end of a 0.4 mm OFHC wire, soldered to the resistor, to the countersunk holes via an M3 screw. The resistor is then held in place by a 7 × 7 mm PTFE element inserted between two rings. Given the small wire diameter of the inner array rings, a spring-loaded connection was realized. Dovetail notches on the reflecting panels support counterpart PTFE pieces on which the resistors are mounted. This joint establishes a spring-loaded connection using M1.7 set screws in electric contact with both the resistors and the inner copper rings. The connection was tested for stress due to temperature changes using liquid nitrogen, proving good reliability.
All the materials used to machine and assemble the field cage and its resistor chains were screened and thoroughly cleaned to ensure radiopurity [14].
Electric field simulation
The resistor chain and the design of the field cage were optimized based on the simulation of the electric field of the TPC. The simulations were performed using COMSOL Multiphysics® v5.4 [15], in particular the AC/DC module with finite element method (FEM) analysis. This method involves the discretization of the geometry into smaller elements, an operation known as "meshing". The electrostatic equations are then solved at the vertices, or "nodes", of each element and interpolated in between.
Given the great number of simulations needed during the design and optimization of the field cage, as well as the high computational power required for a full 3D simulation, a 2D-axisymmetric model of the detector was implemented. Decreasing the dimension of the problem reduces the number of nodes needed in order to be able to simulate the full detector. However, this excludes non-axisymmetrical features, such as the polygonal structure of the field cage or the wire grid nature of the electrodes. In the 2D-axisymmetric simulation the TPC is constructed as a cylinder with electrodes made of concentric wires. While this approximation impacts the electric field close to the electrodes, the expected effect in the detector's active volume was estimated to be marginal by simulating a small-scale TPC using both 3D and 2D-axisymmetric geometry. Comparing the electric field inside the active volume, a difference lower than 0.5 % was found. This is notably smaller than the effect of introducing a charge distribution on the PTFE walls, as discussed in Sect. 4, and it is thus considered negligible.
The TPC is contained inside a vacuum-insulated double-walled stainless steel cryostat, which acts as a Faraday cage. This means that the simulation of the TPC environment can be restricted to the grounded inner vessel. Given the 2D-axisymmetric nature of the simulation, elements that only cover a small azimuthal angle are excluded from the model. This includes the PTFE pillars, the HVFT to the cathode, and the resistor chains. These elements were studied separately with local 3D simulations in order to evaluate their impact on the drift field and to assess the risk of breakdown. The PMTs are approximated by a concentric structure in the 2D simulation. The impact of this approximation on the drift field is expected to be negligible as they are located behind the screening grids and far from the active volume.
The dimensions of the TPC elements span several orders of magnitude, ranging from the 216 µm diameter of the electrode wires up to the 1.5 m length of the reflector panels.For this reason, the mesh size ranges from 30 µm around the electrode wires up to 25 mm in the center of the LXe volume, where the electric field is most uniform.The final mesh consists of 4.8 × 10 6 elements and 2.4 × 10 6 nodes.When the field within the active volume is compared to the same geometry simulated with a coarser mesh, the average difference is within 1 %, being larger close to the electrodes.Hence, we conclude that the uncertainty from meshing can be ignored.
As discussed in Sect. 2, the field cage geometry was strongly constrained by mechanical requirements. For this reason, the uniformity of the electric drift field was optimized by selecting the voltages applied to the field cage. If the voltage drop is proportional to the vertical separation of two consecutive field shaping elements, then the voltage gradient is constant and the electric field is uniform. The voltages applied to the top and bottom of the field cage should match the effective potentials in those positions, which differ from the voltages of gate and cathode due to the field leakage effect previously described. At the top, this matching is done by independently biasing the topmost inner field cage ring. This freedom in bias voltage represents an important innovation, as it enables the tuning of the field homogeneity during operation of the filled detector. This permits adjusting to different electrode configurations or exploring the effect of the field homogeneity on the signal, as done in Sect. 4. An additional HVFT would have been necessary for a similar solution at the bottom of the field cage, considering the requirement for voltages as low as −30 kV. Such a solution was not implemented. Instead, a fixed resistance between the bottommost inner field cage array element and the cathode was installed. As it is not possible to change this resistance once the detector is assembled, its value was optimized considering the possibility that the design cathode voltage might not be reached. The electric field inside the XENONnT TPC was simulated using the electrodes' design potentials of −1 kV at the gate, 6.5 kV at the anode and −30 kV at the cathode. Different combinations of the topmost inner field cage ring voltage V top and the bottom resistance R bot were considered. Fields were simulated for a voltage V top between −1.2 and −0.5 kV, and a resistance R bot between 5 and 10 GΩ.
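A short sketch of this linear-gradient rule, written in Python rather than the simulation software used above; the number of rings and the end potentials below are illustrative placeholders, not the actual XENONnT values.

```python
import numpy as np

def ring_voltages(z_rings, v_top_eff, v_bottom_eff):
    """Bias for each field-shaping element such that the voltage drop between
    consecutive elements is proportional to their vertical separation,
    i.e. the voltage gradient (and hence the drift field) is constant."""
    z = np.asarray(z_rings, dtype=float)
    frac = (z - z[0]) / (z[-1] - z[0])   # 0 at the topmost ring, 1 at the bottommost
    return v_top_eff + frac * (v_bottom_eff - v_top_eff)

# Illustrative placeholders: 64 equally spaced rings over a 1.486 m drift length,
# with end potentials standing in for the effective potentials discussed above.
z = np.linspace(0.0, -1.486, 64)
print(ring_voltages(z, v_top_eff=-950.0, v_bottom_eff=-29_000.0)[:3])
```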
Two independent figures of merit were used in order to evaluate the performance of the different configurations: the field spread within the 4 t fiducial volume (FV) as defined in [16], and the size of the charge-insensitive volume (CIV).The field spread is defined as the difference between the 5 th and 95 th percentile of the electric field magnitude divided by its mean.The charge-insensitive volume is a region of the detector characterized by the complete or partial loss of the ionization electrons.The electrons freed in such a volume follow the electric field lines ending on the PTFE walls, accumulating on the wall and thus not producing S2 signals.
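For concreteness, the field-spread figure of merit can be written in a few lines; the input below is a toy stand-in for samples of the simulated field magnitude inside the fiducial volume, not a real field map.

```python
import numpy as np

def field_spread(e_mag):
    """Field spread figure of merit: difference between the 95th and 5th
    percentile of the field magnitude, divided by its mean."""
    e = np.asarray(e_mag, dtype=float)
    p5, p95 = np.percentile(e, [5, 95])
    return (p95 - p5) / e.mean()

# Toy input: samples of a nearly uniform drift field with a 1 % spatial variation.
rng = np.random.default_rng(0)
samples = 23.0 * (1.0 + 0.01 * rng.standard_normal(100_000))
print(f"field spread = {100 * field_spread(samples):.1f} %")
```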
The reverse field region between the cathode and the bottom PMT array is an example of irreducible CIV, and it is therefore ignored in this discussion.The CIV is calculated by propagating the electrons along the simulated field lines from different positions within the TPC active volume and checking whether they end up on the wall surface or reach the liquid-gas interface.These figures of merit were computed using the custom module PyCOMes [17], developed to handle COMSOL output format and perform fast calculations of field lines and electron propagation.The mass of liquid xenon inside the CIV (M CIV ) is shown in the (V top , R bot ) parameter space in Fig. 3.When the topmost inner field cage ring is biased more positively than the gate electrode, the electrons drifting through the TPC are more strongly attracted to it.This improves the uniformity of the drift field within a limited range of voltages.As the bias voltage increases further relative to the gate electrode, the field lines begin to terminate on the PTFE wall, and at approximately V top = −0.85kV, the CIV abruptly increases.A larger CIV is also observed for high values of R bot .This is due to an increasing local field distortion in the bottom edge of the detector.
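The sketch below illustrates the idea behind this classification; it is a simplified stand-in rather than the PyCOMes implementation, and e_field is a hypothetical callable standing in for an interpolated 2D-axisymmetric field map.

```python
import numpy as np

def ends_on_wall(r0, z0, e_field, r_wall, z_surface, step=1e-3, max_steps=200_000):
    """Follow an electron from (r0, z0) through a 2D-axisymmetric field map and
    report whether it is lost to the PTFE wall (charge-insensitive) or reaches
    the liquid surface. `e_field(r, z)` must return the components (E_r, E_z)."""
    r, z = r0, z0
    for _ in range(max_steps):
        er, ez = e_field(r, z)
        norm = np.hypot(er, ez)
        if norm == 0.0:
            return True                 # stalled: count as insensitive
        r -= step * er / norm            # electrons drift against the field
        z -= step * ez / norm
        if r >= r_wall:
            return True                  # terminates on the wall: no S2 signal
        if z >= z_surface:
            return False                 # reaches the liquid-gas interface
    return True
```

The charge-insensitive mass then follows by summing the liquid-xenon mass associated with the starting points flagged as insensitive.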
Compromising between field homogeneity and CIV, the values V top = −0.95kV and R bot = 7 GΩ were chosen for the XENONnT design field.This corresponds to M CIV = 1.2 kg and a field spread of 3.5 %.We checked the performance of the electric field with the bottom resistance value R bot for different configurations of the electrode voltages, with the result that the chosen resistance performs sufficiently well for a wide range of scenarios.
Comparison to data
The XENONnT detector is periodically calibrated using 83m Kr.The metastable isotope has a half-life of 1.83 h and decays via a two-step transition of 32.2 keV and 9.4 keV with an intervening half-life of 157 ns [18].This source is used to monitor the spatial response of the detector and its time evolution, assuming its homogeneous distribution [12,19].It is therefore possible to compare the observed 83m Kr event distribution to the expected one from simulations.The simulated distribution comes from a set of electrons uniformly produced within the active volume.Each electron is propagated according to the electric field map including diffusion and drift values as coming from literature [20,21].The (x, y)-position is the electron location at the liquid-gas interface including the position reconstruction resolution, while the z information is derived from the drift time.
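A minimal sketch of this forward step, assuming the field-line propagation has already produced arrival points and drift times for each simulated electron cloud; the drift velocity, diffusion constants and position resolution are placeholders for the literature values referred to above.

```python
import numpy as np

def observe_electrons(xy_interface, t_drift, v_drift, d_t, d_l, sigma_xy, seed=0):
    """Apply transverse/longitudinal diffusion and the position-reconstruction
    resolution to simulated arrival points, and derive z from the drift time."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t_drift, dtype=float)
    xy = np.asarray(xy_interface, dtype=float)
    xy = xy + rng.normal(0.0, np.sqrt(2.0 * d_t * t)[:, None], size=xy.shape)
    xy = xy + rng.normal(0.0, sigma_xy, size=xy.shape)
    t = t + rng.normal(0.0, np.sqrt(2.0 * d_l * t) / v_drift)
    return xy, -v_drift * t              # reconstructed (x, y) and z
```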
SR0 field and wall charge-up matching
During the commissioning phase of the experiment, a short circuit occurred between the cathode and the bottom screen electrode, limiting the voltage of the cathode.For the first science run (SR0), the electrodes were set to a voltage of 0.3 kV at the gate, 4.9 kV at the anode and −2.75 kV at the cathode.This resulted in an average electric drift field of 23 V/cm.The topmost inner field cage ring voltage was set to V top = 0.65 kV, which was optimized based on simulations, by means of the procedure described in Sect.3.
Similarly to XENON1T, an azimuthally dependent distortion at high radii is observed in the 83m Kr distribution, reflecting the 24-sided-polygonal structure of the PTFE walls.Nevertheless, the distortion shows a different behaviour than what was observed by XENON1T.In XENONnT the events around the pillars are pushed more inwards than the events around the panels, leading to a localized strong reduction of the rate, as shown in Fig. 4. The observed "bite-structure" supports the XENON1T hypothesis of PTFE charge-up discussed in Sect.2, which drove the field cage design of XENONnT.While the field cage rings touch both panels and pillars, the panels are expected to release accumulated charges more easily than pillars because of their thinner geometry and the presence of through-holes.The efficient charge removal is supported by the absence of timedependent features in the reconstructed radial position of the 83m Kr events.Figure 5 shows the evolution over SR0 of the 90 th percentile of the radius distribution of 83m Kr events in three slices of z.Unlike previous experiments [12,22], no increase of the inward push is observed over time.In addition, the observed (x, y)-distribution has two symmetric features crossing the TPC as a result of the transverse wires of the gate locally deflecting the electrons when drifting towards the liquid-gas interface.These wires were installed both at the gate and anode electrodes to counteract wire deformation under electrostatic force [9].The regular pattern perpendicular to these features is due to a combination of the wire grid of the anode electrode with the geometrical configuration of the PMTs in the top array.
The 90 th percentile of the radial distribution r 90 is evaluated for both 83m Kr data and simulation in 30 bins of z by averaging over the azimuthal angle. The simulation consists of the propagation of 10 5 electrons with initial positions uniformly distributed in the active volume. The 90 th percentile is high enough not to be affected by the transverse wires, while not being sensitive to possible outliers at higher percentiles. A mismatch between data and simulation can be clearly seen in Fig. 6, with the difference between r 90 from data (black circles) and from simulation (blue triangles) being on average 4.7 cm. The mismatch can be effectively resolved by considering a charge accumulation on the PTFE walls, as already demonstrated in previous works [22]. The corresponding surface charge density σ w is determined by matching the observed radial distribution with simulations including a charge distribution at the walls. This density is parameterized using a linear function, σ w (z) = σ top + λ |z| / h TPC , where z is the vertical coordinate measured from the top of the panels, h TPC = 148.6 cm is the height of the TPC, σ top is the surface charge density at the top of the panels and λ is the charge density difference between the top and bottom of the panels, i.e., σ bot = σ top + λ. A linear model describes to first order the observations reported in [22]. Field and electron-propagation simulations were performed for σ top between −2 and 1.5 µC/m 2 and for λ between −1 and 1.5 µC/m 2 , both with steps of 0.1 µC/m 2 . For each combination of σ top and λ, r sim 90 was calculated for n z = 30 bins in z along the TPC as described above. The chi-square was estimated for each simulation as χ 2 = Σ i (r obs 90,i − r sim 90,i ) 2 / σ 2 90,i , where the sum runs over the n z bins and σ 2 90,i is the squared sum of the statistical percentile uncertainties of data and simulations. The χ 2 best fit yields σ top = (−0.50 ± 0.06 (syst) ± 0.02 (stat)) µC/m 2 and λ = (0.40 ± 0.15 (syst) +0.20 −0.10 (stat)) µC/m 2 . These values correspond to a surface charge density of −0.5 µC/m 2 at the top of the panels and −0.1 µC/m 2 at the bottom. The statistical uncertainty was determined by resampling the simulated position distributions for each parameter combination, a technique known as "bootstrapping", and then assessing their χ 2 best fit. The systematic uncertainty was obtained by repeating the χ 2 minimization with different binning in z and percentile values, and taking into account the coarse binning of the σ top and λ parameters. The simulated radial distribution after adding the wall charge-up component is shown as cyan squares in Fig. 6, with a maximum difference with respect to the observed distribution of 1.6 cm at the very top of the TPC and 0.3 cm on average. Including the surface charge density, the predicted field spread is 13.2 % within the FV and the charge-insensitive mass is 112 kg. The corresponding SR0 electric field map including charge-up is shown in Fig. 7. The hypothesis that a failure of the resistor chain causes the mismatch between the simulation and the data can be ruled out, as the total resistance of the field cage was measured to be (92 ± 11) GΩ, in good agreement with the expected value of (87.25 ± 0.05) GΩ. Moreover, the simulation of the failure of a single resistor shows an insufficient impact on the observed position distribution.
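To make the parameter determination concrete, the following sketch shows the mechanics of the grid scan; it is not the PyCOMes module itself, and simulate_r90 is a hypothetical stand-in for a full field and electron-propagation simulation. The grid ranges are taken from the scan quoted above.

```python
import numpy as np

def chi2(r90_obs, r90_sim, err_obs, err_sim):
    """Chi-square between observed and simulated 90th-percentile radii, with the
    statistical percentile uncertainties of data and simulation summed in quadrature."""
    return float(np.sum((r90_obs - r90_sim) ** 2 / (err_obs ** 2 + err_sim ** 2)))

def scan_wall_charge(r90_obs, err_obs, simulate_r90):
    """Grid scan of the linear wall-charge parameters; simulate_r90(sigma_top, lam)
    must return the simulated r90 and its uncertainty in the same z bins as the data."""
    sigma_tops = np.arange(-2.0, 1.51, 0.1)
    lambdas = np.arange(-1.0, 1.51, 0.1)
    grid = np.full((sigma_tops.size, lambdas.size), np.inf)
    for i, st in enumerate(sigma_tops):
        for j, lam in enumerate(lambdas):
            r90_sim, err_sim = simulate_r90(st, lam)
            grid[i, j] = chi2(r90_obs, r90_sim, err_obs, err_sim)
    i, j = np.unravel_index(np.argmin(grid), grid.shape)
    return sigma_tops[i], lambdas[j], grid
```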
An independent validation of the field map including wall charge-up comes from the measurement of the electron lifetime during SR0.The electron lifetime τ e − is the characteristic time constant of the exponential decrease of the S2 signal as a function of drift time t d .This is due to electrons being trapped by impurities in the liquid xenon.To determine the electron lifetime τ e − , an exponential function is fitted to the median of the S2 area across different drift times.Previous analyses of XENON1T data revealed a discrepancy in the measurement of the electron lifetime when using different radioactive isotopes, such as 83m Kr and 222 Rn [12].These isotopes vary in their decay products and energy, resulting in different ionization densities within xenon.For this reason, the electric field affects the charge signal of each calibration source differently, leading to a different spatial dependence of the S2 signal in presence of an inhomogeneous drift field.The electron lifetime measured according to the above approach is thus an effective value τ eff e − that includes a relative field effect on the charge yield, Q rel y (x, y, z): Figure 8 shows the results from the measurement of the electron lifetime during a joint calibration using 37 Ar and 83m Kr sources, which was carried out after the end of SR0 [23].
The "uncorrected" electron lifetime comes from the exponential fit of the S2 median area, and it corresponds to τ eff e − of Eq. (3).The "corrected" lifetime is obtained by weighting the measured S2 area for the relative charge yield Q rel y as coming from the field map.This is the best estimate of the true electron lifetime τ e − .Since each calibration source is affected differently by the electric field, Q rel y is estimated for each isotope.The 83m Kr charge yield is modeled using data from [24]. 37Ar data are extrapolated to lower electric fields using the results from [25].The charge yield of 222 Rn alphas finally is modeled using NEST v1 [26,27], which is more consistent with recent measurements than the latest version [24].When not corrected, the electron lifetimes from different sources (left panel) do not agree among themselves, as this corresponds to assuming a perfectly homogeneous electric drift field.The lifetimes are corrected using the simulated electric field maps both with (right panel) and without (middle panel) the inclusion of the charge-up of the PTFE walls.The different measurements are in agreement when charge accumulation is assumed, but when it is not included, the discrepancy is even more pronounced than in the uncorrected case.
4.2 Impact of the field cage tuning on the drift field
Dedicated datasets were taken after the end of SR0 to assess the impact of different voltages of the topmost inner field cage ring V top on the drift field. This voltage was varied from 0.3 kV to 1 kV during a 83m Kr calibration, while keeping the voltages of all other electrodes at their SR0 values. While the electrode voltages mostly impact the magnitude of the electric field, the independent biasing of the topmost inner field cage ring influences primarily its homogeneity. From simulations, changing V top from 0.3 kV (same as the gate) to 1 kV translates into a 7 % stronger field within the FV, while reducing the field spread by a factor of four. For this reason, this is the first direct measurement of the effect of the field homogeneity on signal production and the transport of S2 electrons in a multi-tonne LXe TPC.
The reconstructed 83m Kr (r, z) distribution and the 90 th percentile radial distribution are shown in Fig. 9 for different field cage tuning voltages.As V top increases, a decreasing radial inward push is observed.As discussed in Sect.3, a more positive voltage at the top of the TPC attracts electrons counteracting the inward push, resulting in a more uniform distribution.However, by increasing V top the charge-insensitive volume increases.The CIV cannot be inferred from the observed position distribution even for V top > 0.75 kV, when > 10 % of the total TPC volume is charge-insensitive.For these configurations, the edge of the position distribution is flat over z, showing no inward feature.The comparison of r obs 90 for different V top with the corresponding r sim 90 including the SR0 wall charge distribution returns a good match for voltages V top below or equal to 0.75 kV.A difference of up to 5 cm in the 90 th percentile radial distribution is observed for a V top of 0.9 kV and 1 kV.This hints towards a mismodeling of the charge distribution or the possibility that the charge distribution reaches a new equilibrium for high voltages V top .
The change of the CIV is reflected by the change of the observed event and signal rates for different V top values, as shown in Fig. 10. An event is defined by the pairing of an S1 and S2 signal [23]. Since the 83m Kr source is fed by the decay of 83 Rb, which has a half-life of 86.2 d, a daily decrease in rate of (1.01 ± 0.12) % is considered in the calculation, verified by comparing the rate before and after the test using the same SR0 field configuration. As expected, the S1 rate is constant for different V top , while S2 and event rates are fairly constant up to 0.75 kV, but quickly drop for larger values. This observation proves that a fraction of the active volume is charge-insensitive and that this depends on the electric field configuration. The fast increase of M CIV is explained by the anodic behaviour of the topmost inner field cage ring. At these voltages the drifting electrons are collected on the PTFE walls at the very top of the TPC. In this situation even a small change of V top leads to a large fraction of field lines being lost at the edges, although the impact on the intensity of the field is negligible. Thanks to the large rate of 83m Kr events collected during the test, it is possible to measure the electron lifetime individually for each voltage. A clear dependence on V top is shown in Fig. 11. The observed increase of the uncorrected electron lifetime (black circles) is explained by the lower electric field at the top of the TPC as V top increases, as suggested by the field simulations. A smaller electric field leads to a reduced charge yield, finally resulting in a higher uncorrected electron lifetime. As these data were taken within a few hours, the fast change of the electron lifetime as V top increases cannot be due to a change of the impurity concentration in the liquid xenon. Similarly, the small variation of the electric drift field for different V top values cannot account for the change of an order of magnitude in the electron lifetime [28]. Finally, the uncorrected electron lifetime measured right before and after the test in standard field conditions agrees well, excluding a possible evolution over time of the lifetime.
The effects due to the non-uniform electric field on the uncorrected electron lifetime can be accounted for by simulating the electric field, similarly to what is done in Fig. 8. Figure 11 shows the corrected electron lifetime for the different values of V top . The corrected values agree with the constant average of 38 ms, considered to be the true electron lifetime τ e − , further demonstrating the capability to correct for the electric field effect solely based on simulations.
Summary
This work demonstrated a good understanding and effective control of the electric field inside the active volume of the XENONnT TPC.The novel double-array structure of the field cage allows for mechanical stability, while ensuring contact between the conducting field shaping elements and the PTFE walls, facilitating the removal of charges accumulating over time.The absence of a time evolution in the distribution of the observed event position confirmed an efficient removal.The innovative independent voltage bias of the topmost field cage ring makes it possible to match it to the local effective potential, a combination between gate and anode voltages due to field leakage through the gate.The detector was simulated using the FEM software COMSOL Multiphysics ® , using an approximate 2D-axisymmetric geometry.The bias voltage of the topmost inner field cage ring and the value of the resistor between the field cage and the cathode were chosen to optimize the charge-insensitive volume and the field homogeneity.
During SR0, the spatial distribution of 83m Kr calibration data was compared to the one calculated based on the electric field simulation.A linear surface charge density along the PTFE walls of the TPC was included in the field simulation to improve the agreement of the reconstructed position distribution between simulation and data.The best match to data was obtained with a charge density distribution ranging from −0.5 µC/m 2 at the top of the walls to −0.1 µC/m 2 at the bottom, reducing the average difference between simulated and observed 90 th percentile radial distribution from 4.7 cm down to 0.3 cm.The resulting field map was used to correct the relative charge yield of S2 signals used for the estimation of the electron lifetime from different sources.This resolved a long-standing discrepancy and further validated the simulations.
A dedicated test to investigate the impact of the topmost inner field cage ring voltage on the field uniformity was performed using the data from a 83m Kr calibration source. An average difference of less than 1 cm in the 90 th percentile radial distribution is observed between data and simulations when including the reflector charge-up for voltages below 0.75 kV. Above this value, a deteriorating agreement in the position distribution is observed, together with a strong decrease in event and S2 rates, indicating a significant increase of the charge-insensitive volume. The measured electron lifetime as a function of the topmost inner field cage ring voltage showed an apparent increase of an order of magnitude, which cannot be explained by the change of impurity concentrations. However, when the S2 signals are corrected for the field-dependent charge yields evaluated using the proper electric field map, the electron lifetime measurements for the different runs agree within the uncertainties.
The presented design of the field cage for the XENONnT TPC represents a novelty for the dual-phase TPC technology, allowing for control over the homogeneity of the field while minimizing known effects of charge accumulation on the detector walls.Together with the good understanding of the electric drift field, this elevates the capability of TPC detectors for dark matter searches improving the sensitivity to WIMPs and potentially setting a new standard in the field.
Fig. 2
Fig. 2 View of the XENONnT field cage from the outside of the TPC during assembly in the clean room.It is possible to discern the different elements: the inner array rings (a) clipped in the notches (b) on the sliding PTFE panels (c) and connected via the resistor chain (d).The outer rings array (e) and its resistor chain (f) are also visible.The pillars (g) are still open as this picture was taken during the assembly.During nominal operation, covers (h) are placed to fix the outer rings.The indicated dimension is given at liquid xenon temperature.
Fig. 3
Fig. 3 Charge-insensitive mass M CIV as a function of the voltage of the topmost inner field cage ring V top and the resistance R bot between the bottom of the field cage and the cathode.The contour lines represent the relative drift field spread within the 4 t fiducial volume.The red star shows the configuration picked for XENONnT, with V top =−0.95 kV and R bot =7 GΩ.
Fig. 4 Fig. 5
Fig. 4 Reconstructed (x, y)-position distribution of 83m Kr events.The distortion at high radii follows the distribution of PTFE pillars and panels, the cross section of which are overlaid in the figure.The diagonal features crossing the TPC result from the transverse wires of the gate electrode and the distribution of the PMTs in the top array.
Fig. 6
Fig. 6 (r,z) distribution of 83m Kr events near the walls of the TPC.The 90 th percentile of the radial distribution along 30 bins of z is shown in black.The same quantities coming from the simulation with and without PTFE reflector charge-up are shown in blue and orange, respectively.
Fig. 7
Fig. 7 Electric field map determined from 2D-axisymmetric simulations including a linear charge distribution on the PTFE reflectors matched to the radial distribution of 83m Kr events.The black lines indicate the contour of the electric field, while the dashed grey lines are field lines starting at different radii and same z.
Fig. 8
Fig. 8 Electron lifetime measured using three different radioactive isotopes: 37 Ar (light blue squares), 83m Kr (blue triangles) and 222 Rn (black circles).The results on the left plot use the uncorrected charge signals, while middle and right plots include a drift field correction based on the field map without and with charge on the PTFE wall, respectively.The data were taken during a simultaneous 37 Ar and 83m Kr calibration after the end of SR0.Due to emanation, minute levels of 222 Rn are present in the detector at all times [14].
Fig. 9
Fig. 9 Reconstructed position distribution of 83m Kr events for different voltages of the topmost inner ring of the field cage.Red circles and black triangles are the 90 th percentile radial distribution coming from simulation and data, respectively.The TPC active volume boundaries are shown as black dashed lines.
Fig. 10
Fig. 10 Rate of events (light blue squares), signals S1 (black circles) and S2 signals (blue triangles) for the same 83m Kr source, but different topmost inner field cage ring voltages.All rates are corrected for the 83 Rb source decay.
Fig. 11 Electron lifetime measured using 83m Kr for different topmost inner field cage ring voltages. Blue triangles and black circles are the electron lifetime with and without field correction, respectively. The cyan dashed line corresponds to 38 ms, which is the average value of measurements with field correction.
"Engineering",
"Physics"
] |
Bending-induced inter-core group delays in multicore fibers
Viktor Tsvirkun, Siddharth Sivankutty, Géraud Bouwmans, Olivier Vanvincq, Esben Ravn Andresen, Herve Rigneault
Introduction
In recent years challenges were tackled on the way towards robust fiber-based minimally invasive lensless endoscopes that would ultimately operate in a clinical setting. Both multicore [1,2] (MCF) and multimode fibers [3] (MMF) have been considered together with wavefront shaping devices providing the ability to control the relative phase between the different injected fiber modes. MMF are appealing for their commercial availability, their small diameter (∼100 µm) and their large number of modes that can be used for focusing [4] and imaging [5,6]. When dealing with flexible endoscopes, phase control through an MMF has been challenging because the fiber transmission matrix (TM) is strongly altered when the fiber is bent. Interestingly, recent works have identified parameters and regimes where deformation-induced changes in the TM can be minimized [7], measured and compensated for dynamically (in limited conditions) [8,9] or even predicted [10], which brings hope to build MMF-based flexible lensless endoscopes. MCF, in strong contrast with MMF, are made with a multitude of single-mode fiber cores which show weak or zero coupling; they maintain and translate, to some extent, their output diffraction patterns when a simple phase tip and/or tilt is applied on the input wavefront. This so-called "memory effect" [11] has been extensively used for imaging using scanning [12][13][14] or wide-field modalities [15][16][17]. Because a freely moving distal end is expected to elongate and compress the MCF outer and inner curvature sides, respectively, it was noted that a slight angular bend (< 3°) causes tip and tilt to be added to the phase profile at the distal tip of the MCF [13,18], similar to the memory effect. Hereinafter we refer to this type of deformation as an L-type bend. When dealing with ultrashort pulses, the bending may also affect the group delays between pulses traveling in the different fiber cores such that they no longer overlap and interfere at the distal tip [19,20], precluding any focusing and imaging. These deformation-induced inter-core group delays are important in the context of 2-photon flexible lensless endoscopes [14,21] that might ultimately require active phase [18] and group delay controls [20].
In the scope of designing and building flexible lensless 2-photon endoscopes, this paper investigates numerically and experimentally the deformation-induced inter-core group delays resulting from bending MCFs with large (up to 200°) angular bends and assesses the impact on imaging performances. We concentrate our investigation on MCFs showing virtually no core-to-core coupling and having an infinite memory effect [22].
Experiments
Our experimental set-up (Fig. 1) utilizes phase-stepping spectral interferometry to measure delays between the ultrashort pulses, transmitted through the cores of interest, with respect to the reference core, as detailed in [20]. The output beam from a femtosecond laser source (Amplitude Systèmes t-Pulse, λ = 1030 nm, τ = 170 fs, repetition rate 50 MHz) is expanded with a telescope (not shown) to overfill the aperture of a spatial light modulator (Hamamatsu LCOS-SLM X10468-07). The SLM is used to shape the wavefront entering each of the MCF cores at its proximal facet, relayed via a lens L1 and a microscope objective MO1 (Olympus Plan N, 20x NA 0.40). Transmitted light is collected from the MCF distal end using a 10x NA 0.25 microscope objective MO2 (Nikon Plan) and filtered with a linear polarizer LP (Thorlabs LPNIR100) to ensure maximum contrast of the interference fringes, as the output states of polarization in the MCF under study are random [23]. The distal facet is imaged with a lens L2 onto a camera CCD1 (FLIR FL3-U3-32S2M-CS) in order to monitor the evolution of the transmitted power during fiber bending. The far field of the distal end face is coupled (via a lens L3) into a multimode fiber (core diameter 62.5 µm), which relays it to an optical spectrum analyzer OSA (Yokogawa AQ-6315A). The magnification in this configuration is chosen so that only a part of the far field much narrower than the spatial interference fringe is spatially selected with the MMF probe (k x D > 2π, where k x corresponds to the fringe spatial frequency and D is the inner probe diameter). We use the phase-shifting spectral interferometry technique, previously described in Ref. [20], to measure the inter-core group delays in the MCF. The entire far-field interference pattern is imaged onto a camera CCD2 (Thorlabs DCU223M) to record the point spread function (PSF) of the fiber imaging system for different bending conditions. The entire distal detection part, including the clamped MCF distal end, is mounted onto a portable unit allowing its translation in the xz plane (see Figs. 1 and 6). The bent fiber geometry is recorded with a mobile camera (Appendix A provides an example of such a recording).
Inset in Fig. 1 depicts the MCF under study, which was described in [14]. Note that this MCF exhibits very low cross talk between its cores (< -25 dB). The MCF was fabricated with the following parameters: Ge-doped single-mode cores with a parabolic refractive index profile (maximum difference of 0.0031 compared to silica) and mode field diameter of 3.6 µm, triangular lattice pitch 11.8 µm and the outer diameter of 357 µm (including the double cladding). The total length of the MCF used in the experiments is approximately 300 mm.
Simulations
We perform the bent fiber simulation using the curvature loss formula, typically employed to predict the bending losses in both single-mode and multimode fibers [24]. We consider two main effects which induce additional group delays across the MCF face: the fiber elongation/compression in the circular bend and the local refractive index change due to the stress-optic effect upon bending. A circularly curved segment of the fiber is transformed to an equivalent straight one via conformal mapping. The modified refractive index distribution across the fiber cross-section is given by the following [24]:

n′(x, y) = n(x, y) exp(x/R) [1 − (n 2 (x, y)/2)(p 12 − ν(p 11 + p 12 ))(x/R)], (1)

where n(x, y) is the refractive index of the unperturbed waveguide structure, x is the transverse coordinate along the bending direction, R is the radius of curvature, ν is Poisson's ratio, and p 11 , p 12 are the photoelastic tensor components. The exponential term accounts for the change in optical path length, whereas the term in the square brackets describes the changes to the physical refractive index with the photoelastic effect. We consider a 300 mm long cylinder of 200 µm diameter made of silica with the following material properties: Poisson's ratio ν = 0.17, refractive index of the stress-free material n = 1.45 (fused silica for λ = 1030 nm) and photoelastic tensor elements p 11 = 0.16, p 12 = 0.27 (the latter are also wavelength-dependent; the closest values we found in the literature are for λ = 1150 nm [25]). For the given values of the material properties the geometric change is more significant than the stress-optical contribution. Three characteristic types of bends were studied as the ones most commonly encountered in the experimental conditions. In this article they are referred to as L-, U- and S-type geometries and are produced by applying a force F as shown schematically in Fig. 2. For the sake of simplicity, only the F x component was chosen to be non-zero, and adjusted to produce displacements of the same magnitude [Figs. 2(a)-2(c)], reflecting the experimentally achievable ones. Fixed constraints are defined at the input fiber facet (L-type bend) and at both end facets for the U- and S-bend cases. For the forces applied to x-y cross-sections at the corresponding z positions as shown in Fig. 2, we trace the resulting deformed geometries and calculate the total bend-induced group delays ∆[∆τ i ] using Eq. (1) and a least-squares curve fitting [26] to estimate the local radii of curvature along the fiber. ∆[∆τ i ] is calculated as (OPD i0 − OPD i )/c, where c is the speed of light, OPD i is the optical path difference between the central and the ith core due to local length and refractive index changes upon bending, and OPD i0 is the intrinsic OPD for the ith core (equal to zero in the simulations).
Substituting the values of the material properties into the stress-optic term of Eq. (1) and replacing the exponential by its first-order Taylor expansion, we can estimate the bend-induced delays by integrating along the fiber:

∆[∆τ i ] ≈ −(n x i /c) [1 − (n 2 /2)(p 12 − ν(p 11 + p 12 ))] ∫ ds/ρ(s), (2)

where s is a curvilinear abscissa (ds 2 = dx 2 + dy 2 + dz 2 in Cartesian coordinates), ρ(s) is the local radius of curvature and x i is the ith core coordinate along the bending axis. We can further substitute ds/ρ = dα, where α is the turning angle. Hence we obtain

∆[∆τ i ] ≈ −(n x i /c) [1 − (n 2 /2)(p 12 − ν(p 11 + p 12 ))] ∆α, (3)

where ∆α is the total variation of the turning angle along the curve formed by the fiber, i.e. ∆α = α n − π, where α n denotes the angle between the outward-pointing normal vectors of the proximal and distal MCF facets for a deformed fiber geometry without loops. ∆α is referred to as the "bending angle" in this paper.
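To give a feel for the magnitudes implied by Eq. (3), the sketch below evaluates it with the material constants listed above; the 90 µm core offset is an assumed, illustrative value for a core near the edge of the core region. The resulting delays are of the same order as the measured spreads reported later in the paper.

```python
import numpy as np

# Material constants quoted above for fused silica near 1030 nm.
N, NU, P11, P12 = 1.45, 0.17, 0.16, 0.27
C = 299_792_458.0  # speed of light, m/s

def bend_delay(x_core, bend_angle_deg):
    """Magnitude of the bend-induced inter-core group delay (s) for a core offset
    by x_core (m) along the bending axis from the central core, for an L-type bend
    of total bending angle delta_alpha, following the linear model of Eq. (3)."""
    chi = 0.5 * N ** 2 * (P12 - NU * (P11 + P12))   # stress-optic correction term
    return N * (1.0 - chi) * abs(x_core) * np.radians(bend_angle_deg) / C

# A core assumed to sit 90 µm off-axis: roughly 0.13 ps for a 22 degree bend and
# about 1.1 ps for a 180 degree bend.
print(bend_delay(90e-6, 22) * 1e15, "fs")
print(bend_delay(90e-6, 180) * 1e12, "ps")
```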
Simulations
Inter-core group delays due to bending are expected to arise from the refractive index inhomogeneity among the cores and change in physical length. Therefore, in our simulations we estimated the group index change across the MCF face, and calculated the total accumulated optical path difference (OPD) and the corresponding ∆ [∆τ i ] (since the group index only exhibits linear longitudinal variations). The model fiber diameter was chosen to be comparable with the circumference which includes the cores in our sample MCF (see inset in Fig. 1).
The simplest bending geometry, which we refer to as the L-bend, is simulated by applying a force to the distal fiber end while the proximal end is fixed [Fig. 2(a)]. The OPD change along the arc length (the z axis in the unperturbed geometry) for a given x, y [Fig. 2(d)] represents the situation for one MCF core, taken at the same coordinates. Finally, we calculated the spatial distribution of ∆[∆τ i ], allowing us to retrieve the induced group delay for any given location (x, y) of the core within the MCF [Fig. 2(g)] with respect to the central core.
The same type of simulations was next performed for the remaining two types of bending, whose denominations were inspired by the corresponding deformed geometry shapes [Figs. 2(b)-2(c)]. In the case of the U-bend, the force is applied at the z = 15 cm plane, producing symmetric displacement and OPD distributions along the fiber length [Figs. 2(b), 2(e)]. When estimating the total ∆[∆τ i ], this resulted in a cancellation of bending-related effects, such that its maximum absolute change across the fiber face is less than 1 fs, most likely reflecting the numerical error of the curve-fitting procedure. In the case of the S-bend, two forces of opposite directions and different magnitudes were applied at 1/3 and 2/3 of the fiber length, which resulted in asymmetric deformed geometry and OPD distributions along the fiber [Figs. 2(c), 2(f)]. Nevertheless, the overall ∆[∆τ i ] along the total MCF length is very small (< 1 fs) and can be interpreted as numerical error, as in the case of the U-type deformation (Fig. 2). All the above calculations were performed using Eqs. (1) and (2). Note that we obtain the same results as the ones displayed in Figs. 2(g)-2(l) using Eq. (3) and substituting ∆α = 22° for the L-bend and ∆α = 0 for the U- and S-type geometries, and this without the need to integrate over the changing radii of curvature along the deformed fiber.
Experiments
Measurements of the induced inter-core group delays for each type of bending geometry are shown in Fig. 3. To have a precise estimation of the bending-related inter-core group delays, we first perform a calibration of the intrinsic GDD (related to the fabrication process and material imperfections) with the MCF kept straight in its reference geometry; this intrinsic GDD ∆τ i is then subtracted from the subsequent inter-core group delay measurements while bending the MCF, to give ∆[∆τ i ]. Figure 3(b) displays an example of the MCF inter-core group delay ∆τ i spatial distribution, measured for an MCF held relatively straight in its reference geometry [Fig. 3(a)]. Measurements from the cores with low SNR were discarded; the displayed data cover 125 out of 169 cores. For the intrinsic GDD measurement we obtain a normal distribution of ∆τ i with 2σ = 188 fs [Fig. 3(c)], which is comparable to [20]. Subsequent measurements of the bending-induced inter-core group delays for the L-bend, where the distal end was displaced by a ∆x of about 7, are shown in Fig. 3(e) and exhibit a clear trend spanning from negative added delays to positive ones in the range of about ±150 fs over the entire MCF facet. The force in this experiment was applied along the negative direction of the x axis (as indicated by the white arrow), and a qualitative agreement with the stress-induced refractive index simulation can be seen when comparing Fig. 3(e) and Fig. 2(d). Measurements for the U- and S-type bending geometries are displayed in Figs. 3(g)-3(i) and Figs. 3(j)-3(l), respectively. We aimed to achieve displacements along the x axis similar to the L-bend case for an easier comparison between the obtained values. For both of the double-clamped geometries, we found that the ∆[∆τ i ] spreads show a marginal contribution with 2σ ≈ 20 fs, which we relate to the measurement uncertainty. Note that the error bars (three standard deviations) in the presented GDD measurements using phase-stepping interferometry were below 20 fs for most of the cores. We scaled the ∆[∆τ i ] colorbars according to the Fig. 3(e) (L-bend) experiment for easier comparison of the obtained values, given that the ∆x fiber displacements are also comparable in all three cases. Here again the close-to-zero ∆[∆τ i ] trend is in good agreement with the simulated results [Figs. 2(h), 2(l)].
We focused next on L-type bending, where the proximal and distal MCF facets are no longer parallel. This situation is similar to real endoscopic operation, where the distal MCF facet is free to move. As a geometrical parameter of the bend which reflects the evolution of the ∆[∆τ i ] spread magnitude we used ∆α, introduced earlier in Eq. (3); our reference geometry measurement as well as the U- and S-bends in Fig. 3 therefore correspond to ∆α ≈ 0. The ∆[∆τ i ] distributions measured for L-bends [Fig. 4(g)] exhibit a gradient along the bending axis, whose magnitude is proportional to the bending angle ∆α. As opposed to the first set of measurements (Fig. 3), the added inter-core group delays now exhibit larger ∆[∆τ i ] distribution spreads, as can be observed in Figs. 4(d)-4(f) and Fig. 4(g). The bending angle and radius of curvature R are estimated from the recorded bent-fiber geometry; an example of such an R fitting for the ∆α = 180° bending geometry is given in Appendix B. As can be seen from the plots, bending-induced inter-core group delays can span almost ±1 ps for a ∆α = 180° bend, which in turn would require a group delay compensating scheme such as a group delay controller (GDC) [20] to assure the temporal overlap of pulses at the distal MCF facet. For large bending angles one would expect to see some polarization mode dispersion [27]; therefore we performed all our measurements for a single polarization. Additionally, the bend radii in our experiments were kept sufficiently large in order not to induce considerable bend loss (the transmitted intensity was continuously monitored with the CCD1 looking at the MCF distal facet).
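A minimal sketch of the corresponding analysis step, fitting the single slope a of the linear law to a measured delay map; x_cores and delays stand for the measured core coordinates along the bending axis and the corresponding ∆[∆τ i ] values.

```python
import numpy as np

def fit_slope(x_cores, delays):
    """Least-squares slope a of the linear law delta[delta tau_i] = a * x_i
    (no intercept, since the central core defines the zero of both quantities)."""
    x = np.asarray(x_cores, dtype=float)
    d = np.asarray(delays, dtype=float)
    return float(np.sum(x * d) / np.sum(x * x))
```

Repeating this fit for each recorded bend and plotting a versus ∆α exposes the linear dependence on the bending angle discussed below.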
The imaging performance of a 2-photon MCF lensless endoscope is directly related to the peak irradiance, requiring a spatially and temporally compact focal spot [14]. To assess the spatial aspect, we perform an investigation using the CCD2 camera placed in the far field of the distal MCF facet (Fig. 1), and use the SLM to produce a distal focused spot [Fig. 5(a)]. In pulsed laser mode, bending-induced inter-core group delays reduce the number of temporally overlapping beamlets, thus reducing the delivered power at the focus. As a reference, we used an L-bent fiber with ∆α = 180° together with the laser in CW operation, so that the individual beamlets from all the cores interfere and generate the brightest focal spot, with power P CW [Fig. 5], against which the coherent combining of the ultrashort pulses will be evaluated for various bending angles. We now move to pulsed laser mode (τ = 170 fs) and investigate, for various ∆α, the focus power loss resulting from bending-induced inter-core group delays. The emergence of a spatio-temporal effect due to the added inter-core group delays can be readily observed in Figs. 5(a)-5(c), where the PSF evolves from a circular to an elliptical shape. This can be intuitively inferred from the anisotropy of the OPD change due to bending along only one axis. In other words, the large bending-induced inter-core group delays focus the various frequencies within the spectral width of the pulse into different spatial spots; this effect is reminiscent of the time-space coupling observed in temporal pulse shapers [28,29]. With this, one observes a bending-induced distortion of the focal spot in Figs. 5(c), 5(e), where its full width at half maximum (FWHM) γ along the bending axis, γ x , is 3 times larger than along the y axis for the ∆α = 180° L-bend (Appendix A).
We illustrate this effect by calculating the far field distribution from the known intensity and group delay data at MCF distal facet. Figure 5(f) shows the complex field E distribution for the given laser bandwidth, simulated along the bending (x) axis in the CCD2 plane, with the experimentally measured ∆[∆τ i ] values taken into account. Note that different wavelengths get focused (come in phase) at different spatial coordinates. This is analogous to the broadening of an interference fringe due to partial coherence, in this case along the bending axis. Note that on the experimental data, along the orthogonal axis, the fringe width is still diffraction limited. Figure 5(g) represents a simulated case with no added inter-core group delays, resulting in a fully coherent state, thus retaining a symmetric and diffraction limited focus.
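The simplified one-dimensional model below conveys the same spatio-spectral effect; it assumes equal amplitudes and flat phases across the cores and is a sketch rather than the exact calculation used for Figs. 5(f)-5(g).

```python
import numpy as np

C = 299_792_458.0  # m/s

def far_field_profile(x_cores, delays, angles, wavelengths, spectrum):
    """Far-field intensity along the bending axis for beamlets of equal amplitude
    and flat phase emerging from cores at positions x_cores, each carrying an extra
    group delay. Every wavelength is combined coherently over the cores, then the
    spectrum is summed, reproducing the elongation of the focus along the bend axis."""
    x = np.asarray(x_cores)[:, None, None]          # (cores, 1, 1)
    tau = np.asarray(delays)[:, None, None]
    th = np.asarray(angles)[None, :, None]          # (1, angles, 1)
    lam = np.asarray(wavelengths)[None, None, :]     # (1, 1, wavelengths)
    phase = 2.0 * np.pi * (x * th / lam + (C / lam) * tau)
    intensity = np.abs(np.exp(1j * phase).sum(axis=0)) ** 2   # coherent core sum
    return (intensity * np.asarray(spectrum)[None, :]).sum(axis=1)  # spectral sum
```

With zero delays the profile is a diffraction-limited fringe; feeding in a delay map that grows linearly along x broadens it, as observed along the bending axis.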
Discussion
The first important conclusion that can be drawn from the data presented in Figs. 2, 3 is that only the L-type bending geometry has a considerable impact on the inter-core group delays in the studied MCF. Both U-and S-bends exhibit a self-compensation of the bend-induced inter-core group delays on a scale of the entire fiber length, not affecting the total group delays spread. Second, in the case of L-bends, the added spatial inter-core group delays follow a linear law ∆[∆τ i ] = ax i , where the slope a depends on the bending angle ∆α, and x i is ith core coordinate along the bending axis. More generally, for a given ∆α and known MCF length one can have an estimate of ∆[∆τ i ] for an arbitrary location (x, y) whose precision depends mainly on the bending angle estimation error. ∆[∆τ i ] dependence was found to be linear versus bending angle ∆α, as shown in Fig. 4(h) and from Eq. (1) for a constant L and substituting R = L/∆α. One would expect the same behavior during fiber bending along y axis (generally speaking, along any direction in x y plane), assuming isotropic material properties. Additionally, experimental data in Figs. 4(g), 4(h) indicates that parameter a depends only on the angle between fiber end faces and not on the radius of curvature or relative position of input and output facets. For large angles of the L-bend and with femtosecond pulses, we found that spatial ∆[∆τ i ] distribution affects the imaging system PSF through the loss of coherence between some of the delivered pulses, precluding several cores from contributing to the generation of the interferometric focal spot and thus lowering the delivered peak power. Both these characteristics become crucial when such fibers are employed for nonlinear imaging (such as 2-photon for instance) and therefore a group delay controller [20] covering ∆[∆τ i ] range for the maximum bending angle would be required to ensure constant imaging performance in a flexible lensless endoscope set-up.
Conclusion
We experimentally studied multiple MCF bending geometries in terms of ultrashort (fs) pulse delivery and imaging performance. U- and S-type bends, even for large displacements, did not induce any significant additional inter-core group delays, whereas the added inter-core delays in the L-type bending geometry depend linearly on the angle between the distal and proximal fiber facets and on the core coordinate along the bending axis. When using ultrashort pulses, imaging performance in terms of focal spot quality degrades for large bending angles when one fiber end is moving freely, requiring an active inter-core group delay compensation in the picosecond range in order to secure a stable and diffraction-limited imaging performance. These trends show a good agreement with a simple linear model (Eq. (3)) for the inter-core group delays as predicted by our simulations, where the bending-induced refractive index change for a single-clamped bending geometry (L-type) varies linearly along the bending axis with the zero dispersion line crossing the fiber face at its middle, perpendicular to the applied stress. Simulations of the bending geometries where both ends are fixed display axial symmetry of the bending-induced refractive index change, resulting in no significant dispersion of the mean refractive index along the total fiber length. Our investigations highlight the suitability of MCFs for highly miniaturized, robust and flexible multi-photon endoscopes that could ultimately operate in clinical settings.
"Engineering",
"Physics"
] |
Thin film lithium niobate electro-optic modulator with terahertz operating bandwidth
We present a thin film crystal ion sliced (CIS) LiNbO3 phase modulator that demonstrates an unprecedented measured electro-optic (EO) response up to 500 GHz. Shallow rib waveguides are utilized for guiding a single transverse electric (TE) optical mode, and Au coplanar waveguides (CPWs) support the modulating radio frequency (RF) mode. Precise index matching between the co-propagating RF and optical modes is responsible for the device’s broadband response, which is estimated to extend even beyond 500 GHz. Matching the velocities of these co-propagating RF and optical modes is realized by cladding the modulator’s interaction region in a thin UV15 polymer layer, which increases the RF modal index. The fabricated modulator possesses a tightly confined optical mode, which lends itself to a strong interaction between the modulating RF field and the guided optical carrier; resulting in a measured DC half-wave voltage of 3.8 V·cm. The design, fabrication, and characterization of our broadband modulator is presented in this work. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement OCIS codes: (130.3120) Integrated optics devices; (160.3730) Lithium niobate; (230.4110) Modulators; (230.4000) Microstructure fabrication.
References and links
1. M. De Micheli, J. Botineau, P. Sibillot, D. B. Ostrowsky, and M. Papuchon, “Fabrication and characterization of titanium indiffused proton exchanged (TIPE) waveguides in lithium niobate,” Opt. Commun. 42(2), 101–103 (1982). 2. Y. Shi, “Micromachined wide-band lithium-niobate electrooptic Modulators,” IEEE Trans. Microw. Theory Tech. 54(2), 810–815 (2006). 3. M. Levy, R. M. Osgood, Jr., R. Liu, L. E. Cross, G. S. Cargill III, A. Kumar, and H. Bakhru, “Fabrication of single-crystal lithium niobate films by crystal ion slicing,” Appl. Phys. Lett. 73(16), 2293–2295 (1998). 4. G. Poberaj, H. Hu, W. Sohler, and P. Günter, “Lithium niobate on insulator (LNOI) for micro-photonic devices,” Laser Photonics Rev. 6(4), 488–503 (2012). 5. A. Rao and S. Fathpour, “Compact lithium niobate electrooptic modulators,” IEEE J. Sel. Top. Quantum Electron. 24(4), 1–14 (2018). 6. A. Guarino, G. Poberaj, D. Rezzonico, R. Degl’Innocenti, and P. Günter, “Electro–optically tunable microring resonators in lithium niobate,” Nat. Photonics 1(7), 407–410 (2007). 7. C. Wang, M. Zhang, B. Stern, M. Lipson, and M. Lončar, “Nanophotonic lithium niobate electro-optic modulators,” Opt. Express 26(2), 1547–1555 (2018). 8. V. Stenger, J. Toney, A. Pollick, J. Busch, J. Scholl, P. Pontius, and S. Sriram, “Integrated RF photonic devices based on crystal ion sliced lithium niobate,” in L. P. Sadwick and C. M. O. Sullivan, eds. (2013), pp. 86240I 1–8. 9. A. J. Mercante, P. Yao, S. Shi, G. Schneider, J. Murakowski, and D. W. Prather, “110 GHz CMOS compatible thin film LiNbO3 modulator on silicon,” Opt. Express 24(14), 15590–15595 (2016). 10. L. Cai, Y. Kang, and H. Hu, “Electric-optical property of the proton exchanged phase modulator in single-crystal lithium niobate thin film,” Opt. Express 24(5), 4640–4647 (2016). 11. V. Stenger, J. Toney, A. Pollick, J. Busch, J. Scholl, P. Pontius, and S. Sriram, “Engineered thin film lithium niobate substrate for high gain-bandwidth electro-optic modulators,” in CLEO: Science and Innovations (Optical Society of America, 2013). 12. L. Chen, J. Chen, J. Nagy, and R. M. Reano, “Highly linear ring modulator from hybrid silicon and lithium niobate,” Opt. Express 23(10), 13255–13264 (2015).
13. L. Chen, Q. Xu, M. G. Wood, and R. M. Reano, “Hybrid silicon and lithium niobate electro-optical ring modulator,” Optica 1(2), 112–118 (2014). 14. L. Chen, M. G. Wood, and R. M. Reano, “12.5 pm/V hybrid silicon and lithium niobate optical microring resonator with integrated electrodes,” Opt. Express 21(22), 27003–27010 (2013). 15. P. O. Weigel, M. Savanier, C. T. DeRose, A. T. Pomerene, A. L. Starbuck, A. L. Lentine, V. Stenger, and S. Mookherjea, “Lightwave circuits in lithium niobate through hybrid waveguides with silicon photonics,” Sci. Rep. 6(1), 22301 (2016). 16. A. Rao, A. Patil, P. Rabiei, A. Honardoost, R. DeSalvo, A. Paolella, and S. Fathpour, “High-performance and linear thin-film lithium niobate Mach-Zehnder modulators on silicon up to 50 GHz,” Opt. Lett. 41(24), 5700–5703 (2016). 17. P. O. Weigel, J. Zhao, K. Fang, H. Al-Rubaye, D. Trotter, and D. Hood, “Hybrid silicon photonic – lithium niobate electro-optic Mach-Zehnder modulator beyond 100 GHz,” arXiv:1803.10365 (2018). 18. L. Chang, Y. Li, N. Volet, L. Wang, J. Peters, and J. E. Bowers, “Thin film wavelength converters for photonic integrated circuits,” Optica 3(5), 531–535 (2016). 19. L. Chang, M. H. P. Pfeiffer, N. Volet, M. Zervas, J. D. Peters, C. L. Manganelli, E. J. Stanton, Y. Li, T. J. Kippenberg, and J. E. Bowers, “Heterogeneous integration of lithium niobate and silicon nitride waveguides for wafer-scale photonic integrated circuits on silicon,” Opt. Lett. 42(4), 803–806 (2017). 20. Y. C. Shen, “Terahertz pulsed spectroscopy and imaging for pharmaceutical applications: A review,” Int. J. Pharm. 417(1-2), 48–60 (2011). 21. D. Shrekenhamer, C. M. Watts, and W. J. Padilla, “Terahertz single pixel imaging with an optically controlled dynamic spatial light modulator,” Opt. Express 21(10), 12507–12518 (2013). 22. M. C. Kemp, P. F. Taday, B. E. Cole, J. A. Cluff, A. J. Fitzgerald, and W. R. Tribe, “Security applications of terahertz technology,” in R. J. Hwu and D. L. Woolard, eds. (2003), pp. 44–52. 23. T. Nagatsuma, G. Ducournau, and C. C. Renaud, “Advances in terahertz communications accelerated by photonics,” Nat. Photonics 10(6), 371–379 (2016). 24. J. Macario, P. Yao, S. Shi, A. Zablocki, C. Harrity, R. D. Martin, C. A. Schuetz, and D. W. Prather, “Full spectrum millimeter-wave modulation,” Opt. Express 20(21), 23623–23629 (2012). 25. K. Aoki, J. Kondou, O. Mitomi, and M. Minakata, “Velocity-matching conditions for ultrahigh-speed optical LiNbO3 modulators with traveling-wave electrode,” Jpn. J. Appl. Phys. 45(11), 8696–8698 (2006). 26. M. Lee, “Dielectric constant and loss tangent in LiNbO3 crystals from 90 to 147 GHz,” Appl. Phys. Lett. 79(9), 1342–1344 (2001). 27. D. K. Ghodgaonkar, V. V. Varadan, and V. K. Varadan, “A free-space method for measurement of dielectric constants and loss tangents at microwave frequencies,” IEEE Trans. Instrum. Meas. 37(3), 789–793 (1989). 28. D. L. K. Eng, B. C. Olbricht, S. Shi, and D. W. Prather, “Dielectric characterization of thin films using microstrip ring resonators,” Microw. Opt. Technol. Lett. 57(10), 2306–2310 (2015). 29. D. L. K. Eng, Z. Aranda, B. C. Olbricht, S. Shi, and D. W. Prather, “Heterogeneous packaging of organic electro-optic modulators with RF substrates,” IEEE Photonics Technol. Lett. 28(6), 613–616 (2016). 30. I. Krasnokutska, J. J. Tambasco, X. Li, and A.
Peruzzo, “Ultra-low loss photonic circuits in lithium niobate on insulator,” Opt. Express 26(2), 897–904 (2018). 31. D. L. K. Eng, S. T. Kozacik, I. V. Kosilkin, J. P. Wilson, D. D. Ross, S. Shi, L. Dalton, B. C. Olbricht, and D. W. Prather, “Simple fabrication and processing of an all-polymer electrooptic modulator,” IEEE J. Sel. Top. Quantum Electron. 19(6), 190–195 (2013). 32. Y. Shi, L. Yan, and A. E. Willner, “High-speed electrooptic modulator characterization using optical spectrum analysis,” J. Lightwave Technol. 21(10), 2358–2367 (2003). 33. C. J. Huang, C. A. Schuetz, R. Shireen, S. Shi, and D. W. Prather, “LiNbO 3 optical modulator for MMW sensing and imaging,” in R. Appleby and D. A. Wikner, eds. (2007), pp. 65480I–1–9. 34. M. Y. Frankel, S. Gupta, J. A. Valdmanis, and G. A. Mourou, “Terahertz attenuation and dispersion characteristics of coplanar transmission lines,” IEEE Trans. Microw. Theory Tech. 39(6), 910–916 (1991). 35. J. Chiles, M. Malinowski, A. Rao, S. Novak, K. Richardson, and S. Fathpour, “Low-loss, submicron chalcogenide integrated photonics with chlorine plasma etching,” Appl. Phys. Lett. 106, 111110 (2015).
Introduction
Despite its ubiquity in fiber-optic telecommunications and attractive nonlinear properties, the evolution of LiNbO 3 integrated optics can be considered sluggish relative to its Si and III-V counterparts.Discrete devices fabricated in bulk single crystalline LiNbO 3 generally rely on low index contrast optical waveguides with large bend radii [1], and specialized micromachining processes for sustaining broadband operation [2], which inhibits dense integration.Although the first instance of CIS LiNbO 3 was reported in 1998 [3], the recent widespread availability of full 75 mm wafers of CIS thin film LiNbO 3 from a number of distributors: NanoLN (China), Partow Industries (Florida), and SRICO (Ohio), has provided a fertile environment for LiNbO 3 device research and innovation [4,5].
Notable devices that take advantage of the high index contrast provided by a thin LiNbO 3 substrate are tunable ring resonators [6], Mach-Zehnder interferometers [7], switches [8], and standalone phase modulators [7,[9][10][11].Developed in parallel to these are various hybrid devices, that rely on either Si [12][13][14][15][16][17] or Si 3 N 4 [16,18,19] for loading and guiding of an optical mode.A common theme among all devices mentioned herein is that they possess a reduced mode size.The reduced mode size leads to vastly improved EO activity over their bulk predecessors, most notably resulting in reduced half-wave voltages.Reduced half-wave voltage length products coupled with the ability to bend and fold the high index contrast optical waveguides leads to a substantially decreased device footprint ideal for future integrated photonic systems.
Up to this point, however, the other major advantage of thin-film LiNbO3, the significantly lower permittivity of the material system, has yet to be convincingly exploited [7,8,11,16]. To this end, we present the first LiNbO3-based EO modulator engineered to perform continuously from DC to THz frequencies. The device can be used to optically upconvert RF signals directly at a system's RF front-end sensor or antenna element. In so doing, the received RF signal becomes a sideband on an optical carrier that can subsequently be processed and/or routed using low-loss, conventional, off-the-shelf optical components. A broad range of applications in the THz regime, including sensing [20], imaging [21,22], and high-data-rate communications [23], are currently limited by the inherent difficulties of routing THz signals electronically. Given the results presented in this work, we propose that optical routing of THz signals can be enabled by an EO up-converting modulator to provide a simple and effective front-end alternative.
Device design and fabrication
A schematic of the broadband phase modulator's interaction region can be seen in Fig. 1(c). The devices begin with commercially available CIS thin-film LiNbO3 on insulator procured from NanoLN. The substrate consists of a 700 nm thick x-cut LiNbO3 device layer, affixed to a 500 µm thick quartz handle wafer via a 2 µm thick plasma-enhanced chemical-vapor-deposited SiO2 intermediate bonding layer. A single-mode rib waveguide supports the y-propagating TE-polarized light to be modulated. The rib is 1.1 µm wide at the top and 1.8 µm wide at its base; the etch depth is 160 nm, resulting in a sidewall angle of 24.57 degrees. The Lumerical FDTD mode solver is used to simulate the waveguide structure and provides an effective optical group index (n_opt) of 2.2608 for the fundamental TE mode at 1550 nm. The discrepancy between bulk LiNbO3's optical indices (n_extraordinary = 2.14 and n_ordinary = 2.21) at a wavelength of 1550 nm and the simulated group index stems from structure-dependent waveguide dispersion and LiNbO3 material dispersion.
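As a quick consistency check, a minimal sketch (ours, not from the paper) showing that the quoted sidewall angle follows from the stated rib dimensions, and that the ~27 nm/min etch rate quoted below implies roughly six one-minute etch cycles for the 160 nm depth:

```python
# Consistency check on the rib geometry quoted above (illustrative only).
# The sidewall angle here is measured from the substrate plane; the cycle
# count assumes the ~27 nm/min x-cut LiNbO3 etch rate quoted below.
import math

top_width_um = 1.1
base_width_um = 1.8
etch_depth_um = 0.160

# Horizontal run of one sidewall: half the base-to-top width difference.
run_um = (base_width_um - top_width_um) / 2.0          # 0.35 um

angle_deg = math.degrees(math.atan(etch_depth_um / run_um))
print(f"sidewall angle: {angle_deg:.2f} deg")          # ~24.57 deg

etch_rate_nm_per_min = 27.0
cycles = (etch_depth_um * 1000.0) / etch_rate_nm_per_min
print(f"one-minute etch cycles needed: {cycles:.1f}")  # ~5.9, i.e. ~6 cycles
```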
To form the optical waveguide, an 80 nm thick chromium blanket layer is first sputtered onto the substrate. A soft mask is patterned on top of the Cr layer with NR9-1500P photoresist from Futurrex. The soft-mask pattern is transferred into the Cr hard mask with a time-multiplexed Cl-based inductively coupled plasma (ICP) dry etch. After pattern transfer, any residual resist is removed in an O2 plasma ash. The waveguide pattern is finally transferred into the LiNbO3 with a directional, highly anisotropic etch, obtained using an ICP CF4 (6 sccm)/N2 (28 sccm)/O2 (0.5 sccm) chemistry. The etch is time multiplexed to prevent overheating of the sample. The number of cycles determines the etch depth, and each cycle consists of 1 minute of etching in a 600 W plasma under a 400 W bias. The etch rate of x-cut LiNbO3 is ~27 nm per minute and the selectivity between LiNbO3 and Cr is ~5.4:1. Any remaining Cr is stripped in a chemically selective wet etch.
[Fig. 1: (a,b) SEM images of a fabricated device; (c) cross-sectional schematic of the modulator with the modulating RF electric field at 110 GHz and the optical mode at 1550 nm overlaid, each normalized to its maximum value, across the GSG electrodes. The interleaved column text, only partially recoverable, describes the gold CPW electrodes: defined directly by photolithography and electroplating over electron-beam-evaporated seed layers (300 nm total), using NR9-3000P photoresist hard-baked at 120 C for 30 minutes to survive the electroplating bath, with the photoresist and seed layers stripped afterwards via a Ti wet etch and a KI-based Au etch; the optical waveguide sits in the gap between the electrodes, in-modulator electrode dimensions (width and gap) include 1.8 µm and 9.5 µm, the interaction region is 0.92 cm long with a simulated impedance of ~30 Ω up to 500 GHz, and the launches taper to a 53 µm signal width with ~50 Ω impedance up to 500 GHz. Fig. 3 caption fragment: plotted values for half-wave voltage.] | 3,190.6 | 2018-05-28T00:00:00.000 | [
"Physics",
"Engineering"
] |
Comment on se-2021-151
The long-wavelength geoid or gravity anomaly in North America is thought to reflect mantle density anomalies and associated flow, as well as glacial isostatic adjustment. The paper uses the gravity anomaly together with RSL data in North America in a joint inversion for the radial viscosity profile, showing large sensitivity to the seismic-to-density scaling. New in the paper is a regional inversion based on representations of the kernels by Slepian functions. The inversion is done separately for a western region and an eastern region in North America. The obtained viscosity profiles show a weak asthenosphere in the west and a large viscosity jump in the eastern region, in agreement with the expected first-order difference in mantle structure. A constraint on lateral viscosity variation for North America is welcome and would have implications for geodynamic models, including glacial isostatic adjustment and tectonics. The method and figures are clear. However, there are a few main issues that should be addressed, the first of which is likely to impact the results and hence will require a major revision of the paper. In addition, the text requires additional discussion of the potential impact of assumptions, and references on some aspects of the paper are currently missing; see the specific comments. There are several incomplete sentences and typos; please see the annotated pdf, where some of the textual issues are pointed out. The references mentioned in the comments can be found at the end of the review.
best regards, Wouter van der Wal
Main issues
The long-wavelength gravity field is not only caused by GIA and mantle convection but also by anomalies in crustal thickness and density anomalies in the lithosphere. Correcting for a crustal or lithospheric signal is done in recent papers that fit gravity anomaly data in North America (Kaban et al. 2014; Metivier et al. 2016, section 3.2; Reusen et al. 2020), and it is standard in global studies also when the long-wavelength signal is studied (e.g. Wen and Anderson 1997). In North America the crustal signal contributes tens of mGal up to spherical harmonic degree 15 (Reusen et al. 2020, figure 6). This is especially significant in the western region, where the gravity anomaly itself is not as large. Therefore the gravity anomaly needs to be corrected for variations in crustal thickness and density anomalies in the lithosphere before fitting the GIA and mantle convection model. There is the additional complication that crustal thickness variations will contain part of the GIA signal, and isostasy cannot be assumed in the region (Reusen et al. 2020).
The paper inverts a long-wavelength signal with a regional model. However, most of the variance in the gravity field comes from degrees 2 and 3, which are caused by very deep sources (e.g. Liu and Zhong 2016); this means the gravity anomaly will also be sensitive to anomalies in a much wider region surrounding the region of interest (I could not immediately find references that show kernels as a function of horizontal distance). It is not clear how accurate the regional inversion is when signal outside the region of interest is not included; in my opinion this should be demonstrated in the paper, which proposes the regional inversion. This can be investigated, for example, by fitting only the higher-degree signal, or by varying the size of the region of interest.
Referencing: The non-uniqueness of the inversion is mentioned (line 396) but not discussed. It is investigated for GIA by Paulson et al. (2007) and for mantle convection by Thoraval and Richards (1997). The effect of lateral viscosity variations on dynamic topography or the geoid is discussed in, e.g., Ghosh et al. (2010) and Cadek and Fleitout (2003). Results of viscosity inferences can be compared with other inversions of viscosity profiles for North America (Wolf et al. 2006; Kuchar et al. 2019; Metivier et al. 2016; Reusen et al. 2020; Mao and Zhong 2021), at least the ones that also use gravity data.
In section 3.1 the gravity anomaly is fit with only the geoid kernels, resulting in a variance reduction of around 40%. The results are very different from those of the joint inversion. Since the added value of the manuscript is in doing a joint inversion, it would help the flow of the paper to remove section 3.1 or place it in an appendix.
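For concreteness, a minimal sketch of the variance-reduction statistic as conventionally defined in gravity/geoid fitting; the paper's exact definition (e.g., per-degree or area weighting) may differ, so this is illustrative only:

```python
# Minimal sketch of the conventional variance-reduction statistic,
# VR = 1 - sum((obs - mod)^2) / sum(obs^2). The exact weighting used
# in the paper may differ; numbers below are synthetic.
import numpy as np

def variance_reduction(observed, modeled):
    obs = np.asarray(observed, dtype=float)
    mod = np.asarray(modeled, dtype=float)
    obs = obs - obs.mean()  # demean both fields
    mod = mod - mod.mean()
    return 1.0 - np.sum((obs - mod) ** 2) / np.sum(obs ** 2)

# Synthetic example: a model explaining part of the observed signal.
rng = np.random.default_rng(0)
obs = rng.normal(size=1000)
mod = 0.6 * obs + 0.4 * rng.normal(size=1000)
print(f"VR = {variance_reduction(obs, mod):.2f}")  # ~0.68 here; ~0.40 in the paper
```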
Specific comments
Title: strength of a material usually refers to yield stress. I suggest changing the title to something like the following: Regional gravity constraints for North America reveal upper mantle viscosity differences across the continent.
49: It is stated that Mitrovica and Forte showed considerable potential, but it is not clear what gap in the literature you will address. The text from line 225 onwards would be useful to add to the introduction.
82: subset: this seems to be important information that is not discussed further in the paper. How is the choice made as to which functions are included, and how could that choice affect the results?
128: scaling with the 10^21 Pa s value. The effect of this choice should be discussed.
233: The scalings are crucial parameters, and constraints on them have implications for other models. I suggest better introducing the choices made for the scalings. Are the chosen values common in the literature? Is it expected that they hold for North America?
299: ICE-6G is created by fitting RSL data; therefore a good fit with RSL data is to be expected. Is the fit obtained here better than that of the original model?
325: The fit with RSL data is poor, as you also note in line 434. Looking at figure 9, it is unlikely that this is due to missed tectonic signal. It is likely that the crustal signal plays a large role in explaining the gravity anomaly; this should be quantified.
334: That is surprising given that most of the gravity anomaly signal is in the eastern region. Can you speculate why the joint inversion is dominated by the solution for the western region? | 1,736 | 2022-02-22T00:00:00.000 | [
"Geology"
] |
A Study on Supply Model of Rural Highways in China
Rural highways have a significant impact on building a moderately prosperous society in all respects, and they are critical infrastructure for building a harmonious society. Highways have their own economic attributes, and rural highways have their own characteristics. This paper combines the characteristics of rural highways to analyze their supply models and problems, and it summarizes several mixed supply models of rural highways.
Economic analysis of highways
1. Common attributes of highways
Highways have a leading effect on economic development.
As an important part of the transportation industry, the highway industry not only yields large economic benefits itself but also has strong linkage effects on the development of other industries. It promotes economic development in both tangible and intangible ways and can make associated industries the leading industries of economic development.
Highway industry profits must be restricted to a certain level. If highways were fully supplied by the government as pure public goods, supply would be constrained by fiscal resources and would therefore be insufficient. However, when highways are provided on a toll basis, charging too high a price will raise the costs of downstream industries and push up prices. Therefore, to maximize the welfare of society's members, the profit level of the highway industry must be controlled.
Private-good attributes of highways
Highways have quasi-public-good attributes, which means they possess private-good characteristics to a certain extent, such as excludability. These characteristics are the foundation of the market supply of highways. As a commodity, a highway realizes its value differently from general merchandise: its use value is realized gradually, by "retail" (i.e., the collection of tolls). As a result, the interests of the highway industry can diverge from social welfare objectives; in particular, highway investors must first realize their economic interests, and this pursuit of economic benefit increases the difficulty of pursuing social welfare.
Characteristics of rural highway investment projects
The particularities of rural highway investment projects are as follows: A. The benefits accrue within a limited regional scope.
Analysis of the mixed supply of rural highways
Rural highways are quasi-public goods whose main public characteristic is strong positive externalities. To achieve economic efficiency, the government can provide such quasi-public goods directly at low prices to encourage greater consumption, so as to achieve efficient consumption. If they are supplied by the government absolutely free of charge, the result is over-consumption, which may bring welfare losses. If they are supplied entirely by the private sector, excessive fees and high charging standards are likely. Meanwhile, a rural highway is a typical congestible public good among quasi-public goods: for a given supply of highway, the marginal congestion cost rises as the population grows, while the per-capita marginal construction cost declines. The optimal population is therefore reached where the marginal congestion cost equals the marginal cost saving, as sketched below. In many rural areas of our country, especially the western regions, villages are remote from national and provincial roads, the required construction of rural highways is large in quantity, and it demands substantial funds. Because these settlements are remote and their residents few, the "club" fails to reach the optimal membership size. State investment is needed to fill the gap and help these areas construct rural highways.
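A minimal numerical sketch of this club-goods argument; the cost functions and coefficients below are illustrative assumptions, not data from the paper:

```python
# Illustrative club-goods sketch: a fixed construction cost shared by n
# residents plus a congestion cost that grows with n gives a U-shaped
# per-resident cost; the optimal club size n* minimizes it (marginal
# congestion cost equals the marginal saving in shared construction cost).
# All numbers are assumptions for illustration only.
import numpy as np

construction_cost = 1_000_000.0  # assumed fixed cost of the road (yuan)
congestion_coeff = 0.1           # assumed per-resident congestion cost slope

n = np.arange(100, 10_000)
per_capita_cost = construction_cost / n + congestion_coeff * n

n_star = n[np.argmin(per_capita_cost)]
print(f"optimal club size n* ~ {n_star}")  # analytically sqrt(C/k), ~3162 here

# A remote village far below n* faces a per-capita cost well above the
# minimum, which is the gap that state investment is argued to fill.
small_village = 300
print(f"per-capita cost at n = {small_village}: {per_capita_cost[small_village - 100]:.0f}")
```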
Take rural highway construction as an example for a game analysis. Suppose only two parties are involved in the negotiation game: one is the representative of the government, which considers whether state investment in highway construction is worthwhile; the other is the representative of the public, which considers whether to donate part of the funds to obtain the provision of public goods such as a rural highway. Assume that the basic construction price of a rural highway is 10 million, and that both the state and the farmers themselves will get 75,000 Yuan as income. The game-theory model of confidence in rural highway construction is shown in Table 1.
Table 1. The game-theory model of confidence in rural highway construction (payoffs listed as (row player, column player)).

| Public (villagers) \ Government | Positive decision | Negative decision |
| Positive decision | (10, 10) | (-2.5, 7.5) |
| Negative decision | (7.5, -2.5) | (0, 0) |

When both sides make a positive decision, the result is the most satisfactory. In this confidence game, both sides prefer to choose the cooperative strategy. If the government can give some "advance commitment", the public is willing to pay part of the cost while the government will be willing to grant construction-related subsidies; the game between government and private parties then reaches agreement more easily, as the sketch below illustrates.
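A minimal sketch of the equilibrium logic; the row/column orientation and the (public, government) payoff order are assumptions read from the reconstructed table:

```python
# Minimal sketch of the confidence game in Table 1. Assumptions: rows are
# the public, columns are the government, and each cell lists payoffs as
# (public, government); units follow the table as printed.
from itertools import product

P, N = "positive", "negative"
payoffs = {
    (P, P): (10.0, 10.0),
    (P, N): (-2.5, 7.5),
    (N, P): (7.5, -2.5),
    (N, N): (0.0, 0.0),
}

def is_nash(pub, gov):
    # Neither side should gain from a unilateral switch.
    u_pub, u_gov = payoffs[(pub, gov)]
    other = {P: N, N: P}
    return (u_pub >= payoffs[(other[pub], gov)][0]
            and u_gov >= payoffs[(pub, other[gov])][1])

print([cell for cell in product((P, N), repeat=2) if is_nash(*cell)])
# -> [('positive', 'positive'), ('negative', 'negative')]
# Two pure equilibria exist; (positive, positive) is payoff-dominant, which
# is why a government "pre-commitment" helps coordinate on cooperation.
```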
Analysis of the supply problems of rural highways
The main problems are as follows. First, not enough administrative villages are accessible by asphalt roads: the total length of highways in rural areas is insufficient, and many villages are not connected to the highway network, leaving villagers' travel problems unsolved. Second, rural highway construction quality suffers from defects of varying degrees: projects are numerous and widely distributed, the quality and quantity of engineering and management personnel do not match the needs of large-scale rural highway construction, quality control is weak, funding is inadequate, and project management is difficult. Third, local matching funds often fail to materialize: rural highway construction mostly relies on policy commitments of local matching investment, which affects progress and quality. Fourth, maintenance and management of rural highways are difficult: the vast majority of rural highways receive only temporary, seasonal maintenance campaigns, maintenance and management responsibilities are unclear, there is no stable source of funding, and significant safety risks remain. Relying on state investment alone also has problems. Rural highway construction and development follows the "Three-Self" principles, namely "self-built, self-supported, self-managed". The "Three-Self" policy is strongly dependent on geographical characteristics, resulting in a "Matthew effect" in rural highway construction: economically developed areas have adequate funding for rural highway construction, build highways quickly, and in turn enjoy greater economic prosperity; remote, backward areas have no money to build highways, and the resulting poor highway conditions leave their economies even further behind.
"One
Project One Discussion" supply model "One project one discussion" refers to use the form of democracy to raise labor and money, which is a funding model to beneficiaries to raise money for the supply of public goods, with the nature of raising fees.The core of "one project one discussion" is that in the provision of rural highways, to collect money from farmers must be consent agreement by farmers firstly.Its potential implication is that if farmers do not agree, it is difficult to reach a unified consensus, and it is difficult to provide a rural highway.
"One project one discussion" conditions for success are: first, a strong collective economy.Conference farmers is mainly how to spend money collectively, and do not need the farmers to pay themselves; second, the matters discussed is related to the vital interests of the villagers and the amount of investment is not a big; third, village cadres has a good personal ability and charisma.
"One project one discussion" conditions for failure are: first, the farmers cannot afford; second, funding upper limit and it is difficult to get enough funds needed; third, village cadres are afraid of difficulties; fourth, the villagers worry income inequality and funds were misappropriated; fifth, cadres and the masses have disagreement; sixth, in the majority opinion consistent case, a few people finally led boycott of procedure and the result is unsuccessful; seventh, few people refuse investment, and after the project Completed, the not funded enjoyment the achievement.
Not all villages possess the three factors that lead to success, while every village has some of the factors leading to failure. The existence of any one of these reasons can cause the "one project one discussion" mode to fail. Therefore, the effectiveness of "one project one discussion" in providing public goods at the village level is questionable.
The analysis of supply modes and funding channels of rural highways
4.1 Public-private mixed supply model of rural highways
State investment represents government investment in the full sense; "one project one discussion" can be understood as typical "private supply". The above analysis shows that relying solely on state investment or entirely on self-organized villagers cannot achieve the effective provision of rural highways. Rural highways should therefore adopt a "government-led, active private participation" public-private mixed supply model.
Investment by private enterprises and individuals
Allowing private capital into infrastructure and public utilities embodies the principle that, apart from fields concerning people's livelihood and national monopolies, private capital must be allowed to enter.
It is favourable to raising funds and improving social services, and it is conducive to smoothly introducing competition when state capital exits certain industries, achieving an effective substitution by private capital. That is beneficial to national economic development and to the maintenance of national economic security.
Donations from benefiting units and individuals
Expressways and first-grade arterial highways require enormous per-kilometer investments. Rural highway construction funds, by contrast, are basically within 1 million Yuan per kilometer, and some third- and fourth-grade highways need only 200,000-300,000 Yuan per kilometer. Thus, donations from individual units and persons may be a drop in the bucket for expressways and arterial highways, but for rural highways they are a force that cannot be ignored.
Domestic policy banks
Policy bank funds come from tax revenues, treasury bonds, various forms of borrowing from other parties, and so on. For example, the State Development Bank, one of China's three policy banks, primarily finances large-scale national infrastructure, public utilities, and policy-oriented projects.
World Bank loans
The World Bank is the largest international aid agency in the world. It is an international banking consortium, and our country is one of its shareholders. It provides low-interest loans to developing countries to support their economic development. China has precedents of using World Bank loans to build highways.
BOT and PPP modes
1. BOT mode. The BOT (build-operate-transfer) financing model is a relatively good model used in domestic and international highway construction projects. Because the BOT model is complex and systemic, multi-party negotiations are commonly required before implementation can begin. Moreover, because tolls on rural highways are restricted, few companies are willing to construct them; consequently, BOT financing has not yet been used for rural highways in China.
2. PPP mode
The core of the Public-Private Partnership (PPP) mode is the introduction of private capital into the supply of public goods, forming partnerships between the public and private sectors. Government can shed the inefficiencies of producing public goods and services itself, and instead take on the limited role of the party that demands services, sets service standards, and manages. Under contractual restraint mechanisms, the private sector produces the public services, and the public sector (or government) pays the private sector its production costs as compensation and return.
Highway construction lottery
Highways in China, especially rural highways, have great political significance. At the same time, because low-grade rural highways are toll-free, fundraising is very difficult. Highway construction has a nonprofit nature, like sports and social welfare. Issuing a "highway construction lottery" would be a good funding model for rural highway construction.
Conclusions
For a long time, rural highways, as public goods, have been built with state investment, and people believe that highway construction is the obligation of the state. However, highways are essentially quasi-public goods with a congestible character. Rural highways have private-good characteristics to some extent, which makes private-sector participation in their supply possible.
The population of a typical Chinese rural village is generally too small to reach the optimal "club" membership scale, so state investment is needed to fill the gap and help these areas build rural highways. In the confidence game, both players prefer the cooperative strategy, and if the government gives some "pre-commitment" before the game, a government-private mixed supply model forms more easily and efficiently.
County finances are strained, and the funds they can invest in rural highway construction are limited; over-reliance on central financial support is also irrational. Farmers' "one project one discussion" relies mainly on farmers raising funds, but its success depends on many uncertain factors, and most of our villages are not fully equipped with the factors for success, so the "one project one discussion" mode can easily fail. This paper has presented a public-private mixed supply model for rural highways in China and briefly discussed several private financing and supply channels.
3.1 State investment model

State investment mainly takes the following forms: (1) State budget investment, i.e., investment in fixed assets funded by the state budget and included in the national plan. Because such funds are favorable and safe to use, localities actively compete for them in transportation construction. (2) Bond-funded investment. Experience shows that every 1 billion Yuan of highway infrastructure bond funds attracts bank loans and other local matching funds totaling 533 million Yuan. (3) Local government investment. Since their establishment in 1984, our township and county finances have kept expanding, with increasingly standardized expenditure management, and they play an important role in promoting rural economic development and ensuring the implementation of government functions.
The demand for highway grading standards is not high

Rural highway construction must meet the requirements of economic development within limits and avoid the one-sided pursuit of high grades; it should be based on local existing or potential traffic flow. | 3,178 | 2015-01-01T00:00:00.000 | [
"Economics",
"Engineering",
"Geography"
] |
Marine-Derived Compounds Targeting Topoisomerase II in Cancer Cells: A Review
Cancer affects more than 19 million people and is the second leading cause of death in the world. One of the principal strategies used in cancer therapy is the inhibition of topoisomerase II, involved in the survival of cells. Side effects and adverse reactions limit the use of topoisomerase II inhibitors; hence, research is focused on discovering novel compounds that can inhibit topoisomerase II and have a safer toxicological profile. Marine organisms are a source of secondary metabolites with different pharmacological properties including anticancer activity. The objective of this review is to present and discuss the pharmacological potential of marine-derived compounds whose antitumor activity is mediated by topoisomerase II inhibition. Several compounds derived from sponges, fungi, bacteria, ascidians, and other marine sources have been demonstrated to inhibit topoisomerase II. However, some studies only report docking interactions, whereas others do not fully explain the mechanisms of topoisomerase II inhibition. Further in vitro and in vivo studies are needed, as well as a careful toxicological profile evaluation with a focus on cancer cell selectivity.
Introduction
Cancer is the second leading cause of death in the world after cardiovascular diseases, affecting an estimated 19 million people and causing approximately 10 million deaths in 2020 [1].
Chemotherapy represents the main anticancer therapeutic approach. Nowadays, the principal clinically employed anticancer drugs are natural products, or their structural analogs [2][3][4][5][6]. However, several factors limit their effectiveness: (i) their efficacy is inversely proportional to disease progression; (ii) occurrence of chemoresistance; (iii) severe toxicity caused by lack of selectivity against cancer cells [7,8]. For this reason, the discovery of anticancer agents characterized by an improved pharmaco-toxicological profile remains a major aim of pharmacological research.
One of the principal targets of drugs used in chemotherapy to stop the aberrant proliferation of cancer cells is topoisomerase (topo) II [9].
Topo is a class of nuclear enzymes essential for cell survival. They regulate the topology of DNA and are involved in replication, transcription, proliferation, and chromosome segregation during the cell cycle. Vertebrates express two different isoforms of topo II, α and β, and although they possess 70% sequence homology and show similar enzyme activity, they are expressed and regulated differently [10].
[Figure: topo II catalytic cycle: ... (1); flexing of the G-segment in the presence of metal ions (2); formation of the cleavage complex (3); closing of the gate to constrain the T-segment to pass through the G-segment (4); ligation of the G-segment (5); release of the T-segment (6); release of the G-segment (7); enzyme ready for a new catalytic cycle (8).]
Thus, the inhibition of topo activity allows the blocking of the cell cycle and leads to cell death [11]. Topo II-mediated DNA breakage is a critical step for cell survival and must be finely regulated to avoid a possible fragmentation of the entire genome [9]. In a healthy cell, there is fine control of the formation of cleavage complexes, which are short-lived and reversible. Topo II inhibitors are compounds capable of modulating the formation of cleavable complexes and altering this equilibrium.
There are two different mechanisms described for topo II inhibition: (i) poisoning or (ii) catalytic inhibition. Poisoning is the main mechanism and acts by stabilizing the cleavable complex, leading to the maintenance of permanent DNA breakage. Indeed, when the levels of cleavable complexes become high, they cannot be repaired by topo II, thus becoming irreversible DNA lesions that activate different signaling pathways and result in cell death by apoptosis [12]. On the other hand, catalytic inhibition implies that the inhibitor prevents the formation of the cleavage complex. If the amount of cleavage [...]

Neo (neoamphimedine) was highly cytotoxic in several tumor cell lines [25,26]. In addition, neo was equally cytotoxic in wild-type A2780 ovarian cancer cells and in the multidrug-resistant (MDR)-expressing A2780AD cell line (Table 1). Of note, taxol, DOXO, and amsacrine (m-AMSA) had 15-, 33-, and 8-fold lower cytotoxicity than neo [25]. In vivo, the administration of neo (12.5-50 mg/kg for 19 days) to Balb/c nu/nu mice bearing HCT-116 and KB xenografts reduced tumor growth (Table 1) and displayed the same efficacy as ETO [25].

DT was cytotoxic on different tumor cell lines. Additionally, DT had a selective cytotoxic effect on tumor cells, since the cell viability of rat alveolar macrophage NR8383 cells was more than 80% after exposure to the highest tested concentration of the compound [35]. In the same study, DT (0.01-10 µg/mL) was found to inhibit topo IIα using a cell-free DNA cleavage assay with an enzyme-mediated negatively supercoiled pHOT1 plasmid DNA. In the presence of topo IIα, DT at low concentrations (0.01, 0.1, and 1 µg/mL) caused DNA relaxation, and at high concentrations (2.5, 5, and 10 µg/mL) blocked DNA relaxation. This means that DT interferes with the topo IIα catalytic cycle [35]. However, the compound did not generate linear DNA [35], which is associated with the stabilization of the topo II-DNA cleavage complex typical of topo II poisons [37].
The link between the inhibition of topo IIα and the apoptotic activity of DT is controversial. DT increased the apoptotic fraction of K562 cells at concentrations of 2.5, 5.0, and 10 µg/mL. Moreover, the compound at 0.5 and 1.0 µg/mL activated caspase-3 (Casp-3) and cleaved poly (ADP-ribose) polymerase (PARP), while at 5 µg/mL it decreased Casp-3 activity and PARP cleavage. DT also induced the phosphorylation of various DNA damage-related proteins, including H2A histone family member X (H2A.X), ataxia telangiectasia mutated (ATM), breast cancer gene (BRCA), and ataxia-telangiectasia rad3-related (ATR) in the same concentration-dependent manner. Additionally, while 2.5 µg/mL of DT increased intracellular reactive oxygen species (ROS) levels in a time-dependent manner (0-60 min), at 5 µg/mL, ROS levels rose up to 30 min and then gradually decreased time-dependently [35]. This could possibly explain the lower activation of Casp-3 and the lower phosphorylation of DNA damage-related proteins in cells treated with DT 5 µg/mL. At the same time, the pre-treatment of cells with the ROS scavenger N-acetyl cysteine (NAC) inhibited the apoptotic activity and the protein expression of phosphorylated H2A.X (γ-H2A.X) induced by DT at 5 µg/mL [35]. This result points out that, although inhibition of topo IIα is associated with the activation of DNA damage-related proteins, overproduction of ROS also contributes to the increase in DNA damage and seems to be the major pro-apoptotic trigger. ROS-induced apoptosis by DT has been found to involve the IKK (IκB kinases)/NF-κB (nuclear factor kappa B) and PI3K (phosphatidylinositol 3-kinase)/Akt signaling pathways, as demonstrated by the reduced expression of IKK/NF-κB-related proteins and the increased phosphorylation of Akt [35]. Given that the continuous activation of the IKK/NF-κB pathway promotes tumorigenesis [38], its inhibition by DT could be considered an additional mechanism of its antitumor effect. However, Akt activation is associated with tumor aggressiveness and drug resistance [39]. Hence, further investigation should be carried out to clearly understand the effects of DT resulting from the activation of Akt.
Regarding apl-1, Shih and colleagues explored its antitumor activity on leukemic and prostatic cancer cell lines, focusing also on its ability to inhibit topo II. Apl-1 was highly cytotoxic (Table 1) and induced apoptosis through the dysregulation of the oxidative balance, as demonstrated by the excess ROS and NOX (active nicotinamide adenine dinucleotide phosphate oxidase) production [36]. In addition, apl-1 reduced the activity of the PI3K/Akt/mTOR (mammalian target of rapamycin) pathway, a mechanism associated with antitumor activity [40]. Moreover, apl-1 inhibited the relaxation of supercoiled DNA, showing an IC50 (concentration that inhibits 50% of DNA relaxation) value of 1.37 µM (Table 1). Like DT, apl-1 did not generate linear DNA [36], meaning that it could not stabilize the DNA cleavage complex. A further study determined that apl-1, despite increasing the phosphorylation of H2A.X, did not produce DNA single-strand breaks (SSBs) or DSBs and did not increase the number of nuclear γ-H2A.X foci [41]. All these findings show that apl-1, in contrast to its oxidized derivative, acts as a topo IIα catalytic inhibitor without inducing DNA damage.
Apl-1 inhibited the protein expression of heat shock protein 90 (Hsp90) in PC-3 and Du145 prostate cancer cells, making it a dual-target inhibitor [36]. The Hsp90 chaperone ensures the stability, integrity, shape, and function of critical oncogenic proteins (also called Hsp90 client proteins), which play critical roles in signal transduction, cell proliferation and survival, cell-cycle progression and apoptosis, as well as invasion, tumor angiogenesis, and metastasis [42]. Other marine topo II inhibitors, in addition to apl-1, possess this dual inhibitory activity against topo II and Hsp90, as discussed in the next sections. This is probably due to the similar ATPase domain structures of topo II and Hsp90 [43]. Other studies found that apl-1 inhibited the Wnt/β-catenin pathway through the proteasomal degradation of β-catenin [44] and the epidermal growth factor (EGF)-dependent proliferation of breast cancer cells (MCF-7 and ZR-75-1), probably by blocking the phosphorylation of the EGF receptor [45]. Moving toward the later stages of the carcinogenic process, apl-1 showed antimetastatic and antiangiogenic effects: in PC-3 and Du145 cells, it inhibited cell migration and colony formation, and suppressed the EMT process induced by transforming growth factor-β1 (TGF-β1) [36].
Overall, apl-1 exerted a marked antitumor activity in different tumor cell models and modulated multiple targets. Despite this, conflicting results are reported regarding its selective activity toward cancer cells. In normal rat macrophage cells (NR8383) and normal human skin cells (CCD966SK), the IC50 calculated for its cytotoxic effects was almost 4- and 17-fold higher, respectively, than the average IC50 calculated for tumor cells (0.39 µM) [36]. However, apl-1 induced apoptosis and blocked cell-cycle progression indiscriminately in leukemia (THP-1 and NOMO-1) cells and in bovine aortic endothelial cells [41]. Thus, the toxicological profile of apl-1 needs more in-depth study.
Makaluvamines
Another type of alkaloids produced by sponges are pyrroloiminoquinones, which include makaluvamines and batzellines.
Makaluvamines (Figure 4) were isolated from sponges mainly belonging to the genus Zyzzya. In the 1990s, these compounds were the subject of intensive studies to evaluate their antitumor activity. All makaluvamines (A-V) exhibited marked cytotoxic activity [46-48]. In addition, makaluvamines A and C reduced the tumor mass of human ovarian carcinoma OVCAR3 xenografts in Balb/c nu/nu athymic mice in vivo (Table 1) [49].
Regarding the ability of makaluvamines to inhibit topo II, the results are somewhat ambiguous: makaluvamine G did not inhibit topoisomerase II, and for the other makaluvamines there are conflicting data on whether they act as topo II catalytic inhibitors or poisons. Makaluvamine N inhibited more than 90% of the relaxation of supercoiled pBR322 DNA at 5.0 µg/mL [46,49], while makaluvamines A-F modulated topo II-mediated decatenation of kinetoplast DNA (kDNA) differently [49,50]. Overall, makaluvamine B was inactive, while makaluvamines A and F were the most effective, exhibiting IC90 (concentration that inhibits 90% of kDNA decatenation) values of 41 µM and 25 µM, respectively [49]. Later, Matsumoto et al. demonstrated that different makaluvamines promoted the formation of the cleavable complex. Makaluvamines C, D, and E (33-466 µM) cleaved radiolabeled pUC 19 DNA in the presence of human topo II in a concentration-dependent manner, although they showed fewer and weaker cleavage sites than ETO and mitoxantrone. In addition, when also testing other makaluvamines at 91 mM using a cell-free cleavage assay with radiolabeled rf M13 mp 19 plasmid DNA, they found that makaluvamines I and H were the most efficient in inducing topo II-mediated cleavage of plasmid DNA, showing 61% and 33% cleavage, respectively, compared to the 100% of ETO at the same tested concentration (Table 1). In both assays, makaluvamines D and E exhibited comparable behavior, i.e., a weak and a marked formation of the cleavable complex, respectively, whereas makaluvamine C was more efficient in cleaving plasmid DNA than radiolabeled pUC 19 DNA [51]. Overall, this latter study points out that makaluvamines may act as topo II poisons. Various data support this hypothesis. Firstly, makaluvamine A intercalated into DNA and induced DNA DSBs in the neutral filter elution assay, which measures the formation of protein-linked DNA DSBs, compatible with the generation of the DNA cleavable complex. The effect was comparable to that of the known DNA-intercalating topo II poison m-AMSA [49]. Similar findings were reported for makaluvamine C [50].
Secondly, the most active makaluvamines (A and F) were much more cytotoxic in CHO xrs-6 cells (DSB repair-deficient) than in CHO BR1 cells (DSB repair-competent): they exhibited a hypersensitive factor (HF, i.e., the ratio of the IC50 on xrs-6 cells to that on BR1 cells) of 9 (makaluvamine A) and 6 (makaluvamine F), equal to or higher than that of m-AMSA (HF = 6) [49]. Similarly, makaluvamine I showed a 5-fold lower IC50 in xrs-6 cells (0.4 µM) compared to AA8 DNA repair-competent cells (2 µM) [51]. This evidence shows the typical behavior of DNA-intercalating topo II poisons. Overall, it is very likely that some makaluvamines have the formation of cleavable complexes as their predominant mechanism and thus act as poisons. However, the lack of extensive studies does not allow the mechanism of topo II inhibition of the different compounds to be clearly identified. In addition, further experiments in in vitro or in vivo models are needed to assess their potential use as anticancer agents.
Recently, different makaluvamine analogs as well as a hybrid derived from makaluvamine A and ellipticine have been found to inhibit the catalytic activity of topo II and block DNA relaxation [52,53]. However, the hybrid derivative was equally cytotoxic on both prostate cancer cells and normal fibroblasts, thus demonstrating a non-selective activity toward tumor cells [53].
Batzellines
Batzellines are a group of alkaloids isolated from the marine sponge Batzella sp. (Figure 5), structurally related to other marine substances such as makaluvamines and discorhabdins.
Among them, isobatzelline A, isobatzelline C, isobatzelline D, and secobatzelline A were highly cytotoxic on a panel of pancreatic cancer cell lines (Table 1). Surprisingly, cytotoxic activity was found to be inversely proportional to the inhibition of topo II-mediated DNA decatenation [54]. Isobatzelline E and batzelline B, which are not among the most cytotoxic, inhibited 95% and 63%, respectively, of DNA decatenation at 25 µg/mL; at the same concentration, isobatzellines A, C, and D, which are the most cytotoxic, inhibited 36%, 27%, and 26% of topo II-mediated DNA decatenation, respectively. The latter significantly intercalated into DNA, while the most potent topo II inhibitor, isobatzelline E, was the least potent DNA-intercalating compound [54]. This different behavior seems to influence the mechanism by which batzellines interfere with cell-cycle progression. In fact, only the most potent topo II inhibitor, isobatzelline E, blocked cells in the G2 phase of the cell cycle, whereas all the others, characterized by a less pronounced inhibitory activity on topo II and a greater ability to intercalate into DNA, blocked cell-cycle progression in the S phase [54]. Overall, these results indicate that the cytotoxicity of batzellines relies upon both topo II inhibition and DNA intercalation, and that the more a batzelline intercalates into DNA, the greater its cytotoxicity [54]. Bearing in mind the close similarity to makaluvamines and, especially, the marked ability of isobatzellines A, C, and D to intercalate into DNA, more in-depth studies should be carried out to assess whether batzellines induce DNA damage and act as topo II poisons by promoting the formation of the DNA cleavable complex.
Hippospongic Acid A
Hippospongic acid A (HA-A) is a triterpene isolated from the marine sponge Hippospongia sp.
Both the natural enantiomer (R)-HA-A (Figure 6a) and the racemate (±)-HA-A (Figure 6b), which consists of the natural stereoisomer [(R)-HA-A] and the unnatural one [(S)-HA-A], dose-dependently inhibited both human and yeast topo II relaxation activity, showing an IC50 value of 15 µM. Inhibition of topo I has also been observed, although with a higher IC50 value (25 µM), together with inhibition of DNA polymerases at up to 2-fold higher IC50 values [55]. (R)-HA-A and (±)-HA-A at 10 µM blocked cell-cycle progression in both the G1 and G2/M phases and induced apoptosis in NUGC-3 human gastric cancer cells. The G1-phase arrest was probably due to the inhibition of DNA polymerases, while the G2/M-phase block was mainly due to the inhibition of topoisomerases [55]. Based on these results, it seems likely that several mechanisms, namely inhibition of topo I, topo II, and DNA polymerases, are involved in the compound's antitumor activity rather than the exclusive inhibition of topo II.
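For context, IC50 values such as those above are typically estimated by fitting a four-parameter logistic (Hill) curve to dose-response data; a generic sketch with hypothetical numbers, not data from any study cited in this review:

```python
# Generic IC50 estimation sketch: fit a four-parameter logistic (Hill)
# curve to dose-response data and read off the midpoint. The data below
# are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc_uM = np.array([1.0, 3.0, 10.0, 30.0, 100.0])   # inhibitor concentrations
activity = np.array([95.0, 80.0, 60.0, 25.0, 8.0])  # % residual topo activity

popt, _ = curve_fit(hill, conc_uM, activity, p0=[0.0, 100.0, 15.0, 1.0])
print(f"fitted IC50 ~ {popt[2]:.1f} uM")
```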
10-Acetylirciformonin B
10-Acetylirciformonin B (10AB) (Figure 7) is a furanoterpenoid derivative isolated, together with other terpenoid-derived metabolites, from the marine sponge Ircinia sp. [56]. Among all the isolated compounds, 10AB was the most cytotoxic (Table 1). Interestingly, it seems to exert a selective cytotoxic effect on cancer cells: in HL-60 cells, 10AB at 6.0 µM induced 80% apoptosis, while in rat alveolar NR8383 macrophages it suppressed cell viability by only 18.3% [57]. A previous study reported that in HL-60 cells 10AB induced Casp-dependent apoptosis and promoted the formation of DNA DSBs, accompanied by the phosphorylation of H2A.X and checkpoint kinase 2 (Chk2), two markers of nuclear DNA damage [58]. A more recent study showed that 10AB-induced DNA damage may be related to its ability to inhibit topo IIα catalytic activity: 10AB (1.5, 3.0, 6.0, and 12.0 µM) inhibited DNA relaxation without producing linear DNA (unlike the topo IIα poison ETO), and at 3 µM decreased the protein expression of topo IIα in HL-60 cells. All these findings indicate that 10AB could act as a DNA-damaging agent and compromise the topo IIα catalytic cycle, leading to apoptotic cell death [57]. In this regard, in HL-60 cells 10AB (1.5, 3.0, and 6.0 µM) disrupted the MMP (mitochondrial membrane potential) and reduced the protein expression of anti-apoptotic proteins (Bcl-2 and Bcl-X) as well as of other proteins involved in the apoptotic process, such as the X-linked inhibitor of apoptosis protein (XIAP) and survivin. 10AB also generated ROS, activated the mitogen-activated protein kinase (MAPK)/extracellular signal-regulated kinase (ERK) pathway, and inhibited the PI3K/PTEN/Akt/mTOR signaling pathway [57]. Akt transcriptionally regulates the expression of hexokinase II (HK-II) [59]. HKs are enzymes that catalyze the phosphorylation of glucose, i.e., the first step of glycolysis, and are upregulated in many tumors characterized by high glycolytic activity.
Moreover, HK-II has a pro-survival activity and protects mitochondria against mitochondrial apoptotic cell death by interfering with anti- and pro-apoptotic proteins and decreasing ROS generation [59]. Thus, downregulation of HK allows the shift of cancer cells' metabolism to oxidative phosphorylation and increases ROS levels, which leads to cell death. The demonstrated ability of 10AB to downregulate p-Akt protein expression may lead to the downregulation of HK-II. This means that 10AB-induced apoptosis seems to be mediated by topo IIα inhibition and oxidative stress, as well as the perturbation of metabolic and cell-survival pathways.
Manoalide-Like Sesterterpenoids
In 1994, Kobayashi et al. isolated four sesterterpenes from the sponge Hyrtios erecta [60]. Among them, manoalide 25-acetals (Figure 8) inhibited the DNA-unknotting activity of calf thymus topo II, with an IC50 of about 25 µM, and exhibited antitumor activity in CDF1 mice inoculated with P388 leukemia cells, with a T/C% score (the ratio between the tumor volume in the treated group and in the untreated control group) of 150% at 1 mg/kg (Table 1) [60].
M7 has been shown to act as a catalytic inhibitor of topo IIα. Moreover, it inhibited DNA relaxation with an IC50 value of 1.18 μM and promoted the formation of supercoiled DNA products in the presence of topo IIα [61]. Compared to manoalide 25-acetals, the inhibitory activity of M7 toward topo II was greatly higher, although purified topo II from two different organisms were used: human for M7 [61] and calf thymus for manoalide 25acetals [60]. The topo IIα catalytic inhibitor activity was associated with DNA damage, as demonstrated by its ability to promote the phosphorylation of ATM, Chk2, and H2A.X and to induce DNA DSBs at 0.75 μM in Molt-4 cells. M7-induced DNA damage has been found to activate apoptotic cell death, as indicated by and the activation of Casp-3, -8, and All the derivates were tested on multiple leukemia cell lines ( Table 1). The compounds L2, L4, M7, and M9, bearing a 24R, 25S configuration, were the most effective, thus assuming that the cytotoxic activity was configuration-dependent [61]. The administration of M7 to immunodeficient athymic mice (1 µg/kg every day for 33 days) reduced the tumor growth of Molt-4 xenograft by about 66%, without affecting body weight [61].
M7 has been shown to act as a catalytic inhibitor of topo IIα. Moreover, it inhibited DNA relaxation with an IC50 value of 1.18 µM and promoted the formation of supercoiled DNA products in the presence of topo IIα [61]. Compared to manoalide 25-acetals, the inhibitory activity of M7 toward topo II was markedly higher, although purified topo II from two different organisms was used: human for M7 [61] and calf thymus for manoalide 25-acetals [60]. The topo IIα catalytic inhibitor activity was associated with DNA damage, as demonstrated by its ability to promote the phosphorylation of ATM, Chk2, and H2A.X and to induce DNA DSBs at 0.75 µM in Molt-4 cells. M7-induced DNA damage has been found to activate apoptotic cell death, as indicated by the activation of Casp-3, -8, and -9, the disruption of MMP, and the cleavage of PARP [61].
Heteronemin
Another marine sesterterpenoid-type product, heteronemin (Figure 10), was separated from the Hippospongia sp. sponge [62].
Heteronemin was able to induce apoptosis as well as inhibit the proliferation of different cancer cell lines [63,64]. Interestingly, in hepatocellular carcinoma HA22T and HA59T cells, heteronemin induced both apoptosis and ferroptosis [65], a non-apoptotic programmed cell death mechanism characterized by the iron-dependent accumulation of lipid ROS [66]. Due to the well-known occurrence of multi-drug resistance caused by the deregulation of apoptosis [67], the evidence that heteronemin is a ferroptosis inducer is very interesting.
Deepening the molecular mechanisms involved in heteronemin's cytotoxicity in prostate cancer cells, Lee et al. found that it induced both autophagy and apoptosis [62]. Autophagy promotes either cell survival or cell death in a context- and cell-dependent manner [68]. Autophagy induced by heteronemin seems to possess a cytoprotective effect rather than a pro-apoptotic one [62]. Indeed, heteronemin (1.28 and 2.56 µM) activated LC3-B II (LC3-phosphatidylethanolamine conjugate), a marker of autophagy, but at 5.12 µM, when apoptosis was markedly induced, autophagy was blocked. Moreover, pre-treatment with two autophagy inhibitors (3-methyladenine and chloroquine) raised the percentage of LNCaP apoptotic cells [62]. Similarly, in A498 renal carcinoma cells, the inhibition of autophagy increased the pro-apoptotic activity of heteronemin [69].
The marine sesterterpene completely inhibited DNA relaxation in the cell-free DNA cleavage assay and reduced topo IIα protein expression in LNCaP cells, which resulted in the block of the total catalytic activity of the enzyme. Heteronemin did not produce linear DNA, suggesting its inability to stabilize the DNA-topo II cleavable complex [62].
Mechanisms other than the inhibition of topo II are possibly involved in the antitumor activity of heteronemin.
Heteronemin suppressed the expression of Hsp90 and that of its client proteins, thus being able to modulate the expression of oncogenic proteins and transcription factors involved in tumorigenesis [62]. Moreover, it blocked NF-κB activation via proteasome inhibition in K562 cells [70] and the activation of ERK1/2 and STAT3 in breast cancer cells [63,64]. In LNCaP cells, heteronemin (1.28-5.12 µM) disrupted MMP, fostering mitochondrial dysfunction. Due to the overproduction of ROS and Ca2+ release, heteronemin promoted oxidative and endoplasmic reticulum (ER) stress, therefore triggering the unfolded protein response (UPR) signaling network to re-establish ER homeostasis [62]. Oxidative and ER stress results from the activation of protein tyrosine phosphatases (PTPs) [62]. PTPs modulate the levels of cellular protein tyrosine phosphorylation and control cell growth, differentiation, survival, and death. PTPs exert both tumor-suppressive and oncogenic functions in a context-dependent manner [71]. Pre-treatment of LNCaP cells with a PTP inhibitor reduced heteronemin-induced ROS generation and ER stress, thus demonstrating that in this experimental setting, PTPs exhibit a tumor-suppressive mechanism and participate in the antitumor activity of heteronemin [62].
Oxidative stress was also involved in the heteronemin-induced anticancer effects in Molt-4 cells. In this cell line, it enhanced γ-H2A.X protein expression, probably due to apoptosis rather than the occurrence of DNA damage. Indeed, although γ-H2A.X is the most sensitive biomarker of DNA damage, its measurement by ELISA and/or immunoblotting evaluates the total H2A.X protein levels in a sample, and apoptotic cells with pan-nuclear H2A.X expression cannot be differentiated from surviving cells, which may alter H2A.X quantification. In contrast, the fluorescent microscopic quantification of foci is the most sensitive approach and can distinguish between pan-nuclear staining and foci formation [72]. The increased γ-H2A.X protein expression induced by heteronemin in Molt-4 cells was demonstrated by Western blot, as for all the other sponge-derived topo II inhibitors, and, unlike other studies, the expression of other DNA damage-related proteins was not evaluated. Thus, it is not clear whether heteronemin induces DNA damage in this experimental model.
In vivo, heteronemin inhibited the growth of Molt-4 and LNCaP xenografts in Balb/c nude mice and in immunodeficient athymic mice, respectively, treated with 0.31 µg/g (three times a week for 24 days) and 1 mg/kg (every day for 29 days) of heteronemin [62,73].
SS1, SS2, and TPL

SS1, SS2, and TPL were cytotoxic on many tumor cell lines [74] (Table 1). All the compounds inhibited DNA relaxation, reaching almost 100% inhibition at the highest tested concentration (20 µg/mL). There was no information regarding the production of linear DNA [74]. Topo II inhibition was associated with DNA damage: SS1 increased the protein expression of γ-H2A.X and, at 0.0625 µg/mL, it also induced DNA DSBs in Molt-4 cells [74]. Although SS2 enhanced γ-H2A.X protein expression, it is difficult to associate this event exclusively with DNA damage since neither other markers of DNA damage nor the formation of DSBs have been evaluated. SS1, like heteronemin [62], promoted ROS generation and ER stress and induced mitochondrial apoptosis [74].
In addition, SS1 shared with heteronemin the ability to inhibit Hsp90 protein expression and that of its client proteins [74]. Although Lai and colleagues investigated SS1 more deeply than TPL, the latter was also tested in a Molt-4 cell xenograft animal model, showing that its daily administration (1.14 µg/g) for 33 days inhibited almost 50% of xenograft tumor growth in male immunodeficient athymic mice [74]. The authors justified their choice to only test TPL in vivo by the small amount they were able to isolate for the other two compounds. However, considering the marked antitumor activity of SS1, a possible in vivo study of this compound should be considered as well.
Halenaquinone and Xestoquinone

Halenaquinone and xestoquinone exhibited a comparable cytotoxic activity [75,76]. In vivo, the administration of halenaquinone (1 µg/g for 30 days) and xestoquinone (1 µg/g for 50 days) suppressed the growth of Molt-4 xenografts in immunodeficient athymic mice, without affecting body weight (Table 1) [75,76].
Both compounds strongly inhibited both the topo II-catalyzed DNA relaxation and the protein expression of topo IIα in Molt-4 [75,76] and K562 cells [76]. For DNA relaxation, xestoquinone showed an IC50 value of 0.094 µM [76], and halenaquinone showed an IC50 about 5.5-fold lower (0.017 µM) [75]. These results indicate that they act as potent catalytic inhibitors of topo II. However, they did not form a DNA-topo II cleavage complex, since no linear DNA was observed in the cell-free DNA relaxation assay [75,76]. Additionally, molecular docking studies reported that xestoquinone was capable of binding topo II with a docking score of −26.9, although a similar or even lower value was observed for topo I (−24.0) and Hsp90 (−15.5) [76]. These results demonstrate that the compound can bind to multiple targets. Xestoquinone (7.84 µM) treatment of Molt-4 cells markedly increased the expression of multiple DNA damage markers (p-Chk1, p-Chk2, and γ-H2A.X), pointing out that its inhibition of topo II catalytic activity induced DNA damage [76]. No markers of DNA damage were evaluated for the congener halenaquinone. Nonetheless, given the close similarities in the antitumor mechanisms of both compounds, it cannot be excluded that halenaquinone also induces DNA damage through its topo II catalytic inhibition. In fact, both compounds have been shown to inhibit the activity of histone deacetylase (HDAC) in vitro [75,76] and in a Molt-4 xenograft mouse in vivo model [76]. This is not so surprising, as several studies report that topo II and HDAC mutually modulate their activity [43]. In addition to this, ROS overproduction [75,76], induction of ER stress, and binding to the protein Hsp90 [76] recorded for both compounds led to apoptosis. Notably, the two polycyclic quinone-type metabolites promoted both apoptotic pathways, as the disruption of MMP, a decrease in anti-apoptotic proteins (Bcl-2, Bcl-X, Bid), an increase in pro-apoptotic ones (Bax, Bak) (all markers of intrinsic apoptosis), and the activation of Casp-8 (extrinsic pathway) and Casp-9 (intrinsic pathway) were observed in Molt-4 and K562 cells [75,76].
Alongside halenaquinone and xestoquinone, other polycyclic quinone-type metabolites were isolated from the sponge Xestospongia sp. [77]. All studied compounds inhibited topo II (Table 1). Among those, adociaquinone B (Figure 13) was the most potent, with an IC90 (the concentration inducing 90% inhibition) < 11 µM and 78 µM for DNA decatenation and relaxation, respectively. In contrast to xestoquinone and halenaquinone, adociaquinone B was a non-intercalating DNA topo II poison. In fact, it strongly promoted the formation of the enzyme-DNA cleavable complex to the same extent as mitoxantrone, a known topo II poison [78]. However, in contrast to mitoxantrone, adociaquinone B did not intercalate into DNA, since it was not able to displace ethidium bromide from calf thymus DNA [77]. Secoadociaquinone A and B, two other Xestospongia sp. metabolites, inhibited topo II activity in the cell-free DNA decatenation assay without exhibiting cytotoxicity, since they were unable to permeate cell membranes. Thus, it is not sufficient to test the inhibitory activity of topo II only in cell-free systems, as very often the physicochemical properties of the tested compounds prevent their entry into cells and consequently a possible interaction with intracellular targets, such as topo II [77].
Leptosin F
Leptosin F (LEP, Figure 14) is an indole derivative containing sulphur that is derived from the fungus Leptosphaeria sp., which grows on the marine alga Sargassum tortile [82]. Yanagihara and colleagues demonstrated that LEP potently inhibited the growth of RPMI-8402 T cell acute lymphoblastic leukemia cells (more powerfully than ETO, with an IC50 value in the nM range) and induced apoptosis [82]. A pro-apoptotic effect has also been reported for LEP in normal human embryo kidney cells (293 cell line), where it activated Casp-3 at doses as low as 1 to 10 µM [82]. These results could indicate that LEP does not act selectively against cancer cells, but rather on all rapidly proliferating cells.
The in vitro kDNA decatenation assay revealed its ability to inhibit topo II [82]. However, gel electrophoresis of the kDNA after the decatenation assay showed that LEP did not act as a catalytic inhibitor of topo II, contrary to what the authors stated. Further studies would be necessary to define the exact mechanism of interaction between LEP and the enzyme. Moreover, since the compound concentration required to exert cytotoxic activity on RPMI-8402 cells was far lower (nM range) than that required to inhibit topo II (µM range), the cytotoxicity of LEP at the cellular level might involve other pathways in addition to the inhibition of topo II.
Pericosine A
Pericosine A (PA, Figure 15) is a metabolite produced by a strain of Periconia byssoides OUPS-N133, a marine fungus originally separated from the sea hare Aplysia kurodai [83]. Some studies reported the ability of PA to induce growth inhibition on different cancer cell lines [83,84] (Table 2). Furthermore, in mice inoculated with P388 leukemic cells, PA increased the median survival days compared to vehicle (13.0 versus 10.7 days) (Table 2). In the same study, the authors reported that PA at 100-300 mM inhibited topo II and at 449 µM inhibited the epidermal growth factor receptor (EGFR) by 40-70%. Since PA seems to exert its inhibitory effects on topo II at very high concentrations, it is unlikely that this mechanism of action was responsible for its in vitro and in vivo antitumor effects. The inhibition of EGFR, a protein kinase known to promote cell proliferation and counteract apoptosis [85], could be a more plausible mechanism [83]. The lack of important information on its antitumor activity in vitro and in vivo does not permit a clear characterization of the anticancer activity of PA. Therefore, further experiments should be conducted to fully understand the potential usefulness of PA in the oncological area.
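For scale, the reported survival medians translate into a modest increase in life span (ILS), computed here directly from the figures given above:

\[ \mathrm{ILS}\,(\%) = \frac{13.0 - 10.7}{10.7} \times 100 \approx 21.5\% \]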
Marinactinone B
Marinactinone B (MB, Figure 16) is a γ-pyrone derivative isolated from the bacterial strain Marinactinospora thermotolerans SCSIO 00606, found in the sediments of the northern South China Sea [86].
MB was evaluated for its anticancer activity against breast (MCF-7), pancreatic (SW1990), hepatic (HepG2 and SMCC-7721), lung (NCI-H460), and cervical (HeLa) cancer cell lines. It exhibited cytotoxicity, at moderately high concentrations, only against the SW1990 (99 µM) and SMCC-7721 (45 µM) cell lines. It was also a very weak inhibitor of topo II, with an IC50 value of 607 µM [86]. With such a high IC50 value, MB is not a promising compound per se. However, given its interaction with topo II, MB could constitute the basis for the development of analogues with antitumor activity.
Aspergiolide A
Aspergiolide A (ASP, Figure 17) is an anthracycline [87] isolated from Aspergillus glaucus, which was obtained from the marine sediment around mangrove roots harvested in the Chinese province of Fujian [88].
ASP was cytotoxic on different human and murine cancer cell lines (Table 2) [88].
Wang et al. have delved into the antitumor efficacy of ASP in vitro and in vivo.
The compound induced Casp-dependent apoptosis as early as 12 h after treatment [87]. In addition, ASP increased γ-H2A.X protein expression. Considering its anthracyclinic structure, it has been hypothesized that the inhibition of topo II could be involved in its apoptotic activity. The kDNA decatenation assay demonstrated that ASP inhibited the enzyme in a fashion comparable to DOXO. The results of in vivo experiments in H22 hepatoma-bearing mice and on BEL-7402 cancer xenografts (Table 2) corroborated the in vitro findings. ASP reduced tumor volume dose-dependently in H22 mice and showed activity comparable to that of DOXO (2 mg/kg). In BEL-7402 xenografts, ASP showed significantly milder activity than DOXO. Interestingly, in both in vivo models, ASP altered mice body weight considerably less than DOXO, suggesting lower toxicity than the benchmark anthracycline [87]. The study also investigated the pharmacokinetic profile of ASP, which has been shown to distribute throughout the body in a perfusion- and blood flow-dependent manner, and was able to concentrate in tumor tissues. Additionally, ASP penetrated the blood-brain barrier. No clinical signs of toxicity or morphological changes in organs were found in mice treated with the maximal tolerable dose of ASP (more than 400 mg/kg) [87], which is considerably higher than the dose necessary to produce the antitumor effects. The genotoxic potential of ASP was also evaluated via the in vivo bone marrow erythrocyte micronucleus assay. The number of micronuclei produced following treatment with ASP was comparable to the negative control, suggesting that ASP was not genotoxic [87].
Anthracyclines are proven to cause significant cardiotoxicity and electrocardiogram abnormalities, including long QT syndrome, a potentially lethal condition induced by several drugs [89]. Long QT syndrome has been found to be caused by the blockade of hERG (human ether-a-go-go-related gene), a gene encoding the pore-forming subunit of the potassium channels that are relevant for cardiac repolarization [90]. Thus, Li et al. investigated the in vitro inhibitory rates of ASP on the hERG current. The resulting values indicated that ASP was unable to inhibit the hERG channel, and hence it is unlikely to produce cardiotoxicity through this mechanism [87].
On the whole, the studies reported above identify ASP as an attractive candidate in the oncological area. However, further studies will be necessary to clarify whether the effects of the compound can be attributed to topo II inhibition.
Jadomycin DS
Jadomycin DS (JAD, Figure 18) is a polyketide produced by the bacterium Streptomyces venezuelae ISP5230 under stress conditions [91]. JAD shares three common features with ETO and DOXO: (i) a lactone ring, (ii) a quinone moiety, and (iii) a copper-mediated DNA cleavage activity. To estimate the molecular interactions of JAD, binding studies were conducted using a nuclear magnetic resonance spectroscopy (NMR) method that allows the identification of molecules capable of binding a ligand-protein with binding affinity (KD) in the µM-mM range [92,93]. JAD bound topo IIβ. However, the overall KD for the JAD-topo IIβ complex was equal to 9.4 mM, suggesting that the bond formed between JAD and topo IIβ is weak [91]. Such a high dissociation constant between the compound and topo IIβ does not depict JAD as an attractive anti-cancer drug. Moreover, JAD interacted unselectively with several unrelated proteins including serum albumin [91], making it difficult to determine its actual mode of action and severely compromising its hypothetical in vivo application.
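To put the 9.4 mM dissociation constant in perspective, it can be converted into a binding free energy through the standard relation (a back-of-the-envelope estimate at 298 K, not a value reported in [91]):

\[ \Delta G = RT \ln K_D = (0.592\ \text{kcal/mol}) \times \ln(9.4 \times 10^{-3}) \approx -2.8\ \text{kcal/mol} \]

By comparison, drug-like binders typically show ΔG in the range of −8 to −12 kcal/mol (KD in the nM to µM range), underscoring how weak the JAD-topo IIβ interaction is.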
2RA
2RA was cytotoxic [94], blocked the cell cycle in the G2/M phase, and triggered Casp-dependent apoptosis in HepG2 cells. To determine whether 2RA was able to interact with human topo IIα, a molecular docking study was performed, demonstrating that 2RA was able to bind to the active receptor pocket with a binding energy of −7.84 kJ/mol [94]. In addition, an increased formation of hydrogen bonds in the protein-ligand complex was recorded compared to the protein alone, indicating that the protein-ligand complex had a higher binding affinity and stability [94]. However, in vitro studies should be conducted to demonstrate that 2RA is a topo IIα inhibitor.
Streptomyces sp. VITJS4 Ethyl Acetate Crude Extract
Streptomyces sp. VITJS4 bacterial strain was isolated from the marine environment in Tamil Nadu, India [95]. The VITJS4 ethyl acetate crude extract exerted cytotoxic effects against HepG2 and HeLa cancer cells, with identical IC50 values of 50 µg/mL, and induced apoptosis; hence, this would suggest a cell line-independent mechanism of action [95]. Gas chromatography-mass spectrometry (GC-MS) analysis identified a phthalate derivative, namely 1,2-benzenedicarboxylic acid, mono-(2-ethylhexyl) ester, as the major bioactive metabolite among the 52 bioactive compounds of the ethyl acetate extract, which is probably responsible for the activity observed on the two human cancer cell lines. Molecular docking analysis was conducted to assess the interaction between the compound and topo IIα. The analysis revealed the formation of bonds at the active pocket of the protein, with a binding energy of −5.87 kJ/mol [95].
Sulochrin
Sulochrin (Figure 20) is a benzophenone derivative isolated from Aspergillus falconensis after cultivating it on a solid rice medium containing 3.5% of (NH4)2SO4 [96].
Sulochrin was cytotoxic on the L5178Y murine lymphoma cell line with an IC50 value of 5.1 µM [96]. The compound was not cytotoxic on MDA-MB-231 human breast cancer cells; however, at a concentration of 70 µM, it dramatically reduced cell migration [96]. Molecular docking studies indicated the interaction of sulochrin with topo II. With a free binding energy of −12.11 kcal/mol, the compound showed robust stability through the formation of several stable bonds within the active sites, comparable to that exerted by DOXO (−16.28 kcal/mol). Molecular docking studies also demonstrated the capacity of the compound to bind within the active sites of two further enzymes, with moderate free binding energies: cyclin-dependent kinase 2 (CDK2), involved in cell-cycle progression, and matrix metalloproteinase 13 (MMP-13), involved in the EMT process [96].
3-Hydroxyholyrine A
3-Hydroxyholyrine A (3HA, Figure 21) is an indolocarbazole produced by the marine-derived bacterium Streptomyces strain OUCMDZ-3118 in the presence of 5-hydroxy-L-tryptophan [97]. 3HA exerted cytotoxic effects on many tumor cell lines (Table 2) and reduced the expression of the antiapoptotic protein survivin more potently than ETO in MKN45 cells [97]. In the supercoiled plasmid DNA relaxation assay, 3HA potently inhibited the activity of the topo IIα enzyme at 1.0, 5.0, and 10.0 µM. Of note, 3HA exhibited an inhibitory activity at concentrations lower than ETO (50 µM). The inhibition of topo IIα resulted in DNA damage, as demonstrated by the concentration-dependent increase in the expression of γ-H2A.X.
Wakayin
Wakayin (Figure 22) is a pyrroloiminoquinone alkaloid isolated from an ascidian, commonly called sea squirt, belonging to the genus Clavelina [99]. In early studies evaluating its activity, wakayin induced cytotoxic effects on the human colon HCT-116 cancer cell line with an IC50 value of 0.5 µg/mL. On the same cell line, it inhibited the topo II enzyme at a concentration of 250 µM [99]. Moreover, wakayin exhibited a higher cytotoxicity on DSB repair-deficient CHO xrs-6 cells than on DSB repair-proficient CHO BR1 cells. Their IC50 ratio was 9.8, higher than that of ETO (7.0). Those results clearly indicate DSB induction as a mechanism involved in the cytotoxicity of wakayin [100]. Taking into account this evidence and the planar quinonic structure of wakayin, it was hypothesized and then demonstrated that wakayin inhibited the decatenation of kDNA in a concentration-dependent manner in the range of 40 to 133 µg/mL [100]. However, the difference between the concentrations inhibiting the purified enzyme (40-133 µg/mL) and the concentration exerting the cytotoxic effects (0.5 µg/mL) suggests that other mechanisms, not just topo II inhibition, could contribute to wakayin-induced DNA damage.
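The ratio cited above can be read as a repair-dependence index; a sketch of the computation, assuming (as the wording implies) that the ratio is IC50 in repair-proficient over repair-deficient cells:

\[ \text{ratio} = \frac{\mathrm{IC}_{50}(\text{CHO BR1, DSB repair-proficient})}{\mathrm{IC}_{50}(\text{CHO xrs-6, DSB repair-deficient})} = 9.8 \]

A ratio well above 1 means that cells unable to repair DSBs are considerably more sensitive to the compound, implicating DSB induction in its cytotoxicity; that wakayin's ratio (9.8) exceeds that of the topo II poison ETO (7.0) strengthens this interpretation.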
Ascididemin
Ascididemin (ASC, Figure 23) is a pyridoacridine alkaloid isolated from the Mediterranean ascidian Cystodytes dellechiajei collected near the Balearic Islands [101], as well as from the Okinawan ascidian Didemnum sp. from the Kerama Islands [102].
It has been reported that ASC was 10-fold more cytotoxic in CHO xrs-6 (DSB repair-deficient) than in CHO BR1 (DSB repair-proficient) cells, while exhibiting identical toxicity in CHO-BR1 (SSB repair-proficient) and CHO-EM9 (SSB repair-deficient) cells, raising the hypothesis that DSBs were involved in its in vitro anticancer activity [103]. Moreover, ASC was cytotoxic on human leukemia, colon, and breast cancer cell lines [102]. The cytotoxicity elicited by ASC (Table 3) was related to the induction of Casp-dependent apoptosis, even at the lowest concentrations [102,104]. Meanwhile, it inhibited the growth of the non-malignant African green monkey kidney cell line BSC-1, revealing a lack of selectivity against cancer cells [103].
ASC was shown to inhibit topo II activity at a concentration equal to 30 µM [105]. Nearly 10 years later, Dassonneville and colleagues evaluated its interaction with topo II and demonstrated that this compound can (i) inhibit DNA ligation after it has been cleaved by topo II, and (ii) stimulate DNA cleavage, with most cleavage sites having a C on the side of the cleaved bond [104]. Based on these results, ASC could be defined as a site-specific topo II poison for the purified enzyme, although its activity appeared to be inferior compared to the positive control ETO [104]. However, the capability of ASC to function as a topo II poison was not demonstrated in cellular assays. Indeed, comparing the cytotoxic activity of ASC on human leukemia cells sensitive (HL-60) or resistant (HL-60/MX2) to mitoxantrone, ASC was cytotoxic with similar IC50 values (0.48 µM for HL-60 and 0.65 µM for HL-60/MX2) [104]. Matsumoto and coworkers performed a cell-free assay to clarify the mechanism of action of ASC. The results proved that ASC was able to cleave DNA in a concentration- and time-dependent manner, even in the absence of topo II. Moreover, experimental results demonstrated (i) the generation of ROS, (ii) that antioxidant treatment protected against DNA cleavage, and (iii) that cells deficient in the ROS-induced damage repair system were more susceptible to ASC. On the whole, those results suggest that ROS production is involved in the cytotoxicity of ASC [106]. The production of ROS could be due to the direct reduction of the ASC iminoquinone heterocyclic ring to a semiquinone, with production of H2O2 [106]. Considering the potential of ASC to intercalate into DNA, it is probable that ROS production occurs in proximity to the nucleic acid, thereby producing DNA damage [106].
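The proposed redox route corresponds to classic quinone redox cycling: one-electron reduction of the iminoquinone (Q) to a semiquinone radical, re-oxidation by O2 to give superoxide, and dismutation of superoxide to H2O2. A schematic of this cycle (a textbook mechanism, consistent with but not explicitly drawn out in [106]):

\[ \mathrm{Q} + e^- \rightarrow \mathrm{Q}^{\bullet-}, \qquad \mathrm{Q}^{\bullet-} + \mathrm{O_2} \rightarrow \mathrm{Q} + \mathrm{O_2^{\bullet-}}, \qquad 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \rightarrow \mathrm{H_2O_2} + \mathrm{O_2} \]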
GA3P

Umemura and coworkers evaluated different GA3P formulations bearing high (>80%) and low (<20%) lactic acid percentages (GA3P+ and GA3P−, respectively) [108]. Both preparations of GA3P inhibited kDNA decatenation with similar IC50 values (0.048 µg/mL for GA3P+ and 0.052 µg/mL for GA3P−), proving that GA3P was a topo II inhibitor and that the lactic acid percentage had no impact on topo II inhibition [108]. Gel electrophoresis of pT2GN plasmid DNA revealed that GA3P+ did not induce the accumulation of cleavable complexes and acted as a catalytic inhibitor. Furthermore, the analysis of plasmid DNA showed that GA3P+, when simultaneously added to teniposide, inhibited the stabilization of teniposide-induced cleavable complexes [108].
In a large panel of cells, the polysaccharide slightly inhibited cell proliferation with GI50 values ranging from 0.67 to 11 µg/mL [108]. However, no further cellular assays were undertaken to elucidate the cytotoxic activity or the possible death mechanism exerted by the compound. Despite evidence showing that GA3P+ was a topo II catalytic inhibitor, its chemical profile and high molecular weight can hamper its entry into the nucleus and its interaction with DNA or topo II. Certainly, further studies will be required to clarify the mechanism of action of GA3P against cancer cells.
Echinoside A
Echinoside A (ECH, Figure 24) is a saponin isolated from the sea cucumber Holothuria nobilis (Selenka), an echinoderm retrieved from the seabed off Dongshan Island (P. R. China) [109]. ECH exerted a broad-spectrum anticancer activity against a panel of 26 human and murine cancer cell lines, with very similar IC50 values ranging from 1.0 to 6.0 µM [109]. Fluorescent TUNEL staining of ECH-treated HL-60 cells and DNA fragmentation indicated that the observed cytotoxicity resulted from Casp-dependent apoptosis. The potent effects observed in cancer cells were confirmed by in vivo experiments on animal cancer models (Table 3).
An extensive and comprehensive set of in vitro experiments with the topo IIα enzyme was conducted to investigate its topo II inhibitor activity. The results indicate that ECH effectively reduced pBR322 plasmid DNA relaxation and suppressed kDNA decatenation [109]. An assay with topo IIα extracted from HL-60 cells proved that ECH (0.5 µM) induced the formation of stable cleavage complexes, which is a common mechanism for topo II poisons, along with intercalation into DNA. However, two different experiments (Table 3) reported that ECH was a non-intercalative agent, even at high concentrations [109]. The activity of ECH toward topo IIα-DNA binding was evaluated using a fluorescence anisotropy assay, which revealed that ECH inhibited the binding between the enzyme and DNA. Molecular docking studies clarified that ECH, through its sugar moiety, established strong hydrogen bonds with the DNA binding site of topo IIα, working as a catalytic inhibitor that competes with DNA for the enzyme's DNA-binding domain [109].
Further studies explored the effects of ECH on the cleavage/religation equilibrium using a cell-free assay. ECH produced an increase in DNA cleavage and enhanced DSB formation, without significant effects on religation [109]. The ability of ECH to promote DNA cleavage without affecting DNA ligation makes it similar to topo II poisons such as ellipticine, genistein, and quinolones [110,111], which act with the same mechanism. However, ECH has been found to possess the peculiar characteristics of (i) blocking the noncovalent binding of topo IIα to DNA by competing with DNA for the DNA-binding domain of the enzyme, and (ii) hindering the topo IIα-mediated pre-strand passage cleavage/religation equilibrium. Taken together, the studies presented above suggest that ECH is a potent non-intercalative topo II inhibitor with a peculiar mechanism of action. It acts as both a topoisomerase poison (stabilization of cleavable complexes and induction of DSBs) and a catalytic inhibitor (inhibition of topo II-DNA binding, interference with the pre-strand passage cleavage/religation equilibrium). Due to these characteristics, it constitutes a promising starting point for the development of anticancer drugs based on topo II inhibition.
Eusynstyelamide B
Eusynstyelamide B (EUB, Figure 25) is a bis-indole alkaloid extracted from the marine ascidian Didemnum candidum found in the Great Barrier Reef [112].
EUB was able to induce cytotoxicity in breast MDA-MB-231 and prostate LNCaP cancer cells [112,113]. Table 3 reports the differences in gene and protein expression between the MDA-MB-231 and LNCaP cell lines, emphasizing the cell line-specific mechanisms of EUB. The COMET assay and the quantitative evaluation of γ-H2A.X foci supported the production of DNA damage via DSBs in both cell lines.
With the aim to investigate whether the observed DNA damage derived from the direct interaction of EUB with DNA, a displacement assay and a DNA melting temperature analysis were performed. Both demonstrated that EUB did not directly interact with DNA but instead acted as a topo II poison [113]. EUB was also highly cytotoxic in two non-transformed cell lines (NFF primary human neonatal foreskin fibroblasts and the RWPE-1 epithelial prostate cell line), with IC50 values even lower than those reported for tumor cell lines. NFF and RWPE-1 cells are highly proliferative and express high levels of topo IIα [114]. This means that the effects of EUB were not specific for cancer cells. Further in vitro and in vivo studies have to be performed to assess the safety profile of EUB.
Conclusions
Of the compounds discussed in this review, only a few act as topo II poisons (adociaquinone B and EUB) or as catalytic inhibitors (neo and apl-1). Several others exhibit topo II inhibitory activity but, due to the paucity of experimental evidence, their mode of inhibition has not been elucidated, making it difficult to establish their mechanism of action.
Although topo II inhibitors, particularly topo II poisons, are successfully used as anticancer agents, the occurrence of drug resistance and severe side effects, such as cardiotoxicity and the development of secondary malignancies, limit their use [43]. An approach to overcome these limitations could be the use of dual inhibitors. Multiple marine-derived compounds described in this review, such as 25-acetals manoalide, xestoquinone, HA-A, and M7, inhibit both topo I and topo II [55,60,61,76], while for others, topo II inhibitory activity is accompanied by the inhibition of Hsp90 [36,62,74] or HDAC [75,76]. The resulting advantages are manifold. Simultaneous inhibition of topo I and topo II could reduce the possible onset of resistance. The same advantage can be achieved by inhibiting topo II and Hsp90 [43]. Concerning topo II and HDAC inhibition, HDAC inhibition-mediated histone hyperacetylation increases chromatin decondensation and DNA accessibility. These effects may promote topo II binding and enhance topo II inhibiting activity [43]. Among the marine compounds presented in this review, heteronemin is the most interesting. Indeed, its cytotoxic activity was highly multimechanistic, with inhibition of the catalytic activities of both topo I and topo II and inhibition of Hsp90, associated with oxidative and ER stress. However, dual inhibitors are often compounds with a high molecular weight [119], which could limit their druggability and their safety profile, and which indicates that their pharmacokinetics should be thoroughly explored. Another issue to consider is the ability of topo II inhibitors to cause DNA lesions that, if not repaired or not cytotoxic, could lead to chromosome aberrations and secondary malignancies such as leukemias [120]. Although topo II catalytic inhibitors are usually associated with no or limited direct DNA damage [121], some marine-derived topo II catalytic inhibitors presented in this review induce DNA DSBs and/or increase the protein expression of DNA damage-related proteins. Thus, it would be of great relevance to clarify whether their genotoxicity results from their topo II catalytic inhibition or involves different mechanisms. A further concern related to the toxicological profile is the lack of selectivity toward cancer cells exhibited by some marine compounds, which prompts more extensive studies on non-transformed cells to assess the safety of such molecules.
Lastly, some marine compounds exhibited a strong binding affinity for topo II, demonstrated through molecular docking studies. Among those, the most interesting are neo, ECH, and sulochrin, which are characterized by a binding energy of -61.8, -39.21, and -12.11 kcal/mol, respectively. However, in some cases, this interaction has not been confirmed by cellular assays, making it difficult to know whether topo II binding leads to the actual inhibition of the enzyme activity. Thus, at least DNA decatenation and/or relaxation assays are necessary to confirm their topo II inhibitory activity. These cell-free assays certainly provide early indications of the effective inhibition of topo II. However, they may not be sufficient because, as shown for secoadociaquinone A and B and GA3P [77,108], their inhibitory activity on the purified enzyme does not necessarily lead to the inhibition of topo II at the cellular level.
In conclusion, in this review, we reported current studies on marine-derived compounds targeting topo II, highlighted their pharmacological potential, and discussed their toxicological issues. | 19,614 | 2022-10-27T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Chemistry"
] |
Stiffness of sphere–plate contacts at MHz frequencies: dependence on normal load, oscillation amplitude, and ambient medium
The stiffness of micron-sized sphere–plate contacts was studied by employing high frequency, tangential excitation of variable amplitude (0–20 nm). The contacts were established between glass spheres and the surface of a quartz crystal microbalance (QCM), where the resonator surface had been coated with either sputtered SiO2 or a spin-cast layer of poly(methyl methacrylate) (PMMA). The results from experiments undertaken in the dry state and in water are compared. Building on the shifts in the resonance frequency and resonance bandwidth, the instrument determines the real and the imaginary part of the contact stiffness, where the imaginary part quantifies dissipative processes. The method is closely analogous to related procedures in AFM-based metrology. The real part of the contact stiffness as a function of normal load can be fitted with the Johnson–Kendall–Roberts (JKR) model. The contact stiffness was found to increase in the presence of liquid water. This finding is tentatively explained by the rocking motion of the spheres, which couples to a squeeze flow of the water close to the contact. The loss tangent of the contact stiffness is on the order of 0.1, where the energy losses are associated with interfacial processes. At high amplitudes, partial slip was found to occur. The apparent contact stiffness at large amplitude depends linearly on the amplitude, as predicted by the Cattaneo–Mindlin model. This finding is remarkable insofar as the Cattaneo–Mindlin model assumes Coulomb friction inside the sliding region. Coulomb friction is typically viewed as a macroscopic concept, related to surface roughness. An alternative model (formulated by Savkoor), which assumes a constant frictional stress in the sliding zone independent of the normal pressure, is inconsistent with the experimental data. The apparent friction coefficients slightly decrease with normal force, which can be explained by nanoroughness. In other words, contact splitting (i.e., a transport of shear stress across many small contacts, rather than a few large ones) can be exploited to reduce partial slip.
Figure 1: Sketch of the mechanisms underlying partial slip. A Hertzian contact under a tangential load has infinite tangential stress at the edge of the contact.
Introduction
Partial slip is a widespread and multifaceted phenomenon. When a contact experiences partial slip, parts of the contact stick to each other under a tangential stress, while others slide. Partial slip is found in many tribological situations of practical relevance. This includes fretting wear [1,2], granular media [3], earthquakes [4], and the collision between particles [5]. Early models of partial slip were formulated independently by Cattaneo [6] and Mindlin [7], who were concerned with a Hertzian contact. If the entire contact area sticks, a continuum treatment predicts a stress singularity at the rim of the contact (Figure 1A). However, infinite stress is unrealistic, and among the mechanisms removing the singularity is partial slip. Partial slip implies that those areas where the tangential stress exceeds a certain critical value slide and thereby lower the local stress. Cattaneo and Mindlin assumed that the frictional stress in the sliding zone, σ, is proportional to the normal pressure, p, as in Coulomb friction (Figure 1C). The ratio of σ and p is the friction coefficient, µ. From the Cattaneo-Mindlin (CM) model, one can derive predictions for the width of the sliding region (which is of annular shape) and for the force-displacement relation (Figure 2D below) [8,9].
Partial slip as such is an accepted and frequently observed phenomenon. The details of the CM model, however, are being debated for a variety of reasons. Etsion [11] gives a detailed account. The first category of problems originates from the numerous assumptions in the formulation of the model. For example, the normal pressure is assumed to stay constant during tangential loading. A second set of limitations is related to the idealized conditions. The CM model ignores roughness, capillary forces, plastic deformation, and the effects of contamination. In particular, plastic deformation can lead to junction growth, which stiffens the contact rather than weakening it [12,13].
There is a particular shortcoming that is widely observed on the one hand, but easily fixed on a heuristic level on the other. The CM model ignores viscous dissipation. In consequence, the energy dissipated in reciprocating sliding scales as the cube of the oscillation amplitude in the low-amplitude limit. Following from this scaling law, the damping of a resonator which experiences partial slip in one way or another should go to zero at small amplitudes. (An explanation of the contact resonance method, which probes these relations, is given below.) Deviating from this scaling prediction, the contacts usually do damp a resonance even at the smallest accessible amplitudes. This type of damping must be related to linear viscoelasticity, meaning that the corresponding stresses are proportional to displacement (Figure 2C). While such viscous processes are not contained in the CM model, they can be added to it in an ad hoc manner (see Equation 12).
Figure 2: (A) For a narrow contact between a sphere and a plate, deformation occurs close to the contact only. The contact may be depicted as a spring or, more generally, as a Voigt element, where the latter also accounts for viscous dissipation (B). In the case where the sphere is heavy enough to be clamped in space by inertia, it can be depicted as a wall (right-hand side in B). (C,D) Illustration of how Δf and ΔΓ depend on the shape of the force-displacement loop: (C) viscoelastic contact and (D) partial slip according to Cattaneo and Mindlin. The frequency shift is roughly proportional to the ratio of force and displacement at the turning point (full dots). ΔΓ is proportional to the area inside the loop divided by u_0² (hatched). (C) and (D) adapted with permission from [10], copyright 2013 the American Physical Society.
In the Results and Discussion section, we address a further, rather fundamental criticism of the CM model, which starts out from the extent to which a macroscopic view of friction guides its formulation. Macroscopic concepts enter the CM model at two separate instances. Firstly, a sliding stress proportional to the normal pressure is commonly associated with Coulomb friction. In Coulomb friction, the tangential force is related to the actual area of contact, to be distinguished from the nominal area of contact due to surface roughness. These arguments should not apply on the nanoscale. Savkoor has responded to this criticism with a modified model of partial slip, which assumes the tangential stress in the sliding zone to be constant, independent of the normal pressure [14,15]. The value of the constant stress, τ_0, is the free parameter of the model. Savkoor solved the equations of continuum elasticity and derived the force-displacement relations. These relations differ from the CM model in the details, but deciding between the two models based on the shape of the force-displacement loop is somewhat of a challenge. Interestingly, it is rather easy to distinguish between the Savkoor model and the Cattaneo-Mindlin model with the contact resonance method, because the Cattaneo-Mindlin model predicts a linear dependence of frequency and bandwidth on amplitude, while the same relations are parabolic if derived from the Savkoor model. This difference is easily observed in experiment [16].
A second element of the CM model of genuinely macroscopic nature is the notion of a stress singularity at the edge of the contact. It is essential that this peak in stress at the edge is indeed strong enough to locally initiate sliding. Gao and Yao have mathematically analyzed a related problem, namely the detachment of a fiber end from a flat surface under tensile load [17]. Such a contact displays a peak in tensile stress at the edge, which governs the pull-off force if the contact diameter is larger than about 100 nm. Pull-off then results in crack propagation. Partial slip in the Cattaneo-Mindlin sense also results in crack propagation, where the modes of crack opening are II and III, as opposed to mode I, which operates during pull-off [18]. Gao and Yao find that the crack propagation mechanism becomes inefficient once the contact diameter falls to below 100 nm. For small contacts, the stress concentration at the edge becomes less and less significant. Translated to the tangential load problem, the analysis by Gao and Yao shows that the transition from stick to slip may occur by crack propagation (that is, by partial slip in the Cattaneo-Mindlin sense), but that small contacts may also start to slide as a whole. Even if partial slip at individual contacts is found, it is expected to be more prominent for larger contacts because the maximum level of stress depends on the ratio of the contact diameter to the radius of the crack tip.
From an engineering perspective, partial slip (also called microslip) has a slightly different meaning. It mostly denotes a small tangential displacement at contacts between rough surfaces. These small displacements per se have little influence on the strength of the contact. They are still of immense practical relevance because they cause fretting wear [19][20][21], which is a special type of corrosion. Microslip at multicontact interfaces is different from partial slip in the Cattaneo-Mindlin sense because it involves a debonding of the weakly coupled load-bearing asperities and, also, because new contacts can form at large relative displacements [22]. Depending on the distance between the individual load-bearing asperities, these are elastically coupled to each other [23]. If the contacts are tightly coupled, there is crack propagation with a peak in stress at the crack tip. Otherwise, the analysis should be based on an ensemble of contacts with a distribution in contact stiffness and contact stress. Bureau et al. have provided such a model [24], making extensive use of the Greenwood-Williamson formalism [25].
The experiments below rely on the contact resonance method. The contact resonance method is also applied on the macroscopic scale [26] and in AFM-based metrology [27]. In particular, the mathematics is closely related to what was reported in [28] and [29]. Differing from many experiments performed with AFM [30,31], the contacts here have a substructure, and it is this substructure which gives rise to the phenomena under discussion. Also, hysteresis is more important in QCM experiments than in AFM experiments. A contact is established between a resonator (which is a quartz crystal microbalance here and is the cantilever in AFM experiments) and an external object. The geometry is configured such that the contact does not overdamp the resonance, but rather shifts the resonance frequency and the resonance bandwidth by small amounts (termed Δf and ΔΓ below). The contact resonance method is well suited to detect nonlinear force-displacement relations because nonlinear behavior leads to a dependence of Δf and ΔΓ on the amplitude, u_0, while such a dependence is absent when the system obeys a linear response. Partial slip results in a nonlinearity, and whether or not a contact undergoes partial slip can therefore be inferred from the dependence of Δf and ΔΓ on the amplitude. More quantitatively, the Cattaneo-Mindlin model predicts Δf and ΔΓ to scale linearly with u_0 in the low-amplitude limit, and this prediction can be tested easily.
The experiments were undertaken with a quartz crystal microbalance (QCM). The QCM is mostly known as a device for thickness determination, but it can equally well be employed to measure contact stiffness. In this regard, it is helpful to view the QCM as a shear wave reflectometer. The amplitude and the phase of the wave reflected at an interface are related to the stiffness of this interface. Acoustic reflectometry was used to measure contact stiffness as early as 1971 [32]. The work reported below is concerned with discrete contacts (as opposed to a multicontact interface), but the physical picture is closely analogous to what is developed in [32]. The presence of contacts at a resonator surface changes the reflectivity of the resonator surface and thereby changes the resonator's frequency and its bandwidth [26].
There is a different (but equivalent) way of explaining the measurement principle. The resonator can be represented by a lumped element circuit [33], as shown in Figure 2B. The main resonator is at the bottom. Its resonance frequency is given as (1/(2π))(κ_R/M_R)^(1/2), where κ_R is the effective stiffness and M_R is the effective mass. The sample is the small sphere at the top. Because the contact zone is small (Figure 2A), it can be represented by a spring and a dashpot arranged in parallel (a Voigt element). If the resonator is coated with a rigid thin film (or with nanoparticles rigidly attached to the surface), this load increases the resonator's effective mass, thereby lowering the resonance frequency. In the lumped element representation, this amounts to the sphere at the top in Figure 2B being small and the spring being stiff. Applied in this mode, the QCM determines the value of the effective mass, hence the name "microbalance". However, millimeter-sized spheres such as the ones studied here are not samples of this kind. They are so heavy that they do not follow the resonator's MHz motion, but rather are clamped in space by inertia [34]. In the lumped element representation, they are depicted as a wall, attached to the surface across a spring and a dashpot (a Voigt element). It is essential that the contact diameter is much smaller than both the sphere diameter and the wavelength of sound. The deformation is then localized; the bulk of the sphere remains undeformed. The force follows from integration of the stress distribution over the contact area; the displacement is evaluated in the undeformed regions far outside the contact zone. The ratio of force and displacement is the contact stiffness. As we show in the modeling section, the spring constant and the dashpot's drag coefficient can easily be determined from the shifts of frequency and bandwidth. The ratio of the two represents the loss tangent.
The representation of the contact as a Voigt element only holds as long as the contact behaves linearly. Partial slip, however, results in nonlinear behavior. Even in the presence of partial slip, one can use the lumped element representation for the sake of an intuitive understanding. Roughly speaking, the apparent contact stiffness decreases at elevated amplitudes because the sticking portion of the contact decreases. The "apparent contact stiffness" here is the stiffness as derived from the frequency shift (Equation 2 below). This intuitive picture can be backed up with a rigorous mathematical model. We briefly recapitulate the mathematics in the modeling section.
In previous work [10], we have reported details of the experimental setup and elaborated on the mathematical details of what the Cattaneo-Mindlin model and the Savkoor model predict for the functions Δf(u_0) and ΔΓ(u_0). The authors in [10] focused on how the amount of partial slip depends on contact size. For the current work, the sphere size was chosen large enough to always guarantee partial slip. An improved experimental setup allowed for a detailed quantitative analysis in both the linear and the nonlinear regime. All experiments were repeated 9 times, which allows for a robust analysis of statistical errors. Finally, we compare experiments undertaken in air to experiments using the same sample, but immersed in water.
Experimental
Modeling
A QCM loaded with discrete contacts: linear and nonlinear regime
We first consider the viscoelastic contact. According to the small-load approximation, the complex frequency shift at small amplitude is given as [35,36]

Δf + iΔΓ = (i f_F/(π Z_q)) · σ̄_0/(iω u_0) = (f_F/(π Z_q)) · (n_P/A_eff) · (κ + iωξ)/ω   (1)

Δf and ΔΓ are the shifts of the frequency and the half-bandwidth at half-height, respectively. The parameter Γ is related to the dissipation factor, D, by D = 2Γ/f. f_F is the fundamental frequency, which is often 5 MHz. σ̄_0 is the area-averaged complex amplitude of the tangential stress at the resonator surface, and u_0 is the amplitude of oscillation. The ratio of stress and velocity (where the latter is equal to iωu_0) is the complex load impedance, Z_L. In the second step in Equation 1, the stress was converted to force by area. The force, in turn, was expressed as tangential stiffness times amplitude (that is, as κu_0). n is the overtone order, n_P is the number of spheres, and A_eff is the acoustically effective area (similar to the electrode area, A_eff can be derived from the experimental data [10]). κ is the tangential stiffness of an individual contact (to be distinguished from the stiffness of a multicontact interface [22]). The term iωξ accounts for viscous dissipation, where ξ is the drag coefficient. ξ quantifies linear processes in the sense that the stress is proportional to the rate of displacement. No statement is made on the mechanism(s) leading to dissipation. The drag coefficient may be linked to the viscoelastic nature of the materials involved, but also to interfacial processes (as long as these obey linear mechanics). Equation 1 can be inverted as

κ + iωξ = (2π² n Z_q A_eff/n_P) · (Δf + iΔΓ)   (2)

As shown in Equation 2, the complex frequency shift is easily converted to a complex contact stiffness.
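As a concrete illustration of this inversion, the short script below converts a measured pair (Δf, ΔΓ) into κ and ξ. It is a minimal sketch: the prefactor follows Equation 2 as reconstructed above, and all numerical values (A_eff, the number of spheres, and the shifts themselves) are placeholder assumptions rather than data from this work.

```python
import numpy as np

# Placeholder parameters (assumptions for illustration, not values from this work)
Z_q = 8.8e6       # acoustic shear-wave impedance of AT-cut quartz, kg m^-2 s^-1
f_F = 5e6         # fundamental frequency, Hz
n = 1             # overtone order
A_eff = 0.3e-4    # acoustically effective area, m^2
n_P = 3           # number of spheres (tripod)

omega = 2 * np.pi * n * f_F

def contact_stiffness(delta_f, delta_gamma):
    """Invert Equation 2: kappa + i*omega*xi = (2 pi^2 n Z_q A_eff / n_P)(Df + i*DG)."""
    kappa_complex = 2 * np.pi**2 * n * Z_q * A_eff / n_P * (delta_f + 1j * delta_gamma)
    kappa = kappa_complex.real        # spring constant, N/m
    xi = kappa_complex.imag / omega   # drag coefficient, kg/s
    return kappa, xi

kappa, xi = contact_stiffness(delta_f=120.0, delta_gamma=12.0)   # Hz (assumed)
print(f"kappa = {kappa:.3g} N/m, loss tangent = {omega * xi / kappa:.2f}")
```

With these (assumed) numbers the loss tangent evaluates to ΔΓ/Δf = 0.1, the order of magnitude quoted in the abstract.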
Up to now, linear force-displacement relations were assumed. If linearity does not hold, the stress, σ(t), is no longer time-harmonic. In consequence, there is no complex amplitude, σ_0, which could be inserted into Equation 1. Importantly, a nontrivial time dependence can be accounted for in an expanded model. As long as the stress is periodic with the frequency of the resonator (but of any other shape otherwise), the QCM measures the first Fourier component of σ(t). It then follows that [36,37]

Δf + iΔΓ = (f_F/(π Z_q)) · (1/(iω u_0)) · (2/T_p) ∫₀^(T_p) σ(t) exp(−iωt) dt = (f_F/(π Z_q)) · (n_P/A_eff) · (1/(iω u_0)) · (2/T_p) ∫₀^(T_p) F(t) exp(−iωt) dt   (3)

with T_p = 2π/ω the period of oscillation. In the second step in Equation 3, the stress was replaced by the force at the contacts, F(t), multiplied by the number density of the contacts, n_P/A_eff. There is a close analogy between Equation 3 and the principle of operation of lock-in amplifiers. Δf and ΔΓ are proportional to the in-phase and the out-of-phase components of the force.
Underlying both Equation 1 and Equation 3 is the small-load approximation, which states that the load impedance (often called Z_L, the ratio of σ_0 and iωu_0) is much smaller than the acoustic shear wave impedance of the crystal, Z_q. The small-load approximation holds as long as Δf/f_F << 1, which is almost always true. If the load is small in this sense, the magnitude of the force is so small that the motion of the resonator surface remains approximately sinusoidal. Put differently, the QCM surface is under displacement control. For that reason, the time average in Equation 3 can be converted to an average over the displacement, u. Note: In general, the force, F, will not only depend on the displacement, u, but also on the maximum displacement, u_0, and on the frequency, ω. Because the trajectories differ between the two directions of motion, averaging must occur separately for the two directions. The two forces are called F−(u,u_0,ω) and F+(u,u_0,ω) in the following, where the indices "−" and "+" denote movement toward negative and positive u. The chain of algebraic conversions must be F±(u,u_0,ω) → F(t) → {Δf(u_0), ΔΓ(u_0)}. The first entry must be F±(u,u_0,ω), not F(u). By letting the forces depend on u, u_0, and ω, we do not mean to exclude a dependence on velocity. Such a dependence on velocity would implicitly enter F+(u,u_0,ω) and F−(u,u_0,ω), since the velocity itself is a function of u and ω, given as iωu.
The transformation from F±(u,u_0,ω) to Δf(u_0) and ΔΓ(u_0) takes the form of Equation 4 and Equation 5 in [10]. Figure 2C,D shows the content of these relations in graphical form. For viscoelastic contacts, the force-displacement loop is an ellipse. The ratio of force and displacement at the peak (full dots) is independent of the amplitude, u_0, and Δf therefore also is independent of u_0. The area inside the friction loop scales as u_0², and ΔΓ therefore also is independent of u_0. This may change if the force-displacement loop takes some other shape. Figure 2D shows the force-displacement loop according to Cattaneo and Mindlin. For contacts following the CM model, Δf and ΔΓ decrease and increase with amplitude, respectively.
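The content of this transformation can be made concrete with a short numerical sketch: drive a given pair of force branches with u(t) = u_0 cos(ωt) and project the resulting periodic force onto its first Fourier component, as prescribed by Equation 3. The script below does this for a Cattaneo-Mindlin loop built from the virgin loading curve via the Masing construction. It is a sketch under stated assumptions: the force law follows the reconstruction of Equation 6 below, the linearized coefficients printed for comparison are those of the reconstructed Equation 11, and all parameter values are arbitrary.

```python
import numpy as np

def virgin(u, kappa, mu_FN):
    """CM virgin loading curve, odd in u (cf. the reconstructed Equation 6)."""
    x = np.minimum(2 * kappa * np.abs(u) / (3 * mu_FN), 1.0)  # clipped at full sliding
    return np.sign(u) * mu_FN * (1.0 - (1.0 - x) ** 1.5)

def apparent_stiffness(u0, kappa, mu_FN, n_theta=4000):
    """First Fourier component of the CM hysteresis loop under displacement control."""
    theta = np.arange(n_theta) * 2 * np.pi / n_theta
    u = u0 * np.cos(theta)
    F0 = virgin(u0, kappa, mu_FN)
    F = np.where(theta < np.pi,                       # theta < pi: u decreasing
                 F0 - 2 * virgin((u0 - u) / 2, kappa, mu_FN),
                 -F0 + 2 * virgin((u0 + u) / 2, kappa, mu_FN))
    dtheta = 2 * np.pi / n_theta
    Fc = (F * np.cos(theta)).sum() * dtheta / np.pi   # in-phase   -> Delta f
    Fs = (F * np.sin(theta)).sum() * dtheta / np.pi   # quadrature -> Delta Gamma
    return Fc / u0, -Fs / u0                          # elastic and dissipative parts

kappa, mu_FN = 1.0e4, 1.0e-3                          # N/m, N (assumed)
for u0 in (1e-9, 5e-9, 2e-8):
    k_el, k_diss = apparent_stiffness(u0, kappa, mu_FN)
    print(f"u0 = {u0:.0e} m: k_el/kappa = {k_el/kappa:.4f} "
          f"(CM: {1 - kappa*u0/(6*mu_FN):.4f}), "
          f"k_diss/kappa = {k_diss/kappa:.4f} "
          f"(CM: {2*kappa*u0/(9*np.pi*mu_FN):.4f})")
```

The numerically obtained stiffness decreases, and the dissipative part increases, linearly in u_0 at small amplitude, which is exactly the behavior sketched in Figure 2D.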
Partial slip and its consequences for a QCM experiment: predictions derived from the Cattaneo-Mindlin model and the Savkoor model
In Cattaneo-Mindlin theory, the tangential force, F_x, and the tangential displacement, u, are related as [8]

F_x = µF_N [1 − (1 − 2κu/(3µF_N))^(3/2)]   (6)

F_N is the normal force and µ is the friction coefficient in the Coulomb sense. No distinction is made between the static and the dynamic friction coefficient. κ = 2G*a is the contact stiffness in the low-amplitude limit, where a is the contact radius and G* is an effective modulus. The frequency shift, Δf, is related to the contact stiffness, κ, by Equation 2. The effective modulus, G*, is given as

1/G* = (1/4) [(2 − ν_1)/G_1 + (2 − ν_2)/G_2]   (7)

G and ν are the shear modulus and the Poisson ratio, respectively. The indices 1 and 2 label the contacting media. Given that the contact diameter can be estimated to be larger than 1 µm, we ignore the thin films present (SiO2, PMMA, gold) and use the same values on both sides.
For the sake of quantitative modeling (see Figure 5 below), we keep the Poisson number fixed at ν_1 = ν_2 = 0.17 and express the shear modulus as

G = E/(2(1 + ν))   (8)

where E is the Young's modulus, which serves as a fit parameter. The contact radius, a, is assumed to obey the JKR equation, which is

a³ = (3R/(4E*)) (F_N + 3πγR + [6πγR F_N + (3πγR)²]^(1/2))   (9)

where R is the (known) sphere radius, γ is the energy of adhesion, and E* is another effective modulus, given as

1/E* = (1 − ν_1²)/E_1 + (1 − ν_2²)/E_2   (10)

As before, ν_1 ≈ ν_2 ≈ 0.17 is assumed. Also, E_1 was assumed to be the same as E_2 (E_1 ≈ E_2 = E), with E a fit parameter. The energy of adhesion, γ, was also a fit parameter. All other parameters were fixed.
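A small numerical sketch of this modeling step is given below: it evaluates the JKR contact radius of Equation 9 and the resulting low-amplitude stiffness κ = 2G*a as a function of the normal load. All material and geometry values are illustrative assumptions (glass-like constants, a hypothetical mass per contact), not the fit results of this work.

```python
import numpy as np

def jkr_radius(F_N, R, gamma, E_star):
    """Contact radius from the JKR relation (Equation 9)."""
    load = F_N + 3 * np.pi * gamma * R + np.sqrt(
        6 * np.pi * gamma * R * F_N + (3 * np.pi * gamma * R) ** 2)
    return (3 * R * load / (4 * E_star)) ** (1.0 / 3.0)

# Assumed, glass-like material values (illustration only)
E, nu = 70e9, 0.17
E_star = E / (2 * (1 - nu**2))       # Equation 10, identical media
G = E / (2 * (1 + nu))               # Equation 8
G_star = 2 * G / (2 - nu)            # Equation 7 (as reconstructed), identical media
R = 1.1e-3                           # sphere radius, m
gamma = 0.05                         # energy of adhesion, J/m^2 (assumed)

for mass in (0.5e-3, 1.0e-3, 3.0e-3):          # mass per contact, kg (assumed)
    F_N = mass * 9.81
    a = jkr_radius(F_N, R, gamma, E_star)
    print(f"F_N = {F_N:.4f} N -> a = {a*1e6:.1f} um, kappa = {2*G_star*a:.3g} N/m")
```

The contact radius comes out in the micrometer range, consistent with the statement above that the thin films can be ignored.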
Inserting the force-displacement relation from Equation 6 into Equation 4 and Equation 5 and, further, expanding the result to first order in u_0, one finds [10]

Δf(u_0) = Δf_0 (1 − κu_0/(6µF_N))   (11)

where Δf_0 denotes the low-amplitude limit of the frequency shift from Equation 2. At this point, we slightly extend the CM model by including viscous dissipation. On a heuristic basis, we add a viscous term, ΔΓ_0, into ΔΓ, which accounts for dissipative processes with a linear dependence on stress:

ΔΓ(u_0) = ΔΓ_0 + Δf_0 · 2κu_0/(9πµF_N)   (12)

As Equation 12 shows, the Cattaneo-Mindlin theory predicts Δf and ΔΓ to depend linearly on u_0.
Savkoor [14,38] has formulated a modified model of partial slip, which assumes the traction in the sliding zone to be constant, with a value of τ_0, rather than being proportional to the normal stress as in CM theory. The force-displacement relation resulting from the Savkoor model (Equation 13) involves the radius of the contact, a, and the radius of the sticking area, c (Equation 14). Inserting Equation 13 into Equation 4 and Equation 5 and, further, expanding the result to second order in u_0 [10], one finds that the dependence of Δf and ΔΓ on amplitude is parabolic (Equation 15), whereas it is linear in the CM model.
Experimental details
The geometry of the experiment was based on a tripod configuration as shown in Figure 3. Three glass spheres with a diameter of either 2.2 mm or 1.2 mm were glued to a backing plate in the form of an equilateral triangle. The tripod was placed onto the center of the plate, where the distance of the individual contacts to the center was less than 3 mm. The three points of contact experience the same normal force and the same amplitude of motion. The weight of the tripod alone was 0.5 g. Additional weights between 0.5 and 2.5 g were added onto the backing plate, thereby increasing the normal force. There was a frame with a cylindrical hole around the backing plate, which prevented its lateral movement. With this frame in place, the sample did not shift laterally when the weight was added. The frame was essential for obtaining reproducible results. Shifts of frequency and bandwidth were acquired with impedance analysis. One frequency sweep took about 1 s. Each amplitude ramp consisted of 10-15 steps. All ramps were repeated four times (two increasing and two decreasing ramps). The first ramp often gave results different from the following three ramps. This type of running-in behavior was not further investigated. Most of the time, the data from ramps 2-4 agreed with each other within the experimental error. In particular, there were no systematic differences between increasing and decreasing ramps. Occasionally, a slow drift was superimposed onto the ramps. Quartz resonators respond to changes in temperature and static stress with slow drifts. Drifts can be reduced by mounting the crystals in the holder one day before the experiment and by controlling temperature, but they cannot be avoided altogether. Experiments were undertaken in ambient air with no additional control of temperature or humidity. For further details on the experiment (on the processing of raw data and on the calculation of the amplitudes, in particular) see [10]. Experiments were carried out with either SiO2-coated resonators (purchased from Inficon) or PMMA-coated resonators. The thickness of the spin-cast PMMA layer was 250 nm. Previous experiments did not find evidence of an influence of the thickness of a glassy polymer on the contact stiffness.
All experiments were carried out in both air and water. Deionized water was used throughout, but the water was not degassed. A sample, which had previously been studied in air, was flooded with water. The water level was about 3 mm; the exact height was not an important parameter, however, because the QCM only senses the conditions inside the first micron of a liquid sample.
Results and Discussion
Figure 4 shows a number of amplitude sweeps. The four graphs at the top and the four graphs at the bottom display data acquired in air and in water, respectively. Because water damps the crystal's resonance, the maximum amplitude achieved was 6 nm (compared to an amplitude of ≈20 nm in air). Δf(u_0) is always a decreasing function of the amplitude, u_0 (panels on the left-hand side), while ΔΓ increases with u_0 (on the right). Figure 4A,B,E,F displays what was observed most of the time (in >80% of the experiments): Δf and ΔΓ were linear functions of u_0. Occasionally, the data show a plateau at small amplitudes. These plateaus have been discussed in detail in [39]. They can be associated with a critical minimum amplitude for partial slip. A plateau often occurred for the small spheres (diameters <500 µm) examined in [10]. Further discussion is outside the scope of this work. Large spheres were chosen here in order to achieve a linear dependence of Δf and ΔΓ on u_0. If linear behavior is observed, the complex spring constant in the low-amplitude limit is readily extracted from the data by extrapolation (see Figure 5 and Figure 7 below). Likewise, the friction coefficient as derived from the slopes of Δf(u_0) and ΔΓ(u_0) is a robust parameter (see Figure 8 below).
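The extraction of the low-amplitude limits and of the slopes amounts to a straight-line fit of each ramp. A minimal sketch follows; the amplitude ramp is synthetic (not data from this work), and the conversion of the slopes into apparent friction coefficients relies on Equations 2, 11, and 12 in the reconstructed form given above.

```python
import numpy as np

# Synthetic amplitude ramp (assumed values, for illustration only)
u0 = np.linspace(2e-9, 20e-9, 12)        # amplitude, m
df = 130.0 - 2.0e9 * u0                  # Delta f, Hz
dG = 10.0 + 0.9e9 * u0                   # Delta Gamma, Hz

slope_f, df0 = np.polyfit(u0, df, 1)     # intercepts = low-amplitude limits
slope_G, dG0 = np.polyfit(u0, dG, 1)

# Equation 2 prefactor (assumed values: n = 1, Z_q, A_eff, n_P as before)
C = 2 * np.pi**2 * 1 * 8.8e6 * 0.3e-4 / 3
kappa = C * df0                          # low-amplitude contact stiffness, N/m

F_N = 0.01                               # normal force per contact, N (assumed)
mu_from_f = -kappa * df0 / (6 * F_N * slope_f)             # from Equation 11
mu_from_G = 2 * kappa * df0 / (9 * np.pi * F_N * slope_G)  # from Equation 12
print(f"Delta f_0 = {df0:.0f} Hz, Delta Gamma_0 = {dG0:.1f} Hz, kappa = {kappa:.3g} N/m")
print(f"mu from Delta f: {mu_from_f:.2f}, mu from Delta Gamma: {mu_from_G:.2f}")
```

With the assumed numbers, the two routes to the friction coefficient agree to within a few percent, mirroring the level of agreement reported in Figure 8.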
Very rarely, we see an increase of Δf with amplitude (data not shown). This behavior might tentatively be associated with junction growth [12]. Most of the time, Δf and ΔΓ decrease and increase with amplitude, respectively, which is characteristic for partial slip.
Figure 4: All data sets contain four amplitude sweeps. Data shown in the four panels at the top and the four panels at the bottom were acquired in air and in water, respectively. In liquids, the maximum achievable amplitude is lower than in air because of damping. Δf and ΔΓ decrease and increase with amplitude, respectively, as is characteristic for partial slip. Panels A, B, E, and F show typical data traces. In these cases, the amplitude dependence is linear. Occasionally, one also finds plateaus at small amplitudes (dashed ellipses in panels C, D, G, and H). In these cases, the edge of the contact sticks at small amplitudes, where the exact conditions under which such sticking occurs are unclear. Even in these cases, the frequency-amplitude traces are clearly not parabolic (which should result if the Savkoor model were applicable).
Figure 5 shows the low-amplitude limits of Δf for the three different configurations studied. Full and open symbols correspond to data taken in air and in water, respectively. The fact that Δf_0 increases with normal load is easy to understand: with increasing load, the contact radius increases, and the contact stiffness increases correspondingly. The dotted lines show an attempt to bring this understanding in line with the known models of contact mechanics. We fitted the data with the JKR model. (The Tabor parameter of the geometry under study is 10, which says that the JKR model should be applied rather than the DMT model.) Table 1 shows the derived values of the interfacial energy, γ, and the effective Young's modulus, E. While the values are reasonable, they scatter quite significantly between the different experiments and the different configurations. As far as the interfacial energy, γ, is concerned, part of the problem is that the loads are rather high.
A more reliable determination of γ would require more data points close to the point of zero added weight. Clearly, the numbers must be interpreted with some caution. Possible sources of artifacts are roughness, contamination, and, of course, the idealized assumptions of the model. The high excitation frequency may also play a role. A systematic comparison with the tangential contact stiffness determined at low frequencies would certainly be worthwhile. Unfortunately, such experiments are difficult.
The contact stiffness increased when the sample was immersed in water. Note: The contacts were not broken between the two experiments. Water was admitted to the sample compartment without removing the spheres from the resonator. An increased stiffness in water contradicts intuition insofar as one would expect the liquid to lower the effective van der Waals attraction.
With lowered adhesive forces, the contact area should decrease and the contact stiffness should decrease, in consequence. However, this was not observed. The contact stiffness increased by about 10% in all cases.
At this point, the high frequency of the measurement presumably comes into play, in the sense that the small compressibility of the liquid contributes to the contact stiffness. Figure 6 provides a sketch. When the resonator surface oscillates tangentially, the material close to the contact mostly responds with a tangential movement, but one can also expect a small amount of rotation. The rotational component changes the width of the liquid wedge close to the contact, thereby inducing a squeeze flow of liquid. However, the mass involved in this movement is so large that inertia strongly resists the flow. (The sphere itself is clamped in space for the same reason.) Because of inertial clamping, the sphere's rocking motion compresses the liquid, and the liquid responds elastically to compression. The liquid's high bulk modulus in this way stiffens the contact. Again, this effect is genuinely linked to the experiment occurring at MHz frequency. It will be important when applying this methodology to biomaterials (which are usually studied in the liquid phase). The above interpretation clearly is tentative. Roughness may also play a role. When water fills the micro-voids between the two surfaces, this may also increase the elastic stiffness of the contact. Figure 7 addresses the linear components of the dissipative processes, quantified by the low-amplitude limit of ΔΓ, termed ΔΓ_0. In Figure 7B, ΔΓ_0 was converted to a loss tangent by taking the ratio of ΔΓ_0 and Δf_0. Interestingly, ΔΓ does not increase with normal load in the same way as Δf; it stays approximately constant. For that reason, the loss tangent is a decreasing function of the normal load. This result implies that the finite values of ΔΓ_0 should not be viewed as a consequence of viscous dissipation inside the materials involved. If ΔΓ/Δf were a materials parameter, it should not depend on the normal load. Also, a loss tangent of 0.1 would be unreasonably high for fused silica.
Rather, these dissipative processes should be attributed to the interface. Linear contributions to the dissipation in contact resonance experiments are well known [8,40]. While the exact nature of these processes would be interesting, the present experiments do not allow for a statement other than that they must be connected to interfacial friction in one way or another.
Figure 7: Full and open symbols denote measurements in air and water, respectively. The fact that ΔΓ is a constant independent of the normal load suggests interfacial processes as the source of dissipation. If the dissipation were to occur in the material, one would expect the loss tangent to be constant, rather than ΔΓ itself, because a materials parameter should not depend on the normal load.
So far, the discussion has been concerned with linear contact mechanics. The experiment is easy, and there are few other techniques that give access to the same data (mostly AFM and ultrasonic reflectometry). Importantly, the QCM also accesses the (weakly) nonlinear regime, and it does so rather easily as well. As shown in Figure 4, most data sets show a linear dependence of Δf and ΔΓ on u_0. In the following, we use these data to derive the apparent friction coefficient from the slopes, following Equation 12. Figure 8 displays these apparent friction coefficients. Firstly, the two ways to derive the friction coefficient (from Δf(u_0) and ΔΓ(u_0)) give reasonable agreement with each other. Secondly, the friction coefficients that result are in the range known from macroscopic mechanics (that is, on the order of unity). Thirdly, and importantly, the friction coefficients all decrease with normal load. The larger the contact area, the more pronounced is the partial slip. This finding is in line with the treatment of the pulling problem by Gao and Yao referred to in the Introduction. Partial slip occurs if the stress singularity at the edge is strong. The peak stress depends on the ratio of the contact radius to the radius of the crack tip and therefore increases as the normal force becomes larger. A different (but related) explanation builds on nanoscale roughness. Nanoroughness rounds off the stress profile at the edge, which avoids the stress singularity, similarly to a finite radius of a crack tip. The load dependence of µ points to yet another benefit of "contact splitting" [40,41]. A large number of small contacts will experience less partial slip (less fretting wear) than a small number of correspondingly larger contacts. A side remark: The agreement between the two friction coefficients (determined from Δf(u_0) and ΔΓ(u_0)) is better in water than in air. We suspect that capillary forces affect ΔΓ(u_0) more strongly than Δf(u_0). A more detailed discussion of the matter would require an extension of the Cattaneo-Mindlin model by specific contributions from different forces. Such an extension is outside the scope of this work, but it is possible. It would even be worthwhile if the role of contact mechanics in acoustic sensing is to be expanded.
Conclusion
Using a QCM-based contact resonance method, the stiffness of sphere-plate contacts was studied at MHz frequencies. The linear contact stiffness increases with normal load. A fit using JKR theory is possible. The fit parameters are in the expected range, but there is a significant amount of scatter between experiments. A quantitative interpretation must be undertaken with some care. The contact stiffness increases in the presence of a liquid. Possibly, this increase is rooted in a squeeze flow close to the edge of the contact. The loss tangent is of the order of 0.1 and decreases with the normal force, F_N. The F_N-dependence suggests that the dissipation is connected to interfacial processes. At elevated amplitudes, partial slip was observed. The amplitude dependence of frequency and bandwidth can be fitted with the Cattaneo-Mindlin model, which suggests that the frictional forces are proportional to the normal pressure, as in macroscopic friction. The friction coefficients were found to be on the order of unity. The friction coefficients as derived from Δf(u_0) and ΔΓ(u_0) agree with each other reasonably well. The agreement is better in water than in air. Finally, the friction coefficients were found to decrease slightly with increasing normal force (that is, with increasing contact area). This can be explained by the finite radius of the crack tip at the edge of the contact or by nanoscale roughness. These effects are most pronounced for the smallest contacts. Contact splitting can lower the amount of partial slip and fretting wear. | 8,570.8 | 2015-03-30T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
The holographic dual of a Riemann problem in a large number of dimensions
We study properties of a non-equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the "phase diagram" associated with the steady state; the dual, dynamical, black hole description of this problem; and its relation to the fluid/gravity correspondence.
Introduction
The Riemann problem may provide a relatively simple setting in which to study the non-equilibrium physics of quantum field theory. The problem asks for the time evolution of piecewise constant initial conditions with a single discontinuity in the presence of some number of conservation laws, for example of energy, momentum, mass, or charge. In our case, we consider a fluid phase of a conformal field theory (CFT) with an initial planar interface, where the energy density jumps from e_L on the left of the interface to e_R on its right. We also allow for a discontinuity in the center of mass velocity of the fluid across the interface.
For simplicity, we will make a number of further restrictions. We assume a conformal field theory that has a dual gravity description via the AdS/CFT correspondence. A priori, this will allow us to study the system beyond the hydrodynamic limit. We also take the limit that the number of spatial dimensions d is very large. In this limit, we find that the system is described by two conservation equations,

∂_t e + ∂_z j = ∂_z² e ,   ∂_t j + ∂_z (e + j²/e) = ∂_z² j ,   (1.1)

where e is, up to gradient corrections, the energy density and j the energy current. These equations are a special case of equations derived in ref. [1]. In these variables, the Riemann problem amounts to a determination of e and j given an initial configuration of the form

(e, j) = (e_L, j_L) for z < 0 ,   (e, j) = (e_R, j_R) for z > 0 .   (1.2)

By choosing an appropriate reference frame, we may set j_L = 0 without loss of generality.
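To make the role of the two equations concrete, the following sketch integrates them numerically for Riemann initial data. It uses the form of (1.1) as reconstructed above and a deliberately naive finite-difference scheme; the grid size, time step, and initial values e_L and e_R are arbitrary illustrative choices, not those of any figure in this work.

```python
import numpy as np

# Evolve  e_t + j_z = e_zz ,  j_t + (e + j^2/e)_z = j_zz   (cf. Eq. (1.1))
# for Riemann initial data, with forward-Euler time stepping (sketch only).
L, N = 200.0, 2000
z = np.linspace(-L / 2, L / 2, N)
dz = z[1] - z[0]
dt, steps = 2e-4, 50000                      # dt < dz^2/2 for the diffusive terms

e = np.where(z < 0, 2.0, 1.0)                # e_L = 2, e_R = 1 (assumed)
j = np.zeros(N)                              # j_L = j_R = 0

def d1(f):                                   # first derivative, central differences
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - f[:-2]) / (2 * dz)
    g[0], g[-1] = 0.0, 0.0                   # constant far-field states
    return g

def d2(f):                                   # second derivative
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dz**2
    g[0], g[-1] = 0.0, 0.0
    return g

for _ in range(steps):
    e, j = (e + dt * (d2(e) - d1(j)),
            j + dt * (d2(j) - d1(e + j**2 / e)))

# At late times, e and j develop a pair of outward-moving waves enclosing a
# plateau: the non-equilibrium steady state discussed in the text.
```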
As it happens, there are extensive treatments of this type of Riemann problem in hydrodynamics textbooks. See for example ref. [2]. Typically, a pair of rarefaction and/or shock waves form and move away from each other, creating in their wake a region with almost constant e and j. In recent literature, this intermediate region has been called a non-equilibrium steady state (NESS) [3,4]. One of the main results of this paper is a "phase" diagram valid in a large d limit (see figure 1) that describes, given the conservation equations (1.1) and initial conditions (1.2), which pair of waves are formed: rarefaction-shock (RS), shock-shock (SS), shock-rarefaction (SR), or rarefaction-rarefaction (RR). A physical reason for the preference of a rarefaction wave to a shock wave is entropy production.
Recent interest in this type of Riemann problem was spurred by a study of the problem in 1+1 dimensional conformal field theory [3], where the evolution is completely determined by the conformal symmetry and a hydrodynamic limit need not be taken. Conservation and tracelessness of the stress tensor imply that the stress tensor is a sum of right moving and left moving parts. When j_R = j_L = 0, one finds a NESS in between the two asymptotic regions, characterized by an energy density (e_R + e_L)/2 and an energy current proportional to e_R − e_L. The NESS is separated from the asymptotic regions by outward moving shock waves traveling at the speed of light. (An extension of the analysis of [3] which includes a discontinuity in the center of mass velocity, holomorphic currents and chiral anomalies can be found in [5]. An analysis of shock waves and their relation to two dimensional turbulence was carried out in [6].) In more than two space-time dimensions, conformal symmetry alone is not enough to specify the evolution completely, and one needs additional assumptions about the structure of the conserved currents. Recent work appealed to the gauge/gravity duality [7-10], an analogy with 1+1 dimensions [5], and hydrodynamics [7,11-13]. These papers focused on the case j_R = j_L = 0 and e_L > e_R, such that from a hydrodynamic perspective a left moving rarefaction wave and a right moving shock wave are expected to emerge.
The distinction between rarefaction and shock waves was ignored in some of these papers [5,7,11]. Indeed, when working with 2+1 or 3+1 dimensional conformal field theories, the difference between, say, an SS solution to the Riemann problem and an RS solution to the Riemann problem is very small for all but extreme initial energy differences. As the spacetime dimension d increases, however, the difference between a rarefaction wave type of solution and a shock wave solution becomes significant [13]. This amplification of the difference between the two solutions serves as a motivator for studying this Riemann problem in a large number of dimensions. Interestingly, a large d limit has independently been a topic of recent interest [1,14-25] in the study of black hole solutions to Einstein's equations. Of particular relevance to our work is the connection between black holes in asymptotically AdS spaces and hydrodynamics [26]. Certain strongly interacting conformal field theories are known to have dual classical gravitational descriptions. In the limit where these conformal field theories admit a hydrodynamic description, a solution to the relevant hydrodynamic equations can be mapped to a solution of Einstein's equations, in a gradient expansion where physical quantities change slowly in space and time. Transport coefficients such as the shear viscosity are fixed by the form of Einstein's equations. Thus, one may study the Riemann problem in conformal field theories with a large number of dimensions by studying an equivalent Riemann-like problem involving an initially discontinuous metric of a black hole in an asymptotically AdS background.
Figure 1. A phase diagram for the solution to the Riemann problem in a large d limit. Given a pair (e_L, 0) and (e_R, j_R), the selection of shock and rarefaction waves is determined by the value of e_R/e_L and j_R/e_L. The dashed and solid lines are "critical": the dashed line indicates the values of (e_R, j_R) connected to (e_L, 0) by a single rarefaction wave, while the solid line indicates the values of (e_R, j_R) connected to (e_L, 0) by a single shock wave.
Given that extensive analyses of conservation equations like (1.1) can be found in many hydrodynamics textbooks and papers, one can legitimately ask why we bother to redo the analysis here. The reason is that when working in a large number of dimensions, one can solve for the black hole metric exactly, independent of the derivative expansion (which is naturally truncated), thus obtaining an exact solution to the Riemann problem which includes possible viscous terms and is in general valid even when gradients of thermodynamic quantities are large (as is the case with discontinuous initial conditions).
Our work is organized as follows. In section 2, we rederive the equations (1.1) by taking a large d limit of Einstein's equations. We show how to rewrite them as the conservation condition on a stress tensor, ∂_µ T^{µν} = 0. In section 3, we compare the large d stress tensor and equations of motion to those arising from the fluid-gravity correspondence [26]. We find that both eqs. (1.1) and the stress tensor T^{µν} are equivalent to the hydrodynamic equations that come from the fluid-gravity correspondence at large d, at least up to and including second order gradient corrections. In the same section, we also construct an entropy current J^µ_S using an area element of the black hole horizon and show that the divergence of the entropy current is positive, ∂_µ J^µ_S ≥ 0, in this large d limit. In section 4, we solve the Riemann problem for eqs. (1.1) and derive the phase diagram given in figure 1. Finally, we conclude in section 5 with some directions for future research. Appendix A contains a short calculation of the entropy produced across a shock, while appendix B contains plots of auxiliary numerical results.
2 The holographic dual of the Riemann problem for large d
We wish to construct a holographic dual of the Riemann problem. Consider the Einstein-Hilbert action with a negative cosmological constant (2.1). A canonical stationary solution of the resulting equations of motion is the black brane solution (2.2), where T is an integration constant which denotes the Hawking temperature. The solution (2.2) is dual to a thermal state of a conformal field theory with temperature T. For instance, the thermal expectation value of the stress tensor in such a state is given by (2.3), in which the pressure P = p_0 T^(d−1) appears, with p_0 a theory-dependent dimensionless parameter. (The indices µ and ν run over the d − 1 dimensions of the (d − 1)-dimensional CFT.) As discussed in [8], a dual description of the Riemann problem necessitates an initial black hole configuration which is held at some fixed temperature T_L for all z < 0 and at a different temperature T_R for z > 0. This corresponds to a configuration where the expectation value of the stress tensor is given by (2.3) with T = T_L for z < 0 and by (2.3) with T = T_R for z > 0. Since the initial black hole is out of equilibrium, it will evolve in time. Its dual description will provide the solution for the time evolution of the stress tensor which we are after. Thus, our goal is to solve the equations of motion following from (2.1) and use them to construct the dual stress tensor.
An ansatz for the metric which is compatible with the symmetries and our initial conditions is one in which the metric components are functions only of t, r, and z. (A more general ansatz, which involves a transverse velocity, can be found in [1].) A numerical solution of the equations of motion for g_tt, g_tz, and g_ii (i = x_⊥ or z) with smoothened initial conditions has been obtained for d = 4 in [8] for relatively small initial temperature differences, (T_L − T_R)/(T_L + T_R) < 1.
A solution for finite d > 4 and for large temperature differences, (T_L − T_R)/(T_L + T_R) ∼ 1, is challenging.
In an appropriate gauge, the near-boundary expansion of the metric takes the form (2.6). Thus, in the large d limit, at any finite value of r, the spacetime looks like the AdS vacuum. Only by keeping R = r^n finite, with n ≡ d − 1, will the O(r^(−n)) corrections to the metric remain observable. Our strategy is to solve the equations of motion in the finite R region subject to the boundary conditions (2.6). Following [1], we also use the scaling x_⊥ = χ/√n and z = ζ/√n, so that in this coordinate system the line element takes the form (2.7), with the definitions (2.8). (In a slight abuse of notation, i is now either χ_⊥ or ζ.) We have used the letters E and J to emphasize these quantities' (soon to be seen) close connection with an energy density and energy current in the dual hydrodynamic description.
One can now solve the equations of motion order by order in 1/n. The equations of motion are simply Einstein's equations in the presence of a negative cosmological constant; we set L = 1 for convenience. Let a and b index the t, r, and ζ directions only, while i and j index the remaining perpendicular directions. Furthermore, let R̃_ab be the Ricci tensor with respect to the three-dimensional metric in the t, r, and ζ directions; the Einstein equations then decompose accordingly. Imposing that the boundary metric is Minkowski and choosing a near-boundary expansion of the form (2.6), we find a solution whose leading metric functions are given in terms of e and j; the O(n^(−2)) correction to g_tt and the O(n^(−3)) contributions to g_ζζ are too long to write explicitly. The functions e and j are functions of t and ζ only and must satisfy the additional constraints (1.1). Equations (1.1) are identical to those obtained in [1,14]. We can rewrite them in terms of a conservation law, ∂_µ T^{µν} = 0, with T^{µν} given by (2.14), where g is an arbitrary function appearing in the stress tensor. Likewise, the functions e_2 and j_2 must also satisfy a set of equations, which can be obtained from the conservation of the stress tensor (2.16). We will use a prime and ∂_ζ interchangeably in what follows.
Comparison with hydrodynamics
Let us pause to understand (2.14). Within the context of the gauge-gravity duality, it is possible to construct a solution to the Einstein equations which is perturbative in t, ζ, and χ_⊥ derivatives of the metric components [26]. Such a perturbative solution to the equations of motion, which is available for any dimension d [27,28], allows for a dual description of the theory in terms of fluid dynamical degrees of freedom.
Stress tensor from fluid-gravity correspondence
To construct the dual hydrodynamic description of a slowly varying black hole, we boost the black hole solution (2.2) by a constant velocity u^µ in the t, z, x_⊥ directions; the resulting line element is (3.1). Allowing u^µ and T to become spacetime dependent implies that (3.1) will get corrected. By taking the gradients of u^µ and T to be small, one can solve for the corrections to (3.1) order by order in derivatives, so that the line element takes the schematic form of a zeroth order piece plus gradient corrections, where ds²_(i) denotes the ith order gradient corrections to the line element. The stress tensor T^{µν} which is dual to (3.1) takes the form (3.3), also expanded in gradients. One finds [27,28] that the leading-order T^{µν} is nothing but a boosted version of (2.3), followed, in the Landau frame, by first and second order gradient corrections. (Note that our definition of σ^{µν} is somewhat unconventional.) An initial analysis of third order gradient corrections has been carried out in [29] for d = 5. A full analysis of all third order transport terms for arbitrary dimension d is currently unavailable. Since (2.14) has been obtained from a large d limit of a gravitational dual theory, we expect that (2.14) coincides with (3.3) when the former is expanded in derivatives and the latter is expanded around large n = d − 1. In short, we expect that taking a gradient expansion commutes with taking a large d limit. To make a direct comparison, let us consider the hydrodynamic stress tensor (3.3) in the t, ζ, χ_⊥ coordinate system introduced in section 2. One important effect of this rescaling is to keep the sound speed an order one quantity. Scaling the spatial component of the velocity field by 1/√n, and maintaining that the energy density ε = (d − 2)P is finite in the large d limit, we find the constitutive relation (3.11), in which O(∂³) denotes third order and higher derivative corrections. Note that this constitutive relation for the stress tensor includes and encodes the large d limit of the transport coefficients (3.7). Now, we insert the redefinitions into the large d constitutive relation for the stress tensor (2.14), use the large d stress tensor conservation equations (1.1), and throw out terms that have three or more derivatives. We claim that in this fashion we recover the stress tensor (3.11) of the gradient expansion. Thus, the large d limit and the gradient expansion seem to commute. Note that while the conservation equations (1.1) are of second order in gradients of ζ and t, the stress tensor includes at least second order gradients. The implications of (3.13) are worth emphasizing. The equations of motion (1.1) are equivalent to the standard equations of motion of relativistic hydrodynamics when the latter are expanded in a large d limit. When working with the e and j variables, one obtains equations of motion which are second order in derivatives and therefore include dissipative effects. When carrying out a frame transformation to the more traditional Landau frame, more derivatives will appear. When considering the stress tensor associated with the equations of motion (1.1), one obtains more terms with higher gradients which do not contribute to the equations of motion. It would be interesting to see if one can construct an alternative to the Israel-Stewart theory using a "large d frame" where gradients naturally truncate.
Entropy from Gravity
Within the context of our forthcoming analysis, it is instructive to compute the dual entropy production rate which is associated with the evolution of the horizon. Due to its teleological nature, it is usually difficult to identify the location of the event horizon. However, in the large d limit the analysis is somewhat simplified. Let us look for a null surface of the form R = r_h(t, ζ), whose normal is given in (3.14). Demanding that Ξ² = 0 at R = r_h implies, to leading order in the large d limit, the condition (3.15). The spacetime singularity which exists in our solution implies that an event horizon must be present. Since the only null surface available is (3.15), it must be the location of the event horizon. Subleading corrections to the location of the event horizon can be computed as well. To compute the change in the black hole entropy over time, we compute the area form of the event horizon. Following the prescription of [30], we find an entropy current expressed in terms of h, the spatial (t = constant) part of the induced metric on the horizon (3.21). Thus, up to an overall factor of 1/n, the entropy current is built from the components (e, j − e′, . . .),
where we have normalized the entropy density so that it is compatible with our conventions for the energy density. The second law of black hole thermodynamics amounts to positivity of the entropy production. In our large d limit we find the divergence of the entropy current, (3.24). The expectation from hydrodynamics, to second order in derivatives, is that the divergence of the entropy current is given by (3.25). (See for example (8) of ref. [31].) This expectation matches (3.24) on the nose. Note that to leading order in the large d limit the entropy current vanishes. This somewhat surprising feature of the large d limit follows from the fact that entropy production terms are suppressed by inverse powers of the dimension in the large d limit. Another way of understanding this suppression comes from thinking about the temperature T ∼ e^{1/(d−1)}. In the large d limit, T is constant to leading order in d. From the thermodynamic relation de = T ds, it then follows that changes in energy are proportional to changes in entropy, and entropy conservation follows from energy conservation at leading order in a large d expansion. 1
Near equilibrium steady states
We now analyze the dynamics controlled by the partial differential equations (1.1) which encode the dynamics of an out of equilibrium black hole (2.5) and its dual stress tensor (2.14). Various related holographic analyses can be found in [32][33][34][35][36][37][38][39][40][41]. As discussed in the introduction, the particular question we would like to address is a Riemann problem: What is the time evolution following from an initial condition (1.2)? We are particularly interested in the steady state solution which will emerge at late times. For convenience we will consider a reference frame for which j L = 0. Indeed, if e(x, t) and j(x, t) satisfy the conservation equations (1.1), then so do e(x − vt, t) and j(x − vt, t) + ve(x − vt, t). Thus, for constant values of e and j, we can choose a v such that j will be set to zero. The non-relativistic nature of the boost symmetry reflects the fact that the large d limit we have taken is effectively a non-relativistic limit where the speed of light c ∼ √ d has been pushed off to infinity.
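Although the explicit form of (1.1) is not displayed in this excerpt, the local sound speed ±1 + j/e quoted below and the exact dispersion relation of section 4.4 are both consistent with conservation equations of the form e_t + j_ζ = e_ζζ and j_t + (e + j²/e)_ζ = j_ζζ. Under that assumption, the boost property claimed above can be verified symbolically; the following sketch (Python with sympy) is an illustration of the check, not a reproduction of the authors' computation.

import sympy as sp

w, t, v = sp.symbols("w t v")           # w = z - v t is the comoving coordinate
e = sp.Function("e")(w, t)
j = sp.Function("j")(w, t)

# residuals of the assumed equations (1.1) for the original fields
r1 = sp.diff(e, t) + sp.diff(j, w) - sp.diff(e, w, 2)
r2 = sp.diff(j, t) + sp.diff(e + j ** 2 / e, w) - sp.diff(j, w, 2)

# boosted fields E = e(w, t), J = j(w, t) + v e(w, t); in the lab frame the
# chain rule sends d/dz -> d/dw and d/dt -> d/dt - v d/dw
E, J = e, j + v * e
R1 = (sp.diff(E, t) - v * sp.diff(E, w)) + sp.diff(J, w) - sp.diff(E, w, 2)
R2 = (sp.diff(J, t) - v * sp.diff(J, w)) + sp.diff(E + J ** 2 / E, w) - sp.diff(J, w, 2)

print(sp.simplify(R1 - r1))             # 0
print(sp.simplify(R2 - (r2 + v * r1)))  # 0

The boosted residuals are linear combinations of the original ones, so e(z − vt, t) and j(z − vt, t) + v e(z − vt, t) solve the system whenever e and j do.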
Rarefaction waves vs. shock waves
Before addressing the Riemann problem in its entirety let us consider a simplified system which is less constrained. Consider (2.14) with gradient terms neglected. The resulting expression is the large d limit of the energy momentum tensor of an inviscid fluid, which is known to support (discontinuous) shock waves [2] for any finite value of d. While the solution to the full Riemann problem will consist of a pair of shock and/or rarefaction waves, we begin in this section with a single discontinuous shock wave moving with velocity s. Conservation of energy and momentum imply the jump conditions (4.1), where [Q] = Q l − Q r and Q r/l specifies the value of Q to the right or left of the shock, respectively. 2 The conservation conditions (4.1) are very general and are often referred to as the Rankine-Hugoniot (RH) relations.

1 We thank R. Emparan for a discussion on this point.
2 In this section we use subscripts r and l to denote values of quantities to the right or left of the shock. In other sections we use subscripts R and L to denote quantities in the right and left asymptotic regions. In the latter case there is generally an interpolating region which we denote with a 0 subscript.
In our setup they reduce to the conditions (4.2), where e r/l and j r/l are the energy density and current immediately to the right or left of the shock. While these Rankine-Hugoniot relations hold for an arbitrary, piecewise continuous fluid profile, in what follows we are interested in the much simpler situation where e and j are constant functions away from the shocks. Amusingly, e r satisfies a cubic equation, a plot of which as a function of j r resembles a fish: fixing (e l , j l ), each value of s is mapped to a point on the (e r , j r ) plane. The collection of such points is given by a fish-like curve, an example of which is given in the left panel of figure 2.
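The fish can be traced out explicitly. The excerpt does not display (4.2), but the quoted local sound speed ±1 + j l /e l and the RS steady state e 0 = e R s² below are both reproduced by jump conditions of the form s[e] = [j], s[j] = [e + j²/e]; under that assumption the nontrivial root of the cubic can even be written in closed form. A minimal sketch (Python):

import numpy as np
import matplotlib.pyplot as plt

def fish_curve(el, jl, s_vals):
    # Nontrivial root of the assumed RH conditions s[e] = [j],
    # s[j] = [e + j^2/e]; the trivial root is (e_r, j_r) = (e_l, j_l).
    er = (jl - s_vals * el) ** 2 / el
    jr = (jl - s_vals * el) + s_vals * er
    return er, jr

s = np.linspace(-3.0, 3.0, 2001)
er, jr = fish_curve(1.0, 0.0, s)      # left state (e_l, j_l) = (1, 0)
plt.plot(er, jr)
plt.xlabel("e_r"); plt.ylabel("j_r")
plt.show()

For (e l , j l ) = (1, 0) the curve is the nodal cubic j r ² = e r (e r − 1)²: it passes through the vacuum at s = j l /e l and self-intersects at (e l , j l ) when s = ±1 + j l /e l , exactly the two observations made below.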
We make two observations about the fish. The vacuum (e r , j r ) = (0, 0) always lies on the cubic (4.3), corresponding to the fact that a shock can interpolate between any value of (e l , j l ) and the vacuum. Also (e r , j r ) = (e l , j l ) is the point of self-intersection of the cubic and has s = ±1 + j l /e l . The physical content of this observation is that when (e r , j r ) is close to (e l , j l ) but still lies on the cubic, we can find a close approximation to the fluid profile by linearizing the equations of motion. As we will describe in greater detail below, linearized fluctuations correspond to damped sound modes, and indeed the two regions can be connected by sound waves propagating at the local sound speed s = ±1 + j l /e l .
The shock solutions we found all solve the conservation equations (4.2). However, some of these solutions are unphysical in the following sense. Let us boost to a frame where the shock speed vanishes, s = 0. In half of the shock solutions, a quickly moving fluid at low temperature is moving into a more slowly moving fluid at higher temperature, converting kinetic energy into heat and producing entropy. We will refer to these shocks as "good" shocks. The other half of the solutions correspond to the time reversed process where a slowly moving fluid at high temperature moves into a rapidly moving but cooler fluid, turning heat into kinetic energy. This second solution, as we shall see shortly, should be discarded.
Strictly speaking, entropy is conserved in the large d limit (see the discussion following equation (3.25)). A more formal way of understanding why one should discard the bad shocks is to restore the gradient corrections but take a limit where these are small. Let us assume that in the frame where the shock velocity is zero there is an approximately stationary configuration such that time derivatives are much smaller than spatial derivatives. Boosting back to a shock with velocity s, we expect that e and j depend only on the combination ζ − st, i.e., j(t, ζ) = j(ζ − st) and likewise e(t, ζ) = e(ζ − st). The equations of motion (1.1) become ordinary differential equations which can be integrated once to obtain (4.4). We have picked the two integration constants such that e ′ and j ′ vanish in the left asymptotic region. The Rankine-Hugoniot conditions (4.2) imply that e ′ and j ′ also vanish in the right asymptotic region. As e ′ and j ′ themselves vanish in the left and right asymptotic regions, we can describe e and j well near these points by looking at a gradient expansion. Near the left asymptotic region one obtains the linearized system (4.5); there is a similar looking equation, (4.6), for e and j near the right asymptotic region. The solutions near (e l , j l ) and near (e r , j r ) have an exponential nature, with the sign of the exponents depending on the eigenvalues of M l and M r appearing on the right hand side of (4.5) and (4.6), given in (4.7). We now observe that the signs of the eigenvalues of M l and M r determine whether the shock is a viable solution to the equations of motion.
• If both eigenvalues of M l are negative, then e ′ and j ′ will not vanish as x → −∞. Thus we require that at least one eigenvalue of M l is positive in order for a shock solution to exist.
• If we assume there is exactly one positive eigenvalue, then 1 + j l /e l > s and −1 + j l /e l < s. Note that the value 1 + j l /e l corresponds to the slope of one of the characteristics (i.e., the local speed of one of the sound waves), and this condition implies that this characteristic will end on the shock. Since λ l− is assumed to be negative, we have to tune one of the two integration constants of the system of differential equations to zero. This tuning means that generically the solution to the right of the shock will be a linear combination of both of the solutions near (e r , j r ). If both solutions are to be used, then both eigenvalues of M r must be negative. (Otherwise, it will not be true that e ′ and j ′ vanish in the limit x → ∞.) In particular, the larger of the two eigenvalues must be negative, which implies that 1 + j r /e r < s. (In terms of characteristics, both will end on the shock.) Thus, we find the constraint 1 + j r /e r < s < 1 + j l /e l . (4.8a)

• If both eigenvalues of M l are positive, we still need at least one negative eigenvalue of M r to be able to connect the solutions in the left and right asymptotic regions. Moreover, for M r to have two negative eigenvalues would be inconsistent with momentum conservation (4.2). An analysis similar to the previous one yields (4.8b).

The constraints (4.8) choose the good shocks over the bad ones. 4 Since bad shocks are not allowed, one may inquire as to the time evolution of a discontinuity with initial conditions which would have generated a bad shock. As it turns out, bad shocks can be replaced by the more physical rarefaction solutions [2]. The rarefaction solution assumes that between the asymptotic regions specified by (e l , j l ) and (e r , j r ), there is an interpolating solution where e and j are functions of ξ = ζ/t. As was the case for the shock wave, given e l and j l , there is a one parameter family of allowed values of e r and j r . These are given by e r = e l exp (±j l /e l − 1 ∓ ξ r ) , j r = e l (±1 + ξ r ) exp (±j l /e l − 1 ∓ ξ r ) . (4.9) The curve traced by (e r , j r ) also resembles a fish, and for moderate values of the shock parameters e r and j r it closely follows the cubic curve corresponding to a shock solution.
(See the central panel of figure 2.) The vacuum (0, 0) = (e r , j r ) solution can always be connected to (e l , j l ) through a rarefaction wave. The self-intersection point (e r , j r ) = (e l , j l ) has ξ = ∓1 + j l /e l , again corresponding to a sound wave type interpolation between the two regions (e r , j r ) ≈ (e l , j l ). Given that bad shocks are replaced by rarefaction waves, one should remove from the fish diagram (left panel of figure 2) the portion of the curve which corresponds to bad shocks and replace it with a curve corresponding to a rarefaction solution (central panel of figure 2). The resulting curve can be found on the right panel of figure 2: the belly of the fish and the lower part of its tail correspond to a good shock, and its back and upper tail to a rarefaction solution. 4 One may compute the curve explicitly by imposing (4.8), but it can also be understood from a graphical viewpoint as we now explain.

4 In appendix A, we discuss a third RH relation one can write down for the entropy current. If the RH relations for energy and momentum are satisfied, the RH relation for the entropy current will typically be violated due to entropy production associated with viscous effects. In the weak shock limit, we demonstrate that gradient corrections produce the entropy that leads to this violation of the third RH relation. Reversing the sign of the energy difference between the two asymptotic regions in eqs. (A.3) or (A.5), it is straightforward to see that a bad shock would lead to a decrease in entropy, at least in the simple case where s = 0 and j r = j l .
Recall that the self intersection point of the shock wave fish (solid curve on the left panel of figure 2) corresponds to a shock velocity, s, which takes the values of the local speed of sound, ±1 + j l /e l . On the tail, s is either larger than 1 + j l /e l (upper tail) or smaller than −1 + j l /e l (lower tail). Thus, on the tails, the eigenvalues are either both positive or both negative. The top portion of the tail has λ ±,l < 0 while the bottom portion of the tail has λ ±,l > 0. As a result, the top portion of the tail must be replaced by a rarefaction wave while the bottom portion can be a shock. To decide which portion of the body of the shock fish to replace by a rarefaction wave, one must study λ ±,r .
Consider a second fish which exhibits the solution to the cubic (4.3) for a given value of (e r , j r ). We will call this second fish an r-fish and the first an l-fish. Similar to the analysis of the tail of the l-fish, we find that the bottom portion of the tail of the r-fish should be constructed from a rarefaction solution while the top portion from a shock.
Consider an r-fish whose point of self intersection lies somewhere on the body of the l-fish. When the r-fish is drawn so that it intersects the back of the l-fish, the bottom portion of the r-fish's tail will go through the point of self-intersection of the l-fish (see the left panel of figure 3). As the bottom portion of the tail of the r-fish is a rarefaction, the region (e r , j r ) can be connected to (e l , j l ) by a rarefaction. Reciprocally, since we are describing a single shock or rarefaction interface between two regions, the back of the l-fish should be replaced by a rarefaction wave. We can run the argument again for an r-fish drawn to intersect the belly of the l-fish. We conclude that the belly of the l-fish must be a shock (see the right panel of figure 3).
Figure 3. A graphical determination of the "good shocks" and "bad shocks". The red fish corresponds to (e r , j r ) while the blue fish is built from (e l , 0). See the main text for a discussion.
Solving the Riemann problem using ideal hydrodynamics
Armed with our understanding of shock waves and rarefaction solutions, let us now tackle the Riemann problem we set out to solve. At t = 0, we consider a pair (e L , 0) which describes the fluid for z < 0 and another pair (e R , j R ) describing the fluid for z > 0. For a single interpolating shock or rarefaction, we have seen that given (e L , 0) there is a one parameter family of solutions that determine (e R , j R ). Thus, generically, there will not be a single shock or rarefaction solution that joins (e L , 0) to an arbitrary (e R , j R ). However, we can connect the two regions using a pair of shock and/or rarefaction waves. That is, we could connect (e L , 0) to an intermediate regime with values of e and j given by (e 0 , j 0 ) using a shock or rarefaction wave and another shock wave or rarefaction wave to connect the intermediate regime to the right asymptotic region (e R , j R ). In all cases, given the initial conditions, the pair of rarefaction and/or shock waves should be such that they move away from each other.
The strategy for determining which type of solution is allowed is to prefer good shocks over rarefaction solutions and rarefaction solutions over bad shocks. Thus, given a pair (e L , 0) and (e R , j R ) we need to establish which of the four possibilities for the time evolution of the initial state is allowed: two shocks (SS), a rarefaction wave followed by a shock (RS), or the remaining two configurations which we will denote by SR and RR.
To understand the possible solutions to the Riemann problem, let us first consider two fish diagrams: one associated with (e l , j l ) = (e L , 0) (the l-fish) and another with (e r , j r ) = (e R , j R ) (the r-fish). The points of overlap of the diagrams will give us the possible value of e 0 and j 0 . We will always choose a point where the two disturbances are moving away from each other. See, for example, figure 4.
Instead of plotting the r- and l-fishes, we can obtain closed form expressions for the various types of solutions by solving (4.8) and (4.9) on a case-by-case basis. In the following we provide some simple examples of such expressions. • RS configurations. As an example of the RS case, we take (e L , 0) and (e R , 0) as the asymptotic regions with e L > e R . The SR case is a left-right reflection of the RS case and therefore does not warrant further discussion.
To estimate the values of e 0 and j 0 we can follow the strategy laid out in [12,13]. For the left region we use the solution (4.9) with e l = e L , j l = 0, e r = e 0 and j r = j 0 . For the right region we use (4.2) with e l = e 0 , j l = j 0 , e r = e R and j r = 0. We find e 0 = e R s², with s the velocity of the right moving shock (the full steady state expressions are collected in (4.10)); unsurprisingly, this coincides with the large d limit of the hydrodynamic analysis of [12,13].
As pointed out in [12], the rarefaction solution will cover the location of the original shock discontinuity whenever the inequality (4.11) holds. At the point ζ = 0 in the rarefaction wave, the values of e and j are time independent (since any function of ζ/t will have a fixed point at ζ = 0). Moreover, for a conserved stress tensor T µν = T µν (ζ/t), the first spatial derivative of T tζ and the first and second spatial derivatives of T ζζ vanish at this fixed point. Thus, one may think of the pressure at the fixed point as a "short" steady state for long enough times. "Short" implies that the region is of small spatial extent. From this perspective one has split steady states for large enough initial temperature differences. The values of e and j at the short steady state are given by e s = j s = e L exp(−1). (4.12)

• SS configurations. A simple example of the SS case has (e L , 0) on the left and (e L , j R ) on the right with j R < 0. We compute the NESS (the near-equilibrium steady state) by gluing two shock waves to an intermediate region with (e, j) = (e 0 , j 0 ), similar to the RS case. Setting β = j R /e L , the intermediate NESS is given in (4.13), and the shock velocities for the left and right moving shocks, s L and s R respectively, are given in (4.14).

• RR configurations. Using e L = e R and j R > 0, we can find simple solutions that involve two rarefaction waves. 5 In this case, the NESS is characterized by (4.16), where the left moving rarefaction wave extends from ξ = −1 to ξ = ξ − while the right moving rarefaction wave extends from ξ = ξ + to ξ = 1, with ξ ± given in (4.17).

5 As it turns out, in the RR phase there is a simple expression for the steady state for all values of e L , e R , j L and j R .

We claim that given (e L , 0), the "phase diagram" of figure 1 immediately allows us to choose the correct configuration of shocks and rarefaction waves for any (e R , j R ). Indeed, following figure 4, the location of the self intersection point of the r-fish will determine the nature of the intersection of the r- and l-fish: if the intersection point of the r-fish lies above the l-fish we will always get an RR solution; if the intersection point of the r-fish is below the l-fish we get an SS solution; and RS and SR solutions will correspond to an intersection point of the r-fish in the body or tail of the l-fish respectively. Conformal invariance dictates that the phase diagram can depend on only the two dimensionless parameters of this problem, and we obtain the phase diagram in figure 1.
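When closed form expressions are not at hand, the same matching can be carried out numerically. The sketch below (Python) glues two shocks through an intermediate state using the closed-form RH root of the earlier sketch; the flux e + j²/e is the same assumption as before, and the initial guesses for the shock speeds are ad hoc and may need adjusting.

import numpy as np
from scipy.optimize import fsolve

def shock_right_state(el, jl, s):
    # nontrivial root of the assumed RH conditions
    er = (jl - s * el) ** 2 / el
    jr = (jl - s * el) + s * er
    return er, jr

def ss_state(eL, eR, jR, guess=(-1.1, 0.8)):
    """Intermediate state (e0, j0) for a two-shock (SS) configuration."""
    def residual(p):
        sL, sR = p
        e0, j0 = shock_right_state(eL, 0.0, sL)   # left moving shock
        er, jr = shock_right_state(e0, j0, sR)    # right moving shock
        return [er - eR, jr - jR]

    sL, sR = fsolve(residual, guess)
    e0, j0 = shock_right_state(eL, 0.0, sL)
    return e0, j0, sL, sR

print(ss_state(1.0, 1.0, -0.3))

For e L = e R = 1 and j R = −0.3 this returns an intermediate state with e 0 above both asymptotic values and s L < 0 < s R , i.e., two shocks moving apart, as required.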
Note that even though the r-fish and the l-fish intersect at (0, 0), we can always rule out an intermediate point that corresponds to a vacuum. The vacuum intersection point always lies along the bodies of the two fish, where we have λ −,l/r < 0 < λ +,l/r . As discussed, we cannot in general connect the two asymptotic solutions if we do not have two eigenvalues of the same sign (positive for l and negative for r) in one of the regions.
A numerical solution to the Riemann problem.
In the previous sections we have obtained predictions for the evolution of e and j starting from an initial configuration (1.2) and assuming that gradient corrections to the equations of motion are small. It is somewhat unfortunate that this assumption stands in stark contrast to the discontinuous jump in the initial state and one may inquire whether the analysis of the previous section is relevant for the problem at hand. In order to resolve this issue we solve the full equations of motion (1.1) numerically. We give numerical examples of the RR, SS, and RS phases described above. To our numerical accuracy, the difference in e 0 and j 0 between the ideal case which we have studied analytically and the case with gradients included which has been obtained numerically appears to disappear in the long time limit.
As it turns out, the equations (1.1) are easy to evolve numerically with canned PDE solvers, such as Mathematica's NDSolve routine [42]. To obtain various solutions one can evolve the initial condition e = ē (1 + δe tanh(c sin(2πx/L))) , (4.20) in a periodic box of length L. (In appendix B, we use a more elaborate piecewise continuous initial condition.) For c sufficiently large, the initial condition approaches a square wave. As long as the disturbance has not travelled a distance of order L, causality ensures that the behaviour of e and j is very close to that of an infinite system where the values of e and j in the asymptotic regions are fixed at some constant value. If we denote these asymptotic values as e L and e R , then δe = (e L − e R )/(e L + e R ) and ē = (e L + e R )/2. We can similarly define j̄ and δj.

Figure 5. A numerical solution to the Riemann problem. The plots were obtained starting with an initial condition (4.20) with L = 8000, c = 300 and j = 0. Only one half of the box, centered around the origin, is depicted. The dashed curve corresponds to values of e and j at t = 0 while the solid curve corresponds to values of e and j at t = 800. The black, red and blue horizontal lines correspond to the predicted near equilibrium steady state associated with a rarefaction wave and shock pair (c.f., equation (4.10)), a bad shock and good shock pair (c.f., references [5,7]), and a non thermodynamic shock pair (c.f., reference [5]) respectively. The fixed point associated with a rarefaction solution which exists for δe ≥ 0.7536 . . . is represented by a black dot.
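The same evolution is easy to reproduce outside of Mathematica. The sketch below (Python) integrates the assumed explicit form of (1.1), e_t = −j_x + e_xx and j_t = −(e + j²/e)_x + j_xx, pseudospectrally in a periodic box; the box size, resolution, evolution time and smoothing parameter c are deliberately smaller than in figure 5 to keep the run cheap.

import numpy as np
from scipy.integrate import solve_ivp

L, N = 400.0, 1024
x = np.linspace(0.0, L, N, endpoint=False)
ik = 2j * np.pi * np.fft.fftfreq(N, d=L / N)      # spectral wavenumbers

def Dx(f):  return np.fft.ifft(ik * np.fft.fft(f)).real
def Dxx(f): return np.fft.ifft(ik ** 2 * np.fft.fft(f)).real

def rhs(t, y):
    e, j = y[:N], y[N:]
    return np.concatenate([-Dx(j) + Dxx(e),
                           -Dx(e + j ** 2 / e) + Dxx(j)])

ebar, de, c = 1.0, 0.4, 20.0                       # cf. eq. (4.20)
e0 = ebar * (1.0 + de * np.tanh(c * np.sin(2.0 * np.pi * x / L)))
sol = solve_ivp(rhs, (0.0, 60.0), np.concatenate([e0, np.zeros(N)]),
                method="RK45", rtol=1e-7, atol=1e-9)
e_fin, j_fin = sol.y[:N, -1], sol.y[N:, -1]        # profiles at t = 60

Reading off e and j on the plateau that forms between the outgoing disturbances then gives a numerical estimate of (e 0 , j 0 ) to compare against the predictions above.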
In figures 5, 6, and 7, we have plotted typical results for numerical solutions to (1.1), corresponding to RS, SS, and RR configurations. The resulting values of e and j seem to approach the predicted values of e 0 and j 0 at long times, at least as far as our numerical precision can be trusted (see appendix B). In particular, in the RS case, we approach the steady state value (4.10); in the SS case, we approach (4.13); and in the RR case, we approach (4.16). As we discuss in greater detail in the next section, one place where gradient effects show up and do not disappear as a function of time is in the shock width. One may speculate that the agreement between the predicted steady state in the absence of gradient corrections and the numerical results is associated with the fact that the gradient corrections, even though order one in our system of units, come with dimensionful coefficients. In the language of the renormalization group, they correspond to irrelevant couplings. Perhaps it is for this reason that, at long enough times and in a large enough box, we may be able to ignore these corrections for the most part.
Restoring gradient corrections
In this section, we try to gain a better handle on the gradient corrections and their effect on the predicted steady state values. The analysis here is incomplete and approximate. To overcome the deficiencies of paper-and-pencil estimates, we include some numerical solutions to the conservation equations (1.1) that provide support for the estimates. We will consider separately corrections to each of the features we found in the idealized limit: the steady state and asymptotic regions with constant e and j, a shock wave, a rarefaction wave, and the discontinuity at the edge of the rarefaction.
Corrections to constant regions
Corrections to a constant e and j region are easiest to analyze. Assuming the fluctuations are small, we look for linearized solutions of the form e = e 0 + δe exp(−iωt + ikζ) and j = j 0 + δj exp(−iωt + ikζ). We find two propagating modes, which are damped sound modes whose speed is shifted by the fluid velocity β = j/e. The gradient corrections appear here in the form of the damping term ik² in the dispersion relation. Given this result, we anticipate that we will be able to correct a constant e and j region by taking an appropriate linear superposition of sound waves. The damping suggests that at long times the solution can only involve constant e and constant j. As a side comment, an odd feature of these mode relations is that they are exact. Recall that in first order viscous hydrodynamics, we would typically solve an equation of the form ω² + iΓk²ω − k² = 0 for ω, in the case of vanishing background fluid velocity. If this equation were treated as exact, the solutions for ω would be nonlinear in k and would therefore have higher order contributions, i.e., O(k³), O(k⁴), etc., when expanded around small k.
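For the assumed explicit form of (1.1), the two modes follow from a 2 × 2 determinant, and the exactness noted above can be confirmed symbolically; a short sketch (Python with sympy), again under the same assumption about the flux:

import sympy as sp

w, k, beta = sp.symbols("omega k beta")
# linearized (1.1) around (e0, j0) with beta = j0/e0, modes ~ exp(-i w t + i k z)
M = sp.Matrix([[-sp.I * w + k ** 2, sp.I * k],
               [sp.I * k * (1 - beta ** 2), -sp.I * w + k ** 2 + 2 * sp.I * k * beta]])
modes = [sp.expand(m) for m in sp.solve(sp.det(M), w)]
print(modes)   # omega = (beta -+ 1) k - I k**2

The two roots are ω = (β ± 1)k − ik² with no further k dependence, so the damped sound modes are indeed exact for this system.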
Corrections to shocks
The gradient corrections should act to smooth a shock and give it some characteristic width. We estimate this width in a frame in which the shock is not moving, i.e., s = 0. In this frame, j r = j l and e r e l = j l ². We can find a solution for the shock profile in the case where the shock is weak, e r ∼ e l ; the resulting tanh profile is given in (4.23), where we have defined ē ≡ (e r + e l )/2, δe ≡ (e r − e l )/(e r + e l ), and j̄ ≡ (j r + j l )/2.
We can see in figure 8 that even for values of δe ∼ 1/2, ē δe²/2 appears to be a good estimate for the slope of the shock. In appendix A, we show that this shock profile produces, at the correct subleading order in a large d expansion, the correct (positive) amount of entropy predicted by the RH relations.
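The full (not just weak) stationary profile can be generated by shooting. Assuming once more the explicit flux e + j²/e, integrating (1.1) once at s = 0 gives e ′ = j − j̄ and j ′ = e + j²/e − (e l + j̄²/e l ), with j̄ = j l = j r fixed by e l e r = j̄². A sketch (Python; the specific asymptotic values are illustrative):

import numpy as np
from scipy.integrate import solve_ivp

el, er = 1.0, 0.5                 # asymptotic densities, e_l e_r = jbar^2
jbar = -np.sqrt(el * er)          # sign chosen so the shock is a "good" one
c2 = el + jbar ** 2 / el          # integration constant of the j equation

def rhs(z, y):
    e, j = y
    return [j - jbar, e + j ** 2 / e - c2]

# leave the saddle (e_l, jbar) along (approximately) its unstable direction
sol = solve_ivp(rhs, (0.0, 120.0), [el - 1e-6, jbar - 0.3e-6],
                rtol=1e-10, atol=1e-12, dense_output=True)
z = np.linspace(0.0, 120.0, 4001)
e = sol.sol(z)[0]
print(e[-1])                                # relaxes to e_r = 0.5
print(np.abs(np.gradient(e, z)).max())      # maximal slope of the profile

The maximal slope found this way can be compared directly with the quoted estimate ē δe²/2 (which evaluates to about 0.04 for these parameter values), reproducing the kind of comparison shown in figure 8.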
Corrections to a rarefaction
We will perform two estimates of gradient corrections to the rarefaction wave. The first estimate is a correction to the interior of the wave far from the edges where it joins onto constant e and j regions. The second estimate is a correction to the discontinuity where the rarefaction joins a constant region. For the first estimate, we assume an ansatz for the long time behavior of the rarefaction wave expanded in inverse powers of t: the subleading corrections scale as 1/t and log(t)/t and depend on a second integration constant c 2 and an arbitrary function e 1 (ξ), both presumably set by the initial conditions. Note that the combination ξe − j is independent of the arbitrary function e 1 (ξ) at order 1/t. In figure 9, the numerics confirm that the corrections to ξe − j do indeed scale as 1/t.

Figure 8. Initial conditions (e l = √(1 − δe)/√(1 + δe), e r = √(1 + δe)/√(1 − δe)) set using the RH relations. We then plot the value of the slope of the shock after the system has settled into a steady state. This is compared with the weak shock solution (4.23), given by the dashed red line. The inset plot shows the relaxation from the initial conditions to the steady state for δe = 0.23.

Last, we would like to heal the discontinuity at the edge of a rarefaction wave. The tanh function we found above heals the discontinuity in the shock case, making the question of what happens at the edge of a shock less pressing. Consider a case where the rarefaction wave meets a steady state at ζ = 0, with the rarefaction region to the right and the steady state to the left. (We can always move the meeting point away from ζ = 0 by boosting the solution ζ → ζ + vt.) With the intuition that the second order gradients in the conservation equations are dominant and render the behavior similar to that of a heat equation with 1/√t broadening, we look for an approximate late time solution in terms of the scaling variable χ ≡ ζ/√t. We find that j 0 = ±e 0 , that j 1 is constant, and that j 2 (χ) is fixed in terms of e 1 (χ), its derivative, e 0 and j 1 .
Note that the relation j 0 = ±e 0 is consistent with a rarefaction meeting a steady state region at ζ = 0. These relations for the j i lead to a second order, nonlinear differential equation for e 1 . Remarkably, this equation can be written as a total derivative and integrated once, yielding (4.31), where c 1 is another integration constant. The integration constants reflect a translation symmetry of both e 1 and χ. We can shift χ → χ + j 1 /e 0 and e 1 (χ) → e 1 (χ − j 1 /e 0 ) ± j 1 /2. The shifts send j 1 → 0 and c 1 → c 1 ∓ 3j 1 ²/(8e 0 ) in the equation (4.31). If we apply the boundary condition that both e 1 (χ) and e 1 ′(χ) vanish in the steady state region χ → −∞, then we must set c 1 = 0, and the resulting first order differential equation becomes separable. To match onto the rarefaction region, we require that e 1 → ±e 0 as χ → ∞. Since we choose the rarefaction region to match onto the steady state at χ = 0, we conclude that the integration constant j 1 in the original differential equation must be zero as well. We can check numerically that a 1/√t scaling is consistent with the behavior at the endpoints of a rarefaction solution. See figure 10.
Discussion
We presented a solution to the Riemann problem for the conservation equations (1.1). Through fluid-gravity and the AdS/CFT correspondence, these equations describe, in a large d limit, both the dynamics of a black hole horizon and also the dynamics of a strongly interacting conformal field theory.
There are a number of possible future directions for research. The simplest is perhaps to include a transverse velocity. With a transverse velocity, in addition to the shock and rarefaction waves, there will in general be a contact discontinuity [13,[43][44][45]. It is known (and perhaps intuitive given the similarity to a counterflow experiment) that the contact discontinuity is in general unstable to the development of turbulence [46]. It would be interesting to see what precisely happens in our large d limit. Another more complicated extension is the inclusion of a conserved charge. The large d equations of motion in the presence of a conserved charge are available from ref. [14]. Once again, a contact discontinuity is expected (see for example [13]), although whether such a discontinuity is stable or unstable to turbulence is unclear. More ambitiously, one could consider what happens for the holographic dual of a superfluid or superconductor [19,25,[47][48][49][50][51].
Another possible direction is the addition of higher curvature terms to the dual gravitational description. One could presumably tune the d dependence of these terms such that higher order gradient corrections appear in the conservation equations (1.1) and also such that the first and second order transport coefficients are tuned away from the values examined in this paper.
Perhaps the most interesting direction for future study is the connection to black hole dynamics. What can we learn about black holes through the connection to hydrodynamics in a large d limit?
A The Rankine-Hugoniot relation for the entropy current

Equation (A.3) can be obtained by using a large d expression for the entropy current (3.22) along with the Rankine-Hugoniot relations for energy and momentum, (4.1), supplemented by (2.14) and (2.15). Note that in the asymptotic regions, the gradient terms will all vanish. (It is also possible to start with a finite d result, using for example refs. [12] or [13], and then take a large d limit directly.) The non-conservation of entropy (A.3) can be captured by the leading viscous corrections to the shock width (4.23) when the energy difference is small. Indeed, using (3.24) and integrating the divergence of the entropy current over the ζ direction leads to a result which agrees with a small δe expansion of (A.3).
B A bestiary of plots
In section 4.3 we studied the numerical solutions to the Riemann problem for various initial energy and velocity profiles associated with RR, RS and SS type solutions. In what follows we provide additional evidence that at late times the full numerical solution to the Riemann problem approaches the appropriate predicted steady state values e 0 and j 0 and fixed point values e s and j s .
B.1 RR configurations
To generate an RR configuration we used the initial data (B.1)-(B.2). The analysis of section 4.2 predicts a steady state of the form e 0 = exp(−j * /2), j 0 = (j * /2) exp(−j * /2). (B.3)
Once j * ≥ 2 one should find a fixed point with e s = j s = exp(−1). We find that the numerical solution approaches the predicted states via power law behavior, see figure 11.
B.2 SS configurations
To generate an SS configuration we used the initial data (B.1) with j * < 0. The analysis of section 4.2 predicts a steady state of the form (4.13).
B.3 RS configurations
To generate an RS configuration we used the initial data (B.5)-(B.6). Once the initial energy contrast is large enough (δe ≥ 0.7536 . . ., as quoted in the caption of figure 5), we will obtain a fixed point at the origin with e s = j s = exp(−1). An analysis of the late time behavior of the numerical solution can be found in figure 13.
B.4 Error analysis
In sections B.1 and B.3 we have fit the late time approach of the data to the predicted steady state and/or fixed point values with a power law. The fit was done using Mathematica's NonLinearModelFit routine [42]. In detail, the late time data was discretized into order 1 time steps which were then fit to an a/t^α curve with a and α as parameters. The standard errors for the fit were usually of order 10 −3 to 10 −4 . Fits involving very small values of the slope parameter c in (B.2) and (B.6) (c.f., the bottom plots of figures 11 and 13) often had large standard errors.
| 12,909.4 | 2016-05-04T00:00:00.000 | [
"Mathematics"
] |
Urine-derived cells provide a readily accessible cell type for feeder-free mRNA reprogramming
Over a decade after their discovery, induced pluripotent stem cells (iPSCs) have become a major biological model. The iPSC technology allows generation of pluripotent stem cells from somatic cells of any genomic background. The challenge ahead of us is to translate human iPSC (hiPSC) protocols into clinical treatments. To do so, we need to improve the quality of the hiPSCs produced. In this study we report the reprogramming of multiple patient urine-derived cell lines with mRNA reprogramming, which, to date, is one of the fastest and most faithful reprogramming methods. We show that mRNA reprogramming efficiently generates hiPSCs from urine-derived cells. Moreover, we were able to generate feeder-free bulk hiPSC lines that did not display genomic abnormalities. Altogether, this reprogramming method will contribute to accelerating the translation of hiPSCs to therapeutic applications.
Reprogramming of human fibroblasts with mRNA was first achieved in 2010 [5][6][7][8][9][10] . The original protocol required daily transfections of the reprogramming factors for 20 days. This protocol was subsequently improved, requiring fewer than 12 transfections and allowing feeder-free derivation of hiPSCs [11][12][13] , thus reducing the complexity of the protocol and paving the way for GMP production of hiPSCs. To date, the main source of cells for mRNA reprogramming is skin fibroblasts, a cell type that tolerates genomic rearrangements; such rearrangements will be present in the fibroblasts and therefore in the subsequent hiPSC lines 2 . However, sourcing skin fibroblasts requires medical intervention and aftercare. This is a drawback in cases where the donor is a healthy child serving as control for a diseased relative, or when repeated biopsies might be required in order to generate hiPSCs with specific immunological features. Thus, there is a need to develop an efficient reprogramming method for a more easily available cell source such as peripheral blood mononuclear cells (PBMCs), which can be obtained through less invasive means.
In this study, we explored alternative sources of starting cell types for mRNA reprogramming. Among adherent cell types that could be easily and non-invasively collected at cell banks, we identified dental pulp cells, which are collected following wisdom teeth removal, and urine-derived cells 14 . We successfully generated hiPSCs in feeder or feeder-free conditions from both cell types. The results prompted us to evaluate bulk reprogramming, i.e., generation of hiPSC lines from multiple clones. The advantages of bulk reprogramming are that it is less labour-intensive and limits negative clonal effects occurring during reprogramming 15 , while the drawback is the increased risk of having varying genomic abnormalities in subclonal populations of a heterogeneous hiPSC cell line. Single-nucleotide polymorphism (SNP) analysis revealed that bulk hiPSCs from urine-derived cells did not present genomic duplications or deletions. In contrast, Sendai reprogramming of fibroblasts and PBMCs yielded an 18% copy number variation (CNV) rate. Our work will extend the versatility of the mRNA reprogramming method and help clear the remaining roadblocks to the therapeutic application of hiPSCs.
Results
Comparing mRNA reprogramming of urine-derived cells, dental pulp cells and fibroblasts. We sought to apply mRNA reprogramming to alternative cell types that are easily accessible, such as dental pulp cells or urine-derived cells, a cell type that was recently used for reprogramming 16 . We used an in-house mRNA reprogramming protocol optimized for skin fibroblasts. The protocol comprised seeding of 150,000 cells, followed by daily transfection of 625 ng of Oct4, Sox2, Klf4, Myc, Nanog, Lin28 and nuclear GFP (nGFP) (OSKMNLg) mRNAs for 11 days. This protocol allowed us to obtain 21 colonies from fibroblasts, and it also worked on dental and urine-derived cells, yielding 140 and 4 colonies, respectively. We noticed that the transfection efficiency was high in all three cell types, as shown by GFP expression (Fig. 1a), despite the nGFP mRNA accounting for only 5% of the cocktail. Of note, urine-derived cells survived poorly at a starting cell density of 150,000 cells in these transfection conditions, but hiPSC colonies from those cells were the first to emerge (day 9 vs day 11 for skin fibroblasts and dental cells). Colonies from skin fibroblasts or dental cells were picked and transferred to feeder-free conditions, in a variety of media and coating matrices, while urine-derived hiPSC colonies were transferred only to feeder cultures, due to the limited number of clones obtained (Fig. 1a). qPCR analysis of the core pluripotency regulators Oct4, Sox2 and Nanog showed expression levels comparable to H9 hESC (Fig. 1b) at passage 5. For all cell sources, as no transgenes were present after day 11 in this reprogramming protocol, hiPSC lines were readily established and could be banked at early passages. This makes it possible to carry lines for a shorter time before validating them, thereby alleviating one of the hurdles in reprogramming, namely extended passaging/subcloning until no traces of transgenes can be detected.
We decided to focus on urine-derived cells, as they can be obtained through one of the least invasive means and also because iPSCs derived from urine-derived cells have already been used to model multiple diseases [17][18][19] . To address the low survival rate and validate the reproducibility of our protocol, we increased the number of cells seeded from 150,000 to 225,000 or 300,000 and also tested three additional patient cell lines (125, 126, 149). All test conditions yielded hiPSC colonies, except urine-derived cell line 126, which did not produce any colonies at a high starting cell density of 300,000 cells (Fig. 1c). Urine-derived cells are a heterogeneous population that varies considerably between cell preps and patients 20 . Thus, the derivation of iPSC clones from multiple urine-derived cell lines demonstrates the robustness of the protocol. Finally, we repeated the reprogramming of patient 149 cells to establish hiPSC lines directly on Matrigel, a commonly used coating matrix. We were able to generate feeder-free hiPSC lines, as recently published for episomal reprogramming of urine-derived cells 21 (Fig. 1d).
Altogether, our results showed that mRNA reprogramming can be used for multiple readily accessible cell types, including urine-derived cells, and that it is reproducible on 4 different patient cell lines.
Validation of hiPSC established from bulk cultures.
Another important question in the field of hiPSC is whether we should use clones originating from a single cell or a bulk cell population originating from multiple parental cells. As recently demonstrated, the main variability factor lies in the inter-patient variability rather than the reprogramming method or the source of somatic cells [22][23][24] . Conventional reprogramming protocols involve picking, isolating, and culturing multiple colonies, a process that is highly time consuming. This procedure is followed primarily to select for clones without transgene insertions and genomic abnormalities. In this regard, mRNA reprogramming of urine-derived cells would be advantageous. Since mRNA reprogramming does not involve transgene insertions and has also been shown to result in relatively fewer genomic abnormalities 9 in the resulting iPS cells, the number of colonies that need to be picked and screened will be significantly lower. With this reasoning, we decided to evaluate iPSCs derived from bulk reprogrammed cells (hereafter referred to as hiPSC bulks). Gene expression analysis by qPCR of the core pluripotency regulators Oct4, Sox2 and Nanog showed expression levels comparable to those in H9 hESC (Fig. 2a), independent of the culture medium. There was also no apparent difference in expression levels between cells derived on feeders and subsequently cultured in defined media and cells directly derived in feeder-free conditions. To further assess the urine-derived hiPSCs (u-d hiPSC), we analyzed them by digital gene expression (DGE) RNAseq 25 , together with hESCs, 2 clones of fibroblast-derived hiPSCs (f71.002 and f71.019) and the parental urine-derived cells. Unsupervised Pearson correlation analysis showed that u-d hiPSC were interspersed with hESC and the f71 hiPSCs (Fig. 2b). Moreover, the u-d hiPSC did not correlate more with the parental urine-derived cells than with hESCs or the f71 hiPSCs. Finally, the urine-derived cell specific genes MCAM, PAX8 and NT5E 21,27 did not show persistent expression in u-d hiPSCs compared to hESCs or f71 hiPSCs (Fig. 2c).
Additionally, we assessed copy number variation by analyzing our hiPSC lines with SNP chips. None of the hiPSC lines we tested gained copy number abnormalities during reprogramming (Fig. 2d). SNP analysis also showed that, in comparison to the state-of-the-art feeder derivation, direct derivation in feeder-free conditions did not result in any additional abnormalities (Fig. 2d). Given that the sensitivity of the assay used for SNP analysis is around 15% 28 , we concluded that the vast majority of cells had a normal SNP profile. Therefore, in addition to clonal iPSC derivation, mRNA reprogramming supports derivation of bulk hiPSC lines, which has been proposed to be beneficial for the quality of hiPSC lines 29 . The bulk reprogramming method also significantly reduces manual labor in terms of time spent on cell culture.
Comparison of these results with those from previous reprogramming experiments on fibroblasts and PBMCs carried out at the iPSC core facility of Nantes revealed that mRNA reprogramming introduced fewer genomic abnormalities. In our previous reprogramming experiments, the average error rate with Sendai virus reprogramming was 10% (Fig. 2e). Those results were in line with a previously published evaluation of Sendai and mRNA reprogramming 9 . This advantage of mRNA reprogramming in reducing genomic abnormalities in the resulting iPSCs is of particular importance, as recent reports showed that genomic instability of hiPSC lines is one of the main hurdles in translating iPSCs to clinical applications 30 .
To assess pluripotency of the hiPSC bulks that we generated, we performed differentiation into early germ layer intermediates. Cells were seeded as monolayers and induced in specific mesoderm, ectoderm and endoderm induction media for 7 days, then analyzed by flow cytometry for CD140b or CD144 (mesoderm), SOX2/PAX6 (ectoderm) and SOX17/CXCR4 (endoderm). hESCs and u-d hiPSC line 149.B11 showed typical staining, with around 80% CD140b+, 10% CD144+, 75% SOX2+/PAX6+ and 85% SOX17+/CXCR4+ cells (Fig. 3a). The other tested hPSC lines showed similar results, demonstrating no adverse effect of the source of cells on the efficacy of engagement into the 3 germ layers (Fig. 3b). Finally, we investigated the expression levels of lineage specific genes by DGE RNAseq in differentiated cells from all PSC lines, and showed comparable expression levels for all PSC lines, supporting again that the u-d hiPSC lines did not have differentiation biases for endoderm, mesoderm or ectoderm lineages (Fig. 3c-e). To further show the differentiation potential of our cells, we performed directed differentiation into hepatocytes, cardiomyocytes and cells of neuronal lineage. The hepatocyte differentiation protocol has been previously described 31 . The staining showed a large number of cells expressing FOXA2, alpha-fetoprotein (AFP) and Albumin (Fig. 4a), typical of hepatocytes differentiated from hiPSCs. Cardiomyocytes were differentiated according to our protocol 17 , monitored for beating colonies, and characterized at day 28 by immunofluorescence for MLC2v and Troponin I (Fig. 4b). Staining showed characteristic striated structures. Finally, we assessed ectoderm germ layer differentiation by differentiating hiPSC into neurons and staining for Tuj1 (Fig. 4c). The staining revealed axon-like patterns, typical of neurons. Each of the 8 hiPSC bulks, originating from 3 different patient lines, was subjected to these differentiation protocols and showed proper differentiation (Fig. 4d).
Discussion
Our results show that mRNA reprogramming allows the derivation of iPSCs from urine-derived cells in as little as 9 days. Moreover, urine collection is the least invasive way to obtain biological samples amenable to reprogramming. The resulting iPSCs did not present any genomic abnormalities, reinforcing the notion that a shorter duration of reprogramming introduces fewer genomic abnormalities 2 . A readily available cell source and less labor-intensive protocols are key for the widespread use of hiPSCs, as more and more patient lines enter studies. Our protocol addresses both of these factors. We showed that mRNA reprogramming is less labor-intensive by allowing feeder-free reprogramming [11][12][13] , supporting derivation of bulk iPSCs, and reducing the number of clones that need to be screened for genomic abnormalities. We have also presented evidence for the applicability of this reprogramming method to readily available cell sources such as urine-derived cells.
One of the expectations for hiPSCs is their use in clinical treatments. Completely defined, xeno-free media are available for mRNA reprogramming, in combination with GMP-compatible coating matrices 12,13 . Multiple countries are setting up programs to generate HLA-typed cell banks. However, we have to be able to check that the selected donors have genomes compatible with high-quality differentiated cells across a broad spectrum of differentiation. To address the need for immunologically-matched hiPSCs, we envision generating hiPSCs from urine-derived cells of donors with specific HLA haplotypes. In clinical settings, clone picking will be preferred. We could generate HLA-specific hiPSCs, validate that they produce high-quality differentiated cells, and repeat the reprogramming of specific donors under GMP-grade conditions. Indeed, this strategy is in line with a recent analysis showing that the major parameter influencing the variability between PSC lines is the genomic background of the donor 32 . When realized, this will be a time- and labor-effective method to derive hiPSCs for use in clinical trials. Moreover, it allows for generating immune-defined hiPSCs at lower costs, thus enabling their use in preclinical studies aimed at investigating immune tolerance of hiPSC progeny, a question that requires thorough investigation 2 .
Other protocols improving mRNA reprogramming have recently emerged 33 . Yoshioka and colleagues used a self-replicative polycistronic mRNA (srRNA) harboring the reprogramming factors to reprogram somatic cells. The srRNA allows reprogramming of CD34+ circulating cells. However, it suffers from two caveats that we have sought to overcome with our method: (1) it requires a significant quantity of blood to obtain sufficient adherent cells to reprogram, and (2) the transgenes are present until passage 6, which nullifies one of the main advantages of mRNA reprogramming in our view. It will be interesting to follow the improvements upon this method, particularly those aimed at reducing the number of days that transgenes are present during reprogramming.
HiPSCs have become a useful complement to hESCs in studying human development and physiopathology and in developing new regenerative medicine treatments. The reprogramming process generates iPSC clones of distinct immunogenic, genetic, epigenetic, and functional qualities, from which good clones need to be selected prior to using these cells in molecular and biomedical applications. However, screening for good clones can become an arduous and expensive task, which might be ameliorated by enhancing the reprogramming fidelity of the generated iPSCs. Improvement of reprogramming techniques, in particular their kinetics and efficiency, will have a direct effect on the quality of reprogramming, since it reduces the work needed to screen and select good clones for downstream applications. Thus, we propose mRNA reprogramming of urine-derived cells as a valuable resource for the scientific community to accelerate the development of hiPSC-based regenerative medicine protocols.
Material and Methods
Tissue culture. Urine samples were collected in a 250 ml bottle previously conditioned with 10% RE/MC medium (see below) for storage (up to 24 h at 4 °C) and transported as previously described 34 . RE/MC (1:1) medium was prepared by mixing RE medium (Renal epithelial cell growth medium SingleQuot kit supplement and growth factors; Lonza) with MC (mesenchymal cell) medium prepared separately. MC medium is composed of DMEM/high glucose medium (Hyclone) supplemented with 10% (vol/vol) FBS (Hyclone), 1% (vol/vol) GlutaMAX (Life Technologies), 1% (vol/vol) NEAA (Life Technologies), 100 U/ml penicillin (Life Technologies), 100 μg/ml streptomycin (Life Technologies), 5 ng/ml FGF2 (Miltenyi Biotec), 5 ng/ml PDGF-AB (Cell Guidance Systems) and 5 ng/ml EGF (Peprotech). Urine-derived cells were isolated from urine samples and cultured according to the procedure described in 16 with slight modifications. Briefly, urine samples were centrifuged for 5 min at 1200 g and the pellet was washed with pre-warmed DPBS (Gibco) containing 100 U/ml penicillin and 100 μg/ml streptomycin (Gibco). Pellets were resuspended in 2 ml RE/MC proliferation medium and cultured on 0.1% gelatin-coated six-well plates. Cells were incubated at 37 °C in normoxia (20% O2, 5% CO2) for 4-5 days without changing the medium or moving the plate. Urine-derived cells were further passaged using TrypLE Express (Gibco) and expanded in RE/MC medium with daily change of half of the medium.
Early germ layer differentiation. hPSC lines were differentiated into endoderm, mesoderm and ectoderm using the StemMACS Trilineage kit (Miltenyi Biotec). 80,000 cells for mesoderm, 130,000 cells for endoderm and 100,000 cells for ectoderm were plated in 24-well plates and cultured in the specific media for 7 days, as specified by the protocol. On day 7, differentiated cells were analysed by flow cytometry and DGE RNAseq.
Advanced differentiation. Hepatocytes. We followed the differentiation protocol published in 19 .
Flow cytometry. Cells were analyzed using a LSR ™ cytometer.
RT-qPCR analysis. Total RNA was extracted using RNeasy ® columns and DNase-treated using RNase-free DNase (Qiagen). For quantitative PCR, first-strand cDNAs were generated using 500 ng of RNA, M-MLV reverse transcriptase (Invitrogen), 25 µg/ml polydT and 9.6 µg/ml random primers (Invitrogen).
To quantitate transcripts, absolute quantitative PCR was performed on a Viia 7 (Applied Biosystems) using power SYBR green PCR master mix (Applied Biosystems), for genes listed in Table 1. For each sample, the ratio of specific mRNA level relative to GAPDH levels was calculated. Experimental results are shown as levels of mRNA relative to the highest value.
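As an illustration of this normalization, the toy sketch below (Python) uses the common 2^(−ΔCt) shortcut with made-up Ct values; it is a stand-in for, not a reproduction of, the absolute quantification performed here, and the sample names are hypothetical.

def relative_expression(ct_gene, ct_gapdh):
    """mRNA level relative to GAPDH via the 2^-(dCt) approximation."""
    return 2.0 ** (-(ct_gene - ct_gapdh))

# hypothetical Ct values (gene of interest, GAPDH) per sample
samples = {
    "H9 hESC":        (21.3, 17.8),
    "u-d hiPSC 149":  (21.9, 18.1),
    "parental cells": (33.0, 17.5),
}
ratios = {name: relative_expression(g, h) for name, (g, h) in samples.items()}
top = max(ratios.values())
for name, r in ratios.items():
    print(name, round(r / top, 3))   # expressed relative to the highest value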
Expression profiling by DGE-seq. The 3′ digital gene expression (3′DGE) RNA-sequencing protocol was performed according to 25 . Briefly, the libraries were prepared from 10 ng of total RNA. The mRNA poly(A) tails were tagged with universal adapters, well-specific barcodes and unique molecular identifiers (UMIs) during template-switching reverse transcription. Barcoded cDNAs from multiple samples were then pooled, amplified and sequenced. Read pairs used for analysis matched the following criteria: all sixteen bases of the first read had quality scores of at least 10 and the first six bases corresponded exactly to a designed well-specific barcode. The second reads were aligned to RefSeq human mRNA sequences (hg19) using bwa version 0.7.4 with the non-default parameter "-l 24". Reads mapping to several positions in the genome were filtered out of the analysis. Digital gene expression (DGE) profiles were generated by counting, for each sample, the number of unique UMIs associated with each RefSeq gene. DGE-sequenced samples were acquired from three sequencing runs. All sequenced samples were retained for further analysis.
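The counting step can be summarized in a few lines. The sketch below (Python) assumes the 16-base first read splits into the 6-base well barcode described above plus a 10-base UMI (the split is implied but not stated here) and takes pre-computed gene assignments for the second reads as input; all names are hypothetical.

from collections import defaultdict

def count_umis(read_pairs, valid_barcodes):
    """Count unique (barcode, gene, UMI) triples per well, per gene."""
    seen = set()
    counts = defaultdict(lambda: defaultdict(int))
    for read1, gene in read_pairs:
        barcode, umi = read1[:6], read1[6:]       # assumed 6 bp + 10 bp layout
        if barcode not in valid_barcodes or gene is None:
            continue                               # bad barcode / filtered multi-mapper
        key = (barcode, gene, umi)
        if key not in seen:                        # PCR duplicates counted once
            seen.add(key)
            counts[barcode][gene] += 1
    return counts

pairs = [("ACGTAC" + "AAATTTCCGG", "POU5F1"),
         ("ACGTAC" + "AAATTTCCGG", "POU5F1"),      # duplicate of the first read
         ("ACGTAC" + "GGGTTTCCAA", "POU5F1")]
result = count_umis(pairs, {"ACGTAC"})
print({bc: dict(g) for bc, g in result.items()})   # {'ACGTAC': {'POU5F1': 2}}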
DESeq2 was used to normalize expression with the DESeq function. Normalized counts were transformed with the vst (variance stabilizing transformation) function from the DESeq2 library. This log-like transformation was used for variance analysis.
SNP analysis. DNA was extracted from somatic and iPSC samples using the QIAGEN QIAamp kit, according to the manufacturer's recommendations. The gDNA was quantified and qualified using a NanoDrop. 200 ng of gDNA was outsourced to Integragen (Evry, France) for karyotype analysis using HumanCore-24 v1 SNP arrays. This array contains over 300,000 probes distributed throughout the genome, with a median coverage of one probe every 5700 bases. All genomic positions were based upon Human Genome Build 37 (hg19).
DNA samples were hybridized on HumanCore-24 v1 SNP arrays according to the manufacturer's instructions by Integragen. Analysis was performed with GenomeStudio software. Chromosome abnormalities were determined by visual inspection of logR ratio and B-allele frequency (BAF) values and by comparing parental cells and iPSC-derived samples. The logR ratio, the ratio between observed and expected probe intensity, is informative regarding copy number variation (i.e., deletions/duplications), and the BAF is informative regarding heterozygosity. We used the SNP data to compute CNVs. In particular, this type of chip allows detection of loss of heterozygosity (LOH), an important concern for hiPSCs, which is not possible with classical CGH arrays.
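The interpretation logic of the logR/BAF inspection can be caricatured as a windowed screen. The sketch below (Python) is a toy illustration, not the GenomeStudio workflow; the window size, thresholds and simulated arrays are arbitrary.

import numpy as np

def screen_windows(logr, baf, window=50, lr_thresh=0.15, baf_dev_thresh=0.1):
    """Flag windows whose mean logR or heterozygous BAF pattern looks abnormal."""
    calls = []
    for start in range(0, len(logr) - window + 1, window):
        lr = float(np.mean(logr[start:start + window]))
        het = baf[start:start + window]
        het = het[(het > 0.15) & (het < 0.85)]     # keep heterozygous SNPs
        dev = float(np.mean(np.abs(het - 0.5))) if het.size else 0.0
        if lr > lr_thresh:
            calls.append((start, "gain (duplication)"))
        elif lr < -lr_thresh:
            calls.append((start, "loss (deletion)"))
        elif dev > baf_dev_thresh:
            calls.append((start, "possible copy-neutral LOH"))
    return calls

# illustrative arrays: neutral logR but skewed heterozygous BAFs -> LOH flag
rng = np.random.default_rng(0)
logr = rng.normal(0.0, 0.05, 200)
baf = np.clip(rng.normal(0.75, 0.05, 200), 0.0, 1.0)
print(screen_windows(logr, baf))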
Ethical approval and informed consent.
All experiments were carried out in accordance with French guidelines and regulations. All patients gave informed consent. Reprogramming of patient samples was approved by the French ministry of higher education and research, under No. DC-2011-1399. hESC H1 and H9 were used under agreement FE13-004 from Agence de la Biomédecine.
| 4,559.8 | 2018-09-25T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Leading loop effects in pseudoscalar-Higgs portal dark matter
We examine a model with a fermionic dark matter candidate having pseudoscalar interactions with the standard model particles, in which the tree level direct detection elastic scattering cross section is highly suppressed. We then calculate analytically the leading loop contribution to the spin independent scattering cross section. It turns out that these loop effects are sizable over a large region of the parameter space. Taking into account constraints from direct detection experiments, the invisible Higgs decay measurements and the observed DM relic density, we find viable regions which will be within reach of future direct detection experiments such as XENONnT.
Introduction
A nagging question in contemporary physics concerns the nature of dark matter (DM) and its feasible non-gravitational interactions with the standard model (SM) particles. This problem in fact straddles both particle physics and cosmology.
On the cosmology side, precise measurements of the Cosmic Microwave Background (CMB) anisotropy not only demonstrate the existence of dark matter but also provide us with the current dark matter abundance in the universe [1,2]. On the particle physics side, the dedicated search is for direct detection (DD) of the DM interaction with ordinary matter via Spin Independent (SI) or Spin Dependent (SD) scattering of DM off nucleons in underground experiments like LUX [3], XENON1T [4] and PandaX-II [5]. Although no signal has shown up in these experiments so far, upper limits on the DM-matter interaction strength are provided for a wide range of DM masses. Among various candidates for particle DM, the most sought-after one is the Weakly Interacting Massive Particle (WIMP).
Within the WIMP paradigm there exists a class of models where the SI scattering cross section is suppressed significantly at leading order in perturbation theory; hence the model eludes the experimental upper limits in a large region of the parameter space. The WIMP-nucleon interaction in these models is of pseudoscalar or axial-vector type at tree level, resulting in momentum- or velocity-suppressed cross sections [6]. The focus here is on models with a pseudoscalar interaction between the DM particles and the SM quarks. In this case there is both SI and SD elastic scattering of the DM off the nucleon at tree level. Both types of interaction are momentum dependent, while the SD cross section gets suppressed much more strongly than the SI cross section due to an extra factor of the momentum transfer, q². Thus, in these models, taking into account beyond-tree-level contributions, which could be leading loop effects or full one-loop effects, is essential.
We recall several earlier works in this direction, with emphasis on DM models with a pseudoscalar interaction. The leading loop effect on the DD cross section is studied in an extended two-Higgs-doublet model in [7][8][9]. Loop-induced DD cross sections are investigated within various DM simplified models in [10][11][12] and in a singlet-doublet dark matter model in [13]. The full one-loop contribution to the DM-nucleon scattering cross section in a Higgs-portal complex scalar DM model can be found in [14]. In [15], direct detection of a pseudoscalar dark matter is studied by taking into account higher-order corrections in both the QCD and non-QCD parts.
In this work we consider a model with a fermionic DM candidate, ψ, which interacts with a pseudoscalar mediator P through the term P ψ̄γ^5 ψ. The pseudoscalar mediator is connected to the SM particles via mixing with the SM Higgs through an interaction term P H†H. In this model the DM-nucleon interaction at tree level is of pseudoscalar type, and thus its scattering cross section is highly suppressed over the entire parameter space. The leading loop contribution to the DD scattering cross section, which is spin-independent, is computed, and viable regions are found against the direct detection bounds. Besides constraints from the observed relic density, the invisible Higgs decay limit is imposed where relevant.
The outline of this article is as follows. In section 2 we recapitulate the pseudoscalar DM model. In section 3 we present our main results on the direct detection of the DM, including an analytical formula for the DD cross section and a numerical analysis. Finally, we close with a conclusion.
The pseudoscalar model
The model we consider in this work, a renormalizable extension of the SM, consists of a new gauge-singlet Dirac fermion as the DM candidate and a new singlet scalar acting as a mediator, which connects the fermionic DM to the SM particles via the Higgs portal. The new-physics Lagrangian comes in two parts, L_new = L_DM + L_scalar (2.1). The first part, L_DM, introduces the pseudoscalar interaction term g_d P ψ̄iγ^5 ψ, and the second part, L_scalar, incorporates the singlet pseudoscalar and the SM Higgs doublet, including a tadpole term proportional to g_0, the Higgs-portal operators with couplings g_1 and g_2, and pseudoscalar self-couplings g_3 and g_4. The pseudoscalar field is assumed to acquire a zero vacuum expectation value (vev), ⟨P⟩ = 0, while the SM Higgs develops a non-zero vev, ⟨H⟩ = v_h = 246 GeV.
Having chosen ⟨P⟩ = 0, the tadpole coupling g_0 is fixed appropriately. After expanding the scalar fields around their vevs, the mixing angle θ, induced by the interaction term P H†H, is obtained from the relation sin 2θ = 2 g_1 v_h/(m_h^2 − m_s^2), in which m_h = 125 GeV and m_s are the physical masses of the Higgs and the singlet scalar, respectively. The quartic Higgs coupling is now modified and is given in terms of the mixing angle and the physical scalar masses as λ = (m_h^2 cos^2θ + m_s^2 sin^2θ)/(2 v_h^2). As independent free parameters we can pick θ, g_d, g_2, g_3, g_4, and m_s. The coupling g_1 is then fixed by the mixing relation above, g_1 = sin 2θ (m_h^2 − m_s^2)/(2 v_h). Recent studies of the DM and LHC phenomenology of this model can be found in [16,17], and its electroweak baryogenesis is examined in [18].
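As a quick numerical cross-check of these relations, the following sketch (our own illustration, not code from the paper) evaluates g_1 and λ for a given mixing angle and singlet mass:

```python
# Numerical cross-check of the scalar-sector relations quoted above.
# Masses and v_h are in GeV.
import math

V_H, M_H = 246.0, 125.0  # SM Higgs vev and mass

def g1_from_mixing(sin_theta, m_s):
    """Invert sin 2θ = 2 g1 v_h / (m_h^2 - m_s^2) for g1."""
    theta = math.asin(sin_theta)
    return math.sin(2 * theta) * (M_H**2 - m_s**2) / (2 * V_H)

def quartic_lambda(sin_theta, m_s):
    """Modified quartic coupling λ in terms of θ and the physical masses."""
    cos2 = 1.0 - sin_theta**2
    return (M_H**2 * cos2 + m_s**2 * sin_theta**2) / (2 * V_H**2)

print(g1_from_mixing(0.02, 200.0), quartic_lambda(0.02, 200.0))
```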
For DM masses in the range m_dm < m_h/2, one can impose constraints on the parameters g_d, θ, and m_dm from invisible Higgs decay measurements, with Br(h → invisible) ≲ 0.24 [19]. Given the invisible Higgs decay process h → ψ̄ψ, we find for small mixing angle the condition g_d sin θ ≲ 0.16 GeV^(1/2)/(m_h^2 − 4 m_dm^2)^(1/4) [20]. We compute the DM relic density numerically over the model parameter space using the program micrOMEGAs [21]. The observed value of the DM relic density used in our numerical computations is Ωh^2 = 0.1198 ± 0.0015 [22]. DM production in this model proceeds via the popular freeze-out mechanism [23], in which it is assumed that DM particles were in thermal equilibrium with the SM particles in the early universe.
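A minimal sketch of the quoted invisible-decay condition, again purely illustrative and valid only when the decay channel is kinematically open:

```python
# Illustrative evaluation of the small-angle invisible-decay bound quoted
# above; it applies only for m_dm < m_h / 2, where h -> psi psi is open.
M_H = 125.0  # GeV

def gd_sin_theta_max(m_dm):
    if m_dm >= M_H / 2:
        return float("inf")  # channel closed: no bound from this decay
    return 0.16 / (M_H**2 - 4.0 * m_dm**2) ** 0.25

print(gd_sin_theta_max(50.0))  # upper bound on g_d * sin(theta)
```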
We show the viable region of the parameter space respecting the constraints from the observed relic density and invisible Higgs decay in figure 1. The parameters chosen in this computation are sin θ = 0.02, g_3 = 200 GeV, and g_2 = 0.1. It is evident in the plot that regions with m_dm < m_h/2 are excluded by the invisible Higgs decay constraints. The analytical formulas for the DM annihilation cross sections are given in appendix A.
Direct detection
In the model we study here, the DM interaction with the SM particles is of pseudoscalar type, and its spin-independent cross section at tree level is set by the reduced mass µ of the DM-proton system, the DM velocity v_dm ∼ 10^−3, and an amplitude factor A in which the number 0.28 incorporates the hadronic form factor and m_p denotes the proton mass; the explicit tree-level expression is given in appendix B. The DM-nucleon scattering cross section is therefore velocity-suppressed at tree level. In other words, the entire parameter space of this model resides well below the reach of the direct detection experiments. The current underground DD experiments like LUX [3] and XENON1T [4] provide the strongest exclusion limits for DM masses in the range from ∼10 GeV up to ∼10 TeV. Future DD experiments can only probe direct DM-nucleon interactions down to cross sections comparable with that of the neutrino background (NB), σ_NB ∼ O(10^−13) pb [24]. In the present model, as we will see in our numerical results, the tree-level DM-nucleon DD cross section is orders of magnitude smaller than the NB cross section. For such a model, with the DM-nucleon cross section velocity-suppressed at tree level, it is mandatory to go beyond tree level to find the SI cross section. The leading diagrams (triangle diagrams) contributing to the SI cross section are drawn in figure 2. There are also box diagrams contributing to the DM-nucleon scattering process. The box diagrams bring in a factor of m_q^3 (q stands for light quarks), as shown in [25], while the triangle diagrams are proportional to m_q. Thus, we consider the box diagrams to have sub-leading effects. We then move on to compute the leading loop effects on the SI scattering cross section. In the following we consider the DM-quark scattering amplitude when the scalars in the triangle loop have masses m_i
and m_j, and the scalar coupled to the quarks has mass m_k. The indices i, j, and k stand for the Higgs (h) or the singlet scalar (s), and the quark couplings are C_h = −(m_q/v_h) cos θ and C_s = (m_q/v_h) sin θ. The corresponding effective scattering amplitude, in the limit that the momentum transferred to the nucleon is q^2 ∼ 0, involves β_i = m_i^2/m_dm^2 and β_j = m_j^2/m_dm^2 through the loop function F(β_i, β_j) given in appendix B. When the two scalar masses in the triangle loop are identical, i.e. m_i = m_j, we set β_i = β_j and write F(β_i, β_j) as F(β_i), also provided in appendix B. The validity of these loop functions is verified by performing numerical integration of the Feynman integrals and making comparisons for a few distinct input parameters. C_ijk is the trilinear scalar coupling; there are four of them, corresponding to the vertices hhh, hhs, ssh, and sss, as they appear in figure 2.
Putting together all six triangle diagrams, we end up with the total effective SI scattering amplitude, from which the spin-independent DM-proton cross section follows, in which µ is the reduced mass of the DM and the proton, m_p is the proton mass, and the quantities F^p_Tq and F^p_Tg define the scalar couplings of the strong interaction at low energy. The trilinear couplings in terms of the mixing angle and the relevant Lagrangian couplings, as well as the DD cross sections at tree and loop level, are given in appendix B. The scalar form factors used in our numerical computations are F^p_u = 0.0153, F^p_d = 0.0191, and F^p_s = 0.0447 [26]. To obtain the scalar form factors, the central values of the sigma terms σ_πN = 34 ± 2 MeV and σ_s = 42 ± 5 MeV are used. We computed the correction to the DD cross section at loop level including the uncertainties on these two sigma terms and found that the corresponding uncertainty on the DD cross section is not large enough to be visible in the plots. As an estimate, for a benchmark point with m_dm ∼ 732 GeV, g_d ∼ 2.17, g_3 = 10 GeV, and sin θ = 0.02, the result is σ^p_loop = (3.084 ± 0.12) × 10^−10 pb.
In the first part of our scan over the parameter space, we compare the DM-proton SI cross section at tree level with the SI cross section stemming from the leading loop effects. To this end, we let the DM mass take values 10 GeV < m_dm < 2 TeV and the scalar mass lie in the range 20 GeV < m_s < 500 GeV. The dark coupling varies in 0 < g_d < 3. The mixing angle in these computations is fixed to the small value sin θ = 0.02, and reasonable values are chosen for the couplings, g_2 = 0.1 and g_4 = 0.1. Taking into account the constraints from Planck/WMAP on the DM relic density, we show the viable parameter space in terms of the DM mass and g_d in figure 3 for two distinct values of the coupling g_3, fixed at 10 GeV and 200 GeV. Regions excluded by the invisible Higgs decay measurements are also shown in figure 3. As expected, the tree-level SI cross section is about 10 orders of magnitude below the neutrino background. On the other hand, for both values of g_3 the leading loop effects are sizable in a large portion of the parameter space. A general feature apparent in the plots is that for g_d ≳ 2.5, DM masses smaller than 600 GeV get excluded by the direct detection bounds.
In addition, with the same input parameter values, we show the viable regions in terms of the DM mass and the singlet scalar mass in figure 4. We find that for both values of the coupling g_3, a wide range of the scalar mass, i.e., 10 GeV < m_s < 500 GeV, leads to SI cross sections above the neutrino floor. It is also evident from the results in figure 4 that the viable region with m_s ∼ 10 GeV located at m_dm ∼ 100 GeV in the case g_3 = 10 GeV is shifted to regions with m_dm ∼ 250 GeV in the case g_3 = 200 GeV.
In the last part of our computations, we perform an exploratory scan to find the regions of interest, i.e., points with SI cross sections above the neutrino floor and below the DD upper limits, with the other constraints imposed, including the observed DM relic density and the invisible Higgs decay. The scan is done with these input parameters: 10 GeV < m_dm < 2 TeV, 20 GeV < m_s < 1 TeV, 0 < g_d < 3, g_2 = g_4 = 0.1, and g_3 fixed at 200 GeV. Our results are shown in figure 5. The mixing angle is set to sin θ = 0.02 in the left panel and sin θ = 0.07 in the right panel. It can be seen that for the larger mixing angle the viable region is slightly broadened towards heavy pseudoscalar masses for DM masses 60 GeV < m_dm < 300 GeV, and shrinks near m_dm ∼ 60 GeV due to the invisible Higgs decay constraint. We also find that even if we confine ourselves to dark couplings g_d ≲ 1, there are still regions with m_dm up to ∼400 GeV which are within reach of the future direct detection experiments.
Concerning indirect detection of DM, the Fermi Large Area Telescope (Fermi-LAT) collected gamma-ray data from the Milky Way dwarf spheroidal galaxies for six years [27]. The data indicate no significant gamma-ray excess; however, they provide exclusion limits on DM annihilation into b b̄, τ+τ−, u ū, and W+W− final states. As pointed out in [17], the Fermi-LAT data can exclude regions of the parameter space with m_dm < 80 GeV as well as the resonant region with m_dm ∼ m_s/2.
A few comments are in order on the LHC constraints besides the invisible Higgs decay measurements. Concerning the mono-jet search in this scenario, it is pointed out in [17] that even in the region with m_s > 2 m_dm, which has the largest production rate, the signal rate is more than one order of magnitude below the current LHC reach for the small mixing angle chosen. The same study finds that the bounds from di-Higgs production at the LHC via the process pp → s → hh, with different final states (4b, 2b2γ, 2b2τ), are not strong enough to exclude the pseudoscalar mass in the relevant range for a small mixing angle such as the one chosen in this study.
Conclusions
We revisited a DM model whose fermionic DM candidate has a pseudoscalar interaction with the SM quarks at tree level, leading to a suppressed SI direct detection elastic cross section. For the present model we computed analytically the leading loop diagrams contributing to the SI elastic scattering cross section. Our numerical analysis, taking into account the limits from the observed relic density, suggests that regions with dark coupling g_d ≳ 2.5 and reasonable values of the other parameters get excluded by the DD upper bounds. Regions with g_d ≲ 0.25, on the other hand, reside below the neutrino floor and so remain out of reach of direct detection. However, a large portion of the parameter space stands above the neutrino floor and remains accessible to future DD experiments such as XENONnT.
We also found regions of the parameter space above the neutrino floor that evade the current LUX/XENON1T DD upper limits while respecting the observed DM relic density and the invisible Higgs decay bound. The viable region is slightly broader for moderate DM masses when sin θ = 0.07 than when sin θ = 0.02, both at g_3 = 200 GeV.
A Annihilation cross sections
The annihilation cross sections of a DM pair into pairs of SM fermions carry the color factor N_c; among them the dominant contributions come from the heavier final states b b̄ and t t̄. We also give the total annihilation cross section into pairs of gauge bosons (W+W− and ZZ) in the unitary gauge, as well as the cross sections for DM annihilation into two Higgs bosons and into two s bosons, written in terms of v_h = 246 GeV, β_h = m_h^2/m_dm^2, and β_s = m_s^2/m_dm^2. We present the relevant loop function F(β_i, β_j) for the case β_i ≠ β_j; when β_i = β_j = β, it reduces to the single-argument function F(β). The trilinear scalar couplings are
C_hhh = −6 g_2 v_h sin^2θ cosθ − 3 g_1 cos^2θ sinθ − 6 λ v_h cos^3θ − g_3 sin^3θ,
C_hhs = g_2 v_h (6 sin^3θ − 4 sinθ) + 3 g_1 sin^2θ cosθ − g_1 cosθ + 6 λ v_h cos^2θ sinθ − g_3 sin^2θ cosθ,
C_ssh = g_2 v_h (6 sin^2θ cosθ − 2 cosθ) + (2 sinθ − 3 sin^3θ) g_1 − 6 λ v_h sin^2θ cosθ − g_3 cos^2θ sinθ,
C_sss = 6 g_2 v_h cos^2θ sinθ − 3 g_1 sin^2θ cosθ + 6 λ v_h sin^3θ − g_3 cos^3θ.
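Since these trilinear couplings feed directly into the loop amplitude, a small helper that transcribes them can be useful for numerical cross-checks. The function below is our own illustrative aid, not the authors' code; lam denotes the modified quartic coupling given in section 2, and all dimensionful inputs are in GeV.

```python
# Direct transcription of the four trilinear couplings listed above.
import math

def trilinear_couplings(theta, g1, g2, g3, lam, v_h=246.0):
    s, c = math.sin(theta), math.cos(theta)
    return {
        "hhh": -6*g2*v_h*s**2*c - 3*g1*c**2*s - 6*lam*v_h*c**3 - g3*s**3,
        "hhs": g2*v_h*(6*s**3 - 4*s) + 3*g1*s**2*c - g1*c
               + 6*lam*v_h*c**2*s - g3*s**2*c,
        "ssh": g2*v_h*(6*s**2*c - 2*c) + (2*s - 3*s**3)*g1
               - 6*lam*v_h*s**2*c - g3*c**2*s,
        "sss": 6*g2*v_h*c**2*s - 3*g1*s**2*c + 6*lam*v_h*s**3 - g3*c**3,
    }
```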
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
"Physics"
] |
Improving the Accuracy of Saffron Adulteration Classification and Quantification through Data Fusion of Thin-Layer Chromatography Imaging and Raman Spectral Analysis
Agricultural crops of high value are frequently targeted by economic adulteration across the world. Saffron powder, one of the most expensive spices and colorants on the market, is particularly vulnerable to adulteration with extraneous plant materials or synthetic colorants. However, the current international standard method has several drawbacks, such as being vulnerable to yellow artificial colorant adulteration and requiring tedious laboratory measuring procedures. To address these challenges, we previously developed a portable and versatile method for determining saffron quality using a thin-layer chromatography technique coupled with Raman spectroscopy (TLC-Raman). In this study, our aim was to improve the accuracy of the classification and quantification of adulterants in saffron by utilizing mid-level fusion of TLC imaging and Raman spectral data. In summary, the featured imaging data and the featured Raman data were concatenated into one data matrix. The classification and quantification results for saffron adulterants were compared between the fused data and analyses based on each individual dataset. The best classification result was obtained from the partial least squares-discriminant analysis (PLS-DA) model developed using the mid-level fusion dataset, which accurately identified saffron with artificial adulterants (red 40 or yellow 5 at 2-10%, w/w) and natural plant adulterants (safflower and turmeric at 20-100%, w/w) with overall accuracies of 99.52% and 99.20% in the training and validation groups, respectively. Regarding the quantification analysis, the models built with the fused data block demonstrated improved performance in terms of R^2 and root-mean-square errors for most of the PLS models. In conclusion, the present study highlights the significant potential of fusing TLC imaging data and Raman spectral data via mid-level data fusion to improve saffron classification and quantification accuracy, which will facilitate rapid and accurate decision-making on site.
Introduction
Saffron, the stigma of Crocus sativus L., which is also called "the red gold", is one of the most expensive agricultural products on the market. For centuries, saffron has been utilized as a medicinal herb, spice, and colorant [1][2][3][4][5]. Throughout history, saffron has been vulnerable to economic adulteration, which involves the mixing of low-quality spices with saffron, the addition of plant materials, and the use of natural or artificial colorants to imitate the color of saffron [6].
Color strength is one of the key attributes used to describe saffron quality. The current standard method for saffron color strength analysis is protocol ISO 3632-2, issued by the International Organization for Standardization [7,8]. This method uses UV-vis spectroscopy and high-performance liquid chromatography (HPLC) to determine the color strength and crocin content, crocin being the active ingredient responsible for saffron's color expression. Despite its widespread use, this method still has several limitations. For instance, some studies have reported that UV-vis spectroscopy can only detect saffron adulterants at levels of 20% w/w or higher and is unable to distinguish artificial yellow adulterants whose absorbance values are similar to those of pure saffron [9,10]. Other well-adopted methods, such as HPLC and gas chromatography (GC), have limitations of their own, including long sample preparation and test times, high costs, and the need for specialized laboratory settings and trained personnel.
Previously, we developed a rapid and field-deployable method based on thin-layer chromatography (TLC) and Raman spectroscopy to determine saffron quality as well as saffron adulteration. By utilizing TLC as a separation substrate, the optical TLC pattern of pure and adulterated saffron specimens could be easily captured with a camera and converted to digital imaging data under ambient light and 365 nm UV light [10]. Raman spectroscopy, meanwhile, provides molecular information on pure or adulterated saffron specimens on the TLC chip [11]. The established TLC-Raman method has demonstrated its capability to determine saffron grades and identify common adulterations. However, upon further application of this method to adulterant quantification, we identified several notable drawbacks. In general, Raman signals provide information for quantifying the purity of saffron, while TLC patterns are used primarily for identifying saffron adulterations. Unfortunately, the lack of communication between these two data blocks prevents the full potential of the method from being exploited. For instance, the imaging method failed to accurately determine adulteration levels at high yellow 5 concentrations (6-10% w/w) due to color saturation on the TLC chips. In this case, the decrease in crocin concentration caused by yellow 5 adulteration can still be detected by the Raman spectrometer. Nonetheless, without communication between these data blocks, the final decision-making process relies on separate analyses of the TLC pattern and the Raman spectra.
Data fusion, the analysis of several datasets concatenated into a single fused data block, has shown great potential to improve performance in spectroscopic analysis [12][13][14]. The integration of multiple datasets through data fusion enables interactivity and mutual information among the data blocks, reducing spurious sources of variability and diminishing prediction errors compared to analyses based on individual datasets. The concatenation of data can be carried out at three different levels: the data level (low-level fusion), the feature level (mid-level fusion), and the decision level (high-level fusion). With spectral data, data-level and feature-level fusions are the more suitable options. In low-level data fusion, data from all measurement sources are simply combined into a single matrix after some pretreatment of each individual data block. Mid-level data fusion involves the extraction of relevant features from each data block separately, which are then concatenated to form a single data block. Features can be defined either as relevant original variables or as latent variables extracted through multivariate analysis models [12][13][14][15].
Extensive data fusion research has been conducted in the fields of food quality analysis and food fraud analysis. Most of these studies have yielded positive outcomes when employing fused datasets [16]. The integration of similar datasets, such as mass spectrometry, UV-vis, near-infrared (NIR), mid-infrared (MIR), or Raman spectral data, has been a primary focus in classifying or quantifying food quality attributes, such as the quality of soybeans [17], characterizing olive oil and essential oils [18,19], and determining the geographical origins of wines and saffron [20,21].
The food industry has efficiently utilized imaging data fusion techniques, such as E-eye or computer vision, for post-harvest quality analysis of fruits, providing advantages in terms of speed and cost-effectiveness [16]. Moreover, imaging analysis has garnered significant attention in various food categories, including assessing the sensory scores of fish fillets [22], evaluating apple fruit firmness [23], and detecting bruises in strawberries and blueberries [24,25]. Additionally, researchers have explored the fusion of imaging data with spectral data, which has proven successful in monitoring the fermentation quality of black tea and classifying green tea [26,27]. The aim of this paper was to evaluate the effectiveness of data fusion methodologies to improve the classification and quantification accuracy of adulterated saffron using Raman and TLC imaging data. In this study, a mid-level data fusion strategy was evaluated for both classification and quantification of saffron quality. In brief, saffron specimens were adulterated with artificial colorants or extraneous natural plant materials at different levels, from 2% to 100%. Both adulterated and pure saffron specimens were prepared into sample solutions before each sample droplet was developed on TLC chips. Raw imaging data and Raman spectra were then collected from the TLC chips. Subsequently, featured imaging data were extracted by taking L*a*b* values at characteristic locations in the image (namely, the edge and the midplane), while featured Raman data were generated through variable influence on projection (VIP) analysis of the characteristic crocin Raman peaks. Finally, the fused data matrix was completed by concatenating the featured imaging data and the featured Raman data. The fused data were used for multivariate classification and regression via partial least squares discriminant analysis (PLS-DA) and partial least squares (PLS) analysis, respectively. The classification and quantification results for saffron adulterants from the fused data were also compared against the analysis results based on each single dataset.
Chemicals and Reagents
All natural specimens, i.e., saffron, turmeric powders, and safflower, were purchased from online sources. Each plant material was acquired from three different suppliers in a similar price range. Artificial colorants (Allura red and tartrazine) were purchased from two suppliers, IFC Solutions (Linden, NJ, USA) and Sigma-Aldrich (Merck KGaA, Darmstadt, Germany). TLC aluminum plates (Silicagel 60W F254S) were purchased from EMD Millipore Corporation (Billerica, MA, USA).
Sample Preparation
All plant materials were pulverized using a Newtry high-speed food mill (Guangzhou, China) set at high speed for three minutes. Sample particle sizes were standardized by sieving the pulverized material through a 500 µm mesh sieve. The resulting powders were then transferred into glass vials and stored in a light-shielded desiccator. Natural adulterants (safflower and turmeric) were predominantly utilized to manipulate the weight of saffron samples. To calibrate the analysis for the presence of these adulterants, samples were spiked with varying weight percentages of pure saffron (0%, 20%, 40%, 60%, 80%, and 100%). Lower spike levels for natural adulterants were considered but deemed impractical, since trace levels of these plant materials would not be used for adulteration in normal practice. To achieve the desired appearance and color intensity, artificial adulterants (tartrazine yellow and Allura red) are typically added to saffron in lower amounts than natural adulterants. Hence, we used weight percentages of 0%, 2%, 4%, 6%, 8%, and 10% to calibrate the level of artificial adulterants mixed with pure saffron. For all samples, 50 mg of pure or spiked powdered sample was dissolved in 50 mL of deionized water to prepare a raw sample solution. The raw solutions were then filtered using a 0.45 µm PES filter (GE Lifesciences, Marlborough, MA, USA). Subsequently, 2 µL of pure saffron or saffron-spiked solution was deposited onto TLC plates using a pipette. Three droplets of the same sample solution were applied to the same location on the TLC plate, ensuring that the preceding droplet had completely dried before adding the next one.
Raw and Featured Raman Spectral Data
Raw Raman spectral data were collected at the center of each TLC pattern by a portable Raman system (TSI Incorporated, Shoreview, MN, USA) equipped with a 780 nm laser source. Each sample was measured in triplicate with five data collection spots per replicate and an acquisition time of 10 s. The measurements were carried out at maximum laser power (500 mW) over a spectral range of 100 to 2200 cm−1. Thus, 75 spectra were collected for each type of adulteration and 15 spectra for pure saffron, giving a total of 315 spectra for data analysis in this study. The Raman spectra of each adulterated sample at different adulteration concentrations can be found in Supporting Figures S1-S4. Figure 1 shows the adulterated saffron Raman spectra, which were collected on TLC silica gel substrates. Featured Raman data (1000~1050 cm−1, 1130~1240 cm−1, 1270~1300 cm−1, and 1500~1580 cm−1) were extracted by reviewing the main compounds responsible for saffron color expression. Feature selection was also guided by the variable influence on projection results (VIP, VIP value > 1) from the SIMCA software (Malmö, Sweden, version 14.1), which summarize the importance of each Raman peak in the PLS-DA model.
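As an illustration of the featured-Raman extraction just described, the sketch below masks each spectrum down to the four VIP-selected wavenumber windows; the array and function names are hypothetical.

```python
# Keep only the four VIP-selected wavenumber windows quoted above.
import numpy as np

VIP_WINDOWS_CM1 = [(1000, 1050), (1130, 1240), (1270, 1300), (1500, 1580)]

def featured_raman(wavenumbers, spectra):
    """wavenumbers: (n_points,); spectra: (n_samples, n_points)."""
    mask = np.zeros(wavenumbers.shape, dtype=bool)
    for lo, hi in VIP_WINDOWS_CM1:
        mask |= (wavenumbers >= lo) & (wavenumbers <= hi)
    return spectra[:, mask]  # featured data block for later fusion
```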
Raw and Featured Imaging Data
The collection of raw imaging data and the process used to generate the featured imaging data from TLC chips can be found in our previous work, with some modifications [10]. In brief, two separate images were taken under ambient and UV (365 nm) light conditions. The ambient and UV images were then combined into a single image, as depicted in Figure 2A.
To collect imaging data, Adobe Photoshop CS6 (64-bit) was utilized to split RGB images into the red, green, and blue channels (Figure 2C). A color picker tool (32 × 32 pixel square) was used to collect 20 data points from each TLC pattern per channel, with 10 points taken at one-half diameter distance from the center and 10 points taken from the TLC ring. This was performed for both the ambient light image and the UV light image under the green and blue channels, respectively (Figure 2C). The red channel data were excluded due to insufficient pattern information on the image (Supporting Figures S5-S8).
The featured imaging data can be expressed as
X_Ambient = [X_ambient green channel, X_ambient blue channel],
X_UV = [X_UV green channel, X_UV blue channel],
where the maximum, minimum, and average lightness values in the blue and green channels (Figure 2C) were collected at each sample collection point. Each adulteration level was measured in triplicate, resulting in a total of 7560 featured imaging data points (7200 for adulterated saffron and 360 for pure saffron). Figure 3 illustrates the fusion strategy, where the featured Raman data and the featured imaging data were fused into one data block in the mid-level fusion model. Normalization was applied to both data blocks prior to fusion, following [28]. The fused data can be expressed as
X_Fused = [X_Featured Raman, X_Featured Imaging],
where X_Fused stands for the fused data block, and X_Featured Raman and X_Featured Imaging represent the single data blocks of featured Raman and featured imaging data, respectively. Next, the X_Fused, X_Featured Raman, and X_Featured Imaging data blocks were loaded into PLS-DA and PLS models for multivariate qualitative and quantitative analyses.
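The mid-level fusion step itself reduces to scaling and concatenation. A minimal numpy sketch follows, assuming unit Frobenius-norm block scaling in place of the normalization of ref. [28], whose exact form we do not reproduce here; the array names are hypothetical.

```python
# Illustrative mid-level fusion: scale each featured block, then concatenate
# column-wise so every sample row carries both Raman and imaging features.
import numpy as np

def fuse_blocks(x_featured_raman, x_featured_imaging):
    blocks = []
    for x in (x_featured_raman, x_featured_imaging):
        x = np.asarray(x, dtype=float)       # shape: (n_samples, n_features)
        blocks.append(x / np.linalg.norm(x)) # block scaling (assumption)
    return np.hstack(blocks)                 # X_fused = [X_raman, X_imaging]
```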
Partial Least Squares Discriminant Analysis
Partial least squares discriminant analysis (PLS-DA) was used to conduct the classification analysis in the SIMCA (14.1) software. Samples in the classification model were classed by adulterant type using Pareto scaling. This scaling method was chosen for its ability to retain the proximity of the scaled measurements to the original data, minimizing disturbances to the raw and featured data and ensuring more reliable feature peaks that are less vulnerable to noise [29]. The number of significant components was optimized via cross-validation. A total of 210 files from each dataset were selected as the training group, which included 50 spectra for each adulterant and 10 spectra for pure saffron. As a result, 3D scatter plots (Figure 4) and a misclassification table (Table 1) were generated. In addition, the performance of the data fusion strategy was evaluated with the remaining 105 spectra serving as an external validation group, which included 25 spectra for each type of spiked saffron and 5 spectra of pure saffron (Table 2).
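The paper runs PLS-DA in SIMCA; as a rough open-source stand-in, PLS regression onto one-hot class labels followed by an argmax decision reproduces the idea, with Pareto scaling fitted on the training block as described. All names and the component count below are illustrative placeholders.

```python
# Open-source approximation of the PLS-DA workflow (not the SIMCA software).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pareto_scaler(X_train):
    mu = X_train.mean(axis=0)
    s = np.sqrt(X_train.std(axis=0)) + 1e-12  # Pareto: divide by sqrt(std)
    return lambda X: (np.asarray(X, dtype=float) - mu) / s

def plsda(X_train, y_train, X_test, n_components=5):
    scale = pareto_scaler(np.asarray(X_train, dtype=float))
    classes, idx = np.unique(y_train, return_inverse=True)
    Y = np.eye(len(classes))[idx]             # one-hot class membership
    model = PLSRegression(n_components=n_components, scale=False)
    model.fit(scale(X_train), Y)
    return classes[np.argmax(model.predict(scale(X_test)), axis=1)]
```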
Partial Least Squares (PLS) Regression for Quantification
The PLS regression model was used to determine saffron adulteration levels. Raw data, featured data, and fused data from the classification study were used directly in the quantification analysis. Spectral and imaging data were set as the X variables, whereas the adulteration level was set as the Y variable. The results are presented as predicted versus observed values, as shown in Figure 5. The performance of each PLS model can be partially expressed by the maximum of the R^2 values (goodness of fit) or the minimum of the root-mean-square error of prediction (RMSEP) and root-mean-square error of cross-validation (RMSECV) values.
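A hedged sketch of the corresponding quantification workflow is given below, computing R^2, RMSECV (from cross-validated predictions), and RMSEP (on external validation samples) with scikit-learn; the component count and fold count are placeholders.

```python
# PLS regression of adulteration level (Y) on a data block (X).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def pls_quantify(X_cal, y_cal, X_val, y_val, n_components=5):
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
    pls.fit(X_cal, y_cal)
    y_fit = pls.predict(X_cal).ravel()   # calibration fit
    y_hat = pls.predict(X_val).ravel()   # external prediction
    r2 = 1 - ((y_cal - y_fit)**2).sum() / ((y_cal - y_cal.mean())**2).sum()
    rmsecv = np.sqrt(np.mean((y_cal - y_cv)**2))
    rmsep = np.sqrt(np.mean((y_val - y_hat)**2))
    return r2, rmsecv, rmsep
```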
The Effect of Mid-Level Fusion on Classification Performance
As shown in Figure 4A, the featured imaging data by themselves showed clear cluster separation for the safflower, turmeric, and yellow 5 adulterated samples. However, heavy overlap was observed between the turmeric and red 40 clusters and between the yellow 5 and pure saffron clusters. These results were also reflected in the misclassification table: in Table 1A, 20 out of 50 turmeric samples were misclassified as red 40, and all pure saffron samples were misclassified as yellow 5 adulterated samples. This was due to the low discrimination between these samples on TLC chips, which was also reported in our previous study [10]. In Figure 4B, the featured Raman data generated more scattered clusters for samples with natural adulterants. On the other hand, samples with artificial adulterants overlapped with pure saffron and thus led to the misclassifications in Table 1B. This was due to the low spike levels (2-10% w/w) for artificially adulterated specimens compared to those for natural adulterants (20-100% w/w). The reason for spiking lower concentrations of artificial adulterants was their superior color intensity: lower concentrations were required to achieve the same color appearance as the natural adulterants. However, this posed analytical challenges when detecting these artificial adulterants. The effect of mid-level data fusion on classification accuracy can be seen in Figure 4C and Table 1C. Both the PLS-DA plot and the misclassification table indicated significant improvements in cluster separation and correct classification rate compared to the results from each individual data block. The PLS-DA plot with the mid-level fused data exhibited tighter clusters, yet each class was distributed with better separation, especially for the red 40 and yellow 5 adulterated specimens (Figure 4C). Each individual data block provided a complementary piece of information that helped in the classification of spiked saffron samples. For instance, red 40 and turmeric-spiked saffron samples that could not be clearly differentiated using the imaging data (Figure 4A and Table 1A) were separated in the PLS-DA plot once the featured Raman data were fused in. Similarly, the introduction of the imaging data enhanced the ability of the featured Raman data to discriminate specimens with red 40 and yellow 5 adulterants (Figure 4B and Table 1B). The collaborative effect between these two data blocks in the fused matrix achieved a satisfying accuracy of 99.52% (Table 1C). These findings are in line with most reported results, indicating that data fusion yields clear improvements in classification accuracy compared to individual analytical methods. For instance, enhanced classification results in identifying hazelnut paste adulteration were reported by combining FT-Raman and NIR spectroscopy into a fused dataset [30]. Likewise, classification of the geographical origin of a medicinal plant (Gentiana rigescens) demonstrated improved performance through the fusion of UV-vis and infrared spectroscopy data [31].
Mid-Level Fusion Model Validation for Adulteration Classification
The performance of the PLS-DA model was validated using external validation samples (n = 105), and the results are shown in Table 2. The validation group showed excellent classification results for all adulterated specimens. Samples with safflower, turmeric, and red 40 adulterants were almost all correctly identified, with adulteration levels ranging from 2-10% w/w for artificial adulterants and 20-100% w/w for natural adulterants. The model achieved a correct classification rate of 99.2%, and no adulterated specimens were classified as pure saffron in the validation group, giving it a 100% accuracy rate for adulterated sample determination. Figure 5 shows the PLS plots that predict spike levels in the different adulterated saffron samples. Three data blocks, namely X_Featured Imaging, X_Featured Raman, and X_Fused, were used to build the PLS plots. The R^2 and RMSECV values in Table 3 describe the goodness of fit of the model and its ability to predict unknown samples, while the RMSEP values indicate the goodness of prediction on external validation samples (n = 25). It is worth mentioning that the RMSECV and RMSEP values are positively correlated with the scale of the data; samples with natural adulterants (20-100% w/w) were therefore expected to have larger RMSECV and RMSEP values than samples with artificial adulterants (2-10% w/w). Thus, artificial adulterants (red 40 and yellow 5) and natural adulterants (safflower and turmeric) should be compared separately due to the different adulteration levels between these two types of adulterants. Considered separately, the PLS plots based on the featured imaging and Raman data in Figure 5C(a,b),D(a,b) showed poor prediction performance for the red 40 and yellow 5 adulterated samples. As previously mentioned, the low spiking levels were the main reason for this. In addition, sample droplets suffered from diffusion in every direction on the TLC substrates; thus, the actual concentrations of the target adulterants at the test spots were usually lower. This issue could potentially be solved by concentrating the sample solution or depositing multiple droplets at the same spot; however, this approach may bring new detection challenges, such as stronger interference from crocin. When the fused data were used, higher R^2 values and lower prediction errors were achieved in most cases (Table 3). The largest improvement was observed in Figure 5D(c), where the fused data block for the yellow 5 adulterated specimens produced a significant improvement in the fit of the plot. This result indicates the collaborative effects present when the imaging and featured Raman data are combined. These effects, although not significant for most adulterated samples, still helped improve the quantification capabilities of the model. With the actual validation samples, a similar trend could be seen in the RMSEP values; that is, the model performed better with the fused data block for most adulterated samples, showing lower prediction errors.
The Effect of Data Fusion on PLS Model Quantification Performances
Similar studies also reported that the integration of independent datasets, such as the fusion of electronic nose and electronic tongue data, resulted in improved regression models for predicting juice quality parameters (pH, titratable acidity, vitamin C, and total soluble solids) as well as the phenolic content and color indices of red wine. In addition, these studies consistently found higher R^2 values in both the calibration and validation sets for the combined dataset compared to each individual measurement in most cases [32,33].
Nevertheless, it is worth noting that the improvement of the fusion strategy on prediction accuracies varied across different parameters or attributes. While certain specific variables demonstrated significant enhancements in prediction accuracies as a result of the fusion approach, the majority of variables experienced only modest improvements. Moreover, upon analyzing the quantification plots, it was observed that the fused dataset closely resembled the highest-performing individual dataset, indicating the preservation of key characteristics and similarities between the fused data and the most optimal standalone dataset [32,33].
Despite the overall improved quantification performance achieved with the fused data, there were some exceptions that raise concerns about the robustness of the model. For example, the reduction in RMSEP and RMSECV values was not consistent when the fused data were used in the PLS models: samples spiked with safflower and red 40 showed slightly higher RMSEP and RMSECV values with the fused data, respectively. It is reasonable to believe that sample variation caused this issue; nevertheless, it exposes one weakness of utilizing data fusion in quantification analysis, namely that the concatenation of different data blocks can have deleterious effects when poor data blocks or outliers are introduced. This problem may not be obvious in sample classification, which benefits from sample variation, but quantification analysis is very sensitive to it. This inconsistency has been discussed in similar studies, where the presence of redundant information or the absence of feature information could impair the effectiveness of the fused data block [34,35]. This can be attributed to inappropriate feature extraction methods, which may result in datasets that are data-rich but information-poor or lead to the loss of important feature information [16,36]. In the present study, the selection of featured imaging data, such as pattern locations and the number of data acquisitions for each collection area, was determined manually. Consequently, it is challenging to evaluate which manually defined imaging texture or pattern is the most sensitive to different adulteration concentrations. Furthermore, in this study all variables in the fused data were uniformly weighted, assuming equal importance for classification or quantification predictions. Although this approach proved suitable for the classification work, a variable-wise weighted fusion approach may be more appropriate for quantification tasks [36].
Conclusions
The present study examined the effectiveness of mid-level data fusion strategies on the improvement of saffron adulteration classification and quantification accuracies using TLC imaging data and Raman spectral data. Our results indicated that mid-level data fusion had excellent classification performance. Under this setting, all spiked samples were identified and distinguished from pure saffron samples. In the meantime, the validation result exhibited great capabilities to distinguish each adulterant with excellent accuracy (safflower, 100%; turmeric, 100%; red 40, 96%; yellow 5, 100%).
Meanwhile, the quantification accuracies for both artificially and naturally adulterated samples were better with the fused data block, achieving higher R^2 values with lower errors. However, upon analyzing the results from the external validation samples, the improvements in quantification accuracy were not as marked as the advancements observed in the classification results.
Further work needs to be performed to establish better protocols for using imaging data in quantitative analysis. This could involve assessing various strategies for imaging feature extraction to further improve the accuracy of quantification. Other data blocks can also be investigated and introduced to finalize both classification and quantification models. Furthermore, the established result can be further concatenated to produce a high-level fused result, which consists of both classification and quantification results. First, classification results are obtained from mid-level data fusion using the PLS-DA model. Then, corresponding quantification algorithms will be chosen based on the classification result. Depending on the classification results, the quantification results could be expressed either as saffron grade or adulterant level. Eventually, the final developed algorithm would be able to automatically determine saffron authenticity, adulterant identification, spike level, and pure saffron grade at the same time to facilitate fast decision-making on site.
"Chemistry"
] |
Construction and Demonstration of a 6–18 GHz Microwave Three-Wave Mixing Experiment Using Multiple Synchronized Arbitrary Waveform Generators
This manuscript details the construction and demonstration of the first known microwave three-wave mixing (M3WM) experiment utilizing multiple arbitrary waveform generators (AWGs), completely operable in the 6-18 GHz frequency range, for use in chirality determination and quantification. Many M3WM techniques, which involve two orthogonal, subsequent Rabi π/2 and π microwave pulses, suffer from limited flexibility in pulse types, timings, and frequencies, because most instruments use only one single-channel AWG, and from the M3WM probability decreasing with increasing quantum number, J. In this work, we present an M3WM instrument that provides that flexibility by introducing multiple synchronized AWGs and that adheres to the high-probability transition loop pathways in carvone. The functionality and reliability of the instrument were demonstrated using a series of experiments with mixtures of the R and S enantiomers and determined to be of similar accuracy to other reported M3WM setups, with the additional benefit of flexibility in pulsing schemes.
Introduction
Chirality determination in science and technology is one of the largest fundamental molecular challenges in existence today [1]. This is because the enantiomers that arise from chiral centers have almost identical physical and chemical properties, yet biological processes often produce, need, or use one enantiomer in large preference to the other. Synthetic processes, though, often end in a mixture of enantiomeric products, even when the chemistry utilized is selectively targeted to result in a specific stereochemistry. Therefore, it is of great importance to be able to detect and quantify these mixtures quickly and accurately, especially when trying to produce these chemicals on a large scale, as is often the case in the pharmaceuticals industry.
In 2013, it was demonstrated that chiral gas molecules could be distinguished via a microwave three-wave mixing (M3WM) experiment [2]. Later, this experiment was extended to a modified CP-FTMW-type experiment [3]. No matter the setup, M3WM involves exciting, in succession, a pair of linked rotational transitions of different types (a-, b-, or c-type) and then listening to/collecting the free induction decay (FID) from a third type of transition, completing a transition loop [4]. Transitions in microwave rotational spectroscopy occur through the coupling of a molecule's electric dipole moment to an imposed electric field. Transition types arise from a specific component of that electric dipole moment with a nonzero magnitude. M3WM loops are allowed, then, because chiral species, by definition, possess electric dipole moments in which all three components are nonzero in magnitude. Moreover, each enantiomer possesses one dipole moment component that is the same in magnitude but completely opposite in sign. It is this difference that is leveraged in the M3WM experiment; it results in the FIDs of the two enantiomers being exactly 180° out of phase with one another, allowing absolute geometry determination and providing a pathway for the quantitative analysis of mixtures.
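The quantification pathway follows directly from that sign flip: in a mixture, the R and S contributions to the chiral FID cancel in proportion to their populations. The toy sketch below illustrates this; the decay time, carrier (a low intermediate frequency standing in for the microwave carrier), and sampling are placeholders, not the experimental values.

```python
# Toy illustration: opposite-phase chiral FIDs make the net signal scale
# with the enantiomeric excess ee = (R - S) / (R + S).
import numpy as np

t = np.linspace(0.0, 1e-6, 2000)                # 1 us listening window
fid_pure = np.exp(-t / 3e-7) * np.sin(2 * np.pi * 50e6 * t)  # 50 MHz IF

def mixture_fid(fraction_r):
    ee = 2.0 * fraction_r - 1.0   # +1 pure R, -1 pure S, 0 racemic
    return ee * fid_pure          # S contributes with opposite sign

racemic = mixture_fid(0.5)        # vanishing chiral signal
three_to_one = mixture_fid(0.75)  # half the pure-enantiomer amplitude
```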
For this to be achieved, the experimental setup must contain two crucial components. The first is that the setup consists of antennae optimally oriented to propagate/detect electric fields in each of the three dimensions. The second is that two of those antennae deliver a specialized pulsing scheme consisting of a π/2 pulse followed by a π pulse, where π/2 and π are the Rabi flip angles. However, in practice, not all rotational transitions can achieve the full π/2 condition, because the transition moments for the various angular momentum projections on a space-fixed axis depend on the projection quantum number, M_J [3,5]. Therefore, π/2 and π have become terminology for the microwave pulse durations needed for maximum coherence and double maximum coherence (resulting in no traditional rotational signal), respectively. In previous works, it was also shown that, in order to maximize the probability of the enantiomeric separation signal with M3WM and minimize spatial degeneracy influences, it is best to follow an RQP-branch (i.e., ∆J = 1, 0, −1) loop rather than a QQQ or PQR cycle and to minimize the loop J-states [6,7]. Lastly, because the chiral signal is proportional to the population difference and the transition dipole moment, but sufficient microwave power is generally available, it is best to follow a scheme that starts with the largest frequency difference on the weakest dipole moment component and ends with monitoring a transition corresponding to the strongest dipole moment component, also at a high frequency difference [3].
Since the discovery of M3WM, there have been many subsequent experimental and theoretical works showing how such approaches can be utilized to distinguish between enantiomers [8][9][10][11][12][13][14], provide enantiomeric excess (ee) information, including for mixtures [15], and demonstrate or suggest methodologies for selectively choosing or building up the population of one chiral species over another (chiral quantum coherent control) [16][17][18][19][20]. In all known M3WM experimental setups using arbitrary waveform generators (AWGs), however, no setup has utilized multiple AWG sources. Multiple AWGs, or very fast digital-to-analog converter channels on a single AWG, will most likely be needed to pursue chiral quantum coherent control techniques, as the microwave pulse schemes are generally complex or involve synchronous pulses. Furthermore, due to the spatial degeneracy influences, microwave power costs, and required pulse schemes mentioned above, M3WM experiments typically include at least one excitation/listening component in the radiofrequency region (i.e., ≲3 GHz). In this work, therefore, we present the construction and demonstration of the first known M3WM experiment utilizing multiple synchronized AWG sources entirely operable in the 6-18 GHz region of the electromagnetic spectrum.
Materials and Methods
M3WM experiments were carried out using a modified chirped-pulse Fourier transform microwave (CP-FTMW) spectrometer with multi-antenna detection, which was described elsewhere [21]. A diagram of the instrument setup is presented in Figure 1, with the pertinent differences from the MAD-CP-FTMW experiment described here. As described in reference [21], the MAD-CP-FTMW instrument consists of four antennae arranged in a cross pattern, exactly facing each other, with the supersonic expansion source centered above, pointing at the throat of a diffusion pump. Two of these antennae are located inside the vacuum chamber, and two are located outside. The two external antennae have Teflon windows allowing the microwaves to pass into the chamber, where they are broadcast onto the molecular sample. Using the inside or the outside antennae in traditional CP-FTMW experiments showed no observable difference in line centers. For M3WM instruments, three orthogonal microwave fields are required to maximize enantioseparation efficiency. With our singly polarized antennae (Steatite® QWH-SL-2-18-S-HG-R), this can be achieved by rotating one of the exterior antennae by 90°. The exterior antennae are mounted in such a fashion that they may be rotated up to 270°. The resulting arrangement is similar to that of Lobsinger et al. [3], where three antennae (colored blue, green, and red in Figure 1) are used for the M3WM experiments, and the fourth antenna may be accessed to quickly shift back to traditional CP-FTMW spectroscopy if desired. As shown in Figure 1, two antennae (blue and green) are used for excitation, and one antenna (red) is used for detecting the resultant free induction decay (FID) of the molecular response to the excitation. This differs from traditional CP-FTMW spectroscopy, where only one excitation source is generally utilized. In order to provide the experiment with adequate microwave power, a 1 W microwave power amplifier (Avantek® APT-18649) and a 40 W microwave power amplifier (Microsemi® model L0618-46-T680) were employed for excitation. On the detection antenna, an SPST switch (ATM® PNR S1517D) and a low-noise amplifier (RF-Lambda® RLNA06G18G45) are used to block out all signals except those resulting from the FID, which requires amplification in order to be interpreted by the oscilloscope (Tektronix® DPO 72304DX Digital Phosphor Oscilloscope).
As mentioned previously, M3WM experiments require microwave pulses occurring in a π/2-π coherence scheme. In previous M3WM designs, this was achieved in multiple ways: using a switch on a single AWG [3], using multi-channel AWGs [14,22], invoking a dual-polarized antenna [19], and switching electric fields [2,11,13,15]. The difference between those designs and the design presented in this work is the implementation of two synchronized AWGs to simultaneously generate the microwave coherence pulses needed for the experiment. This provides a specific advantage not possessed by the other approaches, in that the user now has control over the type of pulse (or pulse profile) while also not being limited to one pulse having to finish before another begins. This arrangement allows future experiments to involve excitation pulse schemes with greater flexibility, as the waveforms are written with code instead of manipulated via hardware. The AWGs are synchronized using a synchronization hub (AWGSYNC01), with one AWG acting as the primary unit and the others acting as secondary units. Up to four units may be controlled by the synchronization hub at any given time (see Figure 2 for a picture of the setup), but the M3WM experiments performed here used only two synchronized AWGs. Another unique aspect of the experimental setup is that the entire instrument operates in the 6-18 GHz region of the electromagnetic spectrum. This is not because of the AWGs, but because this is the optimal region for the antennae and power amplifiers.
In total, four M3WM experiments on enantiomeric mixtures of carvone were carried out. These included (A) 5 mL of pure R-carvone, (B) 5 mL of pure S-carvone, (C) a 1:1 mixture of R-carvone (2.5 mL) and S-carvone (2.5 mL), and finally (D) a 3:1 mixture of R-carvone (3 mL) and S-carvone (1 mL). The enantiomers of carvone are presented in Figure 3. The R-carvone (Product No.: A13900, Purity: 98%) and S-carvone (Product No.: L07130, Purity: 96%) samples were manufactured by Alfa Aesar® and obtained through Thermo Fisher Scientific® (Alfa Aesar, Tewksbury, MA, USA 01876). No further purification was performed on the samples after purchase. Quality documentation for the two enantiomers can be found in the Supplementary Materials. Each of the four liquid carvone samples was placed into a heated nozzle reservoir and warmed to 95 °C to promote vaporization [23]. Argon was used as a backing gas, and the sample was introduced at 50 psig. A Parker Hannifin® (Otsego, MI, USA) Series 9 supersonic nozzle pulsed sample into the chamber at a rate of 3 Hz, with 3 FIDs collected per gas pulse. In total, 500,000 FIDs, each 20 µs in length, were averaged together for each experimental run.
The mixing scheme chosen for all four mixtures is presented in Figure 4. Transitions were selected based on the previous rotational study of carvone by Moreno et al. [24]. That study started by determining, theoretically, the three most stable structures of carvone. These structures are labeled EQ1, EQ2, and EQ3. For another, to-be-submitted work on this molecule, we performed optimization calculations of these conformers at the B3LYP/6-311G++(d,p) level using Gaussian09 [25], and these structures are presented in Figure 5. Of these conformers, EQ2 was determined to be the most stable, with the strongest of its transitions being b-type, as the dipole moment component ordering is µb > µa > µc. As a result, it was decided that the optimal mixing scheme would consist of a c-type π/2 "drive" transition, an a-type π "twist" transition, and a b-type "listen" transition, with the antennae colored blue, green, and red corresponding to the transition color scheme in Figure 4. Candidate transition loops fitting this scheme were then determined by utilizing the fitted spectral constants from the Moreno work in SPCAT to predict transition frequencies and quantum numbers.
Figure 5. EQ1, EQ2, and EQ3 conformations of carvone, respectively, as reported by Reference [24]. The conformations are each presented in the principal axis system. EQ2 is the most stable conformer and the one from which the transition cycle of Figure 4 was derived.
The best transition loop for these experiments consisted of a drive pulse centered at 15.8137 GHz, a twist pulse centered at 6.9344 GHz, and a listen transition at 8.8793 GHz. The drive pulse was amplified by the 1 W amplifier for a duration of 7.45 µs and broadcast into the chamber via the external horn (again, colored blue) seen in Figure 1. Secondly, the twisting pulse at 6.9344 GHz was amplified with the 40 W power amplifier and broadcast from the antenna labeled green in Figure 1 for a duration of 300 ns. The resultant listening frequency of 8.8793 GHz was received by the red-colored antenna in the schematic of Figure 1, amplified by the low noise amplifier, and the subsequent FID was recorded. Timings for the drive and twist pulses were achieved by maximizing the coherence signal in a traditional CP-FTMW experiment utilizing the power amplifiers that would be employed in the M3WM experiment. Since maximizing the coherence pulse is assumed to be the π/2 condition, this timing was used directly for the drive pulse and doubled for the twist pulse.
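As background on why doubling the optimized timing yields the π pulse (a standard two-level pulse-area argument, stated here as a sketch rather than taken from the paper): for a resonant pulse of Rabi frequency Ω and duration τ, the accumulated pulse area is θ = Ωτ, so the duration that maximizes the coherence (the π/2 condition) need only be doubled, at the same power, to reach θ = π:

```latex
\theta = \Omega\,\tau = \frac{\mu E}{\hbar}\,\tau, \qquad
\theta_{\pi/2} = \frac{\pi}{2} \;\Longrightarrow\;
\tau_{\pi} = 2\,\tau_{\pi/2} \quad \text{(at equal Rabi frequency)} .
```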
Results
The results found at the listening frequency of each of the four experiments are presented in Figure 6. For experiments A and B, the "pure" R- (98%) and S-carvone (96%), respectively, the signal-to-noise ratio (SNR) was determined to be almost equal: 21:1 and 20:1, respectively. For experiment C, the 1:1 sample mixture, no transition was observed. Lastly, in experiment D, the 3:1 R:S sample mixture resulted in an SNR of 7.8:1.
Discussion
The results of experiments A, B, and C follow precisely what was observed in previous M3WM experiments, as the "pure" R- and S-carvone are almost identical in SNR, and no signal is observed for the racemic mixture after 500,000 averages. The only difference between the two values may be attributed to the fact that 98% and 96% are minimum purities. To validate this further, we obtained a Certificate of Analysis for the samples from the supplier (found in the Supplementary Materials). The R-carvone sample was determined to be 99.3% pure, while the S-carvone sample was 98.9% pure. This slight difference in purity can easily explain the small difference in SNR (21:1 vs. 20:1) between the two samples.
To explain the results of Experiment D, we must first try to understand the specific SNR expected in a 3:1 mixture and then check if these results match with what would be expected. In order to achieve this, we first present some well-understood principles regarding the enantiomeric excess (ee) of a mixture.
The ee of a mixture is defined by the following:

%ee = 100% × (R − S)/(R + S), (1)

where R and S are the masses (or volumes) of the R and S enantiomers. Defined for optical rotation, this value is given as:

%ee = 100% × (α_obs/α_max). (2)

Equation (2) comes from the observation that when an absolute racemic mixture is present, the optical rotation of the light is 0°. Similarly, it was shown that M3WM experiments exhibit no signal for a racemic mixture because the FID signals destructively interfere due to being 180° out-of-phase with one another. By taking this similarity into account, the same form of equation may be used to determine the ee of a mixture using the SNR:

%ee = 100% × (SNR_mix/SNR_pure). (3)

However, we should note that our "pure" samples are not pure but 99.3% and 98.9% for R- and S-carvone, respectively, resulting in slight differences in our observed pure SNRs. A 3:1 mixture of R to S, taking the purities into account, should then give a 50.2% ee of R-carvone (50% just assuming absolutely pure samples). Using Equation (3) for the 3:1 mixture with the results of Experiments A and B, and disregarding the small discrepancies in purities, gives ee values of 39% and 37%, respectively.
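As a quick numerical cross-check of Equations (1) and (3) (a minimal sketch; the 50.2% figure quoted above is reproduced only under the assumption that the small chemical impurities are achiral):

```python
# Cross-check of the ee arithmetic in the text (assumes achiral impurities).
def ee_from_amounts(r, s):
    """Equation (1): enantiomeric excess (%) from amounts of R and S."""
    return 100.0 * (r - s) / (r + s)

def ee_from_snr(snr_mix, snr_pure):
    """Equation (3): enantiomeric excess (%) from M3WM signal-to-noise ratios."""
    return 100.0 * snr_mix / snr_pure

# 3:1 mixture with certificate purities of 99.3% (R) and 98.9% (S):
print(ee_from_amounts(3 * 0.993, 1 * 0.989))  # ~50.2% ee of R-carvone
print(ee_from_snr(7.8, 20.0))                 # 39.0%
print(ee_from_snr(7.8, 21.0))                 # ~37.1%
```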
At first, this result would seem very concerning, as the values are almost 25% off from the accepted certificate of analysis. However, the results of experiments A, B, and C, along with some literature results, provide the basis for the conclusion that the instrument, as constructed, is fully operational. The first consideration is that the uncertainty in ee in previous studies was shown to be approximately ±5% across 10 experiments [22]. Our experiment falls easily within the 3σ confidence interval of that work, being 11% and 13% off, respectively. Furthermore, our SNRs of 7.8:1 and 21:1 or 20:1 are low to begin with, and it has been shown that the accuracy of chirality determination increases greatly when signals are above 50:1 in SNR.
However, this result on its own is not satisfactory, so we investigated the FID and FFT information further to understand these results. The first item undertaken was to look at the real and imaginary FFT portions of the pure R- and S-carvone species. They are presented in Figure 7. If the FID signals are more than 90° out-of-phase, they will be opposite in sign in the imaginary part of the FFT. This is exactly the case; however, there are dispersion signals in the real part of the FIDs, indicating that the signals may not be 180° out-of-phase with one another. The raw FID data, however, contain many man-made signals that overwhelm the traditional microwave signals, even in a common CP-FTMW arrangement. In order to understand the signal, we were only interested in the listen transition at 8.8793 GHz. Thus, we put the signal through a Fourier band-pass filtering process of only allowing ±0.5 MHz around the real and imaginary signal at 8.8793 GHz while blanking the rest of the spectrum. From there, an inverse FFT was employed to reconstruct the time domain FID signal in which we were interested. Those FIDs are presented in Figure 8. There is not much phase information that can be gathered from the signal at this level.
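A minimal sketch of this Fourier band-pass filtering step (hypothetical array names and sampling rate; it assumes a uniformly sampled FID and keeps only ±0.5 MHz around the 8.8793 GHz listen transition before inverting the FFT):

```python
import numpy as np

def fourier_bandpass(fid, dt, f0=8.8793e9, half_width=0.5e6):
    """Blank all FFT content outside f0 +/- half_width, then inverse FFT."""
    spec = np.fft.rfft(fid)                    # complex spectrum (real and imaginary parts)
    freqs = np.fft.rfftfreq(len(fid), d=dt)    # frequency axis in Hz
    spec[np.abs(freqs - f0) > half_width] = 0  # blank everything outside the pass band
    return np.fft.irfft(spec, n=len(fid))      # reconstructed time-domain FID

# Example: a 20 us FID sampled at 50 GS/s (hypothetical digitizer rate).
dt = 1.0 / 50e9
t = np.arange(0.0, 20e-6, dt)
fid = np.sin(2 * np.pi * 8.8793e9 * t) + 0.5 * np.random.randn(t.size)
filtered = fourier_bandpass(fid, dt)
```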
The full FIDs of Figure 8, however, do not provide the resolution to see the oscillations of the 8.8793 GHz signal. Therefore, a representative "zoom-in" of each FID is presented in Figure 9, along with an overlay of the two signals. From the overlay, it is apparent that the signals are not, in fact, 180° out-of-phase, as our real and imaginary FT data implied. In order to show that this spectrometer is fundamentally equivalent to previous M3WM experiments, we need to show that this signal can give more accurate ee measurements than those obtained from the magnitude spectrum alone.
Figure 9. The full R-carvone (top) and S-carvone (middle) zoomed-in FID signals after the mathematical filtering (see text), and an overlay (bottom) of the two zoomed-in FIDs. The FIDs are out-of-phase, but not by 180° as in traditional M3WM experiments.
The first determination, then, is to understand how the ee measurement depends on the phase in an M3WM experiment. For this, Shubert et al. provide the proportionality of Equation (4) [10], where Φ_MW and Φ_RF are the phases of the microwave and radiofrequency pulses typically used for the drive and twist pulses, respectively. This leads to an observed phase of the enantiomers, Φ_obs, at the start of the FID, t_r, as given in Equations (5) and (6) [10]. However, if the drive and twist phases are out-of-phase, then the ±π of Equation (6) will readjust to the observed phase of the enantiomers. Substituting Equations (5) and (6) into Equation (4), we can arrive at a new proportionality, Equation (7), where Φ_OBS1 − Φ_OBS2 is the phase difference between the enantiomers.
Using sine-function mathematical fitting tools on the pure R- and S-carvone FIDs, we determined Φ_OBS1 − Φ_OBS2 to be 136.6° out-of-phase. The sine fitting tool utilizes a specific number of points (128 in our experiments) of the Fourier-filtered FID and fits them to the function f(t) = a sin(bt + c), where a is the amplitude of the FID, b is the frequency of the signal, and c is the phase. The fitting tool utilizes a Marquardt-Levenberg algorithm using the sum of the least-squares deviations as the maximum-likelihood criterion. It should be noted here that all fits had an R² value > 0.99999. The mathematically derived 136.6° was in excellent agreement with a much less rigorous Lissajous plot method employed (135.1°). Previous experience with FID averaging in the CP-FTMW experiment shows that there is an uncertainty of ±0.5 ps in the time domain. By using this, we can now establish a calibration phase discrepancy of 136.6 ± 0.3° for the determination of any resultant SNR mixtures. This means we need to adjust Equation (3) to:

%ee = 100% × (SNR_mix/SNR_pure)/cos(180° − (Φ_OBS1 − Φ_OBS2)). (8)

Using Equation (8), then, for the 3:1 mixture and using the results of Experiments A and B gives %ee of 53.7 ± 0.3% and 50.9 ± 0.2%, respectively. These values are in much better agreement with the exact value of 50.2% ee given earlier using the certified values and certainly agree with the reported 5% ee uncertainty in an M3WM experiment (using the S-carvone SNR values, we are <1% off the accepted value) [22].
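A sketch of the phase-extraction fit described above (hypothetical variable names; SciPy's curve_fit defaults to a Levenberg-Marquardt least-squares optimizer for unconstrained problems, matching the fitting criterion named in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, a, b, c):
    """f(t) = a*sin(b*t + c): amplitude a, angular frequency b, phase c."""
    return a * np.sin(b * t + c)

def fid_phase(t, fid, f0=8.8793e9, npts=128):
    """Fit npts points of a Fourier-filtered FID and return the phase c (radians)."""
    p0 = [np.max(np.abs(fid[:npts])), 2 * np.pi * f0, 0.0]  # start near the listen frequency
    (a, b, c), _ = curve_fit(sine, t[:npts], fid[:npts], p0=p0)
    return c

# Phase difference between the two enantiomers' filtered FIDs (hypothetical arrays):
# dphi = np.degrees(fid_phase(t, fid_R) - fid_phase(t, fid_S))  # ~136.6 deg in this work
```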
The last question that requires addressing is how the relatively large twist frequency is possible. Patterson and Doyle explain that the twist frequency satisfies ν_twist ≤ c/4L, where c is the speed of light and L is the characteristic length of the sample [26]. The solenoid valve employed for the supersonic beam is 0.8 mm in diameter. Moreover, it was shown that multiple nozzle beams interact when placed within 20.5 cm of each other. This gives 0.08 cm ≤ L ≤ 20.5 cm for a scenario with one beam. This beam is not skimmed or collimated in any way. Using c = 3.0 × 10¹⁰ cm/s, 0.366 GHz ≤ ν_twist ≤ 93.75 GHz. However, it was shown that the 180° signal persists up to 4 GHz, as demonstrated by both Schnell [10] and Pate [3], but starts to become considerably out-of-phase at our value of 6.9344 GHz, perhaps starting to show that the characteristic length of the sample with one supersonic nozzle in the interaction zone is on the order of 10 cm or less. This, along with the tracking of the explicit phase of the pure signals using a calibrant, really allows for any twist frequency to be used as long as the phase discrepancies are tracked, as it is well documented that Φ_listen = Φ_drive + Φ_twist [27].
We conclude, therefore, that this instrument, although different in multiple ways from previous M3WM experiments, was fully developed and demonstrated to be comparable to or better than other known M3WM techniques when Fourier-filtering techniques are leveraged. | 8,023.4 | 2022-04-19T00:00:00.000 | [
"Physics",
"Chemistry"
] |
Representation Learning for EEG-Based Biometrics Using Hilbert–Huang Transform
A promising approach to overcome the various shortcomings of password systems is the use of biometric authentication, in particular the use of electroencephalogram (EEG) data. In this paper, we propose a subject-independent learning method for EEG-based biometrics using Hilbert spectrograms of the data. The proposed neural network architecture treats the spectrogram as a collection of one-dimensional series and applies one-dimensional dilated convolutions over them, and a multi-similarity loss was used as the loss function for subject-independent learning. The architecture was tested on the publicly available PhysioNet EEG Motor Movement/Imagery Dataset (PEEGMIMDB) with a 14.63% Equal Error Rate (EER) achieved. The proposed approach's main advantages are subject independence and suitability for interpretation via created spectrograms and the integrated gradients method.
Introduction
Password-based authentication is being replaced by a more reliable biometric-based authentication [1]. Biometric-based authentication uses a person's unique biological characteristics for recognition. Some of the most commonly used biometric traits are a finger or palm print, the iris pattern, the timbre and spectral images of the voice, facial images, handwritten signatures, or regular handwriting [2]. Some requirements must be met for biometrics to be applicable in a real-world setting. In particular, the biometric trait must be universal, persistent, and easy to measure, and biometric-trait-based identification systems must have high performance and recognize the identity with sufficient accuracy for practical applications [3]. Most biometric authentication systems also require the user to be physically present for authorization [4]. Arguably, the most important advantage of biometric authentication is that the user experience is usually convenient and fast [5]. Modern smartphones use fingerprint and facial recognition systems, which work fairly quickly for the end-user and partially bypass the problem of forgetting a password. Among the biometric authentication systems that have not yet become widespread, we can highlight those that rely on the use of EEG data.
EEG-based systems currently have many advantages over traditional methods and have attracted considerable research interest [6]. At this point, biometric EEG signals cannot be easily replicated, ensuring that the user is alive and well, making it a more reliable choice for identity verification, although the possibility of EEG signals being faked or compromised still exists [7]. EEG data can be used not only for authentication, but also for other purposes (emotion recognition, sleep, and health studies). In [8], the researchers created a new automated sleep staging system based on an ensemble learning stacking model that integrates Random Forest (RF) and eXtreme Gradient Boosting (XGBoosting), achieving 90.56% accuracy. In [9], EEG data from six electrodes were used to detect stroke patients with the C5.0 decision tree machine learning method achieving 89% accuracy. In [10], support vector machine was also used to distinguish stroke patients from healthy subjects (98% accuracy using only two electrodes versus 95.8% accuracy achieved in [11] using electrocardiogram (ECG) data and the random tree model). EEG data can also be used for the classification of Parkinson's Disease (PD), as shown in [12] (the authors used Discriminant Function Analysis (FDA) and achieved 62% accuracy on EEG data alone and 98.8% accuracy combining EEG and Electromyogram (EMG) data). The classification of patients vs. controls for the diagnosis of PD in [13] was performed using a 13-layer neural net (88.2% accuracy). The multifunctionality of EEG data can help improve the reliability of an authentication system based on EEG data. For example, EEG data can change depending on the state and emotions of the user [14], which provides some protection in case the user is forcibly being scanned in a life-threatening situation. State-of-the-art methods (a dynamical graph convolutional neural network in [15], random forest in [16], k-NN in [17]) can classify emotions using EEG data with more than 80 % accuracy [18]. Multiple biometric data, such as facial recognition, can be used for surveillance without notifying the user, but in the case of EEG data, data extraction stops when the device is removed from the head [19].
At present, there are many studies on subject recognition using EEG data and machine learning methods. The first such study was conducted by the University of Piraeus in 1999. EEG signals were collected on a single monopolar channel using a mobile EEG device and used to train a vector quantizer network. The accuracy of the trained network was 72-84% [20]. In [21], the researchers used the k-Nearest-Neighbors (k-NN) algorithm and Linear Discriminant Analysis (LDA) to classify data from twenty participants, who were asked to perform two different tasks during signal capture: a hand movement task or an imaginary hand movement task. Accuracy ranged from 94.75% to 98.03%. In [22], a four-level (two convolutional layers and two pooling layers) Convolutional Neural Network (CNN) was used. Thirty subjects were recruited for the experiment. During the first task, participants were asked to remember their faces; during the second task, participants were asked to perform 10-12 eye blinks. The accuracy of this approach was 97.6%.
EEG-based subject-dependent recognition achieved practically perfect accuracy using a single recording session (3.9% EER in [22] using CNN and eye-blinking signals coupled with EEG signals, 99.8% accuracy in [23] using LDA and k-NN). However, the systems that achieve such high accuracy are of little use in real life for two reasons:
1. Most researchers use EEG data from only one data acquisition session, without considering the possibility of the signal being non-stationary;
2. These approaches work only with a fixed list of users (subject-dependent).
Some researchers have tried to study and solve the first problem described above: non-stationarity. Reference [24] collected longitudinal EEG data (throughout a year) and found that, when only single-session data are used, system classification performance may generalize over session-specific recording conditions rather than over a person's individual EEG characteristics, achieving 90.8% Rank-1 identification accuracy over multiple sessions. Unfortunately, the collected dataset is not publicly available. In our work, we did not try to solve the first problem and used a dataset with only one recording session.
Regarding the second problem, subject dependency, all previous works had a fixed subject list output. In practical cases, the network should be able to recognize signals it has not encountered before in order to recognize a threat. It is possible to try to work around the problem by building separate classifiers for each user, but this is still impractical since training requires a fairly large amount of time. A subject-independent network has no classes at all. Instead, it takes data from two electroencephalogram signals, converts them into two feature vectors, and compares the distance between them to a certain threshold value. Recently, Reference [24] also considered the subject-independent classification approach, where system classification performance was tested using the leave-one-group-out methodology (the data of one of the users were not present in the training fold and were present only in the test fold) [25]. In [26], a subject-independent classifier achieved the best validation result using the eyes-open (5.9% EER) and eyes-closed (7.2% EER) states' data (multiple sessions) and 31 s verification phase data. Still, their architecture relied on one-dimensional convolutions performed over downsampled time series data, and the output process of the system was difficult for the average person to interpret, explain, or draw conclusions about, thus creating a new problem: the interpretability of deep learning systems.
Which frequencies contribute the most to the system's output and distinguish its data from that of another subject? To partially solve this problem, we propose to use Hilbert spectrograms (obtained using the Hilbert-Huang transform and Empirical Mode Decomposition (EMD)) as the input and a publicly available dataset, the PhysioNet EEG Motor Movement/Imagery Dataset. Empirical mode decomposition with hand-crafted features has already been applied [27] on the PhysioNet EEG Motor Movement/Imagery Dataset (95.64% accuracy in the subject-dependent scenario, where each subject receives a separately built classifier). We also propose to apply an explainable artificial intelligence method, integrated gradients [28]. Such a method can increase user confidence in authentication system output, validate existing knowledge, question existing knowledge, and generate new assumptions [29].
In this paper, we propose a subject-independent learning method for EEG-based biometrics using Hilbert spectrograms of the data. The proposed neural network architecture treats a spectrogram as a collection of one-dimensional series and applies one-dimensional dilated convolutions over them, and a multi-similarity loss was used as the loss function for subject-independent learning. The architecture was tested on the PhysioNet EEG Motor Movement/Imagery Dataset (PEEGMIMDB) [30] with a 14.63% Equal Error Rate (EER) achieved. The proposed approach's main advantage is the suitability for interpretation via Hilbert spectrograms and the integrated gradients method. The main contributions of this study are as follows:
• The subject-independent neural network architecture for EEG-based biometrics using Hilbert spectrograms of the data as the input (trained using the multi-similarity loss);
• The use of the integrated gradients method for the proposed architecture's output interpretation.
Dataset
The PhysioNet EEG Motor Movement/Imagery Dataset, containing 1 min and 2 min recordings of 109 people, from [30] was used. Subjects performed different motor/imagery tasks (4 tasks, 2 min EEG recordings); EEG recordings were also taken in the eyes-open and eyes-closed resting states (1 min recordings).
Signal Processing
Initially, the EEG recordings were sets of 64 time series (from 64 electrodes), recorded using the BCI2000 system with a 160 Hz sampling rate. The data were divided into epochs of 5 s in duration (see Figure 1). To perform such a split and to process the dataset, we used the MNE Python toolkit [31]. We also used data from only 8 channels (O1, O2, P3, P4, C3, C4, F3, F4) to reduce the computational complexity, as [27] showed no significant classification performance drop after using only those 8 channels. We also used EEG data for only the eyes-open and eyes-closed states, as these showed the best result in [26] and can be considered more practical from a consumer point of view (less time to authenticate the user, while not requiring him/her to perform specific tasks other than being still and resting). After such preprocessing, we had the epoched dataset ready for spectrogram computation. To obtain the EEG signal spectrograms, we used the Hilbert-Huang Transform (HHT). In [32], it was concluded that the Hilbert-Huang transform can help eliminate noise from the EEG signal; the HHT is the most suitable method to process signals such as brain electrical signals and, at the same time, has excellent time-frequency resolution, so the HHT is well suited to analyzing non-stationary signals. As a result of the Hilbert-Huang transform's first stage, the signal was decomposed into empirical modes. The Hilbert transform was subsequently applied to the selected modes in the decomposition. This transform allows an effective decomposition of non-linear and non-stationary signals, which is especially useful in the case of EEG. The transformation also does not require an a priori functional basis; the basis functions are set adaptively from the data by the empirical mode selection procedure. An example of the EEG signal decomposition into empirical modes is shown in Figure 2.
After calculating the instantaneous frequencies from the derivatives of the phase functions given by the Hilbert transform of the basis, the result can be represented in frequency-time form. Given the Nyquist-Shannon sampling theorem and the 160 Hz sampling rate, we used 60 frequency bins from 0.1 Hz to 60 Hz. The resulting spectrogram had the shape of [60 frequency bins, 801 points]. An example of the EEG signal transformation in the form of a spectrogram is shown in Figure 3. In order to prevent the mode mixing problem [33], we used the masked sifting method [34], implemented in the EMD Python package [35].
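A minimal sketch of this spectrogram pipeline, assuming the emd package's mask_sift for the masked sifting [35] and SciPy's Hilbert transform for the instantaneous frequency and amplitude (the binning is done manually with NumPy to keep the sketch version-agnostic; the actual pipeline may have used the emd package's own spectrum helpers):

```python
import numpy as np
import emd                       # EMD Python package used for masked sifting
from scipy.signal import hilbert

FS = 160                                  # PEEGMIMDB sampling rate, Hz
FREQ_EDGES = np.linspace(0.1, 60, 61)     # 60 frequency bins from 0.1 to 60 Hz

def hilbert_spectrogram(x):
    """x: one 5 s EEG epoch -> [60 frequency bins, len(x) points] amplitude map."""
    imfs = emd.sift.mask_sift(x)                            # masked sifting avoids mode mixing
    analytic = hilbert(imfs, axis=0)                        # analytic signal of each IMF
    ia = np.abs(analytic)                                   # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic), axis=0)
    inst_f = np.gradient(phase, axis=0) * FS / (2 * np.pi)  # instantaneous frequency, Hz
    spec = np.zeros((len(FREQ_EDGES) - 1, x.size))
    for ti in range(x.size):                                # accumulate IMF amplitudes per bin
        bins = np.digitize(inst_f[ti], FREQ_EDGES) - 1
        for k, b in enumerate(bins):
            if 0 <= b < spec.shape[0]:
                spec[b, ti] += ia[ti, k]
    return spec
```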
The spectrograms of EEG channel data that we obtained in the previous step were essentially two-dimensional maps. These two dimensions represent fundamentally different units of measurement, one being the frequency power and the other time. Therefore, the spatial invariance that two-dimensional CNNs provide may not be suitable for our task. It is better for us to represent spectrograms as a set of stacked time series for different frequency bins [36]. As such, we additionally reshaped the data to 60 time series with 801 points (Figure 4) and stacked the time series over all channels (such a transform can be easily reversed in case we want to use the integrated gradients method), and we also applied min-max normalization over the (time series × channel) dimension. No further processing, such as noise removal or band-pass filtering, was applied. The resulting shape was thus [480 stacked time series (60 per channel × 8 channels), 801 points] per epoch.
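A sketch of the stacking and normalization step (hypothetical array names; the per-series min-max scaling shown here is one reading of "normalization over the (time series × channel) dimension"):

```python
import numpy as np

def stack_and_normalize(channel_specs):
    """channel_specs: [8 channels, 60 frequency bins, 801 points] for one epoch.
    Returns [480 stacked series, 801 points], min-max normalized per series."""
    series = channel_specs.reshape(-1, channel_specs.shape[-1])  # reversible stacking
    lo = series.min(axis=1, keepdims=True)
    hi = series.max(axis=1, keepdims=True)
    return (series - lo) / (hi - lo + 1e-12)                     # guard against flat series
```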
Deep Learning Methods
One-dimensional dilated convolutions can be successfully utilized to classify time series and are more computationally efficient than LSTM blocks [37]. We propose the multichannel dilated one-dimensional convolutional net architecture described in Table 1 to generate feature vectors from the data. We used metric learning methods to map the data to an embedding space, where similar data are close together and dissimilar data are far apart [38]. In general, this can be achieved using specific embedding and classification losses such as the triplet loss [39], ArcFace Loss [40] or multi-similarity loss [41]. In this work, we used multi-similarity loss and the metric-learning framework [38] implemented in PyTorch.
The first convolution layer uses padding in such a way that the input data shape is preserved (except the channels' dimension) to correctly process the edge values. We also used the Parametric Rectified Linear Unit (PReLU) as the activation function, because [42] showed that it can outperform the Rectified Linear Unit function (ReLU).
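A hedged sketch of such a network (the exact layer sizes of Table 1 are not reproduced here; the "same"-padded first layer, PReLU activations, dilated one-dimensional convolutions, and the 128-unit l2-normalized embedding follow the text, while the widths and depth are illustrative assumptions; MultiSimilarityLoss comes from the pytorch-metric-learning package):

```python
import torch
import torch.nn as nn
from pytorch_metric_learning.losses import MultiSimilarityLoss

class DilatedEmbedder(nn.Module):
    """Maps stacked spectrogram series [batch, 480, 801] to 128-d embeddings."""
    def __init__(self, n_series=480):
        super().__init__()
        self.net = nn.Sequential(
            # Padding chosen so the time length is preserved (edge values handled).
            nn.Conv1d(n_series, 64, kernel_size=3, padding=1), nn.PReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=2, padding=2), nn.PReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=4, padding=4), nn.PReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 128),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), p=2, dim=1)  # l2-normalized vectors

model = DilatedEmbedder()
loss_fn = MultiSimilarityLoss()
# embeddings = model(batch); loss = loss_fn(embeddings, subject_labels)
```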
Model Interpretation
Improving the interpretability of deep models is a critical task for machine learning. One method for solving this problem is to identify the portions of the input data that contribute most to the final model output. However, existing approaches have several drawbacks, such as poor sensitivity to and instability in the specific implementation of the model. Reference [28] discussed two axioms: sensitivity and implementation invariance, which they believe a good interpretation method must satisfy.
The sensitivity axiom means that if two images differ by exactly one pixel (but they have all other pixels in common) and give different predictions, the interpretation algorithm should give a non-zero attribution to that pixel. The axiom of implementation invariance means that the basic implementation of the algorithm should not affect the result of the interpretation method. Researchers have used these principles to develop a new attribution method called integrated gradients.
IG starts with a base image (usually a completely darkened version of the input image) that increases in brightness until the original image is restored. Gradients of class estimates for the input pixels are computed for each image and averaged to obtain a global importance value for each pixel. Besides the theoretical properties, IG thus also solves another problem with vanilla gradient ascent: saturated gradients. Since the gradients are local, they do not reflect the global importance of pixels, but only the sensitivity at a particular input point. By changing the image brightness and calculating gradients at different points, IG can obtain a more complete picture of the importance of each pixel. In our work, we used the PyTorch-based Captum [43] framework implementation of integrated gradients and call the output of the integrated gradients an importance map. The block diagram featuring all output steps is shown in Figure 5. Figure 5. The proposed method framework.
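A minimal Captum sketch for producing such an importance map (a sketch under an assumption: since the embedding network outputs feature vectors rather than class scores, the scalar being attributed here, the similarity to a reference embedding, is one possible choice, not necessarily the one used in this work):

```python
import torch
from captum.attr import IntegratedGradients

class SimilarityHead(torch.nn.Module):
    """Wraps the embedder so IG attributes a scalar: similarity to a reference."""
    def __init__(self, embedder, reference):
        super().__init__()
        self.embedder, self.reference = embedder, reference

    def forward(self, x):
        return (self.embedder(x) * self.reference).sum(dim=1)  # cosine (vectors are l2-normed)

ig = IntegratedGradients(SimilarityHead(model, reference_embedding))  # reference_embedding: hypothetical
baseline = torch.zeros_like(inputs)                # the "darkened" baseline analogue
attr = ig.attribute(inputs, baselines=baseline, n_steps=50)
# Assuming channel-major stacking, recover [batch, channel, freq, time] and marginalize:
per_freq = attr.reshape(-1, 8, 60, 801).sum(dim=(1, 3))  # importance per frequency bin
```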
Model Training
To test the architecture's performance, we used the leave-k-groups-out (the data of multiple users are not presented in the training set and are present only in the testing set) validation methodology. GroupKFold (with k = 5) from the scikit-learn package [44] was used as an iterator variant with non-overlapping groups. The same group would not appear in two different CV testing sets/folds (the number of distinct groups has to be at least equal to the number of folds). The folds were approximately balanced (the number of distinct groups was approximately the same in each fold). There were 22 (21 in the last fold) subjects' data appearing only in the test fold during each CV iteration. Each epoch, 10 data samples per class in the training fold were randomly selected, forming batches. For model training, we used the Adam optimizer (lr = 1 × 10⁻⁴, weight_decay = 1 × 10⁻³, 500 epochs).
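A sketch of the cross-validation split and training loop (hypothetical arrays X, y, and subject_ids; MPerClassSampler from pytorch-metric-learning stands in for the "10 samples per class" batching described above):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import GroupKFold
from pytorch_metric_learning.samplers import MPerClassSampler

gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=subject_ids):
    # ~22 subjects' data appear only in the test fold of each CV iteration.
    ds = TensorDataset(torch.as_tensor(X[train_idx], dtype=torch.float32),
                       torch.as_tensor(y[train_idx]))
    sampler = MPerClassSampler(y[train_idx], m=10)   # 10 samples per class per batch
    loader = DataLoader(ds, batch_size=100, sampler=sampler)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
    for epoch in range(500):
        for xb, yb in loader:
            loss = loss_fn(model(xb), yb)
            optimizer.zero_grad(); loss.backward(); optimizer.step()
```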
After training, we generated 128-unit l2-normalized feature vector representations of the input data and computed the cosine distance matrix for the generated representations. After this, the sklearn [44] classifier CalibratedClassifierCV (using LinearSVC as a base estimator) was used to calculate the confusion matrix over different distance thresholds. In such a way, we could obtain the Equal Error Rate (EER), which is a metric widely used in state-of-the-art EEG-based verification systems [45]. The EER is the location on a Detection Error Tradeoff (DET) curve where the false acceptance rate and the false rejection rate are equal. In general, the lower the equal error rate value, the higher the accuracy of the biometric system. The obtained EER value was 14.63%. The feature space with training fold samples is visualized in Figure 6 using the t-SNE method [46].
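A common way to compute the EER from pairwise cosine distances (a sketch, not the CalibratedClassifierCV pipeline described above; dist and same_subject are hypothetical arrays of pairwise distances and genuine/impostor labels):

```python
import numpy as np
from sklearn.metrics import roc_curve

# same_subject: 1 for genuine pairs, 0 for impostor pairs; dist: cosine distances.
fpr, tpr, thresholds = roc_curve(same_subject, -dist)  # negate: smaller distance = more genuine
fnr = 1 - tpr
i = np.nanargmin(np.abs(fnr - fpr))                    # threshold where FAR == FRR
eer = (fpr[i] + fnr[i]) / 2                            # ~0.1463 reported in this work
```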
The hardware used in this study consisted of one Nvidia Tesla T4 GPU card (320 Turing Tensor cores, 2560 CUDA cores, and 16 GB of GDDR6 VRAM), one 8-core CPU, and 64 GB of RAM. The DNN model was trained using the GPU implementation of PyTorch, while all other processes used the CPU. The Python programming language was used for the present study. Along with it, some libraries in addition to the ones already mentioned before were also employed: Keras [47], NumPy [48], Matplotlib [49].
Model Interpretability
After training, the integrated gradients method can be applied to the model. An example output is shown in Figure 7. The integrated gradients method output in our case can be summed over the time dimension or the channel dimension. Figures 8 and 9 show the integrated gradients method output for spectrograms of four subjects, summed over the time dimension. Here, Channels 1-8 correspond to the (O1, O2, P3, P4, C3, C4, F3, F4) channels. It can be clearly seen which channels and frequencies were more important for the model feature vector output. Figure 8 demonstrates that there was a large variability within the same class and a small separation between two different classes (they look alike). We can additionally sum importance maps over the channel dimension to see which frequencies are more important for the model feature vector output and to more clearly visually distinguish importance maps for each class (see Figures 10 and 11).
Discussion
The proposed architecture was tested on the publicly available PEEGMIMDB dataset with a 14.63% Equal Error Rate (EER) achieved. It had a worse EER value than in [26] (Single-Session Enrollment (SSE) and Short Time Distance (STD) with deep representations with channel-specific CNN modeling achieved an 8.1% EER and a 6.8% EER for the eyes-closed and eyes-open states, respectively; the dataset used is not publicly available), which may be attributable to the different numbers of subjects in the datasets (109 in our case vs. 50 subjects in [26]), but our proposed approach's main advantage is its suitability for interpretation via the created spectrograms and the integrated gradients method (we operated on spectrograms in the time-frequency domain, and Reference [26] operated only in the time domain). In some cases, the difference cannot be clearly seen, as in Figure 8. However, we can additionally sum importance maps over the channel dimension to see which frequencies are more important for the model feature vector output and to more clearly visually distinguish importance maps for each class (see Figures 10 and 11).
Conclusions
The proposed neural network architecture treats the Hilbert spectrogram as a collection of one-dimensional series and applies one-dimensional dilated convolutions over them. A multi-similarity loss was used as the loss function for subject-independent learning. The architecture was tested on the publicly available PEEGMIMDB dataset with a 14.63% Equal Error Rate (EER) achieved. Our proposed approach's main advantage is its suitability for interpretation via the created spectrograms and the integrated gradients method (we operated on spectrograms in the time-frequency domain, and Reference [26] operated only in the time domain). Future work will focus on using the Hilbert holospectrum to improve system accuracy. Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data available in a publicly accessible repository The data presented in this study are openly available in PhysioNet repository at DOI: 10.13026/C28G6P, reference number [50]. | 4,718 | 2022-03-20T00:00:00.000 | [
"Computer Science"
] |
Neurospace Mapping Modeling for Packaged Transistors
This paper presents a novel Neurospace Mapping (Neuro-SM) method for packaged transistor modeling. A new structure consisting of the input package module, the nonlinear module, the output package module, and the S-Matrix calculation module is proposed for the first time. The proposed method can develop the model using only the terminal signals, instead of the internal and physical structure information of the transistors. An advanced training method utilizing different parameters to adjust the different characteristics of the packaged transistors is developed to make the proposed model match the device data efficiently and accurately. Measured data of a radio frequency (RF) power laterally diffused metal-oxide semiconductor (LDMOS) transistor are used to verify the capability of the proposed Neuro-SM method. The results demonstrate that the novel Neuro-SM model is more accurate and efficient than existing device models.
Introduction
With the development of electronic technology, accurate computer-aided design (CAD) models of transistors play a decisive role in circuit/system design with high performance and reliability [1,2]. The transistor in a circuit system contains not only the active cells, but also passive structures such as the encapsulating package circuit. As the operating frequency increases, the presence of package components cannot be neglected, because the package parasitics influence the transistor performance [3,4]. In order to predict the electrical performance of the packaged transistor, the CAD model must accurately reflect the characteristics of both the active cells and the package circuit.
Package modeling for active/passive devices has been a field of strong interest in recent years [5,6]. The equivalent-circuit-based model of metal-ceramic packages, described in terms of inductances, resistances, and capacitances, was used for radio frequency (RF) and microwave transistors [6,7]. When the equivalent-circuit parameters are simultaneously optimized against the device data, the exact relationship between the voltage and current of the packaged transistor can be obtained. As the device structure becomes complicated, constructing a packaged transistor model in the equivalent-circuit manner becomes inaccurate and time-consuming due to slow trial-and-error processes. Electromagnetic (EM) modeling approaches become essential to realize design accuracy [8,9]. A modeling method based on EM theory was presented in [10] to predict the EM behavior of the three-dimensional construction of a high-power RF transistor with internal matching networks. The tedious calculation of EM simulation is prohibitively expensive, especially when a significant number of the geometric and material parameters have to be adjusted repeatedly [11][12][13].
Recently, Neurospace Mapping (Neuro-SM) techniques have been recognized as useful alternatives to conventional approaches in microwave modeling [14][15][16]. The Neuro-SM model can not only accurately represent the input-output relationship of the device/circuit, but can also be evaluated quickly, reducing the circuit/system simulation cycle [17]. Circuit-based Neuro-SM was first proposed in 2003 and then received wide attention from academia and industry [18]. An evolutionary Neuro-SM modeling technique with high computational efficiency was proposed in [19], which considered not only the voltage mappings but also the current mappings. Reference [20] used a dynamic neural network as the mapping network, and two mapping networks with analytical equations were added to the existing model in [21]. These existing Neuro-SM methods mainly focus on modeling the active cells of the transistor. The problem of package modeling is not addressed in these existing works.
In this paper, we propose a new modeling method for packaged transistors based on Neuro-SM. The proposed method roughly divides the packaged transistor into three parts, the input package circuit, the nonlinear circuit, and the output package circuit, and builds each of them separately. In addition, an advanced training method that makes the novel model match the device data effectively is developed, which avoids the mutual interference of the optimized parameters for the different performances of the model. To verify the availability of the proposed modeling approach, a practical example of modeling an RF power LDMOS transistor is presented.
Proposed Neuro-SM Modeling for Packaged Transistors
2.1. Proposed Neuro-SM Model Structure. Packages of transistors typically contain a metal flange and a dielectric window frame. The transistor is bonded to the die-bond area inside the cavity of the window frame. Metal leads are provided at the input and output sides of the window frame to allow for connection to external circuitry. Based on the physical structure of the packaged transistor, we propose to roughly divide the total structure into three parts: the input package circuit, the nonlinear circuit, and the output package circuit. The proposed Neuro-SM modeling method creates CAD modules for the three parts, respectively, and an additional S-Matrix calculation module is required to associate the three CAD modules as a whole.
There are 4 modules in the novel Neuro-SM model of the packaged transistor: the input package module, the nonlinear module, the output package module, and the S-Matrix calculation module, as shown in Figure 1. The input/output package module represents the performance of the package circuits, which consist of passive components such as bond wires, MOS capacitors, integrated capacitors, and so on. Because the input/output package circuit consists of linear components, the unique input signal of the input/output package modules is the frequency, and the output signals are the real and imaginary parts of S11, S12, and S22. The nonlinear module represents the characteristics of the multiple active cells in the packaged transistor. The nonlinear module is constructed by the existing Neuro-SM modeling method of the literature [22]. Both the DC characteristics and the S-parameter performance of packaged transistors are affected by the nonlinear module. For the nonlinear module, bias voltages and frequency are the input signals, and the real and imaginary parts of the four S-parameters are the output signals. The S-Matrix calculation module plays an important role in calculating the S-parameter matrices of the input package module, the nonlinear module, and the output package module. The output signals of the S-Matrix calculation module are the S-parameters of the modeled object.
Scattering-matrix analysis is applicable to any general microwave circuit configuration when all the circuit components are modeled in terms of their scattering parameters. The S-Matrix calculation module is constructed based on the literature [17]. Here, the S-parameters of the i-th component are used, where i equals 1, 2, and 3, representing the input package module, the nonlinear module, and the output package module, respectively. The connection-scattering matrix represents the relationship between the incident and reflected waves of the connected components. For the proposed model in Figure 1, this matrix is assembled from the S-parameters of the three modules. Inverting it yields the total S-parameters of the combined input package module, nonlinear module, and output package module, that is, the output of the S-Matrix calculation module.
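To make the composition step concrete, a sketch cascading three two-port S-matrices with the standard cascade formulas (this is a generic alternative formulation, shown for illustration only; the paper itself uses the connection-scattering-matrix inversion of [17]):

```python
import numpy as np

def cascade(sa, sb):
    """Cascade two two-port S-matrices (2x2 complex): port 2 of A feeds port 1 of B."""
    d = 1.0 - sa[1, 1] * sb[0, 0]  # loop term at the internal junction
    return np.array([
        [sa[0, 0] + sa[0, 1] * sb[0, 0] * sa[1, 0] / d, sa[0, 1] * sb[0, 1] / d],
        [sb[1, 0] * sa[1, 0] / d, sb[1, 1] + sb[1, 0] * sa[1, 1] * sb[0, 1] / d],
    ])

# Total S-parameters at one frequency: input package -> nonlinear module -> output package.
# s_in, s_nl, s_out are hypothetical 2x2 complex S-matrices produced by the three modules.
# s_total = cascade(cascade(s_in, s_nl), s_out)
```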
Proposed Package Module Structure
There are two reasons for employing a package circuit for RF/microwave transistors. The first is the environmental ruggedness and mechanical strength, which protect the internal circuit of the transistor. The second is to ease external matching-circuit design and improve device performance by adding an internal matching circuit into the package. To achieve high gain or efficiency, many active cells are added to the transistor, which results in more bond wires; MOS capacitors and integrated capacitors are used to make the electrical connections. The complex structure of the package circuit greatly increases the difficulty of modeling.
The package modeling method we propose can be applied to arbitrary packaging structures, because the advanced package module is achieved using only the terminal signals, instead of the internal and physical structure information of the package circuit.
The block diagram of the package module is shown in Figure 2. The frequency is the unique input signal of the package module, which is not excited by the bias voltage. The real and imaginary parts of the S-parameters are the output signals of the package module, with separate outputs for the input package and the output package. In the package modules, S21 is not selected as an output because the reciprocal network satisfies S21 = S12, which reduces the output dimension of the input/output module and simplifies the model structure.
In the proposed package module, a free variable is used together with the phase of each S-parameter; subscripts indicate the port numbers of the input/output package circuit, and superscripts distinguish the input package from the output package. Neural networks are used to represent the nonlinear relationship between the frequency and the four outputs of the network, as represented in Equations (4) and (5), where f_ANN and h_ANN denote multilayer feedforward neural networks, and w1 and w2 are vectors containing all internal synaptic weights of f_ANN and h_ANN, respectively. Let A represent the amplitude of the S-parameters of the package circuit, with subscripts indicating the port numbers. The proposed package module adopts the free variable to calculate the amplitude of the S-parameters, which makes sure that the value of A11 lies between 0 and 1 regardless of the value of the free variable; A11 is computed as represented in Equation (6). In the proposed method, the package circuit is assumed to be lossless, so the squared amplitudes of the return loss and transmission sum to one. The amplitude A and the phase of the S-parameters are obtained from the output parameters of the neural network and Equations (6) and (7). The formula conversion block completes the transformation from the amplitude/phase to the real/imaginary parts of the S-parameters. The real/imaginary parts of the S-parameters are adopted to make the calculation procedure that combines the S-parameters of the package modules and the nonlinear module easier. Moreover, for the same S-parameter, many phase values that are several cycles apart are equivalent, and adjacent phase values can differ greatly due to phase wrapping, which increases the nonlinearity between frequency and phase and makes optimization harder. Therefore, the proposed method adopts the real/imaginary form instead of the amplitude/phase form. Appropriate weights w1 and w2 allow the proposed package module to describe accurately the characteristics of the encapsulated circuit without information about its physical structure.
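For reference, the lossless condition implied above can be stated as the unitarity of the two-port S-matrix amplitudes (a standard relation; the specific parameterization of A11 through the free variable in Equation (6) is not recoverable from the extracted text and is not reproduced here):

```latex
|S_{11}|^{2} + |S_{21}|^{2} = 1
\qquad\Longrightarrow\qquad
A_{21} = \sqrt{\,1 - A_{11}^{2}\,}, \qquad 0 \le A_{11} \le 1 .
```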
In the mapping (8) of the nonlinear module, g_ANN represents a multilayer feedforward neural network and w3 is a vector containing all its internal synaptic weights. The error functions for the DC responses and for the S-parameters are given in (9) and (10), respectively.
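Equations (9) and (10) are referenced but not reproduced in this excerpt; the sketch below shows the plain sum-of-squared-errors criteria they plausibly denote. The function names, array shapes, and data are illustrative assumptions, not the paper's notation:

```python
# Hedged sketch of the DC and S-parameter training criteria (9) and (10).
import numpy as np

def dc_error(i_measured, i_model):
    """Candidate for (9): sum of squared DC-current errors over N samples."""
    return np.sum((np.asarray(i_measured) - np.asarray(i_model)) ** 2)

def sparam_error(s_measured, s_model):
    """Candidate for (10): sum of squared errors over the real/imaginary
    parts of the S-parameters; shape (N, 8) -> R/I of S11, S12, S21, S22."""
    return np.sum((np.asarray(s_measured) - np.asarray(s_model)) ** 2)

# Toy usage with random stand-in data for N = 5 samples.
rng = np.random.default_rng(1)
print(dc_error(rng.normal(size=5), rng.normal(size=5)))
print(sparam_error(rng.normal(size=(5, 8)), rng.normal(size=(5, 8))))
```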
Proposed Training
In (9) and (10), I_meas(·) and I_model(·) denote the DC responses of the packaged-transistor data and of the proposed model, respectively, and S_meas(·) and S_model(·) denote the corresponding S-parameters. The superscript k is the index of the training or test sample, and N is the total number of training or test samples. The parameters to be optimized in the proposed model consist of w1 in the input package module, w2 in the output package module, and w3 in the nonlinear module. The advantage of the proposed modeling approach is its modularity: different parameters control different characteristics. However, existing training methods optimize all neural-network parameters simultaneously and therefore cannot easily find appropriate parameters for the proposed model. To improve optimization efficiency, the proposed training method constructs and trains the packaged-transistor model in the following four steps.
Step 1. Send the bias voltages of the fine model to the mapping network in the nonlinear module. Initialize the weights w3 so that (V1c, V2c) equals (V1f, V2f), which ensures that the performance of the Neuro-SM model does not start out worse than that of the coarse model.
Step 2. Adjust the weights w3 of the mapping network in the nonlinear module by solving (9). Obtain the coarse-model bias voltages V1c and V2c from (8), which makes the Neuro-SM model match the fine model in DC simulation.
Step 3. Adjust the weights w1 and w2 of the neural networks in the package modules by solving (10). Obtain the appropriate parameters θ, φ11, φ12, and φ22 from (4) and (5), which makes the proposed Neuro-SM model match the fine model in S-parameter simulation.
Step 4. Train the proposed Neuro-SM model with DC and S-parameter data simultaneously, fine-tuning the weights w1, w2, and w3 to further improve the performance of the proposed model.
The proposed training method enhances the existing training method by adjusting the parameters in stages. It controls the DC and AC performance of the Neuro-SM model with different weight parameters, which reduces mutual interference between the parameters optimized for different aspects of model performance and avoids changing already-optimized parameters repeatedly.
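The four steps amount to staged optimisation: first the mapping-network weights w3 against the DC error, then the package-module weights w1 and w2 against the S-parameter error, and finally all weights jointly. The toy sketch below shows only this staging pattern, with stand-in quadratic errors; the real objectives would be the Neuro-SM errors (9) and (10), and all names are illustrative:

```python
# Staged optimisation mirroring Steps 1-4, with toy quadratic errors.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
w1, w2 = rng.normal(size=3), rng.normal(size=3)  # package-module weights
w3 = np.zeros(3)   # Step 1: identity-like initialisation of the mapping net

# Stand-in quadratic errors; the real ones would be (9) and (10).
t_dc, t_sp = rng.normal(size=3), rng.normal(size=6)
dc_err = lambda w: np.sum((w - t_dc) ** 2)       # depends on w3 only
sp_err = lambda w: np.sum((w - t_sp) ** 2)       # depends on (w1, w2) only

# Step 2: fit the mapping-network weights w3 against the DC error alone.
w3 = minimize(dc_err, w3, method="BFGS").x

# Step 3: fit the package-module weights w1, w2 against the S-parameter error.
w12 = minimize(sp_err, np.concatenate([w1, w2]), method="BFGS").x
w1, w2 = w12[:3], w12[3:]

# Step 4: joint fine-tuning of all weights on the combined error.
joint = lambda w: dc_err(w[6:]) + sp_err(w[:6])
w_all = minimize(joint, np.concatenate([w1, w2, w3]), method="BFGS").x
print("final joint error:", joint(w_all))
```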
Examples
RF power LDMOS is the technology of choice due to its low power consumption, high ruggedness, and the inherent economic advantages of silicon wafer manufacturing. To verify the accuracy and feasibility of the proposed Neuro-SM modeling method, the I-V and S-parameter characteristics of a packaged LDMOS transistor are modeled [23]. Measurement data of the packaged LDMOS transistor are used as training and test data; the ranges of the training and test data used in this example are shown in Table 1. The proposed Neuro-SM model learns the training data by automatically adjusting the weights of its neural networks. Test data, which differ from the training data, are used to validate the accuracy of the constructed model.
In this example, the Angelov model is used as the existing coarse model. The Angelov model, which can match many types of transistors, is currently considered an excellent nonlinear model, and choosing it as the coarse model improves the general applicability of the new modeling method. The mismatch between the coarse model and the measured data of the LDMOS transistor cannot be ignored, even when the parameters of the coarse model are optimized as far as possible. The input/output package modules are therefore added to the coarse model. Table 2 gives the test errors of the coarse model and the proposed model. This result demonstrates that the novel Neuro-SM method improves on the capabilities of the coarse model.
In order to show the results in more detail, the I-V comparison of the coarse model and the proposed model is shown in Figure 4. Due to the low nonlinearity of the DC characteristic, both the coarse model and the proposed model match the measured data well. However, the accuracy of the proposed model is much higher than that of the coarse model in S-parameter simulation, as shown in Figure 5. These models operate at a bias point (VGS = 2.75 V, VDS = 28 V) that never appears in the training data. The magnitude and phase of the S-parameters from the proposed Neuro-SM model vary with frequency in exactly the same way as the measured data. Because the four S-parameters of the coarse model are controlled by the same set of parameters, it provides only a rough approximation to the fine model. In the proposed model, the package modules respond to frequency and the active (nonlinear) module responds to the bias voltages. The parameters of the proposed modules are independent and control different aspects of the packaged transistor's performance. Therefore, the proposed model contains more free variables and matches all four S-parameters of the fine model well simultaneously.
After being trained with DC data and S-parameter data, both the coarse model and the proposed model are operated in harmonic-balance (HB) simulation to further verify the effectiveness of the proposed modeling methodology. The models work at the bias point (VGS = 2.75 V, VDS = 28 V), a fundamental frequency of 1.805 GHz, a source impedance of 1.535 − j4.232 Ω, and a load impedance of 1.403 − j3.748 Ω. The input power ranges from 4.5 to 18.5 dBm in steps of 2 dBm, which keeps the LDMOS transistor in this example in its linear region. The comparison of gain and power-added efficiency (PAE) between the coarse model and the proposed model is shown in Figure 6, demonstrating that the HB response of the proposed Neuro-SM model is much closer to the measured data than that of the coarse model. This result provides a good foundation for large-signal modeling in future work.
Conclusions
A new Neuro-SM modeling approach has been proposed for packaged transistors. The novel model structure accurately reflects the characteristics of both the active cells and the package circuit, allowing existing models to exceed their current capabilities. The advanced training method avoids repetitive adjustment of the optimization parameters, improving modeling efficiency. Good results are verified by a practical example. In future work, the proposed modeling method can be extended to better capture the large-signal characteristics of packaged transistors. Another potential direction is to apply the proposed method to the trapping behaviour of gallium nitride transistors, meeting the needs of contemporary technology.
Figure 2: Block diagram of the proposed package module. (a) Block diagram of the proposed input package module. (b) Block diagram of the proposed output package module.
2.3. Nonlinear Module Structure

In order to capture the nonlinear characteristics of the active cells in the packaged transistor, the Neuro-SM modeling method of [22] is used. The fictitious model that accurately matches the new measured/simulated transistor data is called the fine model; the existing empirical/equivalent-circuit model is called the coarse model. When the accuracy of the coarse model cannot meet the modeling requirements, the Neuro-SM model, comprising the coarse model and mapping networks, is used to best match the fine model by automatically learning the nonlinear relationship between the signals of the coarse model and those of the fine model. Compared with other space-mapping-based modeling methods, the Neuro-SM model does not require complex parameter extraction to obtain the next iteration point, which greatly reduces model-development time. In the novel Neuro-SM model proposed here, the nonlinear module has the structure shown in Figure 3. In the nonlinear module, when the coarse model operates with the mapped signals (V1c, V2c) instead of the fine-model signals (V1f, V2f), the output current and the S-parameters of the coarse model accurately match those of the fine model. A neural network describes the nonlinear relationship between the coarse-model signals (V1c, V2c) and the fine-model signals (V1f, V2f), as represented in (8).
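A minimal sketch of this signal substitution, assuming the general Neuro-SM pattern (equation (8) is not reproduced here): a small mapping network transforms the fine-model bias voltages into coarse-model bias voltages, after which the unchanged coarse model is evaluated. The coarse model below is a toy I-V expression, not the Angelov model, and the identity-style initialisation mirrors Step 1 of the training method:

```python
# Hedged sketch of the nonlinear module: voltage mapping + coarse model.
import numpy as np

def mapping_network(v_fine, W1, b1, W2, b2):
    """Pattern of equation (8): (v1f, v2f) -> voltage correction via an MLP."""
    h = np.tanh(W1 @ v_fine + b1)
    return W2 @ h + b2

def coarse_model_ids(v1c, v2c):
    """Toy drain-current expression standing in for the coarse model
    (the paper uses the Angelov model here)."""
    return 0.1 * np.tanh(v1c - 1.0) * (1.0 + 0.02 * v2c)

# Step-1-style initialisation: all mapping weights zero, plus a skip
# connection, so (v1c, v2c) = (v1f, v2f) and the Neuro-SM model starts out
# exactly as good as the coarse model.
n_h = 4
W1, b1 = np.zeros((n_h, 2)), np.zeros(n_h)
W2, b2 = np.zeros((2, n_h)), np.zeros(2)

def neuro_sm_ids(v_fine):
    v1c, v2c = v_fine + mapping_network(v_fine, W1, b1, W2, b2)
    return coarse_model_ids(v1c, v2c)

print(neuro_sm_ids(np.array([2.75, 28.0])))  # a fine-model bias point
```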
Figure 4: I-V comparison between measured data, coarse model, and proposed model for the LDMOS example.
Figure 6: Plot of the gain and PAE between measured data, coarse model, and proposed model for the LDMOS transistor.
The main diagonal elements of the scattering matrix are the negatives of the reflection coefficients at the various component ports; the other (off-diagonal) elements are the negatives of the transmission coefficients between the different ports of the individual components.
Table 1: Training data and test data for DC and S-parameter modeling of the LDMOS transistor.
The aim of the training process is to find a suitable set of weights. The training error measures the learning performance of the proposed model, and the test error measures its predictive ability. Training is performed until both the training error, calculated with the training data, and the test error, calculated with the test data, meet the accuracy requirements.
Table 2: Accuracy comparison of the coarse model and the proposed model for DC and S-parameter simulation.
"Engineering",
"Physics"
] |
Incomplete tumour control following DNA vaccination against rat gliomas expressing a model antigen
Background Vaccination against tumour-associated antigens is one approach to elicit anti-tumour responses. We investigated the effect of polynucleotide (DNA) vaccination using a model antigen (E. coli lacZ) in a syngeneic gliosarcoma model (9L). Methods Fisher 344 rats were vaccinated thrice by intramuscular injection of a lacZ-encoding or a control plasmid at weekly intervals. One week after the last vaccination, lacZ-expressing 9L cells were implanted into the striatum. Results After 3 weeks, the tumours in lacZ-vaccinated animals were significantly smaller than in control-vaccinated animals. In cytotoxic T cell assays, lysis rates of >50 % were observed in only a few of the lacZ-vaccinated animals. This response was directed against lacZ-expressing and parental 9L cells but not against syngeneic MADB 106 adenocarcinoma cells. In Elispot assays, interferon-γ production was observed upon stimulation with 9LlacZ and 9L wild-type but not MADB 106 cells; this response was higher for lacZ-immunized animals. All animals revealed dense infiltrates of CD8+ lymphocytes and, to a lesser extent, NK cells. CD25 staining indicated cells possibly associated with the maintenance of peripheral tolerance to self-antigens. All tumours were densely infiltrated by microglia, consisting mostly of ramified cells. Only focal accumulation of macrophage-like cells expressing ED1, a marker of phagocytic activity, was observed. Conclusion Prophylactic DNA vaccination resulted in effective but incomplete suppression of brain tumour formation. Mechanisms other than the cytotoxic T cell responses measured in the generally used in vitro assays appear to play a role in tumour suppression.
Introduction
Malignant gliomas cannot be cured despite technical developments aiding surgical resection techniques, optimised radiotherapy, and novel (local and systemic) chemotherapy. However, we have obtained broad knowledge regarding molecular alterations involved in glioma formation and tumour maintenance. This has fostered the search for novel adjuvant therapies, including gene therapy and immunotherapy, which have been pursued in several preclinical and clinical studies. However, a relevant survival benefit has not been achieved to date [2,4].
Malignant gliomas are highly infiltrative tumours, which contributes to their inevitable recurrence. Specific activation of the immune system has long been regarded as a worthwhile approach to eliminate residual tumour cells by immunosurveillance. Several approaches have been utilised, including adoptive T-cell transfer, peptide immunisation, and vaccination with dendritic cells pulsed with tumour-derived proteins or nucleic acids [9,16].
Polynucleotide (DNA) vaccination represents another straightforward approach with possible advantages. In contrast to immunisation with peptides, DNA vaccination appears to result in stronger cytotoxic, Th1-mediated responses that are regarded as crucial for effective antitumour effects [3,18]. Furthermore, the vaccine, i.e., the expression plasmids, can be readily produced in large quantities, and it is possible to immunise with the whole cDNA, obviating the need to characterise and synthesise specific epitopes. To evaluate this vaccination concept in a brain tumour model, we used a model antigen (E. coli lacZ) for repeated intramuscular vaccination prior to intracerebral implantation of E. coli lacZ-expressing glioma cells in a syngeneic rat model.
Cell lines
The 9L rat gliosarcoma cells were obtained from the Brain Tumour Research Center, University of California, San Francisco, CA, USA. MADB 106 rat adenocarcinoma cells were a kind gift from Dr. Thomas von Hörsten, Medizinische Hochschule Hannover, Germany.
Both cell lines are syngeneic to Fisher 344 rats. 9LlacZ cells had been transfected with the LNPOZ vector (kindly provided by Dr. A.D. Miller, Seattle, WA) containing the E. coli lacZ gene and the neomycin resistance gene. The latter served for selection of stable lacZ-expressing cells with G418 after transfection with Effectene (Qiagen, Hilden, Germany). Cells were cultured in DMEM supplemented with 2 mM L-glutamine, 1,000 mg/l D-glucose, and 2 mM sodium pyruvate (GIBCO BRL Life Technologies, Karlsruhe, Germany; 9L and 9LlacZ cell lines) or RPMI (GIBCO BRL Life Technologies, Karlsruhe, Germany; MADB 106 cell line), each containing 10 % heat-inactivated foetal calf serum and 1 % penicillin/streptomycin (Sigma-Aldrich, St. Louis, MO, USA), at 37 °C in a humid atmosphere with 5 % CO2.
Plasmids

DNA vaccination against the lacZ antigen was performed with a lacZ-containing expression vector (pcDNA3.1/His B/lacZ; Invitrogen, Karlsruhe, Germany). Control animals were injected with the empty expression vector (pcDNA3.1/His B; Invitrogen, Karlsruhe, Germany). To prevent vaccination against the neomycin resistance gene, this gene had been deleted from both plasmids, and vector integrity was confirmed by sequence analysis. Large-scale preparation of plasmid DNA was performed with the EndoFree GigaPrep (Qiagen, Hilden, Germany). DNA was dissolved in 0.9 % sterile saline and stored in aliquots at -20 °C.
Vaccination protocol and tumour cell implantation
Male Fisher 344 rats (250 to 275 g) were purchased from Charles River (Sulzfeld, Germany). Animals were housed according to German Animal Protection Regulations, and permission had been obtained from the local authorities. Weekly vaccinations into both anterior tibial muscles were performed thrice with 100 μg DNA per leg in a volume of 50 μl normal saline. One week after the last vaccination, animals were anaesthetised with 4 % chloral hydrate (1 ml per 100 g) and the heads were mounted into a stereotactic frame (TSE Systems, Bad Homburg, Germany). A burr hole was placed 3 mm lateral and 1 mm anterior to the bregma, and 2 × 10^4 9L/lacZ cells suspended in 5 μl DMEM without supplements were slowly injected into the right striatum with a 10-μl Hamilton syringe. The needle was carefully retracted and the burr hole was sealed with bone wax. After 3 weeks, animals were killed and spleens were removed under sterile conditions and kept in ice-cold RPMI containing 10 % FCS until lymphocyte preparation. Blood was collected from the right atrium, and brains were removed and transferred into anti-freeze medium (Reichert-Jung, Nussloch, Germany), shock-frozen in liquid nitrogen, and stored at -20 °C.
Histology and immunohistochemistry
The tumour volume was calculated from serial hemalaun-eosin-stained sections (200 μm apart) using the formula V = 4/3 × π × 0.125 × (length × height × width), i.e., the tumour was treated as an ellipsoid whose semi-axes are half of each measured dimension. Coronal sections of 10 μm were cut with a 2800 Frigocut cryostat (Reichert-Jung, Nussloch, Germany), and the sections where the tumour first appeared and where it disappeared were identified. For immunohistochemical and X-Gal staining, brain slices were mounted on coated slides (Marienfeld GmbH, Lauda-Königshofen, Germany), air-dried, and stored at -20 °C in aluminium foil.
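Written out (a standard ellipsoid-volume identity, not from the paper), the constant 0.125 is the product of the three factors of 1/2 that turn the measured full dimensions into semi-axes:

```latex
V = \frac{4}{3}\,\pi\,a\,b\,c
  = \frac{4}{3}\,\pi \cdot \frac{L}{2}\cdot\frac{H}{2}\cdot\frac{W}{2}
  = \frac{4}{3}\,\pi \cdot 0.125 \cdot L\,H\,W
  = \frac{\pi}{6}\, L\,H\,W .
```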
The 9L cells expressing the lacZ gene convert X-Gal to 5,5′-dibromo-4,4′-dichloro-indigo, staining the cytoplasm of these cells blue. Brain slices were fixed with 10 % formaldehyde for 10 min and washed twice with PBS. Staining with X-Gal was performed in a moist chamber at 37°C overnight. Slides were washed in PBS twice for 5 min and covered with Aquatex (Merck, Darmstadt, Germany).
Cytotoxic T lymphocyte assay
Spleens were removed under sterile conditions and transferred to a petri dish (Becton Dickinson Labware, Franklin Lakes, NJ, USA) to generate a cell suspension, which was passed through a 70-μm-pore filter (Becton Dickinson). The cell suspension was layered onto Lympholyte M (Cedar Lane, Ontario, Canada), and mononuclear cells were isolated by density-gradient centrifugation at 1,000 g for 20 min. After washing thrice, cells were transferred to a petri dish and allowed to adhere to the bottom at 37 °C for 2 h. In contrast to monocytes/macrophages and B cells, which adhere to plastic, T cells can be collected by aspirating the culture medium after gentle shaking. Cells were washed twice with RPMI, and their viability was determined by trypan blue staining. Typically, 1-2 × 10^8 mononuclear cells per spleen were obtained.
Generation of effector cells for the chromium release assay was performed as described previously [17,20]. In brief, 9L/lacZ cells serving as stimulator cells were seeded in 6-well plates and irradiated with a lethal dose of 40 Gy. T cell-enriched mononuclear cells (5 × 10^6) were added at a ratio of 1:10, which had proved optimal in preliminary experiments, and co-cultures were incubated for 7 days. On days 3 and 5, fresh RPMI (10 % FCS, penicillin G/streptomycin) containing 30 U/ml human recombinant IL-2 (Sigma-Aldrich, Munich, Germany) was added.
On day 7, target cells (9L/lacZ, 9Lwt, or MADB106; 5 × 10^5 cells) were labelled with 200 μCi Na51CrO4 (Amersham-Buchler, Braunschweig, Germany) in 1 ml RPMI containing 10 % FCS in a shaking water bath at 37 °C for 1 h. After washing thrice to eliminate non-incorporated Na51CrO4, cells were counted, and viability rates of >90 % were ensured by trypan blue staining. Target cells (TC) were adjusted to 30,000 cells/ml, distributed on 96-well round-bottom plates (Corning Incorporated, Corning, NY, USA), and left to adhere for 2-3 h before effector cells were added. Effector cells (EC) were collected, counted, and added to labelled TC in fresh RPMI at different ratios (10:1, 20:1, 40:1, 80:1) in triplicate. Contact between target and effector cells was achieved by centrifugation of the plates for 3 min. Plates were incubated at 37 °C for 4 h. The radioactive supernatant containing released 51Cr was soaked up with cotton wool (Scatron Titertek harvesting system; Scatron, Suffolk, UK) and transferred to a gamma counter (Canberra-Packard, Frankfurt, Germany). Spontaneous release of 51Cr was determined from TC without exposure to EC (equivalent to 0 % specific release).
Maximum (100 %) release of 51Cr was determined following TC lysis with 10 % Triton-X detergent. Specific lysis was calculated as (experimental release - spontaneous release) / (maximum release - spontaneous release) × 100 %.

IFN-γ synthesis (Elispot assay)

IFN-γ synthesis by stimulated T cells was determined with a commercially available Elispot kit (Diaclone, Besançon, France) following the protocol provided by the manufacturer, with minor modifications according to Heiser et al. [10]. In brief, PVDF 96-well plates were incubated with an anti-rat IFN-γ antibody (capture antibody) at 4 °C overnight. The next day, freshly isolated spleen cells (5 × 10^5 responder cells per well) were washed, resuspended in RPMI containing 10 % FCS, and co-incubated in the IFN-γ antibody-precoated 96-well plates with 9L/lacZ, 9Lwt, or MADB106 cells (10^5 stimulator cells per well). After incubation at 37 °C for 20 h, the cells were removed from the plate, and a biotinylated anti-rat IFN-γ antibody (detection antibody) was added and detected with streptavidin-conjugated alkaline phosphatase, which converts the substrate BCIP/NBT to a blue dye. Dots were counted using the Bioreader system (BIO-SYS GmbH, Karben, Germany).
Results
Intramuscular polynucleotide vaccination, performed thrice with a lacZ expression plasmid prior to implantation of lacZ-expressing tumour cells, was associated with a strong anti-tumour response. Whereas control-vaccinated animals revealed large tumours (183.7 mm³; SD 99.2), the residual tumours in lacZ-vaccinated animals were significantly smaller (18.9 mm³; SD 13.3; p < 0.05) (Fig. 1). Representative tumours are shown in Fig. 2. Necrotic areas and neovascularisation were observed, but almost no infiltrating tumour growth (Fig. 2s-u). Vaccination with saline (no plasmid) resulted in the formation of tumours similar in size to those of control-vaccinated animals (data not shown). In the lacZ group, one animal died of apnoea during anaesthesia for tumour cell implantation. In the control group, two animals died of excessive tumour growth before the experiment was terminated. In both animals, large tumours were found at autopsy, although exact tumour volumes could not be determined because of post-mortem artefacts.
Immunohistochemical staining revealed strong lymphocytic and microglial cell infiltrates in tumours of all animals (Fig. 2). Regardless of the type of prior vaccination (control vs. lacZ vector), the pattern of infiltration did not differ.
Dense T-cell infiltrates (TCR staining) were found within tumours of both lacZ-vaccinated (Fig. 2a) and control-vaccinated (Fig. 2f) animals; the infiltration was pronounced at the tumour margins and extended into the peritumoral normal brain. Characterisation of the T-cell infiltrates showed predominantly CD8+ T cells (Fig. 2b and g). Staining for CD4 revealed both lymphocytes and dense infiltrates of microglial cells (Fig. 2c and h). All tumours also revealed NK cells, which were less abundant than T cells (Fig. 2e and k). Perforin granula suggesting cytolytic activity of NK cells and T cells could only be detected in single cells (Fig. 2r). All tumours contained CD25+ cells, which probably represent regulatory CD4+ (or CD8+) T cells involved in peripheral tolerance to self-antigens (Fig. 2d and j).
Two types of microglial cells could be distinguished following staining with Iba1, MHC 2, and ED1 (Fig. 2l-q). First and foremost, all tumours were densely infiltrated with microglial cells revealing a ramified phenotype. Such cells could be detected by staining against the MHC 2 or Iba1 antigen (Fig. 2l, m, q). Besides this, small clusters of macrophage-like cells were detected (Fig. 2n-p). These cells stained positive for ED1, which is detected on lysosomal membranes of cells of the mononuclear phagocyte system (Fig. 2n-p). Thus, ED1 appears to mark a microglial subpopulation with phagocytic activity, or macrophages. In general, ED1 staining was most prominent around necrotic areas (Fig. 2n). Nonetheless, islets of ED1-positive cells were also found in non-necrotic areas (Fig. 2o and p).

Fig. 1 Rat glioma tumour volumes 3 weeks after intracerebral implantation of 9LlacZ cells following DNA vaccination. Animals had been vaccinated thrice with an empty expression vector ('control') or a lacZ-encoding vector ('lacZ') at weekly intervals, followed by intracerebral tumour cell implantation. The number of treated animals is indicated. Tumour volumes (mean and standard deviation) were determined from serial coronal sections. *p < 0.001 (Student's t test). In the lacZ group, one animal died of apnoea during anaesthesia for tumour cell implantation. In the control group, two animals died of excessive tumour growth before the experiment had been terminated. Massive tumour growth was verified by histology, although post-mortem tissue artefacts prevented accurate measurements.
X-Gal staining was performed to assess β-galactosidase expression in residual tumours (Fig. 2u). Positive staining suggests that residual tumours in lacZ-vaccinated animals were not due to selection against the lacZ gene (Fig. 2u).
To evaluate cytotoxic T cell activity directed against the implanted tumours, CTL assays were performed with lymphocytes generated from in vitro restimulated spleen cell preparations. As determined by flow cytometry, >70 % of the restimulated cells were CD3-positive lymphocytes (approximately 33 % CD8+ and 52 % CD4+ cells). The restimulated cells contained less than 1 % cells staining positive for NKR-P1, which was used to identify natural killer cells (data not shown). Strong cytotoxic responses with lysis rates of >50 % were only observed in a few animals, all of which had been vaccinated against the lacZ antigen (Fig. 3). This cytotoxicity was specific for both 9LlacZ cells and the parental 9L cell line (Fig. 3). No cell lysis was observed when another syngeneic cell line, MADB 106 rat adenocarcinoma, was used as the target (Fig. 3). Although some lacZ-vaccinated animals revealed strong cytotoxic activity, this did not correlate with tumour size, lymphocytic infiltrates, or CD25 and perforin staining. Cell lysis was T cell receptor (TCR)-dependent, since addition of a monoclonal antibody against the rat TCR (R73; 1:200) 1 h prior to target cell exposure suppressed the release of labelled chromium by >50 % (not shown). Less inhibition was observed with monoclonal antibodies directed against MHC 1 (30 % inhibition), CD8 (20 %), or CD4 (20 %).
Fig. 2 Immunohistochemical characterisation of T-cell infiltrates and microglial cells detected in tumours from control-vaccinated and lacZ-vaccinated animals. Tumours of control-vaccinated and lacZ-vaccinated animals were heavily infiltrated with TCR+ cells (a and f) that were predominantly CD8+ lymphocytes (b and g). The same region also contained moderate NK cell infiltrates (e and k). Staining for CD4 revealed dense infiltrates with cells representing CD4+ lymphocytes and microglial cells, which could also be detected in the peritumoral brain parenchyma (c and h). Microglial cells stained MHC 2 positive (q), and staining for a microglia-specific marker (Iba1) revealed the abundance of microglial cells both within the tumour and in the peritumoral brain parenchyma (l and m). ED1 staining, a marker indicating phagocytic activity, mostly revealed focal expression (n-p), in particular in necrotic regions (n). Perforin, serving as a marker for cytotoxic activity of NK and CD8+ cells, was revealed in only a few positive cells within the tumour (r). The cells staining positive for CD25 probably represent regulatory CD4+ cells involved in the maintenance of peripheral tolerance (d and j). Representative coronal brain sections were stained with haematoxylin and eosin, indicating large tumours in control animals (s) and markedly smaller tumours in the lacZ-vaccinated animals (t and u). The smaller tumours in the lacZ-vaccinated animals stained with X-Gal, indicating that selection against the lacZ antigen had not occurred (u).

Specific responses directed against 9LlacZ and parental 9L cells (but not MADB 106) were also observed in several animals of the control group (Fig. 3). These animals had not been vaccinated against the lacZ antigen but had been exposed to 9LlacZ cells (intracerebral tumours). In control animals, however, the lysis rates remained below 25 % (Fig. 3). Lymphocytes derived from naive animals, neither vaccinated nor exposed to tumour cells, did not elicit cytolytic activity against any of the three target cell lines (Fig. 3).
To further investigate anti-tumour immune responses, IFN-γ synthesised by splenic lymphocytes exposed to 9LlacZ, 9L wild-type, or MADB 106 cells was quantified by Elispot analysis. IFN-γ was produced by lymphocytes from lacZ-vaccinated as well as control-vaccinated animals when stimulated with 9LlacZ or the parental 9L cell line, but not following exposure to syngeneic MADB 106 cells (Fig. 4). The amount of IFN-γ synthesised was higher upon stimulation with 9L than with 9LlacZ cells, independent of the vaccination status (Fig. 4). Although prior lacZ vaccination resulted in higher IFN-γ synthesis with both 9LlacZ and wild-type 9L stimulator cells, this was not statistically significant (p > 0.05; ANOVA). The level of IFN-γ production by lymphocytes of individual animals did not correlate with the size of their tumours.
Discussion
This study demonstrates that intramuscular DNA vaccination against a model antigen (lacZ) suppresses the formation of intracerebral tumours in a syngeneic rat model. Whereas in the control-vaccinated animals large tumours were detected (including two animals that had died from excessive tumour growth), in the lacZ-vaccinated animals only small tumours had formed. We did not quantify the efficacy of vector uptake at the vaccination site. Although this appears unlikely, we cannot rule out that more effective uptake of the lacZ expression plasmid, in conjunction with unspecific (lacZ-independent) immune stimulation, was responsible for the decreased tumour size in the lacZ-vaccinated animals.

Fig. 3 Cytolytic activity (51Cr release) is shown for representative animals of each group. Whereas high levels of 51Cr release (>50 %) could only be observed in lacZ-vaccinated animals (lacZ #1 and lacZ #2), specific but weaker lysis (<25 %) of 9LlacZ and parental 9L cells was also detected in control-vaccinated animals (control #1). DNA vaccination against lacZ did not result in a lacZ-restricted response. CTL activity directed against 9LlacZ cells was always paralleled by a similar response against the parental 9L cell line. However, cytotoxic activity was restricted to the 9L cell line and not observed with another syngeneic cell line (MADB).

Fig. 4 Quantification of IFN-γ synthesis by the Elispot assay performed with splenocytes isolated from control-vaccinated (filled bars) or lacZ-vaccinated (open bars) animals. IFN-γ synthesis was observed in lymphocytes derived from both control-vaccinated and lacZ-vaccinated animals following exposure to 9LlacZ cells or the parental 9L cell line for 20 h. No such response was observed with a syngeneic rat adenocarcinoma cell line (MADB). Although IFN-γ production was higher in lacZ-vaccinated animals, this did not reach statistical significance (p > 0.5; ANOVA). The amount of IFN-γ detected was higher following exposure to 9L cells compared to 9LlacZ cells. Bars indicate standard deviations.
We chose a time point 3 weeks after intracerebral tumour cell inoculation to assess tumour growth, since this was sufficient for the formation of large tumours in the control-vaccinated group. Although significantly smaller tumours were found in the vaccinated animals at this point, tumour formation had not been prevented completely. We cannot rule out that the small tumours detected in the vaccinated animals on the day of sacrifice represent tumour remnants during an ongoing process of tumour rejection. However, it appears more likely that, after completed preventive vaccination, tumour rejection would have occurred directly after tumour cell inoculation (i.e., solid tumour formation would have been prevented altogether). The fact that solid tumours were observed at all argues in favour of an insufficient immune response merely delaying or retarding tumour growth. Others have reported long-term survival in a murine brain tumour model following DNA vaccination [12]. The effect of vaccination and reduced tumour growth on survival time was not investigated in our model.
To investigate the immune mechanisms possibly involved in the observed anti-tumour effects, immunohistochemical staining and immunological in vitro assays (CTL and Elispot) were employed. DNA vaccination was required for lysis rates of >50 % in CTL assays, which, however, were only observed in a few animals. Similarly, IFN-γ synthesis as quantified by Elispot assays was higher in lacZ-vaccinated animals, although this did not reach statistical significance. Thus, DNA vaccination resulted in the priming of specific cytotoxic responses, as expected from previous reports [13,15,21]. Despite vaccination against the lacZ antigen, this response was not restricted to lacZ-expressing cells but included the parental cell line. We did not restimulate the lymphocytes with the parental 9L cell line. Thus, it remains unresolved to what extent restimulation with the antigen proper (lacZ) was required for effective target cell lysis in those animals revealing a strong CTL response. The fact, however, that parental 9L (target) cells were lysed with the same efficacy as 9LlacZ (target) cells suggests that the lacZ antigen proper was not crucial for effective restimulation. A possible mechanism is antigen spreading within the 9L (but not the syngeneic MADB 106) context.
Notably, intracerebral tumour cell implantation following vaccination with a control plasmid also elicited 9L tumour-specific cytotoxic responses in vitro, although this response was weaker. Thus, lacZ vaccination may augment an intrinsic immune response present in the 9L tumour model that occurs independently of prior immunisation. This response was specific for 9L tumours and not observed with another syngeneic adenocarcinoma cell line (MADB 106).
Although a few of the animals showed pronounced responses in both in vitro assays, this response did not correlate with the anti-tumour effect of prior vaccination. Thus, cytotoxic T cell responses not detectable by CTL and Elispot assays, or even unidentified effector mechanisms, may play a role in tumour suppression. This is supported by the fact that the immunohistochemical staining patterns of the different treatment groups were indistinguishable. Tumour size did not correlate with the degree of lymphocytic infiltration or activation. Rather heterogeneous staining patterns were observed within individual tumours, e.g., more pronounced infiltration around necrotic areas. This is consistent with the ambiguous role proposed for the well-recognised lymphocytic and microglial infiltrations in malignant gliomas [8,14,19]. In fact, only single cells stained positive for perforin, serving as a marker for cytolytic activity in situ. We detected immunoreactivity for CD25 (interleukin-2 receptor alpha chain), known to be expressed by activated T and B cells, macrophages, and CD4+ and CD8+ regulatory T cells. As recognised in recent years, in gliomas and other tumours CD25+ cells often represent regulatory T cells (FoxP3+), and these cells have been demonstrated to play an important role in the maintenance of peripheral tolerance [5,6,11]. The exact role of different CD25+ cell populations in tumours, however, is not fully understood. Since staining for CD25 did not differ between lacZ- and control-vaccinated animals, we did not attempt to distinguish different subpopulations of CD25+ cells.
All tumours were densely infiltrated with microglial cells. Only a few microglial cells stained ED1 positive, indicating phagocytic activity; such cells were predominantly found in necrotic regions. The majority of microglial cells revealed a more ramified morphology. There is accumulating evidence that these cells are in the service of the tumour [8,14,19]. Microglial cells appear to promote tumour growth directly (e.g., by producing growth factors) as well as indirectly, by secreting immunosuppressive cytokines (e.g., TGF-β, IL-10) and expressing molecules that induce apoptosis in lymphocytes (e.g., Fas ligand). The latter factors contribute to the local and systemic immunosuppression and peripheral tolerance observed in gliomas.
Because of this, we attempted to augment the anti-tumour response by intratumoral application of oligonucleotides containing unmethylated CpG motifs, because of their known macrophage/microglia-activating properties. Such oligonucleotides exerted no adjuvant effect; on the contrary, they resulted in an increase in tumour size in the 9L model. This was observed both following vaccination against the lacZ gene (data not shown) and in non-vaccinated animals challenged with naive 9L cells [7]. Furthermore, we evaluated the adjuvant effects of flt-3 ligand and IL-12 expression plasmids added to the lacZ plasmid used for vaccination (data not shown). Co-administration of both plasmids did not increase the anti-tumour response elicited by lacZ vaccination alone; on the contrary, there was a trend towards the formation of larger tumours.
In this study we used a therapeutic regimen easily applicable to patients. Despite a robust anti-tumour effect, there are several reasons to be cautious with regard to possible clinical efficacy in glioma patients. We used an idealised setting with prophylactic vaccination in a non-infiltrative rodent model that is known to respond to different immunotherapeutic approaches. Nevertheless, we could not prevent tumour formation, and the failure of all adjuvants tested thus far reflects how unpredictable and counter-productive their effects can be. This does not even touch on the issue of whether vaccination against a single antigen is sufficient, although in our model we did not observe tumour escape due to selection against the model antigen.
"Biology",
"Medicine"
] |